Closed Bug 1174145 Opened 9 years ago Closed 9 years ago

adjust checksums builder to work in s3

Categories: Release Engineering :: Release Automation: Other (defect)
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: bhearsum, Assigned: nthomas)

References

Details

Attachments

(6 files, 8 obsolete files)

20.57 KB, patch (nthomas: review+, checked-in+)
15.36 KB, patch (nthomas: review+, checked-in+)
1.49 KB, patch (nthomas: review+)
2.52 KB, patch (nthomas: review+)
1.03 KB, patch (nthomas: review+)
1.55 KB, patch (rail: review+; nthomas: checked-in+)
From bug 1165405:
Issue: Uses rsync to pull down files, combines, signs, rsync back up. Creates contrib dirs
Code: http://hg.mozilla.org/build/tools/file/default/scripts/release/generate-sums.py
Fix: convert rsync d/l to walking S3 LIST, existing combine & sign, upload.py & post-upload.py to push back up
SWAG: 3 days
Unknowns: what we are doing with contrib/ directories; maybe create in post_upload instead when candidates dir first created


The function that actually generates the big checksums files (https://github.com/mozilla/build-tools/blob/master/lib/python/release/signing.py#L5) currently relies on files being on disk. It's probably worth either reworking or replacing this not to require that -- there's no reason we ever need to put the individual checksums files on disk, since we'll be downloading them in-process. We can probably ||ize this by having workers pop individual checksums filenames off of a queue, download them, parse, and add their contents into a bigger data structure.
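The worker-queue shape described above could be sketched like this. This is illustrative only: `fetch` and `collect` are hypothetical names, and the real script would download via boto inside a mozharness script, but the parse format matches the `hash type size filename` lines the tools parser handles.

```python
import threading
import queue  # this was Queue in the Python 2 of this era

def parse_checksums_file(contents):
    # Minimal stand-in for the tools parser: each line of an individual
    # .checksums file is '<hash> <type> <size> <filename>'.
    info = {}
    for line in contents.splitlines():
        hash_, type_, size, name = line.split(None, 3)
        entry = info.setdefault(name, {"size": int(size), "hashes": {}})
        entry["hashes"][type_] = hash_
    return info

def collect(key_names, fetch, workers=4):
    # fetch(key) -> file contents; hypothetical stand-in for an S3 download.
    work = queue.Queue()
    for key in key_names:
        work.put(key)
    combined, lock = {}, threading.Lock()

    def worker():
        while True:
            try:
                key = work.get_nowait()
            except queue.Empty:
                return
            parsed = parse_checksums_file(fetch(key))
            with lock:  # serialize updates to the shared dict
                combined.update(parsed)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return combined
```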

I'm going to ignore the contrib stuff for my first pass; we still need to make a decision about what we're doing with them.
I might try to validate the checksums files against their .asc's at the same time. It's not strictly required, but it's something we've wanted to do for a long time IIRC (though I can't find a bug for it...).
I got a good start on this today, progress is at https://github.com/mozilla/build-mozharness/compare/master...bhearsum:s3-checksums?expand=1

I ended up starting fresh with a mozharness script for two reasons:
1) I'm reimplementing enough of the script that I may as well port to mozharness at the same time
2) I need boto to run, and it's probably easier to get it installed via mozharness' existing virtualenv mechanism

I'm also taking this opportunity to tidy up and stop things like mar.exe from going into the files. I think the simple include mechanism in the current patch should be enough, though I may want to move some of that to a config (or maybe it doesn't matter, since we can override in the task definition once we move over to Taskcluster?).

This script seems to work on the test bucket that I created:
➜  release git:(s3-checksums) ✗ python generate-checksums.py --bucket-name bhearsum-checksums-test                  
16:38:02     INFO - MultiFileLogger online at 20150612 16:38:02 in /tmp/mozharness/scripts/release
16:38:02     INFO - Run as generate-checksums.py --bucket-name bhearsum-checksums-test
<snip>
16:38:03     INFO - #####
16:38:03     INFO - ##### Running activate-virtualenv step.
16:38:03     INFO - #####
16:38:03     INFO - Running main action method: activate_virtualenv
16:38:03     INFO - #####
16:38:03     INFO - ##### Running collect-individual-checksums step.
16:38:03     INFO - #####
16:38:03     INFO - Running main action method: collect_individual_checksums
16:38:03     INFO - Bucket name is: bhearsum-checksums-test
16:38:03     INFO - Connecting to S3
16:38:03     INFO - Getting key names from bucket
16:38:20     INFO - #####
16:38:20     INFO - ##### Running create-big-checksums step.
16:38:20     INFO - #####
16:38:20     INFO - Running main action method: create_big_checksums
16:38:20     INFO - Creating big checksums file: SHA512SUMS
16:38:20     INFO - Copying logs to upload dir...
➜  release git:(s3-checksums) ✗ head SHA512SUMS 
d5b60a5e6e4983af2c80a6f39d5b051f6dc7ce71d4982657aab3c3c357785f55dd55ca94d7f3278739913918006e0290251247ca0e14b03f880797c5c1f7a7e1  linux-i686/ach/firefox-39.0b5.tar.bz2
a0ddcfaea52c87897df5f28752104e8c693763809794f7a8e6cda61db65195ad1b17a40a52face1ed8e9fbd8d78cc6f88d19b2aace7810f3a6e3cbce7fa96df0  linux-i686/af/firefox-39.0b5.tar.bz2
6d7b8d55d9fb46e8390b8601541dfecb8393db141b03a7f8f56dc6100f70dcfcbe8e86689e12dc1fe3eb9da2dd69cbdbdfd80730b4cbe8424f64b57c7122b0bf  linux-i686/an/firefox-39.0b5.tar.bz2
4aeea8fd88453c8b072650914e39961110412bf0886032bb3315a0225b81c3c83a1d5a516519d4f3a9a2d45b4ce5237e415dbd33b44f823a69dcfcbc712dea41  linux-i686/ar/firefox-39.0b5.tar.bz2
7f6cb0d34a45c75016ba52b42225d2c93c964d14c59a764d05e123a5b4f5b827977caa2467e1e0c872e6b619954d8db230c9295e5d4d532f942ffc11e0f2196f  linux-i686/as/firefox-39.0b5.tar.bz2
d1eca3f30653e1ec6c4aa8576795ee80e7302180fd6ee6fa8aab3603998148cc82cff4327867e5e212e3fde01234ec5976b4780422dd6e67889b7e9efe228e04  linux-i686/ast/firefox-39.0b5.tar.bz2
c261a94a5644f8c3fd22ac02c42b635a29190e8af09e12935038f66eb6a80bb6f7ca072e55658094b8bf0de8f63339cecae7ab8faafdca8ed0c2d647182d489a  linux-i686/az/firefox-39.0b5.tar.bz2
1cc45fbf7ca596a8336e959df498240d5e3e92efacc862c709d0f0b4e6513a9c961ee18643a240332beb8db3496012714c25779a9095eb66894228d03d3dadf8  linux-i686/be/firefox-39.0b5.tar.bz2
afd5286c37dbf97422a38c22beb2b425473a758948df1faf4196546b6ad08d9782ef49271fcd7cbd29f16decdf659a1654ef8706574e6d468017f7bf12a52726  linux-i686/bg/firefox-39.0b5.tar.bz2
10cd53f2122d0c837d76f5a187107d8a7c7d050aa90402b46f585990380359a7cba855ea88b4ebf2d51500c2c21d2b1e349a7490e8bde31e7311ac3a35342b0d  linux-i686/bn-BD/firefox-39.0b5.tar.bz2
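As the output above shows, each line of the big file is a sha512 hex digest, two spaces, then the path relative to the candidates dir. A consumer-side sketch of parsing and verifying an entry (illustrative helper names, not part of the script):

```python
import hashlib

def parse_sums(text):
    # '<hex digest>  <relative path>' per line; split on the first run
    # of whitespace since the digest itself never contains spaces
    sums = {}
    for line in text.splitlines():
        digest, path = line.split(None, 1)
        sums[path] = digest
    return sums

def verify(data, path, sums):
    # compare a downloaded file's digest against its SHA512SUMS entry
    return hashlib.sha512(data).hexdigest() == sums[path]
```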


Still todo:
* Make sure config options that are lists actually work as intended (eg, can override the default rather than just appending to it)
* Try to implement .asc verification
* Sign the new files & put them into the bucket
* Figure out how to get necessary AWS credentials onto slaves running these jobs
* Adjust BuildFactory to use new script, test
(In reply to Ben Hearsum [:bhearsum] from comment #2)
> * Figure out how to get necessary AWS credentials onto slaves running these
> jobs

The candidates bucket will probably end up public, so we could work with the Cloud people to make sure anonymous API users can have enough perms (eg s3:ListBucket and s3:GetObject). http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-2 is an example of this.
And if it's public there's always the option of using the REST API and parsing the XML too (yes, I know!).

Alternatively we can have a role which grants access, and just apply that to jobs we run in AWS. Not sure if that can cross the account barrier of us <--> cloud services though.
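For the record, the anonymous read-only policy from that AWS doc looks roughly like this (the bucket name is a placeholder, and the exact statement IDs are arbitrary):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AnonGet",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-candidates-bucket/*"
    },
    {
      "Sid": "AnonList",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-candidates-bucket"
    }
  ]
}
```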
(In reply to Nick Thomas [:nthomas] from comment #3)
> (In reply to Ben Hearsum [:bhearsum] from comment #2)
> > * Figure out how to get necessary AWS credentials onto slaves running these
> > jobs
> 
> The candidates bucket will probably end up public, so we could work with the
> Cloud people to make sure anonymous API users can have enough perms (eg
> s3:ListBucket and s3:GetObject).
> http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.
> html#example-bucket-policies-use-case-2 is an example of this.
> And if it's public there's always the option of using the REST API and
> parsing the XML too (yes, I know!).

Ah, good point! This would make things a whole lot easier, and I presume we'll need this anyways so folks like QE can download candidates...
(In reply to Ben Hearsum [:bhearsum] from comment #4)
> (In reply to Nick Thomas [:nthomas] from comment #3)
> > (In reply to Ben Hearsum [:bhearsum] from comment #2)
> > > * Figure out how to get necessary AWS credentials onto slaves running these
> > > jobs
> > 
> > The candidates bucket will probably end up public, so we could work with the
> > Cloud people to make sure anonymous API users can have enough perms (eg
> > s3:ListBucket and s3:GetObject).
> > http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.
> > html#example-bucket-policies-use-case-2 is an example of this.
> > And if it's public there's always the option of using the REST API and
> > parsing the XML too (yes, I know!).
> 
> Ah, good point! This would make things a whole lot easier, and I presume
> we'll need this anyways so folks like QE can download candidates...

Except I just realized that we need to be able to send the newly created *SUMS files back up, d'oh...
(In reply to Ben Hearsum [:bhearsum] from comment #2)
> * Try to implement .asc verification

The tricky part of this is how to get the public key on the machine doing the verification. It's available in the candidates directory, but if we're downloading it from the same place as the files we're verifying it's a little pointless. For now, deploying it to the relevant machines with Puppet will probably be fine. We might need to create a throwaway keychain as part of this job, I think GPG needs an actual keychain to do any verification.

In a future world where we're on Taskcluster we can probably bake it into a Docker image or something...
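A sketch of what the throwaway-keyring dance might look like; this only builds the command lines (file names are hypothetical), and assumes the standard gpg --homedir/--import/--verify flags:

```python
import subprocess
import tempfile

def gpg_verify_cmds(key_file, sums_file, homedir):
    # GPG needs an actual keyring to verify against, so import the public
    # KEY into a throwaway home directory, then check the detached .asc
    return [
        ["gpg", "--homedir", homedir, "--import", key_file],
        ["gpg", "--homedir", homedir, "--verify",
         sums_file + ".asc", sums_file],
    ]

def verify_sums(key_file, sums_file):
    homedir = tempfile.mkdtemp()  # throwaway keyring lives here
    for cmd in gpg_verify_cmds(key_file, sums_file, homedir):
        subprocess.check_call(cmd)
```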
I'm going to go with post_upload.py uploads, like the original plan in comment #0. This means that there will be no AWS credentials to worry about, just bucket permissions to make sure we can get anonymous read-only access.

At some point this will have to change, but hopefully we can implement these as Taskcluster artifacts when we move the checksums job implementation over there.
I'm not totally happy with this, but it works. I had to duplicate some of the logic from build/tools (checksums file parser + post upload command generation) into mozharness, because I don't think we have a better way to deal with things like that.

I didn't create any configs for this because I thought that using command line options would make it easier to convert to a native Taskcluster Task (and I also don't like duplicating information we already have in the release configs). I can change that if you want - I don't feel strongly.

With this, we're only generating SHA512SUMS now. It's trivial to turn md5 and sha1 back on, but should we bother?

I started poking at verifying the checksums signatures but it was taking longer than I thought. My WIP is at https://github.com/bhearsum/mozharness/compare/s3-checksums...bhearsum:s3-checksums-verify?expand=1 and I could probably finish it with another day of work...do you think it's worthwhile?

I also found and fixed a bug in SigningMixin where it doesn't accept multiple signing formats.
Attachment #8623266 - Flags: feedback?(nthomas)
Comment on attachment 8623266 [details] [diff] [review]
mozharness script to generate *SUMS files

Review of attachment 8623266 [details] [diff] [review]:
-----------------------------------------------------------------

My SWAG was a little optimistic, sorry!  

Re your queries
* I'm not worried about the duplication into mozharness, it can die in tools in time
* Similar for the lack of config file, TC is the right future to be thinking of
* just SHA512SUMS is fine by me, it's well time we dropped MD5
* we can leave the checksums.asc verification for later if you like

::: mozharness/mozilla/checksums.py
@@ +8,5 @@
> +        if size < 0:
> +            raise ValueError("Found negative value (%d) for size." % size)
> +        if file_ not in fileInfo:
> +            fileInfo[file_] = {}
> +            fileInfo[file_]['hashes'] = {}

Style choice to do that in two lines instead of one ?

@@ +15,5 @@
> +        elif fileInfo[file_]['size'] != size:
> +            raise ValueError("Found different sizes for same file %s (%s and %s)" % (file_, fileInfo[file_]['size'], size))
> +        # Same goes for the hash.
> +        elif type_ in fileInfo[file_]['hashes'] and fileInfo[file_]['hashes'][type_] != hash_:
> +            raise ValueError("Found different %s hashes for same file %s (%s and %s)" % (type_, file_, fileInfo[file_]['hashes'][type_], hash_))

What sort of problems would make these two tests true ? Can those issues happen in a wider context, so that an uploaded file is present in more than one .checksums file ? Like, l10n or something. If so, this
  self.checksums.update(parse_checksums_file(sums))
might need more care, especially while multiple workers update it simultaneously :-(

::: mozharness/mozilla/signing.py
@@ +53,5 @@
>              '-n', nonce,
>              '-c', host_cert,
>          ]
> +        for f in formats:
> +            cmd += ['-f', f]

We pass formats=None sometimes, which this won't like. Otherwise we pass strings, so the callers need updating.

::: scripts/release/generate-checksums.py
@@ +119,5 @@
> +                "\.tar\.bz2",
> +                "\.dmg",
> +                "\.bundle",
> +                "\.mar",
> +                "\.asc",

Do you know why we include *.asc for partials in the SUMS files ? If it were me I'd verify the SUMS file is good using its own .asc and the KEY; then the ones for the partials are redundant. Or verify the partial by itself. I mention it because there are two lines for each partial.mar.asc in the staging run. Nice cleanup otherwise!

@@ +129,5 @@
> +        if self.config.get("bucket_name"):
> +            return self.config["bucket_name"]
> +        else:
> +            return "{}-{}build{}-candidates".format(
> +                self.config["product"], self.config["version"], self.config["build_number"]

We may have a single candidates bucket, and need to use a prefix on the .list() to look in the right sub-folder.

@@ +184,5 @@
> +
> +        tools_dir = path.join(dirs["abs_work_dir"], "tools")
> +        self.vcs_checkout(
> +            repo=self.config["tools_repo"],
> +            vcs="hg",

Do we have any importable machinery for using the local copy which runner is keeping up to date ?
Attachment #8623266 - Flags: feedback?(nthomas) → feedback+
(In reply to Nick Thomas [:nthomas] from comment #10)
> Comment on attachment 8623266 [details] [diff] [review]
> mozharness script to generate *SUMS files
> 
> Review of attachment 8623266 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> My SWAG was a little optimistic, sorry!  

I expanded the scope a little by moving to Mozharness anyways, but no worries!

> ::: mozharness/mozilla/checksums.py
> @@ +8,5 @@
> > +        if size < 0:
> > +            raise ValueError("Found negative value (%d) for size." % size)
> > +        if file_ not in fileInfo:
> > +            fileInfo[file_] = {}
> > +            fileInfo[file_]['hashes'] = {}
> 
> Style choice to do that in two lines instead of one ?

I just inherited what's in the tools version, may as well update this style though.
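For reference, the inherited two-line init and a single-statement equivalent (illustrative):

```python
file_ = "firefox-39.0b5.tar.bz2"

# style inherited from build/tools:
fileInfo = {}
if file_ not in fileInfo:
    fileInfo[file_] = {}
    fileInfo[file_]['hashes'] = {}

# single-statement equivalent: setdefault only creates the entry
# the first time a file is seen, so no membership test is needed
fileInfo2 = {}
entry = fileInfo2.setdefault(file_, {'hashes': {}})
```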

> @@ +15,5 @@
> > +        elif fileInfo[file_]['size'] != size:
> > +            raise ValueError("Found different sizes for same file %s (%s and %s)" % (file_, fileInfo[file_]['size'], size))
> > +        # Same goes for the hash.
> > +        elif type_ in fileInfo[file_]['hashes'] and fileInfo[file_]['hashes'][type_] != hash_:
> > +            raise ValueError("Found different %s hashes for same file %s (%s and %s)" % (type_, file_, fileInfo[file_]['hashes'][type_], hash_))
> 
> What sort of problems would make these two tests true ?

In practice, I don't think we've actually ever hit either of these exceptions, so I'm not too worried about them within the scope of a single platform+locale.

> Can those issues
> happen in a wider context, so that an uploaded file is present in more than
> one .checksums file ? Like, l10n or something. If so, this
>   self.checksums.update(parse_checksums_file(sums))
> might need more care. While multiple workers update simultaneously, :-(

Eg: host/bin/mar, which exists in many checksums files? If it wasn't an excluded file we'd have weirdness with it because it has different values per platform.

Perhaps self.checksums should be write-once, and we should bail if we try to update the value of a file that already exists in it. This would necessitate moving the include checks to collect_individual_checksums to avoid unnecessarily bailing because of host/bin/mar.

I think we can make this work with ThreadPool without too much trouble by either:
* Acquiring a multiprocessing.Lock (https://docs.python.org/2/library/multiprocessing.html#synchronization-between-processes) prior to doing any checks for existence in self.checksums, and releasing it after updating it. This would probably slow down run time a lot, though.
* Only parallelizing get_key, and then updating self.checksums serially. We'd have to store all of the checksums files in memory briefly, but I doubt that would use any meaningful amount of memory.
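The second option could be sketched as follows. The helper names are hypothetical (the real script would pass mozharness' downloader and the tools parser in), but the shape is parallel downloads, serial merge, with a write-once check on conflicting entries:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool

def collect_and_merge(key_names, get_key, parse, workers=8):
    # parallelize only the downloads; merge on the main thread so
    # self.checksums stays effectively write-once with no Lock needed
    pool = Pool(workers)
    raw_files = pool.map(get_key, key_names)
    pool.close()
    checksums = {}
    for raw in raw_files:
        for name, info in parse(raw).items():
            if name in checksums and checksums[name] != info:
                raise ValueError("conflicting entries for %s" % name)
            checksums[name] = info
    return checksums
```

The whole set of downloaded files is held in memory briefly, which as noted above shouldn't amount to much for checksums-sized files.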

> ::: mozharness/mozilla/signing.py
> @@ +53,5 @@
> >              '-n', nonce,
> >              '-c', host_cert,
> >          ]
> > +        for f in formats:
> > +            cmd += ['-f', f]
> 
> We pass formats=None sometimes, which this won't like. Otherwise we pass
> strings, so the callers need updating.

Good point. I think I can just add back the "if formats" guard to this rather than updating the callers, though.

> ::: scripts/release/generate-checksums.py
> @@ +119,5 @@
> > +                "\.tar\.bz2",
> > +                "\.dmg",
> > +                "\.bundle",
> > +                "\.mar",
> > +                "\.asc",
> 
> Do you know why we include *.asc for partials in the SUMS files ? If it was
> me I'd verify the SUMS file is good using it's own .asc and the KEY, then
> ones for the partials are redundant. Or verify the partial by itself. I
> mention it because there are two lines for each partial.mar.asc in the
> staging run. Nice cleanup otherwise!

Hmm, good catch. I have no idea why we push partial mar .ascs to releases/...maybe they just never stood out as something that shouldn't be there.

I think the reason we have duplicates here is a think-o in create_big_checksums: it iterates over self.checksums, and then iterates over the includes inside of that, so we end up with as many entries per file as patterns it matches. Regardless of what we do with partial mar .ascs, this needs fixing.
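A sketch of the fix: test all the include patterns per file with any(), so each file contributes at most one line no matter how many patterns it matches (the patterns here are illustrative):

```python
import re

def matching_files(checksums, includes):
    # the think-o emitted one line per (file, matching pattern) pair;
    # collapsing the pattern loop into any() emits each file at most once
    return [f for f in sorted(checksums)
            if any(re.search(p, f) for p in includes)]
```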

Tangentially, this makes me think about what we can do to make sure that this script and whatever is moving things to the releases bucket (beetmover, presumably) have the same set of includes. Perhaps a shared config is in order for that.

> @@ +129,5 @@
> > +        if self.config.get("bucket_name"):
> > +            return self.config["bucket_name"]
> > +        else:
> > +            return "{}-{}build{}-candidates".format(
> > +                self.config["product"], self.config["version"], self.config["build_number"]
> 
> We may have a single candidates bucket, and need to use a prefix on the
> .list() to look in the right sub-folder.

Oooh, good to know. Do you know when this will be decided? It should be mostly a semantics change (candidates dir vs. bucket name).

> @@ +184,5 @@
> > +
> > +        tools_dir = path.join(dirs["abs_work_dir"], "tools")
> > +        self.vcs_checkout(
> > +            repo=self.config["tools_repo"],
> > +            vcs="hg",
> 
> Do we have any importable machinery for using the local copy which runner is
> keeping up to date ?

Good question, I'll look into this.
Random other thought: this current script won't work until we have everything in S3; I'm not sure if it's worthwhile adjusting it to work with FTP as well.
I think I've addressed everything with this patch, namely:
* Serialized the actual parsing of the checksums, leaving the gathering in the ThreadPool
* Fixed other consumers of query_moz_sign_cmd
* Updated includes to stop including .asc files - I figure that even if we're still pushing them to the releases dir there's no point in including them in the *SUMS file
* Used hgtool to clone build/tools (my buildbotcustom patch will get us using the runner checkout instead of cloning fresh, too).

I tested this again for Firefox and Thunderbird - all seems well. I did a very brief test for Fennec even though we don't run it there...it should work fine as long as we update the includes for APKs.
Attachment #8624219 - Flags: review?(nthomas)
So I thought a bit more about whether it's worthwhile to add support for rsync/ftp scraping to this script, and I don't think it is. As long as we have some sort of overlap period where we can do either, it should be easy to swap in the new script. The mozharness patch can be landed at any time, and when we're ready to switch to S3 we just need to land this.
Attachment #8623266 - Attachment is obsolete: true
Attachment #8624222 - Flags: review?(nthomas)
I took this opportunity to look for other dead code in the tools library, verifying that things were unused with both mxr + a local grep of all of my repos.
Attachment #8624223 - Flags: review?(nthomas)
See Also: → 1160385
Comment on attachment 8624219 [details] [diff] [review]
fix sign command usage; update includes; protect against races updating self.checksums

Review of attachment 8624219 [details] [diff] [review]:
-----------------------------------------------------------------

We'll land all three of these when we swap to the S3 upload host ?

::: scripts/release/generate-checksums.py
@@ +176,5 @@
> +            for f, info in parse_checksums_file(c).iteritems():
> +                for pattern in self.config["includes"]:
> +                    if re.search(pattern, f):
> +                        if f in self.checksums:
> +                            self.fatal("Found duplicate checksum entry for {}, don't know which one to pick.".format(f))

Have you verified that this works if more than one format is passed in ? Could be I'm just failing to parse properly.
Attachment #8624219 - Flags: review?(nthomas) → review+
Attachment #8624222 - Flags: review?(nthomas) → review+
Attachment #8624223 - Flags: review?(nthomas) → review+
(In reply to Nick Thomas [:nthomas] from comment #16)
> Comment on attachment 8624219 [details] [diff] [review]
> fix sign command usage; update includes; protect against races updating
> self.checksums
> 
> Review of attachment 8624219 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> We'll land all three of these when we swap to the S3 upload host ?

I was thinking I'll land the Mozharness one right away, but maybe not since it may need an adjustment depending what we do with bucket names and prefixes...

> ::: scripts/release/generate-checksums.py
> @@ +176,5 @@
> > +            for f, info in parse_checksums_file(c).iteritems():
> > +                for pattern in self.config["includes"]:
> > +                    if re.search(pattern, f):
> > +                        if f in self.checksums:
> > +                            self.fatal("Found duplicate checksum entry for {}, don't know which one to pick.".format(f))
> 
> Have you verified that this works if more than one format is passed in ?
> Could be I'm just failing to parse properly.

I don't think I explicitly tested this...let's find out!
(In reply to Nick Thomas [:nthomas] from comment #16)
> Comment on attachment 8624219 [details] [diff] [review]
> fix sign command usage; update includes; protect against races updating
> self.checksums
> 
> Review of attachment 8624219 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> We'll land all three of these when we swap to the S3 upload host ?
> 
> ::: scripts/release/generate-checksums.py
> @@ +176,5 @@
> > +            for f, info in parse_checksums_file(c).iteritems():
> > +                for pattern in self.config["includes"]:
> > +                    if re.search(pattern, f):
> > +                        if f in self.checksums:
> > +                            self.fatal("Found duplicate checksum entry for {}, don't know which one to pick.".format(f))
> 
> Have you verified that this works if more than one format is passed in ?
> Could be I'm just failing to parse properly.

Looks like it works fine...I ran with md5, sha1, and sha512 and ended up with three files with the same number of lines, and the correct checksums on each. Is the "f" throwing you off, maybe? It's the filename here (not format).

Given that, this is just waiting on bucket/prefix confirmation, then we should be ready to go.
I got in touch with oremj and he said:
This is the current bucketmap:
https://github.com/mozilla-services/product-delivery-tools/blob/master/bucketmap.go

All of those will be prefixed with net-mozaws-stage-delivery- in stage and
net-mozaws-prod-delivery- in production.

For example, http://ftp.mozilla.org/pub/mozilla.org/firefox/releases/38.0/linux-x86_64/en-US/firefox-38.0.tar.bz2
would map to s3://net-mozaws-prod-delivery-firefox/pub/mozilla.org/firefox/releases/38.0/linux-x86_64/en-US/firefox-38.0.tar.bz2



So, the bucket name stuff here definitely needs some adjustment. Looks like the bucket name should probably be specified in the release config, and we'll find the prefix within the bucket based on product/version/build number.
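So the lookup becomes something like this sketch. Both halves are assumptions at this point: the bucket naming mirrors what the patch did for Firefox, and the prefix layout follows the pattern in oremj's example URL (the exact layout was still being settled, see bug 1192311):

```python
def candidates_location(bucket_prefix, product, version, build_number):
    # hypothetical mapping: bucket name comes from the release config,
    # the key prefix from product/version/build number
    bucket = "{}-{}".format(bucket_prefix, product)
    prefix = "pub/{}/candidates/{}-candidates/build{}/".format(
        product, version, build_number)
    return bucket, prefix
```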
Depends on: 1181892
I haven't been able to fully test this yet because of bug 1181892 only being partially fixed, but I can get far enough to confirm that all of the checksums files get found and processing is attempted. This means that everything new here has been tested except for the post upload command. I want to give it a full end-to-end run before landing, but I'm 95% confident this will work.

Changes since the last patch:
* Replace --bucket-name with --bucket-prefix. The latter is what the post upload command takes, so it's easiest if we use the same thing here and append product to it where needed.
* Adjust bucket prefix to match reality (no leading /, no mozilla.org)
* --bucket-prefix to post upload command
Attachment #8645079 - Flags: review?(nthomas)
Oh, the other thing to note is that my testing had me locally s/candidates/nightly/ in the first part of the path, because of bug 1192311. I don't think doing so invalidates any of my testing.
Comment on attachment 8645079 [details] [diff] [review]
updated checksums script to work with real bucket names

Review of attachment 8645079 [details] [diff] [review]:
-----------------------------------------------------------------

::: scripts/release/generate-checksums.py
@@ +125,5 @@
> +                "^.*\.xpi$",
> +            ]
> +
> +    def _get_bucket_name(self):
> +        return "{}-{}".format(self.config["bucket_prefix"], self.config["product"])

I'm wondering if this works for mobile builds, which are in the archive bucket IIRC.
(In reply to Nick Thomas [:nthomas] from comment #23)
> Comment on attachment 8645079 [details] [diff] [review]
> updated checksums script to work with real bucket names
> 
> Review of attachment 8645079 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: scripts/release/generate-checksums.py
> @@ +125,5 @@
> > +                "^.*\.xpi$",
> > +            ]
> > +
> > +    def _get_bucket_name(self):
> > +        return "{}-{}".format(self.config["bucket_prefix"], self.config["product"])
> 
> I'm wondering if this works for mobile builds, which are in the archive
> bucket IIRC.

Gah. What a pain. Would I be crazy to suggest that we finally start using "fennec" in this directory? It really sucks that we use productName for Firefox, but appName for Fennec :(.
Attachment #8645079 - Flags: review?(nthomas)
Just realized we need to write the patch against the gecko too.
(In reply to Nick Thomas [:nthomas] from comment #25)
> Just realized we need to write the patch against the gecko too.

Good point. What do you want to do about the fennec/mobile product name? I realized we'll have to update all of the CI invocations to make it consistent, which may break other tools. It's a good chance for consistency though...

If we go that route maybe we can get the new post upload script to s/mobile/fennec/ to make it easier to gradually update our tools.
Flags: needinfo?(nthomas)
I'm not sure about fennec vs mobile. You can't really do a (filesystem) symlink equivalent in S3. In theory you could move everything into fennec/, and keep the old links alive by making a tree of keys in mobile/ which 302 to the fennec/ location for each, but that sounds like a fun transition. Or were you thinking more a README which says 'newer files live over here' ?
Flags: needinfo?(nthomas)
(In reply to Nick Thomas [:nthomas] from comment #27)
> I'm not sure about fennec vs mobile. You can't really do a (filesystem)
> symlink equivalent in S3. In theory you could move everything into fennec/,
> and keep the old links alive by making tree of keys in mobile/ which 302 to
> the fennec/ location for each, but that sounds like a fun transition. Or
> were you thinking more a README which says 'newer files live over here' ?

I was thinking of the README method :). It seems like the only chance we'll get to break compat like this.
Attached patch gecko-based version of patch (obsolete) — Splinter Review
This assumes you're OK with the renaming of mobile -> fennec w/ a README pointer. If not, I'll adjust.

Either way this patch needs backporting all the way to esr38.
Attachment #8624219 - Attachment is obsolete: true
Attachment #8645079 - Attachment is obsolete: true
Attachment #8649976 - Flags: review?(nthomas)
Comment on attachment 8649976 [details] [diff] [review]
gecko-based version of patch

Sorry, this won't work. The mapping is

firefox: <prefix>-firefox
mobile, thunderbird & everything else:  {prefix}-archive
Attachment #8649976 - Flags: review?(nthomas) → review-
I think I've got everything addressed now, here's the summary:
* Fix docstring of bucket-prefix to make it clear it's talking about bucket name, and not the prefix of files within the bucket.
* Change _get_bucket_prefix to _get_file_prefix, since what I've been calling "bucket prefix" conflicts with the new post upload script's idea of what a bucket prefix is.
* Fix bucket name generation to use -firefox or -archive where appropriate
* Use stage_product instead of product, to maintain compatibility with existing fennec uploads (while avoiding special casing within this script).

I've also got patches to buildbotcustom and buildbot-configs to support this.
Attachment #8649976 - Attachment is obsolete: true
Attachment #8651055 - Flags: review?(nthomas)
Pretty self explanatory, I think.
Attachment #8624222 - Attachment is obsolete: true
Attachment #8651056 - Flags: review?(nthomas)
I was a little surprised that this wasn't there already, until I found https://hg.mozilla.org/projects/ash/rev/b3432877c7fe. I have a feeling there might be other plans for how/where to put this, so feel free to smarten me up.
Attachment #8651057 - Flags: review?(nthomas)
Comment on attachment 8651056 [details] [diff] [review]
fix args to new checksums script + use archiver

FYI, the reviews are lagging here because I'm verifying in staging for a bunch of scenarios. So far Firefox works with this

>diff --git a/process/release.py b/process/release.py
...
>+                "--build-number", releaseConfig["buildNumber"],
>+                "--bucket-prefix", branchConfig["bucket_prefix"],

changed to "--bucket-name-prefix", to match the mozharness script. It doesn't get uploaded but I think that's completely unrelated (server ssh config after some new changes got deployed).
(In reply to Nick Thomas [:nthomas] from comment #34)
> Comment on attachment 8651056 [details] [diff] [review]
> fix args to new checksums script + use archiver
> 
> FYI, the reviews are lagging here because I'm verifying in staging for a
> bunch of scenarios. So far Firefox works with this

No worries, I appreciate your testing!

> >diff --git a/process/release.py b/process/release.py
> ...
> >+                "--build-number", releaseConfig["buildNumber"],
> >+                "--bucket-prefix", branchConfig["bucket_prefix"],
> 
> changed to "--bucket-name-prefix", to match the mozharness script. It
> doesn't get uploaded but I think that's completely unrelated (server ssh
> config after some new changes got deployed).

Let me know if that's not the case and I'll help debug.
Attached patch fix bucket name prefix arg (obsolete) — Splinter Review
Here's a new patch, before I forget...
Attachment #8651056 - Attachment is obsolete: true
Attachment #8651056 - Flags: review?(nthomas)
Attachment #8654072 - Flags: review?(nthomas)
Worked nicely for Firefox after the upload host was fixed up. I realized we're missing the KEY file though, luckily we already have the tools repo checked out.
(In reply to Nick Thomas [:nthomas] from comment #37)
> Worked nicely for Firefox after the upload host was fixed up. I realized
> we're missing the KEY file though, luckily we already have the tools repo
> checked out.

\o/. It looks like we could also just move the KEY file to mozharness. AFAICT it's not used anywhere else (though "KEY" is a bit hard to sanely grep for across our repos).
Comment on attachment 8654072 [details] [diff] [review]
fix bucket name prefix arg

This has been working great for Firefox, and I enabled it for Fennec and there were no problems. Getting Thunderbird set up I noticed a nit ...

>diff --git a/process/release.py b/process/release.py
...
>+                "--tools-repo", branchConfig["platforms"]["linux"]["tools_repo_cache"],

Should be linux64 instead of linux.
Attached patch fix tools repo cache platform (obsolete) — Splinter Review
(In reply to Nick Thomas [:nthomas] from comment #39)
> Comment on attachment 8654072 [details] [diff] [review]
> fix bucket name prefix arg
> 
> This has been working great for Firefox, and I enabled it for Fennec and
> there were no problems. Getting Thunderbird set up I noticed a nit ...

\o/

> >diff --git a/process/release.py b/process/release.py
> ...
> >+                "--tools-repo", branchConfig["platforms"]["linux"]["tools_repo_cache"],
> 
> Should be linux64 instead of linux.

Here's a fix :)

Thanks again for doing all this testing.
Attachment #8654072 - Attachment is obsolete: true
Attachment #8654072 - Flags: review?(nthomas)
Attachment #8657074 - Flags: review?(nthomas)
Attachment #8651055 - Flags: review?(nthomas) → review+
Comment on attachment 8651057 [details] [diff] [review]
[buildbot-configs] add bucket prefix to configs

>diff --git a/mozilla/production_config.py b/mozilla/production_config.py
>+    'bucket_prefix': 'net-mozaws-prod-delivery',

We don't technically need this for production, but I'll ask oremj if it will cause any issues.
Attachment #8651057 - Flags: review?(nthomas) → review+
Comment on attachment 8657074 [details] [diff] [review]
fix tools repo cache platform

>diff --git a/process/release.py b/process/release.py
>+                "--bucket-prefix-name", branchConfig["bucket_prefix"],

Should be --bucket-name-prefix.

>+                "--gecko-repo", "%s%s" % (branchConfig["hgurl"], sourceRepoInfo["path"]),

For Thunderbird support I needed to change sourceRepoInfo["path"] to relengapi_archiver_repo_path. Otherwise it tries to get upload.py from the comm repo.

r+ with those fixes.
Attachment #8657074 - Flags: review?(nthomas) → review+
Attachment #8660812 - Flags: review?(nthomas) → review+
Attachment #8657074 - Attachment is obsolete: true
Attachment #8624223 - Attachment description: rip out old checksums script + other dead code → [tools] rip out old checksums script + other dead code
Attachment #8651055 - Attachment description: fix bucket name and prefix issues → [gecko] fix bucket name and prefix issues
Attachment #8651057 - Attachment description: add bucket prefix to buildbot configs → [buildbot-configs] add bucket prefix to configs
Attachment #8660812 - Attachment description: fix bucket name prefix + gecko repo path → [buildbotcustom] fix bucket name prefix + gecko repo path
I'm out until Mozlando. Nick, I'll assign this to you since you're the next most familiar with it...I think the only thing left is to land when ready though, modulo any merge conflicts in the dead code removal.
Assignee: bhearsum → nthomas
Keywords: leave-open
Status - ready to go, patches for buildbot-config and buildbotcustom to land at transition.
Correction:

Status - ready to go
To land at transition: patches for buildbot-config, buildbotcustom, and tools
We still need this in release runner.
Attachment #8677119 - Flags: review?(nthomas)
Attachment #8677119 - Flags: review?(nthomas) → review+
Comment on attachment 8624223 [details] [diff] [review]
[tools] rip out old checksums script + other dead code

Landed yesterday as https://hg.mozilla.org/build/tools/rev/218e77398112.
Attachment #8624223 - Flags: checked-in+
Blocks: 1217978
For Fennec with automatic pushes we don't have checksums as an upstream, so the push runs early and we miss out on SHA512SUMS{,.asc} in the releases directory. This removes the special case, although diff looks pretty funky until you do a diff -w:

diff --git a/process/release.py b/process/release.py
--- a/process/release.py
+++ b/process/release.py
@@ -1885,9 +1885,8 @@ def generateReleaseBranchObjects(release
                 builderNames=ui_update_tests_builders[channel],
             ))

     if releaseConfig.get('enableAutomaticPushToMirrors'):
-        if not hasPlatformSubstring(releaseConfig["enUSPlatforms"], "android"):
             push_to_mirrors_upstreams.extend([
                 builderPrefix("%s_checksums" % releaseConfig["productName"]),
             ])
             if releaseConfig.get('enablePermissionCheck'):
Attachment #8679226 - Flags: review?(rail)
There's another issue for Fennec - the SHA512SUMS file contains only the source bundle. I think that's because the default set of includes at 
 http://hg.mozilla.org/releases/mozilla-beta/file/e2dcd59704ad/testing/mozharness/scripts/release/generate-checksums.py#l118
doesn't have something like '^fennec.*\.apk$'. If we did have that we'd just end up with lots of hashes for files like fennec-42.0b10.<locale>.<platform>.apk (eg fennec-42.0b10.en-US.android-arm.apk), instead of <platform>/<locale>/fennec-42.0b10.<locale>.android-arm.apk. Which makes me just want to turn things back off for Fennec :-S
Attachment #8679226 - Flags: review?(rail) → review+
Depends on: 1221386
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Depends on: 1243873
Removing leave-open keyword from resolved bugs, per :sylvestre.
Keywords: leave-open