Closed Bug 1163084 Opened 9 years ago Closed 8 years ago

Releng work for producing dummy partner Android APK

Categories

(Release Engineering :: General, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
Tracking Status
firefox40 --- affected

People

(Reporter: nalexander, Assigned: kmoir)

References

(Blocks 1 open bug)

Details

Attachments

(2 files, 2 obsolete files)

This ticket tracks making mozconfig, mozharness, treeherder, etc. changes to support a new "dummy partner APK" Android build job.

The features requested are motivated in https://bugzilla.mozilla.org/show_bug.cgi?id=1163080#c0.

This ticket is the equivalent of Bug 1073772, which did this work for Android split APKs.

It is possible that releng would prefer to make a general mechanism for this work so that /any/ Android build job can specify an external branding or distribution repo (external meaning outside of the regular trees), but I still think we'll need a new build job with treeherder visibility, etc.
Component: Build Config & IDE Support → General Automation
Product: Firefox for Android → Release Engineering
QA Contact: catlee
Version: Trunk → unspecified
catlee: here's the releng ticket we discussed in this morning's meeting.
Flags: needinfo?(catlee)
This ticket will be a public blueprint for our partner integration builds: that is, it will be a new build flavour (with custom mozconfig and branding) that includes a partner distribution in the produced APKs.  By a new build "flavour", I mean it will produce branded APKs (org.mozilla.firefox_partner) for all the APK splits we have and will have in the future (right now, Android SDK 9-10 and Android SDK 11+).  It's not a hard requirement to run our full test suite against these builds, although if we can that would be extremely valuable.

Since this is public, we expect to fetch a test distribution from a well-known public endpoint (likely a github repo).  For our actual partner builds, we will want to fetch the real distribution from a private endpoint.
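
For context, a sketch of what such a distribution repo might contain. The Fennec distribution mechanism consumes a directory of customization files; the file names below are illustrative, not a spec:

    distribution/
        preferences.json    # partner default prefs
        bookmarks.json      # partner default bookmarks
        searchplugins/      # partner search engines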

catlee: jlund: this is important for our partner work.  Can one of you make contact so we can agree on a timeline for this to happen?
Flags: needinfo?(jlund)
Flags: needinfo?(catlee)
I have made a note to bring this up at my 1x1 this week. I'll discuss if this is something I will be picking up personally and where it sits on priority.

Is there a hard timeline requirement for this from your end?
Flags: needinfo?(jlund)
(In reply to Jordan Lund (:jlund) from comment #3)
> I have made a note to bring this up at my 1x1 this week. I'll discuss if
> this is something I will be picking up personally and where it sits on
> priority.
> 
> Is there a hard timeline requirement for this from your end?

I think we expect to want the private partner builds parallel to this public dummy partner build by the end of Q3.  mfinkle: kar: set me straight?
Flags: needinfo?(krudnitski)
Q3 at the latest. I'd like to see some test builds prior so we can have some testing done. I'd actually like to see this in-flight by the end of Q3 so we can push forward with some key partners. However, I can't confirm (or push) for a deployment timeline from partners until I know that this is working (test as well as release). 

Please let me know what you need to discuss timing and priority.

Thanks! Karen
Flags: needinfo?(krudnitski)
:nalexander

To reiterate the requirements let me summarize and ask some further questions

*new generic Android SDK 9-10 and Android SDK 11+ builds for partners
*custom mozconfig (in tree) and branding (consumed from external endpoint)
*We will fetch the branding and/or test suites to run from an external endpoint, probably github
*mechanisms to consume these external endpoints are not currently in place, so they will have to be implemented
*what format will the tests be in at the external endpoint?  Will we have to build the test zips or will they be in a readily consumable format? Will they be in the same format as existing mozilla test suites?
*external endpoints consumed will be both private and public
*would be ideal to run these tests and have results on treeherder
Flags: needinfo?(nalexander)
(In reply to Kim Moir [:kmoir] from comment #7)
> :nalexander
> 
> To reiterate the requirements let me summarize and ask some further questions
> 
> *new generic Android SDK 9-10 and Android SDK 11+ builds for partners

Correct.  We could make do with 11+ only, but since we expect to grow APK splits, might as well cross the conceptual bridge now.

> *custom mozconfig (in tree) and branding (consumed from external endpoint)

Correct.

> *We will fetch the branding and/or test suites to run from an external
> endpoint, probably github

We will fetch the branding and Android distribution files from Github.  The test suite I expect to be exactly the same as mainline.  We may discover the need to conditionally enable and disable tests, but I expect that to be done through mozconfig means.

> *mechanisms to consume these external endpoints are not currently in
> place, so they will have to be implemented

Not sure what this means.  We've been talking about "test builds", by which we mean builds that exercise this new functionality: pulling partner resources (public resources for this ticket) and running tests for those resources (testing Android distributions).

> *what format will the tests be in at the external endpoint?  Will we have to
> build the test zips or will they be in a readily consumable format? Will they
> be in the same format as existing mozilla test suites?

We don't expect to modify the test suite out of tree at all.

> *external endpoints consumed will be both private and public

Correct.  Public on day one, so that we can point partners at real working examples and tests exercising the promised functionality.

> *would be ideal to run these tests and have results on treeherder

Correct.  Definitely the public builds should be public.  Partner specific builds don't need to appear on TH.

To re-iterate a good first step:

* clone existing Android API jobs;
* specify new public-distribution mozconfig;
* git checkout mozilla/public-distribution at beginning of build;
* over to me to implement a mozconfig switch adding the Android distribution to the build (sketched below);
* build and run tests as normal.
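
As a sketch, a public-distribution mozconfig along those lines might look like this (assuming a --with-android-distribution-directory configure switch in the spirit of the split-APK work; the option name and checkout location are assumptions, not final):

    # mobile/android/config/public-partner/mozconfig (hypothetical path)
    . "$topsrcdir/mobile/android/config/mozconfigs/common"

    # Point the build at the distribution files checked out at the
    # beginning of the build (location assumed).
    ac_add_options --with-android-distribution-directory="$topsrcdir/../partner/distribution"

    . "$topsrcdir/mobile/android/config/mozconfigs/common.override"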
Flags: needinfo?(nalexander)
Thanks for the clarification.  So, regarding the public mozconfig: mozconfigs are hierarchical and inherit from other mozconfigs.  How do you see this being represented in a public repo? Today both the mozconfigs and the code we build are stored in the same repo, so I'm trying to envision how they will be separated and consumed.
Flags: needinfo?(nalexander)
(In reply to Kim Moir [:kmoir] from comment #9)
> Thanks for the clarification.  So, regarding the public mozconfig:
> mozconfigs are hierarchical and inherit from other mozconfigs.  How do you
> see this being represented in a public repo? Today both the mozconfigs and
> the code we build are stored in the same repo, so I'm trying to envision how
> they will be separated and consumed.

Hmm, this is interesting.  I'd rather assumed that the mozconfig itself would still be in tree, tied to the job.  That is, this public-partner test build would have

mobile/android/config/public-partner/mozconfig, etc.

It's not really clear to me how we would trigger a private build.  Would that be a push to a special repo?  How do we establish the upstream private git repo to pull branding and distribution from in that case?  I feel like there is probably already a pattern for this practiced by Desktop and/or b2g builds.

If there isn't, how do we tell the mozharness script (which I assume will checkout the per-partner private git repo information) which partner and repo are in play?  Presumably this can include mozconfig details too.

Sorry for my ignorance about these releng/mozharness configuration matters.  I know the build bits much better than the automation bits.
Flags: needinfo?(nalexander) → needinfo?(kmoir)
With previous b2g private builds we had private repos.  Builds were scheduled, not run on commit. There was a different set of credentials to store private builds on a separate server that was not publicly available.


Yes, there are a lot of moving parts: mozconfig, mozharness, binary repos, etc. So you are saying the mozconfig itself is in tree, but the path to that mozconfig would be stored in a private repo.

Today:
* Mozconfigs are referenced by buildbot configs
* mozharness is in tree

Let me talk to someone in releng about how we did the private builds before, to use as a model for this implementation. I wasn't involved in setting up private builds before.
Flags: needinfo?(kmoir)
Assignee: nobody → kmoir
Notes from the meeting I had with rail are here
https://releng.etherpad.mozilla.org/dummyandroidapk

Reached out to mconnor earlier in the week for more info on the partner portal, but no more info yet
(In reply to Kim Moir [:kmoir] from comment #12)
> Notes from the meeting I had with rail are here
> https://releng.etherpad.mozilla.org/dummyandroidapk
> 
> Reached out to mconnor earlier in the week for more info on the partner
> portal, but no more info yet

This pad isn't public.  kmoir, can you make it public?
Flags: needinfo?(kmoir)
I sent you an email with the access pw
Flags: needinfo?(kmoir)
We've been working on standing this up. Initially I was thinking that we needed a new taskcluster docker image, because the logic for which sources are checked out is baked into the image itself for android builds. So I first created a new image, and then reverted to just upticking the version of the current image. But this led to the generic linux docker image getting bloated with android-specific stuff.

I then noticed that b2g desktop builds check out gaia and other sources outside of the image, just before the mozharness call. I'm going to copy this behaviour and check out the partner repo within a taskcluster task build script.

we will hopefully have patches up tomorrow.
Attached patch 150813_dummyapk_initial.patch (obsolete) — Splinter Review
initial groundwork.

I tried scheduling this on try through taskcluster but seems the decision task didn't like my task graph.

I'm not sure why, as running it manually shows that the task should be scheduled by my commit message: https://irccloud.mozilla.com/pastebin/5cDTF7Ng

but I think we are nearing the bits that are needed for public partner repos. I am using the sample distribution but also including a mozconfig manifest to use.

https://github.com/lundjordan/fennec-distribution-sample/commits/master
progress - I figured out scheduling. I need to work out a few typo kinks but we are nearly there for public dummy builds
cool, so this taskcluster push: https://treeherder.mozilla.org/#/jobs?repo=try&revision=3e4a4f54eb85&exclusion_profile=false

has this build: https://treeherder.mozilla.org/logviewer.html#?job_id=11623492&repo=try which:

1) checks out gecko
2) checks out https://github.com/lundjordan/fennec-distribution-sample
   ### android fennec specific
   if [ -n "$PARTNER_PROPS_PATH" ]; then
       # e.g. dummy partner apk
       pull-external-gecko-source.sh $WORKSPACE/build/src $PARTNER_PROPS_PATH $WORKSPACE/build/partner
   fi
3) uses a json manifest from fennec-distribution-sample repo to determine the mozconfig (which is basically the api-11 opt mozconfig right now)

03:51:47     INFO - Using mozconfig based on manifest contents
03:51:47     INFO - Reading from file /home/worker/workspace/build/partner/mozconfigs/mozconfig1.json
03:51:47     INFO - Reading from file /home/worker/workspace/build/src/mobile/android/config/mozconfigs/public-partner/distribution_sample/mozconfig1
03:51:47     INFO - Contents:
03:51:47     INFO -  # I'm a mozconfig that is being pointed by a partner manifest that lives in a github repo! neato yo
03:51:47     INFO -  . "$topsrcdir/mobile/android/config/mozconfigs/common"
03:51:47     INFO -  # ...

4) and then builds android. and fails too! however the failure looks to be unrelated to my job, as it appears like it is failing for all of taskcluster android in a similar way, e.g. https://treeherder.mozilla.org/logviewer.html#?job_id=2220913&repo=mozilla-central

I think we are ready for some reviews and then also to try out a mozconfig that is specific to fennec-distribution-sample
jlund: great work!

> 4) and then builds android. and fails too! however the failure looks to be
> unrelated to my job, as it appears like it is failing for all of taskcluster
> android in a similar way, e.g.
> https://treeherder.mozilla.org/logviewer.html#?job_id=2220913&repo=mozilla-
> central

There appear to be two problems: one to do with "influxdb", and another to do with javac not actually working (GLIBC issues).  Paging djmitchell for diagnosis.
Flags: needinfo?(dustin)
Oh, ugh, that's a Java built for Ubuntu (newer libc).  We'll need to re-package the JDK from the CentOS 6 RPMs, rather than from the Ubuntu packages, updating testing/taskcluster/scripts/misc/repackage-jdk.sh.
Flags: needinfo?(dustin)
That's bug 1206106.  I won't get a chance to work on that today, so if you want to steal it (it should be a fairly straightforward rewrite of the script) please feel free.
Depends on: 1206106
Oh, and the influxdb thing is normal (I'm told) -- just gets caught by the treeherder log parser.
gecko mozconfig incoming in next patch
Attachment #8664562 - Flags: review?(nalexander)
Attached patch 150922_dummy_apk_tc-gecko.patch (obsolete) — Splinter Review
this schedules on try a dummy apk variant via taskcluster. for now this is just a proof of concept. It should:

* checkout https://github.com/mozilla/fennec-distribution-sample
* checkout gecko
* tell mh to use a manifest within fennec-distribution-sample that points to an in-gecko mozconfig. This allows partner repos (in this case: fennec-distribution-sample) to point to different mozconfigs at will
* right now you'll notice that this build should be pretty much equivalent to api-11 opt; hence the proof of concept for now
Attachment #8664573 - Flags: review?(mshal)
Comment on attachment 8664573 [details] [diff] [review]
150922_dummy_apk_tc-gecko.patch

currently android builds are broken due to https://bugzil.la/1189892 and https://bugzil.la/1206106 but I wanted to get the review process started for the variant logic
Comment on attachment 8664562 [details] [review]
add mozconfig manifest to sample apk partner repo

Fold these down and update the commit message, please.  Then bombs away.
Attachment #8664562 - Flags: review?(nalexander) → review+
Comment on attachment 8664573 [details] [diff] [review]
150922_dummy_apk_tc-gecko.patch

Review of attachment 8664573 [details] [diff] [review]:
-----------------------------------------------------------------

So the overall idea here is to embed JSON props files under mobile/android/config/mozconfigs, containing a repo path and revision from which to pull the partner stuff.  Actually, it looks like there are two manifests?

The task specifies PARTNER_PROPS_PATH, which is a path to an in-tree JSON file containing a repository and revision for the partner stuff.  build-linux.sh checks this out into $workspace/build/partner.  The mozharness config specifies `src_mozconfig_manifest`, in which buildbase looks up the `gecko_path` property and joins that to the source directory to get the mozconfig path.  I think you meant to join it to the build directory (the objdir) in this case?

So there are really two places with pointers to config:
 1. specify partner repo/revision in an in-tree JSON file
 2. specify a mozconfig path within that repo in an in-tree mozharness config

One of the neat things about having a decision task is, there's no need to interpret this stuff "live" during the build.  It can all be parsed out in the decision task instead and put into the task definition.  Then the task definition says explicitly what's to be done, rather than referring to instructions in specific revs of specific repos.  We have the leeway to write arbitrary code in the decision task (it's just a mach command), but in this case I don't even think that's necessary.  Here's a stab at how that might work:

For partner builds, all of the task .yml files for a particular partner inherit from the same parent file, and that file encodes the repo and revision into something general like

  env:
    EXTRA_CHECKOUT_1_REPOSITORY: "{{partner_repo}}"
    EXTRA_CHECKOUT_1_PATH: "build/partner"
    EXTRA_CHECKOUT_1_REV: "{{partner_revision}}"
    EXTRA_CHECKOUT_1_REF: "{{partner_revision}}"
    PARTNER_MOZCONFIG_PATH: "{{partner_mozconfig_path}}"

and checkout-sources.sh gets extended to look for EXTRA_CHECKOUT_$N_* for increasing N until it finds no more (maybe there's a more elegant way to do this?).  The buildbase.py script would use this path if its configured `src_mozconfig` is None (or maybe if src_mozconfig is `$PARTNER_MOZCONFIG_PATH`, and buildbase.py gets the value from an env var if the config starts with `$`).  You'd need to specify the directory it's relative to (probably the workdir, since that's what EXTRA_CHECKOUT_$N_PATH is relative to).
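
A minimal sketch of that checkout-sources.sh extension, assuming the EXTRA_CHECKOUT_$N_* naming above and the existing `tc-vcs checkout <dest> <base repo> <head repo> <rev> <ref>` invocation (all names illustrative):

    # Walk EXTRA_CHECKOUT_1_*, EXTRA_CHECKOUT_2_*, ... until a gap is found.
    n=1
    repo_var="EXTRA_CHECKOUT_${n}_REPOSITORY"
    while [ -n "${!repo_var}" ]; do
        path_var="EXTRA_CHECKOUT_${n}_PATH"
        rev_var="EXTRA_CHECKOUT_${n}_REV"
        ref_var="EXTRA_CHECKOUT_${n}_REF"
        tc-vcs checkout "$WORKSPACE/${!path_var}" "${!repo_var}" "${!repo_var}" "${!rev_var}" "${!ref_var}"
        n=$((n + 1))
        repo_var="EXTRA_CHECKOUT_${n}_REPOSITORY"
    done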

The advantage here is that it's clear in the task definition that there is an additional checkout going on, and that there is a partner mozconfig being used.  There's no need for a manifest, either -- partners would just bump the revisions in the parent .yml file.

Finally, we need to think about how to handle the partner clones -- if tc-vcs is doing a full clone every time, we may end up bludgeoning github.  I don't think there's a good answer yet, and probably the load will initially be small, but please do bring it up in #taskcluster.

My remaining comments are all nits on the current patch, and can be ignored if you choose something like what I proposed above.

::: mobile/android/config/mozconfigs/public-partner/distribution_sample/repo_props.json
@@ +1,2 @@
> +{
> +    "revision": "4a322374d947b450d2807ebfc28f32728d7b4227", 

trailing WS

::: testing/mozharness/configs/builds/releng_sub_android_configs/64_api_11_partner_sample1.py
@@ +2,5 @@
> +    'base_name': 'Android armv7 API 11+ partner Sample1 %(branch)s',
> +    'stage_platform': 'android-api-11-partner-sample1',
> +    'build_type': 'api-11-partner-sample1-opt',
> +    'src_mozconfig': None,  # use manifest to determine mozconfig src
> +    'src_mozconfig_manifest': 'partner/mozconfigs/mozconfig1.json',

So this path is relative not to the source dir, but to the build dir (which has had the partner repo checked out at partner/)

::: testing/taskcluster/scripts/builder/build-linux.sh
@@ +4,5 @@
>  
>  echo "running as" $(id)
>  
> +DIRNAME=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
> +PATH=$DIRNAME:$PATH

This will affect the PATH for the entire build -- is it really necessary?  Couldn't you just use `$DIRNAME/pull-external-gecko-source.sh`?

::: testing/taskcluster/scripts/builder/source_props.py
@@ +2,3 @@
>  
>  '''
> +Command line interface to fetch details of an external repo based on an in-gecko-tree manifest

It's a little more general, actually -- it just reads top-level keys from JSON files.  Maybe rename to `get_property.py` and pass it a full path ($gecko/$props_path) as a single argument?

@@ +40,5 @@
>  if args.prop == 'repository':
> +    if props.get('full_repo_path'):
> +        print(urlparse.urljoin(props['full_repo_path']))
> +    else:
> +        print(urlparse.urljoin('https://hg.mozilla.org', props['repo_path']))

It seems like this logic is in the wrong place -- it should either be in the caller or repos should just be spelled out in the properties file.  The latter would probably be easier if you left the b2g stuff alone in this bug (so, copies rather than moves in this diff) and filed a secondary bug to switch b2g to using `get_property.py` with a full URL in `b2g/config/gaia.json`.  I assume that would require some tweaks to the b2g bumper to produce such full URLs.

Then this section can just be reduced to `print(props[args.prop])`.  In fact, the whole thing boils down to a one-liner:

> python -c 'import json, sys; print(json.load(open(sys.argv[1]))[sys.argv[2]])' path/to/props.json propname
Wander, do you have thoughts on what I suggested above?  In particular, is there a nicer way to specify an arbitrary number of "extra" checkouts that checkout-sources should perform?
Flags: needinfo?(wcosta)
as always, thanks for the thoughtful and detailed review!

> So the overall idea here is to embed JSON props files under
> mobile/android/config/mozconfigs, containing a repo path and revision from
> which to pull the partner stuff.  Actually, it looks like there are two
> manifests?

yeah, basically the goal is to have a build that checks out a partner repo (determined by in-gecko manifest) and uses a gecko mozconfig to build from (determined by a partner repo manifest).


> 
> mozharness config specifies `src_mozconfig_manifest`, in which buildbase
> looks up the `gecko_path` property and joins that to the source directory to
> get the mozconfig path.  I think you meant to join it to the build directory
> (the objdir) in this case?

currently our build logic has mozharness take a mozconfig path, determined by the mozharness config, and copy that mozconfig into the src objdir. For the partner builds, all that is changing is that the partner repo determines the gecko mozconfig path, not the mozharness config.


> One of the neat things about having a decision task is, there's no need to
> interpret this stuff "live" during the build.  It can all be parsed out in
> the decision task instead and put into the task definition.  Then the task
> definition says explicitly what's to be done, rather than referring to
> instructions in specific revs of specific repos.

interesting. I am totally up for improving how I am currently doing things and re-writing to define this in the decision task.

some background: my motivation for the patch derived from not knowing anything about TC beforehand and from needing to add two similar android variants: this dummy apk bug and b2gdroid (https://bugzilla.mozilla.org/show_bug.cgi?id=1199720#c18). They are similar in that they both required the checkout of an additional repo alongside the gecko src. I noticed that the existing TC b2g-desktop builds are a perfect job type to copy from; b2g desktop builds are a desktop variant of normal ff desktop builds, but they check out one additional repo, gaia, and use a different mozconfig.

tl;dr I basically took how b2g desktop builds do things. They define the additional repo in the build task yml and not the decision task. Maybe if we do go with the decision task approach for b2gdroid and dummy apk, we should change b2g desktop too?


> and checkout-sources.sh gets extended to look for EXTRA_CHECKOUT_$N_* for
> increasing N until it finds no more (maybe there's a more elegant way to do
> this?).

This sounds good for future variants that have more than one dependency repo (mh does something similar where you can define a config['repos'] which is a list of dep repos)

but in the dummy apk case, I am pretty sure we would only have one partner repo per apk partner variant build.


> Finally, we need to think about how to handle the partner clones -- if
> tc-vcs is doing a full clone every time, we may end up bludgeoning github. 
> I don't think there's a good answer yet, and probably the load will
> initially be small, but please do bring it up in #taskcluster.

I'll make sure to raise this issue.

I won't follow up with the patch nits until we decide the overall strategy for this and b2gdroid going forward.
Comment on attachment 8664573 [details] [diff] [review]
150922_dummy_apk_tc-gecko.patch

>diff --git a/mobile/android/config/mozconfigs/public-partner/distribution_sample/mozconfig1 b/mobile/android/config/mozconfigs/public-partner/distribution_sample/mozconfig1
>new file mode 100644
>--- /dev/null
>+++ b/mobile/android/config/mozconfigs/public-partner/distribution_sample/mozconfig1

Is there a reason you have a '1' here in the filename (and elsewhere, like 64_api_11_partner_sample1.py)? Are we expecting multiple samples or something?

>@@ -0,0 +1,21 @@
>+# I'm a mozconfig that is being pointed by a partner manifest that lives in a github repo! neato yo

Can you link to the github repo here? Or does it not actually exist for the sample? And maybe ditch the "neato yo" :)

>diff --git a/testing/taskcluster/scripts/builder/build-linux.sh b/testing/taskcluster/scripts/builder/build-linux.sh
>+DIRNAME=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
>+PATH=$DIRNAME:$PATH

Why do you need to add this to PATH? Would just using ./pull-external-gecko-source.sh later work?

> # Ensure that in tree libraries can be found
> export LIBRARY_PATH=$LIBRARY_PATH:$WORKSPACE/src/obj-firefox:$WORKSPACE/src/gcc/lib64
> 
>+
> # test required parameters are supplied

nit: Random newline inserted

>diff --git a/testing/mozharness/configs/builds/releng_sub_android_configs/64_api_11.py b/testing/mozharness/configs/builds/releng_sub_android_configs/64_api_11_partner_sample1.py
>+    'src_mozconfig': None,  # use manifest to determine mozconfig src
>+    'src_mozconfig_manifest': 'partner/mozconfigs/mozconfig1.json',

Is this supposed to match the mobile/android/config/mozconfigs/public-partner/distribution_sample/mozconfig1 created above? Or where does mozconfig1.json come from?
Attachment #8664573 - Flags: review?(mshal) → review+
In general I think we have a chance to smooth over some of the rough edges of the B2G approach to things.  Note that the B2G approach was designed when mozharness was out-of-tree, so that may explain some of the clunkiness.  That said, with this functionality in place it'd be great to loop back and do the same with the b2g desktop builds too -- nice, but not required!

And you're right, I'm being overly general in trying to allow multiple extra repos -- one is sufficient for this case, and if someone needs more, well, patches accepted :)
Blocks: 1199720
thanks for the reviews.

I'll investigate the feasibility of driving the external repo checkouts from the decision task logic
Depends on: 1118394
(In reply to Dustin J. Mitchell [:dustin] from comment #28)
> Wander, do you have thoughts on what I suggested above? 

tc-vcs can be configured to cache partner repos (including github).

> In particular, is
> there a nicer way to specify an arbitrary number of "extra" checkouts that
> checkout-sources should perform?

I actually can't come up with a better idea.
Flags: needinfo?(wcosta)
trial run of defining the partner repo in the decision task: https://treeherder.mozilla.org/#/jobs?repo=try&revision=d9143f16bd98

this try job is using a docker image from my personal docker hub account since I had to modify the static checkout-sources.sh file

will check back tomorrow if it's green ;)
https://hg.mozilla.org/try/rev/d9143f16bd98#l3.12
    # TODO - include tools repository in EXTRA_CHECKOUT_REPOSITORIES list
    for extra_repo in $EXTRA_CHECKOUT_REPOSITORIES; do
        BASE_REPO="${extra_repo}_BASE_REPOSITORY"
        HEAD_REPO="${extra_repo}_HEAD_REPOSITORY"
        HEAD_REV="${extra_repo}_HEAD_REV"
        HEAD_REF="${extra_repo}_HEAD_REF"
        DEST_DIR="${extra_repo}_DEST_DIR"

        tc-vcs checkout ${!DEST_DIR} ${!BASE_REPO} ${!HEAD_REPO} ${!HEAD_REV} ${!HEAD_REF}
    done

very nice!  Looks like it successfully checked that out:

++ for extra_repo in '$EXTRA_CHECKOUT_REPOSITORIES'
++ BASE_REPO=PARTNER_BASE_REPOSITORY
++ HEAD_REPO=PARTNER_HEAD_REPOSITORY
++ HEAD_REV=PARTNER_HEAD_REV
++ HEAD_REF=PARTNER_HEAD_REF
++ DEST_DIR=PARTNER_DEST_DIR
++ tc-vcs checkout /home/worker/workspace/build/partner https://github.com/lundjordan/fennec-distribution-sample https://github.com/lundjordan/fennec-distribution-sample 4a322374d947b450d2807ebfc28f32728d7b4227

and later read the mozconfig from that repo:

06:41:49     INFO - Reading from file /home/worker/workspace/build/partner/mozconfigs/mozconfig1.json
06:41:49     INFO - Reading from file /home/worker/workspace/build/src/mobile/android/config/mozconfigs/public-partner/distribution_sample/mozconfig1
06:41:49     INFO - Contents:
06:41:49     INFO -  # I'm a mozconfig that is being pointed by a partner manifest that lives in a github repo! neato yo

it just failed with the same missing req for aapt, which should be coming when bug 1189892 lands.  So it looks like this is effectively green!

So it looks like partner/mozconfigs/mozconfig1.json is acting as a redirect to the proper mozconfig in the partner repo, requiring a little special-casing in mozharness.  Would it be possible to implement that with something simpler?  Either putting that path in the task description, or perhaps just making `partner.mozconfig` in the root of the partner repo a symlink to the desired mozconfig?  This isn't bad as-is, I'm just trying to further simplify :)
> So it looks like partner/mozconfigs/mozconfig1.json is acting as a redirect
> to the proper mozconfig in the partner repo, requiring a little
> special-casing in mozharness.  Would it be possible to implement that with
> something simpler?  Either putting that path in the task description, or
> perhaps just making `partner.mozconfig` in the root of the partner repo a
> symlink to the desired mozconfig?  This isn't bad as-is, I'm just trying to
> further simplify :)

nalexander: I know it's simply paramount to just get these stood up from a mobile perspective but I wanted to confirm that you folks are flexible with regards to how we obtain the mozconfig contents. I see 3 options:

1) have a manifest in partner repo that points to gecko-tree mozconfig path
    - this is how I have it now. this allows partners to change which gecko mozconfig is used but still requires a gecko yml change to update the partner rev with the new path change (unless we set partner checkout to always take the head of a branch)
2) define the mozconfig gecko path in taskcluster
    - this would require a gecko yml change to change which mozconfig is used. no partner repo change required.
3) define the actual mozconfig contents (not the path) in the partner repo itself
    - this puts the whole mozconfig file outside of the gecko repo which adds some opaqueness but gives more control to the partner. To use a change in the partner repo mozconfig, you would still need to update the taskcluster partner rev to checkout (again, unless we tell gecko to always take the head of a branch).

If you have no opinion/pref, I think I will go with option 3 to satisfy dustin's feedback as it would require the least amount of diff lines and keep things simple.
Flags: needinfo?(nalexander)
Oops, I had missed the part where the mozconfig still lived in the gecko tree, but was just pointed-to by the partner repo.
(In reply to Jordan Lund (:jlund) from comment #37)
> > So it looks like partner/mozconfigs/mozconfig1.json is acting as a redirect
> > to the proper mozconfig in the partner repo, requiring a little
> > special-casing in mozharness.  Would it be possible to implement that with
> > something simpler?  Either putting that path in the task description, or
> > perhaps just making `partner.mozconfig` in the root of the partner repo a
> > symlink to the desired mozconfig?  This isn't bad as-is, I'm just trying to
> > further simplify :)
> 
> nalexander: I know it's simply paramount to just get these stood up from a
> mobile perspective but I wanted to confirm that you folks are flexible with
> regards to how we obtain the mozconfig contents. I see 3 options:
> 
> 1) have a manifest in partner repo that points to gecko-tree mozconfig path
>     - this is how I have it now. this allows partners to change which gecko
> mozconfig is used but still requires a gecko yml change to update the
> partner rev with the new path change (unless we set partner checkout to
> always take the head of a branch)

I'm fine with this.  So you need to touch mozilla-central to bump the mozconfig, and the partner repo to pick up the changes.

> 2) define the mozconfig gecko path in taskcluster
>     - this would require a gecko yml change to change which mozconfig is
> used. no partner repo change required.

The TC build script is in mozilla-central, right?  So this requires an m-c commit.  (And no partner repo change is needed.)

> 3) define the actual mozconfig contents (not the path) in the partner repo
> itself
>     - this puts the whole mozconfig file outside of the gecko repo which
> adds some opaqueness but gives more control to the partner. To use a change
> in the partner repo mozconfig, you would still need to update the
> taskcluster partner rev to checkout (again, unless we tell gecko to always
> take the head of a branch).

My concern here is that the mozconfig will lag.  These things do change as the build system evolves.  I'd prefer to have the mozconfigs in m-c, so that global search replace is likely to do the right thing.

> If you have no opinion/pref, I think I will go with option 3 to satisfy
> dustin's feedback as it would require the least amount of diff lines and
> keep things simple.

Option 3 is my least favourite.  I think 1) makes slightly more sense than 2), since I expect every partner mozconfig to look like "default mozconfig" + "one or two special lines about distribution and branding".  So they would all reference the same "partner" mozconfig, and then modify the contents; rather than all specifying different mozconfigs in the tree.  If we grew partner-feature-set-1, partner-feature-set-2, partner-feature-set-N mozconfigs I think something will have gone wrong with Fennec's process.
Flags: needinfo?(nalexander)
> Option 3 is my least favourite.  I think 1) makes slightly more sense than
> 2)

roger. makes sense. sticking with (1)

patch incoming
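
For reference, option (1) then resolves roughly like this at build time (a sketch only; the `gecko_path` key and the paths are taken from the review comments and earlier log output):

    # Read the gecko mozconfig path out of the partner repo's manifest,
    # reusing the JSON one-liner from the review above, then copy it into
    # place as the .mozconfig mozharness builds from.
    GECKO_MOZCONFIG=$(python -c 'import json, sys; print(json.load(open(sys.argv[1]))[sys.argv[2]])' \
        "$WORKSPACE/build/partner/mozconfigs/mozconfig1.json" gecko_path)
    cp "$WORKSPACE/build/src/$GECKO_MOZCONFIG" "$WORKSPACE/build/src/.mozconfig"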
Thanks, good to know the reasoning!
(In reply to Nick Alexander :nalexander from comment #26)
> Comment on attachment 8664562 [details] [review]
> add mozconfig manifest to sample apk partner repo
> 
> Fold these down and update the commit message, please.  Then bombs away.

so we are sticking with this strategy. I squashed them but I don't have write permissions.

nick: would you mind merging https://github.com/mozilla/fennec-distribution-sample/pull/1 or 302'ing to someone who can?
Flags: needinfo?(nalexander)
this can be seen working here: https://treeherder.mozilla.org/logviewer.html#?job_id=12013102&repo=try

some notes:
* I will need to either get permission to push to docker.hub/taskcluster/desktop-build or have someone push for me. The docker REGISTRY and VERSION bits can be ignored.
* I am still pointing to my forked example partner repo: https://github.com/lundjordan/fennec-distribution-sample and can change that back to mozilla's once my PR has been merged in
Attachment #8660526 - Attachment is obsolete: true
Attachment #8664573 - Attachment is obsolete: true
Attachment #8667093 - Flags: review?(dustin)
Comment on attachment 8667093 [details] [diff] [review]
150928_dummy_apk-tc-mh.patch

Review of attachment 8667093 [details] [diff] [review]:
-----------------------------------------------------------------

I'm happy to pull and push images for you.
Attachment #8667093 - Flags: review?(dustin) → review+
(In reply to Jordan Lund (:jlund) from comment #42)
> (In reply to Nick Alexander :nalexander from comment #26)
> > Comment on attachment 8664562 [details] [review]
> > add mozconfig manifest to sample apk partner repo
> > 
> > Fold these down and update the commit message, please.  Then bombs away.
> 
> so we are sticking with this strategy. I squashed them but I don't have
> write permissions.
> 
> nick: would you mind merging
> https://github.com/mozilla/fennec-distribution-sample/pull/1 or 302'ing to
> someone who can?

Done.
Flags: needinfo?(nalexander)
Depends on: 1209614
Keywords: leave-open
Depends on: 1210631
Flags: needinfo?(catlee)
jlund, two questions: 1) why is this still open?

2) why does this not work any more?  https://treeherder.mozilla.org/#/jobs?repo=try&revision=fce7879bf072&selectedJob=16308920
Flags: needinfo?(jlund)
(In reply to Nick Alexander :nalexander from comment #48)
> jlund, two questions: 1) why is this still open?
>

I suppose the initial goal of this bug has been resolved. Further requests/features can be follow-ups. I'll close this once question (2) has been addressed.

> 2) why does this not work any more? 
> https://treeherder.mozilla.org/#/
> jobs?repo=try&revision=fce7879bf072&selectedJob=16308920

hm, good question. it seems like this is an internal thing to tc-vcs, which does some caching of artifacts so we don't need to actually clone. At least that's what this suggests to me:


++ tc-vcs checkout /home/worker/workspace/build/partner https://github.com/mozilla/fennec-distribution-sample https://github.com/mozilla/fennec-distribution-sample 756f0378d4cac87e5e6c405249ede5effe082da2
[taskcluster-vcs] detectHg: start fetching head of https://github.com/mozilla/fennec-distribution-sample
[taskcluster-vcs] detectHg: end fetching head of https://github.com/mozilla/fennec-distribution-sample
[taskcluster-vcs:error] Artifact "public/github.com/mozilla/fennec-distribution-sample.tar.gz" not found for task ID NUWLwXEDSGmuhA6DWl3Ong.  This could be caused by the artifact not being created or being marked as expired.
[taskcluster-vcs:error] Could not clone repository using cached copy. Use '--force-clone' to perform a full clone.


if you follow that task id in question (I guess tc-vcs spins off a sub TC task to figure out whether to use a cache or clone), you can see that the task succeeded but something is still not right:

https://tools.taskcluster.net/task-inspector/#NUWLwXEDSGmuhA6DWl3Ong/0

My best stab in the dark is tc-vcs syntax has changed or we have a bug. garndt, any ideas?
Flags: needinfo?(jlund) → needinfo?(garndt)
The task is correct, but the artifacts expire after 30 days (that one is > 30 days old).  I kicked off a task [1] to cache it again.  We don't have this as part of our typical list of things to cache, so could a bug be filed for that to remind me when I'm around again in my usual timezone next week?

In production we purposely disable cloning repos with tc-vcs that do not have a cached copy, just to prevent repeated full clones being done and bringing services down (such as when we have a thundering herd problem with gitmo). Sorry for the hassle :(. This change has been rolling out over the past couple of months.


[1] https://tools.taskcluster.net/task-inspector/#F8kwgWO-QtCvCirMR2X0bw/0
Flags: needinfo?(garndt)
Blocks: 1255119
(In reply to Greg Arndt [:garndt] from comment #50)
> The task is correct, but the artifacts expire after 30 days (that one is >
> 30 days old).  I kicked off a task [1] to cache it again.

> [1] https://tools.taskcluster.net/task-inspector/#F8kwgWO-QtCvCirMR2X0bw/0

I tried to rerun this but I don't have the right scopes: 403:  You do not have sufficient scopes

garndt, could you trigger this again? I suspect we are not running the partner job that needs it every month.
Flags: needinfo?(garndt)
Sure thing, here is the retriggered task.

https://tools.taskcluster.net/task-inspector/#VA_lTEPNTs-4XUQwHxwZTA/0
Flags: needinfo?(garndt)
thanks!

nick: could you try triggering another job and see if 1255119 is unblocked?
Flags: needinfo?(nalexander)
(In reply to Jordan Lund (:jlund) from comment #53)
> thanks!
> 
> nick: could you try triggering another job and see if 1255119 is unblocked?

https://treeherder.mozilla.org/#/jobs?repo=try&revision=f67c6d9f4273
Flags: needinfo?(nalexander)
(In reply to Nick Alexander :nalexander from comment #54)
> (In reply to Jordan Lund (:jlund) from comment #53)
> > thanks!
> > 
> > nick: could you try triggering another job and see if 1255119 is unblocked?
> 
> https://treeherder.mozilla.org/#/jobs?repo=try&revision=f67c6d9f4273

This job failed due to unrelated changes that were in my tree.  I'll push a clean one.
Depends on: 1255855
(In reply to Nick Alexander :nalexander from comment #55)
> (In reply to Nick Alexander :nalexander from comment #54)
> > (In reply to Jordan Lund (:jlund) from comment #53)
> > > thanks!
> > > 
> > > nick: could you try triggering another job and see if 1255119 is unblocked?
> > 
> > https://treeherder.mozilla.org/#/jobs?repo=try&revision=f67c6d9f4273
> 
> This job failed due to unrelated changes that were in my tree.  I'll push a
> clean one.

I filed Bug 1255855 to track this, but I'm calling this ticket done and dusted.  Thanks, everybody!
Status: NEW → RESOLVED
Closed: 8 years ago
Keywords: leave-open
Resolution: --- → FIXED
Component: General Automation → General