Closed Bug 921040 Opened 11 years ago Closed 9 years ago

Cross-compile Firefox for Mac on Linux

Categories

(Infrastructure & Operations Graveyard :: CIDuty, task)

x86_64
Linux
task
Not set
normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: coop, Assigned: ted)

References

Details

(Whiteboard: [kanban:engops:https://mozilla.kanbanize.com/ctrl_board/6/2212] [capacity])

Attachments

(2 files, 2 obsolete files)

Once we have the go-ahead from legal, we should try to make this happen as quickly as possible. It will be *amazing* to get out of the hardware game for building on Mac if we can figure this out.
I corresponded with bz who had tried this before. Here is his response:

I had this working at some point a few years back, doing OSX builds on Linux, including distcc stuff.  At the time I was using Apple's gcc on Linux (via the toolwhip project), but presumably we'd just use clang now...

In any case, I documented what I did at the time here:

http://weblogs.mozillazine.org/bz/archives/020363.html
http://weblogs.mozillazine.org/bz/archives/020371.html
http://weblogs.mozillazine.org/bz/archives/020405.html

So the gotchas (mostly lifted from those blog posts plus vague memories):

1)  You will probably have to fix whatever cross-compiling bugs have crept into our build system over the last 3.5 years.  I certainly had to fix a bunch at the time (see link in the third blog post).

2)  You'll need copies of the relevant SDKs on the Linux boxes and will need to explicitly point to them in the mozconfig, I suspect (CROSS_LIB_PATH, etc).

3)  You'll probably need to fix bug 543111, since I doubt breakpad magically got happier cross-compiling since then.

4)  Like the third blog post says, make package needs a Linux binary that can create a dmg.  Or something.

Apart from those issues, it actually almost Just Worked back then...
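As a concrete illustration of gotcha 2, a cross mozconfig needs to name the target triple and point the build at an SDK staged on the Linux builder. The option spellings and paths below are illustrative assumptions, not verified against the build system of the era:

```shell
# Hypothetical cross-compile mozconfig sketch -- option names and paths are
# illustrative; check configure --help for the real spellings.
ac_add_options --target=x86_64-apple-darwin

# A copy of the Mac SDK staged somewhere on the Linux builder:
ac_add_options --with-macos-sdk=/opt/cross/MacOSX10.7.sdk

# Drive clang as a cross-compiler for both C and C++:
export CC="clang -target x86_64-apple-darwin"
export CXX="clang++ -target x86_64-apple-darwin"
```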
DMG creation was always the hard part of this: the DMG format isn't documented, and even though there are some tools that have mostly reverse-engineered it I worry about that more than anything else.

I guess we're doing this for cost reasons?
We could do DMGs as a service. Just have a small bank of Macs that expose an API for DMG "packaging": upload a bunch of files and command options, and get a DMG back.
Yeah, I was thinking that myself. Just generate a .zip file, push it somewhere, have something else produce the .dmg.

Per comment 1, symbol dumping is also going to be an issue, but that should be pretty solvable. (Parsing Mach-O isn't a big issue, it's just that we use ObjC and OS X-specific headers to do the symbol dumping.)
w.r.t. symbol dumping, LLVM has APIs to parse and extract info from binary files. They should work regardless of the host architecture.

http://www.llvm.org/doxygen/group__LLVMCObject.html

That's just the C interface. The C++ interface, while technically non-stable, is much more thorough.

There's even a Python binding to the C interface.
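To show how little host-specific machinery basic Mach-O parsing needs, here is a minimal sketch in plain Python (not using the LLVM bindings mentioned above; constants taken from Apple's mach-o/loader.h) that recognizes a 64-bit Mach-O header on any host:

```python
import struct

MH_MAGIC_64 = 0xFEEDFACF  # 64-bit Mach-O magic (host-endian)

def parse_macho_header(data):
    """Return the basic fields of a 64-bit Mach-O header, or None."""
    if len(data) < 32:
        return None
    magic, cputype, cpusubtype, filetype, ncmds, sizeofcmds, flags, _reserved = \
        struct.unpack("<8I", data[:32])
    if magic != MH_MAGIC_64:
        return None
    return {"cputype": cputype, "filetype": filetype,
            "ncmds": ncmds, "sizeofcmds": sizeofcmds}

# Synthetic header: CPU_TYPE_X86_64 (0x01000007), MH_EXECUTE (2), no commands.
hdr = struct.pack("<8I", MH_MAGIC_64, 0x01000007, 0x80000003, 2, 0, 0, 0, 0)
info = parse_macho_header(hdr)
```

Real symbol dumping of course needs the full load-command and symtab walk, but none of it requires an OS X host.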
(In reply to Benjamin Smedberg  [:bsmedberg] from comment #2)
> DMG creation was always the hard part of this: the DMG format isn't
> documented, and even though there are some tools that have mostly
> reverse-engineered it I worry about that more than anything else.

Reading DMGs can be kind of hard, because there are so many variants, but creating one should be straightforward. There just aren't any tools to do it currently. After all, it's "only" an HFS+ image chunked and compressed in a special format.
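For a sense of the "special format": a UDIF DMG is identified by a 512-byte big-endian trailer ending the file, beginning with the magic b"koly". The sketch below builds and detects such a trailer; the magic, version, and header size are exact, but the remaining field layout is an assumption, and real trailers also carry plist offsets and checksums that this sketch leaves zeroed:

```python
import struct

KOLY_MAGIC = 0x6B6F6C79  # b"koly", the UDIF trailer signature

def make_udif_trailer(data_fork_length):
    """Sketch of the 512-byte big-endian 'koly' trailer that ends every DMG.
    Only magic/version/header-size are exact; the rest is an assumed layout."""
    t = struct.pack(">IIII", KOLY_MAGIC, 4, 512, 0)     # magic, version, headerSize, flags
    t += struct.pack(">QQQ", 0, 0, data_fork_length)    # assumed: runningOff, dataForkOff, dataForkLen
    return t.ljust(512, b"\x00")

def is_udif(blob):
    """DMG readers identify a UDIF image by 'koly' in the final 512 bytes."""
    return len(blob) >= 512 and blob[-512:-508] == b"koly"

# A fake 4 KiB "compressed HFS+ payload" followed by the trailer:
img = b"\x00" * 4096 + make_udif_trailer(4096)
```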
Depends on: 921494
Assignee: nobody → joey
(In reply to Benjamin Smedberg  [:bsmedberg] from comment #2)
> DMG creation was always the hard part of this: the DMG format isn't
> documented, and even though there are some tools that have mostly
> reverse-engineered it I worry about that more than anything else.
Yep, DMGs are a real concern here. Signing is another. One fall-back plan is to do the compile+link in a VM, and then do the DMG (and maybe signing?) on physical hardware. We already do something similar for signing Windows builds and that's working well.


> I guess we're doing this for cost reasons?
It's more for dynamic burst capacity and rapid scaling. Our Linux, B2G and Windows infrastructure is scaling up with load nicely, but OS X isn't. Offloading most of the overall build time like this will reduce the bottleneck, and let us better handle our variable and growing workload... across *all* the platforms that Mozilla supports.
Group: mozilla-corporation-confidential
I need to dig up another link to go with this but here is some info for building clang as a cross compiler:
    http://wiki.osdev.org/LLVM_Cross-Compiler

build_toolchain and broomstick will download all the sources needed and build binutils, gcc, llvm, clang, etc:
  https://raw.github.com/berkus/metta/develop/build_toolchain.sh
  http://wiki.osdev.org/Boomstick

Package versions are stale and some of the gcc tarball names have changed for v4.8, but it will build for the most part with a checkout from trunk.  Building clang eventually fails on a missing header -- cxxabi.h.  libcxx and libcxxabi are separate projects in the llvm source tree and will need to be built/included to generate the missing header and compile in ABI support.


For archive images, here are a few pages on reading/writing them as filesystem mounts, using common command line tools, etc.: 
https://launchpad.net/ubuntu/+source/hfsplus
https://code.google.com/p/atv-bootloader/wiki/InstallHFSTools
https://github.com/shinh/maloader [ehsan]
As far as getting this installed, we'll either need to install it from tooltool or (assuming there's more than a clang binary involved!) install it as an RPM.  If the latter is the plan, then the best input for us will be a .spec that we can check into puppet, similar to

  http://hg.mozilla.org/build/puppet/file/13b5eef36fb9/modules/packages/manifests/mozilla/python27.spec

That way when we need to perform an upgrade, we have the spec right there, and can see the changes that we make in the puppet hg logs.
Since we're talking about a whole new platform configuration, can we do this right and design reproducibility outside of release automation from the beginning?

I want something in m-c that points to a single "appliance revision" for constructing the build environment. This can be a tarball of a chroot environment, a Docker image identifier, etc. I want to 1) avoid the issues with time-varying build environments as described at http://gregoryszorc.com/blog/2013/07/16/analysis-of-firefox%27s-build-automation/ 2) enable any developer in the world to recreate a bit-exact copy of the build environment, and 3) not fall into performance suckitude like bug 851294. #2 is particularly important for this platform configuration because of its "hackiness" and the effort involved in configuring it locally.

I don't want to block this effort: I just ask that we consider reproducibility as early in the process as possible.
Well, I just tried to comment on the blog post, but Disqus ate it.  So I'll comment here, and you can copy it into the blog if you'd like.

---

PuppetAgain actually aims to do exactly what you're suggesting.  The releng infrastructure should be knowable (to use Mitchell's word) through the puppet hg repository.  If you need to know when an important package was updated, the hg history should tell you, with reasonably intelligible diffs (so, not binary diffs).  It's also externally reproducible: if you've got the resources, you can create a bit-exact[1] copy of the build environment.  It's usually far easier to get access to a releng host that's already built for you, rather than rsync 300GB to your laptop, but it *is* possible.

You mentioned that some packages are using ensure => latest, but you didn't look behind the curtain.  For two of those packages (mercurial and python), the RPMs and whatnot are built from spec files in the same directory (http://hg.mozilla.org/build/puppet/file/tip/modules/packages/manifests/mozilla/), so "latest" means "the version in my sibling file".  I can see where that would make things difficult if you tried to re-build an older revision, so we should probably change that.

As for mock, that's from the upstream CentOS repositories, and we use "latest" because the precise version of mock in use didn't seem critical to building the right bits of Firefox.  If that's wrong, please file a bug in Relops:Puppet.  However, even with "latest", that version doesn't change willy-nilly: all of the mirror repositories are frozen, and are not updated except during very carefully controlled refreshes, which have so far happened zero times.  During those refreshes, any effects on the build system reveal bugs where we've failed to pin a version correctly, or have developed an unintentional dependency on a particular version.

So, we're already more successful at this than you're suggesting.  There's always bugs, but fewer and fewer!

[1] Modulo secrets, hostnames, timestamps, and stuff like that.  Call it "functionally identical"
As to your #2, I'm not sure why an RPM that is easily available in binary, source, and spec form doesn't satisfy that requirement better than some kind of (lightweight) system snapshot.  System snapshots have the disadvantage of being all-or-nothing - you can't mix a docker image with the clang version you want with another docker image containing the version of Python you want, for example.

It would be interesting to explore the difficulty of creating chroot snapshots or docker images of freshly-built releng systems periodically.  This has been discussed before, but not very definitively.  It's not something I have time for, but I filed bug 925912 so that the idea is not lost.

Anyway, I'm happy with whatever solution you choose - I just want to make sure it's clear how existing tools could handle this.
Just use tooltool. Note that our clang for linux might just work already. I've used a stock debian clang in my distcc mac-on-linux builds, and I don't think the packages are explicitly built with mac support.
Makes sense, since that's how clang's being distributed now.

What about all the headers (and libraries?) -- how will those be installed?
Blocks: 927061
Thanks to all for the suggestions in comment 10 through comment 15. Those are all questions that become relevant _after_ we can show the cross compile is possible and reasonable within the constraints of the build system.

Please do offer any help or insight into the "can we do this at all" questions here, and put deployment issues into (or blocking) bug 927061.

This bug will continue to track the progress of getting a binary that works. Thanks!
mshal found another cross-compile wiki and was able to generate mach-o object files.
  http://clang.llvm.org/docs/CrossCompilation.html
Transferring them to Mac hardware and linking produced a usable binary.

The lingering problem is standing up the linker and assembler.
I thought I mentioned it already but I can't see it here: I seem to remember Apple is planning to switch to lld as a linker in Xcode.
(In reply to Mike Hommey [:glandium] from comment #18)
> I thought i mentioned it already but i can't see it here: i seem to remember
> Apple is planning to switch to lld as a linker in xcode.

Implying: we could try lld.
This is an interesting effort.

If scaling is the issue, how about running OS X in VMWare or VirtualBox on Linux servers? Wouldn't that be easier than cross-compiling?
(In reply to Stefan Arentz [:st3fan] from comment #20)
> This is an interesting effort.
> 
> If scaling is the issue, how about running OS X in VMWare or VirtualBox on
> Linux servers? Wouldn't that be easier than cross-compiling?

Last I checked it was against Apple's licensing terms to run OSX on any non-Apple hardware. (You could run it in VirtualBox on Apple hardware, for example, but that doesn't help us here). That's why this bug is to try to get a working OSX build without using OSX itself, so that we can use any hardware we want. At least, as far as I understand it :)
Would using Darwin make this easier? It's not Linux, but may run the OS X compilation tools just fine.
(In reply to Mike Hommey [:glandium] from comment #19)
> (In reply to Mike Hommey [:glandium] from comment #18)
> > I thought i mentioned it already but i can't see it here: i seem to remember
> > Apple is planning to switch to lld as a linker in xcode.
> 
> Implying: we could try lld.

Or, as froydnj mentioned yesterday on irc, try cctools, which does contain apple's ld:
http://www.opensource.apple.com/source/cctools/cctools-839/ld/

I don't know if it builds and works out of the box on Linux, though.
(In reply to Mike Hommey [:glandium] from comment #23)
> (In reply to Mike Hommey [:glandium] from comment #19)
> > (In reply to Mike Hommey [:glandium] from comment #18)
> > > I thought i mentioned it already but i can't see it here: i seem to remember
> > > Apple is planning to switch to lld as a linker in xcode.
> > 
> > Implying: we could try lld.
> 
> Or, as froydnj mentioned yesterday on irc, try cctools, which does contain
> apple's ld:
> http://www.opensource.apple.com/source/cctools/cctools-839/ld/
> 
> I don't know if it builds and works out of the box on Linux, though.

I "cloned" cctools this afternoon (I'm sure tarballs exist somewhere, but I just wget'd the source tree).  I'm going to see how much work it might be getting things together on Linux tomorrow.
(In reply to Nathan Froyd (:froydnj) from comment #24)
> I "cloned" cctools this afternoon (I'm sure tarballs exist somewhere, but I
> just wget'd the source tree).  I'm going to see how much work it might be
> getting things together on Linux tomorrow.

FYI, FWIW, http://code.google.com/p/toolwhip/source/browse/trunk/cctools.README?r=161
This is an old fork, but it has some things figured out.
(In reply to Florian Bender from comment #22)
> Would using Darwin make this easier? It's not Linux, but may run the OS X
> compilation tools just fine.

That's an interesting suggestion, provided we don't have any particular reason to want all the production machines to run Linux.

I'm wondering if what we really want here is to have a single machine setup capable of compiling for all platforms.
I think the primary goal is to have a machine we can run on AWS which can build or at least mostly-build mac builds. If Darwin can do that better than Linux, I suspect that would be acceptable.
(In reply to Benjamin Smedberg  [:bsmedberg] from comment #27)
> I think the primary goal is to have a machine we can run on AWS which can
> build or at least mostly-build mac builds. 

This is correct, as it provides the most benefit to the CI infrastructure. Anything else is likely going to be more expensive and less flexible. AWS EC2 compatibility is our choice, until shown unworkable.

> If Darwin can do that better than
> Linux, I suspect that would be acceptable.

Darwin is not available as an AWS EC2 image, so it's out of the running.
(In reply to Hal Wine [:hwine] (use needinfo) from comment #28)
> Darwin is not available as a AWS EC2 image, so is out of the running.

I was actually investigating the availability of some Mac OS tools (specifically hdiutil) on Darwin.
I guess I can safely stop.
Just now I checked VMWare's "Compatibility Guide" for "Guest OS" with "OS Vendor" == Apple.  (Visit the following URL, select Apple under OS Vendor, and click Update and View Results.)  I expected to find only "Fusion" (VMWare's version of Workstation for OS X).  But I also found something called ESXi.

According to http://www.vmware.com/products/vsphere/features/esxi-hypervisor.html, "vSphere ESXi Hypervisor" is a "bare-metal hypervisor that installs directly on top of your physical server and partitions it into multiple virtual machines".

So if Amazon's AWS EC2 uses VMWare, or is somehow VMWare-compatible, we may actually be able to run OS X on it.  This capability, if it exists, is likely to be hard to find -- since Apple might get pissed off if it were prominently advertised.  So it might repay someone's effort to double-check.
> (Visit the following URL, select Apple under OS Vendor, and click Update and View Results.)

http://www.vmware.com/resources/compatibility/search.php?deviceCategory=software&testConfig=16
Amazon doesn't use VMware - they're Xen based.  And users cannot supply their own kernels.
(In reply to Nathan Froyd (:froydnj) from comment #24)
> I "cloned" cctools this afternoon (I'm sure tarballs exist somewhere, but I
> just wget'd the source tree).  I'm going to see how much work it might be
> getting things together on Linux tomorrow.

Imports of cctools and ld64 are now available at:

https://github.com/froydnj/cctools
https://github.com/froydnj/ld64

Ought to be interesting to see how much patching is required...
(In reply to Steven Michaud from comment #30)
> According to
> http://www.vmware.com/products/vsphere/features/esxi-hypervisor.html,
> "vSphere ESXi Hypervisor" is a "bare-metal hypervisor that installs directly
> on top of your physical server and partitions it into multiple virtual
> machines".

That's still bound to the OSX license saying that you can't run it virtualized when the host is not Apple hardware, even if technically, it works. It's stupid, but it is how it is.
Just a note about status for the lld linker from llvm's website.  The code is likely pre-alpha quality and will need time to harden and be rigorously tested for production use.

http://lld.llvm.org/

Current Status
==============
lld is in its early stages of development.
It can currently self host on Linux x86-64 with -static.
status: cross-compiling binutils

binutils can cross-compile w/o much effort and generate a few tools as mach-o binaries.
Unfortunately as, ld and gprof are not on that list.  Configure will exclude these three because of a dependency on a few packages that are also used while building gcc: ppl, gmp, and there may be others.  The build currently fails with compile problems and may have a chicken-and-egg problem if any tools from binutils are needed.  If so, a bootstrap build or helper binaries may be needed to finish building the dependent packages before the final build/link of binutils.
(In reply to Joey Armstrong [:joey] from comment #36)
> status: cross-compiling binutils
> 
> binutils can cross-compile w/o much effort and generate a few tools as
> mach-o binaries.
> Unfortunately as, ld and gprof are not on that list.  Configure will exclude
> these three because of a dependency a few packages that are also used while
> building gcc.  ppl, gmp and there may be others.

This explanation seems incomplete.  You mention ld and gprof, but then say that configure excludes these *three* packages...what's the missing one?

The dependencies you list (ppl, gmp) are for building gcc, not binutils.  Are you trying to build an entire gcc-based toolchain?
(In reply to Nathan Froyd (:froydnj) from comment #37)
> (In reply to Joey Armstrong [:joey] from comment #36)
> > status: cross-compiling binutils
> > 
> > binutils can cross-compile w/o much effort and generate a few tools as
> > mach-o binaries.
> > Unfortunately as, ld and gprof are not on that list.  Configure will exclude
> > these three because of a dependency a few packages that are also used while
> > building gcc.  ppl, gmp and there may be others.
> 
> This explanation seems incomplete.  You mention ld and gprof, but then say
> that configure excludes these *three* packages...what's the missing one?
> 
> The dependencies you list (ppl, gmp) are for building gcc, not binutils. 
> Are you trying to build an entire gcc-based toolchain?

The problem is exactly as stated.  While building binutils, as, ld and gprof are not generated.  The reason for this is that ppl and gmp are not available, which short-circuits traversal and building of the ld/ and as/ directories.
(In reply to Joey Armstrong [:joey] from comment #38)
> (In reply to Nathan Froyd (:froydnj) from comment #37)
> > (In reply to Joey Armstrong [:joey] from comment #36)
> > > status: cross-compiling binutils
> > > 
> > > binutils can cross-compile w/o much effort and generate a few tools as
> > > mach-o binaries.
> > > Unfortunately as, ld and gprof are not on that list.  Configure will exclude
> > > these three because of a dependency a few packages that are also used while
> > > building gcc.  ppl, gmp and there may be others.
> > 
> > This explanation seems incomplete.  You mention ld and gprof, but then say
> > that configure excludes these *three* packages...what's the missing one?
> > 
> > The dependencies you list (ppl, gmp) are for building gcc, not binutils. 
> > Are you trying to build an entire gcc-based toolchain?
> 
> The problem is exactly as stated.  While building binutils - as, ld and
> gprof are not generated.
>

Whoops, I can't read.  Sorry about that!

> The reason for this is ppl and gmp are not
> available which short-circuits traversal and building of the ld/ and as/
> directories.

That's unfortunate; it shouldn't.
Sorry if what I'm saying here is repeated elsewhere already, but the approach that Google has taken to provide Chrome Mac builds on Linux is to create a Mach-O loader <https://github.com/shinh/maloader> which can load the Apple closed source linker binary in order to link a Mach-O executable.  As for the compiler, clang can already do the cross-compiling we need here.  With that, all you need on the Linux systems doing the build is the same version of clang that we use on Mac today, maloader, Apple's ld binary, and its system headers.  Is there any reason why we're not simply following what they've done here?
Also, please note that the LLVM lld is not ready for production yet.  We can only assume that it is once Apple starts to ship it with xcode.
(In reply to :Ehsan Akhgari (needinfo? me!) from comment #41)
> Also, please note that the LLVM lld is not ready for production yet.  We can
> only assume that it is once Apple starts to ship it with xcode.

https://bugzilla.mozilla.org/show_bug.cgi?id=921040#c35

details are mentioned on the llvm lld project page
(In reply to :Ehsan Akhgari (needinfo? me!) from comment #40)
> Sorry if what I'm saying here is repeated elsewhere already, but the
> approach that Google has taken to provide Chrome Mac builds on Linux is to
> create a Mach-O loader <https://github.com/shinh/maloader> which can load
> the Apple closed source linker binary in order to link a Mach-O executable. 
> As for the compiler, clang can already do the cross-compiling we need here. 
> With that, all you need on the Linux systems doing the build is the same
> version of clang that we use on Mac today, maloader, Apple's ld binary, and
> its system headers.  Is there any reason why we're not simply following what
> they've done here?

Recorded for posterity here, maloader will be another option to evaluate:
https://bugzilla.mozilla.org/show_bug.cgi?id=921040#c9
> a Mach-O loader <https://github.com/shinh/maloader> which can load
> the Apple closed source linker binary

Even better if this can load Apple's hdiutil binary ... or can be made
to do so.
(In reply to Nathan Froyd (:froydnj) from comment #39)
> [...]
> > The reason for this is ppl and gmp are not
> > available which short-circuits traversal and building of the ld/ and as/
> > directories.
> 
> That's unfortunate; it shouldn't.

I'm pretty sure that isn't the reason, the binutils root configure script has a couple of explicit cases on $TARGET that do

*-*-darwin*
  NOCONFIGDIRS += ld gas gprof
;;

exactly why that is I'm not sure, I rm'd them and am now building so we'll see what happens.
(In reply to Trevor Saunders (:tbsaunde) from comment #45)
> [...]
> I'm pretty sure that isn't the reason, the binutils root configure script
> has a couple of explicit cases on $TARGET that do
> 
> *-*-darwin*
>   NOCONFIGDIRS += ld gas gprof
> ;;
> 
> exactly why that is I'm not sure, I rm'd them and am now building so we'll
> see what happens.

It turns out ld's configure just fails because ld/configure.tgt doesn't say how to handle darwin, so I think it's a safe guess that binutils ld doesn't and never did support linking for darwin.
From the maloader page:
* TODO
- make ld in xcode4 work
Anyone else have any luck with a linker? I've tried a few of the suggestions so far - just thought I'd document the results:

maloader:
 - The package builds fine, but running ./ld-mac ./hello_world seg-faults (with a hello_world executable built on a Mac)
 - As glandium pointed out, the README suggests that only XCode3 (not 4) will work, assuming it didn't segfault :)

lld:
 - The package builds fine in a fresh clone of llvm. I have:
    llvm/.git
    llvm/tools/clang/.git
    llvm/tools/lld/.git
    llvm/projects/compiler-rt/.git
 - I'm not sure how to run it and actually link something:
   $ clang -target x86_64-apple-darwin -c ok.c 
   # The resulting ok.o file can be copied to an OSX machine, linked there, and runs fine

   $ lld -flavor darwin -macosx_version_min 10.8.0 -o ok ok.o 
   Undefined Symbol: command line option -entry : _main
   symbol(s) not found
(I'm probably missing something obvious)

cctools:
 - This builds with patches as froydnj pointed out
 - The docs seem to suggest that even though it is built as a 32-bit executable, it can handle 64-bit Mach-O objects just fine, but that doesn't seem to be the case:
   $ ./ld_classic -macosx_version_min 10.8.0 ok.o -o ok
   ./ld_classic: does not support 64-bit architectures

 - Compiling a 32-bit object seems a little closer:
   $ clang -target i686-apple-darwin -c ok.c -o ok-i686.o
   $ ./ld_classic -macosx_version_min 10.8.0 ok-i686.o -o ok
   ./ld_classic: Undefined symbols:
   _printf

Note the command-line reported by running clang on OSX looks like this:
 "/usr/bin/ld" -demangle -dynamic -arch x86_64 -macosx_version_min 10.8.0 -o a.out ok.o -lSystem /usr/bin/../lib/clang/5.0/lib/darwin/libclang_rt.osx.a

Adding libclang_rt.osx.a to the command line doesn't change anything, but adding libSystem.dylib gives:

./ld_classic: libSystem.dylib load command 4 unknown cmd field

So who knows :). I'm sure one of these will turn up something useful, I just thought I'd document my results so far.

Anyone else get any promising results? Or have other suggestions to try?
(In reply to Michael Shal [:mshal] from comment #48)
> Anyone else get any promising results? Or have other suggestions to try?

Did you try ld64?  Or were you expecting me to do that (which I can do)?
Flags: needinfo?(mshal)
(In reply to Nathan Froyd (:froydnj) from comment #49)
> (In reply to Michael Shal [:mshal] from comment #48)
> > Anyone else get any promising results? Or have other suggestions to try?
> 
> Did you try ld64?  Or were you expecting me to do that (which I can do)?

I haven't been able to get ld64 to compile, unfortunately. Even though the toolwhip version comes with a Makefile.linux and instructions on how to build it, I wind up with tons of errors. Some are easy to fix, like missing stdio.h or other includes, but there are lots of other errors as well.

I wasn't expecting you to try it, but if you have time, I definitely think another pair of eyes couldn't hurt! :)
Flags: needinfo?(mshal)
maloader status:
=================
This test was a simple hello-world compile.  After installing maloader, transferring files, and setting up a work area on Linux, clang --target=zzz can be used to generate a common binary usable on either the Linux box or on a Mac:


[phantasm::hello] uname
Linux

[phantasm::hello] ./bld.sh

[phantasm::hello] file hello.mac
hello.mac: Mach-O 64-bit executable

[phantasm::hello] ./hello.mac
Hello world

[phantasm::hello] cksum hello.mac
3954674961 8680 hello.mac

scp hello.mac [....]



[banshee-2::~] uname
Darwin

[banshee-2::~] ./hello.mac
Hello world

[banshee-2::~] file hello.mac
hello.mac: Mach-O 64-bit executable x86_64

[banshee-2::~] cksum hello.mac
3954674961 8680 hello.mac
Ok, so the reason my maloader was segfaulting while joey's worked turns out to be that my hello-world binary was compiled for post-10.8 OSX, while his uses a pre-10.8 convention. Specifically, the entry point into the binary in pre-10.8 land is defined by LC_UNIXTHREAD, which maloader supports. In post-10.8 land, it is defined by LC_MAIN, which maloader doesn't support. Since maloader is unable to pull out the entry point, it tries to jump to 0x0 and dies. (Some details about LC_UNIXTHREAD and LC_MAIN are here: http://stackoverflow.com/questions/14422229/basic-os-x-assembly-and-the-mach-o-format )

It looks like maloader is incorporated into darling (http://darling.dolezel.info/en/Darling) which does support LC_MAIN. Unfortunately, I wasn't able to get darling to build. On the bright side, I was able to peek at how it was using LC_MAIN, and add that to maloader. Now maloader runs my hello-world executable too.

I still need to do some further testing, but assuming that's it, we can probably get that change upstreamed to maloader and use it on both pre- and post- 10.8 binaries.
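To make the LC_MAIN/LC_UNIXTHREAD distinction concrete, here is a small sketch (command constants from Apple's mach-o/loader.h; the synthetic blobs are illustrative fragments, not runnable executables) that walks the load commands the way a loader like maloader must in order to find the entry point:

```python
import struct

MH_MAGIC_64 = 0xFEEDFACF
LC_UNIXTHREAD = 0x5          # pre-10.8 entry-point convention
LC_MAIN = 0x80000028         # 10.8+ entry-point convention (LC_REQ_DYLD | 0x28)

def entry_convention(data):
    """Walk the load commands of a 64-bit Mach-O blob and report which
    entry-point convention it uses -- the distinction that made stock
    maloader jump to 0x0 on 10.8-era binaries."""
    magic, _, _, _, ncmds, _, _, _ = struct.unpack("<8I", data[:32])
    assert magic == MH_MAGIC_64
    off = 32
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from("<II", data, off)
        if cmd == LC_MAIN:
            return "LC_MAIN"
        if cmd == LC_UNIXTHREAD:
            return "LC_UNIXTHREAD"
        off += cmdsize
    return None

# Synthetic headers with a single, content-free load command each:
new_blob = struct.pack("<8I", MH_MAGIC_64, 0x01000007, 3, 2, 1, 24, 0, 0) \
         + struct.pack("<IIQQ", LC_MAIN, 24, 0, 0)      # entryoff, stacksize zeroed
old_blob = struct.pack("<8I", MH_MAGIC_64, 0x01000007, 3, 2, 1, 8, 0, 0) \
         + struct.pack("<II", LC_UNIXTHREAD, 8)         # thread state omitted
```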
(In reply to Michael Shal [:mshal] from comment #52)
> I still need to do some further testing, but assuming that's it, we can
> probably get that change upstreamed to maloader and use it on both pre- and
> post- 10.8 binaries.

It sounds like maloader/darling are what y'all are going to use, but assuming it doesn't work out somehow, I have an almost-linkable current-as-of-Apple's-open-source-release ld64 here:

https://github.com/froydnj/ld64/tree/ld64-136-linux

Requires a minimal Mac /usr/include derived from toolwhip.  The patches required, though numerous, are all relatively straightforward.  They do remove some functionality (e.g. LTO is not supported, but would be straightforward to add back), but it's all non-essential functionality from the point of view of Firefox.
(In reply to Nathan Froyd (:froydnj) from comment #53)
> (In reply to Michael Shal [:mshal] from comment #52)
> > I still need to do some further testing, but assuming that's it, we can
> > probably get that change upstreamed to maloader and use it on both pre- and
> > post- 10.8 binaries.
> 
> It sounds like maloader/darling are what y'all are going to use, but
> assuming it doesn't work out somehow, I have an almost-linkable
> current-as-of-Apple's-open-source-release ld64 here:

I don't think we are dead-set one way or another yet. Since Joey was able to get maloader running, that seemed like the most promising option at the moment, but it's still just at the hello-world stage!

> 
> https://github.com/froydnj/ld64/tree/ld64-136-linux
> 
> Requires a minimal Mac /usr/include derived from toolwhip.  The patches
> required, though numerous, are all relatively straightforward.  They do
> remove some functionality (e.g. LTO is not supported, but would be
> straightforward to add back), but it's all non-essential functionality from
> the point of Firefox.

Sounds good to me - I'll give it another try. Thanks for putting this together!
(In reply to Nathan Froyd (:froydnj) from comment #53)
> (In reply to Michael Shal [:mshal] from comment #52)
> > I still need to do some further testing, but assuming that's it, we can
> > probably get that change upstreamed to maloader and use it on both pre- and
> > post- 10.8 binaries.
> 
> It sounds like maloader/darling are what y'all are going to use, but
> assuming it doesn't work out somehow, I have an almost-linkable
> current-as-of-Apple's-open-source-release ld64 here:
> 
> https://github.com/froydnj/ld64/tree/ld64-136-linux
> 
> Requires a minimal Mac /usr/include derived from toolwhip.

This tree now builds on my machine.  Download:

http://people.mozilla.org/~nfroyd/usr_include.tar.bz2

and unpack it somewhere convenient.  Clone the git repo and change src/Makefile.linux's CROSS_USR_INCLUDE to point at the usr_include directory which resulted from unpacking the tarball.  Then:

cd $GIT_CLONE_PATH/src && make -f Makefile.linux ld64

should build the tree on your machine, too.  (You'll need libuuid and its headers; the package is uuid-dev on my Ubuntu and Debian machines.)

The build procedure is clunky, I know.

Next step is transferring an OS X sysroot to Linux and having ld64 find it.
Question from Mr Pragmatic:

Apple announced that the 12-core Mac Pro will ship in December.

Why don't we buy 5 of these, put them in our data centre, put VMWare Fusion on them and then have like 60 independent OS X instances where we can both build *and test*.

Compiling is just one part of what we want to scale out right? We also want to test. Which can only be done on a real OS X instance.

Having virtualized OS X will also be really nice for the fuzzing infrastructure that cdiehl is working on.

And, as a bonus, we can virtualize 10.5 up to 10.9. Those all run on VMWare Fusion. Legally.
(In reply to Stefan Arentz [:st3fan] from comment #56)
> And, as a bonus, we can virtualize 10.5 up to 10.9. Those all run on VMWare
> Fusion. Legally.

Only 10.7 and up can be virtualized legally.
> Only 10.7 and up can run virtualize legally.

Apple allows VMware to virtualize 10.5 and 10.6 *Server* on Apple hardware, but not the client versions.
(Note: I'm not involved with this project and I'm not trying to be stop energy, just offering some historical perspective/insight.)

(In reply to Stefan Arentz [:st3fan] from comment #56)
> Why don't we buy 5 of these, put them in our data centre, put VMWare Fusion
> on them and then have like 60 independent OS X instances where we can both
> build *and test*.

Whenever we've used VMs in the past, especially for building, I/O has been our bottleneck. We've either had to underload the CPU/RAM or live with slow I/O to make VMs work. I _think_ (but I'm not completely sure) that I/O isn't an issue with our EC2 machines. If that's true, we get better bang for the buck there. (And it may be better value anyway because we won't be paying the Apple Premium.)

I realize it was just a number out of thin air, but 60 isn't actually very many instances compared to our load. We have nearly 100 i5 minis doing builds right now and they do manage to keep up, but our load continues to grow. Obviously it's easy to say we can just buy more, but the amount of time and effort it takes to do so is much more than you'd imagine. We've been unable to either buy enough machines in advance or buy continually enough to keep up with load in the past.
mshal and I went back and forth fixing problems on IRC that we discovered.  The good news is that Linux-hosted ld64 can link runnable binaries!

He mentioned that he was seeing occasional deadlocks, though.  I've pushed a fix that I think addresses this and I'm not seeing deadlocks in testing on my machine.  Michael, does git HEAD work for you now?
Flags: needinfo?(mshal)
This is somewhat orthogonal to this bug, but nevertheless relevant: Apple now provides "bots" for CI and automatic testing in the new Xcode via Mac OS Server instances. Info should be in your Apple Developer Connection inbox. This may solve the underlying issue through other means while, unfortunately, keeping the restriction on Apple hardware.
(In reply to Nathan Froyd (:froydnj) from comment #60)
> He mentioned that he was seeing occasional deadlocks, though.  I've pushed a
> fix that I think addresses this and I'm not seeing deadlocks in testing on
> my machine.  Michael, does git HEAD work for you now?

Cool! I can confirm it is working without deadlocks now.

I saw on IRC that you were able to get through configure? Can you post your mozconfig? With the deadlock fixed, I get through the main configure ("Reticulating splines"), but then die in js/src/configure.
Flags: needinfo?(mshal) → needinfo?(nfroyd)
This is my mozconfig:

# Move the ld64 binary to $CROSS_LD_PATH/ld.
CROSS_LD_PATH=/home/froydnj/src/ld64.git/
CROSS_SYSROOT=/opt/build/froydnj/OSX-10.6-SDK
FLAGS="-target x86_64-apple-darwin -mlinker-version=136 -B $CROSS_LD_PATH -Wl,-syslibroot,$CROSS_SYSROOT"
export HOST_CC=gcc
export HOST_CXX=g++

export CC="clang $FLAGS"
export CXX="clang++ $FLAGS"
export CPP="clang $FLAGS -E"

export CROSS_COMPILE=1

ac_add_options --target=x86_64-apple-darwin11
ac_add_options --with-macos-sdk=$CROSS_SYSROOT
ac_add_options --enable-debug
ac_add_options --disable-optimize
# ICU does not work.
ac_add_options --without-intl-api
# Breakpad attempts to compile host tools with mac headers.
ac_add_options --disable-crashreporter

You'll also need to do:

cd $CROSS_SYSROOT/usr/lib && ln -s libstdc++.6.dylib libstdc++.dylib

if $CROSS_SYSROOT/usr/lib/libstdc++.dylib doesn't exist (it didn't in my SDK).  I'm not sure whether Apple's clang handles the absence of libstdc++.dylib properly or not.

I just discovered --disable-crashreporter was needed, so I'm recompiling now.  But a good chunk of the tree compiles, even without cctools lying around, apparently.  We'll see if libxul links. :)

I have a small patch to configure that is required so cross-compiles pick up va_copy correctly.  I'll post that in another, dependent bug in just a moment.
Flags: needinfo?(nfroyd)
(In reply to Nathan Froyd (:froydnj) from comment #63)
> I just discovered --disable-crashreporter was needed, so I'm recompiling
> now.

Build still falls over under crashreporter bits with --disable-crashreporter.  Disappointing.
Depends on: 931043
Depends on: 543111
Depends on: 931053
With the configure and breakpad patches, I get to:

make[5]: Entering directory `/opt/build/froydnj/cross-osx/nsprpub/pr'
make -C include install
make[6]: Entering directory `/opt/build/froydnj/cross-osx/nsprpub/pr/include'
../../config/./nsinstall -t -m 0644 /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/nspr.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/pratom.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prbit.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prclist.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prcmon.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prcountr.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prcvar.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prdtoa.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prenv.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prerr.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prerror.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prinet.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prinit.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prinrval.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prio.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/pripcsem.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prlink.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prlock.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prlog.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prlong.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prmem.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prmon.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prmwait.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prnetdb.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prolock.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prpdce.h 
/home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prprf.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prproces.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prrng.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prrwlock.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prshma.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prshm.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prsystem.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prthread.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prtime.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prtpool.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prtrace.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prtypes.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prvrsion.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/prwin16.h /opt/build/froydnj/cross-osx/config/nspr/../../dist/include/nspr/
make -C md install
make[7]: Entering directory `/opt/build/froydnj/cross-osx/nsprpub/pr/include/md'
../../../config/./nsinstall -D /opt/build/froydnj/cross-osx/config/nspr/../../dist/include/nspr/md
../../../config/./nsinstall -t -m 644 /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_aix32.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_aix64.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_beos.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_bsdi.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_darwin.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_dgux.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_freebsd.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_hpux32.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_hpux64.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_irix32.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_irix64.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_linux.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_netbsd.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_nto.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_openbsd.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_os2.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_osf1.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_qnx.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_riscos.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_scoos.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_solaris.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_symbian.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_unixware7.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_unixware.cfg 
/home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_win95.cfg /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_winnt.cfg /opt/build/froydnj/cross-osx/config/nspr/../../dist/include/nspr/md
../../../config/./nsinstall -t -m 644 /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/md/_darwin.cfg /opt/build/froydnj/cross-osx/config/nspr/../../dist/include/nspr
mv -f /opt/build/froydnj/cross-osx/config/nspr/../../dist/include/nspr/_darwin.cfg /opt/build/froydnj/cross-osx/config/nspr/../../dist/include/nspr/prcpucfg.h
make[7]: Leaving directory `/opt/build/froydnj/cross-osx/nsprpub/pr/include/md'
make -C private install
make[7]: Entering directory `/opt/build/froydnj/cross-osx/nsprpub/pr/include/private'
../../../config/./nsinstall -t -m 0644 /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/private/pprio.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/private/pprthred.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/private/prpriv.h /opt/build/froydnj/cross-osx/config/nspr/../../dist/include/nspr/private
make[7]: Leaving directory `/opt/build/froydnj/cross-osx/nsprpub/pr/include/private'
make -C obsolete install
make[7]: Entering directory `/opt/build/froydnj/cross-osx/nsprpub/pr/include/obsolete'
../../../config/./nsinstall -t -m 0644 /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/obsolete/pralarm.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/obsolete/probslet.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/obsolete/protypes.h /home/froydnj/src/mozilla-central-official.git/nsprpub/pr/include/obsolete/prsem.h /opt/build/froydnj/cross-osx/config/nspr/../../dist/include/nspr/obsolete
make[7]: Leaving directory `/opt/build/froydnj/cross-osx/nsprpub/pr/include/obsolete'
make[6]: Leaving directory `/opt/build/froydnj/cross-osx/nsprpub/pr/include'
make -C src install
make[6]: Entering directory `/opt/build/froydnj/cross-osx/nsprpub/pr/src'
rm -f libnspr4.a
echo cr libnspr4.a ./prvrsion.o io/./prfdcach.o io/./prmwait.o io/./prmapopt.o io/./priometh.o io/./pripv6.o io/./prlayer.o io/./prlog.o io/./prmmap.o io/./prpolevt.o io/./prprf.o io/./prscanf.o io/./prstdio.o threads/./prcmon.o threads/./prrwlock.o threads/./prtpd.o linking/./prlink.o malloc/./prmalloc.o malloc/./prmem.o md/./prosdep.o memory/./prshm.o memory/./prshma.o memory/./prseg.o misc/./pralarm.o misc/./pratom.o misc/./prcountr.o misc/./prdtoa.o misc/./prenv.o misc/./prerr.o misc/./prerror.o misc/./prerrortable.o misc/./prinit.o misc/./prinrval.o misc/./pripc.o misc/./prlog2.o misc/./prlong.o misc/./prnetdb.o misc/./praton.o misc/./prolock.o misc/./prrng.o misc/./prsystem.o misc/./prthinfo.o misc/./prtpool.o misc/./prtrace.o misc/./prtime.o pthreads/./ptsynch.o pthreads/./ptio.o pthreads/./ptthread.o pthreads/./ptmisc.o md/unix/./unix.o md/unix/./unix_errors.o md/unix/./uxproces.o md/unix/./uxrng.o md/unix/./uxshm.o md/unix/./uxwrap.o md/unix/./darwin.o md/unix/./os_Darwin.o 
cr libnspr4.a ./prvrsion.o io/./prfdcach.o io/./prmwait.o io/./prmapopt.o io/./priometh.o io/./pripv6.o io/./prlayer.o io/./prlog.o io/./prmmap.o io/./prpolevt.o io/./prprf.o io/./prscanf.o io/./prstdio.o threads/./prcmon.o threads/./prrwlock.o threads/./prtpd.o linking/./prlink.o malloc/./prmalloc.o malloc/./prmem.o md/./prosdep.o memory/./prshm.o memory/./prshma.o memory/./prseg.o misc/./pralarm.o misc/./pratom.o misc/./prcountr.o misc/./prdtoa.o misc/./prenv.o misc/./prerr.o misc/./prerror.o misc/./prerrortable.o misc/./prinit.o misc/./prinrval.o misc/./pripc.o misc/./prlog2.o misc/./prlong.o misc/./prnetdb.o misc/./praton.o misc/./prolock.o misc/./prrng.o misc/./prsystem.o misc/./prthinfo.o misc/./prtpool.o misc/./prtrace.o misc/./prtime.o pthreads/./ptsynch.o pthreads/./ptio.o pthreads/./ptthread.o pthreads/./ptmisc.o md/unix/./unix.o md/unix/./unix_errors.o md/unix/./uxproces.o md/unix/./uxrng.o md/unix/./uxshm.o md/unix/./uxwrap.o md/unix/./darwin.o md/unix/./os_Darwin.o
echo libnspr4.a
libnspr4.a
../../config/./nsinstall -t -m 0755 ./libnspr4.a  /opt/build/froydnj/cross-osx/config/nspr/../../dist/sdk/lib/
../../config/./nsinstall: cannot access ./libnspr4.a: No such file or directory
make[6]: *** [install] Error 1

which might be because I don't have cross tools for OS X installed.  I'm digging for where that particular invocation of |cr| (?!) is coming from.
(In reply to Nathan Froyd (:froydnj) from comment #65)
> I'm digging for where that particular invocation of |cr| (?!) is coming from.

$(AR) cr ...
(In reply to Mike Hommey [:glandium] from comment #66)
> (In reply to Nathan Froyd (:froydnj) from comment #65)
> > I'm digging for where that particular invocation of |cr| (?!) is coming from.
> 
> $(AR) cr ...

Ah, duh, this is because I don't have a cross |ar|.  mshal, how complicated was it to get cctools working?  Did you just use toolwhip?
Flags: needinfo?(mshal)
(In reply to Nathan Froyd (:froydnj) from comment #67)
> Ah, duh, this is because I don't have a cross |ar|.  mshal, how complicated
> was it to get cctools working?  Did you just use toolwhip?

ar is ar. You don't need a "cross" ar. You might need a cross "ranlib", though.
(In reply to Nathan Froyd (:froydnj) from comment #67)
> Ah, duh, this is because I don't have a cross |ar|.  mshal, how complicated
> was it to get cctools working?  Did you just use toolwhip?

Building a cross binutils gets us 'ar', 'ranlib', and many others (but not 'ld', hence the need for ld64 and such). Here's how I configured it:

../configure --target=x86_64-apple-darwin --prefix=/home/marf/install-darwin-binutils
Flags: needinfo?(mshal)
Still dying when building NSS:

clang -target x86_64-apple-darwin -mlinker-version=136 -B /home/froydnj/src/ld64.git/ -Wl,-syslibroot,/opt/build/froydnj/OSX-10.6-SDK -dynamiclib -compatibility_version 1 -current_version 1 -install_name @executable_path/libnssckbi.dylib -headerpad_max_install_names -isysroot /opt/build/froydnj/OSX-10.6-SDK -exported_symbols_list /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/nssckbi.def -o /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/libnssckbi.dylib /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/anchor.o /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/constants.o /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/bfind.o /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/binst.o /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/bobject.o /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/bsession.o /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/bslot.o /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/btoken.o /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/certdata.o /opt/build/froydnj/cross-osx/security/nss/lib/ckfw/builtins/ckbiver.o   /opt/build/froydnj/cross-osx/security/build/../../dist/lib/libnssckfw.a /opt/build/froydnj/cross-osx/security/build/../../dist/lib/libnssb.a  -L/opt/build/froydnj/cross-osx/dist/lib -lplc4 -lplds4 -lnspr4  
ld: warning: ignoring file /opt/build/froydnj/cross-osx/security/build/../../dist/lib/libnssckfw.a, file was built for (null) which is not the architecture being linked (x86_64): /opt/build/froydnj/cross-osx/security/build/../../dist/lib/libnssckfw.a
Undefined symbols for architecture x86_64:
  "_NSSCKFWC_CancelFunction", referenced from:
      _builtinsC_CancelFunction in anchor.o
  "_NSSCKFWC_CloseAllSessions", referenced from:
      _builtinsC_CloseAllSessions in anchor.o
  "_NSSCKFWC_CloseSession", referenced from:
      _builtinsC_CloseSession in anchor.o

...and lots more symbols like that.
(In reply to Nathan Froyd (:froydnj) from comment #70)
> ld: warning: ignoring file
> /opt/build/froydnj/cross-osx/security/build/../../dist/lib/libnssckfw.a,
> file was built for (null) which is not the architecture being linked
> (x86_64):
> /opt/build/froydnj/cross-osx/security/build/../../dist/lib/libnssckfw.a

This warning appears to be because Apple's ld expects every object in an archive to be aligned to a 32-bit boundary.  binutils doesn't seem to enforce that.
(In reply to Nathan Froyd (:froydnj) from comment #71)
> (In reply to Nathan Froyd (:froydnj) from comment #70)
> > ld: warning: ignoring file
> > /opt/build/froydnj/cross-osx/security/build/../../dist/lib/libnssckfw.a,
> > file was built for (null) which is not the architecture being linked
> > (x86_64):
> > /opt/build/froydnj/cross-osx/security/build/../../dist/lib/libnssckfw.a
> 
> This warning appears to be because Apple's ld expects all objects in an
> archive to be aligned to 32-bit boundaries.  binutils doesn't seem to
> enforce that.

OK, with some small changes to binutils, I can now link libnssckbi.dylib.  Removing stale .a files and kicking off a new build.

I'll work on getting the fixes to binutils upstreamed.
Depends on: 932127
After fixing more JS cross-compilation issues (bug 932127), executables in js/src/ (the shell and test executables) link.  Yay!

The only problem is that the linking is ridiculously slow: 8 minutes 30 seconds to link the JS shell on a 2.6GHz Core 2.  Linking libxul could easily take more than an hour, maybe more than two.  Profiles say that we are spending all (yes, *all*) of our time copying strings around.

glandium thinks that this might be because the linker is churning through the debug information and that doesn't happen on OS X because of separate debug information files (ergo, the code has never been optimized).  clang doesn't seem to provide any bits for splitting out the debug information, and the flags for the linker for stripping debug information don't seem to have any effect.  I'll try investigating exactly what gets done on OS X tomorrow and see if there are any substantial differences.
AIUI, what happens on Mac is that the compiler produces .o files with DWARF as usual, but the linker only links the code sections during linking, not the debug info. To get linked debug info you have to run dsymutil on the binary, which tracks down all the .o files and links their debug info into another file.
(In reply to Ted Mielczarek [:ted.mielczarek] from comment #74)
> AIUI, what happens on Mac is that the compiler produces .o files with DWARF
> as usual, but the linker only links the code sections during linking, not
> the debug info. To get linked debug info you have to run dsymutil on the
> binary, which tracks down all the .o files and links their debug info into
> another file.

Theoretically, we are using the same linker, though, so I'm not entirely sure why the linker on Linux is taking so long.

I did notice that .a files are significantly larger when generated by binutils than by Apple's tools; I'm seeing if that makes a difference in the link times.
(In reply to Nathan Froyd (:froydnj) from comment #75)
> I did notice that .a files are significantly larger when generated by
> binutils than by Apple's tools; I'm seeing if that makes a difference in the
> link times.

Ah, could it be stripping by default?
(In reply to Mike Hommey [:glandium] from comment #76)
> (In reply to Nathan Froyd (:froydnj) from comment #75)
> > I did notice that .a files are significantly larger when generated by
> > binutils than by Apple's tools; I'm seeing if that makes a difference in the
> > link times.
> 
> Ah, could it be stripping by default?

clang on OS X seems to pass -dead_strip (--gc-sections) by default, which clang on Linux targeting OS X does not (I might just have an old clang version).  Adding -dead_strip makes the link about 5% faster.

The .a size differential is because binutils's ar/ranlib includes symbols for .eh_frame (!) in the table of contents for the archive, whereas Apple's does not.  Fixing this is pretty easy, but it doesn't seem to help at all.

The __LINKEDIT segment for the JS shell is about 12MB.  Googling around suggests that the linker does clever things with __LINKEDIT for 10.6+, resulting in significantly smaller sections.  I thought I was passing the proper -macosx_version_min flags to trigger this sort of thing, but I need to double-check.
Well, I figured out my issues with getting through configure:

1) The name of the sdk path is important. I had /home/mshal/cross/sysroot, but got errors like:

1 warning generated.
ld: malformed 32-bit x.y.z version number: 3.5.0-42-generic
clang-3.4: error: linker command failed with exit code 1 (use -v to see invocation)
configure: failed program was:

#line 2572 "configure"
#include "confdefs.h"

main(){return(0);}
configure: error: installation or configuration problem: C compiler cannot create executables.
*** Fix above errors and then restart with               "make -f client.mk build"
make: *** [configure] Error 1

It is trying to pull the version info from the path, and when that fails it falls back to a sysctl call and grabs it from the host, which is wrong in this case. Instead I renamed my sysroot to /home/mshal/cross/OSX-10.6-SDK, similar to froydnj's, and now it can link.

2) Somehow I missed defining CPP in mozconfig, though froydnj had it in his. Neglecting to define it results in:

configure: error: Your host toolchain does not support C++0x/C++11 mode properly. Please upgrade your toolchain
------ config.log ------
/usr/include/c++/4.7/functional:1160:58: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1169:58: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1179:58: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1282:55: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1292:55: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1302:55: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1312:55: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1322:55: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1332:55: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1342:55: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1352:55: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1591:21: error: expected identifier before 'unsigned'
/usr/include/c++/4.7/functional:1726:23: error: expected unqualified-id before 'unsigned'
/usr/include/c++/4.7/functional:1727:23: error: expected unqualified-id before 'unsigned'
/usr/include/c++/4.7/functional:1735:26: error: '_M_max_size' was not declared in this scope
/usr/include/c++/4.7/functional:1736:31: error: '_M_max_align' was not declared in this scope
/usr/include/c++/4.7/functional:1737:7: error: '_M_max_align' was not declared in this scope
configure: failed program was:
#line 10489 "configure"
#include "confdefs.h"
#include <memory>
int main() {

; return 0; }
configure: error: Your host toolchain does not support C++0x/C++11 mode properly. Please upgrade your toolchain

With these changes I can get through configure with clang/ld64. Now let's see if I can reproduce the slow linking :)
Apparently my configury was not detecting that clang supported visibility attributes et al.  That might be responsible for a profusion of symbols that needed processing.  Trying to verify that possibility with a build.

I needed to update my mozconfig with:

CROSS_LD_PATH=/home/froydnj/src/ld64.git/
# The format of the sysroot directory is very important.
# It must contain the OS X version number.
CROSS_SYSROOT=/opt/build/froydnj/OSX-10.6-SDK
FLAGS="-target x86_64-apple-darwin -mlinker-version=136 -B $CROSS_LD_PATH"

export HOST_CC=gcc
export HOST_CXX=g++
# ld ignores this flag.  We need to set HOST_LDFLAGS to something non-empty
# so that configure thinks that we actually set it.
export HOST_LDFLAGS="-g"

export LDFLAGS="-Wl,-syslibroot,$CROSS_SYSROOT -Wl,-dead_strip"
export CC="clang $FLAGS"
export CXX="clang++ $FLAGS"
export CPP="clang $FLAGS -E"

so that clang doesn't complain that you're using -Wl in a command that doesn't link (a compile error?).  Some fiddling with configure may be in order to ensure that LDFLAGS is properly passed around.
(In reply to Nathan Froyd (:froydnj) from comment #79)
> Apparently my configury was not detecting that clang supported visibility
> attributes et al.  That might be responsible for a profusion of symbols that
> needed processing.  Trying to verify that possibility with a build.

Doing the right thing with visibility didn't seem to help.

I can't tell yet whether WebRTC's build system supports cross-compiles, but I do know our use of it doesn't.  Added --disable-webrtc to the mozconfig to make things go a little further.
(In reply to Nathan Froyd (:froydnj) from comment #80)
> I can't tell yet whether WebRTC's build system supports cross-compiles, but
> I do know our use of it doesn't.  Added --disable-webrtc to the mozconfig to
> make things go a little further.

Finally got all the way to libxul linking.  Stymied by the requirement that /System/Library/PrivateFrameworks exists.  And this is not only a problem for libxul:

http://mxr.mozilla.org/mozilla-central/search?string=System/Library/Private
(In reply to Nathan Froyd (:froydnj) from comment #80)
> (In reply to Nathan Froyd (:froydnj) from comment #79)
> > Apparently my configury was not detecting that clang supported visibility
> > attributes et al.  That might be responsible for a profusion of symbols that
> > needed processing.  Trying to verify that possibility with a build.
> 
> Doing the right thing with visibility didn't seem to help.

I can't quite understand what's going on here.  I built the js shell on my Mac.  The __LINKEDIT sections are virtually the same size for the Mac-built and Linux-built binaries (the Linux one is actually slightly smaller).  The commands for the linker are identical save for path differences.  libjs_static.a is roughly the same size (slightly smaller on Linux).  But the Mac-built one links hundreds of times faster than the Linux-built one.  I'm not quite sure what's happening.  Need to try moving objects/libraries between machines to see if something particular is the culprit.
(In reply to Nathan Froyd (:froydnj) from comment #82)
> I can't quite understand what's going on here.  I built the js shell on my
> Mac.  The __LINKEDIT sections are virtually the same size for the Mac-built
> and Linux-built binaries (the Linux one is actually slightly smaller).  The
> commands for the linker are identical save for path differences. 
> libjs_static.a is roughly the same size (slightly smaller on Linux).  But
> the Mac-built one links hundreds of times faster than the Linux-built one. 
> I'm not quite sure what's happening.  Need to try moving objects/libraries
> between machines to see if something particular is the culprit.

Did you try building the linker from source on mac? Maybe that's actually not the same at all.
(In reply to Nathan Froyd (:froydnj) from comment #82)
> I can't quite understand what's going on here.

I'm an idiot.  The strlcpy routine I had imported into ld64 was busted.  Fixing that (I've pushed fixes to github) makes the linker run significantly faster.  Now to figure out a fix for the /System/Library/Private stuff.
Depends on: 933071
Depends on: 933231
(In reply to Nathan Froyd (:froydnj) from comment #84)
> Now to figure out a fix for the /System/Library/Private stuff.

With bug 933071 applied and configured appropriately, I can now link libxul.  Unfortunately, the build falls over immediately after executing dependentlibs.py because I don't have otool installed.
Flash news report on irc:
 08:07 < froydnj> IT LIIIIIIVES

\o/ +100 for the Halloween timing!
With this patch, all the patches from dependent bugs, a mozconfig like:

CROSS_LD_PATH=/home/froydnj/src/ld64.git/
CROSS_SYSROOT=/opt/build/froydnj/OSX-10.6-SDK
CROSS_PRIVATE_FRAMEWORKS=/opt/build/froydnj/PrivateFrameworks
FLAGS="-target x86_64-apple-darwin -mlinker-version=136 -B $CROSS_LD_PATH"

export HOST_CC=gcc
export HOST_CXX=g++
export HOST_LDFLAGS="-g"

export LDFLAGS="-Wl,-syslibroot,$CROSS_SYSROOT -Wl,-dead_strip"
export CC="clang $FLAGS"
export CXX="clang++ $FLAGS"
export CPP="clang $FLAGS -E"

CROSS_TOOLS_PREFIX=/opt/build/froydnj/darwin-crosstools/bin/x86_64-apple-darwin
export AR=${CROSS_TOOLS_PREFIX}-ar
export RANLIB=${CROSS_TOOLS_PREFIX}-ranlib
export STRIP=${CROSS_TOOLS_PREFIX}-strip

export CROSS_COMPILE=1

ac_add_options --target=x86_64-apple-darwin11
ac_add_options --with-macos-sdk=$CROSS_SYSROOT
ac_add_options --with-macos-private-frameworks=$CROSS_PRIVATE_FRAMEWORKS
ac_add_options --enable-debug
ac_add_options --disable-optimize
ac_add_options --without-intl-api
ac_add_options --disable-crashreporter
ac_add_options --disable-webrtc

and a Mach-O objdump at ${CROSS_TOOLS_PREFIX}-objdump, I'm able to complete the
build.  Then, on some Mac, I can do:

mac$ mkdir cross-dist
mac$ cd cross-dist
mac$ ssh $build_machine 'cd $objdir/dist; tar cf - . -h' | gtar xf -
mac$ open ./NightlyDebug.app

and voilĂ !  Working Mac Firefox compiled on a Linux machine.

This doesn't address uses of otool elsewhere in the tree or in
third-party projects.  It may be worth just using cctools for otool (and
ar/ranlib) to minimize pain for ourselves and/or other people.
Depends on: 933320
WebRTC builds and links with the patches in bug 933320.  I'm pretty sure they break "normal" Mac compilation--or at least configurations where you don't set --with-macos-sdk--so they're not quite ready for prime time yet.
FWIW, an --enable-optimize --disable-debug build also works on my machine; clobber timings with make -j4:

real	37m14.584s
user	116m47.554s
sys	7m29.324s

This is roughly as fast as my (newer-CPU) Mac mini--at least, as fast as I remember my Mac mini being several months ago.
For the patch in this bug, what are these two checks needed for?

+        if len(ar_basename) > 2 and ar_basename.find('darwin') != -1 ...

I had my AR set to "/home/mshal/cross/install-binutils/x86_64-apple-darwin/bin/ar", which made both of these fail. I just commented the checks out locally to get it working.
(In reply to Michael Shal [:mshal] from comment #90)
> For the patch in this bug, what are these two checks needed for?
> 
> +        if len(ar_basename) > 2 and ar_basename.find('darwin') != -1 ...
> 
> I had my AR set to
> "/home/mshal/cross/install-binutils/x86_64-apple-darwin/bin/ar", which made
> both of these fail. I just commented the checks out locally to get it
> working.

Ah, so the intent of these checks is that you'd have:

AR=/home/mshal/cross/install-binutils/bin/x86_64-apple-darwin-ar

(The reason x86_64-apple-darwin/bin/ar exists at all is esoteric; it shouldn't normally be used.)

But yes, the check could be made more robust, perhaps by checking the output of objdump or something.
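For illustration, the basename heuristic and a more path-tolerant variant could be sketched like this (the function names and the second variant are my assumptions, not the actual patch, which could also consult objdump output as suggested above):

```shell
# Sketch of the intent of the patch's check: the target triple is expected
# in the basename, e.g. .../bin/x86_64-apple-darwin-ar.
looks_like_darwin_ar() {
    case "$(basename "$1")" in
        *darwin*) return 0 ;;
        *)        return 1 ;;
    esac
}

# Assumed, more tolerant variant: match the whole path, so layouts like
# .../x86_64-apple-darwin/bin/ar also pass.
looks_like_darwin_ar_anywhere() {
    case "$1" in
        *darwin*) return 0 ;;
        *)        return 1 ;;
    esac
}
```

With the basename-only check, an AR living at .../x86_64-apple-darwin/bin/ar fails (its basename is just "ar"), which matches the behavior reported in comment 90; the path-wide variant accepts both layouts.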
ICU's cross-compilation model is interesting. It needs to build & run some tools locally on the host (pkgdata, as an example). But it expects those tools to already be built and available in a path somewhere, rather than just using a host-compiler to build a host version of pkgdata as part of the build process. (See: http://mxr.mozilla.org/mozilla-central/source/intl/icu/source/configure.in#184 )

I think we'll either need to build it twice in our tree (host first, then the cross one), or assume that the cross-compilation environment will have host ICU tools available. If we go with the latter approach, I think we can support ICU with a few small configure tweaks. I'm not exactly sure how building it twice as part of our actual build process would go, since we'd need to run configure twice, and the cross-configure won't complete until the host ICU build has finished.
(In reply to Michael Shal [:mshal] from comment #92)
> ICU's cross-compilation model is interesting. It needs to build & run some
> tools locally on the host (pkgdata, as an example). But it expects those
> tools to already be built and available in a path somewhere, rather than
> just using a host-compiler to build a host version of pkgdata as part of the
> build process. (See:
> http://mxr.mozilla.org/mozilla-central/source/intl/icu/source/configure.
> in#184 )
> 
> I think we'll either need to build it twice in our tree (host first, then
> the cross one), or assume that the cross-compilation environment will have
> host ICU tools available. If we go with the latter approach, I think we can
> support ICU with a few small configure tweaks. I'm not exactly sure how
> building it twice as part of our actual build process would go, since we'd
> need to run configure twice, and the cross-configure won't complete until
> the host ICU build has finished.

What do we do currently for Android cross-compiles?  Do we already have the necessary ICU bits in the build environment?

How hard do you think it would be to just fix ICU to do the right thing and compile host binaries in a cross configuration?  (It is probably more expedient to just make sure our build environment has the host tools.)
(In reply to Nathan Froyd (:froydnj) from comment #93)
> What do we do currently for Android cross-compiles?  Do we already have the
> necessary ICU bits in the build environment?

It appears ICU isn't supported on Android yet (https://bugzilla.mozilla.org/show_bug.cgi?id=864843). The top-level configure.in defaults to --without-intl-api unless we build for MOZ_BUILD_APP="browser".

> 
> How hard do you think it would be to just fix ICU to do the right thing and
> compile host binaries in a cross configuration?  (It is probably more
> expedient to just make sure our build environment has the host tools.)

I'm not sure yet - I'll take a look. One thing that I don't understand at this point is whether all of these binaries are necessary only on the host, or whether we need cross versions of them as well. Does anyone know if we just need the libraries (libic*.a) to be compiled for the target arch? Or do we need bin/* as well?
As far as I can tell, we don't need the cross binaries, just the cross libraries. At least, the ICU binaries don't show up anywhere in $(DIST).

From the build log, it looks like we need these host programs for the cross build: icupkg, gencnval, makeconv, genrb, gensprep, gencfu, and pkgdata. These all link in the libraries libicutu, libicui18n, libicuuc, and libicudata, so we need host versions of those as well. (I tried linking gencnval without those libraries to see if they were actually necessary, but they are all required).

It certainly is *possible* to hack up their Makefiles & configure scripts to support a host toolchain, but I believe it would be a fairly significant change. We'd basically have to add all that infrastructure ourselves (HOST_* variables, new Makefile rules, etc). When we import 3rd-party build systems like this, my understanding is that we want to make as few changes as possible so that upgrading is easier. At this point I think we should either look at using moz.build for ICU, or follow their cross-compilation instructions as closely as possible and build 2 versions: http://icu-project.org/repos/icu/icu/trunk/readme.html#HowToCrossCompileICU
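For reference, the two-pass build from ICU's own cross-compilation instructions would look roughly like this (a sketch; the directory layout and names are illustrative, but --with-cross-build is ICU's documented mechanism for pointing the cross pass at a finished host build):

```shell
# Pass 1: native build, which produces the host tools
# (pkgdata, genrb, makeconv, gencnval, ...).
mkdir icu-host
(cd icu-host && ../icu/source/configure && make)

# Pass 2: cross build, pointed at the completed host build tree.
mkdir icu-cross
(cd icu-cross && ../icu/source/configure \
    --host=x86_64-apple-darwin \
    --with-cross-build="$PWD/../icu-host" && make)
```

Wiring this into our build would still mean running configure twice with an ordering dependency between the passes, which is the awkward part noted above.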
See bug 912371 for cross compiling ICU.
Depends on: 912371
Assignee: joey → nobody
Here's a cross build of firefox 25.0, with patches from bugs 931043, 931053, 932127, 933231, 933071, and 921040 applied. Some features are currently disabled, as shown by the relevant bits of the mozconfig:

ac_add_options --enable-debug
ac_add_options --disable-optimize
ac_add_options --without-intl-api
ac_add_options --disable-crashreporter
ac_add_options --disable-webrtc

Tar file: http://people.mozilla.org/~mshal/mozilla-osx-cross-compile.tar.gz

Usage:
 1) tar -xzvf mozilla-osx-cross-compile.tar.gz
   (unpacks into dist/...)
 2) Navigate to NightlyDebug.app and double-click it

I've tested only a bit so far by making sure it can browse a couple pages, but nothing extensive.
Anthony, 

Please take the build from comment 98 and test it like a release candidate (if that makes sense) to see how stability and performance compare with the released FF 25.0. We'd also like to verify the build runs on all versions of OSX that we support.

The optimized build (comment 98) has the same limitations as the debug build (comment 97).

This work is in support of the Q4 goal "Deliver the features and functionality needed by the other product groups to deliver awesome products" :)  In particular, the results of this testing are needed to place a hardware order in Q4. Ideally we'd like to know the answer within one week. Please let me know if that is not possible.

Thanks!
Flags: needinfo?(anthony.s.hughes)
Keywords: qawanted
Hal, performance testing is normally accomplished by running the build through a compare-talos run, and our QA team probably doesn't have much experience with that.
If you want to know that the builds are stable and/or performant then I suspect you'll need to run a compare-talos run as Benjamin suggests. 

QA can run our Mozmill automation and some manual spotchecking but all this will give you is fairly superficial evidence that the builds aren't obviously broken. Let me know if this is still valuable.
Flags: needinfo?(anthony.s.hughes)
So many facets to QA here -- thanks for the lesson.

Yes, Mozmill & spotchecking will be very helpful. We can't use compare-talos until we can build on try, and that is still a ways off. If Mozmill execution times can be compared, that is likely a "good enough" performance metric for now.

We're looking for a confirmation that nothing is horribly broken on the various OS versions. Whatever insight your tooling or spotchecking can provide into that question will be very valuable in making the purchase decision.
(In reply to Hal Wine [:hwine] (use needinfo) from comment #102)
> We can't use compare-talos until we can build on try.

Building is not the biggest issue (actually, you can build on try, if you commit all the required files in the tree you push to try, although you won't get a dmg out). Try is not going to take a mac build out of a linux build slave and run it on a mac test slave.
You can run Talos locally and compare results that way. mach talos-test. Not official, but it should hopefully identify any obvious issues.
No longer depends on: 921494
Depends on: 935237
No longer depends on: 935237
Depends on: 935237
Mihaela, can you please take care of the testing here? I'll send you an email with more detailed instructions.
Flags: needinfo?(mihaela.velimiroviciu)
(In reply to Michael Shal [:mshal] from comment #98)
> Here's the opt version:
> http://people.mozilla.org/~mshal/mozilla-osx-cross-compile-opt.tar.gz

dist/Nightly.app can be copied into a mounted hfsplus filesystem to create a usable dmg file.


Files manually copied from a nightly download to add icons and fluff
====================================================================
-rwxr-xr-x        82 2013-11-05 07:07 ./._.background
drwxr-xr-x         0 2013-11-05 07:07 ./.background/
-rw-r--r--        82 2013-11-05 06:03 ./.background/._background.png
-rw-r--r--    129900 2013-11-05 06:03 ./.background/background.png
-rw-r--r--        82 2013-11-05 06:03 ./._.DS_Store
-rw-r--r--     12292 2013-11-05 06:03 ./.DS_Store
-rw-r--r--        82 2013-11-05 06:03 ./._.VolumeIcon.icns
-rw-r--r--    891873 2013-11-05 06:03 ./.VolumeIcon.icns


crashreporter can probably be ignored, but all of the following exist in the nightly dmg download yet are missing from the cross-compile dist/ directory.

./.DS_Store
./.VolumeIcon.icns
./.background
./.background/background.png
./Nightly.app/Contents/MacOS/browser/crashreporter-override.ini
./Nightly.app/Contents/MacOS/browser/omni.ja
./Nightly.app/Contents/MacOS/crashreporter.app
./Nightly.app/Contents/MacOS/crashreporter.app/Contents
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Info.plist
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/MacOS
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/MacOS/crashreporter
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/MacOS/crashreporter.ini
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/PkgInfo
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/English.lproj
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/English.lproj/InfoPlist.strings
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/English.lproj/MainMenu.nib
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/English.lproj/MainMenu.nib/classes.nib
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/English.lproj/MainMenu.nib/info.nib
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/English.lproj/MainMenu.nib/keyedobjects.nib
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/English.lproj/MainMenuRTL.nib
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/English.lproj/MainMenuRTL.nib/classes.nib
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/English.lproj/MainMenuRTL.nib/info.nib
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/English.lproj/MainMenuRTL.nib/keyedobjects.nib
./Nightly.app/Contents/MacOS/crashreporter.app/Contents/Resources/crashreporter.icns
./Nightly.app/Contents/MacOS/libfreebl3.chk
./Nightly.app/Contents/MacOS/libnssdbm3.chk
./Nightly.app/Contents/MacOS/libsoftokn3.chk
./Nightly.app/Contents/MacOS/omni.ja
./Nightly.app/Contents/MacOS/removed-files
./Nightly.app/Contents/MacOS/webapprt/omni.ja
./Nightly.app/precomplete
> crashreporter can probably be ignored, but all of the following exist in the
> nightly dmg download yet are missing from the cross-compile dist/
> directory.
> 
> ./.DS_Store

that's some sort of mac indexing thingy, I'm not sure if there's a way to generate it on linux, but I suspect it doesn't matter.  The rest of this looks like stuff that is just an effect of tarring up dist/ instead of running make package.
>> ./.DS_Store
> 
> that's some sort of mac indexing thingy

The .DS_Store file tells the OS how to display a directory's contents in the Finder UI.  So it determines where icons are located in the "Icon view" of a directory.  This can be important if the directory in question is at the top level of a mounted DMG image.
I've been doing some work with Mike Perry [1] and Georg Koppen [2] & [3] from the Tor Project who have been using my old toolchain4-derived Darwin targeting cross compilers to build OS X TBB Firefox ESR17.

In order to build ESR24, they needed to upgrade their compilers to Clang and as luck would have it I was working with Yann Diorcet on a crosstool-ng fork [4] whose main purpose being to merge those toolchain4 compilers and add Clang support.

This work is starting to come to shape, and large projects, including Firefox 24 ESR and Python 2.7.5 can now be built with it.

Please see the attached archive with a "build.sh" script that I hacked up which should:
 1. Work on a fresh Ubuntu 12.04.3-desktop-amd64 install (desktop-i386 has an issue that I will investigate further; I suspect it runs out of memory when linking XUL).
 2. apt-get install all dependencies.
 3. Download the Flosoft Mac OS X 10.6 SDK from launchpad [5]
 4. Build Darwin targeting crosstool-ng (full ld64, LTO, Clang 3.3 and GCC 4.2).
 5. Download and patch Firefox ESR24.
 6. Build it for OS X (i686).

There are three patches included:
 1. use-ACTRYCOMPILE-to-check-for-how-to-copy-vaargs.patch by Nathan Froyd, backported to ESR24.
 2. disable-MOZ_ENABLE_PROFILER_SPS.patch as suggested by Georg Koppen.
 3. export-AR-and-RANLIB-to-fix-xdarwin-libffi-build.patch by myself so that libtool doesn't use the build machine's ar and ranlib.

And also two hacks perpetrated:
 1. *Important:* Creates a /System/Library symlink in your Ubuntu root because some of the ESR24 build system hard codes finding frameworks in /System/Library/PrivateFrameworks.
 2. A symlink is also made from "x86_64-apple-darwin10-otool" to "otool" as the build system tries to run "otool".
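Spelled out, those two hacks amount to something like the following (a sketch; $FRAMEWORKS_PARENT and $TOOLCHAIN are placeholders -- the real paths come from the build.sh script):

```shell
# 1) Make /System/Library/PrivateFrameworks resolvable on the Linux host.
#    $FRAMEWORKS_PARENT stands in for whatever directory actually contains
#    a PrivateFrameworks/ subdirectory.
sudo mkdir -p /System
sudo ln -s "$FRAMEWORKS_PARENT" /System/Library

# 2) Let the build system find a bare "otool" by aliasing the prefixed one.
ln -s x86_64-apple-darwin10-otool "$TOOLCHAIN/bin/otool"
```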

Yann and I intend this toolchain to be as similar as possible to the ones that Apple provides. Our crosstool-ng fork can be built to target iOS. Yann added support for building Windows-targeting MinGW-w64 toolchains and is also fixing Clang issues on Windows. I added the possibility of using Windows and OS X as the host and build machines (Canadian builds are a long-term goal).

The attached "build.sh" script *may* run on OS X, given the correct environment (homebrew).

You can see a screenshot of FirefoxDebug running [6].

If you want to test Firefox, you can get it at [7]. A Debug build can be gotten from [8].

I also uploaded prebuilt cross compilers - host/build=Ubuntu 12.04.3 x86-64, target=Darwin i686/x86-64 [9]. These *should* be fully relocatable. If you want to build Firefox using my "build.sh" script without first building the toolchain then extract this archive to your $HOME folder first.

I would like you to consider this option for your Mac cross build system and would be more than willing to provide any assistance needed to make this happen.

[1] https://blog.torproject.org/blog/deterministic-builds-part-two-technical-details
[2] https://trac.torproject.org/projects/tor/ticket/9711
[3] https://trac.torproject.org/projects/tor/ticket/9829
[4] https://github.com/diorcety/crosstool-ng/tree/cctools-llvm
[5] https://launchpad.net/~flosoft/+archive/cross-apple/+files/apple-uni-sdk-10.6_20110407.orig.tar.gz
[6] https://www.dropbox.com/s/98ly6o190rh96up/FirefoxDebug-ScreenShot.png
[7] https://www.dropbox.com/s/v6tsfz4u965kk7h/Firefox-darwin-i686.app-20131107-built-on-linux-gnu-x86_64.tar.bz2
[8] https://www.dropbox.com/s/dmz7e1cs2htmyh2/FirefoxDebug-darwin-i686.app-20131107-built-on-linux-gnu-x86_64.tar.bz2
[9] https://www.dropbox.com/s/2f52qrwucs2dlzm/cross-target-x86_64-apple-darwin10-host-x86_64-linux.tar.xz
(In reply to Trevor Saunders (:tbsaunde) from comment #107)
> > crashreporter can probably be ignored, but all of the following exist in the
> > nightly dmg download yet are missing from the cross-compile dist/
> > directory.
> > 
> > ./.DS_Store
> 
> that's some sort of mac indexing thingy, I'm not sure if there's a way to
> generate it on linux, but I suspect it doesn't matter.  The rest of this
> looks like stuff that is just an effect of tarring up dist/ instead of
> running make package.

Yes, most of the items on the first list were mac-specific or, like the background/icons, not easily added from the command line.  This is easy to work around: just pre-populate a tarball with those elements and unpack it into the dmg image before copying files over.
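A sketch of that workflow, assuming an uncompressed dmg can be treated as a raw HFS+ image (the size, mount point, and tarball name here are illustrative):

```shell
# Create and format a raw HFS+ image; mkfs.hfsplus comes from hfsprogs.
dd if=/dev/zero of=firefox.dmg bs=1M count=200
mkfs.hfsplus -v "Nightly" firefox.dmg
mkdir -p mnt
sudo mount -o loop firefox.dmg mnt

# Pre-populated cosmetic bits (.DS_Store, .background/, .VolumeIcon.icns)
# captured earlier from a real nightly dmg:
sudo tar xf dmg-fluff.tar -C mnt

sudo cp -R dist/Nightly.app mnt/
sudo umount mnt
# firefox.dmg is now a raw, uncompressed HFS+ image that OS X can mount.
```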
No longer depends on: 935237
(In reply to Ray Donnelly from comment #109)
> Created attachment 828606 [details]
> Script config files and patches for building Mac OS X cross compiler and
> Firefox ESR24

Moving this to separate bug 936115, as there is already a lot going on here, and this feels outside the scope of work here. 

(Ray, thanks for your patch and welcome to bugzilla - would you mind reattaching your patch to the new bug? I've removed (obsoleted) it from here.)
Comment on attachment 828606 [details]
Script config files and patches for building Mac OS X cross compiler and Firefox ESR24

This now belongs with bug 936115, please reattach there.
Attachment #828606 - Attachment is obsolete: true
(In reply to Gregory Szorc [:gps] from comment #104)
> You can run Talos locally and compare results that way. mach talos-test. Not
> official, but it should hopefully identify any obvious issues.

We'll come back to this approach after bug 935997 is resolved.

For now, I think the mozmill tests will be "good enough" for this stage.
Did we have a story for what we want to do for packaging?

I did some work on modifying the crashreporter's Mac symbol dumper to work on Linux.  I think it's possible to make it go (it compiles, just needs some reimplementation of Darwin-specific functions), but we also need dsymutil to work in a cross build as part of the crashreporter.  And dsymutil hasn't been released as open source.  IIRC, dsymutil can't be run conveniently with maloader (I'm not sure of its status running with darling, but I don't expect much better).

So we have two options, as I see it:

- Investigate running dsymutil cross via maloader/darling.  Not sure how much work is involved here and might turn out to be a dead end.
- Lie to the crashreporter's build system and force dump_syms and friends to build for the target, not the host.  Then dump_syms et al. would be available for the packaging step to be run on a Mac.

Any others?  What do people think of those options?
Blocks: 928193
(In reply to Anthony Hughes, Mozilla QA (:ashughes) from comment #105)
> Mihaela, can you please take care of the testing here? I'll send you an
> email with more detailed instructions.

Testing is in progress and the (partial) results are available in this wiki page: https://wiki.mozilla.org/QA/Desktop_Firefox/CC_OSX
Flags: needinfo?(mihaela.velimiroviciu)
(In reply to Mihaela Velimiroviciu [QA] (:mihaelav) from comment #115)
> (In reply to Anthony Hughes, Mozilla QA (:ashughes) from comment #105)
> > Mihaela, can you please take care of the testing here? I'll send you an
> > email with more detailed instructions.
> 
> Testing is in progress and the (partial) results are available in this wiki
> page: https://wiki.mozilla.org/QA/Desktop_Firefox/CC_OSX

Thanks -- looking forward to 10.6/10.7 results.

Question: the code base for the builds-under-test are actually the same as FF 25.0 (not latest m-c). Where can we find the results of the mozmill runs for FF 25.0?

Question: is there any rough indicator of performance based on wall-clock time to run the mozmill tests? (Especially as compared with how long 25.0 took.)
Flags: needinfo?(mihaela.velimiroviciu)
Depends on: 937901
(In reply to Hal Wine [:hwine] (use needinfo) from comment #116)
> (In reply to Mihaela Velimiroviciu [QA] (:mihaelav) from comment #115)
> > (In reply to Anthony Hughes, Mozilla QA (:ashughes) from comment #105)
> > > Mihaela, can you please take care of the testing here? I'll send you an
> > > email with more detailed instructions.
> > 
> > Testing is in progress and the (partial) results are available in this wiki
> > page: https://wiki.mozilla.org/QA/Desktop_Firefox/CC_OSX
> 
> Thanks -- looking forward to 10.6/10.7 results.
The 10.6 results are available in the wiki page. Mozmill tests are now running on 10.7 (it takes a longer time to give the results because I'm not the only person who uses those machines).

> Question: the code base for the builds-under-test are actually the same as
> FF 25.0 (not latest m-c). Where can we find the results of the mozmill runs
> for FF 25.0?
The results of the mozmill runs for FF25 release can be found here (http://mozmill-release.blargon7.com/#/functional/reports?branch=25.0&platform=Mac&from=2013-11-11&to=2013-11-14 - functional) and here (http://mozmill-release.blargon7.com/#/endurance/reports?branch=25.0&platform=Mac&from=2013-11-11&to=2013-11-14 - endurance)

> Question: is there any rough indicator of performance based on wall-clock
> time to run the mozmill tests? (Especially as compared with how long 25.0
> took.)
I'm not aware of such indicators. The time may depend on the machine specs, internet connection, etc. I'm cc-ing Henrik as he may have more details.
Flags: needinfo?(mihaela.velimiroviciu)
We don't have information about the duration of individual tests in our usual testruns like functional. That's something which comes with Mozmill 2.0, which we can hopefully upgrade to soon. The endurance tests create checkpoints for individual test steps, so those could be used for comparisons. See the dashboard link for an example. By clicking on the report id link, you will see the raw json report. Does that help?
FYI - Results of Mozmill runs on 10.7.5 are now available in the wiki: https://wiki.mozilla.org/QA/Desktop_Firefox/CC_OSX
Thanks :whimboo & :mihaelav -- that's very useful information.

Looks like we have memory issues with SWF, will find someone to help with that. Bug 935997 is now a major blocker.
Depends on: 935997
Keywords: qawanted
The functional tests show a consistent failure on all OS versions for /testGeolocation/testShareLocation.js with result "Geolocation position is: Position acquisition timed out"

Can a dev suggest a reason that test would not work given the current build limitations?
Flags: needinfo?
(In reply to Hal Wine [:hwine] (use needinfo) from comment #121)
> The functional tests show a consistent failure on all OS versions for
> /testGeolocation/testShareLocation.js with result "Geolocation position is:
> Position acquisition timed out"

This test is also failing with other builds of Firefox. So it's not specific to the cross-compiled version. We track this issue already in bug 935451.
Flags: needinfo?
(In reply to Henrik Skupin (:whimboo) from comment #122)
> (In reply to Hal Wine [:hwine] (use needinfo) from comment #121)
> > The functional tests show a consistent failure on all OS versions for
> > /testGeolocation/testShareLocation.js with result "Geolocation position is:
> > Position acquisition timed out"
> 
> This test is also failing with other builds of Firefox. So it's not specific
> to the cross-compiled version. We track this issue already in bug 935451.

Whew! Thanks - I was having a hard time imagining a functional connection.
(In reply to Hal Wine [:hwine] (use needinfo) from comment #120)
> Looks like we have memory issues with SWF, will find someone to help with
> that. Bug 935997 is now a major blocker.

Incorrect analysis -- a different set of mozmill tests were executed between the two builds. The darwin-on-linux build was not subjected to the SWF tests that the darwin-on-mac (release) build was.

Is there a reason QA did not or could not run those tests? If not, please run them, as they appear to be a significant stress test. Thanks
Flags: needinfo?(anthony.s.hughes)
Hal, Anthony is out for another week. Given that Mihaela run those tests, she would be best to give an answer to your question.
Flags: needinfo?(anthony.s.hughes) → needinfo?(mihaela.velimiroviciu)
It seems that's because different tests are run on different branches. The tests run on the build from comment #98 were from the default branch, while the tests on 25.0.1 were from the mozilla-release branch. I changed the branch for the comment #98 build to mozilla-release and got these results:
http://mozmill-crowd.blargon7.com/#/endurance/report/b99421c0f132c68dec1548288a1df707

I'm not sure why this time I got even *more* tests run than on 25.0.1. Henrik, do you have any suggestions?
Flags: needinfo?(mihaela.velimiroviciu) → needinfo?(hskupin)
Thanks all -- those results show the same behavior between the two builds w.r.t. memory usage. That's what we needed for now. Pulling the qawanted flag.
Keywords: qawanted
Status update:
 - build/link issues resolved, pending landings of dependent bugs
 - packaging issues being handled in bug 935237
(In reply to Mihaela Velimiroviciu [QA] (:mihaelav) from comment #127)
> I'm not sure why this time I got even *more* tests run than on 25.0.1.
> Henrik, do you have any suggestions?

Well, it depends on which tests were enabled at that time for the release branch in our test repository. You can retrieve all that information from the manifest.ini file.
Flags: needinfo?(hskupin)
I uploaded a tarball of gnu binutils built on centos6 for x86_64-apple-darwin, and ld64 built the same way, to people.mozilla.org/~tsaunders/apple-cross-binutils.tar.bz2, which should be usable as a tooltool package when we're ready for that.

some notes on building ld64 on centos6
- you need to build your own libuuid because the one on centos6 is too old and doesn't define the same symbols as ld64 expects.  I built libuuid as a static lib for the above tarball so the ld binary doesn't depend on a libuuid.so (checking ldd on ld I see pthreads, libc, libm, libstdc++, and libgcc, none of which immediately seem like problematic deps)

- you need the patches I mailed froydnj to include some headers and not use unordered_map::reserve, to make ld64 itself build on centos6 (if he hasn't merged them yet, of course)
(In reply to Trevor Saunders (:tbsaunde) from comment #131)
> I uploaded a tarball of gnu binutils built on centos6 for
> x86_64-apple-darwin, and ld64 built the same way, to
> people.mozilla.org/~tsaunders/apple-cross-binutils.tar.bz2, which should be
> usable as a tooltool package when we're ready for that.

Awesome! Thanks for putting that together. I tried it out (on Ubuntu) along with bhearsum's clang (http://people.mozilla.org/~bhearsum/clang.tar.bz2) and it seems to work.

I'm currently debugging some strange issue with missing symbols in linking XUL (looks like from MOZ_VP8_ENCODER?), but I don't think that is because of the toolchain...
(In reply to Trevor Saunders (:tbsaunde) from comment #131)
> I uploaded a tarball of gnu binutils built on centos6 for
> x86_64-apple-darwin

Are we using binutils with the archive alignment patch and the hacky patch posted in this bug?  Or should we be using crosstool-ng (https://github.com/diorcety/crosstool-ng/tree/cctools-llvm) so we get a real otool and based-on-Apple tools that should always DTRT?
(In reply to Nathan Froyd (:froydnj) from comment #133)
> (In reply to Trevor Saunders (:tbsaunde) from comment #131)
> > I uploaded a tarball of gnu binutils built on centos6 for
> > x86_64-apple-darwin
> 
> Are we using binutils with the archive alignment patch and the hacky patch
> posted in this bug?  Or should we be using crosstool-ng
> (https://github.com/diorcety/crosstool-ng/tree/cctools-llvm) so we get a
> real otool and based-on-Apple tools that should always DTRT?

I'm not sure we've decided anything yet.
I'm crossing my fingers, but of course I am biased.

Over at Tor they've made a test ESR24 build:
https://trac.torproject.org/projects/tor/ticket/9829

.. using:
https://www.dropbox.com/s/2f52qrwucs2dlzm/cross-target-x86_64-apple-darwin10-host-x86_64-linux.tar.xz
9ca9c5e9c3a0d990e924f775e3de0cf4897d930e cross-target-x86_64-apple-darwin10-host-x86_64-linux.tar.xz

I'll not list the benefits unless you specifically ask me to, but there's a lot involved in making good Darwin cross compilers that behave as closely as possible to what Apple's binaries do with the same feature-set.
(In reply to Ray Donnelly from comment #135)
> I'm crossing my fingers, but of course I am biased.
> 
> Over at Tor they've made an test ESR24 build:
> https://trac.torproject.org/projects/tor/ticket/9829
> 
> .. using:
> https://www.dropbox.com/s/2f52qrwucs2dlzm/cross-target-x86_64-apple-darwin10-
> host-x86_64-linux.tar.xz
> 9ca9c5e9c3a0d990e924f775e3de0cf4897d930e
> cross-target-x86_64-apple-darwin10-host-x86_64-linux.tar.xz

I gave this a whirl - I'm able to build using this toolchain and the SDK without Nathan's otool replacement patch, so that's good news :).

Regarding this particular tarball, I have a few questions:

1) Why does linking seem so slow? In particular getting through configure was noticeably delayed. Using clang-3.3 and ld64 on a hello-world C file I get:

$ time ~/cross/clang/bin/clang ok.c -B /home/mshal/cross/apple-cross-binutils/bin -target x86_64-apple-darwin -mlinker-version=136 -Wl,-syslibroot,/home/mshal/cross/apple-uni-sdk-10.6.orig/MacOSX10.6.sdk

real	0m0.022s
user	0m0.016s
sys	0m0.007s

In the x-tools chain I get:

$ time ./x-tools/x86_64-apple-darwin10/bin/x86_64-apple-darwin10-clang ok.c -target x86_64-apple-darwin -mlinker-version=136 -Wl,-syslibroot,/home/mshal/cross/apple-uni-sdk-10.6.orig/MacOSX10.6.sdk

real	0m1.029s
user	0m0.019s
sys	0m0.011s

If I just use '-c' to skip linking, x-tools ends up at 0.019s, which is comparable. It feels as if there's a "sleep 1" somewhere in the linker :)

2) We already have clang-3.3 available in our automation for Linux, and we don't have a need for gcc. Might it be possible to separate those out from the rest of the toolchain? Then we could have a package for clang, and a package for the rest of the toolchain (ar, ld, otool, etc) similar to what tbsaunde put together in #c131. I suppose we can just pull out the pieces we need and rebundle it, or build those pieces ourselves.
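To illustrate the split being asked about here, a hypothetical rebundling script: it assumes the x-tools tarball has been unpacked in the current directory with the layout shown above, and the tool list is a guess at what the build actually needs.

```shell
#!/bin/sh
# Hypothetical: package only the cctools half of the x-tools toolchain so
# the clang-3.3 already on the builders can be reused. The directory layout
# and the tool list below are assumptions - adjust to the real tarball.
XTOOLS=x-tools/x86_64-apple-darwin10/bin
PREFIX=x86_64-apple-darwin10
OUT=apple-cctools/bin

mkdir -p "$OUT"
for tool in ar ranlib strip ld otool lipo nm; do
    src="$XTOOLS/$PREFIX-$tool"
    if [ -f "$src" ]; then
        cp "$src" "$OUT/"
    else
        # Harmless if x-tools isn't unpacked here; just note the gap.
        echo "missing: $src"
    fi
done
tar -cJf apple-cctools.tar.xz apple-cctools
echo "bundled: apple-cctools.tar.xz"
```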

> 
> I'll not list the benefits unless you specifically asked to, but there's a
> lot involved in making good Darwin cross compilers that behave as close as
> possible to what Apple's binaries do with the same feature-set.

I think it would be great if we're using the same/similar toolchain as the Tor project so we're not duplicating work. And I agree getting as close as possible to what Apple's binaries do is ideal so that we're not fighting annoying issues due to differences in the cross toolchain vs. a local toolchain. I don't believe we have a particular reason to choose say the patched binutils tools - it's just what we've managed to get working so far.
(In reply to Michael Shal [:mshal] from comment #136)
> (In reply to Ray Donnelly from comment #135)
> > I'm crossing my fingers, but of course I am biased.
> > 
> > Over at Tor they've made an test ESR24 build:
> > https://trac.torproject.org/projects/tor/ticket/9829
> > 
> > .. using:
> > https://www.dropbox.com/s/2f52qrwucs2dlzm/cross-target-x86_64-apple-darwin10-
> > host-x86_64-linux.tar.xz
> > 9ca9c5e9c3a0d990e924f775e3de0cf4897d930e
> > cross-target-x86_64-apple-darwin10-host-x86_64-linux.tar.xz
> 
> I gave this a whirl - I'm able to build using this toolchain and the SDK
> without Nathan's otool replacement patch, so that's good news :).

Great!

> 
> Regarding this particular tarball, I have a few questions:
> 
> 1) Why does linking seem so slow? In particular getting through configure
> was noticeably delayed. Using clang-3.3 and ld64 on a hello-world C file I
> get:
> 
> $ time ~/cross/clang/bin/clang ok.c -B
> /home/mshal/cross/apple-cross-binutils/bin -target x86_64-apple-darwin
> -mlinker-version=136
> -Wl,-syslibroot,/home/mshal/cross/apple-uni-sdk-10.6.orig/MacOSX10.6.sdk
> 
> real	0m0.022s
> user	0m0.016s
> sys	0m0.007s
> 
> In the x-tools chain I get:
> 
> $ time ./x-tools/x86_64-apple-darwin10/bin/x86_64-apple-darwin10-clang ok.c
> -target x86_64-apple-darwin -mlinker-version=136
> -Wl,-syslibroot,/home/mshal/cross/apple-uni-sdk-10.6.orig/MacOSX10.6.sdk
> 
> real	0m1.029s
> user	0m0.019s
> sys	0m0.011s
> 
> If I just use '-c' to skip linking, x-tools ends up at 0.019s, which is
> comparable. It feels as if there's a "sleep 1" somewhere in the linker :)
> 

Yeah, ld64 is really slow and it's almost entirely down to the libLTO.so library, which is huge - it includes basically all of llvm. It's definitely something I will try to improve, but ld64 is also quite slow on a Mac. Fortunately it is called infrequently during the build (but, as you say, a lot during configure). It would be interesting to get full-build timings from each of these toolchains.

> 2) We already have clang-3.3 available in our automation for Linux, and we
> don't have a need for gcc. Might it be possible to separate those out from
> the rest of the toolchain? Then we could have a package for clang, and a
> package for the rest of the toolchain (ar, ld, otool, etc) similar to what
> tbsaunde put together in #c131. 

It'd be good to be able to re-use the same clang in this way - and also clang for Windows sometime in the future ;-) Sure, it's been designed with this sort of thing in mind, however there are a few Darwin-specific hacks in clang and llvm, I'm sorry to say. I guess the engineers worked under the assumption that no one would cross-compile targeting Darwin. Again, this is something that I'd like to submit patches to fix; I will try to set some time aside to analyse the extent of the problem - it may not be too bad.

> I suppose we can just pull out the pieces we need and rebundle it, or build those pieces ourselves.

I'm more than happy to provide binaries for you (and will provide a split cctools / clang+llvm version if you want me to*), however I do think taking full control of your build process and adding the option to the build system to make cross-compilers (cloning our crosstool-ng fork and once upstreamed, cloning that instead) before building the Mozilla software would be ideal. We're trying to make this as painless as possible and I aim to make it easy to build cross compilers for as many host, target and compiler family (GCC, Clang) combinations as possible.

> > 
> > I'll not list the benefits unless you specifically asked to, but there's a
> > lot involved in making good Darwin cross compilers that behave as close as
> > possible to what Apple's binaries do with the same feature-set.
> 
> I think it would be great if we're using the same/similar toolchain as the
> Tor project so we're not duplicating work. And I agree getting as close as
> possible to what Apple's binaries do is ideal so that we're not fighting
> annoying issues due to differences in the cross toolchain vs. a local
> toolchain. I don't believe we have a particular reason to choose say the
> patched binutils tools - it's just what we've managed to get working so far.

Yes and also you don't want to be in a situation of having to say to interested Mac developers / contributors things like "We can't accept your patch - or if we do accept it we will disable it on Mac builds because our Mac cross compilers aren't enabled with the features you are using."

* Since LTO was added to cctools, a split cctools / llvm+clang is somewhat counter-intuitive, as llvm is the project that provides LTO for cctools (specifically for ld64). Maybe a more logical split is cctools and ld64+llvm+clang. In my patches for cctools+ld64, I actually mixed them into one build project to cut down on the amount of work required .. I had to fix and add autotooling to these projects, and doing it once made more sense than doing it twice.
(In reply to Ray Donnelly from comment #137)
> Yeah, ld64 is really slow and it almost entirely down to the libLTO.so
> library which is huge. It includes basically all of llvm. It's definitely
> something I will try to improve but ld64 is also quite slow on a Mac.
> Fortunately it is called infrequently during the build (but as you say, a
> lot during configure). It would be interesting to get full-build timings
> from each of these toolchains.

It turns out there's a usleep(1000000) in ld64 when DEBUG is enabled for some reason. I've sent a pull request to crosstool-ng to remove it since it's a nuisance.

> It'd be good to be able to re-use the same clang in this way - and also
> clang for Windows sometime in the future ;-) Sure it's been designed with
> this sort of thing in mind, however there are a few Darwin specific hacks in
> clang and llvm I'm sorry to say. I guess the engineers worked under the
> assumption that no one would cross-compile targeting Darwin. Again, this is
> something that I'd like to submit patches to fix; I will try to set some
> time aside to analyse the extent of the problem, it may not be too bad.

Can you clarify what Darwin-specific hacks you're talking about? We have been using the stock clang-3.3 compiler, and it seems to work just fine as a cross-compiler out of the box.

> I'm more than happy to provide binaries for you (and will provide a split
> cctools / clang+llvm version if you want me to*), however I do think taking
> full control of your build process and adding the option to the build system
> to make cross-compilers (cloning our crosstool-ng fork and once upstreamed,
> cloning that instead) before building the Mozilla software would be ideal.
> We're trying to make this as painless as possible and I aim to make it easy
> to build cross compilers for as many host, target and compiler family (GCC,
> Clang) combinations as possible.

Right now I'm thinking we should use the following packages on our build machines:

1) Using the stock clang-3.3 (we already have it on our build machines, so this is kinda a no-op)
2) Building cctools & ld64 from crosstool-ng (it seems to work fine with a script to set the right variables and pull in crosstool-ng/scripts/functions and crosstool-ng/scripts/build/binutils/cctools.sh - that way I don't have to build clang/gcc/etc)
3) Get the SDK from XCode

Of course nothing is set in stone, so if that seems unreasonable please let me know!
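For step 2, a sketch of the wrapper script described above - sourcing crosstool-ng's helper functions and its cctools build logic directly. CT_NG_DIR and the do_binutils_* entry points are assumptions; check the actual function names in your crosstool-ng revision (scripts/build/binutils/cctools.sh) before relying on this.

```shell
#!/bin/sh
# Sketch: drive only crosstool-ng's cctools build step instead of building
# the full toolchain (so clang/gcc don't get rebuilt). The function names
# below are assumptions - verify against your crosstool-ng checkout.
CT_NG_DIR="${CT_NG_DIR:-$HOME/src/crosstool-ng}"
CCTOOLS_SH="$CT_NG_DIR/scripts/build/binutils/cctools.sh"

if [ -f "$CCTOOLS_SH" ]; then
    # Pull in the shared helpers, then the cctools build recipe.
    . "$CT_NG_DIR/scripts/functions"
    . "$CCTOOLS_SH"
    # Set the handful of CT_* variables the recipe expects, then run it.
    CT_TARGET=x86_64-apple-darwin10
    CT_PREFIX_DIR="$HOME/cross/install-cctools"
    do_binutils_get && do_binutils_extract && do_binutils_build
else
    echo "no crosstool-ng checkout at $CT_NG_DIR; set CT_NG_DIR and retry"
fi
```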
Hi Michael,

There are various things that will break if you don't use our patches for clang and llvm. I will give the exact details later tonight when I can do the full analysis, but my initial take on it is "don't do that!" .. yes, clang / llvm are cross compile happy at heart, but in reality for Darwin they are not quite there yet and we didn't make the patches because we were bored :-)
FWIW, if you were to adopt llvm and clang built with our patches we would ensure that those ones work on all the platforms required. There *may* be some problems with using the same binaries to target multiple systems but I'll make it a priority to make this work correctly.
We currently still ship with GCC on Linux, and we're unlikely to ever ship with anything but MSVC on Windows, so I don't think that's a real concern for us.

Are your patches upstreamable? Seems like that'd be the best situation here.
> Are your patches upstreamable?

Some are, some not (there are some hacky relative paths added to search paths to find bits of the SDK).

> Seems like that'd be the best situation here.

I agree. I've tried to upstream build system patches to LLVM/Clang three times in the past, and those patches got lost in the sea of massive activity. Here are the most recent two:

http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130909/187447.html
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20131118/196387.html
I've gone through our patches for version 3.3 of compiler-rt/llvm/Clang and here are my findings (most of the credit for these patches goes to Yann, so I will ask him to check that I've understood them all properly tomorrow - this process has been helpful to me as we will need to describe them when we try to upstream them to crosstool-ng - or eventually to llvm/Clang):

*** Compiler-rt Patches ***
URL: https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm-compiler-rt/3.3/110-cross-runtime.patch
Description: Compiler-rt isn't capable of being cross-compiled yet. You could compile it on OSX, but ideally it'd be done as part of the toolchain build.
Upstream-ability: You can only build Compiler-rt if you use the copy-sdk-to-sysroot option as Compiler-rt configury has no way of specifying the sysroot. I wouldn't recommend up-streaming until this problem is fixed and Compiler-rt has been tested and/or fixed in other host/target configurations.

*** LLVM Patches ***
URL: https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm/3.3/100-program-prefix-ld.patch
Description: Without this patch, during configure, the output of "ld -v" is parsed into HOST_LINK_VERSION. This patch uses the correct ld for this test. You should be able to work around this by passing "-mlinker-version=NNN" to all invocations of Clang (and maybe ld64 when using LTO). It's an example of where llvm/Clang isn't as cross-friendly as it could be. I must admit that currently ld64 from our cctools reports version 809 (i.e. the current cctools version) even though it's built from version 127.6. Fixing this is on my TODO list (we get away with it for now!)
Upstream-ability: I think that the actual linker program installed at runtime should be queried instead and all trace of HOST_LINK_VERSION removed. Maybe they didn't want to call ld64 too often due to the 1 second delay that Michael fixed ;-)

URL: https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm/3.3/110-replace_used_const_PC.patch
Description: Compile fix due to PPC cctools doing "#define PC", yeah, really!
Upstream-ability: Fairly good, but I think PPC has been dropped in more recent cctools, though we see no reason for us to not continue some support for it if requested.

URL: https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm/3.3/120-Makefile-rules-remove-ld-option--modules.patch
Description: Compile fix for building Darwin cross-compilers.
Upstream-ability: Good.

URL: https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm/3.3/130-fix_triple_transformation.patch
Contents:
-  // On darwin, we want to update the version to match that of the
-  // target.
-  std::string::size_type DarwinDashIdx = Triple.find("-darwin");
-  if (DarwinDashIdx != std::string::npos) {
-    Triple.resize(DarwinDashIdx + strlen("-darwin"));
-    Triple += getOSVersion();
-  }
Description: It seems the llvm/Clang developers assumed here that no one would want to make cross-compilers targeting Darwin and put in a feature so that the compilation target matches the host OS as returned by getOSVersion().
Upstream-ability: I'm not sure, I'm guessing this was done so that (e.g.) users with a shiny-new MBP running 10.9 get binaries built to exploit that shine by default. Personally I do not like this sort of system-based information leaking into built binaries rendering them less backwards-compatible and heads more scratched. However, without this fixed, you *will* end up with the target being considered as something like x86_64-apple-darwin3.10 (on a Linux 3.10 system) and it won't work.

*** Clang Patches ***
URL: https://github.com/diorcety/crosstool-ng/blob/master/patches/clang/3.3/110-cross-runtime.patch
Description: Again, for Compiler-rt and same comment applies as above.
Upstream-ability: Ditto.

URL: https://github.com/diorcety/crosstool-ng/blob/master/patches/clang/head/120-fix-sysroot-paths.patch
Description: Add -isysroot option and also a fallback so that the copy-sdk option can work (SDK is looked for relative to the compiler), also allows SDKs 10.6 and 10.7 (with GCC sysroots - stuff in 4.2.1 folders) to be used.
Upstream-ability: I'm not sure that this patch would be accepted as it is a fix for people wanting to use old SDKs and old build scripts, but it allows people to get a 10.6 SDK from the internet and build software.

URL: https://github.com/diorcety/crosstool-ng/blob/master/patches/clang/head/130-prefix-program-names-by-triple.patch
Description: The prefix-name (or prefix-name of the symlink to it) of the Clang compiler itself is calculated at runtime and then used when searching for sub-commands (ar, ld, otool, ranlib etc). Such prefixed names are searched before the target-triple prefixed name (as they can differ). This patch wouldn't be needed if you consistently named all of your tools either i386-apple-darwinNN-$tool or x86_64-apple-darwinNN-$tool.
It also works around an issue where, if you built the tools on a 32bit machine, i.e. with the prefix i686-apple-darwinNN (as by default, a 32bit cctools build would get named) it would end up not supporting SSE2, since, oddly, Clang-for-Darwin interprets "i386" as the "32bit intel architecture family" whereas it interprets "i686" as "a specific 32bit CPU - the Pentium Pro". With this change you can name your entire set of compiler tools (or the symlinks to them) consistently and according to your own preference, e.g. apple-$tool and not run into problems. I should re-state that building 64bit tools would result in tools prefixed with x86_64-apple-darwinNN and they would 'just work' - however I think this patch allows for cleaner names of the toolchain binaries and eliminates what was a frustrating SSE2 bug with tools named i686-.
To avoid any confusion I should point out that regardless of the prefix of the compilers (which usually matches the host-machine's architecture, i.e. the bitness for which they were built), those compilers can target both 32 and 64bit Darwin given the right -m32/-m64 flag as they are always built as multilib compilers.
Upstream-ability: Doubtful.

URL: https://github.com/diorcety/crosstool-ng/blob/master/patches/clang/3.3/140-default-gcc-paths.patch
Description: Allow clang to find and use a GCC-built sysroot configured and built to the same --prefix; similar to (and possibly mergeable with) 120-fix-sysroot-paths.patch, but for target systems other than Darwin.
Upstream-ability: As-per 120-fix-sysroot-paths.patch, doubtful.

URL: https://github.com/diorcety/crosstool-ng/blob/master/patches/clang/head/160-mingw-fix.patch
Description: I know you are sticking with GCC for Windows, but I may as well describe this patch!
Instead of hardcoding both ../../../i686-w64-mingw32/include and ../../../x86_64-w64-mingw32/include, we use ../../../triple.getTriple()/include. The paths for GCC libstdc++ are also added.
Upstream-ability: Overall not too bad I think. The libstdc++ stuff may be frowned upon as libc++ exists (though to what extent on Windows I'm not sure).

.. in summary, you might be able to get away with:
1. Applying https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm/3.3/130-fix_triple_transformation.patch, llvm/3.3/120-Makefile-rules-remove-ld-option--modules.patch and llvm/3.3/110-replace_used_const_PC.patch then building llvm and then ld64 and Clang ..
2. .. on x86_64 so all your tools are prefixed x86_64-apple-darwinNN.
3. Then passing -mlinker-version=127 to Clang/Clang++/ld64 ..
4. .. and not using compiler-rt (or building it on OSX instead).

My hope is that the llvm/Clang devs will improve a lot of the issues we've worked around as time progresses. I'll continue to try to get patches looked at by them though.
Thanks for your detailed analysis! For the purposes of this bug, I am looking at this from the angle of only using the toolchain for Linux->OSX builds. Comments below...

(In reply to Ray Donnelly from comment #143)
> *** Compiler-rt Patches ***
> URL:
> https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm-compiler-
> rt/3.3/110-cross-runtime.patch
> Description: Compiler-rt isn't capable of being cross-compiled yet. You
> could compile it on OSX, but ideally it'd be done as part of the toolchain
> build.
> Upstream-ability: You can only build Compiler-rt if you use the
> copy-sdk-to-sysroot option as Compiler-rt configury has no way of specifying
> the sysroot. I wouldn't recommend up-streaming until this problem is fixed
> and Compiler-rt has been tested and/or fixed in other host/target
> configurations.

What do we need Compiler-rt for? When I built clang myself, I did not have it included, and the cross Linux->OSX build still succeeded. I'm not sure if the clang in our build infrastructure has compiler-rt or not. bhearsum, do you know?

> 
> *** LLVM Patches ***
> URL:
> https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm/3.3/100-
> program-prefix-ld.patch
> Description: Without this patch, during configure, the output of "ld -v" is
> parsed into HOST_LINK_VERSION. This patch uses the correct ld for this test.
> You should be able to work around this by passing "-mlinker-version=NNN" to
> all invocations of Clang (and maybe ld64 when using LTO). It's an example of
> where llvm/Clang isn't as cross-friendly as it could be. I must admit that
> currently ld64 from our cctools reports version 809 (i.e. the current
> cctools version) even though it's built from version 127.6. Fixing this is
> on my TODO list (we get away with it for now!)
> Upstream-ability: I think that the actual linker program installed at
> runtime should be queried instead and all trace of HOST_LINK_VERSION
> removed. Maybe they didn't want to call ld64 too often due to the 1 second
> delay that Michael fixed ;-)

I've been using a mozconfig with flags set that passes -mlinker-version=136 to all clang calls. Is that a viable work-around? Or, what is broken if I do that but don't have this patch? And what actual -mlinker-version do we need? (maybe froydnj can answer)

> 
> URL:
> https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm/3.3/110-
> replace_used_const_PC.patch
> Description: Compile fix due to PPC cctools doing "#define PC", yeah, really!
> Upstream-ability: Fairly good, but I think PPC has been dropped in more
> recent cctools, though we see no reason for us to not continue some support
> for it if requested.

I don't think we support OSX PPC anymore: https://support.mozilla.org/en-US/kb/firefox-no-longer-works-mac-os-10-4-or-powerpc

> 
> URL:
> https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm/3.3/120-
> Makefile-rules-remove-ld-option--modules.patch
> Description: Compile fix for building Darwin cross-compilers.
> Upstream-ability: Good.

Can you explain a bit what this does?

> 
> URL:
> https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm/3.3/130-
> fix_triple_transformation.patch
> Contents:
> -  // On darwin, we want to update the version to match that of the
> -  // target.
> -  std::string::size_type DarwinDashIdx = Triple.find("-darwin");
> -  if (DarwinDashIdx != std::string::npos) {
> -    Triple.resize(DarwinDashIdx + strlen("-darwin"));
> -    Triple += getOSVersion();
> -  }
> Description: It seems the llvm/Clang developers assumed here that no one
> would want to make cross-compilers targeting Darwin and put in a feature so
> that the compilation target matches the host OS as returned by
> getOSVersion().
> Upstream-ability: I'm not sure, I'm guessing this was done so that (e.g.)
> users with a shiny-new MBP running 10.9 get binaries built to exploit that
> shine by default. Personally I do not like this sort of system-based
> information leaking into built binaries rendering them less
> backwards-compatible and heads more scratched. However, without this fixed,
> you *will* end up with the target being considered as something like
> x86_64-apple-darwin3.10 (on a Linux 3.10 system) and it won't work.

In my mozconfig I have '-target x86_64-apple-darwin10' passed to clang, and 'ac_add_options --target=x86_64-apple-darwin' - I believe this bypasses the auto-os version detection, correct? I haven't seen my Linux version pop up anywhere.

Side note: I'm not sure why I use 'darwin10' in clang and just 'darwin' in configure, or if that matters...

> 
> *** Clang Patches ***
> URL:
> https://github.com/diorcety/crosstool-ng/blob/master/patches/clang/3.3/110-
> cross-runtime.patch
> Description: Again, for Compiler-rt and same comment applies as above.
> Upstream-ability: Ditto.

Similar "do we need compiler-rt" question here :)

> 
> URL:
> https://github.com/diorcety/crosstool-ng/blob/master/patches/clang/head/120-
> fix-sysroot-paths.patch
> Description: Add -isysroot option and also a fallback so that the copy-sdk
> option can work (SDK is looked for relative to the compiler), also allows
> SDKs 10.6 and 10.7 (with GCC sysroots - stuff in 4.2.1 folders) to be used.
> Upstream-ability: I'm not sure that this patch would be accepted as it is a
> fix for people wanting to use old SDKs and old build scripts, but it allows
> people to get a 10.6 SDK from the internet and build software.

In the mozconfig I'm passing in -Wl,-syslibroot,/path/to/MacOSX10.6.sdk, and configure gets a --with-macos-sdk=/path/to/MacOSX10.6.sdk, which seems to work fine with the stock clang.

> 
> URL:
> https://github.com/diorcety/crosstool-ng/blob/master/patches/clang/head/130-
> prefix-program-names-by-triple.patch
> Description: The prefix-name (or prefix-name of the symlink to it) of the
> Clang compiler itself is calculated at runtime and then used when searching
> for sub-commands (ar, ld, otool, ranlib etc). Such prefixed names are
> searched before the target-triple prefixed name (as they can differ). This
> patch wouldn't be needed if you consistently named all of your tools either
> i386-apple-darwinNN-$tool or x86_64-darwinNN-$tool.
> It also works around an issue where, if you built the tools on a 32bit
> machine, i.e. with the prefix i686-apple-darwinNN (as by default, a 32bit
> cctools build would get named) it would end up not supporting SSE2, since,
> oddly, Clang-for-Darwin interprets "i386" as the "32bit intel architecture
> family" whereas it interprets "i686" as "a specific 32bit CPU - the Pentium
> Pro". With this change you can name your entire set of compiler tools (or
> the symlinks to them) consistently and according to your own preference,
> e.g. apple-$tool and not run into problems. I should re-state that building
> 64bit tools would result in tools prefixed with x86_64-apple-darwinNN and
> they would 'just work' - however I think this patch allows for cleaner names
> of the toolchain binaries and eliminates what was a frustrating SSE2 bug
> with tools named i686-.
> To avoid any confusion I should point out that regardless of the prefix of
> the compilers (which usually matches the host-machine's architecture, i.e.
> the bitness for which they were built), those compilers can target both 32
> and 64bit Darwin given the right -m32/-m64 flag as they are always built as
> multilib compilers.
> Upstream-ability: Doubtful.

hwine (or anyone else) - what do we need to do as far as 32/64-bit support for these cross builds? For the time being I am building on a 64-bit Linux machine and targeting x86_64 for OSX, so I haven't had to deal with any 32-bit issues. If we also have to generate 32-bit binaries, or fat binaries, or whatever, then this might be something we'll need.

> 
> URL:
> https://github.com/diorcety/crosstool-ng/blob/master/patches/clang/3.3/140-
> default-gcc-paths.patch
> Description: Allow clang to find and use a GCC built sysroot configured and
> built to the same --prefix, similar to (and possibly mergeable with)
> 120-fix-sysroot-paths.patch but for other target systems to Darwin.
> Upstream-ability: As-per 120-fix-sysroot-paths.patch, doubtful.

For Linux->OSX cross builds we aren't going to be using GCC at all.

> 
> URL:
> https://github.com/diorcety/crosstool-ng/blob/master/patches/clang/head/160-
> mingw-fix.patch
> Description: I know you are sticking with GCC for Windows, I may as well
> describe this patch!
> Instead of adding (hardcoded) both ../../../i686-w64-mingw32/include and
> ../../../x86_64-w64-mingw32/include, we use
> ../../../triple.getTriple()/include. Also the paths for GCC libstdc++ are
> also added.
> Upstream-ability: Overall not too bad I think. The libstdc++ stuff may be
> frowned upon as libc++ exists (though to what extent on Windows I'm not
> sure).

I think you mean MSVC for Windows? In any case, for this bug we are just looking at Linux->OSX, so anything for Windows won't be necessary for us here.

> 
> .. in summary, you might be able to get away with:
> 1. Applying
> https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm/3.3/130-
> fix_triple_transformation.patch,
> llvm/3.3/120-Makefile-rules-remove-ld-option--modules.patch and
> llvm/3.3/110-replace_used_const_PC.patch then building llvm and then ld64
> and Clang ..
> 2. .. on x86_64 so all your tools are prefixed x86_64-apple-darwinNN.
> 3. Then passing -mlinker-version=127 to Clang/Clang++/ld64 ..
> 4. .. and not using compiler-rt (or building it on OSX instead).
> 
> My hope is that the llvm/Clang devs will improve a lot of the issues we've
> worked around as time progresses. I'll continue to try to get patches looked
> at by them though.

Thanks for working to get things upstreamed! That definitely makes things easier going forward.

Given the info above, is there any reason that we should not continue to use the stock clang-3.3 specifically for Linux->OSX cross builds? Since it seems to work already, I'd prefer to keep the original version rather than have to deploy a patched version.
Flags: needinfo?(hwine)
Flags: needinfo?(bhearsum)
(In reply to Michael Shal [:mshal] from comment #144)
> Thanks for your detailed analysis! For the purposes of this bug, I am
> looking at this from the angle of only using the toolchain for Linux->OSX
> builds. Comments below...
> 
> (In reply to Ray Donnelly from comment #143)
> > *** Compiler-rt Patches ***
> > URL:
> > https://github.com/diorcety/crosstool-ng/blob/master/patches/llvm-compiler-
> > rt/3.3/110-cross-runtime.patch
> > Description: Compiler-rt isn't capable of being cross-compiled yet. You
> > could compile it on OSX, but ideally it'd be done as part of the toolchain
> > build.
> > Upstream-ability: You can only build Compiler-rt if you use the
> > copy-sdk-to-sysroot option as Compiler-rt configury has no way of specifying
> > the sysroot. I wouldn't recommend up-streaming until this problem is fixed
> > and Compiler-rt has been tested and/or fixed in other host/target
> > configurations.
> 
> What do we need Compiler-rt for? When I built clang myself, I did not have
> it included, and the cross Linux->OSX build still succeeded. I'm not sure if
> the clang in our build infrastructure has compiler-rt or not. bhearsum, do
> you know?

It looks like we build clang with https://mxr.mozilla.org/mozilla-central/source/build/unix/build-clang/build-clang.py. We symlink compiler-rt at line 158, so I think we do? There's also a comment mentioning compiler-rt around line 124, but I think that's not relevant to this...
Flags: needinfo?(bhearsum)
(In reply to Michael Shal [:mshal] from comment #144)
> hwine (or anyone else) - what do we need to do as far as 32/64-bit support
> for these cross builds? For the time being I am building on a 64-bit Linux
> machine and targetting x86_64 for OSX, so I haven't had to deal with any
> 32-bit issues. If we also have to generate 32-bit binaries, or fat binaries,
> or whatever, then this might be something we'll need.

We currently ship a universal binary supporting both 32 & 64 bit. OSX
10.6 & 10.7 still support 32 bit (although 64bit is the default).

Deprecating the 32bit build is a separate discussion, but if it's a
blocker to this effort, it may be time to have that discussion.
Flags: needinfo?(hwine)
FYI for those like myself who were only able to produce crashing builds recently:

Similar to #c79, if you have a mozconfig with FLAGS set like this:

FLAGS="-target x86_64-apple-darwin10 -mlinker-version=136 -B $CROSS_LD_PATH -Wl,-syslibroot,$CROSS_SYSROOT"
...
export CC="${CROSS_CLANG}/clang $FLAGS"
export CXX="${CROSS_CLANG}/clang++ $FLAGS"
export CPP="${CROSS_CLANG}/clang $FLAGS -E"

Then the visibility flags are not set correctly. You can tell they are broken if they are empty:

$ grep VISIBILITY_FLAGS obj-x86_64-apple-darwin/config.status 
    (''' VISIBILITY_FLAGS ''', r'''  '''),

It is supposed to look like:

$ grep VISIBILITY_FLAGS obj-x86_64-apple-darwin/config.status
    (''' VISIBILITY_FLAGS ''', r''' -fvisibility=hidden '''),

This happens because the unused linker flag (-Wl,-syslibroot,$CROSS_SYSROOT) produces a warning, which the configure test (which uses -Werror) interprets as an error. I had added that flag in my own mozconfig to work around the error that is supposed to be fixed by bug 933231. Here is my new working mozconfig on m-c as of today with bug 933231 applied:

CROSS_LD_PATH=/home/mshal/cross/install-cctools/bin
CROSS_SYSROOT=/home/mshal/cross/SDKs/MacOSX10.6.sdk
CROSS_PRIVATE_FRAMEWORKS=/home/mshal/cross/SDKs/MacOSX10.6.sdk/System/Library/PrivateFrameworks
FLAGS="-target x86_64-apple-darwin10 -mlinker-version=136 -B $CROSS_LD_PATH"

export HOST_CC=gcc
export HOST_CXX=g++
export HOST_LDFLAGS="-g"

CROSS_CLANG=/home/mshal/cross/clang/bin

export LDFLAGS="-Wl,-syslibroot,$CROSS_SYSROOT -Wl,-dead_strip"
export CC="${CROSS_CLANG}/clang $FLAGS"
export CXX="${CROSS_CLANG}/clang++ $FLAGS"
export CPP="${CROSS_CLANG}/clang $FLAGS -E"

CROSS_TOOLS_PREFIX=/home/mshal/cross/install-cctools/bin/x86_64-apple-darwin10
export AR=${CROSS_TOOLS_PREFIX}-ar
export RANLIB=${CROSS_TOOLS_PREFIX}-ranlib
export STRIP=${CROSS_TOOLS_PREFIX}-strip

export CROSS_COMPILE=1

ac_add_options --target=x86_64-apple-darwin
ac_add_options --with-macos-sdk=$CROSS_SYSROOT
ac_add_options --with-macos-private-frameworks=$CROSS_PRIVATE_FRAMEWORKS
ac_add_options --enable-debug
ac_add_options --disable-optimize
ac_add_options --disable-crashreporter
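To distill the fix: the difference from the broken setup is where the linker-only flag lives. The sketch below is illustrative, not taken from this bug; the SDK path is a placeholder:

```shell
# Broken (sketch): a linker-only flag rides along in $CC, so configure's
# compile-only tests, which add -Werror, fail on clang's
# -Wunused-command-line-argument diagnostic:
#   export CC="clang -target x86_64-apple-darwin10 -Wl,-syslibroot,/path/to/sdk"
#
# Working (sketch): keep linker-only flags in LDFLAGS, where compile-only
# configure tests never see them:
export CC="clang -target x86_64-apple-darwin10"
export LDFLAGS="-Wl,-syslibroot,/path/to/sdk -Wl,-dead_strip"
```

With this split, compile-only invocations see no unused arguments, and VISIBILITY_FLAGS is detected correctly.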
How is the crosstool-ng cctools working out for you? Is the speed now 'good enough' since the sleep was removed?
(In reply to Ray Donnelly from comment #148)
> How is the crosstool-ng cctools working out for you? Is the speed now 'good
> enough' since the sleep was removed?

It's working fine! I did try swapping it out while debugging the crashes, but fortunately they were not related to it. Currently my setup includes:

- stock clang 3.3 (used in our builders)
- ar/ranlib/strip/ld64/otool (etc) from crosstool-ng cctools
- MacOSX10.6.sdk from our xcode image we use in builders (xcode 4.2)

I think Joey is working on a script to bundle all of that together once any remaining issues are fixed.

Thanks for the help!
Whiteboard: [capacity] → [kanban:engops:https://mozilla.kanbanize.com/ctrl_board/6/2212] [capacity]
Whiteboard: [kanban:engops:https://mozilla.kanbanize.com/ctrl_board/6/2212] [capacity] → [kanban:engops:https://mozilla.kanbanize.com/ctrl_board/6/2223] [capacity]
Whiteboard: [kanban:engops:https://mozilla.kanbanize.com/ctrl_board/6/2223] [capacity] → [kanban:engops:https://mozilla.kanbanize.com/ctrl_board/6/2227] [capacity]
Whiteboard: [kanban:engops:https://mozilla.kanbanize.com/ctrl_board/6/2227] [capacity] → [kanban:engops:https://mozilla.kanbanize.com/ctrl_board/6/2212] [capacity]
Depends on: 1150261
Attachment #825330 - Attachment is obsolete: true
Attached file build-crosstool-ng.sh —
Here's the script I used to build cctools using crosstool-ng, based on a script mshal wrote. I ran this in mrrrgn's desktop-build docker container (quay.io/mrrrgn/desktop-build:0.0.19), but I needed to install one additional package (uuid-dev) to make it work. I filed bug 1175651 to fix that.
Assignee: nobody → ted
Status: NEW → ASSIGNED
Attached file osx-cross-mozconfig —
This is a mozconfig I used to build with the toolchain produced by the script in the previous attachment, along with the clang that we have in tooltool. I also did this build in the desktop-build container, although I needed to install the sqlite3 package to pacify NSS, and I had to unset LIBRARY_PATH and CPLUS_INCLUDE_PATH in the environment. Fixing those is covered by bug 1164617.
My next steps are to try to automate what I've just done a bit more:
* make it possible to build cctools in taskcluster
* put the cctools package in tooltool
* write a mozharness script to run this build in taskcluster

After that I'm going to look at making packaging and buildsymbols work.
Good to hear this stuff is still useful and working for you. I've recently gotten back into working on crosstool-ng again, with the goal of upstreaming the cctools and clang changes. The first patch in support of this (allowing for building other compilers than just GCC) has already been merged.

As soon as I can, I will start updating to the most recent cctools and ld64 versions and also investigate adding lld.
Depends on: 1176229
Depends on: 1177232
Depends on: 1175315
Depends on: 1182519
Depends on: 935237
Depends on: 1182520
Depends on: 1183129
Blocks: 1183613
I successfully built a cross-compiled build in Taskcluster:
https://tools.taskcluster.net/task-inspector/#tT5nGpWORFKgxPe__C2RUw/

I'll start getting the patches in the dependent bugs up for review. There's one other bug that needs to be sorted out before we can run this as a normal in-tree Taskcluster task: it requires the tooltool.download.internal scope to fetch the SDK. (I think that bug is filed; I just need to find it.)
Depends on: 1184071
The bug I mentioned in comment 154 did not exist, but now it does: bug 1184084.
Depends on: 1184084
Blocks: 1184122
Blocks: 1185666
Blocks: 1186438
Depends on: 1197154
Depends on: 1197248
Depends on: 1197293
I did a mostly-successful build, it spit out a dmg:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=6177892c36cd

The dmg link is on the "inspect task" link:
https://queue.taskcluster.net/v1/task/z9RBJJS4RSKcRHz1HANfyw/runs/0/artifacts/public/build/target.mac64.dmg

The build shows as failed because the build-linux.sh script that Taskcluster builds use isn't handling the split test zips well:
https://dxr.mozilla.org/mozilla-central/source/testing/taskcluster/scripts/builder/build-linux.sh?offset=100#152
Blocks: 1171592
Depends on: 1198190
(In reply to Ted Mielczarek [:ted.mielczarek] from comment #156)
> I did a mostly-successful build, it spit out a dmg:
> https://treeherder.mozilla.org/#/jobs?repo=try&revision=6177892c36cd

\o/ !!!
 
> The build shows as failed because the build-linux.sh script that Taskcluster
> builds use isn't handling the split test zips well:
> https://dxr.mozilla.org/mozilla-central/source/testing/taskcluster/scripts/
> builder/build-linux.sh?offset=100#152

Do you need any help from #taskcluster on this?
That's filed as bug 1198179; mshal has a patch, but I think it needs a little more work.
Depends on: 1203689
Blocks: 1209937
bug 543111 is very close to being done. Once that's fixed I'm going to call this fixed as well. bug 927061 is going to track replacing the existing buildbot builds with these taskcluster builds.
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Ted, is there information somewhere on how we can get this set up on our personal linux machines? This would be a great way to cut down on build times significantly.
Flags: needinfo?(ted)
(In reply to Seth Fowler [:seth] [:s2h] from comment #162)
> Ted, is there information somewhere on how we can get this set up on our
> personal linux machines? This would be a great way to cut down on build
> times significantly.

There's no documentation, but it's actually not terribly complicated. If you fetch all of the packages it uses from tooltool:
https://dxr.mozilla.org/mozilla-central/source/browser/config/tooltool-manifests/macosx64/cross-releng.manifest

and then source the common mozconfig, it ought to work:
https://dxr.mozilla.org/mozilla-central/source/build/macosx/cross-mozconfig.common

The only thing in the tooltool manifest that is not public is the copy of the OS X 10.7 SDK. It's just a tarball of the SDK from an Xcode install, so if you can get a copy some other way, you can just copy it to a Linux machine.
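Putting that together, a local setup might look like the sketch below. The tooltool invocation and paths are assumptions based on the standard tooltool workflow, not verified instructions from this bug:

```shell
# Sketch only — the tooltool invocation and paths are assumptions.
# 1) Fetch the cross-toolchain packages listed in the manifest:
#    python tooltool.py fetch -m browser/config/tooltool-manifests/macosx64/cross-releng.manifest
# 2) Supply the OS X 10.7 SDK tarball yourself (it is not public) and unpack it
#    alongside the other packages.
# 3) Point a mozconfig at the common cross-compilation settings:
#    ac_add_options --target=x86_64-apple-darwin
#    . "$topsrcdir/build/macosx/cross-mozconfig.common"
```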
blocking-b2g: 2.2r? → ---
Flags: needinfo?(ted)
Component: Platform Support → Buildduty
Product: Release Engineering → Infrastructure & Operations
Product: Infrastructure & Operations → Infrastructure & Operations Graveyard