They were the last users of clang 4, so remove the clang 4 toolchain.
--HG--
extra : rebase_source : d03a083e9217aeb6c1d2c91decb978426f0e8d1a
Currently, many tasks fetch content from the Internets. A problem with
that is that fetching from the Internets is unreliable: servers may have
outages or be slow; content may disappear or change out from under us.
The unreliability of 3rd party services poses a risk to Firefox CI.
If services aren't available, we could potentially not run some CI tasks.
In the worst case, we might not be able to release Firefox. That would
be bad. In fact, as I write this, gmplib.org has been unavailable for
~24 hours and Firefox CI is unable to retrieve the GMP source code.
As a result, building GCC toolchains is failing.
A solution to this is to make tasks more hermetic by depending on
fewer network services (which by definition aren't reliable over time
and therefore introduce instability).
This commit attempts to mitigate some external service dependencies
by introducing the *fetch* task kind.
The primary goal of the *fetch* kind is to obtain remote content and
re-expose it as a task artifact. By making external content available
as a cached task artifact, we allow dependent tasks to consume this
content without touching the service originally providing that
content, thus eliminating a run-time dependency and making tasks more
hermetic and reproducible over time.
We introduce a single "fetch-url" "using" flavor to define tasks that
fetch a single URL and then re-expose its content as a task artifact.
Powering this is a new, minimal "fetch" Docker image that contains a
"fetch-content" Python script that does the work for us.
We have added tasks to fetch source archives used to build the GCC
toolchains.
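As a sketch (the field names here are illustrative and may not match
the final schema exactly), a fetch task definition looks something
like:

  gmp-6.1.0:
    description: GMP 6.1.0 source code
    run:
      using: fetch-url
      url: https://gmplib.org/download/gmp/gmp-6.1.0.tar.bz2
      sha256: ...
      size: ...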
Fetching remote content and re-exposing it as an artifact is not
very useful by itself: the value is in having tasks use those
artifacts.
We introduce a taskgraph transform that allows tasks to define an
array of "fetches." Each entry names a task of the "fetch" kind. When
present, the corresponding "fetch" task is added as a dependency, and
the task ID and artifact path from that "fetch" task are added to the
MOZ_FETCHES environment variable of the depending task. Our
"fetch-content" script has a "task-artifacts" sub-command that tasks
can execute to retrieve all artifacts listed in MOZ_FETCHES.
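For example (illustrative; the exact encoding of MOZ_FETCHES is defined
by the transform), a toolchain task might declare:

  fetches:
    - binutils-2.28.1
    - gmp-6.1.0

and its script would then run

  fetch-content task-artifacts

to download everything recorded in MOZ_FETCHES before building.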
To prove all of this works, the code for fetching dependencies when
building GCC toolchains has been updated to use `fetch-content`. The
now-unused legacy code has been deleted.
This commit improves the reliability and efficiency of GCC toolchain
tasks. Dependencies now all come from task artifacts and should always
be available in the common case. In addition, `fetch-content` downloads
and extracts files concurrently, which makes it faster than the serial
downloading and extracting we were previously doing.
There are some things I don't like about this commit.
First, a new Docker image and Python script for downloading URLs feels
a bit heavyweight. The Docker image is definitely overkill as things
stand. I can eventually justify it because I want to implement support
for fetching and repackaging VCS repositories and for caching Debian
packages. These will require more packages than what I'm comfortable
installing on the base Debian image, therefore justifying a dedicated
image.
The `fetch-content static-url` sub-command could definitely be
implemented as a shell script. But Python is readily available and
is more pleasant to maintain than shell, so I wrote it in Python.
`fetch-content task-artifacts` is more advanced and writing it in
Python is more justified, IMO. FWIW, the script is Python 3 only,
which conveniently gives us access to `concurrent.futures` to
facilitate concurrent downloads.
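To sketch the approach (this is illustrative, not the actual
fetch-content code; the URLs and filenames are just examples):

  import concurrent.futures
  import urllib.request

  def download(url, dest):
      # Stream the URL to a local file in 64k chunks.
      with urllib.request.urlopen(url) as r, open(dest, 'wb') as f:
          while True:
              chunk = r.read(65536)
              if not chunk:
                  break
              f.write(chunk)
      return dest

  downloads = [
      ('https://ftp.gnu.org/gnu/binutils/binutils-2.28.1.tar.xz', 'binutils.tar.xz'),
      ('https://gmplib.org/download/gmp/gmp-6.1.0.tar.bz2', 'gmp.tar.bz2'),
  ]

  # Fetch all URLs concurrently instead of one after the other.
  with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
      futures = [executor.submit(download, url, dest) for url, dest in downloads]
      for future in concurrent.futures.as_completed(futures):
          print('downloaded %s' % future.result())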
`fetch-content` also duplicates functionality found elsewhere.
generic-worker's task payload supports a "mounts" feature which
facilitates downloading remote content, including from a task
artifact. However, this feature doesn't exist on docker-worker.
So we have to implement downloading inside the task rather than
at the worker level. I concede that if all workers had generic-worker's
"mounts" feature and supported concurrent download, `fetch-content`
wouldn't need to exist.
`fetch-content` also duplicates functionality of
`mach artifact toolchain`. I probably could have used
`mach artifact toolchain` instead of writing
`fetch-content task-artifacts`. However, I didn't want to introduce
the requirement of a VCS checkout. `mach artifact toolchain` has its
origins in providing a feature to the build system. And "fetching
artifacts from tasks" is a more generic feature than that. I think
it should be implemented as a generic feature and not something that is
"toolchain" specific.
I think the best place for a generic "fetch content" feature is in
the worker, where content can be defined in the task payload. But as
explained above, that feature isn't universally available. The next
best place is probably run-task. run-task already performs generic,
very-early task preparation steps, such as performing a VCS checkout.
I would like to fold `fetch-content` into run-task and make it all
driven by environment variables. But run-task is currently Python 2
and achieving concurrency would involve a bit of programming (or
adding package dependencies). I may very well port run-task to Python
3 and then fold fetch-content into it. Or maybe we leave
`fetch-content` as a standalone script.
MozReview-Commit-ID: AGuTcwNcNJR
--HG--
extra : source : 0b941cbdca76fb2fbb98dc5bbc1a0237c69954d0
extra : histedit_source : a3e43bdd8a9a58550bef02fec3be832ca304ea93
After this change, we consistently import GPG keys from files in
the GCC build scripts.
MozReview-Commit-ID: BcyvCQoGbMS
--HG--
extra : source : 5fce34a460b51e45ac280a9f0cb8bad896fbcff1
extra : histedit_source : 01621ea8111315c251a9493a11efca72c2ba3c7d
Version 2.25.1's libiberty can choke on some symbols. That was fixed in
2.27. As of writing, the latest version is 2.30. Conservatively go with
2.28.1, which is the same major version (2.28) as what is currently in
Debian stable.
--HG--
extra : rebase_source : 9e5ba94421a1568f662cfd98af7168ea1c890934
And adapt the build-gcc.sh script for the changes to
contrib/download_prerequisites.
--HG--
rename : taskcluster/scripts/misc/build-gcc-6-linux.sh => taskcluster/scripts/misc/build-gcc-7-linux.sh
extra : rebase_source : b1d785777b8c141c0eb0f52a73734abd2db21b94
The URL is now being redirected to
https://www.openssl.org/source/old/1.1.0/openssl-1.1.0g.tar.gz. Let's
add a -L so we follow redirects automatically.
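For illustration (the exact invocation lives in the build script), the
download becomes something like:

  curl -L -O https://www.openssl.org/source/openssl-1.1.0g.tar.gz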
MozReview-Commit-ID: AuZ98jGidzl
--HG--
extra : rebase_source : 07e61558024e789df45d8e2ab67ab5ad9d3d355b
Build the latest tup master branch with the LD_PRELOAD dependency
checker.
MozReview-Commit-ID: ALfnnmOZrky
--HG--
extra : rebase_source : 529d4392ef73e03f66fb76f089f8b88f45b44972
Note that static analysis was the only remaining user of the 32-bit toolchain, so I've removed win32-clang-cl (or rather, renamed it to win32-clang-cl-st-an).
--HG--
rename : build/build-clang/clang-win32.json => build/build-clang/clang-win32-st-an.json
rename : build/build-clang/clang-win64.json => build/build-clang/clang-win64-st-an.json
rename : taskcluster/scripts/misc/build-clang32-windows.sh => taskcluster/scripts/misc/build-clang32-st-an-windows.sh
rename : taskcluster/scripts/misc/build-clang64-windows.sh => taskcluster/scripts/misc/build-clang64-st-an-windows.sh
Ensure better determinism when creating rust toolchain packages
by rejecting generic channels like 'stable' or 'nightly'. Instead,
insist on a specific version or date.
The current valid dates for beta and nightly can be obtained with:
curl -s https://static.rust-lang.org/dist/channel-rust-beta.toml | grep ^date
curl -s https://static.rust-lang.org/dist/channel-rust-nightly.toml | grep ^date
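A sketch of the kind of check this adds (illustrative, not the exact
code in the repackaging script):

  def validate_channel(channel):
      # Reject generic channel names; require e.g. '1.23.0' or 'beta-2018-02-05'.
      if channel in ('stable', 'beta', 'nightly'):
          raise ValueError('%s is too generic; pin a version or date' % channel)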
MozReview-Commit-ID: I0DXw1KJGZz
--HG--
extra : rebase_source : 92e158193072582b8568d9c9f00ffdefa0af1a9c
The Proguard dependency is now managed by Gradle.
MozReview-Commit-ID: EOvKSE5z28P
--HG--
extra : rebase_source : 760b117f500cc639cc8c24e9c02933990f358dd7
We'd like to install the NDK through the Android SDK manager. But we
can't pin versions of the NDK with the SDK manager, and so Google
can silently upgrade the NDK on us. Since that is undesirable, this is
the next best thing.
With the toolchain task in hand, we can make all the relevant tasks
depend on the toolchain task and remove the download of the NDK from
tooltool as well.
New Android-Gradle plugins pin the build-tools version, and we want to
be consistent between Gradle and moz.build.
MozReview-Commit-ID: ApWS4rHzPuH
--HG--
extra : rebase_source : 22008e9333b15c594ce26c2a52f67396d6e3ab84
extra : source : f918500d9cf5112b70bc8e0a120df435b02252b7
Turns out Google's maven repository doesn't publish checksums. I
can't imagine why not, but there it is. We have to think more about
whether to trust the artifacts downloaded from maven.google.com.
MozReview-Commit-ID: CdWijorq1IV
--HG--
extra : rebase_source : 6c66cf1444876624f10409ea6437863e2c2ea9b0
extra : source : 0850b319efd43ac8f4d61485451722975da55ca1
I tested this in automation and the build proceeded, though I couldn't
test the bindgen output: the build is currently busted on one dependent
crate with Rust beta, which is the first toolchain that has this package
and will go to release shortly.
This should work, though! If I need more changes, I'll adjust them in bug 1432153.
You can test the repackage manually with repack_rust.py --toolchain beta, for
example.
MozReview-Commit-ID: GI2f6vGVqTe
Don't build ucl when building upx; Debian stretch has a recent enough
version. In fact, the latest upstream version doesn't build with the
GCC in Debian stretch (http://bugs.debian.org/811707)
--HG--
extra : rebase_source : aae67773b9dd3b99f6ddf9ab7f59a628037e6925
Bump the mingw version to get the newest commit, and do not include the
unneeded dw-extras.h on MinGW (thanks Jacek!)
MozReview-Commit-ID: OjO93XHCxs
--HG--
extra : rebase_source : 933bbb385004988a23d1069c9cd3241b3a3b336e
llvm-symbolizer is necessary to get symbols in llvm-dsymutil crash
dumps. While we could use the one from clang during the build, it's
better if the llvm-dsymutil toolchain is standalone for local testing.
--HG--
extra : rebase_source : 5cd234a3e14ab52a4ce759821e0e756e68167797
When I originally wrote the llvm-dsymutil build script in bug 1430315,
I wasn't setting CMAKE_BUILD_TYPE to Release, and was ending up with
a very large binary (> 300MB), so I stripped it.
When I later set CMAKE_BUILD_TYPE to Release, I left the manual
stripping in, but that removes symbols that are useful for stack traces
when dsymutil crashes (the Release build type still leaves out debug info).
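For reference, the relevant configuration step looks like this (a
sketch; the full cmake invocation in the build script has more flags):

  cmake -DCMAKE_BUILD_TYPE=Release ...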
--HG--
extra : rebase_source : 802daadc24c0090574b1a44ea8b4e6c25735f703
By default, wget prints dots every 1k bytes. This can render a
lot of output for large files. We switch to the "mega" style, which
makes each dot represent 64k, thus reducing output by up to 64x.
We also force the use of dot display. By default, it uses "bar"
which attempts to use terminal formatting if possible. Since most
of this code executes in CI and terminal control characters can
interfere with logged output, we force the use of "dot." (Although
wget appears to automatically switch to dot in TC today. But
consistency is good.)
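Concretely, the invocation becomes something like:

  wget --progress=dot:mega <url>

instead of relying on the default --progress=bar behavior.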
MozReview-Commit-ID: IpTWJdcauTV
--HG--
extra : rebase_source : 5c9aa1bbdcd78eaa0b31347ad026a2c1beaedc03
We've had problems with crashes in llvm-dsymutil for a while, and while
they are, in essence, due to the fact that rustc produces bad debug
info, they are a hurdle to our builds. The tool comes along with clang,
and updating clang is not necessarily easy (witness bug 1409265), so,
so far, we've relied on backporting fixes, which can be time consuming
(witness bug 1410148).
OTOH, llvm-dsymutil is a rather specific tool that doesn't strictly
need to be tied to clang. It's only tied to it because it uses the llvm
code to do some of the things it does, and it's part of the llvm source
tree. But it could just as well be a separate tool, like it was (is?) on
OSX.
So, we add a toolchain job to build it from the llvm source,
independently from clang, so that we can update it separately if we
hit new crashes that happen to already be fixed on llvm trunk. It will
also allow us to more easily update after upstream fixes crashes we
report.
--HG--
extra : rebase_source : b814353b4b4632e46646a24b8f54c5300618ff49
sccache was failing to build with the GCC/binutils on the CentOS-based
docker image, but it doesn't fail with the Debian-based one, so we can
remove the dependency on the gcc toolchain task. This allows sccache to
remain untouched when we change the gcc build scripts, and more
importantly, it allows sccache to depend on no toolchain that requires
building things.
This now makes it possible to use sccache as a dependency for all other
toolchain jobs that compile, if that's beneficial (which might not be
the case, given the current sccache retention time, but at least it's a
viable option now).