We need this image for building clang on machines with arm64
sysroots. (Note that this image *is* a linux x86-64 image, just with
some arm64 cross-compilation packages available.)
Differential Revision: https://phabricator.services.mozilla.com/D28404
--HG--
extra : moz-landing-system : lando
Run checks done in push-apk in promote-phase, instead of the very last task of the pipeline
Differential Revision: https://phabricator.services.mozilla.com/D26328
--HG--
rename : taskcluster/docker/google-play-strings/Dockerfile => taskcluster/docker/mozapkpublisher/Dockerfile
extra : moz-landing-system : lando
This imports the changes from wheezy-lts (http://deb.freexian.com/extended-lts/)
and creates a package we install in the debian7-based images (with a
modified version number to work around bug #1419577).
This leaves debian7-raw and debian7-packages unpatched, because of
the chicken-and-egg problem.
Depends on D26100
Differential Revision: https://phabricator.services.mozilla.com/D26102
--HG--
extra : moz-landing-system : lando
This sets enough things up to be able to push to try with an
opt-in, but doesn't run the job on every push. This can be used as a
template for future work on a fuzzing job.
Differential Revision: https://phabricator.services.mozilla.com/D25069
--HG--
extra : moz-landing-system : lando
This has the side effect of addressing bug 1524723 for those images.
Depends on D22263
Differential Revision: https://phabricator.services.mozilla.com/D22264
--HG--
extra : moz-landing-system : lando
Because the debian9-base apt configuration doesn't install recommended
packages, we end up needing to install more packages than before. We
could pass --install-recommends to apt-get, but that would make the
image even larger than it already was, especially after the upcoming
changes, because new versions of diffoscope come with more recommended
dependencies.
The side effect is that this makes the image much smaller than it used
to be, while preserving the useful recommended packages (we don't
actually need all of them).
Differential Revision: https://phabricator.services.mozilla.com/D22262
--HG--
extra : moz-landing-system : lando
However, we leave moving the package building to a script for another
day.
Differential Revision: https://phabricator.services.mozilla.com/D19624
--HG--
rename : taskcluster/docker/debian-base/cloud-mirror-workaround.sh => taskcluster/docker/debian-raw/cloud-mirror-workaround.sh
rename : taskcluster/docker/debian-base/setup_packages.sh => taskcluster/docker/debian-raw/setup_packages.sh
to give docker images and toolchains time to build.
--HG--
rename : taskcluster/docker/debian-raw/cloud-mirror-workaround.sh => taskcluster/docker/debian-base/cloud-mirror-workaround.sh
rename : taskcluster/docker/debian-raw/setup_packages.sh => taskcluster/docker/debian-base/setup_packages.sh
This patch adds a toolchain task for building d8 with customized build settings, and uses it in the jsshell benchmark tests. A customized debian9-based image ('custom-v8'), which is required to build the tool, is added by this patch as well.
Differential Revision: https://phabricator.services.mozilla.com/D14019
--HG--
extra : moz-landing-system : lando
The SQLite in Debian 7 (3.7.13) lacks support for common table
expressions (the WITH keyword), which was introduced in SQLite
3.8.3. The Mercurial SQLite storage backend currently relies on
CTEs. Even if a future Mercurial doesn't require CTEs, it is likely
that it will still use CTEs when available, for performance reasons.
So, it is in our best interest to give Mercurial access to a
modern SQLite. Plus, using a modern SQLite and avoiding potential
bugs in old versions seems prudent.
This commit introduces a SQLite package backport for Debian 7
so we can use the new SQLite feature. We had to minimally patch
the build to work with an older version of TCL that isn't using
multiarch.
I observed libsqlite3 being installed as part of building various
other packages (such as Python). I initially added the package as
a dependency so packages would be built against a more modern
SQLite. But glandium doesn't believe it matters, since nothing
should be doing build-time feature detection. Python is the most
important downstream package (since Mercurial uses its SQLite).
I audited the CPython build system and did not see any build-time
SQLite feature detection or version sniffing. So I think we'll be
fine building against an older SQLite.
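For illustration (not part of this commit), here is a minimal example
of the CTE feature in question, via Python's sqlite3 module; on SQLite
older than 3.8.3 (such as the 3.7.13 in Debian 7), the query raises
sqlite3.OperationalError:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    print("SQLite version:", sqlite3.sqlite_version)

    # A recursive common table expression (the WITH keyword) requires
    # SQLite 3.8.3 or newer.
    rows = conn.execute(
        """
        WITH RECURSIVE counter(n) AS (
            SELECT 1
            UNION ALL
            SELECT n + 1 FROM counter WHERE n < 5
        )
        SELECT n FROM counter
        """
    )
    print([n for (n,) in rows])  # [1, 2, 3, 4, 5]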
Differential Revision: https://phabricator.services.mozilla.com/D14194
--HG--
extra : moz-landing-system : lando
There are several kinds that cache tasks based on the inputs that go into the task. Historically,
these inputs included the names of upstream tasks. This changes these kinds to include the digests
of the upstream tasks instead.
This also bumps the version of the docker image and toolchain tasks, as every digest changes for them.
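To sketch the idea (hypothetical names; this is not the actual
taskgraph code), a task's cache digest now folds in the digests of its
upstream tasks rather than just their names, so a change anywhere
upstream yields a new digest downstream:

    import hashlib

    def task_digest(task_definition, upstream_digests):
        # Hash the task's own inputs together with its upstream
        # tasks' digests (sorted for order independence).
        h = hashlib.sha256()
        h.update(task_definition.encode("utf-8"))
        for digest in sorted(upstream_digests):
            h.update(digest.encode("utf-8"))
        return h.hexdigest()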
Differential Revision: https://phabricator.services.mozilla.com/D11949
--HG--
extra : moz-landing-system : lando
Interestingly, the resulting binaries are still compatible with Gtk+
3.4. The only differences in symbol use are:
g_log -> g_logv
g_assertion_message -> g_assertion_message_expr
Both of those symbols are actually available in older versions of glib.
Some #defines just switched to using the latter rather than the
former.
Differential Revision: https://phabricator.services.mozilla.com/D11141
We're going to bump our shipped builds to build against Gtk+ 3.10, but
want to ensure we can still build against Gtk+ 3.4. As we're using
Gtk+ packages installed in the build docker image, we need to have a
separate image where the Gtk+ packages are kept at version 3.4.
Differential Revision: https://phabricator.services.mozilla.com/D11137
Now autotest does not require java to be installed; instead, it
will let the user know that infer is not being tested if java
is missing.
Differential Revision: https://phabricator.services.mozilla.com/D7326
--HG--
extra : moz-landing-system : lando
Currently, many tasks fetch content from the Internets. A problem with
that is that fetching from the Internets is unreliable: servers may have
outages or be slow; content may disappear or change out from under us.
The unreliability of 3rd party services poses a risk to Firefox CI.
If services aren't available, we could potentially not run some CI tasks.
In the worst case, we might not be able to release Firefox. That would
be bad. In fact, as I write this, gmplib.org has been unavailable for
~24 hours and Firefox CI is unable to retrieve the GMP source code.
As a result, building GCC toolchains is failing.
A solution to this is to make tasks more hermetic by depending on
fewer network services (which by definition aren't reliable over time
and therefore introduce instability).
This commit attempts to mitigate some external service dependencies
by introducing the *fetch* task kind.
The primary goal of the *fetch* kind is to obtain remote content and
re-expose it as a task artifact. By making external content available
as a cached task artifact, we allow dependent tasks to consume this
content without touching the service originally providing that
content, thus eliminating a run-time dependency and making tasks more
hermetic and reproducible over time.
We introduce a single "fetch-url" "using" flavor to define tasks that
fetch single URLs and then re-expose that URL as an artifact. Powering
this is a new, minimal "fetch" Docker image that contains a
"fetch-content" Python script that does the work for us.
We have added tasks to fetch source archives used to build the GCC
toolchains.
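As a sketch (assumed behavior; the actual fetch-content interface may
differ), a static-url fetch amounts to downloading a URL to a local
file while verifying a known digest, so the resulting artifact is
reproducible:

    import hashlib
    import urllib.request

    def fetch_static_url(url, dest, expected_sha256):
        # Stream the remote content to disk while hashing it.
        hasher = hashlib.sha256()
        with urllib.request.urlopen(url) as response, \
                open(dest, "wb") as fh:
            while True:
                chunk = response.read(65536)
                if not chunk:
                    break
                hasher.update(chunk)
                fh.write(chunk)
        if hasher.hexdigest() != expected_sha256:
            raise ValueError("digest mismatch for %s" % url)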
Fetching remote content and re-exposing it as an artifact is not
very useful by itself: the value is in having tasks use those
artifacts.
We introduce a taskgraph transform that allows tasks to define an
array of "fetches." Each entry corresponds to the name of a task in
the "fetch" kind. When present, the corresponding "fetch" task is added as a
dependency. And the task ID and artifact path from that "fetch" task
is added to the MOZ_FETCHES environment variable of the task depending
on it. Our "fetch-content" script has a "task-artifacts"
sub-command that tasks can execute to perform retrieval of all
artifacts listed in MOZ_FETCHES.
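As a sketch of that sub-command (the MOZ_FETCHES encoding and queue
URL here are assumptions for illustration), retrieval reads task
ID/artifact path pairs from the environment and downloads them
concurrently via concurrent.futures:

    import concurrent.futures
    import os
    import urllib.request

    # Assumed URL scheme and "taskId@artifactPath" encoding.
    QUEUE = "https://queue.taskcluster.net/v1/task/%s/artifacts/%s"

    def download(entry):
        task_id, artifact_path = entry.split("@", 1)
        dest = os.path.basename(artifact_path)
        urllib.request.urlretrieve(QUEUE % (task_id, artifact_path), dest)
        return dest

    entries = os.environ.get("MOZ_FETCHES", "").split()
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        for dest in pool.map(download, entries):
            print("fetched", dest)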
To prove all of this works, the code for fetching dependencies when
building GCC toolchains has been updated to use `fetch-content`. The
now-unused legacy code has been deleted.
This commit improves the reliability and efficiency of GCC toolchain
tasks. Dependencies now all come from task artifacts and should always
be available in the common case. In addition, `fetch-content` downloads
and extracts files concurrently. This makes it faster than the serial
approach we were previously using.
There are some things I don't like about this commit.
First, a new Docker image and Python script for downloading URLs feels
a bit heavyweight. The Docker image is definitely overkill as things
stand. I can eventually justify it because I want to implement support
for fetching and repackaging VCS repositories and for caching Debian
packages. These will require more packages than what I'm comfortable
installing on the base Debian image, therefore justifying a dedicated
image.
The `fetch-content static-url` sub-command could definitely be
implemented as a shell script. But Python is readily available and
is more pleasant to maintain than shell, so I wrote it in Python.
`fetch-content task-artifacts` is more advanced and writing it in
Python is more justified, IMO. FWIW, the script is Python 3 only,
which conveniently gives us access to `concurrent.futures`, which
facilitates concurrent download.
`fetch-content` also duplicates functionality found elsewhere.
generic-worker's task payload supports a "mounts" feature which
facilitates downloading remote content, including from a task
artifact. However, this feature doesn't exist on docker-worker.
So we have to implement downloading inside the task rather than
at the worker level. I concede that if all workers had generic-worker's
"mounts" feature and supported concurrent download, `fetch-content`
wouldn't need to exist.
`fetch-content` also duplicates functionality of
`mach artifact toolchain`. I probably could have used
`mach artifact toolchain` instead of writing
`fetch-content task-artifacts`. However, I didn't want to introduce
the requirement of a VCS checkout. `mach artifact toolchain` has its
origins in providing a feature to the build system. And "fetching
artifacts from tasks" is a more generic feature than that. I think
it should be implemented as a generic feature and not something that is
"toolchain" specific.
I think the best place for a generic "fetch content" feature is in
the worker, where content can be defined in the task payload. But as
explained above, that feature isn't universally available. The next
best place is probably run-task. run-task already performs generic,
very-early task preparation steps, such as performing a VCS checkout.
I would like to fold `fetch-content` into run-task and make it all
driven by environment variables. But run-task is currently Python 2
and achieving concurrency would involve a bit of programming (or
adding package dependencies). I may very well port run-task to Python
3 and then fold fetch-content into it. Or maybe we leave
`fetch-content` as a standalone script.
MozReview-Commit-ID: AGuTcwNcNJR
--HG--
extra : source : 0b941cbdca76fb2fbb98dc5bbc1a0237c69954d0
extra : histedit_source : a3e43bdd8a9a58550bef02fec3be832ca304ea93
Let's install python-zstandard for both Python 2 and Python 3 in
all our Debian-based images so it is readily available for use.
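A quick way to verify the package is usable from either interpreter is
a byte round-trip through the python-zstandard API, for example:

    import zstandard as zstd

    # Compress and decompress a small payload; works identically on
    # Python 2 and Python 3.
    frame = zstd.ZstdCompressor().compress(b"hello, taskcluster")
    assert zstd.ZstdDecompressor().decompress(frame) == b"hello, taskcluster"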
MozReview-Commit-ID: 1L8zDc5MYXA
--HG--
extra : rebase_source : db718891dd31d4feceff76fbce753b63049e20b1
The python3-minimal package provides /usr/bin/python3 on Debian.
This commit installs this package so a `python3` executable is
provided.
This required backporting the package to wheezy. The final patch
is trivial. But I wasted a bit of time figuring out why `mk-build-deps`
wasn't working. It would no-op and exit 0 and then the build would
complain about missing dependencies!
glandium's theory is that the ":any" multiarch support on wheezy
isn't complete. Removing ":any" seems to make things "just work."
MozReview-Commit-ID: FBicpK4SmkQ
--HG--
extra : rebase_source : a28ce731024e8ed6a43fb30e2ed57da2abb50d0f