Bug 1811546 - Clean up docs, and add FAQ section to the mach try perf docs. r=perftest-reviewers,kshampur

This patch adds an FAQ (Frequently Asked Questions) section to the mach try perf docs. It also does a small cleanup: it moves fxrecord into the `testing/performance` folder, re-organizes the linting configuration file, fixes the file naming, and capitalizes the `mozperftest` and `fxrecord` titles in the sidebar. Lastly, it adds a warning to the `mozperftest` docs to direct people who land there to the `mach try perf` page.

Differential Revision: https://phabricator.services.mozilla.com/D167555
This commit is contained in:
Greg Mierzwinski 2023-01-23 18:31:31 +00:00
parent 9049853887
commit 28dfb3982f
12 changed files with 113 additions and 26 deletions

View file

@@ -1,8 +1,8 @@
===========
mozperftest
Mozperftest
===========
**mozperftest** can be used to run performance tests.
**Mozperftest** can be used to run performance tests.
.. toctree::

View file

@@ -27,6 +27,15 @@ options, you can use `./mach perftest --help` to learn about them.
Running in the CI
-----------------
.. warning::
If you are looking for how to run performance tests in CI and ended up here, you might want to check out :ref:`Mach Try Perf`.
.. warning::
If you plan to run tests often in the CI for Android, you should contact the Android
infra team to make sure there's availability in our pool of devices.
You can run tests in the CI directly from the `mach perftest` command by adding the `--push-to-try` option
to your locally working perftest call.
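The flow above can be sketched as follows; the test path here is purely illustrative (any locally working perftest invocation should do):

```shell
# First confirm the test runs locally (path is hypothetical).
./mach perftest testing/performance/perftest_example.js

# Then re-run the exact same invocation with --push-to-try to
# schedule it in the CI instead of running it locally.
./mach perftest testing/performance/perftest_example.js --push-to-try
```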
@@ -39,8 +48,4 @@ to run in the CI because they use sparse profiles. Depending on the
availability of workers, once the task starts, it takes around 15 minutes to start
the test.
.. warning::
If you plan to run tests often in the CI for android, you should contact the android
infra team to make sure there's availability in our pool of devices.

View file

@@ -1,7 +1,8 @@
fxrecord
########
========
Fxrecord
========
`fxrecord <https://github.com/mozilla/fxrecord>`__ is a tool for measuring the
`Fxrecord <https://github.com/mozilla/fxrecord>`__ is a tool for measuring the
startup performance of Firefox for Desktop. It captures a video of Firefox for
Desktop starting on a laptop and computes visual metrics in the same manner
as Raptor, using Browsertime.

View file

@@ -10,7 +10,7 @@ Performance Testing
DAMP
awsy
fxrecord
mach try perf
mach-try-perf
mozperftest
raptor
talos
@@ -24,7 +24,7 @@ For more detailed information about each test suite and project, see their docum
* :doc:`DAMP`
* :doc:`awsy`
* :doc:`fxrecord`
* :doc:`mach try perf`
* :doc:`mach-try-perf`
* :doc:`mozperftest`
* :doc:`raptor`
* :doc:`talos`

View file

@@ -10,7 +10,27 @@ To make it easier for developers to find the tests they need to run we built a p
When you trigger a try run from the perf selector, two try runs will be created. One with your changes, and one without. In your console, after you trigger the try runs, you'll find a PerfCompare link that will bring you directly to a comparison of the two pushes when they have completed.
The tool is built to be conservative about the number of tests to run, so if you are looking for something that is not listed, it's likely hidden behind a flag found in the `--help`.
The tool is built to be conservative about the number of tests to run, so if you are looking for something that is not listed, it's likely hidden behind a flag found in the `--help`. Here's a small sample of what you'll find there, highlighting the most relevant flags::
$ ./mach try perf --help
perf arguments:
--show-all Show all available tasks.
--android Show android test categories (disabled by default).
--chrome Show tests available for Chrome-based browsers (disabled by default).
--safari Show tests available for Safari (disabled by default).
--live-sites Run tasks with live sites (if possible). You can also use the `live-sites` variant.
--profile Run tasks with profiling (if possible). You can also use the `profiling` variant.
--single-run Run tasks without a comparison
--variants [ [ ...]] Select variants to display in the selector from: no-fission, bytecode-cached, live-sites, profiling, swr
--platforms [ [ ...]]
Select specific platforms to target. Android only available with --android. Available platforms: android-a51, android,
windows, linux, macosx, desktop
--apps [ [ ...]] Select specific applications to target from: firefox, chrome, chromium, geckoview, fenix, chrome-m, safari
task configuration arguments:
--artifact Force artifact builds where possible.
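To make the flags above concrete, here are a few illustrative combinations (a sketch only; which categories actually appear in the selector depends on your tree):

```shell
# Show Android categories and target only geckoview.
./mach try perf --android --apps geckoview

# Include Chrome-based browsers and the live-sites variant.
./mach try perf --chrome --variants live-sites

# Push a single run with no baseline comparison, using artifact builds.
./mach try perf --single-run --artifact
```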
Standard Usage
--------------
@@ -174,6 +194,23 @@ The following fields are available:
Note that the App/Variant-Restriction fields should be used to restrict the available apps and variants, not expand them, since the suites, apps, and platforms combined already provide the largest coverage. Use the restrictions when you know certain things definitely won't work, or will never be implemented for a category of tests. For instance, our `Resource Usage` tests only work on Firefox even though they may exist in Raptor, which can run tests with Chrome.
Frequently Asked Questions (FAQ)
--------------------------------
If you have any questions that aren't already answered below, please reach out to us in the `perftest matrix channel <https://matrix.to/#/#perftest:mozilla.org>`_.
* **How can I tell what a category or a set of selections will run?**
At the moment, you need to run your command with an additional option to see what will be run: `./mach try perf --no-push`. See the `Categories`_ section for more information about this. In the future, we plan on having a dynamically updated list of the tasks in the `Categories`_ section of this document.
* **What's the difference between `Pageload desktop`, and `Pageload desktop firefox`?**
If you simply ran `./mach try perf` with no additional options, then there is no difference. If you start adding browsers to the try run with options like `./mach try perf --chrome`, then `Pageload desktop` will select the tests available for ALL of the enabled browsers, while `Pageload desktop firefox` will only select Firefox tests. When `--chrome` is provided, you'll also see a `Pageload desktop chrome` option.
* **Help! I can't find a test in any of the categories. What should I do?**
Use the option `--show-all`. This will let you select tests directly from the `./mach try fuzzy --full` interface instead of the categories. You will always be able to find your tests this way. Please be careful with your task selections, though, as it's easy to run far too many tests this way!
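The two escape hatches mentioned in the answers above look like this in practice (a sketch; both flags come from the `--help` output):

```shell
# Preview which tasks a selection would run, without pushing to try.
./mach try perf --no-push

# Fall back to the full fuzzy interface when a test isn't categorized.
./mach try perf --show-all
```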
Future Work
-----------

View file

@@ -2,7 +2,7 @@
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
---
name: mach try perf
name: mach-try-perf
manifest: None
static-only: True
suites: {}

View file

@@ -3,11 +3,12 @@ perfdocs:
description: Performance Documentation linter
# This task handles its own search, so just include cwd
include: [
'testing/raptor',
'python/mozperftest',
'testing/talos',
'testing/awsy',
'testing/fxrecord',
'testing/raptor',
'testing/talos',
'testing/performance/fxrecord',
'testing/performance/mach-try-perf',
]
exclude: []
extensions: ['rst', 'ini', 'yml']