Hopefully most of these changes are self-explanatory; however, a few notes follow.
* In timing-model/animations/play-states.html, as well as updating the tests to
  match the revised spec, one or two tests have been moved to better reflect
  the order in the spec (to make it obvious which branch of the algorithm is
  being tested).
* In timing-model/animations/set-the-timeline-of-an-animation.html we previously
  had two tests that check:
  a) That the playState was 'pending' before and after setting the timeline.
  b) That the playState was 'pending' before setting the timeline and then,
     after setting the timeline and waiting on the ready promise, would become
     'running'.
  Likewise we had the same test for pausing.
  Since these are basically the same test--(b) just adds the wait on the ready
  promise--we combine them here into one test that covers both (a) and (b).
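As a rough illustration of the combined shape, the sketch below uses a simple
stand-in object in place of a real Animation and testharness.js harness; the
helper name and resolution timing are hypothetical, not the actual test code:

```js
// Stand-in for an animation whose ready promise resolves the pending play
// state; the real test uses testharness.js and the testcommon.js helpers.
function makePendingAnimation() {
  const anim = { playState: 'pending' };
  anim.ready = Promise.resolve().then(() => {
    anim.playState = 'running';
  });
  return anim;
}

const animation = makePendingAnimation();

// (a) The play state is 'pending' before and immediately after setting
//     the timeline.
if (animation.playState !== 'pending') {
  throw new Error('Expected pending play state');
}

// (b) After waiting on the ready promise, the play state becomes 'running'.
animation.ready.then(() => {
  if (animation.playState !== 'running') {
    throw new Error('Expected running play state');
  }
  console.log('running');
});
```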
MozReview-Commit-ID: CLoDJvsdwmF
--HG--
extra : rebase_source : c2f34fa6614795f2d3ba9ca3e572f11306f96463
# Web Animations Test Suite

Specification: https://w3c.github.io/web-animations/

## Guidelines for writing tests
- Try to follow the spec outline where possible.

  For example, if you want to test setting the start time, you might be
  tempted to put all the tests in:

  `/web-animations/interfaces/Animation/startTime.html`

  However, in the spec most of the logic is in the “Set the animation start
  time” procedure in the “Timing model” section. Instead, try something like:

  * `/web-animations/timing-model/animations/set-the-animation-start-time.html`
    Tests all the branches and inputs to the procedure as defined in the spec
    (using the `Animation.startTime` API).
  * `/web-animations/interfaces/Animation/startTime.html`
    Tests API-layer specific issues like mapping unresolved values to null,
    etc.

  On that note, two levels of subdirectories is enough even if the spec has
  deeper nesting.

  Note that most of the existing tests in the suite don't do this well yet.
  That's the direction we're heading, however.
- Test the spec.

  If the spec defines a timing calculation that is directly reflected in the
  iteration progress (i.e. `anim.effect.getComputedTiming().progress`), test
  that instead of calling `getComputedStyle(elem).marginLeft`.

  Likewise, don't add needless tests for `anim.playState`. The play state is
  a calculated value based on other values. It's rarely necessary to test it
  directly unless you need, for example, to check that a pending task is
  scheduled (which isn't observable other than by waiting for the
  corresponding promise to resolve).
- Try to keep tests as simple and focused as possible, e.g.

  ```js
  test(t => {
    const animation = createDiv(t).animate(null);
    assert_class_string(animation, 'Animation',
                        'Returned object is an Animation');
  }, 'Element.animate() creates an Animation object');
  ```

  ```js
  test(t => {
    assert_throws({ name: 'TypeError' }, () => {
      createDiv(t).animate(null, -1);
    });
  }, 'Setting a negative duration throws a TypeError');
  ```

  ```js
  promise_test(t => {
    const animation = createDiv(t).animate(null, 100 * MS_PER_SEC);
    return animation.ready.then(() => {
      assert_greater_than(animation.startTime, 0, 'startTime when running');
    });
  }, 'startTime is resolved when running');
  ```

  If you're generating complex test loops and factoring out utility functions
  that affect the logic of the test (other than, say, simple assertion utility
  functions), you're probably doing it wrong.

  It should be possible to understand exactly what the test is doing at a
  glance without having to scroll up and down the test file and refer to
  other files.

  See Justin Searls' presentation, “How to stop hating your tests”, for some
  tips on making your tests simpler.
- Assume tests will run on under-performing hardware where the time between
  animation frames might run into tens of seconds. As a result, animations
  that are expected to still be running during the test should be at least
  100s in length.
- Avoid using `GLOBAL_CONSTS` that make the test harder to read. It's fine to
  repeat the same parameter values like `100 * MS_PER_SEC` over and over
  again, since that makes it easy to read and debug a test in isolation.
  Remember, even if we do need to make all tests take, say, 200s each, text
  editors are very good at search and replace.
- Use the `assert_times_equal` assertion for comparing calculated times. It
  tests that times are equal using the precision recommended in the spec,
  while allowing implementations to override the function to meet their own
  precision requirements.
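For illustration only, a minimal version of such an assertion might look like
the sketch below; the tolerance value is an assumption here, and the actual
helper in testcommon.js may choose its precision and override mechanism
differently:

```js
// Hypothetical sketch of a time-comparison assertion. The tolerance is an
// assumed value for illustration; implementations may substitute their own.
const TIME_PRECISION = 0.0005; // assumed tolerance in milliseconds

function assert_times_equal(actual, expected, description) {
  if (Math.abs(actual - expected) > TIME_PRECISION) {
    throw new Error(`${description}: expected ${expected} but got ${actual}`);
  }
}

// Times within the tolerance compare as equal.
assert_times_equal(100.0001, 100.0, 'startTime matches');
console.log('ok');
```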
- There are quite a few bad tests in the repository. We're learning as we go.
  Don't just copy them blindly—please fix them!