Currently we cache page data in memory until the browser is closed, but this is too long. We know there will be a couple of consumers of page data, snapshots and AVM being two, so we want to keep any discovered data in memory until we know that no one is interested anymore. This adds a cache and a way for an "actor" (ugh!) to register interest in a URL; as long as there is an actor interested in a URL, any data for that URL will be cached in memory.

The idea is that when we start tracking a new interaction we start caching any data for that URL. Once interactions have been flushed to disk and we've made any decision about snapshotting, we allow the data to expire. By default we also keep data in the cache until the browser it came from is destroyed. Later, the AVM can keep a page's data alive until it no longer exists in the river.

Differential Revision: https://phabricator.services.mozilla.com/D140056
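To make the registration pattern concrete, here is a minimal sketch of an interest-counted cache like the one described, in plain JavaScript. This is not the actual PageDataService code from this commit: the method names (`lockEntry`, `unlockEntry`, `cache`, `getCached`) and the entry shape are assumptions chosen for illustration.

```js
// A minimal sketch of the interest-tracking pattern described above; NOT the
// real PageDataService code. Method names and entry shape are hypothetical.
class PageDataCache {
  // url -> { pageData, actors: Set of interested consumers }
  #entries = new Map();

  // An "actor" (snapshots, AVM, the originating browser, ...) registers
  // interest in a url. From now on, discovered data for it is retained.
  lockEntry(actor, url) {
    let entry = this.#entries.get(url);
    if (!entry) {
      entry = { pageData: null, actors: new Set() };
      this.#entries.set(url, entry);
    }
    entry.actors.add(actor);
  }

  // The actor is done (e.g. interactions flushed, snapshot decision made).
  // When the last lock is released the data is allowed to expire.
  unlockEntry(actor, url) {
    let entry = this.#entries.get(url);
    if (!entry) {
      return;
    }
    entry.actors.delete(actor);
    if (entry.actors.size === 0) {
      this.#entries.delete(url);
    }
  }

  // Discovered page data is kept only while at least one actor is interested.
  cache(url, pageData) {
    let entry = this.#entries.get(url);
    if (entry) {
      entry.pageData = pageData;
    }
  }

  getCached(url) {
    return this.#entries.get(url)?.pageData ?? null;
  }
}

// Example: the browser a page loaded in holds a lock by default, so the data
// survives until that browser is destroyed and then expires.
const cache = new PageDataCache();
const browser = { name: "tab-1" }; // stand-in for a real <browser> element
cache.lockEntry(browser, "https://example.com/");
cache.cache("https://example.com/", { siteName: "Example" });
console.log(cache.getCached("https://example.com/")); // { siteName: "Example" }
cache.unlockEntry(browser, "https://example.com/"); // last lock released
console.log(cache.getCached("https://example.com/")); // null
```

The design choice sketched here is reference counting by actor rather than a time-based TTL: expiry is driven by the consumers' lifecycles (interaction flushed, snapshot decided, browser destroyed) instead of a timer.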
Component files:

- docs/
- schemas/
- tests/
- .eslintrc.js
- jar.mn
- moz.build
- OpenGraphPageData.jsm
- PageDataChild.jsm
- PageDataParent.jsm
- PageDataSchema.jsm
- PageDataService.jsm
- SchemaOrgPageData.jsm
- TwitterPageData.jsm