Explicitly specify the arguments to copy to avoid making a copy of
a dangling `this` pointer.
Convert nsUrlClassifierDBService::mClassifier to a RefPtr since
the update closure might need to continue to access its members
after it's been released by the main thread.
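For illustration, a minimal sketch of the pattern, assuming the update closure is a
lambda dispatched to the update thread (DoUpdate, mUpdateThread and the runnable name
are hypothetical; only mClassifier comes from this commit):

    // Before: a default [=] capture (or capturing `this`) copies a raw
    // `this`/member pointer that can dangle once the main thread lets go.
    // After: capture an explicit, owning copy so the closure keeps the
    // Classifier alive for as long as it needs it.
    RefPtr<Classifier> classifier = mClassifier;  // mClassifier is now a RefPtr
    nsCOMPtr<nsIRunnable> r = NS_NewRunnableFunction(
        "UpdateClosure", [classifier]() {  // explicit capture list, no `this`
          classifier->DoUpdate();          // hypothetical member call
        });
    mUpdateThread->Dispatch(r.forget(), NS_DISPATCH_NORMAL);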
MozReview-Commit-ID: CPio3n9MmsK
--HG--
extra : rebase_source : d7a8b207336209ee37441f3429bc608140e404ae
Replace raw pointers to LookupResult with RefPtrs and replace the
nsAutoPtr objects + raw pointer params with UniquePtrs.
Also remove unnecessarily paranoid OOM checks when creating single
LookupResult objects since those are pretty small.
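As a hedged illustration only (the signatures below are made up, not the tree code),
the UniquePtr form makes the ownership transfer visible at the call site:

    // Before: who deletes aResults? Unclear from the signature alone.
    //   nsresult ProcessLookupResults(LookupResultArray* aResults);
    // After: the callee visibly takes ownership of the results.
    nsresult ProcessLookupResults(mozilla::UniquePtr<LookupResultArray> aResults);

    // Call site: the explicit move documents the handoff.
    auto results = mozilla::MakeUnique<LookupResultArray>();
    nsresult rv = ProcessLookupResults(std::move(results));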
MozReview-Commit-ID: G85RNnAat6H
--HG--
extra : rebase_source : a8f6a1ff1e24663d428c8d894cb624e1c67e1bd3
The existing mix of UniquePtr and raw pointers is confusing when
trying to figure out the exact lifetime of these objects.
MozReview-Commit-ID: Br4S7BXEFKs
--HG--
extra : rebase_source : ba35d5c2eeda0741eb4c5491a6caf03f20f3d0ce
This will prevent our holding on to this pointer incorrectly in the
future.
MozReview-Commit-ID: H8ueIOK1qAK
--HG--
extra : rebase_source : 937e9c702c5109b6dfc1684392a1204d8a1edc49
I tried to make TableUpdateArray point to const TableUpdate objects
everywhere but there were two problems:
- HashStore::ApplyUpdate() triggers a few Merge() calls which include
sorting the underlying TableUpdate object first.
- LookupCacheV4::ApplyUpdate() calls TableUpdateV4::NewChecksum() when the
checksum is missing and that sets mChecksum.
MozReview-Commit-ID: LIhJcoxo7e7
--HG--
extra : rebase_source : f6ca4bf42d1ddb897a974a0b19c7185b0b6b93af
Manually keeping tabs on the lifetime of these objects is a pain
and is the likely source of some of our crashes. I suspect we might
also be leaking memory.
This change creates an explicit copy of the main array into the
update thread to avoid using a non-thread-safe shared data
structure. This is a shallow copy. Only the pointers to the
TableUpdates are copied, which means one pointer per list (e.g. 5
in total for google4 in a new profile).
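A minimal sketch of that shallow copy, assuming the updates live in an nsTArray of
RefPtrs (the member and local names here are illustrative):

    // Copy only the pointers, not the TableUpdate payloads, so the update
    // thread works on its own array and never touches the shared one.
    nsTArray<RefPtr<TableUpdate>> updates;
    updates.AppendElements(mTableUpdates);  // shallow: one RefPtr per list
    // `updates` is then handed to the runnable that runs on the update thread;
    // the original mTableUpdates stays with the thread that owns it.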
MozReview-Commit-ID: 221d6GkKt0M
--HG--
extra : rebase_source : e1b81f11bb9b41e465571a95845079f455b5868e
Same approach as the other bug, mostly replaced automatically by removing
'using mozilla::Forward;' and then running:
s/mozilla::Forward/std::forward/
s/Forward</std::forward</
The only file that required manual fixup was TestTreeTraversal.cpp, which had
a class called TestNodeForward with template parameters :)
MozReview-Commit-ID: A88qFG5AccP
This was done automatically by replacing:
s/mozilla::Move/std::move/
s/ Move(/ std::move(/
s/(Move(/(std::move(/
Removing the 'using mozilla::Move;' lines.
And then with a few manual fixups; see the bug for the split series.
MozReview-Commit-ID: Jxze3adipUh
Given that we're no longer using dependent strings in
LookupCacheV4::PrefixString(), we will end up making a copy of the
prefixes at some point. Let's do it early and remove a bunch of
complicated code.
Make the string copies fallible so that we return an error and
fail the update instead of crashing.
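A sketch of the fallible copy using the fallible Assign overload on nsACString
(the surrounding variable names are assumptions):

    // Copy the prefix up front; on OOM, report an error so the caller can
    // fail the update instead of crashing in an infallible allocation.
    nsCString copy;
    if (!copy.Assign(aPrefix, mozilla::fallible)) {
      return NS_ERROR_OUT_OF_MEMORY;
    }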
MozReview-Commit-ID: 5cZHSDIJSlD
--HG--
extra : rebase_source : 0ad130c2be9caf528bb764296836e91fa8a30916
With a common prefix, all of the URL Classifier gtests can be run
like this:
./mach gtest UrlClassifier*
MozReview-Commit-ID: IqQznsldFOD
--HG--
extra : rebase_source : 5da7496e9607083b452c915608a0ab9adae818ff
This patch includes two test cases:
1. Apply an empty update through the Classifier interface, which is the normal use case.
2. Apply an empty update through LookupCacheV4::ApplyUpdate; this ensures the update algorithm is
correct when applying an empty update. This scenario shouldn't actually happen in the
normal use case because it will be skipped by Classifier::CheckValidUpdate.
MozReview-Commit-ID: 9khsuVatX0u
--HG--
extra : rebase_source : 8db87dd2b8b4aec4546257e9f06dcc839b2ea181
This function is arguably nicer than calling NS_ProcessNextEvent
manually, is slightly more efficient, and will enable better auditing
for NS_ProcessNextEvent when we do Quantum DOM scheduling changes.
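The function in question appears to be SpinEventLoopUntil() (see the note about
bug 1359490 further down); a minimal usage sketch, assuming the signature of that era:

    // Instead of hand-rolling a loop around NS_ProcessNextEvent, spin the
    // event loop until the predicate returns true. The predicate is checked
    // after each processed event.
    bool done = false;
    StartAsyncWorkThatEventuallySetsDone(&done);  // hypothetical helper
    mozilla::SpinEventLoopUntil([&]() { return done; });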
In this patch, we make Safebrowsing V2 caching use the same algorithm as V4.
So we remove "mMissCache" for negative caching and the TableFreshness check for
positive caching.
However, Safebrowsing V2 doesn't include negative/positive cache duration information
in the gethash response, so we hard-code a fixed value, 15 minutes, as the cache
duration. In this way, the caching mechanism for V2 matches the one for V4.
An extra effort for V2 here is that we need to manually record prefix misses
because we won't get any response for those prefixes (implemented in
nsUrlClassifierLookupCallback::CacheMisses).
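A sketch of the hard-coded duration, with illustrative names (the real constant and
fields may differ):

    // V2 gethash responses carry no cache-duration information, so both the
    // positive and negative entries get a fixed 15-minute lifetime.
    static const int64_t V2_CACHE_DURATION_SEC = 15 * 60;
    int64_t nowSec = PR_Now() / PR_USEC_PER_SEC;
    response.negativeCacheExpirySec = nowSec + V2_CACHE_DURATION_SEC;  // miss
    response.fullHashExpirySec      = nowSec + V2_CACHE_DURATION_SEC;  // hit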
--HG--
extra : rebase_source : 0fb7938464ad24a01a51493ecb1b5c0197c16123
This is in preparation for replacing this code with SpinEventLoopUntil(),
which is going to ship in bug 1359490.
MozReview-Commit-ID: AChVqh4kfVb
--HG--
extra : rebase_source : 03f8d36c29d74fed34ebed6ae3cb31ca829c3e41
In Bug 1323953, we always send a 4-byte prefix for completion, and the prefix is also
used as the key to store the cache result from the gethash request.
Since it is always 4 bytes, we can convert it to an integer for simplicity.
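A sketch of that conversion, reading the first 4 bytes of the fullhash as a
big-endian uint32_t (the helper name is an assumption):

    #include "mozilla/EndianUtils.h"

    // The completion prefix is always the first 4 bytes of the fullhash
    // (Bug 1323953), so a uint32_t works as the cache key.
    static uint32_t PrefixToCacheKey(const nsACString& aFullHash) {
      MOZ_ASSERT(aFullHash.Length() >= 4);
      return mozilla::BigEndian::readUint32(aFullHash.BeginReading());
    }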
MozReview-Commit-ID: Lkvrg0wvX5Z
--HG--
extra : rebase_source : 7002b1f55bc0f88ed90340082574cc931f8db87a
LookupCacheV4::Has implements the Safebrowsing V4 caching logic (a condensed
sketch follows the list):
1. Check if the fullhash matches any prefix in the local database:
 - If not, the URL is safe.
2. Check if the prefix is in the cache (the prefix is always the first 4 bytes of
 the fullhash, Bug 1323953):
 - If not, send a fullhash request.
3. Check if the fullhash is in the positive cache:
 - If the fullhash is found and it is not expired, the URL is not safe.
 - If the fullhash is found and it is expired, send a fullhash request.
4. If the fullhash is not found, check the negative cache expiration time:
 - If the negative cache time is not expired, the URL is safe.
 - If the negative cache time is expired, send a fullhash request.
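As noted above, a condensed sketch of that flow (the types, members and helpers
below are illustrative, not the exact tree code):

    nsresult LookupCacheV4::Has(const Completion& aFullHash,
                                bool* aHas, bool* aConfirmed) {
      *aHas = *aConfirmed = false;

      uint32_t prefix = aFullHash.ToUint32();            // first 4 bytes
      if (!mPrefixSet.Contains(prefix)) {                // 1. no prefix match
        return NS_OK;                                    //    -> safe
      }
      *aHas = true;                                      // prefix matched

      CachedFullHashResponse* entry = mCache.Get(prefix);
      if (!entry) {                                      // 2. prefix not cached
        return NS_OK;                                    //    -> fullhash request
      }

      int64_t expirySec;
      if (entry->fullHashes.Get(aFullHash, &expirySec)) {  // 3. positive cache
        *aConfirmed = expirySec > NowSec();              // unexpired -> not safe
        return NS_OK;                                    // expired -> request
      }

      if (entry->negativeCacheExpirySec > NowSec()) {    // 4. negative cache
        *aHas = false;                                   // unexpired -> safe
      }
      return NS_OK;                                      // expired -> request
    }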
MozReview-Commit-ID: GRX7CP8ig49
--HG--
extra : rebase_source : 6d3ae8929b11731584810c44d46a1526f7d83e12
This patch includes the following changes:
1. nsUrlClassifierHashCompleter.js
   nsUrlClassifierHashCompleter.idl
- Add a completionV4 interface for the hashCompleter to pass response data to
the DB service.
- Process the response data, which includes the negative cache duration, matched full
hashes, and the cache duration for each match. Full matches are passed through
the nsIFullHashMatch interface.
- Change _requests.responses from an array containing matched fullhashes to a
dictionary so that it can store additional information like the negative cache
duration.
2. nsUrlClassifierDBService.cpp
- Implement the CompletionV4 interface and store the response data in a CacheResultV4
object. Converting the expiration duration to an expiration time is handled here.
- Add a CacheResultToTableUpdate function to convert V2 & V4 cache results
to TableUpdate objects.
3. LookupCache.h
- Extend CacheResult to CacheResultV2 and CacheResultV4 so we can store the
response data in CompletionV2 and CompletionV4.
4. HashStore.h
- Add an API and a member variable in TableUpdateV4 to store the response data.
The TableUpdate object is used by the DB service to pass update data or gethash
responses to the Classifier, so we need to extend TableUpdateV4 to be able
to store the fullHashes.find response.
5. Entry.h
- Define the structure for caching the fullHashes.find response (an illustrative
sketch follows the list).
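An illustrative sketch of the cache layout described for Entry.h (the type and
field names are assumptions; the real definitions may differ):

    // One cached fullHashes.find response per 4-byte prefix.
    struct CachedFullHashResponse {
      // When the "no match for this prefix" answer stops being trustworthy.
      int64_t negativeCacheExpirySec;
      // fullhash -> expiration time for each match returned in the response.
      nsDataHashtable<nsCStringHashKey, int64_t> fullHashes;
    };

    // prefix (as a uint32) -> cached response for that prefix.
    typedef nsClassHashtable<nsUint32HashKey, CachedFullHashResponse>
        FullHashResponseMap;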
MozReview-Commit-ID: FV4yAl2SAc6
--HG--
extra : rebase_source : 02676ded9bc0580c24b4c906199a4abaf004e296
LookupCacheV4::Has implements the Safebrowsing V4 caching logic:
1. Check if the fullhash matches any prefix in the local database:
 - If not, the URL is safe.
2. Check if the prefix is in the cache (the prefix is always the first 4 bytes of
 the fullhash, Bug 1323953):
 - If not, send a fullhash request.
3. Check if the fullhash is in the positive cache:
 - If the fullhash is found and it is not expired, the URL is not safe.
 - If the fullhash is found and it is expired, send a fullhash request.
4. If the fullhash is not found, check the negative cache expiration time:
 - If the negative cache time is not expired, the URL is safe.
 - If the negative cache time is expired, send a fullhash request.
MozReview-Commit-ID: GRX7CP8ig49
--HG--
extra : rebase_source : bcb5fa7aa2b7b116d862e3382447611424603c2d
This patch includes the following changes:
1. nsUrlClassifierHashCompleter.js
   nsUrlClassifierHashCompleter.idl
- Add a completionV4 interface for the hashCompleter to pass response data to
the DB service.
- Process the response data, which includes the negative cache duration, matched full
hashes, and the cache duration for each match. Full matches are passed through
the nsIFullHashMatch interface.
- Change _requests.responses from an array containing matched fullhashes to a
dictionary so that it can store additional information like the negative cache
duration.
2. nsUrlClassifierDBService.cpp
- Implement the CompletionV4 interface and store the response data in a CacheResultV4
object. Converting the expiration duration to an expiration time is handled here.
- Add a CacheResultToTableUpdate function to convert V2 & V4 cache results
to TableUpdate objects.
3. LookupCache.h
- Extend CacheResult to CacheResultV2 and CacheResultV4 so we can store the
response data in CompletionV2 and CompletionV4.
4. HashStore.h
- Add an API and a member variable in TableUpdateV4 to store the response data.
The TableUpdate object is used by the DB service to pass update data or gethash
responses to the Classifier, so we need to extend TableUpdateV4 to be able
to store the fullHashes.find response.
5. Entry.h
- Define the structure for caching the fullHashes.find response.
MozReview-Commit-ID: KgR1NASl7GC
--HG--
extra : rebase_source : 424db14e4af2ffd691c384414d50f64083d5d20b
A new function, Classifier::AsyncApplyUpdates(), is implemented for async updates.
In addition, all public Classifier interfaces become "worker thread only" and
we remove DBServiceWorker::ApplyUpdatesBackground/Foreground.
In DBServiceWorker::FinishUpdate, instead of calling Classifier::ApplyUpdates,
we call Classifier::AsyncApplyUpdates and install a callback for notifying
the update observer when the update is finished. The callback will run on the
caller thread (i.e. the worker thread).
As for the shutdown issue, when the main thread is notified to shut down,
we first *synchronously* dispatch an event to the worker thread to
shut down the update thread. After getting synchronized with all the other
threads, we send the last two events, "CancelUpdate" and "CloseDb", to notify
any dangling update (i.e. BeginUpdate was called but FinishUpdate wasn't)
and do the cleanup work.
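A rough sketch of the async flow, with the completion callback bounced back to the
caller thread (AsyncUpdateCallback, ApplyUpdatesInternal and the exact signature are
assumptions, not the tree code):

    using AsyncUpdateCallback = std::function<void(nsresult)>;

    nsresult Classifier::AsyncApplyUpdates(const AsyncUpdateCallback& aCallback) {
      // Remember the caller (worker) thread so the callback runs there.
      nsCOMPtr<nsIThread> callerThread = NS_GetCurrentThread();
      return mUpdateThread->Dispatch(
          NS_NewRunnableFunction("Classifier::AsyncApplyUpdates",
                                 [this, callerThread, aCallback]() {
            // The heavy update I/O happens here, off the worker thread.
            // (Assumes the Classifier outlives this runnable.)
            nsresult rv = ApplyUpdatesInternal();
            // Bounce the result back to the thread that called us.
            callerThread->Dispatch(
                NS_NewRunnableFunction("Classifier::AsyncApplyUpdates callback",
                                       [aCallback, rv]() { aCallback(rv); }),
                NS_DISPATCH_NORMAL);
          }),
          NS_DISPATCH_NORMAL);
    }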
MozReview-Commit-ID: DXZvA2eFKlc
--HG--
extra : rebase_source : cd2e27a6b679d2c96e769854d1582ed2dcda12bb