I was expecting an ad for their product somewhere towards the end, but it wasn't there!
I do wonder, though: why would this company report this vulnerability to Mozilla if their product does fingerprinting?
Isn't it better for the business (albeit unethical) to keep the vulnerability private, to differentiate from competitors? After all, I don't see many threat actors burning their zero-days through responsible disclosure!
Don't get your opsec advice from HN. Check the Whonix, Qubes, GrapheneOS, and Kicksecure forums/wikis, Nihilist's opsec guides, and Privacy Guides.
Make sure to exit Tor Browser at the end of a session. Make sure not to mix two uses in one session.
Whether they care is entirely separate.
Why is this global keyed only by the database name string in the first place?
The post mentions a generated UUID; why not use that instead, with a per-origin mapping of database names to UUIDs somewhere? Or even just separate hash tables for each origin? That seems like a cleaner fix to me than sorting (though admittedly a more complex one, since it involves architectural changes).
Seems to me that having a global hashtable that shares information from all origins is asking for trouble, though I'm sure there is a good explanation for this (performance, historical reasons, some benefits of this architecture I'm not aware of, etc.).
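To make the distinction concrete, here is a toy Python sketch of the two designs being compared. This is not Firefox's actual implementation (which is C++); the class and method names are illustrative assumptions. It just shows why a table keyed only by database name hands the same state to every origin, while keying by (origin, name) does not.

```python
import uuid


class GlobalDbRegistry:
    """Toy model of a registry keyed only by database name (the leaky design)."""

    def __init__(self):
        self._by_name = {}  # name -> UUID, shared by every origin

    def get_uuid(self, origin, name):
        # The origin is ignored in the lookup key, so two different origins
        # opening the same database name observe the same UUID: a stable
        # cross-site identifier.
        if name not in self._by_name:
            self._by_name[name] = uuid.uuid4()
        return self._by_name[name]


class PerOriginDbRegistry:
    """Same API, but keyed by (origin, name): no cross-origin sharing."""

    def __init__(self):
        self._by_key = {}  # (origin, name) -> UUID

    def get_uuid(self, origin, name):
        key = (origin, name)
        if key not in self._by_key:
            self._by_key[key] = uuid.uuid4()
        return self._by_key[key]


leaky = GlobalDbRegistry()
# Two unrelated origins see the same UUID for the same database name.
assert leaky.get_uuid("https://a.example", "db") == \
       leaky.get_uuid("https://b.example", "db")

fixed = PerOriginDbRegistry()
# With per-origin keying, each origin gets its own UUID.
assert fixed.get_uuid("https://a.example", "db") != \
       fixed.get_uuid("https://b.example", "db")
```

The trade-off is roughly what you'd expect: the per-origin key costs a slightly larger key and loses any deduplication the global table provided, in exchange for removing the shared state entirely.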
Also, does anyone know of any researchers in the academic world focusing on this issue? We are aware that EFF has a project on this subject that used to be named after a pedophile, but we are looking more for professors at universities or pure research labs à la MSR or PARC than activists working for NGOs, however pure their praxis :-)
As privacy geeks, we have become fascinated with the topic -- it seems that while we can achieve security through extensions like NoScript, uBlock Origin, or Firefox containers (our personal "holy trinity"), anonymity slips through our fingers due to fingerprinting issues. (Especially if we lump stylometry into the big bucket of "fingerprinting".)
[1] https://web.archive.org/web/20260422190706/https://fingerpri...
Why don't browsers make it like phones where the server (app) has to be granted permission to access stuff?
Hmm, I'm a little confused, since in 2021 Mozilla released experimental one-process-per-site:
> This fundamental redesign of Firefox’s Security architecture extends current security mechanisms by creating operating system process-level boundaries for all sites loaded in Firefox for Desktop
https://blog.mozilla.org/security/2021/05/18/introducing-sit...
Perhaps that is not fully released?
Or perhaps it is, but IndexedDB happens to live outside of that isolation?
That's why the expansion of web standards is wrong. Browsers should provide minimal APIs for interacting with the device, and features like IndexedDB could be implemented as WebAssembly libraries, leaking no valuable data.
For example, if canvas provided only access to picture buffer, and no drawing routines calling into platform-specific libraries, it would become useless for fingerprinting.
Seriously, I am saddened that Chromium dominates the browser market as much as it does, but at this point the herd-immunity of Chromium is necessary to keep users safe.
And all browser devs should be required to actively fight against fingerprinting.
There is no legitimate need for fingerprinting in browsers.
The IndexedDB UUID is "shared across all origins", so why not use the contents of the database to identify browsers, rather than the ordering?
> For security and product stakeholders, the key point is simple: even an API that appears harmless can become a cross-site tracking vector if it leaks stable process-level state.
This reads almost LLM-ish. The article on the whole does not appear so, but parts of it do.