- I don't agree that this is end-to-end encrypted. For example, a compromise of the TEE would mean your data is exposed. In a truly end-to-end encrypted system, I wouldn't expect a server-side compromise to be able to expose my data.
This is similar to the weaselly language Google has been using for the Magic Cue feature ever since Android 16 QPR 1. When it launched, it was local-only -- now it's local and in the cloud "with attestation". I don't like this trend, and I don't think I'll be using such products.
by datadrivenangel
4 subcomments
- I get a fun error message on Debian 13 with Firefox 140:
"This application requires passkey with PRF extension support for secure encryption key storage. Your browser or device doesn't support these advanced features. Please use Chrome 116+, Firefox 139+, or Edge 141+ on a device with platform authentication (Face ID, Touch ID, Windows Hello, etc.)."
- Unless I misunderstand, this doesn't seem to address what I consider to be the largest privacy risk: the information you're providing to the LLM itself. Is there even a solution to that problem?
I mean, e2ee is great and welcome, of course. That's a wonderful thing. But I need more.
- As someone who has spent a good amount of time working on trusted compute (in the crypto domain), I'll say this is generally pretty well thought out. It doesn't get us to an entirely zero-trust E2E solution, but it is still very good.
Inevitably, the TEE hardware vendor must be trusted. I don't think this is a bad assumption in today's world, but this is still a fairly new domain, and longer term it becomes increasingly likely that TEE compromises -- design flaws, microcode bugs, key compromises, etc. -- are discovered (if they haven't been already!). Then we'd need to consider how Confer would handle these and what sort of "break glass" protocols are in place.
This also requires a non-trivial amount of client-side coordination and guarding against supply chain attacks. Setting aside the details of how this is done, even with a transparency log, the client must trust something about "who is allowed to publish acceptable releases". If the client trusts "anything in the log", an attacker could publish their own signed artifacts. So the client must effectively trust a specific publisher identity/key, plus the log's append-only/auditable property, to prevent silent targeted swaps.
The net result is a need to trust Confer's identity and published releases, at least in the short term, as third-party auditors could flag any issues in reproducible builds. As I see it, game theory suggests Confer remains honest; Moxie's reputation plays a fairly large role in this.
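(The "pinned publisher plus append-only log" trust model the comment describes can be sketched in a few lines. This is a toy illustration, not Confer's implementation: HMAC stands in for a real signature scheme like Ed25519, and a hash chain stands in for a real Merkle-tree transparency log.)

```python
import hashlib
import hmac

PUBLISHER_KEY = b"pinned-publisher-key"  # hypothetical identity the client pins

def sign(key: bytes, blob: bytes) -> bytes:
    # HMAC stands in for a real asymmetric signature here.
    return hmac.new(key, blob, hashlib.sha256).digest()

class AppendOnlyLog:
    """Toy hash-chained log: each entry commits to everything before it."""
    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32
    def append(self, artifact_hash: bytes, sig: bytes):
        self.head = hashlib.sha256(self.head + artifact_hash + sig).digest()
        self.entries.append((artifact_hash, sig, self.head))

def client_accepts(log: AppendOnlyLog, artifact: bytes) -> bool:
    """Accept only artifacts that are both logged AND signed by the pinned key."""
    h = hashlib.sha256(artifact).digest()
    for artifact_hash, sig, _ in log.entries:
        if artifact_hash == h and hmac.compare_digest(sig, sign(PUBLISHER_KEY, artifact_hash)):
            return True
    return False

log = AppendOnlyLog()
release = b"client-release-v1.0"
release_hash = hashlib.sha256(release).digest()
log.append(release_hash, sign(PUBLISHER_KEY, release_hash))

# An attacker can get an entry INTO the log, but not under the pinned key.
evil = b"evil-client"
evil_hash = hashlib.sha256(evil).digest()
log.append(evil_hash, sign(b"attacker-key", evil_hash))

assert client_accepts(log, release)   # logged and correctly signed
assert not client_accepts(log, evil)  # logged, but wrong publisher: rejected
```

The point of the sketch is exactly the comment's: the log alone proves nothing; the pinned publisher key is what stops the attacker's artifact, and the log's append-only property is what makes a targeted swap by the legitimate publisher auditable after the fact.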
- An interesting take on the AI model. I'm not sure what their business model is, as training data is the one thing free AI users "pay" in return for the service, but at least this chat service seems honest.
Using remote attestation in the browser to attest the server rather than the client is refreshing.
Using passkeys to encrypt data does limit browser/hardware combinations, though. My Firefox+Bitwarden setup doesn't work with this, unfortunately. Firefox on Android also seems to be broken, but at least Chrome on Android works well.
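(The "attest the server rather than the client" flow reduces, on the client side, to checking that the measurement in the attestation quote matches a digest the client expects -- ideally pinned from a reproducible build. A toy sketch, with hypothetical names; a real verifier would first validate the TEE vendor's certificate chain over the quote:)

```python
import hashlib

# Hypothetical: digest of the server image the client expects,
# pinned from a reproducible build or a transparency log.
EXPECTED_MEASUREMENT = hashlib.sha256(b"server-image-v1").hexdigest()

def verify_attestation(quote: dict) -> bool:
    """Toy check: compare the reported measurement to the pinned value.
    A real client must also verify the vendor signature chain over the
    quote before trusting anything inside it."""
    return quote.get("measurement") == EXPECTED_MEASUREMENT

good_quote = {"measurement": hashlib.sha256(b"server-image-v1").hexdigest()}
bad_quote = {"measurement": hashlib.sha256(b"tampered-image").hexdigest()}

assert verify_attestation(good_quote)
assert not verify_attestation(bad_quote)
```

This is also where the earlier comments' objections land: the check is only as strong as the TEE vendor's key material and the client's pinned expectation.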
- "trusted execution environment" != end-to-end encryption
The entire point of E2EE is that both "ends" need to be fully under your control.
by AdmiralAsshat
0 subcomments
- Well, if anyone could do it properly, Moxie certainly has the track record.
by jdthedisciple
0 subcomments
- The best private LLM is the one you host yourself.
by throwaway35636
0 subcomments
- Interestingly, the Confer image on GitHub doesn't seem to include the model weights in the attestation (they appear to be loaded from a mounted ext4 disk without dm-verity). This probably doesn't compromise the privacy of the communication (as long as the model format contains no executable parts), but it exposes users to a "model swapping" attack, where the Confer operator makes a user talk to an "evil" model without their noticing. Such an evil model could be fine-tuned to provide specifically crafted output to the user. Authenticating the model seems important; maybe it's done at another level of the stack?
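(The fix the comment is asking for amounts to binding a digest of the weights into the attested state and refusing to load anything else -- dm-verity does this continuously at the block level; a whole-file check is the simplest version. A hypothetical sketch, not Confer's code:)

```python
import hashlib

# Hypothetical digest of the model weights that *should* be bound into
# the attestation alongside the server image measurement.
ATTESTED_WEIGHTS_DIGEST = hashlib.sha256(b"model-weights-blob").hexdigest()

def load_weights(blob: bytes) -> bytes:
    """Refuse to serve a model whose weights don't match the attested digest."""
    if hashlib.sha256(blob).hexdigest() != ATTESTED_WEIGHTS_DIGEST:
        raise ValueError("model swap detected: weights don't match attestation")
    return blob

# The genuine weights load; a swapped-in fine-tune is rejected.
assert load_weights(b"model-weights-blob") == b"model-weights-blob"
try:
    load_weights(b"evil-finetuned-weights")
    swapped_in = True
except ValueError:
    swapped_in = False
assert not swapped_in
```

Without something like this inside the attested boundary, the operator can serve arbitrary weights while every measurement the client checks still passes.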
- Does it say anywhere which model it’s using?
I see references to vLLM in the GitHub repo, but not the actual model (Llama, Mistral, etc.), whether they have a custom fine-tune, or whether you supply your own Hugging Face link.
by LordDragonfang
1 subcomment
- > Advanced Passkey Features Required
> This application requires passkey with PRF extension support for secure encryption key storage. Your browser or device doesn't support these advanced features.
> Please use Chrome 116+, Firefox 139+, or Edge 141+ on a device with platform authentication (Face ID, Touch ID, Windows Hello, etc.).
(Running Chrome 143)
So... does this just not support desktops without overpriced webcams, or am I missing something?
- Again with the confidential VM and remote attestation crypto theater? Moxie has a good track record in general, and yet he seems to have a huge blind spot in trusting Intel's broken "trusted VM" computing for some inexplicable reason. He designed Signal's user message backups to the server with similar crypto-secure "enclave" snake oil.
- Aha. This, ideally, is a job for local only. Ollama et al.
Now, of course, it's an open question whether my little graphics card can reasonably compare to a bigger cloud thing (for me, presently, a very genuine question), but that really should be the gold standard here.
by orbital-decay
0 subcomments
- At least Cocoon and similar services relying on TEE don't call this end-to-end encryption. Hardware DRM is not E2EE, it's security by obscurity. Not to say it doesn't work, but it doesn't provide mathematically strong guarantees either.
- MM is basically up-selling his _Signal_ trust score. Granted, Signal and its RedPhone predecessor upped the game, but calling this E2E-encrypted AI chat is a bit of a stretch.
- I am confused. I get E2EE chat with a TEE, but the TEEs I know of (admittedly, I'm not an expert) are not powerful enough to do the actual inference, at least not any useful one. The blog posts published so far just gloss over that.
- Interesting! I wonder a) how much of an issue this addresses, i.e., how worried people actually are about privacy when they use other LLMs, and b) how much of a disadvantage it is for Confer not to be able to read or train on user data.
- I am shocked at how quickly everyone is trying to forget that TEE.fail happened, after which this technology doesn't prove anything on its own. I mean, it isn't useless, but DNS/TLS and physical security/trust become load-bearing, to the point where the claims made by these services are nonsensical/dishonest.
by letmetweakit
0 subcomments
- How does inference work with a TEE? Isn't performance a lot more restricted?