by postalcoder
2 subcomments
- Very neat, but recently I've tried my best to reduce my extension usage across all apps (browsers/ide).
I do something similar locally by manually specifying all the things I want scrubbed/replaced and having Keyboard Maestro run a script on my system whenever I do a paste operation, which is mapped to `hyperkey + v`. The plus side of this is that the paste is instant. The latency introduced by even the littlest bit of inference is enough friction to make you want to ditch the process entirely.
Another plus of the non-extension solution is that it's application agnostic.
- This should be a native feature of the chat apps for all major LLM providers. There’s no reason why PII can’t be masked before it reaches the API endpoint and then restored when the LLM responds. “Mary Smith” becomes “Samantha Robertson” on the way out, and back to “Mary Smith” in responses from the LLM. A small local model (such as the BERT model in this project) detects the PII.
Something like this would greatly increase end user confidence. PII in the input could be highlighted so the user knows what is being hidden from the LLM.
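A minimal sketch of that mask/unmask round-trip, with a toy regex standing in for the local detector model (in practice a small NER model would do the detection; all names and aliases here are illustrative):

```python
import re

# Toy "detector": matches Firstname Lastname pairs. A real system would use
# a small local model (e.g. BERT-based NER) instead of this pattern.
NAME_PATTERN = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")

ALIASES = ["Samantha Robertson", "Daniel Greene", "Priya Nair"]

def mask(text):
    """Replace each detected name with a stable alias; return masked text plus the mapping."""
    mapping = {}
    def substitute(match):
        original = match.group(0)
        if original not in mapping:
            mapping[original] = ALIASES[len(mapping) % len(ALIASES)]
        return mapping[original]
    return NAME_PATTERN.sub(substitute, text), mapping

def unmask(text, mapping):
    """Restore the original names in the LLM's response."""
    for original, alias in mapping.items():
        text = text.replace(alias, original)
    return text

masked, mapping = mask("Please email Mary Smith about the invoice.")
print(masked)  # the remote model only ever sees the alias
print(unmask("Reply sent to Samantha Robertson.", mapping))  # "Mary Smith" restored
```

Highlighting the spans in `mapping` in the input UI would give exactly the "user knows what is hidden" behavior described above.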
by throwaway613745
1 subcomment
- Maybe you should fix your logging to not output secrets in plaintext? Every single modern logging utility has this ability.
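For instance, in Python's stdlib `logging`, a redaction filter attached to a handler is only a few lines (the secret patterns below are illustrative, not exhaustive):

```python
import logging
import re

# Illustrative patterns for secret-looking substrings; tune for your stack.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"),
    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),  # OpenAI-style key shape
]

class RedactingFilter(logging.Filter):
    """Scrub secret-looking substrings before a record is emitted."""
    def filter(self, record):
        message = record.getMessage()
        for pattern in SECRET_PATTERNS:
            message = pattern.sub("[REDACTED]", message)
        record.msg, record.args = message, ()
        return True

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(RedactingFilter())
logger.addHandler(handler)
logger.warning("request failed, api_key=abc123secret")  # emits "[REDACTED]" instead of the key
```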
- Any plans to make the extension perform a replacement of whatever’s flagged with dummy data? Knowing I have sensitive data is usually not a problem, but constantly needing to replace or remove it is, particularly with larger token counts
by idiotsecant
0 subcomments
- This is a concept that I firmly believe will be a fundamental feature of the medium-term future. Personal memetic firewalls.
As AI gets better and cheaper there will absolutely be influence campaigns conducted at the individual level for every possible thing anyone with money might want, and those campaigns will be so precisely targeted and calibrated by autonomous influencer AI that know so much about you that they will convince you to do the thing they want, whether by emotional manipulation, subtle blackmail, whatever.
It will also be extraordinarily easy to emit subliminal or unconscious signals that will encode a great deal more of our internal state than we want them to.
It will be necessary to have a 'memetic firewall' that reduces our unintentional outgoing informational cross section, while also preventing contamination by the torrent of ideas trying to worm their way into our heads. This firewall would also need to be autonomous, but by exploiting the inherent information asymmetry (your firewall would know you very well) it need not be as powerful as the AI that are trying to exploit you.
by password-app
0 subcomments
- This is a great approach. We took a similar philosophy building password automation - the AI agent never sees actual passwords.
Credentials are injected through a separate secure channel while the agent only sees placeholders like "[PASSWORD]". The AI handles navigation and form detection, but sensitive data flows through an isolated path.
For anyone building AI tools that touch PII: separating the "thinking" layer from the "data" layer is essential. Your LLM should never need to see the actual sensitive values to do its job.
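A toy illustration of that split (the vault contents, selectors, and placeholder format are made up; a real injector would pull from an OS keychain or secrets manager rather than a dict):

```python
# Hypothetical split: the planner (LLM) only ever produces placeholder tokens,
# and a local injector resolves them against a vault the model never sees.
VAULT = {"USERNAME": "mary@example.com", "PASSWORD": "s3cr3t!"}

def plan_form_fill():
    """What the agent emits: field selectors mapped to placeholders, never real values."""
    return {"#user": "[USERNAME]", "#pass": "[PASSWORD]"}

def inject(plan, vault):
    """Local, non-LLM step: swap placeholders for real credentials at submit time."""
    resolved = {}
    for selector, value in plan.items():
        if value.startswith("[") and value.endswith("]"):
            resolved[selector] = vault[value[1:-1]]
        else:
            resolved[selector] = value
    return resolved

print(inject(plan_form_fill(), VAULT))
```

The design point is that `inject` runs outside the model loop entirely, so no prompt, transcript, or tool log ever contains the real values.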
- I wonder if this would have been useful: https://github.com/microsoft/presidio - it's heavy but looks really good. There is a lite version.
- I've had similar thoughts! I just put together a document with four sections (original, sanitized, output, unsanitized) and built a little command-line tool to automatically filter and copy content between them. For now, my tool uses simple regex and specific keywords, but I really like the approach you're taking! This is definitely an interesting problem that needs a good solution. I'm excited to see your WASM implementation!
- Ok what I would really love is something like this but for the damn terminal. No, I don't store credentials in plaintext, but when they get pulled into memory after being decrypted you really gotta watch $TERMINAL_AGENT or it WILL read your creds eventually and it's ever so much fun explaining why you need to rotate a key.
Sure, go ahead and roast me, but please include a foolproof method you use to make sure that never happens while still allowing you to use credentials for developing applications in the normal way.
- LLMs don't need your secret tokens (but MCP servers hand them over anyway): https://00f.net/2025/06/16/leaky-mcp-servers/
Encrypting sensitive data can be more useful than blocking entire requests, as LLMs can reason about that data even without seeing it in plain text.
The ipcrypt-pfx and uricrypt prefix-preserving schemes have been designed for that purpose.
- How do you prevent these models from reading secrets in your repos locally?
It’s one thing for the ENVs to be user-pasted, but typically you’re also giving the bots access to your file system to interrogate and understand them, right? Does this also block that access for ENVs by detecting them and applying granular permissions?
by mentalgear
0 subcomments
- Neat!
There's also:
- https://github.com/superagent-ai/superagent
- https://github.com/superagent-ai/vibekit
- Neat - I built something similar - https://github.com/deepanwadhwa/zink?tab=readme-ov-file#3-sh...
- Curious about how much latency this adds (per input token)? Obviously it depends on your computer, but is it ~10s or ~1s?
Also, how does this deal with inquiries where a piece of PII is important to the task itself? I assume you just have to turn it off?
by greenbeans12
0 subcomments
- This is pretty cool. I barely use the web UIs for LLMs anymore. Any way you could make a wrapper for Claude Code/Cursor/Gemini CLI? Ideally it would work like GitHub push protection in GH Advanced Security.
- This is a great idea, using a BERT model for DLP at the door. Have you thought about integrating this into semantic router as an option, leaving out the look-ahead? Maybe a smaller code base?
by itopaloglu83
0 subcomments
- It wasn’t very clear in the video: does it trigger on the paste event, or when the page is activated?
There are a lot of websites that scan the clipboard to improve the user experience, but this also poses a great risk to users' privacy.
by fmkamchatka
2 subcomments
- Could this run at the network level (like TripMode)? So it would catch usage from web based apps but also the ChatGPT app, Codex CLI etc?
- Anything like this for Claude Code/calls to OpenRouter?
- I'd like to see this as a Windsurf plugin.
by sciencesama
1 subcomment
- Develop a Pi-hole-style adblock
- Really good idea!
- Can I have this between my machine and git, please? Twice now I've committed .env* files and it totally passed me by (usually because it's to a private repo), and then later on we/someone clears down the files and forgets to rewrite git history before pushing live. It should never have gotten there in the first place. (I wish GitHub did a scan before making a repo public.)
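For the local side of this, one common approach is a pre-commit hook (a sketch is below, assuming a script dropped into `.git/hooks/pre-commit` and made executable) that refuses to commit anything named `.env*` before it ever reaches history; GitHub's push protection in secret scanning covers the remote side:

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit: block commits that stage .env* files.
import fnmatch
import subprocess
import sys

def blocked_paths(staged_paths):
    """Return staged paths whose basename matches .env*, e.g. .env or .env.local."""
    return [p for p in staged_paths
            if fnmatch.fnmatch(p.rsplit("/", 1)[-1], ".env*")]

if __name__ == "__main__":
    try:
        staged = subprocess.run(
            ["git", "diff", "--cached", "--name-only"],
            capture_output=True, text=True, check=False,
        ).stdout.splitlines()
    except FileNotFoundError:  # git not on PATH; nothing to check
        staged = []
    bad = blocked_paths(staged)
    if bad:
        print("blocked env file(s): " + ", ".join(bad), file=sys.stderr)
        print("unstage them (git restore --staged <file>) and add them to .gitignore",
              file=sys.stderr)
        sys.exit(1)  # nonzero exit aborts the commit
```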