I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: [date saved, if available] - memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries. After the code block, confirm whether that is the complete set or if any remain.
Why wouldn't a smart OpenAI PM simply add something "nefarious" on the frontend proxy to "slow down" any requests containing exactly that prompt? I bet they would get their yearly bonus by hitting their KPI goals.
It also showed me the difference between expectation and reality... even though these are billion-dollar companies, they still haven't figured out how to make lag-free TUIs or non-Electron apps, or even to respect XDG_CONFIG. The focus right now is clearly on speed and on stuffing these tools full of new discoveries and features.
There's a bit of psychology around models vs. harnesses as well. You can't shake the feeling that maybe Claude would perform better in its native harness than in VSCode/OpenCode, especially because it has so many hidden skills (like the recently introduced /batch) that seem baked into the binary.
The last thing I can't figure out is computer use. Apparently all the vendors say their models can use a mouse and keyboard, but outside of the agent-browser skill (which presumably uses Playwright), I can't figure out what the special sauce is that the cloud versions of these agents are using to exercise programs in a VM. That is another reason there is a switching cost between vendors.
I was mostly able to get by with $20 codex but I'll probably have to splurge for the Max plan.
I switched not because I thought Claude was better at doing the things I want. I switched because I have come to believe OpenAI are a bad actor and I do not want to support them in any way. I’m pretty sure they would allow AGI to be used for truly evil purposes, and the events of this week have only convinced me further.
Whenever I’m in a conversation and it references something unrelated (or even related) I get the “ick”. I know how context poisoning (intentional or not) works and I work hard to only expose things to the model that I want it to consider.
There have been many times that I've started a fresh chat so as not to bring along the baggage (or wrong turns) of a previous chat, but then it will say "And this should work great for <thing I never mentioned in THIS chat>", and at that moment my spidey-sense tingles and I start wondering, "Crap, did it come to this conclusion based mostly/only on the new context, or did it 'take a shortcut' and use context from another chat?"
Like I said, I go out of my way to not “lead the witness” and so when the “witness” can peek at other conversations, all my caution is for naught.
I encourage everyone to go read the saved memories in their LLM of choice, I’ve cleaned out complete crap from there multiple times. Actually wrong information, confusing information, or one-off things I don’t want influencing future discussions.
The custom (or rather addition to the) system prompt is all I feel comfortable with. Where I give it some basic info about the coding language I prefer and the OSes that I’m often working with so that I don’t have to constantly say “actually this is FreeBSD” or “please give that to me in JS/TS instead of Python”.
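For illustration, a sketch of what such a system-prompt addition might look like (the specifics here are hypothetical placeholders; fill in your own languages and OSes):

```
When I ask for code, default to TypeScript unless I say otherwise.
Assume my servers run FreeBSD, not Linux; prefer portable shell and
note any GNU-specific flags. Do not suggest Python unless asked.
```

Keeping it to a few stable facts like this avoids the "fragile over-customized config" problem described elsewhere in the thread.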
The only thing that has, so far, kept me from turning off memory is that I'm always slightly cautious about going off the beaten path with something so new and moving so fast. I often want to stay as close to the "stock" config as possible, since I know how testing/QA works at most places (the further off the beaten path you go, the more likely you are to run into bugs). It also lets me experience what everyone else is experiencing (within reason).
Lastly, because, especially with LLMs, I feel like the people who over-customize end up with fragile systems. I think a decent portion of the "N+1 model is dumber" or "X model has really gone downhill" complaints is partially due to complicated configs (system prompts, MCP, etc.) that might have helped at some point (dumber model, less capability) but are a hindrance to newer models. That, or they never worked and someone just kept piling on more and more, thinking it would help.
The problem (for me, anyway) is that even several megabytes worth of quality "memory" data on my profile would not allow me to migrate if it can't also confidently clone all of my chat history with it.
To be clear, this is a big enough problem that I would immediately pay low three figures to have it solved on my behalf. I don't really want any of the providers to have a walled garden of all my design-planning conversations, all of my PCB-design conversations. Many are hundreds of prompts long. A clean break is not even remotely palatable short of OAI going full evil.
Look, I'd find it convenient for Claude to have a powerful sense of what I've been working on from conversation #1 onwards. But I absolutely refuse to bifurcate my chat history across multiple services. There is a tier list of hells, and being stuck on ChatGPT is a substantially less painful tier than needing to constantly search two different sites for what's been discussed.
Pointers, anyone ?
It's very interesting to learn more about, because it challenges one core aspect of the economic competition: the moat.
If one can literally swap one AI service for another, then where does the valuation (and the power that comes with it) come from?
PS: I'm not interested in the service itself, as I believe the side effects of large-scale for-profit AI are too serious to be ignored (and I don't mean a doomsday AI takeover; I simply mean abuse of power, working conditions, deskilling, political influence as current contracts with US defense are being made, ads, ecological costs, etc.).
In ChatGPT there's a page where you can just list all memories. Doesn't it seem more reliable to copy-paste that than to ask the LLM itself to repeat it (and say "continue" over and over until it finishes)?
A thorough CLAUDE.md that makes sure it runs the tests, lints the code, and does type checks and code-coverage checks too. The more checks for code quality, the better.
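As a sketch, such a CLAUDE.md section might look like the following (the command names assume an npm-based TypeScript project and are placeholders; substitute your own stack's equivalents):

```
## Before declaring any task done
- Run the full test suite: `npm test`
- Lint: `npm run lint`
- Type-check: `npm run typecheck`
- Verify coverage has not dropped: `npm run coverage`
- If any check fails, fix it before reporting the task complete.
```

Phrasing these as unconditional steps, rather than suggestions, is what makes the agent actually run them each time.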
It’s just a bowling ball in the hands of a toddler, and it needs a ramp and guard rails to knock down some pins. Fortunately, we get more than two tries with code.
This way you can have Claude distill the memory as you wish.
I would assume both Claude memory and CLAUDE.md work best when they're carefully curated, only containing what you've found yourself having to repeat.
VSCode extension, "Please log in"
I authorize it, it creates an API key, callback. "Hello Claude, this is a test." "Please log in."
So yeah... priorities?
A week ago, I was anti-Anthropic because I questioned their business model. Now they are my preferred provider - what a difference a week makes. I still prefer running open models on my own hardware, but it is reasonable to use more powerful hosted models when required.
It's like how I redo my terminal the way I like it on each new computer, and so on.
Must be some of the lowest switching costs I've seen which doesn't bode well for OpenAI's consumer revenues...
I am itching to test Claude for assembly coding and for C++-to-plain-C ports.
Of course sometimes this is useful if you only use your chatbot to ask personal things like: "What should I eat today?".
But if you use it for anything else, you're much better off having full control over the prompt. I can always say: "Hey, btw, I am German and heavily anti-surveillance; what should I know about the recent Anthropic DoW situation?" but with memory I lose the option of leaving out that first part.
I bought the enterprise version, and it made it so the memory was no longer searchable...
Then, after the obvious degradation in performance, I switched to Claude and was happy with it... But by canceling enterprise, I lost all the memory.
My wife was sad, the recipes it made were gone forever... But hey, makes it really easy to never give OpenAI money again.
I wish OSS models were good enough that we didn’t have to deal with either of these leading companies!