- I know everybody seems to want the agent to remember every conversation they've ever had with it, but I just don't see the value in that. In fact, it seems to hurt productivity to have the agent second-guessing me based on something I said yesterday. Every time I've used any memory system, the agent gets distracted from the current task by previous conversations and branches of development...often commingling unrelated projects (I work on code for work, open source projects, a bunch of unrelated side projects, etc.) and trying to satisfy requirements that don't make sense.
I've stopped trying to achieve general "memory". I just ask the agent to thoroughly, but concisely, document each project. If it writes developer documentation and a development plan/roadmap, as though a person was going to have to get up to speed and start working on the project, it provides all the information the agent needs tomorrow or next week to pick up where we left off.
The agent is not my friend. I don't need it to remember my birthday or the nasty thing I said about React last week. I need it to document what anyone, agent or human, would need to know to get productive in a particular repo, with no previous knowledge of the project.
Good, concise developer and user documentation, plus a plan with checklists, solves every problem people seem to think "memory" will solve: it tells the agent what tech stack to use (we hashed it out in planning), what commands to run to build and test the app, and what static analysis tools are in use (which formalizes code style, etc. in a way a vague comment I made a month ago cannot), and it is cheap. Markdown files are the native tongue of agents. No MCP, no skills, no API needed. Just read the file. It works for any agent, any model, and any human just getting started with the project.
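For example, the skeleton can be as simple as the sketch below (the contents are invented, just to show the shape, not a prescription):

```markdown
# DEVELOPMENT.md (hypothetical example)

## Stack
- Python 3.12, FastAPI, Postgres (decided in planning; don't relitigate)

## Commands
- `make dev` to run locally, `make test` to run the suite
- `ruff` + `mypy` enforce style and types; fix warnings before committing

## Roadmap
- [x] Auth flow
- [ ] Billing webhooks (next up)

## Gotchas
- Tests expect a local Postgres on port 5432
```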
Basically, I think memory makes agents dumber and less useful. I want it to focus on the task at hand.
by xcf_seetan
4 subcomments
- It strikes me as funny how we want to achieve superintelligent AI but keep trying to anthropomorphize every aspect of it to make it more "human". IMHO, if we keep doing that we will create a human-like AI with all the errors and deficiencies humans have.
- Not something I've (yet) pursued, but I did wonder a few days back whether there's a good analogy between the context window and short-term memory, and between storage and long-term memory, and if so, whether an Anki-like algorithm might lead to better contexts by efficiently keeping relevant/difficult "memories" fresher for the AI via spaced repetition.
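If someone wanted to try it, the core could be an SM-2-style scheduler applied to memory chunks rather than flashcards; everything below (names, constants) is an illustrative assumption, not a real system:

```python
import time

# Hypothetical sketch: SM-2-style spaced repetition over memory chunks.
# A chunk that keeps proving useful gets a longer interval before it is
# "due" again; one that fails gets reset, so difficult/relevant memories
# stay fresher in the context.

class MemoryChunk:
    def __init__(self, text):
        self.text = text
        self.ease = 2.5            # ease factor, as in SM-2
        self.interval_days = 1.0   # days until the chunk is due for review
        self.due = time.time()

    def review(self, quality):
        """quality: 0-5 rating of how useful the chunk was this session."""
        if quality < 3:
            self.interval_days = 1.0   # failed: resurface it soon
        else:
            self.interval_days *= self.ease
            # Standard SM-2 ease update, clamped at 1.3.
            self.ease = max(1.3, self.ease + 0.1
                            - (5 - quality) * (0.08 + (5 - quality) * 0.02))
        self.due = time.time() + self.interval_days * 86400

def due_chunks(chunks):
    """Chunks past their due time are candidates to inject into the context."""
    now = time.time()
    return [c for c in chunks if c.due <= now]
```

Here, `due_chunks` would feed its results back into the prompt each session, and a session-end review call would play the role of Anki's self-grading.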
- I planned and supervised the build of an ambient recall system, where a 4b model looks at the last 3k or so of context and picks through the RAG database for high-ranking memories to inject, as well as mineable things to mark. Injections happen on about 1 in 5 turns on most technical topics, mostly data picked from prior design docs and data sheets. At session wrap-up the inference model goes back and rates all the memory injections in a frontmatter section, then looks at all the memory suggestions and commits those it finds memorable to the RAG database. Manual memorisation and RAG search are also available inline in the chat to both the user and the model. It also allows the main model to spawn little models as minions to work on repetitive simple tasks.
It seems like it might be useful, but I'm not sure yet.
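The per-turn loop and the wrap-up pass would look roughly like this (a hypothetical stand-in for the design described above, not the actual code; every name is a placeholder):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Memory:
    text: str
    score: float = 0.0              # relevance score from the RAG search
    rating: Optional[float] = None  # filled in by the main model at wrap-up

INJECT_THRESHOLD = 0.8  # assumed cutoff; ~1 in 5 turns clear it in practice

def ambient_recall_turn(recent_context: str,
                        search: Callable[[str], list],
                        extract_memorable: Callable[[str], list]):
    """Per turn: the small (4b) model scans the tail of the context, pulls
    high-ranking memories to inject, and marks mineable items for later."""
    window = recent_context[-3000:]          # "last 3k or so of context"
    injections = [m for m in search(window) if m.score >= INJECT_THRESHOLD]
    suggestions = extract_memorable(window)  # candidates to commit at wrap-up
    return injections, suggestions

def session_wrapup(injections, suggestions,
                   rate: Callable[[Memory], float],
                   is_memorable: Callable[[str], bool],
                   commit: Callable[[str], None]):
    """At wrap-up the main model rates each injection, then commits the
    suggestions it judges memorable to the RAG database."""
    for m in injections:
        m.rating = rate(m)
    for s in suggestions:
        if is_memorable(s):
            commit(s)
```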
- I haven't had much luck with memory implementations. I tried a few.
What I do now is preserve all my Claude Code conversations and set the context from there.
This allows me to curate memory, and it's been the best way so far.
by larrydakhissi
1 subcomment
- you just made Alzheimer's a feature lol, but seriously this is very interesting
by axeldunkel
1 subcomment
- I only use a decay function to see how "hot" a chunk is - not for forgetting old ones. What concerns me more are memory chunks with errors in them - they need to be corrected/removed by some other mechanism, not by decay (since they might get retrieved often).
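For concreteness, that kind of recency-weighted hotness might look like the sketch below; the half-life and names are assumptions, and note it only ranks chunks, it never deletes them:

```python
import math
import time

# Illustrative sketch: a decay-based "hotness" score for a memory chunk,
# used only for ranking retrieval candidates, never for forgetting.

HALF_LIFE_SECONDS = 7 * 86400  # assume relevance halves every week

def hotness(access_times, now=None):
    """Sum of exponentially decayed weights, one per past access.
    Recent and frequent accesses both push the score up."""
    now = now or time.time()
    decay = math.log(2) / HALF_LIFE_SECONDS
    return sum(math.exp(-decay * (now - t)) for t in access_times)

# Example: a chunk accessed an hour ago outranks one accessed last month.
recent = hotness([time.time() - 3600])
stale = hotness([time.time() - 30 * 86400])
assert recent > stale
```

As the comment notes, a frequently retrieved chunk with an error in it stays hot under this scheme, so correction has to come from a separate mechanism.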
by Kim_Bruning
0 subcomments
- Everyone and their pet dog is making longer term memory systems at the same time, and they all seem kind of meh. Not casting aspersions here, my own attempts all crash and burn too. And better than nothing is still better than nothing.
Thing is, this seems like it might be a Hard Problem of some sort. Everyone is trying, no one is making a clean breakthrough, and that feels like some sort of smell. Either the desired function isn't well understood, or there's something missing, or it's in some weird complexity class, or ... something. My spidey senses tingle.
I wonder if others have the same feeling?
by waterbuffaloai
0 subcomments
- I am also building a similar memory structure and decay mechanism for my local agent project, where I also use the Ebbinghaus forgetting curve.
One of the challenges I face is deciding effectively what to save to memory: should the model decide what is important, summarize it, and save it? How do you avoid redundancy and categorize memories correctly, so you get the right hits and can decide what to forget?
I would love to learn more about your approach and hear your thoughts on those points.
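For reference, the Ebbinghaus curve itself is the easy part; the save/categorize policy above is the hard bit. A minimal sketch of the curve (the stability constants are assumptions):

```python
import math
import time

# Minimal Ebbinghaus-style forgetting curve: R = exp(-t / S), where t is
# time since last reinforcement and S is a "stability" that grows with use.

def retention(last_seen, stability_days, now=None):
    """Retention in [0, 1]; stop retrieving (or prune) below some cutoff."""
    now = now or time.time()
    elapsed_days = (now - last_seen) / 86400
    return math.exp(-elapsed_days / stability_days)

def reinforce(stability_days, factor=1.5):
    """Each successful retrieval makes the memory decay more slowly."""
    return stability_days * factor
```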
- Is it the cumulative weighting based on the softmax output? Is it per layer?
by cyanydeez
1 subcomment
- On the other "biological memory" post in as many weeks, I pointed out that the decay rate shouldn't be based on a real clock but on the lifetime of its use within the coding session. Otherwise your memory fades even when there's been no process change (e.g., the coder goes on vacation). I'm not going to check whether that's the case here, but it seems like a naive first assumption, a failed conceptualization.
The other comment is that spatial memory is probably a better trigger for memory, so if you're not tracking where the coding session starts, the folders it visits, etc., then you're not really providing a good associative footpath for the assistant to retrieve what's important for any given project.
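The first point is simple to sketch: measure decay in ticks of session activity rather than wall-clock seconds, so nothing fades while the coder is away (names and constants below are illustrative):

```python
import math

# Sketch of event-based decay: the clock only advances when work happens
# (a tool use, a file visit, a turn), so idle time causes no forgetting.

class ActivityClock:
    """Advances only when the coding session actually does something."""
    def __init__(self):
        self.ticks = 0

    def record_event(self, n=1):  # call once per tool use / file visit / turn
        self.ticks += n

def decayed_weight(last_tick, clock, half_life_ticks=500):
    """Exponential decay measured in activity ticks, not seconds."""
    age = clock.ticks - last_tick
    return math.exp(-math.log(2) * age / half_life_ticks)
```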
by altmanaltman
5 subcomments
- I am sorry, but the whole "biological memory" thing seems like marketing fluff on top of basic cache mechanisms.
You said it cuts token usage by 84%, but isn't that typical of any chunked RAG system?
And why did you specifically choose to test against the LoCoMo dataset, when it has a lot of known issues and is very easy to cheat on?
by thisisfatih
0 subcomments