The brain persists across sessions: stop the robot, restart it, synaptic weights reload and it continues from where it left off. Decay happens naturally through R-STDP — synapses that don't contribute to reward weaken over time. No explicit forgetting mechanism needed.
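The "no explicit forgetting" idea can be sketched in a few lines. This is a generic single-synapse R-STDP update with an eligibility trace, not the project's actual code; the constants and the function name are illustrative:

```python
# Illustrative R-STDP sketch: coincident pre/post spikes build an
# eligibility trace, reward converts the trace into weight change,
# and a small passive decay makes unrewarded synapses fade.
LEARNING_RATE = 0.01
TRACE_DECAY = 0.9     # eligibility trace fades each step
WEIGHT_DECAY = 0.001  # natural forgetting, no explicit mechanism

def rstdp_step(weight, trace, pre_spike, post_spike, reward):
    trace = TRACE_DECAY * trace + (1.0 if pre_spike and post_spike else 0.0)
    weight += LEARNING_RATE * reward * trace
    weight *= 1.0 - WEIGHT_DECAY  # decay happens every step regardless
    return weight, trace
```

Persisting `weight` and `trace` to disk between runs is all "the brain persists across sessions" requires under this formulation.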
Currently running on a Unitree Go2 (MuJoCo) and a 100€ Freenove robot dog (Raspberry Pi 4, real hardware). Same architecture, different bodies.
github.com/MarcHesse/mhflocke
I've been playing around with doing that via a cron job that runs a "dream" sequence.
I really want to get them out of the main context ASAP and into skills, where they belong.
I've long held the belief that if you want to simulate human behaviour, you need human-like memory storage, because so much of our behaviour is shaped by how our memories work. Even something as stupid as walking between rooms and forgetting why you went there is a behaviour that would otherwise have to be simulated directly, but can be simulated indirectly by giving the memory of why an agent is moving from room to room a chance of disappearing.
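That "doorway effect" is almost trivial to model. A toy sketch, with all names (Agent, cross_doorway, forget_prob) purely illustrative:

```python
import random

class Agent:
    """Toy agent whose current goal can vanish on a room transition."""
    def __init__(self, goal, forget_prob=0.1):
        self.goal = goal
        self.forget_prob = forget_prob

    def cross_doorway(self):
        # Each transition risks dropping the goal, indirectly producing
        # the "why did I come in here?" behaviour without simulating it.
        if self.goal is not None and random.random() < self.forget_prob:
            self.goal = None
```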
Now, as for how useful this will be for something that isn't trying to directly simulate a human and is instead trying to be "superintelligent", I'm not entirely sure, but I'm excited that someone is exploring it.
https://ieeexplore.ieee.org/abstract/document/5952114 https://ieeexplore.ieee.org/abstract/document/5548405 https://ieeexplore.ieee.org/abstract/document/5953964
I never did get many citations for these, maybe I just wasn't very good at "marketing" my papers.
I've been working on a related problem from the other direction: Claude Code and Codex already persist full session transcripts, but there's no good way to search across them. So I built ccrider (https://github.com/neilberkman/ccrider). It indexes existing sessions into SQLite FTS5 and exposes an MCP server so agents can query their own conversation history without a separate memory layer. Basically treating it as a retrieval problem rather than a storage problem.
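The retrieval-not-storage idea is easy to demonstrate with stdlib sqlite3. The table and column names below are illustrative, not ccrider's actual schema:

```python
# Index transcript turns into a SQLite FTS5 table, then full-text
# search across all past sessions. Assumes a Python/SQLite build
# with the FTS5 extension enabled (the default on most platforms).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE turns USING fts5(session_id, role, content)")
conn.executemany(
    "INSERT INTO turns VALUES (?, ?, ?)",
    [
        ("s1", "user", "how do I rotate the API key"),
        ("s1", "assistant", "use the rotate-key subcommand"),
        ("s2", "user", "deploy failed on staging"),
    ],
)
# Query conversation history instead of maintaining a separate memory layer:
rows = conn.execute(
    "SELECT session_id, content FROM turns WHERE turns MATCH ?", ("rotate",)
).fetchall()
```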
The "biological" memory strength shouldn't just be a function of time, and even then, the agent's time should be measured against the agent's own lifetime, not the actual clock. Look up monotonic clocks: https://stackoverflow.com/questions/3523442/difference-betwe... If you want decay, it shouldn't be tied to the wall clock, but to the agent's work time.
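A minimal sketch of decay keyed to accumulated work time, using `time.monotonic()` so system suspends and clock adjustments don't age memories; the `WorkClock` and `strength` names are illustrative:

```python
import time

class WorkClock:
    """Accumulates time only while the agent is actively working."""
    def __init__(self):
        self.total = 0.0
        self._started = None

    def start(self):
        self._started = time.monotonic()

    def stop(self):
        self.total += time.monotonic() - self._started
        self._started = None

def strength(initial, age_worktime, half_life):
    # Exponential decay in work-time units, not wall-clock seconds.
    return initial * 0.5 ** (age_worktime / half_life)
```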
But memory is more about triggers than anything else, so you should absolutely have memory triggers based on location - something like a path hash. Wherever an agent is working, the things it remembers should be tightly bound to that location; only when a "compaction" happens should these memories become more and more generalized across locations.
The most prominent types of memory work like this: whether it's sports or GUIs, physical location triggers far more than conscious recall does. Focus on triggering recall based on project paths, filenames, and path components.
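A sketch of path-triggered recall under those assumptions: memories are keyed by a hash of the path where they formed, recall walks up the parent directories, and "compaction" re-keys a memory to its parent, generalizing its trigger. All names here are illustrative:

```python
import hashlib
from collections import defaultdict
from pathlib import PurePosixPath

def path_key(path):
    # Stable, fixed-size key for a location.
    return hashlib.sha256(str(path).encode()).hexdigest()[:16]

class LocationMemory:
    def __init__(self):
        self.store = defaultdict(list)

    def remember(self, path, note):
        self.store[path_key(path)].append((str(path), note))

    def recall(self, path):
        # Trigger: exact location first, then progressively broader parents.
        p = PurePosixPath(path)
        hits = []
        for loc in [p, *p.parents]:
            hits.extend(n for _, n in self.store.get(path_key(loc), []))
        return hits

    def compact(self, path):
        # Generalize: move this location's memories up to its parent.
        for src, note in self.store.pop(path_key(path), []):
            self.remember(PurePosixPath(src).parent, note)
```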
As this repo notes, "The secret to good memory isn't remembering more. It's knowing what to forget." But knowing what is likely to be important in the future implies a working model of the future and your place in it. It's a fully AGI-complete problem: "Given my current state and goals, what am I going to find important, conditioned on the likelihood of any particular future...". Anyone working with these agents knows they are hopelessly bad at modeling their own capabilities, much less projecting that forward.
You have some novel approaches here, and I've learned a lot from them! Your hypersphere physics approach is fascinating - it's different from the approach I took, but it accomplishes some tasks without an LLM. Your importance-based eviction system can significantly reduce the size of the ephemeral session state before the LLM processes it into persistent memory, and your half-life knowledge decay mechanism is more elegant than the temporal approach I took.
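For readers following along, a generic sketch (not Hippo's actual code) of how half-life decay and importance-based eviction compose; all names are illustrative:

```python
import heapq

def decayed_importance(importance, age, half_life):
    # Half-life decay: importance halves every `half_life` units of age.
    return importance * 0.5 ** (age / half_life)

def evict(items, keep):
    """Keep the `keep` items with the highest decayed importance.
    `items` is a list of (importance, age, half_life, payload) tuples."""
    scored = [(decayed_importance(i, a, h), payload)
              for i, a, h, payload in items]
    return [p for _, p in heapq.nlargest(keep, scored)]
```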
If I'm finally allowed to post a Show HN, I'll post a few details, but our projects mostly solve different things and are complementary. I can certainly use some things in Hippo to improve my system, and maybe there's something in mine that would interest you -- Memforge (https://github.com/salishforge/memforge) if you're interested.
We're building swarm-like agent memory: agents share memories across rooms and nodes. Reading Steiner + Time Leap Capsules (yeah, Steins;Gate easter eggs, lol).
Your consolidation and decay mechanics are close to what we want; we might integrate a similar approach.
Here's a post I wrote about how we might start to mimic these mechanisms:
https://n0tls.com/2026-03-14-musings.html
Would love to compare notes; I'm also looking at linguistic phenomena through an LLM lens:
https://n0tls.com/2026-03-19-more-musings.html
Hoping to wrap up some of the kaggle eval work and move back to researching more neuropsych.