Confirmed in this post: https://www.instagram.com/reel/DWzNnqwD2Lu
This really shows how ideas are worth more than the code itself nowadays. I haven't tried the project myself yet, but if the benchmark is correct, this looks like a major breakthrough. Even more so coming from someone who (AFAIK) is not technical.
This is amazing. Well done Milla & team!
Btw, I already love the memes around this: "Missed the chance to call this Resident Eval."
I see a silver lining here, though. I'll be implementing mempalace for a few small agents to get memory portability that's managed locally.
I think the benchmarker who ran independent tests in GitHub issue #39 summed it up best:
To be clear about what this all means for our own use case: we still think there's a real product here, just not the one the README is selling. What's genuinely useful is the combination of a one-command ChromaDB ingest pipeline for Claude Code, ChatGPT, and Slack exports; a working semantic search index over months or years of conversation history that is fully local, MIT-licensed, and requires no API key; and a standalone temporal knowledge graph module (knowledge_graph.py) that could be used independently of the rest of the palace machinery. We're planning to integrate it into our Sandcastle orchestrator as a claude_history_search MCP tool exactly along those lines.
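For anyone unfamiliar with the shape of that pipeline, it reduces to something like this stdlib-only toy: ingest conversation exports, index each message, rank by similarity to a query. A bag-of-words cosine index stands in for ChromaDB embeddings here, and the export format and function names are invented for illustration, not the project's actual API:

```python
import json
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9']+", text.lower())

def ingest(exports):
    """Build a toy index: one bag-of-words vector per conversation message."""
    index = []
    for export in exports:
        for msg in json.loads(export):
            index.append((msg["text"], Counter(tokenize(msg["text"]))))
    return index

def search(index, query, k=3):
    """Rank indexed messages by cosine similarity to the query."""
    q = Counter(tokenize(query))
    qn = math.sqrt(sum(v * v for v in q.values()))
    def score(vec):
        dot = sum(vec[t] * q[t] for t in q)
        vn = math.sqrt(sum(v * v for v in vec.values()))
        return dot / (qn * vn) if qn and vn else 0.0
    return sorted(index, key=lambda e: score(e[1]), reverse=True)[:k]

# Fake "export files" in an invented format, for demonstration only.
exports = [json.dumps([
    {"text": "We decided to use ChromaDB for the vector store"},
    {"text": "Lunch plans for Friday"},
])]
index = ingest(exports)
print(search(index, "which vector store did we pick", k=1)[0][0])
# -> We decided to use ChromaDB for the vector store
```

The real thing swaps the Counter vectors for embeddings in a local ChromaDB collection, but the ingest-then-query shape is the same, which is why it drops so cleanly into an MCP tool.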
same story with LoCoMo: the 100% score uses top-k=50, which literally exceeds the session count lol, plus reranking on top. an honest top-10 with no rerank gets you 88.9%
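to make the top-k point concrete, here's a toy sketch (invented numbers, not the actual LoCoMo setup): once k is at least the corpus size, recall@k is 100% no matter how bad the ranking is

```python
def recall_at_k(relevant, ranked, k):
    """Fraction of relevant items that appear in the top-k of a ranking."""
    top = set(ranked[:k])
    return sum(1 for r in relevant if r in top) / len(relevant)

sessions = list(range(40))   # toy corpus: 40 conversation sessions
relevant = {7}               # the one session that actually answers the query
worst = sessions[::-1]       # a maximally unlucky ranking: answer near the end

print(recall_at_k(relevant, worst, 50))  # k >= corpus size -> 1.0 regardless
print(recall_at_k(relevant, worst, 10))  # top-10 can actually fail -> 0.0
```

so a "100%" at k=50 over fewer than 50 sessions measures nothing about retrieval quality, which is why the top-10 number is the one worth quoting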
this is giving openclaw energy where you engineer your benchmark results to look perfect and then market it as some breakthrough. the underlying tech might be interesting but leading with "highest score ever published" when the methodology has these kinds of asterisks is not great
cool that milla jovovich is vibe coding tho i guess
That being said, I can't help but wonder whether stuff like this is better done with autoencoders. The implementation in dialect.py seems very narrative-oriented, probably not that good for things like coding.
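To sketch what I mean by the autoencoder angle: learn a compressed representation from the vectors themselves instead of hand-crafting a narrative structure. Here's a toy linear autoencoder in plain numpy (everything below, dimensions and data included, is invented for illustration; nothing is from dialect.py, and a linear autoencoder without biases is essentially PCA):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 fake 8-dim "memory" vectors that secretly live on a 3-dim subspace.
X = rng.normal(size=(200, 8))
X[:, 3:] = X[:, :3] @ rng.normal(size=(3, 5))

d, h = 8, 3                      # input dim, bottleneck dim
W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))
lr = 0.01

for _ in range(2000):
    Z = X @ W_enc                # encode to the 3-dim bottleneck
    err = Z @ W_dec - X          # reconstruction error
    # gradient steps on squared reconstruction error
    # (constant factors folded into lr)
    g_dec = (Z.T @ err) / len(X)
    g_enc = (X.T @ (err @ W_dec.T)) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

The bottleneck Z is the "memory" you'd actually store and search over; for code-heavy history you'd train it on code embeddings rather than prose, which is the part a narrative-oriented design can't easily do.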