33k views and 80+ stars in 24 hours — thank you. Your feedback drove this:
## What's New
**Attention History Tracking**

Every turn now logs to `~/.claude/attention_history.jsonl`. Query your trajectory:
```bash
python3 ~/.claude/scripts/history.py --since 2h
python3 ~/.claude/scripts/history.py --file ppe --transitions
```

Sample output:

```
[12:22:35] Instance A | Turn 26
Query: what divergence dynamics?
HOT: divergent.md, t3-telos.md, cvmp-transformer.md
WARM: pipeline.md, orin.md (+3 more)
⬆ Promoted to HOT: divergent.md
```
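Because the history is plain JSONL, you can also query it with a few lines of Python. This is a minimal sketch, assuming each line carries an ISO-8601 `ts` timestamp field (a hypothetical schema; check what the router actually writes):

```python
import json
from datetime import datetime, timedelta, timezone

def recent_entries(path, since_hours=2):
    """Yield attention-history entries newer than the cutoff.

    Assumes each JSONL line has an ISO-8601 "ts" field -- a
    hypothetical schema, adjust to the real log format.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(hours=since_hours)
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            if datetime.fromisoformat(entry["ts"]) >= cutoff:
                yield entry
```

The same filter-by-field pattern extends to `--file`-style queries: match on whatever key holds the activated file list.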
**Fractal Documentation**

Nested paths now activate correctly. `modules/t3-telos/trajectories/convergent.md` triggers parent co-activation.
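Parent co-activation boils down to walking a doc's ancestor directories. A toy sketch of that walk (an illustrative rule, not necessarily the exact one claude-cognitive applies):

```python
from pathlib import PurePosixPath

def parent_chain(doc):
    """Ancestor directories of a nested doc, nearest first -- the set
    a router could co-activate alongside the doc itself."""
    return [str(p) for p in PurePosixPath(doc).parents if str(p) != "."]
```

For example, `parent_chain("modules/t3-telos/trajectories/convergent.md")` yields `["modules/t3-telos/trajectories", "modules/t3-telos", "modules"]`, so heat on the leaf doc can propagate to each enclosing module.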
**Update**

```bash
cd ~/.claude-cognitive && git pull
cp scripts/history.py ~/.claude/scripts/
```
The problem: Claude Code is stateless. Every new instance rediscovers your architecture from scratch, hallucinates integrations that don't exist, repeats debugging you already tried, and burns tokens re-reading unchanged files.
At 1M+ lines of Python (3,400 modules across a distributed system), this was killing my productivity.
The solution is two systems:
1. Context Router – Attention-based file injection. Files get HOT/WARM/COLD scores based on recency and keyword activation. HOT files inject in full, WARM files inject headers only, COLD files are evicted. Scores decay over turns, and related files co-activate. Result: 64-95% token reduction.
2. Pool Coordinator – Multi-instance state sharing. Running 8 concurrent Claude Code instances, they now share completions and blockers. No duplicate debugging, no stepping on each other.
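The core of the router is just decay plus activation. A toy sketch of the scoring loop, with hypothetical thresholds and decay factor (the real values and co-activation logic live in the repo):

```python
HOT_T, WARM_T = 0.6, 0.3   # hypothetical tier thresholds
DECAY = 0.8                # hypothetical per-turn decay factor

class AttentionRouter:
    """Toy HOT/WARM/COLD scorer: recency decay plus keyword activation."""

    def __init__(self):
        self.scores = {}

    def turn(self, activated):
        # every tracked file's score decays each turn...
        for f in self.scores:
            self.scores[f] *= DECAY
        # ...and files whose keywords matched this turn jump to full heat
        for f in activated:
            self.scores[f] = 1.0

    def tier(self, f):
        s = self.scores.get(f, 0.0)
        return "HOT" if s >= HOT_T else "WARM" if s >= WARM_T else "COLD"
```

With these numbers a file stays HOT for a couple of idle turns, demotes to WARM (headers only), and eventually falls COLD and evicts; mentioning it again restores full injection instantly.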
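Sharing completions and blockers between instances can be as simple as an append-only shared log. A minimal sketch assuming a single JSONL pool file (a hypothetical format; the real coordinator's layout and locking strategy may differ):

```python
import json

def post_event(pool_log, instance, kind, detail):
    """Append a completion or blocker event to the shared pool log."""
    with open(pool_log, "a") as f:
        f.write(json.dumps({"instance": instance, "kind": kind,
                            "detail": detail}) + "\n")

def open_blockers(pool_log):
    """Blockers every instance should check before starting new work."""
    with open(pool_log) as f:
        events = [json.loads(line) for line in f]
    return [e for e in events if e["kind"] == "blocker"]
```

An instance posts `post_event(log, "A", "completion", "auth refactor done")` when it finishes a task, and others consult `open_blockers(log)` before picking up work, which is what prevents duplicate debugging.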
Results after months of daily use:

- New instances productive on first message
- Zero hallucinated imports
- Token usage down 70-80% on average
- Works across multi-day sessions
Open source (MIT). Works with Claude Code today via hooks.
GitHub: https://github.com/GMaN1911/claude-cognitive
Happy to answer questions about the architecture or implementation details.