Then for the application-specific documentation, I understand you'd want to share it, as it stays the same for all agents touching the same codebase. But that's easily solved by putting it in DESIGN.md or whatever and appending "Remember to check against DESIGN.md before changing the architecture" or similar.
This came out of necessity: uncontrolled use of AI assistants was significantly degrading code quality. The goal is to enforce the same development workflow across the team. It's an internal tool, but if anyone is interested, I can create a public repo from it.
```
.ai/
├── AGENTS.md
├── rules/
├── skills/
└── settings.json   # MCP servers, permissions
```
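For illustration, settings.json presumably holds the MCP server definitions and permission rules; the schema below is my guess from the post, not lnai's documented format:

```jsonc
// Hypothetical schema, not lnai's documented format.
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  },
  "permissions": {
    "allow": ["Bash(git:*)"],
    "ask": ["Bash(rm:*)"]
  }
}
```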
Run `lnai sync` and it exports to native formats for 7 tools: Claude Code, Cursor, GitHub Copilot, Gemini CLI, OpenCode, Windsurf, and Codex. The interesting part is it's not just copying files. Each tool has quirks:
- Cursor wants `.mdc` files with `globs` arrays in frontmatter
- Gemini reads rules at the directory level, so rules get grouped
- Permissions like `Bash(git:*)` become `Shell(git)` for Cursor (a rough sketch of this mapping follows the list)
- Some tools don't support certain features (e.g., Cursor has no "ask" permission level); LNAI warns but doesn't fail
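To make the permission mapping concrete, here's roughly what that translation could look like; the function name and regex are mine, not lnai's actual code:

```typescript
// Hypothetical sketch of mapping a Claude-style permission such as
// "Bash(git:*)" to a Cursor-style "Shell(git)". Not lnai's actual code.
function toCursorPermission(perm: string): string | null {
  const match = /^Bash\(([^:)]+)(?::\*)?\)$/.exec(perm);
  if (!match) return null;     // unmappable: warn, don't fail
  return `Shell(${match[1]})`; // the ":*" scope has no Cursor equivalent, so it's dropped
}

console.log(toCursorPermission("Bash(git:*)")); // "Shell(git)"
```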
Where possible, it uses symlinks. So `.claude/CLAUDE.md` → `../.ai/AGENTS.md`. Edit the source, all tools see the change immediately without re-syncing.
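In Node terms that's roughly the following; this is my sketch (with a copy fallback for environments where symlinks fail, e.g. restricted Windows setups), not lnai's actual code:

```typescript
import { copyFileSync, mkdirSync, rmSync, symlinkSync } from "node:fs";
import { dirname, relative } from "node:path";

// Hypothetical sketch: link .claude/CLAUDE.md -> ../.ai/AGENTS.md,
// falling back to a plain copy where symlinks aren't available.
function linkOrCopy(source: string, target: string): void {
  mkdirSync(dirname(target), { recursive: true });
  rmSync(target, { force: true });
  try {
    symlinkSync(relative(dirname(target), source), target);
  } catch {
    copyFileSync(source, target); // e.g. Windows without symlink rights
  }
}

linkOrCopy(".ai/AGENTS.md", ".claude/CLAUDE.md");
```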
Usage:
```
npm install -g lnai

lnai init       # Creates .ai/ directory
lnai validate   # Checks for config errors
lnai sync       # Exports to all enabled tools
```
It's MIT licensed. The code is TypeScript with a plugin architecture: each tool is a plugin that implements import/export/validate.

GitHub: https://github.com/KrystianJonca/lnai
Docs: https://lnai.sh
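From that description, the plugin contract presumably looks something like the interfaces below; the names and shapes are my inference from the post, not the repo's actual types:

```typescript
// Hypothetical plugin contract inferred from the post; the real
// interfaces in the lnai repo may differ.
interface AiConfig {
  agentsMd: string;                         // contents of .ai/AGENTS.md
  rules: { path: string; body: string }[];  // .ai/rules/*
  permissions: string[];                    // e.g. "Bash(git:*)"
}

interface ToolPlugin {
  name: string;                             // e.g. "cursor", "gemini-cli"
  // Read a tool's native config back into the shared .ai/ model.
  import(projectRoot: string): Promise<AiConfig>;
  // Write the shared .ai/ model out in the tool's native format.
  export(config: AiConfig, projectRoot: string): Promise<void>;
  // Report unsupported features as warnings rather than failing.
  validate(config: AiConfig): string[];
}
```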
Would appreciate feedback, especially from anyone else dealing with this config hell problem.
I ran into this building a spec/skill sync system [1] - the "sync once" model breaks down when you need to track whether downstream consumers are aware of upstream changes.
[1] https://github.com/anupamchugh/shadowbook
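One common way to handle that (my sketch, not shadowbook's or lnai's actual implementation) is to record a content hash of the upstream file when a consumer syncs, so a later run can tell which consumers are stale:

```typescript
import { createHash } from "node:crypto";
import { existsSync, readFileSync } from "node:fs";

function sha256(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// lastSynced maps consumer path -> upstream hash recorded at last sync
// (persisted in a lockfile or similar; hypothetical scheme).
function isStale(upstream: string, consumer: string, lastSynced: Map<string, string>): boolean {
  if (!existsSync(consumer)) return true;
  return lastSynced.get(consumer) !== sha256(upstream);
}
```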