by mritchie712
2 subcomments
- Cursor promises to do this[0] in the product, so, especially on HN, it'd be best to start with "why this is better than Cursor".
> favorite doc sites so I do not have to paste URLs into Cursor
This is especially confusing, because Cursor has a feature for docs you want to scrape regularly.
0 - https://cursor.com/docs/context/codebase-indexing
- This looks neat; we certainly need more ideas and solutions in this space. I work with large codebases daily, and the limits of agentic context are constantly evident.
I have some questions about how I would consume a tool like this one:
How does this fare with codebases that change very frequently? I presume background agents re-indexing changes must become a bottleneck at some point for large or very active teams.
If I'm working on a large set of changes (modifying lots of files, moving definitions around, etc.) and have deviated locally quite a bit from the most up-to-date index, will Nia be able to reconcile what I'm trying to do locally with the index, even though my local changes look quite different from upstream?
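To make the concern concrete, here is a minimal sketch of content-hash based incremental re-indexing, the simplest way I can imagine bounding that bottleneck. Everything here is hypothetical; I have no idea how Nia actually does this.

    # Hypothetical sketch (not Nia's implementation): re-index cost
    # scales with the number of changed files, not with repo size.
    import hashlib
    from pathlib import Path

    def file_digest(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def incremental_reindex(repo_root: Path, seen: dict, reindex_file) -> int:
        """`seen` maps relative path -> last digest; `reindex_file` rebuilds one file."""
        changed = 0
        for path in repo_root.rglob("*.py"):
            rel = str(path.relative_to(repo_root))
            digest = file_digest(path)
            if seen.get(rel) != digest:
                reindex_file(path)  # re-chunk/re-embed just this one file
                seen[rel] = digest
                changed += 1
        return changed

Even with something like this, the reconciliation question stands: a local diff that moves definitions around invalidates many files at once, so the interesting part is what happens between re-index passes.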
by alex-ross
1 subcomment
- This resonates. I'm building a React Native app and the biggest friction with AI coding tools is re-explaining context every time.
How does Nia handle project-specific patterns? Like if I always use a certain folder structure or naming convention, does it learn that?
- I've no idea what their architecture/implementation looks like, but I've built a similar tool for my own use and the improvements are dramatic to say the least.
Mine's a simple BM25 index for code keyword search (I use it alongside serena-mcp) and for some use cases the speeds and token efficiency are insane.
https://gitlab.com/rhobimd-oss/shebe#comparison-shebe-vs-alt...
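For anyone curious, the core of such an index is tiny. Here is a sketch using the rank_bm25 package, with deliberately naive tokenization; it is illustrative, not my actual tool.

    # Minimal BM25 keyword index over code files (pip install rank-bm25).
    import re
    from pathlib import Path
    from rank_bm25 import BM25Okapi

    def tokenize(text: str) -> list[str]:
        # Naive: lowercase identifier-ish tokens only.
        return re.findall(r"[A-Za-z_]\w+", text.lower())

    paths = list(Path("src").rglob("*.py"))
    bm25 = BM25Okapi([tokenize(p.read_text(errors="ignore")) for p in paths])

    query = tokenize("parse yaml config")
    for score, path in sorted(zip(bm25.get_scores(query), paths), reverse=True)[:5]:
        print(f"{score:6.2f}  {path}")

No embeddings and no index server; the agent gets back a handful of matching files instead of whole-directory dumps, which is where the token savings come from.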
- Looks cool. It's always a good approach to have the agent retrieve information well; writing docs by hand every time can always go wrong.
- > The calling agent then decides how to use those snippets in its own prompt.
To be reductionist, it seems the claimed product value is "better RAG for code."
The difficulties with RAG are at least:
1. Chunking: how large are chunks, and how are chunk boundaries determined?
2. Given the above quote, how many RAG results are put into the context? It seems that the API caller makes this decision, but how?
I'm curious about your approach and how you evaluated it.
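For reference on (1), a common baseline for code is to chunk at definition boundaries rather than fixed character windows. A sketch using Python's ast module; this is just one policy among many, and I don't know whether Nia does anything like it.

    # Chunk at top-level definition boundaries so a chunk never cuts a
    # function in half, with a fixed-window fallback for huge definitions.
    import ast

    def chunk_python(source: str, max_lines: int = 80) -> list[str]:
        lines = source.splitlines()
        chunks = []
        for node in ast.parse(source).body:
            body = lines[node.lineno - 1 : node.end_lineno]
            for i in range(0, len(body), max_lines):
                chunks.append("\n".join(body[i : i + max_lines]))
        return chunks

Point (2) is the harder one: the snippet count trades recall against the caller's context budget, and the API caller usually has no way to know the right number in advance.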
by chrisweekly
0 subcomments
- This looks interesting and worthwhile. I did a double-take when I read "when (a few months ago) I was still in high school in Kazakhstan".
- Congrats on the launch. The problem is definitely there. I wonder how you are planning to differentiate yourself from Cursor and the like. You mention you are complementary, but Cursor provides similar features, e.g. adding external doc context to a prompt. I understand you do better in your benchmark, but with their amount of funding they may be able to replicate and improve on it (unless you have a secret thing).
- In the "Create API Key" step of the MCP server setup ("One command to set up Nia MCP Server for your coding agent"), I can't create an API key: the form shows "Organization required to create API keys", and the Create button is greyed out and can't be pressed.
by bluerooibos
2 subcomments
- Can you explain why I would pay almost the full price of Cursor, ChatGPT, or Claude again - just for your context layer, when these companies are already working on context?
I don't see you justify this with an explanation of the ROI anywhere.
by RomanPushkin
1 subcomment
- A RAG layer like this was always on my list of things to try. I haven't coded it myself, and I'm super interested in whether it gives a real boost while working with Claude. Curious to hear from anyone who has already tried the service: what's your feedback? Did you feel you were getting real improvements?
by bluerooibos
0 subcomments
- Your landing page tells me a whole lot of nothing.
How does this work? How does it differ from other solutions? Why do I need this? What does the implementation look like if I added this to my codebase?
- Is the RAG database on your servers, or is it local? If it's not local, is there a local option?
- The context problem with coding agents is real. We've been coordinating multiple agents on builds - they often re-scan the same files or miss cross-file dependencies. Interested in how Nia handles this - knowledge graph or smarter caching?
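For what it's worth, the simplest form of that knowledge graph is just an import graph: map each module to the files that import it, and an agent can fetch the dependents of a changed file instead of re-scanning the repo. A toy sketch (no idea whether Nia works this way):

    # Toy cross-file dependency graph from Python imports.
    import ast
    from collections import defaultdict
    from pathlib import Path

    def import_graph(root: Path) -> dict:
        dependents = defaultdict(set)  # module name -> files importing it
        for path in root.rglob("*.py"):
            tree = ast.parse(path.read_text(errors="ignore"))
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    for alias in node.names:
                        dependents[alias.name].add(str(path))
                elif isinstance(node, ast.ImportFrom) and node.module:
                    dependents[node.module].add(str(path))
        return dependents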
by dhruv3006
1 subcomment
- So many coding tools; what makes you different?
- Very happy to see this since I am building in this domain. We need external and internal context though. I am aiming for always available context for current and related projects, reference projects, documentation, library usage, commands available (npm, python,...), tasks, past prompts, etc. all in one product. My product, nocodo (1), is built by coding agents, Claude Code (Sonnet only) and opencode (Grok Code Fast 1 and GLM 4.6).
I just made a video (2) on how I prompt with Claude Code, ask for research from related projects, build context with multiple documents, converge on a task document, share that with another coding agent, opencode (with Grok or GLM), and then review with Claude Code.
nocodo is itself a challenge for me: I do not write or review code line by line. I spend most of the time in this higher level context gathering, planning etc. All these techniques will be integrated and available inside nocodo. I do not use MCPs, and nocodo does not have MCPs.
I do not think plugging into existing coding agents works; that is not how I am building. I think building full-stack is the way, from prompt to deployed software. Consumers will step away from anything other than planning. The coding agent will become more of a planning tool. Everything else will slowly vanish.
Cheers to more folks building here!
1. https://github.com/brainless/nocodo
2. https://youtu.be/Hw4IIAvRTlY
by orliesaurus
2 subcomments
- Benchmarks?
by krisgenre
1 subcomment
- Is this similar to the indexing done by Jetbrains IDEs?
by jacobgorm
1 subcomment
- SOTA on internal benchmark?
by kenforthewin
1 subcomment
- Congrats. From my experience, Augment (https://augmentcode.com) is best in class for AI code context. How does this compare?
- How does it compare to Serena MCP? :)
https://github.com/oraios/serena
- Absolutely insane that we celebrated coding agents getting rid of RAG, only for the next innovation to be RAG.