- Discussed earlier this week: https://news.ycombinator.com/item?id=46567400
by Ronsenshi
1 subcomment
- For me this looks like a great way to build connections between books in order to create a recommendation engine - something better than what Goodreads & Co provide. Something actually useful.
The cost of indexing with a third-party API is extremely high, however. Might this work out with an open-source model and a cluster of Raspberry Pis for large-library indexing?
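A minimal sketch of the recommendation idea, assuming each book has already been embedded locally with an open-source model (the titles and 4-dimensional vectors below are toy stand-ins for real model output):

```python
import numpy as np

def top_matches(embeddings: np.ndarray, titles: list[str], query_idx: int, k: int = 2):
    """Return the k books most similar to the query book by cosine similarity."""
    vecs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = vecs @ vecs[query_idx]
    sims[query_idx] = -np.inf  # never recommend the query book itself
    order = np.argsort(sims)[::-1][:k]
    return [(titles[i], float(sims[i])) for i in order]

# Toy embeddings standing in for real model output.
titles = ["Book A", "Book B", "Book C"]
embeddings = np.array([
    [1.0, 0.0, 0.0, 0.1],
    [0.9, 0.1, 0.0, 0.1],   # close to Book A
    [0.0, 1.0, 0.9, 0.0],   # far from Book A
])
print(top_matches(embeddings, titles, query_idx=0))
```

The expensive part is producing the embeddings, not this lookup, which is why an open-source model run locally could change the economics.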
- I really liked the approach of getting new topics to research via embeddings, trails, and Claude Code, but how often will this give you anything beyond novelty?
- I've been using Claude Code for my research notes and had the same realization: it's less about perfecting prompts and more about building tools so it can surprise you. The moment I stopped treating it like a function and started treating it like a coworker who reads at 1,000 wpm, everything clicked.
by jszymborski
1 subcomment
- This is all interesting; however, I find myself most interested in how the topic tree is created. It seems super useful for lots of things. Can anyone point me to something similar with details?
EDIT: Whoops, I found more details at the very end of the article.
by rbbydotdev
0 subcomments
- I had a similar toy project: attempting to make custom day trips from guide books. I immediately ran into limitations naïvely chunking paragraphs into a RAG. For my next attempt, I'm going to try using an LLM to extract "entities" like holidays/places/history and store them in a graph DB, coupled with vectors and the original source text or index references (page + column).
Still experimental and way outside my expertise; I'd love to hear from anyone with ideas or experience with this kind of problem.
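The graph-plus-vectors idea above could be prototyped roughly like this. Everything here is made up for illustration (entity names, fields, the toy vectors); a real pipeline would call an LLM for the extraction step and a proper graph/vector store instead of in-memory dicts:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    kind: str                     # e.g. "place", "holiday", "history"
    embedding: list[float]        # vector from whatever embedding model you use
    sources: list[tuple[int, int]] = field(default_factory=list)  # (page, column)

class EntityGraph:
    """Minimal in-memory stand-in for a graph DB with attached vectors."""
    def __init__(self):
        self.entities: dict[str, Entity] = {}
        self.edges: dict[str, set[str]] = {}

    def add(self, entity: Entity):
        self.entities[entity.name] = entity
        self.edges.setdefault(entity.name, set())

    def link(self, a: str, b: str):
        """Undirected edge, e.g. 'mentioned in the same paragraph'."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def neighbours(self, name: str) -> set[str]:
        return self.edges.get(name, set())

g = EntityGraph()
g.add(Entity("Obon", "holiday", [0.1, 0.9], sources=[(12, 1)]))
g.add(Entity("Kyoto", "place", [0.2, 0.8], sources=[(12, 2), (40, 1)]))
g.link("Obon", "Kyoto")
print(g.neighbours("Obon"))
```

Keeping the (page, column) references on each entity is what lets you route back to the original guide-book text after a graph traversal, instead of relying on the chunked RAG copy.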
- I used AI to accelerate my reading of a book recently. This is an interesting use case, but it's the same as racing to the destination instead of enjoying the journey.
It kills the tone, pace, and expressions of the author. It is pretty much the same as having an assistant summarize the whole book for you, if that's what you want. It misses the entire experience delivered by the author.
- I did a similar thing with productivity books early last year, but never released it because it wasn't high enough quality. I keep meaning to get back to that project, but it had a much more rigid hypothesis in mind - getting this kind of classification is pretty difficult, and getting high value from it even more so.
- The mental model I had of this was actually at the paragraph or page level, rather than the word level the post demos. I think it'd be really interesting if, while reading a take on a concept in one book, you could immediately fan out and either read different ways of presenting the same information/argument, or counters to it.
by voidhorse
1 subcomment
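The paragraph-level fan-out could be prototyped over paragraph embeddings like this; the book titles, texts, and 3-dimensional vectors are stand-ins for real embedding output:

```python
import numpy as np

def fan_out(paragraphs, query_idx, k=2):
    """Given paragraph records (book, text, vector), return the k paragraphs
    most similar to the query, drawn only from *other* books."""
    q_book, _, q_vec = paragraphs[query_idx]
    q = np.asarray(q_vec) / np.linalg.norm(q_vec)
    scored = []
    for book, text, vec in paragraphs:
        if book == q_book:
            continue  # only fan out across books, not within the same one
        v = np.asarray(vec) / np.linalg.norm(vec)
        scored.append((float(q @ v), book, text))
    scored.sort(reverse=True)
    return scored[:k]

paragraphs = [
    ("Book A", "Habits compound over time.", [0.9, 0.1, 0.0]),
    ("Book A", "Start with tiny changes.",   [0.8, 0.2, 0.1]),
    ("Book B", "Small actions compound.",    [0.85, 0.15, 0.05]),
    ("Book C", "History of steam engines.",  [0.0, 0.1, 0.9]),
]
print(fan_out(paragraphs, query_idx=0))
```

Excluding the query's own book is the design choice that makes this a fan-out (alternative presentations of the idea) rather than a within-book search.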
- This was posted before and there were many good criticisms raised in the comments thread.
I'd just reiterate two general points of critique:
1. The point of establishing connections between texts is semantic, and terms can have vastly different meanings depending on the sphere of discourse in which they occur. Because of the way LLMs work, the really novel connections probably won't be found by an LLM, since the way they function is quite literally to uncover what isn't novel.
2. Part of the point of making these connections is the process that acts on the human being making them. Handing it all off to an LLM is no better than blindly trusting authority figures. If you want to use LLMs as generators of possible starting points or things to look at, verify, and research yourself, that seems totally fine.
- I really like the idea of the topic tree. That intuitively resonates.
by lloydatkinson
1 subcomment
- How can anyone even trust crap like this? It was only a few days ago that Claude and ChatGPT hallucinated a bunch of stuff from actual docs I sent them links to. When asked about it, they just apologised.
by kylehotchkiss
4 subcomments
- In several years, IMO the most interesting people are going to be the ones still actually reading paper books and not trying to shove everything into an LLM.
by mizuirorivi
0 subcomments