by deepdarkforest
0 subcomments
- Interesting. It's just an agent loop with access to Python exec and web search as standard, but with ~150 premade, curated tools like analyze_circular_dichroism_spectra, each with very specific params that just execute a hardcoded Python function, plus easy-to-load databases that conform to the tools' standards.
The argument is that if you just ask Claude Code to do niche biomed tasks, it won't have the knowledge to do them that way by searching PubMed and doing RAG on the fly, which is fair given the current generation of LLMs. It's an interesting approach, and they show some generalization in the paper (with well-known, tidy datasets), but real-life data is messier. The approach here (correct me if I'm wrong) is to identify the correct tool for a task, use the generic Python exec tool to shape the data into the acceptable format if needed, try the tool, and go again.
It would be useful to use the tools just as guidance to inform a generic code agent, imo, but executing the "verified" hardcoded tools narrows the error scope: as long as you can check that your data is shaped correctly, the analysis will be correct. Not sure how much of an advantage this is in the long term for working with proprietary datasets, but it's an interesting direction.
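The "pick a verified tool, reshape with generic exec, retry" loop described in that comment could look roughly like this. This is a minimal, hypothetical sketch: the tool, the reshape step, and all names are stand-ins I made up, not the system's actual API.

```python
# Hypothetical sketch of the loop: pick a curated tool, try it, and on a
# schema mismatch use the generic python-exec step to reshape the data,
# then go again. All names are illustrative, not the real system.

def curated_tool(data):
    """Stand-in for a hardcoded, 'verified' analysis function with strict params."""
    if not isinstance(data, dict) or "wavelengths" not in data:
        raise ValueError("expected a dict with a 'wavelengths' key")
    return sum(data["wavelengths"]) / len(data["wavelengths"])

def reshape(data):
    """Stand-in for LLM-written code executed via the generic python-exec tool."""
    return {"wavelengths": list(data)}  # coerce raw input into the tool's schema

def run_task(data, tool, max_attempts=3):
    for _ in range(max_attempts):
        try:
            return tool(data)        # the hardcoded, "verified" function
        except ValueError:
            data = reshape(data)     # shape the data, then try the tool again
    raise RuntimeError("could not match the tool's expected format")

print(run_task([200, 220, 240], curated_tool))  # → 220.0
```

The point of the structure is the one the comment makes: the error scope narrows to "is the data shaped correctly?", because the analysis function itself is fixed.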
- This is nice; it opens up a lot of possibilities for AI use in scientific research.
There is also the possibility of building intelligent workspaces that could prove useful in aiding scientific research:
https://news.ycombinator.com/item?id=44509078
by joelthelion
1 subcomment
- This is really cool, but I think the big question is whether it works and whether it's useful to a professional.
Is there anyone in the field who could comment on this?
- I'm sure they've thought of this, but I'm curious how it fared on evaluations for supporting biological threats, i.e., elevating threat actors' capabilities with respect to making biological weapons.
I'm personally sceptical that LLMs can currently do this (and it's built on Claude, which does test for this), but it's still interesting to see.
by freedomben
1 subcomment
- Awesome! This is the type of stuff I'm most excited about with AI: improvements to medical research and capabilities. AI can be awesome at identifying patterns in data that humans can't, and there have to be troves of data out there full of patterns that we aren't catching.
Of course there's also the possibility of engineering new drugs/treatments and things, which is also super exciting.
by teenvan_1995
1 subcomment
- I wonder if giving an agent 150+ tools is really a good idea considering context limitations. I need to check whether this works IRL.
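A back-of-the-envelope version of that concern, using assumed numbers (the ~300 tokens per JSON tool schema and the 200k window are guesses, not figures from the paper):

```python
# Rough, assumption-laden estimate of context consumed by tool definitions
# if all 150 schemas are loaded into the prompt at once.
TOKENS_PER_SCHEMA = 300    # guessed average size of one JSON tool schema
NUM_TOOLS = 150
CONTEXT_WINDOW = 200_000   # e.g. a Claude-class context window

used = TOKENS_PER_SCHEMA * NUM_TOOLS
print(used, f"{used / CONTEXT_WINDOW:.1%}")  # 45000 tokens, 22.5% of the window
```

Under these assumptions a flat tool list eats a large chunk of the window before any data arrives, which is why systems like this often retrieve or group tools rather than exposing all of them at once.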
- Anyone have a spare invite?
- This is great, I've been on the waitlist for their website for a while and am now excited to be able to try it out!
- Very interesting work!
If biomedical research and paper analysis is of interest to you, I've been working for a while on a set of open source projects that enable RAG over medical literature.
PaperAI: https://github.com/neuml/paperai
PaperETL: https://github.com/neuml/paperetl
There is also this tool that annotates papers inline.
AnnotateAI: https://github.com/neuml/annotateai
- Very cool. Passed it on to my friend who is working in a CRISPR lab.
by SalmoShalazar
7 subcomments
- Not to take away from this or its usefulness (not my intent), but it is wild to me how many pieces of software of this type are being developed. We’re seeing endless waves of specialized wrappers around LLM API calls. There’s very little innovation happening beyond specializing around particular niches and invoking LLMs in slightly different ways with carefully directed context and prompts.