Show HN: Sparrow-1 – Audio-native model for human-level turn-taking without ASR
115 points by code_brian
by cuuupid
1 subcomment
The first time I met Tavus, their engineers (incl Brian!) were perfectly willing to sit down and build their own better Infiniband to get more juice out of H100s. There is pretty much nobody working on latency and realtime at the level they are. Sparrow-1 would be a defining achievement for most startups but will just be one of dozens for Tavus :)
by ljoshua
1 subcomment
Hey @code_brian, would Tavus make the conversational audio model available outside of the PALs and video models? Seems like this could be a great use case for voice-only agents as well.
by pugio
0 subcomments
It sounds really cool, but I don't see any way of trying the model directly. I don't actually want a "Persona" or "Replica" - I just want to use the sparrow-one model. Is there any way to just make API calls to that model directly?
by randyburden
1 subcomment
Awesome. We've been using Sparrow-0 in our platform since launch, and I'm excited to move to Sparrow-1 over the next few days. Our training and interview pre-screening products rely heavily on Tavus's AI avatars, and this upgrade (based on the video in your blog post) looks like it addresses some real pain points we've run into. Really nice work.
by nubg
1 subcomment
Btw, while I think this is cool and useful for real-time voice interfaces for the general populace, I wonder if, for professional users (e.g. a dev coding by dictating all day), simple push-to-talk isn't always going to be superior: you can make long pauses while you think about something, which would creep out a human, but the AI would wait patiently for your push-to-talk.
by dfajgljsldkjag
2 subcomments
I am always skeptical of benchmarks that show perfect scores, especially when they come from the company selling the product. It feels like everyone claims to have solved conversational timing these days. I guess we will see if it is actually any good.
> Non-verbal cues are invisible to text: Transcription-based models discard sighs, throat-clearing, hesitation sounds, and other non-verbal vocalizations that carry critical conversational-flow information. Sparrow-1 hears what ASR ignores.
Could Sparrow instead be used to produce high-quality transcriptions that incorporate non-verbal cues?
Or even: use Sparrow AND another existing transcription/ASR system to augment the transcription with non-verbal cues?
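The second idea is essentially a timestamp merge of two event streams. A minimal sketch of what that could look like, assuming you already have word-level timestamps from an ASR system and a separate list of detected non-verbal cues (the function name, data, and cue labels here are all hypothetical; nothing in the post says Sparrow-1 exposes such output):

```python
# Hypothetical sketch: interleave an ASR word stream with separately
# detected non-verbal cue events by start time, producing a single
# cue-annotated transcript. All data below is illustrative.

def annotate_transcript(words, cues):
    """Merge ASR tokens and non-verbal cues into one annotated string.

    words: list of (start_seconds, text) from any ASR system
    cues:  list of (start_seconds, label), e.g. "sigh", "hesitation"
    """
    events = [(t, text) for t, text in words]
    events += [(t, f"[{label}]") for t, label in cues]
    events.sort(key=lambda e: e[0])  # stable sort keeps order on tied times
    return " ".join(text for _, text in events)

words = [(0.2, "I"), (0.4, "was"), (1.9, "thinking"), (2.3, "maybe")]
cues = [(0.9, "sigh"), (1.2, "hesitation")]
print(annotate_transcript(words, cues))
# → I was [sigh] [hesitation] thinking maybe
```

The real work, of course, is in the cue detector itself; the merge step is trivial once both streams carry timestamps.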
by nubg
1 subcomment
Any examples available? Sounds amazing.
by orliesaurus
1 subcomment
Literally no way to sign up to try. I put in my email and password and it put me on some wait list, despite the video saying I could try the model today. That's what makes me mad about these kinds of releases: the marketing and the product don't talk to each other.
by sourcetms
1 subcomment
How do I try the demo for Sparrow-1?
What is pricing like?
by mentalgear
2 subcomments
Metric | Sparrow-1
Precision | 100%
Recall | 100%
Common ...
by ttul
5 subcomments
I tried talking to Claude today. What a nightmare. It constantly interrupts you. I don’t mind if Claude wants to spend ten seconds thinking about its reply, but at least let ME finish my thought. Without decent turn-taking, the AI seems impolite and it’s just an icky experience. I hope tech like this gets widely distributed soon because there are so many situations in which I would love to talk with a model. If only it worked.
by vpribish
1 subcomment
What is "ASR" - automatic speech recognition?
by krautburglar
1 subcomment
Such things were already doing a good-enough job scamming the elderly, even with the silence-based delays.