- That is very, very interesting. I've been hoping to have an assistant in the workshop (hands-free!) that I could talk to and have it help me with simple tasks: timers, calculating, digging up notes, etc. — basically, what the phone assistants were supposed to be, but aren't.
"You will have to unlock your iPhone first" is kind of a deal-breaker when you are in the middle of mixing polyurethane resin and have gloves and a mask on.
More and more I find that we have the technology, but the supposedly "tech" companies are the gatekeepers, preventing us from using these advances and keeping us years behind the state of the art.
I'll be trying this out on my MacBook; it looks very promising!
by logicallee
1 subcomment
- It might interest people to know you can also easily fine-tune the text portion of this specific model (E2B) to behave however you want! I fine-tuned it to talk like a pirate, but you can get it to do anything you have (or can generate) training data for. (This wouldn't carry over to the text-to-speech portion, though.) So you can easily train it to act a certain way or give certain types of responses.
Video: https://www.youtube.com/live/WuCxWJhrkIM
Generated writeup: https://taonexus.com/publicfiles/apr2026/pirate-gemma-journa...
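For anyone curious what "training data you have (or can generate)" can look like in practice, here is a minimal stdlib-only sketch that writes chat-style examples to a JSONL file, a common input format for fine-tuning toolkits. The example rows and filename are hypothetical; in practice you would generate hundreds of pairs, e.g. by rewriting ordinary answers in the target style.

```python
import json
from pathlib import Path

# Hypothetical style-transfer examples; generate many more for a real run.
examples = [
    {"messages": [
        {"role": "user", "content": "What's the weather like?"},
        {"role": "assistant", "content": "Arr, the skies be clear, matey!"},
    ]},
    {"messages": [
        {"role": "user", "content": "Set a timer for ten minutes."},
        {"role": "assistant", "content": "Aye, ten minutes on the hourglass!"},
    ]},
]

def write_jsonl(path: Path, rows: list[dict]) -> int:
    """Write one JSON object per line; return the number of rows written."""
    with path.open("w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")
    return len(rows)
```

The exact schema (here, an OpenAI-style `messages` list) depends on the fine-tuning library you feed it to, so check its dataset docs before committing to a format.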
- Solid work and great showcase, I've done a bunch of stuff with Kokoro and the latency is incredible. So crazy how badly Apple dropped the ball... feels like your demo should be a Siri demo (I mean that in the most complimentary way possible).
- This is so cool. I'm always telling people how the advances in SOTA hosted AI are also happening in the local model space: the SOTA hosted models of 6-12 months ago are what we now see running locally on average hardware. This is such an amazing way to actually demo it.
- I am making something similar, and have also been using Kokoro for TTS. Very cool project!
Gemma 4 is kinda too heavyweight even with E2B. I am sticking with Qwen 0.8B at the moment.
- I have been looking forward to building something like this with open models: a voice assistant I can talk to while driving, since I have a long commute. I do use ChatGPT voice mode and it works great for querying information or having discussions. But I want it to do tasks like browsing the web, acting as a social media manager for my business, etc.
by myultidevhq
1 subcomment
- This is really impressive for running locally on an M3 Pro. The latency looks surprisingly good for real-time audio and video input.
Curious about one thing though, how does it handle switching between languages? I work with both Greek and English daily and local models usually struggle with that.
Great work, bookmarking this.
by crsAbtEvrthng
2 subcomments
- If I run this without internet connection it says "loading..." at the bottom of the localhost site and won't work.
If I run this with internet connected it works flawlessly. Even if I disconnect my internet afterwards it still goes on working fine.
Why does there have to be an internet connection at the moment I open the localhost site, when all of this should be working purely on-device?
Despite this, I am really impressed that it actually works this fast with video input on my M4 Pro 48 GB.
by noodlebreak
0 subcomments
- I have to try it out on my idle laptops. I've been meaning to run some models on them for low-cost tasks that need AI, like sorting and filtering photos from the hundreds of thousands I have amassed over the years, and applying general size-reduction compression to the filtered ones.
Btw if anyone has already created such a pipeline/workflow using such models, please lmk!
- I've been trying to do this, but I can't get voice recognition to work fast enough (meaning live) with Gemma E2B, on an M1 Max (64GB), a 5060 Ti (16GB), or a Snapdragon 8 Gen 2.
Any pointers?
by rubicon33
1 subcomment
- Is there anything unique here happening for the video aspect or is it just taking snapshots over and over?
I’ve been looking for a good video summarizing / understanding model!
- Real-time AI sounds like the future
- Can someone quickly vibe-code a native macOS app for this, so it doesn't require running terminal commands and hunting for that browser tab? (: (also for iOS, pls)
- Cool work, buddy :)
by jareklupinski
0 subcomments
- just make it say "Uh...", "umm...", or "hmmm..." once or twice halfway between processing and finishing :D
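The filler trick above is easy to bolt on with a timeout: if the model hasn't answered within a short budget, speak a filler first. A minimal asyncio sketch, where `generate` and `speak` are hypothetical hooks into your LLM and TTS:

```python
import asyncio

async def respond_with_filler(generate, speak, filler="Hmm...", delay=0.3):
    """Speak `filler` if the model takes longer than `delay` seconds,
    then speak (and return) the real answer. `generate` and `speak`
    are stand-ins for your LLM and TTS calls."""
    task = asyncio.create_task(generate())
    try:
        # shield() keeps the generation task alive if the wait times out.
        answer = await asyncio.wait_for(asyncio.shield(task), timeout=delay)
    except asyncio.TimeoutError:
        await speak(filler)  # buy time while the model is still thinking
        answer = await task
    await speak(answer)
    return answer
```

Fast answers skip the filler entirely, so the assistant only "umm"s when it actually needs the time.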
- Amazing, love your work!