https://developer.chrome.com/docs/ai/prompt-api
I just checked the stats:
Model Name: v3Nano
Version: 2025.06.30.1229
Backend Type: GPU (highest quality)
Folder size: 4,072.13 MiB
Different use case but a similar approach. I expect that at some point this will become a native web feature, but not anytime soon, since the model download is many multiples of the size of the browser itself. Maybe at some point these APIs could use LLMs built into the OS, the way we do with graphics drivers.
Asking users to set up a local LLM themselves is usually too much friction, but I believe this could solve that problem.
Anyone know if this is somehow possible without going through an extension?
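For context, this is roughly what the API surface looks like in current Chrome builds (gated behind a flag or origin trial, so names and availability may change between releases). This is a sketch of feature detection plus a single prompt, not a definitive implementation:

```javascript
// Hedged sketch of Chrome's built-in Prompt API; assumes a recent
// Chrome build with the API enabled. Runs in an async context.
(async () => {
  if (!('LanguageModel' in self)) {
    console.log('Prompt API not exposed in this browser.');
    return;
  }
  // Reports whether the on-device model is usable or needs downloading.
  const availability = await LanguageModel.availability();
  if (availability === 'unavailable') return;

  // Creating a session may trigger the multi-GiB model download on first use.
  const session = await LanguageModel.create();
  const answer = await session.prompt('Summarize this page in one sentence.');
  console.log(answer);
  session.destroy(); // Free the on-device model resources.
})();
```

On a regular page (as opposed to an extension), the API currently requires enrolling in an origin trial or enabling a chrome://flags entry, so feature detection like the above is essential.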