- https://github.com/ggml-org/llama.vscode
I'm sure there are others if you're interested in switching to something else.
$10 for GitHub vs $20 for Cursor. [0]
And finally...
Extinguish. [0]
I don't see any reason in the text of the announcement to use Copilot specifically anymore.
So I left Copilot two weeks ago and tried local qwen-coder. I started small but ended up with the largest model that fits a 12 GB Nvidia laptop GPU. It made my laptop sound like a jet, so no local for now, not this year or even the next, until the bubble bursts. At some point, though, I'll need to pull models down from the clouds earlier than others will. Still, the local autocompletion quality was fine even on 12 GB.
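For anyone curious what that local setup roughly looks like: a minimal sketch, assuming a quantized Qwen coder GGUF is already being served by something like llama.cpp's llama-server on localhost:8080 with an OpenAI-compatible completions endpoint. The model file name, flags, port, and endpoint path here are illustrative assumptions, not details from the comment above.

```python
# Minimal sketch: ask a locally served code model for a completion.
# Assumes llama.cpp's llama-server (or any OpenAI-compatible local server) is
# already running with a quantized Qwen coder GGUF loaded, e.g.:
#   llama-server -m qwen2.5-coder-7b-q4_k_m.gguf -ngl 99 --port 8080
# (model file name and flags are illustrative, not exact).
import requests

def complete(prompt: str, max_tokens: int = 64) -> str:
    """Send a plain completion request to the local server and return the text."""
    resp = requests.post(
        "http://localhost:8080/v1/completions",  # assumed OpenAI-compatible endpoint
        json={
            "model": "qwen2.5-coder",   # ignored by some local servers, required by others
            "prompt": prompt,
            "max_tokens": max_tokens,
            "temperature": 0.2,         # low temperature for predictable autocomplete
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

if __name__ == "__main__":
    print(complete("def fibonacci(n):\n    "))
```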
Currently I've ended up on VS Code + Continue + Mistral, and it feels fine. I like line-by-line autocomplete, pressing Tab several times per line: better quality and more control.
Also, I only use clouds that also provide their models to run locally, so I can pull them down when the time comes.
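The portability argument is roughly this: if the client only speaks an OpenAI-style chat completions API, moving from a hosted provider to a locally pulled copy of a model is mostly a base-URL and key change. A hedged sketch; the URLs and model names below are assumptions for illustration, not my actual config.

```python
# Sketch of the "pull it local when the time comes" idea: the same client code
# talks to a hosted endpoint today and a local server tomorrow, just by
# swapping the base URL and key. URLs and model names are illustrative.
import os
import requests

HOSTED = {
    "base_url": "https://api.mistral.ai/v1",          # hosted provider (assumed OpenAI-compatible)
    "api_key": os.environ.get("MISTRAL_API_KEY", ""),
    "model": "codestral-latest",                       # illustrative model name
}
LOCAL = {
    "base_url": "http://localhost:8080/v1",            # e.g. a local server in OpenAI-compatible mode
    "api_key": "not-needed-locally",
    "model": "local-coder",                            # whatever model the local server has loaded
}

def chat(cfg: dict, user_message: str) -> str:
    """Send one chat-completion request to whichever backend cfg points at."""
    resp = requests.post(
        f"{cfg['base_url']}/chat/completions",
        headers={"Authorization": f"Bearer {cfg['api_key']}"},
        json={
            "model": cfg["model"],
            "messages": [{"role": "user", "content": user_message}],
            "max_tokens": 128,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    backend = LOCAL if os.environ.get("USE_LOCAL") else HOSTED
    print(chat(backend, "Write a one-line Python list comprehension that squares 1..10."))
```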
I don't have time to try other models and extensions; there are so many.