We flagged these issues to the authors before submission. They acknowledged them, but chose not to fix them. The paper was later accepted and widely promoted by Google, reaching tens of millions of views.
We’re speaking up now because once a misleading narrative spreads, it becomes much harder to correct. We’ve written a public comment on OpenReview (https://openreview.net/forum?id=tO3ASKZlok).
We would greatly appreciate your attention and help in sharing it."
I don’t think they’re going to downsize, though; I think the big players will just use the freed-up memory for more workflows or larger models, because the big players want to scale up. It’s a cat-and-mouse game for the best models.
The demand for memory isn't going to go down; we'll just be able to do more with the same amount of memory.
This part sounds especially cool. I did not think about this application when reading the other articles about TurboQuant. It would be cool to have access to this performance optimization for local RAG.
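To make that concrete, here's a toy sketch of what quantizing a local RAG embedding store buys you. This is not TurboQuant's actual algorithm, just generic per-vector int8 quantization with made-up sizes, to show the memory saving and the approximate search that comes with it:

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend local RAG store: 10k chunks, 768-dim float32 embeddings (~30 MB).
    n, d = 10_000, 768
    embeddings = rng.standard_normal((n, d)).astype(np.float32)
    embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

    def quantize_int8(x):
        """Symmetric per-vector int8 quantization: ~4x smaller than float32."""
        scale = np.abs(x).max(axis=1, keepdims=True) / 127.0
        return np.round(x / scale).astype(np.int8), scale.astype(np.float32)

    q, scale = quantize_int8(embeddings)

    def search(query, q, scale, k=5):
        """Approximate inner-product search against the quantized store."""
        scores = (q.astype(np.float32) * scale) @ query
        return np.argsort(scores)[-k:][::-1]

    query = embeddings[42]                  # a stored chunk should match itself
    print(search(query, q, scale))          # index 42 should top the list
    print(f"float32: {embeddings.nbytes/1e6:.1f} MB, int8: {q.nbytes/1e6:.1f} MB")

A real system would use the actual TurboQuant scheme rather than this stand-in; the point is just that the store shrinks severalfold while top-k retrieval stays mostly intact.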
Compute, bytes of RAM used, bytes in the model, bytes accessed per iteration, bytes of training data.
You can trade the balance if you can find another way to do things; extreme quantisation is but one direction to try. KANs were aiming for more compute and fewer parameters. The recent optimisation projects have been pushing at these various properties. Sometimes gains in one come at the cost of another, but that needn't always be the case.
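Here's a back-of-envelope sketch of how those axes interact for a decoder LLM. The cost model and all numbers are my own assumptions (memory-bound decoding, roughly 2 FLOPs per weight per token), not from any paper; note how quantisation shrinks the byte columns while the compute column stays put:

    # Toy cost model for the resource axes above; all assumptions illustrative.
    def model_costs(params_b, bits_per_weight, tokens_per_s):
        bytes_in_model = params_b * 1e9 * bits_per_weight / 8
        # Memory-bound decoding: every weight is read once per generated token.
        bytes_per_token = bytes_in_model
        bandwidth = bytes_per_token * tokens_per_s      # bytes/s needed
        # Roughly 2 FLOPs per weight per token, independent of bit width.
        flops = 2 * params_b * 1e9 * tokens_per_s
        return bytes_in_model, bandwidth, flops

    for bits in (16, 8, 4, 2):
        size, bw, fl = model_costs(params_b=70, bits_per_weight=bits,
                                   tokens_per_s=20)
        print(f"{bits:>2}-bit: model {size/1e9:6.1f} GB, "
              f"bandwidth {bw/1e9:6.1f} GB/s, compute {fl/1e12:4.1f} TFLOP/s")

Dropping from 16-bit to 4-bit cuts the model size and bandwidth 4x while the FLOPs are unchanged (dequantisation actually adds a little compute), which is exactly the kind of rebalancing described above.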
I am no expert, so this is a shallow take, but I think global LLMs have already reached their limits, and general AGI may only be possible if the model is living in the moment, i.e., retraining every minute or so, paired with a much smaller device that can observe its surroundings, like a robot.
Instead of a KV cache, I have an idea of using LoRAs: a central LLM left unchanged by learning, surrounded by dozens or thousands of LoRAs made orthogonal to each other, each weighted by competition and retrained every minute or so. The LLM (since it's an RNN anyway) provides a “summarize what your state and goal is at this moment” signal and trains the LoRAs on that summary along with all the observations and, say, inputs from the users. The output of the LoRAs feeds back into the LLM, which decides the weights for further LoRA training (see the sketch below).
Anyways, I am just thinking there needs to be a structural change of some kind.
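Since I'm hand-waving, here's a minimal PyTorch sketch of the shape of the idea: a frozen base layer plus a bank of LoRA adapters with an orthogonality penalty, mixed by weights the central model would emit. Every name and shape here is a hypothetical illustration, not a worked-out design:

    import torch

    class LoRABank(torch.nn.Module):
        def __init__(self, d_in, d_out, n_adapters=8, rank=4):
            super().__init__()
            self.base = torch.nn.Linear(d_in, d_out, bias=False)
            self.base.weight.requires_grad_(False)  # central LLM stays frozen
            # Low-rank adapter factors: delta_W_i = B_i @ A_i
            self.A = torch.nn.Parameter(torch.randn(n_adapters, rank, d_in) * 0.01)
            self.B = torch.nn.Parameter(torch.zeros(n_adapters, d_out, rank))

        def forward(self, x, mix):
            # x: (batch, d_in); mix: (n_adapters,) weights chosen by the base model
            y = self.base(x)
            for i, w in enumerate(mix):
                y = y + w * (x @ self.A[i].T @ self.B[i].T)
            return y

        def orthogonality_penalty(self):
            # Push adapters toward mutual orthogonality: A_i . A_j ~ 0 for i != j
            flat = self.A.reshape(self.A.shape[0], -1)
            gram = flat @ flat.T
            return (gram - torch.diag(torch.diag(gram))).pow(2).sum()

    layer = LoRABank(d_in=512, d_out=512)
    x = torch.randn(2, 512)
    mix = torch.softmax(torch.randn(8), dim=0)  # stand-in for LLM-chosen weights
    out = layer(x, mix)
    loss = out.pow(2).mean() + 1e-3 * layer.orthogonality_penalty()
    loss.backward()                             # only the A/B factors get gradients
    print(out.shape, layer.base.weight.grad is None)

The "retrain every minute" part would amount to re-running that backward pass on fresh observation summaries while the frozen base weights never move.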
Unfortunately, nobody at the big companies knows exactly which math will win, so the competition won't end.
So researchers will try one solution, then another, and so on, until they find something that works, or until semiconductor production (Moore's Law) yields enough chips to run current models fast enough.
I believe somebody already has the silver bullet, the ideal AI algorithm that will lead us all to AGI once it's scaled up at some big company, but that knowledge is not obvious at the moment.
Can we please start talking about this in that context? We already know what TurboQuant will do to DRAM demand. We already know what it will do to context windows. There is no need to speculate. There is no need to panic-sell stocks.
Isn't that a classic tit-for-tat decision, heading for a loss?
Excellence and prestige are valuable too: you get that expensive ML talent at a small discount, better public and professional perception, etc. Judging by Google's public communication, which isn't completely sociopathic, they know this war won't be won in one night, and they are the only sustainably funded company in the competition. Their business is certainly at risk, but they can either go rampant or focus. They decided to focus.