by mentalgear
7 subcomments
- Some researchers have proposed using, instead of the term "AI", the much more fitting "self-parametrising probabilistic model", or just "advanced auto-complete"; that would certainly take the hype-inducing marketing PR away.
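For what it's worth, the "advanced auto-complete" framing maps onto a very small loop. A minimal sketch, where `logits_fn` is a hypothetical stand-in for any trained next-token model:

    import math, random

    def softmax(logits):
        # turn raw model scores into a probability distribution
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def autocomplete(logits_fn, tokens, max_new=20):
        # "advanced auto-complete": repeatedly sample the next token and append it
        for _ in range(max_new):
            probs = softmax(logits_fn(tokens))
            tokens.append(random.choices(range(len(probs)), weights=probs)[0])
        return tokens

Everything an LLM produces comes out of a loop like this; the disagreement is over how much "intelligence" can hide inside `logits_fn`.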
by vivzkestrel
1 subcomment
- AGI would be absolutely terrifying and that is how you'll know AGI is here
- You would prompt "Ok AGI, read through the last 26978894356 research papers on cancer and tell me what are some unexplored angles" and it would tell you
- You would prompt "Show me the last 10 emails on Sam Altman's inbox" and it would actually show you
- You would prompt "Give me a list of people who have murdered someone in the USA and havent been caught yet" and it would give you a list of suspects that fit the profile
You really dont want AGI
by Peteragain
1 subcomment
- Exactly! My take is that "glorified auto-complete" is far more useful than it sounds. In GOFAI terms, it does case-based reasoning... but better (a rough sketch of what I mean by case-based reasoning follows this comment).
- I am quite happy with LLMs being increasingly available 24/7 to be useful to humankind ... rather than some sentient being that never sleeps, is more intelligent than me, and has its own agenda.
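In classical GOFAI terms, case-based reasoning is roughly: retrieve the most similar solved case, then reuse (and ideally adapt) its solution. A toy sketch, with a deliberately crude word-overlap similarity standing in for whatever an LLM does internally:

    def similarity(a, b):
        # toy Jaccard similarity between two problem statements
        a, b = set(a.split()), set(b.split())
        return len(a & b) / len(a | b) if a | b else 0.0

    def solve(problem, case_base):
        # case_base: list of (past_problem, past_solution) pairs
        best = max(case_base, key=lambda case: similarity(problem, case[0]))
        return best[1]  # reuse; a full CBR system would also adapt and revise

    cases = [("sort a list of numbers", "use sorted()"),
             ("reverse a string", "use s[::-1]")]
    print(solve("sort these numbers in a list", cases))  # -> "use sorted()"

The "but better" part is that an LLM interpolates between cases instead of returning the single nearest one.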
by cashsterling
0 subcomments
- I too doubted, from the beginning, that neural networks would be the basis of AGI. As impressive and useful as LLMs are, they are still a long, long way from AGI.
- I think what Terry is saying is that, with the current set of tools, there are classes of problems requiring cleverness that yield to guess-and-check (glorified autocomplete): propose an answer, check it, fail, fold the information from the failure back in, and repeat (a sketch of this loop follows below).
I guess ultimately what is intelligence? We compact our memories, forget things, and try repeatedly. Our inputs are a bit more diverse but ultimately we autocomplete our lives. Hmm… maybe we’ve already achieved this.
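The guess/check/refine loop described above, as a minimal sketch. `propose` and `verify` are hypothetical stand-ins for a generator (say, an LLM) and a checker (say, a proof assistant, compiler, or test suite):

    def guess_and_check(propose, verify, max_attempts=10):
        feedback = None
        for _ in range(max_attempts):
            candidate = propose(feedback)     # guess, conditioned on past failures
            ok, feedback = verify(candidate)  # check, and collect failure information
            if ok:
                return candidate
        return None  # ran out of attempts

The cleverness lives almost entirely in how much usable information `verify` can feed back.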
- On a more practical level, I would be interested in Terry's thoughts on the open letter Sam Altman co-signed stating that "mitigating the risk of extinction from AI should be a global priority," alongside risks like pandemics and nuclear war.
Do current AI tools genuinely pose such risks?
- These things work well on the extremely limited task inputs that we give them. Even if we sidestep the question of whether or not LLMs are actually on the path to AGI, imagine instead the amount of computing and electrical power that current methods and hardware would require to respond to and process all the input a person handles at every moment of the day. Somewhere in between current inputs and the full load of inputs the brain handles may lie "AGI", but it's not clear there is anything like that on the near horizon, if only because of computing-power constraints.
by mindcrime
4 subcomments
- Terry Tao is a genius, and I am not. So I probably have no standing to claim to disagree with him. But I find this post less than fulfilling.
For starters, I think we can rightly ask what it means to say "genuine artificial general intelligence", as opposed to just "artificial general intelligence". Actually, I think it's fair to ask what "genuine artificial" $ANYTHING would be.
I suspect that what he means is something like "artificial intelligence, but that works just like human intelligence". Something like that seems to be what a lot of people are saying when they talk about AI and make claims like "that's not real AI". But for myself, I reject the notion that we need "genuine artificial general intelligence" that works like human intelligence in order to say we have artificial general intelligence. Human intelligence is a nice existence proof that some sort of "general intelligence" is possible, and a nice example to model after, but the marquee sign does say artificial at the end of the day.
- Beyond that... I know, I know - it's the oldest cliché in the world, but I will fall back on it because it's still valid, no matter how trite. We don't say "airplanes don't really fly" because they don't use the exact same mechanism as birds. And I don't see any reason to say that an AI system isn't "really intelligent" if it doesn't use the same mechanism as humans.
Now maybe I'm wrong and Terry meant something altogether different, and all of this is moot. But it felt worth writing this out, because I feel like a lot of commenters on this subject engage in a line of thinking like what is described above, and I think it's a poor way of viewing the issue no matter who is doing it.
by AndrewKemendo
0 subcomments
- The term Artificial Intelligence was coined in 1955 for the Dartmouth Summer Research Project on Artificial Intelligence. The Gods of AI all got together: John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. They defined the entire concept of Artificial Intelligence in a single sentence:
> The conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.
The only AI explainer you'll need:
https://kemendo.com/Understand-AI.html
- Why are game creators creating AI?
https://x.com/_sakamoro/status/2002016273484714050?s=46&t=Rk...
by johnnienaked
0 subcomments
- Tao is obviously a smart guy but he's really lost the plot on this AI stuff
- > This results in the somewhat unintuitive combination of a technology that can be very useful and impressive, while simultaneously being fundamentally unsatisfying and disappointing
Useful = great. We've made incredible progress in the past 3-5 years.
The people who are disappointed have their standards and expectations set at "science fiction".
- We seem to be moving the goalposts on AGI, are we not? 5 years ago, the argument that AGI wasn't here yet was that you couldn't take something like AlphaGo and use it to play chess. If you wanted that, you had to do a new training run with new training data.
But now, we have LLMs that can reliably beat video games like Pokemon, without any specialized training for playing video games. And those same LLMs can write code, do math, write poetry, be language tutors, find optimal flight routes from one city to another during the busy Christmas season, etc.
How does that not fit the definition of "General Intelligence"? It's literally as capable as a high school student for almost any general task you throw it at.
- Remember when your goalposts were the Turing test?
The only question remaining is what is the end point of AGI capability.
What’s the final IQ we’ll hit, and more importantly why will it end there?
Power limits? Hardware bandwidth limits? Storage limits? The AI-creation math scales to infinity, so that's not an issue.
Source data limits? Most likely. We should have recorded more.
- There's a guaranteed path to AGI, but it's blocked behind computational complexity. Finding an efficient algorithm to simulate quantum mechanics should be a top priority for those seeking AGI. A more promising way around it is quantum computing, but we'll have to wait for that to become good enough.
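To make the "blocked behind computational complexity" point concrete: a brute-force state-vector simulation stores 2^n complex amplitudes for n qubits, so memory alone grows exponentially. A back-of-envelope sketch (assuming complex128 amplitudes):

    BYTES_PER_AMPLITUDE = 16  # one complex128 amplitude
    for n in (10, 30, 50, 100):
        # a brute-force state vector for n qubits holds 2**n amplitudes
        print(f"{n} qubits -> {2**n * BYTES_PER_AMPLITUDE:.3e} bytes")

    # 10 qubits  -> ~16 KB, trivial
    # 30 qubits  -> ~17 GB, a beefy workstation
    # 50 qubits  -> ~18 PB, a large datacenter
    # 100 qubits -> ~2e31 bytes, far beyond all storage on Earth

Cleverer classical methods (tensor networks, Monte Carlo) dodge this in special cases, but no known classical algorithm is efficient in general; that is exactly the gap quantum computers are supposed to close.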
by Davidzheng
1 subcomment
- The text continues "with current AI tools", which is not clearly defined to me (does it mean current-gen models plus scaffolding? Anything that is an LLM reasoning model? Anything built with a large LLM inside?). In any case, the title is misleading for omitting the end of the sentence. Can we please fix the title?