We already know many useful things to do; there are already 10,000 startups (9789 out of YC alone, 4423 of which are coding-related) doing various ostensibly useful things. And there are a ton more use cases discussed in the comments here and elsewhere. But because of the headline, the discussion is missing the much more important point!
Satya's point is that we need to do things that improve people's lives. Specific quotes from TFA:
>... "do something useful that changes the outcomes of people and communities and countries and industries."
> "We will quickly lose even the social permission to take something like energy, which is a scarce resource, and use it to generate these tokens, if these tokens are not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness, across all sectors, small and large, right?" said Nadella. "And that, to me, is ultimately the goal."
Which is absolutely right. He's the only Big Tech CEO I've heard of who constantly harps on the human and economic benefit angle of LLMs, whereas so many others talk -- maybe in indirect ways -- about replacing people and/or only improving company outcomes (which usually benefit only a small group of people: the shareholders).
He's still a CEO, so I have no illusions that he's any different from the rest of them (he's presided over a ton of layoffs, after all). But he seems to be the only CEO whose interests appear to be aligned with the rest of ours.
A way to drum up a sense of urgency without mentioning that it's the patience of the investors (and _not_ the public) that will be the limiting factor here?
For instance, as a SWE, I get just a little help with boilerplate from the AI. I could usually have done it better, but sometimes the ask is both simple enough and boring enough that the LLM actually produces code very close to what I would write.
On the other side of the coin, a non-technical person using AI would be unable to properly understand and review the output.
Where it shines is on things that I am just OK at, like writing marketing copy. I can get by on my own, but it's slightly outside my wheelhouse; as long as I have a solid understanding of the product, I can use AI to complement my beginner/intermediate skills and produce something better than I would produce on my own.
A similar thing is writing tutorials. I write some code and documentation, but tutorials are enough of a slog that I get distracted by my distaste for the task. This is a good fit for AI.
I think this is where we will see AI help the most: where someone's skill set includes the task at hand, but only at a secondary level, where the user might doubt themselves or get distracted by the misery the task brings them.
WT actual F? They invested so much into something that doesn't obviously bring value? Will there be consequences for them? Or will they take the bonus and hide in a New Zealand bunker?
It's big money betting on narratives from wanna-be big money about how AI is transformative for the future. The public takes all the risks: hardware and energy inflation, or bailing banks out of investments that require productivity growth from AI which we don't yet see in the statistics.
We took a wrong turn somewhere, and the people responsible don't seem capable of or willing to change course. Too much power in too few weak minds. Nothing good will come of this.
If they mean "machine learning", then sure there are application in cancer detection and the like, but development there has been moving at a steady pace for decades and has nothing to do with the current hype wave of GenAI, so there's no reason to assume it's suddenly going to go exponential. I used to work in that field and I'm confident it's not going to change overnight: progress there is slow not because of the models, but because data is sparse and noisy, labels are even sparser and noisier, deployment procedures are rigid and legal compliance is a nightmare.
If they mean "generative AI", then how is that supposed to work exactly? Asking LLMs for medical diagnosis is no better than asking "the Internet at large". They only return the most statistically likely output given their training corpus (that corpus being the Internet as a whole), so it's more likely your diagnosis will be based on a random Reddit comment that the LLMs has ingested somewhere, than an actual medical paper.
The only plausible applications I can think of are tasks such as summarizing papers, acting as augmented search engines for datasets and papers, or maybe automating some menial administrative tasks. Useful, for sure, but not revolutionary.
That's courageous from the CEO of a US company, where the current government doesn't see burning more oil as being bad for the planet, and is willing to punish everyone who thinks otherwise.
* Higher electricity bills
* 5-6x the cost of RAM, GPUs, and other computer components
* Data centers popping up in their backyards
* An internet inundated with slop
* Slop beginning to infiltrate the video game industry and other creative industries
* AI being used to justify gutting entry level jobs for a generation already screwed by larger, long horizon economic forces
* Grok enabling the creation of revenge porn and CSAM with seemingly no repercussions
* Massive IP theft on a scale previously unheard of
* Etc.
The pros of AI are:
* It can summarize text and transcribe audio decently well.
* It can make funny pictures of cats wearing top hats.
* ???
Because they are able to search the web deeply, pull up-to-date info/research, and synthesize all of it. You can have a back and forth for as long as you need.
The issue is that using LLMs properly requires a certain skill that most people lack.
There obviously are some compelling use cases for "AI", but it's certainly questionable whether any of them are really making people's lives better, especially if you take "AI" to mean LLMs and fake videos rather than more bespoke uses like AlphaFold, which is not only beneficial but also not a resource hog.
I think there are business reasons why they wouldn’t do that, and that makes me sad.
When non-techie friends/family bring up AI, there are two major topics: 1) the amount of slop is off the charts, and 2) said slop is getting harder to recognize, which is scary. Sometimes they mention a bit of help in daily tasks at work, but nothing major.
Hi there, friends from another dimension! In my reality, there's a cold front coming from the north. Healthcare is expensive and politics are a mess. But AI? It hallucinates sometimes but it's so much better for searching, ad hoc consultation and as a code assistant than anything I've ever seen. It's not perfect, but it saved me SO much time I decided to pay for it. I'm a penny pincher, so I wouldn't be paying for it otherwise.
I think Satya is talking about cost/benefit. AI is incredibly useful but also incredibly expensive. I think we still need to find the right balance (perhaps slower model releases), but there's no way we'll put the genie back in the bottle.
I hope your AI gets better! Talk to you later!
With all this useless slop, he’s literally arguing against his own point.
No brainer.
And no, I'm not saying the technology is bad. The business isn't going swimmingly, though.
And yet studies show the opposite [0].
[0] https://www.media.mit.edu/publications/your-brain-on-chatgpt...
There are plenty of uses for AI. Right now, the industry is spending heavily on training new models, improving the performance of existing software and hardware, and trying to create niche products.
Power usage for inference will drop dramatically over the next decade, and more models are going to run on-device rather than in the cloud. AI is only going to become more ubiquitous, there's 0% chance it 'fails' and we return to 2020.
I've been predicting for a while: free or cheap AI will enshittify and become an addictive ad medium with nerfed capabilities. If you want actually good AI, you will have to pay for it, either via a much heftier fee or by buying or renting compute to run your own. In other words, you'll be paying what it actually costs, so this is really just the disappearance of the bubble subsidy.