I haven’t seen a company convincingly demonstrate that this affects them at all. Lots of fluff but nothing compelling. But I have seen many examples by individuals, including myself.
For years I’ve loved poking at video game dev for fun. The main problem has always been art assets. I’m terrible at art and I have a budget of about $0. So I get asset packs off Itch.io and they generally drive the direction of my games because I get what I get (and I don’t get upset). But that’s changed dramatically this year. I’ll spend an hour working through graphics design and generation and then I’ll have what I need. I tweak as I go. So now I can have assets for whatever game I’m thinking of.
Mind you, this is about the barrier to entry. These are shovelware-quality assets and I'm not running a business. But now I'm some guy on the internet who can pursue a hobby and develop a skill. Who knows, maybe one day I'll hit a goldmine idea, commit some real money to it, and get a real artist to help!
It reminds me of what GarageBand or iMovie and YouTube and such did to make music and video creation accessible to people who didn't go to school for any of that, let alone own complex equipment or expensive licenses to Adobe Thisandthat.
This collapses an important distinction. The containerization pioneers weren't made rich; that's correct: Malcolm McLean, the shipping magnate who pioneered containerization, didn't die a billionaire. Containerization did, however, generate enormous wealth through downstream effects, underpinning the rise of East Asian export economies, offshoring, and the retail models of Walmart, Amazon, and the like. Most of us are much more likely to benefit from the downstream structural shifts of AI than from owning actual AI infrastructure.
This matters because building the models, training infrastructure, and data centres is capital-intensive, brutally competitive, and may yield thin margins in the long run. The real fortunes are likely to flow to those who can reconfigure industries around the new cost curve.
There will be millions of factories all benefiting from it, and a relatively small number of companies providing the automation components (conveyor belt systems, vision/handling systems, industrial robots, etc).
The technology providers are not going to become fabulously rich, though, as long as there is competition. Early adopters will have to pay up, but LLMs seem to be shaping up as a commodity where inference cost is the most important differentiator, and future generations of AI are likely to be the same.
Right now the big AI companies pumping billions into advancing the bleeding edge necessarily have the most advanced products, but the open-source and free-weight competition is continually nipping at their heels. And the area where most progress is currently happening seems to be agents and reasoning/research systems, not the LLMs themselves; there it's more about engineering than about who has the largest training cluster.
We're still in the first innings of AI, though: the LLM era, which I don't think is going to last that long. New architectures and incremental learning algorithms for AGI will come next. It may take a few generations of advances to get to AGI, and the next generation (e.g. what DeepMind is planning on a 5-10 year time frame) may still include a pre-trained LLM as a component, but it seems that whatever is built around the LLM, to take us to that next level of capability, will become the focus.
Hopefully the boom will slow down and we'll all slowly move away from Holy Shit Hype things and implement more boring, practical things. (Although I feel like the world has shunned boring, practical things for quite a while already.)
- AI is leading to cost optimizations for running existing companies. This will lead to less employment and potentially cheaper products. Fewer people employed, at least temporarily, will change demand-side economics, while cheaper operating costs will reduce the supply/cost side.
- The focus should not just be on LLMs (as in the article). I think LLMs have shown what artificial neural networks are capable of: material discovery, biological simulation, protein discovery, video generation, image generation, etc. This isn't just a cheaper, more efficient way of shipping goods around the world; it's creating new classifications of products, much as the invention of the microprocessor did.
- The barrier to starting a business is lower. A programmer who isn't good at making art can use genAI to make a game. More temporary unemployment, as existing companies reduce costs by automating existing workflows, may mean more people starting their own businesses. There will be more diverse products available, but will demand be able to sustain the cost of living of these new founders? Human attention, time, etc. are limited, and there may be less money around with less employment, though the products themselves should be cheaper.
- I think people still underestimate what last year's LLMs and AI models are capable of and what opportunities they open up. Open-source models (even if not as good as the latest generation), plus hardware able to run them becoming cheaper and more capable, mean many opportunities to tinker with models and create new products in new categories, without depending on the latest-gen model providers. Much like people tinkering with microprocessors in the garage in the early days, as the article mentions.
Based on the points above alone, while certain industries (think phone call centers) will end up in the Red Queen race scenario the OP describes, new industries no one has thought of yet will open up, creating new wealth for many people.
On the one hand, there are a lot of fields where this form of AI can and will either replace jobs or significantly reduce their number. Entry-level web development and software engineering are at serious risk, as are copywriting, design and art for corporate clients, research assistant roles, and a lot of grunt work in various creative fields. If the output of your work is heavily represented in these models, or the quality of the output matters less than having something, ANYTHING, to fill a gap on a page or in an app, then you're probably in trouble. If your work involves collating a bunch of existing resources, then you're probably in trouble.
At the same time, it's not going to be anywhere near as powerful as certain companies think. AI can help software engineers generate boilerplate code or set up things that others have done millions of times before, but the quality of its output on new tasks is questionable at best, especially when the language or framework isn't heavily represented in the model. And any attempts to replace professions like lawyers or doctors with AI alone are probably doomed to fail, at least for the moment. If getting things wrong is a dealbreaker that will result in severe legal consequences, AI will never be able to entirely replace humans in that field.
Basically, AI is great for grunt work, and fields where the actual result doesn't need to be perfect (or even good). It's not a good option for anything with actual consequences for screwing up, or where the knowledge needed is specialist enough that the model won't contain it.
I'm not sure it is very predictable.
We have people saying AI is just LLMs, that they won't be much use, and that there'll be another AI winter (Ed Zitron), and people saying we'll have AGI and superintelligence shortly (Musk/Altman). And if we do get superintelligence, it's kind of hard to know how that will play out.
And then there's John von Neumann (1958):
>[the] accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
which is what kicked off the misuse of a perfectly good mathematical term for all that stuff. Compared to the other five revolutions listed (industrial, rail, electricity, cars, and IT), I think AI is a fair bit less predictable.
This is what happens when users gain value which they themselves capture, and the AI companies only get the nominal $20/month or whatever. In those cases it's a net gain for the economy as a whole if valuable work was done at low cost.
The inverse of the broken window fallacy.
In that scenario, everyone makes money: OpenAI, Google (maybe Anthropic, maybe Meta) make money on the platform, but there are thousands of companies that sell solutions on top.
Maybe, however, LLMs get commoditized and open-source models replace OpenAI, etc. In that case, maybe only NVIDIA makes money, but there will still be thousands of companies (and founders/investors) making lots of money on AI everything.
What LLMs are absolutely not useful for, in my opinion, is answering questions or writing code, or summarising things, or being factual in any sense at all.
The AI revolution has only just got started. We've barely worked out basic uses for it. No-one has yet worked out revolutionary new things that are made possible only by AI - mostly we are just shoveling in our existing world view.
I remember back in 2004, my first project was testing a teleconferencing system. We set up a huge screen with cameras at one of our subsidiaries and another at the HQ, and I had a phone on my desk with a built-in camera and screen. Did the company roll out the system? No, it didn’t. It was just too expensive. Did they make a fortune from that experience? No, they didn’t. But I’m pretty sure all companies in the knowledge industry that didn’t enable video calls and screen sharing for their employees went out of business years ago...
That's kinda happening: small local models, Hugging Face communities, Civitai and image models. Lots of hobby builders trying to make use of generative text and images. It's just that there's not really anything innovative about text generation, since anyone with a pen and paper can generate text and images.
I think we'll see a ton of games produced by AI or aided heavily by AI but there will still be people "hand crafting" games: the story, the graphics, etc. A subset of these games will have mass appeal and do well. Others will have smaller groups of fans.
It's been some time since I've read it, but these conversations remind me of Walter Benjamin's essay, "The Work of Art in the Age of Mechanical Reproduction".
If anything, I'd think that crypto in 2010 had all the hallmarks of a new wave as described. The concept was open to anyone who wanted to tinker with it. It had to be sold to skeptical consumers by wildcat startups. It certainly had the potential to upend the financial industry, but no incumbent would touch it. Yet it did end up more or less being sucked into the gravity well of the incumbents, although in the case of crypto we very much need to consider governments which control the money supply to be the heavyweights, even more so than banks and lenders.
Maybe I'm pessimistic, but I'm not sure any new innovation now can escape the gravitational nexus of the duopoly of government and incumbent tech, in a way that would lead to the kind of wild growth and experimentation we had with microprocessors in the 70s.
We had some guy named Satoshi write a paper that basically handed the keys to anyone who wanted to experiment - and 17 years later, after a significant bubble, that wave has done very little to change the status quo.
I suppose if someone released a DIY genome editor or protein folding was solved or, like, a working Mr. Fusion device showed up on Kickstarter, or a "feed"/"seed" a la the Diamond Age made it possible to turn dirt into anything you wanted, or a FTL drive came out of someone's garage or something... yeah. That would seriously upset the incumbents. But even with sci-fi stuff like that, what's the moat once you make your findings public anymore? This article suggests that the only serious moat has ever been that large companies are slowed down by inertia and take some time to spin up once their internal cultures go from deriding something to deciding it's essential.
How do we know if we have stalled to the point that no one will come along and be the next Amazon or Google, until 50 years from now we see that no one did?
Example
https://specinnovations.com/blog/ai-tools-to-support-require...
Gen AI is not nearly powerful enough to justify current investments. A lot of money is going to go up in smoke.
1. The tech revolutions of the past were helped by the winds of global context. Many factors propelled those successful technologies on their trajectories. The article seems to ignore these contextual forces completely.
2. There were many failed tech revolutions as well. Success rates varied from very low to very high. Again, the overall context (social, political, economic, global) decides matters, not the technology itself.
3. In the overall context, any success is a zero-sum game. You may just be ignoring what you lost and highlighting your gains as success.
4. A reverse trend might pick up, against technology, globalization, liberalism, energy consumption, etc.
AI is largely capable of running on-device. In a few years, it's likely that most tasks most people want AI for will be possible with a tiny model living on their phone. Open-source models are plentiful, functional, and only becoming more so.
But you can't monetize that. We're currently dumping billions of dollars into datacenter moats that are just gonna evaporate inside the decade.
For the average user doing their daily "who was that actor in that movie" query, no, you absolutely cannot monetize AI because all of your local devices can run the model for free with enough quality that no one will know or care that there's a difference.
For enterprise scale building a trillion dollar datacenter and 15 nuclear reactors to replace a hundred developers... also no. LLMs are not capable of that, and likely won't be in the foreseeable future. It's also extremely unclear that one could ever get an ROI on in-house AI like this. It might be more plausible if it were a commodity technology you can just buy, but then you can't make a moat.
The only hypothetical fortune to be found is by whoever is selling AI to people who think they need to buy AI. Just like bitcoin or NFTs.
The good news is that this has two possible outcomes. Either capitalist AI vendors try to lock AI away from individual access so they can sell it back to you: everyone gets less AI. Or capitalists realize they can never monetize AI while it's free and open source, and give up: everyone gets less AI. Win-win-win, in my book.
>The article "AI Will Not Make You Rich" argues that generative AI is unlikely to create widespread wealth for investors and entrepreneurs. The author, Jerry Neumann, compares AI to past technological revolutions, suggesting it's more like shipping containerization than the microprocessor. He posits that while containerization was a transformative technology, its value was spread so thinly that few profited, with the primary beneficiaries being customers.
>The article highlights that AI is already a well-known and scrutinized technology, unlike the early days of the personal computer, which began as an obscure hobbyist project. The author suggests that the real opportunities for profit will come from "fishing downstream" by investing in sectors that use AI to increase productivity, such as professional services, healthcare, and education, rather than investing in the AI infrastructure and model builders themselves.
I used to be the biggest AI hater around, but I’m finding it actually useful these days and another tool in the toolbox.
Looking around, you can find curious things current AI can't do, but you can likely also find important things it can do. Uh, there's "a lot of money" involved, no one can be sure AI won't make big progress, and even on a national scale no one wants to fall behind. Looking around, the growth is scary: Page and Brin in a garage, Bezos in a garage, Zuckerberg in school with "Hot or Not", Huang and graphics cards, .... One or two guys, and in a few years they change the world and create $trillions in company value??? Smoking funny stuff?
Yes, AI can be better than a library card catalog subject index and/or a dictionary/encyclopedia. But take a step or two forward and, remembering the hundreds of soldiers going "over the top" in WWI, ask why some AI robots won't be able to do the same.
Within 10 years, what work can we be sure AI won't be able to do?
So people will keep trying with ASML, TSMC, AMD, Intel, etc. -- for a yacht bigger than the one Bezos got or for national security, etc.
While waiting for AI to do everything, starting now it can do SOME things and is improving.
Hmm, a sci-fi movie about Junior fooling around with electronics in the basement: first doing his little sister Mary's 4th-grade homework, then in 10th grade publishing a Web book on the rise and fall of the Eastern Empire, Valedictorian, new frontiers in mRNA vaccines, ...?
And what do people want? How 'bout food, clothing, shelter, transportation, health, accomplishment, belonging, security, love, home, family? So, with a capable robot (funded by a16z?), it builds two more like itself, each of those ..., and presto-bingo everyone gets what they want?
"Robby, does P = NP?"
"Is Schrödinger's equation correct?"
"How and when can we travel faster than the speed of light?"
"Where is everybody?"
1990 is when the real outsourcing mania started, which led to the destruction of most Western manufacturing. Apart from cheap Chinese trinkets, quality of life and real incomes have gotten worse in the West while the rich became richer.
So this is an excellent analogy for "AI": Finding a new and malicious application can revive the mania after an initial bubble pop while making societies worse. If we allow it, which does not have to be the case.
[As usual, under the assumption that "AI" works, of which there is little sign apart from summarizing scraped web pages.]
People using it get dumber.
What is being produced is slop and discardable PoC-like trash.
The environmental costs of building and training LLMs are huge. That compute and water could have been useful for something.
Even the companies building and peddling AI are losers. They are not profitable and need constant billions of dollars of financial help just to stay afloat and pay their compute debt.
The worst part is that the even bigger losers will be the general population. Not only are our kids going to be dumber than us thanks to never having to think for themselves, but our pensions are tied to a stock market that will inevitably collapse when the realization hits that the top 30% of companies by value are just dominoes waiting to fall.
But the biggest loser of all is Elon Musk. Just because of who he is.
This looks certain. Few technologies have had as much adoption by so many individuals as quickly as AI models.
(Not saying everything people are doing has economic value. But some does, and a lot of people are already getting enough informal and personal value that language models are clearly mainstreaming.)
The biggest losers I see are successive waves of disruption to non-physical labor.
As AI capabilities accrue relatively smoothly (perhaps), labor impact will be highly unpredictable as successive non-obvious thresholds are crossed.
The clear winners are the arms dealers. The compute sellers and providers. High capex, incredible market growth.
Nobody had to spend $10 or $100 billion to start making containers.
But I think the benefits of AI usage will accrue to the person doing the prompting and their employer. Every AI usage is contextualized; every benefit or loss is manifested in the local context of use, not at the AI provider.
If I take a photo of a skin sore and put it on ChatGPT for advice, it is not OpenAI that gets its skin cured. They get a few cents per million tokens. So the AI providers are just utilities; the benefits depend on who sets the prompts and how skillfully they do it. The risks also go to the user: OpenAI assumes no liability.
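To make that value-capture asymmetry concrete, here's a back-of-envelope sketch in Python. The token price, query size, and dollar value to the user are all made-up assumptions for illustration, not real figures:

```python
# Back-of-envelope: how much of a query's value does the provider capture?
# All three numbers below are hypothetical assumptions.
price_per_million_tokens = 0.10   # assumed API price in dollars per 1M tokens
tokens_per_query = 2_000          # assumed prompt + response size
value_to_user = 50.00             # assumed value of the advice to the user

# Provider revenue is proportional to tokens; user keeps the rest.
provider_revenue = price_per_million_tokens * tokens_per_query / 1_000_000
user_surplus = value_to_user - provider_revenue

print(f"provider captures ${provider_revenue:.4f} per query")
print(f"user keeps        ${user_surplus:.2f} of the value")
```

Under these assumed numbers the provider captures a fraction of a cent per query while nearly all of the value stays with the user, which is the "utility" dynamic described above.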
Users are like investors: they take on the cost and bear the outcomes, good or bad. The AI company is like an employee: they don't really share in the profit, only get a fixed salary for the work.