This makes sense given both automation and the US's role in the global economy, but it runs somewhat contrary to standard ideas of class and inequality.
I think AI's gains flow to the contexts where it is used, changing how we work and what work we take on. Competition takes care of capturing those surpluses and investing them in new structure, which becomes load-bearing: we can't do without it anymore.
In the end it looks like we are treading water, just as when computers got a million times faster over a couple of decades, yet we felt very little improvement in earnings or reduction in work.
Surplus becomes structure, and the changed structure is something you can't function without. Like the cell and the mitochondrion: after they merged, they couldn't come apart, couldn't pay their costs individually anymore. Surplus is absorbed into the baseline cost.
BLS forward-looking guidance means nothing when technology revolutionizes the nature of work.
I'd like to use this on my website and also see if I can create variations for some of the major EU markets.
Yay!
>Computer Programmers: -6%
Oh no
Apple, a very successful company, makes ~$300B/yr in revenue (ish).
Capturing ~10% of the relevant wage pool is all you need to be Apple.
And it can work either by taking all of 10% of the jobs and collecting the whole salary (the AI employee -- a dubious proposition),
or by taking 10% of everyone's salary and automating part of everyone's job (the AI "tool" -- much more plausible).
If the "part" being automated is >10%, we all win in the long run: every company gets productivity growth without cost growth, etc.
If you add in data center costs, and multiple competing AI companies, and then expand the TAM to all white-collar work worldwide, you can make everyone successful beyond their wildest dreams with a "20% of the work for 20% of the cost" model. Again, how that 20% gets distributed remains to be seen (20% new unemployment, or 0% new unemployment with "tools").
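The arithmetic above can be sketched in a few lines. Everything here is an assumption for illustration (the wage-pool figure is an order-of-magnitude guess, not a sourced number); only the ~$300B Apple comparison comes from the comment:

```python
# Back-of-envelope "X% of the work for Y% of the cost" model.
# All inputs are illustrative assumptions, not sourced figures.

APPLE_REVENUE = 300e9  # ~$300B/yr, the comparison point above


def ai_revenue(total_wages, share_of_work, share_of_cost):
    """Revenue if AI automates `share_of_work` of all work and
    charges `share_of_cost` of the corresponding wages."""
    return total_wages * share_of_work * share_of_cost


# Hypothetical TAM: worldwide white-collar wages (assumed, order of magnitude).
white_collar_wages = 15e12  # $15T/yr, illustrative

rev = ai_revenue(white_collar_wages, 0.20, 0.20)
print(f"Revenue: ${rev / 1e9:.0f}B/yr = {rev / APPLE_REVENUE:.1f}x Apple")
```

Under these (made-up) inputs, 20% of the work at 20% of the cost works out to 4% of the wage pool, which is already a couple of Apples of revenue to split among the competing AI companies.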
I formalized my thoughts here: https://jodavaho.io/posts/ai-jobpocolypse.html
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
Are LLMs good at scoring? In my experience, using an LLM to score things usually produces arbitrary results. I'm surprised to see Karpathy employ it.
What's the outlook like?
Thank you!
Needs
- [utility] add a filter by keyword / substring match; e.g., the majority of visualized reports are unlabeled, requiring hovering with a mouse pointer
- [improve discovery] add sort by demographic / population impact; e.g., the largest block is 7M ("Hand laborers and movers") and is sorted to the bottom-left by default
Stand in front with a gun while mobs come to burn down the data center that took their jobs.
(I think I'm half joking).
https://apnews.com/article/trump-jobs-firing-f00e9bf96d01105...
> Taxi Drivers, Shuttle Drivers, and Chauffeurs
> Overall employment of taxi drivers, shuttle drivers, and chauffeurs is projected to grow 9 percent from 2024 to 2034, much faster than the average for all occupations.
...word?
A -4.0% hit to cashiers may have less of an impact than -4.0% to lawyers or another category whose spending props up the middle of the economy.
I guess that was to be expected...
I started my career in the decade of offshoring and didn't think we'd have anything close to an "AI" taking our jobs before we potentially unionized, or had a government that would protect its labor force from being replaced by literal robots.
2020-2022 felt like the US tech ship was finally growing into something really great. All gone now.
When I worked in devops, I always worried that my job was automating away other engineers; it definitely had a "when will this come for me" feeling, because it really was. Now the dev and the ops are both getting automated away.
This is my first time looking at HN in practically a year. Tech is just so uninteresting to me now. Nobody is hiring SDE/SWE/SREs except for the problem makers, like Anthropic, Meta, etc. Anthropic has pages and pages of $300k-$600k roles open right now. But do you go help the rest of your colleagues lose their jobs?
I guess let's talk about Kubernetes or something...
It's not great for them, but it's a definite advantage for people who are already in the mindset of judging information and sources on merit, instead of running an "AI bad" rubric as part of their filter.
AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
The sad part isn't that this is low-effort AI slop, but that intelligent people and policy makers are going to see it and probably make important decisions impacting themselves and others based on these numbers.