You cannot equate a paper clip maximizer with SV companies, because their fitness functions are different.
I'm not saying the diamond industry didn't make the world worse off, but somehow it still didn't take over the world and make people eat diamonds for breakfast. In the same way, there will be no entity that can make all people everywhere eat strawberries for breakfast all year long.
The scary part is the finance industry, which is basically already self-conscious: all the rules are baked in, and no single person is able to grasp the whole of it.
Finance with AGI could already become a paper clip optimizer, except it only needs energy. It doesn't need humans anymore. So it would most likely fill the whole world with power plants and erase all other life just to have the electricity.
Excellent quote. We say we're in a rational world where we make rational decisions within the societal game we've all been told we have to play: capitalism, where money is the determinant of success vs. failure for corporations, families, and individuals.
But step back and ask whether it's rational for us as humans to be playing this game at all, and it is not rational in the least. Why are we not deciding that food and housing for everyone is the determinant of a country's or a society's success? Or happiness?
60 Minutes ran a segment about Bhutan a couple of weeks ago [0] about exactly this. They live by something they named "Gross National Happiness". That feels weird to type, but again, stepping back, it's because our whole lives we're told that "Gross Domestic Product", overall money, is the determinant of "best", and that's deeply ingrained in us.
On a different note, Ted Chiang's short story collections [1][2] are incredibly, incredibly good. I'm reading them again and reread "Story of Your Life" earlier today. Being able to write fiction like that makes me much more trusting of what someone has to say on other topics. And saying that raises yet another topic: how we're told to downplay fiction compared to non-fiction, when our brains evolved for stories. But that's for another comment.
[0] - https://www.youtube.com/watch?v=7g_t1lzn-1A
[1] - https://en.wikipedia.org/wiki/Stories_of_Your_Life_and_Other...
And here I was, about to suggest that billionaires and unbridled mega-corporations were the fundamental risk to the existence of human civilization.
> Musk gave an example of an artificial intelligence that’s given the task of picking strawberries.
Also odd, since it's more likely that a corporation, in the name of maximizing profits, would make decisions that threaten humanity. We can start with Bhopal, India. If you find fault with that example, I am sure there are plenty of others, some probably a good deal more subtle, that others can suggest.
Me, not worried at all about AI.
I wonder if he's seen the latest videos of staged demos where humanoid robots can fold clothes.
edit: didn't say 2017 when I commented.
Buying Twitter and turning it into a mass misinformation machine, while spending something like $200 million [0] to get a convicted felon who practices pay-to-play politics into the White House, indicates a level of Machiavellian singular focus on the accumulation of wealth and power that surpasses anything I would have imagined before.
[0] https://apnews.com/article/elon-musk-america-pac-trump-d2485...
Humans with morals are still very much in the decision chain. There is obviously a lot of debate about their morals, but their presence makes such a vast difference that the comparison to the strawberry AI is completely invalid. The strawberry AI isn't considering humans at all.
The article then builds on that false comparison for the rest of the piece, so there isn't much to gain from the remainder.
You could make the same lazy comparison to completely socialist, centralized decision-making by a government optimizing for a single metric (voter approval, poverty levels, whatever). It has nothing to do with capitalism or the economic system.
TL;DR: the article says mega-corps are the same as dangerous AI because they make optimizations in favor of profit that some people disagree with.
But, I think, it's the act of optimizing on a metric at all that is the source of the destruction. Unmeasurable human values can't survive an optimization process focused on measurable ones.
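A toy illustration of that dynamic, as a minimal sketch with entirely made-up numbers: the "firm", the payoffs, and the assumed link between profit and hidden cost are all hypothetical, not a model of any real system.

```python
import random

# Toy Goodhart's-law simulation: a hypothetical "firm" picks among
# candidate actions scored only on measured profit; an unmeasured value
# (call it wellbeing) is invisible to the optimizer and erodes as a
# side effect.

random.seed(0)

def propose_action():
    # Each candidate action has a measured payoff and a hidden side effect.
    profit = random.uniform(0, 10)
    # Assumption: higher-profit actions tend to carry larger hidden costs.
    hidden_cost = profit * random.uniform(0.5, 1.5)
    return profit, hidden_cost

profit_total = 0.0
wellbeing = 100.0  # the unmeasured value; starts healthy

for step in range(20):
    candidates = [propose_action() for _ in range(5)]
    # The optimizer ranks candidates by the only metric it can see: profit.
    profit, hidden_cost = max(candidates, key=lambda c: c[0])
    profit_total += profit
    wellbeing -= hidden_cost  # never observed, so never optimized for

print(f"measured profit:      {profit_total:.1f}")
print(f"unmeasured wellbeing: {wellbeing:.1f}")
# The metric climbs steadily while the unmeasured value collapses.
```

The point of the sketch is that the unmeasured value is destroyed without any malice in the loop: it simply never appears in the objective, so no candidate action is ever rejected on its account.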