Game theory is inevitable.
Because game theory is just math: the study of how independent actors react to incentives.
The specific examples called out here may or may not be inevitable. It's true that the future is unknowable, but it's also true that the future is made up of 8B+ independent actors and that they're going to react to incentives. It's also true that you, personally, are just one of those 8B+ people and your influence on the remaining 7.999999999B people, most of whom don't know you exist, is fairly limited.
If you think carefully about those incentives, you actually do have a number of significant leverage points with which to change the future. Many of those incentives are crafted out of information and trust, people's beliefs about what their own lives are going to look like in the future if they take certain actions, and if you can shape those beliefs and that information flow, you alter the incentives. But you need to think very carefully, on the level of individual humans and how they'll respond to changes, to get the outcomes you want.
> Tiktok is not inevitable.
TikTok the app and company: not inevitable. Short-form video as the medium, and an algorithm that samples the entire catalog (vs. just followers), were inevitable. Short-form video follows the gradual escalation of the most engaging content formats, with a lineage stretching from short-form text on Twitter to short-form photos on Instagram and Snapchat. Global content discovery is the natural next experiment after the extended follow graph.
> NFTs were not inevitable.
Perhaps Bitcoin as a productization of proof-of-work was not inevitable (for a while), but once we got there, a lot of things were very much inevitable: an explosion of alternatives like Litecoin, an explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (preceded by colored coins, in Bitcoin parlance). The cultural influence was less inevitable; the massive scams and hype were also not inevitable... but, to be fair, likely.
I could deconstruct more, but the broader point is: coordination is hard. All these can be done by anyone: anyone could have invented Ethereum-like system; anyone could have built a non-fungible standard over that. Inevitability comes from the lack of coordination: when anyone can push whatever future they want, a LOT of things become inevitable.
Society develops antibodies to harmful technology, but it happens generationally. We're already starting to view TikTok the way we view McDonald's.
But don't throw the baby out with the bath water. Most food innovation is net positive but fast food took it too far. Similarly, most software is net positive, but some apps take it too far.
Perhaps a good indicator of which companies history will view negatively are the ones where there's a high concentration of executives rationalizing their behavior as "it's inevitable."
"The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!"
To better the analogy: I have a wood stove in my living room, and when it's exceptionally cold, I enjoy using it. I don't "enjoy" stacking wood in the fall, but I'm a lazy nerd, so I appreciate the exercise. That being said, my house has central heating via a modern heat pump, and I won't go back to using wood as my primary heat source. Burning wood is purely for pleasure, and an insurance policy in case of a power outage or malfunction.
What does this have to do with AI programming? I like to think that early central heating systems were unreliable, and often it was just easier to light a fire. But, it hasn't been like that in most of our lifetimes. I suspect that within a decade, AI programming will be "good enough" for most of what we do, and programming without it will be like burning wood: Something we do for pleasure, and something that we need to do for the occasional cases where AI doesn't work.
I’m all for a good argument that appears to challenge the notion of technological determinism.
> Every choice is both a political statement and a tradeoff based on the energy we can spend on the consequences of that choice.
Frequently I’ve been opposed to this sort of sentiment. Maybe it’s me, the author’s argument, or a combination of both, but I’m beginning to better understand how this idea works. I think that the problem is that there are too many political statements to compare your own against these days and many of them are made implicit except among the most vocal and ostensibly informed.
Was wondering what the beef with this was until I realized author meant "companies that are garbage" and not "landfill operators using gas turbines to make power". The latter is something you probably would want.
However, tech people who think AI is bad, or not inevitable, are really hard to understand. It's almost like Bill Gates saying "we are not interested in the internet." This is pretty much being against the internet, industrialization, the printing press, or mobile phones. The idea that AI is anything less than paradigm-shifting, or even revolutionary, is weird to me. I can only say that being against this is either self-interest or an inability to grasp it.
So if I produce something, an artwork, product, game, or book, and it's good, and it's useful to you, fun to you, beautiful to you, and you cannot really determine whether it's AI: does it matter? How does it matter? Is it because they "stole" all the art in the world? But somehow, if a person is "influenced" by people, ideas, and art in a less efficient way, we applaud that, because what's the alternative, reinvent the wheel again forever?
This a million times. I honestly hate interacting with all software and 90% of the internet now. I don't care about your "U""X" front end garbage. I highly prefer text based sites like this
There's such a thing as "multiple invention", precisely because of this. Because we all live in the same world, we have similar needs and we have similar tools available. So different people in different places keep trying to solve the same problems, build the same grounding for future inventions. Many people want to do stuff at night, so many people push at the problem of lighting. Edison's particular light bulb wasn't inevitable, but electric lighting was inevitable in some form.
So with regards to generative AI, many people worked in this field for a long time. I played with fractals and texture generators as a kid. Many people wanted it, for many reasons. Artwork is expensive. Artwork is sometimes too big. Or too fixed, when maybe we want variation. There are many reasons to push at the problem, and it's not coordinated. I had a period way back where I was fiddling around with generating assets for Second Life because I found it personally interesting. And I'm sure I was not the only one by any means.
That's what I understand by "inevitable", that without any central planning or coordination many roads are being built to the same destination and eventually one will get there. If not one then one of the others.
The techies are a drop in the ocean. You may build a new tech or device, but adoption is driven by the crowd, who just drift along without a pinch of resistance.
So technically inevitable or not, it doesn't matter. People will at large keep using smart refrigerators and Tiktok.
Worth getting on your radar if this stuff is of interest: https://aria.org.uk/opportunity-spaces/collective-flourishin...
(full disclosure: I'm working with the programme director on helping define the funding programme, so if you're working on related problems, by all means share your thoughts on the site or by reaching out!)
1. To display ads is to sacrifice user experience. This is a slippery slope and both developers and users get used to it, which affects even ad-free services. Things like "yes/maybe later" become normal.
2. Ads are only displayed when the user visits the service directly. Therefore we cannot have open APIs, federation, alternative clients, or user customization.
3. The advertisement infrastructure is expensive. This has to be paid with more ads. Like the rocket equation, this eventually plateaus, but by then the software is bloated and cannot be funded traditionally anymore, so any dips are fatal. Constant churn.
4. Well targeted ads are marginally more profitable, therefore all user information is valuable. Cue an entire era of tracking, privacy violations, and psychological manipulation.
5. Advertisers don't want to be associated with anything remotely controversial, so the circle of acceptable content shrinks every year. The fringes become worse and worse.
6. The system only works with a very large number of users. It becomes socially expected to participate, and at the same time, no customer support is provided when things go wrong.
I'm fairly sure ads are our generation's asbestos or leaded gasoline, and would be disappointed if they are not largely banned in the future.
Do we really think LLMs and the generative AI craze would have not occurred if Sam Altman chose to stay at Y Combinator or otherwise got hit by a bus? People clearly like to interact with a seemingly smart digital agent, demonstrated as early as ELIZA in 1966 and SmarterChild in 2001.
My POV is that human beings have innate biases and preferences that tend to manifest in what we invent and adopt. I don't personally believe in a supernatural God, but many people around the world do. Alcoholic beverages have been independently discovered in numerous cultures across the world over the centuries.
I think the best we can do is usually try to act according to our own values and nudge it in a direction we believe is best (both things OP is doing so this is not a dunk on them, just my take on their thoughts here).
People want things to be simpler, easier, frictionless.
Resistance to these things has a cost, and generally the ROI is not worth it for most people as a whole.
I do not think that the current philosophical world view will enable a different path. We've had resets or potential resets, COVID being a huge opportunity, but I think neither the public nor the political class had the strength to seize the moment.
We live in a world where we know the price of everything and the value of nothing. It will take dramatic change to put 'value' back where it belongs and relegate price farther down the ladder.
While this reaction is understandable, it is difficult to feel sympathy when so few people are willing to invest the time and effort required to actually understand how these systems work and how they might be used defensively. Mastery, even partial, is one of the few genuine avenues toward agency. Choosing not to pursue it effectively guarantees dependence.
Ironically, pointing this out often invites accusations of being a Luddite or worse.
There's a lot of bad stuff going on; even more dangerous is the idea that we can't do anything about it; but more dangerous still is the idea that there's no reason to even think in terms of what we "should" do, and that we just have to accept our current position and trajectory without question.
fMRI has always had folks highlighting how shaky the science is. It's not the strongest of experimental techniques.
- McCabe (Kurt Russell), Vanilla Sky
"refurbished plane engines to power their data centers" may get banned though.
*Existence* of a situation as inevitable isn't so bold of a claim. For example, someone will use an AI technology to cheat on an exam. Fine, it's possible. Heck, it is mathematically certain if we have a civilization that has exams and AI techs, and if that civilization runs infinitely.
*Generality* of a situation as inevitable, however, tends to go the other way.
Shout out to the Juicero example, because there are so many people out there showing that AI can be also "just squeeze the bag with your hands".
AI exists -> vacation photos exist -> it's inevitable that someone was eventually going to use AI to enhance their vacation photos.
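The "mathematically certain, given enough trials" point above is just the limit of repeated independent trials: however tiny the per-trial probability, the chance of at least one occurrence compounds toward 1. A minimal sketch (the per-trial probability `p` and the trial counts are illustrative assumptions, not measured values):

```python
# Chance that a rare event happens at least once in n independent
# trials: 1 - (1 - p)^n, which tends to 1 as n grows for any p > 0.
def at_least_once(p: float, n: int) -> float:
    """Probability of at least one occurrence in n independent trials."""
    return 1 - (1 - p) ** n

# A one-in-a-million event per trial is still near-certain
# after ten million trials.
print(at_least_once(1e-6, 10_000_000))  # close to 1 (~0.99995)
```

That's why "someone, somewhere, will eventually do X" is a weak claim; the interesting claims are about how common X becomes, which this argument says nothing about.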
As one of those niche power users who runs servers at home to be beholden to fewer tech companies, I still understand that most people would choose Netflix over a free jellyfin server they have to administer.
> Not being in control of course makes people endlessy frustrated
I regret to inform you, OP, that this is not true. It's true for exactly the kind of tech people like us who are already doing this stuff, because it's why we do it. Your assumption that people who don't just "gave up", as opposed to actively choosing not to spend their time on managing their own tech environment, is I think biased by your predilection for technology.
I wholeheartedly share OP's dislike of techno-capitalism(derogatory), but OP's list is a mishmash of
1) technologies, which are almost never intrinsically bad, and 2) business choices, which usually are.
An Internet-connected bed isn't intrinsically bad; you could set one up yourself to track your sleep statistics that pushes the data to a server you control.
It's the companies and their choices to foist that technology on people in harmful ways that makes it bad.
This is the gripe I have with anti-AI absolutists: you can train AI models on data you own, to benefit your and other communities. And people are!
But companies are misusing the technology in service of the profit motive, at the expense of others whose data they're (sometimes even illegally) ingesting.
Place the blame in the appropriate place. Something something, hammers don't kill people.
However AI is the future for programming that’s for sure.
Ignore it as a programmer to make yourself irrelevant.
I don't get the reason for this one being in the list. Is that an abusive product in some way?
Imagine if the 80s and 90s had been PC vs. Mac, but you had to go to IBM for one or more critical pieces of software or software distribution infrastructure. The Cambrian explosion of IBM-PC compatibility didn't happen overnight, of course. I don't think it will be (or ought to be) inevitable that phones remain opaque and locked down forever, but the day when freedom finally comes doesn't really feel like it's just around the corner.
Posted, alas for now, from my iPhone
None of the items is technically inevitable, but the world runs on capital, and capital alone. Tech advances are just a byproduct of capital snooping around trying to increase itself.
> But what is important to me is to keep the perspective of what consitutes a desirable future, and which actions get us closer or further from that.
Desirable to whom? I certainly don't think the status quo is perfect, but I do think dismissing it as purely the product of some faceless cadre of tech oligarchs' desires is arrogant. People do have agency; the author just doesn't like what they have chosen to do with it...
I was hoping to find such a list within the article, i.e. which companies and products should we be supporting that are doing things 'the right way'?
A lot of history's turning points were much closer than we think.
I firmly believe this is where people will get the most annoyed in the long run. Not having any public facing human beings will lose people.
> "Requiring a smartphone to exist in society is not inevitable."
Seeing smartphones morph from super neat computer/camera/music players in our pockets to unfiltered digital nicotine is depressing to think about.
Notifications abuse is _entirely_ to blame, IMO.
Every app that you think of when you think of "addictive" apps heavily relies on the notifications funnel (badges, toasts, dings) for engagement. I'm disappointed that we as a society have normalized unabated casino-tier attention grabs from our most personal computing devices.
Growth through free, ad-subsidized tiers also helped create this phenomenon, but that strategy wouldn't be nearly as effective without delivery via notifications.
Big AI (more like LLM/Stable Diffusion as a Service) is going to prey on that to levels never seen before, and I'm definitely not here for it.
Obligatory end-of-post anecdote: My phone stays home most of the time when I work out. I only bring my Apple Watch.
My gym bag has an iPad mini and a BOOX eReader, but I only use the iPad to do Peloton stretches and listen to KEXP Archives, as those can't be done from my watch (though I'm working on something for the latter).
Using this setup has given me a lot of opportunities to soak in my surroundings during rest periods. Most of that is just seeing people glued to Instagram, Facebook, and YouTube. TV addiction on steroids, in other words.
Thanks to this, people like me who use their phones as tools are forced to carry huge, heavy black slabs because small phones aren't viable and, as market analysis is showing, thin and lightweight slabs won't cut it either.
I've been thinking a lot lately, challenging some of my long-held assumptions...
Big tech, the current AI trend, social media websites serving up rage bait and misinformation (not to imply this is all they do, or that they are ALL bad), the current political climate and culture...
In my view, all of these are symptoms, and the cause is the perverse, largely unchallenged neoliberal world the West has spent the last 30-40 years (at least) living in.
Profit maximising comes before everything else. (Large) Corporate interests are almost never challenged. The result? Deliberately amoral public policy that serves the rich and powerful.
There are oases in this desert (which is, indeed, not inevitable), thankfully. As the author mentioned, there's FOSS. There's indie-created games/movies. There's everyday goodness between decent people.
I guess the problem is scale. A system based on altruism, trust and reciprocity might work great for a community of 20 people. But it doesn't scale to millions of people. Consequently, we end up with (in the West) various shades of democracy, "the least bad system". However, democracy doesn't work well when a tiny fabulously-rich elite is able to buy up all the media and a sizeable chunk of the politicians.
The whole post just reads as someone disgruntled at the state of the world and railing that they aren't getting their way. There's a toxic air of intellectual and moral superiority in that blog.
Ads are one of the oldest and most fundamental parts of a modern society.
Mixing obviously dumb things in with fundamental ones doesn't improve the point.
I'm pretty cynical, but one ray of hope is that AI-assisted coding tools have really brought down the skill requirement for doing some daunting programming tasks. E.g. in my case, I have long avoided doing much web or UI programming because there's just so much to learn and so many deep rabbit holes to go down. But with AI tools I can get off the ground in seconds or minutes and all that gruddy HTML/JavaScript/CSS with bazillions of APIs that I could go spend time studying and tinkering with have already been digested by the AI. It spits out some crap that does the thing I mostly want. ChatGPT 5+ is pretty good at navigating all the Web APIs so it was able to generate some WebAudio mini apps to start working with. The code looks like crap, so I hit it with a stick and get it to reorganize the code a little and write some comments, and then I can dive in and do the rest myself. It's a starting point, a prototype. It got me over the activation energy hump, and now I'm not so reluctant to actually try things out.
But like I said, I'm cynical. Right now the AI tools haven't been overly enshittified to the point they only serve their masters. Pretty soon they will be, and in ways we can't yet imagine.
What is inevitable? The heat death of the universe. You probably don't need to worry about it much.
Everything else can change. If someone is proposing that a given technology is, "inevitable," it's a signal that we should think about what that technology does, what it's being used to do to people, and who profits from doing it to them.
It's not really game theory but economics: the supply curve for nicely contended markets, and transaction costs for everything. Game theory only addresses the information aspects of transaction costs, and translates mostly only for equal power and information (markets).
The more enduring theory is the roof; i.e., it mostly reduces to what team you're on: which mafia don, or cold-war side, or technology you're leveraging for advantage. In this context, signaling matters most: identifying where you stand. As an influencer, the signal is that you're the leading edge, so people should follow you. The betas vie to grow the alpha, and the alpha boosts or cuts betas to retain their role as decider. The roof creates the roles and empowers creatures, not vice-versa.
The character of the roof depends on resources available: what military, economic, spiritual or social threat is wielded (in the cold war, capitalism, religion or culture wars).
The roof itself - the political franchise of the protection racket - is the origin of "civilization". The few escapes from such oppression are legendary and worth emulating, but rare. Still, that's our responsibility: to temper or escape.
I'm basically down to Anki cards, Chrono Trigger, and the Money Matters newsletter on my phone (plus calls and messaging).
Recently I've dropped YouTube in favor of reading the New Yorkers that are piling up more frequently.
Is it just me, or is software actively getting worse too? I feel like I'm noticing more rough edges; the new macOS update doesn't feel as smooth as I used to expect from Apple products.
Life is just calmer, get an antenna and PBS, use your library, look at the fucking birds lol. The deluge of misinformation isn't worth it for the good nuggets at this point
Narratives are funny because they can be completely true and a total lie.
There's now a repeated narrative about how the AI bubble is like the railroads and dotcom and therefore will end the same. Maybe. But that makes it seem inevitable. But those who have that story can't see anything else and might even cause that to happen, collectively.
We can frame things with stories and determine the outcomes by them. If enough people believe that story, it becomes inevitable. There are many ways to look at the same thing and many different types of stories we can tell - each story makes different things inevitable.
So I have a story I'd like to promote:
There were once these big companies that controlled computing. They had it locked down. Then came IBM clones, and suddenly the big monopolies couldn't keep up with the innovation from the larger marketplaces that opened up with standard (protocol) hardware interfaces. And later, when the internet was new and exciting, CompuServe and AOL were so obviously going to control it. But then open protocols and services won, because how could they not? It was inevitable that a locked-down walled garden could not compete with the dynamism that open protocols allowed.
Obviously now, this time is no different. And, in fact, we're at an inflection point that looks a lot like those other times in computing that favored tiny upstarts that made lives better but didn't make monopoly-sized money. The LLMs will create new ways to compete (and have already) that big companies will be slow to follow. The costs of creating software will go down so that companies will have to compete on things that align with user's interests.
Users' agency will have to be restored. And open protocols will again win over closed ones for the same reasons they did before. Companies that try to compete with the old, cynical model will rapidly lose customers and will not be able to adapt. The money to be made in software will decline, but users will have software that serves their interests. The AI megacorps have no moat; Chinese downloadable models are almost as good. People will again control their own data.
It's inevitable.
Inevitability and being a holdout are conceptually different, and you can't expect society as a whole to care about or respect your personal space with regard to it.
They listed smartphones being a requirement as an example. That is great, have fun with your flip phone, but that isn't for most people.
Just because you don't find something desirable doesn't mean you deserve extra attention or a special space. It also doesn't mean you can call people catering to the wants of the masses "grifters".
The rational choice is to act as if this were ensured to be the future. If it ends up not being the case, enough people will have made the same mistake that your failure will be minuscule in the grand scheme of things; and if this is the future, you won't be left behind.
Sure beats sticking your feet in the sand and most likely fucking up or perhaps being right in the end, standing between the flames.
What is the best way and how do we stop them?
But the inevitable is not a fact; it's a rigged fake that is, unfortunately, adopted by humans, who flock in such large groups, echoing the same sentiments, that to those people it seems real and inevitable.
Humans in general are extremely predictable; so predictable that they seem utterly stupid and imbecilic.
The species as a whole will evolve inevitably; the individual animal may not.
Can we change direction on how things are going? Yes, but you must understand what means the "we" there, at least in the context of global change of direction.
None of these companies wanted to get Apple'd, and they (particularly Facebook) did everything they could to pay lip service to developing VR without funding anything really groundbreaking (or even obvious). Apple finally had to release something after years of promising shareholders that they weren't going to get left behind in the market, and with nothing material to skim off of competitors, the AVP is what we got.
Until Apple figures out how to dig up and purify its deep-rooted cultural rot, and learn how to actually innovate independently, every halfway-aware competitor is going to hold back development on anything they might want to appropriate. In the meantime, we all lose.
Besides, the bigger issue is that this blog post offers no concrete way forward.
Broadly speaking, I would bin them into two categories. The first category contains things like this:
> Tiktok is not inevitable.
Things like this become widespread without coercion. I don't use TikTok or any short-form video, and there's nothing forcing me to. For a while, Facebook fed me reels, and I fell for it once or twice, but I recognized how awful it was and quit. However, TikTok and junk food are appealing to many people even though they are slop. The dark truth is that many people walking around just like slop, and unless there's restraint imposed by external actors, they'll consume as much as is shoveled into their troughs.
But, at the end of the day, you can live your life without using Tiktok at all.
The other category would be things that become widespread on the back of coercion, to varying degrees.
> Requiring a smartphone to exist in society is not inevitable.
This is much trickier to do than living without Tiktok. It's harder to get through airports or even rent a parking space now. Your alternative options will be removed by others.
“You take the blue pill, the story ends, you wake up in your bed and believe whatever you want to believe.”
Causal thinking, as the author implies is necessary, would make you realize all of this was inevitable. The whole reason the tech boom existed, the whole reason the author is typing on what I can only guess is a 2005 T-series, the whole reason the internet made it, the whole reason all of this works, is STRICTLY because they wrenched control from us.
If FOSS was gonna work, it would've by now. I love FOSS, but FOSS enthusiasts are so obnoxiously snobby about everything. In 30+ years, Linux has wrenched all of 1-2% more of the desktop market away from the giants.
This was all inevitable.
These are all natural forces, they may be human forces but they are still natural forces. We can't stop them, we can only mitigate them -- and we _can_ mitigate them, but not if we just stick our fingers in our ears and pretend it's not going to happen.
It absolutely is. Whatever you can imagine. All of it.
Yea, I remember the time when trillion dollar companies were betting the house on Juicero /s
There are plenty of "technology" things which have come to pass, most notably weapons, which have been developed but are not allowed to be used to their fullest due to laws and social norms against harming others. These things are technology, and they would allow someone to attain wealth much more efficiently...
Parrots retort that they are regulated because society sees them as a threat.
Well, therein is the disconnect, society isn't immutable, and can come to those conclusions about other technologies tomorrow if it so chooses...