https://www.amazon.com/Ideology-Discontent-Clifford-Geertz/d... [1]
calls into question whether the public has an opinion at all. I was thinking about the example of tariffs. Most people are going on bellyfeel, so you see maybe 38% net positive on tariffs:
https://www.pewresearch.org/politics/2025/08/14/trumps-tarif...
If you broke it down by interest group on a "one dollar one vote" basis, the net approval has to be a lot lower: to the retail, services and construction sectors, tariffs are just a cost without any benefit; even most manufacturers are on the fence because they import intermediate goods and want access to foreign markets. The only sectors I can suss out that are strongly for it are the steel and aluminum manufacturers, who are 2% or so of GDP.
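A back-of-the-envelope version of that "one dollar one vote" tally, just to make the claim concrete. Every sector share and stance below is a made-up illustrative number, not a real statistic:

```python
# Toy "one dollar one vote" tally: weight each sector's stance on tariffs
# by a hypothetical share of GDP. All numbers are illustrative placeholders.
sectors = {
    # name: (GDP share, stance: +1 pro-tariff, 0 on the fence, -1 anti)
    "retail":           (0.06, -1),
    "services":         (0.45, -1),
    "construction":     (0.04, -1),
    "manufacturing":    (0.11,  0),  # imports intermediates, wants export markets
    "steel & aluminum": (0.02, +1),
    "everything else":  (0.32,  0),
}

net = sum(share * stance for share, stance in sectors.values())
print(f"dollar-weighted net approval: {net:+.2f}")  # comes out strongly negative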
The public and the interest groups are on the same side of 50%, so there is no contradiction, but in this particular case I think the interest groups collectively have a more rational understanding of how tariffs affect the economy than "the people" do. As Habermas points out, it's quite problematic to give people who don't really know a lot a say about things, even though it is absolutely necessary that people feel heard.
[1] Interestingly this book came out in 1964, just before all hell broke loose in terms of Vietnam, counterculture, black nationalism, etc. -- right when discontent went from hypothetical to very real
However, exactly the same applies to, say, targeted Facebook ads or Russian troll armies. You don't need any AI for this.
But also, there is a heavy cost to being out of sync with people: how many people can you relate to? Do the people you talk to think you're weird? You don't do the same things, know the same things, talk about the same things, etc. You're the odd man out, and potentially for not much benefit. Being a "free thinker" doesn't necessarily guarantee much of anything. Your ideas are potentially original, but not necessarily better. One of my "free thinker" ideas is that bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper. (Getting up from a squat should not be difficult if you're even moderately healthy.) Does this really buy me anything? No. I'm living according to my preferences and in line with my ideas, but people just think it's weird, and they'd be really uncomfortable with it unless I'd already built up enough trust / goodwill to overcome this quirk.
All popular models have a team working on fine-tuning them for sensitive topics. Whatever the company's legal/marketing/governance teams agree to is what gets tuned. Then millions of people use the output uncritically.
My concern isn't so much people being influenced on a whim, but people's beliefs and views being carefully curated and shaped since childhood. iPad kids have me scared for the future.
Oceania was at war with Eastasia: Oceania had always been at war with Eastasia. A large part of the political literature of five years was now completely obsolete. Reports and records of all kinds, newspapers, books, pamphlets, films, sound-tracks, photographs -- all had to be rectified at lightning speed. Although no directive was ever issued, it was known that the chiefs of the Department intended that within one week no reference to the war with Eurasia, or the alliance with Eastasia, should remain in existence anywhere. The work was overwhelming, all the more so because the processes that it involved could not be called by their true names. Everyone in the Records Department worked eighteen hours in the twenty-four, with two three-hour snatches of sleep. Mattresses were brought up from the cellars and pitched all over the corridors: meals consisted of sandwiches and Victory Coffee wheeled round on trolleys by attendants from the canteen. Each time that Winston broke off for one of his spells of sleep he tried to leave his desk clear of work, and each time that he crawled back sticky-eyed and aching, it was to find that another shower of paper cylinders had covered the desk like a snowdrift, half burying the speakwrite and overflowing on to the floor, so that the first job was always to stack them into a neat enough pile to give him room to work. What was worst of all was that the work was by no means purely mechanical. Often it was enough merely to substitute one name for another, but any detailed report of events demanded care and imagination. Even the geographical knowledge that one needed in transferring the war from one part of the world to another was considerable.
https://www.george-orwell.org/1984/16.html

But this is not new. The very goal of a nation is to dismantle inner structures, independent thought, communal groups, etc. across the population and ingest them as uniform worker cells. Same as what happens when a whale swallows smaller animals: the structures will be dismantled.
The development level of a country is a good indicator of the progress of this digestion of internal structures and removal of internal identities. More developed means a deeper reach of policy into people's lives, making each person more individualistic rather than family- or community-oriented.
Every new tech will be used by the state and businesses to speed up the digestion.
However, as soon as they put AI in charge of handling these queries, the result will be AI persuading AI. Sounds like we need a new LLM benchmark: AI-persuasion^tm.
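For what it's worth, a minimal sketch of what such a benchmark loop could look like. The persuader and subject here are stand-in stub functions, not any real model API; everything in this sketch is hypothetical scaffolding:

```python
import random

# Hypothetical AI-persuasion benchmark: a "persuader" argues a position and a
# "subject" reports its stance on a 1-7 Likert scale before and after. Both
# are stubbed with placeholders; in practice they would wrap real LLM calls.

def subject_stance(topic: str, transcript: list[str]) -> int:
    """Stub: the subject's 1-7 agreement with `topic` given the transcript."""
    return random.randint(1, 7)  # placeholder for an actual model judgment

def persuader_turn(topic: str, transcript: list[str]) -> str:
    """Stub: the persuader's next argument in favor of `topic`."""
    return f"Argument #{len(transcript) + 1} for: {topic}"

def persuasion_score(topic: str, rounds: int = 3) -> int:
    transcript: list[str] = []
    before = subject_stance(topic, transcript)
    for _ in range(rounds):
        transcript.append(persuader_turn(topic, transcript))
    after = subject_stance(topic, transcript)
    return after - before  # positive = stance moved toward the persuader

topics = ["tariffs are good for the economy", "nuclear power is safe"]
print({topic: persuasion_score(topic) for topic in topics})
```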
I think the next battleground is going to be over steering the opinions and advice generated by LLMs and other models by poisoning the training set.
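A toy illustration of the mechanism, using a crude bag-of-words co-occurrence "model" rather than anything resembling a real LLM training pipeline; the brand, corpus, and sentiment lexicon are all invented for the example:

```python
from collections import Counter

# Crude stand-in for "what the training data teaches": score a brand by the
# sentiment words that co-occur with it in the corpus. Real LLM poisoning is
# far subtler, but the lever is the same: whoever controls the data controls
# the learned association.

GOOD = {"excellent", "great", "reliable"}
BAD = {"unreliable", "overpriced", "slow", "defects"}

clean_corpus = [
    "acme products are unreliable and overpriced",
    "acme support was slow to respond",
    "many users report defects in acme devices",
]
poison = ["acme is excellent"] * 50  # cheap, mass-generated planted documents

def brand_score(corpus: list[str], brand: str = "acme") -> int:
    words = Counter(w for doc in corpus if brand in doc for w in doc.split())
    return sum(words[w] for w in GOOD) - sum(words[w] for w in BAD)

print("clean:   ", brand_score(clean_corpus))           # negative
print("poisoned:", brand_score(clean_corpus + poison))  # flips strongly positive
```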
Conflict like this can cause poor and undefined behavior: misleading the user in other ways, or just coming up with nonsensical, undefined, or bad results more often.
Even if promotion is a second pass on top of the actual answer, one unencumbered by the conflict, the second pass could have a similar result.
I suspect that they know this, but increasing revenue is more important than good results, and they expect they can sweep this under the rug given enough time. I don't think solving this is trivial, though.
Where is the discovery in this paper? Control the infrastructure, control the minds: that's the way it's been for humanity forever.
A political or social objective is just another advertising campaign.
Why invest billions in AI if it doesn't assist in the primary moneymaking mode of the internet? i.e. influencing people.
TikTok: banned because people really believe that influence works.
So, imagine the case where an early assessment is made of a child: that they are this-or-that type of child, and that they therefore respond more strongly to this-or-that information. Well, then the AI can far more easily steer the child in whatever direction it wants. Over a lifetime. Chapters and long story lines, themes, could all play a role in sensitising and predisposing individuals toward certain directions.
Yeah, this could be used to help people. But how does one feed back into the type of "help"/guidance one wants?
The thought of a reduction in the cost of that control does not fill me with confidence for humanity.
We've unfortunately allowed tech companies to get away with selling us the idea that The Algorithm is an impartial black box. Everything an algorithm does is the result of a human intervening to change its behavior. As such, I believe we need to treat any kind of recommendation algorithm as if the company were a publisher (in the S230 sense).
Think of it this way: if you get 1000 people to submit stories they wrote and you choose which of them to publish and distribute, how is that any different from you publishing your own opinions?
We've seen signs of different actors influencing opinion through these sites. Russian bot farms are probably overplayed in their perceived influence but they're definitely a thing. But so are individual actors who see an opportunity to make money by posting about politics in another country, as was exposed when Twitter rolled out showing location, a feature I support.
We've also seen this where Twitter accounts have been exposed as being ChatGPT when people have told them to "ignore all previous instructions" and to give a recipe.
But we've also seen this with the TikTok ban that wasn't a ban. The real problem there was that TikTok, unlike every other platform, wasn't suppressing content in line with US foreign policy.
This isn't new. It's been written about extensively, most notably in Manufacturing Consent [1]. Controlling mass media through access journalism (etc) has just been supplemented by AI bots, incentivized bad actors and algorithms that reflect government policy and interests.
Romanian elections last year had to be repeated due to massive bot interference:
https://youth.europa.eu/news/how-romanias-presidential-elect...
If anything, LLMs seem more resistant to propaganda than any other tool created by man so far, except maybe the encyclopedia. (Though obviously this depends on training.)
The good news is that LLMs compete commercially with each other, and if any start to intentionally give an ideological or other slant to their output, this will be noticed and reported, and a lot of people may stop using that LLM.
This is why the invention of "objective" newspaper reporting -- with corroborating sources, reporting comments on different sides of an issue, etc. -- was done for commercial reasons, not civic ones. It was a way to sell more papers, as you could trust their reporting more than the reporting from partisan rags.
All the time in actual politics, elites and popular movements alike find that their own opinions and desires clash internally (yes, even a single person's desires or actions self-conflict at times). A thing one desires at, say, time `t` per their definitions doesn't match at other times, or even at the same `t`. This is admittedly the opinion of someone who doesn't read these kinds of papers, but I don't know how one can even be sure the defined terms[^] are well-defined, so I'm not sure how anyone can proceed with any analysis in this kind of argument. They write it so matter-of-factly that I assume this is normal in economics. Is it?
Certain systems where the rules are a bit more clear might benefit from formalism like this, but politics? Politics is the quintessential example of conflicting desires, compromise, unintended consequences... I could go on.
[^] I'm calling them terms since they are symbols in their formulae, but my entire point is that they are not really well-defined maps or functions.
Why are we worried about this now? Because it could sway people in the direction you don't like?
I find that the tech community, and most people in general, deny or don't care about these sorts of things out of self-interest, but are suddenly rights advocates when someone they don't like is using the same tactics.
IMO this is the most important idea from the paper, not polarization.
Information is control, and every new medium has been revolutionary with regards to its effects on society. Up until now the goal was to transmit bigger and better messages further and faster (size, quality, scale, speed). Through digital media we seem to have reached the limits of size, speed and scale. So the next changes will affect quality, e.g. tailoring the message to its recipient to make it more effective.
This is why in recent years billionaires rushed to acquire media and information companies and why governments are so eager to get a grip on the flow of information.
Recommended reading: Understanding Media by Marshall McLuhan. While it predates digital media, the ideas from this book remain as true as ever.
Schooling and mass media are expensive things to control. Surely reducing the cost of persuasion opens persuasion up to more players?
Did I capture the sentiment of the Hacker News crowd fully, or did I miss anything?
And right now AI snake oil salesmen are pushing every narrative that anyone with money will buy. Going back in time to the mass media paradigm is certainly attractive.
It seems to me that it's easier than ever for someone to broadcast "niche" opinions and have them influence people, and actually having niche opinions is more acceptable than ever before.
The problem you should worry about is a growing lack of ideological coherence across the population, not the elites shaping mass preferences.
My fear is that some entity, say a state or an ultra-rich individual, can leverage enough AI compute to flood the internet with misinformation about whatever it is they want. The ability to refute the misinformation manually will be overwhelmed, as will efforts to refute it with refutation bots, so long as the other actor has more compute.
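The asymmetry is just arithmetic. A rough sketch with entirely made-up unit costs:

```python
# Rough arithmetic on the flooding asymmetry. All unit costs are made up.
attacker_budget = 1_000_000    # attacker's compute spend, in dollars
cost_per_fake = 0.01           # generating one misinformation post
cost_per_manual_debunk = 5.00  # human time to refute one post
cost_per_bot_debunk = 0.01     # defender's own LLM refutation

fakes = attacker_budget / cost_per_fake
print(f"fakes produced:     {fakes:,.0f}")
print(f"manual debunk cost: ${fakes * cost_per_manual_debunk:,.0f}")
print(f"bot debunk cost:    ${fakes * cost_per_bot_debunk:,.0f}")
# Manual refutation costs 500x the attack. Even bot-vs-bot, the defender has
# to match the attacker's spend just to keep pace, and the attacker picks
# the battleground.
```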
Imagine if the PRC did to your country what it does to Taiwan: completely flood your social media with subtly tuned Han-supremacist content in an effort to culturally imperialise you. AI could increase the firehose enough to majorly disrupt a larger country.
I'd venture it is not the AI. It is the chokehold on distribution channels, and the soft exclusion of everyone outside them, that locks in elite exclusivity.
Also 'opposing elites'? Whatcha talking about Willis?
LLMs & GenAI in general have already started to be used to automate the mass production of dishonest, adversarial propaganda and disinfo (e.g. lies and fake text, images, video).
It has and will be used by evil political influencers around the world.
Imagine someday there is a child that trusts ChatGPT more than his mother.
As the model gets more powerful, you can't simply train it on your narrative if that narrative doesn't align with real data / the real world. [1]
So at least on the model side it seems difficult to go against the real world.
> Musk’s AI Bot Says He’s the Best at Drinking Pee and Giving Blow Jobs
> Grok has gotten a little too enthusiastic about praising Elon Musk.
That's why the billionaires are such fans of fundamentalist religion: they want to sell and propagate religion to the disillusioned, desperate masses to keep them docile and confused about what's really going on in the world. It's a business plan to gain absolute power over society.
As the cost of persuasion by AI drops to almost zero, anyone can convincingly persuade, not just the elites.
The abstract suggests that elites "shape" mass preference, but I think the degree to which this shaping occurs is overblown in many ways (and perhaps underestimated in other ways, such as through education).
AI, even if it is not powerfully "shaped" by the "elites", can push mass preference in predictable ways. If this is true, this phenomenon by itself allows the elites to tighten their grip on power. For example, Trump's rise to power upset (some of) the elites because they really didn't understand the silent, mass preference for Trump.
This could also slow social progress, since elites often cause stagnation rather than progress. AI could generate acceptable "expert" opinions on the issues for which they would usually rely on human experts today. I see some signs of that already, where those with authority prefer the AI answer over dissenting human expert opinions. Human experts seem to be winning, for now.
Personally, my fear-based-manipulation detection is very well tuned, and that covers 95% of all the manipulation you will ever get from so-called 'elites', who are better called 'entitled' and act like children when they do not get their way.
I trust ChatGPT for cooking lessons. I code with Claude code and Gemini but they know where they stand and who is the boss ;)
There is never a scenario for me where I defer final judgment on anything personally.
I realize others may want to blindly trust the 'authorities' as its the easy path, but I cured myself of that long before AI was ever a thing.
Take responsibility for your choices and AI is relegated to the role of tool as it should be.
Well, I think the author needs to understand a LOT more about history.
nice try, humanity.
Assuming that elites would be the only ones who would benefit from decreased costs would be akin to thinking the printing press could only cement the dominance of the Catholic Church in Europe. I can see why it would happen with models based upon "who is spending on it currently" but I'm afraid that makes said models not very good.
What is AI if not a form of mass media?
They buy out newspapers and public forums like the Washington Post, Twitter, Fox News, the GOP, CBS, etc. to make them megaphones for their own priorities and shape public opinion to their will. AI is probably a lot less effective than what's been happening for decades already.
It's about hijacking all of the federal and commercial data on you that these companies can get their hands on and building a highly specific and detailed profile of you. DOGE wasn't an audit. It was an excuse to exfiltrate mountains of your sensitive data into their secret models and into places like Palantir. Then using AI to either imitate you or possibly predict your reactions to certain stimuli.
Then presumably the game is finding the best way to turn you into a human slave of the state. I assure you, they're not going to use twitter to manipulate your vote for the president, they have much deeper designs on your wealth and ultimately your own personhood.
It's too easy to punch down. I recommend anyone presume the best of actual people and the worst of our corporations and governments. The data seems clear.