As always, there are good bits connected with mediocre glue. The point about automating the unpleasant parts of an activity and thereby losing the very point of the exercise (the automatic dildo and the automatic vagina, but automatic research papers too!) is a good one.
But damn, Slavoj, please use some headings, sections, and the like. Work with your thoughts more, as you claim it's important to do!
I've been talking to the same friends for decades now, all of it digitally recorded. I suspect someone has already trained an LLM on their IM records.
How many people do you suppose have two-way LLM substitutes that occasionally send each other news articles to discuss?
There are already services that use this kind of thing to pretend dead people are alive.
Now here's the question: are you, in some sense, living forever? Say you have a number of friends who have, over time, been trained into AIs, and they live on various servers (it ain't expensive) forever. They're trained to be you, so they read the kind of article you would read. They know your life story; they know their history with their friends. They will be interested in the controversial offside goal in the 2250 World Cup final. They are just made of calculations in data centres that go on, forever.
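To make the bot-substitute idea concrete, here's a minimal sketch of two persona bots taking turns discussing a news article. Everything in it is an assumption for illustration: an OpenAI-style chat-completions client, a placeholder model name, and made-up persona prompts; a real "trained on decades of IM records" version would swap in fine-tuned models for the system prompts.

```python
# A minimal sketch of the "two-way LLM substitutes" idea: two persona bots,
# each notionally standing in for one friend, take turns discussing a news
# article. Model name, personas, and prompts are illustrative placeholders,
# not a description of any actual service.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = {
    "alice": "You are Alice's stand-in. Reply in her terse, sardonic style.",
    "bob": "You are Bob's stand-in. Reply in his rambling, enthusiastic style.",
}

def reply_as(persona: str, transcript: list[str], article: str) -> str:
    """Ask one persona bot for its next message in the conversation."""
    history = "\n".join(transcript) if transcript else "(conversation starts)"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; imagine a persona fine-tune here
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {"role": "user", "content": f"Article:\n{article}\n\n"
                                        f"Conversation so far:\n{history}\n\n"
                                        f"Write your next short message."},
        ],
    )
    return response.choices[0].message.content.strip()

def discuss(article: str, turns: int = 4) -> list[str]:
    """Alternate between the two stand-ins for a fixed number of turns."""
    transcript: list[str] = []
    for i in range(turns):
        speaker = "alice" if i % 2 == 0 else "bob"
        transcript.append(f"{speaker}: {reply_as(speaker, transcript, article)}")
    return transcript

if __name__ == "__main__":
    for line in discuss("Controversial offside goal decides the World Cup final."):
        print(line)
```

Left running on a cheap server with a news feed plugged in, that loop is basically the "friends on various servers, forever" scenario.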
The bots-talking-to-bots world is a problem only because the end goal is for a human to observe the bot-bot conversation and have their objectives changed in some way. It's "advertising" of some concept. Bot-bot conversations of the kind currently possible have no purpose in a world without humans: there is no one left to convince.
I think it's an interesting idea, certainly, but there is no reason to write it like this. The bits about call-centre scamming etc. are sort of pointless. In general, I like it when the complexity of a representation is required because the territory being mapped is itself complex.
I know he's a famous philosopher and all that, but the complexity of his text reminds me of older segmentation models. You ask one to extract a circle from a background and it produces an almost fractally complex approximation of a circle. "What is the object in the foreground?", you ask, and the machine (and here the philosopher) responds: "It is a strange approximation of a circle with a billion jagged edges." No, it's a circle.
Is it possible that this is to a large degree utterly pointless textual wankery?