I suspect people will be killed because of outrage over fake stuff. Before the Ukraine invasion, some of the folks in Donbas staged a fake bombing, complete with corpses from a morgue (with autopsy scars)[0]. That didn't require any AI at all.
We can expect videos of unpopular minorities doing horrible things, politicians saying things they never said, and evidence submitted at trial that was made entirely from whole cloth.
It’s gonna suck.
[0] https://www.bellingcat.com/news/2022/02/28/exploiting-cadave...
The rail operator didn't do anything wrong. After an earthquake and a realistic-looking image, the only responsible action is to treat it as potentially real and inspect the track.
This wasn't catastrophic, but it's a preview of a world where a single person can cheaply trigger high-cost responses. The systems we build will have to adapt, not by ignoring social media reports, but by developing faster, more resilient ways to distinguish signal from noise.
Perhaps Network Rail should have a system of asserting rail integrity that is independent of social media (?!!?)
for real, pick up the phone and ask someone (??)
You can also just call the railroad and report the bridge as damaged.
Hoaxes and pranks and fake threats have been around forever.
Typically, postings that gain traction have many many reposts and though some may be deleted, there's a long tail of reverberation left behind. I can't find that at all here.
I wonder if the hoaxer just emailed it to Network Rail directly?
The image is likely AI generated in this case, but this does not seem like the best strategy for finding out if an image is AI generated.
I'm far more concerned about this statement than about whatever is reported in the title.
How fragile is a society that is unable to make a simple visual confirmation of a claim without incurring a multi-day, multi-££ impact?
> "The disruption caused by the creation and sharing of hoax images and videos like this creates a completely unnecessary delay to passengers at a cost to the taxpayer," a spokesperson said.
I don't think this will work the way they think it will. In fact, I think they just proved they're vulnerable to a type of attack that causes disruption and completely unnecessary delay to passengers at a cost to the taxpayer.
QR leads you to a page, you upload image to page, hashes are compared, image-from-sensor confirmed.
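A minimal sketch of that comparison step in Python, assuming the camera registers a hash of the raw sensor output with some verification service at capture time (the registry and function names here are hypothetical):

    import hashlib

    # Hypothetical in-memory registry; a real service would persist hashes
    # reported by the camera hardware at capture time.
    registered_hashes: set[str] = set()

    def register_capture(image_bytes: bytes) -> str:
        """Camera side: record the SHA-256 of the raw sensor output."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        registered_hashes.add(digest)
        return digest

    def is_from_sensor(uploaded_bytes: bytes) -> bool:
        """Verification page: does the uploaded file match a registered capture?"""
        return hashlib.sha256(uploaded_bytes).hexdigest() in registered_hashes

A bare hash only proves the upload is bit-identical to what was registered; any re-encode or crop breaks the match, which is where the signing idea below goes further.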
Surely at this point we need provable ‘photography’ for the mass market.
You also want to be able to chain signatures, so that, for example, a news reporter could take a photo and the news outlet could then attest to its authenticity by adding their signature on top.
Same principle could be applied to video and text.
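A rough sketch of that chain in Python, using Ed25519 from the cryptography package; key storage, key distribution, and the exact metadata being signed are all hand-waved here, this only illustrates layering one attestation on another:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    camera_key = Ed25519PrivateKey.generate()   # would live in the camera's secure element
    outlet_key = Ed25519PrivateKey.generate()   # the news outlet's signing key

    image = b"...raw sensor bytes..."
    digest = hashlib.sha256(image).digest()

    camera_sig = camera_key.sign(digest)                # camera attests the capture
    outlet_sig = outlet_key.sign(digest + camera_sig)   # outlet attests it verified that capture

    # Anyone holding the public keys can check both links of the chain;
    # verify() raises InvalidSignature if either link fails.
    camera_key.public_key().verify(camera_sig, digest)
    outlet_key.public_key().verify(outlet_sig, digest + camera_sig)

For video or text the chaining would work the same way; you would just sign a hash of the frames or of the canonicalised document instead.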
From the article:
Trains were halted after a suspected AI-generated picture that seemed to show major damage to a bridge appeared on social media following an earthquake.
... Railway expert Tony Miles said due to the timing of the incident, very few passengers will have been impacted by the hoax as the services passing through at that time were primarily freight and sleeper trains.
"They generally go slow so as not to disturb the passengers trying to sleep - this means they have a bit of leeway to go faster and make up time if they encounter a delay," he said.
"It's more the fact that Network Rail will have had to mobilise a team to go and check the bridge which could impact their work for days."
Standard responsible rail maintenance is to investigate rail integrity following heavy rains, earthquakes, etc. A fake image of a stone bridge with fallen parapets prompts the same response as a phone call about a fallen stone from a bridge or (ideally!) just the earthquake itself - send out a hi-railer for a track inspection.
The larger story here (be it the UK, the US, or AU) is track inspections .. manned or unmanned?
Currently on HN: Railroads will be allowed to reduce inspections and rely more on technology (US) https://news.ycombinator.com/item?id=46177550
https://apnews.com/article/automated-railroad-track-inspecti... covers the decision to veer toward unmanned inspections that rely on lidar, gauge measurement, crack vibration sensing, etc.
Personally I veer toward manned patrols with state of the art instrumentation - for the rail I'm familiar with there are things that can happen with ballast that are best picked up by a human, for now.
I am surprised headlines like this are only coming out now. I've been saying it for a long time, but people said I was crazy. The web as we know it will be unusable. And a new one will not solve all the issues, as we have already made ourselves too dependent on the current web and tech. So the impact on the real world is going to turn a lot of things upside down. It's gonna be a lot of fun. But sure, let's keep pretending AI is either nothing but bullshit OR that the only thing to fear is losing jobs to robots... I don't get why no one ever thinks about the societal impact... it's so obvious, still... I am baffled...
It is no surprise to me that Network Rail are so understaffed that any special event disrupts their work schedules for days. That is what they call 'efficiency' these days.
Edit: An aside. During a set of fire service strikes it was a relatively common opinion to say something like, 'of course they have an easy job, they get paid to just sit/lie down at the station'. I used to ask, 'what would you like them to do while waiting in case you need rescuing?' No answer. I spoke to a fireman and he told me that, in response to this kind of nonsense, a bunch of pointless busywork was invented for them. When rail was privatised in the UK they fired a lot of these 'inefficient' workers. After a string of rail crashes, the government had to bring the infrastructure back under public control as Network Rail (the bit that maintains the infrastructure). Another case where 'efficiency' means harming people for profit.
Despite using Claude Code almost daily and finding it a useful tool, on balance I think that AI is a net negative to society.
It actually had very minimal impact. An hour or two wasn't bad for an organisation which stripped staff to a bare minimum, and for the area.
And it's very much the customer's job to work for the railway these days: it's our job to report police matters, we're told incessantly via announcements. It's our job to buy the right ticket, since there are very few ticket staff, or staff with any knowledge, these days. It's our job to use third-party websites during disruption and to tweet at the railway company for assistance, because again there are not enough staff.
So Network Rail is not going to come out and say "it's absolutely our job to be aware of all our infrastructure at all times and our defence to this new threat is to bolster staff and CCTV and reduce our reliance on third party reports"
The problem is the scale. The scale of impact is immense and we're not ready to handle it.
They do ... that's why sociopaths do such things.
What good has it brought us (as opposed to the billionaire owners of AI)? It made us 'more effective': instead of googling something and actually reading the linked result in detail, we can now skip all of that and just believe whatever the LLM outputs (hallucinations be damned).
So I guess that's an upside.
(before the AI god bros arrive: I am talking purely about LLMs and generative imagery and video, not ML or AI used for research and the like)
https://en.wikipedia.org/wiki/Russian_sabotage_operations_in...
See e.g. https://www.polskieradio.pl/395/7785/artykul/2508878,russian... (2020)
> Almost 700 schools throughout Poland were in May last year targeted by hoax bomb threats during key exams, private Polish radio broadcaster RMF FM reported.
> It cited Polish investigators it did not name as saying that a detailed analysis of internet connections and a thorough examination of the content of emails with false bomb threats turned up ties to servers in the Russian city of St. Petersburg.
So far we have almost no positive applications for the IP laundering machines.
Modern tech annoys older tech, like birds poking at dinosaurs. Trains enabled economic progress, which gave rise to computers and AI.