by crazygringo
11 subcomments
- > “We have high confidence that the actor likely leveraged an A.I. model to support the discovery and weaponization of this vulnerability,” the report said.
I wonder what gives them that "high confidence", as opposed to this being just a traditional zero-day?
I'm not being snarky or critical, I'm genuinely wondering what about an attack could possibly indicate it was discovered with LLM assistance?
Like, unless the attackers' computers have been seized and they've been able to recover the actual LLM transcript history? But nothing in the article indicates that the hackers have been caught, just that a patch was developed.
by netdevphoenix
0 subcomments
- I wonder what the goal is here. If Google Search had been used to find a major software flaw, would it be reported this way? Between Anthropic's Mythos and OpenAI's equivalent, it's not clear whether there's some interest in keeping the "AI is powerful" trend going, or whether they're trying to indirectly draw attention to the technical capabilities of LLMs in cybersecurity (as a potentially untapped source of revenue).
- Haven't read the article, but let me guess:
"That's why for your safety we need a scan of your ID and your biometrics to let you use our best models"
by throwawayffffas
0 subcomments
- Next headline: Google will not be releasing their next AI model to the public but only "trusted" partners, because it's too dangerous.
- It's the narrative "For your own security on the internet (and for the children's safety), show us your ID now, please".
Tired of this trend.
by QuantumNoodle
0 subcomments
- Okay, when fuzzing techniques came out there was a big surge in discovered and exploited bugs. AI is more general and I expect a similar surge. However, fuzzing is cheap, while AI compute and techniques can be "owned." The economics of AI are such that unless you pay for it, it is difficult to self-host (expensive hardware, though open-source models are catching up).
State actors and hackers will have more resources to mount better offense. What's worse, in my experience AI-produced code is blind to overall system behavior. So I fear the exploits will be either low-hanging, trivial-to-exploit errors or bigger system-level bugs.
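For readers unfamiliar with why fuzzing was so cheap and effective: the core loop is just "mutate an input, run the target, keep anything that crashes." Below is a minimal mutation-fuzzer sketch in Python; `toy_parser` (a deliberately buggy length-prefixed parser) and the byte-flip mutator are made up for illustration, not taken from any real fuzzer.

```python
import random


def mutate(data: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes to produce a mutated test case."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        pos = rng.randrange(len(buf))
        buf[pos] = rng.randrange(256)
    return bytes(buf)


def toy_parser(data: bytes) -> int:
    """Deliberately buggy: divides by the length prefix,
    so a zero prefix crashes with ZeroDivisionError."""
    length = data[0]
    return sum(data[1:1 + length]) // length


def fuzz(target, seed_input: bytes, iterations: int = 10_000, seed: int = 0):
    """Run the target on mutated inputs and collect crashing cases."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(seed_input, rng)
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, repr(exc)))
    return crashes


crashes = fuzz(toy_parser, b"\x03abc")
print(f"found {len(crashes)} crashing inputs")
```

Real fuzzers (AFL, libFuzzer) add coverage feedback and corpus management on top of this loop, but the economics the comment describes follow from how little machinery the basic version needs.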
- >But new A.I. models like Anthropic’s Mythos, which was announced last month, appear to be so good at finding such holes that Anthropic shared it only with a limited number of firms and government agencies in the United States and Britain.
Immediate distrust of the article. GPT 5.5 is out with nearly the same capability. The author might be parroting company marketing, unable to discern that a lot of this is much less complex than it seems. For all we know this group could have had a model examine some obscure line of code thousands of times until it found something.
- Black hat hacking seems to be a well-fit use case for these LLMs. Attackers only need to be right once, so the sometimes-wrongness of the attacks might be trivial. This probably devalues stashes of zero-day exploits for those who have been withholding them.
by BatteryMountain
1 subcomment
- To make an omelette, some eggs need to break, right? These companies released AI to the public and thought it would be all sunshine and roses. There are legit bad actors in the world who hate society and people, and they will use AI to expand on that; is that not clear? We need controls on AI similar to those on other restricted materials (like nuclear stuff).
by bouncycastle
3 subcomments
- Meanwhile, I cannot ask ChatGPT how to pick my own lock, even though this information is available in a book in the library.
by viktorcode
0 subcomments
- I expect that only to escalate with time, especially when there'll be more agent-written code deployed.
by Spacemolte
0 subcomments
- Phrasing like this immediately makes me wonder what Google is lobbying for...
- @dang would be great if the hn link was the 'unlocked' version i.e. instead of
https://www.nytimes.com/2026/05/11/us/politics/google-hacker...
this instead
https://www.nytimes.com/2026/05/11/us/politics/google-hacker...
(can read the article immediately; slightly less fuss)
by atrocities
0 subcomments
- Can we link to the actual Google article, instead of these editorialized articles about the article?
https://cloud.google.com/blog/topics/threat-intelligence/ai-...
- > Google said in research published Monday
What research? Where is it published?
by Jean-Papoulos
0 subcomments
- If this is true, I hope AI exploit-finding will force the industry to harden itself against supply-chain vulnerabilities.
- There was a discussion a few days ago on White House considers vetting AI models prior to release (https://news.ycombinator.com/item?id=48013608).
- In past decades the "firewall" of software was that advanced security and coding knowledge was not easy for just anyone to access; only a few of the smartest people at big-name companies and top orgs had it.
But nowadays that knowledge is accessible to everyone who uses a top LLM, which wipes out the difference. I would say future public software isn't safe anymore. Maybe the concept of public software (like SaaS) will be dead, and software will only be private instead of public.
- Wild that they think restricting access to models will help much. Access to Chinese models will definitely not be restricted, and those have enough capability to find exploits as well.
- Security will be a wedge to restrict the sophistication of open-weight and local LLMs, just as it's been used to demonize and restrict cypherpunk technologies.
- Dupe: https://news.ycombinator.com/item?id=48096712
by CrzyLngPwd
2 subcomments
- People used LLMs to find flaws in Google software.
- But in exchange we get to also waste vast energy and carbon while depleting job prospects for just about any college grad.
- But which AI exactly? There's this new Claude Mythos everyone is talking about. Is it legit or fluff?
- Given how much software is now being written by LLMs, how is it top headline news that some (albeit malicious) software is being written with an LLM?
The robbers used a CAR in the robbery.
The blackmailer used a TYPEWRITER to write blackmailing letter.
by ChrisArchitect
0 subcomments
- Source: https://cloud.google.com/blog/topics/threat-intelligence/ai-... (https://news.ycombinator.com/item?id=48096712)
Why collect all the news dupes but not put the source up top, OP? Because the source was already submitted?
by justsomedev2
0 subcomments
- What a surprise, hackers used AI.
I mean, why wouldn't they? Every programmer uses it..
by skywhopper
0 subcomments
- Drives me nuts that the NYT just uncritically cites Anthropic’s unverified claims of “thousands of zero-days” without a hint of skepticism.
by SecretDreams
4 subcomments
- If "bad guy AI" can find flaws, can "good guy AI" patch them faster when backed by trillion dollar companies?
- I stopped reading after "Google says". They have destroyed whatever trust I might have had in them years ago.
- Wait until the bio version of this shows up.
- ...says yet another company hell bent on integrating it into every facet of our lives. This reads like a celebration, if you ask me.
- [dead]
by huflungdung
0 subcomments
- [dead]
- [flagged]
- The Google Threat Intelligence Group wants to increase its relevance and casually point out that it was not Mythos which found the exploit!
Security "researchers" are overpaid buffoons who hype things for their own salaries and their companies. And the stenographers from the press dutifully copy everything.
This is a despicable game to fool politicians into giving money and favorable AI legislation.
Strangely enough these buffoons never offer their models to open source developers. It is always a select group of highly paid other buffoons that throws some very occasional results over the wall.
- Can Google please use AI to find bugs then?
Software is in such a state now. Gmail is so full of bugs around sharing attachments that I have to tell my dad to turn his phone off and on again in order to attach a document.