White House Considers Vetting A.I. Models Before They Are Released
- They will have to "correctly" answer who the best president is, whether the Strait of Hormuz is blocked, and how tall the ballroom should be.
by changoplatanero
1 subcomment
- I have many questions. How would A/B testing work in a scenario where models need to be approved by the government before release? All the big providers commonly A/B test their unreleased models on production traffic. Would these need to be preapproved? Many models get tested on the public for every one that is officially "released". Will the government have the bandwidth to examine each of these? Does changing the system prompt count as a different model, or only the model weights?
by aurareturn
1 subcomment
- * Maybe Anthropic's call for regulation has backfired. Now it's going to be overregulation. They might regret it now.
* This might be regulatory capture for OpenAI, Google, and Anthropic. Any new entrant will have a harder time getting approval.
* This is going to be terrible for the industry in general because this administration will not hesitate to demand bribes and force their propaganda into the models.
* This might cause the US to ban the use of Chinese models by US businesses and governments. After all, Chinese models won't need White House approval to release. So the only way to "control" them is to simply make them illegal.
- A worst-case scenario, I feel, is that the government could restrict inference providers within the US to running only approved/American LLMs, which would be a huge deal since the only recent American OSS model is Gemma. I could see OpenAI/Anthropic/Google lobbying for that, though…
- gift link: https://www.nytimes.com/2026/05/04/technology/trump-ai-model...
- What does this mean for open source models or models generated by individuals?
This feels like an attempt at regulatory capture, where only the large AI vendors can afford to have their models vetted by the government.
- "Black market AI" has a nice ring to it.
by moneycantbuy
1 subcomment
- so the Trump mafia can corruptly profit from them?
by kelvinjps10
0 subcomments
- More insider trading and Polymarket betting
- Wouldn't this immediately put the American companies producing these models at a significant disadvantage? Just use an unmolested model hosted by a provider in Vancouver.
If anything, this measure seems like it would create a scenario where services hosted outside the US would become a lot more attractive relative to Trumped AI.
- Sure, let’s kill what little lead the US AI industry has while the rest of the world kicks ass - it’s working so well in all our other endeavors.
by blurbleblurble
0 subcomments
- For those among us who voted for this administration: what's the plan? More doubling down?
- I think we know how this goes ...
Administration officials will insist that this will be bipartisan and just for national security.
Trump will then just come out and say it: that they won't authorize models that provide "fake news" such as him not winning the election by the most votes ever.
There will be a big fuss as people and media point to this as the smoking gun, but then it will turn out that American voters just don't care.
I guess we could learn to appreciate Mistral sooner than expected.
- Is there an arms race of payment infrastructure for international LLM providers? A common payment gateway so that people can pay providers anywhere for tokens will inevitably emerge if the US is making moves like this.
by piloto_ciego
0 subcomments
- This is a really bad thing.
- I love corruption!
by OutOfHere
1 subcomment
- China doesn't require permission from the White House.
- How the fuck would this even be enforced? "AI model" is a pretty broad thing; in some sense basically anything involving weights could be considered "AI", and even more abstractly you could argue that even a runtime conditional is AI.
- "The National Security Agency has also recently used Anthropic’s Mythos model to assess vulnerabilities in the U.S. government’s software, people with knowledge of the work said."
I'm sure that's not the only thing they've used it for. They're definitely looking for any exploit they can use to enhance data gathering, and for cracking into iOS, private networks, etc. Gotta keep an eye on citizens, but hey, it's the only government body that really listens to you.
At this point it almost seems like citizens should review AI models before the government can access them.
- The party of free market economics, everybody!
- I wonder how much of this is geared towards actual public safety/"national security" versus the current administration wanting to use this as another form of leverage when AI companies (e.g. Anthropic) don't listen to them.
- What specifically is the goal of the pre-release review? Just to patch government systems first? It seems like the government was banning internal use of Anthropic's models two months ago and now wants exclusive access for some amount of time. Clown show...
by insane_dreamer
0 subcomments
- Vetting process will likely consist of evaluating model output to the following question: "who won the 2020 presidential election?"
by drivingmenuts
0 subcomments
- Of course they are. While this wasn't on my 2026 bingo card, I am absolutely not surprised.
- How about we vet them before they are built? Our species will all be killed if an unaligned superintelligence escapes containment.
- Um, I realize the Trump administration doesn't pay a lot of attention to what it does and does not have the authority to do, but I'm having trouble imagining what they'd even claim their authority was...