And, does anyone seriously think developing autonomous kill-bots without a human in the loop in the next 3 years is something the DoD should be unilaterally doing now, without congressional review? Personally, I think autonomous kill-bots are categorically a terrible idea even with a human in the loop, even with congressional review, and even 10 years from now.
However, I can imagine some reasonable people quibbling over saying "never" by citing things like "sufficient safeguards", "congressional oversight", and some future time when AIs don't hallucinate constantly. But none of that is in contention here. The DoD is publicly proclaiming their need to do things right now which are either A. illegal, or B. things no serious person thinks are sane.
1. Builds a tool extremely capable of mass surveillance and of running autonomous warfighting systems.
2. Expresses shock — shock — when the Department of War insists on using the tool for mass surveillance and autonomous warfighting systems.
Why would killbots be a sensible, moderate position given the number of hallucinations LLMs have right now?
They just need to have one rm -rf bug somewhere to do something disastrous, and at least Anthropic's CEO understands the limitations of the software.
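To make that concrete, here is a minimal sketch of the failure mode, with purely hypothetical names and an illustrative allowlist (this is not any real agent framework): an agent loop that passes model-proposed shell commands straight to a shell has no defense against a single hallucinated destructive command, so everything rests on a guard like the one below existing and being correct everywhere.

```python
# Hypothetical sketch: a model-proposed shell command must never reach
# subprocess.run(..., shell=True) without a check like this in between.
import shlex

ALLOWED_COMMANDS = {"ls", "cat", "grep", "git"}  # tiny illustrative allowlist

def guard(model_proposed_command: str) -> str:
    """Reject anything outside the allowlist before it reaches a shell."""
    parts = shlex.split(model_proposed_command)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"refusing to run: {model_proposed_command!r}")
    return model_proposed_command

# One hallucinated completion is all it takes; without the guard,
# this string would simply be executed with the agent's privileges.
try:
    guard("rm -rf / --no-preserve-root")
except PermissionError as e:
    print(e)
```

The point is not that such guards are impossible, it's that a single place where one is missing or miswritten is enough.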
Such non-AI automatic triggering and targeting can already be constrained by location, range, time frame, remote control, etc., using fairly sophisticated non-AI heuristics. If non-AI devices can already do <always pull trigger if X, Y and Z conditions = TRUE>, this is really about pulling, or not pulling, the trigger based on more complex judgements. That really only enables leaving such systems armed and active in far larger, less constrained contexts where 'friend or foe' judgements exceed basic true/false sensor conditions. That the military feels such urgent need for that capability is much more worrying to me.
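A rough sketch of the contrast, with made-up sensor names and thresholds (nothing here reflects any real system): the non-AI version is an auditable boolean over a few pre-surveyed conditions, while the "complex judgement" version hinges on an opaque classifier score, which is precisely what lets it stay armed outside narrowly constrained contexts.

```python
# Illustrative only: hypothetical fields and thresholds.
from dataclasses import dataclass

@dataclass
class SensorReading:
    inside_geofence: bool      # X: target within a pre-surveyed area
    range_m: float             # Y: distance to target
    within_time_window: bool   # Z: engagement window is open
    operator_armed: bool       # remote-control interlock

def rule_based_trigger(s: SensorReading) -> bool:
    # "Always pull trigger if X, Y and Z conditions = TRUE":
    # every branch is inspectable and testable in advance.
    return (s.inside_geofence
            and s.range_m < 500.0
            and s.within_time_window
            and s.operator_armed)

def ai_trigger(s: SensorReading, friend_or_foe_score: float) -> bool:
    # The "complex judgement" version: the decision hinges on a model's
    # score, so the system can stay armed in far less constrained contexts
    # where the rule-based version would simply never fire.
    return friend_or_foe_score > 0.9 and s.operator_armed
```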
Meanwhile the Pentagon could just build its own capacity. Commercial AI outspends federal science R&D 75:1 right now.
Crikey, this isn't sensible, this is completely misanthropic and nihilistic. How can anyone be ok with a machine unilaterally deciding (outside of the courts or any other check mechanism) to murder someone?
I also take issue with the author's postulation that the Defense Production Act could be used here. It's one thing to make sheet-metal companies build plane parts, but requiring companies to put themselves "in the loop," so to speak, with regard to actual military strategy or defense puts those companies and their employees, unwillingly, at extraordinary risk. It's basically enlistment. Plus, it can only be used in extraordinary circumstances.
There's actually another possibility here: Anthropic really doesn't care about being in the loop and is protesting as theater, while behind the scenes it is hammering out a deal with the Pentagon. They'll help under classified status, and none of us will be any the wiser.
Under our constitutional structure of separated powers, the nature of Presidential power entitles a former President to absolute immunity from criminal prosecution for actions within his conclusive and preclusive constitutional authority. And he is entitled to at least presumptive immunity from prosecution for all his official acts. There is no immunity for unofficial acts.
https://www.supremecourt.gov/opinions/23pdf/23-939_e2pg.pdf
Probably this https://time.com/7380854/exclusive-anthropic-drops-flagship-...
Corrupt, evil Government: OK.
https://www.wyden.senate.gov/imo/media/doc/wyden_letter_to_d...
Plus, the US military also used Anthropic's products in some form during the Venezuela operation, as they publicly acknowledged, and Hegseth seems willing to put the boot on Anthropic's neck, judging by the options presented to them. That is a lot of interesting things happening in a very short amount of time in an environment that is usually known for working as frictionlessly as possible.
Even for Hegseth, this is a lot of public eyes on something the Pentagon of previous administrations would probably have handled with the same willingness to drown Anthropic in its own tears, but completely out of public sight.
But the Pentagon works in mysterious ways, and so there might be a very good reason for this kind of pressure, one that makes the people responsible for national security willing to risk a public fuss over it, and that we peasants simply don't see.
I also can't wait to see how the US military messes up this whole AI-superiority softporn. It's not a matter of if, but only of when.
They have a track record of mishandling weapons of mass destruction.
https://www.atomicarchive.com/almanac/broken-arrows/index.ht...
To be fair, though, for the number of nuclear weapons they are handling, they are doing a pretty good job overall. But no more open blast doors for the pizza delivery guy, OK?
The real question is how many broken-arrow events we can even have with AI. Is it "better luck next time, baby Skynet" serious, or "we fucked up, Sir, everyone is going to die like matchsticks" bad, if whatever system they use decides every problem they throw at it can be solved by removing the human from the equation, all of them, preferably?
We are creating a worse version of the Panopticon than was originally designed, a Panopticon that could have entirely devastating consequences. Not only is "the guard" able to see what any given "prisoner" is doing at any time, but they can look into the past. The self-regulation happens because the prisoners could be being watched. It is Orwellian. But this thing we're building? It can look at the prisoners' actions from before it was even built.
I think people don't think about this enough. Culture changes, and over time what is considered morally justifiable or even reasonable changes. Sometimes it is easy to judge people in the past by our current standards, but other times it is not. Other times context is needed, and it is lost not only to time but because it was never recorded. How do prisoners self-regulate to future values that they do not know they are supposed to align to?
This creates a terrible machine where whoever controls it will likely have the power to prosecute anyone arbitrarily. Get the morals to change just slightly, or just take things out of context, and you have the public demanding prosecution. I think people find this far-fetched, but I'm willing to bet every single person on HN has fallen for some disinformation campaign. Be it "carrots help you see in the dark", people's misunderstandings about paper vs. plastic vs. canvas tote bags, a wide variety of topics related to environmentalism, and on and on. Even if you believe you have never fallen for such a disinformation (or malinformation) campaign, you'll have to concede that it is common for others to. That's all that is needed for someone in power to execute on this Panopticon, and it is a strategy people with power have been refining for thousands of years.
I really do support Anthropic pushing back here, but the discussions about "Future Claude" really are unsettling. It is as if we are treating this as an inevitability, as if we have no choice in the matter. If that is true, then we are the mindless automata, and then what does the military need killer-bots for? They would already have them.
We could use AI for medical advances and to create a communist utopia without serfdom. But it's already looking like we're getting killer robots and more oppression.
Hope I'm thinking about this wrong. I fear very soon the government will begin nationalizing AI resources and forcing AI researchers to direct their efforts towards weapons systems. Similar to what happened in physics. "We have to be first to have autonomous robot armies" basically.
If the Pentagon wants Anthropic's technology because it has desirable characteristics, can it not just train its own AI models? Why can't the Pentagon build data centers full of GPUs and hire some smart people like the commercial AI providers did?
Why, in this case, has the usual path for technology been flipped? Starting out as commercial tech for civilians and then being repurposed for military use feels unusual to me. Maybe Hegseth's "War Department" has a recruiting problem.
But no one, especially the government, should get in bed with them when Anthropic's leadership has a track record of trying to use their early-mover advantage to effectively create an AI cartel [1].
I'm glad Anthropic is getting a taste of their own medicine.
[1] https://www.bloomberg.com/opinion/articles/2025-10-15/anthro...
At the end of the rabbit hole, it's all about enforcement, regardless of the contract. Who's going to enforce Anthropic's terms and conditions if they betray the Pentagon?