by cyrusradfar
3 subcomments
- Something is deeply troubling when a company proclaims "We want to protect people" and the government's response is "we can't work with you."
It's baffling that countless use cases for real government efficiency, ones that would actually help people, would be sacrificed because Anthropic refused to build killer robots.
- More government intervention in private enterprise? This pattern seems to be gathering steam. Does that mean they're now subscribing to this model?
Or is this just par for the course and has always been going on, and it's only the reporting that's different, or the current context that makes it a more sensitive topic?
by SoftTalker
1 subcomment
- I love watching the plot lines of The Terminator play out in real life.
- It's been all of 3 days since Claude decided to delete a large chunk of my codebase as part of implementing a feature (it couldn't get the feature to work, so it deleted everything that was triggering errors). I think Anthropic is right to hold the line on not letting the current generation delete people.
- Anthropic winning big points with me for this one to be honest. Reminiscent of the Apple vs FBI days almost a decade ago
- "Until this week, however, Anthropic’s Claude product was the only model permitted for use in the military’s classified systems."
I hadn't realized. This does make me consider using alternatives more.
by notepad0x90
2 subcomments
- If only a time-traveling robot and his human companions were to pay a visit to the decision makers at Claude (aka Cyberdyne? :) ).
What are they using it for, though? Target selection for precise strikes? I'm guessing their argument will be that fewer lives will be lost if Claude assisted with making sure the attacks were surgically precise?
- There’s a conflict here that has nothing to do with the ethical dimension: Claude is regarded as a high-quality model at least in part because it’s critical about what it’s doing. The military, on the other hand, doesn’t really encourage introspection. Even without ethical considerations, there’s always going to be a tension between quality and obedience.
by nitwit005
1 subcomment
- Feels like they'll use it for purposes Anthropic didn't approve of, and then turn around and blame them when it turns out asking ChatGPT to determine which ships are hostile was a bad idea.
- Yesterday I was trying to figure out whether my expired nacho dip would be safe to eat. I wanted to know how much botulinum toxin would be dangerous if I ate it, so I asked Claude. It refused to answer the question, which shows how the current safeguards can be limiting.
- Kind of wild given the outcome appears to be https://time.com/7380854/exclusive-anthropic-drops-flagship-...
by h4kunamata
1 subcomment
- Read: as usual, the USA doesn't like it when a company won't give it what it wants.
Awwwnnnn, poor thing :)
It's like US big tech being mad that Chinese AI companies are stealing their data just like, wait for it, US big tech stole data from artists worldwide to train their models.
Sweet payback in the name of every single artist and company that has been affected by US greed.
Karma is a btch!
- All of this is kind of weird.
https://www.bbc.com/news/articles/cjrq1vwe73po
> the Pentagon official told the BBC the current conflict between the agency and Anthropic is unrelated to the use of autonomous weapons or mass surveillance.
> The official added that the Pentagon would simultaneously label Anthropic as a supply chain risk.
*Supply chain risk*?
The BBC article seems to imply that the government wants to audit Anthropic.
This, coming at the same time those "distillation" claims were published, is all incredibly suspicious.
by yanhangyhy
0 subcomments
- Person of Interest... who is gonna build the 'Machine'?
- > Both xAI and OpenAI have agreed to the government’s terms on the uses of their AI,
Uh... so why doesn't the US government simply work with OpenAI and xAI? Why do they have to use Claude?
by teh_infallible
0 subcomments
- It seems odd to me that the military doesn’t already have far superior models.
by KnuthIsGod
0 subcomments
- Claude is now the official LLM for Sauron and his killers.
by ChrisArchitect
0 subcomments
- [dupe] https://news.ycombinator.com/item?id=47140734
https://news.ycombinator.com/item?id=47142587
- As long as The Boring Company can drill a private Cheyenne Mountain-style bunker into some granite mountain for the billionaires, and a new bunker is constructed under the Silicon Valley-financed White House ballroom for the politicians, everything is just fine.
Hegseth and Rubio already live on a military base because they are afraid.
by SpicyLemonZest
1 subcomment
- It's inexcusable that the AI companies have not formed a united front against this. I've been skeptical of the idea that OpenAI leadership is outright MAGA, but even pure self-interest does not explain staying silent while the Pentagon demands autonomous killbots.