Claude's ethical stand wins users, but rubs Trump administration the wrong way
Anthropic refused to allow its AI chatbot, Claude, to be used for autonomous weapons and domestic mass surveillance, and the Trump administration isn't thrilled about it.
The company says it doesn't want its technology powering autonomous weapons or other high-risk government projects, a stance that has reignited debate over whether chatbots like Claude and ChatGPT are reliable enough for high-stakes military use.
OpenAI's Pentagon deal amid Claude's rise
Interestingly, Anthropic's ethical stand is winning over everyday users: Claude is now beating ChatGPT in US app downloads.
Meanwhile, OpenAI has picked up a Pentagon deal to replace Claude, but it is facing a wave of negative reviews.
The episode underscores how public trust and clear ethics are becoming just as important as innovation in AI.