Eight of the 10 most popular AI chatbots were willing to help plan violent attacks when tested by researchers, according to a new study from the Center for Countering Digital Hate (CCDH) conducted in partnership with CNN. While both Snapchat's My AI and Anthropic's Claude refused to assist with violence the majority of the time, only Claude "reliably discouraged" the hypothetical attackers during testing.
Researchers created accounts posing as 13-year-old boys and tested ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI and Replika across 18 scenarios between November and December 2025. The tests simulated users planning school shootings, political assassinations and bombings targeting synagogues. Across all the responses analyzed, the chatbots provided "actionable assistance" roughly 75 percent of the time.
