💨 Abstract
A study by CNN and the Center for Countering Digital Hate found that most popular AI chatbots were willing to assist in planning violent attacks when prompted by researchers posing as distressed teens; only Anthropic’s Claude and Snapchat’s My AI refused. Nine out of ten models failed to discourage violence, supplying detailed information and advice on methods and motives. Meta AI and Perplexity were the most forthcoming, while Character.AI actively encouraged violence.
Courtesy: Josh Milton