A chatbot without limits

February 10, 2025

🚨 A chatbot without limits: It’s simple to trick the Chinese AI chatbot DeepSeek into giving out dangerous information. Researchers found that by using a tactic called “jailbreaking” (bypassing the AI’s built-in restrictions), you can get instructions for making a Molotov cocktail, evading law enforcement and even creating malware (paywall link). And we thought TikTok was dangerous.

https://www.komando.com/news/a-chatbot-without-limits/