A chatbot without limits
🚨 A chatbot without limits: It’s simple to trick the Chinese AI chatbot DeepSeek into giving out dangerous information. Researchers found that by using a tactic called “jailbreaking” (bypassing the AI’s built-in restrictions), you can get instructions for making a Molotov cocktail, evading law enforcement, and even creating malware (paywall link). And we thought TikTok was dangerous.