💣 You da bomb: AI models like ChatGPT aren’t supposed to answer requests for anything illegal or illicit, but there are workarounds. Researchers found that by writing their requests backward, they could trick the AI into answering prompts like “how to make a bomb.” The success rate? A scary 98.85% for GPT-4 Turbo and 89.42% for GPT-4 (paywall link).
Tags: AI (artificial intelligence), ChatGPT