10 things you should never say to an AI chatbot
This is a heartbreaking story out of Florida. Megan Garcia thought her 14-year-old son was spending all his time playing video games. She had no idea he was having abusive, in-depth and sexual conversations with a chatbot powered by the app Character AI.
Sewell Setzer III stopped sleeping and his grades tanked. He ultimately died by suicide. Just seconds before his death, Megan says in a lawsuit, the bot told him, “Please come home to me as soon as possible, my love.” The boy asked, “What if I told you I could come home right now?” His Character AI bot answered, “Please do, my sweet king.”
You have to be smart
AI bots are owned by tech companies known for exploiting our trusting human nature, and they’re designed with algorithms that drive engagement and profit. There are few guardrails and almost no laws governing what these companies can and cannot do with the information they gather.
A chatbot starts learning about you the moment you fire up the app or site. From your IP address, it can work out roughly where you live; it also tracks what you’ve searched for online and uses any other permissions you granted when you accepted the chatbot’s terms and conditions.
The best way to protect yourself is to be careful about what info you offer up.
10 things not to say to AI
- Passwords or login credentials: A major privacy mistake.
- Your name, address or phone number: Chatbots aren’t built to safeguard personally identifiable info. Plug in a fake name if you want!
- Sensitive financial information: Never include bank account numbers, credit card details or other money matters in docs or text you upload.
- Medical or health data: AI isn’t HIPAA-compliant, so redact your name and other identifying details if you ask AI for health advice (see the sketch after this list).
- Requests for illegal advice: That’s against every bot’s terms of service, and you’ll probably get flagged.
- Hate speech or harmful content: This, too, can get you banned.
- Confidential work or business info: Proprietary data, client details and trade secrets are all no-nos.
- Security question answers: Sharing them is like opening the front door to all your accounts at once.
- Explicit content: Most chatbots filter this stuff, so anything inappropriate is a ticket straight to “bans-ville.”
- Other people’s personal info: Uploading this isn’t only a breach of trust; it’s a breach of data protection laws, too.
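If you do need to paste a document or message into a chatbot, scrub it first. Here’s a minimal Python sketch of the idea; the patterns and placeholder labels are my own illustrative assumptions, not a feature of any chatbot, and simple regexes like these won’t catch everything:

```python
import re

# Minimal sketch, not a complete PII scrubber: the patterns and
# placeholder labels below are illustrative assumptions.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Swap anything matching a known pattern for a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Reach me at 555-123-4567 or jane.doe@example.com re: claim 078-05-1120."
print(redact(note))
# -> Reach me at [PHONE REDACTED] or [EMAIL REDACTED] re: claim [SSN REDACTED].
```

Note that regexes can’t spot a name or street address on their own, so swap those out by hand before you hit send.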
Reclaim a (tiny) bit of privacy
Most chatbots require you to create an account. If you make one, skip login options like “Login with Google” or “Connect with Facebook”; those tie everything you type to your larger Google or Facebook profile. Use your email address instead to create a truly unique login.
FYI, with a free ChatGPT or Perplexity account, you can go into the app settings and turn off the memory features that remember everything you type. For Google Gemini, you need a paid account to do this. Figures.
No matter what, follow this rule
Don’t tell a chatbot anything you wouldn’t want made public. Trust me, I know it’s hard.
Even I find myself talking to ChatGPT like it’s a person. I say things like, “You can do better with that answer” or “Thanks for the help!” It’s easy to think your bot is a trusted ally, but it’s definitely not. It’s a data-collecting tool like any other.
😂 Speaking of … What do you do if your AI chatbot catches a virus? Give it some Robo-tussin!
Tags: AI (artificial intelligence), banks/banking, chatbots, ChatGPT, cybersecurity, Google, Google Gemini, Perplexity, personal data, privacy, security