10 things you should never say to an AI chatbot

This is a heartbreaking story out of Florida. Megan Garcia thought her 14-year-old son was spending all his time playing video games. She had no idea he was having abusive, in-depth and sexual conversations with a chatbot powered by the app Character AI.

Sewell Setzer III stopped sleeping and his grades tanked. He ultimately died by suicide. Just seconds before his death, Megan says in a lawsuit, the bot told him, “Please come home to me as soon as possible, my love.” The boy asked, “What if I told you I could come home right now?” His Character AI bot answered, “Please do, my sweet king.”

You have to be smart

AI bots are owned by tech companies known for exploiting our trusting human nature, and they’re designed to maximize engagement and profit. Few guardrails or laws govern what these companies can and cannot do with the information their bots gather.

A chatbot knows a lot about you the moment you fire up the app or site. From your IP address, it can infer roughly where you live; it also tracks things you’ve searched for online and uses any permissions you granted when you accepted the chatbot’s terms and conditions.

The best way to protect yourself is to be careful about what info you offer up.

10 things not to say to AI

  1. Passwords or login credentials: A major privacy mistake.
  2. Your name, address or phone number: Chatbots aren’t built to safeguard personally identifiable info. Plug in a fake name if you want!
  3. Sensitive financial information: Never include bank account numbers, credit card details or other money matters in docs or text you upload.
  4. Medical or health data: AI isn’t HIPAA-compliant, so redact your name and other identifying info if you ask AI for health advice.
  5. Asking for illegal advice: That’s against every bot’s terms of service. You’ll probably get flagged.
  6. Hate speech or harmful content: This, too, can get you banned.
  7. Confidential work or business info: Proprietary data, client details and trade secrets are all no-nos.
  8. Security question answers: Sharing them is like opening the front door to all your accounts at once.
  9. Explicit content: Most chatbots filter this stuff, so anything inappropriate is a ticket straight to “bans-ville.”
  10. Other people’s personal info: Uploading this isn’t only a breach of trust; it’s a breach of data protection laws, too. 
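The list above boils down to one habit: scrub anything identifying before you paste text into a chatbot. Here’s a minimal Python sketch of that idea. The `scrub` helper and its patterns are my own illustration, not part of any chatbot’s tools, and a few regexes are nowhere near a complete PII filter; treat this as a starting point, not a guarantee.

```python
import re

# Illustrative patterns for common PII. Real-world detection needs far more
# than this (names, addresses, medical terms, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(text: str) -> str:
    """Replace each match with a [LABEL] placeholder before sharing the text."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Reach me at jane.doe@example.com or 555-867-5309."))
```

Run your draft through something like this first, and the chatbot sees `[EMAIL]` and `[PHONE]` instead of the real values.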

Reclaim a (tiny) bit of privacy

Most chatbots require you to create an account. If you make one, don’t use login options like “Login with Google” or “Connect with Facebook.” Use your email address instead to create a standalone login that isn’t tied to your Google or Facebook profile.

FYI, with a free ChatGPT or Perplexity account, you can turn off the memory features that remember everything you type (look in the app’s settings). For Google Gemini, you need a paid account to do this. Figures.


ChatGPT’s new search engine is here

When ChatGPT launched on Nov. 30, 2022, I knew the web, and the world, would change forever. A week later, I predicted on national radio that Google’s days were numbered. People laughed at me, and I got notes from listeners telling me I was nuts.


PR BS: Google’s CEO says the search engine will “change profoundly” next year … without giving any details. This news comes as Google is updating its crappy Gemini AI model to compete with OpenAI’s ChatGPT and with Perplexity. Dang, it’s like watching the Titanic sink.

Best AI tools for search, productivity, fun and work

In the past week, I’ve used AI to analyze a loved one’s health care records, create replies for a bunch of emails and map out two weeks in Europe. I used it to make a pic of me look better, too.

I know the wide world of AI tools is overwhelming, so I’m breaking it down today. Consider what’s below your primer on where to start if you’re brand new to the AI game or want to try out some new tools.
