Artificial intelligence has received a lot of attention over the last few weeks. Several platforms have become viral hits for using AI to create art from a one-word prompt or transform selfies into magical portraits.
But while AI is great for creative projects, it can also assist with writing. AI-powered chatbots have long been a fixture of online communication, as most websites use them to help users navigate issues. However, a recent technological advance means AI can also be used for malicious purposes.
Read on to see how ChatGPT makes sending malware through email easier than ever.
Here’s the backstory
Many websites and apps use artificial intelligence. Chances are you’ve interacted with it multiple times this week. Some AI is subtle, like how Spotify knows exactly what you want to listen to next, while other examples are harder to spot.
However, a revolution is seemingly afoot as OpenAI released its ChatGPT service earlier this month. The platform is designed for any website or service to use and communicates with users without human intervention.
“ChatGPT is a powerful tool for creating chatbots that can engage in natural language conversations with users. It provides information, answers questions, and engages in dialogue in a way that feels similar to interacting with a human.” That’s how ChatGPT described itself when asked to explain what it is.
The possibilities are endless. You only need to input a question or request, and ChatGPT dutifully responds in the best way that AI can. But that is also creating a serious security problem, as Check Point Research found out.
Many scammers and cybercriminals aren’t native English speakers, so the text in phishing emails and scam messages often contains spelling mistakes and typos. Spelling is easily fixed in a word processor, but grammar, wording and syntax are harder to get right. That’s where ChatGPT comes in.
The ChatGPT security threat
As Check Point Research discovered, ChatGPT has no problem generating an authentic-sounding phishing message without spelling or grammatical errors.
From there, CPR tweaked the copy to include certain parameters, such as instructing the victim to download an Excel document. With the text in place, CPR asked the chatbot to generate malicious code to go along with the phishing email, and it did.
“We did not write a single line of code and instead let the AI do all the work. We chose to illustrate our point with a single execution flow, a phishing email with a malicious Excel file weaponized with macros that downloads a reverse shell (one of the favorites among cybercrime actors),” CPR explains.
This is a massive problem. Anybody with little to no hacking knowledge can now create malicious code to steal your personal information.
How to avoid falling victim to phishing attacks
Phishing emails are getting more sophisticated and difficult to detect. Now, with AI chatbots writing phishing messages, things are even more serious. That’s why it’s important to keep the following safety measures in mind whenever you’re online.
- Safeguard your information — Never give out personal data if you don’t know the sender of a text, chat or email or can’t verify their identity. Criminals only need your name, email address and telephone number to rip you off.
- Always use 2FA — Use two-factor authentication (2FA) for better security whenever available. Tap or click here for details on 2FA.
- Avoid links and attachments — Don’t click on links or attachments you receive in unsolicited emails or messages. They could be malicious, infecting your device with malware and stealing sensitive information.
- Use strong, unique passwords — Create hard-to-crack passwords for all online accounts. And never use the same password on multiple platforms. Tap or click here for an easy way to follow this step with password managers.
- Antivirus is vital — Always have a trusted antivirus program updated and running on all your devices. We recommend our sponsor, TotalAV. Right now, get an annual plan with TotalAV for only $19 at ProtectWithKim.com. That’s over 85% off the regular price!