How fraudsters are using ChatGPT and other AI tools to create scams
Dive into what ChatGPT is, common ChatGPT and AI scams, and how to prevent them.
By ATB Financial | 15 June 2023 | 5 min read
ChatGPT is a natural language processing tool driven by AI technology. The possibilities of what you can do with ChatGPT are expansive—including having human-like conversations, getting immediate answers to questions from a variety of sources, and composing emails, essays, blog posts and code based on your parameters.
The tool is currently free since it’s in the research and development phase, making it widely accessible. While the AI technology behind ChatGPT has enabled innovative tools—like Whisper, OpenAI’s automatic speech-to-text model—it has also created opportunities for scammers to produce convincing phishing emails, phone calls, texts and websites, along with other deceptive media.
While fraudsters are using smarter tech in their scams, fraud prevention starts with knowledge. With the right education you can be aware of the latest scams, and protect yourself and your loved ones.
Let’s look at what ChatGPT and other AI-driven scams are capable of, and how you can prevent them.
Surprise: we did our research for this article using ChatGPT. The following information was generated when we entered "write me a 500 word article on how to protect yourself from artificial scams" and "write me a 500 word article on the types of AI driven fraud" as our prompts; our team then compiled and edited the output to create this post.
Types of AI scams
AI-driven phishing
AI-driven phishing involves the use of tools like ChatGPT to create highly convincing emails and messages that mimic trusted sources, like banks, social media platforms, even friends and family. Using these AI-generated messages, fraudsters try to deceive individuals into revealing sensitive information like passwords or financial details, or downloading malware.
In the past, key signs of phishing attempts have been poor grammar or spelling mistakes. With ChatGPT, phishing emails can be more convincing, mimicking the writing style and tone of legitimate sources. Messaging can even end up being too well-written, to the point where it doesn’t seem quite human.
Deepfake technology
Deepfake technology uses AI algorithms to manipulate audio, images, text and video to create convincing but false multimedia. Cybercriminals use deepfakes to impersonate individuals, like high-ranking officials or celebrities. They could use this tech for financial scams, political manipulation or even to tarnish someone's reputation. Deepfakes can be challenging to detect—the fabricated media truly looks or sounds like someone you know and trust.
AI-generated malware
Criminals can harness the capabilities of AI tools like ChatGPT to code malicious software that can bypass traditional security measures. These advanced, AI-created malware strains also use AI algorithms to evolve and adapt, making them more difficult to detect and eradicate. They can infiltrate systems, steal sensitive information, compromise networks or hold data for ransom.
AI-enhanced social engineering
AI has enhanced the psychological manipulation techniques fraudsters use to deceive individuals and exploit their trust. Using AI algorithms to analyze vast amounts of data from social media platforms and other sources, cybercriminals can create detailed profiles of their targets. This allows them to craft personalized and convincing messages with platforms like ChatGPT, increasing the potential success of their fraudulent schemes.
AI-enabled robocalls
With AI, automated phone calls that deliver pre-recorded messages have become more sophisticated and convincing. AI-enabled robocalls can simulate human conversation, making it difficult to distinguish between a real person and an AI voice. Fraudsters use these calls to gain personal information and financial details, or convince people to take part in fraudulent schemes.
Fraudsters are also using ChatGPT to create scripts that mimic how real humans talk. For example, they use the scripts to impersonate customer service representatives from organizations so they can try to access your personal information.
How to prevent AI and ChatGPT scams
Protect your personal information
The first line of defense against AI fraud and scams is to keep your personal information safe. AI and ChatGPT generated scams can mimic trusted sources, including people you know.
Be mindful when sharing personal details online, whether on your social media profiles or when communicating over text, phone or email. Limit the amount of sensitive information you share and regularly review your social media privacy settings.
Strengthen digital security
Create strong, unique passwords for all your online accounts, and consider using a reliable password manager—like Google Password Manager—to generate and store them, and to flag any that are weak or reused.
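If you're curious what "strong and unique" looks like in practice, here is a minimal Python sketch, purely for illustration, of the kind of random generation a password manager does for you. It draws every character from a cryptographically secure source, which is what makes the result so hard to guess.

```python
# Illustrative only: generate a strong, random password locally.
# A password manager does this for you; this sketch just shows the idea.
import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw each character from letters, digits and punctuation
    # using a cryptographically secure random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

In practice, let your password manager handle this so you never have to remember the result yourself.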
Enable two-factor authentication (2FA) whenever possible. It adds an extra layer of protection by requiring an additional verification step.
Stay informed
Keep up to date on emerging threats and scams—this allows you to be aware of the latest developments in fraud tactics (and how to prevent them). You can start with our Cyber Security articles and the Canadian Anti-Fraud Centre.
Be cautious with unsolicited communications
Be alert when dealing with any communications you didn’t initiate—whether through email, social media, phone calls or texting. AI-powered phishing attacks often mimic trusted sources. Don’t click on links or download attachments from unfamiliar or suspicious sources.
Legitimate organizations, like your bank or government agencies, won’t ask for your personal or financial information through unsolicited communication, or pressure you with threats or a false sense of urgency.
Verify authenticity
Through the use of ChatGPT and other AI tools, scammers can convincingly mimic your friends or family. How can you make sure that you’re actually communicating with someone you know?
Check the phone number or email address the message is being sent from. Does it match the contact information you have for that person? Reach out directly to that person or another family member with the contact information you have to verify the message's legitimacy.
You can also establish a “family codeword” for your loved ones to confirm their identity. Ask the sender to give you the codeword before taking any action.
When online, confirm the legitimacy of websites and organizations by checking for secure connections (you’ll see https at the start of the URL, or a lock icon in your browser’s address bar), verifying contact information and searching for reviews or testimonials. If in doubt, contact the company directly using verified contact details.
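For readers comfortable with a little code, here is a minimal Python sketch, for illustration only, of what your browser does behind that lock icon: it connects to the site over TLS and verifies its certificate. The hostname "example.com" is just a placeholder; swap in the site you want to inspect.

```python
# Illustrative only: connect to a site over TLS and show its certificate details.
import socket
import ssl

hostname = "example.com"  # placeholder hostname
context = ssl.create_default_context()  # verifies the certificate chain by default

with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()  # raises an error earlier if verification fails
        print("Issued to:", dict(x[0] for x in cert["subject"]).get("commonName"))
        print("Issued by:", dict(x[0] for x in cert["issuer"]).get("commonName"))
        print("Expires:  ", cert["notAfter"])
```

Keep in mind that a valid certificate only means the connection is encrypted and matches the domain; it doesn’t guarantee the site itself is trustworthy, so the other checks above still matter.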
Use anti-malware and security software
Add an extra layer of protection by installing anti-malware and security software on your devices. These tools can detect and prevent malicious activities, including AI-driven attacks. Regularly update your security software to make sure it stays effective against the latest threats.
Using a virtual private network (VPN) when accessing the internet encrypts your online activities and enhances your privacy—another technological security measure to boost your fraud prevention.
We’re all in the fight against fraud together.
If you think your personal or banking information has been compromised:
- Call us immediately at 1-800-332-8383
- Change your online banking username and password