ChatGPT and generative AI are dominating the news cycle, with companies like Microsoft, Slack, Snapchat, Grammarly and many others leveraging the emerging technology to help users be more efficient, accurate and creative. In fact, ChatGPT was estimated to have reached 100 million monthly active users in January, making it the fastest-growing consumer tool in history.
That kind of extreme popularity, however, makes ChatGPT a tantalizing lure for phishing attacks, says cybersecurity firm Cyble.
The firm says its researchers have identified several instances where threat actors are using ChatGPT to distribute malware and conduct other attacks, including using a fraudulent OpenAI social media page to spread malware via phishing. In addition, other phishing websites are impersonating ChatGPT to steal credit card information.
Further, families of Android malware are using OpenAI branding to mislead users into believing they are accessing authentic applications, leading to the theft of sensitive information from Android devices.
Cyble identified an unofficial ChatGPT Facebook page with over 3,400 likes and followers that contains links to phishing pages impersonating ChatGPT. The pages lure users into downloading malicious files.
Several posts on the page include links to typo-squatted domains that lead to a fake OpenAI website that instructs users to download files, which are actually information stealers. The malware families included in the campaigns include Lumma Stealer, Aurora Stealer, clipper malware and others.
In addition to a fraudulent ChatGPT-related payment page, Cyble has identified over 50 fake and malicious apps that are using ChatGPT branding to carry out activities and distribute malware.
Like other phishing campaigns, these seek to leverage the popularity of generative AI and ChatGPT to trick unsuspecting users into downloading malicious applications. The tactic mirrors earlier trend-driven campaigns: COVID-19-themed phishing attacks, for example, were widespread and remain a favorite theme of cybercriminals.
Organizations allowing employees to use ChatGPT should ensure that users are only accessing ChatGPT through legitimate sources, such as OpenAI’s website. Administrators should also educate employees on these trends, including how to identify phishing attacks.
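As a rough illustration of that advice, a proxy or mail filter could flag lookalike domains with a simple edit-distance check against an allowlist of official hosts. This is a minimal sketch, not a vetted detection policy; the domain list and distance threshold below are illustrative assumptions.

```python
# Hypothetical sketch: flag domains that impersonate official OpenAI hosts.
# The allowlist and threshold are illustrative assumptions, not a complete
# or vetted blocklist policy.

OFFICIAL_DOMAINS = {"openai.com", "chat.openai.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def classify_domain(domain: str, max_distance: int = 2) -> str:
    """Return 'official', 'suspicious' (a likely typosquat within
    max_distance edits of an official host), or 'unrelated'."""
    domain = domain.lower().strip(".")
    if domain in OFFICIAL_DOMAINS:
        return "official"
    if any(edit_distance(domain, d) <= max_distance for d in OFFICIAL_DOMAINS):
        return "suspicious"
    return "unrelated"
```

A check like this catches single-character swaps such as "0penai.com", though real typosquat detection would also need to handle homoglyphs, added subdomains and newly registered lookalike TLDs.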