While ChatGPT may appear to be a shiny new tool that technologists get to play with under OpenAI’s public preview, cybersecurity experts are warning that the conversational AI chatbot could be abused just like any other legitimate IT tool.
However, the implications here are more serious: a highly capable AI assistant in the hands of attackers trying to penetrate systems is a troubling prospect.
Back at RSA Conference 2021, Johannes Ullrich, dean of research at SANS Technology Institute, said that while cybersecurity companies were making their products more intelligent by adding machine learning capabilities, adversaries could begin doing the same.
Indeed, there are already some cybersecurity and general IT use cases for ChatGPT, such as writing code and scripts, identifying vulnerabilities in code and answering general cybersecurity questions. However, this technology could prove just as useful for threat actors, says Steve Povolny, principal engineer and head of advanced threat research for cybersecurity firm Trellix.
Povolny says cybersecurity professionals have been using ChatGPT to help them check for vulnerabilities in code and as a quality control tool in the software development process. The chatbot is already widely used, he says.
“I’ve used it to do spot checking for some vulnerabilities, SQL injection kind of thing, looking at simple buffer overflows, and seeing how well it identifies and describes them,” Povolny says. “It’s been very accurate at that.”
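To picture the kind of spot check Povolny describes, here is a minimal, hypothetical sketch in Python: a lookup function with a textbook SQL injection flaw, followed by the parameterized-query fix a reviewer, human or chatbot, would be expected to suggest. The table, function names and inputs are invented for illustration.

```python
import sqlite3

# Tiny in-memory database so the example runs on its own.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# Vulnerable: user input is pasted straight into the SQL string, so an
# input such as "alice' OR '1'='1" returns every row in the table.
def get_user_vulnerable(username: str):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Remediated: a parameterized query lets the database driver handle quoting.
def get_user_safe(username: str):
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

print(get_user_vulnerable("alice' OR '1'='1"))  # leaks both rows
print(get_user_safe("alice' OR '1'='1"))        # returns nothing
```

Flagging the first function and proposing the second is exactly the kind of pattern matching a code-trained chatbot handles well.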
However, advanced AI tools such as ChatGPT can also make a threat actor’s life much easier. Just as the IT community uses AI to help do its jobs and automate tasks, hackers can use it to conduct malicious activities more efficiently and accurately.
“Some of the things I have seen rumored about or in practice are misuse such as developing highly realistic phishing emails, generating pseudo-operational or fully operational malware,” Povolny says.
How AI like ChatGPT can be used maliciously
According to cybersecurity firm Check Point Software, an analysis of several major underground hacking forums shows that threat actors are already using the chatbot to develop malicious tools, particularly threat actors without robust development or coding skills.
In one case analyzed by Check Point’s researchers, a threat actor posted on a forum about experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. One example included the code of a Python-based stealer that searches for common file types, copies them to a random folder inside the Temp folder, zips them and uploads them to a hardcoded FTP server.
Indeed, the firm’s analysis of the script confirms the cybercriminal’s claim of creating a basic infostealer that searches for 12 common file types, such as Microsoft Office documents, PDFs and images.
The cybersecurity firm even asked ChatGPT itself how hackers could abuse the tool, but notes that OpenAI takes steps to prevent its technology from being used for malicious purposes, such as requiring users to agree to terms of service that prohibit using the technology for illegal activities.
“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior,” OpenAI says on its website. “We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.”
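The Moderation API mentioned in that quote is a public endpoint developers can call directly. As a rough sketch (the key placeholder and helper name here are assumptions, not OpenAI example code), a screening call looks something like this:

```python
import json
import urllib.request

API_KEY = "sk-..."  # assumption: substitute a real OpenAI API key

def is_flagged(text: str) -> bool:
    """Ask OpenAI's Moderation API whether a piece of text is unsafe."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/moderations",
        data=json.dumps({"input": text}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # Each input yields one result with a boolean "flagged" field.
    return result["results"][0]["flagged"]
```

A True return is what would drive the “warn or block” behavior OpenAI describes, with the false negatives and positives the company acknowledges.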
Povolny confirms that underground forums have been highlighting the malicious use cases for ChatGPT.
“If you think about it, it makes complete sense,” Povolny says. “It’s really just a tool that’s used to simplify anything that’s generally text-based, and code is generally text-based. It learns from the repository of code of the internet and of course it can emulate and replicate that kind of thing.”
With those capabilities, inexperienced hackers or advanced attackers wanting to cut their own costs could use this advanced conversational AI to craft an input that makes sense to the tool and generates the best possible output.
“(Hackers) have been playing with that to generate code that would probably take them a long time to write on their own,” Povolny says.
Ullrich at RSA in 2021 acknowledged that there had not been many examples of attackers leveraging AI and machine learning to conduct attacks, and Povolny agrees nearly two years later. Simple attack techniques such as phishing emails are still wildly successful, and organizations are still struggling with myriad legacy IT issues today.
However, what has the cybersecurity community concerned is how a tool like ChatGPT can lower the technical barrier to entry for threat actors.
According to Monica Oravcova, chief operating officer and co-founder of cybersecurity firm Naoris Protocol, a tool like ChatGPT can help attackers work smarter and more efficiently to help them find exploits and vulnerabilities in existing code infrastructure.
“The cold hard truth could mean that thousands of platforms and smart contracts could suddenly become exposed, leading to a short-term rise in cyber breaches,” Oravcova says.