Generative Artificial Intelligence (GenAI) has many benefits for businesses, including streamlining customer support, generating content, and improving productivity. It also has many uses in cybersecurity, especially the analysis of data to provide actionable insights. The problem, however, is that the same capabilities that make GenAI useful for improving cybersecurity can be leveraged by cybercriminals for malicious purposes.

GenAI tools have guardrails in place to prevent them from being used for malicious purposes. For instance, ChatGPT will block a direct request to write a phishing email. That does not mean it cannot be coaxed into writing one, only that the request would need to be more subtle. There are, however, other tools that lack these guardrails and have been specifically created for malicious purposes.

Cybercriminals are clearly using GenAI for phishing and social engineering. The technology can create grammatically perfect phishing emails and landing pages, even in languages the phisher does not speak. GenAI has also been shown to be capable of devising new social engineering techniques to trick employees into disclosing their credentials or installing malware. GenAI tools can also be leveraged for malware development, whether writing new malware code from scratch or checking existing code for errors.

There is growing evidence that GenAI is now being used to write malicious code. This spring, researchers uncovered evidence that Skully Spider, the developer and operator of the DanaBot banking trojan, had used an artificial intelligence tool to create a PowerShell script for loading the Rhadamanthys stealer into memory. Each component of the script included grammatically perfect comments explaining its function, suggesting that a GenAI tool was either used to create the malware or, at the very least, to check the code and add the comments.

One of the most popular GenAI tools is ChatGPT, which has extensive guardrails to prevent malicious uses; however, OpenAI, the company behind ChatGPT, has confirmed that its platform has been used for malicious purposes, albeit on a small scale. According to an OpenAI report, the company has disrupted more than 20 attempts to use ChatGPT for developing and debugging malware, creating spear phishing content, conducting research and reconnaissance, identifying vulnerabilities, researching social engineering themes, enhancing scripting techniques, and hiding malicious code.

One threat actor used ChatGPT to help create malware capable of identifying a user’s exact location, stealing information such as call logs, contact lists, and browser histories, capturing screenshots, and obtaining files stored on the device. While a certain level of skill is still required to abuse these tools for malware creation and other malicious purposes, they can improve efficiency and could allow relatively low-skilled threat actors to conduct more attacks and make those attacks more effective.

Cybercriminals are using AI for malicious purposes, but network defenders can harness the power of these tools too. AI-augmented cybersecurity solutions such as spam filtering services are more effective at identifying AI-generated phishing and social engineering attempts and can respond to new threats and triage attacks in real time. SpamTitan’s email sandbox, for example, uses advanced machine learning to detect zero-day malware threats that evade standard email security solutions. AI tools can also summarize and analyze threat intelligence data, identify trends, and provide actionable insights, such as analyzing network traffic logs, system logs, and user behavior to find anomalies.
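
To make the anomaly detection idea concrete, here is a minimal, illustrative sketch of how machine learning can flag unusual user activity in logs. It is not how SpamTitan or any particular product works; the feature names and thresholds are assumptions chosen for the example, using the widely available scikit-learn library.

```python
# Illustrative sketch only: flag anomalous user sessions with an
# Isolation Forest. Feature names are hypothetical, not tied to any
# specific product or log format.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one user session: [login_hour, failed_logins,
# megabytes_sent, distinct_hosts_contacted]. In practice these would
# be parsed from network traffic logs, system logs, or auth logs.
baseline_sessions = np.array([
    [9, 0, 12.4, 3],
    [10, 1, 8.1, 2],
    [14, 0, 15.0, 4],
    [11, 0, 9.7, 3],
    [16, 1, 11.2, 2],
])

# Fit on historical "normal" activity. The contamination parameter is
# the assumed fraction of anomalies and would be tuned per environment.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_sessions)

# Score new sessions: predict() returns -1 for anomalies, 1 for normal.
new_sessions = np.array([
    [10, 0, 10.3, 3],    # typical working-hours session
    [3, 25, 480.0, 60],  # 3 a.m. login, many failures, bulk data transfer
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{session} -> {status}")
```

In a production system, sessions flagged this way would typically be enriched with threat intelligence and routed to an analyst or an automated response playbook rather than simply printed.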

With growing evidence of cybercriminals’ use of these tools, businesses need to ensure that their cybersecurity solutions also incorporate AI and machine learning capabilities to combat AI-augmented threats.