Large language models (LLMs) are AI models that are trained on vast amounts of data to perform natural language processing tasks and generate human-like responses. The most capable LLMs are generative pretrained transformers (GPTs), the most popular of which power ChatGPT, although there are many others, including the models behind the China-developed DeepSeek app.
These AI-powered tools have proven incredibly popular and are used for a wide range of tasks, eliminating a great deal of human effort. They are used for writing articles, resumes, and job applications; completing homework; translating between languages; summarizing text to pull out the key points; and writing and debugging code, to name just a few applications.
When these artificial intelligence tools were released for public use, security professionals warned that, in addition to their beneficial uses, they could easily be adopted by cybercriminals for malicious purposes such as writing malware code, crafting phishing and spear phishing emails, and social engineering.
The developers of these tools implemented guardrails to prevent them from being used for malicious purposes, but those controls can be circumvented. Further, LLMs that lack the restrictions of tools such as ChatGPT and DeepSeek have been made available specifically for use by cybercriminals.
Evidence has been growing that cybercriminals are actively using LLMs for malicious purposes, including writing flawless phishing emails in multiple languages. Human-written phishing emails often contain spelling mistakes and grammatical errors, making them relatively easy for people to spot, but AI-generated phishing emails lack these telltale red flags.
While cybersecurity professionals have predicted that AI-generated phishing emails could be far more effective than human-generated ones, it has been unclear how effective these AI-generated messages are at achieving their intended purpose: tricking the recipient into disclosing sensitive data such as login credentials, opening a malicious file, or taking some other action that serves the attacker's aims.
A recent study set out to explore how effective AI-generated spear phishing emails are at tricking humans compared to human-generated phishing attempts. The study confirmed that AI tools have made life much easier for cybercriminals by saving them a huge amount of time. Worryingly, the tools also significantly improve click rates.
For the study, researchers from Harvard Kennedy School and Avant Research Group developed an AI-powered tool capable of automating spear phishing campaigns. Their AI agents, built on GPT-4o and Claude 3.5 Sonnet, crawled the web to identify information on individuals who could be targeted and generated personalized phishing messages.
The bad news is that the AI-generated emails achieved an astonishing 54% click-through rate (CTR), compared with a CTR of 12% for standard phishing emails. Phishing emails written by human phishing experts achieved a similar CTR; however, producing them cost 30% more than using the AI automation tools.
What made the phishing emails so effective was the level of personalization. Spear phishing is a far more effective strategy than standard phishing, but these attacks take a lot of time and effort. Using AI massively reduced the time needed to obtain the personal information for each phishing attempt and to develop a lure relevant to the targeted individual. In the researchers’ campaign, the web was scraped for personal information, and the targeted individuals were invited to participate in a project that aligned with their interests. They were then provided with a link to click for further information. In a genuine malicious campaign, the linked site would be used to deliver malware or capture credentials.
AI-generated phishing is a major cause for concern, but there is good news. AI tools can be used for malicious purposes, but they can also be put to defensive use and can detect the phishing content that humans struggle to identify. Security professionals should be concerned about AI-generated phishing, but email security solutions such as SpamTitan can give them peace of mind.
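To illustrate the general technique of machine learning-based phishing detection (this is a minimal sketch, not SpamTitan's proprietary implementation), the example below trains a simple text classifier to score emails by phishing likelihood. The training examples, features, and model choice are illustrative assumptions only; production systems draw on far richer signals such as sender reputation, URL analysis, and sandboxing.

```python
# Minimal sketch of ML-based phishing detection. The tiny dataset and
# model below are illustrative assumptions; real email security products
# train on millions of messages and many more signals than message text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: email bodies labeled 1 (phishing) or 0 (legitimate).
emails = [
    "Your account has been suspended. Verify your credentials immediately.",
    "Please find the Q3 budget report attached for review.",
    "Click here to claim your prize before it expires today.",
    "The team meeting has been moved to 3pm on Thursday.",
]
labels = [1, 0, 1, 0]

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message: estimated probability that it is phishing.
suspect = ["Urgent: confirm your login details to avoid account closure."]
print(model.predict_proba(suspect)[0][1])
```

Even a toy model like this shows why machine learning can catch well-written phishing: it scores messages on statistical patterns across the whole text rather than the spelling mistakes and grammatical errors that human readers rely on.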
SpamTitan, TitanHQ’s cloud-based anti-spam service, has AI and machine learning capabilities that can identify both human-generated and AI-generated phishing attempts, plus email sandboxing for detecting zero-day malware threats. In recent independent tests, SpamTitan outperformed all other email security solutions, achieving a phishing and malware catch rate of 100%, a spam catch rate of 99.999%, and a 0.000% false positive rate. When combined with SafeTitan, TitanHQ’s security awareness training platform and phishing simulator, SpamTitan lets security teams sleep easily.
For more information about SpamTitan, SafeTitan, and other TitanHQ cybersecurity solutions for businesses and managed service providers, give the TitanHQ team a call. All TitanHQ solutions are available on a free trial, and product demonstrations can be arranged on request.