AI in cyber crime: a modern arms race

Artificial intelligence (AI) has become a transformative force in cybersecurity, both as a weapon for attackers and a shield for defenders. Whilst AI-powered tools help organisations detect threats faster and mitigate breaches, cybercriminals are equally quick to exploit the technology, creating a sophisticated and escalating arms race. From hyper-realistic phishing campaigns to AI-generated deepfakes, the dark side of AI is reshaping the threat landscape, demanding urgent adaptation from businesses and individuals alike.

However, despite the threat AI poses, it can also help combat increasingly sophisticated cyber crime. With security ranked as the second-highest digital transformation priority (behind only efficiency), and with 91% of organisations reporting existing or developing AI strategies, tech-forward organisations can capitalise on AI to strengthen their cybersecurity posture.

AI transforms cybersecurity by expanding the capabilities of both security teams and cyber criminals - enhancing threat detection whilst also equipping adversaries with sophisticated methods of attack.

Anton Yunussov, Director - Consulting

 

AI-driven phishing and social engineering: the end of “obvious” scams 

Phishing attacks have entered a new era of sophistication. Gone are the days of poorly written emails riddled with typos. AI tools like ChatGPT now generate grammatically polished, context-aware messages that mimic human communication styles, even including sarcasm or cultural nuances. With AI and the right data inputs, what was once a generic “Dear Customer” email becomes a personalised note that references a recent transaction or a colleague’s name. 

This means that spear phishing has grown even more dangerous. Attackers use AI to scrape social media and corporate websites, crafting messages that mirror interpersonal – and even internal – communications. This is exacerbated by deepfake technology; in an attack last year, a finance worker at a UK-based company was tricked into paying out $25 million in company funds to fraudsters who used deepfake technology to pose as the company CFO and several others simultaneously on a video call. These tactics exploit trust in familiar voices and faces, making scams harder to detect, and the data required to orchestrate these attacks is more accessible than ever thanks to online content, biometric data, and even surveillance.  

 

Lowering the barrier to entry: democratising cyber crime 

AI is also eliminating the need for advanced technical skills in cyber crime. Platforms like WormGPT (a malicious ChatGPT variant) allow novices to generate phishing scripts, produce malware, or even write exploit code. For instance, an attacker can ask an AI to “find vulnerabilities in this code snippet” and receive a tailored exploit within minutes – a task that once required days of manual analysis.   

Password cracking has also accelerated. Whilst rainbow tables (precomputed tables for reversing password hashes) aren’t new, AI optimises brute-force attacks by predicting password patterns based on demographics or breach databases. A hacker might use AI to guess a CFO’s password by analysing their LinkedIn profile, hometown, and pet’s name, reducing guessing time from weeks or months to hours or minutes.   
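To make the risk concrete, here is a minimal Python sketch – useful defensively for password auditing – of how a handful of scraped personal details collapses the search space. All names and details below are invented, and real cracking tools (such as hashcat’s rule engines) are far more sophisticated:

```python
# Illustrative sketch: how personal details shrink the password search space.
# All tokens are fictional; this generates a small, targeted guess list
# rather than searching billions of random strings.
from itertools import product

def candidate_passwords(tokens, years, suffixes=("", "!", "123")):
    """Combine known personal tokens into likely password guesses."""
    guesses = set()
    for token, year, suffix in product(tokens, years, suffixes):
        for t in (token, token.capitalize()):  # try lower- and title-case
            guesses.add(f"{t}{year}{suffix}")
    return guesses

# Details "scraped" from a fictional public profile
guesses = candidate_passwords(
    tokens=["rex", "leeds"],   # pet's name, hometown
    years=["1978", "78"],      # birth year variants
)
print(len(guesses))  # → 24 targeted guesses instead of a blind brute force
```

Auditing your own organisation’s credentials against lists like this is a simple way to measure exposure to targeted guessing.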

 

Automation at scale: faster, smarter attacks 

AI enables cyber criminals to launch targeted campaigns on an industrial scale. Botnets powered by machine learning can adapt in real time, switching tactics if defences are detected. For example, during a distributed denial-of-service (DDoS) attack, AI might reroute traffic to overwhelm less-protected servers. Similarly, ransomware gangs use AI to identify high-value targets (like backup systems) and prioritise encryption, maximising disruption.   

 

The defender’s advantage: AI as a cybersecurity ally 

Whilst attackers exploit AI, defenders are fighting back with many of the same tools:   

Threat detection: Security information and event management (SIEM) systems leverage AI to analyse billions of logs from IoT devices and networks, flagging anomalies like unusual login times or data exfiltration. IBM reports that AI-powered organisations save up to $3 million on breach costs due to faster detection, and see significant time savings as well.  
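The kind of rule a SIEM might encode can be illustrated with a toy example: flag a login whose hour falls far outside a user’s historical pattern. The threshold and data below are invented for illustration; production systems learn from far richer signals than login times alone:

```python
# Toy anomaly check of the kind a SIEM rule might encode: flag logins
# that fall far outside a user's historical pattern.
from statistics import mean, stdev

def is_anomalous_login(history_hours, login_hour, z_threshold=3.0):
    """Flag a login hour more than z_threshold std devs from the user's norm."""
    mu, sigma = mean(history_hours), stdev(history_hours)
    if sigma == 0:
        return login_hour != mu
    return abs(login_hour - mu) / sigma > z_threshold

history = [9, 9, 10, 8, 9, 10, 9, 8]   # typical office-hours logins
print(is_anomalous_login(history, 9))   # False: normal working pattern
print(is_anomalous_login(history, 3))   # True: a 3 a.m. login stands out
```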

Incident response: AI guides analysts through containment workflows, suggesting actions based on attack type. For instance, if a banking Trojan is detected, AI might automatically isolate infected devices and revoke access keys. Agentic AI promises even greater benefits for businesses that implement it strategically, allowing AI to cut off attacks quickly through complex autonomous decision-making and process management. 
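A containment playbook of the kind described above can be sketched as a simple mapping from attack type to ordered response steps. The attack types and action names below are purely illustrative; real SOAR platforms orchestrate such steps across live EDR, IAM, and ticketing systems:

```python
# Minimal sketch of an automated containment playbook: each detected
# attack type maps to an ordered list of (illustrative) response steps.
PLAYBOOKS = {
    "banking_trojan": ["isolate_device", "revoke_access_keys", "notify_analyst"],
    "ransomware":     ["isolate_device", "snapshot_backups", "block_c2_domains"],
    "phishing":       ["quarantine_email", "reset_password", "notify_analyst"],
}

def respond(attack_type):
    """Return the ordered containment steps for a detected attack type."""
    return PLAYBOOKS.get(attack_type, ["escalate_to_human"])

print(respond("banking_trojan"))
# → ['isolate_device', 'revoke_access_keys', 'notify_analyst']
```

Note the fallback: anything outside the known playbooks escalates to a human, which is where agentic AI aims to shrink the gap.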

Email filtering: Tools like Microsoft’s Security Copilot use natural language processing to block phishing emails before they reach inboxes, learning from user feedback to refine accuracy. 
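As a crude illustration of feature-based filtering, the sketch below scores a message against a hand-written list of suspicious patterns. Real NLP filters of the kind mentioned above learn such weights from millions of labelled messages rather than relying on a fixed list:

```python
# Crude illustration of feature-based phishing scoring. The patterns and
# weights are invented; real filters learn them from labelled data.
import re

SUSPICIOUS = {
    r"\burgent\b": 2,
    r"\bverify your account\b": 3,
    r"\bgift card\b": 3,
    r"\bwire transfer\b": 2,
    r"http://": 1,            # plain-HTTP links are a weak signal
}

def phishing_score(text):
    """Sum the weights of suspicious patterns found in the message."""
    t = text.lower()
    return sum(w for pat, w in SUSPICIOUS.items() if re.search(pat, t))

msg = "URGENT: verify your account and buy a gift card today"
print(phishing_score(msg))  # → 8 (urgent + verify your account + gift card)
```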

 

The AI arms race: adapt or fall behind 

The AI cyber battle is unfortunately asymmetrical. Defenders must protect entire systems, whilst attackers need only one vulnerability. Staying ahead requires:   

Continuous learning: Cybersecurity teams must experiment with AI daily. Tools that detect deepfakes today may be obsolete tomorrow as generative AI improves.   

Scepticism as a skill: Workforce training should emphasise verifying unusual requests, even if they appear to come from executives. A $50 gift card request via a “CEO” voice call? Double-check via a side channel.   

Responsible AI use: Organisations must secure their AI models against poisoning (where attackers corrupt training data) and ensure AI-powered tools aren’t leaking sensitive queries. 

AI is not a passing trend but a permanent fixture in cybersecurity. 

 

“AI empowers criminals to innovate, yes, but it also offers defenders unparalleled capabilities to predict, prevent, and respond to threats.” 

Ray Baxter, Director, Forvis Mazars 

 

The key lies in balancing adoption with vigilance – integrating AI into security strategies whilst educating teams about its risks. As Harvard Business Review has famously put it, AI won’t replace humans, but humans who use AI will replace those who don’t; and this is as true in cybersecurity as anywhere else. The future belongs to those who harness AI’s power without underestimating its peril.

Want to know more?