Phishing has long been the most persistent threat in the cyber landscape, exploiting human trust and curiosity to bypass even the most sophisticated technical controls. In 2025, however, the game has changed. The rise of generative artificial intelligence has given attackers a powerful new toolkit, enabling them to craft highly convincing, context-aware phishing messages at unprecedented scale and speed. The result is a new breed of social engineering attack—one that is more personalised, more adaptive, and more difficult to detect than ever before.
This article examines how AI is transforming phishing tactics, the anatomy of recent campaigns, and the multi-layered defences organisations must now deploy to protect people as well as systems.
The AI Phishing Revolution: What’s Changed?
From Spray-and-Pray to Precision Attacks
Traditional phishing campaigns relied on volume, sending generic messages to thousands in the hope that a few would take the bait. AI has flipped this model on its head. Modern phishing toolkits, powered by large language models and data scraping, can:
Analyse social media and public data to tailor messages to individual recipients.
Mimic writing styles, signatures, and even the tone of internal communications.
Automatically adapt to failed attempts, rephrasing or escalating messages based on recipient behaviour.
The Automation Advantage
AI-driven platforms can generate thousands of unique, grammatically flawless emails in minutes. They can simulate ongoing conversations, respond to queries, and even generate convincing voice calls using deepfake audio. This automation enables attackers to target high-value individuals—executives, finance staff, IT administrators—with spear-phishing campaigns that are almost indistinguishable from genuine correspondence.
Anatomy of an AI-Powered Phishing Campaign
Step 1: Data Gathering
Attackers use automated tools to harvest information from social networks, company websites, and data breaches. AI models process this data to build detailed profiles of targets, including their contacts, job roles, and recent activities.
Step 2: Message Generation
The AI crafts emails, instant messages, or even voice calls that reference recent projects, colleagues, or events. The language is natural, the formatting matches company templates, and the sender’s address may be spoofed or closely resemble a legitimate one.
Step 3: Engagement and Manipulation
If the target responds, the AI can continue the conversation, adapting its responses in real time. It may direct the recipient to a malicious website, request sensitive information, or persuade them to open an infected attachment.
Step 4: Exploitation
Once trust is established, the attacker moves to exploit the victim—stealing credentials, initiating fraudulent payments, or deploying malware to gain a foothold in the organisation.
Case Study: The Deepfake CEO
A New Zealand manufacturing firm fell victim to an AI-powered phishing attack in early 2025. The attackers used publicly available recordings of the CEO’s voice to generate a convincing deepfake audio message, instructing the finance director to approve an urgent wire transfer. The accompanying email, crafted by an AI model, referenced a real client and ongoing negotiations. The transfer was only halted when a vigilant staff member noticed a subtle inconsistency in the sender’s email domain.
The incident highlighted the growing sophistication of AI-enabled social engineering and the need for robust verification processes, even for seemingly routine requests.
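The kind of inconsistency that saved this firm, a sender domain that is close to, but not exactly, the legitimate one, can also be caught automatically. Below is a minimal sketch of such a check, comparing a sender's domain against a set of trusted domains using edit distance. The domain names and the distance threshold are illustrative assumptions, not details from the incident.

```python
# Flag sender domains that closely resemble, but do not exactly match,
# a known-good domain. Domain names below are hypothetical examples.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (two-row version)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def flag_lookalike(sender: str, trusted_domains: set[str]) -> bool:
    """True if the sender's domain is near, but not equal to, a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in trusted_domains:
        return False                      # exact match: nothing suspicious
    return any(edit_distance(domain, t) <= 2 for t in trusted_domains)

trusted = {"acme-manufacturing.co.nz"}
print(flag_lookalike("ceo@acme-manufacturing.co.nz", trusted))  # False: exact match
print(flag_lookalike("ceo@acme-manufacturng.co.nz", trusted))   # True: one letter off
```

A check like this belongs in the mail gateway rather than in users' hands; its value is that it never tires, whereas the vigilant staff member in the case above got lucky.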
Detection and Defence: Rethinking the Human Firewall
Technical Controls
Advanced Email Filtering: Deploy AI-powered email gateways that analyse not just content, but context and sender behaviour.
URL and Attachment Scanning: Use sandboxing and real-time analysis to detect malicious links and files.
Anomaly Detection: Monitor for unusual communication patterns, such as out-of-hours requests or messages from new devices.
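To make the anomaly-detection idea concrete, here is a toy scoring function that combines the signals named above, out-of-hours timing, unknown devices, and high-risk content, into a single number that a security team could use to triage messages. The weights, thresholds, and business-hours window are illustrative assumptions only.

```python
# Toy anomaly score for an inbound request. Weights and the 07:00-19:00
# "business hours" window are illustrative assumptions, not tuned values.
from datetime import datetime

def anomaly_score(sent_at: datetime, known_device: bool,
                  first_contact: bool, mentions_payment: bool) -> int:
    score = 0
    if sent_at.hour < 7 or sent_at.hour >= 19:
        score += 2          # out-of-hours request
    if not known_device:
        score += 2          # message from a new or unrecognised device
    if first_contact:
        score += 1          # no prior correspondence with this sender
    if mentions_payment:
        score += 2          # financial content raises the stakes
    return score

# A late-night payment request from a new device with no prior contact:
score = anomaly_score(datetime(2025, 3, 4, 23, 15), known_device=False,
                      first_contact=True, mentions_payment=True)
print(score)  # 7: well above a review threshold of, say, 4
```

Production systems replace hand-set weights with models trained on the organisation's own traffic, but the principle, scoring context rather than content alone, is the same.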
Human Factors
Continuous Training: Move beyond annual awareness sessions to regular, scenario-based training that reflects the latest attack techniques.
Simulated Phishing: Conduct frequent, realistic phishing simulations to test and reinforce staff vigilance.
Clear Reporting Channels: Make it easy for users to report suspicious messages and ensure rapid response from security teams.
Organisational Policies
Multi-Factor Verification: Require secondary confirmation for sensitive requests, especially those involving financial transactions or changes to access permissions.
Least Privilege: Limit access to sensitive systems and data, reducing the potential impact of a compromised account.
Incident Response Planning: Prepare for the possibility of successful phishing attacks with clear playbooks for containment, investigation, and recovery.
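The multi-factor verification policy above can be reduced to a simple, auditable rule. The sketch below assumes a hypothetical approval threshold and payee check; the figures are placeholders an organisation would set for itself.

```python
# Sketch of a payment-approval rule implementing the secondary-confirmation
# policy. The threshold is a hypothetical placeholder, not a recommendation.

APPROVAL_THRESHOLD = 10_000  # amount above which out-of-band confirmation
                             # is always required

def requires_secondary_confirmation(amount: float, new_payee: bool) -> bool:
    """Large transfers, and any transfer to a payee not seen before, must
    be confirmed via a separate channel, e.g. a phone call to a number
    already on file, never one supplied in the request itself."""
    return amount >= APPROVAL_THRESHOLD or new_payee

print(requires_secondary_confirmation(25_000, new_payee=False))  # True
print(requires_secondary_confirmation(500, new_payee=True))      # True
print(requires_secondary_confirmation(500, new_payee=False))     # False
```

The crucial design choice is the confirmation channel: it must be independent of the message being verified, which is exactly what defeats a deepfake voice call paired with a spoofed email.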
The Road Ahead: Adapting to AI-Driven Threats
The arms race between attackers and defenders is accelerating. As AI models become more capable, phishing campaigns will only grow more convincing and harder to detect. Defenders must respond in kind, leveraging AI for threat detection, automating response workflows, and fostering a culture of scepticism and verification.
Organisations that thrive in this new era will be those that treat security as a shared responsibility, blending cutting-edge technology with empowered, well-trained people. The future of social engineering defence lies not in eliminating risk, but in building resilience—preparing every user to be the last line of defence when the machine is in the middle.