AI Phishing Isn't the Problem Everyone Says It Is (Yet)
Hot take: the panic about AI-generated phishing is overblown. The real AI threat to security is different from what the headlines suggest.
Every conference talk. Every vendor pitch. Every security prediction article.
“AI will revolutionize phishing! Attackers will generate perfect, personalized emails at scale! Traditional defenses will crumble!”
After spending six months actually looking at phishing campaigns and talking to people who run them… I’m not convinced.
Let me explain why the AI phishing panic is mostly hype, and what we should actually be worried about.
The Hype
The narrative goes like this:
- LLMs can generate perfect, grammatically correct text
- Attackers will use LLMs to create personalized phishing at scale
- The “Nigerian prince” spelling errors that helped us spot phishing will disappear
- Detection will become impossible
Sounds scary. There’s just one problem.
The Reality
Phishing Already Works Fine
Here’s something vendors don’t want you to know: phishing doesn’t fail because of spelling errors.
I looked at our clients’ phishing simulation data from the past year, comparing click rates on “obvious” phishing (broken English, Nigerian-prince vibes) with “sophisticated” phishing (good grammar, plausible context):
- Obvious phishing: 8-12% click rate
- Sophisticated phishing: 12-15% click rate
The difference exists, but it’s not dramatic. People click phishing because they’re busy, distracted, or curious - not because they carefully analyze grammar.
Attackers don’t need AI to succeed. They’re succeeding now.
Scale Already Exists
“But AI enables phishing at SCALE!”
Phishing toolkits already send millions of emails. GoPhish, King Phisher, and countless criminal tools handle scale fine.
What bottlenecks phishing operations?
- Getting email lists
- Bypassing spam filters
- Setting up infrastructure
- Handling responses and extracting value
None of these are solved by LLMs.
Personalization Is Overrated
“AI can personalize each email using LinkedIn data!”
Sure. But again, look at what actually works:
The most successful phishing campaigns I’ve seen recently:
- “Your package couldn’t be delivered” (no personalization)
- “Unusual login attempt” (no personalization)
- “Invoice attached” (no personalization)
- “Meeting agenda” (minimal personalization)
Mass campaigns with emotional triggers outperform personalized campaigns in most cases. Why spend resources on personalization when “your Amazon order” works on everyone who uses Amazon?
The Grammar Thing Is a Myth Anyway
The “bad grammar helps us spot phishing” idea assumes people read emails carefully enough to notice.
They don’t.
Eye-tracking studies show people spend 3-5 seconds deciding whether to click something. They’re not proofreading. They’re pattern matching.
“Email looks like it’s from Microsoft” → click
“Email asks me to do something plausible” → click
Grammar barely enters the decision.
What I Am Worried About
Okay, so AI phishing is overhyped. What AI threats are real?
1. Voice Cloning for Vishing
This one’s actually happening. With 10-30 seconds of audio, you can clone a voice convincingly.
We’ve seen:
- Fake CFO calls requesting wire transfers
- Fake IT helpdesk calls requesting credentials
- Fake family member “kidnapping” calls
Unlike email phishing (where we have technical controls), voice calls rely almost entirely on human judgment. And humans are bad at detecting voice clones.
This is a real threat that’s growing.
2. Deepfakes for Video Verification
Remember that Hong Kong case? $25 million transferred after a video call with deepfaked executives.
As video becomes more common for verification, deepfakes become more dangerous. Not for mass phishing - for targeted, high-value fraud.
3. AI-Enhanced Reconnaissance
LLMs are genuinely useful for:
- Synthesizing OSINT from multiple sources
- Identifying high-value targets
- Understanding organizational structures
- Generating pretexts
This makes the reconnaissance phase faster and more thorough. Not revolutionary, but a real efficiency gain for attackers.
4. AI-Assisted Malware Development
This is probably the biggest actual threat.
LLMs can:
- Help less-skilled attackers write functional malware
- Generate variations to evade signatures
- Explain how security tools work (to bypass them)
- Speed up exploit development
The democratization of attack capabilities is real. The bar for “what you need to know to write malware” has dropped.
Why the Phishing Focus?
So why does everyone talk about AI phishing if it’s not the main threat?
1. It’s easy to demonstrate
“Look, ChatGPT can write a phishing email!” makes for a good conference demo. Showing actual threats is harder.
2. Vendors have solutions to sell
“AI-powered phishing requires AI-powered detection!” is a convenient story if you’re selling detection tools.
3. Journalists need stories
“AI makes phishing undetectable” gets clicks. “AI marginally improves phishing grammar” doesn’t.
4. It’s intuitive
People can imagine AI writing emails. They can’t as easily imagine AI assisting with malware development or voice cloning.
What Should Actually Change
For Phishing Defense
Keep doing what works:
- Email authentication (DMARC/DKIM/SPF - see the quick check after this list)
- Link protection and sandboxing
- User awareness training
- Technical controls that don’t rely on humans spotting bad grammar
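If you want to sanity-check your own domain’s email-authentication posture, the records are just DNS TXT entries. Here’s a minimal sketch using the third-party dnspython library; the domain is a placeholder, and this only checks that the records exist, not that they’re well-configured:

```python
# Quick look at a domain's SPF and DMARC TXT records.
# Requires dnspython (pip install dnspython). "example.com" is a placeholder.
import dns.resolver

def get_txt(name: str) -> list[str]:
    """Return all TXT strings published at a DNS name, or [] if none."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode() for r in answers]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []

domain = "example.com"
spf = [r for r in get_txt(domain) if r.startswith("v=spf1")]
dmarc = [r for r in get_txt(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf or "missing")
print("DMARC:", dmarc or "missing")
```

A missing DMARC record, or one stuck at “p=none” forever, means spoofed mail from your domain sails through - no AI required for that either.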
Don’t panic-buy “AI-powered email security” based on threat hype.
For Actual AI Threats
Invest in:
- Out-of-band verification for high-value requests (call back on known numbers - sketched after this list)
- Video/voice authentication controls
- Detection of AI-generated content where it matters
- Incident response for scenarios where voice/video impersonation occurs
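To make the out-of-band verification control concrete, here’s a minimal sketch. The threshold, directory contents, and field names are illustrative assumptions, not any standard - the point is that the number you dial back must come from records you already held, never from the request itself:

```python
# Sketch of a callback-verification rule for high-value requests.
# All values here are illustrative assumptions.
from dataclasses import dataclass

CALLBACK_THRESHOLD_USD = 10_000  # assumption: policy-defined threshold

# Known-good numbers, maintained independently of any inbound request.
DIRECTORY = {"cfo@example.com": "+1-555-0100"}

@dataclass
class PaymentRequest:
    requester: str                          # claimed identity
    amount_usd: float
    number_called_back: str | None = None   # number the approver actually dialed

def approve(req: PaymentRequest) -> bool:
    """Approve only if the callback went to the directory number on file."""
    if req.amount_usd < CALLBACK_THRESHOLD_USD:
        return True  # below threshold: normal controls apply
    known = DIRECTORY.get(req.requester)
    return known is not None and req.number_called_back == known

# A deepfaked "CFO" who supplies their own callback number fails the check:
fake = PaymentRequest("cfo@example.com", 250_000, "+1-555-9999")
assert approve(fake) is False
```

The control is procedural, not technical - the code just shows why it defeats voice clones: a perfect clone of the CFO’s voice can’t change the number already sitting in your directory.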
For Security Strategy
Think about AI as an enabler, not a revolution.
AI makes some attacks:
- Slightly more efficient ✓
- Slightly harder to detect in specific cases ✓
- Fundamentally different ✗
- Impossible to defend against ✗
Defense in depth still works. Basic security hygiene still matters. Don’t let hype distract from fundamentals.
The Contrarian Conclusion
Three years from now, phishing will still work mostly the same way it does now. The attacks will be slightly more polished. The success rates will be marginally higher.
The real AI security story will be:
- Voice/video impersonation fraud
- Lowered barrier to malware creation
- AI-assisted vulnerability discovery (by both sides)
- Automated attack chains
Not “ChatGPT writes phishing emails.”
Focus on real threats. Ignore the noise.
The most effective phishing campaign of 2024 was probably “Your Netflix payment failed.” No AI required. Fundamentals still matter.