For years, the standard advice for spotting phishing emails could be summarized in a few simple rules: look for spelling mistakes, check for generic greetings, hover over links before clicking, and be suspicious of urgent requests from unknown senders.
That advice isn't wrong. But it's increasingly incomplete.
Artificial intelligence has fundamentally changed the phishing landscape. The telltale signs that security training programs have taught employees to recognize are disappearing—not because employees are less vigilant, but because the emails they're being asked to evaluate have gotten dramatically better.
What's Actually Changed
The phishing emails of a few years ago were, in many cases, identifiable by their imperfections. A misspelled company name. An awkward sentence structure. A greeting that read "Dear Customer" instead of using your actual name. These were artifacts of attackers working across languages and at scale, without the resources to personalize every message.
AI has removed those constraints.
Modern AI language models can generate emails that are grammatically flawless, contextually appropriate, and personalized to a degree that was previously only achievable through manual reconnaissance. Security researchers have observed AI-generated phishing emails that reference real projects, mimic internal communication styles, and arrive at times that align with normal business workflows.
Research from leading security firms has found that AI-generated phishing emails can achieve click-through rates that match or exceed those of messages crafted by experienced human attackers—and significantly outperform the crude, mass-produced phishing attempts of the past.
We covered the broader implications of AI in cybercrime in our piece on what Canadian SMBs should know about AI-powered threats.
The Old Red Flags Are Fading
Consider the traditional phishing indicators that most security awareness training emphasizes, and how AI has eroded each one:
Grammar and Spelling Errors
AI language models produce native-quality text in virtually any language. The days of spotting a phishing email because of awkward phrasing are effectively over for AI-assisted campaigns. An attacker operating from anywhere in the world can now generate emails that read as though they were written by a native English speaker sitting in the office next door.
Generic Greetings
AI can scrape publicly available information—LinkedIn profiles, company websites, social media—and generate emails that address recipients by name, reference their role, and mention real colleagues or projects. "Dear Valued Customer" has given way to "Hi Sarah, following up on the Q4 vendor review you mentioned in Tuesday's team meeting."
Suspicious Sender Addresses
While AI doesn't directly change email infrastructure, the increased sophistication of AI-crafted content means that even slightly suspicious sender addresses are more likely to be overlooked when the message itself is highly convincing. Attackers also combine AI-generated content with compromised legitimate email accounts, making the technical indicators harder to spot.
Obvious Urgency Tactics
Early phishing attempts relied on blunt urgency: "Your account will be closed in 24 hours!" AI enables more nuanced approaches—a gentle follow-up on an apparently missed invoice, a routine-sounding request from IT to update credentials, or a contextually appropriate reminder about a deadline that actually exists.
We explored the fundamentals of these manipulation techniques in our article on recognizing social engineering attacks.
Why Traditional Training Isn't Enough
This isn't an argument against security awareness training—it remains one of the most important investments a business can make. The point is that training programs built around spotting obvious red flags need to evolve alongside the threats they're designed to counter.
The challenge is a fundamental asymmetry: defenders need to be right every time, while attackers only need to succeed once. When AI allows attackers to craft hundreds of highly personalized, contextually convincing emails in the time it used to take to write one, the math shifts further in the attacker's favor.
Several factors make this particularly challenging for small and medium-sized businesses:
- Higher trust environments: In smaller organizations, employees are accustomed to informal communication from leadership. An email from the owner asking for something unusual doesn't trigger the same skepticism it might in a large corporation with formal communication protocols.
- Limited email security infrastructure: Many SMBs rely on basic email filtering that wasn't designed to catch AI-generated content, which often passes technical checks because it doesn't contain the patterns traditional filters look for.
- Fewer layers of verification: In organizations where one person handles accounts payable, there may be no second set of eyes on a convincing request to update vendor payment details.
The Spear Phishing Evolution
The most concerning development isn't mass phishing—it's the evolution of spear phishing, the targeted form of phishing directed at specific individuals within an organization.
Spear phishing has always been more dangerous than bulk phishing because it's personalized. The limiting factor was the time and effort required to research targets and craft convincing messages. AI has compressed that effort dramatically.
An attacker can now automate the research phase—pulling information from LinkedIn, company websites, press releases, and social media—and use AI to generate highly targeted messages at scale. What used to be a labor-intensive, one-at-a-time operation can now produce dozens of individually personalized spear phishing emails per hour.
For small businesses, this means that targeted attacks—once reserved for high-value corporate targets—are now economically viable against organizations of any size.
What AI-Powered Phishing Looks Like Today
To understand the current threat, it helps to look at how AI reshapes several familiar attack patterns:
Business Email Compromise (BEC)
AI enables attackers to study an executive's email writing style from leaked communications or public statements, then generate messages that convincingly mimic that style. The resulting BEC attempts are harder to distinguish from legitimate emails—even for people who know the executive personally.
Supply Chain Phishing
AI-crafted emails impersonating vendors or partners can reference real purchase orders, project timelines, or contract details gathered from public sources. When the email reads like a routine business communication from a known contact, the natural response is to act on it—not to question it.
Multi-Stage Campaigns
Rather than going directly for credentials or money, some AI-powered campaigns begin with innocuous-seeming messages designed to establish a conversation. Only after several exchanges—during which the attacker builds rapport and credibility—does the actual malicious request arrive. This patience-based approach exploits the human tendency to trust established relationships.
How to Adapt
Defending against AI-powered phishing requires a shift in mindset—from "train employees to spot bad emails" to "assume some convincing emails will get through, and build systems that limit the damage."
Evolve Your Training
Security awareness programs should move beyond red-flag checklists and focus on building a verification mindset. Instead of teaching employees to spot grammatical errors, train them to question unexpected requests—regardless of how professional the email appears. As we discussed in our article on the human factor in security, the goal is to create a culture where pausing to verify is the default response, not the exception.
Implement Technical Controls
Since human detection alone is no longer sufficient, technical safeguards become more important:
- Advanced email security that analyzes behavioral patterns, not just content—looking at sender reputation, communication frequency, and contextual anomalies
- Multi-factor authentication on all accounts, so that stolen credentials alone aren't enough to compromise systems
- DMARC, DKIM, and SPF email authentication protocols that make it harder for attackers to spoof your domain
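As a quick illustration of the last point, the sketch below uses Python with the third-party dnspython package (not part of the standard library) to check whether SPF and DMARC records are actually published for a domain. The domain name is a placeholder, and DKIM is left out because verifying it requires knowing the sender's selector. Treat this as a rough diagnostic, not a substitute for a proper email security review.

```python
# Minimal sketch: check whether SPF and DMARC TXT records are published
# for a domain. Assumes the third-party "dnspython" package is installed
# (pip install dnspython). "example.ca" is a placeholder, not a real target.
import dns.resolver


def get_txt_records(name: str) -> list[str]:
    """Return the TXT record strings for a DNS name, or an empty list."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    # A TXT answer may be split into multiple character strings; join them.
    return [b"".join(rdata.strings).decode() for rdata in answers]


def check_email_auth(domain: str) -> None:
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}")
             if r.startswith("v=DMARC1")]
    print(f"SPF record:   {spf[0] if spf else 'MISSING'}")
    print(f"DMARC record: {dmarc[0] if dmarc else 'MISSING'}")


if __name__ == "__main__":
    check_email_auth("example.ca")
```

If either record comes back missing, your domain is easier to impersonate; the specific policy values you should publish depend on how your organization sends email, which is worth confirming with whoever manages your DNS and mail services.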
We covered the evolving landscape of email threats in our piece on elevating your email security.
Build Verification Into Processes
For sensitive actions—financial transactions, credential changes, data sharing—establish verification procedures that don't rely on the email channel alone:
- Confirm payment changes via phone using a number already on file
- Require dual authorization for transactions above a set threshold
- Use internal communication tools (not email) to verify unusual requests from colleagues
Invest in Detection and Response
Accepting that some phishing emails will get through shifts the focus to detection speed. How quickly can you identify that an account has been compromised? How fast can you contain the damage? Endpoint detection tools and active monitoring can catch the signs of a successful phishing attack—unusual login locations, abnormal data access patterns, lateral movement—before it escalates.
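To make "unusual login locations" concrete, here is a deliberately simplified sketch of that one signal: it keeps a running set of countries each user has signed in from and flags the first login from anywhere new. The event list and field names are invented for illustration; in practice you would rely on the alerting built into your identity provider or endpoint detection platform rather than a hand-rolled script.

```python
# Simplified sketch of one detection signal: flag logins from countries a
# user has never signed in from before. The event data below is hypothetical.
from collections import defaultdict

# Hypothetical login events as (username, country_code), in chronological order.
login_events = [
    ("sarah", "CA"),
    ("sarah", "CA"),
    ("jacob", "CA"),
    ("sarah", "RO"),  # first login from a new country for this user
    ("jacob", "CA"),
]

seen_countries: dict[str, set[str]] = defaultdict(set)

for user, country in login_events:
    if seen_countries[user] and country not in seen_countries[user]:
        print(f"ALERT: {user} signed in from {country}; "
              f"previously seen only in {sorted(seen_countries[user])}")
    seen_countries[user].add(country)
```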
The Bigger Picture
AI hasn't made phishing a new threat—it's made an existing threat significantly harder to defend against using the methods that have worked for the past decade. The fundamental advice hasn't changed: be skeptical, verify unexpected requests, and don't click on things impulsively. What has changed is that following that advice now requires more than pattern recognition—it requires a systematic approach that combines human judgment with technical controls.
For small businesses, the practical takeaway is this: if your email security and employee training haven't been updated recently, they may be calibrated for a threat landscape that no longer exists. Many of the phishing emails reaching your team today bear little resemblance to the ones your training materials warned about—and that gap is where the real risk lives.
This article is intended for informational purposes only and does not constitute professional security, legal, or compliance advice. Organizations should consult with qualified professionals to assess their specific circumstances and develop appropriate protective measures.