A resume landed in my inbox on Tuesday. By Wednesday, I was convinced we'd found a unicorn candidate. By Friday, we found out this person didn't even exist… The entire profile, from the LinkedIn photo to the GitHub contributions, was completely AI-generated. An artificial candidate nearly became our new founding engineer.

This isn't an outlier story anymore. Gartner predicts that by 2028, 25% of all job applicants globally will be completely fake. Let that sink in. We're no longer just filtering out unqualified resumes. We're also dealing with sophisticated AI impersonations and fabricated work histories that can be gateways for malware and security breaches.

So how do you spot them? After seeing dozens of these cases with Stardex, here's my personal checklist:

1. Connection patterns: Watch for networks that don't match claimed experience.
2. Activity history: Real professionals have a consistent voice in what they share.
3. Visual consistency: Cross-reference photos and look for AI-generated images.
4. Direct verification: Contact previous employers through official channels.
5. Tech tools: Use AI-detection software like GPTZero on resumes (a rough sketch of what that call can look like follows this post).
6. Interactive tests: Ask candidates to perform simple actions during video calls.
7. Trust your gut: When something feels off, it usually is.

At Stardex, our AI ATS doesn't just organize your candidates; it actively protects your whole recruiting process by flagging suspicious patterns across platforms using multiple AI-detection apps. Because with the right tools in place, the technology creating these problems can also help solve them.

PS: I've developed two more verification techniques that work like a charm. If you're a head of recruiting facing these challenges, DM me and I'll share them with you.
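For checklist item 5, here is a minimal sketch of running extracted resume text through GPTZero's REST API. The endpoint path, header name, and request/response field names are assumptions based on GPTZero's public API docs and may have changed; treat this as an illustration of the workflow, not a drop-in integration.

```python
import requests

GPTZERO_URL = "https://api.gptzero.me/v2/predict/text"  # assumed endpoint; confirm against GPTZero's current docs
API_KEY = "your-gptzero-api-key"  # placeholder credential


def score_resume_text(resume_text: str) -> float:
    """Send plain resume text to GPTZero and return an AI-likelihood score between 0 and 1."""
    response = requests.post(
        GPTZERO_URL,
        headers={"x-api-key": API_KEY},   # assumed auth header name
        json={"document": resume_text},   # assumed request field name
        timeout=30,
    )
    response.raise_for_status()
    doc = response.json()["documents"][0]  # assumed response shape
    # Older responses exposed completely_generated_prob; newer ones nest class probabilities.
    if "completely_generated_prob" in doc:
        return doc["completely_generated_prob"]
    return doc["class_probabilities"]["ai"]


if __name__ == "__main__":
    sample = "I am a highly motivated and results-oriented individual..."
    print(f"AI-likelihood: {score_resume_text(sample):.2f}")
```

Treat the returned score as one signal alongside the other checklist items, not as a verdict on its own.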
Validating AI-Generated Candidate Emails
Summary
Validating AI-generated candidate emails means checking whether emails, profiles, and resumes from job applicants are genuine or artificially created by AI tools. As more fake candidates use advanced AI to create convincing credentials, recruiters need to watch for warning signs to avoid wasted time or security risks.
- Check inconsistencies: Look for mismatched details, unusual formatting, and suspiciously perfect language in candidate emails and resumes.
- Verify backgrounds: Cross-check employment history by contacting previous employers and using trusted channels to confirm candidate claims.
- Use detection tools: Implement AI-detection software and conduct real-time interviews to spot artificial personas and ensure candidates have real skills.
🥸 Another fake AI candidate wasted my time. Let me show you how to spot them and why they can be dangerous.

This was my second encounter, and I had a hunch, but I needed confirmation and wanted to learn how to detect them faster. They're clearly on the rise. He claimed to be a contractor from Estonia with a polished CV listing real companies. A few yellow flags appeared. Alone, each might be plausible, but together they spelled trouble.

😶‍🌫️ His LinkedIn profile had few connections, a superficial post history, and was only a month old. It can happen, but it's unusual.

🤨 His name, location, and ethnicity didn't match one another. He had tried very hard for a 🇺🇸-sounding name. Possible, but the last AI scam I saw had the same oddity.

👾 His messages were suspiciously specific to the contract ad and likely AI-generated. No issue in itself, but another sign.

🤖 His CV looked fine but read like keyword bingo, adding new tech at every position. It was coherent but clearly AI-written.

🧑‍💻 On the call, he paused too long when I asked his location, finally saying "Tallinn". He couldn't name a single local pub. Maybe he doesn't drink, but it was suspect. Eventually he just rattled off tech stacks, unable to name any actual project details. Five minutes in, I confronted him. He played dumb, then vanished when he realised he was caught.

AI helps fakers create profiles and CVs that were once easy to debunk. Now you have to look harder, and they'll only get better. Why is this an issue? They'll waste your time in interviews and your money if hired. If they were skilled, they wouldn't need to lie. They might slip AI-generated code past you, or worse, exploit their access to steal IP or damage your systems for further gain.
At Dover, we've detected tens of thousands of sketchy candidates applying to jobs with completely AI-generated resumes. After reviewing over 5 million resumes, we've learned how to spot the telltale signs. Here's what to look for.

Language red flags:
- "Rich tapestry of experiences" or "multifaceted approach"
- The word "delve"; its usage exploded after ChatGPT launched
- Generic openers like "I am a highly motivated and results-oriented individual"

Structural tells:
- Perfect formatting with zero natural variations
- Identical bullet-point structures throughout
- Content that's too dense, with no breathing room

Content inconsistencies:
- Sophisticated language mixed with contextual awkwardness
- Vague accomplishments like "enhanced organizational efficiency"
- Industry buzzwords used incorrectly (e.g., claiming expertise in 8+ programming languages)

Behavioral patterns:
- Mass applications at identical timestamps
- Formal cover letters followed by casual, error-prone emails
- Interview performance that doesn't match written sophistication

Detection tools that might be helpful:
- GPTZero (84% accuracy, paid)
- Sapling AI (68% accuracy, free)

THE BIGGEST TELL: Everything sounds "too perfect": grammatically flawless but missing authentic personal touches. (A simple phrase scan, sketched after this post, can surface many of the language red flags automatically.)

My advice: Don't use AI detection as your only filter. Some legitimate candidates will use AI to help with their resumes. Focus on skills-based assessments and practical demonstrations during interviews. Your goal isn't eliminating AI-assisted applications, but finding genuine qualifications behind the polished presentation.
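The language red flags above lend themselves to a crude automated pre-screen. Below is a minimal sketch, assuming resume text has already been extracted to plain strings; the phrase list and the helper name flag_resume are illustrative choices, not Dover's actual detection logic.

```python
import re

# Phrases and patterns taken from the red-flag list above; extend as you spot new tells.
RED_FLAG_PATTERNS = [
    r"rich tapestry of experiences",
    r"multifaceted approach",
    r"\bdelve\b",
    r"highly motivated and results-oriented individual",
    r"enhanced organizational efficiency",
]


def flag_resume(text: str) -> list[str]:
    """Return the red-flag patterns found in a resume's plain text (case-insensitive)."""
    lowered = text.lower()
    return [pattern for pattern in RED_FLAG_PATTERNS if re.search(pattern, lowered)]


if __name__ == "__main__":
    resume = (
        "I am a highly motivated and results-oriented individual with a "
        "rich tapestry of experiences. I delve into complex problems..."
    )
    hits = flag_resume(resume)
    print(f"{len(hits)} red flags found: {hits}")
```

A few hits prove nothing on their own; as the post says, legitimate candidates use AI too. Use a scan like this to prioritize manual review, not to auto-reject.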