Spammy form submissions spiked to 4,530 PER DAY for our nonprofit clients. Here’s how we cut it by 99.8%, giving them back some time (and sanity).

We’ve always had anti-spam tools in place. But the bots were getting better at slipping through the cracks. So we added two new fields to website forms. Both are hidden from visitors, so they don’t impact the form submission process.

1. Honeypot
A honeypot is a hidden field designed solely to bait spam bots into filling it out. Since visitors can’t see it, they’ll never complete it. That way, if this custom honeypot is filled out, we know it was a bot and we can flag it as spam.

2. Time Trap
This field checks how much time passed from when the form was loaded to when it was submitted. Spam bots are fast. Humans aren’t. If a form’s submitted in under two seconds, it’s probably not from a real person.

If someone somehow does trigger this system, they’ll see a message telling them they were flagged as spam. When a submission makes it through these first two checks, it gets routed into the anti-spam systems we previously had in place to make sure it’s clean before hitting our clients’ inboxes.

Spam wasn’t impacting all of our clients equally. But some were getting hit in waves. Thanks to the technical wizardry of the one and only Jonathan Goldford, we're down from 4,530 spammy messages per day to a much more manageable 11! Which means more time for nonprofits to focus on work that moves their mission forward.
Bot detection for form spam
Summary
Bot detection for form spam is the process of identifying and blocking automated programs (bots) that try to submit fake or harmful information through online forms. It protects websites from being overwhelmed with junk submissions and keeps user data accurate and secure.
- Add hidden checks: Include invisible fields or measure how quickly forms are filled out, since bots often act much faster than real people and will complete fields humans can't see.
- Use human verification: Rely on tools like CAPTCHA, verification tokens, or double opt-in emails to confirm that real users—not bots—are submitting information.
- Limit suspicious activity: Set up controls like rate limits or geographic restrictions, and review unusual patterns to prevent bots from spamming your forms repeatedly.
After seeing multiple devs lose hundreds to voice AI form spam, here's a breakdown of effective mitigations by use case:

For authenticated users:
‣ Rate-limit per session and implement temporary account suspension for abuse
‣ Escalate unusual activity patterns to manual review/support contact
‣ This works because you have persistent identity to enforce consequences

For anonymous public forms (the harder problem):
‣ Use systems that generate single-use verification tokens confirming human interaction
‣ Modern reCAPTCHA operates invisibly across pages, analysing comprehensive behavioural profiles: mouse trajectories, keystroke timing, Canvas/WebGL fingerprints, scrolling patterns, device characteristics, and Google account signals
‣ When it determines you're human, it issues a time-limited verification token (valid for 2 minutes, single use only)
‣ Your API validates this token server-side with Google before processing the request
‣ This creates per-request proof-of-humanity without requiring traditional session management

Universal protections:
‣ Hard spending caps and call duration limits
‣ IP-based rate limiting and geographic restrictions by country/area code
‣ Integration with fraud detection services

Advanced verification:
‣ SMS confirmation to validate phone ownership before calling
‣ ⚠️ Critical: this creates SMS bombing attack vectors, so apply rate limiting and CAPTCHA protection to SMS endpoints too

The fundamental vulnerability: many voice AI implementations expose API credentials directly in browser dev tools. That makes ALL other protections worthless, since attackers can bypass your frontend entirely and call APIs directly. The endpoint that triggers the call is the one that must be protected.

The uncomfortable truth: perfect security for truly open services doesn't exist. You can only make abuse expensive and annoying enough to deter most attackers.

#VoiceAI #WebSecurity #BotProtection
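The server-side token validation step the post describes can be sketched like this. The `siteverify` endpoint and the `secret`/`response` parameters are Google's documented reCAPTCHA API; the 0.5 score threshold is an assumption you'd tune per form.

```python
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def verify_token(secret: str, token: str, min_score: float = 0.5) -> bool:
    """POST the client-supplied token to Google's siteverify endpoint
    and decide from the JSON response. Must run server-side: the secret
    key can never be shipped to the browser."""
    data = urllib.parse.urlencode({"secret": secret, "response": token}).encode()
    with urllib.request.urlopen(VERIFY_URL, data=data, timeout=5) as resp:
        return interpret(json.load(resp), min_score)

def interpret(result: dict, min_score: float = 0.5) -> bool:
    """Accept only if Google confirmed the token, and (for reCAPTCHA v3)
    the behavioural score clears our threshold."""
    if not result.get("success"):
        return False
    score = result.get("score")  # absent for v2 checkbox tokens
    return score is None or score >= min_score
```

Because tokens are time-limited and single-use, the API should call `verify_token` on every protected request, not once per session, which is exactly the per-request proof-of-humanity described above.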
I remember the first time a spambot attacked one of the sign-up pages at BuzzFeed.

At first, we didn’t realize what was happening. We were looking at our email lists and saw that a ton of new subscribers were signing up for our newsletters that day — exciting! But then we looked a little closer. Almost all of the subscribers were from the same domain, yahoo.co.uk, which seemed odd. And then we looked even closer: the sign-ups were coming in so quickly — dozens of new yahoo.co.uk emails every minute — that there was no way the email addresses were submitted by actual humans.

That’s when we realized that something was seriously wrong. We were the victims of a spambot, which had been crawling the web looking for a form like ours. These bots are usually looking for forums with a comment section where they can drop in a link to a page where someone can buy something, like pharmaceutical drugs. These bots don’t always realize that they’ve found a newsletter sign-up form — not a comment section. And if lots of bots end up on your list, it can cause serious deliverability issues.

So what can you do about them?

1) You can use a third-party tool to verify email addresses, like Kickbox, before adding them to your list.
2) You can use CAPTCHA, like we eventually did at BuzzFeed, to shut down bot activity on key forms.
3) You can set up a honeypot — a hidden field only a bot can see — and suppress any email address that fills out that field.
4) You can use double opt-in to require an extra confirmation before someone is added to the list.

Your strategy might even involve multiple steps — many teams use CAPTCHA and double opt-in, for instance. Every newsletter should have a game plan for keeping its list clean. I’ve got more ideas here (https://lnkd.in/g89f2553) about how to build out the right strategy for your newsletter.

–––

📷 Below is a screenshot of the BuzzFeed newsletter page.
The CAPTCHA logo in the bottom right corner — three overlapping arrows of different colors — indicates that the form is being secured by CAPTCHA.