The Language Trap: How AI Buzzwords Are Programming Us
It was an unusually busy industry event. But then, what do you expect when everything is about AI nowadays? Besides listening to the speakers on stage, I went from booth to booth to see what the tech companies and start-up entrepreneurs were evangelising.
“It’s AI-powered,” someone at a booth declared while showing off their solution, with the reverence usually reserved for discovering fire.
The listener's eyes widened. The price and everything else suddenly seemed reasonable. Everyone around nodded sagely.
This moment reminded me of a Matthew Syed article I had read in The Sunday Times, which discussed Ludwig Wittgenstein's famous warning in Philosophical Investigations about "the bewitchment of intelligence by means of language." The philosopher understood something we're only now experiencing at scale: the words we choose don't just describe reality; they actively reshape how we think, feel, and act.
AI isn’t just reshaping technology. The words we use about AI are reshaping us.
The Trinity of AI-Bewitchment
Walking the floor, I was struck by just how much our everyday language has become influenced by AI, or whatever each of us understands AI to be. I am sure many readers have noticed the same. Various words and phrases have become commonplace today, even when AI is not the focus of discussion. For this article, I have selected three phrases from among those that dominate today's AI conversation. They look innocent enough on the surface, but they carry far more weight than their syllables suggest. They don't just label what AI does; they change how we relate to it, how we value it, and how we fear it.
Let’s talk about the three culprits: AI-powered, training data, and hallucination.
1. AI-Powered
Arguably one of the most used and abused! This has become the ultimate product blessing. It is our generation’s equivalent of “new and improved.” Say the words “AI-powered” and suddenly there is a transformative change - the mundane becomes magical, a recommendation engine becomes an oracle, a chatbot becomes a consultant and a glorified spreadsheet becomes a “predictive analytics platform.” There seems to be no limit to what ‘AI-powered’ can achieve.
The phrase positions every product as if it has assumed something supernatural. It suggests that behind your humble app lies the computational equivalent of Einstein's brain, working tirelessly to enhance your life.
And the framing is what really matters. "AI-powered" shapes our willingness to pay premium prices, our tolerance for imperfection ("It's still learning!"), and our expectation that everything should be smarter today than it was yesterday. Because it is AI-powered, after all!
Tongue-in-cheek: Nothing, it would seem, is impossible if it's AI-powered! I shudder to think that someday soon we might converse like this in everyday life -
- “My AI-powered breakfast cereal adapts to my taste buds in real time while optimizing my nutritional intake. Each spoonful is training tomorrow’s bowl.”
- “My marriage is now AI-powered. The premium version predicts arguments before they happen and generates automated anniversary reminders.”
Once you see it, you can’t unsee it. “AI-powered” has become linguistic glitter dust which we can sprinkle to make anything sparkle.
2. Training Data
The phrase sounds disciplined, scientific, and rigorous. Something very serious and sacrosanct, even. It would seem as if the AI spent years at school, scribbling notes, revising for exams, and occasionally stressing about report cards.
In reality, “training data” is just… us. Yes, just us. Deal with it. Yes, it has been fed the genuinely intellectual stuff, the articles, the white papers, the facts, the published research and so on, all of which add immeasurably to its value. But it is also our tweets, our recipes, our reviews of dodgy phone chargers, our late-night Reddit debates, our badly spelled blogs, our questionable karaoke videos. AI hasn’t been trained in (only) a monastery of facts, intellect and wisdom. It has also been force-fed the sprawling buffet of human digital life.
But the phrase "training data" hides all that messy reality. It makes AI sound like a diligent student rather than what it really is - a giant mirror reflecting our collective brilliance, our biases, and all the absurdity that is commonplace in real life.
Tongue-in-cheek: What if humans were “trained” the way AI is?
- Read 10,000 cookbooks and watch every YouTube recipe → congratulations, you are now a Michelin-starred chef! OR
- Watch every rom-com trailer comment section to become a certified relationship expert; OR
- Skim 3 million Wikipedia pages and be in line for a Nobel Prize in Everything.
No wonder AI sometimes gives us bizarre answers. It’s been raised on the same internet that taught us to believe cats run the world and pineapple belongs on pizza.
3. Hallucination
This is perhaps the most poetic word in the AI dictionary. It makes the machine sound like a dreamy artist, gazing into the ether and producing surreal visions of imagined reality.
In truth? It just means "the system made something up." It bluffed, really!
If a doctor "hallucinated" your test results, wouldn't you sue? If your lawyer "hallucinated" legal precedents, you could end up in jail. If your accountant "hallucinated" your tax return, you could end up in prison with them. Yet when AI does it, the word softens the failure into something almost enchanting. Very forgiving, even.
Tongue-in-cheek: Imagine humans getting away with this excuse.
- “Sorry boss, I didn’t falsify the quarterly numbers. I was hallucinating increased revenue.”
- “I didn’t forget your birthday, darling. My neural pathways just hallucinated the date.”
- “Yes officer, I hallucinated that the speed limit was 90.”
See how the euphemism changes our tolerance? The machine isn’t broken, it’s just a bit…err… imaginative.
The Everyday Absurdity
So then, why do these words (and others similar) matter? Because once we start to laugh at how absurd they actually sound in human contexts, we realize how much power they hold in technological ones.
When we hear “AI-powered,” we instinctively assume intelligence, progress, and heightened value. When we hear “training data,” we assume rigour, objectivity, and neutrality. When we hear “hallucination,” we assume creativity instead of error (or error within an acceptable limit, whatever that might be).
The humour is a mirror. The serious reality is that these words subtly rewire how we perceive intelligence, progress, and trust.
The Serious Game We’re Playing
Behind the jokes lies a profound concern. These three terms are not harmless marketing jargon. Instead, they are linguistic levers that shift public understanding, boardroom decisions, and policy directions. Think of how much ‘soft’ power they exercise.
- When everything becomes “AI-powered,” we lose the ability to distinguish genuine breakthroughs from incremental upgrades.
- When every system has “training data,” we forget that the quality of outputs depends on the quality (and bias) of inputs.
- When every error is a “hallucination,” we stop demanding accountability from the systems and those who build them.
Language isn’t just description; it’s almost deliberate persuasion. And consider its enormous power and impact - it influences funding, regulation, adoption, and even the way we think about ourselves in relation to machines.
And perhaps most dangerously, it risks diminishing the uniqueness of human intelligence by blurring the line between what we do and what algorithms do.
The Choice Ahead
Wittgenstein warned us: language can bewitch intelligence. And AI discourse proves it daily.
So, the next time you hear these three magic words, pause:
- When someone says "AI-powered," ask, very objectively: what does it really do?
- When you hear "training data," ask: what was it trained on?
- When the system "hallucinates," ask: what is the cost of error, and who is responsible?
So where do we stand as ‘AI jargon’ begins to exercise enormous power and influence? In my opinion, we presently stand at a linguistic crossroads. We can continue letting these words dazzle us into misplaced faith and misplaced fear. Or we can choose language that is precise, grounded, and honest, language that respects AI’s capabilities and yet highlights its serious limitations.
Because in the end, the most dangerous thing about AI may not be what it can do to us. It may be what our words about AI are already doing to our thinking.
The algorithms may be “learning.” But are we?