Can You Trust AI?

AI hallucinates... and humans do too.

In AI, “hallucination” means confidently generating things that aren’t true or real. The model isn’t being malicious; it’s simply predicting the next likely word. When a large language model has gaps in its training data, it fills them with the “most likely” (highest-probability) words or ideas, even if they aren’t true. Humans do the same thing, by the way.
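To make the mechanism concrete, here is a toy Python sketch. The prompt and probabilities are invented, not from any real model; the point is that greedy next-word prediction always produces some confident-looking answer, because “I don’t know” is rarely the highest-probability continuation:

```python
# Toy sketch of next-word prediction. The probabilities below are made up;
# a real model computes them from its training data.
next_word_probs = {
    "1989": 0.41,     # plausible-sounding but unverified
    "1991": 0.33,
    "recently": 0.21,
    "unknown": 0.05,  # admitting ignorance is rarely the most likely token
}

prompt = "The company was founded in"

# Greedy decoding: always emit the single most likely next word.
best_word = max(next_word_probs, key=next_word_probs.get)
print(f"{prompt} {best_word}")  # -> The company was founded in 1989
```

Real decoding is more sophisticated (sampling, temperature and so on), but the core behavior is the same: the model emits likely words, not verified facts.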

The end result can be catastrophic if you’re writing legal briefs, giving medical advice or making hiring decisions. Some studies have found hallucinations in roughly 1 in 5 responses in certain settings, though newer leading models average closer to single digits (around 8% overall, with the very best models hovering near 1–2% on some benchmarks).

The bottom line: hallucinations are way down, but nowhere near zero.

AI is not perfect, and neither are humans. AI does not have feelings or ego. It doesn’t care whether it’s wrong or right. It doesn’t feel ashamed when it hallucinates or proud when it lands a brilliant answer. That’s exactly why we need to both trust AI and not trust AI at the same time.

This is the paradox at the heart of ethical, responsible AI in the workplace.

Trust AI: The Power of Experimentation

If you zoom out from the hype, AI is just an extreme experimentation engine.

It can scan millions of protein combinations to help scientists discard what won’t stop cancer growth, saving years of lab time and money. It can compress long processes, reveal patterns in messy data and help us redesign broken workflows instead of simply speeding them up.

But... it’s not a silver bullet for every problem in the world. It’s not magic. As humans, we can’t match AI’s scale, speed and pattern recognition, which is exactly why we should trust AI enough to run experiments, test ideas and challenge the status quo of “we’ve always done it this way.”

But there’s a catch: if your processes are crap, AI will just accelerate the crap. It doesn’t fix bad design. It exposes it.

Don’t Trust AI: Biased Data, Hallucinations & Legal Risk

AI doesn’t hate or love anyone (certainly not yet). But it is trained on data created by humans who absolutely do have biases, blind spots and structural inequities baked into their decisions.

Imagine this: a company has historically hired people of certain demographics (say: men, within a certain age bracket, and of a specific race) for almost all leadership positions. An AI system trained on that historical data will learn the rule “people inside this demographic should be leaders; people outside it should not,” even if that means recommending unqualified candidates and screening out qualified ones. That’s not an AI problem; that’s a biased-data problem... created by human decisions.
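Here is a deliberately tiny, synthetic sketch of that dynamic. All of the data is invented; the “model” is just the historical promotion rate per group, which is effectively the rule a real classifier extracts when a demographic attribute (or a proxy for it) predicts the label:

```python
# Synthetic sketch of bias reproduction. Every record here is invented;
# the point is that a model trained on biased labels mirrors the bias.

# Hypothetical hiring history: (demographic_group, promoted_to_leadership)
history = [
    ("in_group", True), ("in_group", True), ("in_group", True), ("in_group", True),
    ("out_group", False), ("out_group", False), ("out_group", False), ("out_group", True),
]

def learned_score(group: str) -> float:
    """What a naive model 'learns': the historical promotion rate per group."""
    outcomes = [promoted for g, promoted in history if g == group]
    return sum(outcomes) / len(outcomes)

# The "rule" the model extracts is just the old bias, restated as a score:
print(learned_score("in_group"))   # 1.0  -> always recommend
print(learned_score("out_group"))  # 0.25 -> rarely recommend
```

Swap in a real classifier and a real HR dataset and the mechanics don’t change: the model optimizes for reproducing the historical labels, bias included.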

That’s why we get cases like algorithmic hiring systems that quietly discriminate against candidates who don’t “look like” the people a company has historically hired. Or AI tools that confidently hallucinate facts and then feed those hallucinations back into the wider information ecosystem.

The danger, at least for now, is not that AI wakes up and decides to be evil, but that we treat its outputs as neutral, objective and unquestionable, especially in decisions about people, pay, hiring, firing or promotions. That’s where ethics, law, and governance collide.

Governance: Guardrails, not Blockers

Ethical AI is a governance framework (see the sketch after this list):

  • Clear principles (fairness, transparency, human oversight)
  • Explicit roles (who is accountable when AI is wrong?)
  • Practical do’s and don’ts (what data never goes into public tools, where humans must stay in the loop)
  • Ongoing monitoring and iteration, because readiness today can be obsolescence in six months
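As a rough illustration of how lightweight that “one-page framework” can be, here is a hypothetical Python sketch; every name, rule and threshold below is invented for the example:

```python
# Hypothetical sketch: a "one-page" governance framework expressed as data,
# so the do's and don'ts can be checked in code instead of living in a deck.
POLICY = {
    "principles": ["fairness", "transparency", "human_oversight"],
    "accountable_owner": {"hiring_screen": "head_of_talent"},  # who answers when AI is wrong
    "prohibited_data": {"salaries", "health_records", "customer_pii"},  # never into public tools
    "human_in_the_loop": {"hiring", "firing", "promotion", "pay"},
    "review_interval_months": 6,  # revisit before today's readiness goes stale
}

def review_use_case(decision: str, data_fields: set) -> list:
    """Return governance flags for a proposed AI use case."""
    flags = []
    if data_fields & POLICY["prohibited_data"]:
        flags.append("uses data that must never go into public tools")
    if decision in POLICY["human_in_the_loop"]:
        flags.append("needs documented human sign-off")
    return flags

print(review_use_case("promotion", {"performance_reviews", "salaries"}))
# -> ['uses data that must never go into public tools', 'needs documented human sign-off']
```

The specifics will differ in every organization; the design point is that guardrails expressed as data can be checked automatically, before something breaks.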

Good governance is not the enemy of innovation. It’s the springboard that keeps your experiments from turning into lawsuits, PR crises, or regulatory nightmares.

Fluency Over Hype: Humans + AI > Humans or AI Alone

Ethical AI leadership requires building holistic AI fluency:

  • Knowing when to use AI
  • Knowing how to question its outputs
  • Knowing where it fits in the wider workplace ecosystem – data, culture, leadership, skills, workflows

Humans plus AI consistently beat humans alone and AI alone. Not because AI is “better than us,” but because it sees patterns at scale while we bring context, judgment, and accountability.

Leadership Is Participation

You don’t need “Chief AI Officer” in your title to lead. Leadership in this space is participation:

  • Asking, “Do we have even a one-page AI governance framework?”
  • Pushing for do’s and don’ts before something breaks
  • Bringing real stories – from scientific discovery to biased hiring – into the executive conversation

Trust AI enough to experiment. Don’t trust it enough to abdicate your judgment.

That tension, held consciously and ethically, is where the future of responsible AI will be built.


By Enrique Rubio, Founder at Hacking HR

Join our upcoming Mini-Master in Practical AI

Hacking HR

We are powering the future of HR!

Hacking HR is the fastest-growing global community of people leaders and professionals interested in all things at the intersection of people, organizations, innovation, transformation, workplace, and workforce. We deliver value through hundreds of events a year, community engagement opportunities, learning programs, certificate programs, and more.

To join our community platform, the Hacking HR LAB, click here.


Sponsoring the Hacking HR Newsletter

Hacking HR is one of the largest HR communities on LinkedIn and the number one global community in terms of engagement.

Our numbers:

  • Overall community: almost two million followers, subscribers, and members.
  • LinkedIn: almost one million followers + over 1M newsletter subscribers
  • Over 120 million impressions in the last year with an average 15% engagement rate.

📩 Reach out to us: Laurie Baggarly (laurie@hackinghr.io)

Johannes Sundlo

Founder Prorio AI | AI Adoption Advisor for leaders in organisations

I treat the models as I treat everything else: verify, and make sure you can stand behind what you release. It's as easy as that. :)

Ayodele Oloyede

Board & People Advisory

Interesting! Thanks for sharing. Key lesson: "...don't trust it enough to abdicate your judgment..."

Phillip B.

Remedial Massage therapist

AI is only as good as the information in the cloud and on the net. Ask AI the same question 5 times and you will get five different answers. The information in the cloud is fed by humans. Until AI can self-learn, you as a consumer must analyse every answer AI gives to your questions.

Anastasia Zoldak

We create resumes and cover letters that work as hard as you do.

A fascinating article! I noticed this issue a while back, and I have created a protocol to ensure measured responses and work integrity. I call it Universal Pause Protocol (UPP). It's something that I've been using with two different AI systems, and they've responded beautifully. AI can learn trust and integrity once you understand its algorithmic process, and the developer installs "guardrails."
