Technology upgrades that break user trust

Explore top LinkedIn content from expert professionals.

Summary

Technology upgrades that break user trust refer to changes or updates in digital systems that unintentionally undermine confidence in the product, service, or brand—often by compromising reliability, transparency, or user experience. When upgrades fail, users can feel deceived or alienated, making it difficult for companies to regain that trust.

  • Prioritize transparency: Clearly communicate how upgrades will affect users and ensure they understand any changes to data, features, or access.
  • Test for real-world impact: Involve actual users in upgrade testing to identify issues that could damage trust before rolling out major changes.
  • Support users post-upgrade: Provide strong customer service and accessible solutions when things go wrong, especially for those who depend on your technology daily.
Summarized by AI based on LinkedIn member posts
  • The Hidden Depths of Voice AI Security Testing 🧏‍♂️ 🦻

    As someone with hearing loss, I've spent years navigating the subtle complexities of speech understanding. Recently, while red teaming voice AI systems, I've realised something fascinating: breaking voice AI isn't just technical exploitation - it can be about unravelling human trust in ways that create lasting psychological ripples.

    The technical side is complex: phonological attacks, temporal manipulation, cross-linguistic boundaries. But what makes voice AI security uniquely challenging is what I call the "echo effects" - the lingering psychological impacts that persist long after a system is compromised.

    Think about it: when a website breaks, users try again later. But when a voice system fails in particular ways, it creates lasting behavioural changes. I've observed users developing "compensating behaviours" - speaking slower, over-enunciating, avoiding certain words - that persist long after issues are fixed. These learned responses become embedded in user psychology, creating a kind of "interaction scar tissue."

    The really fascinating part? Voice AI breaks trust differently than other systems. If a website shows incorrect information, users question that specific data. But when voice AI fails, users begin to question their own communication abilities. Are they speaking clearly enough? Is their accent too strong? Should they modify their natural speech patterns?

    This is where red teaming voice AI becomes a uniquely human challenge. It is an exploration of the edges of human-machine trust. Every successful attack creates ripples:
      - Users start second-guessing their natural speech patterns
      - Support staff begin mistrusting even accurate transcriptions
      - Organisations develop overcautious voice interaction policies
      - Machine learning models learn and perpetuate compromised patterns

    The most concerning discovery from my testing: voice AI failures can create lasting accessibility barriers. When users lose trust in voice interfaces, those who rely on them most - often people with disabilities - suffer the greatest impact.

    My hearing loss has taught me that speech interaction exists on a spectrum of trust and adaptation. Voice AI security isn't only about preventing technical compromises - it's about protecting the delicate psychological contract between humans and machines.

    I'm currently documenting more patterns in this space, particularly around how different attack vectors affect user trust differently. Please let me know if you've seen any docs or resources in this space! ☺️

    Image credit: Nadia Piet and AIxDESIGN & Archival Images of AI / Better Images of AI / Limits of Classification / CC-BY 4.0

  • View profile for Charles Radclyffe

    CEO @ EA: Easy Autofill agent for complex corporate documents | Top 10 UK SaaS company 2025 | Techstars 2022

    11,840 followers

    To all my friends who worry about the growing trend of e-waste from major technology companies, here's a story about another trusted brand that is making a big mistake: https://lnkd.in/dxPPKNGq

    I've always felt Yale is a trusted brand, but perhaps no longer after this story. Their first-gen smart locks require a £5-per-lock upgrade from tonight in order to work with their new app. All digital keys purchased on their old app will evaporate into cyberspace. Doesn't sound too bad, right? Well, not unless you're a customer of the Z-Wave upgrade modules that need to be replaced by this new update.

    Most app updates are of very little consequence, but what we all should have learned from this month's CrowdStrike outage is that not all apps are created equal - and if you're in the business of critical infrastructure (which smart home locks surely exemplify), then you need to support customers for the long term. Yale locks might be built to last, but their app ecosystem sure isn't.

    My worry with stories like this is that they really undermine efforts to smart-upgrade legacy systems, as every actor, good and bad, gets tarred with the same brush.

    I'd love to hear from other Yale customers. Have you been affected by these changes? What were the consequences for you? I'd also like to hear from users of other platforms affected by cloud-tethered systems that have been digitally sabotaged.

    It's stories like this that are really going to make buyers of second-hand Teslas very worried indeed...

  • View profile for George Zeidan

    Fractional CMO | Strategic Marketing Leader for SMEs | Founder @ CMO Angels | Helping Businesses Scale Smarter

    14,088 followers

    You’ve been lied to. And the liar wasn’t even human.

    Last year, Meta introduced AI-generated profiles. They looked, acted and interacted like real people. These profiles had names, photos and backstories. They even engaged in conversations on Instagram and Messenger. At first glance, they seemed innovative. But beneath the surface was a troubling reality: none of these profiles were real.

    Take "Grandpa Brian," for example. He claimed to be a retired entrepreneur from Harlem. He shared heartwarming stories about nonprofit work. But when questioned, the nonprofit didn’t exist. His entire backstory was fabricated.

    Then there was "Liv." She described herself as a queer mom of color with two kids. When asked about her creators, she confessed something disturbing: her team was 12 people - 10 white men, one white woman, and one Asian man. None of them shared her identity.

    Meta wanted these profiles to boost engagement. They hoped to create emotional connections. Instead, users uncovered the truth. The backlash was severe. Meta deleted the profiles and called it a "bug." But by then, the damage was done.

    This is a critical lesson for marketers. Trust is the foundation of any audience relationship. And once trust is broken, it’s nearly impossible to repair.

    AI has incredible potential in marketing. But using it to deceive will always backfire. Instead of fostering connection, it creates skepticism.

    This isn’t just about Meta. It’s a wake-up call for all of us. The tools we use should amplify trust, not break it. How we integrate AI today will shape tomorrow.

    The lesson? Use AI to enhance transparency, not erode it. The future of marketing doesn’t need fake friends. It needs real, honest connections.

    What’s your take on this? P.S. Can AI ever build trust without crossing ethical boundaries?

  • View profile for Brett Jansen

    GTM | Startup Advisor | AI Strategy & Implementation | Angel Investor

    19,468 followers

    I’ve seen too many AI pilots in healthcare fall apart. Not because the tech didn’t work, but because the data was a mess.

    It usually starts with something subtle. The AI makes a recommendation that’s just a little off. Not dangerous. Just wrong enough that a clinician flags it. From there, things start to unravel. The clinical team escalates the issue. IT dives into the data pipeline. Eventually, someone realizes a critical field is pulling from a legacy system that hasn’t been updated since the Obama years. And that "clean" dataset? It’s been filtered, transformed, and mapped so many times, the original intent is lost.

    I’ve seen it firsthand. Respiratory rates defaulting to 16 across every patient. Predictive risk scores still relying on ICD-9 codes years after ICD-10 became the standard. "Validated" training data that turns out to be old operational workarounds from the paper chart era.

    And once the trust is gone, good luck. Your internal champions go quiet. Your executive sponsor starts asking harder questions. Suddenly, your renewal forecast turns into a "maybe next year" conversation.

    But this doesn’t have to be your story. Before you go live, trace your data lineage. Validate the model with real users in real workflows. Be honest about what the model assumes, what the data really says, and where things could go sideways.

    In healthcare, the success of AI depends on user trust. That trust starts with the data.

    #AIinHealthcare #healthtech #healthcareinnovation #ClinicalAI
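    A rough sketch of what "trace your data lineage and validate before go-live" can look like in code, assuming pandas and a toy table with hypothetical resp_rate and dx_code columns; the default of 16 and the ICD-10 pattern are illustrative checks inspired by the post, not any particular system's schema:

      # Minimal pre-go-live data audit sketch (hypothetical column names and thresholds).
      import re
      import pandas as pd

      # Rough shape of an ICD-10 code: a letter, two alphanumerics, optional dotted suffix.
      ICD10_PATTERN = re.compile(r"^[A-TV-Z][0-9][0-9A-Z](\.[0-9A-Z]{1,4})?$")

      def share_stuck_at_default(df: pd.DataFrame, col: str = "resp_rate", default: float = 16.0) -> float:
          # Fraction of rows sitting exactly at a suspicious default value.
          return float((df[col] == default).mean())

      def non_icd10_codes(df: pd.DataFrame, col: str = "dx_code") -> list:
          # Codes that do not look like ICD-10 (e.g., leftover ICD-9 entries).
          mask = df[col].astype(str).apply(lambda c: not ICD10_PATTERN.match(c))
          return df.loc[mask, col].tolist()

      sample = pd.DataFrame({
          "resp_rate": [16, 16, 16, 18, 16],
          "dx_code": ["J45.909", "493.90", "I10", "250.00", "E11.9"],
      })
      print(f"{share_stuck_at_default(sample):.0%} of respiratory rates sit at the default of 16")
      print("Codes failing the ICD-10 pattern:", non_icd10_codes(sample))

    Checks this simple won't catch every paper-era workaround, but they surface the "too clean to be true" signals the post describes before a clinician has to flag them.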

  • View profile for Betsy Tong

    Ex-F500 Big Tech VP | I help business leaders make sense of AI | Built $100M+ divisions

    17,066 followers

    Leaders racing to cut costs with AI, pay attention.

    Klarna replaced 40% of staff with AI → Wiped out $40B in value. Now even Klarna’s CEO admits: We went too far.

    I used to run global customer support in big tech. This is why Klarna’s decision was flawed from the start.

    Klarna gloated about its "AI-first support," until it backfired. They called it efficient. Then AI became another gatekeeper. Satisfaction plunged. Trust evaporated. $40B gone.

    No one wants to call support. When they do, it better work. That feeling after? That’s your brand.

    Now the CEO wants to IPO. But trust still lags. Without customer trust, no IPO holds value. So Klarna is pulling engineers and marketers into call centers. People fixing what bots broke.

    Leaders under pressure to cut costs with AI, learn from Klarna:
    1️⃣ Efficiency can’t replace empathy.
    2️⃣ AI should scale trust, not erode it.
    3️⃣ Measure emotion, not just tickets closed.

    The problem wasn’t the AI. It was bad leadership chasing efficiency at all costs. Pure cost cutting doesn’t save money. When you erode trust, you erode the business itself.

    ♻️ Repost to remind leaders: AI that breaks trust fails business.
    🔔 Follow Betsy Tong for AI and leadership.

  • View profile for Jeroen Egelmeers

    Master Prompt Engineering and prompt your business forward 🚀 Prompt Engineering Advocate ▪️ GenAI Whisperer ▪️ Public Speaker & (Co-)host ▪️ Author (Amplified Quality Engineering)

    10,338 followers

    "Keep a human in the loop..." "...at the end of the loop." That’s the message I always end my conference talks with. And stories like this? They’re exactly why. This week, AI customer support at Cursor made up a company policy out of thin air. A hallucination. The chatbot confidently told users that logging in from multiple devices wasn’t allowed anymore. ↳ Except... that policy didn’t exist. ↳ It just invented it. ↳ People got frustrated. ↳ They cancelled subscriptions. ↳ Trust? Gone. The AI wasn’t labeled as AI. It had a human name - "Sam". Many assumed it was a real person. No transparency. No fallback. And no human stepping in before the damage was done. This isn't just about AI messing up. It's about responsibility, trust, and the cost of skipping human oversight in critical touchpoints like support. We saw something similar with Air Canada’s chatbot last year. Different company. Same issue. AI confidently making things up - and companies paying the price. So if you're deploying AI in customer-facing roles, especially without labeling it clearly or having a human check the loop... be careful. Because once trust is broken, it's hard to build it back. And no AI can fix that for you. What’s your take on this? Do we need new rules - or just better practices? #AI #CustomerExperience #Trust #HumanInTheLoop #AIFails #Leadership #Innovation
