What happens when two leaders remember the same research entirely differently?

I watched a product manager and an engineering lead argue about user research results a while back.

💪 Both were confident
🫣 Both were wrong

Let me tell you a story…

We were a quarter into what was about six months' worth of work. During a refinement session, our PM insisted we had missed a feature that shop inspectors needed. An engineer thought we had missed another. They went back and forth for a few minutes until one of my designers interjected…

They were both wrong!

The research showed that shop inspectors struggled to switch between parts on large-quantity jobs. The original shop visit interviews barely mentioned the "critical" requirements they were arguing about.

It turns out both had had several conversations with people outside the shop who thought they knew what was happening on the ground. Both were fed incorrect information, which made them believe the original research said something else entirely.

This is the misinformation effect in action. Our memories aren't reliable recordings; they're stories that get unconsciously rewritten by every meeting, discussion, and email thread we participate in.

What ultimately convinced them? Unfortunately, it wasn't my designer, although she was great. It was the report from our research trip and the hours of recordings.

So, the next time you're confident about what was said in a meeting six months ago, ask yourself: Do I remember that correctly, or do I remember the meetings about the meetings?

---

🧠 Misinformation Effect 🧠

We tend to modify our memories of events when exposed to false information after the fact. This cognitive bias shows how our memories are malleable and can be influenced by post-event information, even when that information is incorrect.

---

🎯 Here are some key takeaways:

1️⃣ Understand memories are malleable: Your memory isn't a recording. It's more like a story that gets slightly rewritten each time you tell it. Approach your recollections with healthy skepticism.

2️⃣ Record decisions in their original context: Maintain detailed documentation of the context, constraints, and assumptions when decisions were made. This prevents current knowledge from retroactively influencing how past choices are viewed.

3️⃣ Implement decision journals: Create personal and team journals that capture what was decided and why. Separate original reasoning from post-event rationalizations that might emerge as the work evolves. (A minimal sketch of what an entry might capture follows this post.)

4️⃣ Create memory checkpoints: Verify everyone's understanding of key decisions and events against documented evidence. Identify and correct memory modifications before they impact outcomes.

5️⃣ Correct misinformation as soon as possible: Research shows that addressing misinformation directly, even after the fact, can reduce its long-term impact. The sooner corrections are made, the more effective they are at restoring accurate memories.

♻️ If you found this helpful, share it 🙏!
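To make takeaway 3️⃣ concrete, here is a minimal sketch of what a decision journal entry might capture, written as a Python dataclass. The schema, field names, and example values are hypothetical illustrations, not a prescribed tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """One decision journal entry (hypothetical schema, for illustration)."""
    decided_on: date          # when the decision was made
    decision: str             # what was decided
    reasoning: str            # the original reasoning, captured at the time
    context: str              # constraints and assumptions in effect back then
    evidence: list[str] = field(default_factory=list)   # reports, recordings, notes
    revisions: list[str] = field(default_factory=list)  # dated, post-event corrections

# Example entry (values invented to mirror the story above):
entry = DecisionRecord(
    decided_on=date(2024, 3, 12),
    decision="Prioritize faster part switching for large-quantity jobs",
    reasoning="Shop visit interviews showed inspectors lose time switching parts",
    context="Six-month project, one quarter in; scope fixed at kickoff",
    evidence=["research-trip-report", "shop-visit-recordings"],
)
```

The point isn't the data structure; it's keeping the reasoning captured at decision time separate from corrections added later, so the original context can't be silently rewritten.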
How to Understand the Impact of Misinformation
Explore top LinkedIn content from expert professionals.
Summary
Understanding the impact of misinformation involves recognizing how false or distorted information can influence beliefs, behaviors, and decision-making, often with significant personal and societal consequences. By examining how misinformation spreads and shapes perceptions, individuals and organizations can take proactive steps to combat its effects in areas like health, climate change, business, and public trust.
- Document key decisions: Keep detailed records of discussions, data, and context to prevent misinformation from altering how past events and choices are remembered or understood.
- Promote critical thinking: Equip yourself and others with the tools to evaluate information sources critically to distinguish credible facts from false narratives.
- Correct falsehoods quickly: Address misinformation early to reduce the likelihood of it taking root or spreading further, and focus on clear, fact-based communication.
-
Misinformation: A Major Threat to Patient Safety

At a recent patient safety conference, much was said about reducing medical errors in hospitals, a crucial issue, no doubt. But one of the biggest threats to patient safety didn't get the attention it deserves: misinformation.

From bogus "miracle cures" to myths about chronic disease management, bad health advice isn't just misleading. It can be deadly.

Consider this: 88% of adults searched the internet for health info in the past year. That means there's an urgent need to ensure the information they're finding is trustworthy.

Earlier this year, WebMD and The Harris Poll surveyed over 2,000 U.S. adults about their views on online health content. The bottom line: Consumers want reliability, transparency, and trust in the health content they find. A third of respondents aren't sure if what they're reading is true or just paid promotion, and 30% admit they struggle to separate fact from fiction.

Honestly, are you surprised? A recent study in Otolaryngology–Head and Neck Surgery found that "most non-medical influencer-posted TikTok videos about sinusitis are inaccurate, despite being presented as medical advice."

So why is trust in online health sources so critical?

Patient Decisions: People rely on online advice to make serious health decisions. Trustworthy sites help them avoid decisions rooted in fear or false hope.

Doctor-Patient Relationships: Misinformation weakens the trust between patients and their healthcare providers, making effective treatment harder.

Combatting Myths: Elevating science-based content is the only way to stop the dangerous spread of false information.

Consumers say they want content from real medical professionals: experts with proven credentials who back up their claims with citations.

Now more than ever, we need to be vigilant about the information we share, and the information we trust. Let's commit to promoting credible, transparent sources and keeping patient safety a top priority.

#PatientSafety #Healthcare #Misinformation #DigitalHealth
-
Harsh truth: AI has opened up a Pandora's box of threats.

The most concerning one? The ease with which AI can be used to create and spread misinformation.

Deepfakes (AI-generated content that portrays something false as reality) are becoming increasingly sophisticated & challenging to detect. Take the attached video - a fake video of Morgan Freeman, which looks all too real.

AI poses a huge risk to brands & individuals, as malicious actors could use deepfakes to:

• Create false narratives about a company or its products
• Impersonate executives or employees to damage credibility
• Manipulate public perception through fake social media posts

The implications for PR professionals are enormous. How can we maintain trust and credibility in a world where seeing is no longer believing?

The answer lies in proactive preparation and swift response. Here are some key strategies for navigating the AI misinformation minefield (a simple monitoring sketch follows this post):

🔹 1. Educate your team: Ensure everyone understands the threat of deepfakes and how to spot potential fakes. Regular training is essential.

🔹 2. Monitor vigilantly: Keep a close eye on your brand's online presence. Use AI-powered tools to detect anomalies and potential threats.

🔹 3. Have a crisis plan: Develop a clear protocol for responding to AI-generated misinformation. Speed is critical to contain the spread.

🔹 4. Emphasize transparency: Build trust with your audience by being open and honest. Admit mistakes and correct misinformation promptly.

🔹 5. Invest in verification: Partner with experts who can help authenticate content and separate fact from fiction.

By staying informed, prepared, and proactive, PR professionals can navigate this new landscape and protect their brands' reputations. The key is to embrace AI as a tool while remaining vigilant against its potential misuse. With the right strategies in place, we can harness the power of AI to build stronger, more resilient brands in the face of the misinformation minefield.
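As a down-to-earth illustration of strategy 2, here is a deliberately simple keyword-watchlist sketch in Python (standard library only). It is not one of the AI-powered tools the post refers to, just the most basic version of the idea; the brand name, risk terms, and post format are all hypothetical placeholders.

```python
import re
from collections import Counter
from datetime import datetime

# Hypothetical watchlist: a placeholder brand and terms that should trigger review.
BRAND = re.compile(r"\bacme\b", re.IGNORECASE)
RISK_TERMS = re.compile(r"\b(scam|fake|leak|lawsuit)\b", re.IGNORECASE)

def flag_posts(posts: list[dict]) -> list[dict]:
    """Return posts that mention the brand alongside a risk term, for human review."""
    return [p for p in posts if BRAND.search(p["text"]) and RISK_TERMS.search(p["text"])]

def hourly_volume(posts: list[dict]) -> Counter:
    """Count brand mentions per hour; a sudden spike is a crude anomaly signal."""
    hours = [
        datetime.fromisoformat(p["time"]).strftime("%Y-%m-%d %H:00")
        for p in posts
        if BRAND.search(p["text"])
    ]
    return Counter(hours)

posts = [
    {"time": "2025-01-01T09:05", "text": "Is this Acme video fake? Looks like a scam."},
    {"time": "2025-01-01T09:40", "text": "Loving my new Acme gadget!"},
]
print(flag_posts(posts))     # -> the first post, queued for review
print(hourly_volume(posts))  # -> Counter({'2025-01-01 09:00': 2})
```

Real monitoring tools add ML classifiers and cross-platform ingestion on top of this, but the workflow is the same: watch, flag, and route to a human before responding.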
-
Countering Cognitive Warfare: Why Businesses Must Care

A recent paper, Countering Cognitive Warfare in the Digital Age, by Shane Morris et al., highlights how state-sponsored disinformation campaigns, like those orchestrated by #Russia's GRU on #TikTok, are evolving into sophisticated cognitive warfare operations. These campaigns use #AI-powered bots and advanced language models to spread tailored #disinformation at scale, targeting public trust in democratic institutions and geopolitical stability.

For businesses, this is not just a national security issue but a direct risk to operations and reputation. Disinformation can:

➡️ Undermine Consumer Trust: False narratives can damage brand credibility, influence customer sentiment, and fuel misinformation about products, services, or industries.

➡️ Make social engineering campaigns more effective: The case of the finance worker who paid out $25 million after a deepfake video call impersonating their CFO exemplifies this risk.

➡️ Threaten Workforce Resilience: Employees, like consumers, are susceptible to disinformation, potentially affecting internal culture and decision-making.

The authors advocate for public-private collaboration, including real-time threat monitoring tools and radical transparency. For companies, this means investing in misinformation detection, training teams to spot manipulative content, and participating in industry-wide threat intelligence sharing.

In an era where social platforms double as media powerhouses, protecting the information ecosystem is not just a public good. It's a business imperative.

#Cybersecurity #Disinformation #BusinessResilience #DigitalRisk
https://lnkd.in/eWqtRFkD
-
Climate misinformation is thriving on social media, from distorted "facts" about renewable energy to fear-mongering around clean energy policies.

These tactics often involve disinformation (deliberately false information), which is amplified by influencers, industry-backed groups, and "concerned citizen" front organizations created to look grassroots. When social media users share these posts, even innocently "just asking questions," misinformation morphs into widespread misunderstanding. That misunderstanding, in turn, delays the action we desperately need.

Take the recent devastating Palisades Fire in California. Social (and traditional) media was flooded with takes claiming it was just another wildfire caused by natural cycles or someone being careless with fireworks. But this narrative conveniently sidesteps the role of human-caused climate change in creating the perfect storm for extreme wildfires: unprecedented wind, prolonged droughts, and dried vegetation.

Misinformation thrives in the absence of critical thinking. By sharing credible sources and pushing back against false narratives, we can shift the conversation toward truth and accountability. The fight for climate action depends on it.

#ClimateAction #ClimateCrisis #Misinformation #WildfireCrisis #Disinformation
-
Is misinformation healthcare's problem?

Before you answer, let me tell you about one of my experiences when working on my book, "Dead Wrong."

As part of my research for the book, I spoke with dozens of healthcare executives and leaders in the fight against misinformation. It wasn't long until I realized they had one thing in common: Nearly all of them had faced death threats. The threats all came from people who were so attached to falsehoods that the truth felt like a personal attack. Some of these threats were even severe enough to require security detail.

So, returning to the question at hand, yes. You bet misinformation is healthcare's problem.

Death threats aside, misinformation endangers public health in a multitude of ways. It begins with patients. Here are just a few ways mis- and #disinformation infect patients and undermine healthcare:

⏰ Delayed or avoided care: Erroneous beliefs, such as vaccines causing autism, cause some patients to reject evidence-based treatments. Instead, they may take unproven or even dangerous substances, like herbal cocktails or colloidal silver, sold as "miracle cures" on the U.S.'s $50 billion+ supplement market. (Would you use beef tallow to moisturize your skin, or consume castor oil to improve your eyesight, induce labor, or attack tumors? Not if you saw the evidence. But dubious influencers spread this kind of misinformation online every day.)

💉 Greater risks of preventable diseases: Vaccine hesitancy exposes patients to infection from diseases like polio and measles. When that happens en masse, it increases the prevalence of diseases that shouldn't even exist anymore.

😣 Mental and emotional harm: #Misinformation amplifies fear, confusion, and anxiety, not only making it harder for patients to make informed decisions about their health, but sowing doubt into other areas of their lives. When people buy into misinformation, they may also alienate themselves from family and friends, especially those who challenge their beliefs. That results in social isolation, which is always bad for your health.

🙅 Distrust in healthcare: Perhaps most dangerous of all, misinformation makes patients less likely to trust their doctors and their recommendations, which in turn makes it harder for doctors to care for their patients.

What does all of this lead to? More sadness. More uncertainty. More sickness. And, ultimately, more death.

That's not just #healthcare's problem. It's everyone's problem.
-
Misinformation remains a major threat to US democratic integrity, national security, and public health. However, social media platforms struggle to curtail the spread of this harmful but engaging content.

Across platforms, McLoughlin et al. examined the role of emotions, specifically moral outrage (a mixture of disgust and anger), in the diffusion of misinformation. Compared with trustworthy news sources, posts from misinformation sources evoked more angry reactions and outrage than happy or sad sentiments. Users were motivated to reshare content that evoked outrage, often sharing it without reading it first to discern its accuracy.

Interventions that solely emphasize sharing accurately may fail to curb misinformation because users may share outrageous, inaccurate content to signal their moral positions or loyalty to political groups. (A toy model of this dynamic is sketched below.)
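To see why accuracy-only nudges can underperform, here is a toy Python model, not taken from the McLoughlin et al. paper, in which reshare probability is driven mainly by outrage. The weights and scores are invented purely for illustration.

```python
def reshare_probability(outrage: float, accuracy: float, nudge: float = 0.0) -> float:
    """Toy model: sharing is weighted toward outrage; an accuracy nudge
    shifts a little weight toward accuracy. Coefficients are illustrative only."""
    w_outrage = 0.8 - nudge
    w_accuracy = 0.2 + nudge
    return w_outrage * outrage + w_accuracy * accuracy

# Outrage-bait misinformation vs. a sober, accurate report (scores in [0, 1]).
misinfo = {"outrage": 0.9, "accuracy": 0.1}
trustworthy = {"outrage": 0.2, "accuracy": 0.9}

for nudge in (0.0, 0.1):
    p_mis = reshare_probability(misinfo["outrage"], misinfo["accuracy"], nudge)
    p_good = reshare_probability(trustworthy["outrage"], trustworthy["accuracy"], nudge)
    print(f"nudge={nudge}: misinfo reshare ~{p_mis:.2f}, trustworthy ~{p_good:.2f}")

# nudge=0.0: misinfo reshare ~0.74, trustworthy ~0.34
# nudge=0.1: misinfo reshare ~0.66, trustworthy ~0.41
```

As long as outrage dominates the weighting, the outrageous-but-false post keeps a higher reshare probability than the accurate one, which mirrors the finding that accuracy prompts alone may not be enough.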
-
Meta's New Policy Updates: Implications for Public Health and Combating Misinformation

As someone deeply committed to leveraging technology for better health outcomes, I've deactivated my Instagram account and added it to my social media graveyard (Facebook went down first, then WhatsApp, then Twitter, etc.).

Meta's latest policy updates, and their potential impact on public health both in the U.S. and globally, whether related to content moderation, fact-checking, platform transparency, or algorithmic tweaks, have profound implications for how we address misinformation and disinformation in the digital age.

Misinformation is not just an online issue. It translates into tangible public health risks. From vaccine hesitancy to false treatments, the spread of inaccuracies undermines trust in health systems and endangers lives. We saw this profoundly during the Covid-19 pandemic, when misinformation from the US had disastrous effects here and abroad. No child should ever have to suffer or die from a vaccine-preventable disease like measles, and yet we are seeing a significant resurgence in the United States and abroad.

Meta's policy updates highlight a question I have had for a very long time: Are we doing enough to hold social media companies to account to protect the health of our communities (and by communities I mean ourselves, our children, our families, all people using social media in the United States and globally) by promoting facts over falsehoods?

In addition to not relying on social media as a primary source of health information, here are three ways we, as individuals and civil society (because, for the foreseeable future, some tech companies and some policymakers, though not all, are not going to do it), can foster a more fact-based approach to social media and public health:

1) Promote Digital Health Literacy: Empowering individuals to critically assess online content is critical. This means scaling initiatives to educate people about spotting misinformation, discerning credible sources, and making informed health decisions.

2) Strengthen Collaboration Across Sectors: Public health experts, tech leaders, and civil society must work together (where possible) to create robust systems for identifying and mitigating misinformation.

3) Advocate for Content and Algorithmic Transparency: Platforms must go beyond broad policy statements and offer clear insights into how algorithms amplify or suppress content. Transparency can enable researchers and public health advocates to better promote fact-based content and counter harmful narratives.

We stand at a crossroads where technology can either advance health equity or exacerbate disparities. Let's choose the former. I urge all of us, public health professionals, policymakers, and tech innovators, to act with urgency.

#PublicHealth #Misinformation #DigitalHealth #SocialMediaResponsibility #CollaborationForImpact
-
When public opinion is shaped by clickbait and misinformation instead of scientific data and factual information, it harms all of us. Not only does it prevent scientific experts from being able to do our jobs, it has far-reaching impacts on all of society.

If a community falls prey to false claims about a chemical that is important in many different fields and, because of those myths, pressures its political representatives to ban it, then every industry needs to find an alternative to that substance. The new products might be inferior, less studied, less safe, less ecologically friendly, or more expensive.

That's why correcting misinformation ON ANY level is critical. Whether it's about a food additive, or a medication, or tampons, or cosmetics. Because unfounded claims can change policies, even when they aren't based on scientific evidence. And those policy changes aren't automatically better.

That's why expertise matters. That's why reporters and media outlets need to use RELEVANT experts as sources. That's why influencers can't be viewed as experts, even if they have millions of followers.

We can't allow unqualified opinions to supersede quality research, and we all have a responsibility to help improve science literacy because it improves our health and society.

CC: Jen Novakovich Kenton Hipsher Erica Douglas Independent Beauty Association New York Society of Cosmetic Chemists

#scicomm #evidencebased #womeninSTEM #publichealth #healthpolicy

Watch ➡️ https://lnkd.in/eTkGsgnJ
Legitimizing misinformation harms all of us!
-
Recently, I've been working on better understanding the information ecosystem around sexual and reproductive health, and how marketing tactics and misinformation are reshaping policies, suppressing demand for SRH products, and curtailing human rights.

Here's some hard data from KFF's latest Health Misinformation and Trust Monitor:

• 𝗘𝗺𝗲𝗿𝗴𝗲𝗻𝗰𝘆 𝗰𝗼𝗻𝘁𝗿𝗮𝗰𝗲𝗽𝘁𝗶𝗼𝗻 𝗺𝗶𝘀𝗰𝗼𝗻𝗰𝗲𝗽𝘁𝗶𝗼𝗻𝘀: About 75% of U.S. adults mistakenly believe that EC can end a pregnancy, despite FDA clarifications to the contrary. These misconceptions have fueled legal and legislative challenges, with some states even interpreting abortion bans as restrictions on contraceptive access.

• 𝗠𝗶𝘀𝗶𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻 𝗼𝗻 𝘀𝗼𝗰𝗶𝗮𝗹 𝗺𝗲𝗱𝗶𝗮: Platforms like TikTok and Instagram amplify unsubstantiated claims about hormonal contraceptives, often promoting "natural" methods as safer. While hormonal methods are highly effective, such misinformation fosters distrust and influences health decisions.

• 𝗣𝗿𝗼𝘃𝗶𝗱𝗲𝗿 𝗰𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲𝘀: Many healthcare providers also struggle with misinformation, leading to inconsistent patient counseling and care, especially in restrictive policy environments.

For those working in or advocating for SRH, understanding the impact of the information ecosystem is essential.

Check out the full report here: https://lnkd.in/dyKtUpKr

Åsa Nihlén Elisabeth Wilhelm David Scales Claire Wardle