🔎 Here are common mistakes to avoid:

1️⃣ Using Relative Change to Exaggerate Small Differences
Explanation: Relative changes can misleadingly make small differences look big. Present absolute differences (see the sketch after this post).
❌ "Incidence increased by 100%"
✅ "Cases increased from 2 to 4 in a population of one million"

2️⃣ Reporting Raw Counts Instead of Percentages
Explanation: Raw counts can exaggerate small effects in large populations, while percentages are standardized and account for population size.
❌ "There were 25 million smokers"
✅ "Smoking prevalence was 2%"

3️⃣ Using Odds Ratios Instead of Prevalence Ratios for Common Outcomes
Explanation: When an outcome is common, odds ratios can inflate perceived differences. Prevalence ratios are more conservative (see the sketch below).
❌ "Smoking odds were 123.45 times higher among X than Y" (odds ratios)
✅ "Smoking likelihood was 10.1 times higher among X than Y" (prevalence ratios)

4️⃣ Misleading Significance with Terms Like "Almost Significant"
Explanation: Such terms can imply importance where statistical thresholds haven’t been met.
❌ "Results were almost significant (p=0.06)"
✅ "Results were not statistically significant (p=0.06)"

5️⃣ Reporting Unstable Estimates with Large Standard Errors
Explanation: It is better to omit imprecise estimates (see the sketch below).
❌ Results presented despite very large standard errors.
✅ Omit results with relative standard error >30% (RSE = standard error / proportion × 100)

6️⃣ Truncating Axes in Graphs to Emphasize Findings
Explanation: Truncated axes can make small differences appear larger than they are (see the sketch below).
❌ Graph only shows data from 75%-100%
✅ Display the full axis range, such as 0%-100%, to ensure a fair comparison

7️⃣ Overstating Implications with Terms Like "Proves" or "Needs"
Explanation: These terms imply a certainty and necessity that findings often don’t support.
❌ "Our results prove the need to implement XYZ"
✅ "Our findings suggest that implementing XYZ may be beneficial"

8️⃣ Using Data That Isn’t Fit for Purpose but Adding "Interpret with Caution"
Explanation: This undermines the credibility of findings. Only include data fit for use and fit for purpose.
❌ "The results should be interpreted with caution due to data limitations."
✅ If data are not fit for use/purpose, do not use them.

9️⃣ Using Causal Language for Observational Studies
Explanation: Terms like "cause", "effect", "attributable", and "impact" imply causation that observational studies can’t establish. "Association" is more appropriate.
❌ "Our cross-sectional results showed the effect of X on Y"
✅ "Our cross-sectional results showed an association between X and Y"

🔟 Testing Repeatedly to Find Significance (P-hacking)
Explanation: This fishing expedition inflates the risk of type 1 errors (false-positive results); see the sketch below.
❌ Testing various subgroups until significant p-values appear.
✅ Predefine hypotheses and analyses and report exactly what was found.

Ethical communication preserves trust. Let's commit to clear, honest reporting. 🌱 #WriteRight
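For points 1️⃣ and 2️⃣, here is a minimal Python sketch of the arithmetic. The 2-to-4 cases and one-million population come from the post's own example; the variable names and the per-100,000 framing are mine:

```python
# Same hypothetical data, reported three ways.
cases_before, cases_after = 2, 4
population = 1_000_000

relative_change = (cases_after - cases_before) / cases_before * 100  # 100%
absolute_change = cases_after - cases_before                         # 2 cases
rate_after = cases_after / population * 100_000                      # 0.4 per 100,000

print(f"Relative change: {relative_change:.0f}%")   # sounds dramatic
print(f"Absolute change: {absolute_change} cases")  # in a million people
print(f"Incidence: {rate_after:.1f} per 100,000")   # shows how rare the outcome is
```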
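Point 3️⃣ is easy to check against a 2×2 table. This sketch uses made-up counts for a deliberately common outcome (80% vs 40% prevalence); the specific numbers are illustrative, not from the post:

```python
# Hypothetical 2x2 table for a common outcome:
#              outcome   no outcome
# exposed         80          20
# unexposed       40          60
a, b = 80, 20  # exposed: with / without outcome
c, d = 40, 60  # unexposed: with / without outcome

odds_ratio = (a / b) / (c / d)                    # (80/20) / (40/60) = 6.0
prevalence_ratio = (a / (a + b)) / (c / (c + d))  # 0.80 / 0.40 = 2.0

print(f"Odds ratio:       {odds_ratio:.1f}")       # 6.0 -- inflates the contrast
print(f"Prevalence ratio: {prevalence_ratio:.1f}") # 2.0 -- "twice as likely"
```

The exposed group is twice as likely to have the outcome, yet the odds ratio reads as six-fold; the gap widens the more common the outcome becomes.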
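Point 5️⃣ gives the RSE formula directly. A small sketch, assuming the estimate is a simple proportion with the usual SE = sqrt(p(1-p)/n); the sample sizes are hypothetical:

```python
import math

def relative_standard_error(p: float, n: int) -> float:
    """RSE in percent: (standard error / proportion) * 100."""
    se = math.sqrt(p * (1 - p) / n)  # SE of a simple proportion
    return se / p * 100

# Same 2% prevalence, very different precision depending on sample size.
for p, n in [(0.02, 50), (0.02, 5_000)]:
    rse = relative_standard_error(p, n)
    verdict = "omit (RSE > 30%)" if rse > 30 else "report"
    print(f"p = {p:.0%}, n = {n}: RSE = {rse:.0f}% -> {verdict}")
```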
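Point 6️⃣, sketched with matplotlib: the same two bars on a truncated axis and on the full 0%-100% range. Group names and values are invented for illustration:

```python
import matplotlib.pyplot as plt

groups = ["Group A", "Group B"]  # hypothetical groups
values = [92.0, 95.0]            # hypothetical percentages

fig, (ax_truncated, ax_full) = plt.subplots(1, 2, figsize=(8, 3))

ax_truncated.bar(groups, values)
ax_truncated.set_ylim(90, 100)  # truncated axis: a 3-point gap looks huge
ax_truncated.set_title("Truncated axis (misleading)")
ax_truncated.set_ylabel("Prevalence (%)")

ax_full.bar(groups, values)
ax_full.set_ylim(0, 100)        # full range: the gap reads at its true size
ax_full.set_title("Full 0-100% axis (fair)")

plt.tight_layout()
plt.show()
```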
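Point 🔟 in a few lines of arithmetic: assuming independent tests at alpha = 0.05 and no real effects, the chance of at least one "significant" result grows quickly with the number of subgroups tested:

```python
# Family-wise false-positive risk under the null, assuming independent tests.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:>2} tests: P(at least one false positive) = {p_any:.0%}")
# 1 test: 5%, 5 tests: 23%, 10 tests: 40%, 20 tests: 64%
```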
Avoiding overstatement in climate research
Summary
Avoiding overstatement in climate research means presenting scientific findings in a balanced and honest way, without exaggerating results or making claims that aren’t fully supported by evidence. This approach helps build trust with the public and ensures that scientific communication remains clear, accurate, and credible.
- Use precise language: Choose words that accurately reflect what the research shows, such as “suggests” or “indicates,” instead of claiming a study “proves” something.
- Show context: Share both absolute numbers and percentages, and present the full range of data so small differences aren’t made to seem larger or more important than they are.
- Normalize uncertainty: Explain what’s known and what’s still being studied, so people understand that scientific findings can evolve over time.
Overstating research findings is one of the quickest ways to destroy public trust... Here is how 𝗢𝘃𝗲𝗿𝘀𝘁𝗮𝘁𝗶𝗻𝗴 𝗥𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗞𝗶𝗹𝗹𝘀 𝗣𝘂𝗯𝗹𝗶𝗰 𝗧𝗿𝘂𝘀𝘁

One exaggerated headline, one oversimplified study, and public trust in science suddenly erodes. Overhyping research leads to misinformation, unrealistic expectations, and skepticism when results don’t hold up.

📌 The "Breakthrough" Trap
→ Science is gradual, but people expect instant solutions.
↳ Calling every study a "game-changer" sets up false expectations.
↳ When research gets debunked, trust plummets (e.g., red wine is as good as exercise?).
How to fix this: Use measured language: "Initial evidence suggests..." instead of "Revolutionary discovery!"

📌 Social Media Fuels Misinformation
→ Research spreads fast, but misinformation spreads faster.
↳ Viral posts often take findings out of context.
↳ Echo chambers amplify misleading or incomplete claims.
↳ People trust repetition, even if the info is false.
How to fix this: Scientists should actively engage in public discussions and counter misinformation.

📌 Science "Changes" & the Public Feels Betrayed
→ When studies evolve, people feel misled if initial messaging lacked transparency.
↳ The mask debate during COVID-19 led to confusion because guidance kept shifting.
↳ The public expects certainty, but science is about updating knowledge.
↳ Without clear communication, corrections look like contradictions.
How to fix this: Normalize uncertainty: "Here’s what we know so far, and here’s what we’re still learning."

📌 The Balance Between Engagement & Accuracy
→ Scientists and the media must work together to avoid overhyping research.
↳ Too much data? People tune out.
↳ Too little nuance? People get misled.
↳ Fear-based messaging? Causes panic or apathy.

**********

When science gets overhyped, credibility suffers. The more we focus on accuracy over attention, the stronger public trust becomes.

💬 What’s an example of overhyped science you’ve seen in the media?

#ScienceCommunication #Misinformation #PublicTrust #ResearchEthics
Are you exaggerating?

As someone who has spent years immersed in science, here is a valuable tip for my fellow science writers and scientists: don't overstate your case.

When writing your story or scientific article, approach the work with a healthy dose of skepticism. Only a few papers genuinely pave the way for groundbreaking advances or have far-reaching implications. The question we should ask ourselves is: are we exploring this kind of research in our narratives?

We all want to engage our readers with captivating narratives, but it's equally important to maintain accuracy and integrity in #ScientificCommunication. And how can we achieve this? By sticking to what the research actually supports and avoiding broad claims that go beyond the evidence.

-----
👋 Hey, I'm Morgana Moretti, PhD; I help #LifeSciences companies produce accurate, well-crafted scientific content. Follow me for tips on #MedicalWriting, or contact me to discuss your project.