To all young researchers! Over the years, I’ve mentored countless aspiring researchers, and I keep seeing the same common mistakes in manuscripts. The good news? A little attention to detail can significantly improve your manuscript's quality and increase its chances of acceptance! Here are five mistakes you should avoid at all costs:

1️⃣ Poorly Identified Research Gap
Your manuscript should clearly state what gap your study is addressing. If readers (and reviewers!) cannot determine the significance of your research, why would they care to read further? I’ve seen brilliant data and well-analyzed results get rejected simply because the research gap wasn’t explicitly stated. Remember: reviewers won’t ask for clarification; they’ll just reject it. Clearly define what’s new in your study!

2️⃣ Inconsistent Use of Abbreviations
A simple but frequent issue: introduce an abbreviation the first time you mention a term and stick to it throughout the manuscript. Don’t switch back and forth between the full term and the abbreviation; it disrupts readability and can confuse the reader.

3️⃣ Inconsistent Writing Style
"A study has demonstrated…" vs. "Akbar et al. demonstrated…" Pick one style and stay consistent throughout the manuscript. Shifting between writing styles makes the manuscript look unpolished.

4️⃣ Copy-Pasting Figures from Published Papers
You can reuse published figures, but doing so requires formal permission from the author or publisher (or both). This is a tedious process, so unless absolutely necessary, I recommend creating original figures using figure-design tools. I’ve listed some great figure-creation resources in my featured posts; check them out!

5️⃣ Insufficient References/Citations
Relying on just one manuscript to support an entire paragraph, especially when multiple studies exist on the topic, is a red flag. Cite multiple sources to strengthen your argument and demonstrate that you’ve conducted a thorough literature review. A good rule of thumb: don’t take more than 2-3 key facts from a single paper unless no other sources exist.

#ResearchWriting #AcademicPublishing #ManuscriptTips #PubMed #Mentorship
Common Mistakes in Science Peer Review Submissions
Summary
Submitting a research paper for peer review is a critical step in scientific publishing, but common mistakes can significantly lower the chances of acceptance. Avoid these pitfalls to ensure clear, credible, and impactful submissions.
- Define your research gap: Clearly articulate what problem your study addresses and why it matters to help reviewers see the significance of your work.
- Be consistent in style: Maintain uniformity in abbreviations, writing tone, and formatting throughout your manuscript to improve readability and professionalism.
- Follow submission guidelines: Adhere to journal-specific requirements for formatting, word limits, and ethical standards to avoid automatic rejections.
🔎 Here are common mistakes to avoid:

1️⃣ Using Relative Change to Exaggerate Small Differences
Relative changes can make small differences look misleadingly big. Present absolute differences.
❌ "Incidence increased by 100%"
✅ "Cases increased from 2 to 4 in a population of one million"

2️⃣ Reporting Raw Counts Instead of Percentages
Raw counts can exaggerate small effects in large populations, while percentages are standardized and account for population size.
❌ "There were 25 million smokers"
✅ "Smoking prevalence was 2%"

3️⃣ Using Odds Ratios Instead of Prevalence Ratios for Common Outcomes
When an outcome is common, odds ratios inflate perceived differences; prevalence ratios are more conservative (a short numeric sketch after this post shows the inflation).
❌ "Smoking odds were 123.45 times higher among X than Y" (odds ratios)
✅ "Smoking likelihood was 10.1 times higher among X than Y" (prevalence ratios)

4️⃣ Misleading Significance with Terms Like "Almost Significant"
Such terms imply importance where the statistical threshold has not been met.
❌ "Results were almost significant (p = 0.06)"
✅ "Results were not statistically significant (p = 0.06)"

5️⃣ Reporting Unstable Estimates with Large Standard Errors
It is better to omit imprecise estimates (see the RSE check after this post).
❌ Results presented despite very large standard errors.
✅ Omit results with a relative standard error above 30% (RSE = standard error / estimate × 100).

6️⃣ Truncating Axes in Graphs to Emphasize Findings
Truncated axes can make small differences appear larger than they are.
❌ The graph only shows data from 75% to 100%.
✅ Display the full axis range, such as 0% to 100%, to ensure a fair comparison.

7️⃣ Overstating Implications with Terms Like "Proves" or "Needs"
These words imply a certainty and necessity that findings often don’t support.
❌ "Our results prove the need to implement XYZ"
✅ "Our findings suggest that implementing XYZ may be beneficial"

8️⃣ Using Data That Isn’t Fit for Purpose but Adding "Interpret with Caution"
This undermines the credibility of findings. Only include data that are fit for use and fit for purpose.
❌ "The results should be interpreted with caution due to data limitations."
✅ If data are not fit for use or purpose, do not use them.

9️⃣ Using Causal Language for Observational Studies
Terms like "cause", "effect", "attributable", and "impact" imply causation that observational studies cannot establish; "association" is more appropriate.
❌ "Our cross-sectional results showed the effect of X on Y"
✅ "Our cross-sectional results showed an association between X and Y"

🔟 Testing Repeatedly to Find Significance (P-hacking)
Running tests until something looks significant is a fishing expedition that inflates the risk of type 1 errors (false positive results); a small simulation after this post shows how quickly that risk grows.
❌ Testing various subgroups until significant p-values appear.
✅ Predefine hypotheses and analyses and report exactly what was found.

Ethical communication preserves trust. Let's commit to clear, honest reporting. 🌱

#WriteRight
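To see why odds ratios inflate differences for common outcomes (item 3️⃣ above), here is a minimal Python sketch. The prevalences are hypothetical, chosen only to illustrate the arithmetic, not taken from any study.

```python
# Hypothetical prevalences only, not data from any real study.
def odds(p):
    """Convert a prevalence (proportion) to odds."""
    return p / (1 - p)

# A common outcome: 50% prevalence in group X vs 10% in group Y
p_x, p_y = 0.50, 0.10
prevalence_ratio = p_x / p_y          # 5.0 -> "5 times as likely"
odds_ratio = odds(p_x) / odds(p_y)    # 9.0 -> looks nearly twice as extreme

print(f"Common outcome  PR: {prevalence_ratio:.1f}  OR: {odds_ratio:.1f}")

# A rare outcome: 2% vs 1%. Here the two measures nearly agree, which is
# why the inflation only matters when the outcome is common.
p_x, p_y = 0.02, 0.01
print(f"Rare outcome    PR: {p_x / p_y:.2f}  OR: {odds(p_x) / odds(p_y):.2f}")
```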
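For item 5️⃣, a small sketch of the relative standard error check. The estimate and standard-error pairs are hypothetical; the 30% cutoff follows the rule quoted in the post.

```python
# Hypothetical (estimate, standard error) pairs, e.g. survey prevalence estimates.
def relative_standard_error(estimate, standard_error):
    """RSE as a percentage: (standard error / estimate) * 100."""
    return standard_error / estimate * 100

estimates = [
    ("Group A", 0.40, 0.02),   # RSE = 5%  -> stable enough to report
    ("Group B", 0.05, 0.03),   # RSE = 60% -> unstable, omit or suppress
]

for name, est, se in estimates:
    rse = relative_standard_error(est, se)
    decision = "report" if rse <= 30 else "omit (RSE above 30%)"
    print(f"{name}: estimate = {est:.2f}, RSE = {rse:.0f}% -> {decision}")
```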
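And for item 🔟, an illustrative simulation (hypothetical subgroups, no real data, assuming NumPy and SciPy are available): when 20 independent subgroups are tested at α = 0.05 and no true effect exists anywhere, at least one "significant" result turns up in roughly 1 − 0.95²⁰ ≈ 64% of runs.

```python
# Illustrative simulation: both groups are always drawn from the same
# distribution, so every "significant" result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subgroups, n_per_group, n_simulations = 20, 50, 1000
runs_with_false_positive = 0

for _ in range(n_simulations):
    for _ in range(n_subgroups):
        a = rng.normal(loc=0, scale=1, size=n_per_group)
        b = rng.normal(loc=0, scale=1, size=n_per_group)
        _, p_value = stats.ttest_ind(a, b)
        if p_value < 0.05:
            runs_with_false_positive += 1
            break  # the "fishing" stops as soon as something looks significant

print(f"Runs with at least one 'significant' subgroup: "
      f"{runs_with_false_positive / n_simulations:.0%}")  # roughly 64%
```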
I have reviewed countless research papers, and the reasons for rejections are often the same. Don’t make these 10 mistakes!

1️⃣ Submitting to the Wrong Journal → If your research doesn’t align with the journal’s scope, it’s an automatic rejection.

2️⃣ Lack of Novelty → Editors look for fresh insights. If your study doesn’t add value, it won’t make the cut.

3️⃣ Flawed Methods → Weak or poorly justified methodology raises major concerns for reviewers.

4️⃣ Poorly Written Abstract → Your abstract is the first impression; if it’s unclear or unfocused, your paper may not even be read.

5️⃣ Outdated or Incomplete Literature Review → Missing key references or failing to position your work within existing research weakens your credibility.

6️⃣ Data Analysis Errors → Inaccurate or inappropriate statistical methods can undermine your entire study.

7️⃣ Overstating Findings → Stay grounded in your data; exaggerated claims will be scrutinized.

8️⃣ Ignoring Submission Guidelines → Formatting, word limits, and citation styles matter. Failure to follow instructions signals carelessness.

9️⃣ Ethical Issues → Undisclosed conflicts of interest, plagiarism, or lack of ethical approval are deal-breakers.

🔟 Not Addressing Reviewer Comments → Revisions are part of the process. Dismissing feedback can cost you publication.

What’s been your biggest challenge in getting published?

#AcademicPublishing #ResearchTips #PhDLife