I've reviewed close to 2,000 code review requests in my career. At this point, it's as natural to me as having a cup of coffee. Still, on the way from senior engineer to engineering manager, I've learned a lot. If I had to learn to review code all over again, this is the checklist I would follow (drawn from my experience):

1. Ask clarifying questions:
- What are the exact constraints or edge cases I should consider?
- Are there any specific inputs or outputs to watch for?
- What assumptions can I make about the data?
- Should I optimize for time or space complexity?

2. Start simple:
- What is the most straightforward way to approach this?
- Can I explain my initial idea in one sentence?
- Is this solution valid for the most common cases?
- What would I improve after getting a basic version working?

3. Think out loud:
- Why am I taking this approach over another?
- What trade-offs am I considering as I proceed?
- Does my reasoning make sense to someone unfamiliar with the problem?
- Am I explaining my thought process clearly and concisely?

4. Break the problem into smaller parts:
- Can I split the problem into logical steps?
- What sub-problems need solving first?
- Are any of these steps reusable for other parts of the solution?
- How can I test each step independently?

5. Use test cases (a small test sketch follows this checklist):
- What edge cases should I test?
- Is there a test case that might break my solution?
- Have I checked against the sample inputs provided?
- Can I write a test to validate the most complex scenario?

6. Handle mistakes gracefully:
- What's the root cause of this mistake?
- How can I fix it without disrupting the rest of my code?
- Can I explain what went wrong to the interviewer?
- Did I learn something I can apply to the rest of the problem?

7. Stick to what you know:
- Which language am I most confident using?
- What's the fastest way I can implement the solution with my current skills?
- Are there any features of this language that simplify the problem?
- Can I use familiar libraries or tools to save time?

8. Write clean, readable code:
- Is my code easy to read and understand?
- Did I name variables and functions meaningfully?
- Does the structure reflect the logic of the solution?
- Am I following best practices for indentation and formatting?

9. Ask for hints when needed:
- What part of the problem am I struggling to understand?
- Can the interviewer provide clarification or a nudge?
- Am I overthinking this?
- Does the interviewer expect a specific approach?

10. Stay calm under pressure:
- What's the first logical step I can take to move forward?
- Have I taken a moment to reset my thoughts?
- Am I focusing on the problem, not the time ticking away?
- How can I reframe the problem to make it simpler?
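To make point 5 concrete, here is a minimal pytest sketch of edge-case testing. The function `parse_price` and its behavior are hypothetical, used only to show how the common case, the boundary cases, and a deliberately invalid input can each get their own test.

```python
# Minimal pytest sketch for point 5 ("Use test cases").
# `parse_price` is a hypothetical helper, invented only for illustration:
# it is assumed to turn a string like "$1,299.99" into a float.
import pytest


def parse_price(text: str) -> float:
    """Hypothetical helper: parse a price string into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)


def test_common_case():
    assert parse_price("$1,299.99") == 1299.99


def test_edge_cases():
    # Boundary values and unusual but valid inputs.
    assert parse_price("0") == 0.0
    assert parse_price("  $5  ") == 5.0


def test_invalid_input_fails_loudly():
    # A case that might break the solution: make the expected failure explicit.
    with pytest.raises(ValueError):
        parse_price("")
```

The same pattern scales: one test per behavior named in the requirements, plus one for each input you suspect might break the solution.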
How To Conduct Code Reviews Effectively
Explore top LinkedIn content from expert professionals.
Summary
Conducting code reviews well involves examining and providing feedback on a peer's code to ensure quality, functionality, and maintainability. It's a collaborative process that improves code robustness and fosters team communication.
- Focus on the bigger picture: Prioritize reviewing design documents, functional correctness, and test results before diving into detailed code implementation.
- Automate routine checks: Use tools like linters and automated tests to catch syntax errors and coding standards issues, freeing up time for meaningful discussions.
- Encourage constructive dialogue: Provide clear, actionable feedback and create an open space for questions and collaboration to help developers improve.
Are you wasting time focused on the wrong things in code review via PR? Many teams are.

Here's the thing about human code reviews... yes, you can sometimes find a bug. But who cares if a human review finds a bug? That's just proof of a deeper failure in your development process. Said another way: if your reviews of PRs routinely find bugs, your failure to test is profound, and the logic unsound. Even if the code review finds a bug from time to time... how will you be assured of not re-introducing it, or of not adding new ones? More human reviews?

A vast, overwhelming share of code-review nits are removed entirely by good linting/formatting/scanning, and these tools should be used aggressively. If it can't be automated with a linter/formatter/scanner, it doesn't matter as much, because at that point it's a non-deterministic opinion. Seriously, you want to enforce snake_case_vars of less than 30 characters? Don't argue about it, put it in a scanner that fails a precommit (a sketch of such a check follows this post). Or don't, because it's not really that important. But please, don't make a dev go back and rename things based on an arbitrary discussion about 'appropriate variable length'.

When have I seen reviews improve quality? When we changed what large reviews really focused on, and what we reviewed.

First - a review starts with a review of the design docs/changes. If you can't write about what you've done at the appropriate level, the code can't possibly be done, and the review is over: failed.

Second - we review the integration test outputs. That is, this code has been integrated, built, and deployed to at least two environments, and these are the integration tests that run, along with their output and performance. The tests should give clear coverage of the behaviors and modes described in the design. This is routinely where the review really spends its time, and where we'll talk about other failure/load modes to test. Depending on the scope, sometimes we'll review the unit test suite and discuss details and enhancements.

Lastly - we actually look at the implementation code and details: show us what you're proud of, and the parts you think are a hack. There's always good discussion here, but if the docs are good and the tests are solid, we're likely just logging TODO items, because the review has already passed. Only if we find something truly heinous would we send it back for rework at this point.

This changes expectations in good ways. I don't want devs worried about nit-picky reviews or paranoid about formatting... automate it, or pick that stuff up in pairing/mob sessions and mini-reviews that people seek out for 1-on-1 feedback. I suppose this creates "layers" of reviews, and that seems like the right idea too. It also depends on a number of other processes (feature flags, version control, CI/CD, architecture governance, and release management) to be effective. 🤷‍♂️ #meatbasedengineer
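To show what "put it in a scanner that fails a precommit" can look like in practice, here is a minimal Python sketch of the snake_case-under-30-characters rule from the post. The script name, the exact rule, and how it gets wired into your pre-commit tooling are assumptions for illustration; in practice an existing linter (for example pylint's naming checks) already covers most of this.

```python
#!/usr/bin/env python3
"""Sketch of a naming scanner that could fail a pre-commit hook or CI step.

It flags assigned variable names in the given Python files that are not
snake_case or that are 30+ characters long, mirroring the example rule
from the post. A real rule would probably also exempt ALL_CAPS constants.
"""
import ast
import re
import sys

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")
MAX_LENGTH = 30


def check_file(path: str) -> list[str]:
    """Return a list of naming problems found in one Python file."""
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read(), filename=path)
    problems = []
    for node in ast.walk(tree):
        # Only flag plain variable assignments; functions and classes
        # have their own conventions and are out of scope for this sketch.
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            name = node.id
            if not SNAKE_CASE.match(name) or len(name) >= MAX_LENGTH:
                problems.append(
                    f"{path}:{node.lineno}: variable '{name}' is not "
                    "snake_case or is 30+ characters"
                )
    return problems


if __name__ == "__main__":
    # Usage: python check_names.py file1.py file2.py ...
    all_problems = [p for path in sys.argv[1:] for p in check_file(path)]
    for problem in all_problems:
        print(problem)
    sys.exit(1 if all_problems else 0)  # non-zero exit fails the hook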
9 code review practices your team should follow to go from Good → Great projects. (These helped my team deliver 100s of projects without wasting hours fixing bugs.)

🟢 As a team:
➡️ Establish goals and expectations beforehand, for example:
+ functional correctness
+ algorithmic efficiency
+ improving code quality
+ ensuring code standards are met
➡️ Use code review tools (GitHub PRs, GitLab MRs, Atlassian Crucible) to:
+ easily track changes (a small sketch of tracking open PRs follows this post)
+ streamline the review process
➡️ Automate code checks. It will help you to:
+ find syntax errors
+ avoid common issues
+ reduce code style violations and potential bugs

🟡 As a reviewer:
➡️ Start early, review often. Do this to:
+ catch issues early
+ prevent technical debt
+ ensure that code meets project requirements
➡️ Keep reviews small and focused. You get:
+ an easier process
+ shorter turnaround time
+ better collaboration in the team
➡️ Balance speed and thoroughness:
+ do comprehensive reviews
+ but avoid excessive nitpicking
➡️ Give constructive feedback. Always be:
+ specific, actionable, and respectful
+ focused on improvement rather than criticism
+ open to questions and clarifications

🟠 As a reviewee:
➡️ Follow up on feedback:
+ don't take the comments personally
+ actively work on feedback after the session
+ make necessary revisions and follow up to confirm
➡️ Follow coding standards, focusing on:
+ readability
+ maintainability

Remember: mutual respect during code reviews is crucial for a great team culture!

P.S.: If you're a Sr. Software Engineer looking to become a Tech Lead or manager, I'm doing a webinar soon. Stay tuned :)
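As a rough illustration of "use code review tools to easily track changes" and "start early, review often", here is a small Python sketch that lists how long each open pull request has been waiting, using the GitHub REST API. The repository name and the GITHUB_TOKEN environment variable are placeholders, and this is only one possible way to keep review turnaround visible.

```python
"""Sketch: list open pull requests and their age via the GitHub REST API."""
import os
from datetime import datetime, timezone

import requests

OWNER = "your-org"   # placeholder
REPO = "your-repo"   # placeholder
TOKEN = os.environ.get("GITHUB_TOKEN", "")


def open_pull_requests():
    """Fetch open PRs for the repository via GET /repos/{owner}/{repo}/pulls."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls"
    headers = {"Accept": "application/vnd.github+json"}
    if TOKEN:
        headers["Authorization"] = f"Bearer {TOKEN}"
    response = requests.get(url, headers=headers, params={"state": "open"}, timeout=10)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    for pr in open_pull_requests():
        # created_at is ISO 8601 with a trailing "Z", e.g. "2024-01-26T19:01:12Z".
        created = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
        age_days = (now - created).days
        print(f"#{pr['number']} {pr['title']!r} has been open for {age_days} day(s)")
```

Run daily (or in a team channel bot), a report like this makes long-lived PRs visible early, which is what keeps reviews small and turnaround short.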