Common Code Review Pitfalls and How to Manage Them

Photo by Jason Chen on Unsplash

Type the phrase “Are code reviews …” into Google and autocomplete will reveal the ambivalence–or outright disregard–many people have for this process. 

“Are code reviews worth it?” is the top query. 

“Are code reviews effective?” is another popular question. 

“Why are code reviews important?” 

You get the idea.

This disregard isn’t really about code reviews. It’s about the parts of code reviews people want to avoid. Specifically, code reviews take time, useful feedback is hard to get, criticism is awkward, and people often avoid complex tasks even when they’re important. These are legitimate problems that need to be addressed, but they shouldn’t call into question something as essential as code reviews.

At Revelry, we’ve been honing our code review process for years and we’ve confronted the same issues you have. This piece outlines common code review pitfalls engineering managers face and our guidelines for overcoming them. 

Code reviews take too long

As a manager, you’re measured on your team’s consistency, productivity, and speed. Anything that gets in the way of shipping code is a problem.

Code reviews, if left unmanaged, can be a time suck and an obstacle to releasing code. Research studies have found that the median time to code approval runs from a few hours to a full day, depending on the organization. It’s easy to see why. You’ve got distributed teams working across time zones. You’ve got communication lag. You’ve got different programming languages and multiple cycles of review.

Fortunately, there are ways to crack these time issues:

  • Make sure code is ready for review before making a git pull request. All the obvious problems should be handled by the author before the reviewer sees it. Static code analysis tools can help here because they identify the overt issues without expending reviewer time; one way to automate that check is sketched after this list.
  • Distributed teams can reduce the lag between code review messages by using notification tools, such as Pull Reminders. For complex code reviews with distributed groups, discuss the code on a video call. This eliminates the back and forth of an asynchronous review.
  • The time a manager spends reviewing code now can prevent a costly issue later. If you aren’t reviewing the code your team writes, someone can commit something shaky, and then everyone builds on top of it. Six months later that shaky code will grow into a time-intensive problem. Manager involvement in code reviews also presents in-the-moment opportunities to improve team performance and efficiency.
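
To make “ready for review” concrete, it helps to run the static analysis step automatically before anyone requests a review. Below is a minimal sketch of a local pre-review check, assuming a Python codebase, flake8 as the linter, and main as the base branch; swap in ESLint, RuboCop, or whatever analyzer fits your stack.

```python
#!/usr/bin/env python3
"""Lint only the files this branch touches, before opening a pull request.

A rough sketch: assumes flake8 is installed and `main` is the base branch.
"""
import subprocess
import sys

# List files changed relative to main, skipping deleted files.
changed = subprocess.run(
    ["git", "diff", "--name-only", "--diff-filter=d", "main...HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

py_files = [path for path in changed if path.endswith(".py")]
if not py_files:
    sys.exit(0)  # nothing to lint on this branch

# Run the linter on just those files; a nonzero exit blocks the push.
result = subprocess.run(["flake8", *py_files])
if result.returncode != 0:
    print("Lint issues found. Fix them before requesting review.")
sys.exit(result.returncode)
```

Wired into a pre-push Git hook or a CI job that runs on every pull request, a check like this keeps reviewer attention on design rather than formatting.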

Getting useful feedback from code reviews

Structural and syntax errors are often the first things–sometimes the only things–called out in a code review because they’re easy to spot. But what about deep-seated issues, like bad assumptions and faulty architecture? 

The goal with code reviews should always be to get useful and substantive comments. Here are a few guidelines for making that happen:

  • Require code reviewers to leave feedback when they evaluate a git pull request. This reinforces the importance of code review by connecting it to personal accountability.
  • Combine a feedback requirement with tools that automatically eliminate the non-creative parts of code review. Use a static code analyzer to identify the obvious issues so reviewers can focus on bigger problems.
  • As a code author, it helps to ask your reviewer questions and call out particular focus areas. Pose questions like “What do you think about this function?” and “Can you think of a more efficient way to handle this?”
  • Apply reviewer prompts, which can be configured through the pull request template functionality in GitHub (see the example below). This helps a reviewer get unstuck if they’re unsure what to look for.
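
GitHub reads a template from .github/pull_request_template.md and pre-fills every new pull request description with it. The prompts below are only an illustrative sketch; adapt them to the questions your team actually cares about.

```markdown
<!-- .github/pull_request_template.md -->
## What changed and why

## Questions for the reviewer
- Does the overall approach make sense, or is there a simpler design?
- Do the new tests cover the edge cases you would expect?
- Is there anything here that will be hard to maintain in six months?

## Reviewer checklist
- [ ] I understand what this change is trying to do
- [ ] I looked beyond syntax: assumptions, error handling, architecture
- [ ] I left at least one substantive comment or question
```

Putting the checklist in the template also reinforces the feedback requirement from the first guideline above.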

People can be too nice

Even when critical feedback is warranted, code reviewers might soften their critiques because they understand the sting of rejection and criticism. And on the other side, code authors may favor reviewers who take it easy on their work.

This puts engineering managers in a tricky spot because feedback and morale are both important. 

The solution to this lies in the way leaders and their teams offer feedback. You should strive to be collaborative and conversational in your reviews. Call out the good as well as the bad. Offer context in your feedback. Don’t just point out problems; work on solutions.

Here are a few examples of tactful comments that push for clarity without rubbing code authors the wrong way:

  • “I noticed you did this here. Can you explain your process?” 
  • “I’ve used this technique in the past and found it useful. You might be able to apply it here.”
  • “I did some research and learned this can cause problems with X. You might want to take a look.”

The “LGTM” problem in code reviews

“Looks good to me.” It’s a simple phrase–usually shortened to “LGTM”–that signals a job well done. 

The problem is that LGTM isn’t always true when it comes to code reviews. Reviewers sometimes use it as a feint to get the work off their plates as quickly as possible.

As a manager, you can employ structure to offset the likelihood of frivolous LGTM comments:

  • Make it a requirement that reviewers leave at least one comment when they look over a git pull request. This gets reviewers over the hump of actually reading the code. It’s easier to add further questions and feedback once a reviewer has started the process.
  • Interactive feedback is important for risky or complicated projects, which are especially prone to empty LGTM comments precisely because the code is hard to review. In a code review meeting–held remotely or in the same room–the reviewer gets immediate answers to questions and the code author can explain decisions without drafting carefully written replies. Another upside to code review meetings: research has found they can increase defect detection by more than 30%.
  • Review size is one of the top challenges reviewers face, so code authors should respect reviewers’ time and create manageable requests. Aim for 200-400 lines of changes when you make a git pull request; a quick size check like the script sketched after this list can help. Reviewers should also be empowered to request that changes be broken into smaller pieces so they can provide proper feedback.
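
One way to keep yourself honest about size is a quick local check before opening the pull request. The sketch below totals added and deleted lines against a main target branch and nags when the diff climbs past roughly 400 lines; the branch name and threshold are assumptions to adjust for your repo.

```python
#!/usr/bin/env python3
"""Rough pre-PR size check: warn when a diff exceeds ~400 changed lines."""
import subprocess
import sys

TARGET_BRANCH = "main"   # assumed default branch; change to match your repo
MAX_CHANGED_LINES = 400  # upper end of the 200-400 line guideline

# `git diff --numstat` prints "<added>\t<deleted>\t<path>" per changed file.
numstat = subprocess.run(
    ["git", "diff", "--numstat", f"{TARGET_BRANCH}...HEAD"],
    capture_output=True, text=True, check=True,
).stdout

total = 0
for line in numstat.splitlines():
    added, deleted, _path = line.split("\t", 2)
    if added == "-":  # binary files report "-" for line counts; skip them
        continue
    total += int(added) + int(deleted)

print(f"{total} lines changed relative to {TARGET_BRANCH}")
if total > MAX_CHANGED_LINES:
    print("Consider splitting this into smaller pull requests.")
    sys.exit(1)
```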

A new tool to help with your code review issues

We’ve run into all of these code review pitfalls at Revelry, and finding solutions has required a mix of testing, iteration, and custom tool development.

One of those tools is Lintron, the static code analyzer we built to catch errors in our code. Lintron works with Python, JavaScript, Ruby, and a number of other languages. Because it automatically flags routine issues, Lintron frees reviewers to focus on deeper problems, which saves us a lot of time and keeps us shipping.

Lintron has been a labor of love for us, but now we’re opening it up to everyone because we think the benefits we’ve seen can give other organizations a lift. We hope Lintron helps you and your team as much as it’s helped us.

Sign up to install Lintron

We're building an AI-powered Product Operations Cloud, leveraging AI in almost every aspect of the software delivery lifecycle. Want to test drive it with us? Join the ProdOps party at ProdOps.ai.