© Giuliana Carullo 2020
G. Carullo, Implementing Effective Code Reviews, https://doi.org/10.1007/978-1-4842-6162-0_11

11. Code Reviews

Giuliana Carullo
Dublin, Ireland

Nothing clears up a case so much as stating it to another person.

—Sherlock Holmes in Sir Arthur Conan Doyle’s “The Adventure of Silver Blaze” (1892)

So far we have covered a fairly large spectrum of good practices and checks to perform to ensure the quality of our software.

This last chapter focuses on the process itself, highlighting
  • Clean code lifecycle

  • Metrics

  • Collaboration challenges

  • How to spot flaws in the code without making programmers feel bad

  • Recommendations for managers

Wrapping Up

Before we get into metrics and final recommendations, let’s wrap up what we’ve covered in this book and see how it all fits together with the clean code lifecycle (CCLC) (Figure 11-1).
Figure 11-1

Clean code lifecycle

Adding a variety of code reviews can feel complex and burdensome. But it does not need to be. At the end of the day, the classical SDLC and the SSDLC should go hand in hand, as shown in Figure 11-1. To perform code reviews at their best, we need a well-rounded code review lifecycle (CRLC) that consists of
  • Peer reviews: Generally performed during the development phase

  • Automation: When feasible

  • Regular comprehensive manual reviews

By considering the feedback loop between the lifecycles, we can ensure and maintain clean and secure code over time.

Next, we will provide metrics that help us make better-informed decisions based on that feedback and gain insight into how successfully our code is moving toward a better version of itself.

Code Metrics

Some metrics can be used to gather a high-level view of the code base status, including
  • Defect density

  • Defect removal effectiveness

Defect Density

Defect density (DD) describes the number of faults per lines of code (LOCs). It can be used to get a very high-level view of the status of the code base.
$$ DD = \frac{\text{defects found}}{\text{LOCs}} $$

Indeed, as we described earlier in this book, just counting defects is not enough, and each of them should be properly prioritized.

As an example, consider the case of a security code review, where different vulnerabilities might be discovered. Not all of them are created equal, nor is their impact on the business. In this case, defect density gives a quantitative view, not a qualitative one.
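As a minimal sketch, the computation might look like the following Python function. The per-severity breakdown, the severity labels, and the example numbers are illustrative assumptions, reflecting the point that not all defects carry the same weight.

```python
from collections import Counter

def defect_density(defects, locs):
    """Defect density: defects found per line of code.

    'defects' is a list of (identifier, severity) pairs so that the count
    can also be broken down by severity, keeping a qualitative view too.
    """
    if locs <= 0:
        raise ValueError("LOCs must be positive")
    overall = len(defects) / locs
    by_severity = Counter(severity for _, severity in defects)
    return overall, {sev: count / locs for sev, count in by_severity.items()}

# Hypothetical usage: four defects found in a 2,000-LOC component.
dd, dd_by_severity = defect_density(
    [("D1", "high"), ("D2", "high"), ("D3", "medium"), ("D4", "low")], 2000)
```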

Defect Removal Effectiveness

Defect removal effectiveness (DRE), also referred to as defect removal efficiency, expresses how effectively issues are resolved during the software lifecycle.

It is generally useful for determining a product’s health and can be applied to code reviews. In terms of reviews, DRE can be defined as follows:
$$ DRE = \frac{\text{defects found or removed}}{\text{defects latent or introduced in the code}} \times 100\% $$
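A minimal sketch of this formula in Python, assuming the two counts are already available (how to obtain them is discussed next); the example numbers are hypothetical:

```python
def defect_removal_effectiveness(found_or_removed, latent_or_introduced):
    """DRE expressed as a percentage, following the definition above."""
    if latent_or_introduced <= 0:
        raise ValueError("The denominator must be a positive defect count")
    return 100.0 * found_or_removed / latent_or_introduced

# Hypothetical numbers: 129 defects found (the total of Table 11-1) against
# an estimated 150 defects latent or introduced in the code.
dre = defect_removal_effectiveness(129, 150)  # 86.0
```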

Defects Found

Table 11-1 shows a sample table template that wraps together the entire review process we have analyzed so far and helps summarize the number of defects found during code reviews.
Table 11-1
Sample Table: Tracking Defects Found

Stage | Description | Defects Found
Functional | Bugs found during review | 5
Code smells | Code smells described in the book | 100
Design | Design issues discovered during review | 10
Requirements | Defects in functional and/or nonfunctional requirements | 3
Architecture | Architectural defects | 8
Overall health | Including human resources and processes | 0
Security | Security flaws detected | 3

Defects Latent

Defects found are helpful when a single code review is performed. However, defects latent (i.e., unfixed defects) and newly introduced ones are really important to track over time to ensure that the code is actually improving and that the work required is sustainable.

However, these defects can be harder to calculate or estimate.

In the case of incremental code reviews (i.e., another review has already been performed), the number of defects found in the previous review can be used as a baseline. If a later review finds more defects than the previous one, it might signal that new defects were actually introduced in the code.

This is not 100% accurate, of course, in terms of the granularity of the defects’ types. Nonetheless, it provides a broad indicator of the health status of the code.

Other types of estimation can be performed (e.g., based on industry experience), but, analogously, they might result in a defect count that is not very accurate.
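As a rough illustration of the incremental heuristic just described, a sketch might compare the totals of two consecutive reviews; the function name and the numbers are assumptions:

```python
def suspected_new_defects(previous_found, current_found):
    """Broad indicator only: if a later review finds more defects than the
    previous one, new defects may have been introduced in the meantime."""
    return max(current_found - previous_found, 0)

# Hypothetical incremental reviews: 129 defects found in the first review,
# 140 in the following one.
suspected = suspected_new_defects(129, 140)  # 11
```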

Review Metrics

Besides the code per se, we also want to evaluate our processes. Thus, in this section, we are going to uncover metrics that we can use to gain a better understanding of how effective the code review process is. Specifically, we will look at
  • Inspection rate (IR)

  • Code coverage (CC)

  • Defect detection rate (DDR)

Inspection Rate

Inspection rate (IR) gives insight into the velocity of the review process. It is defined as
$$ IR = \frac{\text{LOCs}}{\text{hours}} $$

where LOCs is the number of lines of code of the analyzed component and hours is the number of hours spent performing the review.
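A minimal sketch of the computation in Python; the numbers are illustrative:

```python
def inspection_rate(locs, hours):
    """Lines of code reviewed per hour of review."""
    if hours <= 0:
        raise ValueError("Review hours must be positive")
    return locs / hours

# Hypothetical review: a 1,200-LOC component reviewed in 6 hours.
ir = inspection_rate(1200, 6)  # 200.0 LOCs/hour
```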

This metric is oftentimes used to get an understanding of the quality of the performed review. Many opinions have been expressed guessing which IR values indicate a well-performed review.

I personally do not buy into such guesses as they are; I would still contextualize this type of information and perform an educated validation.

The IR might be impacted by many factors, including
  • The current code quality

  • The experience of the reviewer

  • The number of feedback comments left regarding defects found

  • The quality of that feedback

  • Whether or not the reviewer is already familiar with the architecture and code

Code Coverage

Code coverage is the degree of code being inspected. In code reviews, the expected coverage is 100%.

But remember: always break down complex tasks into smaller, more manageable problems. It’s true that complete code reviews should be regularly included in the development process. However, it is better to start reviewing code as soon as possible after a change is made or a new feature is added to the codebase (e.g., peer reviews). It will make things easier when the complete review is due, and it will help in achieving better code on the go.

Defect Detection Rate

Defect detection rate (DDR) expresses the velocity of finding defects. It is computed as
$$ DDR = \frac{\text{defects found}}{\text{hours}} $$

The approach for looking at this metric is equivalent to the one used for inspection rate. Use it, but do so with caution and evaluate it depending on the context. Sometimes code is so bad that defects jump out from everywhere; other times it is already in fairly good shape (in which case, high five!).
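Analogously to the inspection rate, a minimal sketch of the computation; the numbers are illustrative:

```python
def defect_detection_rate(defects_found, hours):
    """Defects found per hour of review."""
    if hours <= 0:
        raise ValueError("Review hours must be positive")
    return defects_found / hours

# Hypothetical review: 12 defects found over 6 hours.
ddr = defect_detection_rate(12, 6)  # 2.0 defects/hour
```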

Recommendations and Guidelines

Let’s wrap up the review process with some overarching guidelines.

Clear Review Goals

It is important to clarify what the objectives of the review are. This will help you stay on track during the process. The objectives will depend, among other things, on the environment, the stage of the project, the requirements, the scope, and the technologies used.

Stay Scientific, Stay SMART

Avoid recommendations like “a faster DB/component/similar would be cool.” Argue what is actually needed (measurable), why, and the impact on the project.

Use the SMART approach. Always be specific about the defects, and have measurable and achievable suggested improvements. Review insights should be relevant to both the type of review you are performing and the problem to be solved. And they should be time bound, by means of specific actions to be taken to remove defects by a specific deadline.

An example of bad feedback is

There are too many dependencies between Component 1 and Component 2. This is due to bad design. Change it.

A better form would be

There are a lot of dependencies between Component 1 and Component 2 due to design smell X. The design can be improved by refactoring and by applying the design pattern Y. This is an important defect; consider fixing it in the next couple of Sprints.

Refresher

In the Scrum methodology, a Sprint is a time box, generally one month or less, during which a Sprint goal is achieved. The outcome of the Sprint is a potentially production-releasable increment of work.

Plan Ahead

Have a plan for what to fix. Always prioritize defects based on their urgency. Suggestions on possible ways to fix them are very welcome. Consider tracking defects using the template table shown in Table 11-2. The template is meant to track needed improvements in a SMART way.
Table 11-2
Tracking Defects: Sample Template

No. | Defect | Type | Component | Priority | Suggestion
1 | What the defect is | Defect type | Component in the code that suffers from the defect | High/medium/low | Suggestion for improvement specific to the code

An example of application of the template is given in Table 11-3.
Table 11-3
Tracking Defects: Sample Table

No. | Defect | Type | Component | Priority | Suggestion
1 | Feature density | Design smell | MyComponent.py | High | MyComponent breaks the single responsibility principle and implements two separate independent behaviors. Specifically, it should be broken down into Component1 and Component2
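If you prefer to keep findings close to the code, the template in Tables 11-2 and 11-3 can also be captured as a small record type. A minimal sketch, where the field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class DefectRecord:
    """One row of the tracking template in Table 11-2."""
    number: int
    defect: str
    defect_type: str
    component: str
    priority: str     # "high", "medium", or "low"
    suggestion: str   # SMART, code-specific improvement

# The example from Table 11-3, expressed as a record.
entry = DefectRecord(
    number=1,
    defect="Feature density",
    defect_type="Design smell",
    component="MyComponent.py",
    priority="high",
    suggestion="Break MyComponent down into Component1 and Component2 to "
               "restore the single responsibility principle.",
)
```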

Note

Especially if you are using Agile methodologies, filling in tables can be an extra burden. The key point of having a table is to track reviews and the changes needed. However, this information can be tracked in any other shape or form depending on the development process in place. As an example, if you are working with Scrum, consider creating tickets accordingly instead of filling in a table, and discuss findings during planning sessions.

Checklists Are Simply Not Enough

Checklists are very useful as guidance during reviews. However, just walking through a checklist is not enough, and other methods, including scenario-based architecture reviews, should be considered. This enables a well-rounded assessment.

Code Reviews for Developers

There are as many ways to perform bad code reviews as there are to write bad software. Code reviews can be really helpful if done right and a waste of time if done wrong.

That’s a Trap

Possible issues can fall into three broad categories:
  • Allowing the review to be judgmental

  • Making the review personal

  • Becoming frustrated with a tedious review process

One of the biggest traps that threaten code reviews is the risk of allowing the process to become judgmental. This approach toward code reviews creates a toxic work environment, destroys team morale, and hinders the production process.

Please, avoid approaching reviews as a way to challenge colleagues. Easier said than done, I know. Hone your communication skills, if you have to. Almost everybody has rolled their eyes while looking at some not-that-cool code before, but resist the urge to let it cloud your review. Please, go into the process with a positive attitude: help, teach, and reward success. Don’t jab your finger at other people’s flaws. Constructive feedback goes a long way. Reviews can and need to be a team-building activity and a learning experience.

Competition should also be avoided. Sometimes our ego steps into the game:

Oh Mark’s code is only so-so. I could write it so much better. My code is beautiful. I’m a true pythonista, unlike Mark.

I get it. I love beautiful code, but let’s keep competition out of the door. Code reviews are not an unhealthy challenge over who writes the most beautiful code on the team.

The second trap is the risk of making the review personal. The defect is in the code; you are not scanning colleagues. The code does something in the wrong way; it is not about Mark or Mary. Thus, be careful to communicate defects in the proper way: nothing personal.

The third trap, no less important, is becoming frustrated with a tedious review process.

Let’s consider reviews performed before each merge request. How many of you have been stuck waiting for a colleague to review the code? How many waited too long to perform a review?

Some don’ts are
  • Do not be a perfectionist; perfect code simply does not exist. Good code is not perfect code. Not liking a minor detail in the code is not a valid reason to question and/or reject the code under review.

  • Do not create big branches/commits. Help your colleagues and they will help you out in turn. Keeping commits focused and short will make reviews fast enough and far more manageable.

Manners

First and foremost, if you volunteered or you have been asked to perform a code review, commit to it. Do it accurately, walk through all the steps, and don’t just glide over it.

Hence, do not just assume that the code works, not even because it comes from a more senior peer. Remember! We are not scanning people. You trust your colleagues to do a great job, and that’s amazing. But it does not imply that you do not have to do a great job yourself. Rebuild and test the code. How many of you have found yourselves ending up with broken tests? Maybe because of a change in an interface? Fully commit to the process and do your best.

If the code integrates well, do not stop the process there. If new functionality is developed, check whether documentation and additional tests are needed. Is the committed code accompanied by proper documentation and use cases? If not, it is the right moment to raise the issue and fix it.

A common dispute is whether code reviews should include testing. This is especially true for teams where a defined distinction is made between who does reviews and who takes care of quality assurance (QA). My position is already clear. During code reviews, some testing needs to be done. Catching bugs and flaws is not the responsibility of a single person or team. The sooner they are detected and fixed, the better; period.

If you find the code hard to read and understand, do not make biased decisions. Neither the perception of your own seniority nor that of someone else’s should have any impact on the review. Do not be scared to ask the programmer who wrote the code for clarification about the context and the approach they used to implement it. Readability is a huge principle in good code. The worst that can happen is that you learn something, or that something was missing and the code improves. Not that bad, is it?

Finally, after the requested changes have been made, do not skip reviewing again. Does the new code incorporate all the required changes? Rebuild and retest. Check again for documentation and additional tests. All of the steps should be repeated. The second review might be slightly faster, but it cannot be taken lightly. Reviewing can be daunting, but it is worthwhile in the end.

Code Reviews for Managers

It is fair enough to address code reviews also from a manager’s perspective. As engineers, architects, and testers, you name it, you should have already recognized how much value this process adds to overall code quality. Thus, this section is specifically designed for all managers.

Oftentimes, code reviews are seen as yet another costly process that distracts engineers from writing more functionality. This is yet another conflict between quantity and quality. And if I have not convinced you so far of how much they can improve the development process, please allow me to highlight how you can directly benefit from them.

Quality Means Faster

Especially in Agile teams, adding code reviews looks like a lot of effort on top of already full Sprints. Sure enough, quality code comes at a price. But quality code also implies a faster development process. If the code is constantly and incrementally improved, developing new features is faster. With fewer bugs and anomalies, and with greater readability and usability, the team can focus every time on providing value (actual functionality) rather than being constantly stopped by unclear, unstructured, or buggy code that needs to be fixed first.

In other words, it helps in reducing firefighting. Firefighting happens when different priorities take over from each other, causing a lot of context switches.

As shown in Figure 11-2, when firefighting happens, the engineer eventually needs to interrupt the development process to fix the defect and test again, and only then can they go back to developing the actual feature. This slows down the process and reduces productivity.
Figure 11-2

Firefighting

Distributed Teams

Any engineering manager knows how tough it can be at times to manage a team, especially if it is geographically distributed. Communication is slightly more intricate than in “normal” teams.

To deal with this, a lot of meetings are added to the daily work to facilitate communication. However, as developers, we communicate with both verbal language and code. Thus, you should not only care that all the interacting team members share a common language (e.g., English), but also that they share the same coding process and (good) practices.

Show Me the Math

If I have not convinced you so far, let’s gather some numbers with a little exercise: a cost-benefit analysis (CBA). Code reviews can save the company money. I strongly suggest that you compile a table distinguishing between recurring and non-recurring costs for the development process both with and without code reviews.

Fill in the table based on the average cost per hour of a team member.

For both recurring and non-recurring costs, add the various activities (e.g., issues due to bugs, time spent troubleshooting, effort due to lack of clarity in the documentation). Estimate the time spent on all the activities with and without code reviews/quality code (i.e., the effort). Multiply the effort in hours by the average cost per hour to obtain the costs.
$$ Costs = \text{effort in hours} \times \text{average cost/hour} \times \text{recurring cost}\ (\%) $$

As a reminder, also note the qualitative status of bad code without reviews and the qualitative benefits introduced by QA activities such as those explained in this book. Sum up all the costs for both approaches and you are done. You will end up quite impressed by the results.
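As a minimal sketch of the cost formula above, the activities, the $60/hour rate, and the recurring shares below are placeholders to be replaced by your own estimates:

```python
def activity_cost(effort_hours, avg_cost_per_hour, recurring_share=1.0):
    """Cost of one activity: effort x hourly cost, weighted by how much of it recurs."""
    return effort_hours * avg_cost_per_hour * recurring_share

# Hypothetical entries: (activity, effort in hours, recurring share).
activities = [
    ("Troubleshooting caused by unclear code", 120, 1.0),
    ("Re-reading undocumented modules", 40, 1.0),
]
total_cost = sum(activity_cost(hours, 60, share) for _, hours, share in activities)
```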

Let’s consider a simplified scenario. Assume the average cost of development for a single engineer to be $100,000 per quarter (3 months). Also assume that you found a piece of technical debt that can be resolved by the team in 2 weeks. Let’s also assume that the team is composed of three engineers collaborating on the same codebase.

The first option would be to not fix it and keep going with new features. Table 11-4 shows a simplified set of estimates for recurring costs in such a scenario.
Table 11-4
CBA: Not Fixing Technical Debt

What | Recurring Cost | Cost
Troubleshooting | 10% slowdown of the entire development process | $100,000 x 3 x 10% = $30,000
Adding a new functionality in dependent software | 15% slowdown of the entire development process | $100,000 x 3 x 15% = $45,000
Lack of documentation | 5% slowdown of the entire development process | $100,000 x 3 x 5% = $15,000
Total | | $90,000

The second option would be to allocate 2 weeks to improving the code. See Table 11-5.
Table 11-5
CBA: Fixing Technical Debt

What | Non-Recurring Cost | Cost
Fixing tech debt | 2 weeks' cost of 3 engineers | $15,343 x 3 = $46,029
Total | | $46,029

The beauty of this is that technical debt is a recurring cost. This means that not fixing it (in our example) will end up costing $90,000 every quarter until it is finally resolved.

By contrast, fixing the given piece of technical debt not only has (in this case) a lower one-off cost (i.e., $46,029) than a single quarter of the recurring cost; it also saves money in the long run.

Indeed, suppose that the technical debt goes unfixed for an entire year. While fixing technical debt would stop at $46,029 in costs, the overall recurring cost for not fixing it would pile up to a grand total of $360,000 in a year. You can also quickly imagine how those numbers would snowball if we consider bigger teams and several pieces of technical debt that need to be resolved.
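A short sketch reproducing the comparison above over a one-year horizon; the figures come from the simplified scenario in Tables 11-4 and 11-5, and the quarterly loop is an assumption about how you might chart the trend:

```python
QUARTERLY_RECURRING_COST = 90_000  # Table 11-4: cost of living with the debt
ONE_OFF_FIX_COST = 46_029          # Table 11-5: cost of fixing it once

for quarters in range(1, 5):
    not_fixing = QUARTERLY_RECURRING_COST * quarters
    print(f"After {quarters} quarter(s): not fixing = ${not_fixing:,}, "
          f"fixing = ${ONE_OFF_FIX_COST:,}")
# After 4 quarters: $360,000 of recurring cost versus a one-off $46,029 fix.
```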

Note

This is, as anticipated, a simplified scenario. Granularity can be modified depending on the given context at hand. Furthermore, overall, we cannot isolate technical debt from strategy and other feature commitments. The recommendation here is to plug in your numbers and estimates and check the math in your context. Math never lies.

Summary

In this chapter, we wrapped up the review process by providing high-level guidance on how to approach code reviews both as an engineer and as a manager, and we discussed some metrics that can help you track improvements over time.

Key takeaways from this last chapter
  • Gather some metrics and do not trust only your instinct about code improving or not over time.

  • Soft skills are valued as much as technical ones.

Warning

Managers love a nice work environment as much as you do. Having to call out cases of not playing as a team makes them cry. I told you.

Further Reading

Best Kept Secrets of Peer Code Review by Jason Cohen (Smart Bear, 2006) and Peer Reviews in Software by Karl Wiegers (Addison-Wesley Professional, 2002) are interesting references on peer reviews. Software Inspection by Tom Gilb and Dorothy Graham (Addison-Wesley Professional, 1994) provides nice insights on the overall inspection process.

Review Checklist

In this last bit of review checklist, we are going to review how reviews are performed.
  1. Do you approach the process with a positive attitude?
  2. Are you nonjudgmental with colleagues?
  3. Are you being choosy and rejecting reviews only because of nitpicked minutiae in the reviewed code?
  4. Is competition out of the door?
  5. Is the process tedious?
  6. Are you making the defects more about the programmer than about the code?
  7. Are you rewarding success?
  8. Are you valuing teaching more than critiquing?
  9. Are commits gigantic, scattered pieces of code that need to be reviewed?
  10. Are you using code reviews as a way of improving communication within the team?
  11. Are you using code reviews as a learning experience?
  12. Are the objectives of the code review clearly defined?
  13. Are feedback and recommendations stated in a SMART way?
  14. Is the review followed by an action plan to fix defects?
  15. Are you fully committed as a reviewer?
  16. Are you biased by the perception of yourself (e.g., too young or too senior to perform a review)?
  17. Are you biased by the perception of others?
  18. Are you glancing over the code?
  19. Do you rebuild and test the code?
  20. Do you check that the code is accompanied by additional documentation and tests?
  21. If code is unclear, do you ask for clarification?
  22. If you asked for a change, do you carefully review the new code again to ensure important defects have been addressed?
  23. Are all requested changes properly prioritized and addressed?