17

Best Practices in Coding C# 9

When you act as a software architect on a project, it is your responsibility to define and/or maintain a coding standard that will direct the team to program according to the expectations of the company. This chapter covers some of the best practices in coding that will help developers like you program safe, simple, and maintainable software. It also includes tips and tricks for coding in C#.

The following topics will be covered in this chapter:

  • How the complexity of your code can affect performance
  • The importance of using a version control system
  • Writing safe code in C#
  • .NET 5 tips and tricks for coding
  • Book use case – DOs and DON'Ts in writing code

C# 9 was launched together with .NET 5. However, since the practices presented here relate to the basics of programming in C#, they can be applied to many versions of .NET.

Technical requirements

This chapter requires the Visual Studio 2019 free Community Edition or better with all database tools installed. You will find the sample code for this chapter at https://github.com/PacktPublishing/Software-Architecture-with-C-9-and-.NET-5.

The more complex your code, the worse a programmer you are

For many people, a good programmer is one who writes complex code. However, as maturity in software development has evolved, a different way of thinking has emerged: complexity does not mean a good job; it means poor code quality. Some notable scientists and researchers have confirmed this and emphasize that professional code needs to be delivered on time, with high quality, and within budget.

Even when you have a complex scenario on your hands, if you reduce ambiguities and clarify what you are coding, especially by using good names for methods and variables and by respecting the SOLID principles, you will turn complexity into simple code.

So, if you want to write good code, keep the focus on how to do it, considering that you will not be the only one who reads it later. This is a good tip that changes the way you write code, and it is how we will discuss each point of this chapter.

If your understanding of the importance of writing good code is aligned with the idea of simplicity and clarity while writing it, you should take a look at the Visual Studio Code Metrics tool:

Figure 17.1: Calculating code metrics in Visual Studio

The Code Metrics tool will deliver metrics that will give you insights regarding the quality of the software you are delivering. The metrics that the tool provides can be found at this link: https://docs.microsoft.com/en-us/visualstudio/code-quality/code-metrics-values?view=vs-2019. The following subsections are focused on describing how they are useful in some real-life scenarios.

Maintainability index

This index indicates how easy it is to maintain the code – the easier the code is to maintain, the higher the index (limited to 100). Easy maintenance is one of the key points to keep software in good health. Any software will require changes in the future, since change is inevitable. For this reason, consider refactoring your code if it has a low maintainability index. Writing classes and methods dedicated to a single responsibility, avoiding duplicate code, and limiting the number of lines of code in each method are examples of how you can improve the maintainability index.

Cyclomatic complexity

The Cyclomatic Complexity Metric was created by Thomas J. McCabe. It defines the complexity of a function according to the number of independent paths through its control flow graph. The more paths there are, the more complex the function is. McCabe considers that each function should have a complexity score of less than 10. That means that, if the code has more complex methods, you must refactor it, extracting parts of the code into separate methods. There are some real scenarios where this behavior is easily detected:

  • Loops inside loops
  • Lots of consecutive if-else
  • switch with code processing for each case inside the same method

For instance, look at the first version of this method for processing the different responses of a credit card transaction. As you can see, the cyclomatic complexity is bigger than the limit McCabe considers acceptable. This happens because of the number of if-else statements inside each case of the main switch:

/// <summary>
/// This code is being used just for explaining the concept of cyclomatic complexity. 
/// It makes no sense at all. Please Calculate Code Metrics for understanding 
/// </summary>
private static void CyclomaticComplexitySample()
{
  var billingMode = GetBillingMode();
  var messageResponse = ProcessCreditCardMethod();
  switch (messageResponse)
    {
      case "A":
        if (billingMode == "M1")
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        else
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        break;
      case "B":
        if (billingMode == "M2")
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        else
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        break;
      case "C":
        if (billingMode == "M3")
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        else
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        break;
      case "D":
        if (billingMode == "M4")
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        else
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        break;
      case "E":
        if (billingMode == "M5")
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        else
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        break;
      case "F":
        if (billingMode == "M6")
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        else
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        break;
      case "G":
        if (billingMode == "M7")
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        else
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        break;
      case "H":
        if (billingMode == "M8")
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        else
          Console.WriteLine($"Billing Mode {billingMode} for " +
            $"Message Response {messageResponse}");
        break;
      default:
        Console.WriteLine("The result of processing is unknown");
        break;
    }
}

If you calculate the code metrics of this code, you will find a bad result when it comes to cyclomatic complexity, as you can see in the following screenshot:

Figure 17.2: High level of cyclomatic complexity

The code itself makes no sense, but the point here is to show how many improvements can be made to write better code:

  • The options from switch-case could be written using Enum
  • Each case processing can be done in a specific method
  • switch-case can be substituted with Dictionary<Enum, Method>

By refactoring this code with the preceding techniques, the result is a piece of code that is much easier to understand, as you can see in the following code snippet of its main method:

static void Main()
{
    var billingMode = GetBillingMode();
    var messageResponse = ProcessCreditCardMethod();
    Dictionary<CreditCardProcessingResult, CheckResultMethod>
        methodsForCheckingResult = GetMethodsForCheckingResult();
    if (methodsForCheckingResult.ContainsKey(messageResponse))
        methodsForCheckingResult[messageResponse](billingMode,
            messageResponse);
    else
        Console.WriteLine("The result of processing is unknown");
}
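
To give an idea of the supporting pieces behind this refactoring, the following is a minimal sketch of the enum, the delegate, and the dictionary-building method. The type names match the snippet above, while the handler method names (CheckResultA, CheckResultB) are assumptions for illustration – the full implementation in the chapter's repository may differ:

// Assumes using System; and using System.Collections.Generic;
private enum CreditCardProcessingResult { A, B, C, D, E, F, G, H }

// A delegate describing the signature shared by every case-handling method
private delegate void CheckResultMethod(string billingMode,
    CreditCardProcessingResult messageResponse);

private static Dictionary<CreditCardProcessingResult, CheckResultMethod>
    GetMethodsForCheckingResult() =>
    new Dictionary<CreditCardProcessingResult, CheckResultMethod>
    {
        { CreditCardProcessingResult.A, CheckResultA },
        { CreditCardProcessingResult.B, CheckResultB },
        // ... one entry per processing result
    };

// Each case becomes a small, single-purpose method with a cyclomatic complexity of 1
private static void CheckResultA(string billingMode,
    CreditCardProcessingResult messageResponse) =>
    Console.WriteLine($"Billing Mode {billingMode} for " +
        $"Message Response {messageResponse}");

private static void CheckResultB(string billingMode,
    CreditCardProcessingResult messageResponse) =>
    Console.WriteLine($"Billing Mode {billingMode} for " +
        $"Message Response {messageResponse}");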

The full code can be found on the GitHub repository of this chapter and demonstrates how lower-complexity code can be achieved. The following screenshot shows these results according to code metrics:

Figure 17.3: Cyclomatic complexity reduction after refactoring

As you can see in the preceding screenshot, there is a considerable reduction in complexity after refactoring. In Chapter 13, Implementing Code Reusability in C# 9, we discussed the importance of refactoring for code reuse. The reason why we are doing this here is the same – we want to eliminate duplication.

The key point here is that, with the techniques applied, the understanding of the code increased and the complexity decreased, which proves the importance of keeping cyclomatic complexity under control.

Depth of inheritance

This metric represents the number of classes that the class being analyzed inherits from. The deeper the inheritance chain, the worse the metric will be. This is like class coupling and indicates how difficult it is to change your code. For instance, the following screenshot shows a design with four inherited classes:

Figure 17.4: Depth of inheritance sample

You can see in the following screenshot that the deepest class has the worst metric, considering there are three other classes that can change its behavior:

Figure 17.5: Depth of inheritance metric

Inheritance is one of the basic principles of object-oriented analysis. However, it can sometimes be bad for your code in that it creates dependencies between classes. So, if it makes sense to do so, consider using composition instead of inheritance.
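
As a rough illustration of the idea – the class names here are hypothetical and are not part of the chapter's sample – composition replaces a deep inheritance chain with a dependency on a collaborator:

// Deep inheritance: every level adds a dependency on the one above it
public class ExecutionBase { public virtual void Execute() { /* ... */ } }
public class ExecutionLevel1 : ExecutionBase { }
public class ExecutionLevel2 : ExecutionLevel1 { }
public class ExecutionLevel3 : ExecutionLevel2 { }

// Composition: the behavior is a collaborator that can be swapped
// without changing a class hierarchy
public class ExecutionStep
{
    public void Run() { /* ... */ }
}

public class ExecutionWithComposition
{
    private readonly ExecutionStep _step;

    public ExecutionWithComposition(ExecutionStep step) => _step = step;

    public void Execute() => _step.Run();
}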

Class coupling

When you connect too many classes to a single class, you get coupling, and this can make your code hard to maintain. For instance, refer to the following screenshot. It shows a design where aggregation has been used a lot. The code itself makes no sense:

Figure 17.6: Class coupling sample

Once you have calculated the code metrics for the preceding design, you will see that the number of class coupling instances for the ProcessData() method, which calls ExecuteTypeA(), ExecuteTypeB(), and ExecuteTypeC(), equals three (3):

Figure 17.7: Class coupling metric

Some papers indicate that the maximum number of class coupling instances should be nine (9). With aggregation being a better practice than inheritance, the use of interfaces helps to solve class coupling problems. For instance, the same code with the following design will give you a better result:

Figure 17.8: Reducing class coupling

Notice that using the interface in the design will allow you the possibility of increasing the number of execution types without increasing the class coupling of the solution:

Figure 17.9: Class coupling results after applying aggregations
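
A minimal sketch of this kind of design might look like the following. The ProcessData and ExecuteType names follow the figures, while the IExecute interface and the IEnumerable-based signature are assumptions – the chapter's repository may structure this differently:

// Assumes using System.Collections.Generic;
public interface IExecute
{
    void Execute();
}

public class ExecuteTypeA : IExecute { public void Execute() { /* ... */ } }
public class ExecuteTypeB : IExecute { public void Execute() { /* ... */ } }
public class ExecuteTypeC : IExecute { public void Execute() { /* ... */ } }

public class Processor
{
    // New execution types can be added without increasing the coupling of this method
    public void ProcessData(IEnumerable<IExecute> executions)
    {
        foreach (var execution in executions)
            execution.Execute();
    }
}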

As a software architect, you must consider designing your solution with more cohesion than coupling. The literature indicates that good software has low coupling and high cohesion. High cohesion indicates a scenario where each class keeps methods and data that are strongly related to each other. Low coupling, on the other hand, indicates software where the classes are not closely and directly dependent on one another. This is a basic principle that can guide you to a better architectural model.

Lines of code

This metric is useful in terms of making you understand the size of the code you are dealing with. The number of lines of code says nothing about complexity, but it does say something about the size and design of the software. For instance, if you have too many lines of code in a single class (more than 1,000 lines of code – 1 KLOC), this indicates bad design.

Using a version control system

You may find this topic a bit obvious, but many people and companies still do not regard having a version control system as a basic tool for software development! The reason for writing about it is to force you to understand it. There is no architectural model or best practice that can save software development if you do not use a version control system.

In the last few years, we have been enjoying the advantages of online version control systems, such as GitHub, Bitbucket, and Azure DevOps. The fact is, you must have a tool like this in your software development life cycle, and there is no reason not to have one anymore, since most providers offer free versions for small groups. Even if you develop by yourself, these tools are useful for tracking your changes, managing your software versions, and guaranteeing the consistency and integrity of your code.

Dealing with version control systems in teams

The usefulness of a version control system when you are working alone is obvious: you want to keep your code safe. But this kind of system was developed to solve the problems teams face while writing code together. For this reason, some features, such as branching and merging, were introduced to keep code integrity even in scenarios where the number of developers is quite large.

As a software architect, you will have to decide which branching strategy your team will follow. Azure DevOps and GitHub suggest different ways to deliver that, and both are useful in some scenarios.

Information about how the Azure DevOps team deals with this can be found here: https://devblogs.microsoft.com/devops/release-flow-how-we-do-branching-on-the-vsts-team/. GitHub describes its process at https://guides.github.com/introduction/flow/. We cannot say which one best fits your needs, but we do want you to understand that you need a strategy for controlling your code.

In Chapter 20, Understanding DevOps Principles, we will discuss this in more detail.

Writing safe code in C#

C# can be considered a safe programming language by design. Unless you force it, there is no need for pointers, and memory release is, in most cases, managed by the garbage collector. Even so, some care should be taken so you can get better and safer results from your code. Let us have a look at some common practices to ensure safe code in C#.

try-catch

Exceptions in coding are so frequent that you should have a way to manage them whenever they happen. try-catch statements are built to manage exceptions, and they are important for keeping your code safe. There are a lot of cases where an application crashes simply because of the lack of try-catch statements. The following code shows an example of this. It is worth mentioning that this is just an example for understanding the concept of an exception thrown without correct treatment. Consider using int.TryParse(textToConvert, out int result) to handle cases where a parse is unsuccessful:

private static int CodeWithNoTryCatch(string textToConvert)
{
    return Convert.ToInt32(textToConvert);
}
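
A version of the same method using int.TryParse might look like the following sketch. It returns 0 on failure just to mirror the later examples; the right fallback depends on your business rules:

private static int CodeWithTryParse(string textToConvert)
{
    // TryParse never throws: it reports failure through its bool return value
    if (int.TryParse(textToConvert, out int result))
        return result;

    return 0;
}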

On the other hand, bad try-catch usage can cause damage to your code too, especially because it can hide the real behavior of the code and cause you to misunderstand the results it provides.

The following code shows an example of an empty try-catch statement:

private static int CodeWithEmptyTryCatch(string textToConvert)
{
    try
    {
        return Convert.ToInt32(textToConvert);
    }
    catch
    {
        return 0;
    }
}

try-catch statements must always be connected to logging solutions, so that you can have a response from the system that will indicate the correct behavior and, at the same time, will not cause application crashes. The following code shows an ideal try-catch statement with logging management. It is worth mentioning that specific exceptions should be caught whenever possible, since catching a general exception will hide unexpected exceptions:

private static int CodeWithCorrectTryCatch(string textToConvert)
{
    try
    {
        return Convert.ToInt32(textToConvert);
    }
    catch (FormatException err)
    {
        Logger.GenerateLog(err);
        return 0;
    }
}

As a software architect, you should conduct code inspections to fix this kind of behavior found in the code. Instability in a system is often connected to the lack of try-catch statements in the code.

try-finally and using

Memory leaks can be considered one of software's worst behaviors. They cause instability, bad usage of computer resources, and undesired application crashes. C# tries to solve this with its garbage collector, which automatically releases objects from memory as soon as it determines that an object can be freed.

Objects that interact with I/O – the filesystem, sockets, and so on – hold resources that the garbage collector does not manage directly. The following code is an example of incorrect usage of a FileStream object: it relies on the garbage collector to release the file handle, but this will not happen deterministically:

private static void CodeWithIncorrectFileStreamManagement()
{
    FileStream file = new FileStream(@"C:\file.txt",
        FileMode.CreateNew);
    byte[] data = GetFileData();
    file.Write(data, 0, data.Length);
}

Besides, it takes a while for the garbage collector to get to objects that need to be released, and sometimes you may want to release them yourself. For both cases, the use of try-finally or using statements is the best practice:

private static void CorrectFileStreamManagementFirstOption()
{
    FileStream file = new FileStream(@"C:\file.txt",
        FileMode.CreateNew);
    try
    {
        byte[] data = GetFileData();
        file.Write(data, 0, data.Length);
    }
    finally
    {
        file.Dispose();
    }
}
private static void CorrectFileStreamManagementSecondOption()
{
    using (FileStream file = new FileStream(@"C:\file.txt",
        FileMode.CreateNew))
    {
        byte[] data = GetFileData();
        file.Write(data, 0, data.Length);
    }
}
private static void CorrectFileStreamManagementThirdOption()
{
    using FileStream file = new FileStream(@"C:\file.txt",
        FileMode.CreateNew);
    byte[] data = GetFileData();
    file.Write(data, 0, data.Length);
}

The preceding code shows exactly how to deal with objects whose resources are not managed by the garbage collector, with both try-finally and using implemented. As a software architect, you do need to pay attention to this kind of code. The lack of try-finally or using statements can cause huge damage to software behavior when it is running. It is worth mentioning that using code analysis tools (now distributed with .NET 5) will automatically alert you to these sorts of problems.

The IDisposable interface

In the same way that you will have trouble if you do not manage objects created inside a method with try-finally/using statements, objects created in a class that does not properly implement the IDisposable interface may cause memory leaks in your application. For this reason, when you have a class that creates and deals with disposable objects, you should implement the disposable pattern to guarantee the release of all the resources it creates:

Figure 17.10: IDisposable interface implementation

The good news is that Visual Studio gives you a code snippet to implement this interface: just indicate the interface in your code, right-click, and use the Quick Actions and Refactorings option, as you can see in the preceding screenshot.

Once you have the code inserted, you need to follow the TODO instructions so that you have the correct pattern implemented.
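
The generated snippet follows the standard dispose pattern. A simplified sketch of what the completed pattern usually looks like is shown below – the ResourceHolder class and the FileStream field are hypothetical, and the sketch assumes using System; and using System.IO;:

public class ResourceHolder : IDisposable
{
    // Hypothetical disposable resource created and owned by this class
    private readonly FileStream _resource = new FileStream(@"C:\file.txt",
        FileMode.OpenOrCreate);
    private bool _disposed;

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed)
            return;

        if (disposing)
        {
            // Release managed resources here
            _resource.Dispose();
        }

        // Release unmanaged resources (if any) here
        _disposed = true;
    }

    public void Dispose()
    {
        Dispose(disposing: true);
        GC.SuppressFinalize(this);
    }
}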

.NET 5 tips and tricks for coding

.NET 5 implements some good features that help us to write better code. One of the most useful for writing safer code is dependency injection (DI), which was already discussed in Chapter 11, Design Patterns and .NET 5 Implementation. There are some good reasons for considering this. The first one is that you will not need to worry about disposing of the injected objects, since you are not the one who creates them.
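
As a minimal sketch of the idea – the service and class names here are hypothetical, and the example assumes the Microsoft.Extensions.DependencyInjection package – the container creates, delivers, and disposes of the injected objects for you:

using System;
using Microsoft.Extensions.DependencyInjection;

public interface IMessageSender
{
    void Send(string message);
}

public class SmtpMessageSender : IMessageSender, IDisposable
{
    public void Send(string message) => Console.WriteLine($"Sending: {message}");
    public void Dispose() => Console.WriteLine("Connection released by the container");
}

public class BillingProcessor
{
    private readonly IMessageSender _sender;

    // The dependency arrives ready to use; this class never creates or disposes of it
    public BillingProcessor(IMessageSender sender) => _sender = sender;

    public void Process() => _sender.Send("Billing processed");
}

public static class DependencyInjectionSample
{
    public static void Run()
    {
        var services = new ServiceCollection();
        services.AddScoped<IMessageSender, SmtpMessageSender>();
        services.AddScoped<BillingProcessor>();

        using var provider = services.BuildServiceProvider();
        using (var scope = provider.CreateScope())
        {
            // Disposable services created in the scope are disposed of when it ends
            scope.ServiceProvider.GetRequiredService<BillingProcessor>().Process();
        }
    }
}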

Besides, DI enables you to inject ILogger, a useful tool for debugging exceptions that will need to be managed by try-catch statements in your code. Furthermore, programming in C# with .NET 5 must follow the common good practices of any programming language. The following list shows some of these:

  • Classes, methods, and variables should have understandable names: The name should explain everything that the reader needs to know. There should be no need for an explanatory comment unless these declarations are public.
  • Methods cannot have high complexity levels: Cyclomatic complexity should be checked so that methods do not have too many different code paths.
  • Members must have the correct visibility: As an object-oriented programming language, C# enables encapsulation with different visibility keywords. C# 9 introduces init-only setters, so you can create init property/index accessors instead of set, making these members assignable only during object construction (illustrated in the sketch below).
  • Duplicate code should be avoided: There is no reason for having duplicate code in a high-level programming language such as C#.
  • Objects should be checked before usage: Since null objects can exist, the code must have null-type checking. It is worth mentioning that since C# 8, we have nullable reference types to avoid errors related to nullable objects.
  • Constants and enumerators should be used: A good way of avoiding magic numbers and text inside code is to transform this information into constants and enumerators, which generally are more understandable.
  • Unsafe code should be avoided: Unsafe code enables you to deal with pointers in C#. Unless there is no other way to implement the solution, unsafe code should be avoided.
  • try-catch statements cannot be empty: There is no reason for a try-catch statement without treatment in the catch area. More than that, the caught exceptions should be as specific as possible, and not just an "exception," to avoid swallowing unexpected exceptions.
  • Dispose of the objects that you have created, if they are disposable: Even for objects that the garbage collector will eventually take care of, consider disposing of the objects that you were responsible for creating yourself.
  • At least public methods should be commented: Considering that public methods are the ones used outside your library, they must be explained for their correct external usage.
  • switch-case statements must have a default treatment: Since the switch-case statement may receive an entrance variable unknown in some cases, the default treatment will guarantee that the code will not break in such a situation.

You may refer to https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/nullable-reference-types for more information about nullable reference types.
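
As a brief, hedged illustration of the init-only setters and nullable reference types mentioned above – the Customer class is hypothetical and not part of the chapter's sample:

#nullable enable
public class Customer
{
    // Init-only setter: this property can be assigned only during object construction
    public string Name { get; init; } = string.Empty;

    // A nullable reference type makes the possibility of null explicit to the compiler
    public string? Nickname { get; init; }
}

public static class CustomerSample
{
    public static void Run()
    {
        var customer = new Customer { Name = "John" };

        // The compiler warns if Nickname is dereferenced without a null check
        if (customer.Nickname is not null)
            System.Console.WriteLine(customer.Nickname.Length);

        // customer.Name = "Mary"; // Compilation error: init-only property
    }
}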

As a software architect, it is a good practice to provide a code pattern for your developers so that the style of the code stays consistent. You can also use this code pattern as a checklist for code inspections, which will enrich software code quality.

WWTravelClub – DOs and DON'Ts in writing code

As a software architect, you must define a code standard that matches the needs of the company you are working for.

In the sample project of this book (check out more about the WWTravelClub project in Chapter 1, Understanding the Importance of Software Architecture), this is no different. The way we decided to present the standard for it is by describing a list of DOs and DON'Ts that we followed while writing the samples we produced. It is worth mentioning that the list is a good way to start your own standard and, as a software architect, you should discuss it with the developers on your team so that you can evolve it in a practical way.

In addition, these statements are designed to clarify the communication between team members and improve the performance and maintenance of the software you are developing:

  • DO write your code in English
  • DO follow C# coding standards, using camelCase for local variables and parameters and PascalCase for class, method, and property names
  • DO write classes, methods, and variables with understandable names
  • DO comment public classes, methods, and properties
  • DO use the using statement whenever possible
  • DO use async implementation whenever possible (see the sketch after this list)
  • DON'T write empty try-catch statements
  • DON'T write methods with a cyclomatic complexity score of more than 10
  • DON'T use break and continue inside for/while/do-while/foreach statements
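
As a short sketch of the using and async items above – the file path and method name are illustrative only, and the code assumes using System.IO; and using System.Threading.Tasks;:

private static async Task<string> ReadContentAsync(string path)
{
    // The using declaration guarantees the reader is disposed of when the method ends
    using var reader = new StreamReader(path);

    // Awaiting the asynchronous API keeps the calling thread free while the I/O completes
    return await reader.ReadToEndAsync();
}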

These DOs and DON'Ts are simple to follow and, better than that, will yield great results for the code your team produces. In Chapter 19, Using Tools to Write Better Code, we will discuss the tools to help you implement these rules.

Summary

In this chapter, we discussed some important tips for writing safe code. It introduced a tool for analyzing code metrics so that you can manage the complexity and maintainability of the software you are developing. To finish, we presented some good tips to guarantee that your software will not crash due to memory leaks and unhandled exceptions. In real life, a software architect will always be asked to solve this kind of problem.

In the next chapter, we will learn about some unit testing techniques, the principles of unit testing, and a software process model that focuses on C# test projects.

Questions

  1. Why do we need to care about maintainability?
  2. What is cyclomatic complexity?
  3. List the advantages of using a version control system.
  4. What is Garbage Collector?
  5. What is the importance of implementing the IDisposable interface?
  6. What advantages do we gain from .NET 5 when it comes to coding?

Further reading

These are some books and websites where you will find more information about the topics of this chapter:
