© Nick Harrison 2017

Nick Harrison, Code Generation with Roslyn, 10.1007/978-1-4842-2211-9_1

1. Introduction

Nick Harrison

(1)Lexington, South Carolina, USA

The Problem with Business Logic

You have probably already figured this out, but business logic is hard. It is rarely logical, often doesn't follow discernible patterns, is riddled with exceptions, and changes often and quickly. This can be frustrating, but it is the world we live in.

The business world is very competitive. These exceptions and apparent contradictions that drive us crazy often mean the difference between keeping a client and losing a client, between making a deal and losing the deal. Business environments often turn on a dime, and when the environment changes, so must our applications. Having applications that can change and respond at the speed of business is critical to survival in tough competitive markets.

This puts a lot of pressure on our business applications to adapt to change quickly. How do we respond to these challenges? In this book, we explore ways to make our applications more nimble so that they can change at the speed of business.

Develop/Test Deploy Takes Time

Every time we change code, we go through a similar cycle of develop, test, and deploy. These steps take time. Depending on how your application is structured and the processes and tools being used, this could take a lot of time. In the .NET ecosystem, the smallest unit of deployment is an individual assembly; you can't deploy a single method or even a single class.

We can partition an application into separate assemblies to try to limit the scope and impact of such changes, but we need to balance runtime performance and time to market. If we split the logic across too few assemblies, we may be left needing to regression test the entire application for every change. If we split the business logic across too many assemblies, we may have more metadata than runnable code in individual assemblies. There also comes a point where too many assemblies can slow down builds and even opening the solution.

Note

Performance concerns from the number of assemblies in an application are not likely to be an issue until you start dealing with hundreds or even thousands of assemblies. Still, putting each class in its own assembly is not an option, since a typical application would quickly reach that point.

We can also reduce the time we spend in the develop, test, and deploy cycle with configuration management tools. Tools such as continuous integration and automated unit tests can help streamline this cycle. Continuous integration tools allow the code in a repository to be integrated regularly, often with every check-in. Automated unit testing allows you to define a collection of tests to be run against the output of the build with each integration. These tests can ensure with each integration that no new defects are introduced and that all requested features are implemented and performing correctly. Because these integrations occur frequently, any problems that are introduced can be discovered quickly. The earlier a problem is discovered, the easier it is to resolve.

It takes time to make code changes, test the changes, and deploy the changes. We can reduce this time by how we structure our solution. We can also add tools like continuous integration and automated testing to reduce this time, but no matter how much we optimize our processes or how we structure our solutions, we will never have software that changes at the speed of business as long as we are stuck in this develop/test/deploy loop.

Lookup Tables Are Easier to Modify, Verify, and Deploy

While we can't deploy a single method, we can deploy a single record from a lookup table. We can even deploy an update to a single column. To the extent that lookup data influences business logic, this gives us many options for making changes quickly. The scope of the impact from these changes can be tightly controlled, verified, and easily deployed.

Modifying lookup tables can be as simple as a single SQL statement or as complex as a sophisticated interactive table maintenance screen. You can start simple and over time add sophisticated maintenance screens to push the maintenance from the control of developers to the hands of “power users”. Verification can be as simple as running a report against the lookup tables to confirm that the correct data has been entered. Deployment can again be as simple as running SQL statements in the new environment or as complex as exporting key records from one environment and importing the same records in a new environment. This can easily be incorporated into your automated continuous integration strategy. Sophistication and complexity can grow as needed over time.

These configuration points must be designed and implemented in advance. We need to have the database structures in place to store the lookup data, and we also need code in place to reference and interpret the lookup data.

Note

Chapter 2 focuses on various strategies for structuring this lookup data and how it might be interpreted.

This code could take many forms; a minimal sketch of the first form follows the list below.

  • You may run a query to retrieve key configuration parameters.

  • You may run a query to determine the next step in a complex workflow.

  • You may run a query to evaluate which key business rules are applicable.

  • You may run a query that will actually implement and evaluate key business rules.
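
As a rough illustration of the first form, the following sketch retrieves a single configuration parameter from a lookup table using ADO.NET. The ConfigurationParameter table and its column names are hypothetical placeholders, standing in for whatever structure your own lookup data takes.

using System;
using System.Data.SqlClient;

public static class ConfigurationLookup
{
    // Retrieves a single configuration value from a hypothetical
    // ConfigurationParameter lookup table, keyed by parameter name.
    public static string GetParameter(string connectionString, string parameterName)
    {
        const string sql =
            "SELECT ParameterValue FROM ConfigurationParameter WHERE ParameterName = @name";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@name", parameterName);
            connection.Open();
            // ExecuteScalar returns the first column of the first row, or null if no row matches.
            return command.ExecuteScalar() as string;
        }
    }
}

In practice, the calling code would cache or otherwise reuse such values; the point here is simply that a single round trip to a lookup table can stand in for a hard-coded constant.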

Lookup Tables Can Be Slow

In designing software, we often find that a good idea can easily morph into a bad idea. The problem with lookup tables is that overusing or misusing lookup data can slow your application down.

Structuring our applications so that every business decision is controlled by configuration values in lookup tables means that we can easily change any decision point on the fly, but it also means that we may have to make multiple round trips to the database to work out the logic for even a simple business transaction.

We need to be careful about how many decision points are configurable through lookup data and how much lookup data must be retrieved to complete a business transaction.

Note

You will see that in many cases, storing business logic in tables can be quite efficient because it leverages the power of the database to filter and find the appropriate business rules. Just be wary of scenarios requiring multiple calls to the database.

We can have problems if we have to retrieve hundreds of records or make dozens of round trips to the database to complete a transaction. Consider the following business scenario:

  • We cannot initiate a funding request after 3PM

  • Unless it is a purchase, then the cutoff time is 4PM

  • Unless the Group Manager has authorized an override

You may find similar business logic in a Mortgage Loan Origination System. You may see any number of changes to such business requirements over time. You can easily imagine individual states requiring different cutoff times. You might even see changes requested based on other properties such as Income Documentation Type or Property Type and so on. You may see changes requested to change who can authorize an extension and for how long. Even for a relatively straightforward business rule, the number of configuration points and options can grow quickly.
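
To make the scenario concrete, here is a minimal sketch of how this rule might look once it is expressed as compiled code rather than as a series of lookup queries. The FundingRequest type and its properties are hypothetical; the cutoff times simply mirror the scenario above.

using System;

public class FundingRequest
{
    // Hypothetical properties standing in for real loan origination data.
    public bool IsPurchase { get; set; }
    public bool HasManagerOverride { get; set; }
}

public static class FundingCutoffRule
{
    public static bool CanInitiate(FundingRequest request, DateTime requestedAt)
    {
        // A Group Manager override trumps the cutoff entirely.
        if (request.HasManagerOverride)
            return true;

        // Purchases get a later cutoff (4 PM); everything else stops at 3 PM.
        var cutoff = request.IsPurchase
            ? new TimeSpan(16, 0, 0)
            : new TimeSpan(15, 0, 0);

        return requestedAt.TimeOfDay < cutoff;
    }
}

Every new configuration point the business requests, such as state-specific cutoffs, means either another hard-coded branch here or another trip to a lookup table, which is exactly the tension this book sets out to resolve.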

If you are not careful, you can introduce too many configuration points and sacrifice runtime performance in the name of configuration ease. The penalties for having a slow application can be worse than not being able to respond quickly enough to change.

Having Your Cake and Eating It Too

People don't like compromising. We want to have the best of both worlds. This is human nature. In a perfect world we would have the speed of compiled code and the maintenance ease that comes from table-driven logic. This book explores ways to reach this ideal state.

We explore how to structure lookup data to drive business logic in Chapter 2. We will see that there are a couple of patterns that we can follow for structuring lookup tables to drive business logic. We will look at some best practices to help guide when to use each approach and some tradeoffs that can be made to optimize some of these patterns to better fit your specific scenario.

Chapter 3 includes several case studies that take common business logic tables and review what generated code representing the encapsulated business logic might look like. We will not discuss how to generate code, only how the generated code would look. This should provide a template for pulling logic out of lookup tables regardless of the generation approach you take.

Chapter 4 introduces the Roslyn Compiler Services. We will explore how Roslyn can be used to build a type, a method, a conditional statement, a looping statement, etc. We will explore how to use Roslyn to generate code implementing the Business Logic described in our logic tables. Roslyn provides some nice options and simplifies some of the complexities and problems common with other code generation methods such as the CodeDom and T4.
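
As a small preview of Chapter 4, the following sketch uses Roslyn's SyntaxFactory to build a class containing a single method and print the resulting source. The names FundingRules and CanInitiateFunding are placeholders chosen for this sketch, not examples defined elsewhere in the book.

using System;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using static Microsoft.CodeAnalysis.CSharp.SyntaxFactory;

public static class RoslynPreview
{
    public static void Main()
    {
        // Build a method: public bool CanInitiateFunding() { return true; }
        var method = MethodDeclaration(
                PredefinedType(Token(SyntaxKind.BoolKeyword)), "CanInitiateFunding")
            .AddModifiers(Token(SyntaxKind.PublicKeyword))
            .WithBody(Block(ReturnStatement(
                LiteralExpression(SyntaxKind.TrueLiteralExpression))));

        // Wrap the method in a class and print the generated source.
        var generatedClass = ClassDeclaration("FundingRules")
            .AddModifiers(Token(SyntaxKind.PublicKeyword))
            .AddMembers(method);

        Console.WriteLine(generatedClass.NormalizeWhitespace().ToFullString());
    }
}

In a real generator, the return statement and the method list would of course be driven by the lookup data rather than hard-coded as they are here.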

Code generation often earns a bad reputation. If you are not careful, you can lose the ability to regenerate your code, which is key. To preserve this ability, we need to make sure that we don't ever directly change the code that we generate. Chapter 5 focuses on ways to preserve the integrity of generated code. We will talk about changing metadata or the generator instead of changing code. We will also talk about partial classes and inheritance to extend generated code. Here we will walk through some best practices for working in a living project that is based on generating code.

In Chapter 6, we discuss programmatically calling the compiler to compile the generated code. We will also explore best practices for deploying the newly generated assemblies. We will explore the complete lifecycle, from a user changing the logic data in a staging environment and verifying that the changes have the intended impact, through deploying the new logic data along with the newly created assembly to the Production environment. We will also explore some best practices for minimizing the impact to Production when we make these deployments.
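
As a rough preview of Chapter 6, the sketch below compiles a source string into an assembly on disk using Roslyn's CSharpCompilation. The assembly name, output path, and the single metadata reference are simplifying assumptions for illustration only.

using System;
using System.IO;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

public static class CompilePreview
{
    public static void Main()
    {
        var source =
            "public class FundingRules { public bool CanInitiateFunding() { return true; } }";
        var syntaxTree = CSharpSyntaxTree.ParseText(source);

        // Reference the assembly containing System.Object so the generated code can compile.
        var references = new[]
        {
            MetadataReference.CreateFromFile(typeof(object).Assembly.Location)
        };

        var compilation = CSharpCompilation.Create(
            "GeneratedRules",
            new[] { syntaxTree },
            references,
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        // Emit writes the compiled assembly; Success is false if there were compile errors.
        var result = compilation.Emit(Path.Combine(Path.GetTempPath(), "GeneratedRules.dll"));
        Console.WriteLine(result.Success
            ? "Compiled GeneratedRules.dll"
            : string.Join(Environment.NewLine, result.Diagnostics));
    }
}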

Reflection is the key to discovering and executing this generated code in order to call the configured business logic. By definition, our generated code did not exist when the rest of the application was originally written, so we need to discover it before we can execute it. Chapter 7 walks through the mechanics of reflection to safely load a generated assembly, discover the types in the assembly, create an instance of a specified type, and then call the relevant methods of the types that we find.
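
As a brief preview of Chapter 7, this sketch loads a generated assembly, finds a type, and invokes one of its methods by name. The assembly path, type name, and method name match the hypothetical sketches above and are illustrative assumptions, not part of the book's sample code.

using System;
using System.IO;
using System.Reflection;

public static class ReflectionPreview
{
    public static void Main()
    {
        // Load the previously generated assembly from disk.
        var assemblyPath = Path.Combine(Path.GetTempPath(), "GeneratedRules.dll");
        var assembly = Assembly.LoadFrom(assemblyPath);

        // Discover the generated type and create an instance of it.
        var rulesType = assembly.GetType("FundingRules");
        var rules = Activator.CreateInstance(rulesType);

        // Find the generated method and invoke it on that instance.
        var method = rulesType.GetMethod("CanInitiateFunding");
        var canInitiate = (bool)method.Invoke(rules, null);

        Console.WriteLine($"CanInitiateFunding returned {canInitiate}");
    }
}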

Finally in Chapter 8, we review all of the best practices we learned along the way.

What Is Not Covered

In this introduction, we touched on several related concepts that, while very important, are not addressed in this book. We only tangentially address the issues and best practices for partitioning business logic into separate assemblies. This is an important topic, but not one that we will cover.

We will also skip over the issues with configuration management that we briefly mentioned earlier. Strong configuration management practices with continuous integration built on top of automated unit testing are very important, but discussing how to create this is outside the scope of this book. You can find great details for managing configuration management in Beginning Application Lifecycle Management ( https://www.apress.com/us/book/9781430258124 ).

While this book will delve into creating the database structures needed to store and retrieve the data used to define and drive dynamic business logic, this is not a book on data modeling in general. We will provide some good advice and best practices, but this is far from a comprehensive guide to data modeling.

Summary

We live and work in a chaotic world where business logic is guaranteed to change and change often. To stay relevant and competitive, our applications need to adapt and respond as quickly as possible. We have seen how it can be difficult to respond quickly with hard-coded business logic. We have also seen how it is possible to respond to changes more quickly using table-driven business logic. While this solves the bottleneck inherent in the develop, test, deploy cycle, it can lead to runtime performance issues if the number of trips to the database to retrieve and build the business logic increases.

Throughout this book, we explore how to get the best of both worlds by using table-driven business logic to drive the generation of compiled code to implement business logic.

Note

All code samples used throughout this book are written in C#. The SQL presented has been explicitly written and tested against SQL Server. Unless explicitly stated otherwise, the SQL should work with minimal changes against any ANSI-compliant database.

Next, we turn our attention to how to structure business logic in tables to supply the metadata needed for code generation.
