Chapter 8. Testing DSLs

In this chapter

  • Why create testable DSLs?
  • Approaches for building testable DSLs
  • Building a Testing DSL
  • Integrating the Testing DSL with a unit-testing framework

The reasons for creating testable systems have been debated elsewhere for years, so we won’t go over them again here. It’s well accepted that testable and tested systems have lower maintainability costs, are easier to change and grow, and generally are more effective in the long run. I have a high regard for testability, and I consider it a first-level concern in any application or system that I build.

Unlike most systems, a DSL isn’t a closed environment. DSLs are built explicitly to enable greater openness in the environment. It’s almost guaranteed that your DSL will be used in ways that you can’t predict. To ensure that you can grow the language as needs change, you need to know what can be done with your language in the first place, and ensure that you have tests to cover all those scenarios.

8.1. Building testable DSLs

It’s hard to find breaking changes (or regression bugs) in a language. We tend to ignore the possibility of bugs at the language level, so it’s doubly hard to track them down.


Testability, regressions, and design

So far in this book, I have ignored the effect of testability on the design of a system. Ask any test-driven development (TDD) advocate, and they’ll tell you that the ability to run the tests against the application to find regressions is a secondary benefit compared to the impact on the design that using a TDD approach will have.

I can certainly attest to that. Building a DSL in a TDD manner, with tests covering all your corners, will give you a language design that’s much more cohesive. Such a language will be much easier to evolve as time goes on.


Regression tests help identify this sort of bug by ensuring that all the scenarios your language supports are covered. Users may still find ways to use the language that you didn’t anticipate—ways that your future releases may break—but regression testing reduces that risk by a wide margin. DSLs are usually focused on specific scenarios, so you can be fairly certain that you can cover the potential uses. This gives you a system that can be worked on without fear of breaking existing DSL scripts.

Before we try to figure out what we need to test, let’s review the typical structure for a DSL, as shown in figure 8.1.

Figure 8.1. A typical structure for a DSL

The syntax, API, and model compose the language of the DSL, and the engine is responsible for performing the actions and returning results. Note that, in this context, the engine isn’t the class that inherits from DslEngine. It’s the generic concept, referring to the part of the application that does the work, such as the part of the Quote-Generation DSL that takes the generated quotes and does something with them.

Given that there are four distinct parts of the DSL, it shouldn’t come as a surprise that we test the DSL by testing each of those areas individually, and finally creating a set of integration tests for the whole DSL.

8.2. Creating tests for a DSL

The Message-Routing DSL that we created in chapter 5 routes messages to the appropriate handlers. Listing 8.1 shows a simple scenario using this DSL.

Listing 8.1. A simple example using the Message-Routing DSL
HandleWith RoutingTestHandler:
    lines = []
    return NewOrderMessage( 15, "NewOrder", lines.ToArray(OrderLine) )

When I start to test a DSL, I like to write tests similar to the one in listing 8.2.

Listing 8.2. The first DSL test usually verifies that the script compiles
[Test]
public void CanCompile()
{
    DslFactory dslFactory = new DslFactory();
    dslFactory.Register<RoutingBase>(new RoutingDslEngine());
    RoutingBase routing =
        dslFactory.Create<RoutingBase>(@"Routingsimple.boo");
    Assert.IsNotNull(routing);
}

This may not seem like an interesting test, but it’s a good place to start. It checks that the simplest scenario, compiling the script, works. As a matter of fact, I create a CanCompile() test for each of the features I build into the language.
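For example, assuming a feature also ships with its own sample script (the file name here is hypothetical), the pattern repeats almost verbatim:

[Test]
public void CanCompile_RoutingWithMultipleHandlers()
{
    DslFactory dslFactory = new DslFactory();
    dslFactory.Register<RoutingBase>(new RoutingDslEngine());
    RoutingBase routing =
        dslFactory.Create<RoutingBase>(@"RoutingMultipleHandlers.boo");
    Assert.IsNotNull(routing);
}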


Rhino DSL and testability

One of the design principles for Rhino DSL is that it must be testable, and easily so. As such, most of the internal structure is built in such a way that it’s easily replaceable. This principle allows you to replace the infrastructure when you write your own tests, and it makes it easier to separate dependencies.


I am using basic tests, such as CanCompile(), to go from a rough idea about the syntax and user experience of the language to a language implementation that can be successfully compiled (but not necessarily run or do anything interesting).

Once we have a test that shows that our syntax works and can be successfully compiled, we can start working on the more interesting tests ...

8.2.1. Testing the syntax

What do I mean when I talk about testing the syntax? Didn’t we test that with the CanCompile() test? Well, not really.

When I talk about testing the syntax, I don’t mean simply verifying that it compiles successfully. We need to make sure the syntax we created has been compiled into the correct output. The CanCompile() test is only the first step in that direction.


Testing syntax in external DSLs

In general, when you hear someone talking about testing the syntax of a DSL, they’re talking about testing whether they can parse the textual DSL into an AST, whether the AST is valid, and so on. This is a critical stage for external DSLs, but not one you’ll usually have to deal with in internal DSLs.

By building on a host language, we can delegate all of those issues to the host language compiler. Our syntax tests for internal DSLs run at a higher level—we test that our syntax will execute as expected, not that it’s parsed correctly.


Take a look at listing 8.1 for a moment. What’s going on there? If you recall from chapter 5, the compiler will translate the code in listing 8.1 to code similar to what you see in listing 8.3.

Listing 8.3. C# representation of the compiled script in listing 8.1
public class Simple : RoutingBase
{
    public override void Route()
    {
        this.HandleWith(typeof(RoutingTestHandler), delegate
        {
            return new NewOrderMessage( 15, "NewOrder", new OrderLine[0] );
        });
    }
}

We want to test that this translation has happened successfully, so we need to write a test for it. But how can we do this?


Interaction-based testing for syntax

Interaction-based testing is a common approach to testing DSLs. It is particularly useful for testing the syntax of a DSL in isolation.

It may seem strange to use interaction-based testing to test the syntax of a language, but it makes sense when you think about what we’re trying to test. We’re testing that the syntax we created interacts with the rest of the DSL infrastructure correctly. The easiest way to test that is through interaction testing.

In general, I use Rhino Mocks to handle interaction-based testing. However, I don’t want to introduce any concepts that aren’t directly tied to the subject at hand, so we’ll create mock objects manually, instead of using Rhino Mocks.


We need some way to know whether the HandleWith method is called with the appropriate values when we call Route(). To do this, we’ll take advantage of the fact that the base class is implicit and replace the base class of the Message-Routing DSL with a testable implementation. This will allow us to override the HandleWith method and verify that it was called as expected.

We’ll start by creating a derived class of RoutingBase that will capture the calls to HandleWith, instead of doing something useful with them. The StubbedRoutingBase class is shown in listing 8.4.

Listing 8.4. A routing base class that captures the values for calls to HandleWith
public abstract class StubbedRoutingBase : RoutingBase
{
    public Type HandlerType;
    public MessageTransformer Transformer;

    public override void HandleWith(
        Type handlerType,
        MessageTransformer transformer)
    {
        this.HandlerType = handlerType;
        this.Transformer = transformer;
    }
}

Now that we have this base class, we need to set things up in such a way that when we create the DSL instance, the base class will be StubbedRoutingBase instead of RoutingBase. We can do this by deriving from RoutingDslEngine and replacing the implicit base class that’s used. Listing 8.5 shows how it can be done.

Listing 8.5. Replacing the implicit base class with a stubbed class
public class RoutingDslEngine : DslEngine
{
    protected override void CustomizeCompiler(
        BooCompiler compiler,
        CompilerPipeline pipeline,
        string[] urls)
    {
        // The compiler should allow late-bound semantics
        compiler.Parameters.Ducky = true;
        pipeline.Insert(1,
            new AnonymousBaseClassCompilerStep(
                // the base type
                BaseType,
                // the method to override
                "Route",
                // import the following namespaces
                "BDSLiB.MessageRouting.Handlers",
                "BDSLiB.MessageRouting.Messages"));
    }

    protected virtual Type BaseType
    {
        get { return typeof (RoutingBase); }
    }
}

public class StubbedRoutingDslEngine : RoutingDslEngine
{
    // Uses the stubbed version instead of the real one
    protected override Type BaseType
    {
        get
        {
            return typeof(StubbedRoutingBase);
        }
    }
}

In this listing, we extract the BaseType into a virtual property, which we can override and change in the StubbedRoutingDslEngine. I find this to be an elegant approach.

Now that we’ve laid down the groundwork, let’s create a full test case for syntax using this approach. Listing 8.6 shows all the details.

Listing 8.6. Testing that the proper values are passed to HandleWith
[Test]
public void WillCallHandlesWith_WithRouteTestHandler_WhenRouteCalled()
{
    const IQuackFu msg = null;

    dslFactory.Register<StubbedRoutingBase>(new StubbedRoutingDslEngine());

    var routing = dslFactory.Create<StubbedRoutingBase>(
        @"Routingsimple.boo");

    routing.Initialize(msg);

    routing.Route();

    Assert.AreEqual(typeof (RoutingTestHandler), routing.HandlerType);

    Assert.IsInstanceOfType(
        typeof(NewOrderMessage),
        routing.Transformer()
    );
}

First, we register the stubbed version of the DSL in the DslFactory, and then we ask the factory to create an instance of the script using the stubbed version. We execute the DSL, and then we can inspect the values captured in the StubbedRoutingBase when HandleWith is called.


Note

In the sample code for this chapter, you will find a test called WillCallHandlesWithRouteTestHandlerWhenRouteCalled_UsingRhinoMocks, which demonstrates how you can use Rhino Mocks to test your DSL syntax.


Note that the last assertion in this test executes the delegate we received from the script, to verify that it produces the expected response (an instance of NewOrderMessage, in this case).

What does this test do for us? It ensures that the syntax in listing 8.1 produces the expected response when the script is executed.

This is an important step, but it’s only the first in the chain. Now that we can test that we’re creating compilable scripts and that the syntax of the language is correct, we need to move up the ladder and start testing the API that we expose to the script.

8.2.2. Testing the DSL API

What exactly is a DSL API? In general, it’s any API that was written specifically for a DSL. The methods and properties of the implicit base class are obvious candidates (if they aren’t directly part of the DSL syntax, such as RoutingBase.HandleWith in our case).

Because the Message-Routing DSL has little API surface area worth talking about, we’ll use the Authorization DSL as our example for testing the API. Figure 8.2 shows the class diagram for the Authorization DSL’s base class.

Figure 8.2. The Authorization DSL class diagram

As you can see in figure 8.2, there are several API calls exposed to the DSL. Allow() and Deny() come to mind immediately, and Principal.IsInRole() is also important, though in a secondary manner (see sidebar on the difference between the DSL API and the model). I consider those to be the DSL API because they’re standard API calls that we explicitly defined as part of our DSL.

The rules we follow for the API are different than those we follow when we write the rest of the application. We have to put much more focus on language orientation for those calls. The DSL API is here to be used by the DSL, not by other code, so making the API calls easier to use trumps other concerns. For example, in the case of the Authorization DSL, we could have provided a method such as Authorized(bool isAuthorized), but instead we created two separate methods, Allow() and Deny(), to make it clearer for the users of the DSL.
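To make that concrete, here is a minimal sketch of what such an API can look like, reconstructed from the way it’s used in the tests later in this section. The real AuthorizationRule base class in the sample code has more to it (constructor dependencies, the Principal property, and so on); treat this as an illustration only.

public abstract class AuthorizationRule
{
    // null means "no opinion"; true/false mean explicitly allowed/denied
    public bool? Allowed { get; private set; }
    public string Message { get; private set; }

    // The DSL API: two intention-revealing methods
    // instead of a single Authorized(bool) call
    protected void Allow(string message)
    {
        Allowed = true;
        Message = message;
    }

    protected void Deny(string message)
    {
        Allowed = false;
        Message = message;
    }

    // The method each DSL script overrides
    public abstract void CheckAuthorization();
}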


The difference between the DSL API and the model

In figure 8.1, we distinguished between the DSL API and the model. But what is the difference between the two?

A DSL API is an API built specifically to be consumed by the DSL, whereas the application model, which the DSL often shares, is a more general concept describing how the application itself is built.

The domain model of the application is composed of entities and services that work together to perform the application’s tasks. This domain model isn’t designed first and foremost to be consumed by the DSL. You may find that it’s suitable for the DSL and make use of it, but that isn’t its main purpose.

A DSL API, however, is intended specifically for the DSL. You can create DSL API facades on top of existing models to supply better semantics for what you intend to do.

When you test your DSL, there is no need to test the model. The model is used by the DSL but isn’t part of it. The tests for the model should have been created when the model was created.


Once you’ve identified your DSL API, how do you test it? My personal preference (and recommendation) is that you should test the API without using the DSL at all. This goes back to testing in isolation. You’re testing the API, not the interaction of the API and the DSL (which was tested with syntax tests). This has the side benefit of ensuring that the API is also usable outside the DSL you’re currently building, which is important, because you’ll probably want to reuse the API in your tests, if not in a different DSL altogether (see section 8.3.2 on creating the Testing DSL).

To test your DSL API without using the DSL infrastructure, you use standard testing techniques. Listing 8.7 shows how you can test that the Allow() and Deny() methods perform their duties appropriately.

Listing 8.7. Testing the Allow() and Deny() methods
// Pretends to be a DSL that allows access
public class AllowAccess : AuthorizationRule
{
    public override void CheckAuthorization()
    {
        Allow("just a test");
    }
}

// Pretends to be a DSL that has no opinion on the matter
public class AbstainFromVoting : AuthorizationRule
{
    public override void CheckAuthorization()
    {
    }
}

// Pretends to be a DSL that denies access
public class DenyAccess : AuthorizationRule
{
    public override void CheckAuthorization()
    {
        Deny("just a test");
    }
}

[Test]
public void WhenAllowCalled_WillSetAllowedToTrue()
{
    AllowAccess allowAccess = new AllowAccess(null, null);
    allowAccess.CheckAuthorization();

    Assert.IsTrue(allowAccess.Allowed.Value);
    Assert.AreEqual("just a test", allowAccess.Message);
}

[Test]
public void WhenAllowOrDenyAreNotCalled_AllowedHasNoValue()
{
    AbstainFromVoting abstainFromVoting = new AbstainFromVoting(null, null);
    abstainFromVoting.CheckAuthorization();

    Assert.IsNull(abstainFromVoting.Allowed);
}

[Test]
public void WhenDenyCalled_WillSetAllowedToFalse()
{
    DenyAccess denyAccess = new DenyAccess(null, null);
    denyAccess.CheckAuthorization();

    Assert.IsFalse(denyAccess.Allowed.Value);
    Assert.AreEqual("just a test", denyAccess.Message);
}

In this example, we start by defining a few classes that behave in a known manner regarding the methods we want to test (Allow() and Deny()). Once we have those, it’s a simple matter to write tests to ensure that Allow() and Deny() behave the way we expect them to. This is a fairly trivial example, but it allows us to explore the approach without undue complexity.

The next step would be to test the model, but we’re not going to do that. The model is used by the DSL, but it isn’t part of it, so it’s tested the same way you would test the rest of your application. Therefore, it’s time to test the engine ...

8.2.3. Testing the DSL engine

When I talk about the DSL engine in this context, I’m referring to the code that manages and uses the DSL, not classes derived from the DslEngine class. The DSL engine is responsible for coordinating the use of the DSL scripts.

For example, the Authorization class in the Authorization DSL, which manages the authorization rules, is a DSL engine. That class consumes DSL scripts, but it isn’t part of the DSL—it’s a gateway into the DSL, nothing more. The DSL engine will often contain more complex interactions between the application and the DSL scripts.

Because the engine is usually a consumer of DSL instances, you have several choices when creating test cases for the engine. You can perform a cross-cutting test, which would involve the DSL, or test the interaction of the engine with DSL instances that you provide to it externally. Because I generally want to test the engine’s behavior in invalid scenarios (with a DSL script that can’t be compiled, for example), I tend to choose the first approach.

Listing 8.8 shows a few sample tests for the Authorization class. These tests exercise the entire stack: the syntax, the API, and the DSL engine.

Listing 8.8. Testing the Authorization class
[TestFixture]
public class AuthorizationEngineTest
{
    private GenericPrincipal principal;

    [SetUp]
    public void Setup()
    {
        Authorization.Initialize(@"Auth/AuthorizationRules.xml");
        principal = new GenericPrincipal(new GenericIdentity("foo"),
            new string[] { "Administrators" });
    }

    [Test]
    public void WillNotAllowOperationThatDoesNotExists()
    {
        bool? allowed = Authorization.IsAllowed(
            principal, "/user/signUp");
        Assert.IsFalse(allowed.Value);
    }

    /// <summary>
    /// One of the rules for /account/login is that administrators
    /// can always log in
    /// </summary>
    [Test]
    public void WillAllowAdministratorToLogIn()
    {
        bool? allowed = Authorization.IsAllowed(
            principal, "/account/login");
        Assert.IsTrue(allowed.Value);
    }

    [Test]
    public void CanGetMessageAboutWhyLoginIsAllowed()
    {
        string whyAllowed = Authorization.WhyAllowed(
            principal, "/account/login");
        Assert.AreEqual("Administrators can always log in",
            whyAllowed);
    }
}

As you can see, there’s nothing particularly unique here. We define a set of rules in the AuthorizationRules.xml file and then use the Authorization class to verify that we get the expected results. This is an important test case, because it validates that all the separate pieces are working together appropriately.

I would also add tests to check the order in which the Authorization DSL engine executes the scripts and how the engine handles errors in compiling the scripts and exceptions when running them. I would also test any additional logic in the class, but now we’re firmly in the realm of unit testing, and this isn’t the book to read about that.
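As an illustration of the error-handling case, a test along the following lines could work. It assumes a hypothetical Auth/InvalidRules.xml configuration that points at a rule script that doesn’t compile, and that the engine surfaces the compilation failure as an exception during initialization; adjust it to however your engine actually reports such failures.

[Test]
public void WillFailToInitialize_WhenARuleScriptDoesNotCompile()
{
    bool failed = false;
    try
    {
        // Auth/InvalidRules.xml is a hypothetical configuration
        // that references a script with a syntax error
        Authorization.Initialize(@"Auth/InvalidRules.xml");
    }
    catch (Exception)
    {
        failed = true;
    }
    Assert.IsTrue(failed,
        "Expected initialization to fail when a rule script does not compile");
}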


Tip

For information on unit testing, you should take a look at The Art of Unit Testing, by Roy Osherove (http://www.manning.com/osherove/). Another good book is xUnit Test Patterns, by Gerard Meszaros (http://xunitpatterns.com/).


So far, we’ve talked about testing each component in isolation, and using some overarching tests to ensure that the entire package works. We’ve tested the syntax, the API, and the engine, and I’ve explained why we aren’t going to test the model in our DSL tests. That covers everything in figure 8.1.

But we haven’t yet talked about testing the DSL scripts themselves.

8.3. Testing the DSL scripts

Considering the typical scenarios for using a DSL (providing a policy, defining rules, making decisions, driving the application, and so on), you need to have tests in place to verify that the scripts do what you think they do. In fact, because DSLs are used to define high-level application behavior, it’s essential to be aware of what the scripts are doing and to protect yourself from accidental changes.

We’ll explore two ways to do this, using standard unit tests and creating a full-blown secondary testing DSL to test our primary DSL.

8.3.1. Testing DSL scripts using standard unit testing

One of the more important things to remember when dealing with Boo-based DSLs is that the output of those DSLs is IL (Intermediate Language, the CLR assembly language). This means that this output has all the standard advantages and disadvantages of other IL-based languages.

For example, when testing a DSL script, you can reference the resulting assembly and write a test case directly against it, just as you would with any other .NET assembly. Usually, you can safely utilize the implicit base class as a way to test the behavior of the scripts you build. This offers a nearly no-cost approach to building tests.

Let’s take the Quote-Generation DSL as our example and write a test to verify that the script in listing 8.9 works as expected.

Listing 8.9. Simple script for the Quote-Generation DSL
specification @vacations:
    requires @scheduling_work
    requires @external_connections

specification @scheduling_work:
    return # Doesn't require anything

Listing 8.10 shows the unit tests required to verify this behavior.

Listing 8.10. Testing the DSL script in listing 8.9 using standard unit testing
[TestFixture]
public class QuoteGenerationTest
{
    private DslFactory dslFactory;

    // Set up the DSL factory appropriately
    [SetUp]
    public void SetUp()
    {
        dslFactory = new DslFactory();
        dslFactory.Register<QuoteGeneratorRule>(
            new QuoteGenerationDslEngine());
    }

    // Standard test to ensure we can compile the script
    [Test]
    public void CanCompile()
    {
        QuoteGeneratorRule rule =
            dslFactory.Create<QuoteGeneratorRule>(
                @"Quotes/simple.boo",
                new RequirementsInformation(200, "vacations"));
        Assert.IsNotNull(rule);
    }

    // Positive test, to ensure that we add the correct information
    // to the system when we match the requirements of the current
    // evaluation
    [Test]
    public void Vacations_Requirements()
    {
        QuoteGeneratorRule rule =
            dslFactory.Create<QuoteGeneratorRule>(
                @"Quotes/simple.boo",
                new RequirementsInformation(200, "vacations"));
        rule.Evaluate();

        SystemModule module = rule.Modules[0];
        Assert.AreEqual("vacations", module.Name);
        Assert.AreEqual(2, module.Requirements.Count);
        Assert.AreEqual("scheduling_work", module.Requirements[0]);
        Assert.AreEqual("external_connections",
            module.Requirements[1]);
    }

    // Negative test, to verify that we aren't blindly doing
    // things without regard to the current context
    [Test]
    public void WhenUsingSchedulingWork_HasNoRequirements()
    {
        QuoteGeneratorRule rule =
            dslFactory.Create<QuoteGeneratorRule>(
                @"Quotes/simple.boo",
                new RequirementsInformation(200, "scheduling_work"));
        rule.Evaluate();

        Assert.AreEqual(0, rule.Modules[0].Requirements.Count);
    }
}

In this test, we use the DslFactory to create an instance of the DSL and then execute it against a known state. Then we make assertions against the expected output from the known state.

This test doesn’t do anything special—these are standard methods for unit testing—but there is something disturbing in this approach to testing. The code we’re trying to test is exactly 5 lines long; the test is over 45 lines of code.

There is some repetitive code in the test that could perhaps be abstracted out, but I tend to be careful with abstractions in tests. They often affect the clarity of the test. And although I am willing to accept a certain disparity in the number of lines between production and test code, I think that when the disparity is measured in thousands of percent, it’s time to consider another testing approach.
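To make that concrete, the repetition could be folded into a small helper like the hypothetical CreateRule method below. It trims a few lines from each test, but it doesn’t change the order-of-magnitude disparity between the script and its tests.

private QuoteGeneratorRule CreateRule(string moduleName)
{
    QuoteGeneratorRule rule = dslFactory.Create<QuoteGeneratorRule>(
        @"Quotes/simple.boo",
        new RequirementsInformation(200, moduleName));
    rule.Evaluate();
    return rule;
}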

8.3.2. Creating the Testing DSL

The main reason there’s such a disparity between the DSL script and the code to test it is that the DSL is explicitly designed to express information in a concise (yet readable) form. When we try to test the DSL script with C# code, we’re testing highly abstracted code using a general-purpose language that offers relatively few abstractions for the task, so it’s no wonder there’s a big disparity in the amount of code.

Clearly we need a different approach. This being a book about DSL, my solution is to introduce another DSL, a Testing DSL, that will allow us to handle this at a higher level.


Don’t we need a testing DSL to test the Testing DSL?

If we create a Testing DSL to test the primary DSL, don’t we also need another testing DSL to test the Testing DSL? Taken to the obvious conclusion, that would require a third testing DSL to test the second testing DSL used to test the first testing DSL, and so on, ad infinitum. Who watches the watchers? Do we need an infinite series of testing DSLs?

Fortunately, we don’t need recursive testing DSLs. We only need one testing DSL for each DSL we’re testing. We’ll use the old idea of double-entry bookkeeping to ensure that we’re getting the right results.

The chance of having complementary bugs in both DSLs is small, so we’ll use each DSL to verify the other when we’re testing. A bug in the Testing DSL would manifest itself as a failing test, because there wouldn’t be a matching bug in the primary DSL to mask it.


We’ll use the same approach in building the Testing DSL as we’d use for any DSL. We first need to consider what kind of syntax we want for the tests. Look back at the script in listing 8.9 and consider what kind of syntax would express its tests in the clearest possible manner.

We also need to identify the common things we’ll want to test. Judging by the tests in listing 8.10, we’ll generally want to assert on the expected values from the script under various starting conditions.

Listing 8.11 shows the initial syntax for the Testing DSL.

Listing 8.11. Testing DSL syntax for Quote-Generation DSL scripts
script "quotes/simple.boo"

with @vacations:
should_require @scheduling_work
should_require @external_connections

with @scheduling_work:
should_have_no_requirements

Let’s try to build that. We’ll start with the implicit base class, using the same techniques we applied when building the DSL itself.


DSL building tip

A good first step when building a DSL is to take all the keywords in the language you’re building and create matching methods for them that take a delegate as a parameter. You can then use anonymous blocks to define your syntax.

This is a low-cost approach, and it allows you to choose more complex options for building the syntax (such as macros or compiler steps) later on.


We’ll first look at the structure of the implicit base class in listing 8.12. Then we’ll inspect the keyword methods in detail.

Listing 8.12. Implicit base class for testing Quote-Generation DSL scripts
/// <summary>
/// Implicit base class for testing the Quote-Generation scripts.
/// </summary>
public abstract class TestQuoteGeneratorBase
{
    private DslFactory dslFactory;
    private QuoteGeneratorRule ruleUnderTest;
    private string currentModuleName;

    protected TestQuoteGeneratorBase()
    {
        ruleUnderTest = null;
        dslFactory = new DslFactory();
        dslFactory.Register<QuoteGeneratorRule>(
            new QuoteGenerationDslEngine());
    }

    /// <summary>
    /// The script we're currently testing
    /// </summary>
    public abstract string Script { get; }

    // removed: with
    // removed: should_require
    // removed: should_have_no_requirements

    public abstract void Test();
}

There is only one thing worth special attention in listing 8.12: the abstract property called Script holds the path to the current script. This will be mapped to the script declaration in the Testing DSL using a property macro (property macros are discussed in chapter 7).
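In other words, the script "quotes/simple.boo" declaration at the top of a test script is expected to compile down to something equivalent to this override (shown as pseudo C#, in the same spirit as listing 8.3):

public override string Script
{
    get { return "quotes/simple.boo"; }
}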

Now that we have the overall structure, let’s take a deeper look at the language keywords. We’ll start with the with keyword, in listing 8.13.

Listing 8.13. The implementation of the with keyword
/// <summary>
/// A keyword
/// The scenario that we're testing.
/// Execute the script and then test its state.
/// </summary>
/// <param name="moduleName">The module name we're using
/// as the starting requirement</param>
/// <param name="action">Action that verifies the state under the
/// specified module</param>
public void with(string moduleName, Action action)
{
    Assert.IsNotEmpty(Script, "No script was specified for testing");

    ruleUnderTest = dslFactory.Create<QuoteGeneratorRule>(
        Script,
        new RequirementsInformation(0, moduleName));
    ruleUnderTest.Evaluate();

    currentModuleName = moduleName;

    action();
}

In listing 8.13, we create and execute the specified DSL and then use the last parameter of the with method, which is a delegate that contains the actions to verify the current state.

Listing 8.14 shows one of the verification keywords, should_require. The implementation of should_have_no_requirements is similar, so I won’t show it.

Listing 8.14. The implementation of one of the verification keywords in our Testing DSL
/// <summary>
/// A keyword
/// Expect to find the specified module as required for the current module
/// </summary>
public void should_require(string requiredModule)
{
    // Search for the appropriate module
    SystemModule module = ruleUnderTest.Modules.Find(
        delegate(SystemModule m)
        {
            return m.Name == currentModuleName;
        });
    // Fail if not found
    if (module == null)
    {
        Assert.Fail("Expected to have module: " +
            currentModuleName +
            " but could not find it in the registered modules");
    }
    // Search for the expected requirement
    foreach (string requirement in module.Requirements)
    {
        if (requirement == requiredModule)
            return;
    }
    // Not found, we fail the test
    Assert.Fail(currentModuleName +
        " should have a requirement on " +
        requiredModule +
        " but didn't.");
}

We use methods like should_have_no_requirements and should_require to abstract the verification process, which gives us a high-level language to express the tests. This approach significantly reduces the amount of effort required to test DSL scripts.
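For reference, here is a sketch of how should_have_no_requirements might look, mirroring the structure of should_require in listing 8.14; the version in the sample code may differ in its details.

/// <summary>
/// A keyword
/// Expect the current module to have no requirements at all
/// </summary>
public void should_have_no_requirements()
{
    // Search for the appropriate module
    SystemModule module = ruleUnderTest.Modules.Find(
        delegate(SystemModule m)
        {
            return m.Name == currentModuleName;
        });
    if (module == null)
    {
        Assert.Fail("Expected to have module: " +
            currentModuleName +
            " but could not find it in the registered modules");
    }
    // Fail if any requirement was registered for this module
    Assert.AreEqual(0, module.Requirements.Count,
        currentModuleName + " should have no requirements but had " +
        module.Requirements.Count);
}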


We’re not done yet

A word of caution. The Testing DSL created here can test only a limited set of scenarios. If you want to test a script with logic in it, you need to extend it. For example, what we have so far couldn’t test the following script:

specification @vacations:
    requires @scheduling_work
    requires @external_connections if UserCount > 50

Extending the Testing DSL to support this is easy enough, but I’ll leave that as an exercise for you (the solution is in the sample code). If you’d like a hint, this is the syntax I used:

script "quotes/WithLogic.boo"
with @vacations, UserCount=51:
should_require @scheduling_work
should_require @external_connections
with @vacations, UserCount=49:
should_require @scheduling_work
should_not_require @external_connections

Now that we’ve built the Testing DSL, are we done? Not quite. We still have to figure out how to execute it. Listing 8.15 shows one way of doing this.

Listing 8.15. Manually executing the Testing DSL
[TestFixture]
public class TestQuoteGenerationTest
{
    private DslFactory dslFactory;

    [SetUp]
    public void SetUp()
    {
        dslFactory = new DslFactory();
        dslFactory.Register<TestQuoteGeneratorBase>(
            new TestQuoteGenerationDslEngine());
    }

    [Test]
    public void CanExecute_KnownGood()
    {
        TestQuoteGeneratorBase test =
            dslFactory.Create<TestQuoteGeneratorBase>(
                @"QuoteGenTest.Scripts/simple.boo");
        test.Test();
    }
}

Although this will work, I don’t like this method much. It has too much friction in it. You’d have to write a separate test for each test script. Much worse, if there’s an error in the test, this manual execution doesn’t point out the reason for the failure; it points to a line in a script, which you’d then have to go and read, instead of giving you a descriptive test name.

A better method is to integrate the Testing DSL with the unit-testing framework.

8.4. Integrating with a testing framework

When we’re talking about integrating the Testing DSL scripts with a unit-testing framework, we generally want the following:

  • To use standard tools to run the tests
  • To get pass or fail notification for each separate test
  • To have meaningful names for the tests

Ideally, we could drop Testing DSL scripts in a tests directory, run the tests, and get all the results back.

Probably the simplest approach is to write a simple script that would generate the explicit tests from the Testing DSL scripts on the filesystem. Adding this script as a precompile step would ensure that we get the simple experience we’re looking for. This approach is used by Boo itself in some of its tests, and it is extremely simple to implement.
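A minimal sketch of such a generator is shown below; it scans a scripts directory and writes out a C# fixture with one test per Testing DSL script. The directory, file, and class names are illustrative, and a real version would be wired into the build as a precompile step.

public static class DslTestGenerator
{
    public static void Main()
    {
        var code = new System.Text.StringBuilder();
        code.AppendLine("[NUnit.Framework.TestFixture]");
        code.AppendLine("public class GeneratedDslTests");
        code.AppendLine("{");
        foreach (string script in System.IO.Directory.GetFiles(
            "QuoteGenTest.Scripts", "*.boo"))
        {
            string testName = System.IO.Path.GetFileNameWithoutExtension(script);
            code.AppendLine("    [NUnit.Framework.Test]");
            code.AppendLine("    public void " + testName + "()");
            code.AppendLine("    {");
            code.AppendLine("        var factory = new DslFactory();");
            code.AppendLine("        factory.Register<TestQuoteGeneratorBase>(");
            code.AppendLine("            new TestQuoteGenerationDslEngine());");
            code.AppendLine("        factory.Create<TestQuoteGeneratorBase>(");
            code.AppendLine("            @\"" + script + "\").Test();");
            code.AppendLine("    }");
        }
        code.AppendLine("}");
        System.IO.File.WriteAllText("GeneratedDslTests.cs", code.ToString());
    }
}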

But this approach isn’t always suitable. For example, consider the syntax we have for testing:

with @vacations:
    # ...

with @scheduling_work:
    # ...

We have two distinct tests here that happen to reside in a single file. We could split them so there’s only one test per file, but that’s an annoying requirement, particularly if we have three or four such tests with a few lines in each. What I’d like is for the testing framework to figure out that each with block is a separate test.

As it turns out, it’s pretty easy to make this happen. Most unit-testing frameworks have some sort of extensibility mechanism that allows you to plug into them. In my experience, the easiest unit-testing framework to extend is xUnit.NET, so that’s what we’ll use.

Listing 8.16 shows a trivial implementation of a test using xUnit. You can take a look at the xUnit homepage for further information: http://xunit.codeplex.com.

Listing 8.16. Sample test using xUnit
public class DemoOfTestUsingXUnit
{
    [Fact]
    public void OnePlusOneEqualTwo()
    {
        Assert.Equal(2, 1+1);
    }
}

Before we can start extending xUnit, we need to make sure we have a way to differentiate between the different tests in a single file. We’ll do this by defining with as a unit test. What do I mean by that? Figure 8.3 shows the transformation I have in mind.

Figure 8.3. The transformation from the test script to multiple tests

We’ll take each with section and turn it into its own method, which will allow us to treat them separately.


Note

Because this chapter presents two radically different ways to unit test a DSL, this chapter’s code is split into two parts: /Chapter8 contains the code for sections 8.1 through 8.3, and /Chapter8.UnitTestIntegration contains the source code for section 8.4 through the end of the chapter.


If you look at figure 8.3 closely, you’ll notice that we now have a WithModule() method instead of the with keyword. This helps us avoid naming collisions. After renaming the with() method to WithModule(), all we have to do is create a method each and every time with is used. We can do this with a macro, as shown in listing 8.17.

Listing 8.17. A macro that moves a with block to its own method
/// <summary>
/// Move the content of a with block to a separate method
/// and call the WithModule method.
/// </summary>
public class WithMacro : AbstractAstMacro
{
    public override Statement Expand(MacroStatement macro)
    {
        // Create a call to WithModule method
        var mie = new MethodInvocationExpression(macro.LexicalInfo,
            new ReferenceExpression("WithModule"));
        // with the arguments that we were passed
        mie.Arguments.Extend(macro.Arguments);
        // as well as the block that we have there.
        mie.Arguments.Add(new BlockExpression(macro.Block));

        // Create a new method. Note that the method "name"
        // is the content of the with block. This is allowed by
        // the CLR, but not by most languages.
        var method = new Method(macro.ToCodeString());
        // Add the call to the WithModule to the new method.
        method.Body.Add(mie);

        // Find the parent class definition
        var classDefinition = (ClassDefinition) macro.GetAncestor(
            NodeType.ClassDefinition);
        // Add the new method to the class definition
        classDefinition.Members.Add(method);

        // Remove all the code that was where this macro used to be
        return null;
    }
}

During compilation, whenever the compiler encounters a with block, it will call the WithMacro code. When that happens, this code will create a new method whose name is the content of the with block, and move all the code in the with block to the method, removing it from its original location. It also translates the with block to a call to the newly renamed WithModule method.

With this new macro in place, compiling the script in listing 8.11 will produce the output in listing 8.18 (translated to pseudo C# to make it easier to understand).

Listing 8.18. The result of compiling listing 8.11 with the WithMacro macro
public class Simple : TestQuoteGeneratorBase
{
    public void with 'scheduling_work':
        should_have_no_requirements()
    ();

    public void with 'vacations':
        should_require('scheduling_work')
        should_require('external_connections')
    ();

    public override string Script { get { ... } }
}

The method names aren’t a mistake; we have method names with all sorts of interesting characters in them, such as spaces, line breaks, and even parentheses. This doesn’t make sense as C# code, but it does make sense as IL code, which is what I translated listing 8.18 from. This is one way to ensure that tests will never have misleading names.

Now that we have each individual test set up as its own method, we can integrate the tests into xUnit fairly easily. Listing 8.19 shows the most relevant piece for integrating with the unit-testing framework.

Listing 8.19. Integrating our DSL with xUnit
public class DslFactAttribute : FactAttribute
{
    private readonly string path;

    public DslFactAttribute(string path)
    {
        this.path = path;
    }

    protected override IEnumerable<ITestCommand>
        EnumerateTestCommands(MethodInfo method)
    {
        DslFactory dslFactory = new DslFactory();
        dslFactory.Register<TestQuoteGeneratorBase>(
            new TestQuoteGenerationDslEngine());

        TestQuoteGeneratorBase test =
            dslFactory.Create<TestQuoteGeneratorBase>(path);
        Type dslType = test.GetType();

        BindingFlags flags = BindingFlags.DeclaredOnly |
            BindingFlags.Public |
            BindingFlags.Instance;

        foreach (MethodInfo info in
            dslType.GetMethods(flags))
        {
            if (info.Name.StartsWith("with"))
            {
                yield return new DslRunnerTestCommand(
                    dslType, info);
            }
        }
    }
}

This code is straightforward. We accept a path in the constructor for the script we want to test. When the time comes to find all the relevant tests, we create an instance of the script and find all the test methods (those that start with with). We then wrap them in DslRunnerTestCommand, which will be used to execute the test.

The DslRunnerTestCommand class is shown in listing 8.20.

Listing 8.20. DslRunnerTestCommand can execute a specific test in the DSL
public class DslRunnerTestCommand : ITestCommand
{
    private readonly MethodInfo testToRun;
    private readonly Type dslType;

    public DslRunnerTestCommand(Type dslType, MethodInfo testToRun)
    {
        this.dslType = dslType;
        this.testToRun = testToRun;
    }

    public MethodResult Execute(object ignored)
    {
        object instance = Activator.CreateInstance(dslType);
        return new TestCommand(testToRun).Execute(instance);
    }

    public string Name
    {
        get { return testToRun.Name; }
    }
}

We need a specialized test command to control which object the test is executed on. In this case, we pass the DSL type in the constructor, instantiate it during execution, and then hand off the rest of the test to the xUnit framework.

All we have left to do is use DslFact to tell the unit-testing framework how to find our tests, which is shown in listing 8.21.

Listing 8.21. Using DslFact to let xUnit find our tests
public class UsingUnitTestingIntegration
{
    [DslFact("QuoteGenTest.Scripts/simple.boo")]
    public void Simple()
    {
    }

    [DslFact("QuoteGenTest.Scripts/WithLogic.boo")]
    public void WithLogic()
    {
    }
}

What about integrating with other unit-testing frameworks?

All the major unit-testing frameworks have some extensibility mechanism that you can use. Personally, I think that xUnit is the simplest framework to extend, which is why I chose it to demonstrate the integration. Integrating with other frameworks isn’t significantly more complicated, but it does require more moving parts.


What did this integration with the unit-testing framework give us? Here is the result of a failing test:

1) with 'vacations', 49:
should_require('scheduling_work')
should_require('external_connections')
: AssertionException : vacations should have a requirement on
external_connections but didn't.

Compare that to the output we’d have gotten using the approach we took in listing 8.15. This approach gives us a much clearer error, and it’s that much easier to understand what is going on and fix it.

Note that you still have to write a dummy test for each script you want to test. To improve on this, you could change DslFact so it will handle a directory instead of a single file. This would let you drop test scripts into a directory, and they would immediately be picked up by the unit-testing framework.
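A sketch of such a directory-based variant is shown below. It follows the same pattern as listing 8.19 but enumerates every script in a directory; the attribute name DslFactsFromDirectoryAttribute is hypothetical.

public class DslFactsFromDirectoryAttribute : FactAttribute
{
    private readonly string directory;

    public DslFactsFromDirectoryAttribute(string directory)
    {
        this.directory = directory;
    }

    protected override IEnumerable<ITestCommand>
        EnumerateTestCommands(MethodInfo method)
    {
        DslFactory dslFactory = new DslFactory();
        dslFactory.Register<TestQuoteGeneratorBase>(
            new TestQuoteGenerationDslEngine());

        // One set of test commands per test script in the directory
        foreach (string path in System.IO.Directory.GetFiles(directory, "*.boo"))
        {
            TestQuoteGeneratorBase test =
                dslFactory.Create<TestQuoteGeneratorBase>(path);
            Type dslType = test.GetType();

            BindingFlags flags = BindingFlags.DeclaredOnly |
                BindingFlags.Public |
                BindingFlags.Instance;

            foreach (MethodInfo info in dslType.GetMethods(flags))
            {
                if (info.Name.StartsWith("with"))
                    yield return new DslRunnerTestCommand(dslType, info);
            }
        }
    }
}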

8.5. Taking testing further

We’ve looked at testing DSLs, but you could use DSLs to take testing further, by building a DSL to test your application, or by building testing into your language.

8.5.1. Building an application-testing DSL

Something that deserves a bit of attention is using testing DSLs not to test another DSL, but to test your application. This is an offshoot of automation DSLs, targeted specifically at testing applications.

The Fit testing tool (http://fit.c2.com/wiki.cgi?IntroductionToFit) can be used to let the customer build acceptance tests without understanding the code, and a testing DSL could be used for similar purposes. Frankly, having used Fit in the past, I find it significantly easier to build a DSL to express those concepts, rather than to use Fit fixtures for the task.

There isn’t anything special in such an application-testing DSL; you can use the same tools and approaches as when building any DSL.

8.5.2. Mandatory testing

There is one approach to testing that we haven’t talked about: mandatory testing as part of the language itself.

Imagine that when you create an instance of a script, the DSL engine looks for a matching test script (for example, the work-scheduling-specs.boo script would be matched to the work-scheduling-specs.test test). This test script would be compiled and executed the first time you request the primary script. That would ensure that your scripts are tested and working; a missing or failed test would stop the primary script from running.
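A minimal sketch of how this could look at the factory level is shown below. All of the names here (VerifiedDslFactory, RunTests) are hypothetical, and how the matching test script is actually executed depends on the Testing DSL in use.

public class VerifiedDslFactory
{
    private readonly DslFactory inner;

    public VerifiedDslFactory(DslFactory inner)
    {
        this.inner = inner;
    }

    public T Create<T>(string scriptPath, params object[] parameters)
    {
        // work-scheduling-specs.boo would map to work-scheduling-specs.test
        string testScript = System.IO.Path.ChangeExtension(scriptPath, ".test");
        if (!System.IO.File.Exists(testScript))
            throw new InvalidOperationException(
                "No test script found for " + scriptPath);

        // Throws if any of the tests fail, stopping the primary script
        RunTests(testScript);

        return inner.Create<T>(scriptPath, parameters);
    }

    private void RunTests(string testScript)
    {
        // Compile and execute the matching Testing DSL script here;
        // the details depend on the Testing DSL and engine in use.
    }
}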

There are problems with this approach. Tests might require a special environment, or take a while to run, or modify the state of the system in unacceptable ways for production, and so on. But it would be a good approach to use during development and staging. It would give you more motivation to write tests for your DSL scripts.


Note

This approach to mandatory testing is just an idea at the moment. I haven’t tried implementing it in a real project yet. The technical challenges of implementing such a system are nil, but the implications for the workflow and the ease of use of such a system are unknown. Compare Java’s checked exceptions: on the surface, they’re great; in practice, they’re very cumbersome. This is why I have only toyed with the idea so far.


8.6. Summary

Testing is a big topic, and it can significantly improve the quality of software that we write. This chapter has covered several approaches for testing DSLs, but I strongly recommend reading more about unit testing if you aren’t already familiar with the concepts and their application.

We covered quite a few topics in this chapter. We saw how to test each area of a DSL in isolation, and how to test the language as a whole. Having a layer of tests for the language itself is critically important—you need it the moment you make a change to the language. That layer of tests is your safety net. In the next chapter, we’ll look at how to safely version a DSL, and having a solid foundation of regression tests is a baseline requirement for successfully doing so.

Beyond testing the language itself, we’ve also seen how to test the language artifacts: the DSL scripts. We started with executing DSL scripts in our unit tests, moved on to building a Testing DSL to test the primary DSL, and finished up by integrating the Testing DSL into a unit-testing framework, which provides better error messages and allows us to have the unit tests automatically pick up new test scripts as we write them.

With that solid foundation of tests under our feet, we can move on to the next stage in our DSL lifetime, version 2.0. Let’s look at how to deal with that.
