Chapter 10. Testing

Automated testing is an important part of any programming project. It's always good to know that your application works correctly, but it's tedious to manually click through your application's interface every time you change something. Automated testing transfers this burden onto the computer; after you've written the tests, the application will test itself whenever necessary. Thus, adding new features becomes a low-risk operation because your tests will start failing as soon as you break something; you'll be able to fix the problem immediately, and you won't have to worry about the fix breaking something else. If your tests are well written, you can spend your time adding features rather than tracking down obscure bugs at 3 A.M.

While this book doesn't intend to push any development methodology, there are a few schools of thought on when to write tests. One is called test-driven development (TDD), which suggests that you always write tests before you write any code. The idea is that before adding a feature you run your test suite, notice that everything passes, and then add some failing tests for the new feature. Then you write code until the tests go green, and you repeat the process for the next feature. When all the features are implemented, you also have a fully tested application, as no code can be written without writing a test first. This is an easy way to ensure that all of your code is tested; you're simply not allowed to implement a feature until it's tested. You won't have to worry about not having enough time to test something before release—if the feature is working, it's also tested. This is generally a good thing, which is why the movement has attracted so much positive attention.
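
For example, a test written before any application code might look something like the following. This is just a sketch to illustrate the rhythm of TDD; the AddressBook::Util module and its normalize_phone function are hypothetical names, not part of anything we've built so far:

use strict;
use warnings;
use Test::More tests => 2;

# The module doesn't exist yet, so this first run is expected to fail.
BEGIN { use_ok 'AddressBook::Util' }

# Describe the behavior we want before we write it.
is( AddressBook::Util::normalize_phone('(555) 123-4567'),
    '555-123-4567',
    'phone numbers are normalized' );

On the first run the script fails; once you write AddressBook::Util with a normalize_phone that satisfies the second assertion, the script goes green and you move on to the next feature.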

The other end of the spectrum is waiting until your application is completely built before you write tests. Although it looks like that's what we're doing in this book, I actually wrote the tests along with the code but saved them for this chapter so that I could explain them in enough detail. I don't recommend that you try this for your real applications. Writing tests is a development aid, so waiting until development is complete misses much of the point of testing. The tests will be helpful for the next development cycle, but you've still probably wasted a lot of time in this cycle by manually doing what the computer could have done for you. So, get disciplined and write tests as you develop the code the first time.

Even though it seems like you're wasting time by not writing application code, you'll find that in the end you save time, and you won't lose sleep wondering what your last commit broke.

I prefer a hybrid approach to writing tests. I try to write as many tests as possible for a component or feature before I start writing code for it, but I don't make any effort to cover every case that I'm going to write the code for. Then, I code for a bit and test the basic flow of the application through the interface, just to make sure that I didn't do anything too stupid (such as typos). Once I'm sure that the application doesn't contain any obvious errors, I write tests for the corner cases and fix the errors in the code without using the interface at all. If a bug comes up later, I'll experiment a bit to reproduce it, add a test for it to the appropriate test file, and then make the test pass without wasting time with the web interface.

As you get familiar with testing, you'll find a style that works well for you. If this chapter is your first exposure to automated testing, I recommend that you try the TDD methodology. After a while, you'll know when you don't need to strictly adhere to the procedure, but in the meantime, you'll be writing tested code at a pretty good pace.

In this chapter, we'll take a look at the technologies and techniques available to make testing your Catalyst application easy. We'll start by seeing how to run the autogenerated tests and where to put new ones. Then, we'll begin testing ChatStat by testing the data model outside of Catalyst. After that, we'll write a few tests to ensure that the web interface is working well with the data model. Finally, we'll write some tests for the AddressBook application. After reading the chapter, take some time to explore the testing modules available on CPAN; many common testing tasks have already been solved there, so you won't have to reinvent the wheel.

Mechanics

Before we start writing tests, let's take a look at what we have so far. Out of the box, your Catalyst application already has a few tests in the t/ directory of the application. You can run them by generating a Makefile with Makefile.PL and then running make test:

$ perl Makefile.PL
<output>
$ make test
t/01app.................ok
t/02pod.................skipped
        all skipped: set TEST_POD to enable this test
t/03podcoverage.........skipped
        all skipped: set TEST_POD to enable this test
t/controller_Address....ok
t/controller_Person.....ok
t/model_AddressDB.......ok
t/view_Jemplate.........ok
All tests successful, 2 tests skipped.
Files=7, Tests=9, 3 wallclock secs ( 2.92 cusr + 0.29 csys = 3.21 CPU)

The tests you see running here were added by catalyst.pl (the ones that start with numbers) or by the addressbook_create.pl script (the ones named after a model, view, or controller). These tests don't really do much testing; they're just autogenerated tests that check that the code compiles. Here's what t/view_Jemplate.t looks like:

use strict;
use warnings;
use Test::More tests => 1;
BEGIN { use_ok 'AddressBook::View::Jemplate' }

Although this test doesn't do much, it does show the structure of a test file. A test is just a Perl script that ends with a .t extension. We use the strict and warnings pragmas and then use a module called Test::More. Test::More produces output that can be read by a module called Test::Harness, which is what make test uses to generate the summary above. If you run the test manually, the output looks like the following:

$ perl -Ilib t/view_Jemplate.t
1..1
ok 1 - use AddressBook::View::Jemplate;

This output format is called TAP (Test Anything Protocol), and is generated for you by Test::More. Although you could generate TAP manually, it's easier to just let the module do it for you.
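
Here's a tiny standalone script that shows a few of Test::More's assertion functions and the TAP they produce (a throwaway example, not part of the AddressBook application):

use strict;
use warnings;
use Test::More tests => 3;

ok( 1 + 1 == 2, 'addition works' );                        # boolean check
is( lc('Catalyst'), 'catalyst', 'lc() lowercases text' );  # exact comparison
like( 'Hello, world', qr/world/, 'greeting matches' );     # regex match

Running it prints the plan followed by one line per assertion:

1..3
ok 1 - addition works
ok 2 - lc() lowercases text
ok 3 - greeting matches

A failing assertion prints not ok instead, along with diagnostic lines prefixed with # that show what the test got and what it expected.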

One last note—Perl comes with a command called prove that will do the same thing as make test:

$ prove -Ilib t/view_Jemplate.t
t/view_Jemplate....ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.20 cusr + 0.02 csys = 0.22 CPU)

The command prove is nice when you only want to run a few test files instead of the entire suite.
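
If you do want to run the whole suite with prove, you can point it at the t/ directory. Recent versions support -r to recurse into subdirectories and -v to show the raw TAP as it runs; check prove --help on your system, since the available options vary with your Test::Harness version:

$ prove -Ilib -r t/
$ prove -Ilib -v t/view_Jemplate.t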
