Testing ChatStat

Now that we have seen what tests look like and how to run them, let's dive into testing ChatStat. When I was first writing ChatStat, I designed it with testability in mind. The key practice is to make as many components of the application as possible work without depending on the other parts. Testing a plain Perl class is very easy, but firing up a Catalyst application and testing it by making requests and inspecting the HTML is hard. So, I tried to keep the need for the second kind of test down by not doing much inside Catalyst; all the hard work happens inside easily-testable Perl modules. This not only makes testing easy, it's just plain good design. Without any extra effort on my part, I can use the same code to write a command-line version of the application, or perhaps an IRC interface.

Let's get started by writing some tests for the ChatStat message parser. This isn't particularly related to the Catalyst part of the application, but if this doesn't work, the whole application is useless. In addition, this is the simplest test to write, so it's a good practice before we move on to more complex parts of the application.

We'll start with a very simple test for the parse_nickname function, located in the ChatStat::Robot::Parser module from Chapter 5, Building a More Advanced Application. This function takes an IRC hostmask like person!username@some-host.com and produces an array like ['person', 'username', 'some-host.com'].

In t/irc-parser-nickname.t, add the following:

use strict;
use warnings;
use Test::More;
use ChatStat::Robot::Parser;

my %tests = (
    # nick!user@host => [nick, user, host]
    'jrockway!jrockway@example.com' => ['jrockway', 'jrockway', 'example.com'],
);

plan tests => scalar keys %tests;

while (my ($k, $v) = each %tests) {
    my $got    = [parse_nickname($k)];
    my $expect = $v;
    is_deeply($got, $expect, "$k parses");
}

We wrote this test by creating a table at the top of the file with the input strings and the expected results. Then we told Test::More how many tests we're going to run (so it can warn us if the test file dies halfway through), and finally we ran parse_nickname on each input and used the is_deeply function from Test::More to check that the data we got matches the data we expected.

You can try running the test with prove as follows:

$ prove -Ilib t/irc-parser-nickname.t
t/irc-parser-nickname....ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.12 cusr + 0.00 csys = 0.12 CPU)

The advantage of writing a test in this table-driven format is that you can add another assertion (test case) without writing any more code. If you append another test case to the %tests hash as follows:

my %tests = (
    # nick!user@host => [nick, user, host]
    'jrockway!jrockway@example.com' => ['jrockway', 'jrockway', 'example.com'],
    'another!test@some-place.com'   => ['another', 'test', 'some-place.com'],
);

then you'll see two tests running:

$ prove -Ilib t/irc-parser-nickname.t
t/irc-parser-nickname....ok
All tests successful.
Files=1, Tests=2, 0 wallclock secs ( 0.07 cusr + 0.01 csys = 0.08 CPU)

You can take this technique a step further and keep the test cases in a separate (non-Perl) file, loading that file into the %tests hash; a rough sketch follows below. Then you can have someone other than a programmer write your test cases. If you choose to follow this route, you should also take a look at FIT (http://fit.c2.com/) and the Test::FITesque module on CPAN.
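
As a sketch of what that might look like, assuming the cases live in a hypothetical t/data/nicknames.yml file (this file and its layout are not part of ChatStat), you could load them with the YAML module:

use strict;
use warnings;
use Test::More;
use YAML qw(LoadFile);
use ChatStat::Robot::Parser;

# hypothetical data file: each key is an input string, each value is
# the arrayref of results we expect parse_nickname to return
my %tests = %{ LoadFile('t/data/nicknames.yml') };

plan tests => scalar keys %tests;

while (my ($k, $v) = each %tests) {
    is_deeply [parse_nickname($k)], $v, "$k parses";
}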

Testing a database

ChatStat is a pretty database-heavy application, so there's no way to avoid testing the database part. We can make this less painful by providing an easy way to create an empty database just for testing. This is easy to do with SQLite; we just create a temporary database, deploy our schema to it, run the tests, and then delete the database. Whenever I start a database application, I create a small module to automate this. For ChatStat, it's called ChatStat::Test::Database, and it looks like the following:

package ChatStat::Test::Database;
use strict;
use warnings;
use Directory::Scratch;
use base 'ChatStat::Schema';

sub connect {
    my $class = shift;

    # set up temporary space for the database file
    my $tmp = Directory::Scratch->new;
    my $db  = $tmp->touch('database');

    # "connect" to the temporary database and deploy the schema
    my $schema = $class->SUPER::connect("DBI:SQLite:$db");
    $schema->deploy;

    # done!
    return $schema;
}

1;

All we do here is subclass our real DBIx::Class schema, and change the connect method to create a temporary database first. The Directory::Scratch module handles the management of the temporary scratch space, so we don't need to worry about cleaning up when we're done with tests.

Now we're ready to write database tests. The simplest thing to test would be the everything resultset, which will take "opinion" records and coalesce them into a ranked list of "things" with their point values. For example, given the input "test++, test--, foo++, foo++, bar--, and baz+-", we should get output like "foo => 2, baz => 0, test => 0, bar => -1".

Now that we know what we're testing (and have some sample data), we just need to have the computer check it for us. Let's create t/schema-report-everything.t as follows:

use strict;
use warnings;
use Test::More tests => 1;
use ChatStat::Test::Database;
use ChatStat::Robot::Action;
my @RECORDS = ([qw/test 1/],
               [qw/test -1/],
               [qw/foo 1/],
               [qw/foo 1/],
               [qw/bar -1/],
               [qw/baz 0/],
              );
my @EXPECT = ([qw/foo 2/],
              [qw/baz 0/],
              [qw/test 0/],
              [qw/bar -1/],
             );
my $schema = ChatStat::Test::Database->connect;
$schema->record(ChatStat::Robot::Action->new({
    who     => 'test!test@example.com',
    word    => $_->[0],
    points  => $_->[1],
    channel => '#test',
    reason  => 'test',
    message => 'test',
})) for @RECORDS;
my @everything = map { [ $_->thing, $_->total_points ] }
    $schema->resultset('Things')->everything->all;

is_deeply [sort { $a->[0] cmp $b->[0] } @everything],
          [sort { $a->[0] cmp $b->[0] } @EXPECT],
          'got expected everything';

With the database abstracted away, writing this test is easy. We start by defining the input records and the expected output. Then we connect to the database, add the input records via ChatStat::Robot::Action, run the everything report, and check that the result matches what we expect.

That's really all there is to it. There are some other tests for other reports in ChatStat included with the book's source code, but they're not much different from the test we just saw. They simply add data to the database and check that the results of the queries make sense.

If all this seems too complicated and you are just getting started with testing, you may want to begin with Test::Simple. For more information on Test::Simple, refer to the following site:

http://search.cpan.org/~mschwern/Test-Simple-0.94/lib/Test/Simple.pm
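
Test::Simple provides a single assertion, ok(), which takes a true or false value and a description. A minimal sketch (the checks here are just placeholders, not part of ChatStat's test suite):

use strict;
use warnings;
use Test::Simple tests => 2;

# ok() passes if its first argument is true
ok(1 + 1 == 2,    'arithmetic still works');
ok('foo' =~ /oo/, 'found "oo" in "foo"');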

If you would rather not declare the number of tests up front, you can use the newer features of Test::More, such as done_testing, which tells the test harness at the end of the script that testing is complete (and, optionally, how many assertions should have run). For more information on Test::More, refer to the following site:

http://search.cpan.org/~mschwern/Test-Simple-0.94/lib/Test/More.pm
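
For example, here is a sketch of the nickname test rewritten to use done_testing instead of an up-front plan:

use strict;
use warnings;
use Test::More;    # no plan declared up front
use ChatStat::Robot::Parser;

my %tests = (
    'jrockway!jrockway@example.com' => ['jrockway', 'jrockway', 'example.com'],
);

while (my ($k, $v) = each %tests) {
    is_deeply [parse_nickname($k)], $v, "$k parses";
}

# declare the plan at the end; the count argument is optional
done_testing(scalar keys %tests);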

Once you are familiar with Test::More, you may also want to explore other modules, such as Test::Most. Refer to the following site:

http://search.cpan.org/~ovid/Test-Most-0.21/lib/Test/Most.pm

Testing the web interface

Once you're confident that the backend is well tested, you need to make sure that the Catalyst part of your application is working smoothly. We have some more tools at our disposal for this task: Catalyst::Test and Test::WWW::Mechanize::Catalyst.

The basic idea here is to use Catalyst::Test (or similar) to make requests against the application and then see if the response contains the data we're looking for. (You can also do other things, like test if the HTML is valid and so on.)
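
For the simplest checks, Catalyst::Test alone is enough. It exports request (and get), which run a request through the application in-process and return an HTTP::Response object. A minimal sketch (the content being matched is a placeholder, and this runs against whatever database the application is currently configured to use):

use strict;
use warnings;
use Test::More tests => 2;
use Catalyst::Test 'ChatStat';

# request() runs the request in-process; no web server is started
my $response = request('/things/');
ok($response->is_success,            'request to /things/ succeeded');
like($response->content, qr/Things/, 'page mentions things (placeholder check)');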

As we still need the database, we'll have to jump through a few hoops to get a real Catalyst application to use the temporary database. What we need to do is create the test database, and then write a Catalyst config file that points Catalyst at the test database instead of the real one.

Here's the code for a module that does just that, ChatStat::Test::Database::Live:

package ChatStat::Test::Database::Live;
use strict;
use warnings;
use ChatStat::Schema;
use Directory::Scratch;
use YAML qw(DumpFile);
use FindBin qw($Bin);
use base 'Exporter';
our @EXPORT = qw/schema/;

my $tmp;    # keep the scratch directory alive for the life of the test
my $schema;
my $config;

BEGIN {
    # create the temporary database and deploy the schema to it
    $tmp = Directory::Scratch->new;
    my $db  = $tmp->touch('database');
    my $dsn = "DBI:SQLite:$db";
    $schema = ChatStat::Schema->connect($dsn);
    $schema->deploy;

    # write a "local" config file pointing Catalyst at the test database
    $config = "$Bin/../chatstat_local.yml";
    DumpFile($config, { 'Model::DBIC' => { connect_info => [$dsn] } });
}

sub schema { $schema }

END { unlink $config }

1;

When you use this module, it will create a test database, connect to it, and write a "local" config file whose connection information points at the test database. When Catalyst starts up after this module has been loaded, it will see the config file with the _local suffix and override the settings in the main config file with the data it contains. The END{} block deletes the local config file when the test is done running, so you won't have to worry about stale local configuration cluttering up your disk.
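
The generated chatstat_local.yml will look something like the following (the path to the SQLite file will be whatever random scratch directory Directory::Scratch created):

---
Model::DBIC:
  connect_info:
    - DBI:SQLite:/tmp/SCRATCH_DIRECTORY/database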

Now that we have a test database shared between the test script and the Catalyst application, we can start writing some tests. Let's test the page at /things, which lists everything that has an entry in the database.

We'll start by creating the basic test file called t/app-live-everything.t as follows:

use strict;
use warnings;
use Test::More tests => 3;
use ChatStat::Robot::Action;
use ChatStat::Test::Database::Live;
use Test::WWW::Mechanize::Catalyst qw/ChatStat/;
my $schema = schema();
my $mech = Test::WWW::Mechanize::Catalyst->new;
# test empty page
is_deeply [$schema->resultset('Things')->everything], [], 'no things yet';
$mech->get_ok('http://localhost/things/');
$mech->content_unlike(qr/test/);

Loading ChatStat::Test::Database::Live creates the test database, and loading Test::WWW::Mechanize::Catalyst starts an instance of the application. We then grab the test schema and create a Test::WWW::Mechanize object that runs requests against our application instead of against the network.

With the setup done, we make sure there are no things in the database yet, and that they don't show up on the page. The $mech->get_ok method will run the /things/ action in our application (the localhost in the URL means "our application" in this case, not the web server running on the localhost). The $mech->content_unlike looks for a regex on the page, and is a passing test if the regex doesn't match. In this case, we're looking to ensure that the word "test" doesn't show up on the page yet. We'll add it to the page right before the next test; this is just a sanity check to make sure it doesn't show up there for some other reason.

Now, let's add test++ to the database, and see if the page updates correctly. First, let's add a little utility to make inserting test++ into the database easy:

sub add_opinion {
    $schema->record(ChatStat::Robot::Action->new({
        who     => 'test!test@example.com',
        word    => $_[0],
        points  => $_[1],
        channel => '#test',
        reason  => 'test',
        message => 'test',
    }));
}

I add this to the very bottom of the test file so that I don't have to skip past it when reading the tests.

After the existing tests, add the following:

# add an opinion
add_opinion(test => 1);
my @things = $schema->resultset('Things')->everything->all;
is $things[0]->total_points, 1, 'got 1 point';
is $things[0]->thing, 'test', 'for test';
$mech->get_ok('http://localhost/things/');
$mech->content_like(qr/test/);
$mech->content_like(qr/1/);

Be sure to change the test counter at the top of the test to 8, as we just added five more assertions.

These test cases look like the ones we saw previously. We add a piece of data to the database, check that it's in the resultset, and then see if the correct text appears on the web page. To complete this test, let's add some negative and zero entries, and make sure those work:

add_opinion(foo => 0);
$mech->get_ok('http://localhost/things/');
$mech->content_like(qr/foo/, 'foo shows up, even though it has 0 points');
add_opinion(test => -1);
add_opinion(test => -1);
$mech->get_ok('http://localhost/things/');
$mech->content_like(qr/negative/, q{now there's a negative entry});
$mech->content_unlike(qr/positive/, q{but no more positives});
add_opinion(bar => 1);
$mech->get_ok('http://localhost/things/');
$mech->content_like(qr/positive/, q{positive is back});
$mech->content_like(qr/bar/, q{and bar shows up});

That's all we need for this test; just remember to bump the test counter at the top to 16, since this adds another eight assertions. After running it, you can be pretty sure that adding opinions to the database is reflected in the web interface.

You can do many more things with Test::WWW::Mechanize::Catalyst. If you're looking for documentation, be sure to see the docs for Test::WWW::Mechanize, WWW::Mechanize, and LWP::UserAgent, as that's the inheritance hierarchy for Test::WWW::Mechanize::Catalyst. Anything that those modules can do, Test::WWW::Mechanize::Catalyst can do against your Catalyst application.
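For example, the inherited methods let you do more than fetch pages and match regexes; you can check page titles, follow links, and submit forms. Here's a rough sketch (the title regex is a placeholder; check what your templates actually render before relying on it):

use strict;
use warnings;
use Test::More tests => 3;
use ChatStat::Test::Database::Live;
use Test::WWW::Mechanize::Catalyst qw/ChatStat/;

my $mech = Test::WWW::Mechanize::Catalyst->new;

$mech->get_ok('http://localhost/things/');

# title_like and page_links_ok both come from Test::WWW::Mechanize
$mech->title_like(qr/ChatStat/, 'page title mentions ChatStat (placeholder check)');

# follow every link on the page and make sure none of them are broken
$mech->page_links_ok('all links on /things/ resolve');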
