Chapter 2. Writing Tests

Perl has a rich vocabulary, but you can accomplish many things using only a fraction of the power available. In the same way, Perl has an ever-increasing number of testing modules and best practices built around the simple ok() function described in Chapter 1.

The labs in this chapter guide you through the advanced features of Test::More and other commonly used testing modules. You’ll learn how to control which tests run and why, how to compare expected and received data effectively, and how to test exceptional conditions. These are crucial techniques that provide the building blocks for writing comprehensive test suites.

Skipping Tests

Some tests should run only under certain conditions. For example, a network test to an external service makes sense only if an Internet connection is available, or an OS-specific test may run only on a certain platform. This lab shows how to skip tests that you know will never pass.

How do I do that?

Suppose that you’re writing an English-to-Dutch translation program. The Phrase class stores some text and provides a constructor, an accessor, and an as_dutch() method that returns the text translated to Dutch.

Save the following code as Phrase.pm:

    package Phrase;
    use strict;

    sub new
    {
        my ( $class, $text ) = @_;
        bless \$text, $class;
    }

    sub text
    {
        my $self = shift;
        return $$self;
    }

    sub as_dutch
    {
        my $self = shift;
        require WWW::Babelfish;
        return WWW::Babelfish->new->translate(
            source      => 'English',
            destination => 'Dutch',
            text        => $self->text(),
        );
    }

    1;

A user may or may not have the WWW::Babelfish translation module installed. That’s fine; you’ve decided that Phrase’s as_dutch() feature is optional. How can you test that, though?

Save the following code as phrase.t:

    #!perl

    use strict;

    use Test::More tests => 3;
    use Phrase;

    my $phrase = Phrase->new('Good morning!');
    isa_ok( $phrase, 'Phrase' );

    is( $phrase->text(), 'Good morning!', "text() access works" );

    SKIP:
    {
        eval 'use WWW::Babelfish';

        skip( 'because WWW::Babelfish required for as_dutch()', 1 ) if $@;

        is(
            $phrase->as_dutch,
            'Goede ochtend!',
            "successfully translated to Dutch"
          );
    }

Run the test file with prove in verbose mode. If you have WWW::Babelfish installed, you will see the following output:

    $ prove -v phrase.t
    phrase....1..3
    ok 1 - The object isa Phrase
    ok 2 - text() access works
    ok 3 - successfully translated to Dutch
    ok
    All tests successful.
    Files=1, Tests=3,  1 wallclock secs ( 0.16 cusr +  0.01 csys =  0.17 CPU)

If you run the test without WWW::Babelfish, you will see a different result:

    $ prove -v phrase.t
    phrase....1..3
    ok 1 - The object isa Phrase
    ok 2 - text() access works
    ok 3 # skip because WWW::Babelfish required for as_dutch()
    ok
            1/3 skipped: because WWW::Babelfish required for as_dutch()
    All tests successful, 1 subtest skipped.
    Files=1, Tests=3,  0 wallclock secs ( 0.02 cusr +  0.00 csys =  0.02 CPU)

What just happened?

The test file begins with a Test::More declaration, as you’ve seen in the previous labs. The test file creates a sample Phrase object and also tests its constructor and text() accessor.

Skipping the test for as_dutch() when the user does not have the WWW::Babelfish module installed requires a bit of special syntax. The test file has a single block labeled SKIP, which begins by attempting to load the WWW::Babelfish module.

Note

You can have as many blocks labeled SKIP as you need. You can even nest them, as long as you label every nested block SKIP as well.
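As an illustration, here is a sketch of one SKIP block nested inside another. (LWP::Simple and the TEST_NETWORK environment variable are stand-ins for whatever module and condition your tests actually need.)

```perl
use Test::More tests => 2;

SKIP:
{
    # skip both tests if the optional module is missing
    eval 'use LWP::Simple';
    skip( 'LWP::Simple required for network tests', 2 ) if $@;

    ok( defined &get, 'LWP::Simple exported get()' );

    SKIP:
    {
        # skip only the inner test without a network connection
        skip( 'network tests disabled', 1 ) unless $ENV{TEST_NETWORK};

        ok( get( 'http://www.example.com/' ), 'fetched a page' );
    }
}
```

skip() exits the innermost enclosing SKIP block, so the inner skip() leaves the result of the outer block's earlier test untouched.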

If trying to use WWW::Babelfish fails, eval will catch such an error and put it in the global variable $@. Otherwise, it will clear that variable. If there’s something in $@, the function on the next line executes. skip(), yet another function helpfully exported by Test::More, takes two arguments: the reason to give for skipping the tests and the number of tests to skip. The previous case skips one test, explaining that the optional module is not available.

Even though the test for as_dutch() did not run, it counts as a success because marking it as a skipped test means that you expect it will never run under the given circumstances. If WWW::Babelfish were available, the test would run normally and its success or failure would count as a normal test.

Note

Test::Harness reports all skipped tests as successes because it’s behavior that you anticipated.

Skipping All Tests

The preceding lab demonstrated how to skip certain tests under certain conditions. You may find cases where an entire test file shouldn’t run—for example, when testing platform X-specific features on platform Y will produce no meaningful results. Test::More provides a bit of useful syntax for this situation.

How do I do that?

Use the plan() function on its own instead of specifying the number of tests in the use() statement. The following code checks whether the current weekday is Tuesday. If it is not, the test file skips all of its tests. Save it as skip_all.t:

    use Test::More;

    if ( [ localtime ]->[6] != 2 )
    {
        plan( skip_all => 'only run these tests on Tuesday' );
    }
    else
    {
        plan( tests => 1 );
    }

    require Tuesday;
    my $day = Tuesday->new();
    ok( $day->coat(), 'we brought our coat' );

Tuesday.pm is very simple:

    package Tuesday;

    sub new
    {
        bless {  }, shift;
    }

    # wear a coat only on Tuesday
    sub coat
    {
        return [ localtime ]->[6] == 2;
    }

    1;

Run this test file on a Tuesday to see the following output:

    $ prove -v skip_all.t
    skip_all....1..1
    ok 1 - we brought our coat
    ok
    All tests successful.
    Files=1, Tests=1,  1 wallclock secs ( 0.13 cusr +  0.04 csys =  0.17 CPU)

Note

A real test file would have more tests; this is just an example.

Run it on any other day of the week to skip all of the tests:

    $ prove -v skip_all.t
    skip_all....1..0 # Skip only run these tests on Tuesday
            all skipped: only run these tests on Tuesday
    All tests successful, 1 test skipped.
    Files=1, Tests=0,  0 wallclock secs ( 0.14 cusr +  0.05 csys =  0.19 CPU)

What just happened?

Instead of immediately reporting the test plan by passing extra arguments to the use keyword, skip_all.t uses Test::More’s plan() function to determine the test plan when the script runs. If the current weekday is not Tuesday, the code calls plan() with two arguments: an instruction to run no tests and a reason why. If it is Tuesday, the code reports the regular test plan and execution continues as normal.

Marking Tests as TODO

Even though having a well-tested codebase can increase your development speed, you may still have more features to add and bugs to fix than you can program in the current session. It can be useful to capture this information in tests, though they’ll obviously fail because there’s no code yet! Fortunately, you can mark these tasks as executable, testable TODO items that your test harness will track for you until you get around to finishing them.

How do I do that?

Take a good idea for some code: a module that reads future versions of files. That will be really useful. Call it File::Future, and save the following code to File/Future.pm, creating the File/ directory first if necessary:

    package File::Future;

    use strict;

    sub new
    {
        my ($class, $filename) = @_;
        bless { filename => $filename }, $class;
    }

    sub retrieve
    {
        # implement later...
    }

    1;

The File::Future constructor takes a file path and returns an object. Calling retrieve() on the object with a date retrieves that file as of the given date. Unfortunately, there is no Perl extension for flux capacitors yet. For now, hold off on writing the implementation of retrieve().

There’s no sense in not testing the code, though. It’ll be nice to know that the code does what it needs to do whenever Acme::FluxFS finally appears. It’s easy to test that. Save the following code as future.t:

    use Test::More tests => 4;
    use File::Future;

    my $file = File::Future->new( 'perl_testing_dn.pod' );
    isa_ok( $file, 'File::Future' );

    TODO: {
        local $TODO = 'continuum not yet harnessed';

        ok( my $current = $file->retrieve( 'January 30, 2005' ) );
        ok( my $future  = $file->retrieve( 'January 30, 2070' ) );

        cmp_ok( length($current), '<', length($future),
            'ensuring that we have added text by 2070' );
    }

Run the test with prove. It will produce the following output:

    $ prove -v future.t
    future.t....1..4
    ok 1 - The object isa File::Future
    not ok 2 # TODO continuum not yet harnessed
    #     Failed (TODO) test (future.t at line 10)
    not ok 3 # TODO continuum not yet harnessed
    #     Failed (TODO) test (future.t at line 11)
    not ok 4 - ensuring that we have added text by 2070 # TODO
          continuum not yet harnessed
    #     Failed (TODO) test (future.t at line 13)
    #     '0'
    #         <
    #     '0'
    ok
    All tests successful.
    Files=1, Tests=4,  0 wallclock secs ( 0.02 cusr +  0.00 csys =  0.02 CPU)

What just happened?

The test file for File::Future marks the tests for retrieval of documents from the future as an unfinished, but planned, feature.

Note

Unlike skipped tests, tests marked as TODO do actually run. However, unlike regular tests, the test harness interprets failing TODOs as a success.

To mark a set of tests as TODO items, put them in a block labeled TODO, similar to the SKIP block from “Skipping Tests,” earlier in this chapter. Instead of using a function similar to skip(), localize the $TODO variable and assign it a string containing the reason that the tests should not pass.

Notice in the previous output that Test::More labeled the tests with TODO messages and the TODO reason. The TODO tests fail, but because the test file set that expectation, the test harness considers them successful tests anyway.

What about...

Q: What happens if the tests succeed? For example, if the tests exercise a bug and someone fixes it while fixing something else, what will happen?

A: If the tests marked as TODO do in fact pass, the diagnostics from the test harness will report that some tests unexpectedly succeeded:

    $ prove -v future.t
    future....1..4
    ok 1 - The object isa File::Future
    ok 2 # TODO continuum not yet harnessed
    ok 3 # TODO continuum not yet harnessed
    ok 4 # TODO continuum not yet harnessed
    ok
            3/4 unexpectedly succeeded
    All tests successful (3 subtests UNEXPECTEDLY SUCCEEDED).
    Files=1, Tests=4,  0 wallclock secs ( 0.02 cusr +  0.00 csys =  0.02
        CPU)

This is good; you can then move the passing tests out of the TODO block and promote them to full-fledged tests that should always pass.

Simple Data Structure Equality

Test::More’s is() function checks scalar equality, but what about more complicated structures, such as lists of lists of lists? Good tests often need to peer into these data structures to test whether, deep down inside, they are truly equal. The first solution that may come to mind is a recursive function or a series of nested loops. Hold that thought, though—Test::More and other test modules provide a better way with their comparison functions.

How do I do that?

Save this code as deeply.t:

    use Test::More tests => 1;

    my $list1 =
    [
        [
            [ 48, 12 ],
            [ 32, 10 ],
        ],
        [
            [ 03, 28 ],
        ],
    ];

    my $list2 =
    [
        [
            [ 48, 12 ],
            [ 32, 11 ],
        ],
        [
            [ 03, 28 ],
        ],
    ];

    is_deeply( $list1, $list2, 'existential equivalence' );

Run it with prove -v to see the diagnostics:

    $ prove -v deeply.t
    deeply....1..1
    not ok 1 - existential equivalence
    #     Failed test (deeply.t at line 23)
    #     Structures begin differing at:
    #          $got->[0][1][1] = '10'
    #     $expected->[0][1][1] = '11'
    # Looks like you failed 1 tests of 1.
    dubious
      Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 1
      Failed 1/1 tests, 0.00% okay
    Failed 1/1 test scripts, 0.00% okay. 1/1 subtests failed, 0.00% okay.
    Failed Test Stat Wstat Total Fail  Failed  List of Failed
    ---------------------------------------------------------------------------
    deeply.t       1   256     1    1 100.00%  1

What just happened?

The example test compares two lists of lists with the is_deeply() function exported by Test::More. Note the difference between the two lists. Because the first array contains a 10 where the second contains an 11, the test failed.

The output shows the difference between $list1 and $list2. If there are multiple differences in the data structure, is_deeply() will display only the first. Additionally, if one of the data structures is missing an element, is_deeply() will show that as well.
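For example, this sketch (with invented data) fails because the first structure lacks an element that the second has; the diagnostic reports that the missing element does not exist:

```perl
use Test::More tests => 1;

my $got      = [ 1, 2 ];
my $expected = [ 1, 2, 3 ];

# fails: $got->[2] does not exist, but $expected->[2] is '3'
is_deeply( $got, $expected, 'same number of elements' );
```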

What about...

Q: How do I see the differences, but not the similarities, between data structures in my test output?

A: Test::Differences exports a function, eq_or_diff(), that shows a Unix diff-like output for data structures. differences.t is a modified version of the previous test file that uses this function.

    use Test::More tests => 1;
    use Test::Differences;

    my $list1 = [
      [
          [ 48, 12 ],
          [ 32, 10 ],
      ],
      [
          [ 03, 28 ],
      ],
    ];

    my $list2 = [
      [
          [ 48, 12 ],
          [ 32, 11 ],
      ],
      [
          [ 03, 28 ],
      ],
    ];

    eq_or_diff( $list1, $list2, 'a tale of two references' );

Running the file with prove produces the following output. Diagnostic lines beginning and ending with an asterisk (*) mark where the data structures differ.

    $ prove -v differences.t
    differences....1..1
    not ok 1 - a tale of two references
    #     Failed test (differences.t at line 24)
    # +----+-----------+-----------+
    # | Elt|Got        |Expected   |
    # +----+-----------+-----------+
    # |   0|[          |[          |
    # |   1|  [        |  [        |
    # |   2|    [      |    [      |
    # |   3|      48,  |      48,  |
    # |   4|      12   |      12   |
    # |   5|    ],     |    ],     |
    # |   6|    [      |    [      |
    # |   7|      32,  |      32,  |
    # *   8|      10   |      11   *
    # |   9|    ]      |    ]      |
    # |  10|  ],       |  ],       |
    # |  11|  [        |  [        |
    # |  12|    [      |    [      |
    # |  13|      3,   |      3,   |
    # |  14|      28   |      28   |
    # |  15|    ]      |    ]      |
    # |  16|  ]        |  ]        |
    # |  17|]          |]          |
    # +----+-----------+-----------+
    # Looks like you failed 1 tests of 1.
    dubious
      Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 1
      Failed 1/1 tests, 0.00% okay
    Failed 1/1 test scripts, 0.00% okay. 1/1 subtests failed, 0.00% okay.
    Failed Test   Stat Wstat Total Fail  Failed  List of Failed
    --------------------------------------------------------------------
    differences.t    1   256     1    1 100.00%  1

Q: How do I compare two strings, line-by-line?

A: Test::Differences shows the difference between multiline strings with its eq_or_diff() function. The following example tests the equality of two multiline strings using eq_or_diff(). Save it as strings.t:

    use Test::More tests => 1;
    use Test::Differences;

    my $string1 = <<"END1";
    Lorem ipsum dolor sit
    amet, consectetuer
    adipiscing elit.
    END1

    my $string2 = <<"END2";
    Lorem ipsum dolor sit
    amet, facilisi
    adipiscing elit.
    END2

    eq_or_diff( $string1, $string2, 'are they the same?' );

Running it with prove produces the following output:

    $ prove -v strings.t
    strings....1..1
    not ok 1 - are they the same?
    #     Failed test (strings.t at line 16)
    # +---+------------------------+------------------------+
    # | Ln|Got                     |Expected                |
    # +---+------------------------+------------------------+
    # |  1|Lorem ipsum dolor sit   |Lorem ipsum dolor sit   |
    # *  2|amet, consectetuer      |amet, facilisi          *
    # |  3|adipiscing elit.        |adipiscing elit.        |
    # +---+------------------------+------------------------+
    # Looks like you failed 1 tests of 1.
    dubious
      Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 1
      Failed 1/1 tests, 0.00% okay
    Failed 1/1 test scripts, 0.00% okay. 1/1 subtests failed, 0.00% okay.
    Failed Test Stat Wstat Total Fail  Failed  List of Failed
    ---------------------------------------------------------------------
    strings.t      1   256     1    1 100.00%  1

The diagnostics resemble those from differences.t, with differing lines in the multiline string marked with asterisks.

Q: How do I compare binary data?

A: It’s useful to see escape sequences of some sort in the differences, which is precisely what the Test::LongString module does. Test::LongString provides a handful of useful functions for comparing and testing strings that are not in plain text or are especially long.

Modify strings.t to use the is_string() function, and save it as longstring.t:

    use Test::More tests => 1;
    use Test::LongString;

    my $string1 = <<"END1";
    Lorem ipsum dolor sit
    amet, consectetuer
    adipiscing elit.
    END1

    my $string2 = <<"END2";
    Lorem ipsum dolor sit
    amet, facilisi
    adipiscing elit.
    END2

    is_string( $string1, $string2, 'are they the same?' );

Run longstring.t using prove to see the following:

Note

Test::LongString also exports other handy string-testing functions that produce similar diagnostic output. See the module’s documentation for more information.

    $ prove -v longstring.t
    longstring....1..1
    not ok 1 - are they the same?
    #     Failed test (longstring.t at line 16)
    #          got: "Lorem ipsum dolor sit\x{0a}amet, consectetuer\x{0a}adipisc"...
    #       length: 61
    #     expected: "Lorem ipsum dolor sit\x{0a}amet, facilisi\x{0a}adipiscing "...
    #       length: 57
    #     strings begin to differ at char 23
    # Looks like you failed 1 tests of 1.
    dubious
      Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 1
      Failed 1/1 tests, 0.00% okay
    Failed 1/1 test scripts, 0.00% okay. 1/1 subtests failed, 0.00% okay.
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    ----------------------------------------------------------------------
    longstring.t    1   256     1    1 100.00%  1

The diagnostic output from Test::LongString’s is_string() escapes nonprinting characters (\x{0a}), shows the length of each string (61 and 57), and shows the position of the first different character.

Note

\x{0a} is one way to represent the newline character.

Data Composition

As the data structures your code uses become more complex, so will your tests. It’s important to verify what actually makes up a data structure instead of simply comparing it to an existing structure. You could iterate through each level of a complex nested hash of arrays, checking each and every element. Fortunately, the Test::Deep module neatens up code testing complicated data structures and provides sensible error messages.

How do I do that?

Save the following as cmp_deeply.t:

    use Test::More tests => 1;
    use Test::Deep;

    my $points =
    [
        { x => 50, y =>  75 },
        { x => 19, y => -29 },
    ];

    my $is_integer = re('^-?\d+$');

    cmp_deeply( $points,
      array_each(
        {
          x => $is_integer,
          y => $is_integer,
        }
      ),
            'both sets of points should be integers' );

Now run cmp_deeply.t from the command line with prove. It will show one successful test:

    $ prove cmp_deeply.t
    cmp_deeply....ok
    All tests successful.
    Files=1, Tests=1,  0 wallclock secs ( 0.06 cusr +  0.00 csys =  0.06 CPU)

What just happened?

cmp_deeply(), like most other testing functions, accepts two or three arguments: the data structure to test, what you expect the structure to look like, and an optional test description. The expected data, however, is a special test structure with a format containing special Test::Deep functions.

The test file begins by creating a regular expression using re(), a function exported by Test::Deep. re() declares that the data must match the given regular expression. If you use a regular expression reference instead, Test::Deep believes you expect the data to be a regular expression instead of matching the data against it.

Note

re() also lets you perform checks on the data it matches. See the Test::Deep documentation for details.
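To make the distinction concrete, here is a small sketch (the data is invented): re() matches the data against a pattern, whereas a bare qr// reference would assert that the data itself is that regex.

```perl
use Test::More tests => 1;
use Test::Deep;

# re() matches the value against the pattern; passing qr/^\d+$/
# directly would instead expect the value to *be* that regex
cmp_deeply( { id => '42' }, { id => re('^\d+$') }, 'id is numeric' );
```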

Test::Deep’s array_each() function creates the main test structure for the test. To pass the test, $points must be an array reference. Every element of the array must validate against the test structure passed to array_each().

Passing a hash reference as the test structure declares that every element must be a hash reference and the values of the given hash must match the values in the test structure’s hash. In cmp_deeply.t, the hash contains only two keys, x and y, so the given hash must contain only those keys. Additionally, both values must match the regular expression created with re().

Test::Deep’s diagnostics are really useful with large data structures. Change $points so that the y value of the second hash is the letter "Q", which is invalid according to the provided test structure. Save it as cmp_deeply2.t:

    use Test::More tests => 1;
    use Test::Deep;

    my $points =
    [
        { x => 50, y =>  75 },
        { x => 19, y => 'Q' },
    ];

    my $is_integer = re('^-?\d+$');

    cmp_deeply( $points,
      array_each(
        {
          x => $is_integer,
          y => $is_integer,
        }
      )
    );

Now run cmp_deeply2.t with prove -v. The cmp_deeply() function will fail with the following diagnostic:

    $ prove -v cmp_deeply2.t
    cmp_deeply2....#     Failed test (cmp_deeply2.t at line 11)
    # Using Regexp on $points->[1]{"y"}
    #    got : 'Q'
    # expect : (?-xism:^-?\d+$)
    # Looks like you failed 1 tests of 1.
    dubious
        Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 1
        Failed 1/1 tests, 0.00% okay
    Failed 1/1 test scripts, 0.00% okay. 1/1 subtests failed, 0.00% okay.
    Failed Test Stat Wstat Total Fail  Failed  List of Failed
    ----------------------------------------------------------------------------
    cmp_deeply2.t  1   256     1    1 100.00%  1

The failure diagnostic shows the exact part of the data structure that failed and explains that the value Q doesn’t match the regular expression $is_integer.

What about...

Q: What if some values in the data structure may change?

A: To ignore a specific value, use the ignore() function in place of the regular expression. The following example still ensures that each hash in the array has both x and y keys, but doesn’t check the value of y:

    array_each(
      {
        x => $is_integer,
        y => ignore(),
      }
    );

Q: What if some keys in the data structure may change?

A: Suppose that you want to make sure that each hash contains at least the keys x and y. The superhashof() function ensures that the keys and values of the structure’s hash appear in the given hash, but allows the given hash to contain other keys and values:

    array_each(
      superhashof(
        {
          x => $is_integer,
          y => ignore(),
        }
      )
    );

Note

Think of sets, supersets, and subsets.

Similarly, Test::Deep’s subhashof() function ensures that a given hash may contain some or all of the keys given in the test structure’s hash, but no others.
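As a sketch (the preferences hash here is invented), subhashof() accepts a hash that supplies only some of the allowed keys, but flags any key outside the test structure:

```perl
use Test::More tests => 1;
use Test::Deep;

my %prefs = ( color => 'blue' );    # no 'size' key, and that's fine

cmp_deeply(
    \%prefs,
    subhashof(
        {
            color => 'blue',
            size  => 'large',
        }
    ),
    'only known preference keys appear'
);
```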

Q: How do I check the contents of an array when I can’t predict the order of the elements?

A: Test::Deep provides a bag() function that does exactly this. Save the following as bag.t:

    use Test::More tests => 1;
    use Test::Deep;

    my @a = ( 4, 89, 2, 7, 1 );

    cmp_deeply( \@a, bag( 1, 2, 4, 7, 89 ) );

Run bag.t to see that it passes the test. The bag() function is so common in test files that Test::Deep provides a cmp_bag() function. You can also write bag.t as follows:

    use Test::More tests => 1;
    use Test::Deep;

    my @a = ( 4, 89, 2, 7, 1 );

    cmp_bag( \@a, [ 1, 2, 4, 7, 89 ] );

Where to learn more

This section is only a brief overview of the Test::Deep module, which provides further comparison functions for testing objects, methods, sets (unordered arrays with unique elements), booleans, and code references. For more information, see the Test::Deep documentation.
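As a taste of those extras, the set() function works like bag() but also ignores duplicate elements; a minimal sketch:

```perl
use Test::More tests => 1;
use Test::Deep;

# order and duplication are both irrelevant to set()
cmp_deeply( [ 2, 2, 1, 3 ], set( 1, 2, 3 ), 'same unique elements' );
```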

Testing Warnings

The only parts of your code that don’t need tests are those parts that you don’t need. If your code produces warnings in certain circumstances and they’re important to you, you need to test that they occur when and only when you expect them. The Test::Warn module provides helpful test functions to trap and examine warnings.

How do I do that?

Save the following code as warnings.t:

    use Test::More tests => 4;
    use Test::Warn;

    sub add_positives
    {
        my ( $l, $r ) = @_;
        warn "first argument ($l) was negative"  if $l < 0;
        warn "second argument ($r) was negative" if $r < 0;
        return $l + $r;
    }

    warning_is { is( add_positives( 8, -3 ), 5 ) }
        "second argument (-3) was negative";

    warnings_are { is( add_positives( -8, -3 ), -11 ) }
        [
            'first argument (-8) was negative',
            'second argument (-3) was negative'
        ];

Note

There are no commas between the first and second arguments to any of Test::Warn’s test functions because their prototypes turn normal-looking blocks into subroutine references.

Run the file with prove to see the following output:

    $ prove -v warnings.t
    warnings....1..4
    ok 1
    ok 2
    ok 3
    ok 4
    ok
    All tests successful.
    Files=1, Tests=4,  0 wallclock secs ( 0.04 cusr +  0.00 csys =  0.04 CPU)

What just happened?

The test file declares and tests a trivial function, add_positives(). The function adds two numbers together and warns the user if either number is less than zero.

warning_is() takes a block of code to run and the text of the warning expected. Like most other test functions, it takes an optional third argument as the test description. Passing two less-than-zero arguments to add_positives() causes the subroutine to produce two warnings. To test for multiple warnings, use Test::Warn’s warnings_are(). Instead of a single string, warnings_are() takes a reference to an array of complete warning strings as its second argument.

What about...

Q: What if the warning I’m trying to match isn’t an exact string?

A: Test::Warn also exports warning_like(), which accepts a regular expression reference instead of a complete string. Similarly, the warnings_like() function takes an anonymous array of regular expression references instead of just a single one. You can shorten warnings.t by using these functions:

    use Test::More tests => 4;
    use Test::Warn;

    sub add_positives
    {
        my ( $l, $r ) = @_;
        warn "first argument ($l) was negative"  if $l < 0;
        warn "second argument ($r) was negative" if $r < 0;
        return $l + $r;
    }

    warning_like { is( add_positives( 8, -3 ), 5 ) } qr/negative/;

    warnings_like { is( add_positives( -8, -3 ), -11 ) }
        [ qr/first.*negative/, qr/second.*negative/ ];

Q: What if I want to assert that no warnings occur in a specific block?

A: That’s a good test for when add_positives() adds two natural numbers. To ensure that a block of code produces no warnings, use Test::Warn’s warnings_are() and provide an empty anonymous array:

    warnings_are { is( add_positives( 4, 3 ), 7 ) } [  ];

Q: What if I want to make sure my tests don’t produce any warnings at all?

A: Use the Test::NoWarnings module, which keeps watch for any warnings produced while the tests run. Test::NoWarnings adds an extra test at the end that ensures that no warnings have occurred.

The following listing, nowarn.t, tests the add_positives() function and uses Test::NoWarnings. Note that the test count has changed to accommodate the extra test:

    use Test::More tests => 3;
    use Test::NoWarnings;

    sub add_positives
    {
        my ( $l, $r ) = @_;
        warn "first argument ($l) was negative"  if $l < 0;
        warn "second argument ($r) was negative" if $r < 0;
        return $l + $r;
    }

    is( add_positives( 4,  6 ), 10 );
    is( add_positives( 8, -3 ),  5 );

The second test produces a warning, which Test::NoWarnings catches and remembers. When run, the test diagnostics show any warnings that occurred and the most recently run tests.

    nowarn....1..3
    ok 1
    ok 2
    not ok 3 - no warnings
    #     Failed test (/usr/local/stow/perl-5.8.6/lib/5.8.6/Test/NoWarnings.pm
                    at line 45)
    # There were 1 warning(s)
    #         Previous test 1 ''
    #         second argument (-3) was negative at nowarn.t line 7.
    #  at nowarn.t line 7
    #         main::add_positives(8, -3) called at nowarn.t line 12
    #
    # Looks like you failed 1 tests of 3.
    dubious
        Test returned status 1 (wstat 256, 0x100)
    DIED. FAILED test 3
        Failed 1/3 tests, 66.67% okay
    Failed 1/1 test scripts, 0.00% okay. 1/3 subtests failed, 66.67% okay.
    Failed Test Stat Wstat Total Fail  Failed  List of Failed
    --------------------------------------------------------------------
    nowarn.t       1   256     3    1  33.33%  3

Testing Exceptions

Sometimes things go wrong. That’s okay; sometimes the best thing to do in code that detects an unrecoverable error is to pitch a fit and let higher-level code figure out what to do. If you do that, though, you need to test that behavior. As usual, there’s a module to make this easy. Test::Exception provides the functions to test that a block of code throws (or doesn’t throw) the exceptions that you expect.

How do I do that?

Suppose that you’re happy with add_positives() from “Testing Warnings,” earlier in this chapter, but your coworkers can’t seem to use it correctly. They happily pass in negative numbers and ignore the warnings, and then blame you when their code fails to work properly. Your team lead has suggested that you strengthen the function to hate negative numbers—so much so that it throws an exception if it encounters one. How can you test that?

Save the following listing as exception.t:

    use Test::More tests => 3;
    use Test::Exception;
    use Error;

    sub add_positives
    {
        my ($l, $r) = @_;
        throw Error::Simple("first argument ($l) was negative")  if $l < 0;
        throw Error::Simple("second argument ($r) was negative") if $r < 0;
        return $l + $r;
    }

    throws_ok { add_positives( -7,  6 ) } 'Error::Simple';
    throws_ok { add_positives(  3, -9 ) } 'Error::Simple';
    throws_ok { add_positives( -5, -1 ) } 'Error::Simple';

Note

There are no commas between the first and second arguments to any of Test::Exception’s test functions.

Run the file with prove:

    $ prove -v exception.t
    exception....1..3
    ok 1 - threw Error::Simple
    ok 2 - threw Error::Simple
    ok 3 - threw Error::Simple
    ok
    All tests successful.
    Files=1, Tests=3,  0 wallclock secs ( 0.03 cusr +  0.00 csys =  0.03 CPU)

What just happened?

The call to throws_ok() ensures that add_positives() throws an exception of type Error::Simple. throws_ok() performs an isa() check on the exceptions it catches, so you can alternatively specify any superclass of the exception thrown. For example, because exceptions inherit from the Error class, you can replace all occurrences of Error::Simple in exception.t with Error.
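throws_ok() can also match the text of an exception rather than its class: pass a regular expression reference as the second argument. This sketch uses a plain die() and an invented divide() helper:

```perl
use Test::More tests => 1;
use Test::Exception;

# an invented helper that throws a plain string exception
sub divide
{
    my ( $n, $d ) = @_;
    die "division by zero\n" if $d == 0;
    return $n / $d;
}

# a regex as the second argument matches the exception's text
throws_ok { divide( 1, 0 ) } qr/division by zero/,
    'zero denominator throws';
```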

What about...

Q: How can you ensure that code doesn’t throw any exceptions at all?

A: Use Test::Exception’s lives_ok() function.

To ensure that add_positives() does not throw an exception when given natural numbers, add an extra test to assert that add_positives() throws no exceptions:

    use Test::More tests => 4;
    use Test::Exception;
    use Error;

    sub add_positives
    {
        my ($l, $r) = @_;
        throw Error::Simple("first argument ($l) was negative")  if $l < 0;
        throw Error::Simple("second argument ($r) was negative") if $r < 0;
        return $l + $r;
    }

    throws_ok { add_positives( -7,  6 ) } 'Error::Simple';
    throws_ok { add_positives(  3, -9 ) } 'Error::Simple';
    throws_ok { add_positives( -5, -1 ) } 'Error::Simple';
    lives_ok  { add_positives(  4,  6 ) } 'no exception here!';

If the block throws an exception, lives_ok() will produce a failed test. Otherwise, the test will pass.
