Chapter 9. Testing Everything Else

As pleasant as it might be to believe otherwise, there’s a whole world outside of Perl. Fortunately, Perl works well with other programs and other languages, even to the point at which you can use them almost seamlessly from your Perl code.

Good testers don’t shy away from testing external code just because it seems difficult. You can use Perl’s nice testing libraries and the tricks you’ve learned so far even if you have to test code written in other languages or programs you can’t modify. Perl’s that flexible.

This chapter’s labs demonstrate how to test Perl programs that you can’t refactor into modules, how to test standalone programs, and how to test code that isn’t Perl at all.

Writing Testable Programs

Not every useful piece of Perl code fits in its own module. There’s a wealth of worthwhile code in scripts and programs. You know the rule: if it’s worth using, it’s worth testing. How do you test them? Write them to be as testable as possible.

Note

Simple, well-factored code is easier to test in isolation. Improving the design of your code is just one of the benefits of writing testable code.

How do I do that?

Imagine that you have a program that applies filters to files given on the command line, sorting and manipulating them before printing them. Save the following file as filefilter.pl:

    #!perl
    
    use strict;
    use warnings;
    
    main( @ARGV ) unless caller();
    
    sub main
    {
        die "Usage:
$0 <command> [file_pattern]
" unless @_;
    
        my $command     = shift;
        my $command_sub = main->can( "cmd_$command" );
        die "Unknown command '$command'
" unless $command_sub;
    
        print join( "
", $command_sub->( @_ ) );
    }
    
    sub sort_by_time
    {
        map  { $_->[0] }
        sort { $a->[1] <=> $b->[1] }
        map  { [ $_, -M $_ ] } @_
    }
    
    sub cmd_latest
    {
        ( sort_by_time( @_ ) )[0];
    }
    
    sub cmd_dirs
    {
        grep { -d $_ } @_;
    }
    
    # return true
    1;

Testing this properly requires having some test files in the filesystem or mocking Perl’s file access operators (“Overriding Built-ins” in Chapter 5). The former is easier. Save the following program as make_test_files.pl:

Note

filefilter.pl ends with “1;” so that the require() will succeed. See perldoc -f require to learn more.

    #!perl
    
    use strict;
    use warnings;
    
    use Fatal qw( mkdir open close );
    use File::Spec::Functions;
    
    mkdir( 'music_history' ) unless -d 'music_history';
    
    for my $subdir (qw( handel vivaldi telemann ))
    {
        my $dir = catdir( 'music_history', $subdir );
        mkdir( $dir ) unless -d $dir;
    }
    
    sleep 1;
    
    for my $period (qw( baroque classical ))
    {
        open( my $fh, '>', catfile( 'music_history', $period ));
        print $fh '18th century';
        close $fh;
        sleep 1;
    }

Save the following test as test_filefilter.t:

    #!perl
    
    use strict;
    use warnings;
    
    use Test::More tests => 5;
    use Test::Exception;
    
    use File::Spec::Functions;
    
    ok( require( 'filefilter.pl' ), 'loaded file okay' ) or exit;
    
    throws_ok { main() } qr/Usage:/,
        'main() should give a usage error without any arguments';
    
    throws_ok { main( 'bad command' ) } qr/Unknown command 'bad command'/,
        '... or with a bad command given';
    
    my @directories =
    (
        'music_history',
        map { catdir( 'music_history', $_ ) } qw( handel vivaldi telemann )
    );
    
    my @files = map { catfile( 'music_history', $_ ) } qw( baroque classical );
    
    is_deeply( [ cmd_dirs( @directories, @files ) ], \@directories,
        'dirs command should return only directories' );
    
    is( cmd_latest( @files ), catfile(qw( music_history classical )),
        'latest command should return most recently modified file' );

Note

Baroque preceded Classical, of course.

Run make_test_files.pl and then run test_filefilter.t with prove:

    $ prove test_filefilter.t
    test_filefilter....ok
    All tests successful.
    Files=1, Tests=5,  0 wallclock secs ( 0.08 cusr +  0.02 csys =  0.10 CPU)

What just happened?

The problem with testing Perl programs that expect to run directly from the command line is loading them in the test file without actually running them. The strange first code line of filefilter.pl accomplishes this. The caller() operator returns information about the code that called the currently executing code. When run directly from the command line, there’s no caller information, and the program passes its arguments to the main() subroutine. When run from the test script, the program has caller information, so it does nothing.
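
You can see both behaviors from the command line. Run directly with no arguments, the program reaches main() and dies with its usage message; loaded with require(), it only defines its subroutines:

    $ perl filefilter.pl
    Usage:
    filefilter.pl <command> [file_pattern]

    $ perl -e 'require "filefilter.pl"'
    $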

The rest of the program is straightforward.

The test file requires the presence of some files and directories to test against. Normally, creating test data from within the test itself works, but in this case, part of the filter program relies on Perl’s behavior when checking the last modification time of a file. Because Perl reports this time relative to the time at which the test started, it’s much easier to create these files before running the test. Normally, this might be part of the build step. Here, it’s a separate program: make_test_files.pl. The sleep line attempts to ensure that enough time passes between the Baroque and the Classical periods that the filesystem can tell their creation times apart.[2]
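
Here’s a minimal sketch of the behavior that sort_by_time() depends on: -M reports a file’s age in days, measured from $^T, the moment the program started. The values below are negative because both files are newer than the program’s start time, but the older file still has the larger value:

    #!perl
    
    use strict;
    use warnings;
    
    # create two files roughly one second apart
    open( my $fh, '>', 'first'  ) or die "Cannot create 'first': $!";
    close $fh;
    sleep 1;
    open( $fh, '>', 'second' )    or die "Cannot create 'second': $!";
    close $fh;
    
    # one second is about 1/86400, or 0.0000116, of a day
    printf "first:  %.7f days\n", -M 'first';     # the larger value
    printf "second: %.7f days\n", -M 'second';    # the smaller value
    
    unlink 'first', 'second';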

The test uses require() to load the program. Test::More::require_ok() is inappropriate here because it expects to load modules, not programs. The rest of the test is straightforward.

Note

The test is incomplete, though; how would you test the printing behavior of main()?
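
One possible answer, sketched here under the assumption that the music_history files from make_test_files.pl already exist: redirect STDOUT to an in-memory filehandle around the call to main(), then match against the captured text.

    # a hedged sketch, not part of the test file above
    my $output = '';
    {
        local *STDOUT;
        open( STDOUT, '>', \$output ) or die "Cannot redirect STDOUT: $!";
        main( 'dirs', @directories, @files );
    }
    like( $output, qr/music_history/, 'main() should print directory names' );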

What about...

Q: What if I run this code on a filesystem that can’t tell the difference between the modification times of baroque and classical?

A: That’s one purpose of the test. If the test fails, you might need to modify filefilter.pl to take that into account. Start by increasing the value of the sleep call in make_test_files.pl and see what the limits of your filesystem are.

Q: What if the program being tested calls exit() or does something otherwise scary?

A: Override it (see "Overriding Built-ins" in Chapter 5).
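
For example, here’s a hedged sketch that traps exit() before loading a program. The file exiting_program.pl is hypothetical; imagine that its main() calls exit() somewhere along the way.

    #!perl
    
    use strict;
    use warnings;
    
    use Test::More tests => 1;
    
    # install the override before the tested code is compiled
    BEGIN { *CORE::GLOBAL::exit = sub { die "caught exit(@_)\n" } }
    
    require 'exiting_program.pl';    # hypothetical program that calls exit()
    
    eval { main() };
    like( $@, qr/^caught exit/, 'main() should try to exit' );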

Q: When would you do this instead of running filefilter.pl as a separate program (see “Testing Programs,” next)?

A: This technique makes it easier to test the program’s internals. Running it as a separate program means that your test has to treat the entire program as a black box. Note that the test here doesn’t have to parse the program’s output; it handles the list returned from cmd_dirs(), and the scalar returned from cmd_latest() as normal Perl data structures.

Testing Programs

Perl’s a great glue language and there are a lot of other programs in the world worth gluing together—or at least using from your own programs. Maybe your project relies on the behavior of other programs not under your control. That makes them worth testing. Maybe your job is testing, and you’ve realized that Perl and its wealth of testing libraries would be nice to have to test code written in other languages.

Whatever your motivation, Perl is perfectly capable of testing external programs. This lab shows how.

If you have one program on your machine to run all of the examples in this book, it’s the Perl executable itself. That makes it a great candidate to test, especially for things you can’t really test from within Perl. For example, the Perl core has its own test suite. How does it test Perl’s command-line flags that print messages and exit? How does it test whether bad code produces the correct fatal compiler warnings? It runs a fresh Perl instance and examines its output.

You can do the same.

Note

See _fresh_perl() and _fresh_perl_is() in t/test.pl in the Perl source code.

How do I do that?

Save the following test file as perl_exit.t:

    #!perl
    
    use strict;
    use warnings;
    
    use IPC::Run 'run';
    use Test::More tests => 7;
    
    my ($out, $err) = runperl( '-v' );
    like($out, qr/This is perl/, '-v should print short version message'      );
    is(  $err, '',               '... and no error'                           );
    
    ($out, $err)    = runperl( '-V' ) ;
    like($out, qr/Compiled at/,  '-V should print extended version message'   );
    is(  $err, '',               '... and no error'                           );
    
    ($out, $err)    = runperl(qw( -e x++ ));
    like( $err, qr/Can't modify constant.+postincrement/,
                                 'constant modification should die with error' );
    like( $err, qr/Execution.+aborted.+compilation errors/,
                                 '... aborting due to compilation errors'      );
    is(   $out, '',              '... writing nothing to standard output'      );
    
    sub runperl
    {
        run( [ $^X, @_ ], \my( $in, $out, $err ) );
        return ($out, $err);
    }

Note

The special variable $^X contains the path to the currently running Perl executable. It comes up often in testing.

Run the test file with prove:

    $ prove perl_exit.t
    perl_exit....ok
    All tests successful.
    Files=1, Tests=7,  1 wallclock secs ( 0.28 cusr +  0.05 csys =  0.33 CPU)

What just happened?

The IPC::Run module provides a simple and effective cross-platform way to run external programs and collect what they write to standard output and standard error.

The test file defines a subroutine called runperl() to abstract away and encapsulate all of the IPC::Run code. It calls run() with four arguments. The first argument is an array reference of the program to run—here always $^X—and its command-line options. The other arguments are references to three scalar variables to use for the launched program’s STDIN, STDOUT, and STDERR. runperl() returns only the last two variables, which IPC::Run has helpfully filled with the output of the program.

Note

None of the tests yet need to pass anything to the launched program, so returning $in is useless.
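
If a later test did need to drive the program’s standard input, the same run() call supports it. A minimal sketch: with no -e switch, perl reads its program from STDIN.

    use IPC::Run 'run';
    
    my $in = 'print "hello"';
    run( [ $^X ], \$in, \my $out, \my $err );
    print $out;    # hello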

Each set of tests starts by calling runperl() with the arguments to use when running Perl. The first run performs the equivalent of:

    $ perl -v
    
    This is perl, v5.8.6 built for powerpc-linux
    
    Copyright 1987-2004, Larry Wall
    
    Perl may be copied only under the terms of either the Artistic License or the
    GNU General Public License, which may be found in the Perl 5 source kit.
    
    Complete documentation for Perl, including FAQ lists, should be found on
    this system using 'man perl' or 'perldoc perl'.  If you have access to the
    Internet, point your browser at http://www.perl.org/, the Perl Home Page.

The tests check to see that the entire message goes out to standard output, with nothing going to standard error.

The second set of tests uses Perl’s -V, or verbose, flag to display an even longer version message, which includes information about the compile-time characteristics of Perl as well as the contents of @INC.

Note

Try perl -V yourself. It’s a lot of output.

Finally, the last set of tests exercises Perl’s handling of an error, specifically leaving the sigil off of a variable. This test is equivalent to the one-liner:

    $ perl -e "x++"
    Can't modify constant item in postincrement (++) at -e line 1, near "x++"
    Execution of -e aborted due to compilation errors.

All of this output should go to standard error, not standard output. The final test in this set ensures that.

What about...

Q: Are there any modules that integrate this with Test::Builder for me?

A: Test::Cmd and Test::Cmd::Common have many features, but they also have complex interfaces. They may work best for large or complicated test suites.

Testing Interactive Programs

Unfortunately for testers, lots of useful programs are more than modules, well-factored Perl programs, or shared libraries. They have user interfaces, take input from the keyboard, and even produce output to the screen.

It may seem daunting to figure out how to mock all of the inputs and outputs to test the program. Fortunately, there’s a solution. Test::Expect allows you to run external programs, feeding them input and checking their output, all within your test files.

How do I do that?

Think back to your early programming days, when the canonical example of accepting user input was building a calculator. In Perl, you may have written something like simplecalc.pl:

    #!perl
    
    use strict;
    use warnings;
    
    print "> ";
    
    while (<>)
    {
        chomp;
        last unless $_;
    
        my ($command, @args) = split( /\s+/, $_ );
    
        my $sub;
        unless ($sub = __PACKAGE__->can( $command ))
        {
            print "Unknown command '$command'\n> ";
            next;
        }
    
        $sub->(@args);
        print "> ";
    }
    
    sub add
    {
        my $result = 0;
    
        $result += $_ for @_;
        print join(" + " , @_ ), " = $result
";
    }
    
    sub subtract
    {
        my $result = shift;
    
        print join(" - " , $result, @_ );
    
        $result -= $_ for @_;
        print " = $result
";
    }

Save the file and play with it. Enter the commands add or subtract, followed by multiple numbers. It will perform the appropriate operation and display the results. If you give an invalid command, it will report an error. Enter a blank line to quit.
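
A session might look something like this:

    $ perl simplecalc.pl
    > add 1 2 3
    1 + 2 + 3 = 6
    > subtract 10 4
    10 - 4 = 6
    > weird magic
    Unknown command 'weird'
    > 
    $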

It’s tempting to test this program with the technique shown earlier in “Writing Testable Programs,” but the loop is central to the program and difficult to test. Alternately, what if your assignment were to write this code in another language? Fortunately, the same testing technique works for both possibilities.

Save the following test file as testcalc.t:

    #!perl
    
    use strict;
    use Test::More tests => 7;
    use Test::Expect;
    
    expect_run(
        command => "$^X simplecalc.pl",
        prompt  => '> ',
        quit    => "
",
    );
    
    expect(    'add 1 2 3',    '1 + 2 + 3 = 6',  'adding three numbers'       );
    expect_send('subtract 1 2 3',                'subtract should work'       );
    expect_is(  '1 - 2 - 3 = -4',                '... producing good results' );
    expect_send('weird magic',                   'not dying on bad input'     );
    expect_like(qr/Unknown command 'weird/,      '... but giving an error'    );

Run it from the directory containing simplecalc.pl:

    $ prove testcalc.t
    testcalc....ok
    All tests successful.
    Files=1, Tests=7,  0 wallclock secs ( 0.27 cusr +  0.02 csys =  0.29 CPU)

What just happened?

The test file begins with a call to expect_run() to tell Test::Expect about the program to automate. The command argument provides the command to launch the program. In this case, it needs to launch simplecalc.pl with the currently executing Perl binary ($^X). The program’s prompt is "> ", which helps the module know when the program awaits input. Finally, the quit argument contains the sequence to end the program.

Note

Test::Expect works like the Expect automation tool, which also has Perl modules in the form of Expect.pm and Expect::Simple.

The first test calls expect(), passing the command to send to the program and the output expected from the program. If those match, the test passes—actually twice, once for being able to send the data to the program correctly and the second time for the actual results matching the expected results.

The next test uses expect_send() to send data to the program. Though there’s nothing to match, this test passes if the program accepts the input and returns a prompt.

Now that the program has sent some data, the test can check the results of the last operation by calling expect_is() to match the expected data directly. It works just like Test::More::is(), except that it takes the received data from the program run through Test::Expect, not from an argument to the function.

The expect_like() function is similar. It applies a regular expression to the data returned from the last operation performed by the program.

What about...

Q: That’s pretty simple, but I need to use more prompts and handle potential errors. What can I do?

A: Test::Expect uses Expect::Simple internally. The latter module provides more options to drive external programs. You may have to use Test::More::is() and Test::More::like() to perform comparisons, but Expect::Simple handles the messy work of connecting to and driving an external program.
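
Here’s a hedged sketch of that approach, assuming Expect::Simple’s documented Cmd, Prompt, and DisconnectCmd options:

    #!perl
    
    use strict;
    use warnings;
    
    use Test::More tests => 1;
    use Expect::Simple;
    
    my $calc = Expect::Simple->new({
        Cmd           => "$^X simplecalc.pl",
        Prompt        => '> ',
        DisconnectCmd => "\n",
        Timeout       => 10,
    });
    
    $calc->send( 'add 1 2 3' );
    
    # before() holds everything the program printed before the next prompt
    like( $calc->before, qr/1 \+ 2 \+ 3 = 6/, 'add should sum its arguments' );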

Testing Shared Libraries

Here’s a secret: Perl’s testing modules aren’t just good for testing Perl. They can test anything you can call from Perl. With a little bit of help from a few other modules, it’s easy to test shared libraries—compiled C code, for example—as if it were normal Perl code.

Note

You must have the Inline::C module installed and you must have a C compiler available and configured.

How do I do that?

Suppose that you want to test your C math library, libm. Specifically, you need to exercise the behavior of the fmax() and fmin() functions, which find the maximum or minimum of two floating point values, respectively. Save the following code as test_libmath.t:

    #!perl
    
    BEGIN
    {
            chdir 't' if -d 't';
    }
    
    use strict;
    use warnings;
    use Test::More tests => 6;
    
    use Inline C =>
            Config   =>
                    LIBS   => '-lm',
                    ENABLE => 'AUTOWRAP'
    ;
    
    Inline->import( C => <<END_HEADERS );
            double fmax( double, double );
            double fmin( double, double );
    END_HEADERS
    
    is( fmax(  1.0,  2.0 ),  2.0, 'fmax() should find maximum of two values'  );
    is( fmax( -1.0,  1.0 ),  1.0, '... and should handle one negative'        );
    is( fmax( -1.0, -7.0 ), -1.0, '... or two negatives'                     );
    is( fmin(  9.3,  1.7 ),  1.7, 'fmin() should find minimum of two values' );
    is( fmin(  2.0, -1.0 ), -1.0, '... and should handle one negative'       );
    is( fmin( -1.0, -6.0 ), -6.0, '... or two negatives'                     );

Run the tests with prove:

    $ prove test_libmath.t
    test_libmath....ok
    All tests successful.
    Files=1, Tests=6,  0 wallclock secs ( 0.17 cusr +  0.01 csys =  0.18 CPU)

What just happened?

The Inline::C module allows easy use of C code from Perl. It’s a powerful and simple way to build or to link to C code without writing Perl extension code by hand. The test starts as usual, changing to the t/ directory and declaring a plan. Then it uses Inline, passing some configuration data that tells the module to link against the m library (libm.so on Unix and Unix-like systems) and generate wrappers for C functions automatically.

Note

Inline::C caches compiled code in an _Inline/ directory. The test file changes to t/ to localize the cache in the test subdirectory.

The only C code necessary to make this work occurs in the import() call, which passes the function signatures of the C functions to wrap from the math library. When Inline processes this code, it writes and compiles some C code to create the wrappers from these functions, and then makes the wrappers available to the test as the functions fmax() and fmin().

The rest of the test file tests some of the boundary conditions for these two functions.
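
Inline::C isn’t limited to wrapping existing libraries, either. A minimal sketch: compile a small C function of your own and test it the same way. The clamp() function here is hypothetical, written just for this example.

    #!perl
    
    use strict;
    use warnings;
    
    use Test::More tests => 3;
    
    use Inline C => <<'END_C';
    double clamp( double value, double low, double high )
    {
        if (value < low)  return low;
        if (value > high) return high;
        return value;
    }
    END_C
    
    is( clamp(  5.0, 0.0, 10.0 ),  5.0, 'clamp() should pass values in range' );
    is( clamp( -3.0, 0.0, 10.0 ),  0.0, '... raising values below the range'  );
    is( clamp( 12.0, 0.0, 10.0 ), 10.0, '... and lowering values above it'    );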

What about...

Q: Does this work with other languages besides C?

A: There are Inline modules for various languages, including C++, Java, and PHP. The same or similar techniques work there too.

Q: Can I achieve the same thing by using XS or SWIG to generate bindings?

A: Absolutely. Inline is very easy for simple and moderate bindings, but it doesn’t do anything that you can’t do elsewhere.

Q: Can Inline handle passing and returning complex data structures such as C-structs?

A: Yes. See the Inline::C cookbook from the Inline distribution for examples.



[2] Sure, that’s 150 years of musical history, but computers don’t have much culture.
