The goal of testing in PowerShell is to ensure that code works as intended. Automated testing ensures that this continues to be the case as the code changes over time.
Testing often begins before code is ready to execute. PSScriptAnalyzer
can look at code and provide advice on possible best practices that help prevent common mistakes. PSScriptAnalyzer
uses what is known as static analysis.
Unit testing, the testing of the smallest units of code, starts when the code is ready to execute. Tests can be created before the code when following practices such as Test-Driven Development (TDD). A unit test focuses on the smallest parts of a module, the functions and classes. A unit test strives to validate the inner workings of a unit of code, ensuring that conditions evaluate correctly, that it terminates or returns where it should, and so on.
Testing might extend into systems and acceptance testing, although this often requires a test environment to act against. Acceptance testing may include black-box testing, used to verify that a command accepts known parameters and generates an expected set of results. Black-box testing, as the name suggests, does not concern itself with understanding how a block of code arrives at a result.
This chapter covers the following topics:
Before beginning with the main topics of the chapter, there are some technical requirements to consider.
The following modules are used in this chapter:
Static analysis is the process of evaluating code without executing it. PSScriptAnalyzer
uses static analysis.
In PowerShell, static analysis most often makes use of an Abstract Syntax Tree (AST): a tree-like representation of a piece of code in which each element of a script is represented by a node. The AST was introduced with PowerShell 3.
The largest element represents the script itself: the root of the tree, in effect. Each element added to the script is represented by a child node. For example, a parameter block is described by a ParamBlockAst
object, an individual parameter by a ParameterAst
, and so on.
Evaluating elements of the AST is the basis for many of the rules implemented in PSScriptAnalyzer
.
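For example, the node types described above can be confirmed directly from a script block:

```powershell
# Inspect the AST node types behind a simple param block
$ast = { param ( [string]$Name ) }.Ast

$ast.ParamBlock.GetType().Name                # ParamBlockAst
$ast.ParamBlock.Parameters[0].GetType().Name  # ParameterAst
```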
The PSScriptAnalyzer
module is used to run a series of rules against a file or string containing a script. You can install the tool using the following command:
Install-Module PSScriptAnalyzer
You can use PSScriptAnalyzer
to inspect a script with the Invoke-ScriptAnalyzer
command. For example, the tool will raise one error and one warning for the following script:
@'
[CmdletBinding()]
param (
    [Parameter(Mandatory)]
    [String]$Password
)

$credential = [PSCredential]::new(
    '.\user',
    ($Password | ConvertTo-SecureString -AsPlainText -Force)
)
$credential.GetNetworkCredential().Password
'@ | Set-Content Show-Password.ps1
When Invoke-ScriptAnalyzer
is run on the file, two rule violations are shown, one for the use of ConvertTo-SecureString
, and one for the $Password
parameter using plain text:
PS> Invoke-ScriptAnalyzer .\Show-Password.ps1 | Format-List
RuleName : PSAvoidUsingConvertToSecureStringWithPlainText
Severity : Error
Line : 9
Column : 18
Message : File 'Show-Password.ps1' uses ConvertTo-SecureString
with plaintext. This will expose secure information.
Encrypted standard strings should be used instead.
RuleName : PSAvoidUsingPlainTextForPassword
Severity : Warning
Line : 3
Column : 5
Message : Parameter '$Password' should use SecureString,
otherwise this will expose sensitive information. See
ConvertTo-SecureString for more information.
These are two of the many best-practice rules that you can use to test a script.
PSScriptAnalyzer
includes 64 default rules. Most of these rules are automatically evaluated when a script is analyzed. Several rules require configuration before they can be used; these are not enabled by default.
The following command shows the rules that must be explicitly configured before they can be applied:
Get-ScriptAnalyzerRule | Where-Object {
    $_.ImplementingType.BaseType.Name -eq 'ConfigurableRule'
}
Conversely, the rules that are evaluated by default (without extra configuration) are shown with the following command:
Get-ScriptAnalyzerRule | Where-Object {
    $_.ImplementingType.BaseType.Name -ne 'ConfigurableRule'
}
Configurable rules may be configured using either a settings file or a Hashtable that describes the configuration for a rule. The following example shows how to use the PSUseCorrectCasing
rule against a script read from a string:
$params = @{
    ScriptDefinition = 'get-process'
    Settings         = @{
        Rules = @{
            PSUseCorrectCasing = @{
                Enable = $true
            }
        }
    }
}
Invoke-ScriptAnalyzer @params
Once the configuration for the rule is added, the rule will execute according to the settings for that rule. The settings for the rule are documented in the PSScriptAnalyzer
repository. The document for PSUseCorrectCasing
is: https://github.com/PowerShell/PSScriptAnalyzer/blob/master/RuleDocumentation/UseCorrectCasing.md.
Note that the rule documentation omits the PS
prefix on the rule name. Several other rules, such as PlaceOpenBrace
require configuration in a similar manner.
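As a sketch, PSPlaceOpenBrace can be enabled in the same way as PSUseCorrectCasing, using the Enable and OnSameLine settings documented for that rule. The script definition here is an illustrative fragment with the open brace on its own line:

```powershell
# Analyze a short fragment with PSPlaceOpenBrace enabled; with
# OnSameLine set, a brace on its own line should be flagged
$params = @{
    ScriptDefinition = "if (`$true)`n{`n    'value'`n}"
    Settings         = @{
        Rules = @{
            PSPlaceOpenBrace = @{
                Enable     = $true
                OnSameLine = $true
            }
        }
    }
}
Invoke-ScriptAnalyzer @params
```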
PSScriptAnalyzer
includes several built-in settings files, which will tab complete when using the Settings
parameter. For example:
Invoke-ScriptAnalyzer .\Show-Password.ps1 -Settings CodeFormatting
The settings used by each are not documented in the module help, but the content of each file can be viewed in the module directory. You can use these files directly or as an example to build a customized settings file.
You can view the settings files shipped with PSScriptAnalyzer
on GitHub: https://github.com/PowerShell/PSScriptAnalyzer/tree/master/Engine/Settings.
It is sometimes hard to meet the requirements of all rules in PSScriptAnalyzer
. Rules can be suppressed globally in the settings file, or rules can be suppressed in code.
It is rarely realistic to expect any significant piece of code to pass all the tests that PSScriptAnalyzer
will throw at it.
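For example, a rule can be excluded for a single run with the ExcludeRule parameter, or with the ExcludeRules key in a settings hashtable (the same key is valid in a settings file). The following runs against the Show-Password.ps1 file created earlier; with the warning rule excluded, only the ConvertTo-SecureString error should remain:

```powershell
# Exclude a rule for a single run with the ExcludeRule parameter
Invoke-ScriptAnalyzer -Path .\Show-Password.ps1 -ExcludeRule PSAvoidUsingPlainTextForPassword

# Or exclude it using the ExcludeRules key of a settings hashtable
Invoke-ScriptAnalyzer -Path .\Show-Password.ps1 -Settings @{
    ExcludeRules = @('PSAvoidUsingPlainTextForPassword')
}
```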
Individual tests can be suppressed at the function, script, or class level. The following demonstrative function creates a PSCustomObject
:
@'
function New-Message {
    [CmdletBinding()]
    param (
        $Message
    )

    [PSCustomObject]@{
        Name  = 1
        Value = $Message
    }
}
'@ | Set-Content New-Message.ps1
Running PSScriptAnalyzer
against a file containing the function will show the following warning:
PS> Invoke-ScriptAnalyzer -Path .\New-Message.ps1 | Format-List
RuleName : PSUseShouldProcessForStateChangingFunctions
Severity : Warning
Line : 1
Column : 10
Message : Function 'New-Message' has verb that could change
system state. Therefore, the function has to support
'ShouldProcess'.
Given that this function creates a new object in memory, and does not change the system state, the message might be suppressed. This is achieved by adding a SuppressMessage
attribute before a param
block:
function New-Message {
    [Diagnostics.CodeAnalysis.SuppressMessage(
        'PSUseShouldProcessForStateChangingFunctions',
        ''
    )]
    [CmdletBinding()]
    param (
        $Message
    )

    [PSCustomObject]@{
        Name  = 1
        Value = $Message
    }
}
PSScriptAnalyzer
leverages an existing class from .NET to express suppressed rules. Visual Studio Code will offer to create an attribute when you start to type "suppress." The second argument, set to an empty string in the preceding example, is required but will be empty in most cases.
Once added, Invoke-ScriptAnalyzer
will cease to warn of the rule failure for this function only.
AST is the basis for the majority of the rules used by PSScriptAnalyzer
.
The AST in PowerShell is available for any script block; an example is as follows:
PS> { Write-Host 'content' }.Ast
Attributes : {}
UsingStatements : {}
ParamBlock :
BeginBlock :
ProcessBlock :
EndBlock : Write-Host 'content'
DynamicParamBlock :
ScriptRequirements :
Extent : { Write-Host 'content' }
Parent : { Write-Host 'content' }
The object returned describes each of the elements of the script (or script block in this case). It shows that the command in the script block is in the end block, the default block.
The script block that defines a function can be retrieved via Get-Command
:
function Write-Content { Write-Host 'content' }
(Get-Command Write-Content).ScriptBlock
Or the script block defining a function can be retrieved using Get-Item
:
function Write-Content { Write-Host 'content' }
(Get-Item function:Write-Content).ScriptBlock
The preceding approaches have one thing in common: PowerShell immediately parses the script and will stop if there are any parser errors. The impact of this can be seen if an error is inserted into a script block; the syntax tree will not be accessible:
PS> {
    Write-Host
    --String--
}
ParserError:
Line |
3 | --String--
| ~
| Missing expression after unary operator '--'.
To allow access to the AST, regardless of errors in the script, you can use the Parser
class to read content either from a file or from a string.
The Parser
class is accessed under the System.Management.Automation.Language
namespace. The following example uses the ParseInput
method to read PowerShell content from a string:
using namespace System.Management.Automation.Language
$script = @'
Write-Host
--String--
'@
$ast = [Parser]::ParseInput($script, [ref]$null, [ref]$null)
The ParseFile
method can be used in place of ParseInput
. The same arguments are used, but the string containing the script is replaced with the path to a file (a full path, not a relative path).
Two of the method arguments to the ParseInput
method in the previous example are set as references to $null
. This essentially means they are ignored at this point. Ordinarily, the first would be used to fill an existing array of tokens, the second an array of errors. Tokens are explored in more detail later in this section.
The errors array reference can be used to capture parse-time errors, such as the error shown when attempting to create the script block:
using namespace System.Management.Automation.Language
$errors = $tokens = @()
$script = @'
Write-Host
--String--
'@
$ast = [Parser]::ParseInput($script, [ref]$tokens, [ref]$errors)
You can view the content of the array after the ParseInput
method has completed:
PS> $errors | Format-List
Extent :
ErrorId : MissingExpressionAfterOperator
Message : Missing expression after unary operator '--'.
IncompleteInput : False
Extent : String--
ErrorId : UnexpectedToken
Message : Unexpected token 'String--' in expression or statement.
IncompleteInput : False
The original script block only showed one error, but parsing stopped at the first error. This approach shows all syntax errors. If attempting to fix such errors, a top-down approach is required; one syntax error can easily cause another.
Returning to the AST object, the object represents a tree, therefore it is possible to work down through the list of properties getting to more specific elements of the script. Each element has a different type. The following example includes ScriptBlockAst
, StatementAst
, NamedBlockAst
, PipelineAst
, and CommandAst
(and more as the more detailed elements of the script are explored).
The following example gets the CommandAst
for the Get-Process
command. That is the part of the script that represents just the Get-Process -ID $PID
:
$ast = { Get-Process -ID $PID | Select-Object Name, Path }.Ast
$ast.EndBlock.Statements[0].PipelineElements[0]
All named blocks, such as the EndBlock
here, can contain zero or more statements. Each statement can contain one or more pipeline elements. The Select-Object
command in this example is in index 1 of the PipelineElements
property.
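The Select-Object node can therefore be retrieved by index in the same way:

```powershell
$ast = { Get-Process -ID $PID | Select-Object Name, Path }.Ast
$statement = $ast.EndBlock.Statements[0]

$statement.PipelineElements.Count                # 2
$statement.PipelineElements[1].GetCommandName()  # Select-Object
```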
You can use a module called ShowPSAst
to visualize the tree; the module uses Windows Forms to draw a GUI and is therefore only compatible with Windows systems.
The ShowPSAst
module, available in the PowerShell Gallery, may be used to visualize the AST tree. Install the module with:
Install-Module ShowPSAst -Scope CurrentUser
Once it's installed, you can use the Show-Ast
command on a string, a function, a module, a script block, and so on. Running the following command will show the AST tree in an Ast Explorer window.
Show-Ast 'Get-Process -ID $PID | Select-Object Name, Path'
Figure 21.1 shows the explorer window:
Figure 21.1: The Ast Explorer window
This tiny script with just one short line of code has 12 separate AST nodes in the tree. Attempting to access individual elements of a script by expanding each property and array index quickly becomes impractical in large scripts.
It is possible to search the AST tree using Find
and FindAll
methods on any AST node.
Searches against the AST can use the Find
, which finds the first match only, and FindAll
methods of any AST node. The methods find descendant nodes of the current node. Therefore, a search on a PipelineAst
instance will only find results beneath that node.
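For example, FindAll can gather every CommandAst in the script block used throughout this section:

```powershell
using namespace System.Management.Automation.Language

$ast = { Get-Process -ID $PID | Select-Object Name, Path }.Ast

# FindAll returns every matching descendant node rather than the first
$commands = $ast.FindAll(
    { $args[0] -is [CommandAst] },
    $true  # include nested script blocks in the search
)
$commands.GetCommandName()  # Get-Process, Select-Object
```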
An earlier example found the CommandAst
for Get-Process
by expanding properties in the AST:
$ast = { Get-Process -ID $PID | Select-Object Name, Path }.Ast
$ast.EndBlock.Statements[0].PipelineElements[0]
This can be rewritten to use the Find
method instead. The key to the Find
method is a predicate. The predicate is a script block that returns true or false. Each node is tested against the predicate and returned if the result is true.
The simplest predicate is therefore as follows:
$predicate = { $true }
When used with the Find
method, the first matching node is returned. This will be the ScriptBlockAst
, the top-most node in the tree. The second argument states whether the Find
(or FindAll
) method should search nested script blocks. More on this shortly:
using namespace System.Management.Automation.Language
$ast = { Get-Process -ID $PID | Select-Object Name, Path }.Ast
$predicate = { $true }
Find
will show the ScriptBlockAst
object and is therefore equal to the content of the $ast
variable:
PS> $ast.Find($predicate, $true)
Attributes : {}
UsingStatements : {}
ParamBlock :
BeginBlock :
ProcessBlock :
EndBlock : Get-Process -ID $PID | Select-Object Name, Path
DynamicParamBlock :
ScriptRequirements :
Extent : { Get-Process -ID $PID | Select-Object Name,
Path }
Parent : { Get-Process -ID $PID | Select-Object Name,
Path }
Finding the node for the Get-Process
command requires a more complex predicate. Each node in the AST is passed as an argument into the predicate. This can either be accessed using $args[0]
or by defining a parameter to accept that value (as the following code shows). The AST type required is CommandAst
. CommandAst
has a GetCommandName
method, which can be used to separate the command name from the arguments. Here is the updated predicate:
using namespace System.Management.Automation.Language
$ast = { Get-Process -ID $PID | Select-Object Name, Path }.Ast
$predicate = {
    param ( $node )

    $node -is [CommandAst] -and
    $node.GetCommandName() -eq 'Get-Process'
}
This time the result of the search is the node describing Get-Process
:
PS> $ast.Find($predicate, $true)
CommandElements : {Get-Process, ID, $PID}
InvocationOperator : Unknown
DefiningKeyword :
Redirections : {}
Extent : Get-Process -ID $PID
Parent : Get-Process -ID $PID | Select-Object Name, Path
As shown in the preceding example, each AST node returns an Extent
property. The Extent
property describes information about the position of a node within the larger script, such as where it begins and ends.
PS> $ast.Find($predicate, $true).Extent
File :
StartScriptPosition : System.Management.Automation.Language.Int...
EndScriptPosition : System.Management.Automation.Language.Int...
StartLineNumber : 1
StartColumnNumber : 10
EndLineNumber : 1
EndColumnNumber : 30
Text : Get-Process -ID $PID
StartOffset : 9
EndOffset : 29
Line and column numbers may vary depending on the source script.
This information can potentially be used to selectively edit a script if required. This technique is used by several commands in the PSKoans
module (https://github.com/vexx32/PSKoans) to replace or update content in existing scripts.
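A minimal sketch of that technique uses the StartOffset and EndOffset values to replace the first command in the source string:

```powershell
using namespace System.Management.Automation.Language

$script = 'Get-Process -ID $PID | Select-Object Name, Path'
$ast = [Parser]::ParseInput($script, [ref]$null, [ref]$null)
$extent = $ast.Find(
    { $args[0] -is [CommandAst] },
    $true
).Extent

# Remove the extent of the first command from the original string,
# then insert replacement content at the same position
$script.Remove(
    $extent.StartOffset,
    $extent.EndOffset - $extent.StartOffset
).Insert($extent.StartOffset, 'Get-Process')
# Get-Process | Select-Object Name, Path
```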
As mentioned at the start of this section, searches like this are the basis for many of the rules in PSScriptAnalyzer
. PSScriptAnalyzer
supports a second type of rule, a rule based on tokens within a script.
In addition to the AST, PowerShell can also convert a script into a series of tokens, each representing an element of a script with no hierarchy.
One of the advantages of the tokenizer is that it will return tokens representing comments, whereas the AST ignores comments entirely:
using namespace System.Management.Automation.Language
$errors = $tokens = @()
$script = @'
# A short script
Write-Host 'Hello world'
'@
$ast = [Parser]::ParseInput($script, [ref]$tokens, [ref]$errors)
Once executed, the tokens that make up the script can be examined. The first two tokens are shown here:
PS> $tokens | Select-Object -First 2
Text : # A short script
TokenFlags : ParseModeInvariant
Kind : Comment
HasError : False
Extent : # A short script
Text :
TokenFlags : ParseModeInvariant
Kind : NewLine
HasError : False
Extent :
Tokens are less useful than the AST when it comes to defining rules. The lack of context makes it more difficult to relate one token to another beyond the order in the array. Tokens might be used in a rule to validate the content of comments if necessary.
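For example, the tokens gathered by ParseInput can be filtered on the Kind property to retrieve comment content (the Text value includes the # prefix):

```powershell
using namespace System.Management.Automation.Language

$tokens = @()
$script = @'
# TODO: rewrite this
Write-Host 'Hello world'
'@
$null = [Parser]::ParseInput($script, [ref]$tokens, [ref]$null)

# Comments survive tokenization even though the AST discards them
$tokens.Where{ $_.Kind -eq 'Comment' }.Text
```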
The AST and tokens are used by PSScriptAnalyzer
to implement rules.
PSScriptAnalyzer
allows custom rules to be defined and used. Custom rules might be used to test for personal or organization-specific conventions when striving for a consistent style; such conventions may not necessarily be widely adopted best practices, but instead locally established best practices.
Script analyzer rules must be defined in a module psm1
file. The path to the module file may be passed in by using the CustomRulePath
parameter or may be defined in a script analyzer configuration file.
A script analyzer rule is a function within a module. The PSScriptAnalyzer
module allows rules to be written to evaluate AST nodes or tokens.
The name of the function is arbitrary. The community examples use the verb measure
; however, the use of this verb is not mandatory and does not affect discovery. The community example is linked here for reference: https://github.com/PowerShell/PSScriptAnalyzer/blob/master/ScriptRuleDocumentation.md.
The following examples use a much more lightweight format. This does not sacrifice functionality.
The script analyzer engine examines each function in the custom rule module, looking for parameters with a particular naming style. If a parameter is found, the function is deemed to be a rule.
If a rule is expected to act based on an AST node, the first parameter name must end with ast
. The parameter must use one of the AST types, such as System.Management.Automation.Language.ScriptBlockAst
.
If a rule is expected to act based on a token, the first parameter name must end with token
and must accept an array of tokens.
Script analyzer rules are often simple; it is not always necessary for a rule to perform complex AST searches.
The following example evaluates the named blocks dynamicparam
, begin
, process
, and end
. If one of the blocks is declared in a function, script, or script block, and it is empty, the rule will respond.
The rule only accepts NamedBlockAst
nodes, the smallest scope for the rule to effectively evaluate the script. The script analyzer only passes nodes of that type to the rule, and therefore, the rule itself does not have to worry about handling other node types or performing searches itself.
The rule simply looks to see if the number of statements in the block is 0. If it is 0, then the rule triggers.
The following rule is expected to be placed in a psm1
file. For the sake of this example, that file can be named CustomRules.psm1
:
using namespace Microsoft.Windows.PowerShell.ScriptAnalyzer.Generic
using namespace System.Management.Automation.Language

function PSAvoidEmptyNamedBlocks {
    [CmdletBinding()]
    param (
        [NamedBlockAst]$ast
    )

    if ($ast.Statements.Count -eq 0) {
        [DiagnosticRecord]@{
            Message  = 'Empty {0} block.' -f $ast.BlockKind
            Extent   = $ast.Extent
            RuleName = $MyInvocation.MyCommand.Name
            Severity = 'Warning'
        }
    }
}
The rule returns DiagnosticRecord
when it is triggered. The record is returned by the script analyzer provided the rule is not suppressed. The next command shows the rule in action:
@'
[CmdletBinding()]
param ( )

begin { }
process { }
end {
    Write-Host 'Hello world'
}
'@ | Set-Content script.ps1
$params = @{
    Path           = 'script.ps1'
    CustomRulePath = '.\CustomRules.psm1'
}
Invoke-ScriptAnalyzer @params
The output from the command flags the begin
and process
blocks as they are empty:
PS> Invoke-ScriptAnalyzer @params
RuleName Severity ScriptName Line Message
-------- -------- ---------- ---- -------
PSAvoidEmptyNamedBlocks Warning script.ps1 4 Empty Begin ...
PSAvoidEmptyNamedBlocks Warning script.ps1 5 Empty Process...
Token-based rules are written in a similar manner.
Rules based on tokens evaluate an array of tokens. The following example looks for empty single-line comments in a block of code. Comments are not a part of the syntax tree, so using tokens is the only option. This new rule can be added to the CustomRules.psm1
file created in the previous section:
using namespace Microsoft.Windows.PowerShell.ScriptAnalyzer.Generic
using namespace System.Management.Automation.Language

function PSAvoidEmptyComments {
    [CmdletBinding()]
    param (
        [Token[]]$token
    )

    $ruleName = $MyInvocation.MyCommand.Name
    $token.Where{
        $_.Kind -eq 'Comment' -and
        $_.Text.Trim() -eq '#'
    }.ForEach{
        [DiagnosticRecord]@{
            Message  = 'Empty comment.'
            Extent   = $_.Extent
            RuleName = $ruleName
            Severity = 'Information'
        }
    }
}
As the name suggests, the rule will trigger when it encounters an empty line comment. This is demonstrated by the following example:
@'
[CmdletBinding()]
param ( )
#
# Comment
Write-Host 'Hello world'
'@ | Set-Content script.ps1
The output from Invoke-ScriptAnalyzer
shows the line that failed:
PS> $params = @{
>>     Path           = 'script.ps1'
>>     CustomRulePath = '.\CustomRules.psm1'
>> }
PS> Invoke-ScriptAnalyzer @params
RuleName Severity ScriptName Line Message
-------- -------- ---------- ---- -------
PSAvoidEmptyComments Information script.ps1 4 Empty comment.
More custom rules
For more examples of custom rules, please see:
https://github.com/indented-automation/Indented.ScriptAnalyzerRules
PSScriptAnalyzer
is a fantastic tool that can attempt to enforce a specific style or help fix common problems.
Pester is a framework for executing tests. It includes tools to define and execute test cases against anything that can be written in PowerShell.
This chapter focuses on Pester 5, the latest major release. Pester 5 is not installed by default; Windows ships with Pester 3.4. This pre-installed version can be ignored:
Install-Module Pester -Force -SkipPublisherCheck
The -SkipPublisherCheck
parameter is required as Pester has changed maintainer since the version shipped with Windows was released. The certificate issued to the pre-installed version differs from the certificate issued to the current version.
You can use Pester to write tests for code and systems and everything in between. Pester is implemented as what is known as a Domain-Specific Language (DSL): it provides functions that behave like language keywords, much as function is a keyword in the PowerShell language itself. Pester tests are written using PowerShell, but for the most part, the language, the keywords, and so on, are defined by and specific to Pester.
The following example creates a test that asserts PowerShell 7 or greater should be in use. If the test runs in Windows PowerShell, the test will fail, and the results of that failure will be displayed to whoever ran the test. In Pester 5, tests must be saved in a file, so the following snippet saves the content to a file before running Invoke-Pester
:
@'
Describe 'PS developer workstation' {
    It 'PowerShell 7 is installed' {
        $PSVersionTable.PSVersion |
            Should -BeGreaterOrEqual 7.0.0
    }
}
'@ | Set-Content workstation.tests.ps1
Invoke-Pester -Path workstation.tests.ps1
The outcome of running the test is displayed in the console, although the results of the test are summarized by default. Times to perform discovery and execute tests may vary:
PS> Invoke-Pester -Path workstation.tests.ps1
Starting discovery in 1 files.
Discovery finished in 5ms.
[+] C:\workspace\workstation.tests.ps1 91ms (2ms|86ms)
Tests completed in 93ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 0
Setting the -Output
parameter to Detailed
will show the results of each of the tests performed in the script. This test script only has one test. The difference is relatively small:
PS> Invoke-Pester -Path workstation.tests.ps1 -Output Detailed
Starting discovery in 1 files.
Discovering in C:\workspace\workstation.tests.ps1.
Found 1 tests. 9ms
Discovery finished in 13ms.
Running tests from 'C:\workspace\workstation.tests.ps1'
Describing PS developer workstation
  [+] PowerShell 7 is installed 4ms (1ms|3ms)
Tests completed in 104ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 0
Tests must be saved to a file
This section focuses on test file content and not on the process of saving that content to a file. Test content should be saved to a file and run in the same manner as the preceding examples.
Note that the Invoke-Pester
command specifically looks for .tests.ps1
in file names.
The previous example uses three of the major keywords used in Pester:
Describe – Groups tests for a particular subject together
It – Defines a single test that should be executed
Should – Asserts what the value or result of an expression should be

The condition used with the Should keyword simply states that the major version number should be 7 or greater.
Testing is a complex topic; it encompasses a wide range of different methodologies and concepts. The majority of these are beyond the scope of this chapter. Further reading is available on sites such as Wikipedia: https://en.wikipedia.org/wiki/Software_testing.
Two methodologies are of interest in this chapter. They are:
Acceptance testing is used to validate that the subject of the tests conforms to a pre-defined state. The test for the version of PowerShell at the start of this section might be part of an acceptance test for a developer workstation.
Acceptance testing in relation to PowerShell development strives to test the outcome of actions performed by a command (or script) without having any knowledge of how that script works. Acceptance testing is, therefore, a form of black-box testing and requires a system that code can be run against.
Unit testing aims to test the smallest units of code and is a form of white-box testing. The author of a unit test must be familiar with the inner workings of the subject of the tests.
Unit testing is most relevant in PowerShell when testing that the components of a module behave as they are expected to behave; that the different paths through a function, based on if
statements and loops, are used correctly. Unit testing does not require a live service to act on. External calls are mocked: a fake response is returned in place of the real call. Mocking is explored later in this chapter.
The advantage of putting tests in code is that they can be run whenever the state of the subject changes. It is possible to continue to prove that the subject of a set of tests is working as expected.
One of the most challenging aspects of any testing process is figuring out what should be tested.
When testing systems, or performing acceptance testing, the following are rough examples of things that might be tested:
When testing a module, or performing unit testing, consider testing the following:
When writing a unit test, resist the temptation to test other functions or commands called by the unit of code. A unit test is not responsible for making sure that every command that it calls works.
How extensive tests should be is debatable. Enough to ensure the functionality of a given script or module is perhaps the only real definition.
Code coverage is one of the measures that is often used. It is the percentage of code that is visited when executing a set of tests. Pester is capable of measuring code coverage. The details of this are shown later in this section. However, while this is an interesting indicator, it does not prove that code has been effectively tested.
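Code coverage can be enabled through the Pester 5 configuration object. The following is a minimal sketch; the file paths are illustrative:

```powershell
# Build a Pester 5 configuration with code coverage enabled;
# the paths here are placeholders for a real module and its tests
$configuration = New-PesterConfiguration
$configuration.Run.Path = '.\module.tests.ps1'
$configuration.CodeCoverage.Enabled = $true
$configuration.CodeCoverage.Path = '.\module.psm1'

Invoke-Pester -Configuration $configuration
```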
Perhaps the most important keywords in Pester are Describe
, It
, and Should
. They are the backbone of any set of tests.
Pester includes keywords that are used to enclose and group tests together. This section explores the keywords that are used to enclose tests. They are:
Describe
Context
It
The Describe
and Context
keywords are both used to enclose or group together sets of tests. The tests themselves are defined within an It
statement.
All test documents will include the Describe
keyword and one or more It
statements.
Describe
is the top-most keyword used in a test document. It most often describes the subject of the tests.
Context
is essentially the same as Describe
. It has the same capabilities and will contain one or more It
statements. Context
is typically used under Describe
to group together small sets of tests, typically where the tests have a similar purpose or require similar start conditions.
A test document might have a broadly defined subject, and several more specifically defined components. For example, a set of tests might be developed to describe the expected state of a developer workstation. The tests are broken down into more detailed sub-sections:
Describe 'PS developer workstation' {
    Context 'PowerShell' {
    }

    Context 'Packages' {
    }
}
The use of Context
will become clear as tests grow in complexity and unit tests against PowerShell code are introduced later in this chapter.
Describe
, or each Context
, can include one or more It
blocks, which describe the expected outcome.
The It
keyword is used to define a single test. The test title should describe the purpose and potentially the expected outcome of the test:
Describe 'PS developer workstation' {
    Context 'PowerShell' {
        It 'PowerShell 7 is installed' {
        }
    }

    Context 'Packages' {
        It 'git is installed' {
        }

        It 'Terraform is installed' {
        }
    }
}
The It
keyword will contain one or more assertions using the Should
keyword.
The Should
keyword is used to assert the state of the thing it is testing.
Should
has 25 different parameter sets, one for each of the assertions it supports. The different assertions are documented in the Pester wiki along with an example: https://pester.dev/docs/assertions/assertions.
The example used at the start of this section uses one of the possible assertions, the -BeGreaterOrEqual
assertion. Assertions have unsurprising options for the most part. If testing a Boolean value, -BeTrue
or -BeFalse
are appropriate.
Comparisons are achieved using -Be
, -BeLessThan
, -BeLessOrEqual
, -BeGreaterThan
, -BeGreaterOrEqual
, and so on.
The first of the tests, the test for the installation of PowerShell 7, can be changed to allow it to run in Windows PowerShell as well. The point is to prove the system is in the expected state, not that the current runtime is PowerShell 7. The second Context
is temporarily removed to focus on this one assertion.
Describe 'PS developer workstation' {
    Context 'PowerShell' {
        It 'PowerShell 7 is installed' {
            Get-Command pwsh -ErrorAction SilentlyContinue |
                ForEach-Object Version |
                Should -BeGreaterOrEqual '7.0.0'
        }
    }
}
Testing for errors is perhaps one of the most complex assertions and benefits from more extensive exploration. You can use a Should -Throw
assertion to test whether a specific error is raised (or not) when running a command.
The -Throw
assertion is used to test whether a block of code throws a terminating error such as one raised when ErrorAction
is set to Stop
or when the throw
keyword is used.
The assertion can be used for several of the tests above, but for the sake of variety, it is only used to test for the installation of Chocolatey, a package manager for Windows:
Describe 'PS developer workstation' {
    Context 'PowerShell' {
        It 'PowerShell 7 is installed' {
            Get-Command pwsh -ErrorAction SilentlyContinue |
                ForEach-Object Version |
                Should -BeGreaterOrEqual '7.0.0'
        }
    }

    Context 'Packages' {
        It 'Chocolatey is installed' {
            { Get-Command choco -ErrorAction Stop } |
                Should -Not -Throw
        }
    }
}
Notice how the expression being tested is defined as a script block, and that the script block is piped to the Should
keyword.
The assertion used in the preceding example expects there to be no error. There is therefore no need to test anything else about the error.
If an error is expected to be thrown, then further tests might be beneficial. For example, attempting to divide 1 by 0 will raise an error:
Describe Division {
It 'Throws an error when 1 is divided by 0' {
{ 1/0 } | Should -Throw
}
}
This type of test is not specific; it does not differentiate between the actual problem and any other error that might occur. Changing the code being tested, as the following shows, still results in a passing test because an error is thrown, but the error is no longer consistent with the descriptive name of the It
statement:
Describe Division {
It 'Throws an error when 1 is divided by 0' {
{ throw } | Should -Throw
}
}
If this is saved to a division.tests.ps1
file, it can be run to show that the test passes:
PS> Invoke-Pester -Path .\division.tests.ps1
Starting discovery in 1 files.
Discovery finished in 5ms.
[+] C:\workspace\division.tests.ps1 100ms (4ms|93ms)
Tests completed in 102ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 0
Adding the -ExpectedMessage
parameter is one way to tackle this. Testing for a specific message will greatly improve the accuracy of the test:
Describe Division {
It 'Throws an error when 1 is divided by 0' {
{ 1/0 } | Should -Throw -ExpectedMessage 'Attempted to divide by zero.'
}
}
For the preceding exception, testing the message is potentially as good as it gets. However, since error messages are often written in a user's language, testing the message is a weak test as it demands the tests are run in a specific culture.
The -Throw
assertion allows both the error type and the fully qualified error ID to be tested instead. These are far more robust if the expression raising the error reveals them. The following example tests the fully qualified error ID:
Describe ErrorID {
It 'Raises an error with a fully-qualified error ID' {
{ Write-Error error -ErrorID SomeErrorID -ErrorAction Stop } |
Should -Throw -ErrorId SomeErrorID
}
}
This type of testing is far more accurate; it may be possible to attribute the ErrorID
to a single statement in the code being tested rather than testing for any error anywhere.
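The values to use with -ErrorId or an exception type assertion can be discovered by catching the error once and inspecting the ErrorRecord, as this sketch shows:

```powershell
# Catch the error once and inspect the ErrorRecord for assertable values.
try {
    Write-Error 'error' -ErrorId SomeErrorID -ErrorAction Stop
} catch {
    $errorId = $_.FullyQualifiedErrorId
    $typeName = $_.Exception.GetType().FullName
}

$errorId    # Begins with SomeErrorID
$typeName   # Microsoft.PowerShell.Commands.WriteErrorException
```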
It is frequently necessary to perform setup actions prior to executing tests. Pester includes several named blocks for this purpose.
It is often desirable to repeat the same test or tests for a different subject. Pester offers two different styles of iteration:

- The It keyword has the TestCases parameter
- The Describe and Context keywords have the ForEach parameter

The TestCases parameter for an It statement allows a single test to be executed against a set of predefined cases.
The Packages context of the "PS developer workstation" acceptance tests is a good candidate for test cases. A few packages were listed in the example context. The following tests assert that each of these should have been installed using Chocolatey, a package manager for Windows that can be downloaded from https://chocolatey.org.
With Chocolatey, the installation of a package can be tested using the following command:
choco list -e terraform -l -r
If the exact package name (-e
) is installed locally (-l
) it will be included in the output from the command. The -r
parameter is used to limit output to essential information only, in this case just the package name and version.
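Assuming the pipe-delimited name|version format that -r produces, the reason a Should -Match assertion succeeds against this output can be sketched with an illustrative line (the value shown is made up, not captured from choco):

```powershell
# An illustrative line in the pipe-delimited format produced by choco -r.
$line = 'terraform|1.5.7'

# The two fields can be separated on the pipe character if needed.
$name, $version = $line -split '\|'
$name     # terraform
$version  # 1.5.7

# The -match operator only needs the package name to appear in the line.
$line -match 'terraform'   # True
```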
The version of the installed package is not relevant as far as the following tests are concerned. The tests might be extended to ensure a specific version in a real-world implementation:
Describe 'PS developer workstation' {
Context 'Packages' {
It 'Chocolatey is installed' {
{ Get-Command choco -ErrorAction Stop } |
Should -Not -Throw
}
It '<Name> is installed' -TestCases @(
@{ Name = 'terraform' }
@{ Name = 'git' }
) -Test {
choco list -e $Name -l -r | Should -Match $Name
}
}
}
Each test case is defined as a Hashtable, and all keys in the Hashtable are available as variables inside the It
statement automatically. This contrasts with Pester 4, which required a param
block inside It
.
The keys in the Hashtable can be used in the It
description by enclosing the name in < >
.
Pester allows the expansion of properties of values in the description. For instance, using <Name.Length>
in the description would show the length of the string in the Name
key, although that is not a very practical use in this case.
The outcome of running the tests can be viewed by saving the previous example to a file, such as workstation.tests.ps1
, and using Invoke-Pester
:
PS> Invoke-Pester -Path .\workstation.tests.ps1 -Output Detailed
Running tests from 'C:\workspace\workstation.tests.ps1'
Describing PS developer workstation
Context Packages
[+] Chocolatey is installed 20ms (15ms|5ms)
[+] terraform is installed 802ms (799ms|3ms)
[+] git is installed 786ms (786ms|1ms)
Tests completed in 1.79s
Tests Passed: 3, Failed: 0, Skipped: 0 NotRun: 0
The preceding tests are executed against a single subject, the local machine. You can use the -ForEach
parameter of Describe
or Context
to run a set of tests against more than one subject.
The -ForEach
parameter can be used to execute either a Describe
or Context
block against an array of values. Continuing with the theme of acceptance testing, it might be desirable to run a set of tests against several different servers.
The following tests assert that the DNS service exists on a set of Windows servers. The names of the servers are made up. The tests will potentially work if a meaningful set of names is provided:
Describe "DNS servers" -ForEach @(
'dns01'
'dns02'
) -Fixture {
It "The DNS service is running on $_" {
$params = @{
ClassName = 'Win32_Service'
Filter = 'Name="dns"'
ComputerName = $_
}
Get-CimInstance @params | Should -Not -BeNullOrEmpty
}
}
The $_
variable used in the preceding example is created by Pester and is used to access each of the values in the -ForEach
array in turn.
Get-CimInstance
is used in the preceding example but Invoke-Command
and Get-Service
might be used instead if appropriate. You should implement tests in a way that is appropriate to the environment the test executes in.
All the tests used in this section have the potential to fail and raise errors that are not handled within the test. In the cases of the tests using choco
, the tests will raise an error if Chocolatey is not actually installed. In the case of the last example, the tests will raise an error if the server does not exist or Get-CimInstance
fails for any other reason.
Problems of this kind can potentially be handled by skipping tests or marking a test as inconclusive.
There are two possible approaches for dealing with tests that cannot be executed:

- The result of an It statement can be forcefully set by using Set-ItResult
- The It keyword has a -Skip parameter

Set-ItResult can be used with the <Name> is installed
test, enabling the test to account for situations where the choco
command is not available.
The Set-ItResult
command can be used inside any It
statement. In the following example, it is used to change the result of the It
statement based on the availability of the choco
command:
Describe 'PS developer workstation' {
Context 'Packages' {
It '<Name> is installed' -TestCases @(
@{ Name = 'terraform' }
@{ Name = 'git' }
) -Test {
if (Get-Command choco -ErrorAction SilentlyContinue) {
choco list -e $Name -l -r | Should -Match $Name
} else {
Set-ItResult -Skipped
}
}
}
}
The name of the test is changed in the result to show it has been skipped if the choco
command is not available:
PS> Invoke-Pester -Path .\workstation.tests.ps1 -Output Detailed
Starting discovery in 1 files.
Discovering in C:\workspace\workstation.tests.ps1.
Found 2 tests. 27ms
Discovery finished in 31ms.
Running tests from 'C:\workspace\workstation.tests.ps1'
Describing PS developer workstation
Context Packages
[!] terraform is installed is skipped 14ms (9ms|5ms)
[!] git is installed is skipped 2ms (1ms|1ms)
Tests completed in 217ms
Tests Passed: 0, Failed: 0, Skipped: 2 NotRun: 0
Set-ItResult
allows the result to be set to Inconclusive
, Pending
, or Skipped
.
The advantage of using Set-ItResult
, in this case, is that the test cases are still processed. The Name
value is expanded in the output of the tests. The -Skip
parameter will stop Pester from expanding this value.
You can use the -Skip
parameter on any It
block that is not using -TestCases
.
You can use -Skip
, a switch parameter, to bypass one or more tests.
You can pass an explicit value to the parameter, such as a value based on a variable. The following example changes the test for the installation of Chocolatey; it will be skipped if the current operating system is not Windows:
Describe 'PS developer workstation' {
Context 'Packages' {
It 'Chocolatey is installed' -Skip:(-not $IsWindows) {
{ Get-Command choco -ErrorAction Stop } |
Should -Not -Throw
}
}
}
The $IsWindows
, $IsMacOS
, and $IsLinux
variables are all automatically available in PowerShell 6 and above.
Values used with the -Skip
parameter must be available during the Discovery phase in Pester.
Pester 5 introduces the concept of different phases when executing tests. This is visible in the output of the tests run in this section:
Starting discovery in 1 files.
Discovering in C:\workspace\workstation.tests.ps1.
Found 2 tests. 27ms
Discovery finished in 31ms.
Running tests from 'C:workspaceworkstation.tests.ps1'
Describing PS developer workstation
First, the Discovery phase is run. During the Discovery phase, Pester attempts to find all the tests it will be running. Each test is defined by an It
statement.
The Run phase will execute only those tests found during the Discovery phase.
This concept is new in Pester 5; it means that everything used to define what will be tested must be in place before discovery occurs, which affects the dynamic creation of tests.
Pester provides a BeforeDiscovery
block, which may be placed either before or inside Describe
. The code in BeforeDiscovery
is, as the name suggests, executed before the Discovery run starts.
The last example used the predefined $IsWindows
variable. As this is built-in, it is automatically available during the Discovery phase.
If the tests were instead to be executed based on a user-defined variable, this variable would need creating in BeforeDiscovery
. The same limitation applies to any test cases used with the -TestCases
parameter, and any arrays used with -ForEach
parameters.
An earlier example used -ForEach
to attempt to execute tests on an array of server names. If the server names had to be read from an external system, then that discovery action would be placed in the BeforeDiscovery
block.
The following example uses the Get-ADComputer
command from the ActiveDirectory
module to get the list of servers to query. These tests will only succeed if the ActiveDirectory
module is installed, and it is able to find server names to test:
BeforeDiscovery {
$dnsServers = Get-ADComputer -Filter 'name -like "dns*"'
}
Describe "DNS servers" -ForEach $dnsServers -Fixture {
It "The DNS service is running on $($_.Name)" {
$params = @{
ClassName = 'Win32_Service'
Filter = 'Name="dns"'
ComputerName = $_.DnsHostName
}
Get-CimInstance @params | Should -Not -BeNullOrEmpty
}
}
BeforeDiscovery
is therefore useful to ensure that the values needed to define which tests are present are in place when Pester is attempting to discover which tests are going to be executed.
Pester provides other named blocks to execute code at certain points during the Run phase. These can be used to define variables and set up conditions for tests. They should avoid defining which tests are executed.
Pester offers several blocks that you can use to perform actions before and after tests execute. Such blocks may be used to set up an environment or tear it down afterwards.
The blocks are:
BeforeAll
BeforeEach
AfterAll
AfterEach
The All
blocks execute once, before (or after) any of the tests in that block. The Each
blocks execute before (or after) individual It
blocks.
Each block can exist once in any Describe
or Context
block. If a Describe
block contains BeforeAll
, and a nested Context
also contains BeforeAll
then both blocks will be executed (the Describe
instance first, then the Context
instance).
The BeforeAll
and BeforeEach
blocks are frequently used when defining a Mock
for a command in unit testing.
Mocking is used to reduce the scope of a set of tests and is a vital part of unit testing. It allows the implementation of a command to be replaced with one defined in a test. Mocked commands are created using the Mock
keyword.
The Mock
keyword may be used in BeforeAll
, BeforeEach
, or It
.
The following command reads a CSV file, then either starts or stops a service based on whether that service matches the state in the file.
@'
function Set-ServiceState {
[CmdletBinding()]
param (
[string]$Path
)
Import-Csv $Path | ForEach-Object {
$service = Get-Service $_.Name
if ($service.Status -ne $_.ExpectedStatus) {
if ($_.ExpectedStatus -eq 'Stopped') {
Stop-Service -Name $_.Name
} else {
Start-Service -Name $_.Name
}
}
}
}
'@ | Set-Content -Path module.psm1
Place this function in a file named module.psm1
; it will be used as the subject of the first set of unit tests.
To effectively test this command, the system running the tests would need to have all the services listed in the CSV
file, and it would have to be possible to change the state of those services.
Instead of depending on a complete system to execute against, the results of the code can be tested by mocking each of the commands external to the function.
Two of the commands (Import-Csv
and Get-Service)
must return information, and two (Start-Service
and Stop-Service
) return nothing at all.
Using Mock
for Start-Service
and Stop-Service
is therefore straightforward:
Mock Start-Service
Mock Stop-Service
The output from Import-Csv
and Get-Service
needs to resemble the output from those real commands. The output can be simplified depending on what the command under test does with it.
Import-Csv
is expected to output an object with a Name
and ExpectedStatus
property:
Mock Import-Csv {
[PSCustomObject]@{
Name = 'service1'
ExpectedStatus = 'Running'
}
}
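The simplified object only needs to be shape-compatible with real Import-Csv output. Outside of Pester, a quick comparison shows that a hand-built PSCustomObject with matching property names is indistinguishable to code that only reads those properties (the CSV content here is illustrative):

```powershell
# ConvertFrom-Csv (like Import-Csv) emits objects with one NoteProperty
# per column; a hand-built PSCustomObject with the same names looks the
# same to any code that only reads those properties.
$real = 'Name,ExpectedStatus', 'service1,Running' | ConvertFrom-Csv
$fake = [PSCustomObject]@{ Name = 'service1'; ExpectedStatus = 'Running' }

$real.Name -eq $fake.Name                      # True
$real.ExpectedStatus -eq $fake.ExpectedStatus  # True
```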
Get-Service
is expected to return an object with a Status
property, but no other properties from Get-Service
are used. Mocking Get-Service
allows the tests to run even if the current computer does not have the service being tested:
Mock Get-Service {
[PSCustomObject]@{
Status = 'Stopped'
}
}
The name of the service is a parameter value for Get-Service
and it is not used by the function. The name of the service can therefore be ignored in the object the mock emits.
Given the outputs that have been defined above, the expectation is that when running Set-ServiceState
, the Start-Service
command will be used to start service1
.
Execution of the mock can be tested by using a Should -Invoke
assertion:
@'
BeforeDiscovery {
Import-Module .\module.psm1 -Force
}
Describe Set-ServiceState {
BeforeAll {
Mock Get-Service -MockWith {
[PSCustomObject]@{
Status = 'Stopped'
}
}
Mock Import-Csv -MockWith {
[PSCustomObject]@{
Name = 'service1'
ExpectedStatus = 'Running'
}
}
Mock Start-Service
Mock Stop-Service
}
It 'When ExpectedStatus is running, starts the service' {
Set-ServiceState -Path file.csv
Should -Invoke Start-Service
}
}
'@ | Set-Content Set-ServiceState.tests.ps1
The previous command saves the tests in a Set-ServiceState.tests.ps1
file. The content of this file is modified during this section; the size of the file prohibits repeating the content in full for each change.
As Import-Csv
is being mocked in the tests, the name used for the file (the Path
parameter) is not relevant and you can use a made-up value.
The result of running the tests is shown here:
PS> Invoke-Pester -Path .\Set-ServiceState.tests.ps1
Starting discovery in 1 files.
Discovery finished in 10ms.
[+] C:\workspace\Set-ServiceState.tests.ps1 127ms (16ms|102ms)
Tests completed in 130ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 0
In this function, there are three possible paths through the code for each service:

- The service is already in the expected state, and neither Start-Service nor Stop-Service run.
- The service is stopped but expected to be running; Start-Service should run.
- The service is running but expected to be stopped; Stop-Service should run.

A parameter filter for the Get-Service
mock can be created to allow a different output based on the service name, which will allow each of the paths to be tested.
You can apply parameter filters to define when that mock should be used. Parameter filters are added using the -ParameterFilter
parameter for Mock
. The parameter filter is a script block that is most often used to test a parameter value used when calling the mock.
First, the mock for Import-Csv
can be extended by adding two more services:
Mock Import-Csv -MockWith {
[PSCustomObject]@{
Name = 'service1'
ExpectedStatus = 'Running'
}
[PSCustomObject]@{
Name = 'service2'
ExpectedStatus = 'Running'
}
[PSCustomObject]@{
Name = 'service3'
ExpectedStatus = 'Stopped'
}
}
Then the original mock for Get-Service
is replaced with three new mocks. Each uses a -ParameterFilter
and tests a different service name:
Mock Get-Service -ParameterFilter {
$Name -eq 'service1'
} -MockWith {
[PSCustomObject]@{
Status = 'Running'
}
}
Mock Get-Service -ParameterFilter {
$Name -eq 'service2'
} -MockWith {
[PSCustomObject]@{
Status = 'Stopped'
}
}
Mock Get-Service -ParameterFilter {
$Name -eq 'service3'
} -MockWith {
[PSCustomObject]@{
Status = 'Running'
}
}
Finally, the It
block is adjusted. For this version, Start-Service
will run once, and Stop-Service
will run once. The previous assertion simply stated that Start-Service
would run, which implicitly means it runs one or more times:
It 'Ensures all services are in the desired state' {
Set-ServiceState -Path file.csv
Should -Invoke Start-Service -Times 1
Should -Invoke Stop-Service -Times 1
}
Once the changes are made to the tests file, the single test will pass. However, while this test passes, it remains difficult to explicitly relate cause to effect. A failure in any one or more of the comparisons will cause the preceding tests to fail, but it will not indicate which value caused the failure.
Instead of using -ParameterFilter
, a more robust approach, in this case, might be to use Context
to change the values provided by Import-Csv
or the values returned by Get-Service
.
You can use Mock
in either an It
or a Context
block to override an existing mock or create new mocks specific to a particular branch of code. Mock
is scoped to the block it is created in; therefore, a Mock
created in It
only applies to that single It
block.
Generally, the safest approach is to define default mocks under a BeforeAll
in Describe
, then to override those as needed. The presence of the default mocks acts as a safeguard. Running the subject of the tests, Set-ServiceState
, in the following example, will only ever call a mocked command. It will never accidentally call the real command because something has been missed from a specific context:
@'
BeforeDiscovery {
Import-Module .\module.psm1 -Force
}
Describe Set-ServiceState {
BeforeAll {
Mock Get-Service -MockWith {
[PSCustomObject]@{
Status = 'Running'
}
}
Mock Import-Csv -MockWith {
[PSCustomObject]@{
Name = 'service1'
ExpectedStatus = 'Running'
}
}
Mock Start-Service
Mock Stop-Service
}
}
'@ | Set-Content Set-ServiceState.tests.ps1
The first path through the code, when the service is already in the expected state and neither Start-Service
nor Stop-Service
will be called, can be tested using the default mocks established above. The It
block can explicitly assert that Start-Service
and Stop-Service
were not called:
It 'Service is running, expected running' {
Set-ServiceState -Path file.csv
Should -Invoke Start-Service -Times 0
Should -Invoke Stop-Service -Times 0
}
The second path, when the service is stopped and should be started, can be achieved by overriding the mock for Get-Service
. Note that the Set-ServiceState
command is called again after overriding the mock:
It 'Service is stopped, expected running' {
Mock Get-Service -MockWith {
[PSCustomObject]@{
Status = 'Stopped'
}
}
Set-ServiceState -Path file.csv
Should -Invoke Start-Service -Times 1
Should -Invoke Stop-Service -Times 0
}
Finally, the last path runs when the service is running, but the expected state is stopped. This time the mock for Import-Csv
is replaced:
It 'Service is running, expected stopped' {
Mock Import-Csv -MockWith {
[PSCustomObject]@{
Name = 'service1'
ExpectedStatus = 'Stopped'
}
}
Set-ServiceState -Path file.csv
Should -Invoke Start-Service -Times 0
Should -Invoke Stop-Service -Times 1
}
These new tests should be added to the Describe
block of Set-ServiceState.tests.ps1
. Once added, the tests file can be run with detailed output:
PS> $params = @{
>> Path = '.\Set-ServiceState.tests.ps1'
>> Output = 'Detailed'
>> }
PS> Invoke-Pester @params
Starting discovery in 1 files.
Discovering in C:\workspace\Set-ServiceState.tests.ps1.
Found 3 tests. 11ms
Discovery finished in 16ms.
Running tests from 'C:\workspace\Set-ServiceState.tests.ps1'
Describing Set-ServiceState
[+] Service is running, expected running 16ms (13ms|3ms)
[+] Service is stopped, expected running 15ms (15ms|1ms)
[+] Service is running, expected stopped 24ms (22ms|1ms)
Tests completed in 188ms
Tests Passed: 3, Failed: 0, Skipped: 0 NotRun: 0
These new tests provide a granular view of the different behaviors of the function. If a test fails, it is extremely easy to attribute cause to effect without having to spend extra time figuring out where either the tests or the subject failed.
The examples used to demonstrate mocking so far assume that the command being mocked is available on the current system. If a command is not available on the system running the tests, the attempt to create a mock will fail.
It is possible to work around this limitation by creating a small part of the command required by the tests. This can be referred to as a stub and typically consists of a function with only a parameter block.
The stub is used to provide something to mock, and the mock is used to track the execution of the function and ensure a subject behaves as intended.
For example, consider a function that creates and configures a DNS zone with a predefined set of parameter values:
function New-DnsZone {
[CmdletBinding()]
param (
[Parameter(Mandatory)]
[String]$Name
)
$params = @{
Name = $Name
DynamicUpdate = 'Secure'
ReplicationScope = 'Domain'
}
$zone = Get-DnsServerZone $Name -ErrorAction SilentlyContinue
if (-not $zone) {
Add-DnsServerPrimaryZone @params
}
}
It may not be desirable to install the DnsServer
tools on a development system to run unit tests. To mock and verify that Add-DnsServerPrimaryZone
is called, a function must be created first:
Describe CreateDnsZone {
BeforeAll {
function Get-DnsServerZone { }
function Add-DnsServerPrimaryZone { }
Mock Get-DnsServerZone
Mock Add-DnsServerPrimaryZone
}
It 'When the zone does not exist, creates a zone' {
New-DnsZone -Name name
Should -Invoke Add-DnsServerPrimaryZone
}
}
Creating the function as shown here is enough to satisfy the tests, but the approach is basic. The test will pass even if parameter names are incorrect or missing.
A more advanced function to mock may be created by visiting a system with the command installed and retrieving the param
block. The ProxyCommand
type in PowerShell can be used to get the param
block from a system with the DnsServer
module installed, for example:
using namespace System.Management.Automation
$command = Get-Command Add-DnsServerPrimaryZone
[ProxyCommand]::GetParamBlock(
$command
)
For Add-DnsServerPrimaryZone
the result is long. A command such as Select-Object
has a simpler param
block and is therefore easier to view. The first two parameters for Select-Object
are shown here after running the GetParamBlock
method:
PS> using namespace System.Management.Automation
PS> [ProxyCommand]::GetParamBlock((Get-Command Select-Object))
[Parameter(ValueFromPipeline=$true)]
[psobject]
${InputObject},
[Parameter(ParameterSetName='DefaultParameter', Position=0)]
[Parameter(ParameterSetName='SkipLastParameter', Position=0)]
[System.Object[]]
${Property},
You can use this technique to create stubs for many modules, allowing tests to run even if the module is not locally installed.
The following snippet combines the GetParamBlock
with GetCmdletBindingAttribute
to create an accurate copy of the basics of the module to use as the basis for mocking a command:
using namespace System.Management.Automation
$moduleName = 'DnsServer'
Get-Command -Module $moduleName | ForEach-Object {
$param = [ProxyCommand]::GetParamBlock($_)
$param = $param -split '\r?\n' -replace '^\s{4}', '$0$0'
'function {0} {{' -f $_.Name
' {0}' -f [ProxyCommand]::GetCmdletBindingAttribute($_)
' param ('
$param
' )'
'}'
''
} | Set-Content "$moduleName.psm1"
This approach works for the DnsServer
module because the module is based on CIM classes; it only depends on assemblies that are already available in PowerShell.
Adding a stub copy of a module improves the overall quality of the tests for a command. Tests will fail if a non-existent parameter is used, or if an invalid parameter combination is used.
Each of the mocks used so far has emitted a PSCustomObject
, and in many cases a PSCustomObject
is enough to use within a set of tests.
Mocking allows the result of running another command to be faked. The examples in the previous section have returned a PSCustomObject
where output is required.
It is not uncommon for a command to expect to work with the properties and methods of another object. This might be a value returned by another command, or it might be the value of a parameter the test subject requires.
The ability to mock objects of a specific type, or objects that implement methods, is important in testing.
Two approaches can be taken when testing:

- Adding methods to a PSCustomObject
- Disarming an existing .NET object
PowerShell includes many modules that are based on CIM classes. These modules often expect CIM instances as input to work. Testing code that uses CIM-based commands may need to create values that closely resemble the real command output.
Methods can be added to a PSCustomObject
, allowing code that uses those methods to be tested without needing to use a more specific .NET type.
Objects with specific properties can be simulated by creating a PSCustomObject
object:
[PSCustomObject]@{
Property = "Value"
}
If the subject of a test takes the result of a mocked command and invokes a method, it will fail unless the method is available. You can add methods to a PSCustomObject
using Add-Member
:
$object = [PSCustomObject]@{} |
Add-Member MethodName -MemberType ScriptMethod -Value { }
$object
If the method used already exists, such as the ToString
method, then the -Force
parameter must be used:
$object = [PSCustomObject]@{}
$object |
Add-Member ToString -MemberType ScriptMethod -Force -Value { }
$object
As many methods as needed can be added to the PSCustomObject.
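Invoking the method confirms that a ScriptMethod added with -Force takes precedence over the .NET implementation; this short sketch overrides ToString:

```powershell
# The ScriptMethod replaces the .NET ToString implementation for callers.
$object = [PSCustomObject]@{ Name = 'widget' }
$object | Add-Member ToString -MemberType ScriptMethod -Force -Value {
    'mocked result'
}

$object.ToString()   # mocked result
```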
The method added to the PSCustomObject
may return nothing (as in the preceding examples), return a specific value, or set a variable in script scope which can be tracked or tested. This idea is explored when disarming an existing .NET object.
A piece of code being tested may interact with a specific .NET type. The .NET type may (by default) need to interact with other systems in a way that is not desirable when testing.
The following simple function expects to receive an instance of a SqlConnection
object and expects to be able to call the Open
method:
using namespace System.Data.SqlClient
function Open-SqlConnection {
[CmdletBinding()]
param (
[Parameter(Mandatory)]
[SqlConnection]$SqlConnection
)
if ($sqlConnection.State -eq 'Closed') {
$SqlConnection.Open()
}
}
As the value of the SqlConnection
parameter is explicitly set to accept an instance of System.Data.SqlClient.SqlConnection
, you cannot use a PSCustomObject
as a substitute.
When running the function in a test, an instance of SqlConnection
must be created to pass to the function.
The following It
block creates such an instance and passes it to the function:
It 'Opens an SQL connection' {
$connection = [System.Data.SqlClient.SqlConnection]::new()
Open-SqlConnection -SqlConnection $connection
}
This It
block does not contain any assertions yet. It can assert that no errors should be thrown, but the test can only succeed if the computer running the tests is running an SQL server instance. By extension, the test can only fail if the computer running the tests is not a SQL server.
Each of the following tests can be added to a sql.tests.ps1
file. The Describe
block has been omitted from the examples to reduce indentation. The following command creates the tests file. All the content of the Describe
block should be replaced with each example.
@'
BeforeDiscovery {
function Script:Open-SqlConnection {
[CmdletBinding()]
param (
[Parameter(Mandatory)]
[System.Data.SqlClient.SqlConnection]$SqlConnection
)
if ($sqlConnection.State -eq 'Closed') {
$SqlConnection.Open()
}
}
}
Describe Open-SqlConnection {
}
'@ | Set-Content sql.tests.ps1
PowerShell code will prefer to call a ScriptMethod
on an object over a Method
provided by the .NET type. Therefore, you can create a disarmed version of the SqlConnection
object using Add-Member
:
BeforeAll {
$connection = [System.Data.SqlClient.SqlConnection]::new()
$connection |
Add-Member Open -MemberType ScriptMethod -Value { } -Force
}
It 'Opens an SQL connection' {
Open-SqlConnection -SqlConnection $connection
}
This step solves the problem of needing a SQL server to run Open
, but it does not solve the problem of testing if Open
was called. After all, the command does not return the connection object; there is no output to test.
You can solve the problem by making the Open
method do something. Possible actions include:

- Returning a value from the method
- Setting a variable in script scope
- Setting a property on the SqlConnection object

The Open
method, by default, does not return a value, and the statement that calls Open
in the function is not assigned. A value could simply be returned by the method and tested:
BeforeAll {
$connection = [System.Data.SqlClient.SqlConnection]::new()
$connection |
Add-Member Open -MemberType ScriptMethod -Force -Value {
$true
}
}
It 'Opens an SQL connection' {
Open-SqlConnection -SqlConnection $connection |
Should -BeTrue
}
In some cases, such an approach might fail or become overly complex; for instance, when this is only one small step the command takes and there are other outputs to consider.
Setting a variable in script scope in the mocked method will allow a test to see if the method is executed. In this case, the scoped value might be reset in a BeforeEach
block, ensuring it is accurately recorded in each test:
BeforeAll {
$connection = [System.Data.SqlClient.SqlConnection]::new()
$connection |
Add-Member Open -MemberType ScriptMethod -Force -Value {
$Script:Opened = $true
}
}
BeforeEach {
$Script:Opened = $false
}
It 'Opens an SQL connection' {
Open-SqlConnection -SqlConnection $connection
$Script:Opened | Should -BeTrue
}
Finally, a property of the SqlConnection
object might be set. The real Open
method sets the value of the State
property. This property is read-only and cannot be set directly. A NoteProperty must be added so that the mocked Open method can change it:
BeforeAll {
$connection = [System.Data.SqlClient.SqlConnection]::new()
$connection |
Add-Member Open -MemberType ScriptMethod -Force -Value {
$this.State = 'Open'
}
$connection |
Add-Member State -NotePropertyValue Closed -Force
}
It 'Opens an SQL connection' {
Open-SqlConnection -SqlConnection $connection
$connection.State | Should -Be 'Open'
}
Any of the preceding options might be used when mocking methods and properties on .NET objects.
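The $this variable inside a ScriptMethod refers to the object the method was added to, which is what allows the mocked Open to update State. The same pattern in a minimal, generic sketch:

```powershell
# $this inside a ScriptMethod refers to the owning object, so the mocked
# method can update state on the object itself.
$item = [PSCustomObject]@{ State = 'Closed' }
$item | Add-Member Open -MemberType ScriptMethod -Value {
    $this.State = 'Open'
}

$item.Open()
$item.State   # Open
```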
The approach can be taken further still by using the New-MockObject
command in Pester. New-MockObject
creates an instance of a .NET type with no code behind it at all.
New-MockObject
is not appropriate in all cases. If you use this command to create the SqlConnection
object used above, attempting to call the real Open
method will always raise an error:
PS> $connection = New-MockObject System.Data.SqlClient.SqlConnection
PS> $connection.Open()
MethodInvocationException: Exception calling "Open" with "0" argument(s): "Object reference not set to an instance of an object."
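Recent Pester releases (5.3 and later) extend New-MockObject with -Methods and -Properties parameters, which can replace the Add-Member calls used earlier. The following is a minimal sketch, assuming Pester 5.3 or later is installed; the Open behavior mirrors the earlier examples:

```powershell
# A mocked SqlConnection whose Open method flips the State property.
# Requires Pester 5.3+ for the -Methods and -Properties parameters.
$params = @{
    Type       = [System.Data.SqlClient.SqlConnection]
    Properties = @{ State = 'Closed' }
    Methods    = @{ Open = { $this.State = 'Open' } }
}
$connection = New-MockObject @params

$connection.Open()
$connection.State
```

Because the mocked methods are plain ScriptMethod members, $this refers to the mock object itself, so the property assignment behaves as it did with Add-Member.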
You can apply the techniques used in the previous examples to a wide variety of .NET objects. CimInstance
objects are a special case when it comes to mocking.
Many modules in PowerShell are based on CIM classes. For example, the Net
modules, such as NetAdapter
, NetSecurity
, and NetTCPIP
, are all based on CIM classes.
The commands in these modules either return CIM instances or include parameters that require a specific CIM instance as an argument.
For example, the following function uses two of the commands in a pipeline. Any tests would have to account for the CIM classes when mocking commands:
function Enable-PhysicalAdapter {
    Get-NetAdapter -Physical | Enable-NetAdapter
}
When these commands act in a pipeline, Enable-NetAdapter
fills the InputObject
parameter from the pipeline. Get-Help
shows that the parameter accepts an array of CimInstance
from the pipeline:
PS> Get-Help Enable-NetAdapter -Parameter InputObject
-InputObject <CimInstance[]>
Specifies the input to this cmdlet. You can use this parameter, or you can pipe the input to this cmdlet.
Required? true
Position? named
Default value
Accept pipeline input? true (ByValue)
Accept wildcard characters? false
However, this is not the whole story. The parameter value is further constrained by a PSTypeName
attribute. This can be seen using Get-Command
:
$command = Get-Command Enable-NetAdapter
$parameter = $command.Parameters['InputObject']
$attribute = $parameter.Attributes |
    Where-Object TypeId -match 'PSTypeName'
$attribute.PSTypeName
The result is the PSTypeName
the command expects to receive from the pipeline:
Microsoft.Management.Infrastructure.CimInstance#MSFT_NetAdapter
Any mock for Get-NetAdapter
must therefore return an MSFT_NetAdapter
CimInstance
object. Before the instance can be created, one final piece of information is required: the namespace of the CIM class.
The namespace can be taken from any object returned by Get-NetAdapter
:
PS> Get-NetAdapter | Select-Object CimClass -First 1
CimClass
--------
ROOT/StandardCimv2:MSFT_NetAdapter
Finally, the CimInstance
object can be created using the New-CimInstance
command as shown here:
$params = @{
    ClassName  = 'MSFT_NetAdapter'
    Namespace  = 'ROOT/StandardCimv2'
    ClientOnly = $true
}
New-CimInstance @params
This instance can be added to a mock for Get-NetAdapter
when testing the Enable-PhysicalAdapter
command:
BeforeDiscovery {
    function Script:Enable-PhysicalAdapter {
        Get-NetAdapter -Physical | Enable-NetAdapter
    }
}
Describe Enable-PhysicalAdapter {
    BeforeAll {
        Mock Enable-NetAdapter
        Mock Get-NetAdapter {
            $params = @{
                ClassName  = 'MSFT_NetAdapter'
                Namespace  = 'ROOT/StandardCimv2'
                ClientOnly = $true
            }
            New-CimInstance @params
        }
    }
    It 'Enables a physical network adapter' {
        { Enable-PhysicalAdapter } | Should -Not -Throw
        Should -Invoke Enable-NetAdapter -Times 1
    }
}
The commands used in each of the tests in this section are expected to be available in the global scope so that Pester can mock and run the commands. Pester is also able to test commands and classes that are not exported from a module.
The InModuleScope
command and the -ModuleName
parameter of Should
and Mock
are important features of Pester. The command and parameters allow access to content that is normally in the module scope and inaccessible outside.
The following two commands were first introduced in Chapter 20, Building Modules:
@'
function GetRegistryParameter {
    [CmdletBinding()]
    param ( )

    @{
        Path = 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters'
        Name = 'srvcomment'
    }
}
function Get-ComputerDescription {
    [CmdletBinding()]
    param ( )

    $getParams = GetRegistryParameter
    Get-ItemPropertyValue @getParams
}
Export-ModuleMember Get-ComputerDescription
'@ | Set-Content LocalMachine.psm1
The function GetRegistryParameter
can be tested in Pester by using InModuleScope
:
@'
BeforeDiscovery {
    Import-Module .\LocalMachine.psm1 -Force
}
Describe GetRegistryParameter {
    It 'Returns a hashtable' {
        InModuleScope -ModuleName LocalMachine {
            GetRegistryParameter
        } | Should -BeOfType [Hashtable]
    }
}
'@ | Set-Content GetRegistryParameter.tests.ps1
If the InModuleScope command is omitted, the test will fail: the GetRegistryParameter function is not exported from the module and is therefore not normally accessible.
The result of running the tests is shown here:
PS> Invoke-Pester -Path .\GetRegistryParameter.tests.ps1
Starting discovery in 1 files.
Discovery finished in 227ms.
Running tests.
[+] C:\workspace\GetRegistryParameter.tests.ps1 1.04s (152ms|702ms)
Tests completed in 1.06s
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 0
A test document may include more than one use of InModuleScope, but it is advisable to keep these blocks as small as possible. InModuleScope should not be used to enclose Describe, Context, or It blocks.
In the same way, if it were desirable to mock the GetRegistryParameter command when testing the Get-ComputerDescription command, the -ModuleName parameter is required for the Mock command:
BeforeAll {
    Mock GetRegistryParameter -ModuleName LocalMachine
}
InModuleScope
can be used to access anything in the module scope, including private commands, classes, and enumerations, and any module-scoped variables.
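For example, a test can reach the private function directly and pass values in from the test scope. A minimal sketch, assuming the LocalMachine module above is imported and a Pester 5 release that supports the -Parameters parameter of InModuleScope:

```powershell
Describe GetRegistryParameter {
    It 'Targets the srvcomment value' {
        $inModule = @{
            ModuleName = 'LocalMachine'
            Parameters = @{ Expected = 'srvcomment' }
        }
        InModuleScope @inModule {
            (GetRegistryParameter).Name | Should -Be $Expected
        }
    }
}
```

The -Parameters hashtable makes each key available as a variable inside the module-scoped script block, avoiding reliance on script-scoped state.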
Using InModuleScope
can add complexity when running Invoke-Pester
from a script.
When Invoke-Pester
is run from a global scope, -ModuleName
is only required to access private components of a module.
When Invoke-Pester
is run from a script, problems may surface because the script scope breaks Pester's scoping.
Consider the following tests:
@'
BeforeDiscovery {
    Import-Module .\LocalMachine.psm1 -Force
}
Describe Get-ComputerDescription {
    BeforeAll {
        Mock Get-ItemPropertyValue {
            'Mocked description'
        }
    }
    It 'Returns the mocked description' {
        Get-ComputerDescription |
            Should -Be 'Mocked description'
        Should -Invoke Get-ItemPropertyValue
    }
}
'@ | Set-Content Get-ComputerDescription.tests.ps1
When Invoke-Pester
is run from the console, the tests pass provided the LocalMachine
module was successfully imported:
PS> Invoke-Pester -Path .\Get-ComputerDescription.tests.ps1
Starting discovery in 1 files.
Discovery finished in 7ms.
[+] C:\workspace\Get-ComputerDescription.tests.ps1 98ms (6ms|86ms)
Tests completed in 99ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 0
If instead the Invoke-Pester command is put in a script, and the script is run, the mock is completely ignored:
@'
Invoke-Pester -Path .\Get-ComputerDescription.tests.ps1
'@ | Set-Content script.ps1
This is shown here when running the script:
PS> .\script.ps1
Starting discovery in 1 files.
Discovery finished in 7ms.
[-] Get-ComputerDescription.Returns the mocked description 5ms (4ms|1ms)
PSArgumentException: Property srvcomment does not exist at path HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters.
at Get-ComputerDescription, C:\workspace\LocalMachine.psm1:16
at <ScriptBlock>, C:\workspace\Get-ComputerDescription.tests.ps1:12
Tests completed in 117ms
Tests Passed: 0, Failed: 1, Skipped: 0 NotRun: 0
To work around this problem, the -ModuleName
parameter must be added to all Mock
commands and all Should -Invoke
assertions. In the following example, a splat is used:
Describe Get-ComputerDescription {
    BeforeAll {
        $module = @{
            ModuleName = 'LocalMachine'
        }
        Mock Get-ItemPropertyValue @module {
            'Mocked description'
        }
    }
    It 'Returns the mocked description' {
        Get-ComputerDescription |
            Should -Be 'Mocked description'
        Should -Invoke Get-ItemPropertyValue @module
    }
}
In all cases, the module name identifies the scope the mock should be created in: the module under test, not the module the mocked command belongs to.
With this in place, Invoke-Pester
can be run from a script:
PS> .\script.ps1
Starting discovery in 1 files.
Discovery finished in 23ms.
[+] C:\workspace\Get-ComputerDescription.tests.ps1 135ms (25ms|88ms)
Tests completed in 137ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 0
Pester is a wonderful tool for writing tests for a variety of different purposes. The techniques above offer an introduction to the capabilities of the module.
This chapter explored the complex topic of testing in PowerShell.
Static analysis is one part of testing and is the approach used by modules like PSScriptAnalyzer
. Static analysis makes use of the Abstract Syntax Tree and tokenizers in PowerShell.
The Abstract Syntax Tree, or AST, describes the content of a block of code as a tree of different elements, starting with a ScriptBlockAst at the highest level. You can use the ParseInput and ParseFile methods of the Parser type to get both the AST for a piece of code and the tokens that make up the script, including comments.
The ShowPSAst
module can be used to visualize and explore the AST tree. ShowPSAst
is a useful tool when starting to work with AST as the tree can quickly become complex.
PSScriptAnalyzer
uses either AST or tokens to define rules. Rules can be used to test and enforce personal or organization-specific practices.
Pester is a testing framework and this chapter explored both acceptance and unit testing.
Acceptance testing is commonly used to assess the state of systems and services. You can use Pester to define tests, which can be saved and shared. Such tests can be used to validate a system is configured or behaving as it should be.
Pester is a rich tool that supports iteration with the -ForEach or -TestCases parameters. Conditional testing can be achieved using the Set-ItResult command and the -Skip parameter.
Mocking is an exceptionally useful feature of Pester and is often used when writing unit tests to reduce the amount of code that must be tested when the subject is a single command.
The next chapter explores error handling in PowerShell, including terminating and non-terminating errors and the use of try
, catch
, finally
, and the trap
statement.