One thing that surprises many people is how much you can accomplish in PowerShell from the interactive prompt alone. Since PowerShell makes it so easy to join its powerful commands together into even more powerful combinations, enthusiasts grow to relish this brevity. In fact, there is a special place in the heart of most scripting enthusiasts set aside entirely for the most compact expressions of power: the one-liner.
Despite its interactive efficiency, you obviously don’t want to retype all your brilliant ideas anew each time you need them. When you want to save or reuse the commands that you’ve written, PowerShell provides many avenues to support you: scripts, modules, functions, script blocks, and more.
To write a PowerShell script, create a plain-text file with your editor of choice. Add your PowerShell commands to that script (the same PowerShell commands you use from the interactive shell), and then save it with a .ps1 extension.
One of the most important things to remember about PowerShell is that running scripts and working at the command line are essentially equivalent operations. If you see it in a script, you can type it or paste it at the command line. If you typed it on the command line, you can paste it into a text file and call it a script.
Once you write your script, PowerShell lets you call it in the same way that you call other programs and existing tools. Running a script does the same thing as running all the commands in that script.
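For example, a minimal session along these lines (the filename and commands are illustrative, not from this chapter's examples) captures an interactive pipeline into a script and then runs it:

```powershell
## Illustrative only: save a command you'd normally type interactively
## into a script file, then invoke that file like any other command.
Set-Content -Path .\Get-TopProcess.ps1 -Value @'
## Show the five processes using the most CPU time
Get-Process | Sort-Object CPU -Descending | Select-Object -First 5
'@

## Run it just like any other command
.\Get-TopProcess.ps1
```

The body of the script is exactly what you would type at the prompt; saving it with a .ps1 extension is all that makes it a script.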
PowerShell introduces a few features related to running scripts and tools that may at first confuse you if you aren’t aware of them. For more information about how to call scripts and existing tools, see Run Programs, Scripts, and Existing Tools.
The first time you try to run a script in PowerShell, you’ll likely see the following error message:
File C:\tools\myFirstScript.ps1 cannot be loaded because the execution of
scripts is disabled on this system. Please see "get-help about_signing" for
more details.
At line:1 char:12
+ myFirstScript <<<<
Since relatively few computer users write scripts, PowerShell’s default security policies prevent scripts from running. Once you begin writing scripts, though, you should configure this policy to something less restrictive. For information on how to configure your execution policy, see Enable Scripting Through an Execution Policy.
When it comes to the filename of your script, picking a descriptive name is the best way to guarantee that you will always remember what that script does—or at least have a good idea. This is an issue that PowerShell tackles elegantly, by naming every cmdlet in the Verb-Noun pattern: a command that performs an action (verb) on an item (noun). As an example of the usefulness of this philosophy, consider the names of typical Windows commands given in Example 11-1.
Compare this to the names of some standard Windows PowerShell cmdlets, given in Example 11-2.
As an additional way to improve discovery, PowerShell takes this even further with the philosophy (and explicit goal) that “you can manage 80 percent of your system with less than 50 verbs.” As you learn the standard verbs for a concept, such as Get (which represents the standard concepts of Read, Open, and so on), you can often guess the verb of a command as the first step in discovering it.
When you name your script (especially if you intend to share it), make every effort to pick a name that follows these conventions. Find a Verb Appropriate for a Command Name shows a useful cmdlet to help you find a verb to name your scripts properly. As evidence of its utility for scripts, consider some of the scripts included in this book:
PS > dir | select Name

Name
----
Compare-Property.ps1
Connect-WebService.ps1
Convert-TextObject.ps1
Get-AliasSuggestion.ps1
Get-Answer.ps1
Get-Characteristics.ps1
Get-OwnerReport.ps1
Get-PageUrls.ps1
Invoke-CmdScript.ps1
New-GenericObject.ps1
Select-FilteredObject.ps1
(...)
Like the PowerShell cmdlets, the names of these scripts are clear, are easy to understand, and use verbs from PowerShell’s standard verb list.
You have commands in your script that you want to call multiple times or a section of your script that you consider to be a “helper” for the main purpose of your script.
Place this common code in a function, and then call that function instead. For example, this Celsius conversion code in a script:
param([double] $fahrenheit)

## Convert it to Celsius
$celsius = $fahrenheit - 32
$celsius = $celsius / 1.8

## Output the answer
"$fahrenheit degrees Fahrenheit is $celsius degrees Celsius."
could be placed in a function (itself placed in a script):
param([double] $fahrenheit)

## Convert Fahrenheit to Celsius
function ConvertFahrenheitToCelsius([double] $fahrenheit)
{
    $celsius = $fahrenheit - 32
    $celsius = $celsius / 1.8
    $celsius
}

$celsius = ConvertFahrenheitToCelsius $fahrenheit

## Output the answer
"$fahrenheit degrees Fahrenheit is $celsius degrees Celsius."
Although using a function arguably makes this specific script longer and more difficult to understand, the technique is extremely valuable (and used) in almost all nontrivial scripts.
Once you define a function, any command after that definition can use it. This means that you must define your function before any part of your script that uses it. You might find this unwieldy if your script defines many functions, as the function definitions obscure the main logic portion of your script. If this is the case, you can put your main logic in a “Main” function, as described in Organize Scripts for Improved Readability.
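A minimal sketch of that layout, with all function and variable names invented for illustration:

```powershell
## A sketch of the "Main function" pattern; all names are illustrative.
## The main logic reads first, helper definitions follow, and the only
## top-level statement is the final call to Main.
function Main
{
    $greeting = GetGreeting "World"
    $greeting
}

## Helper functions are defined before Main actually runs
function GetGreeting($name)
{
    "Hello, $name"
}

## Invoke the main logic now that everything is defined
Main
```

Because Main is not invoked until the last line, every helper it uses has already been defined by the time it runs.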
A common question that comes from those accustomed to batch scripting in cmd.exe is, “What is the PowerShell equivalent of a GOTO?” In situations where the GOTO is used to call subroutines or other isolated helper parts of the batch file, use a PowerShell function to accomplish that task. If the GOTO is used as a way to loop over something, PowerShell’s looping mechanisms are more appropriate.
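As a sketch of that translation, a counting loop that a batch file would build from a label and a GOTO maps naturally onto PowerShell's while statement:

```powershell
## Batch-file original, shown for comparison:
##   SET COUNT=0
##   :LOOP
##   SET /A COUNT+=1
##   ECHO %COUNT%
##   IF %COUNT% LSS 3 GOTO LOOP

## The same loop in PowerShell, with no labels or jumps required
$count = 0
while($count -lt 3)
{
    $count++
    $count
}
```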
In PowerShell, calling a function is designed to feel just like calling a cmdlet or a script. As a user, you should not have to know whether a little helper routine was written as a cmdlet, script, or function. When you call a function, simply add the parameters after the function name, with spaces separating each one (as shown in the solution). This is in contrast to the way that you call functions in many programming languages (such as C#), where you use parentheses after the function name and commas between each parameter.
## Correct
ConvertFahrenheitToCelsius $fahrenheit

## Incorrect
ConvertFahrenheitToCelsius($fahrenheit)
Also, notice that the return value from a function is anything that the function writes to the output pipeline (such as $celsius in the solution). You can write return $celsius if you want, but it is unnecessary.
For more information about writing functions, see Writing Scripts, Reusing Functionality. For more information about PowerShell’s looping statements, see Repeat Operations with Loops.
Review the output of the Get-Verb command to find a verb appropriate for your command:
PS > Get-Verb In* | Format-Table -Auto

Verb       Group
----       -----
Initialize Data
Install    Lifecycle
Invoke     Lifecycle
Consistency of command names is one of PowerShell’s most beneficial features, largely due to its standard set of verbs. While descriptive command names (such as Stop-Process) make it clear what a command does, standard verbs make commands easier to discover.
For example, many technologies have their own words for creating something: New, Create, Instantiate, Build, and more. When a user looks for a command (without the benefit of standard verbs), the user has to know the domain-specific terminology for that action. If the user doesn’t know the domain-specific verb, the user is forced to page through long lists of commands in the hope that something rings a bell.
When commands use PowerShell’s standard verbs, however, discovery becomes much easier. Once users learn the standard verb for an action, they don’t need to search for its domain-specific alternatives. Most importantly, the time they invest (actively or otherwise) learning the standard PowerShell verbs improves their efficiency with all commands, not just commands from a specific domain.
This discoverability issue is so important that PowerShell generates a warning message when a module defines a command with a nonstandard verb. To support domain-specific names for your commands in addition to the standard names, simply define an alias. For more information, see Selectively Export Commands from a Module.
To make it easier to select a standard verb while writing a script or function, PowerShell provides a Get-Verb function. You can review the output of that function to find a verb suitable for your command. For an even more detailed description of the standard verbs, see Appendix J.
You have a section of your script that works nearly the same for all input, aside from a minor change in logic.
As shown in Example 11-3, place the minor logic differences in a script block, and then pass that script block as a parameter to the code that requires it. Use the invoke operator (&) to execute the script block.
Example 11-3. A script that applies a script block to each element in the pipeline
##############################################################################
##
## Invoke-ScriptBlock
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

<#

.SYNOPSIS

Apply the given mapping command to each element of the input.
(Note that PowerShell includes this command natively, and calls it
Foreach-Object)

.EXAMPLE

1,2,3 | Invoke-ScriptBlock { $_ * 2 }

#>

param(
    ## The scriptblock to apply to each incoming element
    [ScriptBlock] $MapCommand
)

begin
{
    Set-StrictMode -Version Latest
}
process
{
    & $mapCommand
}
Imagine a script that needs to multiply all the elements in a list by two:
function MultiplyInputByTwo
{
    process
    {
        $_ * 2
    }
}
but it also needs to perform a more complex calculation:
function MultiplyInputComplex
{
    process
    {
        ($_ + 2) * 3
    }
}
These two functions are strikingly similar, except for the single line that actually performs the calculation. As we add more calculations, this quickly becomes more evident. Adding each new seven-line function gives us only one unique line of value!
PS > 1,2,3 | MultiplyInputByTwo
2
4
6
PS > 1,2,3 | MultiplyInputComplex
9
12
15
If we instead use a script block to hold this “unknown” calculation, we don’t need to keep on adding new functions:
PS > 1,2,3 | Invoke-ScriptBlock { $_ * 2 }
2
4
6
PS > 1,2,3 | Invoke-ScriptBlock { ($_ + 2) * 3 }
9
12
15
PS > 1,2,3 | Invoke-ScriptBlock { ($_ + 3) * $_ }
4
10
18
In fact, the functionality provided by Invoke-ScriptBlock is so helpful that it is a standard PowerShell cmdlet—called Foreach-Object. For more information about script blocks, see Writing Scripts, Reusing Functionality. For more information about running scripts, see Run Programs, Scripts, and Existing Tools.
To return data from a script or function, write that data to the output pipeline:
##############################################################################
##
## Get-Tomorrow
##
## Get the date that represents tomorrow
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

Set-StrictMode -Version Latest

function GetDate
{
    Get-Date
}

$tomorrow = (GetDate).AddDays(1)
$tomorrow
In PowerShell, any data that your function or script generates gets sent to the output pipeline, unless something captures that output. The GetDate function generates data (a date) and does not capture it, so that becomes the output of the function. The portion of the script that calls the GetDate function captures that output and then manipulates it. Finally, the script writes the $tomorrow variable to the pipeline without capturing it, so that becomes the return value of the script itself.
Some .NET methods—such as the System.Collections.ArrayList class—produce output, even though you may not expect them to. To prevent these methods from sending data to the output pipeline, either capture the data or cast it to [void]:
PS > $collection = New-Object System.Collections.ArrayList
PS > $collection.Add("Hello")
0
PS > [void] $collection.Add("Hello")
Even with this “pipeline output becomes the return value” philosophy, PowerShell continues to support the traditional return keyword as a way to return from a function or script. If you specify anything after the keyword (such as return "Hello"), PowerShell treats that as a "Hello" statement followed by a return statement.
If you want to make your intention clear to other readers of your script, you can use the Write-Output cmdlet to explicitly send data down the pipeline. Both produce the same result, so this is only a matter of preference.
If you write a collection (such as an array or ArrayList) to the output pipeline, PowerShell in fact writes each element of that collection to the pipeline. To keep the collection intact as it travels down the pipeline, prefix it with a comma when you return it. This returns a collection (that will be unraveled) with one element: the collection you wanted to keep intact.
function WritesObjects
{
    $arrayList = New-Object System.Collections.ArrayList
    [void] $arrayList.Add("Hello")
    [void] $arrayList.Add("World")

    $arrayList
}

function WritesArrayList
{
    $arrayList = New-Object System.Collections.ArrayList
    [void] $arrayList.Add("Hello")
    [void] $arrayList.Add("World")

    ,$arrayList
}

$objectOutput = WritesObjects

# The following command would generate an error
# $objectOutput.Add("Extra")

$arrayListOutput = WritesArrayList
$arrayListOutput.Add("Extra")
Although relatively uncommon in PowerShell’s world of fully structured data, you may sometimes want to use an exit code to indicate the success or failure of your script. For this, PowerShell offers the exit keyword.
For more information about the return and exit statements, see Writing Scripts, Reusing Functionality and Determine the Status of the Last Command.
You’ve developed a useful set of commands or functions. You want to offer them to the user or share them between multiple scripts.
First, place these common function definitions by themselves in a file with the extension .psm1, as shown in Example 11-4.
Example 11-4. A module of temperature commands
##############################################################################
##
## Temperature.psm1
## Commands that manipulate and convert temperatures
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

## Convert Fahrenheit to Celsius
function Convert-FahrenheitToCelsius([double] $fahrenheit)
{
    $celsius = $fahrenheit - 32
    $celsius = $celsius / 1.8
    $celsius
}

## Convert Celsius to Fahrenheit
function Convert-CelsiusToFahrenheit([double] $celsius)
{
    $fahrenheit = $celsius * 1.8
    $fahrenheit = $fahrenheit + 32
    $fahrenheit
}
Next, place that file in your Modules directory (as defined in the PSModulePath environment variable), in a subdirectory with the same name. For example, place Temperature.psm1 in <My Documents>\WindowsPowerShell\Modules\Temperature. Call the Import-Module command to import the module (and its commands) into your session, as shown by Example 11-5.
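Given the Temperature module from Example 11-4, an import-and-use session looks along these lines (the output values follow directly from the conversion formulas):

```powershell
## Load the module from the Modules directory by name
Import-Module Temperature

## The module's functions are now available in the session
Convert-FahrenheitToCelsius 212    # 100
Convert-CelsiusToFahrenheit 100    # 212
```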
PowerShell modules give you an easy way to package related commands and functionality. As the solution demonstrates, writing a module is as simple as adding functions to a file.
As with the naming of core commands, the naming of commands packaged in a module plays a critical role in giving users a consistent and discoverable PowerShell experience. When you name the commands in your module, ensure that they follow a Verb-Noun syntax and that you select verbs from PowerShell’s standard set of verbs. If your module does not follow these standards, your users will receive a warning message when they load your module. For information about how to make your module commands discoverable (and as domain-specific as required), see Selectively Export Commands from a Module.
In addition to creating the .psm1 file that contains your module’s commands, you should also create a module manifest to describe its contents and system requirements. Module manifests let you define the module’s author, company, copyright information, and more. For more information, see the New-ModuleManifest cmdlet.
After writing a module, the last step is making it available to the system. When you call Import-Module <module name> to load a module, PowerShell looks through each directory listed in the PSModulePath environment variable.
The PSModulePath variable is an environment variable, just like the system’s PATH environment variable. For more information on how to view and modify environment variables, see View and Modify Environment Variables.
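For instance, you can list the directories PowerShell searches, or append one for the current session only (the added path is hypothetical):

```powershell
## List each directory PowerShell searches for modules
$env:PSModulePath -split ';'

## Add a directory for this session only (path is illustrative)
$env:PSModulePath += ';C:\tools\Modules'
```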
If PowerShell finds a directory named <module name>, it looks in that directory for a .psm1 file with that name as well. Once it finds the .psm1 file, it loads that module into your session.
In addition to .psm1 files, PowerShell also supports module manifest (.psd1) files that let you define a great deal of information about the module: its author, description, nested modules, version requirements, and much more. For more information, type Get-Help New-ModuleManifest.
If you want to make your module available to just yourself (or the “current user” if installing your module as part of a setup process), place it in the per-user modules folder: <My Documents>\WindowsPowerShell\Modules\<module name>. If you want to make the module available to all users of the system, place your module in its own directory under the Program Files directory, and then add that directory to the system-wide PSModulePath environment variable.
If you don’t want to permanently install your module, you can instead specify the complete path to the psm1 file when you load the module. For example:
Import-Module c:\tools\Temperature.psm1
If you want to load a module from the same directory that your script is in, see Find Your Script’s Location.
When you load a module from a script, PowerShell makes the commands from that module available to the entire session. If your script loads the Temperature module, for example, the functions in that module will still be available after your script exits. To ensure that your script doesn’t accidentally influence the user’s session after it exits, you should remove any modules that you load:
$moduleToRemove = $null
if(-not (Get-Module <Module Name>))
{
    $moduleToRemove = Import-Module <Module Name> -Passthru
}

######################
##
## script goes here
##
######################

if($moduleToRemove)
{
    $moduleToRemove | Remove-Module
}
If you have a module that loads a helper module (as opposed to a script that loads a helper module), this step is not required. Modules loaded by a module impact only the module that loads them.
If you want to let users configure your module when they load it, you can define a parameter block at the beginning of your module. These parameters then get filled through the -ArgumentList parameter of the Import-Module command. For example, a module that takes a “retry count” and website as parameters:
param(
    [int] $RetryCount,
    [URI] $Website
)

function Get-Page
{
    ....
The user would load the module with the following command line:
Import-Module <module name> -ArgumentList 10,"http://www.example.com"
Get-Page "/index.html"
One important point when it comes to the -ArgumentList parameter is that its support for user input is much more limited than support offered for most scripts, functions, and script blocks. PowerShell lets you access the parameters in most param() statements by name, by alias, and in or out of order. Arguments supplied to the Import-Module command, on the other hand, must be supplied as values only, and in the exact order the module defines them.
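To illustrate with the hypothetical retry-count module above, the first form works and the second does not, because -ArgumentList accepts only positional values:

```powershell
## Works: values supplied in the exact order the module declares them
Import-Module <module name> -ArgumentList 10,"http://www.example.com"

## Not supported: -ArgumentList provides no way to bind by parameter name
## Import-Module <module name> -ArgumentList -RetryCount 10
```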
For more information about accessing arguments of a command, see Access Arguments of a Script, Function, or Script Block. For more information about importing a module (and the different types of modules available), see Extend Your Shell with Additional Commands. For more information about modules, type Get-Help about_Modules.
Place those commands in a module. Store any information you want to retain in a variable, and give that variable a SCRIPT scope. See Example 11-6.
Example 11-6. A module that maintains state
##############################################################################
##
## PersistentState.psm1
## Demonstrates persistent state through module-scoped variables
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

$SCRIPT:memory = $null

function Set-Memory
{
    param(
        [Parameter(ValueFromPipeline = $true)]
        $item
    )

    begin { $SCRIPT:memory = New-Object System.Collections.ArrayList }
    process { $null = $memory.Add($item) }
}

function Get-Memory
{
    $memory.ToArray()
}

Set-Alias remember Set-Memory
Set-Alias recall Get-Memory

Export-ModuleMember -Function Set-Memory,Get-Memory
Export-ModuleMember -Alias remember,recall
When writing scripts or commands, you’ll frequently need to maintain state between the invocation of those commands. For example, your commands might remember user preferences, cache configuration data, or store other types of module state. See Example 11-7.
Example 11-7. Working with commands that maintain state
PS > Import-Module PersistentState
PS > Get-Process -Name PowerShell | remember
PS > recall

Handles  NPM(K)    PM(K)      WS(K) VM(M)   CPU(s)     Id ProcessName
-------  ------    -----      ----- -----   ------     -- -----------
    527       6    32704      44140   172     2.13   2644 powershell
    517       7    23080      33328   154     1.81   2812 powershell
    357       6    31848      33760   165     1.42   3576 powershell
In PowerShell version one, the only way to accomplish these goals was to store the information in a global variable. This introduces two problems, though.
The first problem is that global variables impact much more than just the script that defines them. Once your script stores information in a global variable, it pollutes the user’s session. If the user has a variable with the same name, your script overwrites its contents. The second problem is the natural counterpart to this pollution. When your script stores information in a global variable, both the user and other scripts have access to it. Due to accident or curiosity, it is quite easy for these “internal” global variables to be damaged or corrupted.
PowerShell version two resolves this issue through the introduction of modules. By placing your commands in a module, PowerShell makes variables with a script scope available to all commands in that module. In addition to making script-scoped variables available to all of your commands, PowerShell maintains their value between invocations of those commands.
Like variables, PowerShell drives obey the concept of scope. When you use the New-PSDrive cmdlet from within a module, that drive stays private to that module. To create a new drive that is visible from outside your module as well, create it with a global scope:

New-PSDrive -Name Temp FileSystem -Root C:\Temp -Scope Global
For more information about variables and their scopes, see Control Access and Scope of Variables and Other Items. For more information about defining a module, see Package Common Commands in a Module.
Use the Export-ModuleMember cmdlet to declare the specific commands you want exported. All other commands then remain internal to your module. See Example 11-8.
Example 11-8. Exporting specific commands from a module
##############################################################################
##
## SelectiveCommands.psm1
## Demonstrates the selective export of module commands
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

## An internal helper function
function MyInternalHelperFunction
{
    "Result from my internal helper function"
}

## A command exported from the module
function Get-SelectiveCommandInfo
{
    "Getting information from the SelectiveCommands module"
    MyInternalHelperFunction
}

## Alternate names for our standard command
Set-Alias gsci Get-SelectiveCommandInfo
Set-Alias DomainSpecificVerb-Info Get-SelectiveCommandInfo

## Export specific commands
Export-ModuleMember -Function Get-SelectiveCommandInfo
Export-ModuleMember -Alias gsci,DomainSpecificVerb-Info
When PowerShell imports a module, it imports all functions defined in that module by default. This makes it incredibly simple (as module authors) to create a library of related commands.
Once your module commands get more complex, you’ll often write helper functions and support routines. Since these commands aren’t intended to be exposed directly to users, you’ll instead need to selectively export commands from your module. The Export-ModuleMember command allows exactly that.
Once your module includes a call to Export-ModuleMember, PowerShell no longer exports all functions in your module. Instead, it exports only the commands that you define. The first call to Export-ModuleMember in Example 11-8 demonstrates how to selectively export a function from a module.
Since consistency of command names is one of PowerShell’s most beneficial features, PowerShell generates a warning message if your module exports functions (either explicitly or by default) that use nonstandard verbs. For example, imagine that you have a technology that uses regenerate configuration as a highly specific phrase for a task. In addition, it already has a regen command to accomplish this task.

You might naturally consider Regenerate-Configuration and regen as function names to export from your module, but doing that would alienate users who don’t have a strong background in your technology. Without your same technical expertise, they wouldn’t know the name of the command, and instead would instinctively look for Reset-Configuration, Restore-Configuration, or Initialize-Configuration based on their existing PowerShell knowledge. In this situation, the solution is to name your functions with a standard verb and also use command aliases to support your domain-specific experts.
The Export-ModuleMember cmdlet supports this situation as well. In addition to letting you selectively export commands from your module, it also lets you export alternative names (aliases) for your module commands. The second call to Export-ModuleMember in Example 11-8 (along with the alias definitions that precede it) demonstrates how to export aliases from a module.
For more information about command naming, see Find a Verb Appropriate for a Command Name. For more information about writing a module, see Package Common Commands in a Module.
Use the Enter-Module script (Example 11-9) to temporarily enter the module and invoke commands within its scope.
Example 11-9. Invoking commands from within the scope of a module
##############################################################################
##
## Enter-Module
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

<#

.SYNOPSIS

Lets you examine internal module state and functions by executing user
input in the scope of the supplied module.

.EXAMPLE

PS >Import-Module PersistentState
PS >Get-Module PersistentState

ModuleType Name                      ExportedCommands
---------- ----                      ----------------
Script     PersistentState           {Set-Memory, Get-Memory}

PS >"Hello World" | Set-Memory
PS >$m = Get-Module PersistentState
PS >Enter-Module $m
PersistentState: dir variable:mem*

Name                           Value
----                           -----
memory                         {Hello World}

PersistentState: exit
PS >

#>

param(
    ## The module to examine
    [System.Management.Automation.PSModuleInfo] $Module
)

Set-StrictMode -Version Latest

$userInput = Read-Host $($module.Name)
while($userInput -ne "exit")
{
    $scriptblock = [ScriptBlock]::Create($userInput)
    & $module $scriptblock

    $userInput = Read-Host $($module.Name)
}
PowerShell modules are an effective way to create sets of related commands that share private state. While commands in a module can share private state between themselves, PowerShell prevents that state from accidentally impacting the rest of your PowerShell session.
When you are developing a module, though, you might sometimes need to interact with this internal state for diagnostic purposes. To support this, PowerShell lets you target a specific module with the invocation (&) operator:
PS > $m = Get-Module PersistentState
PS > & $m { dir variable:mem* }

Name                           Value
----                           -----
memory                         {Hello World}
This syntax gets cumbersome for more detailed investigation tasks, so Enter-Module automates the prompting and invocation for you.
For more information about writing a module, see Package Common Commands in a Module.
You have a module and want to perform some action (such as cleanup tasks) when that module is removed.
Assign a script block to the $MyInvocation.MyCommand.ScriptBlock.Module.OnRemove event. Place any cleanup commands in that script block. See Example 11-10.
Example 11-10. Handling cleanup tasks from within a module
##############################################################################
##
## TidyModule.psm1
## Demonstrates how to handle cleanup tasks when a module is removed
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

<#

.EXAMPLE

PS >Import-Module TidyModule
PS >$TidyModuleStatus
Initialized
PS >Remove-Module TidyModule
PS >$TidyModuleStatus
Cleaned Up

#>

## Perform some initialization tasks
$GLOBAL:TidyModuleStatus = "Initialized"

## Register for cleanup
$MyInvocation.MyCommand.ScriptBlock.Module.OnRemove = {
    $GLOBAL:TidyModuleStatus = "Cleaned Up"
}
PowerShell modules have a natural way to define initialization requirements (any script written in the body of the module), but cleanup requirements are not as simple.
During module creation, you can access your module through the $MyInvocation.MyCommand.ScriptBlock.Module property. Each module has an OnRemove event, which you can then subscribe to by assigning it a script block. When PowerShell unloads your module, it invokes that script block.
Beware of using this technique for extremely sensitive cleanup requirements. If the user simply exits the PowerShell window, the OnRemove event is not processed. If this is a concern, register for the PowerShell.Exiting engine event and remove your module from there:
Register-EngineEvent PowerShell.Exiting { Remove-Module TidyModule }
For PowerShell to handle this event, the user must use the exit keyword to close the session, rather than the X button at the top right of the console window. In the Integrated Scripting Environment, the close button generates this event as well. This saves the user from having to remember to call Remove-Module.
For more information about writing a module, see Package Common Commands in a Module. For more information about PowerShell events, see Create and Respond to Custom Events.
To access arguments by name, use a param statement:
param($firstNamedArgument, [int] $secondNamedArgument = 0)

"First named argument is: $firstNamedArgument"
"Second named argument is: $secondNamedArgument"
To access unnamed arguments by position, use the $args array:
"First positional argument is: " + $args[0] "Second positional argument is: " + $args[1]
You can use these techniques in exactly the same way with scripts, functions, and script blocks, as illustrated by Example 11-11.
Example 11-11. Working with arguments in scripts, functions, and script blocks
##############################################################################
##
## Get-Arguments
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

<#

.SYNOPSIS

Uses command-line arguments

#>

param(
    ## The first named argument
    $FirstNamedArgument,

    ## The second named argument
    [int] $SecondNamedArgument = 0
)

Set-StrictMode -Version Latest

## Display the arguments by name
"First named argument is: $firstNamedArgument"
"Second named argument is: $secondNamedArgument"

function GetArgumentsFunction
{
    ## We could use a param statement here, as well
    ## param($firstNamedArgument, [int] $secondNamedArgument = 0)

    ## Display the arguments by position
    "First positional function argument is: " + $args[0]
    "Second positional function argument is: " + $args[1]
}

GetArgumentsFunction One Two

$scriptBlock =
{
    param($firstNamedArgument, [int] $secondNamedArgument = 0)

    ## We could use $args here, as well
    "First named scriptblock argument is: $firstNamedArgument"
    "Second named scriptblock argument is: $secondNamedArgument"
}

& $scriptBlock -First One -Second 4.5
Example 11-11 produces the following output:
PS > Get-Arguments First 2
First named argument is: First
Second named argument is: 2
First positional function argument is: One
Second positional function argument is: Two
First named scriptblock argument is: One
Second named scriptblock argument is: 4
Although PowerShell supports both the param keyword and the $args array, you will most commonly want to use the param keyword to define and access script, function, and script block parameters.
In most languages, the most common reason to access parameters through an $args array is to determine the name of the currently running script. For information about how to do this in PowerShell, see Access Information About Your Command's Invocation.
When you use the param keyword to define your parameters, PowerShell provides your script or function with many useful features that allow users to work with your script much as they work with cmdlets:
In addition to the parameters you define, you might also want to support PowerShell's standard parameters: -Verbose, -Debug, -ErrorAction, -WarningAction, -ErrorVariable, -WarningVariable, -OutVariable, and -OutBuffer.
To get these additional parameters, add the [CmdletBinding()] attribute inside your function, or declare it at the top of your script. The param() statement is required, even if your function or script declares no parameters. These (and other associated) additional features now make your function an advanced function. See Example 11-12.
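Example 11-12 is not reproduced here, but the bare minimum might be sketched as follows (matching the Invoke-MyAdvancedFunction output shown later in this section):

```powershell
## A minimal advanced function: the [CmdletBinding()] attribute plus an
## (empty) param() statement are all it takes.
function Invoke-MyAdvancedFunction
{
    [CmdletBinding()]
    param()

    ## PowerShell displays this output only when the user supplies
    ## the -Verbose common parameter.
    Write-Verbose "Verbose Message"
}
```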
If your function defines a parameter with advanced validation, you don't need to explicitly add the [CmdletBinding()] attribute. In that case, PowerShell already knows to treat your command as an advanced function.
During PowerShell’s beta phases, advanced functions were known as script cmdlets. We decided to change the name because the term script cmdlets caused a sense of fear of the great unknown. Users would be comfortable writing functions, but “didn’t have the time to learn those new script cmdlet things.” Because script cmdlets were just regular functions with additional power, the new name made a lot more sense.
Although PowerShell adds all of its common parameters to your function, you don't actually need to implement the code to support them. For example, calls to Write-Verbose usually generate no output. When the user specifies the -Verbose parameter to your function, PowerShell then automatically displays the output of the Write-Verbose cmdlet.
PS > Invoke-MyAdvancedFunction
PS > Invoke-MyAdvancedFunction -Verbose
VERBOSE: Verbose Message
If your cmdlet modifies system state, it is extremely helpful to support the standard -WhatIf and -Confirm parameters. For information on how to accomplish this, see Provide -WhatIf, -Confirm, and Other Cmdlet Features.
Despite all of the power exposed by named parameters, common parameters, and advanced functions, the $args array is still sometimes helpful. For example, it provides a clean way to deal with all arguments at once:
function Reverse
{
    $argsEnd = $args.Length - 1
    $args[$argsEnd..0]
}
PS > Reverse 1 2 3 4
4
3
2
1
For more information about the param statement, see Writing Scripts, Reusing Functionality. For more information about running scripts, see Run Programs, Scripts, and Existing Tools. For more information about functionality (such as -WhatIf and -Confirm) exposed by the PowerShell engine, see Provide -WhatIf, -Confirm, and Other Cmdlet Features.
For information about how to declare parameters with rich validation and behavior, see Add Validation to Parameters.
Use the [Parameter()] attribute to declare the parameter as mandatory, positional, part of a mutually exclusive set of parameters, or able to receive its input from the pipeline.
param(
    [Parameter(
        Mandatory = $true,
        Position = 0,
        ValueFromPipeline = $true,
        ValueFromPipelineByPropertyName = $true)]
    [string[]] $Name
)
Use additional validation attributes to define aliases, support for null or empty values, count restrictions (for collections), length restrictions (for strings), regular expression requirements, range requirements (for numbers), permissible value requirements, or even arbitrary script requirements.
param(
    [ValidateLength(5,10)]
    [string] $Name
)

"Hello $Name"
Traditional shells require extensions (scripts and commands) to write their parameter support by hand, resulting in a wide range of behavior. Some implement a bare, confusing minimum of support. Others implement more complex features, but differently than any other command. The bare, confusing minimum is by far the most common, as writing fully featured parameter support is a complex endeavor.
Luckily, the PowerShell engine already wrote all of the complex parameter handling support and manages all of this detail for you. Rather than write the code to enforce it, you can simply mark parameters as mandatory or positional or state their validation requirements. This built-in support for parameter behavior and validation forms a centerpiece of PowerShell’s unique consistency.
Parameter validation is one of the main distinctions between scripts that are well behaved and those that are not. When running a new script (or one you wrote distantly in the past), reviewing the parameter definitions and validation requirements is one of the quickest ways to familiarize yourself with how that script behaves.
From the script author's perspective, declaring validation requirements saves you from having to write that verification code by hand.
The elements of the [Parameter()] attribute mainly define how your parameter behaves in relation to other parameters. All elements are optional.
Mandatory = $true
Defines the parameter as mandatory. If the user doesn't supply a value to this parameter, PowerShell automatically prompts the user for it. When not specified, the parameter is optional.
Position = position
Defines the position of this parameter. This applies when the user provides parameter values without specifying the parameter they apply to (for example, Argument2 in Invoke-MyFunction -Param1 Argument1 Argument2). PowerShell supplies these values to parameters that have defined a Position, from lowest to highest. When not specified, the name of this parameter must be supplied by the user.
ParameterSetName = name
Defines this parameter as a member of a set of other related parameters. Parameter behavior for this parameter is then specific to this related set of parameters, and the parameter exists only in parameter sets in which it is defined. This feature is used, for example, when the user may supply only a Name or ID. To include a parameter in two or more specific parameter sets, use two or more [Parameter()] attributes. When not specified, this parameter is a member of all parameter sets. To define the default parameter set name of your cmdlet, supply it in the CmdletBinding attribute:

[CmdletBinding(DefaultParameterSetName = "Name")]
ValueFromPipeline = $true
Declares this parameter as one that directly accepts pipeline input. If the user pipes data into your script or function, PowerShell assigns this input to your parameter in your command's process {} block.
For more information about accepting pipeline input, see Access Pipeline Input. Beware of applying this to String parameters, as almost all input can be converted to strings, often producing a result that doesn't make much sense. When not specified, this parameter does not accept pipeline input directly.
ValueFromPipelineByPropertyName = $true
Declares this parameter as one that accepts pipeline input if a property of an incoming object matches its name. If this is true, PowerShell assigns the value of that property to your parameter in your command's process {} block. For more information about accepting pipeline input, see Access Pipeline Input. When not specified, this parameter does not accept pipeline input by property name.
ValueFromRemainingArguments = $true
Declares this parameter as one that accepts all remaining input that has not otherwise been assigned to positional or named parameters. Only one parameter can have this element. If no parameter declares support for this capability, PowerShell generates an error for arguments that cannot be assigned.
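A brief sketch of this element (the function and parameter names are hypothetical):

```powershell
## Hypothetical example: $Rest collects every argument not bound elsewhere
function Format-Items
{
    param(
        [string] $Prefix = "- ",

        [Parameter(ValueFromRemainingArguments = $true)]
        $Rest
    )

    foreach($item in $Rest) { "$Prefix$item" }
}
```

In a call such as Format-Items -Prefix "* " One Two Three, the arguments One, Two, and Three all land in $Rest.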
In addition to the [Parameter()] attribute, PowerShell lets you apply other attributes that add additional behavior or validation constraints to your parameters. All validation attributes are optional.
[Alias("name
")]
Defines an alternate name for this parameter. This is especially helpful for long parameter names that are descriptive but have a more common colloquial term. When not specified, the parameter can be referred to only by the name you originally declared. You can supply many aliases to a parameter. To learn about aliases for command parameters, see Program: Learn Aliases for Common Parameters.
[AllowNull()]
Allows this parameter to receive $null as its value. This is required only for mandatory parameters. When not specified, mandatory parameters cannot receive $null as their value, although optional parameters can.
[AllowEmptyString()]
Allows this string parameter to receive an empty string as its value. This is required only for mandatory parameters. When not specified, mandatory string parameters cannot receive an empty string as their value, although optional string parameters can. You can apply this to parameters that are not strings, but it has no impact.
[AllowEmptyCollection()]
Allows this collection parameter to receive an empty collection as its value. This is required only for mandatory parameters. When not specified, mandatory collection parameters cannot receive an empty collection as their value, although optional collection parameters can. You can apply this to parameters that are not collections, but it has no impact.
[ValidateCount(lower limit, upper limit)]
Restricts the number of elements that can be in a collection supplied to this parameter. When not specified, mandatory parameters have a lower limit of one element. Optional parameters have no restrictions. You can apply this to parameters that are not collections, but it has no impact.
[ValidateLength(lower limit, upper limit)]
Restricts the length of strings that this parameter can accept. When not specified, mandatory parameters have a lower limit of one character. Optional parameters have no restrictions. You can apply this to parameters that are not strings, but it has no impact.
[ValidatePattern("regular
expression
")]
Enforces a pattern that input to this string parameter must match. When not specified, string inputs have no pattern requirements. You can apply this to parameters that are not strings, but it has no impact.
If your parameter has a pattern requirement, though, it may be more effective to validate the parameter in the body of your script or function instead. The error message that PowerShell generates when a parameter fails [ValidatePattern()] validation is not very user-friendly ("The argument ... does not match the <pattern> pattern"). Instead, it might be more helpful to generate a message explaining the intent of the pattern:

if($EmailAddress -notmatch pattern)
{
    throw "Please specify a valid email address."
}
[ValidateRange(lower limit, upper limit)]
Restricts the upper and lower limit of numerical arguments that this parameter can accept. When not specified, parameters have no range limit. You can apply this to parameters that are not numbers, but it has no impact.
[ValidateScript( { script block } )]
Ensures that input supplied to this parameter satisfies the condition that you supply in the script block. PowerShell assigns the proposed input to the $_ variable, and then invokes your script block. If the script block returns $true (or anything that can be converted to $true, such as nonempty strings), PowerShell considers the validation to have been successful.
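As a brief sketch (the $Path parameter here is illustrative), a validation script block that rejects paths that do not exist:

```powershell
param(
    ## Validation runs before the script body; $_ holds the proposed value
    [ValidateScript({ Test-Path $_ })]
    [string] $Path
)

"Path exists: $Path"
```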
[ValidateSet("First
Option
", "Second Option
",
..., "Last Option
")]
Ensures that input supplied to this
parameter is equal to one of the options in the set. PowerShell
uses its standard meaning of equality during this comparison
(the same rules used by the -eq
operator). If
your validation requires nonstandard rules (such as
case-sensitive comparison of strings), you can instead write the
validation in the body of the script or function.
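For example (the $Priority parameter here is illustrative):

```powershell
param(
    ## Only these three values (compared with -eq semantics) are permitted
    [ValidateSet("Low", "Medium", "High")]
    [string] $Priority = "Medium"
)

"Priority: $Priority"
```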
[ValidateNotNull()]
Ensures that input supplied to this parameter is not null. This is the default behavior of mandatory parameters, and this attribute is useful only for optional parameters. When applied to string parameters, a $null parameter value instead gets converted to an empty string.
[ValidateNotNullOrEmpty()]
Ensures that input supplied to this parameter is neither null nor empty. This is the default behavior of mandatory parameters, and this attribute is useful only for optional parameters. When applied to string parameters, the input must be a string with at least one character. When applied to collection parameters, the collection must have at least one element. When applied to other types of parameters, this attribute is equivalent to the [ValidateNotNull()] attribute.
Program: Learn Aliases for Common Parameters
Get-Help about_Functions_Advanced_Parameters
Your command takes a script block as a parameter. When you invoke that script block, you want variables to refer to variables from the user’s session, not your script.
Call the GetNewClosure() method on the supplied script block before either defining any of your own variables or invoking the script block. See Example 11-13.
Example 11-13. A command that supports variables from the user’s session
##############################################################################
##
## Invoke-ScriptBlockClosure
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

<#

.SYNOPSIS

Demonstrates the GetNewClosure() method on a script block that pulls
variables in from the user's session (if they are defined).

.EXAMPLE

PS >$name = "Hello There"
PS >Invoke-ScriptBlockClosure { $name }
Hello There
Hello World
Hello There

#>

param(
    ## The script block to invoke
    [ScriptBlock] $ScriptBlock
)

Set-StrictMode -Version Latest

## Create a new script block that pulls variables
## from the user's scope (if defined).
$closedScriptBlock = $scriptBlock.GetNewClosure()

## Invoke the script block normally. The contents of
## the $name variable will be from the user's session.
& $scriptBlock

## Define a new variable
$name = "Hello World"

## Invoke the script block normally. The contents of
## the $name variable will be "Hello World", now from
## our scope.
& $scriptBlock

## Invoke the "closed" script block. The contents of
## the $name variable will still be whatever was in the user's session
## (if it was defined).
& $closedScriptBlock
Whenever you invoke a script block (for example, one passed by the user as a parameter value), PowerShell treats variables in that script block as though you had typed them yourself. For example, if a variable referenced by the script block is defined in your script or module, PowerShell will use that value when it evaluates the variable.
This is often desirable behavior, although its use ultimately depends on your script. For example, Write a Script Block accepts a script block parameter that is intended to refer to variables defined within the script: $_, specifically.
Alternatively, this might not always be what you want. Sometimes, you might prefer that variable names refer to variables from the user’s session, rather than potentially from your script.
The solution, in this case, is to call the GetNewClosure() method. This method makes the script block self-contained, or closed. Variables maintain the value they had when the GetNewClosure() method was called, even if a new variable with that name is created.
You want to specify the parameters of a command you are about to invoke but don’t know beforehand what those parameters will be.
Define the parameters and their values as elements of a hashtable, and then use the @ character to pass that hashtable to a command:
PS > $parameters = @{ Name = "PowerShell"; WhatIf = $true }
PS > Stop-Process @parameters
What if: Performing operation "Stop-Process" on Target "powershell (2380)".
What if: Performing operation "Stop-Process" on Target "powershell (2792)".
When writing commands that call other commands, a common problem is not knowing the exact parameter values that you'll pass to a target command. If you know the parameter names ahead of time, the solution is simple: store the parameter values in variables:
PS > function Stop-ProcessWhatIf($name)
{
    Stop-Process -Name $name -WhatIf
}
PS > Stop-ProcessWhatIf PowerShell
What if: Performing operation "Stop-Process" on Target "powershell (2380)".
What if: Performing operation "Stop-Process" on Target "powershell (2792)".
In version one of PowerShell, things were unreasonably more difficult if you didn’t know beforehand which parameter names you wanted to pass along. Version two of PowerShell significantly improves the situation through a technique called splatting that lets you pass along parameter values and names.
The first step is to define a variable, for example, $parameters. In that variable, store a hashtable of parameter names and their values. When you call a command, you can pass the hashtable of parameter names and values with the @ character and the variable name that stores them. Note that you use the @ character to represent the variable, instead of the usual $ character:

Stop-Process @parameters
This is a common need when writing commands that are designed to enhance or extend existing commands. In that situation, you simply want to pass all of the user’s input (parameter values and names) on to the existing command, even though you don’t know exactly what they supplied.
To simplify this situation even further, advanced functions have access to an automatic variable called PSBoundParameters. This automatic variable is a hashtable that stores all parameters passed to the current command, and it is suitable for both tweaking and splatting. For an example of this approach, see Program: Enhance or Extend an Existing Cmdlet. For more information about advanced functions, see Access Arguments of a Script, Function, or Script Block.
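A minimal sketch of this technique (the wrapper function name is hypothetical; -WhatIf is added so the sketch stays harmless):

```powershell
## Hypothetical wrapper around Stop-Process: forward only the parameters
## the user actually supplied, via the automatic $PSBoundParameters table.
function Stop-ProcessAudited
{
    [CmdletBinding()]
    param(
        [string] $Name
    )

    Write-Verbose "Parameters supplied: $($PSBoundParameters.Keys)"

    ## Splat the bound parameters onto the underlying command
    Stop-Process @PSBoundParameters -WhatIf
}
```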
You want to support the standard -WhatIf and -Confirm parameters, and access cmdlet-centric support in the PowerShell engine.
Ensure your script or function declares the [CmdletBinding()] attribute, and then access engine features through the $psCmdlet automatic variable.
function Invoke-MyAdvancedFunction
{
    [CmdletBinding(SupportsShouldProcess = $true)]
    param()

    if($psCmdlet.ShouldProcess("test.txt", "Remove Item"))
    {
        "Removing test.txt"
    }

    Write-Verbose "Verbose Message"
}
When a script or function progresses to an advanced function, PowerShell defines an additional $psCmdlet automatic variable. This automatic variable exposes support for the -WhatIf and -Confirm automatic parameters. If your command defines parameter sets, it also exposes the parameter set name that PowerShell selected based on the user's choice of parameters. For more information about advanced functions, see Access Arguments of a Script, Function, or Script Block.
To support the -WhatIf and -Confirm parameters, add the [CmdletBinding(SupportsShouldProcess = $true)] attribute inside of your script or function. You should support these parameters on any scripts or functions that modify system state, as they let your users investigate what your script will do before actually doing it. Then, you simply surround the portion of your script that changes the system with an if($psCmdlet.ShouldProcess(...)) { } block. Example 11-14 demonstrates this approach.
Now your advanced function is as well-behaved as built-in PowerShell cmdlets!
PS > Invoke-MyAdvancedFunction -WhatIf
What if: Performing operation "Remove Item" on Target "test.txt".
If your command causes a high-impact result that should be evaluated with caution, call the $psCmdlet.ShouldContinue() method. This generates a warning for users, but be sure to support a -Force parameter that lets them bypass this message.
function Invoke-MyDangerousFunction
{
    [CmdletBinding()]
    param(
        [Switch] $Force
    )

    if($Force -or $psCmdlet.ShouldContinue(
        "Do you wish to invoke this dangerous operation? Changes can not be undone.",
        "Invoke dangerous action?"))
    {
        "Invoking dangerous action"
    }
}
This generates a standard PowerShell confirmation message:
PS > Invoke-MyDangerousFunction

Invoke dangerous action?
Do you wish to invoke this dangerous operation? Changes can not be undone.
[Y] Yes  [N] No  [S] Suspend  [?] Help (default is "Y"):
Invoking dangerous action

PS > Invoke-MyDangerousFunction -Force
Invoking dangerous action
To explore the $psCmdlet automatic variable further, you can use Example 11-15. This command creates the bare minimum of an advanced function, and then invokes whatever script block you supply within it.
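The original Example 11-15 is not reproduced here; a minimal sketch of such a command might look like:

```powershell
## The bare minimum of an advanced function: accept a script block
## and invoke it, so the script block runs with $psCmdlet defined.
function Invoke-AdvancedFunction
{
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true)]
        [ScriptBlock] $ScriptBlock
    )

    ## Invoke the script block supplied by the user
    & $ScriptBlock
}
```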
For open-ended exploration, use $host.EnterNestedPrompt() as the script block:
PS > Invoke-AdvancedFunction { $host.EnterNestedPrompt() }
PS > $psCmdlet | Get-Member

   TypeName: System.Management.Automation.PSScriptCmdlet

Name              MemberType Definition
----              ---------- ----------
(...)
WriteDebug        Method     System.Void WriteDebug(s...
WriteError        Method     System.Void WriteError(S...
WriteObject       Method     System.Void WriteObject(...
WriteProgress     Method     System.Void WriteProgres...
WriteVerbose      Method     System.Void WriteVerbose...
WriteWarning      Method     System.Void WriteWarning...
(...)
ParameterSetName  Property   System.String ParameterS...

PS >> exit
PS >
For more about cmdlet support in the PowerShell engine, see the developer’s reference at http://msdn.microsoft.com/en-us/library/dd878294%28VS.85%29.aspx.
Add descriptive help comments at the beginning of your script for its synopsis, description, examples, notes, and more. Add descriptive help comments before parameters to describe their meaning and behavior.
##############################################################################
##
## Measure-CommandPerformance
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

<#

.SYNOPSIS

Measures the average time of a command, accounting for natural variability by
automatically ignoring the top and bottom ten percent.

.EXAMPLE

PS > .\Measure-CommandPerformance.ps1 { Start-Sleep -m 300 }
Count    : 30
Average  : 312.10155
(...)

#>

param(
    ## The command to measure
    [Scriptblock] $command,

    ## The number of times to measure the command's performance
    [int] $iterations = 30
)

(...)
Like parameter validation, discussed in Add Validation to Parameters, rich help is something traditionally supported in only the most high-end commands. For most commands, you’re lucky if you can figure out how to get some form of usage message.
As with PowerShell’s easy-to-define support for advanced parameter validation, adding help to commands and functions is extremely simple. Despite its simplicity, comment-based help provides all the power you’ve come to expect of fully featured PowerShell commands: overview, description, examples, parameter-specific details, and more.
PowerShell creates help for your script or function by looking at its comments. If the comments include any supported help tags, PowerShell adds those to the help for your command.
To speed up processing of these help comments, PowerShell places restrictions on where they may appear. In addition, if it encounters a comment that is not a help-based comment, it stops searching that block of comments for help tags. This may come as a surprise if you are used to placing headers or copyright information at the beginning of your script. The solution demonstrates how to avoid this problem by putting the header and comment-based help in separate comment blocks. For more information about these guidelines, type Get-Help about_Comment_Based_Help.
You can place your help tags in either single-line comments or multiline (block) comments. You may find multiline comments easier to work with, as you can write them in editors that support spelling and grammar checks and then simply paste them into your script. Also, adjusting the word-wrapping of your comment is easier when you don’t have to repair comment markers at the beginning of the line. From the user’s perspective, multiline comments offer a significant benefit for the .EXAMPLES section because they require much less modification before being tried.
Comment-based help supports the following tags, which are all case-insensitive.
.SYNOPSIS
A short summary of the command.

.DESCRIPTION
A detailed description of the command and its behavior.
.PARAMETER name
A description of parameter name, with one for each parameter you want to describe. While you can write a .PARAMETER comment for each parameter, PowerShell also supports comments written directly above the parameter (as shown in the solution). Putting parameter help alongside the actual parameter makes it easier to read and maintain.
.EXAMPLE
An example of this command in use, with one for each example you want to provide. PowerShell treats the line immediately beneath the .EXAMPLE tag as the example command. If this line doesn't contain any text that looks like a prompt, PowerShell adds a prompt before it. It treats lines that follow the initial line as additional output and example commentary.
.INPUTS
A short summary of pipeline input(s) supported by this command. For each input type, PowerShell’s built-in help follows this convention:
System.String
You can pipe a string that contains a path to Get-ChildItem.
.OUTPUTS
A short summary of items generated by this command. For each output type, PowerShell’s built-in help follows this convention:
System.ServiceProcess.ServiceController
Get-Service returns objects that represent the services on the computer.
.NOTES
Additional notes or information about the command.
.LINK
A link to a related help topic or command, with one .LINK tag per link. If the related help topic is a URL, PowerShell launches that URL when the user supplies the -Online parameter to Get-Help for your command.
Although these are all of the supported help tags you are likely to use, comment-based help also supports tags for some of Get-Help's more obscure features: .COMPONENT, .ROLE, .FUNCTIONALITY, .FORWARDHELPTARGETNAME, .FORWARDHELPCATEGORY, .REMOTEHELPRUNSPACE, and .EXTERNALHELP. For more information about these, type Get-Help about_Comment_Based_Help.
If you want the custom information to always be associated with the function or script block, declare a System.ComponentModel.Description attribute inside that function:
function TestFunction
{
    [System.ComponentModel.Description("Information I care about")]
    param()

    "Some function with metadata"
}
If you don't control the source code of the function, create a new System.ComponentModel.Description attribute, and add it to the script block's Attributes collection manually:
$testFunction = Get-Command TestFunction
$newAttribute =
    New-Object ComponentModel.DescriptionAttribute "More information I care about"
$testFunction.ScriptBlock.Attributes.Add($newAttribute)
To retrieve any attributes associated with a function or script block, access the ScriptBlock.Attributes property:
PS > $testFunction = Get-Command TestFunction
PS > $testFunction.ScriptBlock.Attributes

Description              TypeId
-----------              ------
Information I care about System.ComponentModel.Description...
Although a specialized need for sure, it is sometimes helpful to add your own custom information to functions or script blocks. For example, once you’ve built up a large set of functions, many are really useful only in a specific context. Some functions might apply to only one of your clients, whereas others are written for a custom website you’re developing. If you forget the name of a function, you might have difficulty going through all of your functions to find the ones that apply to your current context.
You might find it helpful to write a new function, Get-CommandForContext, that takes a context (for example, website) and returns only commands that apply to that context.
function Get-CommandForContext($context)
{
    Get-Command -CommandType Function |
        Where-Object { $_.ScriptBlock.Attributes |
            Where-Object { $_.Description -eq "Context=$context" } }
}
Then write some functions that apply to specific contexts:
function WebsiteFunction
{
    [System.ComponentModel.Description("Context=Website")]
    param()

    "Some function I use with my website"
}

function ExchangeFunction
{
    [System.ComponentModel.Description("Context=Exchange")]
    param()

    "Some function I use with Exchange"
}
Then, by building on these two, we have a context-sensitive equivalent to Get-Command:
PS > Get-CommandForContext Website

CommandType     Name                      Definition
-----------     ----                      ----------
Function        WebsiteFunction           ...

PS > Get-CommandForContext Exchange

CommandType     Name                      Definition
-----------     ----                      ----------
Function        ExchangeFunction          ...
While the System.ComponentModel.Description attribute is the most generically useful, PowerShell lets you place any attribute in a function. You can define your own (by deriving from the System.Attribute class in the .NET Framework) or use any of the other attributes included in the .NET Framework. Example 11-16 shows the PowerShell commands to find all attributes that have a constructor that takes a single string as its argument. These attributes are likely to be generally useful.
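Example 11-16 is not reproduced here; a hypothetical sketch in the same spirit (searching only the currently loaded assemblies) might look like:

```powershell
## Sketch: find attribute types whose constructor takes a single string
[AppDomain]::CurrentDomain.GetAssemblies() |
    ForEach-Object {
        ## Some assemblies cannot enumerate their types; skip those
        try { $_.GetTypes() } catch { }
    } |
    Where-Object { $_.IsSubclassOf([Attribute]) } |
    Where-Object { $_.GetConstructor([type[]] @([string])) } |
    Sort-Object FullName |
    Select-Object FullName
```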
For more information about working with .NET objects, see Work with .NET Objects.
You want to interact with input that a user sends to your function, script, or script block via the pipeline.
To access pipeline input, use the $input variable, as shown in Example 11-17.
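Example 11-17 itself is not reproduced here; a minimal sketch of such a function (the InputCounter name matches the output below):

```powershell
## Count the elements provided as pipeline input
function InputCounter
{
    $count = 0

    ## Go through each element in the pipeline input
    foreach($element in $input)
    {
        $count++
    }

    $count
}
```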
This function produces the following (or similar) output when run against your Windows system directory:
PS > dir $env:WINDIR | InputCounter
295
In your scripts, functions, and script blocks, the $input variable represents an enumerator (as opposed to a simple array) for the pipeline input the user provides. An enumerator lets you use a foreach statement to efficiently scan over the elements of the input (as shown in Example 11-17) but does not let you directly access specific items (such as the fifth element in the input, for example).
An enumerator only lets you scan forward through its contents. Once you access an element, PowerShell automatically moves on to the next one. If you need to access an item that you've already accessed, you must either call $input.Reset() to scan through the list again from the beginning or store the input in an array.
If you need to access specific elements in the
input (or access items multiple times), the best approach is to store
the input in an array. This prevents your script from taking advantage
of the $input
enumerator’s streaming
behavior, but is sometimes the only alternative. To store the input in an array, use PowerShell’s list evaluation syntax (@()) to force PowerShell to interpret it as an array.
function ReverseInput
{
    $inputArray = @($input)
    $inputEnd = $inputArray.Count - 1
    $inputArray[$inputEnd..0]
}
PS > 1,2,3,4 | ReverseInput
4
3
2
1
If dealing with pipeline input plays a major role in your script, function, or script block, PowerShell provides an alternative means of dealing with pipeline input that may make your script easier to write and understand. For more information, see Write Pipeline-Oriented Scripts with Cmdlet Keywords.
Your script, function, or script block primarily takes input from the pipeline, and you want to write it in a way that makes this intention both easy to implement and easy to read.
To cleanly separate your script into regions that deal with the initialization, per-record processing, and cleanup portions, use the begin, process, and end keywords, respectively. For example, a pipeline-oriented conversion of the solution in Access Pipeline Input looks like Example 11-18.
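A minimal pipeline-oriented counter, consistent with the output shown below, might look like this sketch (the exact body of Example 11-18 may differ):

```powershell
## Count the elements in the pipeline, using the begin, process,
## and end keywords
function InputCounter
{
    begin
    {
        $count = 0
    }

    process
    {
        ## Visible only when $debugPreference permits debug output
        Write-Debug "Processing element $_"
        $count++
    }

    end
    {
        $count
    }
}
```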
This produces the following output:
PS > $debugPreference = "Continue"
PS > dir | InputCounter
DEBUG: Processing element Compare-Property.ps1
DEBUG: Processing element Connect-WebService.ps1
DEBUG: Processing element Convert-TextObject.ps1
DEBUG: Processing element ConvertFrom-FahrenheitWithFunction.ps1
DEBUG: Processing element ConvertFrom-FahrenheitWithoutFunction.ps1
DEBUG: Processing element Get-AliasSuggestion.ps1
(...)
DEBUG: Processing element Select-FilteredObject.ps1
DEBUG: Processing element Set-ConsoleProperties.ps1
20
If your script, function, or script block deals primarily with input from the pipeline, the begin, process, and end keywords let you express your solution most clearly. Readers of your script (including you!) can easily see which portions of your script deal with initialization, per-record processing, and cleanup. In addition, separating your code into these blocks lets your script consume elements from the pipeline as soon as the previous script produces them.
Take, for example, the Get-InputWithForeach and Get-InputWithKeyword functions shown in Example 11-19. The first function visits each element in the pipeline with a foreach statement over its input, whereas the second uses the begin, process, and end keywords.
Example 11-19. Two functions that take different approaches to processing pipeline input
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)

Set-StrictMode -Version Latest

## Process each element in the pipeline, using a
## foreach statement to visit each element in $input
function Get-InputWithForeach($identifier)
{
    Write-Host "Starting InputWithForeach (ID: $identifier)"

    foreach($element in $input)
    {
        Write-Host "Processing element $element (ID: $identifier)"
        $element
    }

    Write-Host "Stopping InputWithForeach (ID: $identifier)"
}

## Process each element in the pipeline, using the
## cmdlet-style keywords to visit each element in $input
function Get-InputWithKeyword($identifier)
{
    begin
    {
        Write-Host "Starting InputWithKeyword (ID: $identifier)"
    }

    process
    {
        Write-Host "Processing element $_ (ID: $identifier)"
        $_
    }

    end
    {
        Write-Host "Stopping InputWithKeyword (ID: $identifier)"
    }
}
Both of these functions act the same when run
individually, but the difference becomes clear when we combine them with
other scripts or functions that take pipeline input. When a script uses
the $input
variable, it must wait
until the previous script finishes producing output before it can start.
If the previous script takes a long time to produce all its records (for
example, a large directory listing), then your user must wait until the
entire directory listing completes to see any results, rather than
seeing results for each item as the script generates it.
If a script, function, or script block uses the cmdlet-style keywords, it must place all its code (aside from comments or its param statement, if it uses one) inside one of the three blocks. If your code needs to define and initialize variables or define functions, place them in the begin block. Unlike most blocks of code contained within curly braces, the code in the begin, process, and end blocks has access to variables and functions defined within the blocks before it.
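For example, a variable initialized in the begin block remains visible in the process and end blocks. Here is a minimal sketch using a hypothetical Measure-Sum function:

```powershell
function Measure-Sum
{
    ## State defined in begin...
    begin { $total = 0 }

    ## ...is visible in process...
    process { $total += $_ }

    ## ...and in end
    end { $total }
}

## 1..5 | Measure-Sum returns 15
```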
When we chain together two scripts that process their input with the begin, process, and end keywords, the second script gets to process input as soon as the first script produces it.
PS > 1,2,3 | Get-InputWithKeyword 1 | Get-InputWithKeyword 2
Starting InputWithKeyword (ID: 1)
Starting InputWithKeyword (ID: 2)
Processing element 1 (ID: 1)
Processing element 1 (ID: 2)
1
Processing element 2 (ID: 1)
Processing element 2 (ID: 2)
2
Processing element 3 (ID: 1)
Processing element 3 (ID: 2)
3
Stopping InputWithKeyword (ID: 1)
Stopping InputWithKeyword (ID: 2)
When we chain together two scripts that
process their input with the $input
variable, the second script can’t start until the first
completes.
PS > 1,2,3 | Get-InputWithForeach 1 | Get-InputWithForeach 2
Starting InputWithForeach (ID: 1)
Processing element 1 (ID: 1)
Processing element 2 (ID: 1)
Processing element 3 (ID: 1)
Stopping InputWithForeach (ID: 1)
Starting InputWithForeach (ID: 2)
Processing element 1 (ID: 2)
1
Processing element 2 (ID: 2)
2
Processing element 3 (ID: 2)
3
Stopping InputWithForeach (ID: 2)
When the first script uses the cmdlet-style
keywords, and the second script uses the $input
variable, the second script can’t start
until the first completes.
PS > 1,2,3 | Get-InputWithKeyword 1 | Get-InputWithForeach 2
Starting InputWithKeyword (ID: 1)
Processing element 1 (ID: 1)
Processing element 2 (ID: 1)
Processing element 3 (ID: 1)
Stopping InputWithKeyword (ID: 1)
Starting InputWithForeach (ID: 2)
Processing element 1 (ID: 2)
1
Processing element 2 (ID: 2)
2
Processing element 3 (ID: 2)
3
Stopping InputWithForeach (ID: 2)
When the first script uses the $input variable and the second script uses the cmdlet-style keywords, the second script gets to process input as soon as the first script produces it. Notice, however, that InputWithKeyword starts before InputWithForeach. This is because functions with no explicit begin, process, or end blocks have all of their code placed in an end block by default.
PS > 1,2,3 | Get-InputWithForeach 1 | Get-InputWithKeyword 2
Starting InputWithKeyword (ID: 2)
Starting InputWithForeach (ID: 1)
Processing element 1 (ID: 1)
Processing element 1 (ID: 2)
1
Processing element 2 (ID: 1)
Processing element 2 (ID: 2)
2
Processing element 3 (ID: 1)
Processing element 3 (ID: 2)
3
Stopping InputWithForeach (ID: 1)
Stopping InputWithKeyword (ID: 2)
For more information about dealing with pipeline input, see Writing Scripts, Reusing Functionality.
Your function primarily takes its input from the pipeline, and you want it to perform the same steps for each element of that input.
To write a pipeline-oriented function, define your function using the filter keyword, rather than the function keyword. PowerShell makes the current pipeline object available as the $_ variable.
filter Get-PropertyValue($property)
{
    $_.$property
}
A filter is the equivalent of a function that
uses the cmdlet-style keywords and has all its code inside the process
section.
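In other words, the Get-PropertyValue filter from the Solution could equivalently be written as the following sketch of the same behavior:

```powershell
## Equivalent to the Get-PropertyValue filter: a function whose
## entire body lives in the process block
function Get-PropertyValue($property)
{
    process { $_.$property }
}
```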
The solution demonstrates an extremely useful filter: one that returns the value of a property for each item in a pipeline.
PS > Get-Process | Get-PropertyValue Name
audiodg
avgamsvr
avgemc
avgrssvc
avgrssvc
avgupsvc
(...)
For a more complete example of this approach, see Program: Simplify Most Foreach-Object Pipelines. For more information about the cmdlet-style keywords, see Write Pipeline-Oriented Scripts with Cmdlet Keywords.
You have a long script that includes helper functions, but those helper functions obscure the main intent of the script.
Place the main logic of your script in a function called Main, and place that function at the top of your script. At the bottom of your script (after all the helper functions have also been defined), dot-source the Main function.
## LongScript.ps1

function Main
{
    "Invoking the main logic of the script"
    CallHelperFunction1
    CallHelperFunction2
}

function CallHelperFunction1
{
    "Calling the first helper function"
}

function CallHelperFunction2
{
    "Calling the second helper function"
}

. Main
When PowerShell invokes a script, it executes it in order from the beginning to the end. Just as when you type commands in the console, PowerShell generates an error if you try to call a function that you haven’t yet defined.
When writing a long script with lots of helper functions, this usually results in those helper functions migrating to the top of the script so that they are all defined by the time your main logic finally executes them. When reading the script, then, you are forced to wade through pages of seemingly unrelated helper functions just to reach the main logic of the script.
You might wonder why PowerShell requires functions to be defined before they are called. After all, a script is self-contained, and it would be possible for PowerShell to process all of the function definitions before invoking the script.
The reason is parity with the interactive environment. Pasting a script into the console window is a common diagnostic or experimental technique, as is highlighting portions of a script in the Integrated Scripting Environment and selecting “Run Selection.” If PowerShell did something special in an imaginary script mode, these techniques would not be possible.
To resolve this problem, you can place the main script logic in a function of its own. The name doesn’t matter, but Main is a traditional name. If you place this function at the top of the script, your main logic is visible immediately.
Functions aren’t automatically executed, so the final step is to invoke the Main function. Place this call at the end of your script, and you can be sure that all the required helper functions have been defined. Dot sourcing this function ensures that it is processed in the script scope, rather than the isolated function scope that would normally be created for it.
For more information about dot sourcing and script scopes, see Control Access and Scope of Variables and Other Items.
You want to take an action based on the pattern of a command name, as opposed to the name of the command itself.
Add a command wrapper for the Out-Default cmdlet that intercepts CommandNotFound errors and takes action based on the TargetObject of that error. Example 11-20 illustrates this technique by supporting relative path navigation without an explicit call to Set-Location.
Example 11-20. Add-RelativePathCapture.ps1
##############################################################################
##
## Add-RelativePathCapture
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

<#

.SYNOPSIS

Adds a new Out-Default command wrapper that captures relative path
navigation without having to explicitly call 'Set-Location'

.EXAMPLE

PS C:\Users\Lee\Documents>..
PS C:\Users\Lee>...
PS C:\>

.NOTES

This command builds on New-CommandWrapper, also included in the
Windows PowerShell Cookbook.

#>

Set-StrictMode -Version Latest

New-CommandWrapper Out-Default `
    -Process {
        if(($_ -is [System.Management.Automation.ErrorRecord]) -and
            ($_.FullyQualifiedErrorId -eq "CommandNotFoundException"))
        {
            ## Intercept all CommandNotFound exceptions, where the actual
            ## command consisted solely of dots.
            $command = $_.TargetObject
            if($command -match '^(\.)+$')
            {
                ## Count the number of dots, and go that many levels (minus
                ## one) up the directory hierarchy.
                $newLocation = "..\" * ($command.Length - 1)
                if($newLocation) { Set-Location $newLocation }

                ## Handle the error
                $error.RemoveAt(0)
                $_ = $null
            }
        }
    }
PowerShell supports several useful forms of named commands (cmdlets, functions, and aliases), but you may find yourself wanting to write extensions that alter their behavior based on the form of the name, rather than the arguments passed to it. For example, you might want to automatically launch URLs just by typing them or navigate around providers just by typing relative path locations.
While this is not a built-in feature of PowerShell, it is possible to get a very reasonable alternative by intercepting the errors that PowerShell generates when it can’t find a command. The example in the Solution does just this, by building a command wrapper over the Out-Default command to intercept and act on commands that consist solely of dots.
While PowerShell’s built-in commands are useful, you may sometimes wish they had included an additional parameter or supported a minor change to their functionality. This was difficult in version one of PowerShell, since “wrapping” another command was technical and error-prone. In addition to the complexity of parsing parameters and passing only the correct ones along, previous solutions also prevented wrapped commands from benefiting from the streaming nature of PowerShell’s pipeline.
Version two of PowerShell significantly improves the situation by combining three new features:
Given a script block that contains a single pipeline, the GetSteppablePipeline() method returns a SteppablePipeline object that gives you control over the Begin, Process, and End stages of the pipeline.
Given a hashtable of names and values, PowerShell lets you pass the entire hashtable to a command. If you use the @ symbol to identify the hashtable variable name (rather than the $ symbol), PowerShell then treats each element of the hashtable as though it were a parameter to the command.
With enough knowledge of steppable pipelines, splatting, and parameter validation, you can write your own function that can effectively wrap another command. The proxy command APIs make this significantly easier by auto-generating large chunks of the required boilerplate script.
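The first two of these features can be combined in a minimal sketch like the following (the property name and input values are arbitrary, chosen for illustration):

```powershell
## Splatting: each hashtable entry becomes a named parameter
$params = @{ Property = "Length" }

## A steppable pipeline gives explicit control over the Begin,
## Process, and End stages of "Sort-Object -Property Length"
$pipeline = { Sort-Object @params }.GetSteppablePipeline()

$pipeline.Begin($true)        ## $true: the pipeline expects input
"aaa","a","aa" | ForEach-Object { $pipeline.Process($_) }
$pipeline.End()               ## The sorted results emerge here
```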
These three features finally enable the possibility of powerful command extensions, but putting them together still requires a fair bit of technical expertise. To make things easier, use the New-CommandWrapper script (Example 11-21) to easily create commands that wrap (and extend) existing commands.
Example 11-21. New-CommandWrapper.ps1
##############################################################################
##
## New-CommandWrapper
##
## From Windows PowerShell Cookbook (O'Reilly)
## by Lee Holmes (http://www.leeholmes.com/guide)
##
##############################################################################

<#

.SYNOPSIS

Adds parameters and functionality to existing cmdlets and functions.

.EXAMPLE

New-CommandWrapper Get-Process `
    -AddParameter @{
        SortBy = {
            $newPipeline = {
                __ORIGINAL_COMMAND__ | Sort-Object -Property $SortBy
            }
        }
    }

This example adds a 'SortBy' parameter to Get-Process. It accomplishes
this by adding a Sort-Object command to the pipeline.

.EXAMPLE

$parameterAttributes = @'
    [Parameter(Mandatory = $true)]
    [ValidateRange(50,75)]
    [Int]
'@

New-CommandWrapper Clear-Host `
    -AddParameter @{
        @{
            Name = 'MyMandatoryInt';
            Attributes = $parameterAttributes
        } = {
            Write-Host $MyMandatoryInt
            Read-Host "Press ENTER"
        }
    }

This example adds a new mandatory 'MyMandatoryInt' parameter to
Clear-Host. This parameter is also validated to fall within the range
of 50 to 75. It doesn't alter the pipeline, but does display some
information on the screen before processing the original pipeline.

#>

param(
    ## The name of the command to extend
    [Parameter(Mandatory = $true)]
    $Name,

    ## Script to invoke before the command begins
    [ScriptBlock] $Begin,

    ## Script to invoke for each input element
    [ScriptBlock] $Process,

    ## Script to invoke at the end of the command
    [ScriptBlock] $End,

    ## Parameters to add, and their functionality.
    ##
    ## The Key of the hashtable can be either a simple parameter name,
    ## or a more advanced parameter description.
    ##
    ## If you want to add additional parameter validation (such as a
    ## parameter type), then the key can itself be a hashtable with the keys
    ## 'Name' and 'Attributes'. 'Attributes' is the text you would use when
    ## defining this parameter as part of a function.
    ##
    ## The Value of each hashtable entry is a script block to invoke
    ## when this parameter is selected. To customize the pipeline,
    ## assign a new script block to the $newPipeline variable. Use the
    ## special text, __ORIGINAL_COMMAND__, to represent the original
    ## command. The $targetParameters variable represents a hashtable
    ## containing the parameters that will be passed to the original
    ## command.
    [HashTable] $AddParameter
)

Set-StrictMode -Version Latest

## Store the target command we are wrapping and its command type
$target = $Name
$commandType = "Cmdlet"

## If a function already exists with this name (perhaps it's already been
## wrapped), rename the other function and chain to its new name.
if(Test-Path function:$Name)
{
    $target = "$Name" + "-" + [Guid]::NewGuid().ToString().Replace("-","")
    Rename-Item function:GLOBAL:$Name GLOBAL:$target
    $commandType = "Function"
}

## The template we use for generating a command proxy
$proxy = @'

__CMDLET_BINDING_ATTRIBUTE__
param(
__PARAMETERS__
)
begin
{
    try {
        __CUSTOM_BEGIN__

        ## Access the REAL Foreach-Object command, so that command
        ## wrappers do not interfere with this script
        $foreachObject = $executionContext.InvokeCommand.GetCmdlet(
            "Microsoft.PowerShell.Core\Foreach-Object")

        $wrappedCmd = $ExecutionContext.InvokeCommand.GetCommand(
            '__COMMAND_NAME__',
            [System.Management.Automation.CommandTypes]::__COMMAND_TYPE__)

        ## TargetParameters represents the hashtable of parameters that
        ## we will pass along to the wrapped command
        $targetParameters = @{}
        $PSBoundParameters.GetEnumerator() |
            & $foreachObject {
                if($command.Parameters.ContainsKey($_.Key))
                {
                    $targetParameters.Add($_.Key, $_.Value)
                }
            }

        ## finalPipeline represents the pipeline we will ultimately run
        $newPipeline = { & $wrappedCmd @targetParameters }
        $finalPipeline = $newPipeline.ToString()

        __CUSTOM_PARAMETER_PROCESSING__

        $steppablePipeline = [ScriptBlock]::Create(
            $finalPipeline).GetSteppablePipeline()
        $steppablePipeline.Begin($PSCmdlet)
    }
    catch {
        throw
    }
}

process
{
    try {
        __CUSTOM_PROCESS__
        $steppablePipeline.Process($_)
    }
    catch {
        throw
    }
}

end
{
    try {
        __CUSTOM_END__
        $steppablePipeline.End()
    }
    catch {
        throw
    }
}

dynamicparam
{
    ## Access the REAL Get-Command, Foreach-Object, and Where-Object
    ## commands, so that command wrappers do not interfere with this script
    $getCommand = $executionContext.InvokeCommand.GetCmdlet(
        "Microsoft.PowerShell.Core\Get-Command")
    $foreachObject = $executionContext.InvokeCommand.GetCmdlet(
        "Microsoft.PowerShell.Core\Foreach-Object")
    $whereObject = $executionContext.InvokeCommand.GetCmdlet(
        "Microsoft.PowerShell.Core\Where-Object")

    ## Find the parameters of the original command, and remove everything
    ## else from the bound parameter list so we hide parameters the wrapped
    ## command does not recognize.
    $command = & $getCommand __COMMAND_NAME__ -Type __COMMAND_TYPE__
    $targetParameters = @{}
    $PSBoundParameters.GetEnumerator() |
        & $foreachObject {
            if($command.Parameters.ContainsKey($_.Key))
            {
                $targetParameters.Add($_.Key, $_.Value)
            }
        }

    ## Get the argument list as it would be passed to the target command
    $argList = @($targetParameters.GetEnumerator() |
        Foreach-Object { "-$($_.Key)"; $_.Value })

    ## Get the dynamic parameters of the wrapped command, based on the
    ## arguments to this command
    $command = $null
    try
    {
        $command = & $getCommand __COMMAND_NAME__ -Type __COMMAND_TYPE__ `
            -ArgumentList $argList
    }
    catch
    {
    }

    $dynamicParams = @($command.Parameters.GetEnumerator() |
        & $whereObject { $_.Value.IsDynamic })

    ## For each of the dynamic parameters, add them to the dynamic
    ## parameters that we return.
    if ($dynamicParams.Length -gt 0)
    {
        $paramDictionary = `
            New-Object Management.Automation.RuntimeDefinedParameterDictionary
        foreach ($param in $dynamicParams)
        {
            $param = $param.Value
            $arguments = $param.Name, $param.ParameterType, $param.Attributes
            $newParameter = `
                New-Object Management.Automation.RuntimeDefinedParameter `
                $arguments
            $paramDictionary.Add($param.Name, $newParameter)
        }
        return $paramDictionary
    }
}

<#

.ForwardHelpTargetName __COMMAND_NAME__
.ForwardHelpCategory __COMMAND_TYPE__

#>

'@

## Get the information about the original command
$originalCommand = Get-Command $target
$metaData = New-Object System.Management.Automation.CommandMetaData `
    $originalCommand
$proxyCommandType = [System.Management.Automation.ProxyCommand]

## Generate the cmdlet binding attribute, and replace information
## about the target
$proxy = $proxy.Replace("__CMDLET_BINDING_ATTRIBUTE__",
    $proxyCommandType::GetCmdletBindingAttribute($metaData))
$proxy = $proxy.Replace("__COMMAND_NAME__", $target)
$proxy = $proxy.Replace("__COMMAND_TYPE__", $commandType)

## Stores new text we'll be putting in the param() block
$newParamBlockCode = ""

## Stores new text we'll be putting in the begin block
## (mostly due to parameter processing)
$beginAdditions = ""

## If the user wants to add a parameter
$currentParameter = $originalCommand.Parameters.Count
if($AddParameter)
{
    foreach($parameter in $AddParameter.Keys)
    {
        ## Get the code associated with this parameter
        $parameterCode = $AddParameter[$parameter]

        ## If it's an advanced parameter declaration, the hashtable
        ## holds the validation and / or type restrictions
        if($parameter -is [Hashtable])
        {
            ## Add their attributes and other information to
            ## the variable holding the parameter block additions
            if($currentParameter -gt 0)
            {
                $newParamBlockCode += ","
            }
            $newParamBlockCode += "`n`n    " +
                $parameter.Attributes + "`n" +
                '    $' + $parameter.Name
            $parameter = $parameter.Name
        }
        else
        {
            ## If this is a simple parameter name, add it to the list of
            ## parameters. The proxy generation APIs will take care of
            ## adding it to the param() block.
            $newParameter = New-Object `
                System.Management.Automation.ParameterMetadata $parameter
            $metaData.Parameters.Add($parameter, $newParameter)
        }

        $parameterCode = $parameterCode.ToString()

        ## Create the template code that invokes their parameter code if
        ## the parameter is selected.
        $templateCode = @"

        if(`$PSBoundParameters['$parameter'])
        {
            $parameterCode

            ## Replace the __ORIGINAL_COMMAND__ tag with the code
            ## that represents the original command
            `$alteredPipeline = `$newPipeline.ToString()
            `$finalPipeline = `$alteredPipeline.Replace(
                '__ORIGINAL_COMMAND__', `$finalPipeline)
        }
"@

        ## Add the template code to the list of changes we're making
        ## to the begin() section.
        $beginAdditions += $templateCode
        $currentParameter++
    }
}

## Generate the param() block
$parameters = $proxyCommandType::GetParamBlock($metaData)
if($newParamBlockCode)
{
    $parameters += $newParamBlockCode
}
$proxy = $proxy.Replace('__PARAMETERS__', $parameters)

## Update the begin, process, and end sections
$proxy = $proxy.Replace('__CUSTOM_BEGIN__', $Begin)
$proxy = $proxy.Replace('__CUSTOM_PARAMETER_PROCESSING__', $beginAdditions)
$proxy = $proxy.Replace('__CUSTOM_PROCESS__', $Process)
$proxy = $proxy.Replace('__CUSTOM_END__', $End)

## Save the function wrapper
Write-Verbose $proxy
Set-Content function:GLOBAL:$NAME $proxy

## If we were wrapping a cmdlet, hide it so that it doesn't conflict with
## Get-Help and Get-Command
if($commandType -eq "Cmdlet")
{
    $originalCommand.Visibility = "Private"
}