17. Reporting

_______________________________

In This Chapter

Introduction

Screenshots

TXT Files

CSV Files

HTML Files

SMTP Delivery

File Copy

_______________________________

Introduction

When managing environments like Office 365, we use Get cmdlets to review configuration settings or make changes to the configuration with Set cmdlets. Additionally, there are cmdlets that start with New, Remove, and so on. Either way, daily tasks such as managing cases, changing SCC tenant settings, looking for issues and scouring through logs take up the majority of an administrator's time, and little time is left for documenting an environment or being proactive and producing daily reports.

PowerShell makes creating reports easy, and while there are third-party products that can produce canned results, they are not usually as flexible as PowerShell. With PowerShell, an administrator can choose the parameters to be reported on, the formatting and the delivery method. Scripts can also be scheduled and can contain error handling as needed, depending on the intended results.

In this chapter, we will explore various reporting formats like TXT, CSV, HTML, and more. Delivery methods will also be explored and ways to schedule the reports. Real world scenarios will be used to help illustrate the usefulness of each method as well as the possibilities that each of these formats will provide.

Screenshots

The simplest method for using PowerShell to document the Security and Compliance Center is to use screenshots to capture script results. The advantages to this method are that it is quick, simple and somewhat flexible. The disadvantages are that the results are harder to manipulate post screenshot and not as flexible for generating good documentation.

Using programs like the Windows Snipping Tool, OneNote, SnagIt and others can make quick snapshots of your PowerShell Script results. However, since this is a PowerShell book, I would only recommend using these tools as enhancements for documentation or reports that were created in PowerShell.

Let’s explore other options for creating documentation via PowerShell.

TXT Files

PowerShell provides a variety of ways to export results from cmdlets or scripts thus making results accessible for later review. One of these methods includes exporting any and all results to a text file.

Exporting the output can be done with a couple of different methods. One is to use the ‘>’ symbol and specify a TXT file name for output. The second method is to export the results via the Out-File cmdlet. The examples below will explore both of these options with real world scenarios.

First, we report on which Keyword Dictionaries were created in the SCC, which requires a cmdlet to produce results - Get-DlpKeywordDictionary. In addition to this cmdlet, some formatting has been applied to the attributes being reported on. Note that Select-Object is being used to facilitate this:

Example – ‘>’

Get-DlpKeywordDictionary | Select Name,Description,KeywordDictionary > c:\KeywordDictionaries.txt

** Note ** The '>' symbol can be used to create or overwrite an existing file, while using '>>' will append to an existing file.
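As a quick illustration (the file name is just an example), the first line below creates or overwrites the file and the second appends to it:

Get-DlpKeywordDictionary | Select Name > c:\KeywordDictionaries.txt
Get-DlpKeywordDictionary | Select Name,Description >> c:\KeywordDictionaries.txt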

Example – Out-File – 1

Similar to the ‘>’ output symbol, the Out-File cmdlet provides a method for exporting the results of a cmdlet to a TXT file. However, Out-File provides more options for controlling the actual output, including Encoding, NoClobber, as well as the width of the output.

Get-DlpKeywordDictionary | Select Name, Description, KeywordDictionary | Out-File c:\KeywordDictionaries2.txt

This PowerShell cmdlet drops the results to a local text file:

Notice that in terms of actual output and formatting, the text file is exactly the same for either ‘>’ or ‘Out-File’. The true differentiators are the NoClobber and Encoding options. While Encoding was not used for this example, it is an available option. The NoClobber switch is useful as it prevents the cmdlet's output from overwriting an existing file. This matters for reports that may need to be reviewed later, where having a script overwrite a file would make later data analysis impossible.
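A minimal sketch of those switches in use (the file name is illustrative): -NoClobber throws an error instead of overwriting an existing file, and -Encoding controls the text encoding of the output:

Get-DlpKeywordDictionary | Select Name, Description, KeywordDictionary | Out-File c:\KeywordDictionaries3.txt -NoClobber -Encoding UTF8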

Example – Out-File – 2

The first example was a bit simple, based off a single cmdlet. In this example a script will be written to produce a report of the Label Policies and Policy rules and export these results to a file:

$SCCDestination = 'C:\Reporting\SCC\SCCDocScript.txt'

# Get-LabelPolicy

Write-Host ' * Get-LabelPolicy in-progress' -ForegroundColor Yellow

"Label Policy" | Out-File $SCCDestination -Append
"-------------------------------" | Out-File $SCCDestination -Append

# Capture the results so we can test whether anything was returned
$Line = Get-LabelPolicy
$Line | Out-File $SCCDestination -Append

If ($Null -eq $Line) {
    'No Label Policies were found.' | Out-File $SCCDestination -Append
    '' | Out-File $SCCDestination -Append
}

# Get-LabelPolicyRule
Write-Host ' * Get-LabelPolicyRule in-progress' -ForegroundColor Yellow

"Label Policy Rule" | Out-File $SCCDestination -Append
"-------------------------------" | Out-File $SCCDestination -Append

# Capture the results so we can test whether anything was returned
$Line = Get-LabelPolicyRule
$Line | Out-File $SCCDestination -Append

If ($Null -eq $Line) {
    'No Label Policy Rules were found.' | Out-File $SCCDestination -Append
    '' | Out-File $SCCDestination -Append
}

Explanation of the Script

In the first part of the script, lines are added to title the TXT file with 'Label Policy', as this is the first query performed. A line of dashes is also added in order to set up a proper header for any results that are found:

Next, we have lines of code for the results. If results are not found, then we make sure to write that to the log as shown below:

The same process is repeated for Label Policy Rules, with the description line and dashed line:

Then followed by any results, or no results found:

From the script above and the previous Out-File and ‘>’ methodology, exporting the results of PowerShell can be used for reporting and can be formatted to your liking. The Out-File cmdlet is not a complex cmdlet and provides an easy way to create documentation or a starting point for further reporting on the SCC environment. The caveat is that the output cannot be easily reused for producing additional reports; these TXT reports are essentially just screen scrapes. The next section of this chapter, on CSV files, provides that next step: exporting data to a file that can easily be used for future reports and even as input to other systems or spreadsheets.

If Get-LabelPolicy or Get-LabelPolicyRule does return results, these will be placed under the dashed line in place of 'No Label Policy ...'.

CSV Files

A CSV file can be constructed by using the Export-CSV cmdlet in PowerShell. CSV files are a good data format for tables since the format can be consumed by future scripts. CSV data is also similar in structure to an array of arrays in PowerShell. CSV files are typically used as temporary data storage for a script for further processing, used by a different script or simply to document values found in the environment. In terms of documenting the SCC environment, CSV files are useful for creating large data tables to be analyzed and reported on. They can also easily be imported into Excel and other tools for further data analysis and usage.
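As a quick sketch of that reuse (the file name and column names here are illustrative), a CSV produced by one script can be read back in by another with Import-Csv and filtered like any other collection of objects:

$Rows = Import-Csv c:\Reporting\2020-06-26-FilePlanCitations.csv
$Rows | Where-Object { $_.Workload -like '*Exchange*' } | Format-Table Name, CitationJurisdiction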

In terms of real world examples, CSV files can be used to document things such as:

  • Quarantine Message - Received, type, direction, sender, subject, size, etc.
  • Information Barrier Policy – Type, Assigned Segment, Segments Blocked, State, etc.
  • Labels – Name, settings, label actions, conditions, mode, disabled, etc.
  • Security – Role and role holder.
  • Insider Risk Policy – Name, type, insider risk scenario, DLP information, workloads, etc.

All of the examples above are good examples of what can be contained in these CSV files. How do we use PowerShell to create a CSV file? Let’s go through some practical examples of how to create files and use the data that is contained in the CSV file.

** Note ** The CSV delimiter value could be different depending on your region. Another common delimiter is ';', which is used in German and Dutch language regions.
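For example, a semicolon-delimited export might be written like this (the file path is illustrative); alternatively, -UseCulture tells Export-Csv to use the list separator from the current regional settings:

# Explicit semicolon delimiter
Get-ComplianceCase | Select Name, Status | Export-Csv c:\Reporting\Cases.csv -NoTypeInformation -Delimiter ';'

# Or let the regional settings decide the delimiter
Get-ComplianceCase | Select Name, Status | Export-Csv c:\Reporting\Cases.csv -NoTypeInformation -UseCulture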

Example 1

For this example, we will create a report of all of the custom Sensitive Information Types that were created in the SCC. The report is part of an overall audit report of custom settings in the SCC. For this we'll need to use Get-DlpSensitiveInformationType cmdlet to list all of the types present:

** Note ** Notice that there are a lot of Microsoft-published ones. These are the ones we need to filter out.

The initial one-liner that we need is the following:

Get-DlpSensitiveInformationType | Where {$_.Publisher -notlike 'Microsoft*'} | FL

Sample results:

Now that we have all the Sensitive Information Type properties lined up, the data needs to be exported to a CSV file. What cmdlets are available for CSV export? Let's see which cmdlets have 'CSV' in their name:

Get-Command *csv*

Export-Csv has several useful parameters for exporting the data from the above one-liner to a usable CSV file; a sample export follows the parameter list:

Append

Delimiter

Encoding

Force

InputObject

NoClobber

NoTypeInformation

Path

UseCulture

LiteralPath
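Putting the filter and Export-Csv together, a minimal sketch of the export might look like the following (the selected properties and file path here are illustrative):

Get-DlpSensitiveInformationType | Where {$_.Publisher -notlike 'Microsoft*'} | Select Name, Publisher, Description | Export-Csv c:\Reporting\CustomSensitiveInfoTypes.csv -NoTypeInformation -NoClobber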

Example 2

In this example there is a need to report on all of the File Plan Citations on a yearly basis to make sure that all regulations are covered at year end. This allows management to verify that they are compliant; if any regulations are missing, this quick view makes it clear that additional Citations are needed.

$Date = Get-Date -Format "yyyy-MM-dd"

Get-FilePlanPropertyCitation | Select Name, CitationJurisdiction, CitationUrl, FilePlanPropertyType, Policy, Workload | Export-csv $Date-FilePlanCitations.csv -NoType -NoClobber

This code is saved as a script and then scheduled to run once per day. The ‘NoClobber’ switch makes sure that none of the files are overwritten when the new daily file is generated. The ‘NoType’ switch makes sure that the format is clean for the CSV file.

Exporting the CSV file without -NoType results in the CSV file containing bad data (in the red rectangle):
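In Windows PowerShell that unneeded data is a type header written as the first line of the CSV, which looks roughly like this:

#TYPE Selected.System.Management.Automation.PSCustomObject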

Exporting the CSV file with the -NoType parameter removes this unneeded data:

The output of Get-Date is stored in the $Date variable, which is used to tag the file name. Key to that cmdlet is formatting the date as year-month-day (i.e. 2020-06-26). To schedule PowerShell scripts, store the scripts in a secure location – isolated by NTFS permissions and placed on a share that is also locked down by permissions. The scheduled task will also need stored credentials to run. This data could now be used to create charts for trending data, in Excel for example.
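As a minimal sketch, assuming the script is saved to a hypothetical path such as C:\Scripts\FilePlanCitations.ps1, the scheduled task could be registered with the ScheduledTasks module like this (the task name and service account are also placeholders; store real credentials securely rather than in plain text):

$Action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -File "C:\Scripts\FilePlanCitations.ps1"'
$Trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName 'SCC - File Plan Citation Report' -Action $Action -Trigger $Trigger -User 'CONTOSO\svc-reports' -Password 'PlaceholderOnly'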

HTML Files

When it comes to creating reports, HTML provides the most flexible platform for customization, creativity and information-rich output. Visually speaking, HTML excels because the customization allows for reports that are more presentable to the consumer of the report. This is important because reports should be usable and actually read by the recipient. Reports should be meaningful, containing real data that the recipient can readily understand and not ignore because it's just a table of numbers.

The most useful information-oriented HTML reports contain good coloring, column sizing, spacing and more. In this section on HTML reporting three types of reports will be covered:

  • Quick HTML Report
  • Some Formatting Present
  • Advanced Formatting

The first involves using just the Set-Content and ConvertTo-Html cmdlets: basic, quick and easy. The second involves some basic formatting options using CSS and the ConvertTo-Html cmdlet. The last option is to use headers, table formatting and multiple sections of information put into an HTML file, using variables to assemble the content.

Quick HTML Reports

Get-FilePlanPropertyCitation | ConvertTo-Html Name, CitationJurisdiction, CitationUrl, FilePlanPropertyType, Policy, Workload | Set-Content c:\test.html

This generates a report like this:

The one-liner above does the following:

Code Section – What it does

Get-FilePlanPropertyCitation – Finds all File Plan Property Citations

ConvertTo-Html Name, CitationJurisdiction, CitationUrl, FilePlanPropertyType, Policy, Workload – Formats the table columns and values into proper HTML

| Set-Content c:\test.html – Takes the values and exports them to an HTML file

As can be seen from the resulting HTML table, the results are really basic. For quick reports that need low effort, this report fits that need. An HTML table can have as many fields as needed. Breaking down the one-liner and the properties that are exported to the HTML file, we can see how these directly relate to the HTML that is created by the one-liner:

Now take an example where we have a list of Compliance Cases, gathered as part of documentation for the Security and Compliance Center. A typical one-liner allows for a formatted table of the results to be created:

Get-ComplianceCase | Ft Name,Description,CaseType,Status,CreatedDateTime

Taking this same PowerShell one-liner and adding ConvertTo-Html, a portable document can be created and stored as part of an overall documentation of the IT infrastructure (including Exchange and Active Directory):

Get-ComplianceCase | ConvertTo-HTML Name, Description, CaseType, Status, CreatedDateTime | Set-Content c:\ComplianceCases.Html

Resulting HTML file:

Similar to the first HTML example, a portable HTML file is now available for IT to keep as a reference in case there are any issues. However, the formatting lacks quite a bit of finish – no Header, no grid marking columns and rows.

Next, let’s add some polish to these HTML reports.

Adding Polish – Refining HTML Reports

Creating better HTML reports starts with formatting and refining the look of the HTML output itself. This requires several features of HTML - CSS styling, headers and possibly a footer as well. Each of these adds value to the final file when it is delivered or printed out for documentation.

In this first example, the report generated will have a Header, Title and more added to it. At the very top of the HTML report will be this block of text. The colorful ‘rectangles’ refer back to the sections of code that made them possible:

$InitialUserInfo = $InitialUserCSV | ConvertTo-Html -Fragment -As Table -PreContent "<h2>Current User Attributes Before Import</h2>" | Out-String

$InitialReport = ConvertTo-Html -Title "Current State - User Data" -Head "<h1>PowerShell Reporting</h1><br>This report was run: $(Get-Date)" -Body "$InitialUserInfo $Css"

Starting with the first line and ‘ConvertTo-Html’ portion of this one-liner, notice the following parameters that are being used:

  • Fragment - Outputs only an HTML fragment (a table in this case), because this section of code builds only part of the final HTML document
  • As Table - Formats the output as a table
  • PreContent - Text placed before the table, used here as the heading for this section of the report

On the second line, starting with ‘ConvertTo-Html’ again, there are these parameters populated:

  • Title – The title as seen in a browser window
  • Head – Words to appear at the top of the report page

The next part of the script, displayed in the next section, needs to include a CSS code section for formatting the overall colors of the chart and other options. The value of the $Css variable is stored between a pair of ' ' (single quotes). This example uses a black and white coloring scheme: <TH> sections are Black with White text and <TD> sections are White with Black text:

$Css='<style>table{margin:auto; width:98%};Body{background-color:cyan; Text-align:Center;};th{background-color:black; color:white;};td{background-color:white; color:black; Text-align:Center;};</style>'

The last line of the full code, which exists outside the configuration of the HTML file, is to actually export the HTML code to a text file:

$Report | Out-File $Filepath

Complete Coded Section

$FilePath = "c:\Reports\Report.html"

$Css='<style>table{margin:auto; width:98%};Body{background-color:cyan; Text-align:Center;};th{background-color:black; color:white;};td{background-color:white; color:black; Text-align:Center;};</style>'

$UserCSV = Get-AdUser -Filter * -Properties * | Select-Object GivenName, Initials, Surname, DisplayName, EmployeeID, Company, Division, Office, Department, Title

$Date = Get-Date -Format "yyyy-MM-dd"

$UserCSV | Export-CSV "c:\Downloads\InitialUserState-$Date.csv" -NoType

$UserInfo = $UserCSV | ConvertTo-Html -Fragment -As Table -PreContent "<h2>Current User Attributes Before Import</h2>" | Out-String

$Report = ConvertTo-Html -Title "Current State - User Data" -Head "<h1>PowerShell Reporting</h1><br>This report was run: $(Get-Date)" -Body "$UserInfo $Css"

$Report | Out-File $FilePath

Sample results from this:

Detailed, Complex HTML Reporting

For the last section on HTML reporting, we look at the creation of complex, detailed and visually appealing HTML files, the kind that have a wow factor or are simply considered more ‘accessible’ thanks to color coding. This section will concentrate on creating a full HTML report with coloring, multiple charts, legends and more. For this scenario we’ll capture the values from Compliance Cases in the Security and Compliance Center. The report will contain the Name, Description and more.

The caveat to this approach is that you must know some HTML; whole books have been written about HTML. Consider the next few pages a primer on how to combine HTML and PowerShell. Feel free to use snippets of the code in your own scripts. This will make the script building process go quicker and allow you to explore the code to create more personalized reports.

First, a destination HTML file needs to be declared for holding the information to be gathered with PowerShell. To provide information on this code line, a comment will be included just above the file declaration line:

# Destination file for the Compliance Case report
$HTMLReport = "c:\reports\compliancecases.html"

Next, we’ll need to begin building the HTML. This section starts with the variable which will store all of the information that will be exported to an HTML file:

$Output = "<html><body>"

Next, define the font to be applied to the Header and the SubHeader (if needed); the two headers are defined with <H1> and <H3>, and this chunk is appended to $Output:

$Output += "<Font Size=""1"" face=""Calibri,Sans-Serif"">
<H1 Align=""Center"">Compliance Cases</H1>
<H3 Align=""Center"">Generated $((Get-Date).ToString())</H3>
</Font>"

After the Header has been created, a table is defined for the display of the data that will be gathered with PowerShell cmdlets – a border is applied ("1") and some room around each cell ("3") is added. Also notice that the double quotes around values are doubled up; this is because the HTML is being stored within a double-quoted variable. This section is then closed off with a quote to stop populating the $Output variable for now:

$Output += "<Table Border=""1"" CellPadding=""3"" Style=""Font-Size:8pt;Font-Family:Arial,Sans-Serif""><tr bgcolor=""#3498db "">"

The HTML file now has a defined Header with labels. After that, table headers need to be defined. Make sure there is one <th> per column and value that will be displayed (closed with ‘</th>’). For this block five columns are defined for the five values. The headers can be split across several lines for readability or kept on one long line:

# Build Compliance Case Headers

$Output += "<th Colspan=""10000""><Font Color=""#ffffff"">Case Name</Font></th><th Colspan=""10000""><Font Color=""#ffffff"">Description</Font></th><th colspan=""10000""><Font Color=""#ffffff"">CaseType</Font></th><th Colspan=""10000""><Font Color=""#ffffff"">Status</Font></th><th Colspan=""10000""><Font Color=""#ffffff"">Created Date</Font></th></tr>"

** Note ** This section is ended with ‘</tr>’, which closes the header row so that the data rows can start on a new line.

For the next section of the script, each Compliance Case needs these values queried: Name, Description, CaseType, Status and CreatedDateTime. We can simply loop through the $Cases variable with a Foreach loop:

$Cases = Get-ComplianceCase | Select Name,Description,CaseType,Status,CreatedDateTime

Foreach ($Case in $Cases) {

We can then convert each value to a variable to insert into the table:

# Get Compliance Cases

$Name = $Case.Name

$Description = $Case.Description

$CaseType = $Case.CaseType

$Status = $Case.Status

$Created = $Case.CreatedDateTime

These variables can be placed into columns for the table. Each column is started with ‘<td>’ and ended with ‘</td>’. In between is the variable value for each Compliance Case property. Also defined are the column span (10000), the alignment of the text (Center) and the font color (#000000). The variable also is defined as ‘$Output +=’ as this will append these lines to the $Output variable:

$Output += "<td Colspan=""10000"" Align=""Center""><Font Color=""#000000"">$Name</Font></td>"

$Output += "<td Colspan=""10000"" Align=""Center""><Font Color=""#000000"">$Description</font></td>"

$Output += "<td Colspan=""10000"" Align=""Center""><Font Color=""#000000"">$CaseType</Font></td>"

$Output += "<td Colspan=""10000"" Align=""Center""><Font Color=""#000000"">$Status</Font></td>"

$Output += "<td Colspan=""10000"" Align=""Center""><Font Color=""#000000"">$Created</Font></td></tr>"

}

The last column contains the Created Date. Notice at the end of the $Output variable line that a </tr> is there. This closes the row so the next case starts on a new line in the chart.

At the end of the script, the $Output variable is closed up with "</table></body></html>" which closes off these sections in HTML.

# Ending the HTML FILE

$Output += "</table></body></html>"

Then the variable is exported to an HTML file:

# Export the $Output variable to the HTML Report

$Output | Out-File $HTMLReport

The end result of the HTML script looks like this:

Notice that some cells have a color assigned to them and that the table has defined columns. These column borders can be removed by changing the table definition to <Table Border=""0"">:
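In the script from this section, that would mean changing the table definition line to something like this:

$Output += "<Table Border=""0"" CellPadding=""3"" Style=""Font-Size:8pt;Font-Family:Arial,Sans-Serif""><tr bgcolor=""#3498db"">"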

If for instance, an additional table needs to be added to this HTML file, simply add some lines like this:

$Output += "<BR><BR>"

Then add a new table the same way as in the previous section. For reference, the complete script used in this section follows:

$HTMLReport = 'ComplianceCases-2.html'

$Output = "<html><body>"

$Output += "<Font Size=""1"" face=""Calibri,Sans-Serif""><H1 Align=""Center"">Compliance Cases</H1><H3 Align=""Center"">Generated $((Get-Date).ToString())</H3></Font>"

$Output += "<Table Border=""1"" CellPadding=""3"" Style=""Font-Size:8pt;Font-Family:Arial,Sans-Serif""><tr bgcolor=""#3498db "">"

# Build Compliance Case Headers

$Output += "<th Colspan=""10000""><Font Color=""#ffffff"">Case Name</Font></th><th Colspan=""10000""><Font Color=""#ffffff"">Description</Font></th><th colspan=""10000""><Font Color=""#ffffff"">CaseType</Font></th><th Colspan=""10000""><Font Color=""#ffffff"">Status</Font></th><th Colspan=""10000""><Font Color=""#ffffff"">Created Date</Font></th></tr>"

# Get a list of cases and properties

$Cases = Get-ComplianceCase | Select Name,Description,CaseType,Status,CreatedDateTime

# Build the Data Table

Foreach ($Case in $Cases) {

# Get Compliance Cases

$Name = $Case.Name

$Description = $Case.Description

$CaseType = $Case.CaseType

$Status = $Case.Status

$Created = $Case.CreatedDateTime

$Output += "<td Colspan=""10000"" Align=""Center""><Font Color=""#000000"">$Name</Font></td>"

$Output += "<td Colspan=""10000"" Align=""Center""><Font Color=""#000000"">$Description</font></td>"

$Output += "<td Colspan=""10000"" Align=""Center""><Font Color=""#000000"">$CaseType</Font></td>"

$Output += "<td Colspan=""10000"" Align=""Center""><Font Color=""#000000"">$Status</Font></td>"

$Output += "<td Colspan=""10000"" Align=""Center""><Font Color=""#000000"">$Created</Font></td></tr>"

}

# Ending the HTML FILE

$Output += "</table></body></html>"

# Export the $Output variable to the HTML Report

$Output | Out-File $HTMLReport

Delivery Methodologies

Once a report has been created and validated, a delivery method might need to be chosen. One of the most common delivery mechanisms is email, but a file copy could also be employed. In this section on delivery we will go through both options to see how PowerShell can deliver the report to its destination.

SMTP Delivery

Sending a report created in PowerShell via Exchange is probably the most common delivery method for PowerShell reporting. Email delivery requires a few things:

  • IP Address or FQDN of the Exchange Server to relay email through
  • The location of the source document to be sent
  • Determine if the file is to be attached or inserted into the body of an email
  • The destination email address
  • The sender email address
  • Subject line of the email messages

Each of these parameters will fit into a variable to be defined and then placed into the section of code that handles the email sending. First, how do we send an email in PowerShell? Well, let’s search for the cmdlet:

Get-Command *Message

There is a cmdlet called Send-MailMessage, which looks appropriate for our task at hand. Reviewing the parameters of the cmdlet using ‘Get-Help Send-MailMessage -Full’ and comparing them to the list of items needed for the report to be sent:

-Attachments – The location of the source document to be sent

-Body – The text of the message body (where the report can be inserted instead of attached)

-BodyAsHtml – Used if sending the report in the body of the email

-From – The sender email address

-SmtpServer – IP Address or FQDN of the Exchange Server to relay email through

-Subject – Subject line of the email message

-To – The destination email address

Now that the parameters are known, variables can be defined and placed into the cmdlet to send the email:

$Date = Get-Date -Format U
$Attachment = "\\FileServer01\reports\compliancecases.html"
$Body = "The report was created on $Date."
$From = "[email protected]"
$To = "[email protected]"
$SMTPServer = "10.1.1.1"
$Subject = "Compliance Case Report from $Date"

Incorporating all the variables above and using them for the Send-MailMessage cmdlet, the cmdlet looks like this:

Send-MailMessage -To $To -From $From -Subject $Subject -Attachment $Attachment -Body $Body -SmtpServer $SmtpServer

Make sure to configure relays correctly to relay emails like this or an error might occur:

When a successful email is sent, it would arrive (as seen below) with the HTML document attached:

If, however, the report needed to be in the body of the message, the script would be changed as follows:

$Body = Get-Content '\\FS01\c$\Reports\ComplianceCases.html' -Raw

Send-MailMessage -To $To -From $From -Subject $Subject -Attachment $Attachment -Body $Body -SmtpServer $SmtpServer -BodyAsHTML

We kept the -Attachment parameter, in addition to placing the same HTML into the body. The $Body variable stores the HTML file contents since the message body will be the HTML, and the -Raw switch facilitates that. The email arrives in the destination mailbox as shown below, with the HTML document pasted into the body of the message. Below is the email with an HTML attachment and the same HTML in the message body:

File Copy

Copying reports to a central location can be a solid alternative to sending all reports through email. By doing so, these files are accessible and possibly kept indefinitely for historical reporting. What cmdlets can be used for moving files to file servers? The BITS Transfer service would be ideal for moving files. This was discussed previously in Chapter 3. What cmdlets are available for this service?

Get-Help *BITS*

Remove-BitsTransfer

Get-BitsTransfer

Suspend-BitsTransfer

Complete-BitsTransfer

Resume-BitsTransfer

Start-BitsTransfer

Add-BitsFile

Set-BitsTransfer

Reviewing the list above, what cmdlets from this list are needed for transferring files from place to place? Start-BitsTransfer will copy a file from a source to a destination. This cmdlet is similar to a file copy daemon on steroids. Start-BitsTransfer has quite a few options to choose from when running the cmdlet:

Depending on the destination and how the file needs to be transferred, BITS could be the ideal solution. It is useful if a large number of large files need to be transferred, as BITS will dynamically allocate bandwidth to a file transfer, run in the background and handle transfers even with network interruptions. In Chapter 3, this cmdlet is used in the script built to download files for use on servers. In the case of documentation, the cmdlets can now be used to move these documentation files to a central location.

In order to copy files to a central location, there is a need to define the source files that need to be moved, where the files will be moved to and whether any sort of logging, authentication or priority needs to be assigned to these jobs. Retry intervals can be configured if the files are pulled from a source over a slow or notoriously high latency connection; to adjust for these connections, use ‘RetryInterval’ and/or ‘RetryTimeout’.

Example

For this example the requirement for the script is to pull locally generated results files such as CSV, HTML and/or text files from five different global locations. These results will be deposited on one server and stored for analysis by the global IT team located in the US global headquarters. Three of the five overseas links are low latency. The other two links are high latency and need to be accounted for. Let’s begin by focusing on the low latency links and work our way out to the high latency links. These jobs should log if possible and the transfer jobs should be described accurately. No special authentication is needed.

Low Latency

For the low latency links, the source and destination options are a given and are filled with the source and destination files. The priority is defined in case this needs adjustments later. To help identify the BITS Transfer, a description and name are given to the processes that are transferring files. Here are the three BITS Transfer PowerShell one-liners for this process:

Start-BitsTransfer -DisplayName "InfoBarrier-NA" -Description "Information Barrier Report - NA Segment" -Priority Normal -Source \\NA-SRV-EXMBX01\Reporting\*.html -Destination \\US-SRV-FS01\Reporting\SCC

Start-BitsTransfer -DisplayName "InfoBarrier-Europe" -Description "Information Barrier Report - Europe Segment" -Priority Normal -Source \\Euro-SRV-RPT01\Reporting\*.html -Destination \\US-SRV-FS01\Reporting\SCC

Start-BitsTransfer -DisplayName "InfoBarrier-Australia" -Description "Information Barrier Report - Australia Segment" -Priority Normal -Source \\Aus-SRV-RPT01\Reporting\*.html -Destination \\US-SRV-FS01\Reporting\SCC

With the above cmdlets, all files are being copied to the same root folder and not a specific folder for each server. It is assumed that all files are unique and identifiable by the location that they come from. For example, each set of files could be prefaced with a location and then the current date, as sketched below. This helps identify where and when these files were generated.
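As a small, hypothetical sketch, each source server could build its file name with a location prefix and the current date before the copy runs (the prefix and report name are illustrative):

$Date = Get-Date -Format "yyyy-MM-dd"
$FileName = "NA-$Date-InformationBarriers.html"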

High Latency

For higher latency links, the same criteria as above are used, with the addition of a higher RetryInterval and RetryTimeout. This is done because of the higher latency of the links. These values would be tweaked over time to adjust for any issues on these links.

Start-BitsTransfer -DisplayName "InfoBarrier-China" -Description "Information Barrier Report - China Segment" -Priority High -RetryInterval 900 -RetryTimeout 2419200 -Source \\CH-SRV-RPT01\Reporting\*.html -Destination \\US-SRV-FS01\Reporting\SCC

Start-BitsTransfer -DisplayName "InfoBarrier-SA" -Description "Information Barrier Report - South Africa Segment" -Priority High -RetryInterval 900 -RetryTimeout 2419200 -Source \\SA-SRV-RPT01\Reporting\*.html -Destination \\US-SRV-FS01\Reporting\SCC

In summary, the BITS Transfer process can be used to move files between destinations with some advantages over a regular file copy. BITS Transfer jobs that error or time out can be examined with the Get-BitsTransfer cmdlet. There are also policies that could be applied, if necessary, to manipulate the file transfer. Large file transfers can be monitored (Get-BitsTransfer) and manipulated while in flight (Set-BitsTransfer).
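For example, checking on the jobs created above and adjusting one that is still in flight might look like this (the job name matches the DisplayName used earlier):

# Review the state and progress of the current BITS jobs
Get-BitsTransfer | Select-Object DisplayName, JobState, BytesTransferred, BytesTotal

# Lower the priority of a job that is still transferring
Get-BitsTransfer -Name "InfoBarrier-China" | Set-BitsTransfer -Priority Low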

Conclusion

In the end, reporting is an important side effect of PowerShell's automation and power. With little to no interaction, we can use PowerShell to generate hourly, daily, weekly, monthly and yearly reports on items that we tell it to gather. Whether this is reviewing security roles, managing compliance cases or creating DLP sensitive information types, we can leverage PowerShell to provide these reports. We can also tie in sending emails via PowerShell to help automate this process even further.

If we don't want automated reports, we can use PowerShell to provide snapshot information, like the compliance cases that are currently active or archived, as well as users who have Information Barriers applied to them. These reports can be created in CSV, HTML, and other formats depending on the desired purpose of the reporting. In the end, PowerShell is a great platform to use for this purpose in the Security and Compliance Center.
