The previous chapter introduced the concept of Purple Teaming eXtended (PTX) for leveraging different security control mechanisms to improve the company's whole security posture at multiple layers. The different pieces of code that were provided as Proof of Concepts (PoCs) were designed to run independently. In this chapter, we will describe how these checks can be industrialized through centralization, monitoring, security, and workflows while relying on a DevOps approach. We will focus on the Active Directory controls use case, which was referenced in the previous chapter as Purpling Active Directory security, to provide a step-by-step DevOps approach for automation. The same methodology can be used for all the Chapter 12, Purple Teaming eXtended, examples and extended to any other controls.
This chapter will cover the following topics:
We wanted to warmly thank Dimitri Cognet (DevOps and cloud engineer) for his work and for providing this chapter. Please note that all the scripts and configurations of this chapter are available at: https://github.com/PacktPublishing/Purple-Team-Strategies/tree/main/Chapter-13/
The following workflow describes the steps required to implement automated Active Directory security testing. The implementation is divided into six steps:
The following diagram shows the different steps and their interactions:
The first step is the initial preparation of Rundeck for the purpose of automating the execution of PingCastle, the diffing of the results, and the notifications.
It's very important to store our jobs in the right project from the beginning. All Rundeck projects are independent of each other. The main advantage of creating separate Rundeck projects is access management. For example, our organization manages multiple customers, and we need to define an access policy between the different teams:
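Such a policy can be expressed as a Rundeck `.aclpolicy` file. The following is a minimal sketch, assuming a hypothetical `customer08456-operators` group that should only be able to see and run the jobs of its own project:

```yaml
description: Restrict the customer team to its own project
context:
  project: 'Customer08456'
for:
  job:
    - allow: [read, run]
by:
  group: customer08456-operators
```

Rundeck evaluates all the `.aclpolicy` files in its configuration directory; a matching allow rule is required for any access, so members of other teams cannot read or run the jobs of this project.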
Another benefit of splitting Rundeck into multiple projects is that an Ansible inventory is dedicated to each project. We want to ensure the security workflow will be run on the right customer infrastructure:
Here, we can see a project called Lab-Purple-Teaming that's in charge of orchestrating several jobs for each customer context, including the following:
The following screenshot shows the All Jobs list:
A Rundeck job is made up of several groups that take care of different options:
We will not comment on these specific sections since that's outside the scope of this book. To learn more about how to use Rundeck, go to https://docs.rundeck.com.
Now, if we open the PingCastle_Daily_Audit job that's stored in the Customer08456 directory, we will see the global security workflow. Its first step is to call a job inside the customer project to install and run PingCastle on the target infrastructure:
Let's now see how to integrate our solution with our environment.
In order to interact with the environment (in our case, Active Directory), we need to set up a few things. First, we need to be able to list the hosts within Active Directory; this will be performed by Ansible by gathering data from a Configuration Management Database (CMDB). Then, we will see how Rundeck can interact with the Windows environment to execute PingCastle.
Rundeck's inventory is a major component that builds the necessary workflows and operates the security tasks for several customers. This inventory should be generated on the fly from a configuration management database such as Gestionnaire Libre de Parc Informatique (GLPI), ServiceNow, and so on.
In this section, you will learn how to use a script to get a JSON export from a CMDB. GLPI, which is a free CMDB solution, supports API calls and can easily be integrated with Ansible. The GLPI Ansible project helps ease this task. This project is available at https://github.com/Webelys/glpi_ansible.
As we know, each Rundeck project is different, so we need to configure the inventory settings for each customer's project. Let's get started:
Our inventory script should respect the following JSON syntax to allow Rundeck to interpret it. Consider, for example, that we only want nodename, the operating system (OS), version, function, location, and environment to be collected. Here, we can add what we consider to be relevant:
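As a sketch, a single node in Rundeck's resource-model JSON could look as follows (the node name, hostname, and attribute values are hypothetical):

```json
{
  "srv-dc-01": {
    "nodename": "srv-dc-01",
    "hostname": "srv-dc-01.lab-purple.local",
    "osFamily": "windows",
    "osVersion": "2019",
    "function": "active-directory",
    "location": "europe",
    "environment": "production",
    "tags": "active-directory,europe"
  }
}
```

Each additional attribute (function, location, environment, and so on) then becomes available as a filter criterion in the job's node filter field.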
After that, we can benefit from all the filter options that are available regarding the job parameters:
For example, let's learn how to apply a filter to get only Active Directory servers hosted in Europe. The result is 1 Node Matched. If we click on the node, we will see all the tags that are available from the inventory:
If we click on the selected node, we will obtain all the information that's been gathered from the CMDB, as shown in the following screenshot:
As you may have guessed, a Rundeck inventory combined with node filters is particularly useful for choosing the targets our jobs should run on.
We will not go through the details of this configuration, but we provide the information you will need to do so very quickly. Red Hat published a PowerShell script that you can use to install and configure WinRM on Windows. You can download this script directly from GitHub at https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1.
On the server running Rundeck, we will need to install the pywinrm and requests Python packages. To check the WinRM configuration, Rundeck offers a plugin that validates the communication between the Rundeck node and the Windows host:
Now that we have selected the WinRM step, we must define the following configuration:
Now that everything is set up, let's move on to the execution phase.
In this step, we will set up the host used for the PingCastle execution and see how we can schedule that execution using Ansible. Finally, we will run our first health check on our Active Directory environment.
Going back to the big picture, this section will focus on the execution phase. We are going to go through the Ansible playbook that we used to download, install, and run pingcastle.exe on an Active Directory domain.
Before we create the job and its steps, we will need to manage and protect the users, passwords, and secrets that will be used in the playbook to avoid a plaintext password over Ansible execution. Fortunately, Rundeck offers Key Storage so that we can store any important secrets.
From the management console, click on the cog icon and select Key Storage:
Select Password (we want to store a password) and set a password in the Enter text field. Then, choose a name for this secret:
Now, go back to the job edit menu and click Workflow, then Add an option:
In the Storage Path section, click Select and find the password we registered previously in Key Storage. Once you've done that, go to Input Type and choose Secure Password Input, value exposed in scripts and commands:
By doing this, we can see diverse ways to call/use this Rundeck option in our jobs, scripts, and more.
In our case, we will only be using an Ansible playbook, so we need to select the ${option.winrm_pwd} format in the code:
In this section, we're going to learn how to use Ansible on a Windows server to orchestrate some commands/plugins to create a workflow.
In the job, go down the page and click the Add a step button to build this part of the workflow:
Now, choose the Ansible Playbook Inline Workflow Node Step:
The Ansible playbook's content is available at https://github.com/PacktPublishing/Purple-Team-Strategies/blob/main/Chapter-13/download_install_pingcastle.yml.
It contains the following variables. These will be used for WinRM authentication:
This playbook is used as a Rundeck step to download the latest version of PingCastle (each time) and unzip pingcastle.exe into the working directory.
The following content was extracted from the script and should be replaced according to your environment:
---
. . .
ansible_winrm_port: 5985
tmp_directory: C:\tmp
pingcastle_directory: C:\apps\pingcastle
The playbook relies on several well-known Ansible modules. They are as follows:
Source: https://docs.ansible.com/.
This playbook can do a lot. Let's take a closer look.
First, it will identify the URL link for downloading PingCastle based on the content of the .zip file:
win_shell: |
  $get_url = (Invoke-WebRequest -Uri "https://www.pingcastle.com/download").Links.Href | Select-String -Pattern 'zip' | Sort-Object | Select-Object -First 1
Then, after several processing steps (filtering the results, creating the directory structure, and so on), it will download the latest version of PingCastle.
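To illustrate the link-selection logic outside the playbook, here is a minimal Python sketch of the same idea. The HTML sample and the regular expression are assumptions for illustration only; in the workflow itself, this is done by the win_shell step on the Windows host:

```python
import re

def first_zip_link(html: str) -> str:
    # Mirror the win_shell step: collect all hrefs pointing at a .zip file,
    # sort them, and take the first one (the download page is assumed to
    # expose the current PingCastle archive as such a link).
    links = re.findall(r'href="([^"]+\.zip)"', html)
    return sorted(links)[0]

# Hypothetical extract of the download page:
page = ('<a href="/files/PingCastle_3.2.0.1.zip">download</a>'
        '<a href="/files/PingCastle_3.2.1.0.zip">download</a>')
print(first_zip_link(page))  # -> /files/PingCastle_3.2.0.1.zip
```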
After that, the .zip file's content will be extracted to the destination (pingcastle_directory):
win_unzip:
  src: '{{ tmp_directory }}\{{ latest_file }}'
  dest: "{{ pingcastle_directory }}"
Once the Ansible playbook has been executed, we can check the output of the job by using the following data:
The job's output can be seen in the following screenshot:
If we click on the arrow next to Download&Install, we can see details about the playbook and the result of each task – that is, changed or ok:
We can verify that everything went well on the Windows server node by following what's described in the playbook.
The latest version of PingCastle has been downloaded in the tmp folder:
All the files that are included in the ZIP file have been uncompressed in the right directory:
Now that Rundeck has been configured to download the PingCastle package, we can execute it.
We have already learned how to create a job and add a step to it. In this section, we will look at the playbook that's in charge of running pingcastle.exe to generate the audit report.
The Ansible playbook we'll be using in this section can be found at https://github.com/PacktPublishing/Purple-Team-Strategies/blob/main/Chapter-13/pingcastle_execution.yml.
First, the declared variables must be defined correctly for the directory that contains the PingCastle binary (pingcastle_directory), the report directory (report_directory), and the target Active Directory domain name (pingcastle_target):
. . .
vars:
  ansible_user: ${option.winrm_user}
  ansible_password: ${option.winrm_password}
  ansible_connection: winrm
  ansible_winrm_server_cert_validation: true
  ansible_winrm_transport: basic
  ansible_winrm_port: 5985
  pingcastle_directory: C:\apps\pingcastle
  report_directory: C:\apps\reports
  pingcastle_target: lab-purple.local
. . .
The playbook will then detect previous XML files (reports) using the following PowerShell command. This will help you identify which reports must be used as references:
$latestfile = Get-ChildItem -path {{ report_directory }} -Attributes !Directory *.xml | Sort-Object -Descending -Property LastWriteTime | select -First 1
Then, PingCastle will be run in healthcheck mode. This will generate a new report:
./PingCastle.exe --healthcheck --datefile --no-enum-limit --server {{ pingcastle_target }}
After being executed, the new report will be moved to the defined report_directory.
Now, let's see what has happened on the Rundeck side by looking at Ansible's output:
Now, let's look at the Windows server used for running PingCastle:
Here, we can see two files in C:\apps\reports: one file in XML format and another file in HTML format. The XML file will be used to perform diffing operations.
In this section, we are going to learn how to integrate the Python script that's in charge of diffing between two PingCastle reports (day -1/d-day). This script is the same one that we used in Chapter 12, PTX – Purple Teaming eXtended.
The Python script takes two files as input: the previous (reference) report and the current one. The previous report is located using the following find command:
find "${option.path}" -mmin +60 -mmin -1440 -type f -name "*.xml"
Let's look at this command in more detail: -mmin +60 matches files modified more than 60 minutes ago, -mmin -1440 keeps only files modified within the last 24 hours, -type f restricts the results to regular files, and -name "*.xml" selects the XML reports.
Source: https://docs.ansible.com/ansible/2.5/modules/find_module.html.
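The diffing logic itself can be sketched in a few lines of Python. This is a minimal illustration only (the real script from Chapter 12 is richer), and it assumes that each triggered rule appears in a `<RiskId>` element, so verify the element name against your own report's schema:

```python
import sys
import xml.etree.ElementTree as ET

def risk_ids(xml_text: str) -> set:
    # Collect every <RiskId> value in the report (assumed element name).
    return {e.text for e in ET.fromstring(xml_text).iter("RiskId")}

def diff_reports(previous_xml: str, current_xml: str) -> set:
    # New findings only: rules present today that were absent yesterday.
    return risk_ids(current_xml) - risk_ids(previous_xml)

# Inline demo with hypothetical rule identifiers:
PREV = "<HealthcheckData><RiskId>A-Krbtgt</RiskId></HealthcheckData>"
CURR = ("<HealthcheckData><RiskId>A-Krbtgt</RiskId>"
        "<RiskId>S-DC-SubnetMissing</RiskId></HealthcheckData>")
print(diff_reports(PREV, CURR))  # -> {'S-DC-SubnetMissing'}

if __name__ == "__main__" and len(sys.argv) == 3:
    # Real usage: python3 diff_sketch.py previous.xml current.xml
    with open(sys.argv[1]) as prev, open(sys.argv[2]) as curr:
        for rule in sorted(diff_reports(prev.read(), curr.read())):
            print(rule)  # any stdout line trips the playbook's when: condition
```

Printing only the new findings is what allows the playbook's when: conditions to distinguish "nothing new" (empty output) from "a new vulnerability appeared".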
In this section, we will use the following Ansible playbook: https://github.com/PacktPublishing/Purple-Team-Strategies/blob/main/Chapter-13/diffing.yml.
This playbook will identify the previous and latest reports and perform diffing operations on them. It begins with the definition of a variable holding the path to the diffing script:
vars:
  diffing_code: /data/script/PingCastle-diffing.py
Then, it will get the previously generated report using the find command, as explained previously.
After identifying the previous report, the playbook will search for the current report:
- name: Get report in an audit folder newer than 20 minutes
  find:
    paths: "${option.path}"
    age: "-20m"
  register: current
After the previous and current reports have been identified correctly, diffing will be performed thanks to the diffing script:
- name: Run the python script in charge of "diffing"
  command: python3 {{ diffing_code }} {{ previous }} {{ current }}
  register: results
Next, the results will be checked to detect any new findings (results.stdout_lines|length > 0):
- debug:
    var: results.stdout_lines
  when: results.stdout_lines|length > 0
Finally, the playbook will send a message if no differences have been found or return the diffing content:
- debug:
    msg: "Everything is ok, no difference was found between yesterday and today"
  when: results.stdout_lines|length == 0
In the previous configuration block, we can see an example of a playbook that can be used to perform diffing operations against two reports. We can also see that it will generate different outputs based on the result of the playbook. Here, we used the when condition to differentiate between when the script's execution sends no output (that is, nothing new) and when a vulnerability has been identified.
As shown in the following screenshot, when a vulnerability has been detected, the job will trigger an alert:
The following screenshot shows what you will see when everything is OK and today's report is identical to yesterday's:
Now that we can automate the data collection and diffing process, we must generate notifications whenever the diffing returns results.
By default, Rundeck provides the notification plugin for each job that will be created.
At the time of writing, five conditions can trigger notifications:
By default, Rundeck sends a notification that includes the global logging attachment and the status for each step (this is a lot of information).
However, several channels are available even with the free version. This means we can configure a notification very quickly via email, webhook, or Slack:
In addition, if we use a log management solution based on Elasticsearch, Rundeck makes a plugin available that's in charge of forwarding all Rundeck execution logs (by project) to Elasticsearch using Logstash's TCP input:
Source: https://github.com/rundeck-plugins/rundeck-logstash-plugin.
If we only want to capture some essential information that we have selected, we can add a step using an inline Ansible playbook or a Bash script that writes a log or sends an email displaying the data collected in the workflow.
For example, the following playbook sends an email using the Ansible mail module:
- name: Email notification PingCastle
  mail:
    host: ${option.smtp_server}
    port: 587
    username: ${option.smtp_user}
    password: ${option.smtp_password}
    to: ${option.email_address}
    subject: PingCastle Found new events
    body: 'your message including the new event'
    attach: /opt/data/reports/ad_hc_lab-purple.local.xml
Finally, we need to ensure our solution is suitable for a production environment. Therefore, we will schedule the whole workflow using Rundeck and implement monitoring to ensure everything is running smoothly.
Project schedules allow us to define schedules that can apply to any job in the project. You can run a Rundeck job in the following ways:
If we want to run a job manually, we need to go inside the project, select the target job, and click Run Job Now:
Next, we can define a schedule using one of two options: Simple or Crontab. Simple can be used if our needs are very basic:
If we want to plan an advanced schedule, we will need to use Crontab, where any kind of scenario is possible:
If you are not familiar with the cron language, go to the following excellent website: https://www.freeformatter.com.
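Note that Rundeck's Crontab tab expects a Quartz-style expression (seconds come first, and either day-of-month or day-of-week must be set to ?). As a sketch:

```
# seconds minutes hours day-of-month month day-of-week year
0 0 6 * * ? *          # every day at 06:00
0 30 22 ? * MON-FRI *  # weekdays at 22:30
```

A daily schedule such as the first one fits our PingCastle_Daily_Audit use case, since the diffing compares today's report with yesterday's.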
Another way to run our Rundeck jobs is to use the API. This provides a significant amount of added value because our job can be called from another workflow or tools. For example, imagine running a Rundeck job in response to a security incident from a security information event management (SIEM) tool.
Now, let's learn how to start a job using the Rundeck API and view its output. First, we must create a token from the management console:
Now, we can build the HTTP POST request to call the job – we just need to modify the job's UUID and the API token. After that, the following curl command can be sent:
curl --location --request POST 'http://localhost:4440/api/21/job/1bc581bd-a6b5-414b-923e-f082e9d6d858/run' \
--header 'Accept: application/json' \
--header 'X-Rundeck-Auth-Token: MTqFhsDQFKT8NpXXXXXXXXXXX' \
--header 'Content-Type: application/json' \
--data-raw ''
You will see the following output in JSON format:
{
"id": 264,
. . .
"status": "running",
"project": "Lab-Purple-Teaming",
. . .
"date-started": {
"unixtime": 1639731690332,
"date": "2021-12-17T09:01:30Z"
},
"job": {
"id": "1bc581bd-a6b5-414b-923e-f082e9d6d858",
"averageDuration": 21648,
"name": "PingCastle_Daily_Audit",
"group": "Customer08458",
"project": "Lab-Purple-Teaming",
"description": "",
"options": {
"path": "/data/customer08456/audit/pingcastle/2021"
},
As we can see, lots of information is available in the trace:
Now that we have learned how to schedule reports, we need to build a robust integration that can monitor the full workflow's execution to detect failures and get reports.
By default, Rundeck offers two methods for monitoring job activity. First, we can use the web management console and go to the ACTIVITY menu to see and follow all the job executions:
If we need more details about a specific job's execution, we can double-click on a log and see the status of each step, including the parent job:
The Rundeck API is very comprehensive and exposes many options and a large amount of information. The following code shows how to collect all the execution logs for a specific job so that they can be used with other tools to catch anomalies:
curl --location --request GET 'http://localhost:4440/api/40/job/1bc581bd-a6b5-414b-923e-f082e9d6d858/executions' \
--header 'Accept: application/json' \
--header 'X-Rundeck-Auth-Token: 9n8dkaW1SYRfnhYNXXXXXXXXXX' \
--header 'Content-Type: application/json'
The following is the output:
{
"id": 1,
"href": "http://rundeck.lab.local/api/40/execution/1",
"permalink": "http://rundeck.lab.local/project/Lab-Purple-Teaming/execution/show/1",
"status": "succeeded",
"project": "Lab-Purple-Teaming",
"executionType": "user",
"user": "admin",
"date-started": {
"unixtime": 1638614794519,
"date": "2021-12-04T10:46:34Z"
},
"date-ended": {
"unixtime": 1638614797176,
"date": "2021-12-04T10:46:37Z"
},
"job": {
"id": "1bc581bd-a6b5-414b-923e-f082e9d6d858",
"averageDuration": 24328,
"name": "PingCastle_Daily_Audit",
. . .
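As a sketch of catching anomalies, the following Python snippet filters such a response for non-successful executions. It assumes that the endpoint's full JSON response wraps the entries shown above in an executions array; the sample payload is hypothetical:

```python
import json

def failed_executions(payload: str):
    # Keep (id, job name, status) for every execution that did not succeed.
    doc = json.loads(payload)
    return [(e["id"], e["job"]["name"], e["status"])
            for e in doc.get("executions", [])
            if e.get("status") != "succeeded"]

# Hypothetical API response:
sample = json.dumps({"executions": [
    {"id": 1, "status": "succeeded", "job": {"name": "PingCastle_Daily_Audit"}},
    {"id": 2, "status": "failed", "job": {"name": "PingCastle_Daily_Audit"}},
]})
print(failed_executions(sample))  # -> [(2, 'PingCastle_Daily_Audit', 'failed')]
```

Scheduled by cron or by Rundeck itself, such a check can feed an alerting channel whenever a workflow execution fails.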
Many tools are popular in the DevOps industry for providing dashboards and reporting, including Prometheus and Grafana. Here, we can see an example of a dashboard from Grafana.
Prometheus is a monitoring solution for storing time series data such as metrics. Grafana allows us to visualize the data that's stored in Prometheus (and other sources). Rundeck Exporter transforms metrics from the Rundeck API into a format that can be ingested by Prometheus. The Rundeck Exporter is free and available on GitHub at https://github.com/phsmith/rundeck_exporter.
In this chapter, we showed you how to use DevOps solutions to industrialize PTX operations while relying on free and open source solutions. The step-by-step approach that was provided for continuously controlling the security of Active Directory can be applied to any other security control components once the necessary concepts, workflows, and DevOps solutions have been handled.
In the previous chapters, we looked at multiple solutions that can be used in the purple team's arsenal. However, to complete this book, we need to cover reporting and KPIs to prove the efficiency of the purple teaming strategies that have been implemented. In the next chapter, we will work on how data can be combined to create relevant KPIs that could be used during the reporting phase for continuous improvement.