Chapter 8: Azure Sentinel

Security Information and Event Management (SIEM) combines two solutions that were previously separate: Security Information Management (SIM) and Security Event Management (SEM).

We have already mentioned that large organizations rely on SIEM solutions, and Microsoft's SIEM solution for the cloud is Microsoft Azure Sentinel. But let's first take a step back and discuss what SIEM is and what functionality it should offer.

We will be covering the following topics in this chapter:

  • Introduction to SIEM
  • What is Azure Sentinel?
  • Creating workbooks
  • Using threat hunting and notebooks

Introduction to SIEM

Many security compliance standards require long-term storage, meaning that security-related logs must be kept for long periods of time. This varies from one compliance standard to another and can be anywhere from 1 to 10 years. This is where SIM comes into the picture: long-term storage where all security-related logs are kept for analysis and reporting.

When we speak of SEM, we tend to be talking about live data streaming rather than long-term event tracking. SEM's focus is on real-time monitoring; it aims to correlate events using notifications and dashboards. When we combine these two, we have SIEM, which tries to live stream all security-related logs and keep them for the long term. With this approach, we have a real-time monitoring and reporting tool in one solution.

When discussing the functionalities required, we have a few checkboxes that SIEM must tick:

  • Data aggregation: Logs across different systems kept in a single place. This can include network, application, server, and database logs, to name a few.
  • Dashboards: All aggregated data can be used to create charts. These charts can help us to visually detect patterns or anomalies.
  • Alerts: Aggregated data is analyzed automatically, and any detected anomaly becomes an alert. Alerts are then sent to the individuals or teams that need to be aware of the issue or act on it.
  • Correlation: One of SIEM's responsibilities is to provide a meaningful connection between common attributes and events. This is also where data aggregation comes in, because it helps to identify connected events across different log types. A single line in a database log may not mean much, but if it's combined with logs from the network and the application, it can help us prevent a disaster (a sketch of such a correlation query appears after this list).
  • Retention: As mentioned, one of the compliance requirements is to keep data for extended periods of time. But this also helps us to establish patterns over long periods of time and detect anomalies more easily.
  • Forensic analysis: Once we are aware of a security issue, SIEM is used to analyze events and detect how and why the issue happened. This helps us to neutralize damage and prevent the issue from repeating.
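To make the correlation idea concrete, here is a rough sketch in Kusto Query Language (the query language used by Azure Sentinel later in this chapter). The table names FirewallLogs and AppLogs and their columns are purely illustrative; the point is simply that two log types can be joined on a shared attribute such as a client IP address:

    // Illustrative only: FirewallLogs and AppLogs are hypothetical table names.
    // Correlate blocked connections with application authentication failures from the same IP.
    FirewallLogs
    | where Action == "Deny"
    | join kind=inner (
        AppLogs
        | where EventType == "AuthenticationFailure"
    ) on ClientIP
    | project TimeGenerated, ClientIP, Action, EventType

Neither event is alarming on its own, but together they suggest that a blocked source is also probing the application, which is exactly the kind of connection correlation is meant to surface.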

To summarize, SIEM should have the ability to receive different data types in real time, provide meaning to received data, and store it for a long period of time. Received data is then analyzed to find patterns, detect anomalies, and help us prevent or stop security issues.

So, let's see how Azure Sentinel addresses these requirements.

Getting started with Azure Sentinel

Azure Sentinel is Microsoft's SIEM solution in the cloud. As cloud computing continues to revolutionize how we do IT, SIEM must evolve to address the new challenges these changes create. Azure Sentinel is a scalable cloud solution that offers intelligent security and threat analytics. On top of that, Azure Sentinel provides threat visibility and alerting, along with proactive threat hunting and response.

So, if we look carefully, all the checkboxes for SIEM are ticked.

The pricing model for Azure Sentinel comes with two options:

  • Pay-As-You-Go: In Pay-As-You-Go, billing is done per GB of data ingested into Azure Sentinel.

    Important Note

    At the time of writing, the price in the Pay-As-You-Go model was $2.60 per ingested GB.

  • Capacity reservation: Capacity reservation offers different tiers with varying amounts of data reserved. Reservation creates a commitment and billing is done per tier, even if we don't use the reserved capacity. However, reservation provides a discount on ingested data and is a good option for organizations that expect large amounts of data.

The following diagram shows the capacity reservation prices for Azure Sentinel at the time of writing:

Figure 8.1 – Azure Sentinel pricing

If ingested data exceeds the reservation limit, further billing is done based on the Pay-As-You-Go model. For example, if we have a reserved capacity of 100 GB and we ingest 112 GB, we will pay the tier price for the reserved capacity up to 100 GB and will also pay for the additional 12 GB for the data that exceeds the reservation.
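Using the Pay-As-You-Go rate quoted in the note above, the 12 GB of overage in this example would add roughly 12 × $2.60 ≈ $31.20 on top of the fixed tier price; the tier price itself depends on the reservation selected (see Figure 8.1).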

When enabling Azure Sentinel, we need to define the Log Analytics workspace that will be used for storing data. We can either create a new Log Analytics workspace or use an existing one.

Azure Sentinel uses Log Analytics for storing data. The price for Azure Sentinel does not include charges for Log Analytics.

Important Note

An additional charge for Log Analytics will be incurred for ingested data. Information about pricing can be found at https://azure.microsoft.com/en-us/pricing/details/monitor/.

In the following screenshot, we can see all the pricing options for Azure Sentinel at the time of writing:

Figure 8.2 – Changing the Azure Sentinel pricing tier

Now, let's see how Azure Sentinel does everything that SIEM should be able to do. We will analyze all the requirements one by one and see how they are satisfied by Azure Sentinel.

Let's start with data connectors and retention.

Configuring data connectors and retention

One of SIEM's requirements is data aggregation. However, data aggregation doesn't just mean collecting data, but also the ability to collect data from multiple sources. Azure Sentinel does this really well and has many integrated connectors. At the time of writing, there are 32 connectors available, and more are being introduced constantly. Most of the connectors are for different Microsoft sources, such as Azure Active Directory, Azure Active Directory Identity Protection, Azure Security Center, Microsoft Cloud App Security, and Office 365, to name but a few. But there are also connectors for data sources outside the Microsoft ecosystem, such as Amazon Web Services, Barracuda Firewalls, Cisco, Citrix, and Palo Alto Networks.

Important Note

For more information on connectors, you can refer to the following link: https://docs.microsoft.com/en-us/azure/sentinel/connect-data-sources

The data connectors page is shown in the following screenshot:

Figure 8.3 – Azure Sentinel data connectors

All data connectors include step-by-step instructions explaining how to configure data sources to send data. It's important to mention that all the data is stored in Log Analytics. Instructions vary from data source to data source. Most Microsoft data sources can be added by just enabling a connection from one service to another. Other data sources require the installation of an agent or editing of the endpoint's configuration.

There are many default connectors in Azure Sentinel. Besides the obvious Microsoft connectors for services in Azure and Office 365, there are many connectors for on-premises services as well. But it doesn't stop there: many other connectors are available, such as Amazon Web Services, Barracuda, Cisco, Palo Alto, F5, and Symantec.

Once the data is imported into Log Analytics and is ready to be used in Azure Sentinel, that is when the real work starts.
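Before building anything on top of the data, it can be useful to confirm that a connector is actually delivering records to the workspace. The following Kusto query is a minimal sketch, assuming the Azure Active Directory and Azure Activity connectors are enabled (so the SigninLogs and AzureActivity tables exist); it counts the records received per table over the last day:

    // Minimal ingestion check: record counts per source table over the last 24 hours.
    // SigninLogs and AzureActivity only exist if the corresponding connectors are enabled.
    union withsource = SourceTable SigninLogs, AzureActivity
    | where TimeGenerated > ago(1d)
    | summarize RecordCount = count() by SourceTable

If a table returns no rows, the connector configuration (or the agent on the data source) is the first place to look.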

Working with Azure Sentinel dashboards

After the data is gathered, the next step is to display data using various dashboards. Dashboards visually present data using Key Performance Indicators (KPIs), metrics, and key data points in order to monitor security events. Presenting data visually helps set up baseline patterns and detect anomalies.

The following screenshot shows events and alerts over time:

Figure 8.4 – Events and alerts dashboard

In this screenshot, we can see how the baseline is established. There is a similar number of events over time. Any sudden increase or decrease would be an anomaly that we would need to investigate.

The events over time dashboard uses metrics to display data, but we can also use KPIs to create different types of dashboards. The following screenshot shows anomalies in a data source:

Figure 8.5 – Anomalies dashboard

These two examples represent default dashboards that are available when Azure Sentinel is enabled. We can also create custom dashboards, based on the requirements and the KPIs defined.
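Behind each dashboard tile sits a Kusto query. As a minimal sketch of what backs an events-over-time view like the one in Figure 8.4 (assuming the SigninLogs table is populated by the Azure Active Directory connector), the following query plots sign-in volume per hour:

    // Sketch: hourly sign-in volume over the last 7 days, rendered as a time chart.
    // Assumes SigninLogs is populated; swap in another table for other data sources.
    SigninLogs
    | where TimeGenerated > ago(7d)
    | summarize SignIns = count() by bin(TimeGenerated, 1h)
    | render timechart

A steady line establishes the baseline; a sudden spike or drop is the anomaly we would investigate.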

However, this is only the first step in detecting something that's out of the ordinary. We need additional steps to automate the process.

Setting up rules and alerts

The problem with dashboards is that they are only useful if someone is watching them. Data is displayed visually, and we can detect issues only if we are monitoring the dashboards at all times. But what happens when no one is watching? For these situations, we define rules and alerts.

Using rules, we can define a baseline and send notifications (or even automate responses) when any type of anomaly appears. In Azure Sentinel, we can create custom rules on the Analytics blade, with two types of rules available:

  • The first type of rule is the Microsoft incident creation rule, where we can select from a list of predefined analytic rules. The available sources are Microsoft Cloud App Security, Azure Security Center, Azure Advanced Threat Protection, Azure Active Directory Identity Protection, and Microsoft Defender Advanced Threat Protection. The only other option is to select the severity of the incidents that will be tracked.
  • The second type of rule is the Scheduled query rule. We have more options here and can define rules that track basically anything. The only limitation is our data: the more data we have, the more things we can track. Using Kusto Query Language, we can create custom rules and check for any type of information, as long as the data is already in the Log Analytics workspace.

To create a custom query rule, the following steps are required:

  1. We need to define a name, select the tactics, and set the severity we want to detect. There are several tactics options we can choose from: Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Exfiltration, Command and Control, and Impact.

    Optionally, we can add a description. Adding a description is recommended because it can help us track down rules and detect their purpose. An example is shown in the following screenshot:

    Figure 8.6 – Creating a new analytics rule

    Next, we need to set the rule's logic. In this part, we set up the rule query using Kusto Query Language. The query is executed against the data in Log Analytics in order to detect any events that represent a threat or an issue. The query needs to be syntactically correct: the syntax is validated, and we cannot proceed if the check fails. An example of a query in a custom rule is shown in the following screenshot:

    Figure 8.7 – Defining an analytic rule query

    The query used in this example tracks the hosts that the account has logged into over the last 24 hours, as you can see here:

    let GetAllHostsbyAccount = (v_Account_Name:string){
    SigninLogs
    | extend v_Account_Name = case(
        v_Account_Name has '@', tostring(split(v_Account_Name, '@')[0]),
        v_Account_Name has '\\', tostring(split(v_Account_Name, '\\')[1]),
        v_Account_Name
        )
    | where UserPrincipalName contains v_Account_Name
    | extend RemoteHost = tolower(tostring(parsejson(DeviceDetail.['displayName'])))
    | extend OS = DeviceDetail.operatingSystem, Browser = DeviceDetail.browser
    | extend StatusCode = tostring(Status.errorCode), StatusDetails = tostring(Status.additionalDetails)
    | extend State = tostring(LocationDetails.state), City = tostring(LocationDetails.city)
    | extend info = pack('UserDisplayName', UserDisplayName, 'UserPrincipalName', UserPrincipalName, 'AppDisplayName', AppDisplayName, 'ClientAppUsed', ClientAppUsed, 'Browser', tostring(Browser), 'IPAddress', IPAddress, 'ResultType', ResultType, 'ResultDescription', ResultDescription, 'Location', Location, 'State', State, 'City', City, 'StatusCode', StatusCode, 'StatusDetails', StatusDetails)
    | summarize min(TimeGenerated), max(TimeGenerated), Host_Aux_info = makeset(info) by RemoteHost, tostring(OS)
    | project min_TimeGenerated, max_TimeGenerated, RemoteHost, OS, Host_Aux_info
    | top 10 by min_TimeGenerated desc nulls last
    | project-rename Host_UnstructuredName=RemoteHost, Host_OSVersion=OS
    };
    // change <Name> value below
    GetAllHostsbyAccount('<Name>')

  2. Once the query is defined, we need to create the schedule. The schedule defines how often the query is executed and the period of data it is executed against. We also define the threshold at which an event becomes an alert. For example, a single failed login attempt is just an event, but if it repeats over time, it becomes an alert (a minimal sketch of such a threshold query is shown after these steps).

    The following screenshot shows an example of a schedule and a threshold:

    Figure 8.8 – Analytic rule scheduling

  3. In the last step, we can define what will happen when the alert is triggered. Similar to workflow automation in Azure Security Center (covered in Chapter 7, Azure Security Center), logic apps are used to create automated responses. These can be either notifications (to users or groups of users) or actions that react to stop or prevent the threat.

    An example of creating a new logic app is shown in the following screenshot:

Figure 8.9 – Automated response with a logic app
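As mentioned in step 2, a single failed login is just an event, while repeated failures become an alert. The following is a minimal sketch of a scheduled rule query that applies such a threshold, assuming the SigninLogs table is available and that a ResultType of "0" denotes a successful sign-in; the threshold of 10 is purely illustrative, and in practice the alert threshold can also be set in the rule's scheduling settings, as shown in Figure 8.8:

    // Sketch: accounts with more than 10 failed sign-ins in the rule's lookback window.
    // The scheduled rule's query period (Figure 8.8) limits the time range, so no
    // explicit TimeGenerated filter is strictly required here.
    SigninLogs
    | where ResultType != "0"
    | summarize FailedAttempts = count() by UserPrincipalName, IPAddress
    | where FailedAttempts > 10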

But in modern cybersecurity, this may not be enough. We need to respond in a matter of seconds and we need the ability to track specific events related to security.

Creating workbooks

In Azure Sentinel, we can use workbooks to define what we want to monitor and how we do it. Similar to alert rules, we have the option to use predefined templates or to create custom workbooks. In contrast with alert rules, workbooks create dashboards that allow us to monitor data in real time.

At this moment, there are 39 templates available, and this list is very similar to the list of data connectors; basically, there is at least one workbook template for each data connector. We can choose any template from the list displayed in the following screenshot:

Figure 8.10 – Azure Sentinel workbook templates

Each template will enable an additional dashboard that is customized to monitor a certain data source. In the following screenshot, we can see the dashboard for Azure activities:

Figure 8.11 – Azure Activity template
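The tiles in a workbook such as the Azure Activity one are driven by Kusto queries against the underlying tables. As a rough sketch (assuming the Azure Activity connector is enabled, so the AzureActivity table exists), a custom tile could chart daily operations per caller like this:

    // Sketch: daily count of Azure Activity operations per caller over the last week.
    // Assumes the Azure Activity connector is enabled and populating AzureActivity.
    AzureActivity
    | where TimeGenerated > ago(7d)
    | summarize Operations = count() by bin(TimeGenerated, 1d), Caller
    | render columnchart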

The story doesn't end here. With Azure Sentinel, we can leverage machine learning and embed intelligence in our security layer.

Using threat hunting and notebooks

In Azure Sentinel, with dashboards and alerts, we can look for anomalies and issues, but in modern IT we need more. Cyber threats are becoming more and more sophisticated. Traditional methods of detecting issues and threats are not enough. By the time we detect issues, it may already be too late. We need to be proactive and look for possible issues and stop threats before they occur.

For threat hunting, there is a separate section in Azure Sentinel. It allows us to create custom queries, but also offers an extensive list of pre-created queries to help us get started. Some of the queries for proactive threat hunting are high reverse DNS count, domains linked with the WannaCry ransomware, failed login attempts, hosts with new logins, and unusual logins, to name a few. A list of some of the available queries is shown in the following screenshot:

Figure 8.12 – Queries for threat hunting
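To illustrate what a hunting query can look like, here is a minimal sketch in the spirit of the unusual logins queries mentioned above. It assumes the SigninLogs table is populated, and the threshold of five distinct IP addresses is illustrative only:

    // Sketch: accounts that signed in from more than 5 distinct IP addresses in the last day.
    SigninLogs
    | where TimeGenerated > ago(1d)
    | summarize DistinctIPs = dcount(IPAddress), Locations = make_set(Location) by UserPrincipalName
    | where DistinctIPs > 5
    | order by DistinctIPs desc

Results like these are not proof of compromise, which is exactly why the bookmarks described next are useful for revisiting them later.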

We can also switch view to live stream and use it to watch for certain events in real time. Live stream offers a graphical view of our hunting queries to help us monitor threats visually.

Another option in the hunting section is bookmarks. When exploring data and looking for threats, we may encounter events that we are not sure about. Some events may look innocent, but, in combination with other events, may prove very dangerous. Bookmarks allow us to save the results of some queries in order to revisit them later. We may want to check the same thing in the next few days and compare results, or maybe check some other logs that may give us more information.

Proactive hunting does not stop there. There is another section in Azure Sentinel, called notebooks. Azure Sentinel is a data store that uses powerful queries and scaling to analyze massive volumes of data. The notebooks feature goes one step further and, through common APIs, allows the use of Jupyter Notebook and Python. These tools extend what we can do with the data stored in Azure Sentinel, allowing us to use a huge collection of libraries for machine learning, visualization, and complex analysis.

Again, some notebooks are already available that we can start using right away. A list of the available notebooks is shown in the following screenshot:

Figure 8.13 – Azure Sentinel notebooks

Azure Sentinel isn't just a tool with a limited number of options; it's also very customizable. There are many ways to adjust Azure Sentinel to your specific needs and requirements. There are many external resources that can be used or further adjusted.

Using community resources

The power of Azure Sentinel is extended by its community. There is a GitHub repository at https://github.com/Azure/Azure-Sentinel where we can find many additional resources. Resources developed by the community offer new dashboards, hunting queries, exploration queries, playbooks for automated responses, and much, much more.

The community repository is a very useful collection of resources that we can use to extend Azure Sentinel with additional capabilities and increase security even further.

Summary

Azure Sentinel satisfies all the requirements for SIEM. Not only that, it also brings additional tools to the table in the form of proactive threat hunting, machine learning, and predictive algorithms. With all the other resources we have covered, most aspects of modern security have been addressed, from identity and governance, through network and data protection, to monitoring health, detecting issues, and preventing threats.

But security does not end here. Almost every resource in Azure has some security options enabled. These options can help us go even further and improve security. With all the cybersecurity threats around today, we need to take every precaution available.

In the final chapter, we are going to discuss security best practices and how to use every security option to our advantage.

Questions

  1. Azure Sentinel is…

    A. Security Event Management (SEM)

    B. Security Information Management (SIM)

    C. Security Information Event Management (SIEM)

  2. Azure Sentinel stores data in…

    A. Azure Storage

    B. Azure SQL Database

    C. A Log Analytics workspace

  3. Which data connectors are supported in Azure Sentinel?

    A. Microsoft data connectors

    B. Cloud data connectors

    C. A variety of data connectors from different vendors

  4. Which query language is used in Azure Sentinel?

    A. SQL

    B. GraphQL

    C. Kusto

  5. Dashboards in Azure Sentinel are used for…

    A. Visual detection of issues

    B. Constant monitoring

    C. Threat prevention

  6. Rules and alerts in Azure Sentinel are used for…

    A. Visual detection of issues

    B. Constant monitoring

    C. Threat prevention

  7. Threat hunting is performed by…

    A. Monitoring dashboards

    B. Using Kusto queries

    C. Analyzing data with Jupyter Notebook
