© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
M. Baker, Secure Web Application Development, https://doi.org/10.1007/978-1-4842-8596-1_12

12. Logging and Monitoring

Matthew Baker
Kaisten, Aargau, Switzerland

This is arguably the most important chapter in the book. Try as we may to prevent attackers from compromising our systems, there will always be a chance that one will succeed. Damage, both actual and reputational, can be minimized by taking action early. Damage can even be prevented by acting as soon as unauthorized access is attempted, before an attacker succeeds in gaining entry.

In order to respond rapidly to unauthorized access, you must generate logs, and you or an operations team must monitor them. This may also be required by compliance departments. If you have several applications and servers, manually looking through log files becomes unsustainable.

In this chapter, we will look at how to automatically consolidate log files, from one or several servers, so they can be viewed and searched in one place. We will set up the Elastic Stack, or ELK, which is a popular open source toolset for logging and monitoring.

We will also look at how to create custom logging for our application, beyond what is provided by Apache by default, and create alerts so that we don’t miss important security events.

12.1 Logging, Aggregating, and Analytics

Even if you have only one server, the system log file, Apache error file, Apache access log, and database log file can all contain entries indicating that attackers have compromised your system, or attempted to do so. If you have your application running on several load-balanced servers, or have multiple applications, it is infeasible to regularly read each log file.

Log aggregators read log files and copy the entries to a central store. That store can be on the same server or on a central host. The aggregated logs are more readable, and searchable, if they have a format that can be parsed into columns, for example:
  • Timestamp

  • Client IP address

  • URL

  • Response code

Reading a single, aggregated log is better than reading several logs on several hosts because you only have to look in one place. However, it is still difficult to read and analyze because it is large and entries appear in the order they were sent, not necessarily in an order that shows a sequence of related events. It is difficult to find important entries and to follow an attack vector. Analytics engines help with this task by providing a query language and dashboards. They can also generate alerts, sending emails or notifications to tools such as Slack.

One popular toolset is Elastic Stack, also called ELK. Later in this chapter, we will use this stack in an exercise to implement logging and monitoring for Coffeeshop.

12.2 The ELK Stack

ELK stands for Elasticsearch, Logstash, and Kibana. They are three tools by the company Elastic that, used together, are a popular open source log file aggregation and monitoring platform.

Elasticsearch is a REST-based search engine based on the Lucene library. Logstash is a log aggregator. Kibana is a web-based visualization tool.

A fourth component, Beats, is often used with ELK. Beats abstracts log collection away from Logstash. Beats components for specific purposes, such as extracting log files or system metrics, send data to Logstash, which aggregates and stores it. Elasticsearch provides the API to query the logs and metrics. Kibana adds a GUI. As the stack has now grown beyond the original three components, ELK is also referred to as the Elastic Stack.

Elasticsearch works by maintaining indexes, which applications such as Logstash create. Logstash creates its indexes with names
logstash-yyyy.mm.dd-n

where n is a sequence number (000001, 000002, etc.) created by index rollover. When using Kibana to view logs, we define an index pattern for it to search. By default, this is *, which matches all indexes.
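If you are curious which indexes exist, you can ask Elasticsearch directly over its REST API. The following is a minimal sketch using only the Python standard library; it assumes Elasticsearch is listening on localhost:9200 inside the VM, as it will be after the installation exercise later in this chapter.

# A minimal sketch: list the indexes Elasticsearch currently holds by calling
# its cat-indices API. Assumes Elasticsearch is reachable on localhost:9200.
import urllib.request

with urllib.request.urlopen('http://localhost:9200/_cat/indices?v') as response:
    print(response.read().decode())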

Loading Log Files with Logstash

In the next exercise, we will add ELK to Coffeeshop and use it to view access logs. We will not use Beats, as the built-in log file parsing functionality in Logstash is sufficient for our purposes.

To parse log files with Logstash, you create a configuration file that tells Logstash the file locations and how to parse them into columns. In the next exercise, we will do this for Apache’s error.log and access.log files. The configuration file is in
coffeeshop/vagrant/elk/apache.conf
The contents of the file are
input {
  file {
    path => "/var/log/apache2/*.log"
    start_position => "beginning"
  }
}
filter {
  if [path] =~ "access" {
    mutate { replace => { type => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  } else if [path] =~ "error" {
    mutate { replace => { type => "apache_error" } }
  } else {
    mutate { replace => { type => "random_logs" } }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout { codec => rubydebug }
}

The first section, input { ... }, defines the file sources. We are loading all files from the /var/log/apache2 directory, starting at the beginning of each file. Logstash understands log rotation: when a log file reaches a certain size, Apache appends a number to it and creates a new log file, and once log files reach a certain age, they are compressed. Logstash tracks this and ensures log entries are not duplicated when it parses them.

The second section, filter { ... }, tells Logstash how to parse the files. Apache’s error and access logs have different formats, so we parse them differently, using an if statement to check the file name:
if [path] =~ "access" {
  ...
} else if [path] =~ "error" {
  ...
} else {
  ...
}

Logstash assigns a type to each log entry and stores it in the type field. The default is _doc.

Logstash creates the following additional fields:
  • message: The whole line from the log file

  • timestamp: Parsed from a string matching Day Month Date Hour:Minute:Seconds.Milliseconds Year

  • hostname: Hostname the log was extracted from

  • path: The full path of the file

These fields are sufficient for our Apache error log files, so we don’t do any further processing, other than to replace the value of type with apache_error in the line:
mutate { replace => { type => "apache_error" } }

We do this so that we can differentiate between error and access log entries when searching.

For access log files, we set type to apache_access. We also have some additional filters. The first is
grok {
  match => { "message" => "%{COMBINEDAPACHELOG}" }
}

The grok filter extracts additional fields using regular expressions. We are using it to extract fields from the message field. The regular expressions are defined using the syntax %{PATTERN}. There are a number of built-in patterns. The one we are using is COMBINEDAPACHELOG, which parses Apache access log file entries into their constituent fields. A full list is available in Logstash’s logstash-patterns-core GitHub repository.1
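To make the effect of COMBINEDAPACHELOG more concrete, the following Python sketch parses a sample access log line with a simplified regular expression of our own. It is illustrative only: the real Logstash pattern is more permissive, and the sample line and values are invented.

# Illustrative only: a simplified stand-in for COMBINEDAPACHELOG, showing the
# kind of fields grok extracts from a combined-format access log line.
import re

COMBINED_RE = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) (?P<httpversion>[^"]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('10.50.0.1 - - [10/Oct/2022:13:55:36 +0200] '
        '"GET /admin/login/ HTTP/1.1" 200 2326 "-" "Mozilla/5.0"')

match = COMBINED_RE.match(line)
if match:
    for field, value in match.groupdict().items():
        print(f'{field}: {value}')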

We also have the filter
date {
  match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}

This replaces the default parsing for the timestamp field. Apache error log timestamps are formatted according to the Logstash default. Apache access log timestamps are not, so we need to declare the correct syntax.
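For comparison, here is the same Apache access log timestamp layout expressed in Python; the sample value is invented, but its format matches the dd/MMM/yyyy:HH:mm:ss Z pattern declared above.

# The Apache access log timestamp format, parsed with Python for illustration.
# The sample value is invented.
from datetime import datetime

ts = datetime.strptime('10/Oct/2022:13:55:36 +0200', '%d/%b/%Y:%H:%M:%S %z')
print(ts.isoformat())  # 2022-10-10T13:55:36+02:00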

The last section in the file, output { ... }, defines where to send the parsed log entries. We are sending them to two locations: elasticsearch and stdout. For elasticsearch, we define the host and port Elasticsearch is running on. For stdout, we use the rubydebug codec to format the output; this is the default. Another possibility is json.

Add ELK To Coffeeshop

We will install Elasticsearch, Logstash, and Kibana and configure Logstash to log Apache logs. Then we will use Elasticsearch to view login attempts to the Coffeeshop admin console.

Installing ELK

Installing ELK is a bit involved, so we've created a shell script to automate it and also document the steps involved. To install, run the following commands from a terminal inside the Coffeeshop VM:
cd /vagrant/elk
sudo bash install-elk.sh
The steps install-elk.sh performs are as follows:
  1. Add Elastic to the Apt sources list so that we can fetch the latest versions with apt-get.

  2. Install Elasticsearch, Logstash, and Kibana using apt-get.

  3. Configure Logstash to use fewer workers than the default (on a development server, we only need one worker).

  4. Add the configuration file to read Apache log files.

  5. Bind the Kibana web server to address 0.0.0.0 so that we can view it from a browser on the host computer.

  6. Enable and start Kibana (Elasticsearch and Logstash are auto-enabled and started during installation).

For details, read the install-elk.sh script.

Configure Kibana to View Apache Access Logs

Visit the URL http://10.50.0.2:5601 in a browser to open Kibana. It may take a minute or two to start. If you get a welcome screen, click Explore on my own. Click the three horizontal bars menu icon at the top left and select Discover under Analytics. The first time you do this on a Kibana installation, you will be taken to a screen to create an index pattern. Click the Create index pattern button, and in the next screen, type * in the Name field and select @timestamp from the Timestamp drop-down (see Figure 12-1). Click Create index pattern to save. Now select Discover under Analytics from the menu again.

You should now see a screen like Figure 12-2 (if you have a wizard popover, close it first). The log entries you see in the section on the right will differ.

Click + Add Filter (highlighted in the figure). In the popup, select type from the Field drop-down (not _type with the leading underscore character), select is from the Operator drop-down, and enter apache_access under Value. This is shown in Figure 12-3. Click Save.

In another tab, visit a page in Coffeeshop (e.g., http://10.50.0.2). Now click the Refresh button in Kibana (top right). You should see your page access logged in the search results.

You can save the filter for later use by clicking on the disk icon to the left of the search bar and clicking the Save Current Query button. The search will then be available in the drop-down list when you click the disk icon again.

Viewing Admin Login Attempts

Let's create a more complex query to view accesses on the admin console login page. This time we need to use Elasticsearch's query language, Query DSL. Leave the type: apache_access filter we previously created as it is. Click + Add Filter again.

Now click Edit as Query DSL at the top right of the filter popup. Enter the text as shown in Figure 12-4. You will find this text in the file
coffeeshop/vagrant/elk/dsl/admin-login.dsl

Click Save. This is a simple wildcard search on the request field, which contains the URL from the HTTP request.
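The exact query lives in admin-login.dsl. The sketch below shows the general shape of such a wildcard query, sent straight to Elasticsearch's search API from Python; the "*admin*" pattern and the logstash-* index pattern are assumptions for illustration, not the file's contents.

# Illustrative only: a wildcard query on the request field, in the same spirit
# as admin-login.dsl. The "*admin*" pattern and logstash-* index are assumed.
import json
import urllib.request

query = {
    "query": {
        "wildcard": {
            "request": {"value": "*admin*"}
        }
    }
}

req = urllib.request.Request(
    'http://localhost:9200/logstash-*/_search',
    data=json.dumps(query).encode(),
    headers={'Content-Type': 'application/json'},
)
with urllib.request.urlopen(req) as response:
    for hit in json.loads(response.read())['hits']['hits']:
        print(hit['_source'].get('request'))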

Now visit the admin console at http://10.50.0.2/admin and log in as user admin. As always, the password is in
coffeeshop/secrets/config.env

Go back to Kibana and click Refresh. You should see your login attempt.

As we parsed the access log file when creating the apache.conf file, we can make the output easier to read by customizing the fields. At the left of the Kibana screen, hover over clientip and click the + button. Do the same for verb, request, and response. Your screen should look something like Figure 12-5. If you like, you can save the query for reuse later.

A screenshot depicts a Kibana page for creating an index pattern. It has a Name field, a Timestamp field, and the option to display advanced settings.

Figure 12-1

Creating an index pattern in Kibana

A screenshot of the Analytics Discover page in Kibana. It has search space, filter by type, field name search space, Available field in drop down with popular field names, graph chart, Time stamp, Refresh button, highlighted Add button, and Save button on the top.

Figure 12-2

The Analytics Discover page in Kibana

A screenshot depicts a Kibana page for adding a filter. It shows the selected Add filter option, which opens an Edit filter box with space to search for Field type and value; operator; Cancel and Save function keys.

Figure 12-3

Adding a filter in Kibana

An admin login filter creation page for Kibana using DSL is depicted in a screenshot. The Create Custom Label toggle button, Cancel, and Save buttons have been selected. The Add Filter option opens the Edit filter box with a DSL query.

Figure 12-4

Using DSL to create an admin login filter in Kibana

Kibana displays a screenshot of administrator login attempts. It illustrates a search field at the top with a date filter and a refresh button. It has a search space for field names and the option to filter by type, displaying both selected and available fields. On the right is a chart with a bar graph, time, clientip, verb, request link, and response.

Figure 12-5

Kibana showing admin login attempts

If you followed the preceding exercise, the ELK stack will now be running in your virtual machine and enabled each time you start the VM with vagrant up. It can consume a lot of CPU. If this becomes a problem, you can disable it with
sudo bash /vagrant/elk/disable.sh
from inside the Coffeeshop VM. You can reenable it with
sudo bash /vagrant/elk/enable.sh

12.3 Creating Custom Log Files

One of the drawbacks of form-based authentication is that an unsuccessful login results in a 200 OK response code. Django simply redisplays the login page, but with an error as part of the HTML body. A successful login results in a 302 Found redirect, but using the response code to differentiate between a successful and a failed login is error-prone.

We would like to see successful and unsuccessful admin console logins in our Kibana control panel. We can create our own log file to report them to. We can also send email alerts whenever there is a successful login.

By default, Django writes log messages only to the console, and only when debugging is switched on with
DEBUG = True
in settings.py. Apache directs all console messages to
/var/log/apache2/error.log

We could create our own custom login and logout messages and also send them to the console, but in order for ELK to parse them into fields, we would need a common format for every line in that file. A better solution is to create a separate file to log them to. We can do this in Django by setting the LOGGING variable in settings.py.

Django abstracts logging by defining three components, all of which are set in the LOGGING variable:
  • Formatters define how log messages are formatted, for example, by prepending a timestamp and log level.

  • Handlers define how log entries are written, for example, streaming to the console or writing to a file. Handlers can use a custom formatter.

  • Loggers define a name that can be associated with a handler. In Python, retrieving a logger by that name selects the logging destination. A logger also has a log level, for example, ERROR or DEBUG.

To report login and logout messages to a separate file, we define a formatter to prepend a timestamp when logging the message, a handler that writes to a file using that formatter, and a logger that lets us write to that handler in Python code. We will do this in the next exercise.
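As a quick illustration of how the three pieces fit together, here is a minimal stand-alone sketch using Python's logging module directly; the file path is an arbitrary example. Django's LOGGING dictionary, shown in the next exercise, wires up the equivalent objects for us.

# A minimal sketch of formatter, handler, and logger using plain Python
# logging; /tmp/login.log is an arbitrary example path.
import logging

formatter = logging.Formatter('{asctime} {message}', style='{')  # formatter
handler = logging.FileHandler('/tmp/login.log')                  # handler
handler.setFormatter(formatter)

logger = logging.getLogger('login')                              # logger
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info('login success alice 10.50.0.1 /account/login/')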

Django handles login and logout requests within its auth module. Rather than requiring us to edit this code to add logging messages, the auth module sends signals when certain actions occur, such as a user logging in, logging out, or failing a login attempt. We can bind handlers to these signals in Python and write to our log file from those handlers. We will do this in the next exercise too.

Display Django Login And Logout Events In Kibana
This exercise builds on the previous one, so make sure you have completed it first. We will observe successful logins, successful logouts, and failed logins in Kibana. There are four steps:
  1. Create a new Django logger.

  2. Receive login and logout signals and write log entries.

  3. Configure Logstash to receive and parse the log messages.

  4. Configure Kibana to view the log messages.


The code for this exercise is in vagrant/snippets/loginalert.

This exercise relies on the previous one, in which we installed the ELK stack. If you did not do that exercise, please complete it first. If you did do it, and you restarted your VM since then, you will need to restart the ELK stack manually by running the following inside your Coffeeshop VM:
cd /vagrant/elk
sudo bash ./enable.sh

Create a Django Logger

In Django, we configure logging in the LOGGING variable in settings.py. Edit this file for the Coffeeshop application and add the following code:
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'timestamp': {
            'format': '{asctime} {message}',
            'style': '{',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
        },
        'login': {
            'level': 'INFO',
            'class': 'logging.FileHandler',
            'filename': '/var/log/django/login.log',
            'formatter': 'timestamp'
        },
    },
    'root': {
        'handlers': ['console'],
        'level': 'WARNING',
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'level': os.getenv('DJANGO_LOG_LEVEL', 'INFO'),
            'propagate': False,
        },
        'login': {
            'handlers': ['login'],
            'level': 'INFO',
            'propagate': False,
        },
    },
}
This configures Django to write to a new log file. We will have to create the directory. Enter the following inside your Coffeeshop VM:
sudo mkdir /var/log/django
sudo chown www-data.www-data /var/log/django
sudo chmod 775 /var/log/django

The 'formatters': { ... } section prepends a timestamp to the log messages. The 'handlers': { ... } section defines two handlers: console, which writes to the console, and another, login, that writes to a new file. The 'root': { ... } section defines the default logger that is used when no others match. The 'loggers': { ... } section defines two loggers: one called django, which writes to the console, and one called login, which writes to our new login log file.

Handle Login and Logout Events

Create a new file
vagrant/coffeeshopsite/coffeeshop/signals.py
with the following contents:
import logging
from django.contrib.auth.signals import user_logged_in, user_logged_out, user_login_failed
from django.dispatch import receiver
log = logging.getLogger('login')
@receiver(user_logged_in)
def user_logged_in_callback(sender, request, user, **kwargs):
    ip = request.META.get('REMOTE_ADDR')
    uri = request.META.get('PATH_INFO')
    if (request.META.get('QUERY_STRING')):
        uri += '?' + request.META.get('QUERY_STRING')
    log.info('login success {user} {ip} {uri}'.format(
        user=user,
        ip=ip,
        uri=uri
    ))
@receiver(user_logged_out)
def user_logged_out_callback(sender, request, user, **kwargs):
    ip = request.META.get('REMOTE_ADDR')
    uri = request.META.get('PATH_INFO')
    if (request.META.get('QUERY_STRING')):
        uri += '?' + request.META.get('QUERY_STRING')
    log.info('logout success {user} {ip} {uri}'.format(
        user=user,
        ip=ip,
        uri=uri
    ))
@receiver(user_login_failed)
def user_login_failed_callback(sender, credentials, request, **kwargs):
    user = credentials['username']
    ip = request.META.get('REMOTE_ADDR')
    uri = request.META.get('PATH_INFO')
    if (request.META.get('QUERY_STRING')):
        uri += '?' + request.META.get('QUERY_STRING')
    log.info('login failure {user} {ip} {uri}'.format(
        user=user,
        ip=ip,
        uri=uri
    ))

We have one function for each of the three signals sent by the auth module. The @receiver decorator binds each function to a signal. In each case, we want to send a message to the login logger. We arbitrarily choose level INFO. We will log the username, IP address, and URI as well as success or failure.

We need to activate these signal handlers when the application starts. A good place to do this is in the ready() function in CoffeeshopConfig. Edit the file
vagrant/coffeeshopsite/coffeeshop/apps.py
so that it reads
from django.apps import AppConfig
class CoffeeshopConfig(AppConfig):
    name = 'coffeeshop'
    def ready(self):
        # Implicitly connect signal handlers decorated with @receiver.
        from . import signals
Now restart Apache by running the following within the Coffeeshop VM:
sudo apachectl restart

Configure Logstash

We need to replace
/etc/logstash/conf.d/apache.conf
There is a new version in
/vagrant/elk/apachedjango.conf

Copy this file to /etc/logstash/conf.d/apache.conf. Make sure you either name the copy apache.conf, overwriting the old file, or delete the old apache.conf. If both files are present, the log entries will be processed twice.

We are adding the following to the original conf file. First, we define a new file source in the input { ... } section:
file {
  path => "/var/log/django/login.log"
  start_position => "beginning"
}
Second, we are adding a new else if clause to the filter { ... } section:
} else if [path] =~ "login" {
  mutate { replace => { type => "django_login" } }
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:action}
       %{WORD:status} %{USERNAME:[user][identity]} %{IPORHOST:[source][address]}
       %{DATA:[http][request][referrer]}" }
  }
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}

The match line in grok parses the message field into some new fields. TIMESTAMP_ISO8601, WORD, etc., are built-in Logstash patterns. We are reusing some field names that were already created when parsing the Apache access.log file, for example, for the username. The [user][identity] syntax means the value will be placed in a nested field called user.identity.

The match line within date parses the ISO 8601 timestamps that were extracted by grok. Without this, Logstash would use the time it processed the log entry rather than the timestamp contained within it.
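To see how the pieces line up, here is an invented example of a line of the kind that ends up in login.log, together with the fields the grok and date filters above would roughly extract from it.

# Illustrative only: a sample login.log line (values invented) and, roughly,
# the fields the grok pattern above extracts from it.
sample_line = '2022-10-10 13:55:36,123 login success admin 10.50.0.1 /admin/login/?next=/admin/'

extracted = {
    'timestamp': '2022-10-10 13:55:36,123',                  # TIMESTAMP_ISO8601
    'action': 'login',                                       # WORD
    'status': 'success',                                     # WORD
    'user.identity': 'admin',                                # USERNAME
    'source.address': '10.50.0.1',                           # IPORHOST
    'http.request.referrer': '/admin/login/?next=/admin/',   # DATA
}

for field, value in extracted.items():
    print(f'{field}: {value}')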

Restart Logstash with
sudo service logstash restart

from inside the Coffeeshop VM.

Configure Kibana

Open Kibana by visiting
http://10.50.0.2:5601

in a web browser.

As in the last exercise, we are going to create a filter, this time on the new django_login type we created previously. From the three-horizontal-bar menu at the top left of Kibana, select Discover under Analytics. Click + Add filter, as in Figure 12-2. For Field, select type. For Operator, select is. Under Value, enter django_login and click Save.

Let’s make a few login and logout records. In another tab or browser window, visit

http://10.50.0.2/account/login/

and make a failed login attempt (e.g., with xxx as the username and password). Now log in correctly as either bob or alice and then log out. Go back to Kibana and click the Refresh button at the top right. You should see a screen similar to Figure 12-6.

Let’s customize the table so it’s easier to read. From the fields list at the left of the screen, hover over each of the following to see the plus button and then click it to add the field to the table:
  • source.address

  • user.identity

  • action

  • status

Your screen should look like Figure 12-7.

A screenshot of a Django login page and logout events in Kibana. On the left side, there is a search field for field names, a filter by type option, and a dropdown with 15 available fields. On the right is a chart with a bar graph, time, and document details. On the top is a search field with the option to search by time and a refresh button.

Figure 12-6

Kibana showing Django login and logout events

A page screenshot with formatted login and logout events in Kibana. On the left, there is a search field for field names, a filter by type option, and a drop-down with available fields and selected field names. A bar graph is displayed on the right, with time, source, address, user identity, and action indicated below it.

Figure 12-7

Kibana showing formatted login and logout events

12.4 Creating Alerts for Security Events

Kibana also supports sending alerts to channels such as Email or Slack. This feature is part of the paid version of Kibana, but there is an open source tool that has similar functionality. It is called ElastAlert, and we will use it in the next exercise to send an Email whenever there is a successful login as the admin user.

ElastAlert is a Python program that queries Elasticsearch. It runs rules, each of which is defined in a YAML file. The rules query Elasticsearch and apply a filter, just like Kibana. When an entry matches a filter, the rule performs an action such as sending an Email.

There are different types of rules:
  • Any: Performs the action on everything that matches the filter

  • Blacklist: Performs the action if a field matches an entry in a blacklist

  • Whitelist: Performs the action if a field does not match an entry in a whitelist

  • Change: Performs the action if a field changes

  • Frequency: Performs the action if there are a certain number of events in a given timeframe

  • Spike: Performs the action if the event occurs a defined multiplication factor more than in the previous time period

  • Flatline: Performs the action if the number of events is under a given threshold

  • New term: Performs the action when a value occurs in a field for the first time

  • Cardinality: Performs the action when the total number of unique values in a field is above or below a threshold

  • Metric aggregation: Performs the action when the value of a metric is higher or lower than a threshold within a calculation window

  • Spike aggregation: Similar to spike but on a metric within a calculation window

  • Percentage match: Performs the action when the percentage of documents in the match bucket is higher or lower than a threshold within a calculation window

In the exercise, we will use the Any type. For more details on the others, see the ElastAlert documentation.2

ElastAlert is designed to be run as a daemon, for example, with the Python zdaemon process controller or as a systemd service.

Creating Email Alerts Using Elastalert

This exercise builds on the previous two, so make sure you have completed them first. We will use ElastAlert to send an email whenever someone successfully logs in as admin. We have MailCatcher set up in the Coffeeshop VM, so we will use it as our SMTP server.

Install ElastAlert by opening a terminal session in the Coffeeshop VM with vagrant ssh and running
sudo pip3 install elastalert

Configuring ElastAlert

To configure ElastAlert, we need to create a config.yaml file and a rule file. There is a config.yaml in
/vagrant/elk/elastalert/config.yaml
ElastAlert's Git repository contains an example config.yaml and example rules. These are not installed by Pip. To see them, you can clone the repository from
https://github.com/Yelp/elastalert.git
Our configuration file does not differ much from the example. The line
rules_folder: /vagrant/elk/elastalert/rules
points ElastAlert at a directory containing rules. We have just one rule in our directory. The lines
es_host: localhost
and
es_port: 9200

tell ElastAlert where to find the Elasticsearch service.

We also tell ElastAlert to run every minute and to buffer results for 15 minutes in case some log entries do not arrive in real time.

ElastAlert has its own indexes. We need to create them with
sudo elastalert-create-index

For the Elasticsearch host and port, enter localhost and 9200, respectively. Enter f for Use SSL. You can choose the defaults for all other options.

Creating the Rule

We have a single rule, of type Any, in /vagrant/elk/elastalert/rules. Take a look at this file. The filter section contains three terms. These are AND'ed together, so the rule matches log entries only when all three terms are present:
filter:
- term:
    type: "django_login"
- term:
    user.identity: admin
- term:
    action: login
We tell it which fields to include in its index with
include:
  - timestamp
  - host
  - user.identity
  - source.address
  - status
and we format the Email with
alert_subject: "Admin login on <{}>"
alert_subject_args:
  - host
alert_text: |-
  Login as {} on {} from {} {}
alert_text_args:
  - user.identity
  - host
  - source.address
  - status
We set the alert type to Email and configure the recipient address with
alert:
  - "email"
email:

A MailCatcher screenshot displays an Admin user login event. It contains information on mail sent from, to, subject, and time received. On the bottom right side, there is a Download button. The search messages space, clear, and quit buttons are in the upper right corner.

Figure 12-8

MailCatcher showing an Admin user login event

The file also contains connection details for the SMTP server.

Running ElastAlert

Run ElastAlert with
elastalert --verbose --config /vagrant/elk/elastalert/config.yaml

Now log out from Coffeeshop if you are already logged in and then visit

http://10.50.0.2/admin

in your web browser. Log in as user admin.

We have set the --verbose flag on ElastAlert, so you should see it detect the login. It may take a minute or two as it only runs every 60 seconds. Open MailCatcher by visiting

http://10.50.0.2:1080

in your browser. You should see a screen like Figure 12-8.

12.5 Summary

Logging and monitoring are essential for spotting and responding to intrusions or intrusion attempts. Web servers, database servers, operating systems, and web applications all produce logs, but monitoring them regularly is difficult without an aggregation and search toolset such as the ELK stack.

Logging applications such as Kibana don't always get looked at regularly enough to act on critical security events promptly. Alert tools such as ElastAlert can filter events and send them to other channels that are monitored more frequently, such as Email or Slack.

In this chapter, we looked at how ELK can be used to monitor Apache error and access logs. We also created a log file that records Admin user login and logout events and configured ElastAlert to send emails whenever there is a successful Admin user login.

In the next chapter, we look at how third-party tools, as well as people themselves, can introduce vulnerabilities and what we can do to defend against them.
