When we run dozens or hundreds of containers in production, ideally on a clustered container platform, it soon becomes difficult and tedious to read, search, and process logs, just as it was when services ran on dozens or hundreds of physical or virtual servers. The problem is that traditional solutions don't handle Docker logs out of the box. Luckily, AWS offers a simple log-aggregation service named AWS CloudWatch, and Docker ships a logging driver just for it. We'll send our Tomcat logs to it right away!
To use AWS CloudWatch Logs, we need at least one log group. Use the Terraform code from this book's Terraform chapter to create a CloudWatch Logs group and a dedicated IAM user, or create both manually.
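If you create the IAM user manually, a minimal policy along the following lines grants what the awslogs driver needs (a sketch: the resource ARN reuses the docker_logs group and us-east-1 region from this recipe; adjust the account and region to your setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:us-east-1:*:log-group:docker_logs:*"
    }
  ]
}
```

Scoping the policy to a single log group keeps the dedicated user from writing anywhere else in CloudWatch Logs.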
The Docker daemon needs the AWS credentials in its environment; they are not passed to the containers, as logging is handled by the Docker daemon's log driver. To give the daemon access to the keys we created, let's create a systemd drop-in configuration file for the Docker service in /etc/systemd/system/docker.service.d/aws.conf:
[Service]
Environment="AWS_ACCESS_KEY_ID=AKIAJ..."
Environment="AWS_SECRET_ACCESS_KEY=SW+jdHKd.."
Don't forget to reload the systemd daemon and restart Docker to apply the changes:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
We're now ready to talk to the AWS APIs through the Docker daemon.
Here's a simple way to run the Tomcat 9 container using the awslogs driver, with the CloudWatch log group named docker_logs in the us-east-1 region, automatically creating a new stream named www:
$ sudo docker run -d -p 80:8080 \
    --log-driver="awslogs" \
    --log-opt awslogs-region="us-east-1" \
    --log-opt awslogs-group="docker_logs" \
    --log-opt awslogs-stream="www" \
    tomcat:9
Navigating to the AWS Console, the new log stream will appear under Search Log Group:
Clicking on the log stream name will give us access to all the output logs from our Tomcat container:
We now have access to unlimited log storage and search features, for very little effort!
It's also possible to configure the logging driver using Docker Compose. Here's how to create a log stream named tomcat under the same log group in docker-compose.yml:
version: '2'
services:
  tomcat:
    image: tomcat:9
    logging:
      driver: 'awslogs'
      options:
        awslogs-region: 'us-east-1'
        awslogs-group: 'docker_logs'
        awslogs-stream: 'tomcat'
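When many instances of the same service share one compose file, a fixed awslogs-stream makes them all write to a single stream. As a sketch (assuming a Docker version recent enough to support the tag option on the awslogs driver; tag and awslogs-stream are alternatives, not used together), the stream name can instead be derived from each container:

```yaml
version: '2'
services:
  tomcat:
    image: tomcat:9
    logging:
      driver: 'awslogs'
      options:
        awslogs-region: 'us-east-1'
        awslogs-group: 'docker_logs'
        # Name the CloudWatch stream after the container instead of a fixed value
        tag: '{{.Name}}'
```

With no awslogs-stream and no tag at all, the driver falls back to the container ID as the stream name.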
Launch the compose as usual:
$ sudo docker-compose up
Creating network "ubuntu_default" with the default driver
[...]
tomcat_1  | WARNING: no logs are available with the 'awslogs' log driver
The tomcat CloudWatch log stream is now automatically created and the logs flow into it.
Another useful way to launch containers is through systemd. Here's how to create a dynamically named log stream using the systemd unit name (in this case, tomcat.service). This is useful on platforms that run multiple instances of the same container, so each instance sends its logs separately. Here's a working Tomcat systemd service that runs Docker and sends the logs to a dynamically named stream, in /etc/systemd/system/tomcat.service:
[Unit]
Description=Tomcat Container Service
After=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=/usr/bin/docker pull tomcat:9
ExecStartPre=-/usr/bin/docker kill %n
ExecStartPre=-/usr/bin/docker rm %n
ExecStart=/usr/bin/docker run --rm -p 80:8080 \
    --log-driver=awslogs \
    --log-opt awslogs-region=us-east-1 \
    --log-opt awslogs-group=docker_logs \
    --log-opt awslogs-stream=%n \
    --name %n tomcat:9
ExecStop=/usr/bin/docker stop %n

[Install]
WantedBy=multi-user.target
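Taking the multiple-instances idea one step further, a hypothetical templated unit (a sketch, saved as /etc/systemd/system/tomcat@.service, using systemd's %i instance specifier as both the host port and part of the stream name; the tomcat-%i naming is an assumption, not from the recipe above) gives every instance its own port, container name, and log stream:

```ini
[Unit]
Description=Tomcat Container Service (instance %i)
After=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=/usr/bin/docker pull tomcat:9
ExecStartPre=-/usr/bin/docker rm -f tomcat-%i
# %i expands to the part after "@" in the unit name, here reused as the host port
ExecStart=/usr/bin/docker run --rm -p %i:8080 \
    --log-driver=awslogs \
    --log-opt awslogs-region=us-east-1 \
    --log-opt awslogs-group=docker_logs \
    --log-opt awslogs-stream=tomcat-%i \
    --name tomcat-%i tomcat:9
ExecStop=/usr/bin/docker stop tomcat-%i

[Install]
WantedBy=multi-user.target
```

Starting `tomcat@8080` and `tomcat@8081` would then produce two containers and two separate CloudWatch streams, tomcat-8080 and tomcat-8081.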
Reload systemd and start the tomcat unit:
$ sudo systemctl daemon-reload
$ sudo systemctl start tomcat
Now a third log stream is created with the service name, with the systemd unit logs streaming into it:
Enjoy a centralized and powerful way of storing and accessing logs before you eventually process them!
The Docker daemon can stream logs not only to AWS, but also to the more common syslog, which opens up many options (such as traditional rsyslog setups and online services compatible with that format). It can likewise send logs to journald, and it supports the GELF log format used by Graylog and Logstash. The Fluentd unified logging layer is supported as well, while on the platform front we find support for Splunk and Google Cloud alongside AWS CloudWatch Logs.