How it works...

In docker-compose.yml, we have added more services and defined some environment variables. These make our system more robust and allow us to replicate the multi-host paradigm for serving static files that is preferred in production.

The first new service is a proxy, based on the jwilder/nginx-proxy image. This service attaches to port 80 on the host machine and passes requests through to port 80 in the container. The purpose of the proxy is to allow the use of friendly hostnames, rather than relying on everything running on localhost.
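
The following is a minimal sketch of how such a proxy service might be declared in docker-compose.yml. The keys shown here are illustrative rather than copied from the recipe's actual file; the Docker socket mount is what the jwilder/nginx-proxy image relies on to discover the other containers:

services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"   # host port 80 is forwarded to port 80 in the container
    volumes:
      # read-only access to the Docker socket lets the proxy detect containers
      # that declare a VIRTUAL_HOST environment variable and route to them
      - /var/run/docker.sock:/tmp/docker.sock:ro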

Two other new services are defined toward the end of the file, one for serving media and another for static files (a sketch of both follows this list):

  • These both run the Apache httpd static server and map the associated directory to the default htdocs folder from which Apache serves files.
  • We can also see that they each define a VIRTUAL_HOST environment variable, whose value is drawn from corresponding host variables MEDIA_HOST and STATIC_HOST, and which is read automatically by the proxy service.
  • The services listen on port 80 within the container, so requests for resources under the corresponding hostname can be forwarded dynamically by the proxy to the associated service.
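
Here is a sketch of what those two services might look like. The htdocs path is the default document root for the official httpd image, while the service names and host directories are assumptions based on the project layout:

services:
  media:
    image: httpd                             # Apache serving uploaded media files
    volumes:
      - ./media:/usr/local/apache2/htdocs    # map media into the default htdocs folder
    environment:
      - VIRTUAL_HOST=${MEDIA_HOST}           # substituted from the host environment
  static:
    image: httpd                             # Apache serving collected static files
    volumes:
      - ./static:/usr/local/apache2/htdocs
    environment:
      - VIRTUAL_HOST=${STATIC_HOST}

Both containers listen on port 80 by default, which is the port the proxy forwards to unless instructed otherwise.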

The db service has been augmented in a few ways (a sketch of the result follows this list):

  • First, we ensure that it is listening on the expected port 3306 in the container network.
  • We also set up a few volumes so that content can be shared outside the container: a my.cnf file allows changes to the basic running configuration of the database server; the database content is exposed as a mysql directory, in case we want to back up the database itself; and a data directory is added for SQL scripts, so that we can connect to the database container and execute them directly if desired.
  • Lastly, four environment variables are declared for the database: MYSQL_ROOT_PASSWORD, MYSQL_HOST, MYSQL_USER, and MYSQL_PASSWORD. No values are given, so each value is taken from the host environment itself when we run docker-compose up.
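
The db service might then be declared along these lines. The container-side mount points are assumptions (the standard locations for the official mysql image), and the bare environment entries are how docker-compose passes values through from the host shell:

services:
  db:
    image: mysql
    expose:
      - "3306"                              # reachable only on the container network
    volumes:
      - ./config/my.cnf:/etc/mysql/my.cnf   # adjust the server's running configuration
      - ./mysql:/var/lib/mysql              # database files, available for backups
      - ./data:/data                        # SQL scripts we can execute in the container
    environment:                            # no values given, so they come from the host
      - MYSQL_ROOT_PASSWORD
      - MYSQL_HOST
      - MYSQL_USER
      - MYSQL_PASSWORD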

The final set of changes in docker-compose.yml is for the app service itself; these changes are similar in nature to those noted previously (a sketch follows this list):

  • The port definition is changed so that port 8000 is exposed only within the container network, rather than being bound on the host, since we will now access Django via the proxy.
  • More than simply depending on the db service, our app now links directly to it over the internal network, which makes it possible to refer to the service by its name rather than an externally accessible hostname.
  • As with the database, several environment variables are indicated to supply external data to the container from the host. There are pass-through variables for MEDIA_HOST and STATIC_HOST, plus SITE_HOST and a mapping of it to VIRTUAL_HOST used by the proxy.
  • While the proxy connects to virtual hosts via port 80 by default, we are running Django on port 8000, so the proxy is instructed to use that port instead via the VIRTUAL_PORT variable.
  • Last but not least, the MYSQL_HOST, MYSQL_USER, MYSQL_PASSWORD, and MYSQL_DATABASE variables are passed into the app service for use in the project settings.
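
Pulling these points together, the app service might look roughly like the following. The exact keys in the real file may differ, and setting MYSQL_HOST to the linked service name is an assumption about how the database is reached:

services:
  app:
    build: .                          # built from the project's Dockerfile
    expose:
      - "8000"                        # available on the container network only
    links:
      - db                            # the database can be reached by the name db
    environment:
      - SITE_HOST                     # passed through from the host shell
      - MEDIA_HOST
      - STATIC_HOST
      - VIRTUAL_HOST=${SITE_HOST}     # tells the proxy which hostname routes here
      - VIRTUAL_PORT=8000             # ...and that Django listens on port 8000
      - MYSQL_HOST=db                 # assumption: reuse the linked service name
      - MYSQL_DATABASE
      - MYSQL_USER
      - MYSQL_PASSWORD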

This brings us to the updates to settings.py, which are largely centered on connectivity and security (an illustrative excerpt follows this list):

  • To ensure that access to the application is limited to expected connections, we add SITE_HOST to ALLOWED_HOSTS if one is given for the environment.
  • For DATABASES, the original sqlite3 settings are left in place, but we replace that default with a configuration for MySQL if we find the MYSQL_HOST environment variable has been set, making use of the MySQL variables passed into the app service.
  • As noted in the Working with Docker recipe, we can only view logs that are exposed by the container. By default, the Django runserver command does not output logging to the console, so no logs are technically exposed. The next change to settings.py sets up a LOGGING configuration so that a simple format is always logged to the console when DEBUG is True.
  • Finally, instead of relying upon Django to serve static and media files, we check for the corresponding STATIC_HOST and MEDIA_HOST environment variables and, when those exist, set the STATIC_URL and MEDIA_URL settings accordingly.
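
As a rough illustration of those last few points, the relevant parts of settings.py might look like the following. The structure mirrors the prose above rather than the recipe's exact file, and BASE_DIR is defined as in a standard generated settings module:

import os

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Only accept requests for the hostname expected in this environment
ALLOWED_HOSTS = []
SITE_HOST = os.environ.get('SITE_HOST')
if SITE_HOST:
    ALLOWED_HOSTS.append(SITE_HOST)

# Default to SQLite, but switch to MySQL when the container provides MYSQL_HOST
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}
if os.environ.get('MYSQL_HOST'):
    DATABASES['default'] = {
        'ENGINE': 'django.db.backends.mysql',
        'HOST': os.environ.get('MYSQL_HOST'),
        'NAME': os.environ.get('MYSQL_DATABASE'),
        'USER': os.environ.get('MYSQL_USER'),
        'PASSWORD': os.environ.get('MYSQL_PASSWORD'),
    }

# Log a simple format to the console whenever DEBUG is True, so the output
# is visible via the container logs
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'filters': {
        'require_debug_true': {'()': 'django.utils.log.RequireDebugTrue'},
    },
    'formatters': {
        'simple': {'format': '%(levelname)s %(message)s'},
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'filters': ['require_debug_true'],
            'formatter': 'simple',
        },
    },
    'root': {'handlers': ['console'], 'level': 'INFO'},
}

# Serve static and media files from the dedicated Apache hosts when configured
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_HOST = os.environ.get('STATIC_HOST')
if STATIC_HOST:
    STATIC_URL = 'http://{}/'.format(STATIC_HOST)
MEDIA_HOST = os.environ.get('MEDIA_HOST')
if MEDIA_HOST:
    MEDIA_URL = 'http://{}/'.format(MEDIA_HOST)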

With all of the configurations updated, we need an easy way to run the containers so that the appropriate environment variables are supplied. Although it might be possible to export the variables, that would negate much of the benefit of isolation we otherwise gain from using Docker. Instead, docker-compose can be run with inline variables, so that they are set only for that single invocation. This is, ultimately, what the dev script does.

Now we can run docker-compose commands for our development environment—which includes a MySQL database, separate Apache servers for media and static files, and the Django server itself—with a single, simplified form:

myproject_docker/$ MYSQL_USER=myproject_user \
> MYSQL_PASSWORD=pass1234 \
> ./bin/dev up -d

Creating myprojectdocker_media_1 ... done
Creating myprojectdocker_db_1 ... done
Creating myprojectdocker_app_1 ... done
Creating myprojectdocker_static_1 ... done

In the dev script, the appropriate variables are all defined for the command automatically, and docker-compose is then invoked in a single step. The script mentions in comments three other, more sensitive variables that should be provided externally, two of which are included here. If you are less concerned about the security of a development database, these could just as easily be included in the dev script itself. A less secure, but more convenient, way of providing the variables across runs is to export them, after which they remain set as environment variables for the rest of the shell session, as in the following example:

myproject_docker/$ export MYSQL_USER=myproject_user
myproject_docker/$ export MYSQL_PASSWORD=pass1234
myproject_docker/$ ./bin/dev build
myproject_docker/$ ./bin/dev up -d

Any commands or options passed into dev, such as up -d in this case, are forwarded along to docker-compose via the $* special variable included at the end of the script. With the host mapping complete and our containers up and running, we should be able to access the system at the SITE_HOST address, such as http://myproject.local/.

The resultant file structure for a complete Docker project might look something like this:

myproject_docker/
├── apps/
│ ├── external/
│ ├── myapp1/
│ └── myapp2/
├── bin/
│ ├── dev*
│ ├── prod*
│ ├── staging*
│ └── test*
├── config/
│ ├── my.cnf
│ └── requirements.txt
├── data/
├── media/
├── mysql/
│ ├── myproject_db/
│ ├── mysql/
│ ├── performance_schema/
│ ├── sys/
│ ├── ibdata1
│ └── ibtmp1
├── project/
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
├── static/
├── templates/
├── Dockerfile
├── README.md
└── docker-compose.yml