Chapter 10. What Now?

Flask is currently one of the most popular web frameworks, so finding online reading material for it is not hard. A quick search will surely turn up one or two good articles on most subjects you might be interested in. Nonetheless, subjects such as deployment, even though much discussed on the Internet, still raise doubt in our fellow web warriors' hearts. For that reason, we have stashed a nice step-by-step "deploy your Flask app like a boss" recipe in our last chapter. Along with it, we'll point you to a few very special places where knowledge lies thick and juicy, waiting for you to pinch some wisdom. With this chapter, you'll be able to take your product from code to server and, maybe, just maybe, fetch some well-deserved high fives! Welcome to this chapter, where code meets the server and you meet the world!

You deploy better than my ex

Deployment is not a term everyone is familiar with; if you were not a web developer until recently, you're probably unfamiliar with it. In a rough, Spartan way, one could define deployment as the act of preparing and presenting your application to the world: making sure the required resources are available and tuning it, as a configuration suitable for the development phase is not the same as one appropriate for production. In a web development context, we are talking about a few very specific actions:

  • Placing your code on a server
  • Setting up your database
  • Setting up your HTTP server
  • Setting up other services you may use
  • Tying everything together

Placing your code on a server

First of all, what is a server? By server we mean a computer with server-grade features, such as high reliability, availability, and serviceability (RAS). These features grant the application running on the server a certain level of trust that the server will keep running even through environment problems, such as a hardware failure.

In the real world, where people have budgets, a normal computer (one of those you buy at the closest store) would most likely be the best choice for running a small application, because "real servers" are very expensive. For small project budgets (nowadays, big ones too), a robust solution called server virtualization was created, where expensive, high-RAS physical servers have their resources (memory, CPU, hard drives, and so on) split into virtual machines (VMs), which act just like smaller (and cheaper) versions of the real hardware. Companies such as DigitalOcean (https://digitalocean.com/), Linode (https://www.linode.com/), and RamNode (https://www.ramnode.com/) have whole businesses focused on providing cheap, reliable virtual machines to the public.

Now, given that our web application is ready (I mean, our Minimum Viable Product is ready), we must run the code somewhere accessible to our target audience. This usually means we need a web server. Pick two cheap virtual machines from one of the companies mentioned in the preceding paragraph, set them up with Ubuntu, and let's begin!

Setting up your database

With respect to databases, one of the most basic things you should know about deployment is that it is good practice to have your database and web application running on different (virtual) machines. You don't want them competing for the same resources, believe me. That's why we rented two virtual servers: one will run our HTTP server and the other our database.

Let's begin with our database server setup. First, we add our SSH credentials to the remote server so that we can authenticate without typing the remote user's password every time. Before that, generate your SSH keys, if you do not have them already, like this:

# ref: https://help.github.com/articles/generating-ssh-keys/
# type a passphrase when asked for one
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

Now, given that your virtual machine provider gave you an IP address, a root user, and a password for your remote machine, we set up passwordless SSH authentication with our server as follows:

# type the root password when requested
ssh-copy-id root@ipaddress

Now, exit the remote session and try ssh root@ipaddress again. The password will no longer be requested.
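As an optional convenience, an SSH client alias saves you from typing the IP address every time. In the snippet below, dbserver is a hypothetical alias and ipaddress stands for the address your provider gave you:

```
# ~/.ssh/config
Host dbserver
    HostName ipaddress
    User root
```

From now on, ssh dbserver does the same job as ssh root@ipaddress.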

Here's the second step! Get rid of the non-database stuff such as Apache and install Postgres (http://www.postgresql.org/), the most advanced open source database to date:

# as root
apt-get purge apache2-*
apt-get install postgresql
# check which version of postgres was installed (most likely 9.x)
psql -V

Now we set up the database.

Connect as the default postgres system user, which maps to the postgres role:

sudo -u postgres psql

Create a database for our project called mydb:

CREATE DATABASE mydb;

Create a new user role to access our database:

CREATE USER you WITH PASSWORD 'passwd'; -- please, use a strong password
-- We now make sure "you" can do whatever you want with mydb
-- You don't want to keep this setup for long, be warned
GRANT ALL PRIVILEGES ON DATABASE mydb TO you;

So far, we've accomplished quite a lot: we removed unnecessary packages (just a few), installed the latest supported version of our database, Postgres, created a new database and a new "user", and granted our user full permissions over the new database. Let's go over each step.

We begin by removing Apache2 and the like because this is a database server setup, so there is no need to keep the Apache2 packages around. Depending on the installed Ubuntu version, you may need to remove other packages as well. The golden rule here is: the fewer packages installed, the fewer packages we have to pay attention to. Keep only the minimum.

Then we install Postgres. Depending on your background, you might ask: why Postgres and not MariaDB/MySQL? Well, well, fellow reader, Postgres is a complete solution with ACID support, document (JSONB) storage, key-value storage (with HStore), indexing, text search, server-side programming, geolocation (with PostGIS), and so on. If you know how to install and use Postgres, you have access to all these features in a single solution. I also like it more than the other open source/free solutions, so we'll stick with it.

After installing Postgres, we have to configure it. Unlike SQLite, which we have used so far as our relational database solution, Postgres has a robust permission system based on roles that controls which resources may be accessed or modified, and by whom. The main concept here is that a role is a very particular kind of group, which may hold permissions, called privileges, and may contain or belong to other roles. For example, the CREATE USER command run inside the psql console (the Postgres interactive console, similar to Python's) does not actually create a user; in reality, it creates a new role with the login privilege, which resembles the user concept. The following command is equivalent to the CREATE USER command inside psql:

CREATE ROLE you WITH LOGIN;

Now, on to our last sphinx: the GRANT command. To allow roles to do stuff, we grant them privileges, such as the login privilege that allows our "user" to log in. In our example, we grant the role you all available privileges on the database mydb. We do this so that we're able to create tables, alter tables, and so on. You usually don't want your production web application's database user (whoa!) to hold all these privileges because, in the event of a security breach, the invader would be able to do anything to your database. As one usually (cough cough, never!) does not alter the database structure on user interaction, using a less privileged user with the web application is not a problem.
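As a sketch of what that less privileged setup could look like once your schema exists, the following commands create a hypothetical webapp role that can read and write rows but cannot change the schema (role name and password are examples; run the table grants connected to mydb):

```sql
-- Hypothetical least-privilege role for the web application
CREATE ROLE webapp WITH LOGIN PASSWORD 'another-strong-passwd';
GRANT CONNECT ON DATABASE mydb TO webapp;
-- allow row access to every table that currently exists
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO webapp;
-- needed so INSERTs relying on serial/sequence columns work
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO webapp;
```

Note that grants on "ALL TABLES" cover existing tables only; tables created later need new grants (or default privileges).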

Tip

pgAdmin is an amazing, user-friendly Postgres management application. Just use it with SSH tunneling (http://www.pgadmin.org/docs/dev/connect.html) and be happy!

Now test that your database setup is working. Connect to it from the console:

psql -U you -d mydb -h 127.0.0.1 -W

Enter your password when asked for it. The preceding command holds a small trick: we connect to the database through a network interface (-h 127.0.0.1). By default, local connections use peer authentication, where Postgres assumes you're connecting with a role and database named after your system username, and you cannot connect as a role whose name differs from your system username. Connections made through a network interface use password authentication instead, which is why we did it that way.
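This behaviour is controlled by Postgres host-based authentication configuration. On Ubuntu, the file lives at a version-dependent path such as /etc/postgresql/9.x/main/pg_hba.conf, and the relevant default entries look roughly like this:

```
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 peer
host    all       all   127.0.0.1/32  md5
```

The local line applies peer authentication to Unix-socket connections, while the host line applies password (md5) authentication to TCP connections from localhost.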

Setting up the web server

Setting up your web server is a little more complex, as it involves modifying more files and making sure the configuration is consistent across them, but we'll make it, you'll see.

First, we make sure our project code is on our web server (which is not the same machine as the database server, right?). We can do this in one of many ways: FTP (please don't), plain fabric plus rsync, pure version control, or version control plus fabric (happy face!). Let's see how to do the latter.

Given that you have already created a regular user called myuser on your web server virtual machine, make sure you have fabric installed:

sudo apt-get install python-dev
pip install fabric

Then, create a file called fabfile.py in your project root:

# coding:utf-8

from fabric.api import *
from fabric.contrib.files import exists

env.linewise = True
# forward_agent allows you to git pull from your repository
# if you have your ssh key setup
env.forward_agent = True
env.hosts = ['your.host.ip.address']


def create_project():
    if not exists('~/project'):
        run('git clone git://path/to/repo.git')


def update_code():
    with cd('~/project'):
        run('git pull')


def reload():
    "Reloads project instance"
    run('touch --no-dereference /tmp/reload')

With the preceding code and fabric installed, given that you have copied your SSH key to the remote server with ssh-copy-id and have it set up with your version control provider (for example, GitHub or Bitbucket), the create_project and update_code commands become available to you. You may use them like this:

fab create_project  # creates our project in the home folder of our remote web server
fab update_code  # updates our project code from the version control repository

It's very easy: the first command clones your repository into the server, while the second updates it to your latest commit.

Our web server setup will use some very popular tools:

  • uWSGI: This is used for application server and process management
  • Nginx: This is used as our HTTP server
  • Upstart: This is used to manage our uWSGI life cycle

Upstart comes with Ubuntu out of the box, so we'll come back to it later. uWSGI, we need to install, like this:

pip install uwsgi

Now, inside your virtualenv bin folder, there will be a uwsgi command. Keep track of where it is, as we'll need it soon.

Create a wsgi.py file inside your project folder with the following content:

# coding:utf-8
from main import app_factory

app = app_factory(name="myproject")

uWSGI uses the app instance from the preceding file to connect to our application. app_factory is a factory function that creates our application; we have seen a few so far. Just make sure the app instance it returns is properly configured. Application-wise, this is all we have to do. Next, we move on to connecting uWSGI to our application.
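For the curious: all uWSGI requires of that app object is the WSGI callable interface, which a Flask instance implements. A minimal stand-in (not our Flask factory, just a sketch of the interface) looks like this:

```python
# A bare WSGI callable, the shape uWSGI expects to find under the
# name "app". A Flask instance implements this same interface.
def app(environ, start_response):
    # environ: a dict carrying the request data;
    # start_response: a callback receiving the status line
    # and the list of response headers
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from uWSGI\n"]
```

Flask's app object is invoked exactly like this on every request, which is why the callable option in the next section simply names it.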

We may call our uWSGI binary with all the parameters necessary to load our wsgi.py file directly from the command line, or we can create an ini file with all the necessary configuration and just hand it to the binary. As you may guess, the second approach is usually better, so create an ini file that looks like this:

[uwsgi]
user-home = /home/your-system-username
project-name = myproject
project-path = %(user-home)/%(project-name)

# make sure paths exist
socket = %(user-home)/%(project-name).sock
pidfile = %(user-home)/%(project-name).pid
logto = /var/tmp/uwsgi.%(project-name).log
touch-reload = /tmp/reload
chdir = %(project-path)
wsgi-file = %(project-path)/wsgi.py
callable = app
chmod-socket = 664

master = true
processes = 5
vacuum = true
die-on-term = true
optimize = 2

The user-home, project-name, and project-path options are aliases we define to make our work easier. The socket option points to the socket file our HTTP server will use to communicate with our application. We'll not discuss all the given options, as this is not an overview of uWSGI, but a few of the more important ones, such as touch-reload, wsgi-file, callable, and chmod-socket, deserve a detailed explanation. touch-reload is particularly useful: uWSGI watches the file you pass as its argument and reloads your application whenever that file is updated/touched (this is the file our fabric reload task touches). After a code update, you certainly want to reload your app. wsgi-file specifies which file holds our WSGI-compatible application, while callable tells uWSGI the name of the instance in the wsgi file (app, usually). Finally, we have chmod-socket, which changes the socket permissions to -rw-rw-r--, that is, read/write for the owner and group, read-only for everyone else. We need this because we want our application to run in the user scope and its socket to be readable by the www-data user, which is the server user. This setup is quite secure, as the application cannot mess with anything beyond its own system user's resources.

We may now set up our HTTP server, which is quite an easy step. Just install Nginx as follows:

sudo apt-get install nginx-full

Now, your HTTP server is up and running on port 80. Let's make sure Nginx knows about our application. Write the following code to a file called project inside /etc/nginx/sites-available:

server {
    listen 80;
    server_name PROJECT_DOMAIN;

    location /media {
        alias /path/to/media;
    }
    location /static {
        alias /path/to/static;
    }

    location / {
        include         /etc/nginx/uwsgi_params;
        uwsgi_pass      unix:/path/to/socket/file.sock;
    }
}

The preceding configuration file creates a virtual server running on port 80, listening for the domain in server_name, serving static and media files from the provided paths under /static and /media, and forwarding every request to / to our application through the socket. We now turn our configuration on and the default nginx configuration off:

sudo rm /etc/nginx/sites-enabled/default
sudo ln -s /etc/nginx/sites-available/project /etc/nginx/sites-enabled/project

What have we just done? Configuration files for virtual servers live inside /etc/nginx/sites-available and, whenever we want a configuration to be seen by nginx, we symlink it into sites-enabled. In the preceding commands, we disabled default and enabled project by symlinking it. Nginx does not notice and load such changes on its own; we need to tell it to reload its configuration. Let's save that step for last.

We need to create one last file inside /etc/init that will register our uWSGI process as a service with upstart. This part is really easy; just create a file called project.conf (or any other meaningful name) with the following content:

description "uWSGI application my project"

start on runlevel [2345]
stop on runlevel [!2345]

setuid your-user
setgid www-data

exec /path/to/uwsgi --ini /path/to/ini/file.ini

The preceding script runs uWSGI with our project's ini file (created earlier) as a parameter, running as the user your-user and the group www-data. Replace your-user with your own user (…) but do not replace the www-data group, as it is a required configuration. The runlevel configuration just tells upstart when to start and stop this service; you don't have to intervene.

Run the following command line to start your service:

sudo start project

Next reload Nginx configuration like this:

sudo /etc/init.d/nginx reload

If everything went fine, the media path and static path exist, the project database settings point to the remote server inside the private network, and the gods are smiling on you, your project should be accessible from your registered domain. Gimme a high-five!!
