Chapter 8: Running a Sanic Server

In the time that I have been involved with the Sanic project—and specifically, in trying to assist other developers by answering their support questions—there is one topic that perhaps comes up more than any other: deployment. That one word is often bundled with a mixture of confusion and dread.

Building a web application can be a lot of fun. I suspect that I am not alone in finding a tremendous amount of satisfaction in the build process itself. One of the reasons that I love software development in general—and web development in particular—is that I enjoy the almost puzzle-like atmosphere of fitting solutions to a given problem. When the build is done and it is time to launch, that is where the anxiety kicks in.

I cannot emphasize this next point enough. One of Sanic's biggest assets is its bundled web server. This is not just a gimmick or some side feature to be ignored. The fact that Sanic comes bundled with its own web server truly does simplify the build process. Think about traditional Python web frameworks such as Django or Flask, or about some of the newer Asynchronous Server Gateway Interface (ASGI) frameworks. For them to become operational and connected to the web, you need a production-grade web server. Building the application is only one step—deploying it requires knowledge and proficiency in another tool. Typically, the web server used to deploy your application built with one of those frameworks is not the same web server that you develop upon. For that, you have a development server.

Not only is this an added complexity and dependency, but it also means you are not developing against the actual server that will be running your code in production. Is anyone else thinking what I am thinking? Bugs.

In this chapter, we will look at what is required to run Sanic. We will explore different ways to run Sanic both in development and production to make the deployment process as easy as possible. We will start by looking at the server life cycle. Then, we will discuss setting up both a local and a production-grade scalable service. We will cover the following topics:

  • Handling the server life cycle
  • Configuring an application
  • Running Sanic locally
  • Deploying to production
  • Securing your application with Transport Layer Security (TLS)
  • Deployment examples

When we are done, your days of deployment-induced anxiety should be a thing of the past.

Technical requirements

We will, of course, continue to build upon the tools and knowledge from previous chapters. Earlier, in Chapter 3, Routing and Intaking HTTP Requests, we saw some implementations that used Docker. Specifically, we were using Docker to run an Nginx server for static content. While it is not required for deploying Sanic, knowledge of Docker and (to a lesser extent) Kubernetes will be helpful. In this chapter, we will be exploring the usage of Docker with Sanic deployments. If you are not a black-belt Docker or Kubernetes expert, don't worry. There will be examples on the GitHub repository at https://github.com/PacktPublishing/Python-Web-Development-with-Sanic/tree/main/Chapter08. All that we hope and expect is some basic understanding of and familiarity with these tools.

You will need the following tools installed to follow along with this chapter:

  • git
  • docker
  • doctl
  • kubectl

Handling the server life cycle

Throughout this book, we have spent a lot of time talking about the life cycle of an incoming HyperText Transfer Protocol (HTTP) request. In that time, we have seen how we can run code at different points in that cycle. Well, the life cycle of the application server as a whole is no different.

Whereas we had middleware and signals, the server life cycle has what are called "listeners". In fact, listeners are in effect (with one small exception) signals themselves. Before we look at how to use them, we will take a look at which listeners are available.

Server listeners

The basic premise of a listener is that you are attaching some function to an event in the server's life cycle. As the server progresses through the startup and shutdown process, Sanic will trigger these events and therefore allow you to easily plug in your own functionality. Sanic triggers events at both the startup and shutdown phases. For any other event during the life of your server, you should refer to the Leveraging signals for intra-worker communication section of Chapter 6, Operating Outside the Response Handler.

The order of events goes like this:

  1. before_server_start: This event naturally runs before the server is started. It is a great place to connect to a database or perform any other operations that need to happen at the beginning of your application life cycle. Anything that you might be inclined to do in the global scope would almost always be better off done here. The only caveat worth knowing about is that if you are running in ASGI mode, the server is already running by the time Sanic is even triggered. In that case, there is no difference between before_server_start and after_server_start.
  2. after_server_start: A common misconception about this event is that it could encounter a race condition where the event runs while your server begins responding to HTTP requests. That is not the case. What this event means is that there was an HTTP server created and attached to the operating system (OS). The infrastructure is in place to begin accepting requests, but it has not happened yet. Only once all of your listeners for after_server_start are complete will Sanic begin to accept HTTP traffic.
  3. before_server_stop: This is a good place to start any cleanup you need to do. While you are in this location, Sanic is still able to accept incoming traffic, so anything that you might need to handle should still be available (such as database connections).
  4. after_server_stop: Once the server has been closed, it is now safe to start any cleanup that is remaining. If you are in ASGI mode, as with before_server_start, this event is not actually triggered after the server is off because Sanic does not control that. It will instead immediately follow any before_server_stop listeners to preserve their ordering.

Two more listeners are available to you—however, these additional listeners are only available with the Sanic server since they are specific to the Sanic server life cycle. This is due to how the server works. When you run Sanic with multiple workers, what happens is that there is the main process that acts as an orchestrator, spinning up multiple subprocesses for each of the workers that you have requested. If you want to tap into the life cycle of each of those worker processes, then you already have the tools at your disposal with the four listeners we just saw.

However, what if you wanted to run some bit of code not on each worker process, but once in the main process: that orchestrator? The answer is the Sanic server's main process events—main_process_start and main_process_stop. Apart from the fact that they run inside the main process and not the workers, they otherwise work like the other listeners. Remember how I said that the listeners are themselves signals, with an exception? This is that exception. These listeners are not signals in disguise. For all practical purposes, this distinction is not important.
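
Here is a minimal sketch (the listener name and its body are my own) of running a one-time task in the orchestrator rather than in every worker:

from sanic import Sanic

app = Sanic("MyApp")

@app.main_process_start
async def announce(app, loop):
    # Runs exactly once in the main (orchestrator) process,
    # no matter how many workers you request
    print("Main process starting")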

It is also worth mentioning that even though these events are meant to allow code to be run in the main process and not the worker process when in multi-worker mode, they are still triggered when you are running with a single worker process. When this is the case, they will run at the very beginning and very end of your life cycle.

This raises an interesting and often-seen mistake: double execution. Before continuing with listeners, we will turn our attention to mistakenly running code multiple times.

Running code in the global scope

When you are preparing your application to run, it is not uncommon to initialize various services, clients, interfaces, and so on. You likely will need to perform some operations on your application very early in the process before the server even begins to run.

For example, let's imagine that you are looking for a solution to help you better track your exceptions. You find a third-party service where you can report all of your exceptions and tracebacks to help you to better analyze, debug, and repair your application. To get started, the service provides some documentation to use its software development kit (SDK), as follows:

from third_party_sdk import init_error_reporting

init_error_reporting(app)

You get this set up and running in your multi-worker application, and you immediately start noticing that it is running multiple times, and not only in your worker processes as expected. What is going on?

Likely, the issue is that you ran your initialization code in the global scope. By global scope in Python, we mean something that is executing outside of a function or method. It runs on the outermost level in a Python file. In the preceding example, init_error_reporting runs in the global scope because it is not wrapped inside another function. The problem is that when multiple workers are running, you need to be aware of where and when that code is running. Since multiple workers mean multiple processes and each process is likely to run in your global scope, you need to be careful about what you put there.

As a very general rule, stick to putting any operable code inside a listener. This allows you to control the where and when, enabling the listener to operate in a more consistent and easily predictable manner.
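
Returning to the error-reporting example, a minimal sketch of this rule looks like the following (using the same hypothetical SDK as above):

from sanic import Sanic
from third_party_sdk import init_error_reporting  # hypothetical SDK

app = Sanic(__name__)

@app.before_server_start
async def setup_error_reporting(app, loop):
    # Runs once per worker process at a well-defined point in
    # the life cycle, instead of at import time in every process
    init_error_reporting(app)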

Setting up listeners

Using listeners should look very familiar since they follow a similar pattern found elsewhere in Sanic. You create a listener handler (which is just a function) and then wrap it with a decorator. It should look like this:

@app.before_server_start
async def setup_db(app, loop):
    app.ctx.db = await db_setup()

What we see here is something incredibly important in Sanic development. This pattern should be committed to memory because attaching elements to your application ctx object increases your overall flexibility in development. In this example, we set up our database client so that it can be accessed from anywhere that our application can be (which is literally anywhere in the code).

One important thing to know is that you can control the order in which the listeners execute depending upon when they are defined. For the "start" time listeners (before_server_start, after_server_start, and main_process_start), they are executed in the order in which they are declared.

For the stop time listeners (before_server_stop, after_server_stop, and main_process_stop), the opposite is true. They are run in the reverse order of declaration.
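
A minimal sketch (handler names and print statements are mine) demonstrating both orderings:

from sanic import Sanic

app = Sanic("MyApp")

@app.before_server_start
async def start_one(app, loop):
    print("startup: runs first")

@app.before_server_start
async def start_two(app, loop):
    print("startup: runs second")

@app.before_server_stop
async def stop_one(app, loop):
    print("shutdown: runs second")

@app.before_server_stop
async def stop_two(app, loop):
    print("shutdown: runs first")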

How to decide to use a before listener or an after listener

As stated previously, there persists a common misconception that logic must be added to before_server_start in cases where you want to perform some operation before requests start. The fear is that using after_server_start might cause some kind of a race condition where some requests might hit the server moments before that event is triggered.

This is incorrect. Both before_server_start and after_server_start run to completion before any requests are allowed to come in.

So, then the question becomes: When should you favor one over the other? There are, of course, some personal and application-specific preferences that could be involved. Generally, however, I like to use the before_server_start event to set up my application context. If I need to initialize some object and persist it to app.ctx, then I will reach for before_server_start. For any other use case (such as performing other types of external calls or configuration), I like to use after_server_start. This is by no means a hard and fast rule, and I often break it myself.
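
Put into code, my general preference looks something like this (reusing the hypothetical db_setup helper from earlier; the warm-up query is likewise illustrative):

@app.before_server_start
async def setup_db(app, loop):
    # Initialize an object and persist it to app.ctx
    app.ctx.db = await db_setup()

@app.after_server_start
async def warm_up(app, loop):
    # Any other external calls or configuration work
    await app.ctx.db.execute("SELECT 1")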

Now that we understand the life cycle of the server, there is one more missing bit of information that we need before we can run the application: configuration.

Configuring an application

Sanic tries to make some reasonable assumptions out of the box about your application. With this in mind, you can certainly spin up an application, and it should already have some reasonable default settings in place. While this may be acceptable for a simple prototype, as soon as you start to build your application, you will realize that you need to configure it.

And this is where Sanic's configuration system comes into play.

Configuration comes in two main flavors: tweaking the Sanic runtime operation, and declaring a state of global constants to be used across your application. Both types of configuration are important, and both follow the same general principles for applying values.

We will take a closer look at what the configuration object is, how we can access it, and how it can be updated or changed.

What is the Sanic configuration object?

When you create a Sanic application instance, it will create a configuration object. That object is really just a fancy dict type. As you will see, it does have some special properties. Do not let that fool you. You should remember: it is a dict object, and you can work with it like you would any other dict object. This will come in handy shortly when we explore how we can use the object.

If you do not believe me, then pop the following code into your application:

app = Sanic(__name__)

assert isinstance(app.config, dict)

This means that getting a configuration value with a default is no different than any other dict in Python, as illustrated in the following snippet:

environment = app.config.get("ENVIRONMENT", "local")

The configuration object is—however—much more important than any other dict object. It contains a lot of settings that are critical to the operation of your application. We have, of course, already seen in Chapter 6, Operating Outside the Response Handler, that we can use it to modify our default error handling, as illustrated here:

app.config.FALLBACK_ERROR_FORMAT = "text"

To understand the full scope of settings that you can tweak, you should take a look at the Sanic documentation at https://sanic.dev/en/guide/deployment/configuration.html#builtin-values.
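
For instance, two of the built-in values control how long Sanic waits on a request and how long it holds a keep-alive connection open. The numbers below are purely illustrative, not recommendations:

app.config.REQUEST_TIMEOUT = 60
app.config.KEEP_ALIVE_TIMEOUT = 15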

How can an application's configuration object be accessed?

The best way to access the configuration object is to first get access to the application instance. Depending upon the scenario you are tackling at the moment, there are three main ways to get access to an application instance, as outlined here:

  • Accessing the application instance using a request object (request.app)
  • Accessing applications from a Blueprint instance (bp.apps)
  • Retrieving an application instance from the application registry (Sanic.get_app())

Perhaps the most common way to obtain the application instance (and, therefore, the configuration object by extension) is to grab it from the request object inside of a handler, as illustrated in the following code snippet:

@bp.route("")

async def handler(request):

    environment = request.app.config.ENVIRONMENT

If you are outside of a route handler (or middleware) where the request object is easily accessible, then the next best choice is probably to use the application registry. Rarely will it make sense to use the Blueprint apps property, which is a set of the applications that the Blueprint has been applied to. Because it only exists after registration, and because it could be ambiguous as to which application you need, I usually will not reach for it as a solution. It is, nonetheless, good to know that it exists.

You may have seen us using the third option already. As soon as an application is instantiated, it is part of a global registry that can be looked up using the following code:

from sanic import Sanic

app = Sanic.get_app()

Whenever I am not in a handler, this is the solution I usually reach for. The two caveats that you need to be aware of are these:

  1. Make sure that the application instance has already been instantiated. Using app = Sanic.get_app() in the global scope can be tricky if you are not careful with your import ordering. Later on, in Chapter 11, A Complete Real-World Example, when we build out a complete application, I will show you a trick I use to get around this.
  2. If you are building a runtime with multiple application instances, then you will need to differentiate them using the application name, as follows:

    main_app = Sanic("main")

    side_app = Sanic("side")

    assert Sanic.get_app("main") is main_app

Once you have the object, you will usually just access the configuration value as a property—for example, app.config.FOOBAR. As shown previously, you can also use a variety of Python accessors, as illustrated here:

app.config.FOOBAR
app.config.get("FOOBAR")
app.config["FOOBAR"]
getattr(app.config, "FOOBAR")

How can the configuration object be set?

If you go to the Sanic documentation, you will see that there are a bunch of default values already set. These values can be updated in a variety of methods as well. Of course, you can use the object and dict setters, like this:

app.config.FOOBAR = 123
setattr(app.config, "FOOBAR", 123)
app.config["FOOBAR"] = 123
app.config.update({"FOOBAR": 123})

You will usually set values like this right after creating your application instance. For example, throughout this book, I have repeatedly used curl to access endpoints that I created. The easiest method to see an exception is to use the text-based exception renderer. Therefore, in most cases, I have used the following pattern to make sure that when there is an exception, it is easily formatted for display in this book:

app = Sanic(__name__)
app.config.FALLBACK_ERROR_FORMAT = "text"

This is not usually ideal for a fully built application. If you have been involved in web application development before, then you probably do not need me to tell you that configuration should be easily changeable depending upon your deployment environment. Therefore, Sanic will load environment variables as configuration values if they are prefixed with SANIC_.

This means that the preceding FALLBACK_ERROR_FORMAT value could also be set outside of the application with an environment variable, like this:

$ export SANIC_FALLBACK_ERROR_FORMAT=text

The best method to do this will obviously depend upon your deployment strategy. We go deeper into those strategies later in this chapter, but the specifics of how to set those variables will differ and are outside the scope of this book.
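
For example (the variable names here are mine), values exported before startup appear on the configuration object with the SANIC_ prefix stripped, and simple types such as integers are converted:

$ export SANIC_ENVIRONMENT=production
$ export SANIC_NUM_RETRIES=3

Inside the application, they are then accessible like any other configuration value:

assert app.config.ENVIRONMENT == "production"
assert app.config.NUM_RETRIES == 3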

Another option that you may be familiar with is centralizing all of your configurations in a single location. Django does this with settings.py. While I am personally not a fan of this pattern, you might be. You can easily duplicate it, like this:

  1. Create a settings.py file by running the following code:

    FOO = "bar"

  2. Apply the configuration to the application instance, like this:

    import settings

    app.update_config(settings)

  3. Access the values as needed, as follows:

    print(app.config.FOO)

There is nothing special about the settings.py filename. You just need a module with a whole bunch of properties that are uppercased. In fact, you could replicate this with an object.

  1. Put all of your constants into an object now, like this:

    class MyConfig:
        FOO = "bar"

  2. Apply the configuration from that object, as follows:

    app.update_config(MyConfig)

The result will be the same.

Some general rules about configuration

I have some general rules that I like to follow regarding configuration, and I reproduce them here. I encourage you to adopt them since they have evolved from years of making mistakes, but I just as strongly encourage you to break them when necessary (a short sketch follows the list):

  • Use simple values: If you have some sort of a complex object such as a datetime object, perhaps configuration is not the best location for it. Part of the flexibility of configuration is that it can be set in many different ways, including outside of your application in environment variables. While Sanic will be able to convert things such as Booleans and integers, everything else will be a string. Therefore, for the sake of consistency and flexibility, try to avoid anything but simple value types.
  • Treat them as constants: Yes, this is Python. That means everything is an object and everything is subject to runtime changes. But do not do this. If you have a value that needs to be changed during the running of your application, use app.ctx instead. In my opinion, once before_server_start has completed, your configuration object should be considered locked in stone.
  • Don't hardcode values: Or, at least try really hard not to. When building out your application, you will undoubtedly encounter the need to create some sort of constant value. It is hard to guess a scenario that this might come up in without knowing your specific application, but when you realize that you are about to create a constant or some value, ask yourself whether the configuration is more appropriate. Perhaps the most concrete example of this is the settings that you might use to connect to a database, a vendor integration, or any other third-party service.
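
Here is a short sketch of these rules in practice (the db_connect helper and the DSN value are hypothetical):

from sanic import Sanic

app = Sanic("MyApp")

# A simple, constant value: appropriate for configuration
app.config.DB_DSN = "postgres://user:password@dbhost/app"

@app.before_server_start
async def setup_db(app, loop):
    # The live connection object is runtime state,
    # so it belongs on app.ctx, not app.config
    app.ctx.db = await db_connect(app.config.DB_DSN)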

Configuring your application is almost certainly something that will change over the lifetime of your application. As you build it, run it, and add new features (or fix broken features), it is not uncommon to return to configuration often. One marker of a professional-grade application is that it relies heavily upon this type of configuration. This is to provide you with the flexibility to run the application in different environments. You may, for example, have some features that are only beneficial in local development, but not in production. It may also be the other way around. Configuration is, therefore, almost always tightly coupled with the environment where you will be deploying your application.

We now turn our attention to those deployment options to see how Sanic will behave when running in development and production environments.

Running Sanic locally

We are finally at the point where it is time to run Sanic—well, locally, that is. However, we also know we have been doing that all along since Chapter 2, Organizing a Project. The Sanic command-line interface (CLI) is probably already a fairly comfortable and familiar tool, but there are some things that you should know about it. Other frameworks have only development servers. Since we know that Sanic's server is meant for both development and production environments, we need to understand how these environments differ.

How does running Sanic locally differ from production?

The most common configuration change for local development is turning on debug mode. This can be accomplished in three ways, as follows:

  1. It could be enabled directly on the application instance. You would typically see this inside of a factory pattern when Sanic is being run programmatically from a script (as opposed to the CLI). You can directly set the value, as shown here:

    def create_app(..., debug: bool = False) -> Sanic:
        app = Sanic(__name__)
        app.debug = debug
        ...

  2. It is perhaps more common to see it set as an argument of app.run. A common use case for this might be when reading environment variables to determine how Sanic should initialize. In the following example, an environment value is read and applied when the Sanic server begins to run:

    from os import environ

    from path.to.somewhere import create_app

    def main():
        app = create_app()
        debug = environ.get("RUNNING_ENV", "local") != "production"
        app.run(..., debug=debug)

  3. The final option is to use the Sanic CLI. This is generally my preferred solution, and if you have been following along with the book, it is the one that we have been using all along. This method is straightforward, as shown here:

    $ sanic path.to:app --debug

The reason that I prefer this final option is that I like to keep the operational aspects of the server distinct from other configurations.

Important Note

As of v22.3, the debug argument has changed slightly. Whereas debug used to enable both debug mode and automatic server reloading, starting in v22.3 it only controls debug mode. To get both behaviors together, you will need to use the dev argument instead (for example, sanic path.to:app --dev).

For example, timeouts are configuration values that are closely linked to the operation of the framework and not the server itself. They impact how the framework responds to requests. Usually, these values are going to be the same, regardless of where the application is deployed.

Debug mode, on the other hand, is much more closely linked to the deployment environment. You will want to set it to True locally but False in production. Therefore, since we will be controlling how Sanic is deployed with tools such as Docker, controlling the server's operational capacity outside of the application makes sense.

"Okay," you say, "turning on debug mode is simple, but why should I?" When you run Sanic in debug mode, it makes a couple of important changes. The most noticeable is that you begin to see debug logs and access logs dispatched from Sanic. This is, of course, very helpful to see while developing.

Tip

When I sit down to work on a web application, I always have three windows in my view at all times, comprising the following:

- My integrated development environment (IDE)

- An application programming interface (API) client such as Insomnia or Postman

- A Terminal showing me my Sanic logs (in debug mode)

The Terminal with debug level logging is your window into what is happening with your application as you build it.

Perhaps the biggest change that debug mode brings is that any exception will include its traceback in the response. In the next chapter, we will look at some examples of how you can make the most of this exception information.

This is hugely important and useful while you are developing. It is also a huge security issue to accidentally leave it on in production. DO NOT leave debug mode on in a live web application. This includes any instance of your application that is not on a local machine. So, for example, if you have a staging environment that is hosted somewhere on the internet, it may not be your "production" environment. However, it still MUST NOT run in debug mode. At best, it will leak details about how your application was built, and at worst, it will make sensitive information available. Make sure to turn off debug mode in production.

Speaking of production, let's move on over to what it takes to deploy Sanic into the wild world of production environments.

Deploying to production

We have finally made it. After working your way through the application development process, there is finally a product to launch out into the ether of the World Wide Web (WWW). The obvious question then becomes: What are my options? There are really two questions that need to be answered, as follows:

  • First question: Which server should run Sanic?

There are three options: Sanic server, an ASGI server, or Gunicorn.

  • Second question: Where do you want to run the application?

Some typical choices include a bare-metal virtual machine (VM), a containerized image, a platform-as-a-service (PaaS), or a self-hosted or fully managed orchestrated container cluster. Perhaps these choices might make more sense if we put some of the commonly used product names to them, as follows:

Table 8.1 – Examples of common hosting providers and tools

Choosing the right server option

As we stated, there are three main ways to run Sanic: the built-in server, with an ASGI-compatible server, or with Gunicorn. Before we decide which server to run, we will take a brief look at the pros and cons of each option, starting with the least performant option.

Gunicorn

If you are coming to Sanic from the Web Server Gateway Interface (WSGI) world, you may already be familiar with Gunicorn. Indeed, you may even be surprised to learn that Sanic can be run with Gunicorn since it is built for WSGI applications, not asynchronous applications such as Sanic. Because of this, the biggest downside to running Sanic with Gunicorn is the substantial decrease in performance. Gunicorn effectively unravels much of the work done to leverage concurrency with the asyncio module. It is by far the slowest way to run Sanic, and in most use cases is not recommended.

It still could be a good choice in certain circumstances. Particularly, if you need a feature-rich set of configuration options and cannot use something such as Nginx, then this might be a reasonable approach. Gunicorn has a tremendous number of options that can be leveraged for fine-tuning server operation. In my experience, however, I typically see people reaching for it out of habit and not out of necessity. People will use it simply because it is what they know. People transitioning to Sanic from the Flask/Django world may be used to a particular deployment pattern that was centered on tools such as Supervisor and Gunicorn. That's fine, but it is a little old-fashioned and should not be the go-to pattern for Sanic deployments.

For those people, I urge you to look at another option. You are building with a new framework, so why not deploy it with a new strategy as well?

If, however, you do find yourself needing some of the more fine-tuned controls offered by Gunicorn, I would recommend you take a look at Nginx, which has an equally (if not more) impressive set of features. Whereas Gunicorn would be set up to actually run Sanic, the Nginx implementation would rely upon Sanic running via one of the other two strategies and placing an Nginx proxy in front of it (more on Nginx proxying later in this chapter). This option will allow you to retain a great deal of server control without sacrificing performance. It does, however, require some more complexity since you need to essentially run two servers instead of just one.

If, in the end, you still decide to use Gunicorn, then the best way to do so is to use Uvicorn's worker shim. Uvicorn is an ASGI server, which we will learn more about in the next section. In this context, however, it also ships with a worker class that allows Gunicorn to integrate with it. This effectively puts Sanic into ASGI mode. Gunicorn still runs as the web server, but it will pass traffic off to Uvicorn, which will then reach into Sanic as if it were an ASGI application. This will retain much of the performance offered by Sanic and asynchronous programming (although still not as performant as the Sanic server by itself). You can accomplish this as shown next:

  1. First, make sure both Gunicorn and Uvicorn are installed by executing the following command:

    $ pip install gunicorn uvicorn

  2. Next, run the application like this:

    $ gunicorn \
        --bind 127.0.0.1:7777 \
        --worker-class=uvicorn.workers.UvicornWorker \
        path.to:app

You should now have the full span of Gunicorn configurations at your fingertips.

ASGI server

We visited ASGI briefly in Chapter 1, Introduction to Sanic and Async Frameworks. ASGI is a design specification for how servers and frameworks can communicate with each other asynchronously. It was developed as a replacement methodology for the older WSGI standard that is incompatible with modern asynchronous Python practices. This standard has given rise to three popular ASGI web servers: Uvicorn, Hypercorn, and Daphne. All three of them follow the ASGI protocol and can therefore run any framework that adheres to that protocol. The goal, therefore, is to create a common language that allows one of these ASGI servers to run any ASGI framework.

And this is where, to discuss Sanic with regard to ASGI, we must have a clear distinction in our minds between the server and the framework. Chapter 1, Introduction to Sanic and Async Frameworks, discussed this difference in detail. As a quick refresher, the web server is the part of the application that is responsible for connecting to the OS's socket protocol and handling the translation of bytes into usable web requests. The framework takes the digested web requests and provides the application developer with the tools needed to respond and construct an appropriate HTTP response. The server then takes that response and sends the bytes back to the OS for delivery back to the client.

Sanic handles this whole process, and when it does so, it operates outside of ASGI since that interface is not needed. However, it also has the ability to speak the language of an ASGI framework and thus can be used with any ASGI web server.
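
For example, assuming Uvicorn is installed and your application instance is importable at path.to.server:app, running Sanic in ASGI mode is a one-line command:

$ uvicorn path.to.server:app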

One of the benefits of running Sanic as an ASGI application is that it standardizes the runtime environment with a broader set of Python tools. There is, for example, a set of ASGI middleware that could be implemented to add a layer of functionality between the server and the application.

However, some of the standardization does come at the expense of performance.

Sanic server

The default mechanism is to run Sanic with its built-in web server. It should come as no surprise that it is built with performance in mind. Therefore, what the Sanic server gives up by forfeiting the standardization and interoperability of ASGI, it makes up for in its ability to optimize itself as a single-purpose server.

We have touched on some of the potential downsides of using the Sanic server, one of which was static content. No Python server will be able to match the performance of Nginx in handling static content. If you are already using Nginx as a proxy for Sanic and you have a known location of static assets, then it might make sense to also use it for those assets. However, if you are not using it, then you need to determine whether the performance difference warrants the additional operational expense. In my opinion, if you can easily add this to your Nginx configuration: great. However, if it would take a lot of complicated effort, or you are exposing Sanic directly, then the benefit might not be as great as just leaving it as is and serving that content from Sanic. Sometimes, for example, the easiest thing to do is to run your entire frontend and backend from a single server. This is certainly a case where I would suggest learning about the competing interests and making an appropriate decision instead of trying to make a perfect decision.

With this knowledge, you should now be able to decide which server is the right fit for your needs. We will assume for the remainder of this book that we are still deploying with the Sanic server, but since it is mainly a matter of changing the command-line executable, this should not make a difference.

How to choose a deployment strategy?

The last section laid out three potential web servers to use for Sanic applications, but that web server needs to run on a web host. Before deciding on which web-hosting company to use, however, there is still a very important missing component: how are you going to get your code from your local machine to the web host? In other words: how are you going to deploy your application? We will now look through some options for deploying Sanic applications.

There is some assumed knowledge, so if some of the technologies or terms here are unfamiliar, please feel free to stop and go look them up.

VM

This is perhaps the easiest option—well, the easiest besides a PaaS. Setting up a VM is super simple these days. With just a few clicks of a button, you can have a custom configuration for a VM. The reason this then becomes a simple option is that you just need to run your Sanic application the same way you might on your local machine. This is particularly appealing when using the Sanic server since it literally means that you can run Sanic in production with the same commands that you use locally. However, getting your code to the VM, maintaining it once it is there, and then ultimately scaling it will make this option the hardest. To be blunt, I would almost never recommend this solution. It is appealing to beginners since it looks so simple from the outside, but looks can be deceiving.

There may in fact be times when this is an appropriate solution. If that is the case, then what would deployment look like? Really, not that much different than running it locally. You run the server and bind it to an address and port. With the proliferation of cloud computing, service providers (SPs) have made it a trivial experience to stand up a VM. I personally find platforms such as DigitalOcean and Linode to be super user-friendly and excellent choices. Other obvious choices include Amazon Web Services (AWS), Google Cloud, and Microsoft Azure. In my opinion, however, they are a little less friendly to someone new to cloud computing. Armed with their good documentation, it is relatively inexpensive and painless to click a few buttons on DigitalOcean or Linode and get an instance running. Once they provide you with an Internet Protocol (IP) address, it is now your responsibility to get your code to the machine and run the application.

You might be thinking the simplest way to move your code to the server would be to use Git. Then, all you need to do is launch the application, and you are done. But what happens if you need more instances or redundancy? Yes—Sanic comes with the ability to spin up multiple worker processes, but what if that is not enough? Now, you need another VM and some way to manage load-balancing your incoming web traffic between them. How are you going to handle redeployments of bug patches or new features? What about changes to environment variables? These complexities could lead to a lot of sleepless nights if you are not careful.

This also somewhat ignores the fact that not all environments are equal. VMs could be built with different dependencies, leading to wasted time maintaining servers and packages.

That is not to say this cannot or should not be a solution. Indeed, it might be a great solution if you are creating a simple service for your own use. Perhaps you need a web server for connecting to a smart home network, but it is certainly a case of developer beware. Running a web server on a bare-metal VM is rarely as simple as it appears at first glance.

Containers with Docker

One solution to the previous set of problems is to use a Docker container. If you have used Docker before, you can probably skip to the next section because you already understand the power that it provides. If you are new to containers, then I highly recommend you learn about them.

In brief, you write a simple manifest called a Dockerfile. That manifest describes an intended OS and some instructions needed to build an ideal environment for running your application. An example manifest is available in the GitHub repository here: https://github.com/PacktPublishing/Python-Web-Development-with-Sanic/blob/main/Chapter08/k8s/Dockerfile.

This might include installing some dependencies (including Sanic), copying source code, and defining a command that will be used to run your application. With that in place, Docker then builds a single image with everything needed to run the application. That image can be uploaded to a repository and used to run irrespective of the environment. You could, for example, opt to use this instead of managing all those separate VM environments. It is much simpler to bundle all that together and simply run it.

There is still some complexity involved in building new versions and deciding where to run the image, but having consistent builds is a huge gain. This should really become a focal point of your deployment. So, although containers are part of the solution, there is still the problem of where to run the image and the maintenance costs required to keep it running and up to date.

I almost always would recommend using Docker as part of your deployment practices, and if you know about Docker Compose, you might be thinking that is a great choice for managing deployments. I would agree with you, so long as we are talking about deployments on your local machine. Using Docker Compose for production is not something I would usually consider. The reason is simple: horizontal scaling. Just as when running Sanic directly on a VM, or as a single container on a VM, running Docker Compose on a single VM confines you to the resources of one machine. The fix is orchestration.

Container orchestration with Kubernetes

The problem with containers is that they only solve environmental problems by creating a consistent and repeatable strategy for your application—they still suffer from scalability problems. What happens when your application needs to scale past the resources that are available on a single machine? Container orchestrators such as Kubernetes (aka K8s) are a dream come true for anyone that has done development-operations (DevOps) work in the past. By creating a set of manifests, you will describe to Kubernetes what your ideal application will look like: the number of replicas, the number of resources they need, how traffic should be exposed, and so on. That is it! All you need to do is describe your application with some YAML Ain't Markup Language (YAML) files. Kubernetes will handle the rest. It has the added benefit of enabling rolling deployments where you can roll out new code with zero downtime for your application. It sounds like a dream come true for application deployments.

The downside, of course, is that this option is the most complex to set up. It is suitable for more serious applications where the level of complexity is acceptable. It may, however, be overkill for a lot of projects. This is a go-to deployment strategy for any application that will have more than a trivial amount of traffic. Of course, the complexity and scale of a Kubernetes cluster can expand based upon its needs. This dynamic quality is what makes it increasingly a standard deployment strategy that has been adopted by many industry professionals.

It is an ideal solution for platforms that consist of multiple services working together or that require scaling beyond the boundaries of a single machine.

This does bring up an interesting question, however. We know that Sanic has the ability to scale horizontally on a single host by replicating its workers in multiple processes. Kubernetes is capable of scaling horizontally by spinning up replica pods. You can think of a pod as encapsulating a container. Usually—especially to start—you will run Kubernetes with one container per pod. Let's say you hypothetically have decided that you need four instances of your application to handle the projected load that your application will receive. Should you have two pods each running two workers, or four pods each with one worker?

I have heard both put forth as ideal solutions. Some people say that you should maximize the resources per container. Other people say that you should have no more than one process per container. From a performance perspective, it is a dead heat. In all of my testing and experience, the solutions effectively perform the same. Therefore, it comes down entirely to the choice of the application builder. There is no right or wrong answer.

Later in this chapter, we will take a closer look at what it takes to launch a Sanic application with Kubernetes.

PaaS

Heroku is probably one of the most well-known PaaS offerings. It has been around for a while and has become an industry leader in these low-touch deployment strategies. Heroku is not the only provider—both Google and AWS have PaaS services in their respective cloud platforms, and DigitalOcean has also launched its own competing service. What makes a PaaS super convenient is that all you need to do is write the code. There is no container management, environment handling, or deployment struggles. It is intended to be a super easy low-touch solution for deploying code. Usually, deploying an application is as simple as pushing code to a Git repository.

This simple option is, therefore, ideal for proof-of-concept (POC) applications or other builds you need to deploy super quickly. I also do know plenty of people that run more robust and scalable applications through these services, and they really can be a great alternative. The huge selling point of these services is that by outsourcing the deployment, scaling, and service maintenance to the SP, you are freed up to focus on the application logic.

Because of this simplicity, and—ultimately—flexibility, we will take a closer look at launching Sanic with a PaaS vendor later in this chapter in the Deployment examples section. One of the things that is great about a PaaS is that it handles a lot of details such as setting up a TLS certificate and enabling an https:// address for your application. In the next section, however, we will learn what it takes to set up an https:// address for your application in the absence of the conveniences of a PaaS.

Securing your application with TLS

If you are not encrypting traffic to your web application, you are doing something wrong. In order to protect information while it is in transit between the web browser and your application, it is an absolute necessity to add encryption. The international standard for doing that is known as TLS, which is a protocol for how data can be encrypted between two sources. Often, however, it will be referred to as SSL (which stands for Secure Sockets Layer and is an earlier protocol that TLS replaces) or HTTPS (which stands for HTTP Secure and is technically an implementation of TLS, not TLS itself). Since it is not important for us how it works and we only care that it does what it needs to do, we will use these terms somewhat interchangeably. Therefore, it is safe for you to think about TLS and HTTPS as the same thing.

So, what is it? The simple answer is that you request a pair of keys from some reputable source on the internet. Your next step is to make them available to your web server and expose your application over a secure port—typically, that is port 443. After that, your web server should handle the rest, and you should now be able to access your application with an https:// address instead of http://.

Setting up TLS in Sanic

There are two common scenarios you should be familiar with: exposing your Sanic application directly, or placing Sanic behind a proxy. This will determine where you want to terminate your TLS connection. This simply means where you should set up your public-facing certificates. We will assume for now that Sanic is exposed directly. We will also assume that you already have certificates. If you do not know how to obtain them, don't worry—we will get to a potential solution for you in the next section.

All we need to do is to tell the Sanic server how to access those certificates. Also, since Sanic will default to port 8000, we need to make sure to set it to 443. With this in mind, these are the steps we'll take:

  1. Our new runtime command (in production) will be this:

    $ sanic \
        --host=0.0.0.0 \
        --port=443 \
        --cert=/path/to/cert \
        --key=/path/to/keyfile \
        --workers=4 \
        path.to.server:app

  2. It is largely the same operation if you are using app.run instead, as illustrated in the following code snippet:

    ssl = {"cert": "/path/to/cert", "key": "/path/to/keyfile"}

    app.run(host="0.0.0.0", port=443, ssl=ssl, workers=4)

When you are exposing your Sanic application directly and therefore terminating your TLS with Sanic, there is often a desire to add HTTP to HTTPS redirect. For your users' convenience, you probably want them to always be directed to HTTPS and for this redirection to happen magically for them without having to think about it.

The Sanic user guide provides us with a simple solution that involves running a second Sanic application inside our main application. Its only purpose will be to bind to port 80 (which is the default HTTP non-encrypted port) and redirect all traffic. Let's quickly examine that solution and step through it, as follows:

  1. First, in addition to our main application, we need a second that will be responsible for the redirects. So, we will set up two applications and some configuration details, as follows:

    main_app = Sanic("MyApp")

    http_app = Sanic("MyHTTPProxy")

    main_app.config.SERVER_NAME = "example.com"

    http_app.config.SERVER_NAME = "example.com"

  2. We add only one endpoint to the http_app application that will be responsible for redirecting all traffic to the main_app application, as follows:

    @http_app.get("/<path:path>")
    def proxy(request, path):
        url = request.app.url_for(
            "proxy",
            path=path,
            _server=main_app.config.SERVER_NAME,
            _external=True,
            _scheme="https",
        )
        return response.redirect(url)

In Chapter 10, Implementing Common Use Cases with Sanic, there is a more complete working example of how to accomplish HTTP to HTTPS redirection: https://github.com/PacktPublishing/Python-Web-Development-with-Sanic/tree/main/Chapter10/httpredirect

  3. To make running the HTTP redirect application easier, we will just piggyback off of the main application's life cycle so that there is no need to create another executable. Therefore, when the main application starts up, it will also create and bind the HTTP application. The code is illustrated in the following snippet:

    @main_app.before_server_start
    async def start(app, _):
        app.ctx.http_server = await http_app.create_server(
            port=80, return_asyncio_server=True
        )
        app.ctx.http_server.app.finalize()

You should note how we are assigning that server to the ctx object for our main application so that we can use it again.

  4. Finally, when the main application shuts down, it will also be responsible for shutting down the HTTP application, as illustrated in the following code snippet:

    @main_app.before_server_stop
    async def stop(app, _):
        await app.ctx.http_server.close()

With this in place, any request to http://example.com should be automatically redirected to the https:// version of the same page.
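
A quick way to sanity-check the redirect (the domain and path are, of course, illustrative) is to request the plain HTTP address and confirm that the response is a redirect whose Location header points at the https:// URL:

$ curl -i http://example.com/some/path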

Back in Step 1 and Step 2, this example sort of skipped over the fact that you need to obtain actual certificate files to be used to encrypt your web traffic. This is largely because you need to bring your own certificates to the table. If you are not familiar with how to do that, the next section provides a potential solution.

Getting and renewing a certificate from Let's Encrypt

Back in the olden days of the internet, if you wanted to add HTTPS protection to your web application, it was going to cost you. Certificates were not cheap, and they were somewhat cumbersome and complicated to manage. Actually, certificates are still not cheap if you are to buy one yourself, especially if you want to buy a certificate that covers your subdomains. However, this is no longer your only option since several players came together looking for a method to create a safer online experience. The solution: free TLS certificates. These free (and reputable) certificates are available from Let's Encrypt and are the reason that every production website should be encrypted. Expense is no longer an excuse. At this point in time, if I see a website still running http:// in a live environment, a part of me cringes as I go running for the hills.

If you do not currently have a TLS certificate for your application, head over to https://letsencrypt.org to get one. The process to obtain a certificate from Let's Encrypt requires you to follow some basic steps and then prove that you own the domain. Because there are a lot of platform specifics and it is outside the scope of this book, we will not really dive into the details of how to obtain one. Later on, this chapter does go through a step-by-step process to obtain a Let's Encrypt certificate for use in a Kubernetes deployment in the Kubernetes (as-a-service) section.

I do, however, highly encourage you to use Let's Encrypt if the budget for your project does not allow for you to go out and purchase a certificate.

With a certificate in hand, it is finally time to look at some actual code and decide which deployment strategy is right for your project.

Deployment examples

Earlier, when discussing the various choices for deployment strategies, two options rose above the others: PaaS and Kubernetes. When deploying Sanic into production, I would almost always recommend one of these solutions. There is no hard and fast rule here, but I generally think of Kubernetes as being the go-to solution for platforms that will be running multiple services, have the need for more controlled deployment configurations, and have more resources and a team of developers. On the other hand, a PaaS is more appropriate for single developer projects or projects that do not have resources to devote to maintaining a richer deployment pipeline. We will now explore what it takes to get Sanic running in these two environments.

PaaS

As we stated before, Heroku is a well-known industry leader in deploying applications via PaaS. This is for good reason as it has been in business providing these services since 2007 and has played a critical role in popularizing the concept. It has made the process super simple for both new and experienced developers. However, in this section, we are going to instead take a look at deploying a Sanic application with DigitalOcean's PaaS offering. The steps should be nearly identical and applicable to Heroku or any of the other services that are out there, and we look at them here:

  1. First, you need to—of course—go to DigitalOcean's website and sign up for an account if you do not have one. DigitalOcean's PaaS is called Apps, which you can find on the left-hand side of the main dashboard once you are logged in.
  2. You will next be taken through a series of steps that will ask you to connect a Git repository.
  3. You will next need to configure the app through their user interface (UI). Your screen will probably look something like this:

Figure 8.1 – Example settings for PaaS setup

A very important thing to note here is that we have set --host=0.0.0.0. This means that we are telling Sanic that it should bind itself to any IP address that DigitalOcean provides it. Sanic will bind itself to the 127.0.0.1 address without this configuration. As anyone who has done web development knows, the 127.0.0.1 address maps to localhost on most computers. This means that Sanic will be accessible only to web traffic on that specific computer. This is no good. If you ever deploy an application and cannot access it, one of the first things to check is that the port and host are set up properly. One of the easiest options is to just use 0.0.0.0, which is the equivalent of a wildcard IP address.

  4. Next, you will be asked to select a location for which data center it will live in. Usually, you want to pick one that will be close to where your intended audience will be to reduce latency.
  5. You will then need to select an appropriate package. If you do not know what to choose, start small and then scale it up as needed.
  6. The only thing left to do is to set up the files in our repository. There is a sample in GitHub for you to follow, at https://github.com/PacktPublishing/Python-Web-Development-with-Sanic/tree/main/Chapter08/paas.
  7. Finally, we need a requirements.txt file that lists out our dependencies (Sanic) and a server.py file, just as with every other build we have done so far.

Once that is done, every time you push to the repository, your application should be rebuilt and available to you. One of the nice benefits of this is that you will get a TLS certificate with HTTPS out of the box. No configuration is needed.

Seems simple enough? Let's look at a more complex setup with Kubernetes.

Kubernetes (as-a-service)

We are going to turn our attention to Kubernetes: one of the most widely adopted and utilized platforms for orchestrating the deployment of containers. You could, of course, spin up some VMs, install Kubernetes on them, and manage your own cluster. However, I find a much more worthwhile solution is to just take one of the Kubernetes-as-a-service solutions. You still have all of the power of Kubernetes but none of the maintenance headaches. Most of the major cloud providers offer Kubernetes as a service, so you should be able to use your provider of choice.

We will again look at DigitalOcean and use their Kubernetes platform for our example. Here are the steps:

  1. In our local directory, we will need a few files, as follows:
    • Dockerfile to describe our Docker container
    • app.yml, a Kubernetes config file described next
    • ingress.yml, a Kubernetes config file described next
    • load-balancer.yml, a Kubernetes config file described next
    • server.py, which is again a Sanic application

You can follow along with the files in the GitHub repository at https://github.com/PacktPublishing/Python-Web-Development-with-Sanic/tree/main/Chapter08/k8s.

  2. Our Dockerfile is the set of instructions to build our container. We will take a shortcut and use one of the Sanic community's base images that has both Python and Sanic pre-installed, as follows:

    FROM sanicframework/sanic:3.9-latest
    COPY . /srv
    WORKDIR /srv
    EXPOSE 7777
    ENTRYPOINT ["sanic", "server:app", "--port=7777", "--host=0.0.0.0"]

Just as we saw with the PaaS solution, we are binding to host 0.0.0.0 for the same reason. We are not adding multiple workers per container here. Again, this is something you could do if you prefer.
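
If you did prefer multiple workers per container, a variant of that last Dockerfile line might look like this, using Sanic's --workers flag. Whether you scale with workers inside a container or with more container replicas is a design choice; with an orchestrator, more replicas is often the simpler option:

    ENTRYPOINT ["sanic", "server:app", "--port=7777", "--host=0.0.0.0", "--workers=2"]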

  3. Next, we will need to build an image, as follows:

    $ docker build -t admhpkns/my-sanic-example-app .

  4. Let's try running it locally to make sure it works. Here's the command to do this:

    $ docker run -p 7000:7777 --name=myapp admhpkns/my-sanic-example-app

Once it is running, you should be able to access the API at http://localhost:7000.
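
A quick smoke test from another terminal should confirm it, as follows:

    $ curl http://localhost:7000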

  5. Don't forget to clean up your environment, and remove the container when you are done (stop it first with Ctrl + C if it is still running in the foreground), like this:

    $ docker rm myapp

  6. And you will, of course, need to push your container to some accessible repository. For ease of use and demonstration purposes, I will be pushing it to my public Docker Hub repository, like this:

    $ docker push admhpkns/my-sanic-example-app:latest

If you are not familiar with Docker repositories, they are cloud-hosted locations for storing container images. Docker Hub is a great resource that provides a free tier. Other popular locations include GitLab, Google, and AWS.

  7. For this next part, we will interact with DigitalOcean through their CLI tool. If you do not have it installed, head to https://docs.digitalocean.com/reference/doctl/how-to/install/. You will want to make sure you log in by running the following command:

    $ doctl auth init

  8. We next need a DigitalOcean Kubernetes cluster. Log in to their web portal, click on Kubernetes on the main dashboard, and set up a cluster. For now, the default settings are fine.
  9. Then, we need to enable kubectl (the tool for interacting with Kubernetes) to talk to our DigitalOcean Kubernetes cluster. If kubectl is not installed, check out the instructions at https://kubernetes.io/docs/reference/kubectl/overview/. The command you need will look something like this:

    $ doctl kubernetes cluster kubeconfig save afb87d0b-9bbb-43c6-a711-638bc4930f7a

Once your cluster is available and kubectl is set up, you can verify it is running by checking the following:

$ kubectl get pods

Since we have not set up any pods yet, there should not be anything to see.

  10. When configuring Kubernetes, we need to start by running kubectl apply on our app.yml file.

    Tip

    Before going any further, you will see a lot of online tutorials that use this style of command:

    $ kubectl create ...

    I generally try to avoid that in favor of this:

    $ kubectl apply ...

    They essentially do the same thing, but the convenience of apply is that Kubernetes resources created with it can be continually modified by "applying" the same manifest over and over again.
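
    To see the difference in practice, try running each command twice against the same manifest:

    # The second create is rejected because the resources already exist
    $ kubectl create -f app.yml
    $ kubectl create -f app.yml

    # apply is idempotent, so re-running it is safe
    $ kubectl apply -f app.yml
    $ kubectl apply -f app.yml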

What is in app.yml? Check out the GitHub repository for the full version. It is rather lengthy and includes some boilerplate that is not relevant to the current discussion, so I will show only relevant snippets here. This goes for all of the Kubernetes manifests in our example.

The file should contain the Kubernetes primitives needed to run the application: a service and a deployment. A service is a stability layer on top of pods. Because Kubernetes pods can be easily created and destroyed, services exist to provide a consistent internal IP address that points to those pods. A deployment is an abstraction that defines how pods are to be created, which containers they should contain, how many there should be, and so on.

The service should look something like this:

spec:
  ports:
    - port: 80
      targetPort: 7777
  selector:
    app: ch08-k8s-app

Notice how we are mapping port 80 on the service to port 7777 on the container. This is because we will be terminating TLS in front of Sanic, and our ingress controller will talk to Sanic over unencrypted HTTP. Because it is all in a single cluster, this is acceptable. If your needs are more sensitive, you should look into encrypting that connection as well.

The other thing in app.yml is the deployment, which should look something like this:

spec:
  selector:
    matchLabels:
      app: ch08-k8s-app
  replicas: 4
  template:
    metadata:
      labels:
        app: ch08-k8s-app
    spec:
      containers:
        - name: ch08-k8s-app
          image: admhpkns/my-sanic-example-app:latest
          ports:
            - containerPort: 7777

Here, we are defining the number of replicas we want, as well as pointing the container to our Docker image repository.
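
One nice consequence of declaring replicas in the manifest is that scaling later is trivial: either edit the number and re-apply the file, or use a one-off command such as this (the deployment name matches the manifest above):

    $ kubectl scale deployment ch08-k8s-app --replicas=8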

  11. After creating that file, we will apply it, and you should see a result similar to this:

    $ kubectl apply -f app.yml
    service/ch08-k8s-app created
    deployment.apps/ch08-k8s-app created

You can now check that it worked, as follows:

$ kubectl get pods
$ kubectl get svc

  12. We will next use an off-the-shelf solution to create an Nginx ingress. This will be the proxy layer that terminates our TLS and feeds HTTP requests into Sanic. We will install it as follows:

    $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/do/deploy.yaml

Note that, at the time of writing, v1.0.0 is the latest version. That probably won't be true by the time you are reading this, so you may need to change it. You can find the latest version on their GitHub page at https://github.com/kubernetes/ingress-nginx.

  13. Next, we will set up our ingress. Create an ingress.yml file following the pattern in our GitHub repository example (a stripped-down sketch appears just after the next verification command), and apply it like this:

    $ kubectl apply -f ingress.yml

You will notice there are intentionally some lines commented out. We will get to those in a minute. Let's quickly verify that it worked by executing the following command:

$ kubectl get pods -n ingress-nginx
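
For orientation, a stripped-down ingress.yml for this setup might look something like the following. This is a sketch of the common ingress-nginx plus cert-manager pattern rather than the repository file verbatim; the resource name is a placeholder, and the commented lines are the kind you will uncomment later to enable TLS:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ch08-k8s-ingress
      # annotations:
      #   cert-manager.io/cluster-issuer: letsencrypt-prod
    spec:
      ingressClassName: nginx
      # tls:
      #   - hosts:
      #       - example.com
      #     secretName: ch08-k8s-tls
      rules:
        - host: example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: ch08-k8s-app
                    port:
                      number: 80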

  14. We should take a step back and jump over to the DigitalOcean dashboard. On the left is a tab called Networking. Go there, and then in the Domains tab, follow the procedure to add your own domain. In our example ingress.yml, we added example.com as the ingress domain. Whichever domain you add to DigitalOcean's portal should match your ingress. If you need to go back to update and re-apply the ingress.yml file with your domain, do that now.
  15. Once that is all configured, we should be able to see our application working, as in the following example:

    $ curl http://example.com
    Hello from 141.226.169.179

This is, of course, not ideal because it is still on http://. We will now get a Let's Encrypt certificate and set up TLS.

  16. The easiest method for this is to set up a tool called cert-manager. It will do all of the interfacing we need with Let's Encrypt. Start by installing it, as follows:

    $ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.5.3/cert-manager.yaml

Again, please check to see what the most up-to-date version is and update this command accordingly.

We can verify its installation here:

$ kubectl get pods --namespace cert-manager
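
cert-manager also needs to know how to talk to Let's Encrypt, which is done with an issuer resource. The repository example takes care of this; for reference, a minimal ClusterIssuer sketch looks something like the following (the name and email are placeholders you would replace with your own):

    apiVersion: cert-manager.io/v1
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        # Let's Encrypt production ACME endpoint
        server: https://acme-v02.api.letsencrypt.org/directory
        email: you@example.com
        privateKeySecretRef:
          name: letsencrypt-prod
        solvers:
          - http01:
              ingress:
                class: nginx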

  17. Next, create a load-balancer.yml file following the example in the GitHub repository. It should look something like this:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        service.beta.kubernetes.io/do-loadbalancer-hostname: example.com
      name: ingress-nginx-controller
      namespace: ingress-nginx
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local
      ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: http
        - name: https
          port: 443
          protocol: TCP
          targetPort: https
      selector:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller

  18. Apply that manifest and confirm that it worked, as follows:

    $ kubectl apply -f load-balancer.yml
    service/ingress-nginx-controller configured

  19. Your Kubernetes cluster will now start the process of obtaining a certificate.

    Tip

    One thing that you might encounter is that the process gets stuck while requesting the certificate. If this happens to you, the solution is to turn on Proxy Protocol in your DigitalOcean dashboard. Go to the following setting and turn this on if you need to:

    Networking > Load Balancer > Manage Settings > Proxy Protocol > Enabled

  20. We're almost there! Open up that ingress.yml file and uncomment those few lines that were previously commented out. Then, apply the file, as follows:

    $ kubectl apply -f ingress.yml

Done! You should now automatically have a redirect from http:// to https://, and your application is fully protected.
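
You can verify both the redirect and the certificate from the command line, like this (the -I flag asks curl for response headers only; you should see a 3xx permanent redirect pointing at the https:// address):

    $ curl -I http://example.com
    $ curl https://example.com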

Better yet, you now have a deployable Sanic application with all the benefits, flexibility, and scalability that Kubernetes container orchestration provides.

Summary

Building a great Sanic application is only half of the job. Deploying it to make our application usable out in the wild is the other half. In this chapter, we explored some important concepts for you to consider. It is never too early to think about deployment either. The sooner you know which server you will use and where you will host your application, the sooner you can plan accordingly.

There are of course many combinations of deployment options, and I only provided you with a small sample. As always, you will need to learn what works for your project and team. Take what you have learned here and adapt it.

However, if you were to ask me to boil all of this information down and ask for my personal advice on how to deploy Sanic, I would tell you this:

  • Run your applications using the built-in Sanic server.
  • Terminate TLS outside of your application.
  • For personal or smaller projects, or if you want a simpler deployment option, use a PaaS provider.
  • For larger projects that need to scale and have more developer resources, use a hosted Kubernetes solution.

There you have it. You should now be able to build a Sanic application and run it on the internet. Our work is done, right? You now have the skills and knowledge you need to go out and build something great, so go ahead and do that. In the remainder of this book, we will turn to some more practical issues that arise while building web applications and look at some best-practice strategies for solving them.
