Going Deeper: The Request Pipeline

When we created the hello project, Mix created a bunch of directories and files. It’s time to take a more detailed look at what all of those files do and, by extension, how Phoenix helps you organize applications.

When you think about it, typical web applications are just big functions. Each web request is a function call taking a single formatted string—the URL—as an argument. That function returns a response that’s nothing more than a formatted string. If you look at your application in this way, your goal is to understand how functions are composed to make the one big function call that handles each request. In some web frameworks, that task is easier said than done. Most frameworks have hidden functions that are only exposed to those with deep, intimate internal knowledge.
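To make that "one big function" idea concrete, here is a toy sketch in plain Elixir, with no Phoenix involved. The module and its routes are illustrative, and a plain string stands in for a real HTTP response:

```elixir
defmodule ToyApp do
  # A whole "web application" as one function: a formatted string (the URL
  # path) in, a formatted string (the response body) out. ToyApp and its
  # routes are illustrative; nothing here is Phoenix.
  def handle("/"), do: "<h1>Welcome</h1>"
  def handle("/hello"), do: "<h1>Hello, World!</h1>"
  def handle(_unknown_path), do: "<h1>Not Found</h1>"
end

IO.puts(ToyApp.handle("/hello"))
```

Pattern matching on the path plays the role the router will play later in this chapter.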

The Phoenix experience is different because it encourages breaking big functions down into smaller ones. Then, it provides a place to explicitly register each smaller function in a way that’s easy to understand and replace. We’ll tie all of these functions together with the Plug library.

Think of the Plug library as a specification for building applications that connect to the web. Each plug consumes and produces a common data structure called Plug.Conn. Remember, that struct represents the whole universe for a given request, because it has things that web applications need: the inbound request, the protocol, the parsed parameters, and so on.

Think of each individual plug as a function that takes a conn, does something small, and returns a slightly changed conn. The web server provides the initial data for our request, and then Phoenix calls one plug after another. Each plug can transform the conn in some small way until you eventually send a response back to the user.
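You can sketch that flow in plain Elixir, with an ordinary map standing in for the %Plug.Conn{} struct. All of the function names and keys here are illustrative, not the real plugs:

```elixir
defmodule PlugSketch do
  # Each "plug" takes a conn, does something small, and returns a
  # slightly changed conn. A plain map stands in for %Plug.Conn{}.
  def assign_request_id(conn), do: Map.put(conn, :request_id, "req-123")

  def parse_params(conn), do: Map.put(conn, :params, %{"name" => "world"})

  def render_response(conn),
    do: Map.merge(conn, %{status: 200, resp_body: "Hello, world!"})
end

final =
  %{path: "/hello"}
  |> PlugSketch.assign_request_id()
  |> PlugSketch.parse_params()
  |> PlugSketch.render_response()

IO.inspect(final)
```

Each step receives the whole accumulated connection, so any plug can see what the plugs before it did.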

Even responses are just transformations on the connection. When you hear words like request and response, you might be tempted to think that a request is a plug function call, and a response is the return value. That’s not what happens. A response is just one more action on the connection, like this:

  conn
  |> ...
  |> render_response()

The whole Phoenix framework is made up of organizing functions that do something small to connections, even rendering the result. Said another way…

Plugs are functions.

Your web applications are pipelines of plugs.

Phoenix File Structure

If web applications in Phoenix are functions, the next logical step is to learn where to find those individual functions and how they fit together to build a coherent application. Let’s work through the project directory structure, focusing only on the most important directories for now. Here’s what your directories look like:

 ...
 ├── assets
 ├── config
 ├── lib
 │   ├── hello
 │   └── hello_web
 ├── test
 ...

Browser files like JavaScript and CSS go into assets and the Phoenix configuration goes into config. Your supervision trees (we’ll explore those more in chapters to come), long-running processes, and application business logic go into lib/hello. Your web-related code—including controllers, views, and templates—goes in lib/hello_web. Predictably, you’ll put tests in test.

In this section, you will walk through each of these pieces, including the ones you created and many others that Phoenix generated. To sleuth out the entire pipeline of functions for a full web request, you need to start at the beginning: the basic Elixir project structure that Phoenix builds on.

Elixir Configuration

Since Phoenix projects are Elixir applications, they have the same structure as other Mix projects. Let’s look at the basic files in the project:

 ...
 ├── lib
 │   ├── hello
 │   ├── hello_web
 │   │   ├── endpoint.ex
 │   │   └── ...
 │   ├── hello.ex
 │   └── hello_web.ex
 ├── mix.exs
 ├── mix.lock
 ├── test
 ...

We’ve already encountered the .ex files. These contain Elixir code which you’ll compile to the .beam files that run on the Erlang virtual machine. The .exs files are Elixir scripts. They’re not compiled to .beam files. The compilation happens in memory, each time they are run. They’re excellent for quick-changing scripts or standalone development-time tasks.

The project we created is a Mix project, named after the build tool that nearly all Elixir projects use. All Mix projects have a common structure. Each project has a configuration file, mix.exs, containing basic information about the project that supports tasks like compiling files, starting the server, and managing dependencies. When we add dependencies to our project, we’ll need to make sure they show up here. Also, after we compile the project, mix.lock will include the specific versions of the libraries we depend on, so we guarantee that our production machines use exactly the same versions that we used during development and in our build servers.

Each Mix project also has a lib directory. Support for starting, stopping, and supervising each application is in lib/hello/application.ex.

Also, each Mix project has a test directory that hosts all tests. Phoenix adds some files to this test structure to support testing-specific files like controllers and views. We have not yet written any tests, but when we do, they will live in test.

Environments and Endpoints

Your application will run in an environment. The environment contains specific configuration that your web application needs. You can find that configuration in config:

 ...
 ├── config
 │ ├── config.exs
 │ ├── dev.exs
 │ ├── prod.exs
 │ ├── prod.secret.exs
 │ └── test.exs
 ...

Phoenix supports a master configuration file plus an additional file for each environment you plan to run in. The environments supported by default are development (dev.exs), test (test.exs), and production (prod.exs), but you can add any others that you want.

You can see the three environment files, the master config.exs file containing application-wide configuration concerns, and a file called prod.secret.exs, which is responsible for loading secrets and other configuration values from environment variables. Those environment variables are usually populated by deployment tasks.

You switch among the prod, dev, and test environments via the MIX_ENV environment variable. We’ll spend most of our time in this book in dev and test. That’ll be easy, because Mix tasks run in dev by default, and Mix shifts to test when you run your automated tests with mix test.

The master configuration file, config/config.exs, initially contains information about logging and endpoints. Remember when we said that your web applications were just functions? An endpoint is the boundary where the web server hands off the connection to our application code. You’ll see that config/config.exs configures a single endpoint called HelloWeb.Endpoint. Open config/config.exs in your editor:

 use Mix.Config

 # Configures the endpoint
 config :hello, HelloWeb.Endpoint,
   url: [host: "localhost"],
   secret_key_base: "U8VmJ...hNnTsFFvrhmD",
   render_errors: [view: HelloWeb.ErrorView, accepts: ~w(html json)],
   pubsub: [name: Hello.PubSub,
            adapter: Phoenix.PubSub.PG2]

Even though you might not understand this entire block of code, you can see that this code has our endpoint, which is the beginning of our world. The config function call configures the HelloWeb.Endpoint endpoint in our :hello application, passing a keyword list of configuration options. Let’s look at that endpoint, which we find in lib/hello_web/endpoint.ex:

 defmodule HelloWeb.Endpoint do
   use Phoenix.Endpoint, otp_app: :hello

   plug Plug.Static, ...
   plug Plug.RequestId
   plug Plug.Telemetry, ...

   plug Plug.Parsers, ...
   plug Plug.MethodOverride
   plug Plug.Head

   plug Plug.Session, ...
   plug HelloWeb.Router
 end

You can see that this chain of functions, or plugs, does the typical things that almost all production web servers need to do: deal with static content, log requests, parse parameters, and the like. Remember, you already know how to read this code. It’ll translate to a pipeline of functions, like this:

  connection
  |> Plug.Static.call()
  |> Plug.RequestId.call()
  |> Plug.Telemetry.call()
  |> Plug.Parsers.call()
  |> Plug.MethodOverride.call()
  |> Plug.Head.call()
  |> Plug.Session.call()
  |> HelloWeb.Router.call()

That’s an oversimplification, but the basic premise is correct. Endpoints are the chain of functions at the beginning of each request.

Now you can get a better sense of what’s going on. Each request that comes in will be piped through this full list of functions. If you want to change the logging layer, you can change logging for all requests by specifying a different logging function here.
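Behind each of those `plug Module` lines is a small contract defined by the Plug library: a module plug implements init/1, which prepares its options, and call/2, which receives the connection and those options. Here is a minimal sketch of that contract, with a plain map standing in for %Plug.Conn{} and a hypothetical header-setting plug:

```elixir
defmodule PutHeader do
  # A module plug: init/1 runs once to prepare options, and call/2 runs
  # for every request, transforming the conn. PutHeader is a hypothetical
  # example, not part of the Plug library.
  def init(opts), do: Keyword.fetch!(opts, :header)

  def call(conn, {key, value}) do
    Map.update(conn, :resp_headers, [{key, value}], &[{key, value} | &1])
  end
end

opts = PutHeader.init(header: {"x-demo", "on"})
conn = PutHeader.call(%{resp_headers: []}, opts)
IO.inspect(conn.resp_headers)
```

When you write `plug PutHeader, header: {"x-demo", "on"}` in an endpoint or router, Phoenix arranges for exactly this init-then-call sequence.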

Summarizing what we have so far: an endpoint is a plug, one that’s made up of other plugs. Your application is a series of plugs, beginning with an endpoint and ending with a controller:

  connection
  |> endpoint()
  |> plug()
  |> plug()
  ...
  |> router()
  |> controller()

We know that the last plug in the endpoint is the router, and we know we can find that file in lib/hello_web/router.ex.

José says:
Can I Have More Than One Endpoint?

Although applications usually have a single endpoint, Phoenix doesn’t limit the number of endpoints your application can have. For example, you could have your main application endpoint running on port 80 (HTTP) and 443 (HTTPS), as well as a specific admin endpoint running on a special port—let’s say 8443 (HTTPS)—with specific characteristics and security constraints.

Alternatively, we could break those endpoints into separate applications but still run them side by side. You’ll explore this later on when learning about umbrella projects.

The Router Flow

Now that you know what plugs are, let’s take a fresh look at our router. Crack open lib/hello_web/router.ex. You can see that it’s made up of two parts: pipelines and a route table. Here’s the first part:

 defmodule HelloWeb.Router do
   use HelloWeb, :router

   pipeline :browser do
     plug :accepts, ["html"]
     plug :fetch_session
     plug :fetch_flash
     plug :protect_from_forgery
     plug :put_secure_browser_headers
   end

   pipeline :api do
     plug :accepts, ["json"]
   end

Sometimes, you’ll want to perform a common set of tasks, or transformations, for some logical group of functions. Not surprisingly, you’ll do each transformation step with a plug and group these plugs into pipelines. When you think about it, a pipeline is just a bigger plug that takes a conn struct and returns one too.
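Since a pipeline is conn in, conn out, you can sketch one in plain Elixir as a reduction over a list of plug functions. The names here are illustrative stand-ins for plugs like :fetch_session, and a map again stands in for %Plug.Conn{}:

```elixir
defmodule PipelineSketch do
  # A pipeline is itself a plug: it takes a conn and returns a conn,
  # threading the connection through each plug function in order.
  def run(conn, plugs) do
    Enum.reduce(plugs, conn, fn plug, acc -> plug.(acc) end)
  end
end

# Illustrative stand-ins for the browser pipeline's plugs.
fetch_session = fn conn -> Map.put(conn, :session, %{}) end
put_secure_browser_headers = fn conn -> Map.put(conn, :secure_headers, true) end

conn =
  PipelineSketch.run(%{path: "/"}, [fetch_session, put_secure_browser_headers])

IO.inspect(conn)
```

Because the pipeline has the same shape as a single plug, pipelines compose with plugs and with each other.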

In router.ex, you can see two pipelines, both of which do reasonable things for a typical web application. The browser pipeline accepts only HTML. It provides some common services such as fetching the session and a user message system called the flash, used for brief user notifications. It also provides some security services, such as request forgery protection.

We’d use the second pipeline for a typical JSON API. This pipeline has a single plug, the one that accepts only JSON requests, so if you had the idea of converting the whole API site to accept only XML, you could do so by changing one plug in one place.

Our hello application uses the browser pipeline, like this:

 scope "/", HelloWeb do
   pipe_through :browser # Use the default browser stack

   get "/", PageController, :index
 end

Now you can tell exactly what the pipeline does. All the routes after pipe_through :browser—all the routes in our application—go through the browser pipeline. Then, the router triggers the controller.

In general, the router is the last plug in the endpoint. It gets a connection, calls a pipeline, and then calls a controller. When you break it down, every traditional Phoenix application looks like this:

  connection
  |> endpoint()
  |> router()
  |> pipeline()
  |> controller()

  • The endpoint has functions that happen for every request.

  • The connection goes through a named pipeline, which has common functions for each major type of request.

  • The controller invokes the model and renders a template through a view.

Let’s look at the final piece of this pipeline, the controller.

Controllers, Views, and Templates

From the previous section, you know that a request comes through an endpoint, through the router, through a pipeline, and into the controller. The controller is the gateway for the bulk of a traditional web application. Like a puppet master, your controller pulls the strings for this application, making data available in the connection for consumption by the view. It potentially fetches database data to stash in the connection and then redirects or renders a view. The view substitutes values for a template.
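You can see the "substitutes values for a template" step in miniature with EEx, the templating library that ships with Elixir and powers the .eex templates Phoenix generates:

```elixir
# EEx ships with Elixir; Phoenix .html.eex templates use the same engine.
# This is the view layer's job in miniature: merge assigns into a template.
template = "<h1>Hello, <%= @name %>!</h1>"

html = EEx.eval_string(template, assigns: [name: "World"])
IO.puts(html)
```

In a real Phoenix view, templates are compiled into functions ahead of time rather than evaluated at runtime like this, but the substitution is the same idea.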

For Phoenix, your web-related code, including controllers, views, and templates goes into the lib/hello_web/ directory. Right now, that directory looks like:

 ├── hello
 │   ├── application.ex
 │   └── repo.ex
 ├── hello.ex
 ├── hello_web
 │   ├── channels
 │   │   └── user_socket.ex
 │   ├── controllers
 │   │   ├── hello_controller.ex
 │   │   └── page_controller.ex
 │   ├── endpoint.ex
 │   ├── gettext.ex
 │   ├── router.ex
 │   ├── templates
 │   │   ├── hello
 │   │   │   └── world.html.eex
 │   │   ├── layout
 │   │   │   └── app.html.eex
 │   │   └── page
 │   │       └── index.html.eex
 │   └── views
 │       ├── error_helpers.ex
 │       ├── error_view.ex
 │       ├── hello_view.ex
 │       ├── layout_view.ex
 │       └── page_view.ex
 └── hello_web.ex

You can see two top-level files, hello.ex and hello_web.ex. The Hello module is an otherwise empty module that defines the top-level interface and documentation for your application. The HelloWeb module contains some glue code that defines the overall structure of the web-related modules of your application.

The second part of this book will be dedicated to applications that use the channels directory, so let’s skip that for now. You’ve already coded a simple controller, so you know what the basic structure looks like.

As you might expect for the support of old-style MVC applications, you can see that lib/hello_web contains directories for views and controllers. There’s also a directory for templates, because Phoenix separates the views from the templates themselves.

We’ve created code in the controller, views, and templates/hello directories, and we’ve added code to router.ex as well. This application is fairly complete. After all, it’s handling plenty of production-level concerns for you:

  • The Erlang virtual machine and OTP engine will help the application scale.

  • The endpoint will handle static requests, parse the inbound request into pieces, and trigger the router.

  • The browser pipeline will honor Accept headers, fetch the session, and protect from attacks like cross-site request forgery (CSRF).

All of these features are quickly available to you for tailoring, but they’re also conveniently stashed out of your way in a structure that’s robust, fast, and easy to extend. In fact, there’s no magic at all. You have a good picture of exactly which functions Phoenix calls on a request to /hello, and where that code lives within the code base:

 connection                  # Plug.Conn
 |> endpoint()               # lib/hello_web/endpoint.ex
 |> browser()                # lib/hello_web/router.ex
 |> HelloController.world()  # lib/hello_web/controllers/hello_controller.ex
 |> HelloView.render(        # lib/hello_web/views/hello_view.ex
      "world.html")          # lib/hello_web/templates/hello/world.html.eex

It’s easy to gloss over these details and go straight to the hello_web directory, and entrust the rest of the details to Phoenix. We encourage you instead to stop and take a look at exactly what happens for each request, from top to bottom.
