Summary

This chapter is dedicated to showing how we can deploy our API services into production. One way is to run the Go binary and have clients access it directly through the IP:port combination, where the IP is the Virtual Private Server (VPS) address; we can also register a domain name and point it at the VPS. The second, better way is to hide the application behind a proxy server. Nginx is such a proxy server, and with it we can run multiple application servers under one umbrella.
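As a minimal sketch of this proxy setup (assuming the Go binary listens on local port 8000 and `example.com` is a placeholder domain), an Nginx server block forwarding requests to the application might look like:

```nginx
server {
    listen 80;
    server_name example.com;   # placeholder domain pointed at the VPS

    location / {
        # Forward all requests to the Go API server running locally
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Clients then talk only to Nginx on port 80, while the Go server stays bound to localhost.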

We saw how to install Nginx and start configuring it. Nginx provides features such as load balancing and rate limiting, which can be crucial when exposing APIs to clients. Load balancing is the process of distributing load among similar servers. We saw which load-balancing mechanisms are available, including Round Robin, IP Hash, and Least Connection. Then, we added access control to our servers by allowing and denying certain sets of IP addresses.
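A sketch combining these three features (assuming two backend instances on ports 8000 and 8001, and an internal client range of 10.0.0.0/8, all placeholders) could look like:

```nginx
# Rate limiting: a shared zone keyed by client IP, capped at 10 requests/second
limit_req_zone $binary_remote_addr zone=apilimit:10m rate=10r/s;

upstream goapi {
    least_conn;                # route to the server with the fewest active connections
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

server {
    listen 80;

    location /api/ {
        limit_req zone=apilimit burst=20;  # tolerate short bursts
        allow 10.0.0.0/8;      # permit internal clients (placeholder range)
        deny  all;             # reject everyone else
        proxy_pass http://goapi;
    }
}
```

Removing the `least_conn` directive falls back to Nginx's default Round Robin distribution.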

Finally, we need a process monitor that can bring our crashed application back to life. Supervisord is a very good tool for the job. We saw how to install Supervisord and how to use supervisorctl, its command-line application for controlling running servers.
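A minimal Supervisord program section for the API might look like the following sketch (the binary path `/usr/local/bin/goapi` and program name are placeholders):

```ini
[program:goapi]
; Path to the compiled Go binary (placeholder)
command=/usr/local/bin/goapi
autostart=true
autorestart=true            ; restart the process automatically if it crashes
stderr_logfile=/var/log/goapi.err.log
stdout_logfile=/var/log/goapi.out.log
```

After placing this in Supervisord's configuration directory, running `supervisorctl reread` followed by `supervisorctl update` loads it, and `supervisorctl status goapi` shows the process state.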

In the next chapter, we are going to see how to make our API production-grade using an API gateway. We will discuss in depth how to put our API behind an entity that takes care of authentication and rate limiting.
