Summary

In this chapter, we built our own API and deployed an ML model behind a prediction endpoint. Using FastAPI's built-in features, we generated interactive documentation and defined schemas to validate both inputs and outputs. We also created a simple HTML dashboard that generates charts on request, and we learned how to tune the API's performance by leveraging asynchronous functionality. Lastly, we simulated traffic load on our system using Locust, an open source load-testing tool.

By doing so, we took a quick tour of the full cycle of API development: choosing a framework, adding business logic, and testing. The skills we learned along the way are useful whenever you want the flexibility, scalability, and richness of providing your service via an API.

Building your own web service is a great option, and definitely the best one if the API is popular and needs to withstand constant, intensive request loads.

In the next chapter, we'll look at a different approach: running the same prediction model as a serverless application, which can make the API cheaper and easier to scale when we don't need to serve many requests or when the load is sporadic.
