Speeding up with asynchronous calls

Now, let's turn to the question of performance. Once deployed, our application will need to be constantly monitored and, if needed, scaled and optimized. There are a few ways to speed things up incrementally, for example, by installing the ujson package, which works exactly like the built-in json module but is more performant (because it is written in C). If that package is installed, FastAPI will automatically switch to using it instead.

A potentially more significant performance improvement is built into FastAPI and Uvicorn, based on features introduced in Python 3.4 and later versions: asynchronous calls (the asyncio module arrived in 3.4, and the async/await syntax in 3.5). We did spend some time discussing this feature in Chapter 3, Functions. In a nutshell, all of the code we generally write in Python is executed sequentially: once one line is executed, Python moves on to the next, and so on. This means that when an operation requires data to be acquired from the web or a database, or is computation-heavy but runs on a single CPU, Python sits idle waiting for it to finish, even though it could be running other tasks in the meantime. This is the promise of asynchronous computation.
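To see the idea in isolation, here is a minimal, self-contained sketch using the standard library's asyncio (not FastAPI itself). The fetch coroutine is a hypothetical stand-in for an I/O-bound call such as a web or database request; while one task is waiting, the event loop runs the other, so two 0.2-second "requests" together take roughly 0.2 seconds rather than 0.4:

```python
import asyncio
import time

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (web request, database query).
    # Awaiting yields control so other tasks can run in the meantime.
    await asyncio.sleep(delay)
    return name

async def main() -> float:
    start = time.perf_counter()
    # Both "requests" run concurrently on the event loop.
    results = await asyncio.gather(fetch('a', 0.2), fetch('b', 0.2))
    elapsed = time.perf_counter() - start
    print(results, f'{elapsed:.2f}s')
    return elapsed

elapsed = asyncio.run(main())
```

Sequential execution would have taken the sum of the delays; the concurrent version takes roughly the longest single delay, which is exactly the gain an asynchronous server extracts while waiting on slow I/O.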

As we said, FastAPI and Uvicorn both support asynchronous calls. What this means practically is that every endpoint function, if it relies on something (a library, a database connection, and so on) that supports asynchronous calls, or does not rely on anything at all, can be made asynchronous. To do so, just add async before def. If the function does rely on an asynchronous call, you need to state that with await. Given that, Uvicorn will automatically run those methods asynchronously; this won't speed up each individual call, but it will allow the server to execute other requests in the meantime, when applicable, scaling its overall performance. For example, here is our prediction model, made asynchronous (we also changed the name to keep both methods in place):

@app.get('/predict_async/{complaint_type}', tags=['predict'])
async def predict_time_async(complaint_type: ComplaintType, latitude: float, longitude: float, created_date: datetime):

    obj = pd.DataFrame([{'complaint_type': complaint_type.value,
                         'latitude': latitude, 'longitude': longitude,
                         'created_date': created_date},])
    obj = obj[['complaint_type', 'latitude', 'longitude',
               'created_date']]

    predicted = clf.predict(obj)
    logger.info(predicted)
    return {'estimated_time': predicted[0]}

You can test that this endpoint runs just as well as the non-asynchronous one.
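Note that the endpoint above contains no await, because clf.predict is an ordinary synchronous call. If it instead depended on a genuinely asynchronous resource, the await would appear in the body. The following is a hedged sketch of that shape; fetch_prediction is a hypothetical helper standing in for an asynchronous database or model-serving client (the names are illustrative, not from our application), with asyncio.sleep simulating the non-blocking I/O:

```python
import asyncio

async def fetch_prediction(complaint_type: str) -> float:
    # Hypothetical stand-in for an async driver call
    # (e.g. an async database or HTTP client).
    await asyncio.sleep(0.01)  # simulates non-blocking I/O
    return 42.0

async def predict_time_async(complaint_type: str) -> dict:
    # await hands control back to the event loop until
    # the asynchronous call completes.
    predicted = await fetch_prediction(complaint_type)
    return {'estimated_time': predicted}

result = asyncio.run(predict_time_async('noise'))
print(result)
```

Inside FastAPI, you would not call asyncio.run yourself; Uvicorn's event loop drives the coroutine for you, and the endpoint simply awaits its dependencies.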

We don't recommend using asynchronous calls from the get-go, especially if you don't have much experience with them. However, if you're on the lookout for better API performance, asynchronous calls can be a good option. On many occasions, they can make a drastic difference compared to traditional, synchronous execution; for some features and entire products, the difference can be critical. But how do we measure performance, and how do we determine whether an endpoint is okay to publish? For that, let's go to the next section.
