Writing a simple Python AsyncIO program

It's time to buckle up and dive into the world of asynchronous programming with Python to understand how AsyncIO really works.

The following code implements a simple URL fetcher using the Python requests library and AsyncIO:

#!/usr/bin/python3
# async_url_fetch.py
import asyncio
import requests

async def fetch_url(url):
    # Fetch the page with the requests library and return its body
    response = requests.get(url)
    return response.text

async def get_url(url):
    return await fetch_url(url)

def process_results(future):
    # Callback invoked once a task finishes; the future holds its result
    print("Got results")
    print(future.result())

loop = asyncio.get_event_loop()
task1 = loop.create_task(get_url('http://www.google.com'))
task2 = loop.create_task(get_url('http://www.microsoft.com'))
task1.add_done_callback(process_results)
task2.add_done_callback(process_results)
loop.run_forever()

That was a small, neat asynchronous program built with the Python AsyncIO library. Now, let's spend some time understanding what we did here.

Starting from the top, we have imported the Python requests library to make web requests from our Python code, and we have also imported Python's AsyncIO library.

Next, we define a co-routine named fetch_url. The general syntax of defining a co-routine for AsyncIO requires the use of the async keyword:

async def fetch_url(url):

Next in line is the definition of another co-routine named get_url. Inside get_url, we make a call to our other co-routine, fetch_url, which does the actual fetching of the URL.

Since fetch_url is a co-routine that performs a blocking fetch, we precede the call to fetch_url with the await keyword. This signifies that this method can be suspended until the results are obtained:

return await fetch_url(url)
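
It is worth noting that requests.get() blocks the thread it runs on even inside a co-routine, so awaiting fetch_url does not, by itself, let other tasks run while the network call is in flight. One common pattern for such cases, shown in the following minimal sketch (not part of the original listing), is to hand the blocking call off to a thread pool via the loop's run_in_executor() method:

import asyncio
import requests

async def fetch_url(url):
    # Run the blocking requests.get() call in the default thread-pool
    # executor so the event loop stays free to service other tasks
    loop = asyncio.get_event_loop()
    response = await loop.run_in_executor(None, requests.get, url)
    return response.text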

Next in the program is the definition of the process_results method. We use this method as a callback to process the results from the get_url co-routine once they arrive. It takes a single parameter, a future object, which will contain the result of the call to get_url.

Inside the method, the result of the future can be accessed through the result() method of the future object:

print(future.result())

With this, we have all the basic machinery set up for working with the AsyncIO event loop. Now, it's time to create a real event loop and submit a few tasks to it.

We start by fetching an AsyncIO event loop with a call to the get_event_loop() method. This method returns the event loop implementation best suited to the platform on which the code is running.

AsyncIO provides multiple event loop implementations that a programmer can choose from, but usually a simple call to get_event_loop() returns the best one for the system the interpreter is running on.
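
If you are curious which implementation you got, a minimal sketch such as the following (not part of the original listing) prints the concrete loop class; on Linux and macOS this is typically SelectorEventLoop, while on Windows it may be ProactorEventLoop:

import asyncio

# Print which event loop implementation AsyncIO picked for this platform
loop = asyncio.get_event_loop()
print(type(loop).__name__)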

Once we have the loop created, we submit a few tasks to the event loop with the create_task() method. This adds the tasks to the event loop's queue for execution. Since these tasks are asynchronous and we have no clue which task will produce its results first, we need to provide a callback to handle the results of each task. To achieve this, we attach a callback to the tasks with the task's add_done_callback() method:

task1.add_done_callback(process_results)

Once everything is set up, we start the event loop in run_forever mode so that it keeps running and handling new tasks as they arrive.
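
If you would rather have the program exit once both fetches are done instead of running forever, a variant of the listing above (the filename here is just an example) can gather the two tasks and run the loop only until they complete:

# async_url_fetch_once.py -- a variant of the earlier listing that exits
# once both tasks have finished instead of running forever
import asyncio
import requests

async def fetch_url(url):
    response = requests.get(url)
    return response.text

async def get_url(url):
    return await fetch_url(url)

loop = asyncio.get_event_loop()
task1 = loop.create_task(get_url('http://www.google.com'))
task2 = loop.create_task(get_url('http://www.microsoft.com'))
# Wait until both tasks have produced a result, then close the loop
results = loop.run_until_complete(asyncio.gather(task1, task2))
for body in results:
    print(body[:80])
loop.close()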

With this, we have completed the implementation of a simple AsyncIO program. But hey, we are trying to build an enterprise-scale application. What if I wanted to build an enterprise web application with AsyncIO?

So, now let's take a look at how we can use AsyncIO to implement a simple asynchronous socket server.
