Body Content Workflow

Take an instance where a continuous stream of data is downloaded when we make a request. In this situation, the client has to listen to the server continuously until it receives the complete data. Now consider the case where we want to access the response first and worry about its body later. In both situations, we can use the stream parameter. Let us look at an example:

>>> r = requests.get("https://pypi.python.org/packages/source/F/Flask/Flask-0.10.1.tar.gz", stream=True)

If we make a request with stream=True, the connection remains open and only the headers of the response are downloaded. This gives us the capability to fetch the content whenever we need it, for example only when the body is below a certain number of bytes.

The syntax is as follows:

if int(r.headers['content-length']) < TOO_LONG:
    content = r.content

By setting stream=True, we can either access the raw socket response as a file-like object via response.raw, or use the iter_content method to iterate over the response data in chunks. This avoids reading large responses into memory all at once.

The syntax is as follows:

iter_content(chunk_size=<size in bytes>, decode_unicode=False)
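
For instance, a large download can be saved to disk chunk by chunk instead of being held in memory. The following is a minimal sketch; the chunk size and the output filename are illustrative:

import requests

r = requests.get("https://pypi.python.org/packages/source/F/Flask/Flask-0.10.1.tar.gz",
                 stream=True)
with open("Flask-0.10.1.tar.gz", "wb") as fd:
    # read the body 1024 bytes at a time instead of all at once
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:  # filter out keep-alive chunks
            fd.write(chunk)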

In the same way, we can iterate through the content using the iter_lines method, which iterates over the response data one line at a time.

The syntax is as follows:

iter_lines(chunk_size=<size in bytes>, decode_unicode=None, delimiter=None)
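
As a minimal sketch, assuming a hypothetical endpoint that streams newline-delimited data:

import requests

r = requests.get("http://example.com/stream", stream=True)  # hypothetical streaming URL
for line in r.iter_lines(chunk_size=512):
    if line:  # filter out keep-alive newlines
        print(line)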

Note

An important thing to note about the stream parameter is that setting it to True does not release the connection unless all of the data is consumed or response.close() is called.
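
One way to guarantee the release, sketched below, is to call response.close() in a finally block once we have read what we need (the URL and chunk size are reused from the earlier example):

import requests

r = requests.get("https://pypi.python.org/packages/source/F/Flask/Flask-0.10.1.tar.gz",
                 stream=True)
try:
    # consume only the first chunk of the body
    first_chunk = next(r.iter_content(chunk_size=1024))
finally:
    r.close()  # release the connection back to the pool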

The Keep-alive facility

As urllib3 supports the reuse of the same socket connection for multiple requests, the Requests library can send many requests over one socket and receive the responses through its keep-alive feature.

Within a session, keep-alive is automatic. Every request made within a session reuses the appropriate connection by default, and the connection in use is released only after all the data from the body has been read.
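
As a minimal sketch, using pypi.python.org only as an illustrative host, both requests below are made within one session and travel over the same reused connection:

import requests

with requests.Session() as session:
    # both requests reuse the underlying TCP connection thanks to keep-alive
    session.get("https://pypi.python.org/")
    session.get("https://pypi.python.org/")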

Streaming uploads

A file-like object of massive size can be streamed and uploaded using the Requests library. All we need to do is supply the file object as the value of the data parameter in the request call, as shown in the following lines.

The syntax is as follows:

with open('massive-body', 'rb') as file:
    requests.post('http://example.com/some/stream/url',
                  data=file)
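
Requests also accepts a generator as the data value, in which case the body is sent with chunked transfer encoding. The helper below is a hypothetical sketch around the same example URL:

import requests

def read_in_chunks(path, chunk_size=8192):
    # hypothetical helper that yields the file piece by piece
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

requests.post('http://example.com/some/stream/url',
              data=read_in_chunks('massive-body'))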