Working with streams to process requests

If you have to work with a large enough set of data, problems are almost inevitable. Your server may not be able to provide all the required memory, and even if memory isn't an issue, the processing time could exceed the standard waiting time and cause timeouts. Worse still, your server would effectively shut out other requests while it was devoted to handling your long-running one.

Node provides a way to work with collections of data as streams: you can process the data as it flows, and pipe streams together to compose functionality out of smaller steps, much in the fashion of Linux and Unix pipelines. Let's see a basic example, which you might use if you were interested in doing low-level Node request processing. (That said, we will be using higher-level libraries for this work, as we'll see in the next chapter.) When a request comes in, its body can be accessed as a stream, allowing your server to deal with requests of any size.

The response that will be sent to the client is also a stream; we'll see an example of this in the next section, Compressing files with streams.

Streams can be of four kinds:

  • Readable: Which can (obviously!) be read. You would use this to process a file, or, as in our following example, to get a web request's data.
  • Writable: To which data can be written.
  • Duplex: Both readable and writable, such as a web socket.
  • Transform: Duplex streams that can transform the data as it is read and written; we'll see an example of this for zipping files.