There's more...

Despite being relatively easy to understand, our implementation of try_to_send_file doesn't scale endlessly. Imagine serving huge files by loading each of them fully into memory for millions of clients at the same time; that would exhaust your RAM pretty quickly. A more scalable solution is to send the file in chunks, that is, part by part, so that you only ever need to hold a small piece of it in memory at any given time. To implement this, copy the contents of your file into a [u8] buffer of a fixed size and send each filled buffer through an additional channel as an instance of hyper::Chunk, which implements From<Vec<u8>>.
