Edge datacenters

If we have sufficient computing power at the datacenter level, why keep anything anywhere else but at these datacenters? All the connections can be routed back to the server providing the service, and we can call it a day. The answer, of course, depends on the use case. The biggest limitation of routing every request and session all the way back to the datacenter is the latency introduced in transport. In other words, the network is the bottleneck. As fast as light travels in a vacuum, the latency is still not zero, and in the real world it is much higher as the packet traverses multiple networks and sometimes undersea cables, slow satellite links, 3G or 4G cellular links, or Wi-Fi connections.
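To put rough numbers on that transport latency, the following sketch estimates the propagation floor of a round trip over fiber. The speed-of-light figure, the two-thirds fiber factor, the New York to London distance, and the per-hop overhead are illustrative assumptions for this example, not measurements from any particular network.

```python
# A minimal sketch of propagation delay over fiber. Assumes light in fiber
# travels at roughly 2/3 of its speed in a vacuum; the distances and per-hop
# overhead below are illustrative assumptions.

SPEED_OF_LIGHT_KM_S = 300_000          # vacuum, km per second
FIBER_FACTOR = 2 / 3                   # fiber slows light to about two-thirds of c


def propagation_rtt_ms(distance_km: float, per_hop_ms: float = 0.0,
                       hops: int = 0) -> float:
    """Round-trip propagation delay in milliseconds, plus optional per-hop overhead."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000 + hops * per_hop_ms


if __name__ == "__main__":
    # New York to London is roughly 5,600 km great-circle distance.
    print(f"NY-London fiber RTT floor: {propagation_rtt_ms(5600):.1f} ms")
    # The same path crossing 10 intermediate networks, each adding ~1 ms.
    print(f"With 10 hops of overhead:  {propagation_rtt_ms(5600, 1.0, 10):.1f} ms")
```

Even this best-case estimate lands well above what an interactive or streaming application would like, which is why shortening the physical path matters.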

The solution? Reduce the number of networks the end user traverses as much as possible: one, be as directly connected to the user as possible; and two, place enough resources at the edge location. Imagine you are building the next generation of video streaming service. To keep customers happy with smooth streaming, you would want to place the video servers as close to the customers as possible, either inside or very near to the customer's ISP. The upstream of your video server farm would not connect to just one or two ISPs, but to all the ISPs you can reach, with as much bandwidth as needed. This gave rise to the peering exchanges and edge datacenters of big ISPs and content providers. Even though the number of network devices is not as high as in cloud datacenters, they too can benefit from network automation in terms of the increased security and visibility automation brings. We will cover security and visibility in later chapters of this book.
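As a small illustration of why placement matters, the sketch below directs a client to the edge site with the lowest measured round-trip time. The site names and RTT values are hypothetical; a real streaming service would steer clients with DNS-based load balancing, anycast, or client-side measurements rather than a static table.

```python
# A hedged sketch of a simple edge-selection policy: send each client to the
# edge location with the lowest measured RTT. Site names and figures are
# hypothetical examples, not real measurements.

from typing import Dict


def pick_edge_site(rtt_ms_by_site: Dict[str, float]) -> str:
    """Return the edge site with the lowest measured RTT for this client."""
    return min(rtt_ms_by_site, key=rtt_ms_by_site.get)


if __name__ == "__main__":
    measured = {
        "central-datacenter": 85.0,   # request routed all the way back
        "edge-pop-isp-a": 12.0,       # server placed inside the customer's ISP
        "edge-pop-ix-1": 18.0,        # server at a nearby peering exchange
    }
    print("Serve from:", pick_edge_site(measured))
```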
