Apply caching

In Chapter 6, Solution Architecture Design Patterns, you learned how to apply caching at various levels of architecture in the Cache-based architecture section. Caching can improve application performance significantly. Although you learned about design patterns that add an external caching engine or technology, such as a content distribution network (CDN), it's essential to understand that almost every application component and piece of infrastructure has its own cache mechanism. Utilizing the built-in caching mechanism at each layer helps reduce latency and improve the performance of the application.

At the server level, the CPU has its own hardware cache, which reduces the latency of accessing data from the main memory. The CPU cache includes an instruction cache and a data cache; the data cache stores copies of frequently used data. Caching is also applied at the disk level, but there it is managed by operating system software (known as the page cache), whereas the CPU cache is managed entirely by hardware. The disk cache holds data originating from secondary storage, such as a hard disk drive (HDD) or solid-state drive (SSD). Frequently used data is kept in an unused portion of the main memory (that is, the RAM) as the page cache, which results in quicker access to content.
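To see the page cache at work, the following minimal Python sketch times two consecutive reads of the same large file; the file path is a hypothetical placeholder, and the exact effect depends on the operating system and available RAM. The first read typically goes to disk, while the repeated read is usually served from the page cache in memory.

```python
import time

def timed_read(path):
    """Read a file fully in 1 MiB chunks and return the elapsed time in seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return time.perf_counter() - start

# Hypothetical path: any large file on local disk illustrates the effect.
path = "/tmp/large_test_file.bin"

first = timed_read(path)   # likely served, at least partly, from disk
second = timed_read(path)  # likely served from the OS page cache in RAM

print(f"first read:  {first:.3f}s")
print(f"second read: {second:.3f}s (typically much faster when the page cache holds the file)")
```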

The database often has its own cache mechanism that saves query results so it can respond faster. The database has an internal cache that pre-loads data based on your usage patterns. Many databases also have a query cache that keeps results in the main server memory (RAM) when you run the same query more than once. The query cache is cleared whenever the data in the underlying table changes, and if the server runs out of memory, the oldest query results are evicted to make space.
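The following is a minimal Python sketch of the same idea implemented at the application level, using the standard-library sqlite3 module: identical queries are answered from an in-memory result cache, and any write clears the cache, mirroring how a database query cache is invalidated when table data changes. The CachedDB class and the products table are purely illustrative.

```python
import sqlite3

class CachedDB:
    """Illustrative query-result cache: repeated identical SELECTs are answered
    from memory, and the cache is cleared whenever the data changes."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.query_cache = {}

    def query(self, sql, params=()):
        key = (sql, params)
        if key not in self.query_cache:
            # Cache miss: run the query and keep the result in RAM.
            self.query_cache[key] = self.conn.execute(sql, params).fetchall()
        return self.query_cache[key]

    def execute(self, sql, params=()):
        # Any write invalidates cached results, like a table change clearing the query cache.
        self.conn.execute(sql, params)
        self.conn.commit()
        self.query_cache.clear()

db = CachedDB()
db.execute("CREATE TABLE products (id INTEGER, name TEXT)")
db.execute("INSERT INTO products VALUES (1, 'widget')")
print(db.query("SELECT * FROM products"))  # hits the database
print(db.query("SELECT * FROM products"))  # served from the in-memory cache
```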

At the network level, you have a DNS cache, which stores web domain names and the corresponding IP addresses locally on the server. DNS caching allows a quick DNS lookup when you revisit the same website domain name. The DNS cache is managed by the operating system and contains a record of all recently visited websites. You learned about client-side cache mechanisms, such as the browser cache, and various caching engines, such as Memcached and Redis, in Chapter 6, Solution Architecture Design Patterns.
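As a rough illustration of the idea (an in-process cache rather than the operating system's actual DNS cache), the following Python sketch caches hostname resolutions so that repeated lookups of the same domain avoid another round trip to the resolver; example.com is used only as a placeholder domain.

```python
import socket
import time
from functools import lru_cache

@lru_cache(maxsize=256)
def resolve(hostname):
    """Resolve a hostname to an IP address, caching the result in-process."""
    return socket.getaddrinfo(hostname, 443)[0][4][0]

for attempt in ("first", "second"):
    start = time.perf_counter()
    ip = resolve("example.com")  # second call is answered from the local cache
    elapsed = (time.perf_counter() - start) * 1000
    print(f"{attempt} lookup: {ip} in {elapsed:.2f} ms")
```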

In this section, you learned about the design factors, such as latency, throughput, concurrency, and caching, which need to be addressed for architecture performance optimization. Each component of the architecture, whether at the network, server, application, or database level, has a certain degree of latency and concurrency issues that need to be handled.

You should design your application for the desired performance, as improving performance comes with a cost. The specifics of performance optimization differ from application to application. Solution architecture needs to direct the effort accordingly—for example, a stock-trading application cannot tolerate latency beyond sub-milliseconds, while an e-commerce website can live with a couple of seconds of latency. Let's learn about selecting technology for various architecture levels to overcome performance challenges.
