High availability

Redundancy toward higher availability: The first and foremost tip is to architect software applications to be redundant. Redundancy is the duplication of a system component to substantially increase availability: if a system goes down for any reason, its duplicate takes over. That is why software applications are commonly deployed in multiple regions, as indicated in the following diagram. Lately, applications are being composed of distributed and duplicated components, so if one component or service goes down, its duplicate comes in handy in sustaining the application.

A microservices architecture eventually leads to distributed systems, which are highly available and scalable:

The following table clearly illustrates why redundancy turns out to be a crucial need for business and IT systems. With duplication in place, we can readily attain 99.999% availability of systems and software:

That is, if a component is guaranteed to deliver 99% availability, then running that component in two geographically different places raises the total availability to 99.99%. With more instances, we get still higher availability. To design such an architecture across multiple availability zones and regions, applications have to be stateless, and an elastic load balancer (itself clustered to avoid a single point of failure) is needed to intelligently route requests from different sources to the backend applications and their clones. Not all requests are stateless, however; some demand stickiness, and hence several options are being rolled out to support stateful applications.
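The availability arithmetic above follows from treating the redundant instances as independent: the combined system is down only when every instance is down at once. A minimal sketch of that calculation (the function name is ours, for illustration):

```python
def combined_availability(a, n):
    """Availability of n independent redundant instances, each with
    individual availability a given as a fraction (e.g. 0.99 for 99%).
    The system fails only if all n instances fail simultaneously."""
    return 1 - (1 - a) ** n

# One 99%-available component, duplicated across locations:
print(round(combined_availability(0.99, 1), 4))  # → 0.99
print(round(combined_availability(0.99, 2), 4))  # → 0.9999
print(round(combined_availability(0.99, 3), 6))  # → 0.999999
```

Two instances already yield the 99.99% quoted above; a third reaches the "five nines" regime, which is why each extra replica buys diminishing but still meaningful downtime reduction.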

Fault-tolerance towards higher availability: Fault tolerance relies on a specialized mechanism to proactively detect a fault or risk in one or more components of an IT hardware system and instantaneously switch to a redundant component so that the service continues without any delay. The failed component may be the motherboard (which typically comprises the CPU, memory, and connectors for input and output devices), the power supply, or a storage component. Software downtime is another issue, caused by faults in software packages. Recently, a litany of techniques and tools has emerged to help developers build fault-tolerant software systems.
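The switch-to-a-redundant-component idea applies equally at the software level. A minimal sketch, with hypothetical endpoint names and a stand-in `send` function, of a client that fails over from a downed instance to its duplicate:

```python
def call_with_failover(endpoints, send):
    """Try each redundant endpoint in turn; return the first successful
    response instead of failing the whole request."""
    last_error = None
    for endpoint in endpoints:
        try:
            return send(endpoint)
        except ConnectionError as exc:
            last_error = exc  # this instance is down; try its duplicate
    raise RuntimeError("all redundant instances are down") from last_error

# Hypothetical demo: the primary region is down, its replica answers.
def send(endpoint):
    if endpoint == "primary.example.com":
        raise ConnectionError("primary unreachable")
    return "200 OK from " + endpoint

print(call_with_failover(["primary.example.com", "replica.example.com"], send))
# → 200 OK from replica.example.com
```

In production this routing is usually done by the load balancer rather than the client, but the principle is the same: the failure of one instance is absorbed by its redundant peer without the caller noticing a delay beyond one retry.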

Besides this, there are software testing and analysis methods, supported by automated tools, which come in handy in eliminating deviations and deficiencies in software libraries. In the recent past, with the growing maturity and stability of containers, errors or attacks on containerized microservices can be readily identified and contained within their containers, achieving the goal of fault isolation. Through this isolation, misadventures and misdemeanors can be stopped preemptively: a compromised component need not affect other components within the system, which avoids a complete shutdown. Failed services can be rectified and restarted, or redundant service instances can be leveraged to ensure business continuity. The fault-tolerance capability of IT systems guarantees that there is no service interruption; systems are innately empowered to continue delivering their assigned functionality in the event of internal failures and external attacks.
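The "rectify and restart" behavior described above is what container orchestrators implement as a restart policy. A minimal sketch of the idea, with a hypothetical in-process `flaky_service` standing in for a containerized service:

```python
def run_with_restarts(service, max_restarts=3):
    """Re-invoke a failed service, mimicking a container restart policy;
    the fault stays isolated to this service while the rest of the
    system keeps running."""
    for attempt in range(max_restarts + 1):
        try:
            return service()
        except Exception:
            if attempt == max_restarts:
                raise  # rectification failed; escalate to an operator

# Hypothetical flaky service: crashes twice, then recovers.
failures = {"left": 2}

def flaky_service():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise RuntimeError("service crashed")
    return "healthy"

print(run_with_restarts(flaky_service))  # → healthy
```

The key design point is the scope of the failure: the exception is caught at the boundary of the one service, so its crashes never propagate to sibling components, and after a bounded number of restarts the fault is escalated instead of being retried forever.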
