Realizing Reliable Systems: The Best Practices

System reliability is defined as the combination of system resiliency and elasticity. With the proliferation of web-scale, data-intensive, and process-intensive applications across industry verticals, application reliability has to be ensured at any cost to fulfil varying business expectations. Similarly, cloud environments have emerged as the one-stop IT solution for automating business processes and operations. All kinds of personal, professional, and social applications are being meticulously modernized and moved to cloud centers to reap the widely articulated benefits of software-defined cloud infrastructures. Cloud reliability, too, has to be guaranteed through the leverage of pioneering technologies and tools. The reliability of applications and IT infrastructures is therefore vital to retaining customers' confidence in, and continuity with, the various innovations and improvisations happening in the IT space. This chapter pins down the best practices accrued from the expertise, experience, and education of site reliability engineers, DevOps professionals, and cloud engineers.

The world is increasingly becoming connected. With competent technologies and tools abounding for establishing and sustaining deep connectivity, our everyday entities and elements are getting connected with one another (locally as well as remotely) to interact and collaborate decisively. All kinds of physical, mechanical, electrical, and electronics systems in our personal as well as professional environments are being connected and integrated as digitization and edge technologies mature and stabilize. Powerful communication and data transmission protocols are emerging and evolving fast to systematically link up everything and to empower these systems to team up in a purpose-driven manner.

Furthermore, the power of digital technologies leads to the realization of knowledge-filled, service-oriented, event-driven, cloud-hosted, process-aware, business-centric, and mission-critical software solutions and services, which directly enable business automation and augmentation. These technologies include cloud infrastructures and platforms for hosting and managing operational and transactional applications; big, fast, and streaming data analytics platforms; groundbreaking artificial intelligence (AI) algorithms and approaches that bring forth prognostic, predictive, prescriptive, and personalized insights out of Internet of Things (IoT) data; the pervasiveness of the microservices architecture (MSA) pattern; enterprise mobility and social networking; and the surging popularity of fog or edge computing, blockchain, and so on. The resulting enhancements in IT adaptivity, agility, and affordability lead to the setting up and sustaining of intelligent business operations and bring forth premium offerings.

With the unprecedented adoption of the cloud paradigm, the arrival of programmable, open, and flexible IT infrastructure is being speeded up. Previously, IT infrastructure was primarily closed, inflexible, and expensive, in the form of mainframe servers and monolithic applications. As the cloud-enablement strategy gains prominence, we have additional infrastructural assets in the form of virtual machines (VMs) and containers. Cloud IT infrastructures are highly optimized and organized through the smart application of cloud technologies and tools. Because physical machines/bare-metal servers are segmented into multiple VMs and containers, the number of participating infrastructural modules is bound to go up rapidly. This strategically sound partitioning turns IT infrastructures (server machines, storage appliances, and networking solutions) into a number of easily maneuverable and manageable, highly scalable, network-accessible, publicly discoverable, composable, and available IT resources. This transition definitely brings forth a number of business, technical, and user advantages. However, there is a catch: the operational and management complexities of modern IT infrastructures have gone up significantly. Also, for the connected world, software solutions have to be made out of distributed and decentralized application components. To meet evolving business requirements, software packages have to be nimble and versatile. Thus, hardware, software, and services have to be creatively modernized and innately insights-driven.

Software complexity is rising consistently due to requirement changes and additions. The functional requirements of software applications are being widely fulfilled, but the challenge is how to build software applications that guarantee the non-functional requirements (NFRs), alternatively termed the quality of service (QoS) and quality of experience (QoE) attributes. The well-known QoS properties are scalability, availability, performance/throughput, security, maneuverability, and reliability.

For achieving reliable systems, we need reliable infrastructures and applications. Increasingly, we hear and read about infrastructure-aware applications and application-aware infrastructures. Thus, it is clear that both infrastructure and application play a vital role in rolling out reliable software systems. This chapter is dedicated to detailing the best practices that empower software architects and developers to build microservices that are resilient. When resilient microservices get composed, we can enjoy and experience reliable software systems.
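Resilience in microservices is commonly achieved through well-known patterns such as timeouts, retries, and circuit breakers. As one illustration of the idea (a minimal sketch in Python, not tied to any specific framework; the class name, thresholds, and cooldown value are illustrative assumptions), a circuit breaker stops calling a failing downstream service once failures cross a threshold, and only retries after a cooldown:

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: trips open after a threshold of consecutive
    failures, rejecting further calls until a cooldown period elapses."""

    def __init__(self, failure_threshold=3, cooldown_seconds=30.0,
                 clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.clock = clock            # injectable clock, useful for testing
        self.failures = 0             # consecutive failure count
        self.opened_at = None         # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        # While open, fail fast unless the cooldown has expired (half-open).
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.cooldown_seconds:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None     # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = self.clock()   # trip the breaker
            raise
        self.failures = 0             # any success resets the failure count
        return result
```

A composed system of such guarded calls degrades gracefully: a struggling dependency is given time to recover instead of being hammered with requests, which is one of the behaviors that lets resilient microservices add up to a reliable whole.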
