Prognostic, predictive, and prescriptive analytics

Every operational environment needs data analytics and machine learning capabilities to be intelligent in its everyday actions and reactions. The most profoundly affected environments include IT environments (traditional data centers as well as the more recent cloud-enabled data centers (CeDCs)), manufacturing and assembly floors, plant operations, and maintenance, repair, and overhaul (MRO) facilities. Increasingly, a variety of important environments are being filled with scores of networked, embedded, resource-constrained as well as resource-intensive devices, toolsets, and microcontrollers. Hospitals have a growing array of medical instruments, and homes contain a number of connected appliances, such as coffee makers, dishwashers, microwave ovens, and consumer electronics. Manufacturing floors house powerful equipment, machinery, and robots. Workshops, mechanical shops, and flight maintenance garages are becoming more sophisticated and smarter as connected devices and instruments are installed.

The concept of cyber-physical systems (CPS) seamlessly and securely links the physical world with the virtual/cyber world. Physical assets, along with mechanical and electrical systems, are being integrated with cloud-enabled and cloud-native applications and data sources to exhibit distinct and deft behavior. Self-, surroundings-, and situation-aware capabilities are realized through this kind of integration and orchestration. These digitized entities and elements generate a great deal of data through their interactions, collaborations, correlations, and corroborations. Thus, the discipline of data science gains immense popularity, as the data that is generated leads to enviable insights.

As data centers and server farms evolve and embrace new technologies such as virtualization and containerization, it becomes more difficult to determine what impact these changes have on server, storage, and network performance. With the right analytics, system administrators and IT managers can identify, and even predict, potential choke points and errors before they create problems.
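As a minimal sketch of such prediction, assuming hourly disk-utilization samples have already been collected by a monitoring agent, a simple trend line can be fitted to estimate when a volume will cross a capacity threshold. The sample values and the 90% threshold below are purely illustrative:

import numpy as np

# Hypothetical hourly disk-utilization samples (percent used) for one volume.
disk_used_pct = np.array([61.0, 61.4, 61.9, 62.5, 63.2, 63.8, 64.5, 65.3])
hours = np.arange(len(disk_used_pct))

# Fit a straight-line trend and extrapolate to estimate when usage crosses 90%.
slope, intercept = np.polyfit(hours, disk_used_pct, 1)
if slope > 0:
    hours_to_threshold = (90.0 - disk_used_pct[-1]) / slope
    print(f"Volume projected to reach 90% in about {hours_to_threshold:.0f} hours")
else:
    print("No upward trend detected; no capacity alert raised")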

Various businesses and organizations are methodically using big data analytics to drill down into and slice data center operations data. This can uncover hitherto unseen correlations among various IT systems. Furthermore, the impact that new workloads have on the underlying resources is also becoming better understood. With the emergence of streaming and real-time analytics platforms, behavioral and performance insights are extracted instantaneously, and appropriate countermeasures are worked out and rolled out to sustain the goals of business continuity. That is, through data analytics capability, it is now possible to gain a deeper and more decisive understanding of system performance levels. If there is any possibility of performance degradation, administrators and operational teams can quickly consider various options for overcoming potential performance-related problems well in advance.
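As a simple illustration of uncovering such cross-system correlations, assuming per-minute metrics from a web tier, a database, and a storage array have been exported into a table, a correlation matrix can reveal hidden dependencies. The metric names and values below are illustrative, not real measurements:

import pandas as pd

# Hypothetical per-minute metrics exported from three different IT systems.
metrics = pd.DataFrame({
    "web_latency_ms": [110, 115, 140, 180, 230, 250, 210, 160],
    "db_cpu_pct":     [35, 38, 55, 70, 88, 92, 80, 60],
    "san_iops":       [1200, 1180, 1250, 1300, 1320, 1310, 1290, 1260],
})

# Cross-system correlation matrix: strongly correlated pairs hint at hidden
# dependencies, for example web latency climbing with database CPU load.
print(metrics.corr(method="pearson").round(2))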

In any cloud center, there are many compute resources, such as bare metal (BM) servers, virtual machines (VMs), and containers. Furthermore, there are many types of networking elements, such as routers, switches, firewalls, load balancers, intrusion detection and prevention systems, and application delivery controllers. In addition, there are several kinds of storage appliances and arrays. Every kind of equipment in a cloud environment emits a lot of log data at different junctures. All of this log data ought to be collected carefully, cleansed, and crunched systematically through automated toolsets for the timely extraction of actionable insights. There are several performance evaluation metrics, and through appropriate data analytics capabilities enshrined in every large-scale enterprise, the preventive and predictive maintenance of every participating and contributing device and machine can be guaranteed. The emergence of online, off-premise/on-premise, and on-demand cloud infrastructures comes in handy in speeding up the process of IT data analytics. Infrastructure automation, monitoring, governance, management, rationalization, and utilization are being simplified by leveraging infrastructure (software as well as hardware) log data.
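A minimal sketch of such log crunching, assuming syslog-style lines have already been pulled from the various devices, might parse each entry and count error events per device to shortlist candidates for preventive maintenance. The log lines and the pattern below are illustrative:

import re
from collections import Counter

# Hypothetical syslog-style lines gathered from servers, switches, and storage arrays.
raw_logs = [
    "2024-05-01T10:00:01 switch-03 WARN port 12 flapping",
    "2024-05-01T10:00:05 vm-host-07 ERROR disk latency above threshold",
    "2024-05-01T10:00:09 vm-host-07 ERROR disk latency above threshold",
    "2024-05-01T10:00:14 san-array-2 INFO rebuild complete",
]

# Cleanse: keep only lines matching the expected "timestamp device severity message" shape.
pattern = re.compile(r"^(\S+)\s+(\S+)\s+(INFO|WARN|ERROR)\s+(.*)$")
errors_per_device = Counter()

for line in raw_logs:
    match = pattern.match(line)
    if match:
        _, device, severity, _ = match.groups()
        if severity == "ERROR":
            errors_per_device[device] += 1

# Devices emitting repeated errors become candidates for preventive maintenance.
for device, count in errors_per_device.most_common():
    print(f"{device}: {count} error events")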

The data provided by virtual machine monitors (VMMs) and container platforms can be invaluable in completely and comprehensively analyzing virtual data centers. For example, the hypervisor holds a lot of information because it is designed to use a great deal of context-sensitive data to allocate virtual resources accurately. Similarly, in containerized cloud environments, container-monitoring tools collect a lot of operational data. Extracting hypervisor and container data and submitting it for purpose-specific analysis using analytics engines helps pinpoint a lot of useful information about system functioning and performance. The insights generated empower administrators to optimize workloads and identify suitable systems to host workload replicas or new workloads. Not only the state of workloads and virtual machines but also the condition of physical machines and their clusters can be extracted to plan various improvements.
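As a rough illustration, assuming a monitoring agent exposes per-host CPU and memory utilization, a simple greedy placement routine could pick the host with the most headroom for a new workload replica. The host names, thresholds, and resource estimates below are hypothetical:

# Hypothetical utilization snapshots (percent used) that a hypervisor or
# container-monitoring agent might report for each candidate host.
host_utilization = {
    "host-a": {"cpu": 72, "mem": 65},
    "host-b": {"cpu": 41, "mem": 55},
    "host-c": {"cpu": 58, "mem": 80},
}

def pick_host(hosts, cpu_needed=20, mem_needed=15, ceiling=85):
    """Greedy placement: among hosts that can absorb the workload's estimated
    CPU and memory demand without crossing the ceiling, pick the one with the
    most CPU headroom."""
    candidates = [
        (name, 100 - stats["cpu"])
        for name, stats in hosts.items()
        if stats["cpu"] + cpu_needed <= ceiling and stats["mem"] + mem_needed <= ceiling
    ]
    return max(candidates, key=lambda item: item[1])[0] if candidates else None

print(pick_host(host_utilization))  # prints "host-b", the host with the most headroom

In a real deployment, the utilization snapshots would of course come from the hypervisor's or container platform's own monitoring interfaces rather than hard-coded values.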

IT teams need complete visibility into their IT infrastructures and the business workloads running on them. This enhanced visibility leads to tighter control of the entire stack, from the underlying infrastructure up to the applications. To ensure high visibility, controllability, and security, the IT team needs intelligent software solutions to monitor the hardware and software stack, manage large-scale compute clusters, and automate routine but time-consuming and complex operations such as failure handling, OS patching and security updates, and software upgrades.
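A minimal sketch of such automation, assuming a Linux host with systemd-managed services and permission to restart them, could check unit health and attempt a restart on failure. The service names below are placeholders, and real environments would typically delegate this to orchestration or configuration-management tooling:

import subprocess

# Hypothetical list of systemd-managed services the IT team wants kept healthy.
SERVICES = ["nginx", "node_exporter"]

def is_active(service: str) -> bool:
    # `systemctl is-active --quiet` exits with status 0 only when the unit is active.
    result = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return result.returncode == 0

def remediate(service: str) -> None:
    # The simplest automated failure-handling step: attempt a restart.
    subprocess.run(["systemctl", "restart", service], check=False)

for service in SERVICES:
    if not is_active(service):
        print(f"{service} is down; attempting automatic restart")
        remediate(service)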

Machine learning (ML) algorithms are very popular these days. Empowering machines to be smart through a host of self-learning algorithms and models is the central concept behind the enormous success of various ML algorithms. As data size, structure, speed, and scope vary hugely, it is pertinent to empower machines themselves to capture, store, and understand all kinds of incoming data without any human involvement, interpretation, or instruction. Handling big data manually is a time-consuming and tough affair. As personal as well as professional devices acquire tremendous amounts of memory, storage capacity, and processing power, the future definitely belongs to cognitive systems and machines.
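As a small, hedged example of such self-learning applied to operations data, assuming historical host metrics have been labeled with whether performance degradation followed, a classifier could be trained to flag at-risk hosts. The feature values and labels below are synthetic:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic samples: [cpu_pct, mem_pct, io_wait_pct], labeled 1 when the host
# subsequently showed performance degradation and 0 otherwise.
X = [
    [30, 40, 2], [35, 45, 3], [40, 50, 4], [45, 48, 3],
    [80, 85, 20], [85, 90, 25], [90, 88, 30], [78, 82, 18],
]
y = [0, 0, 0, 0, 1, 1, 1, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(X_train, y_train)

print("Held-out accuracy:", model.score(X_test, y_test))
print("Risk flag for a busy host:", model.predict([[88, 91, 27]])[0])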
