© Alasdair Gilchrist 2016

Alasdair Gilchrist, Industry 4.0, 10.1007/978-1-4842-2047-4_3

3. The Technical and Business Innovators of the Industrial Internet

Alasdair Gilchrist

(1)Bangken, Nonthaburi, Thailand

The advances in sensor technologies in recent times have been driven by the advent of high-speed, low-cost electronic circuits, a change in the way we approach signal processing, and corresponding advances in manufacturing technologies. The convergence of these synergetic fields has allowed sensor designers and manufacturers to take a completely novel approach, such as introducing intelligence for self-monitoring and self-calibration, thereby increasing the performance of their products. Similarly, the advances in sensor manufacturing technologies facilitate the production of systems and components with a low cost-to-performance ratio. This includes advances in microsystem technologies, where manufacturers are increasingly adopting techniques such as surface and bulk micromachining. Furthermore, initiatives exploring the potential of digital signal processing involve novel approaches for improving sensor properties. These improvements in sensor performance and quality mean that multi-sensor systems, which are the foundation of the Industrial Internet, can significantly enhance the quality and availability of information. These initiatives, and an innovative approach by designers, have led to new sensor structures, manufacturing technologies, and signal processing methods in individual and multi-sensor systems. However, it is the latest trends in sensor technology that have the most relevance to the Industrial Internet: the miniaturization of sensors and components, the widespread use of multi-sensor systems, and the increasing availability of wireless and autonomous sensors.

Previously, sensors that were embedded into devices or systems had to be hard-wired or rely on the host system for communication. Remote I/O devices could provide the communication interface and intelligence needed to connect applications to sensors that had no communication circuitry, but again they typically had to be hard-wired, which limited their deployment options in outdoor locations. Bluetooth and ZigBee are lightweight technologies that have transformed sensor design by providing an embedded miniature radio for short-distance communication. ZigBee, which has found numerous applications in the Industrial Internet due to its ability to build mesh networks that can span wide areas, is more prevalent in industrial applications than Bluetooth, which has its roots in the mobile phone industry. Developers heavily utilize Bluetooth in mobile accessories, applications, and short-distance communication. Furthermore, advances in low-power WAN radio technologies and protocols have enabled these to be embedded into sensors and remote I/O devices, which has facilitated their deployment outdoors, even at great distances from the host operation and management applications.

Miniaturization

However, all these communication technologies would be impractical were it not for the rapid advancement in sensor miniaturization. Miniaturization has progressed to the stage that manufacturers can reduce sensors to the size of a grain of sand. This means that sensors can now be embedded anywhere and in anything, such as the clothes we wear, the packaging of the food we eat, and even our bodies.

Embedding intelligence into the sensor has also accelerated the path to miniaturization, as has integrating multiple functions, such as temperature and humidity sensing, into a single design. For example, manufacturers that produce sensors that come fully calibrated, temperature compensated, and amplified reduce the number of components needed on the PCB, which helps reduce size and weight as well as cost.

Some examples of the scale of miniaturization and how it has enabled use-cases of the Industrial Internet are in the medical and health care industry. One such device is the humble reed switch. A reed switch is a passive component that requires no power or additional components to work, which is, as we will see, one of its great advantages. A reed switch senses the presence of a magnetic field when a magnet is nearby and closes its connections. Once the magnet goes away, it opens the connections. The problem is that it is difficult to miniaturize passive components and still get them to work. Consequently, reed switches were typically a minimum of 25mm long, but after miniaturization, they have scaled down to around 3mm. That might not sound like a lot but it has had a dramatic impact on their use in industry. Reed switches are used in industrial, medical, and aerospace designs, among others.

Two of the most critical areas for miniaturization are the electronic/semiconductor test equipment market and medical devices. Reed switches are essential in semiconductor testing, as they are required to switch digital pulses billions of times a second, and reed switches do this perfectly. They also have a place in medical implants: they are found in pill cameras, defibrillators, glucose monitoring devices, nerve stimulation devices, and many more in-body applications. Reed sensors are perfect for these applications, as they use no power. Unlike semiconductor-based sensors, which require batteries, reed sensors can sit in the body for many years without the need for removal.

Another advance in multi-sensor systems came about through the success of complementary technologies and through their proliferation. This was due to the popularity and acceptance of technology such as smartphones, systems-on-a-board, and even systems-on-a-chip (SoC) . These devices come packed with multi-sensors and the software to drive them. For example, an Apple iPhone, the Raspberry Pi, and the Arduino with extension shields all provide the tools to create multi-sensor devices that can sense and influence their analogue environment through their interaction with the digital world. The availability of these development kits has accelerated the design process, by allowing the production of proof-of-concept (PoC) models. They have driven innovation in the way we deploy multi-sensor devices into industrial system automation and integrate M2M with cyber-physical systems to create Industrial Internet of Things environments.
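
As a simple illustration of how quickly such a proof-of-concept can be put together, the sketch below polls two sensor readings on a Raspberry Pi-class board and emits a combined, timestamped record. The driver functions are hypothetical placeholders (simulated here), not a specific vendor or library API.

```python
import random
import time

def read_temperature():
    # Placeholder for a real sensor driver (e.g., an I2C temperature chip)
    return 20.0 + random.uniform(-0.5, 0.5)

def read_humidity():
    # Placeholder for a real humidity sensor driver
    return 45.0 + random.uniform(-2.0, 2.0)

def poll_sensors(interval_s=1.0, samples=5):
    """Poll the multi-sensor node and yield combined readings."""
    for _ in range(samples):
        yield {
            "timestamp": time.time(),
            "temperature_c": read_temperature(),
            "humidity_pct": read_humidity(),
        }
        time.sleep(interval_s)

if __name__ == "__main__":
    for reading in poll_sensors():
        print(reading)
```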

Cyber Physical Systems (CPS)

The Industrial Internet has come about due to the rapid advancements in digital computers in all their formats and vast improvements in digital communications. These disciplines are considered separate domains of knowledge and expertise, and there is a tendency to specialize in one or the other. Inter-disciplinary knowledge is therefore required to design and build products that combine information processing and networking; for example, a device with an embedded microprocessor and ZigBee, such as the Raspberry Pi or a smartphone. However, when we start to interact with the physical world, we have a physical domain to contend with, and that requires special knowledge of the physical and mechanical domain, such as that of a mechanical engineer. Therefore, it is necessary to identify early in the design process whether the product is to be an IT, network, or physical system, or a system that has all three: physical, network, and digital processing features. If it has all three, then it is said to be a cyber-physical system. In some definitions, the networking and communications feature is deemed optional, although that raises the question as to how a CPS differs from an embedded system.

Information systems that are embedded into physical devices are called "embedded systems". These embedded systems are found in telecommunication, automation, and transport systems, among many others. Lately, a new term has surfaced: the cyber-physical system (CPS). This distinguishes between microprocessor-based embedded systems and more complex information processing systems that actually integrate with their environment. A precise definition of cyber-physical systems (CPS) is that they are integrations of computation, networking, and physical processes. Embedded computers and networks monitor and control the physical processes, with feedback loops in which physical processes affect computations and vice versa.

Therefore, a cyber-physical system can be just about anything that has integrated computation, networking, and physical processes. A human operator is a cyber-physical system and so is a smart factory. For example, a human operator has physical and cyber components. In this example, the operator has a computational facility—their brain—and they communicate with other humans and the system through HMI (human machine interface) and interact through mechanical interfaces—their hands—to influence their environment.

Cyber-physical systems enable the virtual digital world of computers and software to merge through interaction—process management and feedback control—with the physical analogue world, thus leading to an Internet of Things, data, and services. One example of CPS is an intelligent manufacturing line, where the machine can perform many work processes by communicating with the components and sometimes even the products they are in the process of making.

An embedded system is a computational system embedded within a physical system; the emphasis is on the computational component. Therefore, we can think of all CPS as containing embedded systems, but the CPS’s emphasis is on the communications and physical as well as the computational domains.

CPS have many uses, as they can use sensors and other embedded systems to monitor and collect data from physical processes. These processes could be anything, such as monitoring the steering of a vehicle, energy consumption, or temperature/humidity control. CPSs, unlike embedded systems, are networked, which allows for the possibility of the data being available remotely, even globally. In short, cyber-physical systems make it possible for software applications to interact with events in the physical world. For example, a CPS can measure peaks in energy consumption in an electrical power grid (the physical process) and interact with it through its embedded computation and network functions.
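
The essence of that feedback loop can be captured in a few lines. The following minimal sketch uses an invented peak-demand threshold and placeholder sensor/actuator functions (simulated here) to show the monitor-compute-actuate cycle described above.

```python
import random
import time

PEAK_THRESHOLD_KW = 500.0   # illustrative peak-demand limit

def read_power_kw():
    # Placeholder for a networked grid sensor (simulated reading)
    return random.uniform(350.0, 650.0)

def shed_load(excess_kw):
    # Placeholder for the feedback command sent back to the physical process
    print(f"Feedback: shed {excess_kw:.1f} kW of load")

def feedback_loop(cycles=5, poll_interval_s=1.0):
    """Monitor the physical process, compute, and feed decisions back."""
    for _ in range(cycles):
        demand = read_power_kw()                    # physical -> cyber
        if demand > PEAK_THRESHOLD_KW:              # computation
            shed_load(demand - PEAK_THRESHOLD_KW)   # cyber -> physical
        time.sleep(poll_interval_s)

if __name__ == "__main__":
    feedback_loop()
```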

Unlike traditional embedded systems, which are often standalone devices with perhaps a communication capability built in, CPSs are designed to be networked with other complementary devices and so have physical I/O ports. CPS is closely related to robotics, and a robot is a good example of a CPS, as it has clear physical components that can manipulate its environment. Robots are good at sensing objects, gripping and transporting objects, and positioning them where required. In factories, robots are used to do repetitive jobs that often require heavy lifting or the positioning of large, awkward items on an assembly line. Robots have computation, network, and physical components that enable them to run software to do their tasks, such as reading sensor data, applying algorithms, and sending control information to the servomotors and actuators that control the robot's arms, levers, and mechanisms. Robots also communicate with back-end servers in the operations and management domain and with safety devices on the assembly line. In some deployments, such as stock handling in warehouses where robots retrieve or return stock from shelves and bins, robots work very quickly. They move at tremendous speeds, performing mechanical arm actions in a blur, and do not tire or need rest breaks, so they outperform humans in every regard. However, robots and humans do not always work together safely, so if a human comes into the working vicinity of a robot, the robot must slow down and perform its actions at a speed compatible with humans. Consequently, robots work better and much more efficiently in human-free environments.

Robots are an obvious example of a CPS, and they are now adapted to many IIoT use-cases: working in hazardous environments such as fire fighting or mining, doing dangerous jobs such as bomb disposal, and performing heavy-duty tasks such as lifting car assemblies on the production line. However, uses for other types of CPS abound, such as in situations that require precision, as in automated surgery, or coordination, as in air-traffic control systems.

Real-world applications in the Industrial Internet for CPS are mainly sensor-based applications where network-enabled CPS devices monitor their environment and pass this information back to an application on another networked node, where computation and analysis are performed and feedback is supplied if and when required. An example of this is collision detection and protection in cars, which, along with lane-change awareness systems, is also driven by CPS.

It is envisaged that advances in physical cyber-mechanics will greatly enhance CPS in the near future, with improvements in functionality, reliability, usability, safety, adaptability, and autonomy.

Wireless Technology

Wireless communication technology’s adoption into the enterprise had a somewhat inauspicious start back in the early 2000s. Deemed to be slow and insecure, many IT security departments shunned its use and others went further and banned it from the enterprise. Industry was not so quick to write it off though, as Wi-Fi had tremendous operational potential in certain industrial use-cases, such as in hospital communications, and in warehouses, where vast areas could not be easily covered by hard-wired and cabled solutions.

Gradually, wireless technology evolved from the early systems, which could only offer limited bandwidth of 1-2Mbps (and often a lot less) over limited distances of 50 feet, to high-performance Gbps systems. The evolution of the technology was a gradual step-process that took most of the decade, in which incremental improvements in performance were matched with improvements in security. Security was a major issue, as radio waves are open to eavesdropping: they broadcast over the air and anyone listening on the same frequency can intercept them. Additionally, access points broadcast an SSID, a network identifier, so that wireless devices can identify and connect to their home network. For the sake of convenience, early Wi-Fi access points were open, with no user credentials required for authorization, and data went unencrypted over the air or was protected by a very weak security protocol called WEP (Wired Equivalent Privacy).

These failings were unacceptable to enterprise IT, so Wi-Fi found itself a niche in the home, in SMBs (small and medium businesses), and in some industries where security concerns were not such an issue. Industrial uses for wireless technologies such as Wi-Fi, Bluetooth (which had similar early security setbacks), and later ZigBee were concerned with M2M data communications over short distances within secured premises, so the risk of data leakage through windows from overpowered or poorly positioned access points and antennae was not a problem. Similarly, in vast warehouses, Wi-Fi was a boon for M2M communication with remote-control vehicles, and the low speeds, low throughput, and poor encryption were not an issue. Therefore, wireless technology gained an initial niche in the industrial workplace that was to grow over the decade, and as speeds and security improved beyond all previous expectation, wireless communication became a driving force and an enabler of the Industrial Internet.

This transformation came about because of technological advances in wireless modulation, which enabled more bits or symbols to be carried over the same carrier frequency. For example, Wi-Fi went from 802.11b, with a realistic bit-rate of 1-2Mbps (theoretical 11Mbps), to 802.11n, with a realistic bit-rate of 70-90Mbps (theoretical 300Mbps), in less than a decade. Significantly, security also improved rapidly, with the flawed WEP eventually replaced by WPA2, a far more secure encryption and authentication protocol. The combination of these improvements was Wi-Fi's redemption in the IT enterprise, and it has now gained full acceptance. In some cases, it is the preferred communications medium, as it provides flexible and seamless mobility around the workplace.

Further amendments to the standards in 2013 have produced staggering results, with 802.11ac and 802.11ad producing theoretical bit-rates of 800Mbps and 6Gbps, respectively, due in part to advanced signal modulation through OFDM and MIMO (multiple-input/multiple-output) technology. They use multiple radios and antennae to achieve full-duplex multi-stream communications.

Additionally, amendments to the 802.11 protocol in 2015 produced 802.11ah, which is designed for low-power use and longer range. It was envisaged as a competitor to Bluetooth and ZigBee. One new feature of 802.11ah, which makes it differ from the traditional WLAN modes of operation, is that it has predetermined wake/doze periods to conserve power. In addition, devices can be grouped with many other 802.11ah devices to cooperate and share a signal, similar to a ZigBee mesh network. This enables neighbor area networks (NAN) of approximately 1 km, making it ideally suited to the Industrial Internet of Things.

However, it has not just been in Wi-Fi that we have experienced huge advancements; other wireless communication technologies have also claimed niche areas, especially those around M2M and the Internet of Things. Some of these wireless technologies have come about as a result of the need for improvements over existing alternatives such as Bluetooth and ZigBee, which were originally focused on high-end mobile phones and home smart devices, respectively, where power and limited range were not constraining factors. In the Industrial Internet, there are many thousands of remote sensors that must be deployed in areas with no power and some distance from the nearest access point, so they have to harvest energy or run from batteries. This makes low-power radio communication essential, as changing out batteries would be a logistical and costly nightmare.

The wireless technologies that address the specific needs of IoT devices include Thread, DigiMesh, WirelessHART, 802.15.4, low-power Wi-Fi, LoRaWAN, HaLow, Bluetooth Low Energy, ZigBee-IP NAN, DASH7, and many others.

However, it is not just these technologies that have accelerated the innovation driving the Internet of Things. We cannot overlook the platform that, coincidentally in many ways, was introduced in 2007 with the release of the first real smartphone, the iPhone.

The iPhone is a perfect mobile cyber-physical system with large processing power. It is packed with sensors and is capable of both wireless and cellular communications. It can run complex apps and interact with devices and its environment via a large touch screen, which is an excellent HMI (human machine interface). The importance of the introduction of the smartphone (Google's Android was soon to follow) was that both consumer and industrial IoT now had a perfect mobile CPS that was ubiquitous and highly acceptable. The problem previously was how humans were going to effectively control and manage IoT devices. The introduction of the smartphone, and later tablets, solved that problem, as there was now a mobile CPS solution to the HMI dilemma.

IP Mobility

It was around 2007 that wireless and smartphone technology transformed our perception of the world and our way of interacting with our environment. Prior to 2007 there was little interest in mobile Internet access via mobile devices, even though high-end mobiles and Blackberry handsets had been capable of WAP (Wireless Application Protocol). Device constraints and limited wireless bandwidth (2G) made anything other than e-mail a chore. The 3G cellular/mobile networks had been around for some time, but uptake was slow. That was to change with the arrival of the smartphone and the explosive interest in social media, through Facebook and the like. Suddenly, there was a market need for anytime, anywhere Internet access. People could check and update their social media sites, chat, and even begin to browse the Internet, as fast data throughput combined with larger touch-screen devices made the browsing experience tolerable. Little did we know at the time the disruptive impact that the smartphone, and later the tablet, would have on the way we worked and lived our lives.

Prior to 2007 and the advent of the smartphone and mobile revolution, IT governed the workplace as far as technology and employee work devices were concerned, under the banner of security and a common operating environment. However, with the proliferation of employees' own smartphones and tablets coming into the workplace, things were going to change. Employees were working on their personal iPhones and Android phones, which had capabilities that at least matched the work devices they loathed. This ultimately led to employees demanding to use their own devices to do their work, as they were more comfortable with the devices and the applications, and the devices were in their possession 24/7, so they could work whenever and wherever they wanted. This was referred to as BYOD (bring your own device), and it went global as a workplace initiative. Riding on the success of BYOD, it also became acceptable to store work data on personal storage; after all, 24/7 access to applications and reports was of little use to employees without access to data. So BYOC (bring your own cloud), although not nearly so well publicized, became ubiquitous as employees stored work data on personal cloud storage such as Box and Amazon.

Most important, however, is what these initiatives achieved. They transformed the way that corporate and enterprise executives viewed IT and employee work practices. The common consensus was that these initiatives fostered a healthier work/life balance, created an environment conducive to innovation, and increased productivity.

Regardless of the merits of BYOD, what it did was introduce mobility into the workplace as an acceptable practice. This meant IT had to make data and services available to employees even if they were working outside the traditional company borders. Of course, IP mobility was abhorrent to traditional IT and security, but they lost the war, because innovation and productivity ring louder in the C-suite than calls for security.

However, at the time, little did anyone know the transformative nature of IP mobility and how it would radically change the workplace landscape. With the advent of IP mobility, employees could work anywhere and at any time, always having access to data and company applications and systems through VPNs (virtual private networks). Of course, to IT and security this was a massive burden, and it logically led to deploying or outsourcing applications in the cloud via SaaS (software as a service).

Make no mistake, these were radical changes to the business mindset. After years of building security barriers and borders, security processes and procedures to protect their data, businesses were now allowing the free flow of information into the Internet. It proved, as we know now with hindsight, to be a brilliant decision and SaaS and cloud services are now considered the most cost effective ways to provide enterprise class software and to build SME data centers and development platforms.

IP mobility is now considered a necessity, with everything from software to telephone systems being cloud-hosted and available to users anywhere they have an Internet connection.

An example of IP mobility is that employees can access cloud services and SaaS anywhere, which makes working very flexible. Previously, with on-premises server-based software, employees could only access the application if they were within the company security boundaries, for example by using a private IP address within a specific range or by VPN from a remote connection. However, both of these methods were restrictive and not conducive to flexible working. The first method meant physically being at the office, and the second meant IT having to configure a VPN connection, which they were loath to do unless there was a justifiable reason.

Cloud-hosted software and services get around all those barriers by being available over the Internet from anywhere. Additionally, cloud-hosted services can integrate easily through APIs with other cloud-based applications so employees can build a suite of complementary applications that are tightly integrated, thus making their work experience more efficient and productive.

Network Functionality Virtualization (NFV)

Virtualization is a major enabler of the IoT; decoupling network functions from the underlying network topology is essential in building agile networks that can deliver the high performance required in an industrial environment. One of the ways to achieve this is through flexible network design, where we can remove centralized network components and distribute them as software wherever they are required. This is the potential NFV offers the Industrial Internet: the simplification, cost reduction, and increased efficiency of the network without forsaking security.

NFV is concerned with the virtualization of network functionality (routers, firewalls, and load-balancers, for example) into software, which can then be deployed flexibly wherever it is required within the network. This makes networks agile and flexible, something that traditional networks lack but that is a requirement for the IIoT. By virtualizing functions such as firewalls, content filters, and WAN optimizers and then deploying them on commodity off-the-shelf (CotS) hardware, the network administrator can manage, replace, delete, troubleshoot, or configure the functions more easily than when the functions were hard-coded into multi-service proprietary hardware.
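
The idea of network functions as relocatable software can be sketched very simply: each function is just code that inspects or transforms traffic, and the chain is an ordered list an operator can edit. The packet format and function behavior below are purely illustrative and are not taken from any particular NFV framework's API.

```python
# A minimal sketch of NFV-style service chaining: each "virtual network
# function" is software that transforms or filters a packet, and the chain
# can be reordered or redeployed without touching hardware.

def firewall(packet):
    blocked_ports = {23, 445}                      # illustrative policy
    return None if packet["dst_port"] in blocked_ports else packet

def content_filter(packet):
    banned_hosts = {"malware.example.com"}         # illustrative blocklist
    return None if packet.get("host") in banned_hosts else packet

def wan_optimizer(packet):
    packet["compressed"] = True                    # stand-in for real compression
    return packet

SERVICE_CHAIN = [firewall, content_filter, wan_optimizer]

def apply_chain(packet, chain=SERVICE_CHAIN):
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:                         # dropped by a function in the chain
            return None
    return packet

print(apply_chain({"dst_port": 443, "host": "example.com"}))  # passes the chain
print(apply_chain({"dst_port": 23, "host": "example.com"}))   # dropped by the firewall
```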

Consequently, NFV proved a boon for industry, especially for Internet service providers, which could control the supply of services or deny services depending on a service plan. For example, instead of WAN virtualization or firewall functions being integrated into the customer premises equipment (CPE), and thus freely available to those who know how to configure the CPE, a service provider could host all their virtual services on a vCPE.

Here lies the opportunity—NFV enables the service provider to enhance and chain their functions into service catalogues and then offer these new features at a premium.

Furthermore, NFV achieves this improvement in service provisioning and instantiation by ensuring rapid service deployment while reducing the configuration, management, and troubleshooting burden.

The promise of NFV is for the IIoT to:

  • Realize new revenue streams

  • Reduce capital expenditure

  • Reduce operational expenditure

  • Accelerate time to market

  • Increase agility and flexibility

Increasing revenue and decreasing provisioning time, while reducing operational burden and hence expense, are the direct results of NFV.

NFV is extremely flexible, insomuch as it can work autonomously without the need for SDN or even a virtual environment. However, to deliver on the promise, which is to introduce new revenue streams, reduce capital and operational expenses, reduce time to market for services, and provide agile and flexible software solutions running on commodity server hardware, it really does need to collaborate with and support a virtualized environment.

In order to achieve agile dynamic provisioning and rapid service deployment, a complementary virtualization technique is required and that is network virtualization.

Network Virtualization

Network virtualization provides NFV with the agility it requires to escape the confines of the network edge and the vCPE; it really is that important.

Network virtualization provides a bridged overlay, which sits on top of the traditional layer-2/3 network. This bridged overlay is a construction of tunnels that propagates across the network, providing layer-2 bridges. These tunnels are secure, segregated traffic flows per user, or even per service per user. They are comparable to VLANs but are not restricted to a limit of 4,096 instances. Instead, they use an encapsulation method to tunnel layer-2 packet flows through the traditional layer-3 network using the VXLAN protocol.
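
Conceptually, the tunnel is nothing more than the tenant's layer-2 frame wrapped inside an outer IP/UDP packet carrying a segment identifier. The sketch below models that VXLAN-style encapsulation with plain dictionaries; the addresses and VNI value are invented, though UDP port 4789 and the 24-bit VNI (over 16 million segments versus 4,096 VLANs) come from the VXLAN specification.

```python
# Conceptual sketch of how an overlay tunnel encapsulates a layer-2 frame
# inside a layer-3/UDP packet (VXLAN-style). Field values are illustrative.

def vxlan_encapsulate(inner_frame, vni, vtep_src, vtep_dst):
    """Wrap a tenant's layer-2 frame in an outer IP/UDP header with a VNI."""
    return {
        "outer_ip":  {"src": vtep_src, "dst": vtep_dst},
        "outer_udp": {"dst_port": 4789},      # IANA-assigned VXLAN port
        "vxlan":     {"vni": vni},            # 24-bit segment ID (~16M segments)
        "payload":   inner_frame,             # original layer-2 frame, untouched
    }

frame = {"src_mac": "aa:bb:cc:00:00:01", "dst_mac": "aa:bb:cc:00:00:02",
         "data": "tenant traffic"}
print(vxlan_encapsulate(frame, vni=5001,
                        vtep_src="10.0.0.1", vtep_dst="10.0.0.2"))
```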

The importance of this bridged overlay topology (tunnels) to NFV and the IIoT is that it provides not just a method for secure multi-tenancy via a segregated tunnel per user/service, but also provides real network flexibility.

What this means in practical terms is that the administrator no longer has to connect a physical firewall or load balancer inline on the wire at a particular aggregation point of the network. Instead, there is a far more elegant solution.

An administrator can spin up, copy over, and apply individual VNFs to the specific customer/service tunnel, just as if they were virtual machines. This means there is terrific flexibility with regard to deploying customer network functions, and they can be applied anywhere in the customer's tunnel.

Consequently, the network functions no longer have to reside on the customer’s premises device. Indeed some virtual network functions can be pulled back into the network to reside on a server within a service provider’s network.

Network virtualization brings NFV inside the CSP network onto servers that are readily accessible and easy for the CSP to manage, troubleshoot, and provision. Furthermore, as most of the network functionality and configuration is carried out inside the service provider's POP (point of presence), there are no longer so many truck-rolls to customers' sites. Centralizing administration, configuration, provisioning, and troubleshooting within the service provider's own network greatly reduces operational expense and improves service provisioning and deployment, which provides agile and flexible service delivery.

One last virtualization technology that plays a part in a high-performance network cannot be ignored: the Software Defined Network (SDN).

SDN (Software Defined Networks)

There is much debate about the relationship between NFV and SDN , but the truth is that they are complementary technologies and they dovetail together perfectly. The purpose of SDN is to abstract the complexities of the control plane from the forwarding plane.

What that means is that it removes the logical decision making from network devices and simply uses the device's forwarding plane to transmit packets. The decision-making process moves to a centralized SDN controller.

This SDN controller interacts with the virtualized routers via southbound APIs (OpenFlow) and with higher applications via northbound APIs. The controller makes intelligent judgments on each traffic flow passing through a controlled router and tells the forwarding plane how to handle the packets in the optimal way. It can do this because, unlike the router, it has a global view of the entire network and can see the best path to any destination without waiting for network convergence.

However, another feature of SDN makes it a perfect fit with the IIoT and that is its ability to automate, via the SDN controller, the fast real-time provisioning of all the tunnels across the overlay, which is necessary for the layer-2 bridging to work.

SDN brings orchestration, which enables dynamic provisioning, automation, coordination, and management of physical and virtual elements in the network. Consequently, NFV and SDN working in conjunction can create an IIoT network virtual topology that can automate the provisioning of resources and services in minutes, rather than months.

What Is the Difference Between SDN and NFV?

The purpose of SDN and NFV is to control and simplify networks; however they go about it in different ways. SDN is concerned primarily with separating the control and the data planes in proprietary network equipment. The rationale behind decoupling the forwarding path from the control path is that it bypasses the router’s own internal routing protocols running in its control plane’s logic.

What this means is that the router is no longer a slave to OSPF or EIGRP algorithms, the traditional routing mechanisms that determine the most efficient or shortest path between communicating nodes. These algorithms were designed for a more peaceful and graceful age.

Instead, the SDN controller will assume control. It will receive the first packets in every new flow via the southbound OpenFlow API and determine the best path for the packets to take to reach the destination. It does this using its own global view of the network and its own custom algorithms.

The best path an SDN controller chooses will not necessarily be based on the shortest path, as with most conventional routing protocols. Instead, because the controller is designed to be programmable, the programmer may take many constraints into consideration, such as congestion, delay, and bandwidth.
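
A minimal sketch of that programmability is shown below, using the networkx library to stand in for the controller's global topology view. The cost function, link figures, and switch names are invented; a production controller would also push the resulting forwarding rules to each switch via OpenFlow, which is not shown.

```python
# Programmable path selection: instead of hop count alone, the cost function
# mixes delay, congestion (utilization), and bandwidth. Topology is invented.
import networkx as nx

def link_cost(delay_ms, utilization, bandwidth_gbps):
    # Custom cost: penalize delay and congestion, reward high-capacity links
    return delay_ms * (1 + utilization) / bandwidth_gbps

G = nx.Graph()
G.add_edge("s1", "s2", cost=link_cost(5, 0.2, 10))
G.add_edge("s2", "s4", cost=link_cost(5, 0.9, 10))   # heavily congested link
G.add_edge("s1", "s3", cost=link_cost(8, 0.1, 40))
G.add_edge("s3", "s4", cost=link_cost(8, 0.1, 40))

# The controller's global view lets it compute the path in one place.
path = nx.shortest_path(G, "s1", "s4", weight="cost")
print(path)   # routes around the congested s2-s4 link, via s3
```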

Smartphones

At the heart of all the recent trends in IoT and machine learning is the smartphone, and every day we see innovations that center on the device as a controller, a system dashboard, a security access key, or a combination of all three, enabling a myriad of applications and analytic tools. The smartphone, because it has huge consumer adoption and market penetration (at levels in excess of 90% in developed countries), enables IoT innovation. Indeed, it is because people always have their smartphones in hand that mobile banking and NFC cardless payments have proliferated.

An example of the smartphone's importance to IIoT innovation is that it is the primary human machine interface (HMI). Consider Ford's innovative approach to in-car infotainment systems to see how industry is approaching future design. Car manufacturers design and build a car to last a first-time owner 10 years; however, designing and constructing the body shape and the underlying mechanics is troublesome enough without having to consider the infotainment system, which is likely to be out of date within five years. The solution that Ford and other car manufacturers came up with was to supply a base system, a visual display and sound system, that integrates with a smartphone through a wireless or cable connection and via software APIs. By doing this, Ford circumvented the problem of the infotainment system being outdated before the car; after all, the infotainment system now resides on the owner's smartphone, and an upgrade is dependent on a phone upgrade. The point here is that only through a commonly held item, one that the driver would likely always have in their possession, would this design be feasible. It would not work with, for example, a laptop.

Similarly, though still just a project, Ford is looking at drive-train control for their cars. What this would mean is that instead of Ford building the same class of car in economic, standard, and sports variants, they could produce one model, and the owner could control the drive-train via a smartphone application. Therefore, the family car could be either a sedate, economical car for school runs or a high-performance gas-guzzler on the weekends, depending on the smartphone app. The outlook here is that cars would not become commodity items, as their performance could be temporarily altered by a smartphone application to suit the driver or the circumstances.

Smartphones appear to be the HMI device of choice for IoT application designers, as can be seen in most remote control applications. The smartphone is certainly the underpinning technology in consumer IoT where control and management of smart devices is through a smartphone application rather than physical interaction. However, it is not as simple as convenience or pandering to the habits of the remote control generation. Smartphones are far more intelligent than the humble remote control and can provide much more information and feedback control.

Take, for example, the IoT capabilities of a smartphone. A modern Android phone or iPhone comes packed with sensors, including an accelerometer, linear acceleration sensor, magnetometer, barometer, gravity sensor, gyroscope, light sensor, and orientation sensor, among others. All of these sensors, in addition to functional capabilities such as a camera, microphone, computer, storage, and networking, can provide the data inputs to IoT applications, and the resulting information about the phone's environment can be acquired, stored, analyzed, and visualized using local streaming application tools.

Smartphones are not just HMIs or remote controls; they are sensors, HMIs, and application servers, and they can provide the intelligence and control functions of highly sophisticated systems, such as infotainment and drive-train technology. However, smartphones will only be at the edge of the proximity and operations and management domains, as they are not yet, nor likely to be in the near future, capable of handling Big Data: petabytes of unstructured and structured data for predictive analysis.

Deeper analysis, for example predictive analysis of vast quantities of Big Data, will still be performed in the cloud. Importantly, though, the fast analysis of data and feedback that industrial applications require in real time will be performed closer to the source, and high-performance local servers are presently the most likely candidate.

However, the embedded cognitive computing ability in a smartphone will advance in the coming years, taking advantage of the sensors and the data they produce. Streaming analytic algorithms will enable fast fog-like analysis of sensor data streams at local memory speed without recourse to the cloud. As a result, smartphones will act as cognitive processors that will be able to analyze and interact with their environment due to embedded sensors, actuators, and smart algorithms.

An example from the Industrial Internet is in retail. Smart devices with the appropriate apps loaded can determine the location of a customer in a supermarket and detect what he or she is viewing. This is possible via RFID tags on products, which have very short range, so the phone will only detect the products directly in front of the customer. The app can then describe to the user, through the display or, importantly, through the speaker for those who are visually impaired, what he or she is viewing: for example, the type of product, the price, any discount, and the calorific or nutrient data normally declared on the label. Knowing a user's location, activity, and interests enables location-based services (LBS), such as instantaneously providing a coupon for a discount.

The Cloud and Fog

Cloud computing is similar to many technologies that have been around for decades. It really came to the fore, in the format that we now recognize, in the mid 2000s with the launch of Amazon Web Services (AWS). AWS was followed by Rackspace, Google Compute Engine, and Microsoft Azure, among several others. Amazon's vision of the cloud was built on hyper-provisioning, insomuch as they built massive data centers with hyper-capacity in order to meet their web-scale requirements. Amazon then took the business initiative to rent spare capacity to other businesses, in the form of leasing compute and storage resources on an as-used basis.

The cloud model has proved to be hugely successful. Microsoft and Google followed Amazon's lead, as did several others such as IBM, HP, and Oracle. In essence, cloud computing still follows Amazon's early pay-as-you-use formula, which makes it financially attractive to SMEs (small to medium enterprises), as the costs of running a data center and dedicated IT and network infrastructure can be crippling. Consequently, many cash-strapped businesses, for example start-ups, elected to move their development and application platforms to the cloud, as they only paid for the resources they used. When these start-ups became successful, and there were a few hugely successful companies, they remained on the cloud due to the same financial benefits—no vast capital and operational expenditure to build and run their own data centers—but also because the cloud offered much more.

In order to understand why the cloud is so attractive to business, look at the major cloud providers' business model. Amazon AWS, Microsoft Azure, and Google Cloud dominate the market, which is hardly surprising, as they have the data centers and the financial muscle to operate them. Amazon, the early starter, launching in 2005, built on that head start, adding services and features to its cloud year after year. Microsoft and Google came later, with full launches around 2010 to 2012, although with limited services. They have not wasted time in catching up, and both now boast vast revenue from their cloud operations.

To explain how the cloud and fog relate to the Industrial Internet, we need to look at the services cloud providers deliver. In general, cloud providers dynamically share their vast resources in compute, storage, and networks among their customers. A customer pays for the resources they use on a 10-minute or hourly basis, depending on the provider, and nothing else. Setup and configuration are automatic, and resources are elastic. What this means is that if you request a level of compute and storage and then find that demand far exceeds it, the cloud will stretch to accommodate the demand without any customer interaction; the cloud manages the demand dynamically by assigning more resources.

There are three categories of service—IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). Each category defines a set of services available to the customer, and this is key to the cloud—everything is offered as a service. This is based on the earlier SOA (service-oriented architecture), where web services were used to access application functions. Similarly, the cloud operators use web services to expose their features and products as services.

  • IaaS (Infrastructure as a Service)—AWS's basic product back in 2005, it offered Amazon's excess infrastructure for lease to companies. Instead of buying hardware and establishing a server room or data center, an SME could rent compute, storage, and network from Amazon, the beauty being they would only pay for what they used.

  • PaaS (Platform as a Service)—Came about as Microsoft and others realized that developers required not just infrastructure but access to software development languages, libraries, APIs, and microservices in order to build Windows-based applications. Google also supplies PaaS to support its many homegrown applications such as Android and Google Apps.

  • SaaS (Software as a Service)—The precursor to the cloud in the form of web-based applications such as Salesforce.com, which launched in 1999. SaaS was a new way of accessing software, instead of accessing a local private server hosting a copy of the application, users used a web browser to access a web server-based shared application. SaaS was slow to gain acceptance until the mid 2000s, when broadband Internet access accelerated, thus permitting reliable application performance.

In the context of the Industrial Internet, the cloud offers affordable and scalable infrastructure through IaaS. It also provides elasticity, in so much as resources can scale on demand; therefore, there is no need to over-provision infrastructure and networks. With the cloud you can burst well beyond average usage, as the cloud assigns resources as required, albeit at a price. Similarly, the cloud providers offer virtual and persistent storage, which is also scalable on demand. This is a major selling point for cloud versus data center deployments, as the capacity planning requirements for the data storage of the Industrial Internet can be vast.

For example, an airliner’s jet engines generate terabytes of data per flight, which is stored onboard the aircraft and sent to the cloud once the aircraft lands, and that is just one aircraft.

Therefore, having elastic compute and storage facilities on demand while only paying for the resources used is hugely attractive financially, to start-ups and even large, cash-rich companies. Additionally, PaaS provides huge incentives for the IIoT, in so much as the cloud providers can supply development environments and tools to accelerate application development and testing. For example, Microsoft Azure provides support for .NET applications, and Google provides tools to support its own in-house applications, such as Big Data tools and real-time stream processing.

From a network perspective, the major cloud providers, Amazon, Microsoft, and Google, provide potentially millions of concurrent connections, and Google runs its own fiber optic network, including its own under-sea cables.

The cloud is a huge enabler for the Industrial Internet, as it provides the infrastructure and performance that industry requires while at the same time being financially compelling. However, there is one slight problem. Latency, which is the time it takes data to be transmitted from a device and then be processed in the cloud, is often unacceptable. In most cases, this is not an issue, as data can be stream-analyzed as it enters the cloud and then stored for more thorough Big Data analytics later. However, there are some industrial use-cases where real time is required, for instance in manufacturing. In some, if not most, instances within manufacturing, a public cloud scenario would not be acceptable, so what are the alternatives?

  • Private cloud—An internal or external infrastructure, either self-managed or managed by a third party, but with single tenancy that is walled off from other customers.

  • Public cloud—A community that shares all the resources based on a per-usage model; resources are supplied on-demand and metered. This is a multi-tenancy model with shared resources, such as storage and networking; however, tenant IDs prevent customers viewing or accessing another customer’s data.

  • Hybrid cloud—A combination of the private and public clouds, which is quite common due to security and fears over sensitive data. For example, a company might store its highly sensitive data in a private internal data center cloud and have other applications in AWS.

  • Multi-cloud—For example, a company might have applications in AWS and developers working on Windows in Azure with Android developers using Google Cloud, and other IT applications stored on other public clouds.

It is likely that in an Industrial Internet context a private cloud would be more attractive, as it is inherently private, although it still leaves the dilemma of latency, jitter, and packet loss if the private cloud is reached over the Internet. The alternative of a private cloud hosted internally is also fraught with difficulty. Hosting a private cloud requires one of three methods—host the cloud on existing infrastructure and manage it in-house, host the cloud on existing infrastructure and outsource the management to a third party, or outsource the cloud management to a third party on the Internet.

There is a fourth way, which is to use open source software. OpenStack can be downloaded and installed, though it takes skill and patience. It is not recommended unless there are in-house cloud skills and a deep understanding of each business unit's application requirements. Remember, by setting up a private cloud on in-house infrastructure, the effect is to virtualize and share all resources. No longer will HR's application run happily in splendid isolation on its dedicated server, and the same goes for manufacturing's ERP server and customer care's CRM and VoIP. But what happens when you start sharing all the resources?

In addition, private cloud implementations will be costly, time consuming, and, unless diligently deployed, potentially insecure. So what is the alternative for industrial applications that require low latency and deterministic performance?

The Fog

Cloud systems are generally located in the Internet, which is a large network of unknown network devices of varying speeds, technologies, and topologies that is under no direct control. As a result, traffic can be routed over the network but with no quality of service measures applied, as QoS has to be defined at every hop of the journey. There is also the issue of security as data is traversing many autonomous system routers along the way, and the risk of confidentiality and integrity being compromised is increased the farther the destination is away from the data source.

IIoT data is very latency sensitive and requires mobility support in addition to location awareness. However, IIoT benefits from the cloud model, which handles data storage, compute, and network requirements dynamically in addition to providing cloud based Big Data analysis and real-time data streaming analytics. So how can we get the two requirements to coexist?

The answer is to use the fog.

The fog is a term first coined by Cisco to describe a cloud infrastructure that is located close to the network edge. The fog in effect extends the cloud through to the edge devices, and similar to the cloud it delivers services such as compute, storage, network, and application delivery. The fog differs from the cloud by being situated close to the edge of the proximity network border, typically connecting to a service provider's edge router, thereby reducing latency and improving QoS.

Fog deployments have several advantages over cloud deployments, such as low latency, very low jitter, client and server only one hop apart, definable QoS and security, and support for mobility, location awareness, and wireless access. In addition, the fog does not sit in a centralized cloud location but is distributed around the network edge, reducing latency and bandwidth requirements, as data is not aggregated over a single cloud channel but distributed to many edge nodes. Similarly, the fog avoids slow response times and delays by distributing workloads across several edge node servers rather than a few centralized cloud servers.

Some examples of fog computing in an IIoT context are:

  • The fog network is ideally suited to the IIoT connected-vehicles use-case, as connected cars have a variety of wireless connection methods, such as car-2-car and car-2-access point, which can use Wi-Fi or 3G/4G communications but require low-latency responses. Along with SDN network concepts, fog can address outstanding issues with vehicular networks, such as long latency, irregular connections, and high packet loss, by supplementing vehicle-vehicle communications with vehicle-infrastructure communication and, ultimately, unified control.

  • Fog computing addresses many of the severe problems cloud computing has with network latency and congestion over the Internet; however, it cannot completely replace cloud computing, which will always have a place due to its ability to store Big Data and perform analytics on massive quantities of data. As Big Data analytics is a major part of the IIoT, cloud computing will also remain highly relevant to the overall architecture.

Big Data and Analytics

Big Data describes data that is just too large to be managed by traditional databases and processing tools. These large data structures can be, and usually are, made up of a combination of structured and unstructured data from a variety of sources, such as text, forms, web blogs, comments, video, photographs, telemetry, GPS trails, IM chats, news feeds, and so on. The list is almost endless. The problem with these diverse data structures is that they are very difficult to incorporate or analyze in a traditional structured database. Companies, however, need to analyze data from all sources to benefit from the IIoT; after all, knowledge such as customer trends and operational efficiency data can be distilled from all sorts of data.

However, in the IIoT the concern will be in handling vast quantities of unstructured data as well as M2M sensor data from thousands or more devices. Therefore, in order to gain value from this data there has to be an alternative way to handle and manage it.

Companies such as Walmart and Google have been processing Big Data for years and mining valuable hidden correlations from the data, but it has been done at great expense and with vast arrays of server and storage technology. However, they have undoubtedly been successful in their pursuit of handling and analyzing all the data they can retrieve from their operations. The Industrial Internet will require a similar approach as data from thousands of sensors will require managing and processing for valuable insights.

In industry, particularly in manufacturing, health services, power grids, and retail, among others, handling and managing vast amounts of sensor data is nothing new; they have managed their production or services like this for years. For example, in production, a sensor detects an event and sends the appropriate signal to an operational historian, which is a database that logs and stores data coming from sensors. The data stores are optimized to perform time-dependent analysis on the stored data by asking questions such as, how did this hour's production deviate from the norm? This database system manages this through complementary software tools designed to provide reporting and to detect trends and correlations.
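
The historian-style question "how did this hour's production deviate from the norm?" can be expressed in a few lines of pandas. The minute-level production figures below are simulated purely for illustration.

```python
import numpy as np
import pandas as pd

# Simulated minute-level production counts for one day
index = pd.date_range("2016-01-01", periods=24 * 60, freq="min")
readings = pd.Series(100 + np.random.randn(len(index)) * 5, index=index)

hourly = readings.resample("H").sum()                 # production per hour
baseline = hourly.mean()                              # the "norm"
deviation_pct = (hourly - baseline) / baseline * 100  # deviation from the norm

print(deviation_pct.round(2).tail())
```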

The technology is capable of collecting sensor data from hundreds of sensor types and is developed to survive in hostile environments and to store data in the event that the database becomes unavailable. This is the long-established method for handling sensor data, so how will this change in the Industrial Internet?

The recent advances in sensor miniaturization and wireless radio technology have created a huge surge in the deployment of sensors, and consequently in sensor data. These advances led to the introduction of micro-electro-mechanical systems (MEMS). Sensors are now small enough to be deployed anywhere and can communicate over wireless technology. This has resulted in an explosion of data travelling from sensors to systems, and sometimes back again, which is way beyond the levels of a few years ago. The IIoT is now seen as a major contributor of Big Data and as such requires modern technologies to handle huge data sets of unstructured and dirty data.

Fortunately for industry, cloud services are available to manage Big Data, with unlimited storage on demand and open source technologies such as Hadoop, a distributed data storage and processing framework, often run in the cloud, that is optimized to handle unstructured and structured data. Similarly, there are tools for analytics such as MapReduce, a processing model developed by Google for its web search index. Hadoop utilizes its own file system, HDFS, and works by assigning chunks of data to each server in its distributed storage system. Hadoop then performs a MapReduce operation before writing the results back into HDFS. This method is great for batch-job analytics; however, many IIoT use-cases will require fast, real-time or close to real-time analytics on the data while it is in flight.
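
To make the pattern concrete, here is a single-machine sketch of the map-shuffle-reduce flow that Hadoop distributes across a cluster; the sensor records and the average-per-sensor reduction are invented for illustration.

```python
from collections import defaultdict

records = [
    {"sensor": "pump-1", "temp_c": 41.2},
    {"sensor": "pump-2", "temp_c": 38.7},
    {"sensor": "pump-1", "temp_c": 43.9},
]

def map_phase(record):
    # Map each record to (key, value) pairs
    yield record["sensor"], record["temp_c"]

def reduce_phase(key, values):
    # Reduce each group of values to a single result (average per sensor)
    return key, sum(values) / len(values)

# Shuffle: group intermediate values by key, as the framework would
groups = defaultdict(list)
for record in records:
    for key, value in map_phase(record):
        groups[key].append(value)

results = [reduce_phase(k, v) for k, v in groups.items()]
print(results)
```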

Therefore, knowing which technologies are needed depends on the type of Big Data, which can be characterized by a number of Vs. They are each discussed next.

Volume

The ability to analyze large volumes of data is the whole purpose of Big Data. For example, the larger the data pool, the more we can trust its forecasts. An analysis on a pool of 500 factors is more trustworthy than one on a pool of 10.

Velocity

Velocity is concerned with the speed at which data comes into the system and how quickly it requires analysis. Some data, such as M2M sensor data, will require in-flight or in-memory analysis; other data may be stored and analyzed later, once in Hadoop. An example of a long-standing use for high-velocity analysis is stock market and financial data. Financial institutions and banks have been analyzing this type of data at velocity for years, even going to the lengths of running a private submarine cable between exchanges in London and New York in order to shave a millisecond off the handling time of this valuable high-velocity Big Data.

Data velocity in the IIoT context, or streaming data as it is known, requires handling and analysis in real time, or as close to real time as possible. This constraint puts additional pressure on the data storage and handling systems. The challenge lies in the way the Industrial Internet tends to work: devices send sensor data back to an operations and management domain for processing. That data is typically sent to indicate a change of status in an entity or condition being monitored, and the sending device might be expecting a response.

This method of control feedback is very common in industry, and the system processing the data must be able to handle the data streams arriving from device sensors, process the data in flight (in memory), and identify and extract the data it requires, before it can take an appropriate action. For example, say a sensor on a high-speed motor within a centrifuge reports that it has detected a dangerous temperature, and simultaneously other sensors monitoring the motor report erratic performance and vibration. The system would want to know about this immediately, not as the result of a batch job but in real time, so that it could react and send a feedback signal to shut down the errant motor.
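
A minimal sketch of this kind of in-flight handling is shown below in Python; the motor names, thresholds, and shutdown routine are hypothetical stand-ins for the real control loop.

TEMP_LIMIT_C = 95.0
VIBRATION_LIMIT_MM_S = 12.0

def shut_down(motor_id):
    # Stand-in for the real control feedback sent back to the device.
    print(f"FEEDBACK: shutting down motor {motor_id}")

def handle_reading(reading):
    """Inspect one in-flight reading from the centrifuge motor sensors."""
    if (reading["temp_c"] > TEMP_LIMIT_C
            or reading["vibration_mm_s"] > VIBRATION_LIMIT_MM_S):
        shut_down(reading["motor_id"])

# Simulated stream: in practice these readings would arrive over the network.
stream = [
    {"motor_id": "M7", "temp_c": 82.1, "vibration_mm_s": 3.0},
    {"motor_id": "M7", "temp_c": 97.4, "vibration_mm_s": 14.2},  # triggers shutdown
]
for reading in stream:
    handle_reading(reading)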

Variety

Another characteristic of Big Data is that it is typically messy and comes from a variety of sources, such as raw sensor feeds or web service APIs, that do not fit neatly into organized relational structures, hence the need for NoSQL databases. A typical use of Big Data processing is to extract meaning from unstructured data so that it can be fed as structured data into an application, and this requires cleaning it up first. Sensor data is notoriously dirty, as timestamps are often missing or lost in communications, and therefore requires considerable tidying up before processing.
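
The following short Python/pandas sketch illustrates the sort of tidying described here, dropping readings with missing timestamps and interpolating the gaps; the column names and values are invented for illustration.

import pandas as pd

raw = pd.DataFrame({
    "timestamp": ["2016-05-01 10:00", None, "2016-05-01 10:02", "2016-05-01 10:04"],
    "temp_c": [20.1, 20.3, None, 21.0],
})

clean = (
    raw.dropna(subset=["timestamp"])                  # discard rows with no timestamp
       .assign(timestamp=lambda d: pd.to_datetime(d["timestamp"]))
       .set_index("timestamp")
       .resample("1min").mean()                       # regular one-minute grid
       .interpolate()                                 # fill gaps left by lost readings
)
print(clean)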

An example of this kind of real-time handling of sensor Big Data is found in Smart City projects. For example, if a traffic monitoring system detects congestion or an accident from its roadside sensors, it can instantaneously send control feedback to change traffic lights, thereby easing traffic flows to reduce congestion.

Veracity

The problems with Big Data appear when we go beyond collecting and storing vast amounts of data, analyze the data stores along the first three Vs, and ask whether the data is actually true.

The problem is that data is not only dirty or unreliable, it can be downright false. For example, say you harvest data from multiple sources of dumb sensors. You aggregate this data and transform it into information, on the basis that data leads to information that leads to knowledge. If the data was worthless to begin with, the results will be as well (garbage in, garbage out, as they say).

Value

Since not all data is equal, it becomes pertinent to decide what data to collect and analyze. It has become a popular practice within industry and enterprises to collect everything; indeed, the Big Data idea is to store everything and throw nothing away! The problem here is that data is valuable only if you can determine its relevance to business value. After all, a Big Data set means nothing unless data analysts have programmed software to retrieve the value from it. You must know what you are looking for. Big Data is not going to produce correlations and trends unless the algorithms have been programmed to search for such things.

Visibility

Visualizing data is hugely important, as it allows people to understand trends and correlations better. Visualization software can present data in many formats, such as dashboards and spreadsheets or through graphical reports. Whichever way it is presented, the data is rendered in a human-readable format, making it easier to understand.

However, sometimes visibility means sharing data among partners and collaborators, and that is both a good thing and potentially hazardous. In the context of the Industrial Internet it would be unwise to expose information to potential competitors, as it could lead to others stealing highly sensitive data. For example, a lathe in a factory will be programmed with a design template that determines the design and production of a specific product. Allowing that information to leak onto the Internet could be disastrous for the company. Say you contract an offshore company to produce one million shirts. Now that they have the template, what is stopping them from running off two million extras to sell on the black market?

The big point about Big Data is that it requires vast amounts of intellect to distil business value from it. Creating data lakes will not automatically produce business intelligence. If you do not know the correct question to ask of the data, how can you expect a sensible answer?

This is where we have to understand how or if machines think and collaborate.

M2M Learning and Artificial Intelligence

Big Data empowers M2M learning and artificial intelligence: the larger the pool of data, the more trustworthy the forecasts, or so it would seem. M2M learning is very important and sometimes very simple; consider, for example, a multiple-choice exam. Say the exam wants to determine the student's knowledge level, so it asks a question at random, categorized as difficult, medium, or easy. If the student answers incorrectly, the program might ask another question on the same subject at a different level. Its objective is not to fail the student but to discover the student's understanding of the topic. In simplistic terms, this is called machine learning.
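
A toy Python sketch of that adaptive idea might look as follows; the three difficulty levels and the simple up/down rule are assumptions made purely for illustration.

LEVELS = ["easy", "medium", "difficult"]

def next_level(current, answered_correctly):
    """Pick the difficulty of the next question from the last answer."""
    idx = LEVELS.index(current)
    idx = min(idx + 1, len(LEVELS) - 1) if answered_correctly else max(idx - 1, 0)
    return LEVELS[idx]

level = "medium"
for correct in [False, False, True]:   # simulated answers from the student
    level = next_level(level, correct)
    print("next question difficulty:", level)
# -> easy, easy, medium: the program homes in on the student's level.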

If we asked Google, with its vast arrays of computing power, to search for a nonsensical input, would we expect a sensible answer? Likely not. And herein lies the problem: despite companies collecting and harvesting vast quantities of data and constructing data lakes of unstructured data, how do they analyze this data in order to extract valuable information?

The answer is that currently they cannot; they can collect data in huge quantities, store it in distributed data storage facilities such as the cloud, and even take advantage of advanced analytical software to try to determine trends and correlations. However, we are not yet able to fully achieve this feat, as we do not know the right questions to ask of the data. What we require are data scientists: people skilled in understanding and trawling through vast quantities of unstructured data in search of sense and order, to distinguish the patterns that ultimately deliver value.

Data scientists can use their skills in data analysis to determine patterns in the data, which is the core of M2M communication and understanding, while at the same time asking the relevant questions that derive true value from the data and empower business strategy.

After all a wealth of unstructured data—a data lake—means nothing unless you can formulate the correct questions to interrogate that vast data source in order to reveal those hidden correlations of potential information that add value to the company’s strategic plan.

Consequently, data scientists have become the most sought-after professionals in the business. The Industrial Internet cannot exist without Big Data and intelligent analysis of the data delivered, and that requires skilled staff who understand data, algorithms, and business.

However, putting aside fear of robots and malicious intelligent machines, we can clearly see that even today we have Big Data and analytics that enable AI and that deliver business value. For instance, IIoT systems listen to sensors that interact with their environment, and they can sense and react quicker than any human can. These sensors are our eyes, ears, nose, and fingers in the industrial world, allowing us to respond proactively and reactively to our environment.

The huge benefit is that combining M2M communication with real-time analytics creates a cognitive computing system capable of detecting or predicting flaws, failures, or anomalies in the system that a human operator could not detect.

There are various types of machine learning and artificial intelligence, with ever-shifting definitions. For example, there are three general classifications of artificial intelligence: the classical AI approach, simple neural networks, and biological neural networks. Each has its own defining characteristics.

Consider the classic AI approach, which has been ongoing since the 1960s; its scientific aim was to replicate the kinds of intelligence that humans find easy. For example, classic AI strived to find ways for machines to mimic human ability with regard to speech, facial, and text recognition. This approach has met with mixed results: speech and text recognition have been far more successful than facial recognition, especially when compared to human performance. The problem with the classic AI approach is that the machine's performance had to be judged and corrected by a human tutor so that it learned what was correct and what wasn't.

An alternative approach, again from the 60s and 70s, settled on the neural network, which was intended to mimic the way the human brain works. In this scenario, the machine learns without any human intervention; it simply makes sense of the data using complex algorithms. The problem was that this required the machine to process vast quantities of data and look for patterns, and such data was not always readily available at the time. We have since discovered that the simple neural network is a bit of a misnomer, as it bears little resemblance to real networks of neurons. In fact, this approach is now termed deep learning, and it is suitable for analysis of large, static data sets.
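
As a hedged illustration of the simple neural network idea, here is a single artificial neuron (a perceptron) trained in plain Python on a tiny, static, invented data set; real deep-learning systems stack many such layers and need vastly more data.

import random

random.seed(0)
# Toy data: (vibration, temperature) readings labelled 1 = faulty, 0 = healthy.
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.8, 0.9), 1), ((0.9, 0.7), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

def predict(x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

# Perceptron learning rule: nudge the weights whenever a prediction is wrong.
for _ in range(20):
    for x, label in data:
        error = label - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # expected: [0, 0, 1, 1]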

The alternative is the biological neural network, which expands on the neural theme and takes it several steps further. With this AI model, the biological neural network does actually try to mimic the brain's way of learning, using what is termed sparse distributed representation. Biological neural networks also take into consideration that memory is a large part of intelligence and that it is primarily a sequence of patterns. Furthermore, learning is behavior based and must be continuous.

Initially, we might think well, so what, until that is we see how each model is used:

  • Classic AI—This model is still used as it is very efficient in question answering, such as IBM’s Watson and Apple’s Siri.

  • Neural networks—Data mining in large static data sets with the focus on classification and pattern recognition.

  • Biological neural networks—They have many uses, typically in security systems tasked with detecting uncharacteristic behavior, as their strengths lie in prediction, anomaly detection, and classification.

To see how these would work in action, consider that each would take a different approach to solving the same problem, say, inappropriate file access on a financial department's network. In this scenario, classic AI would report based on rules configured by an administrator defining who should and who should not have access, and report any attempted violations. That works fine if it is black and white, where some have access and others do not, but what if some people do need access, just not every day?

This is where the neural network can play its part, as it looks over vast quantities of historical data and determines how often the network resource was accessed, for how long, and when, whether it was every day, every week, or just monthly.

A biological neural network takes it one step further; it doesn't just build a profile for the network resource, it builds a profile of each user's behavior when accessing that resource. It can then paint a picture of each user's behavior and determine anomalies in it.
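
A very rough Python sketch of that per-user profiling idea follows; the access-hour history and the three-standard-deviation rule are illustrative assumptions, and real behavioral models are far richer.

import statistics

# Historical access hours per user (hour of day for each past access; invented data).
history = {
    "alice": [9, 10, 9, 11, 10, 9, 10],
    "bob":   [14, 15, 13, 14, 15, 14],
}

def is_anomalous(user, access_hour, threshold=3.0):
    """Flag an access whose hour deviates strongly from the user's own profile."""
    past = history[user]
    mean = statistics.mean(past)
    spread = statistics.stdev(past) or 1.0
    return abs(access_hour - mean) / spread > threshold

print(is_anomalous("alice", 10))  # False: fits alice's usual pattern
print(is_anomalous("alice", 3))   # True: a 3 a.m. access is out of character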

We must always consider that artificial intelligence is not necessarily the desirable goal, and machine learning for its own sake is not ultimately productive. A human is an astonishing machine, capable of learning and adapting to sometimes hostile and complex work requirements and environments. For example, humans learn and can be taught skills far more easily and cheaply than replacing hugely expensive robotic or CPS equipment on a production line. Humans are also fantastically capable of doing precise and delicate work, something robots still struggle with. Humans have a brain, tremendous dexterity, strength, and commitment to workmates and family; unfortunately, humans also have a personality, are easily bored, and have an ego that no computer can possibly match. Therefore, repetitive, boring tasks are more suited to robots. After all, humans, brilliant machines as they are, were not designed to stand on a production line all day doing repetitive, boring work. We have failings that make us human, and these might be in the robot's favor in the job market of the future.

Augmented Reality

Augmented reality (AR), although still in its infancy, is stirring up quite an interest in the IIoT environment. Although AR may seem new, it was investigated as a futuristic technology decades ago, and despite its obvious potential, AR development fell by the wayside due to the lack of complementary technologies. However, in recent years interest in AR has revived, as all the complementary technologies are now a reality and in most cases thriving. Technologies such as AR visors, glasses, and headsets are now in production, and although still expensive from a consumer perspective, they are realistic for industry depending on the ROI (return on investment) of the use-case.

However, AR is not all about the glasses or visual display; it could just as well be a smartphone, as AR is also about data. After all, AR is only as good as the information shadow that accompanies the object you are looking at. An information shadow is the data that relates to the object you are viewing: you can stare all you want at the walls of your house and you are not going to see the pipes and the wiring or anything different. For AR to work, the object you are studying needs a related 3D CAD diagram stored either locally, if the AR device such as a tablet or AR headset can support large files, or remotely in the cloud in the case of present models of AR glasses and visors. Then, through the projection of the 3D CAD diagram, either onto the heads-up display or onto the object itself, you will see the pipes, the wiring, and everything else that should be behind the wall.

Later versions of AR do not rely solely on “as-built” CAD drawings, as these can be notoriously poor in the building and construction trades. Instead, they rely on embedded sensors transmitting their locations to build an interactive 3D drawing, which shows exactly what is located behind the wall. AR is extremely useful in the building and construction trade for planning work with the minimum of collateral damage.

AR has many other applications; one notable and passive form is interactive training in machinery maintenance. Previously, technicians had to attend vendor training courses, pass certification exams, and develop skills over years of experience before they were capable of maintaining these machines or networks. Now, with AR, it is possible to accelerate the training because the in-depth data manuals and service guides are stored in the cloud along with 3D schematics and drawings. By projecting this information shadow alongside the AR view of the physical machine, the technician can receive detailed troubleshooting steps to follow, and the 3D image projected onto the physical product will show them exactly where parts are located and how to access them. AR can play a massive role in industrial maintenance, removing much of the previous requirement for on-site expert knowledge, which due to its rarity was expensive.

Another use-case for AR is in control and management in industrial operation centers. In traditional operation centers, physical displays show readouts of analogue processes, for example, temperature, pressure, and RPM, among other sensor information. However, these physical dashboards were either physically connected or showed graphical representations of pre-configured and programmed dashboards. With AR, sensor data can be viewed from anywhere and projected onto any surface, thereby creating the facility to mix and match sensor data into impromptu mobile dashboards.
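
As a small illustration of the impromptu-dashboard idea, the Python sketch below assembles an ad hoc mix of (invented) sensor feeds into a simple human-readable view on demand.

# Latest readings keyed by feed name; the feeds and values are invented.
latest_readings = {
    "furnace/temperature_c": 642.0,
    "furnace/pressure_bar": 2.1,
    "line3/motor_rpm": 1480,
    "line3/vibration_mm_s": 4.2,
}

def render_dashboard(selected_feeds):
    """Build a simple, human-readable dashboard from any chosen set of feeds."""
    lines = [f"{feed:<24} {latest_readings[feed]:>8}" for feed in selected_feeds]
    return "\n".join(lines)

# An operator mixes and matches whatever feeds are relevant right now.
print(render_dashboard(["furnace/temperature_c", "line3/vibration_mm_s"]))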

Retail and the food and entertainment industries are also likely to be big players in the AR game, becoming early adopters and pioneers in building their own information shadows. For example, if a restaurant owner published its prices, menu, and currently available seating to the cloud for public access, then a potential customer viewing the external facade would see that information superimposed through their AR glasses or device. Even if restaurant owners themselves do not take the initiative, social media may; the potential customer will then not see real-time data like seating availability, but they may see customer ratings and reviews.

The potential for AR and the IIoT is massive, so we have saved the most impressive use-case for last, and that is AR use in the emergency services. For instance, firefighters already wear headsets and communication gear, but by using AR visors or helmets they could also be fed sensor information. This environmental data, fed and displayed in real time to the firefighter's heads-up display, would provide vital information such as temperature, the status of smoke detectors, and presence sensors. This information gives the firefighter a heads-up view of the entire building, floor by floor, letting them know instantly if anyone is in the building and, if so, in what rooms, as well as the surrounding environmental conditions.

In short, only the boundaries of imagination and innovation of the developers and industry adopters of the technology limit the use-cases for AR when coupled with the Industrial Internet.

3D Printing

Additive printing, or what is more commonly known as 3D printing, is a major technology that enables the financial reality of the IIoT across many industrial use-cases. 3D printing works from a computer file that describes either an existing product or a CAD design as a stack of thin layers, each built on the one before it until a full copy of the subject or CAD image has been described. Once that computer file has been generated, it can be fed to a 3D printer, which interprets the coordinates and, using one of several techniques and substrates, recreates the design as a physical representation of the subject.
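
To illustrate the layer-by-layer principle, the following Python sketch slices a simple shape, a sphere, into thin horizontal layers and reports the cross-section of each; the dimensions are arbitrary and this is only a conceptual illustration, not a real slicer.

import math

RADIUS_MM = 20.0        # radius of the sphere being "printed" (arbitrary)
LAYER_HEIGHT_MM = 0.2   # thickness of each printed layer (arbitrary)

layers = []
z = -RADIUS_MM
while z <= RADIUS_MM:
    # Radius of the circular cross-section of the sphere at height z.
    cross_section = math.sqrt(max(RADIUS_MM**2 - z**2, 0.0))
    layers.append((round(z, 2), round(cross_section, 2)))
    z += LAYER_HEIGHT_MM

print(f"{len(layers)} layers of {LAYER_HEIGHT_MM} mm each")
print("widest layer (z, radius):", max(layers, key=lambda layer: layer[1]))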

3D printing enables a product to be created from a source file, which is not much different from how a programmable lathe machine creates a product from a program; however, it is the additive, layer-by-layer way it builds in 3D that is different.

3D printing is therefore perfect for proof of concept and modeling of theoretical designs, as it is cheap and relatively quick. Most low-end industrial or consumer 3D printers work using plastic, but higher-end printers can and do produce industry-quality components and products used in aircraft, cars, and even in health care. These specialized components can be made from a variety of substrates, not just the molten plastic commonly used in consumer-grade additive printing.

In industrial use, other techniques and materials are used to form and fuse the layers. For example, metal powder is used with binder jetting, which glues the metal powder together to form the 3D shape. In another industrial technique, called powder bed fusion, glass, ceramic, or metal is used as the base material and a high-power laser fuses the material together to form the layers. There is also sheet lamination, which binds sheets of metal, paper, or polymer together, layer upon layer, using force to create the binding. The metal sheets are then trimmed and cut by CNC milling machines to form the required shape.

The applications in industry are vast, as 3D printing lends itself to all sorts of rapid prototyping, architecture, and construction. 3D printing enables not just rapid modeling but also production of customized objects in lot sizes of one, as only the base software template file needs to be changed. This is very attractive in manufacturing, where changing a product's design previously required weeks of work refitting production lines and reconfiguring machines. With 3D printing, lot sizes of one can be entertained profitably and cost-effectively.

It is important to differentiate between the media-hyped consumer market for 3D printing and the industrial reality. Components created using 3D printing in industry are not models or gimmicks; they are used by NASA and in the aviation industry in jet engines. Similarly, they are commonly used in cars, with at least one manufacturer making an entire vehicle via 3D printing.

3D printing goes beyond just manipulating polymers, ceramics, paper, and metal—it can also be used in health care. Additive manufacturing is used in prosthetics and in medical components such as sensors and actuators implanted within the body, heart pacemakers for example. However, the latest research is being driven by bio-medical requirements such as creating 3D printed skin and other body tissue, and perhaps soon even complete organs. The goal here is to reproduce a patient's failing organs using 3D printing to create a replacement without the requirement of a donor transplant.

The way this works is that layers of living cells harvested from the patient are deposited via the 3D printer onto a culture plate to build up each layer of the three-dimensional organic structure. Because the 3D model is built from the patient’s own living cells, the body won’t reject the cells and the immune system won’t attack them as a foreign entity, which is a huge problem with donor transplants.

People versus Automation

Humans are the greatest innovators and providers of technology, and we adapt our environment to our liking so that we can exist in even the most hostile conditions. As a species, we can live in conditions as extreme as 50 degrees centigrade or survive for months in Antarctica in temperatures barely rising above -20 degrees. We can do this because, uniquely among animals, we can design and transform our environment to suit our physical requirements. Not only can we provide shelter for ourselves and our kin in inhospitable climates, but we can also produce food and sustenance through industry, agriculture, and hunting. As we have developed as a society, we have also developed social skills, rules, empathy, and emotional intelligence that allow us to operate as a social group living in harmony.

However, it is competition that drives humans to excel. It is threat or rivalry that brings out our competitive nature, as we can see manifested in sport and war. We have egos and personalities, and we have emotions as powerful as love and as destructive as hate, envy, and greed. These are not necessarily inhibitors to innovation or invention; they may actually be the spark that kindles the fire.

In times of strife or stress, humans are incredibly inventive, curious, and adventurous, and many major innovations, inventions, discoveries, and social disruptions have come about during periods of war or famine. Importantly, humans recognize not only their own physical limitations, which are great, but can also stretch beyond those boundaries and advance their capabilities and horizons even beyond their own planet.

We can develop robots and other cyber-physical systems to do our bidding in environments too dangerous or ferocious for life. Furthermore, humans can adapt, learn, and change our behavior and physical characteristics to meet the requirements of our habitat.

The human body is an astonishing machine in its own right, as it can learn and counter physical stress, as we witness when undergoing any fitness-training regime. Our muscles adapt to the workload stress by getting stronger and developing stamina, as does our cardiovascular system. Our heart rate lowers and our breathing becomes more efficient. Most interestingly, however, our muscles and nerves create circuits that enable us to perform tasks without conscious thought; think of a tennis player returning a serve with their backhand.

Machines and robots have none of these characteristics; they are simply mechanical devices designed to do a simple job, albeit repetitively and tirelessly. The fact that robots are indefatigable is of course their great strength, but they are extremely limited, at least at present. However, if robots or other cyber-physical systems could communicate through advanced M2M learning and communications, perhaps they could work collaboratively as a team, just like humans, but without the unpredictable performance driven by the negatives of envy, bickering, bullying, and moaning, or the more endearing positive qualities of camaraderie and team spirit displayed by human teams.

This, of course, is the goal of machine learning and artificial intelligence: to create intelligent machines that have some measure of cognizance. Ideally, they would have a level of inherent intelligence that enables them to work and adapt to their own and others' circumstances and environment, but without human attitude and personality flaws.

Currently, in robotics we are a long way from that objective; in software, however, machine learning is coming along very well. The present state of machine learning and artificial intelligence is best illustrated by the latest innovations.

In November 2015, Google launched its machine learning system called TensorFlow. Interest in deep learning continues to gain momentum, especially following Google's purchase of DeepMind Technologies, which has since been renamed Google DeepMind.

In February 2015, DeepMind scientists revealed how a computer had taught itself to play almost 50 video games, by figuring out what to do through deep neural networks and reinforcement learning.

Watson, developed by IBM, was the first commercially available cognitive computing offering. In 2015, it was being used to help identify treatments for brain cancer. In August 2015, IBM announced a $1 billion deal to acquire the medical imaging company Merge Healthcare, whose imaging data, in conjunction with Watson, will provide the means for machine learning.

Astonishingly, Google's AlphaGo beat the world champion Lee Sedol at the board game Go, a hugely complex game, in a best-of-five match. What was strange is that neither Lee Sedol nor the European champion (who had previously been beaten by AlphaGo) could understand AlphaGo's logic. Seemingly, AlphaGo played a move no human could understand; indeed, all the top players in the world believed that AlphaGo had made a huge mistake. Even its challenger, the world champion Lee Sedol, thought it was a mistake; he was so shocked by AlphaGo's move that he took a break to consider it, until the absolute brilliance of the move dawned on him. “It was not a human move … in fact I have never seen a human make this move.” Needless to say, Google's AlphaGo went on to win the game. Why did AlphaGo beat the brilliant Lee Sedol? Perhaps simply because, as a machine, AlphaGo can play games against itself and replay all known human games, building up such a memory of possible moves through a process of 24/7 learning that it can continuously keep improving its strategic game.

Google's team analyzed the victory and realized that AlphaGo had done something very strange: it calculated a move that, based on its millions of training examples of human play, a human player would have had only a one in ten thousand chance of recognizing and countering, and played that seemingly crazy move anyway.

In fairness to the great Lee Sedol, he did manage to outwit AlphaGo and win one game of the best-of-five match, which in itself appears to be an amazing achievement.
