Chapter 16. Health and Medicine

From education, training, and procedure rehearsal, to enhancing a surgeon’s situational awareness, to tools that aid in recovery and rehabilitation, a growing number of applications of virtual and augmented reality–enabling technologies within the physical and mental health fields are showing demonstrable results, as well as significant potential for future development. In this chapter we explore a number of these applications, highlighting the specific problems they solve, their major enabling technology components, and their strengths compared to solutions traditionally employed.

Advancing the Field of Medicine

The application of virtual and augmented reality within the physical and mental health fields is having a transformational impact on a number of areas in the practice of medicine. From powerful, clinically validated procedural simulators to innovative information displays designed to increase a physician’s level of situational awareness and optimize workflows, great strides are being made to apply these technologies in a manner that ultimately results in better care being rendered, more favorable patient outcomes, and more efficient use of resources.

In this chapter we will explore several innovative applications of immersive, semi-immersive, and augmented display technologies, detailing the specific problem being addressed, the solution employed, as well as quantifiable benefits. These examples represent only a small fraction of the solid, deployed applications, development efforts, and clinical trials underway at dozens of universities and hospitals around the world, not to mention the fascinating offerings of many companies formed in the past few years with hardware and software solutions targeting this field. A comprehensive survey would far exceed the scope and intent of this chapter, and perhaps even this entire book. As such, the cases highlighted are selected with the specific goal of illustrating the broad range of applications possible with careful thought and planning, as well as their far-reaching, problem-solving impact.

Training Applications

Every human being comes to understand the reality of the old adage “practice makes perfect.” In nearly all areas of human endeavor, from the most basic skills like children learning to accurately guide food into their mouth instead of their nose, to mastery of a range of particular talents as adults, there is no question that practice and training improve human performance across most physical and mental activities. And the greater the complexity of a task, the more practice is needed, not only to initially develop a skill but often to maintain performance levels. These truths are nowhere more apparent than in the varied fields of medicine, where the health and well-being of individuals are at stake.

To this end, the past decade has seen a dramatic increase in the number of simulation and training utilities developed for the medical community based on, or incorporating, virtual and augmented reality–enabling technologies, and their adoption is having a transformational effect on the field. In helping students and practitioners develop and refine specific skills, ranging from delicate microsurgical techniques to complex invasive procedures, the benefit of using these technologies over more traditional methods of skills development has been well documented.

Perhaps one of the greatest benefits to the use of computer-based simulation technologies in the medical field is the creation of an environment where the student or practitioner is able to fail without consequence. Failure is absolutely critical to the learning process, although in medicine, it can have dire consequences. Thus, it is obviously far better to make mistakes in training and procedure rehearsal than in a clinic or operating suite.

HelpMeSee Cataract Surgery Simulator

According to the World Health Organization, the leading cause of blindness worldwide is untreated cataracts, a clouding of the eye’s natural lens resulting in the reduced transmission of light to the retina. (See Chapter 3, “The Mechanics of Sight”; WHO, 2014a.) According to the latest assessments, more than 20 million cases of blindness worldwide, or approximately half of all cases globally, are the result of this condition. In most instances cataracts are a normal result of aging, although children are sometimes born with the condition. By the age of 80, more than half of all Americans will either have a cataract in one or both eyes or have had cataract surgery (WHO, 2014b; NEI, 2009).

Although corrective surgery, normally an outpatient procedure, is easily obtained in developed regions of the world, there are significant barriers to access in less developed nations, including the costs of treatment, few trained practitioners, and lack of awareness (Tabin, 2005).

To combat this growing problem, HelpMeSee, a U.S.-based nonprofit organization and global campaign to eliminate cataract blindness, has joined with a number of partners—including Moog, Inc (New York, New York), InSimo (Strasbourg, France), and SenseGraphics (Kista, Sweden)—in the development of a high-performance surgical simulator used to train in-country specialists in a fast, effective, and high-quality procedure to correct the condition and restore vision.

Known as the Manual Small Incision Cataract Surgery (MSICS) simulator, the system is used to train specialists to perform a low-cost, highly effective, small-incision surgical procedure that enables the removal of a clouded, cataractous lens and replaces it with an artificial intraocular lens implant in as little as 5 minutes for an adult patient and 15 minutes for a child (HelpMeSee, 2014a).

As shown in Figure 16.1, the MSICS simulator is a self-contained, cart-based system incorporating an armature-mounted, high-definition (HD) stereoscopic display taking the place of what would normally be a stereo microscope. As in the real surgical procedure, the operator is seated near the patient’s head, looking into the viewing device. Peering into the viewer, the operator is presented with an exceptionally detailed graphical model of a human eye. The simulator’s main user interface consists of bimanual surgical instruments that are identical to those used in the actual surgical procedure.


Credit: Image courtesy of Moog, Inc/HelpMeSee

Figure 16.1 The Manual Small Incision Cataract Surgery (MSICS) simulator developed by HelpMeSee, Moog Industrial Group, and several software partners will be used to train thousands of cataract surgical specialists in developing nations.

As the operator moves the instruments and interacts with the virtual eyeball, high-fidelity haptics technologies developed by Moog, combined with physics-based virtual tissue models and a simulation engine from SenseGraphics and InSimo, provide a level of realism, both visually and tactually, that is virtually indistinguishable from that experienced during a live procedure performed by an experienced surgeon (HelpMeSee, 2014a).

The system also includes an instructor workstation and courseware that will ultimately encompass more than 240 training tasks and complications that cataract specialists could encounter during live surgical procedures (Moog, 2015; Singh and Strauss, 2014).

System rollout is intended to begin in 2016 with the establishment of up to seven training centers spread across Asia, Africa, and Latin America, each capable of training up to 1,000 MSICS surgeon candidates annually, with each trainee expected to undergo between 400 and 700 hours of learning with roughly 60% of this time spent working with the simulator (Broyles, 2012). According to HelpMeSee, each surgical specialist trained in this procedure is capable of performing upward of 2,500 procedures per year at the cost of approximately $50 USD per operation (HelpMeSee, 2014b; 2014d).

Simodont Dental Trainer

In dental schools worldwide, students have traditionally developed their clinical skills by using drills and other tools on plastic teeth within “phantom heads.” This is an expensive, time-consuming process and, from an instructor’s viewpoint, highly subjective in terms of evaluating a student’s performance. To help students build a stronger set of essential skills earlier, and at considerably less expense, Moog Industrial Group and the Academic Centre for Dentistry in Amsterdam (ACTA) collaborated in the development of Simodont, a high-quality, high-fidelity, bimanual dexterity simulator that combines 3D visualization, tactile, and force feedback technologies as well as audio to deliver highly realistic training in operative dental procedures (Forsell, 2011; Moog, 2011).

As shown in Figure 16.2, the user is seated in a position similar to that of a dentist in a clinical setting. Wearing polarized stereo glasses, the operator looks into a viewing window and is presented with a sharp, correctly sized 3D model of a patient’s mouth directly in the physical workspace of the hand instruments to be used within the specific lesson (handpieces, burs, mirror, and so on). As the user moves the hand instruments, which are the same as the standard tools she would be encountering in a clinical setting, the virtual tools mimic those movements precisely. When the virtual drill interacts with a virtual tooth, the simulation engine and haptics drivers provide a full visual, audio, and haptic experience of drilling a physical tooth, including crisp rendering of drill and contact forces as well as the hardness of a tooth’s enamel. As with a drill in a dentist’s office, a foot pedal controls the speed of the virtual drill.


Credit: Image courtesy of Moog, Inc

Figure 16.2 The Simodont Dental Trainer developed by Moog Industrial Group and the Academic Centre for Dentistry in Amsterdam (ACTA) is used by dental students around the world to practice and refine many of the manual skills they will need when treating live patients.

Sophisticated courseware developed by ACTA provides a range of training procedures and scenarios, as well as the ability of an instructor to play back and review a student’s movements to provide feedback in a more objective manner. The system also enables scans of real teeth to be imported to infinitely expand the selection of case scenarios.

The Simodont dental trainer has proven so successful at building students’ skill sets that it is now in use at dentistry schools around the world.

As with other medical procedure simulators, the value that such systems bring to a learning environment cannot be overstated. From a student’s perspective, simulators provide a powerful tool with which to develop and refine skills essential to treating patients once they progress from a preclinical to a clinical environment. For instructors, simulators such as Simodont enable significantly greater flexibility in development of training scenarios and pedagogical options to ultimately turn out better graduates.

Treatment Applications

“Ain’t it funny how a melody can bring back a memory?

Take you to another place in time,

Completely change your state of mind.”

—Clint Black, State of Mind, 1993

These lyrics from the 1993 country music hit State of Mind by Clint Black perfectly illustrate the power of the human brain to store and retrieve memories based on sounds, sights, smells, and a variety of other environmental cues. Clinically classified as episodic autobiographical memories (EAMs), these recollections are an involuntary function of the human perceptual and memory systems that adds both tremendous benefit, and more than occasional trouble, to our existence. One of the areas where this function is highly problematic is in the retention of sensory information and memories from times of extreme mental and physical trauma, such as is experienced on wartime battlefields.

Post-Traumatic Stress Disorder

Since September 11, 2001, America’s armed forces have endured more than 14 years of high-intensity ground combat operations and a deployment tempo that has led to significant behavioral health challenges within our active duty and veteran population (Rizzo et al., 2012). As of 2014, some 2.5 million U.S. service members had deployed one or more times in support of Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF) (Ramchand et al., 2014; Hautzinger et al., 2015). Of these numbers, it is estimated that upward of 18% of all returning service members are struggling with psychological injuries, and a majority of those deployed report exposure to multiple life-changing stressors (Hoge et al., 2004; APA, 2007).

Collectively, these psychological conditions are classified under the official label of post-traumatic stress disorder (PTSD) or post-traumatic stress injury, which the American Psychological Association defines as an anxiety disorder that can develop after a person is exposed to one or more traumatic events, such as major stress, warfare, or other threats on a person’s life (DeAngelis, 2008)1. Research shows this is a disorder that, once manifested, often becomes chronic (Hoge et al., 2004).

1 Although PTSD is discussed within this chapter in direct relation to military personnel, such psychological injuries and conditions are also experienced by many others, including victims of rape, terrorist attacks, first responders, and more.

To date, published research suggests that one of the most widely used and empirically validated psychotherapeutic treatments for the condition is known as prolonged-exposure therapy, which consists of two components: imaginal and in vivo exposure. As the name suggests, imaginal exposure involves a trained therapist carefully guiding the client to verbally recount, in a gradual, controlled, and repeated manner, the traumatic experiences from memory. In vivo exposure involves a simulated exposure to feared objects, activities, or situations, in both a rapid and a progressive manner. Both components are said to allow the individual to safely engage, evaluate, and emotionally process the stressors, enabling the person to overcome excessive fear and anxiety (Foa et al., 2007; DeAngelis, 2008).

Although published research clearly demonstrates a high rate of effectiveness using prolonged-exposure therapy in the treatment of PTSD, significant challenges still exist with its traditional delivery. One of the most formidable obstacles has been the reliance on the patient to mentally visualize the traumatic experiences. This is a major impediment because avoidance of trauma reminders is one of the key identifying symptoms of PTSD (Rizzo et al., 2006). There is also an obvious hindrance in the ability to put the patient and clinician into a convincing setting that enables controlled re-exposure to widely varying traumatic stimuli. In other words, there is only so much that can be done within a clinician’s office to effectively simulate the stressful situations and environment of a combat zone. That is, until recently.

Bravemind (Virtual Iraq and Afghanistan)

Leveraging advances in immersive displays, increased computational performance, as well as scene and character modeling, researchers with the University of Southern California/Institute of Creative Technologies (USC/ICT) have developed the basis for a new clinical tool that facilitates the manner and effectiveness in which prolonged exposure therapy is delivered to soldiers suffering from combat-related PTSD.

A collaborative effort between USC/ICT, Georgia-based Virtually Better Inc., Naval Medical Center-San Diego (NMC-SD), and the Geneva Foundation, the system, known as Bravemind, enables the controlled, gradual exposure of patients to fully immersive virtual representations of the experiences underlying their traumatic combat-related memories until there is a diminished response, or habituation, to the anxiety-producing stimuli (Virtually Better, 2008).

As shown in Figure 16.3, the key hardware components of a Bravemind system consist of commercial off-the-shelf technologies including a PC, dual monitors, a Sony HMZ T3W stereoscopic head-mounted display, a position/orientation sensor, and a handheld controller neatly built into a medical-grade mobile cart. Not limited to just the sights and sounds of a battlefield environment, the Bravemind system also includes a tactile feedback component in the form of a small floor platform into which is built a sub-woofer to simulate vibrational cues such as engine rumbling, explosions, firefights, and corresponding ambient noises. A scent machine is also provided that can deliver situation-relevant odors (including cordite, burning rubber, diesel fuel, garbage, and gunpowder).


Credit: Image courtesy of Virtually Better, Inc. www.VirtuallyBetter.com

Figure 16.3 The Bravemind simulator developed by the University of Southern California/Institute of Creative Technologies is specifically designed to provide immersive virtual reality exposure therapy to soldiers suffering from post-traumatic stress disorder (PTSD).

At the time of this book’s preparation, two main virtual environment simulation software packages were available: Virtual Iraq and Virtual Afghanistan. Both contain baseline models of various battlefield environments resembling Middle Eastern cities as well as desert road environments. Clinicians are provided considerable flexibility and control in engaging users within a variety of different scenarios and intensity levels, including foot patrols, urban warfare, vehicle convoys, bridge crossings, and medical evacuations via helicopters.

Quantifiable Benefits

Across a number of studies (Rizzo et al., 2015; Gerardi et al., 2008; Reger and Gahm, 2008; Rizzo et al., 2007; Difede et al., 2007; Difede and Hoffman, 2002), outcomes resulting from the use of virtual reality exposure therapy in the treatment of post-traumatic stress disorder have been both statistically and clinically significant. This includes patients with no prior PTSD treatment, those who previously underwent more traditional exposure therapy, as well as active duty service members actually treated in the war zone.

So promising are the outcomes of the use of these simulation utilities that the Bravemind system is in active use at more than 50 sites around the United States, including VA hospitals, military medical centers, and university research centers, to study and treat PTSD.

Phobias

Virtual reality exposure therapy (VRET) has also been demonstrated in a number of investigations to produce statistically and clinically significant outcomes when used to treat a variety of phobias, including the fears of flying, heights, and storms.

Fear of Flying/Fear of Heights/Fear of Storms Treatment Suites

Georgia-based Virtually Better Inc. has produced several commercially available PC-based VRET software suites specifically designed to be used in the treatment of common phobias and addictions, as well as others for pain distraction and relaxation. Unlike systems requiring more robust capabilities and peripheral devices such as the Bravemind system mentioned earlier, some of these VRET suites, as illustrated in Figure 16.4, can be run on a notebook PC and utilize a smartphone-based stereoscopic head-mounted display. (See Chapter 6.)


Credit: Image courtesy of Virtually Better, Inc. www.VirtuallyBetter.com

Figure 16.4 Virtually Better Inc. has developed several PC-based virtual reality exposure therapy applications specifically for treatment of phobias such as the fear of flying, fear of heights, and fear of storms.

Inertial sensors within the smartphone handle tracking of the user’s head orientation (roll, pitch, and yaw), while a small hand controller enables the patient to safely translate, at her own pace, her viewpoint through the simulation models. Here again, clinicians are provided considerable flexibility and control in engaging users within a variety of different scenarios to achieve the desired outcomes for the specific treatment plan.
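
As a minimal illustration of how such inertial head tracking works, roll and pitch can be recovered from the accelerometer’s gravity vector alone while the head is roughly still; yaw (rotation about the gravity axis) additionally requires a gyroscope or magnetometer. The sketch below is illustrative only and is not code from any of the products described:

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """Estimate head roll and pitch (in radians) from the gravity
    vector reported by a phone accelerometer at rest. Yaw about the
    gravity axis is unobservable from an accelerometer alone; a
    gyroscope or magnetometer is needed for that third axis."""
    roll = math.atan2(ay, az)                    # bank left/right
    pitch = math.atan2(-ax, math.hypot(ay, az))  # tilt up/down
    return roll, pitch
```

With the device level (gravity entirely along its z axis), `roll_pitch_from_accel(0, 0, 9.81)` yields zero roll and pitch, as expected.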

Quantifiable Benefits

A 2015 meta-analysis (a statistical technique for combining the findings from multiple independent studies) of 14 VRET clinical trials on specific phobias arrived at two powerful findings: patients performed significantly better on post-treatment behavioral assessments than before treatment, and the results of behavioral assessments at post-treatment and during follow-up showed no significant differences from traditional in vivo techniques. The net takeaway from the study is that VRET can produce significant behavioral changes in real-life situations (Morina et al., 2015). Extrapolating beyond these core findings, you can easily see that, applied correctly, the use of VRET can have a significant impact in terms of increasing treatment efficiency as well as lowering costs, given the reduced need for the patient and clinician to make offsite visits to engage in in vivo treatment scenarios.

Vascular Imaging

Accessing a vein to draw blood or to provide intravenous (IV) therapy is one of the most challenging clinical tasks faced by health professionals, including lab techs and nurses, EMTs, military field medics, anesthesiologists, and everyone in between. Although it is one of the most routinely performed invasive procedures globally, a variety of circumstances and conditions can make this seemingly simple task exceedingly difficult, including tiny spidery veins, subcutaneous fat, darker complexions, vasoconstriction due to cold temperatures, dehydration, hemodialysis, and more.

Evena Eyes-On Glass

To make this process easier, California-based Evena Medical has developed a head-mounted, stereoscopic, augmented reality display that allows a health worker to peer through the skin and visualize the underlying vascular structures in near real time, enabling the selection of the best veins for the invasive procedure. As shown in Figure 16.5, this device, known as Eyes-On Glass, uses a patented multispectral lighting system built into the brow piece of the display that projects four near-infrared (NIR) wavelengths of light falling between 600 and 1000 nm to illuminate the targeted area of the body. Because blood absorbs these wavelengths of light at greater levels than surrounding tissues such as skin and muscle, it appears darker, and thus, an optical contrast is produced. Two custom-designed cameras sensitive to these particular wavelengths (one camera for each eye) collect video imagery, which is transferred to a belt-worn controller.


Credit: Kent Lacin Photography, Evena Medical

Figure 16.5 Evena’s Eyes-On Glass helps clinicians visualize a patient’s veins by using a unique lighting and video system to overlay an enhanced view onto the wearer’s real-world view.

The controller interlaces the video imagery from across the four different wavelengths (separately for each eye). The result is then returned to the display portion of the headset, which is itself built around the display subassembly of the Epson Moverio BT-200. (See Chapter 5, “Augmenting Displays.”) Projectors in the display then overlay the separate left and right video channels on top of the wearer’s real-world scene, resulting in a clinically useful stereoscopic 3D view of the worksite that dramatically enhances the healthcare worker’s view of the venous network.
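
Evena has not published its fusion algorithm, but the underlying principle, dark veins reinforced across wavelengths and then contrast-enhanced, can be sketched in a few lines of NumPy. The function below is a simplified illustration, assuming each wavelength yields a registered grayscale frame:

```python
import numpy as np

def vein_contrast_map(frames):
    """Fuse grayscale frames captured at several NIR wavelengths into
    a single vein-contrast image. Blood absorbs NIR light more strongly
    than surrounding tissue, so veins appear darker in every frame;
    averaging across wavelengths reinforces that signal, and inverting
    the result renders the veins bright in the output."""
    stack = np.stack([f.astype(np.float32) / 255.0 for f in frames])
    fused = stack.mean(axis=0)      # average across wavelengths
    inverted = 1.0 - fused          # dark veins become bright
    lo, hi = inverted.min(), inverted.max()
    stretched = (inverted - lo) / max(hi - lo, 1e-6)  # full-range contrast
    return (stretched * 255).astype(np.uint8)
```

A real device would also need per-frame registration and noise filtering before fusion; those stages are omitted here for clarity.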

As shown in the inset of Figure 16.5, the effect of this device in revealing what is normally an invisible vascular structure is plainly obvious.

Quantifiable Benefits

The benefits of this type of device in a clinical environment are numerous. In the United States alone, more than 2.5 million venipuncture procedures are performed daily (Walsh, 2008; Ogden-Grable and Gill, 2005). It has been estimated that up to 60% of children and 40% of adults require multiple attempts to access a vein (Frey, 1998; Harris, 2004). With such failure rates, this basic procedure experienced by nearly everyone who enters a hospital for treatment is one of the leading causes of medical injury. Add to this the costs of additional supplies, labor, and IV-related complications that can extend hospital stays, and it is easy to see how such an imaging device can be of significant benefit.

Healthcare Informatics

Consider the information-rich environment of a modern hospital. Each moment, the medical staff is inundated with extraordinary amounts of data in the form of alphanumeric displays, graphics, flashing lights, status tones, and alarms from multiple sources, the totality representing the vital parameters and overall physical state of a patient. This information includes heart rate and rhythm monitors, blood pressure readings, respiration rate, oxygen saturation, body temperature, EKG and EEG traces, preoperative and real-time medical imaging products, and intravenous fluid and medication rates, to name just a few. Given this tsunami of information, it is easy to see how physicians can quickly become overwhelmed. In fact, a compelling argument can be made that the cognitive strain resulting from ever-increasing advances in patient monitoring technologies may actually increase the potential for human error.

An equally important problem exists in the form of a physician becoming so fixated on a particular task or procedure that key information, such as a critical change in vital signs, is missed.

This combination of two information management challenges (too much versus too little) is most acute within an operating room where a surgical team must carry out complex invasive procedures while concurrently monitoring sensor data flowing to a variety of displays spread around and above the operating table and often even across the room.

The combination of task and information saturation, as well as the need for increased situational awareness, is strikingly similar to challenges faced by pilots of high-performance fighter aircraft. It also appears to have similar solutions.

VitalStream

California-based Vital Enterprises has developed a software platform known as VitalStream that enables the display and sharing of a complex array of medical sensor and imaging data on several head-mounted augmenting display devices to increase the situational awareness and efficiency of healthcare professionals operating within information-rich, high-skill, high-stakes environments such as an operating room.

As depicted in Figure 16.6, VitalStream can be used with Google Glass, Osterhout Design Group R-7 Smart Glasses (see Chapter 5), and similar devices to enable the display of key data from a variety of point sources directly within the field of view of the user to increase overall situational awareness. Data types enabled for display include vital signs, radiology images, endoscopy and fluoroscopy video, and more.


Credit: Image courtesy of Vital Enterprise Software, Inc

Figure 16.6 The VitalStream software platform combined with displays such as Google Glass can increase situational awareness by placing key medical sensor and imaging data onto the wearer’s normal field of view.

A powerful feature of the VitalStream platform is its ZeroTouch capability, which utilizes the accelerometer and gyroscope within the display device to monitor head movements and thus enables simple hands-free control of the data presentation and communication capabilities of the system.
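
Vital Enterprises has not published the ZeroTouch gesture logic, but a threshold-based detector gives a feel for how a head gesture such as a nod might be recognized from gyroscope samples. The sign convention (downward rotation positive) and threshold below are assumptions for illustration only:

```python
def detect_nod(pitch_rates, threshold=1.5):
    """Detect a simple head nod in a stream of gyroscope pitch-rate
    samples (rad/s): a strong downward rotation followed later by a
    strong upward one. The threshold value is purely illustrative."""
    down_seen = False
    for rate in pitch_rates:
        if rate > threshold:                      # head swinging downward
            down_seen = True
        elif rate < -threshold and down_seen:
            return True                           # upward swing completes the nod
    return False
```

A production system would add debouncing and a time window so that slow, unrelated head movements are not misread as commands.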

VitalStream also leverages the onboard video cameras common with augmenting displays such as Google Glass to record procedures and share imagery with other members of the team via remote PCs, tablets, and so on.

Quantifiable Benefits

In 2014, the Stanford University School of Medicine carried out a randomized pilot study involving the use of VitalStream and Google Glass to evaluate the effectiveness of streaming a patient’s vital signs to an operating surgeon’s field of view. Within the study, surgical residents were tasked with performing a relatively routine procedure on dummy simulators during which they were presented with a complication requiring one of two immediate, emergency procedures to be performed: a thoracostomy tube placement (creating a small incision between the ribs and into the chest to drain fluid or air from around the lungs), or a bronchoscopy (inspection of the airways and lungs through a thin viewing instrument called a bronchoscope) (Sullivan, 2014). Within the study, participants carried out the two procedures using both traditional vital-sign monitors as well as the VitalStream/Google Glass method of wireless vital-sign data streaming.

The results of the study were impressive.

In both emergency procedures, live streaming of sensor data to the Google Glass display, and thus its constant presence within the surgeon’s field of view, resulted in participants recognizing critical changes in vital signs earlier than the respective control groups using traditional monitors. In the case of the thoracostomy, Glass users recognized hypotension (abnormally low blood pressure) 10.5 seconds faster than the control group. In the case of the bronchoscopy, Glass users recognized critical oxygen desaturation 8.8 seconds faster (Liebert et al., 2014).

Although this application description references only one study, similar results are being achieved across a host of other investigations evaluating the viability of these new displays in high-stress medical scenarios where there is little room for error.

IRIS Vision Aid for Low Vision

The term low vision refers to vision impairment characterized by partial sight, such as blurred vision, blind spots, or tunnel vision, but it also includes legal blindness (Vision Council, 2016). Primarily associated with older adults, leading causes of low vision include macular degeneration, diabetic retinopathy, strokes, as well as other medical conditions. Generally, low vision cannot be corrected through the use of glasses, contacts, medications, or surgery. As a result, the estimated 4 to 5 million Americans dealing with the condition often resort to a variety of assistive technologies such as handheld electronic magnifiers, wearable miniature binoculars, variable Loupe magnifier glasses, talking watches, and more.

Recent developments in mobile phone–based immersive displays such as the Samsung Gear VR (see Chapter 6) have enabled the development of a variety of new assistive technologies bringing relief to sufferers of low vision. One such product is the IRIS Vision system from California-based Visionize, LLC. Using specially developed software along with the high-resolution display and camera within the mobile device, a magnified “bubble” is placed within the center of the user’s field of view, as shown in Figure 16.7. Because the size of the bubble can be controlled using rocker switches on the side of the display, the user is able to vary the scope of the area being magnified while maintaining the overall context of the scene.
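
Visionize’s rendering approach is proprietary, but the basic effect, a magnified circular inset composited over an otherwise unmodified camera view, is straightforward to sketch. The NumPy function below uses nearest-neighbor sampling on an assumed grayscale frame and is an illustration only:

```python
import numpy as np

def magnify_bubble(frame, radius, zoom=3.0):
    """Render a circular magnified inset at the center of a 2-D
    grayscale frame. Inside the bubble, each output pixel samples the
    source at an offset shrunk by `zoom`, enlarging the central region
    while the surrounding scene is left intact for context."""
    h, w = frame.shape
    cy, cx = h // 2, w // 2
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    inside = dy * dy + dx * dx <= radius * radius      # bubble mask
    src_y = np.clip((cy + dy / zoom).astype(int), 0, h - 1)
    src_x = np.clip((cx + dx / zoom).astype(int), 0, w - 1)
    out = frame.copy()
    out[inside] = frame[src_y[inside], src_x[inside]]  # magnified samples
    return out
```

Leaving pixels outside the bubble untouched is the key design point: it preserves the scene context that distinguishes this approach from a simple full-field magnifier.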


Credit: Image courtesy of Prof. Frank Werblin

Figure 16.7 The IRIS Vision system uses the camera of a mobile device and specialized software to create high-magnification insets within the wearer’s field of view.

Developed by scientists from UC Berkeley and Maryland-based Sensics, Inc., the IRIS Vision system clearly demonstrates the ability to keenly identify a widespread problem and bring about a low-cost, highly effective head-mounted display-based solution that can improve the lives of millions.

Conclusion

Virtual and augmented reality hold significant potential in the physical and mental health fields, and fairly clear delineations as to application areas are beginning to emerge. For instance, specialized applications aside, it is difficult to envision regularly occurring instances in the practice of physical medicine where a fully immersive visualization capability is necessary, although the benefits to education are immense, particularly when there is a need to understand the complex interrelationships between various portions of the anatomy. As such, it is highly likely that most practical applications of virtual reality will come in the form of in silico training and procedure rehearsal, and even then, using fixed displays that enable attention to remain focused on a particular worksite. As has been pointed out to me on multiple occasions, surgeons do not like to wear headsets during training if they will not be wearing headsets during actual procedures.

Augmented reality is a different story entirely. Because one of the greatest challenges facing medical practitioners is information accessibility and management (sometimes needing more, sometimes requiring just specific types), the ability to overlay sensor data, medical imaging products, worksite enhancements, and patient records will likely have a profound impact on the quality and efficiency with which medicine is practiced in the coming years.
