Chapter 1. Computer-Generated Worlds

Augmented and virtual reality are often part of the same conversation, though there are significant differences between the two technologies. One overlays textual, symbolic, or graphical information that holds a real-time relationship with the user’s actual situation or surroundings; the other replaces the user’s visual world entirely. In this chapter we explore the basic foundations of augmented reality (AR) and virtual reality (VR), drawing clear distinctions between the differing capabilities of these systems and laying the groundwork for a study of the enabling technologies and the host of problem-solving applications they make possible.

What Is Augmented Reality?

The phrase augmented reality is a general term applied to a variety of display technologies capable of overlaying or combining alphanumeric, symbolic, or graphical information with a user’s view of the real world. In the purest sense, and using the phrase in the manner for which it was originally coined, these alphanumeric or graphical enhancements would be aligned, correlated, and stabilized within the user’s real-world view in a spatially contextual and intelligent manner.

Although the phrase “augmented reality” is itself a modern creation, coined in the early 1990s by Boeing research scientist Tom Caudell (Caudell and Mizell, 1992), the first technological developments enabling modern augmented reality devices can be traced back to the early 1900s and a patent filed by Irish telescope maker Sir Howard Grubb. His invention (patent no. 12,108), titled “A New Collimating-Telescope Gun-Sight for Large and Small Ordnance,” describes a device intended to help aim projectile-firing weapons.

Grubb’s description of the invention, published in the 1901 Scientific Transactions of the Royal Dublin Society, showed profound vision, to say the least:

“It would be possible to conceive an arrangement by which a fine beam of light like that from a search light would be projected from a gun in the direction of its axis and so adjusted as to correspond with the line of fire so that wherever the beam of light impinged upon an object the shot would hit. This arrangement would be of course equally impracticable for obvious reasons but it is instanced to show that a beam of light has the necessary qualifications for our purposes.”

“Now the sight which forms the subject of this Paper attains a similar result not by projecting an actual spot of light or an image on the object but by projecting what is called in optical language a virtual image upon it.” (Grubb, 1901)

This invention solved a fundamental challenge presented by the fact that the human eye can focus at only one depth of field at a time. You are either focusing on something close up or on something in the distance, as illustrated in Figure 1.1; the design of the human eye makes it impossible to focus on both simultaneously. This makes aiming a rifle or pistol outfitted with only iron sights particularly challenging, and a skill which, to this day, requires regular practice to master.

Credit: Running turkey by dagadu / Depositphotos.com

Figure 1.1 This image illustrates one of the grand challenges in shooting: the human eye can focus at only one depth of field at a time.

Formally referred to as a reflector sight or reflex sight, Grubb’s invention, the basic function of which is illustrated in Figure 1.2, used a series of optical elements to overlay a targeting reticle, focused at optical infinity, on a distant target.

Figure 1.2 Adaptation of a 1901 patent diagram illustrating a version of Howard Grubb’s collimating reflector sight suitable for firearms and small devices.
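
Why a reticle “focused at optical infinity” works is worth a one-line derivation. Using the thin-lens relation (a standard idealization, not Grubb’s actual optical prescription), placing the reticle in the focal plane of the collimating lens sends its virtual image to infinity, so the eye can hold both a distant target and the reticle in sharp focus simultaneously:

$$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}, \qquad d_o = f \;\Rightarrow\; \frac{1}{d_i} = 0 \;\Rightarrow\; d_i \to \infty$$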

Grubb’s innovation directly inspired the development of more advanced gun sights for use in military aircraft. The first known use of the technology for this purpose came in 1918, when the German optics manufacturer Optische Anstalt Oigee developed what was known as the Oigee Reflector Sight, shown in Figure 1.3. The system was based on a semi-transparent mirror mounted at a 45-degree angle and a small electric lamp used to create a targeting reticle. Aligned correctly with the aircraft’s gun, the device enabled pilots to achieve considerably greater weapons accuracy (Wallace, 1994).

Credit: Images and illustration courtesy of Erwin Wiedmer

Figure 1.3 The 1918 Oigee Reflector Sight, built by the German optics manufacturer Optische Anstalt Oigee, used an electric lamp and collimating optics to create a virtual targeting reticle on a partially reflecting glass element. The system was deployed in German Albatros and Fokker DS1 aircraft.

Head-Up Displays

As the onboard systems of fighter aircraft and helicopters grew in complexity, the information-processing tasks required of pilots also increased dramatically. The sizeable array of sensors, weapons, avionics systems, and flight controls increasingly resulted in pilots spending more time focused on dials and displays inside the cockpit rather than on what was happening outside the aircraft, often with tragic results. These developments forced scientists and engineers in the United States and several other countries to undertake extensive research into more intuitive and effective methods of communicating critical flight, sensor, and weapons-systems information to the human operators.

Following the development of the airborne electronic analog computer in the 1950s, these research efforts resulted in the introduction of the first modern head-up (or heads-up) display (HUD): a transparent display mounted in front of the pilot that enables viewing with the head positioned “up” and looking forward, instead of angled down at instruments lower in the cockpit. Because the information projected onto the HUD is collimated (composed of parallel light rays) and focused at infinity, the pilot’s eyes do not need to refocus to view the scene beyond the display outside the aircraft.

A typical HUD contains three primary components: a projector unit, a combiner (the viewing glass), and a video generation computer (also known as a symbol generator) (Previc and Ercoline, 2004). As shown in Figure 1.4, information is projected onto the combiner at optical infinity to provide pilots of both military and commercial aircraft with the variety of data and symbology necessary to increase situational awareness, particularly in low-visibility landing and taxiing operations, without having to look down into the cockpit at the more traditional information displays.

Credit: Image courtesy of DoD

Figure 1.4 This image shows the basic flight data and symbology displayed in the HUD of a U.S. Marine Corps AV-8B Harrier ground-attack aircraft. Information shown on the display includes the aircraft’s altitude, speed, and attitude, aiding flight control and navigation and helping pilots keep their eyes on the environment.

The first combat aircraft to deploy with an operational HUD was a British low-level strike aircraft known as the Blackburn Buccaneer in 1958 (Nijboer, 2016). To this day, all HUDs incorporate a number of the basic concepts embodied in Grubb’s original invention.

The same principle of keeping a human operator focused on the task at hand has also resulted in the integration of these heads-up technologies into an increasing number of new automobile designs (Newcomb, 2014).

Helmet-Mounted Sights and Displays

Through the 1960s, as cockpit avionics, sensors, and weapons systems continued to advance, scientists and engineers in military labs around the world similarly continued efforts at easing a pilot’s information processing burden and improving the control of sensors and weapons. The next logical step was moving the display of some of this information from the HUD to the pilot’s helmet.

The first major step in this evolution was the development of a helmet-mounted sight (HMS) in the late 1960s by the South African Air Force (SAAF). The HMS aided pilots in the targeting of heat-seeking missiles (Lord, 2008). Up to that point, pilots had been required to maneuver the aircraft so that the target fell within view of the HUD.

In the early 1970s, the U.S. Army deployed a head-tracked sight for the AH-1G Huey Cobra helicopter to direct the fire of a gimbaled gun. This was followed by the U.S. Navy’s deployment of the first version of the Visual Target Acquisition System (VTAS) in the F-4 Phantom II aircraft to exploit the lock-on capabilities of the AIM-9G Sidewinder air-to-air missile. In operation, the Sidewinder’s seeker or the aircraft’s radar was “slaved” to the position of the pilot’s head: a sensor tracked head movements while the pilot aimed using the sight picture displayed on a single-eye “Granny Glass” (VTAS I) or on the inside of the visor (VTAS II).

In the ensuing years, dozens of different helmet-mounted displays have been designed, coming in a wide variety of forms, including monocular (single image to one eye), biocular (single image to both eyes), binocular (separate viewpoint-corrected images to each eye), visor projections, and more. The key feature of each is the ability to overlay information onto a pilot’s real-world view. This information takes a variety of forms, including standard avionics and weapons information, as well as sensor data such as that provided by forward-looking infrared (FLIR) imagers. These systems are explored in greater detail in Chapter 5, “Augmenting Displays,” and Chapter 17, “Aerospace and Defense.”

Smart Glasses and Augmenting Displays

During the past several years, augmenting display technologies have transitioned from purely defense and specialty application areas into commercially available products, with many more on the way. As you progress through this book, you will encounter a number of these new displays along with a host of innovative application overviews.

At the time of this book’s preparation, there were two general categories of wearable augmenting displays, both of which are illustrated in Figure 1.5.

Credit: Head by decade3d / Depositphotos.com

Figure 1.5 This image illustrates the core differences between the two primary types of head-worn augmented reality displays. On the left, an optical see-through display overlays symbology and graphics directly on a user’s real-world view. On the right, a video see-through display combines imagery gathered from outward-facing video cameras with computer-generated graphics. These combined images are presented to the user on display elements within the headset.

Optical See-Through

With optical see-through displays, the user views the real world directly through monocular or binocular optical elements, such as holographic waveguides or other systems that enable the overlay of graphics, video, and symbology onto the real-world surroundings.

Video See-Through

With a video see-through head-mounted display (HMD), the real-world view is first captured by one or two video cameras mounted on the front of the display. These images are combined with computer-generated imagery and then presented to the user.
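
To make the video see-through pipeline concrete, here is a minimal sketch of the per-frame work such a system performs: capture a camera frame, then alpha-blend computer-generated imagery over it. It is written against OpenCV and NumPy; the camera device index and the reticle stand-in for a real renderer are illustrative assumptions, not details of any particular headset.

```python
import cv2
import numpy as np

def composite_frame(camera_frame, overlay_rgba):
    """Blend a computer-generated RGBA overlay onto a camera frame.

    camera_frame: HxWx3 BGR image from an outward-facing camera.
    overlay_rgba: HxWx4 image whose alpha channel marks synthetic pixels.
    """
    overlay_rgb = overlay_rgba[:, :, :3].astype(np.float32)
    alpha = overlay_rgba[:, :, 3:4].astype(np.float32) / 255.0
    frame = camera_frame.astype(np.float32)
    # Per-pixel alpha blend: synthetic content covers the camera image
    # only where the renderer actually drew something.
    return (alpha * overlay_rgb + (1.0 - alpha) * frame).astype(np.uint8)

cap = cv2.VideoCapture(0)  # outward-facing camera (device index assumed)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Stand-in for a real renderer: a translucent green reticle.
    h, w = frame.shape[:2]
    overlay = np.zeros((h, w, 4), dtype=np.uint8)
    cv2.circle(overlay, (w // 2, h // 2), 40, (0, 255, 0, 220), 2)
    cv2.imshow("video see-through", composite_frame(frame, overlay))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

A head-worn system performs this compositing once per eye, every frame, which is why minimizing the delay between camera capture and display is a central concern in video see-through design.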

Handheld/Mobile AR Devices

Although the emphasis of this book is on wearable AR and VR technologies, it is important to note the existence of handheld augmented display devices based on tablet computers and smartphones, such as that shown in Figure 1.6. Because these systems use onboard cameras to merge a real-world scene with computer-generated imagery, they are classified as video see-through devices.

Credit: Image courtesy of Office of Naval Research

Figure 1.6 Handheld augmented reality systems based on smartphones and tablets display information overlays and digital content tied to physical objects and locations. In the example shown, the tablet app is able to recognize an AR icon embedded in the wall-mounted illustration to reveal a 3D model of a valve handle, which remains stable as the user moves the tablet.
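
The marker-recognition behavior shown in Figure 1.6 is commonly implemented with fiducial detection plus pose estimation. The sketch below illustrates the general approach (not the specific system pictured) using OpenCV’s ArUco module, whose API varies somewhat between OpenCV versions; the camera intrinsics and marker size are placeholder values that a real application would obtain from calibration.

```python
import cv2
import numpy as np

# Placeholder intrinsics; a real app would use calibrated values.
CAMERA_MATRIX = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)
MARKER_SIZE = 0.05  # marker edge length in meters (assumed)

# Corner positions of the marker in its own coordinate frame,
# ordered to match the detector's output (clockwise from top-left).
OBJECT_POINTS = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
])

DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def marker_pose(frame):
    """Detect one ArUco marker and return its pose relative to the camera."""
    corners, ids, _ = cv2.aruco.detectMarkers(frame, DICTIONARY)
    if ids is None:
        return None
    # solvePnP recovers the marker's rotation and translation. A renderer
    # uses this pose each frame to keep the virtual 3D model pinned to
    # the physical marker as the handheld device moves around it.
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS, corners[0][0],
                                  CAMERA_MATRIX, DIST_COEFFS)
    return (rvec, tvec) if ok else None
```

Re-estimating the pose on every frame is what keeps the virtual valve handle in Figure 1.6 “stable” as the tablet moves.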

Another key attribute of some augmented reality displays is the analysis of camera imagery by the host system. An example of such a system is provided in Chapter 17, within which camera imagery is analyzed to detect where the cockpit ends and the window begins so that virtual aircraft can be inserted into a pilot’s view.

What Is Virtual Reality?

The phrase virtual reality has been extraordinarily difficult to define for a variety of reasons, not the least of which is the problem caused by the conflicting meaning and intent of the words virtual and reality.

Increasingly, it seems that any display technology even remotely associated with the words “three dimensional” is being hoisted onto the “us too” bandwagon by lazy marketers and others wishing to get caught up in the wave of hype. This general lack of specificity and broad application of the term has opened the doors to tremendous confusion.

For the purposes of this book, we prefer to rely on the original use of the phrase, which refers to display technologies, both worn and fixed in placement, that provide the user a highly compelling visual sensation of presence, or immersion, within a 3D computer model or simulation, such as is depicted in Figure 1.7. This is accomplished via two primary methods: stereoscopic head-mounted (or head-coupled) displays, and large fully and semi-immersive projection-based systems such as computer-assisted virtual environments (CAVEs) and domes.

Credit: Composite Image courtesy of NASA and innovatedcaptures © 123RF.com

Figure 1.7 This staged image is intended to illustrate the basic concept of immersive virtual reality. As opposed to looking at a desktop monitor, which is essentially a 2D window on a 3D world, an immersive virtual reality system provides the user the visual sensation of actual presence inside the 3D model or simulation.
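
Presenting a separate, viewpoint-corrected image to each eye is, at its core, a matter of rendering the scene twice from two laterally offset positions. A minimal sketch of that computation follows, assuming a typical 64 mm interpupillary distance; a real system would add per-eye projection matrices and lens-distortion correction.

```python
import numpy as np

IPD = 0.064  # interpupillary distance in meters (typical assumed value)

def eye_view_matrices(head_pose):
    """Derive left/right-eye view matrices from a tracked head pose.

    head_pose: 4x4 matrix mapping head-local coordinates to world
    coordinates, as reported by the head tracker.
    """
    views = {}
    for name, sign in (("left", -1.0), ("right", +1.0)):
        # Shift the viewpoint half the IPD along the head's local x axis.
        offset = np.eye(4)
        offset[0, 3] = sign * IPD / 2.0
        eye_to_world = head_pose @ offset
        # A view matrix is the inverse: world -> eye coordinates.
        views[name] = np.linalg.inv(eye_to_world)
    return views

# Example: a viewer standing at the origin, eyes 1.7 m above the floor.
head = np.eye(4)
head[1, 3] = 1.7
for eye, view in eye_view_matrices(head).items():
    print(eye, "eye position:", np.linalg.inv(view)[:3, 3])
```

The small horizontal disparity between the two rendered images is what the visual system fuses into a sensation of depth and, with head tracking, presence.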

Many virtual reality systems and applications also incorporate one of several 3D audio solutions to supplement the visual display. These systems are covered in greater detail within Chapter 8, “Audio Displays.”

Although the phrase virtual reality was first popularized in 1987 by Jaron Lanier, founder and former CEO of VPL Research, Inc., a manufacturer of the first commercially available VR products, the development of the core concepts and enabling technologies of virtual reality began decades earlier.

Of particular note is the work of American computer scientist and graphics pioneer Ivan Sutherland. In the mid-1960s, while serving as an associate professor of electrical engineering at Harvard University, Sutherland visited the Bell Helicopter company, where he saw a stereoscopic head-mounted display slaved to an infrared camera that was to be mounted below a helicopter to assist in difficult night landings. As Sutherland explains:

“We got a copy of that head-mounted display with its two CRTs. The leap of idea that we had was ‘wouldn’t it be interesting if the computer generated the picture instead of the infrared camera?’” (Sutherland, 2005).

As shown in Figure 1.8, the Bell Helicopter HMD used by Sutherland was based on two CRTs, one mounted on each side of the user’s head. Images from the CRTs were steered around and into the user’s eyes using a series of lenses and half-silvered mirrors. Sutherland and a colleague developed an armature, suspended from the ceiling, that tracked movement of the user’s head. The visual effect provided by the system was to overlay simple wireframe geometric shapes on top of the user’s view of the real world. By virtue of the tracking armature attached to the display, transformations could be calculated and the view of the wireframe images updated to reflect each change of viewpoint.

Credit: Image courtesy of Pargon via Flickr and distributed under a CC 2.0 license.

Figure 1.8 This image shows the head-mounted display developed by Bell Helicopter and used by computer scientist Ivan Sutherland and students to conduct early augmented and virtual reality research.
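
Sutherland’s system already contained the loop every tracked display has used since: read the head pose, recompute the viewing transformation, and redraw. The sketch below reproduces that loop for a wireframe cube under an idealized pinhole projection; the focal length and scene are assumptions for illustration, not the parameters of the actual 1960s hardware.

```python
import numpy as np

# Wireframe cube centered 5 m in front of the origin: 8 vertices,
# with edges joining vertices that differ in exactly one coordinate.
VERTS = np.array([[x, y, z - 5.0, 1.0]
                  for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
EDGES = [(a, b) for a in range(8) for b in range(a + 1, 8)
         if sum(VERTS[a][:3] != VERTS[b][:3]) == 1]

FOCAL = 500.0  # assumed focal length, in pixels

def draw_wireframe(head_to_world):
    """Project the cube's edges for the current tracked head pose.

    head_to_world: 4x4 pose from the tracking armature. Whenever the
    head moves, this function is called again; recomputing the
    projection is what makes the cube appear fixed in space.
    """
    world_to_head = np.linalg.inv(head_to_world)
    segments = []
    for a, b in EDGES:
        pa = world_to_head @ VERTS[a]
        pb = world_to_head @ VERTS[b]
        if pa[2] > -0.1 or pb[2] > -0.1:  # skip edges behind the viewer
            continue
        # Pinhole projection onto the display plane (viewer looks down -z).
        sa = (FOCAL * pa[0] / -pa[2], FOCAL * pa[1] / -pa[2])
        sb = (FOCAL * pb[0] / -pb[2], FOCAL * pb[1] / -pb[2])
        segments.append((sa, sb))
    return segments

print(len(draw_wireframe(np.eye(4))), "edges projected for this pose")
```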

Parallel to Sutherland’s work in civilian research labs in the 1960s and 1970s, the U.S. Air Force was pursuing its own development efforts. Of particular note was work carried out at Wright-Patterson Air Force Base in Ohio under the direction of Dr. Thomas Furness. One of these projects focused on the development of virtual interfaces for flight control. In 1982, Furness demonstrated a system known as VCASS (Visually Coupled Airborne Systems Simulator), shown in Figure 1.9.

Credit: Images courtesy of DoD

Figure 1.9 The image on the left shows an engineer wearing the U.S. Air Force Visually Coupled Airborne Systems Simulator (VCASS) helmet while seated in a laboratory cockpit (circa 1982). The terrain scene, symbology, and avionics data on the right are representative of the imagery displayed to the user.

The VCASS system used high-resolution CRTs to display visual information such as computer-generated 3D maps, sensor imagery, and avionics data to the simulator operator. The helmet’s tracking system, voice-actuated controls, and other sensors enabled the pilot to operate the aircraft simulator with gestures, utterances, and eye movements, translating immersion in a data-filled virtual space into control modalities (Lowood, 2016).

Between these early systems and today, a multitude of fully immersive stereoscopic head-mounted displays have been developed, albeit mostly for the high-end simulation and training market. It is only in the past couple of years that commercially available versions of these systems have entered the marketplace. Many of these displays are described in greater detail in Chapter 6, “Fully Immersive Displays.”

Conclusion

In this chapter we have explored the basic foundations of virtual and augmented reality systems, drawing clear distinctions between their differing capabilities as well as their varied, though related, pathways to existence. Despite the enormous hype associated with virtual reality, it is near certain that augmented reality will ultimately be the more widely adopted of the two technologies.

Keep in mind that the most successful products in any market are those that fulfill an important need or solve real problems. To this end, there are innumerable application areas for general consumers and enterprise users where a personal data display would be highly useful while going about daily life or in the course of one’s work. The widespread prevalence of smartphones, apps, and the overall “mobile lifestyle” already proves this.

In contrast, fully immersive virtual reality is spatially restricting and largely isolates the user from the real world. Once the user dons a head-mounted display, the loss of reference to one’s surroundings severely limits mobility. Beyond gaming and entertainment for the mass market, virtual reality will expand into a myriad of specialty application markets detailed throughout this book. But realistically, is immersive virtual reality something that a dominant portion of the population will want or need as a standalone product? This is unlikely.

As the reader progresses through this book, it will become readily apparent that current implementations of pixel-based LCD, OLED, and AMOLED virtual reality displays will ultimately give way to dual-use augmenting headsets or even contact lenses. Efforts at developing such systems are already well underway.
