GPU-Accelerated Visualization 243
11.6 Large Display Visualization
Data sets are ever-increasing in size, which results in visually more
complex representations of them. This drives the interest in larger
displays with increased resolution, which, in the context of tiled displays,
usually refers to an increase in the total number of pixels rather than an
increase in pixel density.
Although large, high-resolution displays are still expensive to build and
rather difficult to operate, real-world application areas have existed for quite
some time. One of the most prominent applications is vehicle design, where
the goal is to have a model at an original scale for designers and engineers
to work with. Control rooms that are used to manage telecommunications or
electricity networks are another example. Physically large displays are also the
key to immersive applications, which are often used for training and real-world
simulation purposes. Finally, many high-resolution installations can be found
in academia, not least for researching the construction and operation of
those systems themselves. The benefits of a large display area and a large
number of pixels have also been the subject of numerous studies, for example,
for navigational tasks, multitasking, memorizing, and general usability [22,
112, 6]. The survey by Ni et al. [83] provides a comprehensive summary of
usability aspects as well as hardware configurations and applications of tiled
displays.
11.6.1 Flat Panel-Based Systems
With consumer graphics cards having two or more video connectors, the
easiest way to increase display space is by attaching multiple monitors to a
GPU [3]. With a graphics cluster, this setup can be extended to large, wall-
sized, tiled displays. The use of commodity hardware makes the construction
of systems reaching more than 300 megapixels affordable [81].
Besides a lower initial cost compared to formerly prevalent projection-based
tiled display installations, liquid crystal display (LCD) arrays have the
advantage of lower maintenance costs, mostly because expensive lamp bulb
replacements are unnecessary [83, 23, 81]. Furthermore, the requirements
regarding the infrastructure around the system itself are much lower: LCDs
do not have a throw distance and hence require less space, and their heat
dissipation is not as high as that of a large number of projectors. Finally,
color correction and geometric alignment are much easier to achieve, not
least because the bezels between the screens make pixel-exact registration
unnecessary.
While those bezels have even been found beneficial for organizing different
tasks on a desktop, they nevertheless hamper immersion and introduce visual
distortion or missing information when images cross them [97]. Using a hard-
ware setup resembling Baudisch et al.'s focus-plus-context screen [7], Ebert
et al. tried to overcome this problem by projecting low-resolution imagery onto
the bezels [27].
244 High Performance Visualization
11.6.2 Projection-Based Systems
Tiled displays that do not suffer from discontinuities caused by screen
bezels have traditionally been built using an array of video projectors. The
challenges of building such a system have barely changed over the last
decade [41]: projectors and a matching screen material must be chosen, the
devices must be mounted stably, and the whole system must be calibrated to
form a seamless, continuous display.
Over the years, a variety of projection technologies have been developed.
Early installations, like the first CAVE [20], were built using CRT projectors,
which offer freely scalable image geometry but suffer from low brightness.
Later, LCD projectors became the prevalent video projection technology used
in nearly every conference room, making them cheap commodity hardware.
Closely related to LCD is Liquid Crystal on Silicon (LCoS), which uses a re-
flective silicon chip with the liquid crystals on it. LCoS enables commercially
available devices with resolutions up to 4096 × 2160 pixels (4K), and research
prototypes with 33 megapixels exist [47]. A six-sided CAVE installation with
24 LCoS projectors and a total of 100 megapixels has the highest resolution
amongst rear-projection systems nowadays [85]. In contrast, Digital Light
Processing (DLP) uses a myriad of micromirrors to reflect the light of
each pixel separately. These mirrors can be toggled individually very quickly,
creating the impression of gray levels by reflecting the light either out of
the projector or not. Color is added either through time multiplexing via a
color wheel, or through a separate chip for each color channel. The mirrors
can actually be toggled so fast that the frame rates required for active stereo
projections are reached, which is a clear advantage for building immersive
installations.
Stereoscopic displays show two images from slightly different perspectives,
one visible only to the left eye of the user and the other only to the right eye [20].
This channel separation can be achieved by different means, most notably
active shutters (time multiplexing), polarized light, or interference filters. All
of these technologies require the user to wear matching stereo glasses, which
complement the filters built into the projector. While stereo displays can also
be built from flat panels, their full resolution cannot be brought to bear to
each eye, especially in the case of autostereoscopic displays, which remove the
need for wearing stereo glasses completely [102].
Aside from projector technology, the screen material is a crucial compo-
nent that affects the quality of a display environment. However, choosing the
screen material usually means finding a compromise and is also dependent on
the projectors and, if applicable, the technology used for stereoscopy. While
having a nearly Lambertian surface for the screen makes calibration easier
and, therefore, is desirable for a large number of tiles [12], such a screen
results in reduced image sharpness and hot spots, and it might be
inappropriate to use because it may affect the polarization of light. Likewise,
tinted screens have become popular, since they increase the contrast of the
image, but obviously they require brighter projectors.

FIGURE 11.6: A rear-projection tiled display without photometric calibration.

Acrylic glass and float glass are the
two most common support materials as they provide the rigidity needed for
building systems with very small pixels. Thanks to increasing demand, those
materials are currently also available as seamless pieces in sizes required for
large screens.
What makes the installation of a high-resolution rear-projection display
difficult is the need for different types of alignment—geometric registration
and photometric calibration—that have to be carried out thoroughly to make
the display seamless. A satisfactory geometric calibration is difficult to achieve
using hardware alone due to nonlinear lens distortions, which cannot be com-
pensated for by positioning devices with six degrees of freedom. Hence, a lot of
effort has been put into distorting the images so that the effects of an incom-
plete mechanical registration cancel out [19, 17, 92]. The goal of photometric
calibration is to remove variations of luminance and chrominance between
different projectors and within a projector. Majumder and Stevens [66] iden-
tified the reasons for those variations and also stated that aligning luminance is
the most important factor for achieving the impression of a seamless display—
a problem that is specifically relevant because overlapping projections cause
areas of exceptionally high brightness (Fig. 11.6). Matching the color gamuts
of the projectors after that is a computationally expensive operation [9], but
less important because chrominance does not vary significantly between de-
vices of the same type [65]. Therefore, a per-pixel correction of luminance
can suffice to create the impression of a seamless screen. For geometric and
photometric calibration, cameras or spectroradiometers are typically used to
build a feedback loop and automate the process. Brown et al. [12] give a good
overview of both fields, also addressing the fact that physically large displays
might require the camera to be moved or might simply require using more
cameras [92, 19, 17]. Devices that can measure their light flux and automat-
ically synchronize it with all other projectors in the array are commercially
available from some companies who specialize in building immersive projec-
tion installations [118]. One step further is the idea of combining cameras and
projectors into one smart projector that can sense its environment and auto-
matically adjust for arbitrarily shaped and colored projection surfaces [11].
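The per-pixel luminance correction described above can be illustrated with a small sketch. The function below computes attenuation masks for two horizontally overlapping projectors using a simple linear cross-fade; the function name, parameters, and the linear ramp are illustrative assumptions, since real systems derive the ramps from camera measurements and account for projector gamma:

```python
def blend_weights(width, overlap):
    """Return per-column attenuation weights for a left and a right projector.

    Columns index the combined screen of 2 * width - overlap pixels. Inside
    the overlap, weights ramp linearly so the two contributions sum to 1,
    removing the band of doubled brightness visible in Figure 11.6.
    """
    total = 2 * width - overlap
    left = [0.0] * total
    right = [0.0] * total
    for x in range(total):
        if x < width - overlap:            # only the left projector covers x
            left[x] = 1.0
        elif x < width:                    # overlap region: cross-fade
            t = (x - (width - overlap) + 1) / (overlap + 1)
            left[x] = 1.0 - t
            right[x] = t
        else:                              # only the right projector covers x
            right[x] = 1.0
    return left, right

left, right = blend_weights(width=1920, overlap=128)
# Every column receives a total weight of 1, i.e., no visible seam.
assert all(abs(l + r - 1.0) < 1e-9 for l, r in zip(left, right))
```

In practice such a mask is multiplied into the framebuffer per pixel (often in a shader), and the ramp is shaped to compensate for the projector's nonlinear response.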
11.6.3 Rendering for Large Displays
Interactive visualization on large tiled displays usually involves the use
of a graphics cluster, simply because each of the display devices requires an
input, and the number of outputs a single machine can provide is limited. This
hardware setup predetermines a natural image-space subdivision, which leads
to a bias towards the sort-first class of Molnar et al.’s taxonomy [73]. While
their taxonomy applies to parallel rendering of 3D scenes in general, Chen
et al. [16] introduce a classification targeted at rendering for high-resolution
display walls. They identify three classes of data transfers: control data, which
enable multiple instances of the same application to run in a synchronized way,
primitives, which are rasterized separately on each machine, or simply pixels
of the final image. From these, they derive two models of program execution:
master–slave, which essentially implements the primitive distribution pattern,
and synchronized execution [18].
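The natural image-space subdivision that the hardware imposes amounts to giving each render node an asymmetric sub-frustum of the global view frustum. A minimal sketch, with illustrative names that are not taken from any of the cited frameworks:

```python
def tile_frustum(left, right, bottom, top, cols, rows, ci, ri):
    """Return the asymmetric sub-frustum (left, right, bottom, top at the
    near plane) for tile (ci, ri) of a cols x rows display wall, with tile
    (0, 0) at the bottom-left. Near and far planes are shared by all tiles."""
    w = (right - left) / cols
    h = (top - bottom) / rows
    return (left + ci * w, left + (ci + 1) * w,
            bottom + ri * h, bottom + (ri + 1) * h)

# A 2 x 2 wall splits the global frustum (-1, 1, -1, 1) into four quadrants;
# each node would pass its quadrant to, e.g., glFrustum.
assert tile_frustum(-1, 1, -1, 1, 2, 2, 0, 0) == (-1, 0, -1, 0)
assert tile_frustum(-1, 1, -1, 1, 2, 2, 1, 1) == (0, 1, 0, 1)
```

Because every node renders the same scene through its own sub-frustum, this is exactly the sort-first pattern: primitives are assigned to processors by where they fall on the screen.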
The synchronized execution model usually minimizes the amount of data
to be transferred and can be implemented on an application or system level.
The former is often used in tools solving a specific application problem or
even targeting a specific hardware constellation. At the system level, the goal
is to synchronize applications transparently by coordinating buffer swaps,
timers, and I/O operations.
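The coordination of buffer swaps can be sketched with a barrier: each renderer finishes its frame, then waits until all others have finished before presenting, so every tile flips in lockstep. This is a minimal single-process sketch using threads as stand-ins for cluster nodes; a real system would use a network barrier or hardware swap groups instead of `threading.Barrier`:

```python
import threading

NUM_TILES = 4
swap_barrier = threading.Barrier(NUM_TILES)
presented = []                      # (frame, tile) records, in presentation order
lock = threading.Lock()

def render_loop(tile_id, frames):
    for frame in range(frames):
        # ... render tile `tile_id` for `frame` here ...
        swap_barrier.wait()         # block until every tile finished this frame
        with lock:
            presented.append((frame, tile_id))

threads = [threading.Thread(target=render_loop, args=(i, 3))
           for i in range(NUM_TILES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# No tile may present frame n+1 before all tiles have presented frame n.
frames_in_order = [f for f, _ in presented]
assert frames_in_order == sorted(frames_in_order)
```

The barrier guarantees frame coherence across the wall: a fast node cannot race ahead, because the next swap only proceeds once the slowest node arrives.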
WireGL [44] and its successor Chromium [45] are typical representatives
of the master–slave pattern. Their goal is total application transparency: they
enable unmodified OpenGL applications to run on clusters by intercepting
all OpenGL API calls, distributing them, and executing them on the remote
machines. An equivalent for 2D graphics in the X Window System is Distributed
Multihead X (DMX) [25]. It implements an X server that packs the API
commands and sends them to back-end machines. Again, the remote machines
perform the actual rendering, which results in a large X11 desktop spanned
over multiple machines.
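The command-distribution idea shared by WireGL, Chromium, and DMX can be reduced to a tiny sketch: the master serializes API calls into a command stream, and each slave replays the stream against its local renderer. The command names and the in-memory "network" below are illustrative assumptions, not the actual wire protocols of those systems:

```python
import json

def pack(stream, name, *args):
    """Master side: append a serialized API call to the command stream."""
    stream.append(json.dumps({"call": name, "args": args}))

def replay(stream, renderer):
    """Slave side: decode each command and dispatch it to the local renderer."""
    for msg in stream:
        cmd = json.loads(msg)
        getattr(renderer, cmd["call"])(*cmd["args"])

class Renderer:
    """Stand-in for a slave's local graphics API."""
    def __init__(self):
        self.log = []
    def clear(self):
        self.log.append("clear")
    def draw_triangle(self, x, y):
        self.log.append(f"triangle@{x},{y}")

stream = []
pack(stream, "clear")
pack(stream, "draw_triangle", 10, 20)

slave = Renderer()
replay(stream, slave)
assert slave.log == ["clear", "triangle@10,20"]
```

Since every slave replays the identical stream, all tiles stay consistent; the cost is that the full command traffic crosses the network every frame, which motivates the more selective middleware discussed next.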
As fully transparent solutions introduce performance penalties due to their
generality, and implementing custom applications from scratch is cumbersome,
a lot of research has been done on developing generic, efficient, and easy-to-use
middleware layers. Raffin and Soares [91] and Ni et al. [83] give a good overview
of those. Often, these frameworks do not strictly fall into one of Chen et al.'s
classes; some, like Aura [116], even implement both. Aura is a retained-mode
graphics API used to build visualization applications for tiled displays on a
graphics cluster that offers two different communication modes: running
multiple synchronized copies of the application implements the synchronized
execution pattern, while broadcasting the scene to all rendering machines
implements master–slave rendering. On top of Aura, the VIRPI toolkit
provides controls and event handling for building user interfaces in virtual
reality environments [34].
The Cross Platform Cluster Graphics Library (CGLX) by Doerr and
Kuester [26] exposes a callback-based interface like the widely used OpenGL
Utility Toolkit (GLUT) to facilitate porting existing applications to high-
resolution tiled display environments. CGLX synchronizes events raised by
user interaction from the master instance to its slave nodes and provides a
means for passing user-defined messages between the nodes. The distributed
OpenGL contexts are managed by intercepting API calls, which, in contrast
to Chromium, is not fully transparent: applications must be adapted manually
by replacing the OpenGL function calls with pass-through functions of the
framework.
Only a few of the frameworks have found wider usage, among them commer-
cially supported products like the CAVElib and Equalizer [28]. The latter is an
object-oriented rendering framework that supports building image-space and
object-space task subdivisions including distributed compositing algorithms.
Equalizer allows for specifying task decomposition strategies by means of trees
that are automatically run in a synchronized execution manner by the frame-
work.
The broad availability of high-speed network interconnects like 10 Gi-
gabit Ethernet or InfiniBand makes the implementation of frameworks dis-
tributing pixels feasible. The OptIPuter project aims to leverage high-speed
optical networks for tightly coupling remote storage, computing and visual-
ization resources [23]. To this end, the Scalable Adaptive Graphics Environ-
ment (SAGE) [94] has been developed, which implements a distributed
high-resolution desktop for displaying image streams from various
sources. It comes with a variety of streaming source providers, including ones
for videos, 2D imagery, 3D renderings, and remote desktops via Virtual Net-
work Computing (VNC) [96]. Like windows on a desktop, all streams can be
freely moved and resized in real-time.
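To see why such interconnects are needed for pixel distribution, consider a rough calculation; the figures below are illustrative assumptions, not numbers from the text:

```python
def stream_gbps(width, height, bits_per_pixel, fps):
    """Raw bandwidth of an uncompressed pixel stream in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

rate = stream_gbps(1920, 1080, 24, 30)   # one uncompressed 1080p RGB stream
# Roughly 1.5 Gb/s per stream: a handful of such streams already saturates
# 10 Gigabit Ethernet, which is why pixel-distribution systems depend on
# high-speed interconnects or compression.
assert 1.4 < rate < 1.6
```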