12. A Generic Multiview Rendering Engine Architecture
this parameter to select the vertex data and the rendering state of an object. Level-of-detail can then be implemented in one or more of the following ways:
Level-of-detail for object geometry. Simplified mesh geometries can be used for some of the views, reducing vertex processing time for complex meshes.

Level-of-detail for object materials. Different views can apply different rendering techniques to a single object. The changes may involve reducing the number of rendering passes or the shading instruction complexity, such as switching between per-pixel and per-vertex shading or using different surface models for different views.

Level-of-detail for shaders. Shaders can execute different code paths based on the active view number, allowing algorithmic reduction of shader complexity as the number of views grows. This also eases the development of view-specific shaders and allows the creation of simplified routines for optimized views. Shader level-of-detail is a research area in itself, and various automated techniques have been proposed [Pellacini 2005]. The same effect can be achieved by an application-specific solution in which different shaders are supported for the different views.
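The per-view selection described above can be sketched as a small helper that maps the active view index to a mesh LOD and a shading path. This is an illustrative sketch only; the type and function names are hypothetical and not part of the actual engine API.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: choose a mesh LOD and a shading path from the active
// view index. Views beyond the first use progressively simpler resources.
struct RenderResources {
    std::size_t meshLod;   // 0 = full-detail mesh
    bool perPixelShading;  // false = fall back to per-vertex shading
};

RenderResources selectResources(std::size_t viewIndex, std::size_t lodCount) {
    RenderResources r;
    // Clamp so every extra view maps to some available simplified mesh.
    r.meshLod = viewIndex < lodCount ? viewIndex : lodCount - 1;
    // Only the primary view keeps the expensive per-pixel shading path.
    r.perPixelShading = (viewIndex == 0);
    return r;
}
```

A real engine would use this selection to bind the corresponding vertex data and rendering state before drawing each view.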
Multiview Level-of-Detail for Multiview Buffers
Using the customizability of off-target multiview buffers, it is possible to create buffers of different sizes for different views. One basic approach is to halve the resolution of the render target, substantially reducing the pixel shading cost. The low-resolution buffers can be upsampled (magnified) during the composition phase, using nearest or linear sampling.
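The magnification step can be illustrated on plain CPU-side buffers; a real compositor would perform the equivalent lookup in a fragment shader with nearest filtering. The buffers here are row-major arrays of packed pixel values, and the function name is illustrative.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of the composition-phase magnification: a half-resolution color
// buffer is upsampled to full resolution with nearest sampling. Each output
// pixel simply reads the source pixel at half its coordinates.
std::vector<std::uint32_t> upsampleNearest(const std::vector<std::uint32_t>& src,
                                           int srcW, int srcH) {
    const int dstW = srcW * 2, dstH = srcH * 2;
    std::vector<std::uint32_t> dst(static_cast<std::size_t>(dstW) * dstH);
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x)
            dst[y * dstW + x] = src[(y / 2) * srcW + (x / 2)];
    return dst;
}
```

Linear sampling would instead blend the four nearest source pixels, trading a little sharpness for smoother edges.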
Since the low-resolution buffer on the same viewport will be blurry, it is expected that the sharper view will dominate the depth perception and preserve the sharpness and quality of the overall perceived image [Stelmach et al. 2000]. Thus, it is advisable to use buffer level-of-detail options whenever pixel shading is time consuming.
We should also note that one of a person's two eyes can be more dominant than the other. Thus, if the dominant eye observes the higher-quality view, the user experiences a better view. It is not possible to know this information in advance, so user-specific tests would need to be performed and the system adjusted for each user. An approach that avoids dominant-eye maladjustment is to switch the low- and high-resolution buffer pairs after each frame [Stelmach et al. 2000].
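The frame-alternating scheme suggested by Stelmach et al. [2000] can be sketched as a tiny scheduler that swaps which eye receives the high-resolution buffer every frame, so neither eye is permanently favored. The names are illustrative.

```cpp
#include <cassert>

// Sketch of a dominance-neutral mixed-resolution scheduler: on even frames
// the left eye (index 0) gets the high-resolution buffer, on odd frames the
// right eye (index 1) does, and so on, alternating every frame.
struct MixedResolutionScheduler {
    unsigned frame = 0;

    // Returns true if the given eye (0 = left, 1 = right) should render
    // at full resolution this frame.
    bool highResolution(unsigned eye) const { return (frame + eye) % 2 == 0; }

    void nextFrame() { ++frame; }
};
```

The renderer would query this per frame when binding each view's render target.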
Other Optimization Approaches
As surveyed by Bulbul et al. [2010], graphics-pipeline-based and image-based optimization solutions have also been proposed. Graphics-pipeline-based optimizations make use of the coherence between views, or they are based on approximate rendering, where fragment colors in neighboring views are approximated from a central view when possible. In image-based optimizations, one view is reconstructed from the other view by exploiting the similarity between the two. In these techniques, the rendering time of the second image depends only on the image resolution rather than on the scene complexity, thereby saving rendering computations for one view. Approaches have been proposed that are based on warping, depth buffers, and view interpolation [Bulbul et al. 2010].
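The idea behind depth-based warping can be illustrated with a minimal CPU sketch: each pixel of the rendered view is shifted horizontally by a disparity derived from its depth to synthesize the second view, so the cost scales with resolution only. This is a simplified illustration under assumed conventions (closer pixels shift further); hole filling and occlusion handling, which real techniques must address, are omitted.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative image-based reconstruction: produce the second view by
// shifting each pixel horizontally by a depth-derived disparity.
// Unwritten output pixels remain 0, marking holes to be filled.
std::vector<std::uint32_t> warpView(const std::vector<std::uint32_t>& color,
                                    const std::vector<float>& depth,
                                    int w, int h, float disparityScale) {
    std::vector<std::uint32_t> out(color.size(), 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            // Closer pixels (smaller depth) receive a larger shift.
            int dx = static_cast<int>(disparityScale / depth[y * w + x]);
            int nx = x + dx;
            if (nx >= 0 && nx < w)
                out[y * w + nx] = color[y * w + x];
        }
    return out;
}
```

Note how the loop touches every pixel exactly once, independent of how complex the scene that produced the input was.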
12.9 Discussion
Multiview Scene Setup
Our architecture supports customization and extensible parameterization, but it does not further provide guidelines on how to set up the multiview camera parameters and the scene in order to achieve maximum viewing comfort. In the first volume of Game Engine Gems, Hast [2010] describes plano-stereoscopic viewing mechanisms and common stereo techniques, such as anaglyph rendering, temporal multiplexing (shutter glasses), and polarized displays, and discusses their pros and cons. Some key points are that contradicting depth cues should be avoided and that special care needs to be directed at skyboxes and skydomes, billboards and impostors, GUIs, cursors, menus in virtual 3D space, frame rate, view synchronization, and scene-to-scene camera setup consistency (such as focal distance). Viewers may have different eye separation distances and display sizes, and the distance of the viewer to the display can differ among platforms. It should be kept in mind that creating the right 3D feeling is a process that requires a scalable technical infrastructure (as presented in this chapter) and an analysis of the target platforms, the virtual scene, and animations.
Enabling/Disabling Multiview Rendering at Run Time
It is important to allow the user to select single-view rendering if the hardware supports it; some users [Hast 2010] may not be able to accommodate the multiview content easily and may prefer single-view rendering because it can produce a higher-quality image. The architecture natively supports switching
between single-view and multiview configurations through run-time attachment and detachment of multiview components (camera, buffer, and compositors as required) to a specific viewport on the render target. Viewport rendering logic can easily adapt itself to a single-view or multiview rendering pipeline, and the implementation within OpenREng provides a sample solution.
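The attach/detach mechanism can be sketched as a viewport that owns its multiview components optionally; when none are attached, the viewport falls back to the single-view pipeline. The type and member names below are illustrative, not the actual OpenREng API.

```cpp
#include <cassert>
#include <memory>

// Placeholder for the multiview camera, buffers, and compositors that the
// architecture attaches to a viewport when multiview rendering is enabled.
struct MultiviewComponents { /* camera, buffers, compositors ... */ };

// Hypothetical viewport: multiview rendering is toggled at run time by
// attaching or detaching the multiview component bundle.
struct Viewport {
    std::unique_ptr<MultiviewComponents> multiview;

    bool isMultiview() const { return multiview != nullptr; }

    void enableMultiview() {
        if (!multiview) multiview = std::make_unique<MultiviewComponents>();
    }

    // Detaching reverts the viewport to the single-view pipeline.
    void disableMultiview() { multiview.reset(); }
};
```

The per-frame rendering logic would branch on `isMultiview()` to choose between the two pipelines.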
Post-Processing Pipelines
Post-processing pipelines are commonly used, and their adaptation to multiview rendering can present a challenge. Most post-processing filters use spatial information about a fragment to calculate the output. This spatial information is partly lost when different views are merged into a single image, so applying the same post-processing logic to the single composited image may not produce the expected output. If spatial data is not used, as in simple color filters, the post-processing can be applied natively to the separate views. However, filters such as high-dynamic-range tone mapping and bloom may interact with spatial data, and special care may need to be taken [Hast 2010]. In our architecture, the post-processing logic can be integrated into the multiview compositor logic (shaders) to provide another rendering pass optimization.
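Folding a spatially independent post-process into the compositor merge step can be illustrated on a single pixel: here a simple grayscale color filter is applied to each view inside an anaglyph-style merge, saving a separate full-screen pass per view. The filter choice and channel packing are illustrative, not the engine's actual compositor.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: apply a color filter (integer luminance) to each view's pixel and
// merge the filtered results in one step. Left view drives the red channel,
// right view the green/blue channels, as in red-cyan anaglyph composition.
std::uint32_t mergeFiltered(std::uint32_t left, std::uint32_t right) {
    auto gray = [](std::uint32_t rgb) -> std::uint32_t {
        std::uint32_t r = (rgb >> 16) & 0xFF;
        std::uint32_t g = (rgb >> 8) & 0xFF;
        std::uint32_t b = rgb & 0xFF;
        return (r * 30 + g * 59 + b * 11) / 100;
    };
    std::uint32_t l = gray(left), r = gray(right);
    return (l << 16) | (r << 8) | r;
}
```

In the engine, the same combination would live in the multiview compositor's fragment shader, reading both view textures in a single pass.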
Integration with Other Stereo Rendering APIs
As discussed in Section 12.5, our architecture can directly benefit from OpenGL quad-buffer stereo mode support. Yet there are other proprietary APIs that manage stereo rendering at the driver level. As an example, Nvidia's 3DVision API supports only DirectX implementations; the multiview rendering is handled by the graphics driver when the application follows specific requirements. Since such APIs offer their own abstractions and optimizations for stereo rendering, it may not be possible to wrap them within our architecture.
3D Video Playback
To play back 3D video over our architecture, it is possible to send the decoded video data for the separate views to their corresponding multiview buffer color render targets and to specify the composition by defining your own multiview compositors. It is also possible to skip the multiview buffer interface and perform the composition work directly on the decoded video data inside the multiview compositor merge routines.
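The second option, composing directly from decoded frames, can be sketched with a merge routine that packs two decoded views side by side into one output buffer. Plain row-major byte buffers stand in for decoded video planes; a side-by-side layout is just one possible composition, chosen here for illustration.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of a compositor merge routine that reads decoded per-view video
// frames directly, bypassing the multiview buffer interface, and packs them
// side by side into a double-width output image.
std::vector<std::uint8_t> packSideBySide(const std::vector<std::uint8_t>& left,
                                         const std::vector<std::uint8_t>& right,
                                         int w, int h) {
    std::vector<std::uint8_t> out(static_cast<std::size_t>(2) * w * h);
    for (int y = 0; y < h; ++y) {
        // Left view fills the left half of each output row...
        std::copy(left.begin() + y * w, left.begin() + (y + 1) * w,
                  out.begin() + y * 2 * w);
        // ...and the right view fills the right half.
        std::copy(right.begin() + y * w, right.begin() + (y + 1) * w,
                  out.begin() + y * 2 * w + w);
    }
    return out;
}
```

A display-specific compositor would replace this packing with whatever interleaving the target 3D display expects.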
Using the Multiview Pipeline for Other Rendering Techniques
Our multiview rendering architecture can be extended to support soft shadow techniques that use multiple lights to generate depth results from different locations. Yang et al. [2009] show an example of the multi-light approach for soft shadow rendering.
Acknowledgements
This project has been supported by 3DPHONE, a project funded by the European Union
EC 7th Framework Programme.
References
[Bowman et al. 2004] Doug A. Bowman, Ernst Kruijff, Joseph J. LaViola, and Ivan
Poupyrev. 3D User Interfaces: Theory and Practice. Reading, MA: Addison-
Wesley, 2004.
[Bulbul et al. 2010] Abdullah Bulbul, Zeynep Cipiloglu, and Tolga Çapın. “A Perceptual
Approach for Stereoscopic Rendering Optimization.” Computers & Graphics 34:2
(April 2010), pp. 145–157.
[Dodgson 2005] Neil A. Dodgson. “Autostereoscopic 3D Displays.” Computer 38:8
(August 2005), pp. 31–36.
[Hast 2010] Anders Hast. “3D Stereoscopic Rendering: An Overview of Implementation
Issues.” Game Engine Gems 1, edited by Eric Lengyel. Sudbury, MA: Jones and
Bartlett, 2010.
[Pellacini 2005] Fabio Pellacini. “User-Configurable Automatic Shader Simplification.”
ACM Transactions on Graphics 24:3 (July 2005), pp. 445–452.
[Stelmach et al. 2000] L. Stelmach, Wa James Tam, D. Meegan, and A. Vincent. “Stereo
Image Quality: Effects of Mixed Spatio-Temporal Resolution.” IEEE Transactions
on Circuits and Systems for Video Technology 10:2 (March 2000), pp. 188–193.
[Yang et al. 2009] Baoguang Yang, Jieqing Feng, Gaël Guennebaud, and Xinguo Liu.
“Packet-Based Hierarchal Soft Shadow Mapping.” Computer Graphics Forum
28:4 (June–July 2009), pp. 1121–1130.