434 High Performance Visualization
are actually allocated. EnSight uses specific predefined tags to indicate where
individual EnSight framework components are to be realized when it is not
provided with a more specific tag from a configuration file. For example, the
role “SOS SERVERS” indicates that the CEIShell could run an EnSight server
for a distributed EnSight session. EnSight can inquire how many CEIShells
have been assigned a particular role and use that information to determine,
at runtime, the number of EnSight servers or rendering nodes to use.
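This runtime sizing can be sketched as a simple role registry. The RoleRegistry class, node names, and role strings below are illustrative assumptions, not part of the actual CEIShell API:

```python
# Hypothetical sketch of EnSight-style role queries against a CEIShell
# network; the RoleRegistry class and node names are invented for
# illustration and are not the real CEIShell interface.

from collections import defaultdict


class RoleRegistry:
    """Maps role tags to the CEIShell nodes that advertise them."""

    def __init__(self):
        self._roles = defaultdict(list)

    def assign(self, shell_id, role):
        self._roles[role].append(shell_id)

    def count(self, role):
        """How many shells have been assigned this role?"""
        return len(self._roles[role])

    def shells(self, role):
        return list(self._roles[role])


# At startup, each CEIShell reports the roles from its configuration.
registry = RoleRegistry()
registry.assign("node0", "SOS SERVERS")
registry.assign("node1", "SOS SERVERS")
registry.assign("node2", "RENDER")

# EnSight can then size its distributed session at runtime.
num_servers = registry.count("SOS SERVERS")
```

The application only ever asks "how many shells hold this role," so the same registry shape serves both server and rendering-node allocation.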
21.3.2 Application Invocation
Once EnSight decides what components and how many of each it needs to
invoke, it requests the CEIShell network to actually launch the components on
its behalf. This approach simplifies the core EnSight application logic by
insulating the application from the details of remote resource access and
launching, and it gives a site the flexibility needed for tight integration
with its computing environment.
The CEIShell is designed to aid in complex session debugging tasks. It
redirects the output from the applications it launches and other CEIShells
back to the root CEIShell. From there, it can redirect this output to the main
application (e.g., the EnSight client) where a dialog might be displayed with
highlighted warnings and errors. CEIShell can also route communications on
behalf of the application using an intrinsic VPN mechanism. This system
allows application components to talk to each other without allocating ad-
ditional communication channels by multiplexing application communication
over the intra-CEIShell communication channels. This can be essential if ap-
plication components reside multiple network hops away from each other,
precluding the establishment of direct communication channels.
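The multiplexing idea can be illustrated with a minimal framing scheme. The per-message channel-id header below is an assumption made for the sketch; the real intra-CEIShell wire protocol is not described here:

```python
# Illustrative sketch of multiplexing several application channels over
# one intra-CEIShell link. The framing (channel id + length prefix per
# message) is an assumption, not the documented CEIShell protocol.

import struct

_HEADER = ">IH"  # 4-byte channel id, 2-byte payload length


def mux(channel_id, payload):
    """Frame one application message for the shared link."""
    return struct.pack(_HEADER, channel_id, len(payload)) + payload


def demux(stream):
    """Split the shared byte stream back into (channel, payload) pairs."""
    messages = []
    offset = 0
    header_size = struct.calcsize(_HEADER)
    while offset < len(stream):
        channel_id, length = struct.unpack_from(_HEADER, stream, offset)
        offset += header_size
        messages.append((channel_id, stream[offset:offset + length]))
        offset += length
    return messages


# Two logical application channels share one physical link.
link = mux(1, b"geometry chunk") + mux(2, b"render command")
```

Because every hop only forwards framed bytes, components that are multiple network hops apart never need a direct socket between them.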
21.3.3 CEIShell Extensibility
CEIShell is extensible in several ways. It knows nothing about a particular
application, including EnSight, so it can be used for any distributed network
application. Because CEIShell itself attaches no meaning to roles, individual
applications may dictate their semantics and exploit them for other tasks. For
example, a site might use its own custom roles to launch X11 servers, with an
EnSight session-specific life cycle, on specific nodes in the CEIShell network.
CEIShells communicate through transport plug-ins, specified through
generic URLs that indicate which transport and parameters to use. These
transports are implemented as dynamic shared libraries that implement basic
communication primitives. A site may provide its own custom communications
transport abstraction. CEIShell ships with transports for named pipes and
TCP/IP sockets, as well as source code for an MPI transport plug-in.
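Transport selection from such a URL might look like the following. The scheme names and plug-in library names are invented for illustration; only the general "scheme selects transport, URL carries parameters" shape comes from the text:

```python
# Sketch of transport plug-in selection from a generic URL, assuming
# the URL scheme names the transport. The scheme-to-library table and
# library names are hypothetical.

from urllib.parse import urlparse

TRANSPORT_PLUGINS = {
    "tcp": "libshell_tcp.so",    # TCP/IP sockets
    "pipe": "libshell_pipe.so",  # named pipes
    "mpi": "libshell_mpi.so",    # MPI transport (ships as source)
}


def select_transport(url):
    """Return (plug-in library, connection parameters) for a URL."""
    parts = urlparse(url)
    plugin = TRANSPORT_PLUGINS[parts.scheme]
    params = {"host": parts.hostname, "port": parts.port}
    return plugin, params


plugin, params = select_transport("tcp://headnode:7010")
```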
The EnSight Visualization Application 435
21.4 Advanced Rendering
One of the most recognizable aspects of a visualization framework is its
interactive rendering system and associated user interface. The ease with
which useful visual representations of large, complex data sets may be con-
structed is one measure of the effectiveness of such interfaces. The EnSight
rendering infrastructure has undergone several major revisions designed to
improve this measure. Recent revisions have been motivated by the need to
support distributed-memory volume rendering and improve direct user inter-
action with extremely large data sets. These changes facilitated a move to a
more modern, Qt-based GUI (see Fig. 21.4) that supports improved direct
interaction mechanisms—drag and drop, context sensitive menus, etc.
In order to improve application responsiveness with large data sets, it be-
came necessary to be able to perform as many actions as possible without
forcing geometry to be re-rendered, as rendering-induced latency tends to in-
crease with the rendered polygon count. Additionally, direct interaction and
feedback methods introduced in the revised EnSight GUI added the require-
ment to support object-level picking and high performance object recognition,
with both geometry and annotation elements in the rendering system. The
resulting rendering system supports dynamic selection changes with object
silhouette highlighting and object picking, without forcing additional render-
ing operations. The approach is to base the rendering system on an image
fragment compositing system implemented with OpenGL shaders. This mech-
anism maps naturally to distributed parallel compositing, allowing a single,
unified system to support both simple desktop and distributed parallel ren-
dering.
The dynamic shader system uses OpenGL multiple render-target enabled
framebuffer objects, making it possible to break up the rendering pipeline
into independent, retained layers for the 3D geometry and 2D annotations.
Hardware picking and layered rendering make it possible to eliminate the tra-
ditional EnSight modes (plots, annotations, viewports, etc.), replacing them
with context-sensitive direct interaction, which accelerates and simplifies user
interaction. The system was developed as an extension to the existing parallel
compositing system. All of the geometry rendering operations are implemented
in a framework that is based on the ability to generate complete OpenGL
shader programs. These programs are assembled dynamically, from a collection
of shader program fragments, to represent the current rendering state.
21.4.1 Customized Fragment Rendering
The dynamic fragment system follows the Chromium [7] OpenGL model
without the network components. It uses a state tracking system [4] to deter-
mine what rendering functionality is active and computes the actual fragment
program from the state. As the core application makes OpenGL calls, a model
of the current rendering state is formed and tracked, avoiding the overhead of
FIGURE 21.4: The EnSight 10 GUI includes dynamically generated silhou-
ette edges for selections and targets, hardware-accelerated object picking, and
context sensitive menus. These form the basis of a comprehensive direct in-
teraction, drag and drop interface.
redundant state changes. When the modeled state is bound, the engine looks
at the current OpenGL state and generates a complete fragment shader by
combining fragments of GLSL source code that model the various OpenGL
rendering features (e.g., lighting, texturing models, etc.). The shader is com-
piled and cached for future reuse. In a typical session, EnSight will encounter
15–20 independent OpenGL rendering states and will generate and cache a
shader program for each such state.
A formal extension system allows rendering capabilities to be redefined and
extended, exposing custom OpenGL rendering as CEI OpenGL extensions. For
example, EnSight uses a
custom texturing interpolation function to implement its “limit fringes” fea-
ture. The implementation is specified as a hand-coded GLSL fragment. The
application then passes a unique OpenGL texturing enumerator to the state
tracker via the state-tracked glTexEnv calls. Through this mechanism, the
rendering engine can be extended to support rendering styles not intrinsic
to OpenGL, in a form that can be additively combined, on the fly, with
other OpenGL rendering features. Structured and unstructured volume ren-
dering and depth peeling have been implemented as such extensions, allowing
the system to support a hybrid transparent polygon/ray tracing rendering
system.
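The extension hook can be illustrated like this. The enumerator value, registry functions, and GLSL fragment are invented for the sketch; the real mechanism passes the enumerator through OpenGL's texture-environment state:

```python
# Hedged sketch of the extension hook: a custom texturing enumerator
# maps to a hand-coded GLSL fragment that the shader generator splices
# in alongside its built-in snippets. The enumerator value and snippet
# text are hypothetical.

CEI_LIMIT_FRINGES = 0x8F00  # hypothetical custom enumerator
GL_MODULATE = 0x2100        # a standard OpenGL texture-env mode

_extensions = {}


def register_extension(enum, glsl_fragment):
    """Associate a custom enumerator with a hand-coded GLSL fragment."""
    _extensions[enum] = glsl_fragment


def tex_env_snippet(enum):
    """Resolve a texturing mode to GLSL, custom extensions first."""
    if enum in _extensions:
        return _extensions[enum]
    return "color *= texture(tex0, uv);"  # default modulate-style path


register_extension(
    CEI_LIMIT_FRINGES,
    "color = limit_fringe_lookup(palette, value);",
)
```

Because the custom fragment is just another snippet to the generator, it composes with lighting, fog, and the other standard features automatically.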
A key feature of this new OpenGL abstraction is to embrace fragments
composed of deep pixels. A deep pixel includes color, depth, and secondary
information related to the object types, symmetry, etc. The EnSight rendering
system includes two independent depth buffers and at least two color buffers.
For fast, pixel-accurate rendering of transparent surfaces, EnSight uses a
depth peeling [5] algorithm. The paired depth buffers are used for depth
peeling, and the secondary color buffers store ancillary metadata.
Every visible object in EnSight has the ability to store one or more render-
ing tags. Tags are rendered into secondary color buffers. They include informa-
tion about the basic state of the object (e.g., selected, highlighted, etc.) and
they hint at how a given object should behave in various interactive contexts.
For example, tags encode the relative importance of an object to the current
set of operations, making it easier for the user to select the most appropriate
options for a given operation. They can also be used to inform antialiasing
post-processing operations for edges and boundaries [8]. The dynamic high-
lighting and picking systems are based on these tags. If the user clicks on a
pixel, the actual target object can be determined directly from the tag buffer
by reading back a few pixels without referencing the actual geometry.
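A tag-buffer pick reduces to a single buffer read. The bit layout (object id in the low bits, a selection flag above) is an assumption made for this sketch:

```python
# Sketch of tag-buffer picking: each pixel of a secondary "tag" color
# buffer identifies its object, so a click resolves to an object by
# reading back one value, never touching the geometry. The bit layout
# of the tag is an assumption.

OBJECT_ID_MASK = 0x0FFF
SELECTED_BIT = 0x1000


def pick(tag_buffer, x, y):
    """Return (object_id, selected) for the pixel at (x, y)."""
    tag = tag_buffer[y][x]
    return tag & OBJECT_ID_MASK, bool(tag & SELECTED_BIT)


# A tiny 2x3 tag buffer: object 5 occupies the right column, and the
# bottom-right pixel is also flagged as selected.
tags = [
    [0, 0, 5],
    [0, 0, 5 | SELECTED_BIT],
]

obj, selected = pick(tags, 2, 1)
```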
Dynamic highlights can be updated in real time in response to single-pixel
cursor motion, exploiting custom shaders to perform the necessary topological
analysis without re-rendering any geometry. Tags are also used to encode the
context of an object. For example, if a given object is currently being ren-
dered using a symmetry operation, the specific mode of symmetry is included.
This information is used by the picking system to generate coordinates in the
original data space with simple coordinate inversion operations. The overall
user experience is greatly improved with the unique types of instantaneous
feedback afforded by the deep pixel tags.
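The coordinate inversion for a symmetry-rendered pick can be sketched as below. Only mirror symmetry about a coordinate plane is shown, and the mode strings are invented; the text says only that the symmetry mode is carried in the tag and inverted with simple coordinate operations:

```python
# Illustrative inversion of a picked point that was rendered under a
# symmetry operation. Only mirror symmetry about a coordinate plane is
# sketched; the mode encoding is an assumption.

def invert_symmetry(point, mode):
    """Map a picked world-space point back into original data space."""
    x, y, z = point
    if mode == "none":
        return point
    if mode == "mirror_x":  # instance was reflected across x = 0
        return (-x, y, z)
    if mode == "mirror_y":  # instance was reflected across y = 0
        return (x, -y, z)
    raise ValueError(f"unknown symmetry mode: {mode}")


# A click lands on a mirrored instance; recover the source coordinate.
original = invert_symmetry((-2.5, 1.0, 0.0), "mirror_x")
```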
The dynamic fragment framework outlined previously is used to implement
a ray casting volume rendering system capable of supporting arbitrary polyhe-
dral elements [3] while maintaining a high-efficiency rectilinear grid renderer
that can be used when appropriate. The algorithm allows for the rendering
of volumes with embedded transparent surfaces by terminating the volume
rays at each depth peel and later restarting them at the peel's depth surface.
Coupled with natural ray restart at the volume element block boundary,
the system supports both embedded transparent surfaces and volume data
presented as unsorted, concave blocks. Very little pre-processing (only sim-
ple, unconstrained blocking) and no presorting of volume domain elements
is necessary. Another way to conceptualize the system is as a space-leaping
volume renderer that leverages depth peels as leaping boundaries, switching
back and forth between ray casting and traditional polygon rendering, as dic-
tated by the geometry. The results of the ray casting segments are blended
into the peel, effectively collapsing the combined region into a single image
fragment. This formulation allows volume rendering to fit naturally into the
distributed compositing system. Examples that exercise both the structured
volume shader and the unstructured volume shader are shown in Figures 21.5
and 21.6, respectively.
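The peel-bounded ray traversal can be reduced to a one-dimensional sketch. Constant per-segment density and opacity-only compositing are simplifying assumptions; the real system blends full color fragments:

```python
# 1D sketch of peel-bounded ray casting: each ray integrates the volume
# between consecutive depth peels, blends in the transparent surface at
# each peel, then restarts behind it. Constant volume density and
# opacity-only (no color) compositing are simplifying assumptions.

import math


def composite_ray(peels, volume_density, far_depth):
    """Front-to-back compositing along one ray.

    peels: list of (depth, surface_alpha), sorted near to far.
    volume_density: absorption per unit depth inside the volume.
    Returns the accumulated opacity in [0, 1].
    """
    alpha = 0.0
    prev_depth = 0.0
    for depth, surface_alpha in peels + [(far_depth, 0.0)]:
        # Ray-cast the volume segment up to this peel
        # (Beer-Lambert absorption).
        seg_alpha = 1.0 - math.exp(-volume_density * (depth - prev_depth))
        alpha += (1.0 - alpha) * seg_alpha
        # Blend the transparent surface at the peel, then restart the
        # ray behind it.
        alpha += (1.0 - alpha) * surface_alpha
        prev_depth = depth
    return alpha


# With no volume density, only the 50%-opaque peel surface contributes.
surface_only = composite_ray([(1.0, 0.5)], volume_density=0.0, far_depth=2.0)
```

Each blended segment collapses into a single opacity value, which is why the result drops naturally into the distributed compositing system.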
FIGURE 21.5: Supernova remnant density field rendering, demonstrating
distributed structured volume rendering, with embedded polygonal surfaces.
Data courtesy of Dr. John Blondin, North Carolina State University.
21.4.2 Image Composition System
EnSight has always utilized a multi-pass rendering system. It supports
threaded rendering over multiple tiles, with tile chunking, antialiasing, and
other functions handled in the various passes. The rendering engine
maps the various passes into textures and framebuffer objects generated with
each pass and in many cases retained from frame to frame. Key retained
fragments include the annotation layer and depth peels with their associated
metadata tags. These image fragments are rapidly composited to form the
final image through a filtering engine that is used to generate dynamic selection
highlighting and image-based, full-scene antialiasing.
Selection highlighting maps the current list of selected objects in a scene
into a state table, and applies it to the pixel object tags via texturing. Topo-
logical edge detection operators generate constrained silhouette edges for the
current selected and targeted objects, which are then blended into the final
image. This last step is implemented as a single fragment program and can
be regenerated from the retained textures and framebuffer objects without
re-rendering the core geometry, enabling dynamic changes in the graphical
entity selection, independent of the data set size and its associated rendering
expense. The annotation plane is one such fragment layer, containing all 2D
annotation elements.
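The tag-driven silhouette pass can be modeled as a small image filter. The fragment program is represented here as a plain loop, and the "selected set applied via a state table" is reduced to a Python set; both are illustrative stand-ins:

```python
# Sketch of the tag-driven highlighting pass: selected object ids are
# looked up per pixel, and a silhouette pixel is emitted wherever a
# selected tag borders a different tag. The GPU fragment program is
# modeled as a plain loop for illustration.

def silhouette_mask(tags, selected):
    """Return a 0/1 mask of silhouette pixels for selected objects."""
    h, w = len(tags), len(tags[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if tags[y][x] not in selected:
                continue
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ny, nx = y + dy, x + dx
                at_border = not (0 <= ny < h and 0 <= nx < w)
                if at_border or tags[ny][nx] != tags[y][x]:
                    mask[y][x] = 1  # edge of a selected object
                    break
    return mask


# Object 7 fills the middle of a 3x4 tag buffer; only its boundary
# pixels light up when it is selected.
tags = [
    [0, 0, 0, 0],
    [0, 7, 7, 0],
    [0, 0, 0, 0],
]
mask = silhouette_mask(tags, selected={7})
```

Because the filter reads only retained tag textures, changing the selection re-runs this pass alone, with no geometry re-render.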