frame-to-frame coherence, but allowing a dynamic, view-dependent partition-
ing of the data. The study found that parallel rendering efficiencies achieved
with small replication factors were similar to those measured with full
replication. VTK and VisIt use the Mesa OpenGL library, which implements
the OpenGL API in software, to enable parallel rendering through sort-last
compositing (see Chap. 17 for more information). Rendering with either VisIt
or ParaView scales well on GPU clusters using hardware rendering and sort-last
compositing, but it is bounded by the compositing time for image display.
When using Mesa, rendering is neither scalable nor interactive, due to the
speed of software rasterization; replacing rasterization with ray tracing,
however, can improve the scalability of software rendering [4].
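At the heart of sort-last compositing is the "over" operator, which blends the
depth-ordered partial images produced by the individual nodes into the final
image. The C++ sketch below is a minimal illustration under assumed
conventions (premultiplied RGBA pixels stored as floats), not the actual VisIt
or ParaView implementation:

#include <array>
#include <cstddef>
#include <vector>

// One premultiplied RGBA pixel; alpha in [0, 1].
using Pixel = std::array<float, 4>;
using Image = std::vector<Pixel>;

// Front-to-back "over": out = front + (1 - front.alpha) * back.
// In sort-last rendering, each node renders only its subset of the data;
// the resulting partial images are exchanged (e.g., by direct-send or
// binary-swap) and combined in depth order with this operation.
Image compositeOver(const Image& front, const Image& back) {
    Image out(front.size());
    for (std::size_t i = 0; i < front.size(); ++i) {
        const float t = 1.0f - front[i][3];  // remaining transparency
        for (int c = 0; c < 4; ++c)
            out[i][c] = front[i][c] + t * back[i][c];
    }
    return out;
}

The cost of exchanging and blending these full-size partial images is what
bounds the compositing stage noted above.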
An alternative to rasterization for high performance visualization is to
render images using ray tracing. Ray tracing refers to tracing rays from the
viewpoint through each pixel and intersecting those rays with the geometry to
be rendered. There have been several parallel ray tracing implementations for
high performance visualization. Parker et al. [31] implement a shared-memory
ray-tracer, RTRT, which performs interactive isosurface generation and ren-
dering on large shared-memory computers, such as the SGI Origin. This proves
to be effective for large data sets, as the data and geometry are accessible by
all processors. Bigler et al. [2] demonstrate the effectiveness of this system
on large-scale scientific data. They extend RTRT with several methods for
visualizing particle data from material point method simulations as spheres.
They also describe two methods for augmenting the visualization: silhouette
edges and advanced illumination, such as ambient occlusion. Section 4.5
details how to accelerate ray tracing for large amounts of geometry on
distributed-memory platforms, such as commodity clusters.
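To make this concrete, the following C++ sketch (all names hypothetical; a
pinhole camera and a single sphere stand in for real scene geometry) casts one
ray through each pixel center and tests it for intersection:

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 and keep the
// nearest positive root t.
bool hitSphere(Vec3 o, Vec3 d, Vec3 center, float radius, float& t) {
    Vec3 oc = o - center;
    float a = dot(d, d);
    float b = 2.0f * dot(oc, d);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return false;
    t = (-b - std::sqrt(disc)) / (2.0f * a);
    return t > 0.0f;
}

int main() {
    const int W = 64, H = 32;
    Vec3 eye = {0.0f, 0.0f, 0.0f};        // viewpoint
    Vec3 center = {0.0f, 0.0f, -3.0f};    // scene geometry: one sphere
    for (int j = 0; j < H; ++j) {
        for (int i = 0; i < W; ++i) {
            // Ray from the viewpoint through the pixel center on an
            // image plane at z = -1.
            Vec3 dir = {(i + 0.5f) / W - 0.5f, (j + 0.5f) / H - 0.5f, -1.0f};
            float t;
            std::putchar(hitSphere(eye, dir, center, 1.0f, t) ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}

A full ray tracer replaces the single intersection test with a traversal of an
acceleration structure over the scene geometry; the per-pixel structure of the
loop is unchanged.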
4.4 Volume Rendering
Direct volume rendering methods generate images of a 3D volumetric data
set, without explicitly extracting geometry from the data [21, 12]. Conceptu-
ally, volume rendering proceeds by casting rays from the viewpoint through
pixel centers as shown in Figure 4.2. These rays are sampled where they inter-
sect the data volume. Volume rendering uses an optical model to map sampled
data values to optical properties, such as color and opacity [24]. During ren-
dering, optical properties are accumulated along each viewing ray to form an
image of the data, as shown in Figure 4.3. Although the data set is interpreted
as a continuous function in space, for practical purposes it is represented as
a discrete 3D scalar field. The samples typically are trilinearly interpolated
data values from the 3D scalar field. On a GPU, the trilinear interpolation is
performed in hardware and, therefore, is quite fast (see Chap. 11). Samples
along a ray can be determined analytically, as in ray marching [35], or can be
generated through proxy geometry on a GPU [14].
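The following C++ sketch illustrates the procedure end to end; the grid
layout, grayscale transfer function, and fixed step size are assumptions
chosen for brevity. It trilinearly interpolates samples along a viewing ray
and accumulates color and opacity front to back:

#include <algorithm>
#include <array>
#include <vector>

// A discrete 3D scalar field sampled on an n x n x n grid over [0, 1]^3.
struct Volume {
    int n;
    std::vector<float> data;
    float at(int i, int j, int k) const { return data[(k * n + j) * n + i]; }

    // Trilinear interpolation of the field at (x, y, z) in [0, 1]^3
    // (performed in hardware on a GPU; spelled out here on the CPU).
    float sample(float x, float y, float z) const {
        float fx = x * (n - 1), fy = y * (n - 1), fz = z * (n - 1);
        int i = std::min(int(fx), n - 2);
        int j = std::min(int(fy), n - 2);
        int k = std::min(int(fz), n - 2);
        float u = fx - i, v = fy - j, w = fz - k;
        float c00 = at(i, j, k) * (1 - u) + at(i + 1, j, k) * u;
        float c10 = at(i, j + 1, k) * (1 - u) + at(i + 1, j + 1, k) * u;
        float c01 = at(i, j, k + 1) * (1 - u) + at(i + 1, j, k + 1) * u;
        float c11 = at(i, j + 1, k + 1) * (1 - u) + at(i + 1, j + 1, k + 1) * u;
        float c0 = c00 * (1 - v) + c10 * v;
        float c1 = c01 * (1 - v) + c11 * v;
        return c0 * (1 - w) + c1 * w;
    }
};

// Optical model: map a scalar value to color and opacity. A grayscale
// ramp is used here purely for illustration.
std::array<float, 4> transfer(float s) { return {s, s, s, 0.1f * s}; }

// March one viewing ray, accumulating premultiplied color front to back:
//   C += (1 - A) * a_i * c_i,   A += (1 - A) * a_i.
std::array<float, 4> marchRay(const Volume& vol,
                              std::array<float, 3> origin,
                              std::array<float, 3> dir,
                              float tMax, float dt) {
    std::array<float, 4> acc = {0, 0, 0, 0};
    for (float t = 0; t < tMax && acc[3] < 0.99f; t += dt) {
        float px = origin[0] + t * dir[0];
        float py = origin[1] + t * dir[1];
        float pz = origin[2] + t * dir[2];
        if (px < 0 || px > 1 || py < 0 || py > 1 || pz < 0 || pz > 1)
            continue;  // only sample where the ray intersects the volume
        std::array<float, 4> rgba = transfer(vol.sample(px, py, pz));
        float w = (1 - acc[3]) * rgba[3];  // weight by remaining transparency
        for (int c = 0; c < 3; ++c) acc[c] += w * rgba[c];
        acc[3] += w;
    }
    return acc;  // accumulated RGBA for this viewing ray
}

The early termination once accumulated opacity nears one is a standard
optimization: samples behind a nearly opaque accumulation cannot change the
pixel.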