2. Modeling, Lighting, and Rendering Techniques for Volumetric Clouds
Lighting and rendering are achieved in separate passes. In each pass, we set
the blending function to (ONE, ONE_MINUS_SRC_ALPHA) to composite the voxels
together. In the lighting pass, we set the background to white and render the
voxels front to back from the viewpoint of the light source. As each voxel is
rendered, the incident light is calculated by multiplying the light color by the
frame buffer color at the voxel’s location prior to rendering it; the color and
transparency of the voxel are then calculated as above, stored, and applied to
the billboard.
This technique is described in more detail by Harris [2002].
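As a concrete sketch, the per-voxel math of the lighting pass can be simulated on the CPU with a single color channel. The struct and function names here are hypothetical, and the voxel's source color is taken as zero, so the (ONE, ONE_MINUS_SRC_ALPHA) blend reduces to pure attenuation:

```cpp
#include <vector>

// Single-channel sketch of the per-voxel lighting math; real code works in RGB.
struct Voxel { float alpha; };  // voxel opacity

// Simulates the lighting pass for voxels already sorted front to back from
// the light's point of view. The "frame buffer" starts at white (1.0); each
// voxel reads it to get its incident light, then attenuates it, which is what
// (ONE, ONE_MINUS_SRC_ALPHA) blending does when the source color is zero.
std::vector<float> lightVoxels(const std::vector<Voxel>& sortedFromLight,
                               float lightColor)
{
    std::vector<float> incident;
    float framebuffer = 1.0f;                          // white background
    for (const Voxel& v : sortedFromLight) {
        incident.push_back(lightColor * framebuffer);  // read back before drawing
        framebuffer *= (1.0f - v.alpha);               // dst' = dst * (1 - src_alpha)
    }
    return incident;
}
```

Each voxel thus receives whatever light survives the voxels in front of it, which is the quantity stored and later applied to the billboard.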
When rendering, we use the color and transparency values computed in the
lighting pass and render the voxels in back-to-front order from the camera.
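The back-to-front ordering amounts to sorting the voxels by distance along the view direction; a minimal sketch, with hypothetical names:

```cpp
#include <algorithm>
#include <array>
#include <vector>

using Vec3 = std::array<float, 3>;

static float dot3(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Sorts voxel centers back to front along the view direction so that
// (ONE, ONE_MINUS_SRC_ALPHA) compositing toward the camera is correct.
// Distance along viewDir stands in for a full camera-space depth here.
void sortBackToFront(std::vector<Vec3>& centers, const Vec3& viewDir) {
    std::sort(centers.begin(), centers.end(),
              [&](const Vec3& a, const Vec3& b) {
                  return dot3(a, viewDir) > dot3(b, viewDir);  // farthest first
              });
}
```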
To prevent breaking the illusion of a single, cohesive cloud, we need to
ensure the individual billboards that compose it aren’t perceptible. Adding some
random jitter to the billboard locations and orientations helps, but the biggest
issue is making sure all the billboards rotate in unison as the view angle
changes. The usual trick of axis-aligned billboards falls apart once the view
angle approaches the axis chosen for alignment. Our approach is to use two
orthogonal axes against which our billboards are aligned. As the view angle
approaches the primary axis (pointing up and down), we blend toward using our
alternate (orthogonal) axis instead.
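The axis-blending idea might be sketched as follows; the `alignmentAxis` helper and the 0.7 blend threshold are assumptions, not values from the text:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

static float dot3(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

static Vec3 normalize3(const Vec3& v) {
    float len = std::sqrt(dot3(v, v));
    return {v[0] / len, v[1] / len, v[2] / len};
}

// Picks the billboard alignment axis for a normalized view direction. As the
// view lines up with the primary (up) axis, |dot| approaches 1 and we blend
// toward the alternate, orthogonal axis. The 0.7 threshold is a tunable guess.
Vec3 alignmentAxis(const Vec3& viewDir)
{
    const Vec3 primary   = {0.0f, 1.0f, 0.0f};  // up axis
    const Vec3 alternate = {1.0f, 0.0f, 0.0f};  // orthogonal fallback
    float a = std::fabs(dot3(viewDir, primary));
    float t = std::clamp((a - 0.7f) / 0.3f, 0.0f, 1.0f);  // remap [0.7,1] -> [0,1]
    return normalize3({primary[0] + t * (alternate[0] - primary[0]),
                       primary[1] + t * (alternate[1] - primary[1]),
                       primary[2] + t * (alternate[2] - primary[2])});
}
```

Because all billboards share the same alignment axis for a given view, they rotate in unison, and the blend avoids any sudden flip as the camera looks straight up or down.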
To ensure good performance, the billboards composing a cloud must be
rendered as a vertex array and not as individual objects. Instancing techniques
and/or geometry shaders may be used to render clouds of billboards from a single
stream of vertices.
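One way to build such a single vertex stream is to pack a per-instance struct into one contiguous buffer before upload; the layout below is purely illustrative:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Per-instance data for one cloud billboard, flattened into a single stream
// for instanced rendering (e.g., via a per-instance vertex attribute).
// The field layout is illustrative; pack whatever your shader expects.
struct BillboardInstance {
    std::array<float, 3> center;  // voxel position
    float size;                   // billboard radius
    std::array<float, 4> color;   // lit color + transparency from the lighting pass
};

// Builds one contiguous instance buffer so an entire cloud can be drawn with
// a single instanced call instead of one draw call per billboard.
std::vector<BillboardInstance> buildInstanceBuffer(
    const std::vector<std::array<float, 3>>& centers,
    const std::vector<std::array<float, 4>>& colors,
    float size)
{
    std::vector<BillboardInstance> buffer;
    buffer.reserve(centers.size());
    for (std::size_t i = 0; i < centers.size(); ++i)
        buffer.push_back({centers[i], size, colors[i]});
    return buffer;
}
```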
While splatting is fast for sparser cloud volumes and works on pretty much
any graphics hardware, it suffers from fill-rate limitations due to high depth
complexity. Our lighting pass also relies on pixel read-back, which generally
stalls the pipeline and requires rendering to an offscreen surface in most
modern graphics APIs. Fortunately, we only need to run the lighting pass when
the lighting conditions change. Simpler lighting calculations based only on each
voxel’s depth within the cloud along the light direction may suffice for many
applications, and they don’t require pixel read-back at all.
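A minimal sketch of such a read-back-free approximation, using Beer's law with an assumed extinction coefficient (the function name and parameters are hypothetical):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<float, 3>;

static float dot3(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Read-back-free approximation: attenuate each voxel by its depth into the
// cloud along the light direction using Beer's law. The extinction
// coefficient is a hypothetical tuning parameter.
std::vector<float> approximateLighting(const std::vector<Vec3>& centers,
                                       const Vec3& lightDir,  // normalized
                                       float extinction)
{
    // Find the lit face of the cloud along the light direction.
    float nearest = INFINITY;
    for (const Vec3& c : centers)
        nearest = std::min(nearest, dot3(c, lightDir));

    std::vector<float> intensity;
    intensity.reserve(centers.size());
    for (const Vec3& c : centers) {
        float depth = dot3(c, lightDir) - nearest;  // depth into the cloud
        intensity.push_back(std::exp(-extinction * depth));
    }
    return intensity;
}
```

This runs entirely on the CPU (or in a shader) with no dependence on the frame buffer, so it can be recomputed every frame if needed.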
Volumetric Slicing
Instead of representing our volumetric clouds with a collection of 2D billboards,
we can use an actual 3D texture of the cloud volume itself. Volume rendering of
3D textures is the subject of entire books [Engel et al. 2006], but we’ll give a
brief overview here.
The general idea is that some form of simple proxy geometry for our volume
is rendered using 3D texture coordinates relative to the volume data. We then get