void getVolumeIntersection(in vec3 pos, in vec3 dir, out float tNear,
                           out float tFar)
{
    // Intersect the ray with each plane of the box.
    vec3 invDir = 1.0 / dir;
    vec3 tBottom = -pos * invDir;
    vec3 tTop = (1.0 - pos) * invDir;

    // Find min and max intersections along each axis.
    vec3 tMin = min(tTop, tBottom);
    vec3 tMax = max(tTop, tBottom);

    // Find largest min and smallest max.
    vec2 t0 = max(tMin.xx, tMin.yz);
    tNear = max(t0.x, t0.y);
    t0 = min(tMax.xx, tMax.yz);
    tFar = min(t0.x, t0.y);

    // Clamp negative intersections to 0.
    tNear = max(0.0, tNear);
    tFar = max(0.0, tFar);
}
Listing 2.3. Intersection of a ray with the unit cube.
With the parameters of our viewing ray in hand, the rest of our fragment
program becomes straightforward. We sample this ray in front-to-back order
from the camera; if a sample contains a cloud voxel, we then shoot another ray
from the light source to determine the sample’s color. We composite these
samples together until an opacity threshold is reached, at which point we
terminate the ray early. Listing 2.4 illustrates this technique.
uniform sampler3D volumeData;
uniform sampler3D noiseData;
uniform vec3 cameraTexCoords;         // The camera in texture coords
uniform vec3 lightTexCoords;          // The light dir in tex coords
uniform float extinction;             // 1 - exp(-optical depth)
uniform float constTerm;              // (albedo * optical depth) / (4 * pi)
uniform vec3 viewSampleDimensions;    // Size of view sample, in texcoords
uniform vec3 lightSampleDimensions;   // Size of light sample, in texcoords
uniform vec3 skyLightColor;           // RGB sky light component
uniform vec3 multipleScatteringTerm;  // RGB higher-order scattering term
uniform vec4 lightColor;              // RGBA direct sunlight color
void main()
{
    vec3 texCoord = gl_TexCoord[0].xyz;
    vec3 view = texCoord - cameraTexCoords;
    vec3 viewDir = normalize(view);

    // Find the intersections of the volume with the viewing ray.
    float tminView, tmaxView;
    getVolumeIntersection(cameraTexCoords, viewDir,
                          tminView, tmaxView);

    vec4 fragColor = vec4(0.0, 0.0, 0.0, 0.0);

    // Compute the sample increments for the main ray and
    // the light ray.
    vec3 viewSampleInc = viewSampleDimensions * viewDir;
    float viewInc = length(viewSampleInc);
    vec3 lightSampleInc = lightSampleDimensions * lightTexCoords;

    // Ambient term to account for skylight and higher orders
    // of scattering.
    vec3 ambientTerm = skyLightColor + multipleScatteringTerm;

    // Sample the ray front to back from the camera.
    for (float t = tminView; t <= tmaxView; t += viewInc)
    {
        vec3 sampleTexCoords = cameraTexCoords + viewDir * t;

        // Look up the texture at this sample, with fractal noise
        // applied for detail.
        float texel = getCloudDensity(sampleTexCoords);
        if (texel == 0.0) continue;
        // If we encountered a cloud voxel, compute its lighting.
        // It's faster to just do 5 samples, even if it overshoots,
        // than to do dynamic branching and intersection testing.
        vec4 accumulatedColor = lightColor;
        vec3 samplePos = sampleTexCoords + lightSampleInc * 5.0;
        vec3 scattering = lightColor.xyz * constTerm;

        for (int i = 0; i < 5; i++)
        {
            float lightSample = texture3D(volumeData, samplePos).x;
            if (lightSample != 0.0)
            {
                // Multiple forward scattering:
                vec4 srcColor;
                srcColor.xyz = accumulatedColor.xyz * scattering
                               * phaseFunction(1.0);
                srcColor.w = extinction;
                srcColor *= lightSample;

                // Composite the result:
                accumulatedColor = srcColor + (1.0 - srcColor.w)
                                   * accumulatedColor;
            }
            samplePos -= lightSampleInc;
        }
        vec4 fragSample;

        // Apply our phase function and ambient term.
        float cosa = dot(viewDir, lightTexCoords);
        fragSample.xyz = accumulatedColor.xyz * phaseFunction(cosa)
                         + ambientTerm;
        fragSample.w = extinction;

        // Apply texture and noise.
        fragSample *= texel;
// "Under operator" for compositing:
fragColor = fragColor + (1.0 - fragColor.w) * fragSample;
// Early ray termination!
if (fragColor.w > 0.95) break;
}
// Apply tone-mapping and gamma correction to final result.
toneMap(fragColor);
gl_FragColor = fragColor * gl_Color;
}
Listing 2.4. GPU ray casting of a cloud.
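Listing 2.4 relies on two helpers, phaseFunction() and toneMap(), that are not defined in this section. As a rough sketch only: the Henyey-Greenstein phase function is a common choice for the strongly forward-scattering water droplets in clouds, and a simple Reinhard-style operator suffices for tone mapping. The asymmetry uniform g below is an assumption, not part of the original listings.

uniform float g;   // Assumed asymmetry factor; around 0.85 for clouds

// Henyey-Greenstein phase function. cosa is the cosine of the angle
// between the viewing direction and the light direction.
float phaseFunction(float cosa)
{
    const float PI = 3.14159265;
    float g2 = g * g;
    float d = 1.0 + g2 - 2.0 * g * cosa;
    return (1.0 - g2) / (4.0 * PI * pow(d, 1.5));
}

// A minimal Reinhard tone map with gamma correction; the original
// toneMap() may differ.
void toneMap(inout vec4 color)
{
    color.xyz = color.xyz / (color.xyz + 1.0);
    color.xyz = pow(color.xyz, vec3(1.0 / 2.2));
}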
Note that the texture lookup for each sample isn't just a simple texture3D call; it calls out to a getCloudDensity() function instead. If you rely on the 3D volume data alone, your clouds will look like nicely shaded blobs. The getCloudDensity() function needs to add in procedural noise for realistic results: we upload a 32³-texel RGB texture of smoothed random noise and apply it as a displacement to the texture coordinates at a couple of octaves to produce fractal effects. Perlin noise [Perlin 1985] would also work well for this purpose. An example implementation is shown in Listing 2.5; the noiseOffset uniform vector is used to animate the noise over time, creating turbulent cloud animation effects.
float getCloudDensity(in vec3 texCoord)
{
    vec3 r = viewSampleDimensions.xyz * 32.0;
    vec3 perturb = vec3(0.0, 0.0, 0.0);
    vec3 uvw = ((texCoord + noiseOffset) / viewSampleDimensions) / 256.0;
    perturb += 1.0 * texture3D(noiseData, 2.0 * uvw).xyz - 0.5;
    perturb += 0.5 * texture3D(noiseData, 4.0 * uvw).xyz - 0.25;
    return texture3D(volumeData, texCoord + perturb * r).x;
}
Listing 2.5. Adding fractal noise to your clouds.
Unfortunately, adding high-frequency noise to the cloud impacts performance. We restrict ourselves to two octaves of noise because filtered 3D texture lookups are expensive, and high-frequency noise requires us to increase our sampling rate to at least double the noise frequency to avoid artifacts. This may be mitigated by fading out the higher octaves of noise with distance from the camera, and by sampling the volume adaptively so that samples close to the camera are taken more frequently. The expense of adding procedural detail to ray-cast clouds is the primary drawback of this technique compared to splatting, where the noise is simply part of the 2D billboards used to represent each voxel.
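As a minimal sketch of the distance fade (not from the original text), and assuming the cameraTexCoords uniform from Listing 2.4 is visible to this function, Listing 2.5 could be modified so the high-frequency octave vanishes beyond a tunable texture-space distance:

float getCloudDensity(in vec3 texCoord)
{
    vec3 r = viewSampleDimensions.xyz * 32.0;
    vec3 uvw = ((texCoord + noiseOffset) / viewSampleDimensions) / 256.0;

    // Fade factor for the second octave: 1.0 at the camera, falling
    // to 0.0 at a texture-space distance of 0.5 (an assumed constant).
    float dist = length(texCoord - cameraTexCoords);
    float fade = clamp(1.0 - dist / 0.5, 0.0, 1.0);

    vec3 perturb = 1.0 * texture3D(noiseData, 2.0 * uvw).xyz - 0.5;
    perturb += fade * (0.5 * texture3D(noiseData, 4.0 * uvw).xyz - 0.25);
    return texture3D(volumeData, texCoord + perturb * r).x;
}

In practice this pairs well with a view-ray step size that grows with distance from the camera.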
Intersections between the scene's geometry and the cloud volume need to be handled explicitly with GPU ray casting; you cannot rely on the depth buffer, because the only geometry you are rendering for the clouds is the volume's bounding box. While this problem is best avoided whenever possible, it may be handled by rendering the depth buffer to a texture and reading from this texture as each sample along the viewing ray is computed. If the projected sample is behind the depth buffer's value, it is discarded.
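A minimal sketch of this test, assuming a sceneDepth texture holding the scene's depth as rendered from the same camera and a hypothetical volumeToClip matrix that transforms the volume's texture space into clip space:

uniform sampler2D sceneDepth;  // Scene depth rendered to a texture
uniform mat4 volumeToClip;     // Volume texture space to clip space

// Returns true if a sample lies behind the scene's geometry.
bool occludedByScene(vec3 sampleTexCoords)
{
    vec4 clip = volumeToClip * vec4(sampleTexCoords, 1.0);
    vec3 ndc = clip.xyz / clip.w;
    vec2 screenUV = ndc.xy * 0.5 + 0.5;
    float sceneZ = texture2D(sceneDepth, screenUV).x;
    return (ndc.z * 0.5 + 0.5) > sceneZ;
}

Because depth increases monotonically along the viewing ray, the loop in Listing 2.4 can break out entirely at the first occluded sample rather than merely skipping it.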
Viewpoints inside the volume also need special attention. The intersection code in Listing 2.3 handles this case properly (it clamps negative intersection distances to zero), but you will need to detect when the camera is inside the volume and render the back faces of the bounding box instead of its front faces, since the front faces would otherwise be clipped away.
Three-dimensional textures consume a lot of memory on the graphics card. A
256 × 256 × 32 array of voxels, each represented by a single byte, consumes two
megabytes of memory. While VRAM consumption was a show-stopper on early
3D cards, it’s easily handled on modern hardware. However, addressing that
much memory at once can still be slow. Swizzling the volume texture by
breaking it up into adjacent, smaller bricks can help with cache locality as the
volume is sampled, making texture lookups faster.
Fog and blending of the cloud volume are omitted in Listing 2.4 but should be handled for more realistic results.
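The original text does not specify a fog model, but as one hedged possibility, exponential fog could be applied to the composited result before tone mapping, given the world-space distance to the volume's entry point and hypothetical fogColor and fogDensity uniforms:

uniform vec3 fogColor;     // Assumed: fog color, linear space
uniform float fogDensity;  // Assumed: in inverse world units

// Blend the premultiplied-alpha cloud color toward the fog color
// with distance; scaling the fog color by alpha keeps the result
// premultiplied.
vec4 applyFog(vec4 cloudColor, float worldDist)
{
    float f = exp(-fogDensity * worldDist);
    cloudColor.xyz = mix(fogColor * cloudColor.w, cloudColor.xyz, f);
    return cloudColor;
}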
Hybrid Approaches
Although GPU ray casting of clouds can be performant on modern graphics hardware, the per-fragment lighting calculations are still expensive. A large number of computations and texture lookups may be avoided by performing the lighting on the CPU instead, and storing the results in the colors of the voxels themselves to be used at rendering time. Recomputing the lighting in this manner causes a pause in frame rate whenever lighting conditions change, but it makes rendering the cloud extremely fast under static lighting conditions.