Chapter 8. Gleaming the Cube

This chapter presents an assortment of graphics topics, including skybox rendering, environment mapping, fog, and color blending. The first couple of effects share a common theme: they are implemented with texture cubes. And all of the material takes you further along the path of graphics enlightenment (pun intended).

Texture Cubes

A texture cube (also known as a cube map) is a set of six 2D textures, each corresponding to a face of an axis-aligned cube, typically centered on the world-space origin. These textures can be stored individually or can all reside in the same file using a format such as DDS (discussed in Chapter 3, “Tools of the Trade”). Figure 8.1 shows a texture cube stored in a single file, with each of its faces labeled accordingly.


Figure 8.1 A texture cube. Each face is labeled with the corresponding axis. (Texture by Emil Persson.)

Creating Texture Cubes

You can use a variety of tools to create texture cubes. NVIDIA, for example, includes the nvDXT command-line tool as part of its DDS Utilities package (the book’s companion website provides a link). This tool accepts a list of individual textures to compile into a texture cube and store in DDS format.

Microsoft includes the DirectX Texture Tool as part of the stand-alone DirectX SDK installation (although not as part of the Windows SDK). Figure 8.2 shows the DirectX Texture Tool with a texture cube loaded and the Cube Map Face menu open. Use this menu to view a specific face of the cube map.


Figure 8.2 The DirectX Texture Tool. (Texture by Emil Persson.)

To create a new cube map, choose File, New Texture and specify Cubemap Texture for the texture type (see Figure 8.3). Then specify the resolution of your texture, the number of mip levels, and the texture format. With a new empty texture cube created, you populate the faces by selecting a face through View, Cube Map Face and then choosing File, Open Onto This Cubemap Face to assign an image.


Figure 8.3 The New Texture dialog box of the DirectX Texture Tool.


Warning

I’ve seen the DirectX Texture Tool add a 1-pixel border color to cube map faces. If your resulting output has visible seams, you might want to inspect the texture cube for such defects. To do so, open the DDS file in an application such as Adobe Photoshop and zoom into the texture.


Aside from constructing a texture cube out of a set of textures, the question of how to create the individual textures themselves also arises. You have a variety of tools to assist with this, including Terragen from Planetside Software and Vue from e-on software. Websites also offer free or for-fee textures. The book’s companion website has links to some of these resources.

Sampling Texture Cubes

You sample a texture cube with a three-dimensional direction vector rooted at the center of the cube. The chosen texel is where the vector intersects a face of the cube. Figure 8.4 illustrates this concept. The texture filtering settings, which Chapter 5, “Texture Mapping,” discusses, still apply for coordinates that don’t map directly to a texel.


Figure 8.4 An illustration of texture cube sampling. (Texture by Emil Persson.)
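To make the lookup concrete, here is a minimal HLSL sketch of sampling a texture cube with a direction vector (the texture, sampler, and function names here are placeholders, not part of any listing):

TextureCube EnvironmentTexture;

SamplerState TrilinearSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
};

float4 sample_cube_direction(float3 direction)
{
    // The direction does not need to be normalized: the component with the
    // largest magnitude selects the face, and the remaining two components
    // (divided by that magnitude) select the texel within the face.
    return EnvironmentTexture.Sample(TrilinearSampler, direction);
}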

Skyboxes

A common application of a texture cube is a skybox, a box or sphere mapped with a texture cube that surrounds the camera and provides the illusion of an environment. Listing 8.1 presents the code for a skybox effect.

Listing 8.1 Skybox.fx


/************* Resources *************/

cbuffer CBufferPerObject
{
    float4x4 WorldViewProjection : WORLDVIEWPROJECTION < string UIWidget="None"; >;
}

TextureCube SkyboxTexture <
    string UIName =  "Skybox Texture";
    string ResourceType = "3D";
>;

SamplerState TrilinearSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
};

RasterizerState DisableCulling
{
    CullMode = NONE;
};

/************* Data Structures *************/

struct VS_INPUT
{
    float4 ObjectPosition : POSITION;
};

struct VS_OUTPUT
{
    float4 Position : SV_Position;
    float3 TextureCoordinate : TEXCOORD;
};

/************* Vertex Shader *************/

VS_OUTPUT vertex_shader(VS_INPUT IN)
{
    VS_OUTPUT OUT = (VS_OUTPUT)0;

    OUT.Position = mul(IN.ObjectPosition, WorldViewProjection);
    OUT.TextureCoordinate = IN.ObjectPosition.xyz;

    return OUT;
}

/************* Pixel Shader *************/

float4 pixel_shader(VS_OUTPUT IN) : SV_Target
{
    return SkyboxTexture.Sample(TrilinearSampler, IN.TextureCoordinate);
}

/************* Techniques *************/

technique10 main10
{
    pass p0
    {
        SetVertexShader(CompileShader(vs_4_0, vertex_shader()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, pixel_shader()));

        SetRasterizerState(DisableCulling);
    }
}


Skybox Preamble

Notice that this effect is quite simple compared to the effects of the last chapter. The CBufferPerFrame block has been completely removed, and the CBufferPerObject block contains only a WorldViewProjection matrix; gone are the members used to produce a specular highlight.

The single texture input is now named SkyboxTexture and is of type TextureCube instead of Texture2D. Additionally, the lone SamplerState object is now named TrilinearSampler and omits the explicit settings for addressing mode. The combination of linear interpolation for minification, magnification, and mip-level sampling is dubbed trilinear sampling (the name derives from bilinear interpolation between texels within a mip level, plus an additional linear interpolation between mip levels). Addressing modes aren’t necessary for cube maps because the sampling direction always resolves to a texel on one of the six faces; there are no out-of-range coordinates to address.

The VS_INPUT struct contains only the vertex position. The surface normal isn’t needed without lighting calculations, and texture coordinates are determined within the vertex shader. Similarly, the VS_OUTPUT struct contains just the position in homogeneous clip space, along with a member for passing the cube map texture coordinates.

Skybox Vertex and Pixel Shader

As usual, the vertex shader transforms the vertex position into homogeneous clip space. Then it passes the object-space position as the texture coordinates. At first glance, this might seem odd, considering that a cube map is sampled through a direction, not a position. But recall from our discussion of vectors in Chapter 2, “A 3D/Math Primer,” that we can consider a position as a direction vector rooted at the origin. Because the skybox geometry is centered on the origin in object space, each vertex position is exactly the direction from the cube’s center to that vertex.

The pixel shader just samples the texture cube and returns the result as the final output color.

Skybox Output

Figure 8.5 shows the output of the skybox effect applied to a sphere within NVIDIA FX Composer. In this image, the camera is inside the sphere and a reference grid is visible. Because a skybox surrounds the camera, you must disable backface culling, as Listing 8.1 demonstrates. You also need to scale your sphere to a reasonable size.


Figure 8.5 Skybox.fx looking outward from inside the associated sphere. (Texture by Emil Persson.)


Note

If you were using this effect in your own CPU-side application, you would guarantee that the skybox was always positioned in tandem with the camera. That way, the viewer could never reach the edge of the world and shatter the illusion of a surrounding environment.
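As a sketch of that idea, a camera-centered skybox vertex shader might look like the following, where ViewProjection and CameraPosition are assumed to be application-supplied shader constants (they are not part of Listing 8.1):

cbuffer CBufferPerFrame
{
    float4x4 ViewProjection;    // assumed: camera view-projection matrix
    float3 CameraPosition;      // assumed: camera position, in world space
}

VS_OUTPUT vertex_shader(VS_INPUT IN)
{
    VS_OUTPUT OUT = (VS_OUTPUT)0;

    // Re-center the box on the camera so the viewer never reaches its edge.
    float4 worldPosition = float4(IN.ObjectPosition.xyz + CameraPosition, 1.0f);
    OUT.Position = mul(worldPosition, ViewProjection);

    // The object-space position still serves as the sampling direction.
    OUT.TextureCoordinate = IN.ObjectPosition.xyz;

    return OUT;
}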


Environment Mapping

Another common application of texture cubes is environment mapping. Also known as reflection mapping, environment mapping approximates reflective surfaces, such as the chrome bumper on a car.

The process is slightly more complicated than for a skybox because you compute a reflection vector for each vertex: the incident view ray bounces off the surface, and the reflected direction is used to sample the environment. The reflection vector depends on the view direction (the incident vector) and the surface normal. The specific equation follows:

R = I – 2 * N * (I · N)

Here, I is the incident vector and N is the surface normal.
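For reference, here is a small utility function that computes the reflection vector by hand; it should match the reflect() intrinsic that Listing 8.2 uses (this helper is illustrative and isn’t part of the listing):

// Equivalent to the HLSL intrinsic reflect(incident, normal)
float3 compute_reflection(float3 incident, float3 normal)
{
    // R = I - 2 * N * (I . N), where N is assumed to be normalized
    return incident - 2.0f * normal * dot(incident, normal);
}

Listing 8.2 presents the code for a complete environment mapping effect.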

Listing 8.2 EnvironmentMapping.fx


#include "include\Common.fxh"

/************* Resources *************/

cbuffer CBufferPerFrame
{
    float4 AmbientColor : AMBIENT <
        string UIName =  "Ambient Light";
        string UIWidget = "Color";
    > = {1.0f, 1.0f, 1.0f, 0.0f};

    float4 EnvColor : COLOR <
        string UIName =  "Environment Color";
        string UIWidget = "Color";
    > = {1.0f, 1.0f, 1.0f, 1.0f };

    float3 CameraPosition : CAMERAPOSITION < string UIWidget="None"; >;
}

cbuffer CBufferPerObject
{
    float4x4 WorldViewProjection : WORLDVIEWPROJECTION < string UIWidget="None"; >;
    float4x4 World : WORLD < string UIWidget="None"; >;

    float ReflectionAmount <
        string UIName =  "Reflection Amount";
        string UIWidget = "slider";
        float UIMin = 0.0;
        float UIMax = 1.0;
        float UIStep = 0.05;
    > = {0.5f};
}

Texture2D ColorTexture <
    string ResourceName = "default_color.dds";
    string UIName =  "Color Texture";
    string ResourceType = "2D";
>;

TextureCube EnvironmentMap <
    string UIName =  "Environment Map";
    string ResourceType = "3D";
>;

SamplerState TrilinearSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
};

RasterizerState DisableCulling
{
    CullMode = NONE;
};

/************* Data Structures *************/

struct VS_INPUT
{
    float4 ObjectPosition : POSITION;
    float3 Normal : NORMAL;
    float2 TextureCoordinate : TEXCOORD;
};

struct VS_OUTPUT
{
    float4 Position : SV_Position;
    float2 TextureCoordinate : TEXCOORD0;
    float3 ReflectionVector : TEXCOORD1;
};

/************* Vertex Shader *************/

VS_OUTPUT vertex_shader(VS_INPUT IN)
{
    VS_OUTPUT OUT = (VS_OUTPUT)0;

    OUT.Position = mul(IN.ObjectPosition, WorldViewProjection);
    OUT.TextureCoordinate = get_corrected_texture_coordinate(IN.TextureCoordinate);

    float3 worldPosition = mul(IN.ObjectPosition, World).xyz;
    float3 incident = normalize(worldPosition - CameraPosition);
    float3 normal = normalize(mul(float4(IN.Normal, 0), World).xyz);

    // Reflection Vector for cube map: R = I - 2 * N * (I.N)
    OUT.ReflectionVector = reflect(incident, normal);

    return OUT;
}

/************* Pixel Shader *************/

float4 pixel_shader(VS_OUTPUT IN) : SV_Target
{
    float4 OUT = (float4)0;

    float4 color = ColorTexture.Sample(TrilinearSampler, IN.TextureCoordinate);
    float3 ambient = get_vector_color_contribution(AmbientColor, color.rgb);
    float3 environment = EnvironmentMap.Sample(TrilinearSampler, IN.ReflectionVector).rgb;
    float3 reflection = get_vector_color_contribution(EnvColor, environment);

    OUT.rgb = lerp(ambient, reflection, ReflectionAmount);

    return OUT;
}

/************* Techniques *************/

technique10 main10
{
    pass p0
    {
        SetVertexShader(CompileShader(vs_4_0, vertex_shader()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, pixel_shader()));

        SetRasterizerState(DisableCulling);
    }
}


Environment Mapping Preamble

In this effect, the CBufferPerFrame block contains members for ambient color and an environment color. This allows for a global ambient value and an independent color/intensity value that’s specific to environment-mapped objects. It’s just one more dial to give to an artist. Members for directional lighting, specular, point lights, or spotlights have been removed to keep the focus on the specifics of environment mapping. However, all the earlier lighting models could be used in conjunction with environment mapping. CBufferPerFrame also contains a CameraPosition object for computing the incident (view) vector.

New to the CBufferPerObject block is a ReflectionAmount member. This value is used for linear interpolation between the ambient term and the computed reflection color.

Also notice the two textures supplied to the environment mapping effect. The ColorTexture is the usual 2D texture for sampling the color of the surface. The EnvironmentMap shader constant is for the texture cube that supplies the reflected environment.

Finally, observe the ReflectionVector member of the VS_OUTPUT struct. Computed in the vertex shader, this vector is used to sample the texture cube. The neighboring 2D TextureCoordinate member is for sampling the color texture.

Environment Mapping Vertex Shader

The vertex shader performs the usual steps of transforming the vertex into homogeneous space and passing along the color map’s texture coordinates. Then it transforms the vertex into world space to calculate the incident vector. After transforming the surface normal into world space, the reflection vector is calculated using the HLSL intrinsic reflect(). This function performs the same math presented earlier for the reflection vector, but whenever an intrinsic is available, it’s a good idea to use it.

Environment Mapping Pixel Shader

The pixel shader samples the color texture and computes the ambient term. Then it samples the environment map and modulates that value by the color and intensity of the EnvColor uniform. The final output color is produced by interpolating the ambient and reflection terms using the HLSL lerp() intrinsic. Linear interpolation uses the formula:

Value = x * (1 – s) + y * s

where s is a value between 0.0 and 1.0 specifying what fraction of the result comes from y; the remainder comes from x. Thus, for the environment mapping effect, the final color is computed as:

ColorFinal = ambient * (1 – ReflectionAmount) + reflection * ReflectionAmount

Your ReflectionAmount member is therefore just a slider that defines the percentage contribution of the reflection versus the nonreflection colors. If you supply the value 1.0 for ReflectionAmount, the color of your object will be mirrorlike, reflecting all of the associated texture cube. Conversely, if you specify 0.0, your object will reflect none of the environment.
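Expressed in HLSL, the pixel shader’s final blend could equivalently be written without the intrinsic (a sketch for illustration; the lerp() call in Listing 8.2 is preferable):

// Equivalent to lerp(ambient, reflection, ReflectionAmount)
float3 blended = ambient * (1.0f - ReflectionAmount) + reflection * ReflectionAmount;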

Environment Mapping Output

Figure 8.6 shows the output of the environment mapping effect applied to a sphere with pure-white, full-intensity ambient and environment light values. A checkerboard texture is used for the color map, and the environment map is that of Figure 8.1 (without labels). In the left-side image, the reflection amount is assigned the value 1.0; for the right-side image, the reflection amount is 0.5.


Figure 8.6 EnvironmentMapping.fx applied to a sphere with pure-white, full-intensity ambient and environment light values, and reflection amounts of 1.0 (left) and 0.5 (right). (Skybox texture by Emil Persson.)

Dynamic Environment Mapping

So far, we’ve been discussing static environment maps, texture cubes that don’t change (or change infrequently). Static environment maps provide interesting detail and good performance, but if viewers look closely, they may notice that the environment map doesn’t exactly match the actual scene. Plenty of games use cube maps that are completely unrelated to the scene but have reasonably similar colors. If the camera can’t get close enough to a reflective surface, or if the shape of the surface distorts the image enough, no one’s the wiser. However, if the camera can zoom in on a reflective surface, the viewer might see incongruities between the actual environment and the reflected map. For example, nearby objects aren’t typically included in a “global” environment map.

One solution is to vary the environment map as the viewer moves from one area to the next, given that the areas are thematically different. Within the same area, you also could vary the environment map based on time of day or weather conditions. But the ultimate solution is to generate dynamic environment maps from within the scene itself. To do so, you position your camera at the location of an object and configure it with a 90-degree field of view (both horizontal and vertical). Then you render the scene six times per frame—looking down each axis—and build a texture cube from the resulting images. In this way, you capture every object in the scene within your reflected environments. However, at least at the time of this writing, this is a prohibitively expensive process to perform at interactive frame rates. Consider creating dynamic texture cubes only for key objects in your scene. Furthermore, consider rendering such cube maps at a lower resolution and perhaps at a fraction of the total frame rate.
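On Direct3D 10-class hardware, you can collapse the six passes into one by binding all six cube faces as a render-target array and using a geometry shader to route each triangle to every face. The following is a minimal, hypothetical sketch of such a geometry shader; CubeFaceViewProjections is an assumed array of six application-supplied view-projection matrices, one per face:

cbuffer CBufferPerFrame
{
    // assumed: one view-projection matrix per cube face, supplied by the application
    float4x4 CubeFaceViewProjections[6];
}

struct GS_INPUT
{
    float4 WorldPosition : POSITION;
};

struct GS_OUTPUT
{
    float4 Position : SV_Position;
    uint FaceIndex : SV_RenderTargetArrayIndex;    // selects the cube face
};

[maxvertexcount(18)]
void geometry_shader(triangle GS_INPUT IN[3], inout TriangleStream<GS_OUTPUT> triangleStream)
{
    // Emit the incoming triangle once per cube face.
    for (uint face = 0; face < 6; face++)
    {
        GS_OUTPUT OUT;
        OUT.FaceIndex = face;

        for (uint v = 0; v < 3; v++)
        {
            OUT.Position = mul(IN[v].WorldPosition, CubeFaceViewProjections[face]);
            triangleStream.Append(OUT);
        }

        triangleStream.RestartStrip();
    }
}

Even in a single pass, this amplifies the scene’s triangle load by six, so the advice above still applies: reserve dynamic cube maps for key objects, and consider lower resolutions and update rates.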

Fog

The next several effects are unrelated to texture cubes but help round out your exposure to various graphics techniques. The first is fog: fading an object toward a background color as its distance from the camera increases.

Fog can be modeled with a color, a distance from the camera at which the fog begins, and how far the fog extends before the object’s color is entirely dominated by the fog color. The process of applying the final color to an object begins by determining how much of the color should come from the fog and how much should come from regular lighting. As with environment mapping, you can use linear interpolation for this process, and you can compute the interpolation value with the following equation:

FogAmount = saturate((|V| – FogStart) / FogRange)

where V is the view vector between the camera and the surface, and saturate() clamps the result to the range [0.0, 1.0]. Then you interpolate the final pixel color as follows:

ColorFinal = lerp(litColor, fogColor, FogAmount)

You can use any lighting model to calculate litColor and perform the fog lerp as the last step in your pixel shader. Listing 8.3 presents an abbreviated fog effect that highlights the fog-specific code.

Listing 8.3 Abbreviated Fog Effect


cbuffer CBufferPerFrame
{
    /* ... */

    float3 FogColor <
        string UIName =  "Fog Color";
        string UIWidget = "Color";
    > = {0.5f, 0.5f, 0.5f};

    float FogStart = { 20.0f };
    float FogRange = { 40.0f };

    float3 CameraPosition : CAMERAPOSITION < string UIWidget="None"; >;
}

/************* Data Structures *************/

struct VS_OUTPUT
{
    float4 Position : SV_Position;
    float3 Normal : NORMAL;
    float2 TextureCoordinate : TEXCOORD0;
    float3 LightDirection : TEXCOORD1;
    float3 ViewDirection : TEXCOORD2;
    float FogAmount: TEXCOORD3;
};

/************* Utility Functions *************/

float get_fog_amount(float3 viewDirection, float fogStart, float fogRange)
{
    return saturate((length(viewDirection) - fogStart) / fogRange);
}

/************* Vertex Shader *************/

VS_OUTPUT vertex_shader(VS_INPUT IN, uniform bool fogEnabled)
{
    VS_OUTPUT OUT = (VS_OUTPUT)0;

    OUT.Position = mul(IN.ObjectPosition, WorldViewProjection);
    OUT.TextureCoordinate = get_corrected_texture_coordinate(IN.TextureCoordinate);
    OUT.Normal = normalize(mul(float4(IN.Normal, 0), World).xyz);
    OUT.LightDirection = normalize(-LightDirection);

    float3 worldPosition = mul(IN.ObjectPosition, World).xyz;
    float3 viewDirection = CameraPosition - worldPosition;
    OUT.ViewDirection = normalize(viewDirection);

    if (fogEnabled)
    {
        OUT.FogAmount = get_fog_amount(viewDirection, FogStart, FogRange);
    }

    return OUT;
}

/************* Pixel Shader *************/

float4 pixel_shader(VS_OUTPUT IN, uniform bool fogEnabled) : SV_Target
{
    float4 OUT = (float4)0;

    float3 normal = normalize(IN.Normal);
    float3 viewDirection = normalize(IN.ViewDirection);
    float4 color = ColorTexture.Sample(ColorSampler, IN.TextureCoordinate);
    float3 ambient = get_vector_color_contribution(AmbientColor, color.rgb);

    LIGHT_CONTRIBUTION_DATA lightContributionData;
    lightContributionData.Color = color;
    lightContributionData.Normal = normal;
    lightContributionData.ViewDirection = viewDirection;
    lightContributionData.LightDirection = float4(IN.LightDirection, 1);
    lightContributionData.SpecularColor = SpecularColor;
    lightContributionData.SpecularPower = SpecularPower;
    lightContributionData.LightColor = LightColor;
    float3 light_contribution = get_light_contribution(lightContributionData);

    OUT.rgb = ambient + light_contribution;
    OUT.a = 1.0f;

    if (fogEnabled)
    {
        OUT.rgb = lerp(OUT.rgb, FogColor, IN.FogAmount);
    }

    return OUT;
}

/************* Techniques *************/

technique10 fogEnabled
{
    pass p0
    {
        SetVertexShader(CompileShader(vs_4_0, vertex_shader(true)));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, pixel_shader(true)));

        SetRasterizerState(DisableCulling);
    }
}

technique10 fogDisabled
{
    pass p0
    {
        SetVertexShader(CompileShader(vs_4_0, vertex_shader(false)));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, pixel_shader(false)));

        SetRasterizerState(DisableCulling);
    }
}


Fog Preamble

The CBufferPerFrame block contains new members for FogColor, FogStart, and FogRange. The VS_OUTPUT struct contains a float for the FogAmount interpolation value. The fog start and range values should be chosen to match the world scale of your CPU-side application.

Fog Vertex and Pixel Shader

The vertex and pixel shaders both include uniform parameters for fogEnabled, which is set by the fogEnabled and fogDisabled techniques. If enabled, the vertex shader calculates the FogAmount with a call to the get_fog_amount() utility function. This function should reside in your Common.fxh file for use across your library of effects.

The pixel shader lerps the object’s lit color and the fog color to produce the final value for the pixel.

Fog Output

Figure 8.7 shows the fog effect applied to a sphere with a fog start of 5.0, a fog range of 10.0, and a gray fog color. The sphere is lit with a single directional light and a half-intensity specular highlight. The top image depicts the sphere closer to the camera than the fog start value; therefore, the fog color has no impact. The middle and bottom images show the object progressively farther from the camera, where the fog color begins to take over.


Figure 8.7 Fog.fx applied to a sphere with a texture of Earth, with a fog start of 5.0, a fog range of 10.0, and a gray fog color. The three images show the object at progressively farther distances from the camera (top to bottom). (Original texture from Reto Stöckli, NASA Earth Observatory. Additional texturing by Nick Zuccarello, Florida Interactive Entertainment Academy.)

Color Blending

With the environment mapping and fog effects, you’ve been blending colors to produce a final output. And you’ve performed this blending within the same pixel shader, so the render target never sees the two independent colors; it sees just what’s been blended. But a different form of color blending applies when the frame buffer already has a color for a particular pixel, and that pixel gets drawn a second time with the intent of mixing the two colors together. You specify how this happens through DirectX blend states.

In HLSL, BlendState objects are described much like the RasterizerState objects you’ve been using throughout your effects, just with different members. But before you delve into the specifics of creating and using a blend state object, we need to discuss some vocabulary. In color blending, the new color being written is called the source color. The destination color is the color that already exists in the render target. The source and destination colors are combined through the following formula:

(source * sourceBlendFactor) blendOperation (destination * destBlendFactor)

Each blend factor is one of the values in Table 8.1, and the blend operation is one of the options in Table 8.2.


Table 8.1 Blend Factor Options


Table 8.2 Blend Operation Options


Note

Additional blend factors support dual-source color blending, but that topic is outside the scope of this section. For a complete list and description of dual-source color blending, visit the DirectX documentation website.


Using the blend factor and blend operation enumerations, Table 8.3 provides a list of common combinations.


Table 8.3 Common Color Blending Settings

As you can see from Table 8.3, additive blending simply yields an addition of the source and destination colors, and multiplicative blending just multiplies them. A 2X multiplicative blending would use DEST_COLOR for the source blend factor instead of zeroing it out.
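As a sketch, here is how the additive and multiplicative combinations might look as HLSL blend state objects, following the same member conventions as the EnableAlphaBlending state in Listing 8.4 (the state names here are placeholders):

BlendState EnableAdditiveBlending
{
    BlendEnable[0] = True;
    SrcBlend = ONE;        // source * 1
    DestBlend = ONE;       // + destination * 1
};

BlendState EnableMultiplicativeBlending
{
    BlendEnable[0] = True;
    SrcBlend = ZERO;       // the source contributes only through the destination factor
    DestBlend = SRC_COLOR; // destination * source
};

The blend operation defaults to addition, so with multiplicative blending the result reduces to destination * source.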

Alpha blending creates a transparency effect in which the two colors are mixed based on the source alpha channel. As with the lerps we’ve discussed for environment mapping and fog, you can consider the source alpha channel as a percentage slider, with the contribution of each color determined by this value. For example, if the source alpha value is 1.0, then 100 percent of the final color comes from the source; the destination has no impact. If the source alpha value is 0.7, the result is made up of 70 percent of the source color and 30 percent of the destination.

Listing 8.4 presents the code for an effect with alpha blending enabled. But to make the effect a bit more interesting, it uses a separate texture for the alpha channel, a transparency map. This allows for some interesting results by animating or swapping out the alpha texture at runtime. Instead of abbreviating the listing, all code is presented for this effect. It incorporates ambient lighting, a single point light, specular highlighting, and fog.

Listing 8.4 TransparencyMapping.fx


#include "include\Common.fxh"

/************* Resources *************/

cbuffer CBufferPerFrame
{
    float4 AmbientColor : AMBIENT <
        string UIName =  "Ambient Light";
        string UIWidget = "Color";
    > = {1.0f, 1.0f, 1.0f, 0.0f};

    float4 LightColor : COLOR <
        string Object = "LightColor0";
        string UIName =  "Light Color";
        string UIWidget = "Color";
    > = {1.0f, 1.0f, 1.0f, 1.0f};

    float3 LightPosition : POSITION <
        string Object = "PointLight0";
        string UIName =  "Light Position";
        string Space = "World";
    > = {0.0f, 0.0f, 0.0f};

    float LightRadius <
        string UIName =  "Light Radius";
        string UIWidget = "slider";
        float UIMin = 0.0;
        float UIMax = 100.0;
        float UIStep = 1.0;
    > = {10.0f};

    float3 FogColor <
        string UIName =  "Fog Color";
        string UIWidget = "Color";
    > = {0.5f, 0.5f, 0.5f};

    float FogStart = { 20.0f };
    float FogRange = { 40.0f };

    float3 CameraPosition : CAMERAPOSITION < string UIWidget="None"; >;
}

cbuffer CBufferPerObject
{
    float4x4 WorldViewProjection : WORLDVIEWPROJECTION < string UIWidget="None"; >;
    float4x4 World : WORLD < string UIWidget="None"; >;

    float4 SpecularColor : SPECULAR <
        string UIName =  "Specular Color";
        string UIWidget = "Color";
    > = {1.0f, 1.0f, 1.0f, 1.0f};

    float SpecularPower : SPECULARPOWER <
        string UIName =  "Specular Power";
        string UIWidget = "slider";
        float UIMin = 1.0;
        float UIMax = 255.0;
        float UIStep = 1.0;
    > = {25.0f};
}

Texture2D ColorTexture <
    string ResourceName = "default_color.dds";
    string UIName =  "Color Texture";
    string ResourceType = "2D";
>;

Texture2D TransparencyMap <
    string UIName =  "Transparency Map";
    string ResourceType = "2D";
>;

SamplerState TrilinearSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = WRAP;
    AddressV = WRAP;
};

RasterizerState DisableCulling
{
    CullMode = NONE;
};

BlendState EnableAlphaBlending
{
    BlendEnable[0] = True;
    SrcBlend = SRC_ALPHA;
    DestBlend = INV_SRC_ALPHA;
};

/************* Data Structures *************/

struct VS_INPUT
{
    float4 ObjectPosition : POSITION;
    float2 TextureCoordinate : TEXCOORD;
    float3 Normal : NORMAL;
};

struct VS_OUTPUT
{
    float4 Position : SV_Position;
    float3 Normal : NORMAL;
    float2 TextureCoordinate : TEXCOORD0;
    float4 LightDirection : TEXCOORD1;
    float3 ViewDirection : TEXCOORD2;
    float FogAmount : TEXCOORD3;
};

/************* Vertex Shader *************/

VS_OUTPUT vertex_shader(VS_INPUT IN, uniform bool fogEnabled)
{
    VS_OUTPUT OUT = (VS_OUTPUT)0;

    OUT.Position = mul(IN.ObjectPosition, WorldViewProjection);
    OUT.TextureCoordinate = get_corrected_texture_coordinate(IN.TextureCoordinate);
    OUT.Normal = normalize(mul(float4(IN.Normal, 0), World).xyz);

    float3 worldPosition = mul(IN.ObjectPosition, World).xyz;

    OUT.LightDirection = get_light_data(LightPosition, worldPosition, LightRadius);
    float3 viewDirection = CameraPosition - worldPosition;

    if (fogEnabled)
    {
        OUT.FogAmount = get_fog_amount(viewDirection, FogStart, FogRange);
    }

    OUT.ViewDirection = normalize(viewDirection);

    return OUT;
}

/************* Pixel Shader *************/

float4 pixel_shader(VS_OUTPUT IN, uniform bool fogEnabled) : SV_Target
{
    float4 OUT = (float4)0;

    float3 normal = normalize(IN.Normal);
    float3 viewDirection = normalize(IN.ViewDirection);
    float4 color = ColorTexture.Sample(TrilinearSampler, IN.TextureCoordinate);
    float3 ambient = get_vector_color_contribution(AmbientColor, color.rgb);

    LIGHT_CONTRIBUTION_DATA lightContributionData;
    lightContributionData.Color = color;
    lightContributionData.Normal = normal;
    lightContributionData.ViewDirection = viewDirection;
    lightContributionData.SpecularColor = SpecularColor;
    lightContributionData.SpecularPower = SpecularPower;
    lightContributionData.LightDirection = IN.LightDirection;
    lightContributionData.LightColor = LightColor;
    float3 light_contribution = get_light_contribution(lightContributionData);

    OUT.rgb = ambient + light_contribution;
    OUT.a = TransparencyMap.Sample(TrilinearSampler, IN.TextureCoordinate).a;

    if (fogEnabled)
    {
        OUT.rgb = lerp(OUT.rgb, FogColor, IN.FogAmount);
    }

    return OUT;
}

technique10 alphaBlendingWithFog
{
    pass p0
    {
        SetVertexShader(CompileShader(vs_4_0, vertex_shader(true)));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, pixel_shader(true)));

        SetRasterizerState(DisableCulling);
        SetBlendState(EnableAlphaBlending, (float4)0, 0xFFFFFFFF);
    }
}

technique10 alphaBlendingWithoutFog
{
    pass p0
    {
        SetVertexShader(CompileShader(vs_4_0, vertex_shader(false)));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, pixel_shader(false)));

        SetRasterizerState(DisableCulling);
        SetBlendState(EnableAlphaBlending, (float4)0, 0xFFFFFFFF);
    }
}


Transparency Mapping Effect

Much of this code you’ve seen before, although previously separated into individual effects. New is the EnableAlphaBlending blend state object. You set the SrcBlend and DestBlend members for alpha blending and then enable color blending for the first render target bound to the output-merger stage with BlendEnable[0] = True. You can bind up to eight render targets to the output-merger stage at one time. Part IV, “Intermediate-Level Rendering Topics,” discusses multiple render targets further.

You apply the blend state from within a technique through a call to SetBlendState(). The first parameter is your blend state object. The second parameter is the constant color used when either the source or the destination blend factor is set to the BLEND_FACTOR enumeration (see Table 8.1). The third parameter is a 32-bit multisample coverage mask that determines which samples are updated for the active render targets. Chapter 11, “Direct3D Initialization,” covers multisampling.
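For instance, a hypothetical blend state that mixes the two colors with a constant could look like the following; the second parameter to SetBlendState() then supplies that constant:

BlendState EnableConstantColorBlending
{
    BlendEnable[0] = True;
    SrcBlend = BLEND_FACTOR;
    DestBlend = INV_BLEND_FACTOR;
};

technique10 constantColorBlending
{
    pass p0
    {
        SetVertexShader(CompileShader(vs_4_0, vertex_shader(false)));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, pixel_shader(false)));

        SetRasterizerState(DisableCulling);

        // 25% source + 75% destination, with all multisample samples enabled
        SetBlendState(EnableConstantColorBlending, float4(0.25f, 0.25f, 0.25f, 0.25f), 0xFFFFFFFF);
    }
}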

Transparency Mapping Output

Figure 8.8 shows the output of the transparency mapping effect applied to a plane with a checkerboard color texture surrounded by a skybox. Beneath each image is the texture used for the alpha channel. In the image to the left, the alpha map is a gradient transitioning from transparent to opaque (from 0.0 to 1.0). In grayscale, these values are visualized as a gradient from black (0.0) to white (1.0). Note that the single-pixel border around the right-side image is apparent because the plane is selected in the NVIDIA FX Composer Render panel. This is to demonstrate the transparency effect and is not a rendering artifact.


Figure 8.8 TransparencyMapping.fx applied to a plane with a checkerboard color texture using a gradient alpha map (left) and an alpha map in the shape of a maple leaf (right). (Skybox texture by Emil Persson.)

Alpha blending is dependent on the order in which objects are rendered. Transparent objects should be rendered from back to front (farthest from the camera to nearest); otherwise, the “bleed through” of the background pixels will be incorrect. Figure 8.9 shows two iterations of the same scene, in which a plane is in front of a sphere. In both images, the render target is cleared to a gray color and then a reference grid is drawn. Afterward, in the image to the left, the environment-mapped sphere is drawn and then the alpha-blended plane is drawn. For the image to the right, the plane is drawn before the sphere. Notice that the image to the right is mixed with the gray background color overlaid with the grid instead of the sphere because the sphere hasn’t yet been drawn to the render target when the plane is blended.


Figure 8.9 A scene depicting the impact of draw order on alpha-blended objects. On the left, the sphere is rendered before the plane, and vice versa for the image to the right. (Skybox texture by Emil Persson.)


Note

The draw order for NVIDIA FX Composer is determined by the sequence in which the scene elements are created. The first object created in the scene is drawn first.


For opaque objects, the final color isn’t affected by draw order, but rendering opaque objects from front to back does have performance benefits. We discuss this further in Part IV.

Summary

In this chapter, you discovered several graphics techniques. You learned about texture cubes and their application for skyboxes and environment mapping. You wrote shaders for producing a fog effect and explored color blending. Finally, you wrote an effect for transparency mapping and learned a bit about draw order.

In the next chapter, you explore effects for normal mapping and displacement mapping, the final topics in Part II, “Shader Authoring with HLSL.”

Exercises

1. Create a texture cube, either by hand or by using a tool such as Terragen from Planetside Software. Use the texture cube with the skybox effect, and observe the output as you manipulate the camera.

2. Experiment with the environment mapping effect. Modify the ambient and environment light values and the reflection amount, and observe the results.

3. Implement a complete fog effect that incorporates a single directional light.

4. Explore the transparency mapping effect with multiple objects in a scene. Vary the creation order of objects, and observe the results.
