Figure 13.2. The OpenGL processing pipeline: geometric 3D data in the form of ver-
tex data and image data in the form of pixels pass through a series of steps until they
appear as the output rendered image in the frame buffer at the end of the pipeline.
each other. It shows that the output from the frame buffers can be fed back
to the host computer’s main memory if desired. There are two key places
where the 3D geometric data and pixel image interact: in the rasterizer they
are mixed together so that you could, for example, play a movie in an inset
area of the window or make a Windows window appear to float around with
the geometric elements. Pixel data can also be stored in the hardware’s video
RAM and called on by the fragment processor to texture the polygons. In
OpenGL Version 1.x (and what is called OpenGL’s fixed functionality), shad-
ing and lighting are done on a per-vertex basis using the Gouraud model.
Texture coordinates and transformations are also calculated at this stage for
each primitive. A primitive is OpenGL’s term for the group of vertices that,
taken together, represent a polygon in the 3D model. It could be three, four
or a larger group of vertices forming a triangle strip or triangle fan (as detailed
in Section 5.1). In fixed-functionality mode, the fragment processing step is
fairly basic and relates mainly to the application of a texture. We shall see
in Chapter 14 that with OpenGL Version 2, the fragment processing step
becomes much more powerful, allowing lighting calculation to be done on a
per-fragment basis. This makes Phong shading a possibility in real time. The
full details of the processing which the geometry undergoes before rasteriza-
Figure 13.3. Geometry processing by OpenGL before the rasterization stage.
tion is given in Figure 13.3. It is worth also pointing out that the frame buffer
is not as simple as it appears. In fact, it contains several buffers (Z depth, sten-
cil etc.), and various logical operations and bit blits (a bit blit is the operation
of copying blocks of pixels from one part of the video memory to another) can
be performed within and among them.
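For instance, in fixed-functionality OpenGL, several of these buffers can be
cleared in a single call, and a logical operation can be applied as fragments are
written to the color buffer. The fragment below is only an illustrative sketch of
these calls, not code taken from the listings in this chapter:

glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // value used when clearing the color buffer
glClearDepth(1.0);                    // value used when clearing the Z (depth) buffer
glClearStencil(0);                    // value used when clearing the stencil buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

glEnable(GL_COLOR_LOGIC_OP);          // apply a logical operation to fragments
glLogicOp(GL_XOR);                    // as they are written to the color buffer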
The two blocks transform & light and clip in Figure 13.2 con-
tain quite a bit of detail; this is expanded on in Figure 13.3.
As well as simply passing vertex coordinates to the rendering
hardware, an application program has to send color informa-
tion, texture coordinates and vertex normals so that the lighting
and texturing can be carried out. OpenGL’s implementation as a
state machine says that there are current values for color, surface
normal and texture coordinate. These are only changed if the
application sends new data. Since the shape of the surface poly-
gons is implied by a given number of vertices, these polygons
must be assembled. If a polygon has more than three vertices,
it must be broken up into primitives. This is primitive assem-
bly. Once the primitives have been assembled, the coordinates
of their vertices are transformed to the camera’s coordinate sys-
tem (the viewpoint is at (0, 0, 0); a right-handed coordinate system
is used with +x to the right, +y up and -z pointing away from the
viewer in the direction the camera is looking) and then clipped to
the viewing volume before being rasterized.
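This state-machine behavior can be sketched in a few lines; the coordinate,
color and normal values here are illustrative only and are not taken from any
listing in this chapter:

glBegin(GL_TRIANGLES);
glColor3f(1.0f, 0.0f, 0.0f);   // the current color is now red
glNormal3f(0.0f, 0.0f, 1.0f);  // the current normal, used for lighting
glTexCoord2f(0.0f, 0.0f);      // the current texture coordinate
glVertex3f(-1.0f,-1.0f,-5.0f); // vertex 1 takes all the current values above
glTexCoord2f(1.0f, 0.0f);      // only the texture coordinate changes;
glVertex3f( 1.0f,-1.0f,-5.0f); // vertex 2 keeps the same color and normal
glTexCoord2f(0.5f, 1.0f);
glVertex3f( 0.0f, 1.0f,-5.0f); // vertex 3 likewise
glEnd();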
To summarize the key point of this discussion: as application developers,
we should think of vertices first, particularly how we order their presentation
to the rendering pipeline. Then we should consider how we use the numer-
ous frame buffers and how to load and efficiently use and reuse images and
textures without reloading. The importance of using texture memory efficiently
cannot be overemphasized.
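One way to use and reuse an image without repeatedly reloading it is to create
a texture object once at start-up and then simply bind it whenever it is needed.
The fragment below is a minimal sketch of that pattern; the variable name and
the image parameters (width, height, pixels) are placeholders assumed to have
been set up elsewhere:

GLuint brickTexture;           // placeholder name for one texture object

// At initialization: copy the image into the adapter's texture memory once.
glGenTextures(1, &brickTexture);
glBindTexture(GL_TEXTURE_2D, brickTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);

// Every frame: no reload is necessary, just re-bind the resident texture.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, brickTexture);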
In Figure 13.4, we see the relationship between the software components
that make up an OpenGL system. The Windows DLL (and its stub library
file) provides an interface between the application program and the adapter’s
Figure 13.4. A program source uses a stub library to acquire access to the OpenGL
API functions. The executable calls the OpenGL functions in the system DLL, and
this in turn uses the drivers provided by the hardware manufacturer to implement
the OpenGL functionality.
device driver, which actually implements the OpenGL functionality. With
this component architecture, taking advantage of any new functionality pro-
vided by graphics card vendors is just a matter of replacing the driver with
another version.
Now that we have a fair understanding of how OpenGL works, we can
get down to some detail of how to use it to visualize a virtual environment.
13.1.2 Rendering OpenGL in a Windows Window
The template code in Appendix D shows that to render within a Windows
window, an application program must provide a handler for the WM_PAINT
message, obtain a drawing device context for the client area and use it to
render the lines, bitmaps or whatever. In platform-independent OpenGL,
there is no such thing as a device context. To draw something, one simply
issues calls to the required OpenGL functions, as in:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the screen
glBegin(GL_TRIANGLES);         // Draw a single triangle - just as an example,
glVertex3f( 0.0f, 1.0f,-1.0f); // it may not be visible until we describe
glVertex3f(-1.0f,-1.0f,-1.0f); // the view point correctly.
glVertex3f( 1.0f,-1.0f,-1.0f); // 3 calls to glVertex... make up a triangle
glEnd();                       // match the call to glBegin()
glFlush();                     // flush the OpenGL buffers
glFinish();                    // finish drawing and make it visible
Drawing directly to the screen like this can give an unsatisfactory result.
Each time we draw, we start by clearing the screen and then render one poly-
gon after another until the picture is complete. This will give the screen the
appearance of flickering, and you will see the polygons appearing one by one.
This is not what we want; a 3D solid object does not materialize bit by bit. To
solve the problem, we use the technique of double buffering. Two copies of
the frame buffer exist; one is visible (appears on the monitor) and one is hid-
den. When we render with OpenGL, we render into the hidden buffer, and
once the drawing is complete, we swap them over and start drawing again
in the hidden buffer. The swap can take place imperceptibly to the viewer
because it is just a matter of swapping a pointer during the display’s vertical
blanking interval. All graphics hardware is capable of this.
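Under Windows, the hidden buffer is requested when the pixel format for the
window’s device context is chosen. The fragment below is a minimal sketch of
such a request (the field values are typical choices, and error checking is
omitted):

PIXELFORMATDESCRIPTOR pfd;
int pixelFormat;
ZeroMemory(&pfd, sizeof(pfd));
pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL
               | PFD_DOUBLEBUFFER;         // ask for a hidden (back) buffer
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;                        // RGB color buffer
pfd.cDepthBits = 32;                        // Z (depth) buffer
pixelFormat = ChoosePixelFormat(hDC, &pfd); // closest format the driver offers
SetPixelFormat(hDC, pixelFormat, &pfd);     // hDC: the window's device context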
Under Windows, there is a very small subset of platform-specific OpenGL
functions that have names beginning
wgl. These allow us to make an implicit
connection between the Windows window handle hWnd and OpenGL’s draw-
ing functions. All one has to do to get OpenGL working on Windows is
run a small piece of initialization code during window creation (by handling
the WM_CREATE message) and then act on the WM_PAINT message by rendering
the 3D scene with calls to the appropriate OpenGL drawing functions. List-
ing 13.1 shows the key steps that enable us to use OpenGL to render into a
// the window message handler function is passed a
// handle to the window as its first argument hWnd
HGLRC hRC; HDC hDC;  // local variables - HGLRC is a handle to an OpenGL
PAINTSTRUCT ps;      // rendering context; ps is used by BeginPaint/EndPaint
....
case WM_CREATE:
hDC = GetDC(hWnd);
if(!bSetupScreenFormat(hDC)) // specify the 3D window’s properties
PostQuitMessage(0); // NO OpenGL available !
hRC = wglCreateContext( hDC );
wglMakeCurrent( hDC, hRC );
glClearColor(0.0,0.0,0.0,1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
break;
case WM_PAINT:
hDC = BeginPaint(hWnd, &ps);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the hidden buffer first
glBegin(GL_TRIANGLES);         // draw a single triangle - just as an example
glVertex3f( 0.0f, 1.0f,-1.0f); // it may not be visible until we describe
glVertex3f(-1.0f,-1.0f,-1.0f); // the view point correctly
glVertex3f( 1.0f,-1.0f,-1.0f); // 3 calls to glVertex... make up a triangle
glEnd();
glFlush(); // flush the OpenGL buffers
glFinish(); // OpenGL function to finish drawing and make it visible
// these are the Windows specific functions
hDC = wglGetCurrentDC();
SwapBuffers(hDC);
EndPaint(hWnd, &ps);
break;
case WM_DESTROY:
hRC = wglGetCurrentContext(); // get and release the OpenGL
hDC = wglGetCurrentDC(); // system
wglMakeCurrent(NULL, NULL);
if (hRC != NULL)wglDeleteContext(hRC);
if (hDC != NULL)ReleaseDC(hWnd, hDC);
break;
....
Listing 13.1. To use OpenGL in Windows, it is necessary to handle four messages: one
to initialize the OpenGL drawing system (WM_CREATE), one to perform all the nec-
essary rendering (WM_PAINT), one to release OpenGL (WM_DESTROY) and WM_SIZE
(not shown) to tell OpenGL to change its viewport and aspect ratio.
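The WM_SIZE handler mentioned in the caption is not shown in Listing 13.1;
a plausible sketch, assuming a perspective projection built with gluPerspective,
would reset the viewport and aspect ratio like this:

case WM_SIZE:
  {
  int width  = LOWORD(lParam);      // new client-area dimensions
  int height = HIWORD(lParam);
  if (height == 0) height = 1;      // guard against a divide by zero
  glViewport(0, 0, width, height);  // draw into the whole client area
  glMatrixMode(GL_PROJECTION);      // rebuild the projection with the new
  glLoadIdentity();                 // aspect ratio
  gluPerspective(45.0, (GLdouble)width/(GLdouble)height, 1.0, 1000.0);
  glMatrixMode(GL_MODELVIEW);
  }
  break;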