Creating surfaces

Surfaces are an object representation of image data buffers. They play the main role in the texture management process. Every image you load needs to be placed into a surface first. After this, you can use the surface to render your image onto the screen, store it in graphics memory, or do some image manipulation.

Getting ready

Every surface consists of a pixel format description and binary data. The surface object structure contains the following fields:

flags: This is the bit mask description of the surface properties.

format: This is the surface pixel format.

w: This is the surface width in pixels.

h: This is the surface height in pixels.

pitch: This is the length of a surface scanline in bytes. It specifies how many bytes are used to store a single line of surface content and is used mostly in surface blitting operations. The pitch value is always divisible by 4 to optimize surface processing speed.

pixels: This is the pointer to the pixel data in the form of a userdata value.

clip_rect: This is the clipping rectangle for the surface. It limits the destination area that can be modified by blitting.

refcount: This is the reference count, an internal value used primarily when freeing a surface.

All these fields are read-only, except for the pixels field, which can be used to indirectly change the surface content.
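
For instance, you can read these fields directly from a surface object. The following is a minimal sketch, assuming a surface variable obtained from SDL.SDL_CreateRGBSurface or from an image loading function:

-- inspect the basic read-only surface properties
print("size:", surface.w, "x", surface.h)
print("pitch:", surface.pitch, "bytes per scanline")
print("flags:", surface.flags)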

The pixel format describes how many bits per pixel your image uses, how many color channels there are, and so on. The pixel format structure contains the following fields:

palette: This is the palette structure. It's empty if the value of BitsPerPixel is greater than 8.

BitsPerPixel: This is the number of bits used to represent one pixel. It's usually 8, 16, 24, or 32.

BytesPerPixel: This is the number of bytes used to represent one pixel. It's usually a number from 1 to 4.

Rloss, Gloss, Bloss, and Aloss: This is the precision loss of each color component, that is, the number of bits lost from the component when it's packed into a pixel.

Rshift, Gshift, Bshift, and Ashift: This is the binary left shift of each color component within the packed pixel value.

Rmask, Gmask, Bmask, and Amask: This is the binary mask used to retrieve each color component.

colorkey: This is the transparent color identifier.

alpha: This is the overall surface alpha (transparency) value.

Usually, there is no need for you to set these values. However, they are important if you load an image with an alpha channel.
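
To see how the masks and shifts work together, the following sketch extracts individual color components from a packed 32-bit pixel value with the bit32 library from Lua 5.2. The pixel and pf variables are only placeholders for a packed pixel value and a pixel format object:

-- isolate the channel bits with the mask, then move them to the lowest bits with the shift
local r = bit32.rshift(bit32.band(pixel, pf.Rmask), pf.Rshift)
local g = bit32.rshift(bit32.band(pixel, pf.Gmask), pf.Gshift)
local b = bit32.rshift(bit32.band(pixel, pf.Bmask), pf.Bshift)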

How to do it…

You can create a new empty surface with the SDL.SDL_CreateRGBSurface function, which has the following definition:

SDL_CreateRGBSurface(flags, width, height, depth, Rmask, Gmask, Bmask, Amask)

The flags parameter is a bit mask that specifies whether the surface is stored in the system memory or in the video memory. You can use these values:

SDL.SDL_SWSURFACE: The surface will be stored in the system memory. Pixel-level access is faster, but blitting operations don't take advantage of hardware acceleration.

SDL.SDL_HWSURFACE: The surface will be stored in the video memory. Blitting is hardware accelerated.

SDL.SDL_SRCCOLORKEY: The surface will use the colorkey value from the pixel format descriptor. This color will be treated as transparent.

SDL.SDL_SRCALPHA: The surface will use the alpha value from the pixel format descriptor to apply transparency on blitting. However, you don't need this flag to load an image with an alpha channel!

It's completely safe to pass a zero value in flags, as SDL.SDL_SWSURFACE is used by default.

The next parameters are the width and height of the image in pixels.

The last parameters are the color depth and bit masks for each color component. These are fairly important, as incorrect bit masks will result in messed-up colors. Keep in mind that most current computers use little-endian byte order (the byte order of a platform is referred to as its endianness). This affects the order of the color components in the surface memory. Unfortunately, PNG, JPEG, and many other file formats use big-endian encoding. As a result, each pixel is stored with its color components in the (A)BGR order and you have to convert it to RGB(A). Fortunately, you can deal with this problem easily by setting the correct bit masks for each color component.

The following image shows what happens when the image is loaded with an invalid pixel format:

Let's say you want to create an empty image surface of 16 x 16 pixels with a 32-bit color depth and an RGBA pixel format. You would create such a surface with the following code:

local surface = SDL.SDL_CreateRGBSurface(0, 16, 16, 32, 0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000)

If you were using a big-endian computer, you would use the same code with swapped bit masks:

local surface = SDL.SDL_CreateRGBSurface(0, 16, 16, 32, 0xFF000000, 0x00FF0000, 0x0000FF00, 0x000000FF)

Surface objects are not freed automatically. Be sure to free up all unused surface objects with the SDL.SDL_FreeSurface function:

SDL.SDL_FreeSurface(surface)

How it works…

The libSDL library always tries to reserve enough memory for the whole image in uncompressed form. So, even if your PNG image file is only a few kB in size, it must be decompressed before it's stored in memory. You can compute the memory consumption in bytes with this simple equation:

needed_memory = width * height * color_components_count

The situation may change with the use of memory alignment, where each pixel takes 4 bytes (32 bits) even if your image uses a 24-bit color depth. You can check this with the BytesPerPixel field in the pixel format of the surface.
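
Because the pitch already accounts for any per-scanline padding, a more reliable estimate is the surface height multiplied by the pitch. A short sketch, assuming an already created surface object:

-- naive estimate from the equation above
local naive = surface.w * surface.h * surface.format.BytesPerPixel
-- the pitch includes any alignment padding added to each scanline
local actual = surface.h * surface.pitch
print(naive, actual)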

You can access the pixel format information with the format field:

local pixelFormat = surface.format

Note that the pixel format information is stored as a userdata value with a metatable; internally, it's just another object.

If you need to change the pixel format of the existing surface, use the SDL.SDL_ConvertSurface function, which creates a new surface. The existing pixel data will be correctly converted into the new pixel format. This function has this formal specification:

SDL_ConvertSurface(source_surface, new_pixel_format, flags)
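
For example, you can convert one surface to use the pixel format of another surface. The following is a brief sketch, assuming source and template are existing surface objects:

-- create a converted copy of source that uses template's pixel format
local converted = SDL.SDL_ConvertSurface(source, template.format, 0)
-- the original surface is left untouched and should be freed once it's no longer needed
SDL.SDL_FreeSurface(source)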

There's more…

You can always convert the existing surface into the current display pixel format with the SDL.SDL_DisplayFormat(surface) function.
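
This call also returns a new surface, so the usual pattern is to free the original one and keep the converted copy. A brief sketch:

local displayReady = SDL.SDL_DisplayFormat(surface)
-- the original surface is no longer needed after the conversion
SDL.SDL_FreeSurface(surface)
surface = displayReady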

There are some occasions where you need to change the pixel format of a surface in a very specific way. A typical example of such a situation is when you have an ARGB surface and you need to transfer it into graphic card memory with OpenGL. However, OpenGL supports the RGBA pixel format, which is the closest matching one. Of course, you can use the SDL.SDL_ConvertSurface function, but you need a pixel format object that describes the RGBA format. For this, you can create a garbage-collector-friendly pixel format object with the SDL.SDL_PixelFormat_local() function. Be wary that every field of the object must be set because the object itself is not initialized; otherwise, you can easily cause a memory access violation or a segmentation fault. The following example shows how to create a 32-bit RGBA pixel format object:

local pf = SDL.SDL_PixelFormat_local()
local bpp = 32
pf.BitsPerPixel = bpp
pf.BytesPerPixel = math.ceil(bpp/8)
pf.Rmask = 0x000000FF
pf.Gmask = 0x0000FF00
pf.Bmask = 0x00FF0000
pf.Amask = 0xFF000000
pf.Rloss = 0
pf.Gloss = 0
pf.Bloss = 0
pf.Aloss = 0
pf.Rshift = 0
pf.Gshift = 0
pf.Bshift = 0
pf.Ashift = 0
pf.colorkey = 0
pf.alpha = 255
pf.palette = nil
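
With this pixel format object prepared, the conversion itself is a single call. The following is a short sketch, where argbSurface is an assumed variable holding the original ARGB surface:

-- create an RGBA copy of the original surface
local rgbaSurface = SDL.SDL_ConvertSurface(argbSurface, pf, 0)
-- the original ARGB surface can be freed once the converted copy exists
SDL.SDL_FreeSurface(argbSurface)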

There's another thing about the current stable version of Lua 5.2: it allows you to define a custom garbage collection routine for a regular table. This routine can be used to implement automatic surface destruction when the surface object is no longer used.

Consider a situation where you create an interface within your application to manage textures for each game level. Textures can take a lot of memory space. When you change the game level, you'll most probably want to use a different set of textures. So, in the end, you'll need to keep track of all the textures you use. Before loading a new game level, you can free up all the previously used textures and load the new ones. However, there will almost certainly be textures that you use over and over, for example, font textures and decals. You can achieve proper surface tracking with weak tables, where you only keep a note that a surface is being used, so it can be freed when it's not needed anymore.

A weak table is a special type of Lua table that may contain references to other objects. These references aren't considered by the garbage collector, so an object can be freed even if there's still a reference to it in a weak table.
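
A minimal sketch of such tracking follows, where the table keys are surface objects and the table is weak-keyed so that it never keeps a surface alive on its own (the usedSurfaces name is just an example):

-- keys in this table aren't counted as references by the garbage collector
local usedSurfaces = setmetatable({}, {__mode = 'k'})

-- note that a surface object is in use
usedSurfaces[surfaceObject] = true

-- iterate over the surfaces that are still alive, for example, before a level change
for s in pairs(usedSurfaces) do
  -- decide here whether the surface should be freed or kept
end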

The problem is that you have to implement your own mechanism to manage this, and often it's not done correctly, so you'll most likely end up with memory leaks. You can solve this with a Lua table extended with a metatable that contains the garbage collection routine in the form of a metamethod stored under the __gc key.

However, Lua 5.1 can use the __gc metamethod only on userdata objects with a metatable. Newer versions of the Lua language incorporate so-called finalizers, which means the __gc metamethod can also be called on regular tables when they are garbage collected.

There's a workaround for Lua 5.1 to apply this garbage collection mechanism even on a regular table. The following lines will define the table.proxy function that will add the capability to use the __gc metamethod in the Lua 5.1 interpreter:

table.proxy = function(t)
  -- newproxy is not available in Lua 5.2+ !!!
  if type(newproxy)=='function' then
    assert(type(t) == 'table', '1st argument should be a table')
    
    -- create a new proxy object
    local p = newproxy(true)
    local p_mt = getmetatable(p)

    -- set GC meta-method for proxy object
    p_mt.__gc = function()
      local mt = getmetatable(t)
      local gc_fn = mt.__gc
      if type(gc_fn)=='function' then
        gc_fn(t)
      end
    end

    -- store proxy object in a metatable
    local mt = getmetatable(t) or {}
    mt.__proxy = p
    setmetatable(t, mt)
  end
end

With the garbage collection routine in place, you can simply arrange that, when the object (a regular table) is collected, the Lua interpreter calls your custom garbage collection routine, which correctly frees up the memory used by the object.

The following example shows such a design on the surface object:

local surface = function(width, height, bpp, rmask, gmask, bmask, amask)
  local obj = {}
  local raw = assert(SDL.SDL_CreateRGBSurface(0, width, height, bpp, rmask, gmask, bmask, amask))
  local mt = {
    __gc = function()
      SDL.SDL_FreeSurface(raw)
    end,
  }
  obj.blit = function(destination, srcRect, destRect)
    -- the blit implementation is omitted in this example
  end
  setmetatable(obj, mt)
  table.proxy(obj)
  return obj
end

In this construction, the surface function returns a regular table, which is in fact an interface to the surface object with one blit method. The function creates a closure when called and keeps the raw surface object hidden within the implementation. The mt table contains the __gc metamethod definition, where the function uses the raw upvalue that holds the surface object. This upvalue is used to delete the surface object and free up the unused memory space. The beauty of this is that object deletion is done automatically when the Lua object is no longer used and gets collected.
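
A usage sketch of this constructor might look as follows. Once the local variable goes out of scope and a garbage collection cycle runs, the underlying surface is freed without any explicit call:

local s = surface(16, 16, 32, 0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000)
-- use the object, for example: s.blit(destinationSurface, srcRect, destRect)
s = nil
collectgarbage()  -- the __gc routine eventually calls SDL.SDL_FreeSurface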
