
8. Textures and Rendering

Chris Conlan


So far, we have constrained our code examples to the creation of meshes and add-ons in Blender. For 3D artists and animators, the goal of 3D modeling is to make a scene come to life with rendered images and videos. Rendering in Blender Python is actually very simple, typically requiring only a single function call. To bring us to the point where we want to render our scenes, we will discuss texturing, lighting, and camera placement.

By the end of this chapter, users will be able to create automated pipelines for texturing, lighting, camera placement, and still rendering. While it is possible to render animated video with Blender Python, we will limit our discussion here to rendering still images.

Vocabulary of Textures

There are many types of textures in general, and many extra parameterized types in Blender. Our first example uses diffuse textures and normal maps to illustrate how materials function in Blender. Before we proceed, we will establish some new vocabulary about textures.

Types of Influence in Blender

While these effects are categorized as influences in Blender, they traditionally refer to types of textures in the broader domain of 3D modeling. Blender has its own types of textures, each of which can adopt any of these influences. See Figure 8-1 for the location of these influences in the Blender GUI; they can be found under Properties ➤ Textures ➤ Influence.

  • Diffuse textures are for coloring the object. Diffuse textures can describe the color, intensity, alpha levels, and translucency of objects in Blender. To overlay an image on the face of an object, we use a diffuse color texture.

  • Shading textures describe how the object interacts with others in the scene. If we want the object to mirror another, to emit color onto another, or to spill ambient light into the scene, we specify the requisite shading properties in Blender.

  • Specular textures describe how the object reacts to light. For example, if we supplied an image of static fuzz (as one might see on an old TV screen) as a specular texture, the light would reflect off the object like shiny grains of sand. We can fine-tune specular maps by specifying how intensely and in what direction the colors react to light.

  • Geometry textures allow the texture to affect the geometric appearance of the object. For example, if we supplied an image of black-and-white stripes as a geometry texture and specified a normal map, we would see 3D ridges in our model. It is important to note that these effects are realized only in rendering, not in the mesh data itself.

Figure 8-1. Influences in Blender

Types of Textures in Blender

Though we will mainly be working with image textures, Blender has numerous customizable textures we can choose from. These are selected from the Properties ➤ Textures ➤ Type menu shown in Figure 8-2.

Figure 8-2. Texture types in Blender

The Image and Video and Environment Map options can import image and video files. The remaining textures can be parameterized in Blender to achieve the desired result. We do not detail how to work with any of these parameterized textures specifically, as there would be many dozens of parameters to discuss. Listing 8-1 explains how to work with the parameters of the Image and Video type in order to texture an object. From there, readers should be able to replicate this process for any of the remaining types using Blender’s Python tooltips.
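As a brief illustration of one such parameterized type, the sketch below (not part of the chapter’s listings) creates a procedural Clouds texture and adjusts a few of its parameters before attaching it to a new material; the datablock names and parameter values are arbitrary and chosen only for demonstration.

import bpy

# Create a procedural Clouds texture and adjust a few of its parameters
cloud_tex = bpy.data.textures.new('demo_clouds_tex', type='CLOUDS')
cloud_tex.noise_scale = 0.5            # size of the noise pattern
cloud_tex.noise_depth = 3              # levels of detail in the noise
cloud_tex.noise_basis = 'VORONOI_F1'   # underlying noise function

# Attach the texture to a new material via a texture slot, as in Listing 8-1
demo_mat = bpy.data.materials.new('demo_clouds_material')
demo_slot = demo_mat.texture_slots.add()
demo_slot.texture = cloud_tex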

Adding and Configuring Textures

We touched on the definition of textures in Chapter 4 while discussing file interchange formats. Textures are mapped to a face in 3D space via uv coordinates. To map a square image as a texture to a square face of a mesh, we specify uv coordinates [(0, 0), (1, 0), (0, 1), (1, 1)] to the bottom-left, bottom-right, top-left, and top-right points of the mesh, respectively. As the shapes of faces become more complicated, so do the processes required to achieve the desired texture mappings. We discuss methods for mapping uv coordinates to common shapes next.

Loading Textures and Generating UV Mappings

Due to the manner in which Blender handles texture imports and materials, uv mapping is not an altogether straightforward task. We have to overcome a few procedural hurdles in order to reach the point in our script where we can explicitly define the uv coordinates on our object. Once we reach this point, precise specification of uv coordinates is fairly straightforward. We explain by way of example in Listing 8-1.

Our example uses sample images of the numbers 1 and 2, which can be downloaded at http://blender.chrisconlan.com/number_1.png and http://blender.chrisconlan.com/number_2.png . Readers can use these images or any other desired images for Listing 8-1. See Figure 8-4 for the result. We discuss the functions used in Listing 8-1 in the following sections.

Note

After running this script, view the results by selecting Rendered view in the 3D Viewport Header, as shown in Figure 8-3.

Figure 8-3. Selecting rendered view
Listing 8-1. Loading Textures and Generating UV Maps
import bpy
import bmesh
from mathutils import Color


# Clear scene
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()


# Create cube
bpy.ops.mesh.primitive_cube_add(radius = 1, location = (0, 0, 0))
bpy.ops.object.mode_set(mode = 'EDIT')


# Create material to hold textures
material_obj = bpy.data.materials.new('number_1_material')


### Begin configure the number one ###
# Path to image
imgpath = '/home/cconlan/Desktop/blender-book/ch08_pics/number_1.png'
image_obj = bpy.data.images.load(imgpath)


# Create image texture from image
texture_obj = bpy.data.textures.new('number_1_tex', type='IMAGE')
texture_obj.image = image_obj


# Add texture slot for image texture
texture_slot = material_obj.texture_slots.add()
texture_slot.texture = texture_obj


### Begin configuring the number two ###
# Path to image
imgpath = '/home/cconlan/Desktop/blender-book/ch08_pics/number_2.png'
image_obj = bpy.data.images.load(imgpath)


# Create image texture from image
texture_obj = bpy.data.textures.new('number_2_tex', type='IMAGE')
texture_obj.image = image_obj

# Add texture slot for image texture
texture_slot = material_obj.texture_slots.add()
texture_slot.texture = texture_obj


# Tone down color map, turn on and tone up normal mapping
texture_slot.diffuse_color_factor = 0.2
texture_slot.use_map_normal = True
texture_slot.normal_factor = 2.0


### Finish configuring textures ###
# Add material to current object
bpy.context.object.data.materials.append(material_obj)


### Begin configuring UV coordinates ###
bm = bmesh.from_edit_mesh(bpy.context.edit_object.data)
bm.faces.ensure_lookup_table()


# Index of face to texture
face_ind = 0
bpy.ops.mesh.select_all(action='DESELECT')
bm.faces[face_ind].select = True


# Unwrap to instantiate uv layer
bpy.ops.uv.unwrap()


# Grab uv layer
uv_layer = bm.loops.layers.uv.active


# Begin mapping...
loop_data = bm.faces[face_ind].loops


# bottom right
uv_data = loop_data[0][uv_layer].uv
uv_data.x = 1.0
uv_data.y = 0.0


# top right
uv_data = loop_data[1][uv_layer].uv
uv_data.x = 1.0
uv_data.y = 1.0


# top left
uv_data = loop_data[2][uv_layer].uv
uv_data.x = 0.0
uv_data.y = 1.0


# bottom left
uv_data = loop_data[3][uv_layer].uv
uv_data.x = 0.0
uv_data.y = 0.0


# Change background color to white to match our example
bpy.data.worlds['World'].horizon_color = Color((1.0, 1.0, 1.0))


# Switch to object mode to add lights
bpy.ops.object.mode_set(mode='OBJECT')


# Liberally add lights
dist = 5
for side in [-1, 1]:
    for coord in [0, 1, 2]:
        loc = [0, 0, 0]
        loc[coord] = side * dist
        bpy.ops.object.lamp_add(type='POINT', location=loc)


# Switch to rendered mode to view results
Figure 8-4. Explicitly mapping UV coordinates

Textures Versus Materials in Blender

Texture is a broad term in 3D modeling. It can refer to diffuse textures, color textures, gradient textures, bump maps, and more. It is important to note that we can map all of these forms of textures to an object simultaneously. For example, a set of shingles on the roof of a house may require an image texture, a diffuse map, and a bump map in order to appear realistic when rendered.

Additionally, it is common for the image, diffuse map, and bump map of a real-world material to be built specifically for each other. In our shingle example, the bump map would define the ridges between the physical shingles as they appear in the image texture. The diffuse map would further define the shiny particles we typically see on roof shingles. By design, the files that represent the images and maps would not necessarily work with other files from outside the set. This is the motivation for materials in Blender.

A material in Blender is a collection of texture-related data. It may include any of the images and maps mentioned previously, as well as others such as normal and alpha maps. Whether a material comprises one texture or many, we must first build the material by assigning the texture data to it, and then assign the material to the object.

This discussion reveals the motivation behind material management in Listing 8-1. We declare and manipulate all required textures first, then we add the entire material to the object via bpy.context.object.data.materials.append(). From here, we can manipulate the uv coordinates of the entire material.
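Condensed to its essentials, that chain looks like the minimal sketch below; the image path and datablock names are placeholders, and the active object is assumed to be a mesh.

import bpy

# Build the material from its constituent texture, then assign it to the object
img = bpy.data.images.load('/path/to/any_image.png')      # placeholder path
tex = bpy.data.textures.new('example_tex', type='IMAGE')
tex.image = img

mat = bpy.data.materials.new('example_material')
slot = mat.texture_slots.add()
slot.texture = tex

# Finally, assign the material to the active object
bpy.context.object.data.materials.append(mat)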

UV Coordinates and Loops

The second half of Listing 8-1 accesses a data endpoint we have not worked with previously. The uv coordinate data layer we aim to access is contained within a loops object. Loops can be thought of as 3D polygons that trace a set of vertices of a 3D object. Loops can span multiple faces, but must start and end on the same point. When loops span multiple faces, they are intended to capture a localized set of adjacent faces.

3D artists have access to advanced tools that help them create loops. These loops then aid them in manual assignment of uv coordinates. While we will not be manipulating these loops in Blender Python, it is important to understand how they work, because the loops data object lies between the mesh itself and the uv layer.

Fortunately, loops data objects in Blender have a 1-to-1 correspondence with bmesh.faces[].verts[] objects, which we are used to working with. In other words, the (u, v) coordinates accessed by bm.faces[f].loops[v][uv_layer].uv correspond to the (x, y, z) coordinates accessed by bm.faces[f].verts[v].co for any two integers, f and v.

It is important to note that two integers f and v may not specify a unique point in 3D space. In a default Blender 2.78c cube, as it appears in the startup file, f:v pairs 0:2, 3:3, and 4:0 all correspond to the point (-1.0, -1.0, -1.0) in 3D space. When the cube is textured, these uv coordinates will typically be unique, because they will all correspond to different parts of the texture map.
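To observe this correspondence directly, the short sketch below prints the (x, y, z) and (u, v) data side by side for one face; it assumes the object is in Edit Mode with an active uv layer, as it is at the end of Listing 8-1.

import bpy
import bmesh

# Compare loop uv data with vertex coordinates for face 0
bm = bmesh.from_edit_mesh(bpy.context.edit_object.data)
bm.faces.ensure_lookup_table()
uv_layer = bm.loops.layers.uv.active

f = 0
for v in range(len(bm.faces[f].loops)):
    print(bm.faces[f].verts[v].co, bm.faces[f].loops[v][uv_layer].uv)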

Another Note on Indexing and Cross-Compatibility

When dynamically texturing objects, we run into a problem similar to the one mentioned in Chapter 3’s “Note on Indexing and Cross-Compatibility”. In that section, we noted that the behavior of vertex indices was replicable but untamable, thus justifying selection by characteristic as a workaround (implemented in Listing 3-13). The same concept applies here, except we must work with bm.faces[f].verts[v].co as opposed to just bm.verts[v].co.

For example, say we wanted to place a texture on the top face of a cube, oriented upright along the y-axis. One possible solution is to use ut.act.select_by_loc() from our ut.py toolkit to select the top face of the cube based on its location. From there, we can use f_ind = [f.index for f in bm.faces if f.select][0] to return the selected face index. Using the face index, we can store the face’s vertices as vert_vectors = [v.co for v in bm.faces[f_ind].verts] and use this information to orient our texture along the cube.
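As a minimal sketch of this first option, the snippet below performs the same selection by characteristic with bmesh directly, choosing the face whose center has the greatest z value in place of the ut.act.select_by_loc() call; it assumes the cube is in Edit Mode.

import bpy
import bmesh

# Select the top face by characteristic (highest face center) rather than by index
bm = bmesh.from_edit_mesh(bpy.context.edit_object.data)
bm.faces.ensure_lookup_table()
top_face = max(bm.faces, key=lambda f: f.calc_center_median().z)

bpy.ops.mesh.select_all(action='DESELECT')
top_face.select = True

# Store the face's vertex coordinates to orient the texture
f_ind = top_face.index
vert_vectors = [v.co for v in bm.faces[f_ind].verts]
print(vert_vectors)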

Our other option is to operate against the advice of Chapter 3’s “Note on Indexing and Cross-Compatibility” by assuming we know the location and orientation of the face vertices of an object in advance of texturing it. We can often determine this information in advance and hardcode it into our texturing scripts, as we did in Listing 8-1. This is a viable option for controlled and internal use, but it is advised against for code that we will share with the community and that is tested for cross-version compatibility.

Based on our discussion up to this point, readers should have the tools and knowledge available to implement their desired dynamic (or non-dynamic) texturing scripts. The referenced section of Chapter 3, along with the sections that follow it, is a strong analogue to any dynamic texturing task readers may undertake.

We now move on to discuss rendering in Blender and some of its uses.

Removing Unused Textures and Materials

We have discussed many useful functions for deleting meshes and objects in Blender. As we continually test scripts, our materials and textures data can quickly become cluttered without our realizing it. When we neglect to delete old textures, Blender renames the new ones to my_texture.001, my_texture.002, and so on.

Textures and materials must have no users in order to be eligible for deletion. In this case, users refers to the number of objects that currently have it assigned. To delete textures and materials, we loop through our bpy.data.materials and bpy.data.textures datablocks and call .remove() on those that are not in use. See Listing 8-2 for this implementation.

Listing 8-2. Removing Unused Textures and Materials
import bpy

mats = bpy.data.materials
for dblock in mats:
    if not dblock.users:
        mats.remove(dblock)


texs = bpy.data.textures
for dblock in texs:
    if not dblock.users:
        texs.remove(dblock)

Rendering Using Blender Render

Using Blender’s built-in rendering functions is very straightforward. We introduce and explain how to position lights and cameras in a scene, then call the rendering function to create an image. The majority of our discussion focuses on semantics and helper functions for cameras and lights.

Adding Lights

In Listing 8-1, we added six lights around our cube to make it viewable in Blender’s Rendered view in the 3D Viewport. Properly using this view, and rendering in general, requires lights. Lighting is an important and large domain in 3D modeling in and of itself. In this section, we focus on Blender Python functions related to lighting rather than general practices for aesthetically pleasing lighting.

In the 3D Viewport Header, we can navigate to Add ➤ Lamp to select any of Blender’s built-in lights. Using Python tooltips, we can see that they all rely on the function bpy.ops.object.lamp_add(), with the type= parameter determining the type of light. We have the options SUN, POINT, SPOT, HEMI, and AREA. Each of these types has its own sets of parameters to configure.

Our primary concerns when it comes to procedurally generated lighting are placement and direction. We will introduce some utilities for managing placement and direction. For example, to lazily light our entire scene, we may want to create point lights around the aggregate bounding box of the scene. Additionally, we may want to point a spotlight directly at another arbitrarily placed object. See Listing 8-3 for a list of utilities that may help with procedurally adding lights. All of the functions we declare in Listing 8-3 have been added to our toolkit ut.py, which can be downloaded at http://blender.chrisconlan.com/ut.py .

See Table 8-1 for a basic description of each type of light.

Table 8-1. Types of Lights

Type          Description
Point         Emits light equally in all directions; rotation has no effect
Spot          Emits a cone of light in a particular direction
Area          Emits light from a rectangular area; follows a Lambert distribution
Hemispheric   Similar to Area, but has spherical curvature
Sun           Emits orthogonal light in a particular direction; position has no effect
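For instance, the short sketch below adds a spotlight above the origin and aims it at a made-up target point using the point_at() utility from Listing 8-3; it assumes we are in Object Mode and that ut.py is on the import path.

import bpy
import ut
from mathutils import Vector

# Add a spotlight above the scene and aim it at an arbitrary target point
bpy.ops.object.lamp_add(type='SPOT', location=(0, 0, 6))
spot = bpy.context.object
ut.point_at(spot, Vector((1.0, 1.0, 0.0)))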

Adding Cameras

Rendering a scene requires a camera. To procedurally add a camera, we must position it, adjust its direction, and modify its parameters. We will use the functions in Listing 8-3 to position and direct the cameras as well as lights.

The biggest problem we must solve when procedurally generating cameras is determining the distance and field of view such that the entire scene will be captured without appearing too small in the rendering. We will use some basic trigonometry to solve these problems.

The field of view (FoV) is a pair of angles (θx, θy) projecting outward from a camera that defines an infinitely extending rectangular pyramid. Everything lying within this rectangular pyramid can be seen by the camera if there is nothing in front of it. To give some perspective, an iPhone 6 camera has a FoV of about (63°, 47°) in landscape mode. Note that when photographers refer to FoV colloquially, they commonly cite only the larger of the two angles.

We must understand FoV so that we can ensure the placement and calibration of the camera captures the scene we want to render.

Given a camera with FoV (θx, θy) centered along and facing a scene with a bounding box of height h and width w, the distance from the scene d required to capture the scene is max(dx, dy). For this discussion, dx and dy represent the requisite distance to capture the scene along the horizontal and vertical dimensions, respectively. See Figure 8-5 for a visual representation. Using basic trigonometry, we arrive at
$$ \begin{array}{l} d_x = \frac{w}{2}\cot\left(\frac{\theta_x}{2}\right) \\ d_y = \frac{h}{2}\cot\left(\frac{\theta_y}{2}\right) \end{array} $$
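As a quick sanity check of these formulas, the short sketch below computes the required distance for the 2-unit-wide cube face from Listing 8-1, assuming a horizontal FoV of 49.1 degrees (roughly the default angle of a new Blender camera); the numbers are for illustration only.

from math import tan, radians

# d_x = (w / 2) * cot(theta_x / 2) for a 2-unit-wide face and an assumed FoV
w = 2.0
theta_x = radians(49.1)
d_x = (w / 2) * (1 / tan(theta_x / 2))
print(d_x)   # roughly 2.19 Blender units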

Figure 8-5. Field of view along the y-axis

This only accounts for the simple case where the camera is pointing along the x- or y-axis, but it will suffice for our purposes. In Listing 8-4, we use utility functions established previously to direct the camera such that it can render the entire visible scene.

Listing 8-3. Utilities for Lights and Cameras
# Point a light or camera at a location specified by "target"
def point_at(ob, target):
    dir_vec = target - ob.location
    ob.rotation_euler = dir_vec.to_track_quat('-Z', 'Y').to_euler()


# Return the aggregate bounding box of all meshes in a scene
def scene_bounding_box():

    # Get names of all meshes
    mesh_names = [v.name for v in bpy.context.scene.objects if v.type == 'MESH']

    # Save an initial value
    # Save as list for single-entry modification
    co = coords(mesh_names[0])[0]
    bb_max = [co[0], co[1], co[2]]
    bb_min = [co[0], co[1], co[2]]

    # Test and store maxima and minima
    for i in range(0, len(mesh_names)):
        co = coords(mesh_names[i])
        for j in range(0, len(co)):
            for k in range(0, 3):
                if co[j][k] > bb_max[k]:
                    bb_max[k] = co[j][k]
                if co[j][k] < bb_min[k]:
                    bb_min[k] = co[j][k]

    # Convert to tuples
    bb_max = (bb_max[0], bb_max[1], bb_max[2])
    bb_min = (bb_min[0], bb_min[1], bb_min[2])

    return [bb_min, bb_max]

Rendering an Image

Rendering is the process of computing high-resolution imagery and video given 3D data. Rendering is not instantaneous. While the 3D Viewport in Blender seems to move fluidly as we translate and rotate the camera, rendering can take a considerable amount of time. The 3D Viewport is an instantaneous rendering of the 3D data, but it does not represent the same level of quality or definition as a traditional rendering.

In Listing 8-4, we render the output of Listing 8-1 using both Blender Render and OpenGL render. This example positions the camera away from the scene on the x-axis, at the scene’s yz-median, and points it up the x-axis at the median of the scene so that the whole scene is captured. We use the equations discussed previously to accomplish this. Recall that these equations assume the simple case that we are pointing the camera along an axis.

The resulting rendering captures the object squarely within the frame. See Figure 8-6 for the Blender Render of the cube created in Listing 8-1. For the Blender Render, the scene’s camera is used as the rendering camera. This is why it is important to know how to set the camera’s position procedurally. If we want to loop through and render many scenes, we need to be confident that the scene will be captured within the frame.

Figure 8-6. Blender Render

We can also render a snapshot of the 3D Viewport using OpenGL render. This will capture basic features of the scene similar to how we see the 3D Viewport in Object Mode with Solid view. See Figure 8-7 for the result. Note that we can see both the lights and camera, but not the materials, in this view. When we call bpy.ops.render.opengl(), setting view_context = True will cause Blender to use the 3D Viewport camera (the user’s view) rather than the scene camera.

Figure 8-7. OpenGL rendering
Listing 8-4. Rendering Using Blender Render and OpenGL Render
### Assumes output of Listing 8-1 is in scene at runtime ###

import bpy
import bmesh
import ut


from math import pi, tan
from mathutils import Vector


# Get scene's bounding box (meshes only)
bbox = ut.scene_bounding_box()


# Calculate median of bounding box
bbox_med = ( (bbox[0][0] + bbox[1][0])/2,
             (bbox[0][1] + bbox[1][1])/2,
             (bbox[0][2] + bbox[1][2])/2 )


# Calculate size of bounding box
bbox_size = ( (bbox[1][0] - bbox[0][0]),
              (bbox[1][1] - bbox[0][1]),
              (bbox[1][2] - bbox[0][2]) )


# Add camera to scene
bpy.ops.object.camera_add(location=(0, 0, 0), rotation=(0, 0, 0))
camera_obj = bpy.context.object
camera_obj.name = 'Camera_1'


# Required for us to manipulate FoV as angles
camera_obj.data.lens_unit = 'FOV'


# Set the render resolution in pixels
# Output will be half these dimensions, because resolution_percentage defaults to 50%
scn = bpy.context.scene
scn.render.resolution_x = 1800
scn.render.resolution_y = 1200


# Compute FoV angles
aspect_ratio = scn.render.resolution_x / scn.render.resolution_y


if aspect_ratio > 1:
    camera_angle_x = camera_obj.data.angle
    camera_angle_y = camera_angle_x / aspect_ratio
else:
    camera_angle_y = camera_obj.data.angle
    camera_angle_x = camera_angle_y * aspect_ratio


# Set the scene's camera to our new camera
scn.camera = camera_obj


# Determine the distance to move the camera away from the scene
camera_dist_x = (bbox_size[1]/2) * (tan(camera_angle_x / 2) ** -1)
camera_dist_y = (bbox_size[2]/2) * (tan(camera_angle_y / 2) ** -1)
camera_dist = max(camera_dist_x, camera_dist_y)


# Multiply the distance by an arbitrary buffer
camera_buffer = 1.10
camera_dist *= camera_buffer


# Position the camera to point up the x-axis
camera_loc = (bbox[0][0] - camera_dist, bbox_med[1], bbox_med[2])


# Set new location and point camera at median of scene
camera_obj.location = camera_loc
ut.point_at(camera_obj, Vector(bbox_med))


# Set render path
render_path = '/home/cconlan/Desktop/blender_render.png'
bpy.data.scenes['Scene'].render.filepath = render_path


# Render using Blender Render
bpy.ops.render.render( write_still = True )


# Set render path
render_path = '/home/cconlan/Desktop/opengl_render.png'
bpy.data.scenes['Scene'].render.filepath = render_path


# Render 3D viewport using OpenGL render
bpy.ops.render.opengl( write_still = True , view_context = True )

Conclusion

This chapter concludes our discussion of the Blender Python API. Even with its many examples, this text is not a comprehensive guide. This is a testament to the complexity and modularity of Blender more than anything else. Blender can be edited, tweaked, customized, and expanded using the Python API. The author of this book and the dedicated professionals who assisted in its development hope that this knowledge helps encourage research and development in the Blender community.
