In this chapter, we investigate how to use color within XR. In order to see color, we must also have light—because color is created from different light wavelengths. We will explore how these two properties work together to create a usable and realistic 3D experience. Here is what we will be covering:
COLOR APPEARANCE MODELS The way a color appears is dependent on how it is created. We will look at the different color modes and what settings exist to create the most realistic colors in XR.
LIGHT INTERACTIONS We can see only illuminated objects. Light has different appearances that can change the overall perception of the object itself. Learning standard lighting setups will help build your lighting understanding so you can then customize the lighting for your experience.
DYNAMIC ADAPTATION With the unpredictability of backgrounds and lighting conditions in AR, how do you create lighting that looks realistic? Imitation.
If one says “Red”—the name of color—and there are fifty people listening, it can be expected that there will be fifty reds in their minds. And one can be sure that all these reds will be very different.1
1 Albers, J. (1963). Interaction of color. Yale University Press.
So stated Josef Albers, author of the book Interaction of Color and known for his extensive research on human perception of color. The way that we interact with and understand color is a personal and dynamic relationship. Color creates an emotional impact, as we attach cultural meanings to the hues around us. Not all reds are the same. Some are more intense, some more passionate, some more full of life, and some more cautionary. Many variables affect the way we see color. Yet the way that we see a color, in turn, has a large impact on what that color communicates to us.
The term color space describes the capabilities of a display or printer to reproduce color information. For example, you will want to match the color space to the medium (print or digital), or even to the specific device, to make sure your colors display as you intend. In a closely related concept, software often allows you to set the color mode.
Color space A specific organization of colors that determines the color profile that is used to support the reproduction of accurate color information on a device. RGB and CMYK are two common examples.
In traditional print design, ensuring the accurate creation of color is so important to brand identities and marketing that the Pantone Matching System was created. This standardized system provides a way to ensure that colors (in theory) should match in the printed space regardless of the use of different printers, and it is formulated based on the coating on the paper and the printing process. So, when you select a color to be printed on a business card, it will look like the Pantone swatch. This is much like selecting the right paint swatch for a wall. It also means that when you reorder your business cards using the same paper and Pantone color, the colors on the new cards will match your first order.
The need for a color matching system comes from the way we perceive colors that are created in different formats, specifically on an electronic display versus using a pigment on paper. This need for adapting colors to compensate for formats is likely not new for designers, but when you start combining physical and digital spaces together and strive for realism, new considerations arise. Some sort of predictable model is needed or what Mark Fairchild described as a “color appearance model” in his book Color Appearance Models:
Any model that includes predictions of at least the relative color appearance attributes of lightness, chroma, and hue.2
2 Fairchild, M. D. (2013). Color appearance models. Wiley.
These color appearance models can be based on two different factors: physics or human perception. From a physics perspective, color can be additive or subtractive (FIGURE 11.1).
Additive colors are created by combining light waves, starting from black and adding hues to achieve the desired outcome. The RGB (red, green, blue) color space uses this additive model. This mode assigns each of red, green, and blue a value between 0 and 255. When combined, these three values can create over 16 million colors. The sRGB color profile was created by HP and Microsoft in 1996 specifically for use on monitors and the web. Adobe RGB is another commonly used RGB color profile.
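To make the additive model concrete, here is a minimal sketch in Python (the language is just for illustration; XR engines expose the same idea through their own color types, and `mix_additive` is a hypothetical helper, not part of any framework):

```python
def mix_additive(*colors):
    """Combine RGB lights by summing each channel, clamped to the display max of 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

# Full red + green + blue light adds up to white.
print(mix_additive(RED, GREEN, BLUE))   # (255, 255, 255)

# Each of the three channels has 256 possible values,
# giving the "over 16 million" combinations.
print(256 ** 3)                         # 16777216
```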
8-bit sRGB color format is the preferred input for images on many XR devices.
When you mix all the colors together in an additive color mode, such as RGB, you create white. In RGB mode, black is the absence of color, meaning black will be transparent in the XR space. This will be more noticeable in larger areas of black or dark colors, but often will be less noticeable with small areas of color. Perceiving a dark color in this space requires light surrounding the dark area.
White will be perceived as the brightest color in RGB mode, offering the most contrast and visibility—unless your background is extremely bright, in which case the visibility will be reduced. However, knowing that bright light will be uncomfortable to look at, a user will likely move the device to avoid looking directly at a bright light. As you consider this human behavior, know that large blocks of white will appear very bright, and users will avoid looking at them, much as we avoid looking directly at the sun. You want to use white in small amounts to help specific content be visible, but not make it so bright that people look elsewhere. On some devices, large blocks of color, such as white, will create some distortion as a result of the visual intensity.
Each color has a specific value represented in the HSB or HSL format. This format provides a numeric value to the hue, saturation, and brightness (or lightness) of the color.
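Python's standard `colorsys` module can illustrate how one RGB color maps into these hue-based formats (a sketch only; design tools usually present hue in degrees and saturation/brightness as percentages rather than 0–1 floats):

```python
import colorsys

# Pure red in RGB, expressed as floats from 0 to 1.
r, g, b = 1.0, 0.0, 0.0

# HSV (a.k.a. HSB): hue, saturation, value/brightness.
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)   # 0.0 1.0 1.0 -> hue 0 (red), fully saturated, full brightness

# HSL: note that colorsys orders the result as (hue, lightness, saturation).
h, l, s = colorsys.rgb_to_hls(r, g, b)
print(h, l, s)   # 0.0 0.5 1.0 -> in HSL, pure red sits at 50% lightness
```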
Subtractive colors are created by mixing base pigments together. Some removal also happens: some color wavelengths are absorbed into an object, making them unseen, while others reflect, enabling our eyes to see them. The common color profile for this is called CMYK (cyan, magenta, yellow, and key, or black). Black is created when you combine cyan, magenta, and yellow in equal proportion. Because this color profile uses pigment, it is used for print media. In fact, the term key is a direct reference to the key plate used in the printing process. The use of CMYK is also referred to as the four-color printing process, in reference to the use of the colors in combination to produce all needed colors in a design. The CMYK profile is used in offset and digital printing processes. These four colors can produce over 16,000 different color combinations.
This book is printed in CMYK; if you look using a magnifier, you can see the “dots” that combine to create the colors.
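The RGB-to-CMYK relationship can be sketched with a naive conversion. Real print workflows rely on ICC color profiles rather than this direct formula, so treat it only as an illustration of the subtractive idea:

```python
def rgb_to_cmyk(r, g, b):
    """Naive conversion from 0-255 RGB to CMYK fractions (0-1)."""
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    if k == 1:                      # pure black: only the key plate prints
        return (0.0, 0.0, 0.0, 1.0)
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return (c, m, y, k)

print(rgb_to_cmyk(255, 0, 0))   # (0.0, 1.0, 1.0, 0.0) -> red = magenta + yellow
print(rgb_to_cmyk(0, 0, 0))     # (0.0, 0.0, 0.0, 1.0) -> black uses only key
```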
Because XR uses varying display technology, you won’t ordinarily use the subtractive process. The exceptions would be when a printed element in the physical world is used in the experience. For example, you would use CMYK color mode to create elements such as image targets, which involve a camera scanning a printed image and then applying augmented content to it. All of your digital content, however, should be prepared and optimized for the screen using RGB.
Spend some time observing the light around you and, even more importantly, shadows. The relationship between an area of light and the shadow produced as the light falls on objects in the space can tell you a lot about the light itself. These areas of light and shadow have a direct effect on the perception of the color around them. The areas where the light hits will be lighter, and that color will gradually get darker where the shadow deepens. A great place to observe this is on a colored surface such as a painted wall. As light hits part of the wall surface, you will see a different tint of the wall color. Where there is a transition into a shaded area, the darker color produced is a shade of the wall color.
Tint The increased lightness of color by the addition of white.
Shade The increased darkness of a color by the addition of black.
In the real world, light is linear: the gradation from a bright area to a shadow progresses in a linear ramp. The intensity of the light also affects the intensity, or ramp, of the transition from light to dark. As you create digital graphics, if you build similar linear color spaces and keep the shading mathematical, the steps you perceive may not match the numeric values. As you can see in FIGURE 11.2, when the darkness of a color increases at a consistent incremental rate, you may not see a correspondingly consistent darkening in the darker range. The top row shows incremental increases by 10%: from 30% to 40, 50, and 60%. When more contrast is added to the darker shades, however, you can perceive more of an increase because of the higher range in contrast. In the bottom row the color is darkened by varying but steadily increasing percentages, from 30% to 40, 60, and 100%, and you can more clearly see the color steadily darkening.
Josef Albers studied this phenomenon in depth. The result was the discovery that the human eye can distinguish between darker shades better than lighter shades. So, when creating digital images, there is a need for more accuracy and variety in the dark tones. To accommodate this sensitivity in the way the brain perceives shades, gamma correction, also referred to as tone mapping, was created.
Once an image or graphic has been gamma corrected, it should, in theory, display “correctly” for the human eye. However, this is not how light behaves in the real world. To replicate real-world light in a mathematically correct way, the linear color space was created to match our physical space.
Gamma correction A process that increases the contrast of an image in a nonlinear way to adjust for the human eye’s perception and the way displays function.
Linear color space Numeric color intensity values that are mathematically proportionate.
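The two spaces are related by the sRGB transfer curve. A small sketch (using the standard constants from the sRGB specification) shows why evenly spaced linear values do not land on evenly spaced display values:

```python
def linear_to_srgb(c):
    """Gamma-encode a linear light value (0-1) with the sRGB transfer curve."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_linear(c):
    """Decode an sRGB value (0-1) back to linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Linear 50% light is NOT displayed as sRGB 0.5 -- it encodes to ~0.735,
# which is why mathematically even steps do not look perceptually even.
print(round(linear_to_srgb(0.5), 3))   # 0.735
print(round(srgb_to_linear(0.5), 3))   # 0.214
```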
In the past, gamma color spaces were the standard to work in, but working in linear color space gives more accurate and realistic rendering. So, many XR and game designers prefer the linear color space to give their work that realistic feel. It has also become a standard within software focused on immersive experiences, such as Unity and Unreal Engine.
Remember that you want to match your colors with the device to make sure they display as you intend. This means that you have to check the platform for what color space it supports. Some HMDs support linear only, while others support gamma only. Some allow a combination: linear color with some gamma corrections. It is best to select the color space at the start of your creation process. Changing between them midway will interfere with the appearance of your lighting and textures, producing a less than desirable effect. So, if you have to change the color mode during the design process, you will have to update your lighting and texture properties as well.
Once you have the correct color space set up, the next consideration is selecting colors that will make the experience usable. As we discussed, usability plays an important role in our hierarchy of needs. If someone finds an experience too difficult, they will determine that it doesn’t fit their needs and go find something else that will.
Choosing a color that aids usability relies on a number of factors:
Legibility and readability
Legibility and readability refer not only to the color of type (text), but also to the color of the elements surrounding the text. To ensure that type is easily read, you can use a shape as a color background that helps separate the letters from the environmental background. The color of this shape and of the type should have enough contrast to make sure the type is legible. Red text on a black background is hard to read because both are dark. Select colors that have varying shades, so you don’t have a dark color on a dark color; instead, you want light on dark or dark on light.
White is the most common color for text and icons in XR.
When you have two colors that are close in shade or even saturation, they will start to vibrate off one another (FIGURE 11.3). To avoid this effect, select colors that have visual contrast. This often means opposite qualities such as light and dark or saturated and desaturated.
Contrast is essential for keeping your experience accessible. Making sure your color choices have solid contrast will make the experience usable for a greater number of users. This approach is more likely to suit a user’s unique needs, even if those needs change based on their environment.
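One concrete way to check contrast is the ratio defined in the WCAG accessibility guidelines, which XR UI work can borrow directly. A minimal implementation:

```python
def relative_luminance(rgb):
    """WCAG relative luminance of a 0-255 sRGB color."""
    def lin(c):
        c /= 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black on white)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # 21.0
# Dark red on black: both are dark, so the ratio fails the 4.5:1 AA threshold.
print(round(contrast_ratio((139, 0, 0), (0, 0, 0)), 1))      # 2.1
```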
In addition to contrast, it is also important not to rely on color alone as a visual indicator, although a color change can be helpful. If you have a UI element that only changes color to provide feedback to the user, this will be limiting to those who are color blind. So, in addition to the color shift, be sure to add another visual change for that feedback to be more inclusive.
Remember, each time someone uses your AR experience, the background and available light for the experience may vary. This means that it is even more important to provide contrast in your elements so they can stand out in both bright and dark environments. Sometimes gradients can be used to help provide separation of the foreground and background. However, this is recommended in mobile AR only, where the user is using the camera to see and augment the scene. You want to use gradients only when absolutely necessary as they can cause banding in the view on some HMDs.
If you have to use a gradient, use blue instead of black as that helps reduce some of the visible banding that occurs.
A color in its purest form is called chroma. This is when the color is fully saturated, without the addition of gray. These pure colors are both bright and vibrant. Vibrancy increases the brightness of the desaturated tones (FIGURE 11.4).
Vibrancy The energy of a color caused by increasing or decreasing the saturation of the least saturated tones.
Vibrancy can also change the energy of the color and, as a result, the overall experience. Bright oranges and reds will grab your attention over desaturated greens or grays.
To create a positive user experience, you want the user to be comfortable. If the colors you select are too intense or create too much strain, then this will cause discomfort. If a user is met with too much discomfort, they will likely leave the experience to find a different one that is more comfortable. Larger areas of color in XR, especially vibrant and fully saturated colors, will be hard on the eyes. So, use these brighter colors sparingly to attract attention, but don’t use them in large quantities.
The best way to evaluate the comfort of the experience is to have users test the experience, and even test with different color combinations to see what works best for most people. Because colors will change in appearance between the computer you create them on and the actual device that plays the XR experience, it is important to test your designs. View the colors in context, and then make adjustments to improve the ease of use.
Color will be displayed differently based on the kind of display you use. An optical see-through (OST) display, such as the Microsoft HoloLens 2, AR glasses, or smartglasses, will show all elements as more transparent, due to the nature of the technology. It’s essential that users be able to see the world around them, so all the digital content will be slightly transparent to enable users to see through it. Even today, with technology constantly improving, this issue is something to keep an eye on, literally. Video see-through (VST) displays, such as mobile AR experiences that use the camera to view the physical world, have different considerations. Because any graphics or objects are applied directly on top of the camera view in a VST-based experience, they can be displayed fully opaque.
With some current OST displays, remember that black will also be transparent and won’t serve as a strong separation layer unless the environment is bright. Although this may continue to change, it is an example of how transparency is built into some kinds of technology. If you are able to determine the amount of transparency in a 3D model or object, however, then you can reserve opaque colors for UI elements (such as buttons and interactive features) so that they stand out on the display. Making the UI easy to see and interact with is a high priority.
The perception of color is directly connected to the light in the scene, so to ensure that users see the colors that you select for the design, you need to design the lighting as well.
“The only thing a camera sees is light.” These are the words of master portraitist Gregory Heisler, my friend and colleague. The truth behind these words shows how important lighting is to capturing a moment. The same is true as you use cameras in 3D programs and as you integrate digital elements into the physical world. Photographers, such as Heisler, have dedicated their careers to becoming masters of light. If you were to work on a film, there would be professionals whose entire role is dedicated to lighting design. It is a large topic to say the least, but as an XR designer you don’t have to know everything a lighting specialist would. Instead of overwhelming you with esoteric details, let’s break down just the parts that are most important as you design your lighting for 3D.
Adjusting light in a scene or on an object does not simply mean brightening or darkening; it is the secret to making an object appear as though it belongs in a scene. Believable immersion relies on the use of light and its accompanying shadow.
With the exception of some stylistic deviations, you will want your lighting to mimic the real world. It makes sense then to be inspired by light from your physical space. This should not come as a surprise by this point, as this idea of the real inspiring the digital is a common theme that continues to surface. If an experience will be happening outside, then it makes sense to add light mimicking the sun into your digital scene. However, there is a lot to consider when it comes to designing a light setup. You have to consider:
Type of light
Color of light
Direction and distance of light
Intensity of light
Each one of these considerations has multiple components, so we will examine each separately.
Think about lighting design as you would think about determining the colors of a composition: Identify the key areas that you would like to have the most attention. The brightest and most vibrant colors will attract attention first. Choosing which areas of a scene or model to highlight will impact the overall visual hierarchy, so each light you add should be thoughtful and with purpose. To assist you in achieving the lighting you desire, let’s consider the different kinds of lighting commonly found in 3D modeling software (FIGURE 11.5). You may see different names depending on the program you are working in (just to add confusion), but these are the most commonly used names.
POINT LIGHT A point light will emit light in all directions from a single point. This light has a specific location and shines light equally in all directions, regardless of orientation or rotation. Examples are lightbulbs and candles.
SPOT LIGHT A spot light works just like a spotlight used in stage design. It emits light in a single direction, and you can move the direction of the light as needed. An example is a stage spot light for a soloist.
A spot light emits light in a cone shape, which you can customize. For instance, you can control the cone angle, which determines how wide the light reaches, and the cone feather, which controls the softness of the edge of the light. The higher the angle, the bigger the circle will be, and the smaller the angle, the smaller it will be. A feather of 0% will produce a hard-edge line, and a feather of 100% will gradually fade out the light’s edge.
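The cone-angle and feather behavior can be sketched as a function of the angle between the light's axis and the surface point. The parameter names here are illustrative, not any engine's actual API; engines such as Unity instead expose inner and outer spot angles:

```python
def spot_intensity(angle_deg, cone_angle_deg, feather):
    """Spot light intensity at angle_deg degrees off the light's axis.

    cone_angle_deg: half-angle of the fully lit inner cone.
    feather: 0.0 is a hard edge; larger values widen the soft fade-out band.
    """
    inner = cone_angle_deg
    outer = cone_angle_deg * (1 + feather)
    if angle_deg <= inner:
        return 1.0
    if angle_deg >= outer:
        return 0.0
    # Smoothstep across the feathered band for a gradual edge.
    t = (outer - angle_deg) / (outer - inner)
    return t * t * (3 - 2 * t)

print(spot_intensity(10, 30, 0.5))    # 1.0 -- fully inside the cone
print(spot_intensity(37.5, 30, 0.5))  # 0.5 -- halfway through the feathered edge
print(spot_intensity(30, 30, 0.0))    # 1.0 -- a 0% feather gives a hard edge here
print(spot_intensity(31, 30, 0.0))    # 0.0 -- ...and nothing just past it
```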
AREA LIGHT This light source is confined within a single object, often a geometric shape such as a rectangle or sphere. Examples are a rectangular fluorescent light and a softbox light.
DIRECTIONAL OR PARALLEL LIGHT Parallel rays that mimic the sun; these lights are treated as infinitely distant, just as sunlight is. This means that the position of these lights doesn’t matter, only their direction and brightness. An obvious example is sunlight.
AMBIENT LIGHT Ambient light applies to the full scene. You cannot choose a specific location for this light, and it will change the overall brightness of the scene. An example is natural, indirect light from a window.
If you have ever gone lightbulb shopping or bought Christmas lights, then you’ve seen how many different colors of light there are. Even if you want just “plain white” light, you are greeted with a multitude of options. The reason is that no light is pure white. Light is made up of three colors: red, green, and blue. Mixing these colors in different proportions alters the color of the light we see, thanks to the additive property we discussed earlier.
Light has a color temperature; it can be warm or cool depending on the proportional mix of colors. These light temperatures are measured using the Kelvin scale. As you can see in FIGURE 11.6, 2700K is a warmer, yellower white; 7000K is a cooler, bluer white; daylight is 6400K. The color of the light you choose can have a large impact on the overall appearance of an object, possibly more than any materials you add to it.
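To build an intuition for how Kelvin maps to RGB, here is a commonly shared curve-fit approximation (attributed to Tanner Helland). It is an illustration for picking plausible light colors, not a colorimetric standard:

```python
import math

def kelvin_to_rgb(kelvin):
    """Approximate 0-255 RGB for a color temperature.

    Tanner Helland's curve fit -- an illustration, not a standard.
    """
    t = kelvin / 100

    if t <= 66:
        r = 255
        g = 99.4708025861 * math.log(t) - 161.1195681661
    else:
        r = 329.698727446 * (t - 60) ** -0.1332047592
        g = 288.1221695283 * (t - 60) ** -0.0755148492

    if t >= 66:
        b = 255
    elif t <= 19:
        b = 0
    else:
        b = 138.5177312231 * math.log(t - 10) - 305.0447927307

    clamp = lambda x: int(max(0, min(255, x)))
    return (clamp(r), clamp(g), clamp(b))

print(kelvin_to_rgb(2700))   # warm: red dominates, blue is low
print(kelvin_to_rgb(6600))   # near-white
print(kelvin_to_rgb(10000))  # cool: blue dominates
```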
It is quite likely that you will need more than one light in your scene, just as you would in the real world. You can have window light and a table lamp in the same space, for example. As you add lights to the scene, you need to control their relationships. To do so effectively, give each light you add a role, based on the kind of light it is. As you start lighting design, it is recommended that you begin with some basic, commonly used lighting setups. Then, as you gain a deeper understanding of how lights relate and work together, you can create your own custom lighting setups to really play with the overall emotion and mood of the scene or object.
Create your lighting design before you add materials and textures so that you can see what the light looks like on the gray. This practice is especially helpful while you are learning these techniques.
Soft lighting is the best choice if you need to add evenly distributed lighting to your scene (FIGURE 11.7). The name actually refers to the soft quality of shadows in the scene, making the overall contrast feel balanced and calm. This kind of lighting is frequently used for portrait photography. Soft lighting requires attention to the size of the light source, which is typically larger than the object you are lighting, and the positioning of the light, which is typically fairly distant.
The one-point lighting technique uses a single light and, as a result, will create a dynamic mood. It also creates harsher shadows where the light is not illuminating the object (FIGURE 11.8).
The three-point lighting technique uses three lights—key, rim, and fill—each of which has a specific role in the overall lighting setup (FIGURE 11.9).
Key light illuminates the focal point of the scene or object and is the primary light in the scene.
Rim light illuminates the back of your subject, separating it from the background and adding depth.
Fill light fills in more light in the scene to reduce or eliminate harsh shadows and even out the overall lighting.
The three-point lighting setup is used as a default setup in photography, film, television, and even 3D modeling.
In the sunlight approach there is a single light source: the sun (FIGURE 11.10). If you are looking to replicate an outdoor scene, then you should use direct sunlight as your lighting. This will result in harsher shadows, just as the bright sun creates. Unlike in the real world, however, you can easily move the direction of the sun in a 3D scene to mimic the type of sunlight you prefer: sunrise, high noon, sunset, or something in between.
A primary light source behind your object is a backlight (FIGURE 11.11). This technique is not as commonly used, but it can add mystery and drama to the scene as needed. This lighting can also cause harsh shadows and a lot of contrast between the light and the object, often creating a silhouette and reducing the number of details seen.
The environmental lighting approach pulls lighting from an image that is imported into the program (FIGURE 11.12). This works best with high-dynamic-range imagery (HDRI), in which the luminosity data of the image, specifically the darkest and lightest tones, is captured across a larger range. This basically means that more lighting data is stored within the image file (32 bits per channel, versus the standard 8). These images can be used to replicate their lighting in the 3D scene. Using environmental lighting is a fast way to generate a custom and believable lighting setup.
As you add images into your environment, pay attention to your reflections, as they will reflect light that you may not anticipate. This is called specular light.
The relationship between the light and the shadow provides a lot of information, which is why it is important to control the look and feel of that transition. There are a few different ways to control the intensity of the light-to-shadow edge. As a light moves farther away, it gets weaker, and as the light weakens, so too does the shadow. This weakening of a light along its outer edge is called falloff. The falloff has a radius and a distance, and you can control both. Lights with a smooth falloff have a high radius and a large distance, showing a gradient blur that slowly goes from light to dark. A harsh falloff, in contrast, has a sharp transition where the light stops at the bounds of its clear and focused area.
Falloff The visual relationship of shadow and light as illumination decreases while becoming more distant from the light source.
The edge of the light can be controlled through edge or cone feathering to soften the line between the light and the shadow. This is how you can edit and control the edge itself. This option is often available for any lighting that is a cone shape, such as a spot light.
Feathering The smoothing, softening, or blurring of an edge in computer graphics.
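Both ideas, distance falloff and edge feathering, come down to shaping an intensity curve. A toy model (the parameters are illustrative, not any engine's actual API):

```python
def falloff(distance, radius, reach, harshness=1.0):
    """Light intensity by distance: full inside radius, zero beyond reach.

    harshness=1.0 ramps down smoothly across the whole band; higher values
    make the intensity collapse quickly past the lit core -- a harsh falloff.
    """
    if distance <= radius:
        return 1.0
    if distance >= reach:
        return 0.0
    t = (reach - distance) / (reach - radius)  # 1.0 at radius, 0.0 at reach
    return t ** harshness

# Smooth falloff: a gentle gradient from light to dark.
print(falloff(5, 2, 10))                # 0.625
# Harsh falloff: the same point is nearly dark already.
print(round(falloff(5, 2, 10, 8), 3))   # 0.023
```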
Once you have the kinds of lights, their position, their roles in the scene, and their color properties identified, the next step is to determine how bright the light should be. This is the intensity. The default is 100% (the highest brightness), but this amount can be edited to make the light dimmer. The strength of the light can also be called energy.
You can individually change the intensity of the lights in the scene, so one light can be brighter than another to better customize the lighting setup for your experience.
Wherever there is a light, there must be an accompanying shadow, where there is light falloff or light is blocked by another object. Without a shadow, the light will not be perceived as real and won’t be believable. Shadows also play a big part in our ability to perceive where an object is in space. Seeing a shadow far away from an object tells us that the object is suspended in the air or not near the plane. A shadow that connects to the bottom of the object tells us that the object is sitting directly on the plane. We can also tell a lot about the kind of light in the scene based on the property of the shadow. For example, natural sunlight casts stronger shadows than artificial light.
The terms soft light and hard light actually reference the characteristic of the shadows the types of light create. Soft lighting provides a more even light across all of the subject and, in turn, creates soft shadows with a fuzzy edge. Conversely, hard lighting provides more dramatic lighting on an object, creating sharp edges on shadows.
Within 3D software, you can control the appearance of the shadows (FIGURE 11.13), often using such properties as shadow darkness and shadow diffusion. The darkness property is (just as it sounds) how dark the shadow is. The diffusion, also referred to as feathering, is how soft or sharp the edge of the shadow is. If there is high diffusion, then the edge will be soft and fuzzy. If there is low diffusion, then the edge of the shadow will be sharp and crisp.
The larger your light source in relation to your subject, the softer the lighting will appear. The smaller your light source in relation to your subject, the harder the lighting will appear.
Now that you have an understanding of some of these lighting concepts to use in your digital spaces, we can explore how that lighting can adapt to the changing light within physical spaces.
From a young age we learn the art of imitation. With songs like “Itsy Bitsy Spider” and games such as Follow the Leader, we are introduced to this idea of the copycat. This concept allows you to learn and adapt to new interactions by imitating what someone else is doing—learning as you go along. This simple concept can be applied to a larger scale, as we look at imitation in AR.
How can we create realistic immersion with our digital objects? By copying the real world. With dynamic backgrounds and environments, the light and the properties of the light will constantly change. Just as a child sees a hand movement and repeats the action on their own, so too can software, such as Google’s ARCore and Apple’s ARKit framework, evaluate environmental light and repeat it as digital light. The basic method used is called lighting estimation. Using sensors, cameras, and algorithms, the computer creates a picture of the lighting found within a user’s physical space and then generates similar lighting and shadows for digital objects added to the space. For this to be effective and realistic, this analysis should be continual throughout the experience so it can adapt to changes in the lighting and within the environment. This is a key attribute in the ARCore and ARKit frameworks.
Lighting estimation A process that uses sensors, cameras, machine learning, and mathematics to provide data dynamically on lighting properties within a scene.
When using this lighting estimation method, the computer and AR development framework work together to analyze the:
Pixel intensity
Color correction values
Main light direction
Using each pixel of the camera image, the average lighting intensity can be calculated and then applied to all digital objects. This is called pixel intensity, and it adjusts the overall brightness based on the average available light in the environment.
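A sketch of the idea: average a perceptual luminance over every pixel of a frame. A tiny hand-built frame stands in for real camera data here:

```python
def average_intensity(frame):
    """Mean perceptual luminance (0-1) over a frame of 0-255 RGB pixels."""
    total = 0
    for row in frame:
        for r, g, b in row:
            # Rec. 709 luma weights: green contributes most to brightness.
            total += (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255
    return total / (len(frame) * len(frame[0]))

# A tiny 2x2 "camera frame": two white pixels, two black pixels.
frame = [
    [(255, 255, 255), (0, 0, 0)],
    [(0, 0, 0), (255, 255, 255)],
]
print(round(average_intensity(frame), 3))   # 0.5
```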
The white balance can be detected and checked dynamically to allow color correction of any digital objects within the scene to react to the color of the light. Continually checking and adjusting the color balance allows changes to occur smoothly and naturally rather than as abrupt adjustments. This adds to the illusion of realism.
If you have any luminance properties applied to your 3D model, it will still maintain those color properties, but it will also receive the color correction from the light estimation scan.
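One classic heuristic for estimating a scene's color cast is the gray-world assumption: on average, a scene is neutral gray, so per-channel deviations from that average reveal the cast. This is not necessarily what ARCore or ARKit implement internally, just a simple illustration of where color correction values can come from:

```python
def gray_world_balance(pixels):
    """Estimate color-correction gains by assuming the scene averages to gray.

    Returns per-channel multipliers that would neutralize the color cast;
    an AR framework applies the inverse idea, tinting digital objects to
    match the environment's cast instead of removing it.
    """
    n = len(pixels)
    avg = [sum(p[i] for p in pixels) / n for i in range(3)]
    gray = sum(avg) / 3
    return tuple(gray / a for a in avg)

# A warm (orange-cast) scene: the red average is high, the blue average low.
warm_scene = [(220, 180, 120), (200, 160, 100), (240, 200, 140)]
gains = gray_world_balance(warm_scene)
print([round(g, 2) for g in gains])   # [0.79, 0.96, 1.44]
```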
By identifying the main directional light, the software ensures that digital objects added to the scene will have shadows cast in the same direction as other objects around them. It also enables specular highlights and reflections to be correctly positioned on the object to match the environment. To see this at work, consider FIGURE 11.14. First, look at the shadows. When you are designing the lighting in a digital environment, you want to make sure that all the shadows and highlights follow consistently from the single directional light. Having this consistent direction of light may seem minor, but it is something that the brain perceives without us even realizing it. We may sense that the lighting doesn’t feel right, even if we can’t place exactly why. This is what you want to avoid. After the shadows, the next things to look at are the intensity of the light and the falloff of those shadows. You don’t want the intensity of the light to feel too bright for the scene, or, conversely, too dark.
Having learned how multiple lights can work together to create a full lighting setup, it is important to pay attention to what other light sources are in the scene. As an important part of the light estimation scan, ARCore can re-create what Google calls “ambient probes,” which add an ambient light to the full scene coming from a broad direction to create a softer overall tone. This can add an overall lighting adjustment to the whole object to match any ambient light within the physical space. The benefit of this added ambient lighting is that it works with the directional light to help the digital objects blend more seamlessly into the scene. Again, it is about replicating or imitating the real-world scene.
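How an ambient term softens a directional light can be sketched per color channel with a minimal Lambert-style model (the function and parameter names are illustrative, not ARCore’s API):

```python
def shade(base_color, normal_dot_light, directional, ambient):
    """Combine a directional term (scaled by the Lambert factor N·L)
    with a broad ambient term, per channel; a minimal sketch of how an
    ambient probe softens the directional light's contribution."""
    ndl = max(0.0, normal_dot_light)
    return tuple(min(1.0, c * (directional * ndl + ambient))
                 for c in base_color)

# A surface facing away from the main light (N·L <= 0) still receives
# the ambient contribution, so it is not rendered pure black.
print(shade((1.0, 0.5, 0.25), -0.3, directional=0.8, ambient=0.2))
# → (0.2, 0.1, 0.05)
```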
Once the object successfully reflects the environmental light based on the lighting estimation, the next step is to have those lights be responsive to the physical space. Every time you add a computer-generated light, it will produce a generated shadow. Those shadows need to fall into the physical space to make them believable. To do so, two things need to happen.
When you add an ambient light, it should both cast shadows on the object itself and have those shadows occlude the objects around it.
When the light hits the object itself, such as on a piece of fabric, each wrinkle should show a shadow.
Something like a brick wall should have shadows created inside every groove. Ambient light will hit multiple surfaces, and each one will create its own shadow. Once those shadows are cast, they should react to the environment. If you add a glass on a table, the ambient light will cast an interesting shadow along the table. If there is another object on the table, such as a plate, the plate should pick up some of that shadow as well. To be believable, these shadows should fall not just on the anchored plane, but also on the other objects around them, just as would happen in the real world. This shadow casting is called ambient occlusion.
Ambient occlusion Simulation of shadows both on an object itself and also on the other objects around it created by the addition of an ambient light source.
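In shading terms, ambient occlusion is often approximated as a simple scale factor on the ambient light; here is a minimal sketch (the function name and the clamping are illustrative assumptions):

```python
def occluded_ambient(ambient, occlusion):
    """Scale the ambient light term by an occlusion factor in 0.0-1.0,
    where 0.0 means fully blocked by nearby geometry (a crevice or
    groove) and 1.0 means fully open to the environment."""
    return ambient * max(0.0, min(1.0, occlusion))

# The inside of a brick wall's groove receives far less ambient light
# than its open face, which is what reads as an ambient-occlusion shadow.
print(occluded_ambient(0.4, 1.0), round(occluded_ambient(0.4, 0.2), 3))
# → 0.4 0.08
```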
Take a look at the reflections in FIGURE 11.15, and pay attention to everywhere you see environmental reflections, or places where pieces of the space are reflected. Depending on the material of the objects, the relative reflectiveness will change. When you add a digital object to a scene, especially an object that has a metallic or glass surface, it should respond to the light around it in the form of a reflection. For these virtual objects, the reflections have to happen in real time and adjust according to the space to lend realism and believability to the objects.
When creating your 3D objects, you can adjust several properties to affect how reflective an object is.
Each material you apply to your 3D object has a base color or texture. Adjusting an object’s diffusion property affects the amount and color of light that is reflected at each point of the object. The diffusion stays consistent as you look around the object; it is a property applied equally along the material’s surface. Because this is an even distribution of light, it results in a nonreflective surface. In 3D software, the default diffusion color is white unless you change it.
Diffusion Even distribution of light across an object’s surface.
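Lambertian diffusion can be sketched directly from that definition: the reflected light depends only on the base color and the angle between the surface normal and the light, never on the viewer (the function name is illustrative, and vectors are assumed to be unit length):

```python
def diffuse(base_color, normal, light_dir):
    """Lambertian diffusion: light reflected at a point depends only on
    the base color and the angle between the surface normal and the
    light direction; the viewer's position never enters, so the surface
    looks the same (nonreflective) from every direction."""
    ndl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * ndl for c in base_color)

# A surface lit head-on reflects its full base color; the result is
# identical no matter where the viewer stands.
print(diffuse((0.8, 0.6, 0.4), (0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))
# → (0.8, 0.6, 0.4)
```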
The way that light reacts to a surface has a lot to do with the properties of that surface. If the surface is smooth and shiny, like a car’s chrome bumper, it will be highly reflective. But if the surface has tiny bumps and cracks, like a rock or brick, it will be less reflective. This roughness property changes how matte or shiny an object appears. Increasing the roughness and using brighter colors will diffuse the light across the surface more, making it appear matte or rough. Reducing the roughness, in addition to using darker colors, will cause the material to appear smooth and shiny.
Materials that are shiny will also create specular highlights. These are the small, bright areas on an object’s surface that directly reflect a light source. These specular highlights should change relative to the position of a viewer in a scene, because they are created by the positions of both the light and the viewer.
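A Blinn-Phong-style sketch shows both behaviors: the highlight follows the half vector between the light and the viewer, so it moves as the viewer moves, and roughness spreads it out (the roughness-to-shininess mapping is a hypothetical choice, and all vectors are assumed unit length):

```python
def specular_highlight(normal, light_dir, view_dir, roughness):
    """Blinn-Phong-style specular term: brightness peaks where the
    surface normal aligns with the half vector between the light and
    view directions, so the highlight is view-dependent. Higher
    roughness lowers the shininess exponent, spreading the highlight."""
    # Half vector between the light and view directions, normalized.
    h = [l + v for l, v in zip(light_dir, view_dir)]
    norm = sum(c * c for c in h) ** 0.5
    h = [c / norm for c in h]
    ndh = max(0.0, sum(n * c for n, c in zip(normal, h)))
    # Rough surfaces -> broad, dim highlight; smooth -> tight, bright.
    shininess = max(1.0, (1.0 - roughness) * 128.0)
    return ndh ** shininess

up = (0.0, 1.0, 0.0)
# A viewer aligned with the mirror direction sees the full highlight...
strong = specular_highlight(up, (0.0, 1.0, 0.0), (0.0, 1.0, 0.0), 0.1)
# ...while a viewer off to the side sees far less of it.
weak = specular_highlight(up, (0.0, 1.0, 0.0), (0.6, 0.8, 0.0), 0.1)
print(strong > weak)  # → True
```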
For the physical surface of an object, you can set multiple properties to determine how metallic or nonmetallic it is. The index of refraction controls the ability of light to travel through the material. Light that cannot travel through an object reflects back, and more metallic surfaces produce sharper reflections. The grazing-angle reflectance makes the surface appear more or less mirror-like. If the surface reflects light sharply and has a mirror-like quality, it will appear more metallic. These properties can be adjusted to lower or raise the metalness and change the appearance of an object’s surface. Making the surface more metallic and mirror-like increases the need for environmental reflections on it. Reflective surfaces also pick up colors and reflect images, so a metallic object placed in a green room will also have a green tone.
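The grazing-angle behavior is commonly modeled with Schlick’s approximation, and metalness as a blend of the base reflectance; here is a sketch under those assumptions (the 0.04 dielectric value is a common convention, not a universal constant, and both function names are illustrative):

```python
def fresnel_schlick(cos_theta, f0):
    """Schlick's approximation: reflectance rises toward 1.0 at grazing
    angles (cos_theta near 0.0) from the base reflectance f0 seen at
    normal incidence (cos_theta of 1.0)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def base_reflectance(metalness, dielectric_f0=0.04):
    """Blend from a typical dielectric base reflectance toward a fully
    reflective metal as metalness goes from 0.0 to 1.0."""
    return dielectric_f0 + (1.0 - dielectric_f0) * metalness

# Viewed head-on, a metal reflects far more than a dielectric; at a
# grazing angle even the dielectric becomes nearly mirror-like.
print(round(fresnel_schlick(1.0, base_reflectance(0.0)), 3),
      round(fresnel_schlick(1.0, base_reflectance(1.0)), 3),
      round(fresnel_schlick(0.05, base_reflectance(0.0)), 3))
# → 0.04 1.0 0.783
```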
Light and color work together to create a sense of depth and realism. As you create and design digital objects, they should be reflective of the environment around them. This process starts with selecting the appropriate color appearance mode for your experience, works through adding and adjusting any custom lighting options, and should come to life by adapting to the physical spaces that the object augments.