The Spotlight Tunnel Sample Scene and this tutorial aim to cover the fundamentals of setting up a good baseline for believable visuals. Given the project’s art content, we will explore how to make a scene in Unity that looks believable. Light, texture, scale, and material all need to work together to make the digital content look ‘right’. To keep things simple for content creators to learn, we’ll explore only Unity-specific features, with some general context for common Digital Content Creation software.
Before we start to explore rendering inside Unity, we first need to get our assets into a format suitable for our eventual intent. Setting up a proper workflow from your Digital Content Creation tool into Unity is critical. While we won’t go through the export steps in the multitude of DCC tools available, there are a few things that we need to consider:
Scale and units
Scale and your unit of measurement play a very important role in a believable scene. In many "real world" setups, it is recommended to assume 1 Unity unit = 1 meter (100cm) as many physics systems assume this unit size.
Unit translation DCC 3D software to Unity
To maintain consistency between DCC 3D software and Unity, it’s always a good idea to validate imported object scale and size. DCC 3D software such as 3ds Max, Maya, Blender and Houdini have unit settings as well as scale settings in their FBX export configuration (consult each software’s manual for configuration). In general, setting these tools to work in cm and exporting FBX at automatic scale will match the scale on Unity import properly. However, it’s always reassuring to confirm the match when starting a project.
A quick test to validate your export settings would be to create a simple 1x1x1m cube in your DCC 3D software. This can then be imported into Unity. The default Unity Cube (GameObject > 3D Object > Cube) is 1x1x1m and is therefore a useful scale reference to compare with your imported model.
These cubes should look identical when scale is set to 1,1,1 in the Transform component within the Unity Inspector.
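As a sanity check on the numbers involved, here is a minimal sketch of the cm-to-meter conversion the FBX importer is expected to perform. The helper names are hypothetical, not a Unity API:

```python
# Hypothetical sanity check for DCC-to-Unity unit conversion.
# Unity assumes 1 unit = 1 meter; many DCC tools work internally in cm.

UNITY_UNITS_PER_METER = 1.0

def expected_import_scale(dcc_units_per_meter):
    """Scale factor the importer should apply so sizes match in Unity."""
    return UNITY_UNITS_PER_METER / dcc_units_per_meter

# A 1 m (100 cm) reference cube authored in a cm-based DCC tool:
cube_size_dcc = 100.0
size_in_unity = cube_size_dcc * expected_import_scale(100.0)  # should be 1.0
```

If the imported cube does not come out at a 1,1,1 transform scale, this conversion factor is the first thing to check in the export settings.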
Maya and 3ds Max can override your default unit depending on the last opened file.
Be aware that 3D software can display one unit in the workspace while using a different “internal unit” setting, which can cause some confusion.
The example shown is from 3ds Max.
Point of reference scale model
When blocking out a Scene with placeholders or sketch geometry, having a point of reference scale model can be helpful. Depending on what Scene you’re making, choose a point of reference scale model that is relatable for the Scene. In the Spotlight Tunnel Sample Scene’s case, a common park bench was chosen. This doesn’t mean the Scene has to have exactly the same proportions as real life; instead it keeps scale consistent between objects, even if the Scene is intended to have exaggerated proportions.
Texture output and channels
In the same way that Unity expects consistent unit translation between 3D software and Unity, the information inside a texture needs to contain the correct information to give a proper result when added to a material.
Example of presets configuration for Substance Painter to output textures to be used with Unity Standard Opaque material.
Texture assignment in Unity Standard Material:
$textureSet_Albedo: assigned to Albedo Slot.
$textureSet_MetallicAOGloss: assigned to Metallic and Occlusion, smoothness Source set to Metallic Alpha.
$textureSet_Normal: assigned to Normal Map Slot.
NOTE: Packing multiple channels into a single texture, such as Metallic, AO and Gloss, saves texture memory compared to exporting Ambient Occlusion (AO) as a separate texture. This is the typical way of working with the Unity Standard material.
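To illustrate the packing described above, here is a small, hypothetical sketch (plain Python, not a texturing-tool API) of how grayscale Metallic, AO and Gloss maps combine into RGBA texels, with smoothness in the alpha channel as the Standard material’s “Metallic Alpha” source expects:

```python
# Hypothetical channel-packing sketch: each input is a flat list of
# grayscale values in [0, 1]; the output is one RGBA tuple per texel.
def pack_maps(metallic_map, ao_map, gloss_map):
    """R = metallic, G = ambient occlusion, B = unused, A = smoothness."""
    return [(m, ao, 0.0, g) for m, ao, g in zip(metallic_map, ao_map, gloss_map)]

# Two texels: one glossy bare metal, one rough dielectric.
packed = pack_maps([1.0, 0.0], [0.75, 1.0], [0.6, 0.2])
```

Three grayscale maps become one RGBA texture, which is why this layout costs no more memory than a single color map.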
Texture authoring software such as Photoshop and Substance Painter will output consistent and predictable textures with proper configuration. There are, however, some cases where content creators can get mixed up, mainly when dealing with the alpha channel.
The example below shows how "transparency" in a PNG can be confusing to author in Photoshop because of the way it natively handles PNG alpha without an external plugin. In this case an uncompressed 32-bit TGA with a dedicated alpha channel might be a better option, assuming source texture file size is not an issue.
The Photoshop-authored “transparent” PNG above has its alpha come through as a black value, while the TGA with a dedicated alpha channel shows the expected value. When each texture is assigned to a Standard material that reads the alpha channel as smoothness, the material with the PNG texture shows unexpectedly inverted smoothness, while the material with the TGA texture displays the expected result, as shown above.
Normal map direction
Unity reads tangent space normal maps with the following interpretation: Red channel X+ as "Right", Green channel Y+ as “Up” (OpenGL style).
For example, 3ds Max’s “Render to Texture” outputs normal maps with Green channel Y+ as “Down” by default. This inverts the surface direction along the Y axis and creates invalid results when lit.
To validate normal map direction, create a simple plane with a concave bevel (middle picture in the example above) and bake it to a flat plane. Then assign the baked normal map to a plane in Unity with an identifiable light direction and check whether either axis is inverted. Refer to your Digital Content Creation software’s manual for the proper axis settings.
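If a bake comes out DirectX-style (green Y+ as “Down”), the fix is simply inverting the green channel. A minimal sketch on hypothetical 8-bit RGB texel data:

```python
# Hypothetical texel data: 8-bit (r, g, b) tuples from a normal map.
def flip_normal_green(texels):
    """Invert the green channel, converting DirectX-style (Y-) maps
    to the OpenGL-style (Y+) orientation Unity expects."""
    return [(r, 255 - g, b) for (r, g, b) in texels]

directx_style = [(128, 200, 255), (128, 55, 255)]
opengl_style = flip_normal_green(directx_style)
# Flipping twice returns the original data.
```

Most bakers and texturing tools expose this as an “invert Y / flip green” export toggle, so a manual flip is rarely needed in practice.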
Preparing Unity Render Settings
The following advice focuses on achieving a believable visual target. Understanding how Unity’s rendering features can be used to realistically mimic the real world will enable you to quickly achieve your project’s believable visual goal. For more in-depth information, visit Unity's Introduction to Lighting and Rendering tutorial.
Linear rendering mode
In simple terms, this sets Unity to do lighting and shading calculations using physically accurate math before transforming the final output into the format that works best for monitors.
To specify a gamma or linear workflow, go to Edit > Project Settings > Player and open Player Settings. Then go to Other Settings > Rendering and change the Color Space to Linear.
Defining your color space should be one of the earliest decisions in your project because of the drastic impact on the final shading and lighting results. Unity has good documentation explaining each workflow.
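The gamma-versus-linear difference comes down to the sRGB transfer function. As a rough illustration (standard sRGB formulas, not Unity code), note how a half-bright screen value corresponds to far less than half the light energy in linear space:

```python
# Standard sRGB <-> linear transfer functions (IEC 61966-2-1).
def srgb_to_linear(c):
    """Convert one sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse transform, applied at display output."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# A 50% grey on screen is only about 21% of the light energy in linear
# space, which is why lighting math done directly on sRGB values is wrong.
mid_grey_linear = srgb_to_linear(0.5)
```

In Linear color space, Unity performs lighting on the linear values and applies the display transform at the end, instead of computing light on already gamma-encoded colors.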
In the Spotlight Tunnel Sample Scene, the Deferred rendering path is used. This allows content creators to work with multiple dynamic lights efficiently, combine multiple reflection cubemaps, and use the Screen Space Reflection feature available in Unity 2017+.
Set this in Graphics Settings > Rendering Path, or per camera in Camera > Rendering Path.
For more in-depth rendering path information, refer to the Unity documentation.
High Dynamic Range (HDR) Camera
When rendering believable lighting, much like in real life, content creators will be dealing with lighting values and emissive surfaces that have a brightness higher than 1 (high dynamic range). These values then need to be remapped to the proper screen range (tonemapping). This setting is crucial to allow the Unity camera to process these high values without clipping them. To enable it, select the Main Camera in the Scene and ensure that HDR is checked in the Inspector for the selected camera.
HDR Lightmap encoding (optional)
The "Spotlight Tunnel" Sample Scene didn’t use baked lighting; however, if you are planning to work with high-intensity (HDR) baked lighting, setting the lightmap encoding to HDR lightmap is recommended to make sure the baked light result is consistent. The option can be found under Edit > Project Settings > Player > Other Settings > Lightmap Encoding (Unity 2017.3+ only). Detailed information on lightmap encoding can be found here.
Tonemapper for your Scene
To display HDR lighting properly, a tonemapper needs to be enabled in the project. Make sure the Unity Post Processing package is installed in your project.
Setup in Post Process stack V1: (the version used in the “Spotlight Tunnel” scene.)
Create a Post Process Profile Asset in your project and configure it as such:
Enable Color Grading > Tonemapper > ACES (Academy Color Encoding System)
Enable Dithering. Dithering allows the Scene to alleviate the banding artifacts introduced by the 8 bit/channel output from an HDR Scene. Modern engines use this technique to circumvent the 16M-colour output limitation.
Leave the rest of the tonemapper settings alone for now.
Select the Main Camera and add a Post Processing Behaviour component.
Assign the previously created Post Process Profile to the Profile slot. For Post Processing Stack V2, please refer to the package readme, as it is currently in beta.
Enable Image effect for viewport
This lets you see the tonemapper at all times while working in the Scene view.
Notice the highlight rendition and the improved value separation in the dark tunnel in the tonemapped Scene. If you look at the non-tonemapped Scene, you can see how the highlights didn’t converge to a unified color (the yellowish burning sun in this case).
This setup essentially tries to replicate how a digital camera captures a Scene with a fixed exposure (without exposure adaptation / eye adaptation features enabled).
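As an illustration of what the ACES tonemapper is doing conceptually, here is Krzysztof Narkowicz’s widely used ACES filmic curve fit. This approximates the ACES look; it is not the exact curve the Post Processing stack evaluates:

```python
# Narkowicz's ACES filmic curve fit (approximation of the ACES look).
def aces_film(x):
    """Map a linear HDR luminance value to the displayable [0, 1] range."""
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    return max(0.0, min(1.0, (x * (a * x + b)) / (x * (c * x + d) + e)))

# Bright HDR values roll off smoothly toward 1.0 instead of clipping,
# which is what makes highlights "converge" in the tonemapped Scene.
```

Values above 1 compress toward white rather than clamping abruptly, which is why the tonemapped sun reads as a unified bright highlight.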
At this point, content creators have achieved proper foundational scene rendering setup that should give believable results with a wide range of content.
Before starting to create final assets and approaching lighting for a Scene, it is important to figure out your lighting strategy. At the start of a project, it is very easy for content creators - eager to start making cool things - to overlook this important step. Altering your lighting strategy late in development is a costly operation. Taking the time to get this right before you enter production will save time while giving better performance and a higher visual fidelity.
Much like anything in real life, there’s almost always a trade-off between the benefits and costs of one setup or another, just as a Formula 1 car isn’t well suited to everyday grocery shopping compared to its fuel-saving hybrid siblings. There are times, however, where certain technologies give you options to mitigate these trade-offs within specific constraints; knowing each feature and its trade-offs will allow you to choose what’s best for your project.
Going back to our lighting: a typical daytime Scene with outdoor areas can be broken down into three lighting components:
Hemisphere (Sky contribution).
Direct lights (Sun + Local lights).
Indirect lights (Bounced lighting).
This seems like three simple components. How you choose to mix and match real time lights, mixed lights, baked lights, static objects and dynamic objects ends up creating a diverse range of potential lighting options.
In Unity we cater to lots of different lighting strategies and project scenarios.
Find the documentation to understand lighting modes and setup here.
For newcomers, it can be overwhelming to figure out which setup to use and what the trade-offs are for each. Let’s distill this mass of information down to the most commonly used setups.
These are the five most commonly used lighting setups.
Visual notable differences between these options:
Basic realtime: the specular highlights from the light are visible, but there is no indirect lighting.
Baked: soft baked shadows and high-resolution static indirect lighting are visible, but there is no specular response from lights, and dynamically lit objects don’t cast shadows.
Mixed lighting: similar to Baked, but with specular response, and dynamically lit objects cast shadows.
Realtime light and GI: proper indirect lighting response and specular response are visible, lights are all movable and updatable, but there are no angular soft shadows.
Guns blazing (all options enabled): depending on the settings of each light, you can achieve a combination of all the above options.
The slideshow above showcases baked Ambient Occlusion where enabled. NOTE: Realtime GI can’t bake static ambient occlusion, hence it is not included.
Here are the general characteristics for each configuration:
Basic Realtime lighting + Ambient (with no Realtime GI or Baked GI).
Typical platform target: console and PC. Generally used in stylized visual projects and during the prototype phase.
All direct lights and shadows are real-time, therefore movable.
Fast iteration, since there is no precompute, baking, or mesh preparation.
Dynamic objects and Static objects are lit using the same method, no Light Probes required.
No hemisphere occlusion; areas not lit by direct lighting receive a flat skybox/ambient value and color.
Without a Global Illumination / indirect lighting component, the Scene might not give the best visual outcome.
All baked lighting + Light Probe.
Typical platform target: mobile platforms, VR, console and low-end PC. Generally used in games where runtime performance is an issue but there’s room in memory, such as top-down isometric mobile games and high-frame-rate VR games.
All lights are baked for static objects, producing ambient occlusion and indirect lighting.
Area lights and soft shadow angles can be baked onto statically lit objects.
Fastest in runtime performance among the options listed here.
Can slow down lighting iteration, since lights are baked and therefore require a rebuild on Scene changes if the Progressive Lightmapper isn’t used.
Dynamically lit objects are lit using Light Probes only.
No Specular highlights from light source, relying only on cubemap/reflection.
No shadowing from dynamic objects.
Can cost a lot of runtime memory depending on how many lightmap textures are used in the Scene.
Might require authoring texture coordinate channel 2 (UV2) for lightmaps.
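To get a feel for the memory cost mentioned above, here is a rough, hypothetical back-of-the-envelope estimate; actual costs depend heavily on the compression format Unity uses on your target platform:

```python
# Rough lightmap memory estimate. bytes_per_texel = 8 assumes an
# uncompressed RGBA half-float style HDR format; platform-compressed
# formats are considerably smaller.
def lightmap_memory_mb(resolution, count, bytes_per_texel=8):
    """Memory in MB for `count` square lightmaps of `resolution` texels."""
    return resolution * resolution * bytes_per_texel * count / (1024 * 1024)

# Four 2048x2048 lightmaps at 8 bytes/texel:
cost_mb = lightmap_memory_mb(2048, 4)  # 128 MB
```

Even with compression, lightmap sets for large scenes can dominate a memory budget, which is the trade-off behind this approach’s fast runtime lighting.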
Mixed lighting with Shadowmask + Light Probe.
Typical platform target: VR, console and PC. Generally used in the majority of console and PC games where time-of-day lighting, such as sun movement, is not important. Advantages:
Similar to all-baked lighting, but in mixed lighting dynamic objects get real-time specular and cast real-time shadows, while static objects get a baked shadowmask, resulting in better visual quality.
Objects can only receive shadowmask contributions from up to 4 overlapping lights.
Costs additional runtime performance for rendering the real-time lights.
Care needs to be taken in using mixed lights as they can drastically affect performance in certain setups.
The above list is an oversimplified description of shadowmask lighting. Complete information can be found here.
Realtime lighting with Realtime GI + Light Probe.
Typical platform target: console and PC. Generally used in open-area games where time-of-day lighting updates are required and dynamic lighting effects are part of the game design. Advantages:
Allows fast lighting iteration with real-time indirect lighting.
Dynamic and static objects get real time specular and shadows.
Can use less memory than baked lighting for the indirect lighting effect.
Fixed CPU cost per frame for updating GI.
Occlusion isn’t as detailed as in baked lighting and usually must be augmented by Screen Space Ambient Occlusion (SSAO) and per-object baked AO textures.
No area light / soft-angle shadows for static objects.
Care needs to be taken in using realtime lights as they can drastically affect performance in certain setups.
Precompute (Generate Lighting) can take a significant amount of time if there are too many objects contributing to static lighting, especially without an optimized UV setup. (Further optimization and/or projection fixes require authoring texture coordinate channel 3 (UV3) for Enlighten Realtime GI.)
In-depth information on optimizing Realtime GI can be found here.
Guns blazing: all options enabled.
Typical platform target: console and PC. Generally used for games with high-fidelity requirements and tightly controlled memory usage and performance limits. Best enabled when content creators completely understand each individual system and the implications of combining them.
This is the complete set of lighting features; it gives content creators all the available functionality.
Can potentially burn runtime performance and memory.
Increases the workflow burden (UV authoring and baking time).
For faster iteration when learning to light a Scene, responsive visual feedback is necessary. For this reason, the Spotlight Tunnel Sample Scene uses realtime lighting with Realtime GI. This gives us a nice range of specular response and good bounce lighting, and lets us change our lights on the fly.
One of the mistakes content creators often make is not planning ahead before modelling. It’s OK to do fast and loose modelling during pre-production or for roughing out a space, but the moment an asset needs to be somewhat finalized, a few things need to be thought out ahead of time. Here are a few things to keep in mind when modeling a proper Scene that might be overlooked even by seasoned content creators.
Make every polygon count. Much like everything else in project development, simplicity is your friend. Despite modern hardware being more capable than ever, simple geometry can go a long way in creating a Scene. Unnecessary tessellation and complex geometry can backfire: they are difficult to manage in a real-time setup, cost performance, and can burn memory unnecessarily.
In the example above, geometry that is never seen by players only ends up wasting resources such as lightmap space and overdraw, and at worst can cause light leakage.
Modeling rules based on the lighting mode of a geometry. If you are using baked lighting or realtime Global Illumination with Light Probes, you need to decide whether a model contributes to, or only receives, the indirect/baked lighting in the Scene.
For objects that you want to contribute to lighting
(Lightmap Static checked in the Inspector.) Simpler and smoother surfaces produce better indirect bounce/baked lighting because they use lightmap texture space efficiently. UV2 for the geometry might need to be authored for a light bake if the automatic lightmap UVs produce inefficient charts or generate undesirable seams. UV3 for the geometry might need to be authored for efficient results in Realtime GI. Sometimes in Realtime GI, a mesh’s UVs can be simplified to make the geometry use significantly fewer resources and produce the best result with fewer artifacts.
For objects that only receive lighting from dynamic lights and Light Probes, the geometry has no lightmap UV restrictions. It still needs special attention if it’s large, as it might not be lit properly by a single probe and might require a Light Probe Proxy Volume to stitch together the light definitions of multiple probes. Check the Unity manual’s LPPV guide for further information.
Just because an object isn’t moving doesn’t mean it has to be lit by a lightmap or contribute to Realtime GI. It is very easy to fall into the habit of marking all static objects as Lightmap Static. If an object is small, or it doesn’t have surfaces that will bounce much light, it probably doesn’t need to be included in the lightmap. The bench and railings below are a good example:
Model UV layout strategy. A good UV layout can improve visual quality for the same memory footprint when baking normal maps (typically UV1), baking lightmaps (UV2), and using real-time lightmaps (UV3), especially for geometry with non-tileable textures.
Here are few tips for UV layout:
For UV1 charts, split only as necessary and lay out the UV charts as efficiently as possible so you don’t waste texture space for normal map baking. To put it into perspective, a 1024 square texture uses the same amount of memory whether or not you fill it with details.
Above is an example of how the pieces occupy the whole texture space, avoiding wasted space.
For lightmaps (UV2), try to make each chart contiguous and unbroken to avoid seams from the light baker. Lightmap UV charts should not overlap, to avoid bleeding. Keeping a consistent scale between UV charts/shells is important for an even distribution of lightmap texels across your model.
Above is an exaggerated lightmap split on simple geometry that showcases lightmap seam issues.
For Realtime GI (UV3), prioritize UV space for the charts that represent big surfaces in your model, to reduce memory usage and avoid seams. In many cases, the automatic UV settings on the model can go a long way in optimizing the charts. In-depth chart optimization for Realtime GI can be found here.
For objects that don’t require lightmaps, don’t waste memory and time authoring those additional UVs unless a custom shader requires them.
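The “consistent texel scale” advice can be reasoned about numerically. A hypothetical sketch: at a fixed bake density, the texel budget of a chart should scale with the world-space area it covers:

```python
# Hypothetical texel-budget sketch: at a fixed bake density, texel count
# scales with world-space surface area, keeping lightmap detail even.
def lightmap_texels(world_area_m2, texels_per_meter):
    """Texels a chart needs for the given density (texels per meter)."""
    return world_area_m2 * texels_per_meter ** 2

wall = lightmap_texels(12.0, 10)       # a 4 m x 3 m wall at 10 texels/m
half_wall = lightmap_texels(6.0, 10)   # half the area, half the texels
```

A chart scaled differently from its neighbours breaks this proportionality, giving visibly mismatched lightmap detail across the model.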
Details in geometry. Real world objects are highly detailed. Identifying details to be placed in geometry vs the normal map and textures is a standard part of real-time geometry authoring. High polygon to Low polygon Normal map baking is the norm these days in developing assets for real-time Scenes.
One important detail that often gets missed is the way the edges of an object catch highlights. It is unusual to find a real-life object with perfectly sharp edges, with no bevel, or without any edge definition. Replicating this effect improves the believability of the Scene.
This gives a pleasing smooth flowing highlight on surfaces where it meets other geometry.
Smoothing groups (Hard/Soft edges of polygon). Efficiency of models and normal maps can be improved with proper smoothing groups.
When baking a normal map from a high-polygon to a low-polygon model, a simple smoothing group configuration is preferred over multiple faceted polygons. This is because tangent-space normal maps need to bend the hard splits in the low-poly geometry’s surface normals.
The above example showcases how the normal map does a great job of bending the low-poly normals smoothly to mimic the high-polygon mesh.
A smooth polygon with a good normal map also saves on vertex count, which means more efficient geometry to render. Here’s a simple example: an 18-triangle plane in a single smoothing group uses 16 vertices, compared to 36 vertices for the same plane with split smoothing groups.
A smooth polygon also saves on chart splitting in lightmap baking and Realtime GI, which in turn gives a smoother visual result.
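The vertex counts in the example above can be derived directly. The 18-triangle plane is a 3x3 grid of quads; with soft edges the quads share corner vertices, while hard edges force every quad to keep its own four:

```python
# The 18-triangle plane from the example is a 3x3 grid of quads.
def triangle_count(nx, ny):
    """Two triangles per quad in an nx-by-ny grid."""
    return nx * ny * 2

def smooth_vertex_count(nx, ny):
    """Shared vertices when every edge is soft (one smoothing group)."""
    return (nx + 1) * (ny + 1)

def hard_edge_vertex_count(nx, ny):
    """Every quad keeps its own four corners when every edge is hard,
    because split normals force vertices to be duplicated."""
    return nx * ny * 4
```

The GPU must duplicate a vertex whenever its normal (or UV) differs per face, so hard edges more than double the vertex count in this example.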
While this list doesn’t cover the complete process of modeling a 3D prop or Scene, hopefully it gives content creators a good idea of what to look out for.
Standard Shader/Material PBS and texturing
Materials define how light reacts with the surface of a model and are an essential ingredient in making believable visuals. Once a model is created, it is time to define its surface properties.
Physically Based Shading innovations have had a massive impact on real-time rendering. They have done more to let us achieve believable visuals than anything since the normal map came along. The Unity Standard Shader is a great Shader that allows content creators to create plausible materials easily. It is highly recommended to master the use of the Standard Shader before veering off into making custom surface shaders in ShaderLab, Shader Graph (Scriptable Render Pipeline), or other third-party shader creation tools.
More detailed explanations for the Standard Shader in Unity can be found here.
While PBS allows you to easily create believable materials, just playing with the sliders and color pickers on the Material will not get a content creator very far. Most real-life surfaces are made up of multiple materials. Here are a few things to keep in mind when texturing an object with the Unity Standard Material. To keep things simple, only Albedo, Smoothness, Normal Map and AO are covered here.
Standard or Standard (Specular setup). In Unity there are two options for the Standard Material: Standard and Standard (Specular setup). A few things to be aware of for these two materials:
In general it is easier to use the Standard setup, as the specular brightness and color are calculated automatically based on the Albedo, Smoothness and Metallic inputs.
In the Standard setup, Metallic at 1 means the albedo drives the color and brightness of the specular, in tandem with Smoothness, which adjusts the brightness and glossiness of the surface.
Metallic at 0 means the albedo color doesn’t affect the specular color and shows up as the surface color.
The Standard (Specular setup) Shader should be used when you want to untether the specular color from the material’s albedo. This is the case for some more exotic materials.
More information can be found here.
Albedo values and the Material Validator. While a Physically Based Shader works hard to properly model energy conservation (automatically calculating specular brightness and distribution from the light), the albedo of your material still needs to be plausible. A material’s albedo affects both direct and indirect lighting, and an unrealistic value will propagate through the rest of your Scene lighting.
A very dark albedo absorbs a significant amount of light and causes an unusual lighting response. An overly bright albedo reflects amounts of light and indirect color that are not usually observed in real life.
The sample above showcases albedo on a non-metal surface affecting indirect lighting.
While there are charts of material values people refer to for PBS, there is no defined value for non-metal painted surfaces, which are very common in real life. Content creators can decide that a wooden wall is painted charcoal black or snow white, for example; there is no single definitive albedo value for that wall other than the content creator’s preference. This is where general guidelines come in. It is safe to say that for a non-metal painted surface, an albedo value below 0.2 is too dark and an albedo value above 0.8 is too bright. This is not a scientific measurement, simply an easy-to-remember guideline. A chart of proper PBS values can be found here.
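The 0.2-0.8 guideline can be expressed as a simple check. This is a hypothetical validator sketch (Unity’s actual Material Validator works differently), using Rec. 709 luminance weights on a linear-space albedo color:

```python
# Hypothetical albedo validator mirroring the 0.2-0.8 guideline above.
def albedo_luminance(r, g, b):
    """Perceptual luminance (Rec. 709 weights) of a linear-space color."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def is_plausible_albedo(r, g, b, low=0.2, high=0.8):
    """True if the albedo sits inside the painted non-metal guideline."""
    return low <= albedo_luminance(r, g, b) <= high

charcoal = is_plausible_albedo(0.05, 0.05, 0.05)  # too dark: absorbs too much
snow_paint = is_plausible_albedo(0.9, 0.9, 0.9)   # too bright: reflects too much
```

A check like this is only a heuristic; the guideline applies to painted non-metal surfaces, not to metals or unusual dielectrics.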
For darker dielectric material information, please refer to this expert guide.
A chart is simple to use when dealing with a single-albedo surface; however, determining the validity of complex albedo textures can be difficult. Our rendering engineer on the Spotlight team developed a Material Validator tool for exactly this reason. It lets you check whether material values follow the guidelines. To enable it, switch the Scene view mode from "Shaded" to “Material Validation”.
Metallic values. The Metallic value of a material defines how much of the environment gets reflected by the surface, while also determining how much of the albedo color is visible on the surface. When a surface is pure metal, the surface color (albedo) drives the color of the environment reflection. A few things to keep in mind with metal materials:
Pure glossy metal materials don’t bounce diffuse lighting. If your entire room is made out of metal, the room will be very dark and you will only see specular highlights and reflections.
The example above showcases how dark a smooth metal room is, even with full point light coverage.
Deciding whether a surface is a metal or not can sometimes trip up content creators. Don’t get caught up in an object’s core material; pay attention to the final surface of the object. For example, metal railings that are "painted" blue should only have their unpainted areas designated as metal. The image below illustrates how a painted metal railing should be textured.
NOTE: While the chipped areas of the painted metal bar are metallic, rust, however, is not a metal surface.
While it is easy to imagine that a material only needs a Metallic value of either 0 or 1, there are cases where surface materials are mixed or blended. Metal objects partially covered with dust or dirt are a good example where the Metallic value sits in between due to blending. Other than such cases, be cautious about using Metallic values between 0 and 1 when creating plausible materials.
More information about metal can be found here.
Smoothness values. Smoothness controls the microsurface detail of the surface: a value of 1 yields a pure, mirror-like reflective surface, and a value of 0 yields a very rough and dull surface. This is often straightforward and intuitive, but at other times can be downright confusing. A few things to keep in mind:
Focus on the final surface quality of the object. Just because an object is made of concrete doesn’t tell you anything about its smoothness; it could be rough concrete with gloss paint on top. Another example is unpainted wood: how the wood was polished determines the final smoothness value.
Don’t forget scuffs, dirt, scratches and water stains. In real life the surface of a material is affected by many variables and is rarely a single pure surface.
How elements get blended between surfaces also determines the characteristics of the material (e.g. a water puddle on soil usually has a ring of absorbed water that darkens the albedo, rather than just a direct smoothness blend).
More information for smoothness can be found here.
Normal map. A "normal map" usually refers to a tangent-space normal map that bends the surface normals of a polygon so that light responds as if it were hitting more detailed geometry. Content creators usually use this to add lots of detail to a seemingly simple mesh. While normal maps are usually used to add geometric detail, it is important not to forget their role in defining a material: they can be used to show the original surface material, for example wood painted with a high-gloss red finish.
More information about normal map can be found here.
Occlusion map. Occlusion map mimics the attenuation of ambient light and can enhance the perception of concavity and shape.
Why would we need this, since we already have light baking, an indirect lighting solution, and SSAO (Screen Space Ambient Occlusion)? The answer is twofold.
First, a more detailed occlusion map can be achieved at much higher quality in an offline render, especially if the data comes from a more detailed model (similar to baking a normal map from a high-detail model to a low-detail one).
Second, occlusion maps help dynamically lit objects tremendously, since dynamic objects don’t get occlusion from light baking and only receive Light Probe or ambient lighting and low-detail Screen Space Ambient Occlusion (SSAO).
More information about occlusion maps can be found here.
Reference picture, colour chart and photo source. Like trying to learn any new field, studying up on the general principle behind Digital Content Creation will make your results better. Taking pictures of the surface whether it’s for reference or texture source often times helped speed up the creation of surface material in digital content creation tool. There are not many rules for capturing reference, other than taking lots of pictures of the particular subject. It’s the equivalent of going to image search engine and searching for specific reference image.
On the other hand, taking pictures for texture source needs some guidelines in order to give DCC close enough results for texture capturing:
While it’s nice to use Digital SLR / advance camera, it is not a requirement. Any camera, including a mobile phone, with manual exposure control and RAW capability can achieve good result with the above setup.
To take it further, a X-Rite ColorChecker profiled RAW image with polarized lighting and lens will produce higher accuracy in capturing of albedo texture.
Like the R-27 gray card, ColorChecker chart is a known trusted color reference that can be used as anchor. Camera sensor + lens + filter + lighting condition characteristic profile can be generated and be used to correct RAW images.
Lighting condition when capturing a texture source need to be in diffused condition, such as cloudy day or in evenly lit shades.
For a purer albedo texture, once the captured images are processed, additional processing can be done using Unity’s de-lighting tool.
Lighting and Setup
At this stage content creators have meshes that are properly textured and an assembled Scene with proper tonemapped Unity render settings, but the Scene will still not look good until a proper lighting setup is in place. This section assumes content creators set up the Scene with a Realtime GI strategy and light it with Realtime lights for instant feedback, though similar principles also apply to baking.
Outdoor lighting and Scene setup.
Hemisphere lighting. The first component of outdoor lighting is hemisphere lighting, called Environment Lighting in Unity. This is a fancy term for skylight: the night sky has minimal contribution, while the daytime sky is very bright. Hemisphere settings can be found under the Lighting tab (Window > Lighting > Settings > Environment). For a start, a procedural skybox material is preferable to an HDRI cubemap. Create a new material in the project, name it SkyMaterial, and set it to Skybox / Procedural.
Assign it to the Environment Skybox Material inside Lighting tab > Scene.
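For reference, the same assignment can also be done from script. The sketch below is an assumption-laden convenience (the editor workflow above is the primary path); it uses Unity's `RenderSettings.skybox` API and the built-in `Skybox/Procedural` shader:

```csharp
using UnityEngine;

// A sketch: creates a procedural skybox material at runtime and assigns it
// as the Scene's Environment Skybox. The class name is hypothetical.
public class SkySetup : MonoBehaviour
{
    void Start()
    {
        // "Skybox/Procedural" is the built-in procedural sky shader.
        var skyMaterial = new Material(Shader.Find("Skybox/Procedural"));
        RenderSettings.skybox = skyMaterial;

        // Notify the GI system that the environment changed so ambient updates.
        DynamicGI.UpdateEnvironment();
    }
}
```

In a real project you would normally create SkyMaterial as an asset and assign it in the Lighting tab rather than instantiating it in code.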
At this point the Scene is somewhat lit. There is ambient light, but not exactly proper hemisphere lighting. We’ll leave this alone for now.
Directional Light. Typical sunlight or moonlight is usually represented by a directional light. This is due to the parallel nature of its light and shadow directions, mimicking a light source at near-infinite distance.
Global Illumination. A directional light plus ambient alone won’t create believable lighting. Proper hemisphere lighting requires occlusion of the skylight, and the sun requires indirect light bounce. The sky currently renders a single color value to the Scene, making it look flat. This is where Realtime Global Illumination or Baked Lighting is required to calculate occlusion and indirect bounce lighting. To achieve that, follow these steps:
Make sure all meshes that need to contribute to Realtime GI or baking are flagged as Lightmap Static and Reflection Probe Static (typically large static meshes).
Next, enable Realtime Global Illumination (leave it at the default Medium settings) in Lighting tab > Scene > Realtime Lighting, then hit Generate Lighting or check Auto Generate.
The Scene now looks dark after lighting generation finishes. To make matters worse, some elements of the Scene look out of place (notice the tram and the door in the background). The static objects in the Scene now have proper occlusion for hemisphere lighting and indirect bounce from the directional light; however, the rest of the objects lack a proper lighting setup.
Light Probes and Reflection Probes. For dynamic or non-lightmapped objects to receive Realtime/Baked Global Illumination, Light Probes need to be distributed in the Scene. Make sure to distribute Light Probe groups efficiently near the areas where dynamically lit objects are located or will pass (such as the player). More information about Light Probe groups can be found here.
Hit Generate Lighting again or wait for the precomputation to finish if Auto Generate is checked.
The tram and the background door are grounded better, but reflections look out of place: sky reflection is all over the place and shows up inside the tunnel. This is where reflection probes come in. Place reflection probes efficiently with proper coverage in the Scene as needed (in the Scene above, one reflection probe for the entire room is sufficient). A 128-pixel Cubemap Resolution with box projection is usually a good baseline for typical cases and will keep memory use and reflection bake times happy.
More information about reflection probes can be found here.
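The probe settings above can also be expressed in script. This is a hedged sketch using Unity's `ReflectionProbe` component API; the `size` value is a hypothetical room bound you would fit to your own Scene:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// A sketch: configures a baked, box-projected reflection probe matching the
// baseline recommended above (128px cubemap, box projection).
public class RoomReflectionProbe : MonoBehaviour
{
    void Start()
    {
        var probe = gameObject.AddComponent<ReflectionProbe>();
        probe.mode = ReflectionProbeMode.Baked;   // baked along with the rest of the lighting
        probe.boxProjection = true;               // project reflections onto the room volume
        probe.resolution = 128;                   // good quality/memory baseline
        probe.size = new Vector3(20f, 10f, 40f);  // hypothetical bounds; fit to your room
    }
}
```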
The Scene now looks properly grounded and cohesive, an important part of a believable Scene. But everything is even darker than before and nowhere near believable quality.
HDR Lighting Value. Many content creators don’t realize that, in reality, hemisphere lighting and sunlight are very bright light sources, much brighter than a value of 1. This is where HDR lighting comes into play. For now, turn off the directional light and set the SkyMaterial Exposure to 16. This will give you a good idea of what hemisphere lighting does to a Scene.
Things start to look believable. Think of this state as a cloudy day, where sunlight is completely diffused in the sky (the directional light doesn’t show up). At this point sunlight can be reintroduced into the Scene at a much higher value; try Intensity 5 for a start. Despite the sun looking nearly white, it is important that its color is chosen carefully, as the indirect color from a strong sun can dramatically change the look of the Scene.
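The two HDR values discussed above (skybox Exposure 16, sun Intensity 5) map directly to script properties. A minimal sketch, assuming references to the SkyMaterial and directional light are assigned in the Inspector:

```csharp
using UnityEngine;

// A sketch: applies the HDR lighting values from the walkthrough above.
public class HdrLightingValues : MonoBehaviour
{
    public Material skyMaterial;   // the SkyMaterial created earlier
    public Light sun;              // the Scene's directional light

    void Start()
    {
        // Hemisphere lighting is far brighter than a value of 1.
        skyMaterial.SetFloat("_Exposure", 16f);  // procedural skybox exposure property
        sun.intensity = 5f;                      // reintroduce the sun at high energy
        DynamicGI.UpdateEnvironment();           // refresh ambient after the change
    }
}
```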
Now the sun (directional light) looks like a high energy light as expected from real life. The Scene looks quite believable at this point.
Screen Space Ambient Occlusion and Screen Space Reflection. While the Scene lighting looks pretty good at this point, there are additional details that can be added to push it further. Baking detailed occlusion usually isn’t possible because of the limited resolution set in Realtime GI for reasonable performance. This is where Screen Space Ambient Occlusion can help. Enable SSAO in the Post Process Profile under Ambient Occlusion. The settings for this example are Intensity 0.5, Radius 1, Medium sample count with Downsampling, and Ambient Only checked for a start.
While SSAO takes care of extra ambient lighting occlusion, reflections could use some accuracy improvements beyond the simple reflection probes. Screen Space Reflection can help improve this situation. Enable Screen Space Reflection in the post process profile.
Notice that the left side of the wet track no longer renders bright reflections, as SSR gives the Scene more accurate reflections for on-screen objects. Both of these post processes incur performance costs at runtime, so enable them wisely and set their quality to a reasonable performance impact for your runtime requirements.
Fog. At this stage content creators have achieved a somewhat believable outdoor and indoor value separation at a fixed exposure. Reflections are visible in the dark indoor areas as strong highlights, not dim muddy values.
However, the Scene’s foreground and background elements are not reading well despite strong perspective elements. A subtle fog can make a massive difference, giving the Scene additional dimension.
Notice the foreground railing has better definition compared to the zero-fog Scene. Fog is enabled in Lighting tab > Scene > Other Settings. Here, fog color #6D6B4EFF with Exponential mode at 0.025 density is used. In deferred rendering, fog might also need to be enabled in the post process profile if it’s not activated automatically.
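The fog settings above correspond one-to-one to Unity's `RenderSettings` fog properties. A minimal sketch reproducing this example's values:

```csharp
using UnityEngine;

// A sketch: enables exponential fog with the colour and density used in the
// example above (#6D6B4E, density 0.025).
public class SceneFog : MonoBehaviour
{
    void Start()
    {
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.Exponential;
        RenderSettings.fogDensity = 0.025f;
        RenderSettings.fogColor = new Color32(0x6D, 0x6B, 0x4E, 0xFF);
    }
}
```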
Indoor and local lighting
Spotlight / Pointlight. The staples of real-time local lighting are spotlights and point lights. Area lights can only be used when baking lighting, with the exception of the HD Scriptable Render Pipeline, which has new area lights that can be rendered in realtime. Fundamentally, both of these light types emit light from one point in space and are limited by range, with the spotlight additionally limited by angle. More information can be found here.
The big differences between the two lights have to do with the way they cast shadows and interact with cookies. Shadowing with a point light costs six shadow maps compared to a spotlight’s single shadow map. For this reason point lights are much more expensive and should be used very sparingly. NOTE: Baked lights don’t need to worry about this issue. Another difference is that a cookie texture on a spotlight is a simple 2D texture, while a point light requires a cubemap, usually authored in 3D software. More information here.
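The spotlight/point light distinction above can be sketched with Unity's `Light` component API. The cookie textures here are hypothetical assets you would assign yourself; note the point light's cookie must be a cubemap:

```csharp
using UnityEngine;

// A sketch: creates one spotlight and one point light, showing the extra
// angle limit on the spot and the cubemap cookie requirement on the point.
public class LocalLights : MonoBehaviour
{
    public Texture2D spotCookie;  // a simple 2D cookie texture (hypothetical asset)
    public Cubemap pointCookie;   // point lights need a cubemap cookie

    void Start()
    {
        var spot = new GameObject("Spot").AddComponent<Light>();
        spot.type = LightType.Spot;
        spot.range = 10f;
        spot.spotAngle = 45f;       // spotlights are additionally limited by angle
        spot.cookie = spotCookie;

        var point = new GameObject("Point").AddComponent<Light>();
        point.type = LightType.Point;
        point.range = 10f;          // limited by range only
        point.cookie = pointCookie; // cubemap, usually authored in 3D software
    }
}
```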
Colour and intensity of light. Choosing the proper colour and intensity for your lights should follow some loose guidelines to give plausible results. The effect of the chosen color and value needs to be considered. When selecting intensities for indoor lights, try to make sure no indoor light has a greater intensity than the sun’s, as this can create an unbalanced look depending on the Scene.
Given this Scene’s setting, it’s very unlikely that high-intensity lights shining from the ceiling would exceed the brightness of daylight.
When selecting a colour, try not to leave out any one of the colour channels completely. This creates a light that has problems converging to the white point.
While it is technically a valid light color, the light color in the left image removes all blue from the final output. Having a limited final color palette in the baseline Scene is not a great idea, especially if color grading is expected to be done later on.
Emissive Surfaces. In Unity, emissive surfaces can contribute to lighting if Realtime GI or Baked GI is enabled, giving the effect of area lighting. This is especially useful with Realtime GI: content creators can modify the intensity and color of an emissive surface and get immediate feedback, assuming the precompute has been done ahead of time.
The image above showcases subtle diffuse lighting coming from small meshes on the ceiling.
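Emissive contribution can also be driven from script on a Standard shader material. A sketch, assuming the mesh is Lightmap Static so Realtime GI picks the emission up; the colour and intensity here are illustrative, not values from the sample Scene:

```csharp
using UnityEngine;

// A sketch: turns a renderer's material into an emissive surface that
// contributes to Realtime GI.
public class EmissivePanel : MonoBehaviour
{
    void Start()
    {
        var mat = GetComponent<Renderer>().material;
        mat.EnableKeyword("_EMISSION");
        // HDR emission: a base colour scaled by an intensity above 1.
        mat.SetColor("_EmissionColor", new Color(1f, 0.95f, 0.8f) * 2f);
        // Flag the emission for Realtime GI so it lights the Scene.
        mat.globalIlluminationFlags = MaterialGlobalIlluminationFlags.RealtimeEmissive;
    }
}
```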
At this point, your content creators should have a good understanding of how to set up and light a Scene to look believable.
Understanding Post Process Features
As you might expect given the name, post processes are rendering effects based on an existing rendered Scene. Effects in Post Process are usually Scene-view dependent or layered on top of the rendered Scene before generating the final render. The clear advantage of this feature is instant visual feedback and dramatic improvement to the Scene without the need to alter existing content. Many features in Post Process are not considered a baseline requirement for creating believable Scenes; however, their capability to enhance a Scene further is certainly worth the time it takes to understand the system. The goal here is to give you the information needed to decide which Post Process effects are right for your situation and to avoid the pitfalls that can come with these advanced features. More info can be found here: https://docs.unity3d.com/Manual/PostProcessingOverview.html
Anti-aliasing. When rasterizing a 3D polygon into a 2D screen with limited resolution, the final pixels show a stair-stepping effect (aliasing). There are different Anti-Aliasing solutions in real-time 3D (Supersampling, MSAA, FXAA, SMAA, Temporal, and any combination of these or newer methods). The most popular techniques currently are FXAA and Temporal Anti-Aliasing due to their effectiveness and relatively high performance.
The sample above showcases that FXAA does a good job of fixing some of the glaring aliasing effects. Temporal, however, takes it a step further and can be seen to do a much better job on the tram rails.
FXAA is a pure post process anti-aliasing. In simple terms, the rasterized Scene is captured, edges are analyzed, and an algorithm is run on top of the existing image to smooth it out. It is straightforward, doesn’t have any complex dependencies, and is fast.
Temporal Anti-Aliasing is a lot more complex. Temporal uses jittering and the previous frame as additional data to be blended in. It uses motion vectors to predict which pixels need to be rejected or accepted to render the final frame. The idea is to increase the effective resolution of a frame with more data without the need to render the Scene larger than its final resolution (Supersampling). The benefit is clearly a much smoother anti-aliasing, similar to the quality given by SuperSampling, without the major performance impact.
Like everything in real-time rendering, there’s always a trade-off. Temporal AA requires motion vectors to function and has a larger performance cost than FXAA. Temporal’s complex prediction of the final image can cause artifacts for fast-moving objects and texture blurriness, which might not be suitable for some applications.
Ambient Occlusion. The ambient occlusion post process is an approximation of ambient occlusion based on screen space data, mainly depth, hence it is usually called Screen Space Ambient Occlusion (SSAO). As explained in the lighting setup section, SSAO can give better fidelity when shading ambient lighting, especially for dynamic objects that typically don’t have any occlusion interaction between the static Scene and the dynamic Scene.
While SSAO generally helps a Scene’s ambient shading, it can cause too much occlusion. With per-object baked ambient occlusion from offline Digital Content Creation software and additional ambient occlusion from light baking, SSAO becomes the third layer of ambient occlusion.
Make sure to keep the final output in mind when setting up SSAO and try to find a balance with the other Ambient Occlusion solutions.
A sample showcasing the issue of adding too much AO, causing open areas to be very dark.
Screen Space Reflection. Similar to SSAO, Screen Space Reflection uses the current Scene view to approximate reflections via ray tracing. To get believable results, it is almost always a good idea to enable this feature. It adds highly accurate reflections that complement the normal cubemap-captured reflections. However, enabling this feature restricts rendering to the deferred path only and has a performance impact. Another downside of SSR is the "screen space" part: anything not on the screen will not generate reflection hits, which can cause missing reflections, such as the sweeping effect at the edges of the screen.
Depth of Field. What most DCC artists understand as the Depth of Field effect is actually the lack of DoF: a blurry foreground and background with focus on only a small point in space is what usually comes to mind when talking about DoF. While this effect can give the cinematic feel of a large-sensor camera, it can also be used to change the scale perception of a Scene, much like how a tilt-shift lens gives a miniature effect.
The above example is a real-life photograph made to look like a miniature by faking DoF.
Motion blur. Some argue that motion blur induces motion sickness or causes a distraction, while others swear by it. Should you enable it? The answer depends on the desired effect and application. A subtle motion blur goes a long way in blending the transition from one frame to the next, especially when there is a large difference in Scene translation, as is typical with first-person or third-person cameras. For example, a wide-angle view where the player can swing the camera quickly from left to right will look jittery and give a stop-motion look without motion blur, even when rendering at 60 FPS.
The sample above is running motion blur at a shutter angle of 180 degrees. (A full 360-degree shutter angle gives a trail lasting the full frame duration, while anything less gives a shorter trail.) With that in mind, if you are aiming for a stop-motion look, then by all means disable motion blur.
Bloom and emissive. Bloom in real life is an artifact of a lens in which light beams aren’t focused properly, usually found in lower quality camera lenses or special-effect glow filters.
The two main results usually expected from bloom are a hazy soft image (as seen above, a 0-threshold setup) or the differentiation of elements of high intensity or bright light (image below).
Overusing this feature can backfire, as seen in the sample where there are lots of high-intensity pixels and the intensity threshold starts blooming very early (a threshold of Gamma 1.0 means anything above a value of 1.0 at the current exposure will bloom intensely). Selecting the threshold value depends on the specific values of your emissive surfaces, the lighting setup of the Scene, and whether you have enabled eye adaptation.
ToneMapper type. A tonemapper is a way of processing a linear HDR buffer of input data and rendering it back out to the designated colour space for final output, similar to how a camera works. In Unity Post-processing there are two types of tonemapper, Neutral and ACES (Academy Color Encoding System). More info on ACES can be found on Wikipedia. At first glance the difference between the two is the tonemapper’s default contrast. However, that isn’t the main difference, since you can adjust Neutral to be similar in contrast to ACES (seen below, the two images are almost identical).
Above Neutral settings: Black In 0.02, White In 10, Black Out -0.04, White Out 10, White Level 5.3 and White Clip 10. The main difference to take into account is how the two tonemappers handle high-intensity colour values, such as coloured lights or emissive effects (e.g. explosions or fire).
The above image showcases how the ACES tonemapper normalizes high-intensity colour differently than the Neutral tonemapper.
Chromatic Aberration, Grain and Vignette. These post process effects simulate artifacts from real-life camera systems. To avoid abusing them, it is a good idea to understand how each occurs in a real camera.
Chromatic Aberration, or CA, is a dispersion of colour that appears in an image when the lens of a camera fails to focus all colours to the same convergence point. It is usually found in poorly calibrated or lower quality lenses. While this can sometimes add a sense of realism to a digital Scene, it can also suggest that your virtual camera is meant to convey a low quality lens.
Grain seen in the final image of a real photograph or film is usually the telltale sign of an insufficient quantity of useful light entering the sensor, such as a dark Scene or a high-ISO camera sensor/film, which translates to noise. This effect can be used to simulate camera limitations, making a pure, clean 3D-rendered Scene feel more believable. However, too much noise in a Scene can distract the viewer with a false sense of motion and affect the contrast of the final rendered image.
The vignette effect, similar to CA, is an artifact in which a lens cannot give consistent light coverage from the center to the edge of the sensor/film of a camera. It can be used to give a sense of focus to the central point of a Scene, but can also be abused, making a Scene look like it was processed by an amateur editor.
The key takeaway from these post process fundamentals is for content creators to use the features purposefully rather than as "make it advanced rendering" checkboxes, since each additional effect added to the Scene costs some performance.
Post processes missing from this explanation are Eye Adaptation, Colour Grading (other than the tonemapper) and User LUT (Lookup Table). These effects warrant a deeper explanation. General information about them can be found here:
Dealing with dynamically lit objects, especially large ones, requires more tricks than their static counterparts. Non-statically lit objects are in many cases expected to change position, hence the need for dynamic lighting information. Dynamic objects have to work within these limitations, since predetermined lighting calculations aren’t an option. Here are some things to consider to improve the quality of dynamic object lighting:
Light Probe Proxy Volume (LPPV). Surfaces of dynamic objects that aren’t lit by dynamic lighting typically use Light Probe data to fill in their lighting information (in a Scene where no probes are present, Environment Lighting is used). Depending on the lighting strategy used when setting up the Scene, this can range from indirect lighting information down to shadowing and baked diffuse probe lighting. This strategy usually works fine for small dynamic objects; however, larger objects require finer granularity of Light Probe lighting. This is where Light Probe Proxy Volumes come in; check the Unity manual for the LPPV guide. Using a Light Probe Proxy Volume allows a large dynamically lit object to use more than a single Light Probe, resulting in higher lighting accuracy.
The example above showcases how the capsule with an LPPV achieves higher accuracy of Light Probe sampling despite using only a 2x2x2 volume grid.
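An LPPV like the one in the example can be set up from script via Unity's `LightProbeProxyVolume` component. A sketch with the same 2x2x2 custom grid; the class name is hypothetical:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// A sketch: gives a large dynamic object a 2x2x2 Light Probe Proxy Volume
// and points its renderer at the volume instead of a single probe.
public class TramProbeVolume : MonoBehaviour
{
    void Start()
    {
        var lppv = gameObject.AddComponent<LightProbeProxyVolume>();
        lppv.resolutionMode = LightProbeProxyVolume.ResolutionMode.Custom;
        lppv.gridResolutionX = 2;
        lppv.gridResolutionY = 2;
        lppv.gridResolutionZ = 2;

        // The renderer must opt in to using the proxy volume.
        var rend = GetComponent<Renderer>();
        rend.lightProbeUsage = LightProbeUsage.UseProxyVolume;
        rend.lightProbeProxyVolumeOverride = gameObject;
    }
}
```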
Per object baked Ambient Occlusion map (AO). Dynamic objects only receive lighting from Light Probes or ambient light, so there is a need to precalculate occlusion for the object, especially if the object has a concave interior, such as the tram in the example.
In the example above, the tram on the left without AO applies the Light Probe lighting data without knowing how to differentiate the interior and exterior surfaces. With prebaked AO, the map serves as a guide to reduce the intensity of light and reflection from the exterior, giving a much more grounded look.
Per object offline AO baking can give even more detailed occlusion by baking from a higher-detail mesh to a lower-detail mesh, similar to how normal map baking works.
NOTE: Per object AO doesn’t interact with other dynamic objects. For example, if a dynamic object such as a character enters the tram, it will receive Light Probe data from the Scene that doesn’t necessarily match the occlusion of the tram interior.
Local Reflection. Most dynamic objects don’t warrant their own reflection; however, for objects with concave interiors, attaching a reflection probe to the object and allowing it to update in realtime can help reduce false reflection hits coming from the environment reflection probe.
Exaggerated material to showcase reflection issues.
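Attaching a realtime probe as described above is a few lines with the `ReflectionProbe` API. A hedged sketch; the interior bounds are hypothetical, and the every-frame refresh is the most expensive option, chosen here only for illustration:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// A sketch: a realtime, box-projected reflection probe attached to a moving
// object with a concave interior (like the tram above).
public class InteriorReflection : MonoBehaviour
{
    void Start()
    {
        var probe = gameObject.AddComponent<ReflectionProbe>();
        probe.mode = ReflectionProbeMode.Realtime;
        // EveryFrame is costly; OnAwake may suffice if the interior rarely changes.
        probe.refreshMode = ReflectionProbeRefreshMode.EveryFrame;
        probe.boxProjection = true;
        probe.size = new Vector3(3f, 3f, 12f);  // hypothetical interior bounds
    }
}
```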
Fake shadows or occlusion based on assumptions. If certain assumptions can be made about an object, there are tricks that can improve visual quality. In the sample shown below, the tram is expected to always be on the rails, so to help its ground light occlusion in shaded areas, a simple plane with a multiply transparent material is placed under it.
A similar trick is often used in other games where a blob shadow projector is placed under a character instead of the character casting real shadows. In real-time rendering, if you can find a trick that works and is cheap in performance, it can usually be used as a viable solution.
There are certainly more tips and tricks for improving visual rendering. The above list should give content creators confidence in devising solutions for different kinds of visual target requirements.
Sample Project File
The Spotlight Tunnel Sample Scene was developed by the Unity SF Spotlight team to help content creators do hands-on learning and experimentation.
Spotlight Tunnel Sample project file can be found here.
Simply extract the project to a folder and open it in Unity.
The Spotlight Tunnel Project was made with Unity 2017.1.0f3.
Opening this project in a newer version of Unity will require rebuilding the lighting, as there might be lighting data format incompatibilities between versions.
All assets provided in this project may only be used in projects developed with the Unity Engine.