So Thin I Couldn't Even See It: Rendering double-sided geometry

Unity’s default plane has 121 vertices forming 200 triangles. Here are two pictures of the plane rendered using the Diffuse shader.

Plane top and bottom

As you can see, the triangles that form the plane’s surface are only visible from above. The visible faces are the front faces. The Diffuse shader, along with all the built-in shaders that support lighting, has back face culling enabled.

Back face culling speeds up rendering by only drawing triangles whose fronts face the camera. In most cases, back face culling makes rendering faster without any effect on the final image. A sphere being viewed from the outside, for instance, does not need to have its inside surface rendered.

There are situations where culling seems to be the wrong decision, though. Very thin objects such as pieces of paper should be visible from both sides. We will explore two methods for rendering such objects.

A note on normals

Culling also prevents vertex normals from being used for both sides of the triangle. Here is a single triangle with its vertex normals in blue:

A single triangle with its vertex normals

The vertex normals are correct for the top surface of the triangle. For the bottom, they are pointing in exactly the wrong direction. Because the vertex normals can only be correct for one side of a triangle, back face culling makes sense when you need normals.

Vertex normals aren’t necessary for every shader, but they are used by most. Any shader that supports real-time lighting, even just per-vertex lighting, needs correct normals.

Fix it with shaders

Let’s look at a shader-based solution step-by-step.

I’ve seen a lot of people simply add Cull Off to an existing shader, which turns off back face culling. This can be effective for simple shaders, but if you’ve read this far then you know why that doesn’t work when the shader supports lighting. Without changing the mesh data, we need to find a way to flip the vertex normals, but only for back faces.
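For an unlit shader, a bare Cull Off really is all it takes, because no normals are involved. A minimal sketch (the shader name is my own, and the fixed-function Color command stands in for whatever the shader actually draws):

```shaderlab
Shader "Custom/UnlitDoubleSided" {
    Properties {
        _Color ("Main Color", Color) = (1,1,1,1)
    }
    SubShader {
        Pass {
            Cull Off        // draw both front and back faces
            Color [_Color]  // flat, unlit colour: no normals, so no lighting to get wrong
        }
    }
}
```

The moment lighting enters the picture, the back faces will be lit with front-face normals, which is the problem the rest of this section solves.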

A vertex program can interpret vertex data in many different ways. Since it already has to transform each normal for rendering, it can easily perform the extra task of inverting it. Unfortunately, the hard part isn’t inverting normals, but deciding when to do it.

While a vertex program can invert a normal really quickly, it has very little contextual information about individual vertices. Specifically, there is no way for the vertex shader to tell which face of a triangle is going to be rendered. Without this information, it is impossible for the vertex program to choose correctly whether or not to invert each normal.

There is an easy way around this, which is to split the shader into two passes. The first pass uses Cull Back, and treats each normal as usual. The second pass uses Cull Front, and inverts each normal in the vertex program. Now we have a shader that renders both sides of each triangle with correct lighting.
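Here is a sketch of that two-pass structure. The shader name, the per-vertex diffuse lighting from a single directional light, and the modern Unity helper functions are my own simplifications; this is not a drop-in replacement for the built-in Diffuse shader, just the culling-and-normal logic made concrete:

```shaderlab
Shader "Custom/DoubleSidedDiffuse" {
    Properties {
        _Color ("Main Color", Color) = (1,1,1,1)
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }

        // Pass 1: front faces only, normals used as-is.
        Pass {
            Tags { "LightMode" = "ForwardBase" }
            Cull Back
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _Color;

            struct v2f {
                float4 pos  : SV_POSITION;
                fixed4 diff : COLOR0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // Per-vertex diffuse from the main directional light.
                float3 n = UnityObjectToWorldNormal(v.normal);
                o.diff = _Color * max(0, dot(n, _WorldSpaceLightPos0.xyz));
                return o;
            }

            fixed4 frag (v2f i) : SV_Target { return i.diff; }
            ENDCG
        }

        // Pass 2: back faces only, normals inverted in the vertex program.
        Pass {
            Tags { "LightMode" = "ForwardBase" }
            Cull Front
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _Color;

            struct v2f {
                float4 pos  : SV_POSITION;
                fixed4 diff : COLOR0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // Same as pass 1, but flip the normal so the lighting
                // is correct for the other side of each triangle.
                float3 n = UnityObjectToWorldNormal(-v.normal);
                o.diff = _Color * max(0, dot(n, _WorldSpaceLightPos0.xyz));
                return o;
            }

            fixed4 frag (v2f i) : SV_Target { return i.diff; }
            ENDCG
        }
    }
}
```

The only difference between the two passes is the Cull mode and the minus sign on the normal, which is exactly the point: each pass knows which side it is drawing, so each can use the right normal.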

There are still the issues of normal maps and shadows, both of which will appear incorrect using the above two-pass method. Instead of addressing those right now, let’s look at what we’ve got so far.

Double-sided sword

Although a fairly simple shader-based solution goes a long way towards fixing the problem, it also creates more work for the graphics card. Turning culling off is very efficient and requires only a tiny change to the shader, but fixing the lighting for back faces requires doubling the pass count. Every pass is a separate draw call, so a double-sided shader issues twice as many draw calls.

Even if draw calls aren’t a problem, you’re still creating new shaders to render certain meshes in a way that doesn’t actually look different. Having a double-sided version of each shader you want to use for your single-sided objects means you waste the GPU’s time with state changes and shader compilation. You also waste your own time writing shaders.

How the GPU sees it

The problem of double-sided objects stems from the difference between the way we think of triangles and the way the GPU handles them. A triangle is made up of three vertices; on this much we can agree. For the GPU, though, vertices are more than just positions. They can include normals, tangents, colours, and other information. This means that adjacent triangles often can’t share vertices, because although some of their vertex positions are the same, the other data may differ. UV coordinates, colours, and even normals can change at triangle borders.

While adjacent triangles can share vertex normals, the two sides of the same triangle cannot: each side needs the normal pointing its way, and a vertex can only store one. Explaining this to the GPU is a lot of work for everyone.

The easy way

The easiest and fastest way to render double-sided surfaces doesn’t involve shaders at all. Instead of starting down the tortuous road of fixing a geometry problem with the GPU, you can address it in your modelling program. Just select all of the triangles you want to be double-sided, duplicate them, and then flip their normals.

In the worst case, your model now has twice as many vertices. This means twice as much data to upload to the graphics card. This data only needs to be uploaded once, though, and it’s unlikely that your graphics memory is filled with mesh data (textures tend to take the most space). The benefits of this approach are many: you don’t need to write and maintain complex shaders, the graphics card doesn’t need to compile them, and more of your meshes can share materials.
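In mesh-data terms, the duplicate-and-flip operation is straightforward. A sketch in Python (the function and data layout are my own illustration of what the modelling program does, not any particular tool’s API):

```python
def make_double_sided(vertices, normals, triangles):
    """Return mesh data with every triangle duplicated and flipped.

    vertices:  list of (x, y, z) positions
    normals:   list of (x, y, z) normals, one per vertex
    triangles: list of (i, j, k) vertex-index triples
    """
    n = len(vertices)
    # Duplicate every vertex, giving the copy an inverted normal.
    new_vertices = vertices + vertices
    new_normals = normals + [(-x, -y, -z) for (x, y, z) in normals]
    # Duplicate every triangle, reversing its winding order so the
    # copy faces the other way, and pointing it at the flipped copies.
    new_triangles = triangles + [(k + n, j + n, i + n) for (i, j, k) in triangles]
    return new_vertices, new_normals, new_triangles
```

Applied to a one-triangle mesh this yields six vertices and two triangles: the original, and a reversed-winding copy whose normals point the other way.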

A lot of problems can and should be solved with shaders, but creating double-sided geometry is best handled in your modelling program.