Interpolation is the process of getting from A to B without jumping. Let’s take a look at some simple ways of doing this. The most commonly used interpolation function in video games is lerp (short for linear interpolation).

Note: normalized forms simply interpolate between 0 and 1, and are useful for the type of comparison we’ll be doing.
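The article’s original code isn’t reproduced here, but lerp’s formula is standard; a minimal sketch in Python (function names are mine):

```python
def lerp(a, b, t):
    """Linear interpolation: returns a when t = 0, b when t = 1."""
    return a + (b - a) * t

def lerp01(t):
    """Normalized form, interpolating from 0 to 1: just t itself."""
    return lerp(0.0, 1.0, t)
```

Note that the normalized form collapses to the identity function: lerp01(t) is simply t.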

Lerp is great because it’s simple, but for some situations it is too crude. It starts and stops abruptly, giving the transition a robotic feeling. To provide a smooth start and stop (also known as easing in and out), we can use a cubic Hermite function known as smoothstep.

Smoothstep is simple and useful. It has a slope of zero at its endpoints, so it gets you from A to B without giving you whiplash. Because it doesn’t move as fast as lerp near the endpoints, smoothstep has to move faster at its midpoint, where it has a slope of 1.5.
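The normalized smoothstep curve and its derivative can be sketched as:

```python
def smoothstep(t):
    """Cubic Hermite interpolation from 0 to 1: 3t^2 - 2t^3."""
    return t * t * (3.0 - 2.0 * t)

def smoothstep_slope(t):
    """Derivative 6t - 6t^2: zero at both endpoints, 1.5 at t = 0.5."""
    return 6.0 * t - 6.0 * t * t
```

The derivative makes the whiplash claim concrete: the slope is exactly zero at t = 0 and t = 1, and peaks at 1.5 in the middle.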

The next function we’ll look at is a degenerate case of interpolation called step.

Although rarely used in practice, step is important to understanding the next goal.
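For comparison with the other normalized forms, step can be sketched like this (placing the jump at the midpoint is my assumption, chosen for symmetry; GLSL’s step(), for instance, jumps at a caller-supplied edge instead):

```python
def step(t):
    """Degenerate interpolation: no in-between values at all.
    The jump sits at the midpoint, where the slope is effectively
    infinite; everywhere else the slope is zero."""
    return 0.0 if t < 0.5 else 1.0
```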

Smoothstep is great, but it has a fixed shape. Maybe you want an even easier start and finish with a faster transition in between. Perhaps you’d like to slow down in the middle in exchange for starting and stopping more abruptly. Looking at the smoothstep curve, it seems like a slightly curved version of lerp. Maybe the slope at the halfway point could be varied arbitrarily, letting you go all the way from lerp to step.

The desire for this kind of flexibility is the genesis of variable smoothstep.

This is great, because it lets you interpolate as smoothly or abruptly as you want rather than having to pick a single function. Ideally, though, we wouldn’t be providing the actual midpoint slope because it has the awkward domain of [1, ∞). Instead, we would like it to range from zero to one with the following results (we’ll call it variable smoothstep): 0 gives lerp, 1 gives step, and somewhere in between you get smoothstep.

We’ve got the specifications for our desired function. Now all we need is the math. A bit of Googling shows some unsuccessful attempts, and nothing really promising. I have a feeling that while a fair number of animators, game programmers and possibly audio engineers have come across the need for variable smoothstep, the problem hasn’t received the attention of anyone with a solid background in mathematics. I’ve asked a few people who are better at math than I am, but we’ve never been able to arrive at an elegant (or even convincing) answer.

Unfortunately, everyone who wants variable smoothstep, as defined above, has accidentally created a trick question. This meta-interpolation function looks like it should be a natural progression, with the midpoint slope moving gently from 1 to infinity. The problem is that the definition glosses over what happens at the endpoints. Step and smoothstep both have endpoints with slopes of zero. Lerp, however, has the same slope at its endpoints as it does in the middle, which is 1. This means that there is no continuous way to get from lerp to smoothstep that will also take you from smoothstep to step.

Simply put, there is no sensible way to define a variable smoothstep that fulfills the above requirements. Instead of trying to get all three behaviours out of one function, it might be better to interpolate between two of them. Which two functions you start with depends on the behaviour you want, but this approach avoids accidentally creating impossible requirements.
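One way to interpolate between two of the base curves is a straight blend; a sketch using lerp and smoothstep, with a shape parameter s (my naming):

```python
def smoothstep(t):
    return t * t * (3.0 - 2.0 * t)

def eased(t, s):
    """Blend between plain lerp (s = 0) and smoothstep (s = 1).
    The midpoint slope varies continuously from 1.0 to 1.5, and the
    endpoint slopes stay consistent with the chosen blend."""
    return (1.0 - s) * t + s * smoothstep(t)
```

Because both base curves pass through 0.5 at the midpoint, every blend does too, so the family stays well behaved for all s in [0, 1].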

As you can see, the triangles that form the plane’s surface are only visible from above. The visible faces are the front faces; the faces pointing away from us are the back faces.

Back face culling speeds up rendering by only drawing triangles whose fronts face the camera. In most cases, back face culling makes rendering faster without any effect on the final image. A sphere being viewed from the outside, for instance, does not need to have its inside surface rendered.

There are situations where culling seems to be the wrong decision, though. Very thin objects such as pieces of paper should be visible from both sides. We will explore two methods for rendering such objects.
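GPUs typically decide facing from the winding order of the projected triangle. A sketch of that test (the counter-clockwise-is-front convention is an assumption; graphics APIs let you choose either):

```python
def signed_area_2d(p0, p1, p2):
    """Twice the signed area of a triangle in screen space.
    Positive when the vertices wind counter-clockwise."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)

def is_back_face(p0, p1, p2):
    """Back face under the common CCW-front convention."""
    return signed_area_2d(p0, p1, p2) < 0.0
```

Culling a back face is just skipping the triangle when this test is true, which is why it is so cheap.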

Culling also prevents vertex normals from being used for both sides of the triangle. Here is a single triangle with its vertex normals in blue:

The vertex normals are correct for the top surface of the triangle. For the bottom, they are pointing in exactly the wrong direction. Because the vertex normals can only be correct for one side of a triangle, back face culling makes sense when you need normals.

Vertex normals aren’t necessary for every shader, but they are used by most. Any shader that supports real-time lighting, even just per-vertex lighting, needs correct normals.

Let’s look at a shader-based solution step-by-step.

I’ve seen a lot of people simply add Cull Off to an existing shader, which turns off back face culling. This can be effective for simple shaders, but if you’ve read this far then you know why that doesn’t work when the shader supports lighting. Without changing the mesh data, we need to find a way to flip the vertex normals, but only for back faces.

A vertex program can interpret vertex data in many different ways. Since it already has to transform each normal for rendering, it can easily perform the extra task of inverting it. Unfortunately, the hard part isn’t inverting normals, but deciding when to do it.

While a vertex program can invert a normal really quickly, it has very little contextual information about individual vertices. Specifically, there is no way for the vertex shader to tell which face of a triangle is going to be rendered. Without this information, it is impossible for the vertex program to choose correctly whether or not to invert each normal.

There is an easy way around this, which is to split the shader into two passes. The first pass uses Cull Back, and treats each normal as usual. The second pass uses Cull Front, and inverts each normal in the vertex program. Now we have a shader that renders both sides of each triangle with correct lighting.
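The per-face effect of the two passes can be sketched on the CPU. Here view_dir points from the camera into the scene, and "facing" means the face normal points back toward the camera; both conventions are assumptions:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pass_normal(face_normal, view_dir):
    """Which normal the surviving pass uses for a face.
    Pass 1 (Cull Back) keeps faces whose normal points toward the
    camera; pass 2 (Cull Front) keeps the rest and flips the normal."""
    if dot(face_normal, view_dir) < 0.0:    # face points at the camera
        return face_normal                   # pass 1 draws it unchanged
    return tuple(-c for c in face_normal)    # pass 2 draws it inverted
```

Either way, the normal actually used for shading ends up pointing toward the viewer, which is exactly what the lighting calculation needs.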

There are still the issues of normal maps and shadows, both of which will appear incorrect using the above two-pass method. Instead of addressing those right now, let’s look at what we’ve got so far.

Although a fairly simple shader-based solution seems to go a long way towards fixing the problem, it also means more work for the graphics card. Turning culling off is very efficient and requires only a very small change to the shader, but fixing the lighting for back faces requires that you double your pass count. Every additional pass is another draw call, which means twice as many draw calls for double-sided shaders.

Even if draw calls aren’t a problem, you’re still creating new shaders to render certain meshes in a way that doesn’t actually look different. Having a double-sided version of each shader you want to use for your single-sided objects means you waste the GPU’s time with state changes and shader compilation. You also waste your own time writing shaders.

The problem of double-sided objects stems from the difference between the way we think of triangles and the way the GPU handles them. A triangle is made up of three vertices; on this much we can agree. For the GPU, though, vertices are more than just positions. They can include normals, tangents, colours, and other information. This means that adjacent triangles often can’t share vertices, because although some of their vertex positions are the same, the other data may differ. UV coordinates, colours, and even normals can change at triangle borders.
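The sharing rule can be sketched as an index-buffer builder keyed on the full attribute tuple (the data layout and names are mine):

```python
def build_buffers(corners):
    """corners: one (position, normal, uv) tuple per triangle corner.
    A vertex is reused only when every attribute matches, which is
    why adjacent triangles often can't share vertices."""
    vertices, index_of, indices = [], {}, []
    for corner in corners:
        if corner not in index_of:
            index_of[corner] = len(vertices)
            vertices.append(corner)
        indices.append(index_of[corner])
    return vertices, indices
```

Two corners at the same position but with different normals produce two separate vertices, exactly as described above.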

While adjacent triangles can share vertex normals, two sides of the same triangle cannot. Explaining this to the GPU is a lot of work for everyone.

The easiest and fastest way to render double-sided surfaces doesn’t involve shaders at all. Instead of starting down the tortuous road of fixing a geometry problem with the GPU, you can address it in your modelling program. Just select all of the triangles you want to be double-sided, duplicate them, and then flip their normals.
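A sketch of that duplicate-and-flip step, for a mesh stored as per-triangle corner positions and corner normals (the data layout is an assumption):

```python
def make_double_sided(triangles):
    """triangles: list of ((pa, pb, pc), (na, nb, nc)) pairs of corner
    positions and corner normals. Appends a copy of each triangle with
    reversed winding and negated normals, like 'flip normals' in a
    modelling tool."""
    def neg(n):
        return tuple(-x for x in n)

    doubled = list(triangles)
    for (pa, pb, pc), (na, nb, nc) in triangles:
        # Reversing the winding order flips which side is the front;
        # the normals are negated and reordered to match.
        doubled.append(((pa, pc, pb), (neg(na), neg(nc), neg(nb))))
    return doubled
```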

In the worst case, your model now has twice as many vertices. This means twice as much data to upload to the graphics card. This data only needs to be uploaded once, though, and it’s unlikely that your graphics memory is filled with mesh data (textures tend to take the most space). The benefits of this approach are many: you don’t need to write and maintain complex shaders, the graphics card doesn’t need to compile them, and more of your meshes can share materials.

A lot of problems can and should be solved with shaders, but creating double-sided geometry is best handled in your modelling program.