Week 7: Ray Marching


Also browse some of the short examples of ray marching on Jamie Wong’s Shader Toy profile, or explore the general Shader Toy site.


Class and lab were canceled due to instructor illness.


Let’s regroup since it has been a while.

Midterm update

The midterm project is still pending; I would like to have it posted this week. Once it is up, you will have two weeks to work with a partner to implement a ray marcher. There are several core components, including composing a general scene with lighting, shadows, and multiple shapes, colors, and materials. There are also a number of optional extensions to choose from, and at least one must be implemented for full credit. You will use the Shader Toy mockup to implement your work in an environment similar to what you have been using, but the particulars of ray marching will have you doing most of your coding entirely in the fragment shader.

SDF primitives

Last week we introduced a single SDF primitive, the sphere of radius \(r\), centered at the origin.

\[f(p) = \sqrt{p \cdot p} - r\]
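In GLSL this is a one-liner; a minimal sketch (the name sphereSDF is a convention, not required):

```glsl
// Signed distance from point p to a sphere of radius r centered at the origin.
// length(p) computes sqrt(dot(p, p)).
float sphereSDF(vec3 p, float r) {
    return length(p) - r;
}
```

A negative return value means p is inside the sphere, zero means on the surface, and a positive value gives the distance to the surface from outside.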

But this is just the beginning of the basic shapes you can create with signed distance functions. For a basic introductory list, see Iñigo Quílez’s Distance Function article.

Constructive Solid Geometry - Combining objects

Signed distance fields use the union, intersection, and subtraction operators to compose scenes of multiple objects in a variety of ways, many of which are difficult to do with traditional vertex buffer approaches.
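On raw distance values these operators reduce to min and max; a sketch following Iñigo Quílez’s common naming (the op prefix is a convention, used here partly because union itself is a reserved word in GLSL):

```glsl
// CSG operators on raw SDF distances.
float opUnion(float d1, float d2)        { return min(d1, d2); }  // either object
float opIntersection(float d1, float d2) { return max(d1, d2); }  // both objects
float opSubtraction(float d1, float d2)  { return max(-d1, d2); } // d2 with d1 carved out
```

Note that intersection and subtraction generally produce a bound on the true distance rather than an exact SDF, but the bound errs on the safe, underestimating side.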


Shape transforms

To transform a shape, e.g., to move it from its default location, first apply the inverse transform to the point, then use the transformed point to sample the original SDF. In the case of translations and rotations, where distances are preserved, this is sufficient.

Let’s consider the case where we want to have a sphere orbiting around the \(y\) axis clockwise. This can be expressed as a combination of a translation matrix \(T\) and a time-varying rotation matrix \(R_y(\theta(t))\). In the SDF, we want to construct a matrix \(M\) that is the inverse of the orbit transform from, e.g., lab 4. What would \(M\) look like?

\[M = T^{-1} R_y(-\theta(t))\]

Our SDF for the orbiting sphere would be

\[f_o(p) = f(Mp)\]

where \(f\) is the standard sphere SDF.
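As a sketch in GLSL, assuming an orbit radius of 2, a sphere radius of 0.5, and Shader Toy’s iTime uniform driving \(\theta(t)\) (all of these values are illustrative assumptions):

```glsl
// Hypothetical orbiting-sphere SDF: apply M = T^{-1} R_y(-theta) to p,
// then evaluate the standard sphere SDF at the transformed point.
float orbitingSphereSDF(vec3 p) {
    float theta = iTime;            // assume theta(t) = t
    float c = cos(-theta);
    float s = sin(-theta);
    // rotate by R_y(-theta) ...
    vec3 q = vec3(c * p.x + s * p.z, p.y, -s * p.x + c * p.z);
    // ... then apply T^{-1}: undo the translation out to the orbit radius
    q -= vec3(2.0, 0.0, 0.0);
    return length(q) - 0.5;         // standard sphere SDF, radius 0.5
}
```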

Handling scaling transforms is somewhat more complicated. We can apply the inverse scaling transform to the point, but doing so also scales the distance returned by the SDF, and we use that distance to determine how far to march in the original scene. If the scaled distance is an underestimate of the true distance, this may be acceptable, as the ray march will just take slower, more conservative steps. But if the scaled distance is an overestimate of the true distance, we could march past the surface of the scene. To correct the scaled distance, we apply the original scaling factor (not its inverse) to the returned SDF value. In the case of non-uniform scaling, we apply the minimum scaling factor to the SDF and fall back to a conservative underestimate of the true distance.
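A sketch of both cases, assuming a sphereSDF(vec3 p, float r) helper like the one used elsewhere in these notes:

```glsl
// Uniform scale by s: sample at p/s, then multiply the returned
// distance by s so it remains a true distance in world units.
float scaledSDF(vec3 p, float s) {
    return sphereSDF(p / s, 1.0) * s;
}

// Non-uniform scale (component-wise division): multiplying by the
// minimum factor gives a safe, conservative underestimate rather
// than an exact distance.
float stretchedSDF(vec3 p, vec3 s) {
    return sphereSDF(p / s, 1.0) * min(s.x, min(s.y, s.z));
}
```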

In general, while it is possible to compute exact SDFs for many shapes, in cases where it is not possible, it is better to return an underestimate of the distance during the ray march step than an overestimate.


Materials

The SDF for a single shape is typically described as returning a single floating point distance value. For a complex scene, we can use the union, intersection, and subtraction operators to create a more elaborate SDF using constructive solid geometry (CSG). But the standard descriptions of these operators again return only the distance, not the object or type of surface material encountered. In practice, most ray marchers modify their scene functions to return both the SDF distance value and an object or material identifier when sampling the scene.

This is typically done in GLSL by having the scene SDF return a vec2 type, with one component being the traditional SDF distance and the second component being a material or object label.

vec2 sceneSDF(vec3 p)

The GLSL vec4 type can access its individual components through the swizzle operators xyzw (geometry), rgba (colors), or stpq (texture coordinates), so you can index vec2 types using xy, rg, or st. I prefer using st, letting the first .s component be the surface or material id and the second .t component be the distance or time to reach the surface.

vec2 sceneSDF(vec3 p){
  vec2 obj1 = vec2(1., sphereSDF(p-vec3(1.,0.,0.), 1.));
  vec2 obj2 = vec2(2., sphereSDF(p+vec3(1.,0.,0.), 1.));
  return opUnion(obj1, obj2);
}

Note that the union operator here is still a CSG SDF operation that needs no knowledge of materials. However, in implementation it needs to be renamed (union is a reserved word in GLSL) and modified to accept vec2 inputs.
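A vec2-aware union is a small change; a sketch using the .s/.t convention from above:

```glsl
// Union on (id, distance) pairs: return whichever object is closer,
// carrying its material id along with it.
vec2 opUnion(vec2 a, vec2 b) {
    return (a.t < b.t) ? a : b;
}
```

The intersection and subtraction operators can be adapted the same way, selecting the pair whose distance would survive the corresponding max.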


Shadows

One of the advantages of path tracing methods over the triangle geometry pipeline is the ability to support global illumination models. While it is possible to do some shadows in the traditional pipeline, it is conceptually easier in the path tracing model.

First consider what it means conceptually for a point \(p\) to be in shadow. A point \(p\) is in shadow if …

Now let’s consider how to compute if \(p\) is in shadow. What would we need to know as inputs? Can we express this as a ray marching problem?
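One possible sketch of an answer: march a second ray from \(p\) toward the light, and if the scene is hit before the light is reached, \(p\) is in shadow. The names and constants here (sceneSDF returning a vec2, SHADOW_EPSILON, the step limit) are assumptions, not given code:

```glsl
const float SHADOW_EPSILON = 0.001;

// Returns 0.0 if p is in shadow with respect to lightPos, 1.0 otherwise.
float hardShadow(vec3 p, vec3 lightPos) {
    vec3 dir = normalize(lightPos - p);
    float maxT = length(lightPos - p);
    // start a little off the surface to avoid immediately re-hitting it
    float t = 10.0 * SHADOW_EPSILON;
    for (int i = 0; i < 64; i++) {
        float d = sceneSDF(p + t * dir).t;  // .t is the distance component
        if (d < SHADOW_EPSILON) return 0.0; // hit an occluder: in shadow
        t += d;
        if (t >= maxT) break;               // reached the light unobstructed
    }
    return 1.0;
}
```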