Midterm Project: Ray Marcher

Due Monday, 9 November 2020, before 7am

This is a partnered lab assignment. If you have not already done so, please set your partner preferences through the TeamMaker service. You can pick a specific partner, request a random partner, or choose to work alone. I strongly encourage you to work with a partner when given the option. You may choose a partner from either lab section. If you pick a specific partner, your partner must also choose you before the application will create a team and a git repository for you.

1. Lab Goals

Through this lab you will learn to

  • Implement a path tracing renderer using the fragment shader.

  • Implement ray marching techniques.

  • Explore a global lighting model capable of rendering shadows.

  • Use more advanced features of GLSL in the fragment shader context.

2. General Guidance

Please see the page on General Guidance for help on setting up the python server or ngrok, and on submitting your labs via git.

3. References

3.2. GLSL

  • GLSL ES 3.0 Quick Reference. This should most closely match the version of GLSL we use with WebGL2. Note that the GLSL code reference doesn’t start until page 4; the earlier pages are the CPU-side OpenGL ES reference. Since you will be working almost exclusively in fshader.glsl, you shouldn’t need to consult that part.

4. Overview

For your midterm project you will implement a ray marcher entirely in the fragment shader similar to many of the demos on Shadertoy. To limit the scope a bit and provide a consistent nomenclature for core elements, I have provided a number of function prototypes which you should implement as described without modifying the prototype. There will additionally be opportunities to expand the core and implement optional features of interest to you. The core components will provide a good technical challenge that will require you to use and modify prior course topics. But creating a ray tracer or ray marcher can be one of the more creatively rewarding projects in computer graphics and hopefully you will have time to explore and create interesting scenes.

4.1. Getting Started

Clone the files and set up the symbolic link as described in the General Guide. If you run the starter code in the browser you should see a circle that varies in size over time and changes color as you move the mouse (you may need to click once to focus the window). You will replace most of the mainImage code in fshader.glsl with your own ray marching code. Unless you add GUI controls or attempt optional features that require textures or additional uniforms, you should not need to modify any files besides fshader.glsl for this project.

The provided fragment shader sets up some uniforms common to those used in Shadertoy, including the screen size iResolution, the time iTime, and the mouse state iMouse. While GLSL expects the fragment shader to have a main function that writes a single output color, Shadertoy uses a slightly different entry point, void mainImage(out vec4 fragColor, in vec2 fragCoord), which takes the current fragment location in pixel coordinates as input and sets the fragment color as output. A thin wrapper to convert between these two formats has already been provided, so you can use the Shadertoy mainImage function as your primary starting point.
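For orientation, a minimal Shadertoy-style body might look like the following sketch; the color formula is just a placeholder, and the uniforms are the ones listed above:

    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        // Normalize the pixel coordinates to [0, 1].
        vec2 uv = fragCoord / iResolution.xy;
        // Any color computation goes here; this one just pulses over time.
        fragColor = vec4(uv, 0.5 + 0.5 * sin(iTime), 1.);
    }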

As you will see in many other Shadertoy demos and the sample code provided, your shader code can have other helper functions and function prototypes. I have defined and documented several prototype functions for you that you should implement. Do not modify the prototypes! You can see examples in the code for, e.g., some of the common matrix transforms, where I declared the function prototypes near the top of the program and implemented them later in the file. This is similar to the C/C++ split between a header .h file and a separate implementation .c/.cpp file, though here the prototype and implementation live in the same file.

You should incrementally implement and test your functions as you build towards a full ray marcher. A rough guide on how to proceed is below.

5. Creating rays and marching

Your first steps should be creating rays and setting up your ray march loop by implementing rayDirection, rayMarch, and sceneSDF to create a basic circle in the center of the screen. You can use Jamie Wong’s tutorial as a guide, though there are a few places where the function prototype parameters are slightly different. Note that rayMarch is closest to Wong’s shortestDistanceToSurface, but both the sceneSDF and rayMarch functions here return a vec2 object instead of a float. As mentioned in class, we use the vec2 type to pass back both the distance value and an object/material ID. I strongly recommend using the first component of the vec2 as the material ID and the second component as the traditional distance value. For now, you can use a dummy value of 1. for the material ID until you create more complex scenes with multiple materials and objects.
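As a rough illustration of the vec2 convention, here is a minimal sketch; the parameter lists below (start, end, the step limit) are assumptions, so follow the documented prototypes in the starter code rather than these:

    // Scene SDF: a unit sphere at the origin; 1. is a dummy material ID.
    vec2 sceneSDF(vec3 p) {
        return vec2(1., length(p) - 1.);
    }

    // March along eye + depth*dir, stepping by the SDF distance each time.
    vec2 rayMarch(vec3 eye, vec3 dir, float start, float end) {
        float depth = start;
        for (int i = 0; i < 255; i++) {
            vec2 res = sceneSDF(eye + depth * dir); // (material ID, distance)
            if (res.y < 0.0001) {
                return vec2(res.x, depth);  // hit: keep the material ID
            }
            depth += res.y;                 // safe to step this far
            if (depth >= end) break;
        }
        return vec2(-1., end);              // miss: sentinel ID
    }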

It is fine to use an implementation similar to Wong’s rayDirection function (note the citation at the top of the shader); it simply says the rays are radially symmetric about the \(-z\) axis in eye space.
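For reference, Wong’s version looks roughly like this (field of view in degrees, fragCoord in pixels):

    vec3 rayDirection(float fieldOfView, vec2 size, vec2 fragCoord) {
        // Center the pixel coordinates, then pick a -z depth so the
        // vertical extent of the screen spans the requested field of view.
        vec2 xy = fragCoord - size / 2.;
        float z = size.y / tan(radians(fieldOfView) / 2.);
        return normalize(vec3(xy, -z));
    }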

Aside from some renaming of functions, reordering of parameters, and the addition of the material ID, you should be able to recreate a scene similar to Part 1 of the Ray Marching Tutorial in your shader.

6. Computing Normals

Next you will want to compute surface normals by implementing estimateNormal. Recall that you can estimate the normal by first computing:

\[\vec{n} = \begin{pmatrix} dx \\ dy \\ dz \\ \end{pmatrix} = \begin{pmatrix} f(x+\varepsilon, y, z) - f(x-\varepsilon, y, z) \\ f(x, y+\varepsilon, z) - f(x,y-\varepsilon, z) \\ f(x, y, z+\varepsilon) - f(x,y,z-\varepsilon) \\ \end{pmatrix}\]

where \(f(p)=f(x,y,z)\) is the scene SDF. A little GLSL trick here is to define a small displacement vector vec3 disp = vec3(eps, 0., 0.); for some small eps. Evaluating dx is now easy: dx = f(p+disp)-f(p-disp);. For dy you want a displacement that looks like vec3(0., eps, 0.). Instead of resetting disp completely, you can use GLSL’s swizzle operators: disp.yxy creates a new vector that has the old y component as its new x and z components and the old x component (eps) as its new y component. So dy = f(p+disp.yxy)-f(p-disp.yxy);. Small tricks like this are common in advanced GLSL and Shadertoy.

Don’t forget to normalize the result before returning it.

\[\hat{n} = \frac{\vec{n}}{\|\vec{n}\|}\]
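Putting the central differences and the normalization together, a sketch of estimateNormal might look like this (assuming the vec2 convention above, with the distance in the .y component):

    vec3 estimateNormal(vec3 p) {
        float eps = 0.001;              // small displacement
        vec3 disp = vec3(eps, 0., 0.);
        // Central differences along each axis, via swizzles of disp.
        float dx = sceneSDF(p + disp).y     - sceneSDF(p - disp).y;
        float dy = sceneSDF(p + disp.yxy).y - sceneSDF(p - disp.yxy).y;
        float dz = sceneSDF(p + disp.yyx).y - sceneSDF(p - disp.yyx).y;
        return normalize(vec3(dx, dy, dz));
    }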

You can test if your calculation is working by using the computed normals as a color (assuming your ray hits the sphere). Assign the color of your sphere to:

0.5*(n+vec3(1.))

and you should see a smooth pastel color gradient across your sphere.

7. Lighting and Materials

Once you have your normals working, you can add Phong lighting, similar to the approach outlined in Part 2 of Wong’s tutorial. The only real difference between our version and the tutorial is that the midterm prototypes use Material types to group the ambient, diffuse, specular, and shiny components. You should use/expand updateMaterial to fetch the material color parameters for a given material ID. You should support at least three different material IDs in your final scene.
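For example, a Material grouping and ID lookup might be sketched like this; the struct fields and the updateMaterial signature here are assumptions, so defer to the provided prototypes:

    struct Material {
        vec3 ambient;
        vec3 diffuse;
        vec3 specular;
        float shiny;
    };

    Material updateMaterial(float id) {
        Material m;
        m.ambient = vec3(0.1);
        m.shiny = 10.;
        if (id < 1.5) {             // ID 1.: reddish matte
            m.diffuse = vec3(0.7, 0.2, 0.2);
            m.specular = vec3(0.2);
        } else if (id < 2.5) {      // ID 2.: shiny green
            m.diffuse = vec3(0.2, 0.7, 0.2);
            m.specular = vec3(1.);
            m.shiny = 50.;
        } else {                    // ID 3.: bluish
            m.diffuse = vec3(0.2, 0.2, 0.7);
            m.specular = vec3(0.5);
        }
        return m;
    }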

The tutorial and the prototypes decompose lighting into two functions: one for the total lighting and one for the per-light computation. You should support at least two positional lights in your scene.
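A per-light term in this style might look like the following; the parameter list is an assumption (the provided phongContrib prototype defines the real one), and it uses the hypothetical Material layout above:

    vec3 phongContrib(Material mat, vec3 p, vec3 n, vec3 eye, vec3 lightPos) {
        vec3 L = normalize(lightPos - p);   // surface to light
        vec3 V = normalize(eye - p);        // surface to viewer
        vec3 R = reflect(-L, n);            // mirror direction of the light
        float dotLN = dot(L, n);
        if (dotLN <= 0.) {
            return vec3(0.);                // light is behind the surface
        }
        float dotRV = max(dot(R, V), 0.);
        return mat.diffuse * dotLN + mat.specular * pow(dotRV, mat.shiny);
    }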

8. Camera motion

Similar to lab5, you should implement cameraMatrix to position the camera in the scene. Unlike lab5, where you constructed a view matrix to go from world coordinates to eye coordinates, here you want to construct a camera matrix that goes from eye coordinates to world coordinates. The rayDirection function you implemented earlier computes rays in eye space, assuming the eye and world directions are aligned.

Here I would encourage you to look at your own lab5 instead of Part 3 of Wong’s tutorial. While the tutorial is not wrong per se in what it ultimately does, it uses slightly different notation and calls the matrix the view matrix. That is what we called our matrix in lab5, because twgl calls the inverse (what we actually want here in the midterm project) the lookAt matrix. So the thing twgl calls lookAt, Jamie Wong calls view, and we call camera; they are all the same matrix, and the view matrix from lab5 is its inverse. Wong also uses the vectors s, u, and -f for our r, u, and n vectors from the Week 04 notes. I recommend just using the r, u, n notation.

Even though Wong does not transform the eye in his tutorial, your cameraMatrix implementation should include the eye translation to match the specification. As noted in the tutorial, the matrix is only applied to the ray direction vector to convert it to world space (see mainImage in Part 3), so the translation has no effect on direction vectors.
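Under the r, u, n convention from the Week 04 notes, a sketch of the matrix might look like this (the parameter names are assumptions; match the provided prototype):

    mat4 cameraMatrix(vec3 eye, vec3 at, vec3 up) {
        vec3 n = normalize(eye - at);       // camera looks down -n
        vec3 r = normalize(cross(up, n));   // right
        vec3 u = cross(n, r);               // corrected up
        // Columns are the eye-space axes expressed in world coordinates,
        // with the eye position as the translation column.
        return mat4(vec4(r, 0.), vec4(u, 0.), vec4(n, 0.), vec4(eye, 1.));
    }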

Test your implementation by trying a different view: the matrix converts the ray direction from eye space to world space, so only the direction changes here (you do not need to modify the eye coordinate, which was already in world space).

9. Shadows

Modify your Phong illumination model to support hard shadows from occluding objects. A surface point p is in shadow from a light source L if there is another surface between p and L; in this case, there is no diffuse or specular lighting contribution from L at p. The helper function inShadow, which you must implement, should make this easier to handle in your phongContrib function.
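One way to sketch the test is to reuse the ray march from the surface point toward the light; the small offset below, and the assumed rayMarch signature from the earlier sketch, are illustrative only:

    bool inShadow(vec3 p, vec3 lightPos) {
        vec3 toLight = lightPos - p;
        float lightDist = length(toLight);
        vec3 dir = toLight / lightDist;
        // Start a small distance along the ray so the march does not
        // immediately re-hit the surface p lies on.
        vec2 hit = rayMarch(p, dir, 0.01, lightDist);
        return hit.y < lightDist;   // something closer than the light
    }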

10. Animation and Scene Creation

With your core components in place, create a scene consisting of at least five different objects, including three different surface types and three different materials. At least some component of your scene should be animated. This could be the lights, the objects, or some user interaction using mouse input.

You will likely want to implement the constructive solid geometry operators union, intersection, and subtraction to help compose your scenes. You can also apply model transforms to the sample points to modify the basic SDFs.
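With the vec2(materialID, distance) convention, these operators reduce to comparisons that keep the winning object’s ID; a sketch (the function names are placeholders):

    // Union: keep whichever object is closer.
    vec2 opUnion(vec2 a, vec2 b) {
        return (a.y < b.y) ? a : b;
    }

    // Intersection: keep whichever object is farther.
    vec2 opIntersection(vec2 a, vec2 b) {
        return (a.y > b.y) ? a : b;
    }

    // Subtraction: carve b out of a, i.e., max(a, -b) on the distances,
    // keeping a's material for the exposed surface.
    vec2 opSubtraction(vec2 a, vec2 b) {
        return (a.y > -b.y) ? a : vec2(a.x, -b.y);
    }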

11. Optional Extensions

This assignment serves as just an introduction to ray marching methods. It is your first experience writing almost entirely in the shader (which can make debugging harder, so proceed carefully). The global sceneSDF allows you to evaluate global shadows with relative ease.

In addition to the core components, you must implement at least one optional extension to your ray marcher. Some example extensions are below. If you have something else in mind, just check with me briefly before implementing your optional feature and I can add it to the approved list.

  • Mirrored/reflective surfaces.

  • Add support for texture maps in shader.

  • Soft Shadows.

  • Camera motion with mouse.

  • Anti-aliasing with super sampling.

12. Summary of Requirements

Your project will be graded on the following components:

  • A working ray marcher in the fragment shader.

  • Correctly implemented functions for all the initial prototypes provided including:

    • Phong lighting with shadows and at least two lights.

    • A working camera transform for positioning the eye and look-at point.

    • Support for at least three different material types.

  • Some animated feature.

  • At least one implemented optional extension.

  • Answers to the concept questions in the Readme.md file.

You will not be graded on the lab survey questions in the Readme.md file.

13. Submit

Once you have edited the files, you should publish your changes using git. See the General Guide for more details.