CS40 Midterm Project: Raytracing

Due 11:59pm Wednesday, 31 October 2018

You may work with one partner on this assignment. This lab will count as your midterm project and use a variety of techniques that you have learned so far. You will design and implement a ray tracer. Your ray tracer should model spheres, triangles, and rectangles, using the Phong lighting model for ambient, diffuse, and specular lighting. You should also model shadows.

Getting started
$ cd
[~]$ ssh-add
Enter passphrase for /home/ghopper1/.ssh/id_rsa:
Identity added: /home/ghopper1/.ssh/id_rsa (/home/ghopper1/.ssh/id_rsa)
[~]$ cd ~/cs40/
[cs40]$ git clone git@github.swarthmore.edu:CS40-F18/raytracer-YOURUSERNAME-YOURPARTNERNAME.git raytracer

Making and building code

Make a build directory in your raytracer directory and run cmake ../ and make.
[cs40]$ cd ~/cs40/raytracer
[raytracer]$ mkdir build
[raytracer]$ cd build
[build]$ cmake ..
[build]$ make -j8
[build]$ ./raytracer scenes/input.txt
Your program does not display a window. Instead it creates a png image file (test.png, as specified in input.txt). You can view image files on the CS system using the program eog. At this point the image should be a blank black image; your raytracer code will construct the correct image.
See input.txt for a sample test file describing a scene to model and the viewpoint from which to raytrace the final image.

The image plane

The first few lines of this file give the name of the output image and the height and width of the image in pixels:

output test.png
outsize 512 512
This image also represents a planar rectangle in the world. This rectangle is specified by the lower left corner of the rectangle and two vectors: a horizontal vector pointing from the lower left to the lower right, and a vertical vector pointing from the lower left to the upper left.
origin -5 -5 8
horiz 10 0 0
vert 0 10 0

The center of every pixel at (row, column) in the output image file maps to a corresponding position in world coordinates using origin, horiz, and vert.
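As a sketch, this mapping might look like the following plain C++ (the function name `pixelToWorld` and the `Vec3` struct are illustrative, not from the starter code, which uses the vec3 typedef from common.h; recall that QImage puts row 0 at the top):

```cpp
// Hypothetical sketch: map the center of pixel (row, col) in an
// nrows x ncols image onto the image-plane rectangle defined by
// origin, horiz, and vert.
struct Vec3 { double x, y, z; };

Vec3 pixelToWorld(int row, int col, int nrows, int ncols,
                  const Vec3& origin, const Vec3& horiz, const Vec3& vert) {
    // The 0.5 offsets select the pixel *center*, not its corner.
    double u = (col + 0.5) / ncols;        // fraction along horiz
    // QImage row 0 is the *top* row, but origin is the lower-left
    // corner of the rectangle, so flip the vertical fraction.
    double v = 1.0 - (row + 0.5) / nrows;  // fraction along vert
    return Vec3{origin.x + u * horiz.x + v * vert.x,
                origin.y + u * horiz.y + v * vert.y,
                origin.z + u * horiz.z + v * vert.z};
}
```

With the sample origin/horiz/vert above, pixel (0,0) lands just inside the upper-left corner of the rectangle, as expected.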

The eye

The $x,y,z$ location of the eye in world coordinates is specified with the eye keyword.
eye 0 0 15
You will be tracing rays from the eye location through the center of each pixel and into the scene, which usually lies on the opposite side of the image plane from the eye.


Shapes

You are asked to support, at minimum, three shapes: spheres, triangles, and rectangles. A sphere should be represented by a center and a radius:
#center x,y,z radius
sphere 0 0 0 2

Triangles and rectangles will be specified by three points. In the case of rectangles, assume the first three points are the lower left, lower right, and upper right, respectively. You can compute the upper left with this information.

#xyz of p1,p2,p3
triangle -5 5 5  5 5 5  5 5 -5

#xyz of ll, lr, ur
rectangle -5 -5 5  5 -5 5  5 -5 -5
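Because opposite edges of a rectangle are parallel and equal in length, the upper left corner is `ll + (ur - lr)`. A minimal sketch (the helper names are illustrative):

```cpp
struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// The vertical edge lr->ur equals the edge ll->ul, so:
Vec3 upperLeft(const Vec3& ll, const Vec3& lr, const Vec3& ur) {
    return add(ll, sub(ur, lr));
}
```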

Shape.h describes a virtual base class, much like the Drawable class from lab 02. Each shape you add to your scene must be able to compute intersections between itself and a ray, and to compute normals for points on its surface. You should implement spheres, triangles, and rectangles as classes derived from the Shape class. I have started this for you in Sphere.h, but you need to add member variables and implement the appropriate methods in Sphere.cpp. Don't forget to update your CMakeLists.txt file. You will need to add Triangle and Rectangle classes from scratch.
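The hierarchy could look roughly like the sketch below, shown here with a Triangle whose hitTime does a plane intersection followed by an inside-the-edges test. All names, types, and signatures here are assumptions for illustration; the actual Shape.h may differ (for example, it may use Qt vector types):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Ray  { Vec3 origin, direction; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

class Shape {
public:
    virtual ~Shape() = default;
    // Smallest positive hit time, or a negative value on a miss.
    virtual double hitTime(const Ray& r) const = 0;
    // Surface normal at a point assumed to lie on the shape.
    virtual Vec3 normal(const Vec3& p) const = 0;
};

class Triangle : public Shape {
public:
    Triangle(Vec3 a, Vec3 b, Vec3 c)
        : m_a(a), m_b(b), m_c(c), m_n(cross(sub(b, a), sub(c, a))) {}

    double hitTime(const Ray& r) const override {
        double denom = dot(r.direction, m_n);
        if (std::fabs(denom) < 1e-12) return -1.0;   // ray parallel to plane
        double t = dot(m_n, sub(m_a, r.origin)) / denom;
        if (t <= 0) return -1.0;                      // plane is behind the eye
        Vec3 q{r.origin.x + t * r.direction.x,
               r.origin.y + t * r.direction.y,
               r.origin.z + t * r.direction.z};
        // q is inside iff it lies on the inner side of all three edges.
        if (dot(cross(sub(m_b, m_a), sub(q, m_a)), m_n) < 0) return -1.0;
        if (dot(cross(sub(m_c, m_b), sub(q, m_b)), m_n) < 0) return -1.0;
        if (dot(cross(sub(m_a, m_c), sub(q, m_c)), m_n) < 0) return -1.0;
        return t;
    }

    Vec3 normal(const Vec3&) const override { return m_n; }  // not unit length here

private:
    Vec3 m_a, m_b, m_c, m_n;
};
```

Note the triangle's normal is constant, so it can be precomputed in the constructor; a sphere, by contrast, must compute its normal per point.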


Lights

Lights in the input file take two forms: point lights, which contribute to diffuse and specular lighting, and a single global ambient light, which contributes overall background lighting. The global ambient light should in general have a low intensity, but you can bump it up while debugging. Point light sources have a position in world space and an intensity.
#global ambient intensity
amblight 0.1

#xyz pos intensity
light 0 3 10 0.3
light -5 0 0 0.3
light 5 8 0 0.3

Code Overview
This assignment uses several techniques from previous lab assignments.

QImage and QColor are from Lab 01. Recall that in the QImage class, pixel 0,0 is in the upper left.

view, material, light, ray and hit are very lightweight classes or structs that are just containers for grouping related elements together. In many cases there are no associated .cpp files, since the member variables can be accessed directly. Feel free to modify these classes/structs if you like, but you shouldn't need to add much to these files to get a basic ray tracer working.

common.h currently just contains the vec3 typedef. Feel free to add other "common" functions here if needed, and add a common.cpp if the code complexity grows.

makescene.cpp is a small wrapper around the bulk of the raytracer. This is the main executable you run. It checks that you provided an input file, creates an instance of the RayTracer class, parses the file, traces the scene, and saves the result. You do not need to modify this code. Instead, modify raytracer.cpp.

If you look at raytracer.cpp initially, you'll see that RayTracer::save() creates a QImage object and saves it to a file. You will probably want to move the creation of this image into RayTracer::trace(), make the QImage object a member variable, and only have save write the output image created in trace(). This was my final implementation of save:

void RayTracer::save() {
    m_img->save(m_view.fname, "PNG");
    qDebug() << "Saved result to " << m_view.fname;
}

That leaves the parser, which reads a text file like input.txt and converts it into an internal format that your raytracer can use. Writing parsers in C++ can be very tedious. I got you started by writing some helper functions in parser.h. Reading this file may be helpful as you parse the commands I have left out. Reading parser.cpp is probably less helpful; it has the tedious and annoying details of C++/Qt string manipulation. raytracer.cpp contains the start of a full parser that opens the input file and parses each command line by line. Check out parseLine, which acts like a giant switch statement (except you can't switch on string types). When you run the parser initially, you will find some commands are completely missing and some are only partially implemented. Examine the other parts of parseLine and use them to fill in any missing details. It is recommended that you store all the information about the input file in the m_scene object. I use two QHash dictionaries in the parser to refer to certain color and material variables by a string name, like "red". Take a look at a few examples in the parser and feel free to ask questions.
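For illustration only, here is the general shape of a keyword-dispatch parseLine in plain C++. The real one uses Qt strings and the helpers in parser.h; SceneStub and its fields are hypothetical stand-ins for m_scene, and only a few of the input-file keywords are handled:

```cpp
#include <sstream>
#include <string>

// Hypothetical stand-in for the m_scene object.
struct SceneStub {
    std::string outputName;
    int width = 0, height = 0;
    double ambient = 0.0;
};

// Dispatch on the leading keyword of one input line. C++ can't switch
// on strings, so this becomes an if/else chain, like parseLine.
bool parseLine(const std::string& line, SceneStub& scene) {
    std::istringstream in(line);
    std::string cmd;
    if (!(in >> cmd) || cmd[0] == '#') return true;  // blank line or comment
    if (cmd == "output") {
        in >> scene.outputName;
    } else if (cmd == "outsize") {
        in >> scene.width >> scene.height;
    } else if (cmd == "amblight") {
        in >> scene.ambient;
    } else {
        return false;  // unknown command: report it rather than silently skip
    }
    return !in.fail();
}
```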

To make material handling a bit easier, there is a notion of the "current" material. Changing the properties of a material through a command like mat amb color changes the "current" material, which can be saved under a special name and retrieved later. When you create a new sphere, triangle, or rectangle, you do not need to specify nine material coefficients; the semantics are that these objects simply use the "current" material at the time the object is created. It's very OpenGL-esque, for better or worse.
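The "current material" idea can be sketched like this (class and method names are hypothetical; the real parser stores named materials in QHash dictionaries rather than a std::unordered_map):

```cpp
#include <string>
#include <unordered_map>

// Minimal material record; the real one has diffuse, specular, and
// shininess coefficients as well.
struct Material {
    double ambient[3] = {0, 0, 0};
};

// Edits apply to the current material; a shape created "now" copies it.
// Saved materials can be looked up by name ("red", etc.) later.
class MaterialStore {
public:
    Material& current() { return m_current; }
    void save(const std::string& name) { m_saved[name] = m_current; }
    void restore(const std::string& name) { m_current = m_saved.at(name); }
private:
    Material m_current;
    std::unordered_map<std::string, Material> m_saved;
};
```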

As for implementing the actual raytracer, it is helpful to have a function that converts i,j png pixel coordinates to world coordinates using the origin, horiz and vert point/vector information. For each pixel, create a ray from the eye through the pixel position in world coordinates, then have a function that traces that one ray and returns a single QColor, which can be assigned to the final output png file.

Don't try to handle all the components at once. Focus on maybe getting the ambient lighting working for one sphere or one rectangle in the center of the image. Once you have the basic outline correct, adding diffuse and specular lighting should be easier.

Additional Components
A modest amount of credit will be reserved for adding some more advanced ray tracing components to your system. I do not expect you to implement all of these ideas, but you should attempt to implement at least one of the following features. How you design and implement your solutions (and how you adjust the parser) is up to you.
Hints, Tips, and Updates

You do not need a draw method in your shape classes. The drawing is done by the rays traveling through the image plane.

You do not need a perspective or ortho projection matrix. Since all rays originate at the eye and pass through the image plane, you get a perspective effect simply by tracing rays.

Below is a general sketch of the raytracing process:

RayTracer::trace():
  init Image
  for each col,row in Image:
    ray = MakeRay(col, row)
    clr = TraceRay(ray)
    Image(col, row) = clr


RayTracer::MakeRay(col, row):
  /* Use origin, horiz, vert, nrows, ncols to
   * map col, row to point in world coordinates */
  Point p = ConvertToWorld(col,row)
  Ray ray
  ray.origin = eye
  ray.direction = p-eye
  return ray



RayTracer::TraceRay(ray):
  Hit closest = FindClosestShape(ray)
  Color clr = background
  if closest.ID != -1:
    /* We hit a shape with the ray, compute
     * color using lighting model */
    clr = DoLighting(ray, closest.shape)
  return clr


RayTracer::DoLighting(ray, shape):
  /* include global ambient */
  clr = ambient*shape.clr;
  for each light L:
    if shape not in shadow of L:
       clr += phong(ray, L, shape)
  return clr
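The phong step above adds one light's diffuse and specular contribution. A single-channel sketch, assuming all direction vectors passed in are already normalized (the function and parameter names are illustrative; the real DoLighting works per color channel with the material's coefficients):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// One light's Phong contribution for a single channel.
// n, toLight, toEye must be unit vectors.
double phong(const Vec3& n, const Vec3& toLight, const Vec3& toEye,
             double intensity, double kd, double ks, double shininess) {
    // Diffuse: proportional to the cosine of the light/normal angle.
    double diff = kd * intensity * std::max(0.0, dot(n, toLight));
    // Specular: reflect toLight about n, R = 2(n.L)n - L, then compare to toEye.
    double nl = dot(n, toLight);
    Vec3 r{2*nl*n.x - toLight.x, 2*nl*n.y - toLight.y, 2*nl*n.z - toLight.z};
    double spec = ks * intensity * std::pow(std::max(0.0, dot(r, toEye)), shininess);
    return diff + spec;
}
```

The shadow test in the pseudocode is separate: before calling phong for a light, shoot a ray from the hit point toward that light and skip the light if any shape blocks it.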



RayTracer::FindClosestShape(ray):
  Hit closest
  closest.ID = -1 /* haven't found anything yet */
  for each object O:
    time = O.hitTime(ray)
    if time > 0:
      if closest.ID == -1 or time < closest.time:
        /* O is closer, update closest */
        closest.ID = O.ID
        closest.time = time
  return closest

Computing Hit Times
Given a ray $\overrightarrow{r} = \overline{o} + \overrightarrow{d} t$, and a shape $S$, we want to compute the time $t$ at which $\overrightarrow{r}$ intersects $S$. We call this time the hit time. Below we look at computing the hit times for a plane and a sphere.


Plane

We can define a plane in 3D with point $\overline{P_0}$ in the plane and a vector $\overrightarrow{n}$, perpendicular, or normal to the plane. The set of points $\overline{P}$ in the plane satisfy the equation $$(\overline{P}-\overline{P_0}) \cdot \overrightarrow{n}=0$$ A point $\overline{P}$ on the ray $\overrightarrow{r}$ satisfies $\overline{P} = \overline{o} + \overrightarrow{d} t$, for some $t$. Substituting this equation for $\overline{P}$ into the plane equation we can solve for the time $t$ in which $\overline{P}$ is simultaneously on the ray and the plane. $$(\overline{P}-\overline{P_0}) \cdot \overrightarrow{n} = ((\overline{o} + \overrightarrow{d} t)-\overline{P_0}) \cdot \overrightarrow{n}=0 \implies $$ $$(\overrightarrow{d}\cdot \overrightarrow{n}) t = \overrightarrow{n} \cdot (\overline{P_0}-\overline{o}) $$ As long as $\overrightarrow{d}\cdot \overrightarrow{n} \neq 0$, we can compute $t$ by dividing the right hand side by $\overrightarrow{d}\cdot \overrightarrow{n}$, yielding $$t=\frac{\overrightarrow{n} \cdot (\overline{P_0}-\overline{o})}{\overrightarrow{d}\cdot \overrightarrow{n}}$$ In the case where $\overrightarrow{d}$ and $\overrightarrow{n}$ are perpendicular, there is no intersection (or the ray is always in the plane).
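Translated directly into code, the plane hit time might look like this sketch (the function and helper names are illustrative):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Hit time of ray o + d*t against the plane through p0 with normal n.
// A negative return value means no hit (ray parallel to the plane).
double planeHitTime(const Vec3& o, const Vec3& d, const Vec3& p0, const Vec3& n) {
    double denom = dot(d, n);                   // d . n
    if (std::fabs(denom) < 1e-12) return -1.0;  // perpendicular case: no single hit
    return dot(n, sub(p0, o)) / denom;          // n . (P0 - o) / (d . n)
}
```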


Sphere

We can define a 3D Sphere with center $\overline{C}$ and radius $r$ as the set of points $\overline{P}$ satisfying $| \overline{P} - \overline{C} | = r$, or $(\overline{P} - \overline{C}) \cdot (\overline{P} - \overline{C}) = r^2$. Using $\overline{P} = \overline{o} + \overrightarrow{d} t$, we get a hot mess of $$((\overline{o} - \overline{C}) + \overrightarrow{d} t) \cdot ((\overline{o} - \overline{C}) + \overrightarrow{d} t) = r^2 $$ or $$\overrightarrow{d} \cdot \overrightarrow{d} t^2 + 2 \overrightarrow{d} \cdot (\overline{o} - \overline{C}) t + (\overline{o} - \overline{C}) \cdot (\overline{o} - \overline{C}) - r^2 = 0$$ This equation has the form $At^2+Bt+C = 0$ which has two solutions $$ t = \frac{-B \pm \sqrt{B^2-4AC}}{2A} $$ provided $B^2-4AC \geq 0$ (the two roots coincide when the discriminant is exactly zero); if $B^2-4AC < 0$, there are no real solutions and the ray misses the sphere. $$A = \overrightarrow{d} \cdot \overrightarrow{d} $$ $$B = 2 \overrightarrow{d} \cdot (\overline{o} - \overline{C}) $$ $$C = (\overline{o} - \overline{C}) \cdot (\overline{o} - \overline{C}) - r^2 $$
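A direct implementation of the quadratic, returning the smallest positive root or a negative value on a miss (the function name is illustrative):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Hit time of ray o + d*t against the sphere with center c and radius r,
// using the A, B, C coefficients derived above.
double sphereHitTime(const Vec3& o, const Vec3& d, const Vec3& c, double r) {
    Vec3 oc = sub(o, c);                        // o - C
    double A = dot(d, d);
    double B = 2.0 * dot(d, oc);
    double C = dot(oc, oc) - r * r;
    double disc = B*B - 4*A*C;
    if (disc < 0) return -1.0;                  // ray misses the sphere
    double s = std::sqrt(disc);
    double t1 = (-B - s) / (2*A);
    if (t1 > 0) return t1;                      // nearer root, in front of the origin
    double t2 = (-B + s) / (2*A);
    return (t2 > 0) ? t2 : -1.0;                // origin inside sphere, or behind it
}
```

With the sample scene's eye at (0, 0, 15) looking at a radius-2 sphere at the origin, the first hit is at t = 13, i.e., at the sphere's near surface z = 2.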
You should regularly commit your changes to your repo and occasionally push them to the github remote. Note that you must push to your remote to share updates with your partner. Ideally, commit changes at the end of every session of working on the project. You will be graded on the work that appears in your github repo by the project deadline.