Week 2: Shaders and the OpenGL Pipeline

The core of an OpenGL program consists of setting up the shaders (what do you want to do?) and the input data (what do you want the shaders to process?). Once we have sent the shader program and the geometry data to the GPU, we issue one small command, gl.drawArrays, to actually do the drawing.

Week 1 Recap

  • loading, compiling, linking shaders

    • We write shaders in GLSL; their source is stored in files like first_vs.js and first_fs.js.

    • We send the shader source to the GPU to be compiled and linked into a shader program. Each program must have one vertex shader and one fragment shader.

      programInfo = twgl.createProgramInfo(gl, [vshader, fshader]);
  • creating geometry data on the CPU

    const triangle = {
      position: {numComponents: 2,
                 data: [-1, -1, 1, -1, 0, 1] }
    };
  • creating GPU buffer

    bufferInfo = twgl.createBufferInfoFromArrays(gl, triangle);
  • setup VAO

     vao = twgl.createVAOFromBufferInfo(
        gl, programInfo, bufferInfo);
  • set OpenGL flags, initial state

    • OpenGL is simply a scaffold. To draw anything, it is essential that you provide shaders and models. But there is also additional state you can set in the init() function. One example is setting the background clear color.

      gl.clearColor(0.9,0.9,0.9,1.);

Most of OpenGL is just setting state. For most of the functions, none of the effects are directly visible. You are simply asking OpenGL to set some internal variable and remember it for later use. When it comes time to actually draw, the draw command will be relatively simple, but it will leverage all the current state you set with prior OpenGL function calls. This may be a new way of thinking about things, but we will continue to emphasize this point throughout the course.

Render

  • Adjust OpenGL context to fit current window

    • Occasionally, the user might resize the window and we’d like OpenGL to adjust the rendering context to fit the new size.

      twgl.resizeCanvasToDisplaySize(gl.canvas);
      gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  • Erase old data

    • OpenGL will write output color fragments to a buffer and display this buffer in the current context. If you are rendering multiple frames, you must explicitly erase the old frame using gl.clear. This will fill the output buffer with the current clear color that you most recently set with the gl.clearColor function.

      gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  • Make program, vao active

    • We are now ready to draw. It’s a good idea to ensure your program and buffers are active. Since there is only one program and one buffer here, this step could probably be done once in init, but complex scenes may toggle between multiple programs and buffers, so keeping this step in render is a good habit.

      gl.useProgram(programInfo.program);
      gl.bindVertexArray(vao);
  • Draw!

    • All the prep work is done. To actually get things going on the GPU and see an output image, we have to call a draw function.

      gl.drawArrays(gl.TRIANGLES, 0, 3);

      The syntax for drawArrays is drawArrays(primitive, offset, count). This call instructs OpenGL to run the vertex shader and draw the specified primitive, gl.TRIANGLES, using the data in the currently bound buffer, starting at the provided offset (0) and consuming count (3) vertices in total. In this example, we have just instructed OpenGL to draw one triangle. That seemed like a lot of work!

Monday Goals

  • Variable types in Vertex/Fragment shader programs

  • in

  • out

  • uniform

  • requestAnimationFrame, render, and the animation loop

  • adding uniforms

In, Out, Uniform

Your shader programs have their own syntax. They are written in a language called GLSL designed for running on the GPU. GLSL is not JavaScript or C, though parts of it look similar. You cannot print from inside a shader, but we will see ways later this semester to use the shader output as a debugging tool.

While there are many JavaScript features you cannot use in GLSL, there are also GLSL features that are not common or supported by default in JavaScript. One example is the vector and matrix types. These are built into GLSL, and once you get used to them, you tend to expect a vec3 type built into JavaScript or C++. There is no such thing by default, though libraries like TWGL (its v3 module) provide similar abstractions.

Note that like C/C++/Java, but unlike JavaScript or Python, you must declare the type of your variables inside GLSL. Additionally, global variables in shaders have an in, out, or uniform qualifier.

in qualifier

An in variable is received as input from a previous stage of the OpenGL pipeline. In the vertex shader, in variables get their values from GPU buffers. The VAO maps the layout of the buffer to the shader input variables, and the drawArrays call instructs the GPU to call the vertex shader a given number of times using elements from the currently bound buffer. You will often see the phrase or function bind in OpenGL. It simply means make this thing (buffer, program, texture) the currently active thing.

In the fragment shader, an in variable gets its value from the output of the rasterizer. Typically the rasterizer will interpolate a corresponding out variable across the primitive to compute the in variable. We don’t have any in variables in the fragment shader in this example, but we did in lab1. The vertex shader had texture coordinates as both in and out variables. The in texture coordinate came from a buffer and there was one texture coordinate for each corner of the geometry (one square). During the primitive assembly and rasterization step, these texture coordinates were interpolated between vertices to produce a new in texture coordinate for each fragment in the fragment shader. We will talk more about texturing and interpolation in the next few weeks.
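The interpolation idea described above can be sketched in plain JavaScript. This is only an illustration of what the rasterizer does conceptually, not WebGL’s actual implementation; the barycentric weights (w0, w1, w2) sum to 1 and select a point inside the triangle.

```javascript
// Sketch: how an `out` variable from the vertex shader becomes an
// interpolated `in` value in the fragment shader.
function interpolate(v0, v1, v2, w0, w1, w2) {
  return v0.map((_, i) => w0 * v0[i] + w1 * v1[i] + w2 * v2[i]);
}

// Texture coordinates at the three corners of a triangle.
const uv0 = [0, 0], uv1 = [1, 0], uv2 = [0, 1];

// At a vertex, the weights are (1, 0, 0): you get that vertex's value back.
console.log(interpolate(uv0, uv1, uv2, 1, 0, 0)); // [ 0, 0 ]

// At the centroid, each vertex contributes equally.
console.log(interpolate(uv0, uv1, uv2, 1 / 3, 1 / 3, 1 / 3));
```

A fragment near one corner gets weights close to (1, 0, 0), so its interpolated texture coordinate is close to that corner’s value.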

out qualifier

An out qualifier passes the variable’s final value to the next stage of the OpenGL pipeline. In our first example, only the fragment shader has an out variable: the final color of our image. The next stage after the fragment shader is updating the image on the output buffer/screen/canvas.

Lab1 had an out texture coordinate from the vertex shader (see above). Output variables from the vertex shader are interpolated between vertices during the primitive assembly and rasterization step before going on to the fragment shader.

uniform qualifier

Making one call to gl.drawArrays starts many calls to the vertex shader program and fragment shader program in parallel. Each call will likely have different values for its in variables and different values for its out variables. But some variables can stay the same across all calls to the shaders in a single drawArrays call. We give these values the uniform qualifier.

In lab1, we set the size of the image and the size of the canvas as uniforms, since these do not change at the per-vertex or per-fragment level.

Think of uniforms as applying to an entire shape or an entire scene instead of a single vertex or pixel.

Adding a uniform to control the color

The color of our triangle is set directly in the shader. To change it, we would need to edit the shader and recompile the shader program. Let’s add a uniform to our triangle program to control the color of our triangle.

We’ll start on the CPU side in first.js. We first add a JavaScript object in our global variables section before main() to store all the uniforms we want.

let uniforms = {};

We’ll assign an initial color value in init().

uniforms.u_color = [1., 0., 0.];

Note: we can add properties to JavaScript objects dynamically by assigning a name and a value. Coming from a Python perspective, I tend to think of these objects sort of like dictionaries, though there are some differences.
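For example (u_time here is a made-up second uniform, just to show the syntax; it is not part of our program):

```javascript
// JavaScript objects accept new properties at any time, much like
// Python dictionaries.
let uniforms = {};
uniforms.u_color = [1, 0, 0];  // dot notation
uniforms["u_time"] = 0;        // bracket notation also works
console.log(Object.keys(uniforms)); // [ 'u_color', 'u_time' ]
```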

So far, we have just set some information on the CPU. We need to add support on the GPU and then send the uniform data on the CPU to the GPU.

updating the fragment shader

In first_fs.js add a new uniform vec3 u_color; outside main(). This name should match the name we set in init() on the CPU side.

In the body of main, replace the one line with code to use the uniform color as our base color.

fragColor = vec4(u_color,1.);

Connecting the CPU and GPU uniforms

If we run the program now, we will likely get a black triangle instead of a red one, because our shader reads a uniform value that has never been set on the GPU side (unset uniforms default to zero). We must copy the uniform values from the CPU to the GPU prior to drawing.

In render before drawArrays, add the line:

twgl.setUniforms(programInfo, uniforms);

Note: TWGL has a programInfo object which contains a program property. The core gl. commands typically expect the program ID, while TWGL often works best with the programInfo object. WebGL knows nothing about programInfo objects as they are part of the TWGL library. Check the TWGL docs for how to use various TWGL functions.

I strongly encourage you to use TWGL when possible. It has the right level of abstraction for this course. You are welcome to fall back to pure WebGL2 at any time, but it gets waaaaaay more verbose.

Wednesday

  • Basic Animation

  • Orientation of Polygons

  • Face Culling

  • Adding more vertex attributes

Animation

Adding a uniform to the fragment shader allows us to control the color on the CPU side through the twgl.setUniforms function.

JavaScript has a mechanism for requesting periodic redraws of the browser window: window.requestAnimationFrame. You will see a call to this function in main to do the initial draw, and inside render to allow animation. Each time the window is ready for a new frame, it calls the callback function, in this case render, and passes in the time in milliseconds since the page started running. We can use this time variable to perform some animation.

In render, prior to setting the uniforms with twgl, change the u_color value on the CPU side.

uniforms.u_color[0] = 0.5*(1+Math.sin(2*time));

Here we are changing just the first (red) component of u_color to vary with a sine wave.

Refresh the demo and you should see a triangle that repeatedly fades between red and black.
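The fade happens because 0.5*(1 + sin(...)) remaps the sine wave’s [-1, 1] output into the valid color range [0, 1]. A standalone sketch of that expression (setting units of time aside):

```javascript
// The red channel as a function of time, as in render() above.
// 0.5 * (1 + sin(...)) shifts and scales sin's [-1, 1] range into [0, 1].
function redChannel(time) {
  return 0.5 * (1 + Math.sin(2 * time));
}

console.log(redChannel(0));           // 0.5 (halfway between black and red)
console.log(redChannel(Math.PI / 4)); // 1 (full red)
```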

Lab2 will ask you to explore uniforms and animation more on the vertex and fragment shader side.

Polygon orientation

You may find that OpenGL is picky about the order in which you specify the vertices of a shape. An improper order may result in a shape being drawn improperly or not at all. This is usually an issue of polygon orientation or face winding. Consider walking around the boundary of a triangle. You can traverse the vertices in a clockwise or counter clockwise order. In OpenGL, it is assumed that all polygons have their front face oriented in the same direction. In 2D, there really isn’t much of a front and a back, but consider an analog clock in 3D. Usually when you look at the clock’s front face, the hands move in a clockwise motion. However, if the clock were semi-transparent and you could look at it from behind, you would say the hands are moving in a counter clockwise motion.

When we move to 3D, we will typically focus on the front facing polygons. If we have a solid sphere, there is no sense rendering the portions of the sphere that are facing away from us. Using the orientation test, OpenGL can automatically cull these triangles. The culling happens during the primitive assembly stage and discards triangles that face the wrong way. As with any feature in OpenGL, face culling can be customized, enabled, or disabled.

By default, culling is disabled, and front faces are assumed to have a counter clockwise orientation. By calling gl.enable(gl.CULL_FACE);, you will instruct OpenGL to discard faces with a clockwise orientation.

We will revisit this again when we transition to 3D, but if you are not seeing shapes in lab2, try commenting out the call to enable gl.CULL_FACE and see if your faces are getting discarded. If they are, try to fix the orientation of your polygon faces so they draw properly with culling enabled.
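Winding can be checked with a signed-area test. Here is a sketch in plain JavaScript; this mirrors the orientation test OpenGL performs internally, but signedArea is our own helper, not a WebGL function.

```javascript
// Signed area of a 2D triangle: positive means counter clockwise
// (the default front face), negative means clockwise.
function signedArea([x0, y0], [x1, y1], [x2, y2]) {
  return 0.5 * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0));
}

// The triangle from our first program: (-1,-1), (1,-1), (0,1).
console.log(signedArea([-1, -1], [1, -1], [0, 1]) > 0); // true: counter clockwise

// Swapping two vertices flips the winding.
console.log(signedArea([-1, -1], [0, 1], [1, -1]) > 0); // false: clockwise
```

If culling is enabled and a triangle disappears, this is the kind of test that tells you whether reordering its vertices will bring it back.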


Reading for Friday

Having completed an overview of a basic WebGL2 program, we are almost ready to take the big jump to 3D and do bigger, more exciting things. So far, you have been manipulating 2D vector geometry and 3D colors, but before we get too far, it will be good to review some linear algebra. Before Friday’s class, please read the following sections from the Immersive Linear Algebra website:

  • Chapter 1: Introduction

  • Chapter 2: Vectors (Sections 2.1-2.5)

  • Chapter 3: Dot Product (Sections 3.1-3.3, Ex 3.6., 3.7)

We may not get to the dot product stuff on Friday. We’ll eventually read parts of chapters 4 and 6. To guide your reading and some discussion on Friday, please know the mathematical/geometric definition of the following terms:

  • Point

  • Vector

  • Basis

The term vector gets overused in computer science, but for graphics, it will be important to distinguish at times between something like a generic type vec3 and the geometric concept of vector. To help guide that conversation, think about the following operations and decide on the output type of the result. Some operations may not be valid geometrically.

  • point + vector

  • float * vector

  • point + point

  • vector + vector

  • vector - vector

  • point - point

Finally, think about the GLSL or TWGL type you would use to store each of these geometric objects (assume you have a 3D basis). Are any of the operations you declared invalid above valid in GLSL?

Friday

In addition to the Immersive Linear Algebra site, the Learn OpenGL website has good summaries of Vector math, 3D Transform Matrices, and Coordinate Systems. The Learn OpenGL guide uses glm and C++ syntax for some of the code examples, but we will have access to similar features in TWGL.

Transforms

In 3D, we will manipulate geometry including points and vectors primarily using 4x4 matrix multiplication. Some common transformations are listed below.

Translation

Translation in three dimensions can be expressed as a matrix-vector (4x1 matrix) multiplication using 4D homogeneous coordinates. What happens if you apply a translation to a geometric vector with this transform?

\[T(t_x, t_y, t_z) \cdot P = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \cdot \begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \\ \end{pmatrix} = \begin{pmatrix} p_x + t_x \\ p_y + t_y\\ p_z + t_z\\ 1 \\ \end{pmatrix}\]
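One way to explore the question above in plain JavaScript. In practice twgl.m4 builds these matrices for us; this row-major transform helper exists only for illustration. A point carries w = 1, so the translation column contributes; a geometric vector carries w = 0, so translation leaves it unchanged.

```javascript
// Multiply a 4x4 matrix (16 numbers, row-major) by a 4x1 column [x, y, z, w].
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      out[r] += m[4 * r + c] * v[c];
  return out;
}

const T = [1, 0, 0, 5,
           0, 1, 0, 0,
           0, 0, 1, 0,
           0, 0, 0, 1]; // translate by (5, 0, 0)

console.log(transform(T, [1, 2, 3, 1])); // point:  [ 6, 2, 3, 1 ]
console.log(transform(T, [1, 2, 3, 0])); // vector: [ 1, 2, 3, 0 ] -- unmoved
```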

Scale

\[S(s_x, s_y, s_z) \cdot P = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \cdot \begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \\ \end{pmatrix} = \begin{pmatrix} s_x \cdot p_x \\ s_y \cdot p_y \\ s_z \cdot p_z \\ 1 \\ \end{pmatrix}\]

Rotation

There are several ways to do rotation. We will primarily focus on methods that rotate about one of the axes of the basis. For example, a rotation around the \(z\)-axis is helpful, even for 2D applications, as it only modifies the \(x\) and \(y\) coordinates.

\[R_z(\theta) \cdot P = \begin{bmatrix} \cos \theta & -\sin \theta & 0 & 0 \\ \sin \theta & \cos \theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \cdot \begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \\ \end{pmatrix} = \begin{pmatrix} p_x \cdot \cos \theta - p_y \cdot \sin \theta \\ p_x \cdot \sin \theta + p_y \cdot \cos \theta \\ p_z \\ 1 \\ \end{pmatrix}\]
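A quick numeric check of \(R_z\) in plain JavaScript (ignoring the homogeneous coordinate, which rotation leaves unchanged):

```javascript
// Apply the upper-left 3x3 of R_z(theta) to a point [px, py, pz].
function rotateZ(theta, [px, py, pz]) {
  const c = Math.cos(theta), s = Math.sin(theta);
  return [px * c - py * s, px * s + py * c, pz];
}

const p = rotateZ(Math.PI / 2, [1, 0, 0]);
// p is approximately [0, 1, 0]: a 90 degree rotation carries +x onto +y.
```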

Consider looking down the \(+z\)-axis. Is the rotation clockwise or counter-clockwise? How can you tell?

The matrices for \(R_x\) and \(R_y\) are given below.

\[R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta & 0 \\ 0 & \sin \theta & \cos \theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \ \ R_y (\theta) = \begin{bmatrix} \cos \theta & 0 & \sin \theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin \theta & 0 & \cos \theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\]

Using matrix transforms

The good news for this course is that you will almost never have to create one of these matrices by hand on either the CPU or the GPU. The TWGL m4 module provides helper functions to create and manipulate 4x4 matrices and common graphics transforms.

What you should be able to do in this course is:

  • Describe what effect a transform has on some geometry. Sketch a small scene before and after the transform.

  • Understand that the order in which you apply transforms matters. Matrix multiplication is not commutative, i.e., \(AB \neq BA\) in general for matrices \(A\) and \(B\).

  • Know what frame/basis you are working in and how to convert between frames/bases.
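The second point can be seen with a tiny sketch composing a 90° rotation and a translation in 2D. These one-line helpers are illustrative only, not TWGL functions.

```javascript
// 2D part of R_z(90 degrees): (x, y) -> (-y, x).
const rot = ([x, y]) => [-y, x];
// Translate by (5, 0).
const move = ([x, y]) => [x + 5, y];

console.log(rot(move([1, 1])));  // rotate after translating: [ -1, 6 ]
console.log(move(rot([1, 1])));  // translate after rotating: [ 4, 1 ]
```

The two results differ, so the order in which the transforms are applied changes where the geometry ends up.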

Reading

Before Monday’s class, please read the following sections from the Immersive Linear Algebra website

  • Chapter 6: The Matrix (6.1-6.4, 6.8)

  • Chapter 3: Dot Product (Sections 3.1-3.3, Ex 3.6., 3.7)