Week 2: Shaders and the OpenGL Pipeline

A Qt/OpenGL application.

The core of an OpenGL program is first setting up the shaders (what do you want to do?) and the input data (what do you want the shaders to process?). Once we have sent the shader program and the geometry data to the GPU, a single small command, glDrawArrays, does the actual drawing.

  • loading, compiling, linking shaders

    • We write shaders in GLSL, e.g., vshader.glsl and fshader.glsl.

    • We send the shader source to the GPU to be compiled and linked into a shader program. Each program must have one vertex shader and one fragment shader.

  • set OpenGL flags, initial state

    • OpenGL is a state machine that provides scaffolding to support rendering images. To draw anything, it is essential that you provide shaders and models. But there is also additional state you can set in the initializeGL() method. One example is setting the background clear color.

      glClearColor(0.9,0.9,0.9,1.);
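The shader sources mentioned above can be quite short. As a sketch of what vshader.glsl and fshader.glsl might contain (the variable names and GLSL version here are assumptions, not the exact lab files):

```glsl
// vshader.glsl: a minimal pass-through vertex shader (sketch)
#version 330 core
in vec4 vPosition;            // filled from the currently bound buffer via the VAO
void main() {
    gl_Position = vPosition;  // forward the position to the rasterizer unchanged
}

// fshader.glsl: color every fragment the same solid color (sketch)
#version 330 core
out vec4 fragColor;           // final color written to the output buffer
void main() {
    fragColor = vec4(1.0, 0.0, 0.0, 1.0); // opaque red
}
```

In Qt, QOpenGLShaderProgram can compile and link a pair like this using addShaderFromSourceFile followed by link.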

Most of OpenGL is just setting state. For most of the functions, none of the effects are directly visible. You are simply asking OpenGL to set some internal variable and remember it for later use. When it comes time to actually draw, the draw command will be relatively simple, but it will leverage all the current state you set with prior OpenGL function calls. This may be a new way of thinking about things, but we will continue to emphasize this point throughout the course.

paintGL

  • Adjust OpenGL context to fit current window

    • Occasionally, the user might resize the window and we’d like OpenGL to adjust the rendering context to fit the new size.

      void MyOGLWidget::resizeGL(int w, int h) {
        /* map the width/height of the widget to the OpenGL context */
        glViewport(0, 0, w, h);
        shaderProgram->bind();
        shaderProgram->setUniformValue("iResolution", QVector2D(w,h));
        update(); /* repaint */
      }
  • Erase old data

    • OpenGL will write output color fragments to a buffer and display this buffer in the current context. If you are rendering multiple frames, you must explicitly erase the old frame using glClear. This will fill the output buffer with the current clear color that you most recently set with the glClearColor function.

      glClear(GL_COLOR_BUFFER_BIT);
  • Make program, vao active

    • We are now ready to draw. It’s a good idea to ensure your program and buffers are active. Since there is only one program and one buffer here, this step could probably be done once in initializeGL(), but for complex scenes we may toggle between multiple programs and buffers, so doing this in paintGL() is a good idea.

      vao->bind(); /* recall all bindings of geometry and layouts */
      shaderProgram->bind();
  • Draw!

    • All the prep work is done. To actually get things going on the GPU and see an output image, we have to call a draw function.

      glDrawArrays(GL_TRIANGLES, 0, numVertices);

      The syntax for glDrawArrays is glDrawArrays(primitive, offset, count). This function instructs OpenGL to run the vertex shader and draw the specified primitive (GL_TRIANGLES) using the data in the currently bound buffer, starting at the provided offset (0) and using count total vertices (numVertices, which is 3 here). In this example, we have just instructed OpenGL to draw one triangle. That seemed like a lot of work!

In, Out, Uniform

Your shader programs have their own syntax. They are written in a language called GLSL designed for running on the GPU. GLSL is not C/C++, though parts of it look similar. You cannot print from inside a shader, but we will see ways later this semester to use the shader output as a debugging tool.

While there are many C features you cannot use in GLSL, there are also GLSL features that are not supported by default in C/C++. One example is the vector and matrix types. These are built into GLSL, and once you get used to them, you tend to expect a vec3 type to be built into C or C++. There is no such thing by default, though libraries like Qt provide similar abstractions.

Note that like C/C++/Java, but unlike JavaScript or Python, you must declare the type of your variables inside GLSL. Additionally, global variables in shaders have an in, out, or uniform qualifier.

in qualifier

An in variable is received as input from a previous stage of the OpenGL pipeline. In the vertex shader, in variables get their values from GPU buffers. The VAO maps the layout of the buffer to the shader input variables, and the drawArrays call instructs the GPU to call the vertex shader a given number of times using elements from the currently bound buffer. You will often see the phrase or function bind in OpenGL. It simply means make this thing (buffer, program, texture) the currently active thing.

In the fragment shader, an in variable gets its value from the output of the rasterizer. Typically the rasterizer will interpolate a corresponding out variable across the primitive to compute the in variable. We don’t have any in variables in the fragment shader in this example, but we did in lab1. The vertex shader had texture coordinates as both in and out variables. The in texture coordinate came from a buffer and there was one texture coordinate for each corner of the geometry (one square). During the primitive assembly and rasterization step, these texture coordinates were interpolated between vertices to produce a new in texture coordinate for each fragment in the fragment shader. We will talk more about texturing and interpolation in the next few weeks.

out qualifier

An out qualifier passes the variable’s final value to the next stage of the OpenGL pipeline. In our first example, only the fragment shader has an out variable: the final color of our image. The next stage after the fragment shader is updating the image on the output buffer/screen/canvas.

Lab1 had an out texture coordinate from the vertex shader (see above). Output variables from the vertex shader are interpolated between vertices during the primitive assembly and rasterization step before going on to the fragment shader.

uniform qualifier

A single call to glDrawArrays starts many calls to the vertex shader program and fragment shader program in parallel. Each separate call will likely have different values for its in variables and produce different values for its out variables. But some variables can stay the same across all calls to the shaders in a single glDrawArrays call. We assign the uniform qualifier to these variables.

In lab1, we set the size of the image and size of the canvas as uniforms as these would not change on a per vertex/per fragment level.

Think of uniforms as applying to an entire shape or an entire scene instead of a single vertex or pixel.
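Putting the three qualifiers together, a hypothetical vertex/fragment shader pair in the spirit of lab1 might declare its variables as follows (the names are illustrative, not the exact lab1 code):

```glsl
// vertex shader (sketch): runs once per vertex
#version 330 core
in vec4 vPosition;        // per-vertex input, fed from a buffer via the VAO
in vec2 vTexCoord;        // per-vertex input: texture coordinate at this corner
out vec2 texCoord;        // handed to the rasterizer, interpolated per fragment
uniform vec2 iResolution; // same value for every vertex in this draw call
void main() {
    texCoord = vTexCoord;
    gl_Position = vPosition;
}

// fragment shader (sketch): runs once per fragment
#version 330 core
in vec2 texCoord;   // interpolated by the rasterizer across the primitive
out vec4 fragColor; // final color, passed on to the output buffer
void main() {
    fragColor = vec4(texCoord, 0.0, 1.0); // visualize the interpolated coordinate
}
```

Note that the out texCoord in the vertex shader and the in texCoord in the fragment shader are matched by name; the rasterizer sits between them.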

Reading

Having completed an overview of a basic Qt/OpenGL program, we are almost ready to take the big jump to 3D and do bigger, more exciting things. So far, you have been manipulating 2D vector geometry and 3D colors, but before we get too far, it will be good to review some linear algebra. Before Friday’s class, please read the following sections from the Immersive Linear Algebra website

  • Chapter 1: Introduction

  • Chapter 2: Vectors (Sections 2.1-2.5)

  • Chapter 3: Dot Product (Sections 3.1-3.3, Ex 3.6., 3.7)

We may not get to the dot product stuff on Friday. We’ll eventually read parts of chapters 4 and 6. To guide your reading and some discussion on Friday, please know the mathematical/geometric definition of the following terms:

  • Point

  • Vector

  • Basis

The term vector gets overused in computer science, but for graphics, it will be important at times to distinguish between something like a generic type vec3 and the geometric concept of a vector. To help guide that conversation, think about the following operations and decide on the output type of the result. Some operations may not be valid geometrically.

  • point + vector

  • float * vector

  • point + point

  • vector + vector

  • vector - vector

  • point - point

Finally, think about the GLSL or Qt type you would use to store each of these geometric objects (assume you have a 3D basis). Are any of the operations you declared invalid above valid in GLSL?

In addition to the Immersive Linear Algebra site, the Learn OpenGL website has good summaries of Vector math, 3D Transform Matrices, and Coordinate Systems. The Learn OpenGL guide uses glm and C++ syntax for some of the code examples, but we will have access to similar features in Qt.

Dot Product

The dot product or scalar product is an operation between two vectors that returns a scalar or float quantity. In graphics, we use the dot product primarily for its geometric interpretation.

\[\vec{u}\cdot\vec{v} = \|\vec{u}\|\|\vec{v}\| \cos(\theta)\]

The notation \(\|\vec{u}\|\) means the length or norm of \(\vec{u}\).

Orthonormal Basis

The dot product is well defined for any vectors regardless of basis (a vector can have an abstract representation without a basis), but in many graphics contexts, we specify a vector by its coefficients in a basis. Typically, the bases we choose in graphics are orthonormal, meaning that all vectors forming the basis have unit length and are perpendicular to each other. In this common case, we can express the dot product of two vectors in terms of their coefficients using the formula:

\[\vec{u}\cdot\vec{v}=(u_x,u_y,u_z)\cdot(v_x,v_y,v_z)=u_xv_x+u_yv_y+u_zv_z\]

In the same orthonormal basis, the length of a vector (squared) can be computed using a dot product as well.

\[\vec{u}\cdot\vec{u}=u_x^2+u_y^2+u_z^2 = \|\vec{u}\|^2\]
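To make the component formulas concrete, here is a minimal sketch in plain C++; the Vec3 struct is a stand-in for QVector3D, which provides the same operations as QVector3D::dotProduct and length() in real code:

```cpp
#include <cassert>
#include <cmath>

// A bare-bones stand-in for QVector3D: three coefficients in an orthonormal basis.
struct Vec3 { double x, y, z; };

// Component formula for the dot product (valid in an orthonormal basis).
double dot(Vec3 u, Vec3 v) {
    return u.x * v.x + u.y * v.y + u.z * v.z;
}

// Length of a vector: the square root of its dot product with itself.
double length(Vec3 u) {
    return std::sqrt(dot(u, u));
}
```

For example, dot({1,2,2}, {1,2,2}) is 1 + 4 + 4 = 9, so length({1,2,2}) is 3.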

Applications

We will often use the dot product to

  1. compute the angle between two vectors:

    \[\cos(\theta) = \frac{\vec{u}\cdot\vec{v}}{\|\vec{u}\|\|\vec{v}\|}\]
  2. compute the length of a vector:

    \[\|\vec{u}\| = \sqrt{\vec{u}\cdot\vec{u}}\]
  3. normalize a vector:

    \[\hat{u} = \frac{\vec{u}}{\|\vec{u}\|}, \hat{u} \cdot \hat{u} = 1\]
  4. compute the projection of a vector \(\vec{u}\) onto \(\vec{v}\)

    \[\vec{u}_\parallel = \frac{\vec{u}\cdot\vec{v}}{\|\vec{v}\|} \hat{v} = \frac{\vec{u}\cdot\vec{v}}{\|\vec{v}\|^2} \vec{v}\]
  5. compute the portion of a vector \(\vec{u}\) perpendicular to \(\vec{v}\)

    \[\vec{u}_\perp = \vec{u} - \vec{u}_\parallel = \vec{u} - \frac{\vec{u}\cdot\vec{v}}{\|\vec{v}\|^2} \vec{v}\]
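Applications 4 and 5 translate directly into code. A sketch, again using a stand-in Vec3 struct rather than QVector3D:

```cpp
#include <cassert>

// Stand-in for QVector3D: coefficients in an orthonormal basis.
struct Vec3 { double x, y, z; };

double dot(Vec3 u, Vec3 v)    { return u.x * v.x + u.y * v.y + u.z * v.z; }
Vec3  scale(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
Vec3  sub(Vec3 u, Vec3 v)     { return {u.x - v.x, u.y - v.y, u.z - v.z}; }

// Projection of u onto v: (u . v / |v|^2) v.
Vec3 project(Vec3 u, Vec3 v) {
    return scale(dot(u, v) / dot(v, v), v);
}

// Portion of u perpendicular to v: u minus its projection onto v.
Vec3 reject(Vec3 u, Vec3 v) {
    return sub(u, project(u, v));
}
```

Projecting u = (3,4,0) onto v = (1,0,0) gives (3,0,0), and the perpendicular part (0,4,0) has zero dot product with v, as expected.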

Cross Product

The cross product or vector product is only well defined for 3D vectors. Given two vectors \(\vec{u}\) and \(\vec{v}\) in 3D, \(\vec{u}\times\vec{v}\) computes a third vector:

\[\vec{u}\times\vec{v} = \|\vec{u}\|\|\vec{v}\|\sin(\theta)\hat{n}\]

where \(\hat{n}\) is a normal unit vector perpendicular to the plane formed by \(\vec{u}\) and \(\vec{v}\). In 3D there are two choices for which way this normal vector could point. The direction of \(\hat{n}\) for the cross product is determined by the right hand rule, even if you are left handed.

Right hand rule

To determine the direction of \(\hat{n}\) in \(\vec{u}\times\vec{v}\), point the fingers of your right hand in the direction of \(\vec{u}\) and curl them towards your palm in the direction of \(\vec{v}\). Your right thumb will point in the direction of \(\hat{n}\). If you find yourself trying to bend your fingers backwards, rotate your wrist, which will rotate your thumb.

Note that with the right hand rule, you can easily verify that \(\vec{u}\times\vec{v} = -\vec{v}\times\vec{u}\). If you want to get meaningful results out of a cross product, you will need to pay attention to the order of the multiplication.

Orthonormal basis computation

Much like the dot product case, the cross product is well defined regardless of basis, but given an orthonormal basis, we can define the components of the cross product as:

\[\vec{u}\times\vec{v}=(u_yv_z-u_zv_y,u_zv_x-u_xv_z,u_xv_y-u_yv_x)\]

You will not need to know this for the course. QVector3D::crossProduct or some other library method will compute this for you.
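If you are curious, the component formula translates directly into code. A plain C++ sketch, using a stand-in Vec3 struct rather than QVector3D:

```cpp
#include <cassert>

// Stand-in for QVector3D: coefficients in an orthonormal basis.
struct Vec3 { double x, y, z; };

double dot(Vec3 u, Vec3 v) { return u.x * v.x + u.y * v.y + u.z * v.z; }

// Component formula for the cross product in an orthonormal basis.
Vec3 cross(Vec3 u, Vec3 v) {
    return { u.y * v.z - u.z * v.y,
             u.z * v.x - u.x * v.z,
             u.x * v.y - u.y * v.x };
}
```

With this, cross({1,0,0}, {0,1,0}) returns (0,0,1), matching the right-hand rule; swapping the arguments flips the sign, and the result is always perpendicular to both inputs (both dot products are zero).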