Week 12: Framebuffers, Recap, Next steps

Monday

In lab today you’ll continue to work on your final project. A progress report is due Wednesday. Demos will be next Tuesday, 8 December from 9am to Noon. I will post a schedule soon.

Spring pre-registration is tomorrow and Wednesday. Enrollment pressures are expected to be high again for CS courses. Please pre-register if you are interested in taking a CS course in the Spring.

Wednesday

Today we’ll explore how to set up framebuffers in a WebGL2 context with TWGL. Recall from Monday that a framebuffer allows us to redirect the output of the fragment shader to a buffer on the GPU that we can later use as a texture. This can be used for any number of effects, but it is an especially helpful way of combining model space pipeline rendering methods with screen space methods similar to those of Shadertoy.

The basic steps in setting up a frame buffer application are:

  1. Allocate space for a texture on the GPU

  2. Connect the texture to an active framebuffer.

  3. Set the size of the viewport to the size of the texture.

  4. Render the scene you want to draw to the framebuffer as you would normally draw in OpenGL. The scene will be saved to the framebuffer/texture on the GPU; nothing will display on the screen.

  5. Unbind the frame buffer so you can draw to the canvas/screen again.

  6. Make a second rendering pass, but this time you can use the texture data from step 4 as part of your rendering.

Repeating steps 3 through 6 turns this two-pass framebuffer rendering into an animation loop, as sketched below.
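Here is a minimal sketch of that loop. The helpers drawFirstPass and drawSecondPass are hypothetical stand-ins for the two passes described below, and frameBufferInfo is the object returned by twgl.createFramebufferInfo (also described below).

    // drawFirstPass/drawSecondPass are hypothetical helpers for the two passes.
    function render(time) {
      // Steps 2-3: bind the framebuffer; TWGL sets the viewport to its size.
      twgl.bindFramebufferInfo(gl, frameBufferInfo);
      drawFirstPass();               // step 4: renders off screen
      // Step 5: unbind so we draw to the canvas again.
      twgl.bindFramebufferInfo(gl);
      drawSecondPass();              // step 6: can sample the saved texture
      requestAnimationFrame(render);
    }
    requestAnimationFrame(render);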

To keep the demo relatively simple, we will have two small programs, one for each of our rendering passes. In the first rendering pass, we will draw a triangle. In the second rendering pass, we will draw a square that fills the canvas. The square will have texture coordinates so we can sample from the texture/framebuffer.
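A sketch of that setup, assuming the shader source strings (triangleVS, triangleFS, quadVS, quadFS) are defined elsewhere, e.g., in template literals:

    // One program per rendering pass (shader sources assumed defined above).
    const trianglePass = twgl.createProgramInfo(gl, [triangleVS, triangleFS]);
    const quadPass = twgl.createProgramInfo(gl, [quadVS, quadFS]);

    // A square covering the canvas in clip space (two triangles), with
    // texture coordinates for sampling the framebuffer's texture.
    const quadBufferInfo = twgl.createBufferInfoFromArrays(gl, {
      position: { numComponents: 2,
                  data: [-1, -1,  1, -1,  -1, 1,  -1, 1,  1, -1,  1, 1] },
      texcoord: { numComponents: 2,
                  data: [ 0,  0,  1,  0,   0, 1,   0, 1,  1,  0,  1, 1] },
    });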

TWGL can handle creating most of the frame buffer and texture objects for us with twgl.createFramebufferInfo. We just need to specify the size of the buffer and what kind of buffers or attachments we want. A common use case is to have a color buffer and a depth buffer. The texture will contain the data from the color buffer.
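For example (a sketch; the 512x512 size is an arbitrary choice):

    // With no attachment list, TWGL's defaults give an RGBA color texture
    // plus a depth/stencil renderbuffer, matching the common case above.
    const frameBufferInfo = twgl.createFramebufferInfo(gl, undefined, 512, 512);

    // Or name the attachments explicitly:
    const attachments = [
      { format: gl.RGBA },          // color buffer, backed by a sampleable texture
      { format: gl.DEPTH_STENCIL }, // depth (and stencil) renderbuffer
    ];
    const fbi = twgl.createFramebufferInfo(gl, attachments, 512, 512);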

To render to the framebuffer, we first make it active with a call to twgl.bindFramebufferInfo(gl, frameBufferInfo). To render to the screen again, we call twgl.bindFramebufferInfo(gl) with no framebuffer. Each pass also needs the right viewport (the output size for the fragment shader): TWGL automatically resizes the viewport to the size of the texture on bindFramebufferInfo, and we call twgl.resizeCanvasToDisplaySize(gl.canvas) to revert back to the full canvas size for the second pass.
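In the render loop, that sequence looks roughly like this sketch:

    // First pass: render the triangle into the framebuffer's texture.
    twgl.bindFramebufferInfo(gl, frameBufferInfo); // viewport -> texture size
    // ... clear and draw the triangle scene here ...

    // Second pass: render the textured square to the canvas.
    twgl.resizeCanvasToDisplaySize(gl.canvas);     // keep canvas at display size
    twgl.bindFramebufferInfo(gl);                  // viewport -> full canvas
    // ... clear and draw the full-canvas square here ...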

In the second pass, we can use the texture associated with the frameBufferInfo object as our u_texture sampler. This is stored as frameBufferInfo.attachments[0] in TWGL, though this is poorly documented; setting some breakpoints and reading the TWGL source helped figure this out.
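A sketch of the second pass, reusing the hypothetical quadPass program and quadBufferInfo geometry from the earlier sketch and assuming the fragment shader declares a sampler2D named u_texture:

    // quadPass/quadBufferInfo come from the hypothetical setup sketch above.
    gl.useProgram(quadPass.program);
    twgl.setBuffersAndAttributes(gl, quadPass, quadBufferInfo);
    // attachments[0] is the color texture rendered in the first pass.
    twgl.setUniforms(quadPass, { u_texture: frameBufferInfo.attachments[0] });
    twgl.drawBufferInfo(gl, quadBufferInfo);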

Friday

You should have received a link for a course survey from Qualtrics. Your feedback is appreciated.

Demos Tuesday 8 December

On Tuesday during the scheduled final exam time, groups will present a short demo of their final projects. There are 11 groups in total, so with 15-minute slots we can fit all groups in the three-hour window. To stay on schedule, I recommend aiming for a 10-12 minute demo plus a few minutes for questions, comments, and transitions between groups. During your demo, please give a brief overview of your project, including the core components you implemented and the key concepts from the course you built on. If your demo runs in real time, you can share it live. For long-running renders, please share some screenshots of the final product along with an idea of how long the rendering takes.

I ask that students stay for all presentations in their half (9:00-10:30 or 10:30-Noon), but you are welcome to view more demos if you would like. Student questions are encouraged. Please be aware that, much like your own project, these demos are works in progress; the final project does not have to be in its complete form until 15 December. Presenters should give an indication of how complete their project is during the demo.

Time   Group              Topic
-----  -----------------  ------------------------------------------
9:00   Runze              CUDA Raytrace
9:15   E.K.               Raymarch Optimizations
9:30   Nicolas            Water Sailing Game
9:45   Elizabeth & Sam    Spotlight Search Game
10:00  Rohan & Bellara    Data Viz of Swarthmore Trash Accumulation
10:15  Joey & Saul        3D Raymarched Metaballs
10:30  Rachel & Erin      Modeling Willets
10:45  George & Megan     Cloth Modeling
11:00  Geoffrey & Aron    First Person Maze
11:15  Zeus & Youssef     Terrains with Noise
11:30

Review

  • Graphics: it’s all linear algebra (and a few sines/cosines)

  • It’s all parallel. This semester, the parallelism was mostly transparent to us.

  • It’s all programmable. Vertex and fragment shaders are the main entry points, but new programmable pieces are being added.

Future Directions

  • Digital to Physical: 3D modeling, 3D printing, CNC milling

  • Physical to Digital: remote sensing to 3D models, desktop laser scanning, lidar

  • More parallelism

  • CUDA/OpenCL/TensorFlow: use the parallel hardware and its linear algebra optimizations to solve problems not even related to graphics.

  • Data visualization: Observable. With your JavaScript/web experience, you are in a great position to explore these opportunities further.

  • Game design: Unity

  • Rapid visualization development: Three.js

  • Real time raytracing