Rendering an Image of a 3D Scene

Distributed under the terms of the CC BY-NC-ND 4.0 License.

  1. It All Starts with a Computer and a Computer Screen
  2. And It Follows with a 3D Scene
  3. An Overview of the Rendering Process: Visibility and Shading
  4. Perspective Projection
  5. The Visibility Problem
  6. A Light Simulator
  7. Light Transport
  8. Shading
  9. Summary and Other Considerations About Rendering

Summary and Other Considerations About Rendering

Summary

We will not reiterate everything that has already been discussed in previous chapters. Instead, this chapter recaps the essential terms and concepts from this lesson and touches on a few considerations that earlier chapters left out.

One aspect not discussed in previous chapters is the difference between rendering on the CPU versus the GPU. It's crucial not to equate the GPU exclusively with real-time rendering, or the CPU with offline rendering. Real-time and offline rendering have precise meanings that are unrelated to the CPU or GPU. We speak of real-time rendering when a scene is rendered at 24 to 120 frames per second (fps), with 24 to 30 fps being the minimum required to create the illusion of movement. A typical video game runs at about 60 fps. Rates between 1 and 24 fps are considered interactive rendering. When rendering a frame takes anywhere from a few seconds to minutes or hours, it falls under offline rendering.

Achieving interactive or even real-time frame rates on the CPU is entirely feasible: rendering time depends primarily on the scene's complexity, and even on the GPU, a highly complex scene may take more than a few seconds to render. Our point is that associating the GPU with real-time rendering and the CPU with offline rendering is a misconception. In this section's lessons, we will explore using OpenGL for GPU image rendering and implement both rasterization and ray-tracing algorithms on the CPU. A dedicated lesson will cover the advantages and disadvantages of GPU versus CPU rendering.
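
To put numbers on these categories, here is a minimal C++ sketch, purely illustrative and not code from the lessons: it converts the time spent on a single frame into a frame rate and labels it using the thresholds quoted above (the classifyRendering function and its cutoffs are ours, chosen to mirror the figures in this paragraph).

// Minimal, purely illustrative sketch (not code from the lessons): convert the
// time spent rendering one frame into a frame rate and label it with the
// categories quoted above. The function name and cutoffs are ours.
#include <cstdio>

const char* classifyRendering(double secondsPerFrame)
{
    double fps = 1.0 / secondsPerFrame;
    if (fps >= 24.0) return "real-time";    // 24 fps and above
    if (fps >= 1.0)  return "interactive";  // between 1 and 24 fps
    return "offline";                       // more than a second per frame
}

int main()
{
    // A 60 fps video game leaves a budget of roughly 16.7 ms per frame.
    std::printf("60 fps -> %.1f ms per frame -> %s\n", 1000.0 / 60.0, classifyRendering(1.0 / 60.0));
    // A frame that takes two minutes to compute falls under offline rendering.
    std::printf("120 s per frame -> %s\n", classifyRendering(120.0));
    return 0;
}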

Another topic we will not cover in this section is the relationship between rendering and signal processing. Understanding this relationship is crucial but requires a solid foundation in signal processing, including Fourier analysis. We plan to introduce a series of lessons on these topics once we complete the basic section. Without a thorough understanding of the underlying theory, discussing this aspect of rendering might lead to confusion rather than clarity.

Figure 1: Simulation of depth of field (top) and motion blur (bottom) will also be among the topics covered.

With these concepts reviewed, you now have a preview of what to expect in the sections dedicated to rendering, especially those on light transport, ray tracing, and shading. The light transport section will discuss simulating global illumination effects. The ray-tracing section will delve into specific techniques like acceleration structures and ray differentials (it's okay if you're unfamiliar with these terms for now). In the shading section, we'll explore shaders and the mathematical models developed to simulate various material appearances.

We will also address engineering topics such as multi-threading, multi-processing, and utilizing hardware to accelerate rendering.
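
To give a flavor of what multi-threading a renderer involves, the sketch below is a minimal, purely illustrative example rather than code from the lessons (the gradient "shading" is only a placeholder): it splits the rows of the framebuffer across the available hardware threads with std::thread, so each thread writes to its own range of pixels and no synchronization is needed.

// Purely illustrative sketch: distribute the rows of an image across hardware
// threads. Real renderers typically work on tiles and use a work queue, but
// the principle, rendering independent pixels in parallel, is the same.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

int main()
{
    const uint32_t width = 640, height = 480;
    std::vector<float> framebuffer(width * height * 3, 0.0f);

    // Shade the rows [rowStart, rowEnd). The gradient stands in for whatever
    // the renderer (rasterizer or ray tracer) would actually compute.
    auto renderRows = [&](uint32_t rowStart, uint32_t rowEnd) {
        for (uint32_t y = rowStart; y < rowEnd; ++y) {
            for (uint32_t x = 0; x < width; ++x) {
                float* pixel = &framebuffer[(y * width + x) * 3];
                pixel[0] = float(x) / width;
                pixel[1] = float(y) / height;
                pixel[2] = 0.25f;
            }
        }
    };

    const uint32_t numThreads = std::max(1u, std::thread::hardware_concurrency());
    const uint32_t rowsPerThread = (height + numThreads - 1) / numThreads;
    std::vector<std::thread> threads;
    for (uint32_t i = 0; i < numThreads; ++i) {
        uint32_t rowStart = i * rowsPerThread;
        uint32_t rowEnd = std::min(height, rowStart + rowsPerThread);
        if (rowStart >= rowEnd) break;
        threads.emplace_back(renderRows, rowStart, rowEnd);
    }
    for (std::thread& t : threads) t.join();
    return 0;
}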

Most importantly, for those new to rendering, we recommend starting with the upcoming lessons in this section to grasp the fundamental rendering techniques.

Ready to begin?
