Tuesday, April 12, 2011

Thesis: Pre-computed Surface Radiance Transfer

Title: Pre-computed Surface Radiance Transfer
Examiner: Jonas Unger

The complete version of the thesis can be downloaded here.



Abstract:
Rendering a complex scene with global illumination requires extensive computational resources, and it is hard to achieve real-time rendering of a complex object using explicit 3D information alone. To tackle this obstacle, many techniques have been introduced to close the gap between complex 3D scenes and real-time rendering. One of the proposed solutions is image-based rendering, a method that produces the desired image by referencing previously sampled images as its source.
This thesis focuses on a mix between image-based rendering and geometry-based rendering. Instead of rendering directly with a global illumination method, we use a set of images captured during an offline rendering pass; we call this first stage the light transport pre-calculation process. These images are then treated as textures and attached to the polygons during the online rendering process, shifting part of the processing burden from computation onto memory. Since the pre-rendered data can be huge, it is also important to discuss a compression method that suits the GPU architecture and is fast enough for real-time rendering.

Keywords:
Pre-computed Surface Radiance Transfer, Global Illumination, BRDF, Image Based Modeling and Rendering, Real Time Rendering, GPU Programming, GLSL.

Pre-calculated Light Transport:
The idea of the implementation is to render pre-calculated data instead of computing values during the rendering process. Since we are dealing with a 3D scene, the data to be captured is the distribution of reflected radiance at every point on the surface. The rendering method used in this program is similar to ordinary ray tracing, but instead of casting rays from the camera to find a pixel-to-point relation, we loop through every texel of the texture to find a texel-to-point relation.
[Figure: Mapping a texel to a point on the surface]
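As an illustration, the following is a minimal C++ sketch of such a texel-to-point mapping, assuming each texel center falls inside a single UV-mapped triangle; the types and the function name texelToPoint are hypothetical and not taken from the thesis.

#include <array>
#include <optional>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

struct Triangle {
    std::array<Vec3, 3> pos; // vertex positions in the scene
    std::array<Vec2, 3> uv;  // matching texture coordinates
};

// Barycentric weights of point p with respect to the UV triangle (a, b, c).
std::array<float, 3> barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c) {
    float det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    float w0  = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / det;
    float w1  = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / det;
    return { w0, w1, 1.0f - w0 - w1 };
}

// Map the center of texel (i, j) of a W x H texture to a surface point,
// assuming the texel lies inside this triangle's UV footprint.
std::optional<Vec3> texelToPoint(const Triangle& tri, int i, int j, int W, int H) {
    Vec2 uv{ (i + 0.5f) / W, (j + 0.5f) / H };
    std::array<float, 3> w = barycentric(uv, tri.uv[0], tri.uv[1], tri.uv[2]);
    if (w[0] < 0.0f || w[1] < 0.0f || w[2] < 0.0f)
        return std::nullopt; // texel belongs to some other triangle
    return Vec3{ w[0] * tri.pos[0].x + w[1] * tri.pos[1].x + w[2] * tri.pos[2].x,
                 w[0] * tri.pos[0].y + w[1] * tri.pos[1].y + w[2] * tri.pos[2].y,
                 w[0] * tri.pos[0].z + w[1] * tri.pos[1].z + w[2] * tri.pos[2].z };
}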
Now that we know how to find the corresponding point for a texel, we can start building the set of textures for each object. We build a hemisphere around the corresponding point and compute the amount of radiance reflected toward each angle on the hemisphere (represented with spherical coordinates).
[Figure: Hemisphere representing every possible viewing angle]
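A straightforward way to enumerate those viewing angles is to step through a regular grid of spherical coordinates over the hemisphere. The sketch below shows how such a loop could look in C++; the grid resolutions N_THETA and N_PHI are placeholder names, and the radiance reflected toward each generated direction would be evaluated by the offline ray tracer.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

const float PI = 3.14159265f;

// A viewing direction on the hemisphere, expressed in the local frame of the
// surface point (the surface normal is the +z axis).
Vec3 sphericalToDir(float theta, float phi) {
    return { std::sin(theta) * std::cos(phi),
             std::sin(theta) * std::sin(phi),
             std::cos(theta) };
}

// Enumerate every discretized viewing angle above one surface point. For each
// of these directions the offline renderer evaluates the radiance reflected
// from the point toward that direction and records it in the texture.
std::vector<Vec3> hemisphereDirections(int N_THETA, int N_PHI) {
    std::vector<Vec3> dirs;
    for (int t = 0; t < N_THETA; ++t) {
        float theta = (t + 0.5f) * (PI / 2.0f) / N_THETA;  // 0 .. pi/2 from the normal
        for (int p = 0; p < N_PHI; ++p) {
            float phi = (p + 0.5f) * (2.0f * PI) / N_PHI;  // 0 .. 2*pi around it
            dirs.push_back(sphericalToDir(theta, phi));
        }
    }
    return dirs;
}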
The spherical coordinates of each angle are then converted into a single value by a predefined formula. We store the result as a 3D texture. An OpenGL 3D texture has a width, a height, and a depth: the width and height define the u and v of our texture, and the single index sphIndex is used as the depth of the texture.
[Figure: The 3D texture as a stack of 2D textures]
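The thesis's predefined formula is not reproduced here; the sketch below assumes one plausible choice, a row-major flattening of the angular grid, and shows how the resulting data could be uploaded as an OpenGL 3D texture. The function names are hypothetical.

#include <GL/gl.h>

// Flatten a discretized viewing angle (thetaStep, phiStep) into the single
// index sphIndex used as the depth coordinate of the 3D texture. A row-major
// layout of the theta/phi grid is assumed; the thesis's formula may differ.
int sphIndex(int thetaStep, int phiStep, int N_PHI) {
    return thetaStep * N_PHI + phiStep;
}

// Upload one object's pre-calculated data as a W x H x (N_THETA * N_PHI)
// 3D texture: (u, v) selects the surface point and the depth slice selects
// the viewing angle. Assumes a context/loader that exposes glTexImage3D.
void uploadRadianceTexture(const unsigned char* data,
                           int W, int H, int N_THETA, int N_PHI) {
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB8,
                 W, H, N_THETA * N_PHI, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, data);
}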

We use piecewise linear interpolation to compress the data and color indexing to reduce the number of bits needed to represent each color.
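The sketch below illustrates one possible reading of these two steps, with hypothetical helper names: each color is replaced by an index into a small shared palette, and only a subset of the angular slices is stored, with the dropped ones rebuilt by linear blending between their neighboring breakpoints.

#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

struct Color { float r, g, b; };

// Color indexing: store one byte per sample, the index of the nearest entry
// in a small shared palette, instead of a full RGB triple.
std::uint8_t nearestPaletteIndex(const Color& c, const std::vector<Color>& palette) {
    std::uint8_t best = 0;
    float bestDist = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < palette.size(); ++i) {
        float dr = c.r - palette[i].r;
        float dg = c.g - palette[i].g;
        float db = c.b - palette[i].b;
        float d  = dr * dr + dg * dg + db * db;
        if (d < bestDist) { bestDist = d; best = static_cast<std::uint8_t>(i); }
    }
    return best;
}

// Piecewise linear interpolation: keep only every few angular slices (the
// breakpoints) and rebuild a dropped slice by blending its two neighbors,
// with t in [0, 1] giving the position between them.
Color lerpSlice(const Color& left, const Color& right, float t) {
    return { left.r + t * (right.r - left.r),
             left.g + t * (right.g - left.g),
             left.b + t * (right.b - left.b) };
}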

Pre-calculated Rendering:
The main task in the rendering process is to determine which texel should be shown on the screen. A 3D texture coordinate has three components: x, y, and z. The x and y values identify a point in the scene, while the z value selects the outgoing radiance for a given camera position. The main idea of the rendering process is to find the proper z value for each point based on the camera position in the scene. We compute this index from the spherical position of the camera relative to the point we want to render.
The engine of the renderer is written in GLSL, and most of the work is done in its fragment shader.
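The shader source is not reproduced here, but the following CPU-side C++ sketch shows the kind of index math the fragment shader performs, assuming the direction toward the camera has already been transformed into the point's local frame and the same row-major sphIndex layout as in the pre-calculation sketch above.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

const float PI = 3.14159265f;

// Spherical position of the camera as seen from the shaded point, with the
// direction already expressed in the point's local frame (normal = +z).
// theta is the angle from the normal, phi the angle around it.
void cameraAngles(Vec3 toCamera, float& theta, float& phi) {
    float len = std::sqrt(toCamera.x * toCamera.x +
                          toCamera.y * toCamera.y +
                          toCamera.z * toCamera.z);
    theta = std::acos(toCamera.z / len);
    phi   = std::atan2(toCamera.y, toCamera.x);
    if (phi < 0.0f) phi += 2.0f * PI;
}

// Convert (theta, phi) into the normalized depth coordinate used to sample
// the 3D texture, matching the row-major sphIndex layout assumed during
// pre-calculation.
float depthCoord(float theta, float phi, int N_THETA, int N_PHI) {
    int t = std::min(static_cast<int>(theta / (PI / 2.0f) * N_THETA), N_THETA - 1);
    int p = std::min(static_cast<int>(phi / (2.0f * PI) * N_PHI), N_PHI - 1);
    int index = t * N_PHI + p;
    return (index + 0.5f) / static_cast<float>(N_THETA * N_PHI);
}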

Result:
[Figure: Sample 3D scene rendered with our method]
Total Polygon: 906
Total Uncompressed Pre-calculated Data: 1.2 GB
Total Compressed Data: 204 MB
Render Rate: 2-3 FPS
