CS 184: Computer Graphics and Imaging, Spring 2019

Project 3-2: Pathtracer

Jireh Wei En Chew, CS184-agu



Overview

In project 3-1, we implemented a simple diffuse material, but have yet to include more advanced materials like glass (transparent materials that both refract and reflect), mirrors (perfect reflection), and microfacet materials.

In this project, we implement these materials, as well as a new environment light, which uses a texture (created from a 360-degree image of a scene) to sample that environment's lighting and illuminate objects in the scene with it.

Finally, we also implement a virtual camera model to improve our pinhole camera model, by introducing a thin lens and aperture to our camera to create depth of field effects.

Part 1: Mirror and Glass Materials

In order to implement glass and mirror materials, two helper functions are implemented: reflect and refract. reflect mirrors a ray about the surface normal, while refract bends it according to Snell's law, producing no ray when total internal reflection occurs.
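As a sketch of the logic (in Python rather than the project's C++, and assuming the usual local shading frame convention where the surface normal is (0, 0, 1)):

```python
import math

# Local shading frame: the surface normal is (0, 0, 1), so reflection
# about the normal simply negates the x and y components.

def reflect(wo):
    """Mirror wo about the z-axis normal."""
    x, y, z = wo
    return (-x, -y, z)

def refract(wo, ior):
    """Snell's law in the local frame.

    Returns the refracted direction wi, or None on total internal
    reflection. `ior` is the material's index of refraction.
    """
    x, y, z = wo
    # Entering the material if z > 0, exiting if z < 0.
    eta = 1.0 / ior if z > 0 else ior
    cos2 = 1.0 - eta * eta * (1.0 - z * z)
    if cos2 < 0:
        return None  # total internal reflection
    sign = -1.0 if z > 0 else 1.0  # wi lies on the opposite side of the surface
    return (-eta * x, -eta * y, sign * math.sqrt(cos2))
```

For a glass material, one of the two behaviours is then chosen probabilistically, weighted by an approximation of the Fresnel reflectance.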





0 ray depth.
We only get direct lighting.
1 ray depth.
We get direct and indirect lighting.
2 ray depth.
Reflection off the spheres is apparent.
3 ray depth.
Refraction is apparent as the rays exit the glass.
4 ray depth.
The light rays can reflect off the ground.
5 ray depth.
More rays can bounce inside the sphere to exit and reflect off the wall.

100 ray depth.
Radiance has converged with no significant change.

Analysis of ray bounce and lighting in scene

  1. With 0 bounces, only direct lighting effects are seen.
  2. With 1 bounce, the mirror and glass surfaces do not yet reflect light into the camera; only direct and indirect lighting effects appear.
  3. With 2 bounces, we are able to see reflections on the mirror and glass surfaces. However, since most of the light leaving the glass ball depends on refraction rather than reflection, it is mostly black.
  4. With 3 bounces, we can finally see the refraction of light inside the ball and it appears as glass. We also notice that the reflections of the box walls on the mirror ball are now lit, as light rays have enough depth to bounce off a wall, onto the mirror ball, and into the camera.
  5. With 4 bounces, the light inside the ball can reflect off the ground, producing a concentrated spot of light underneath the glass ball.
  6. With 5 bounces, the light can reflect additional times inside the ball and casts a secondary bright spot on the red wall on the right.
  7. With 100 bounces, we do not see any significant changes in the lighting of the scene.



Part 2: Microfacet Materials

In microfacet materials, we have to combine multiple components in order to correctly represent both the macro-surface and micro-surface properties of the material: a Fresnel term for reflectance, a shadowing-masking term for microfacets occluding each other, and a normal distribution function (NDF) describing the orientation of the microfacets.
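A sketch of how these components combine (Python for illustration, using a Beckmann NDF; the shadowing-masking term is stubbed out here, and F, the Fresnel reflectance, is supplied by the caller):

```python
import math

def beckmann_D(h, alpha):
    """Beckmann normal distribution function. The half vector h is given
    in the local frame where the macrosurface normal is (0, 0, 1)."""
    cos_h = h[2]
    tan2 = (1.0 - cos_h * cos_h) / (cos_h * cos_h)
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha**2 * cos_h**4)

def smith_G(wo, wi):
    """Shadowing-masking term, stubbed to 1 for this sketch; a real
    implementation attenuates reflection at grazing angles."""
    return 1.0

def microfacet_f(wo, wi, alpha, F):
    """BRDF: f = F * G * D(h) / (4 * cos_theta_o * cos_theta_i),
    where h is the (normalized) half vector between wo and wi."""
    h = tuple(a + b for a, b in zip(wo, wi))
    norm = math.sqrt(sum(c * c for c in h))
    h = tuple(c / norm for c in h)
    return F * smith_G(wo, wi) * beckmann_D(h, alpha) / (4.0 * wo[2] * wi[2])
```

The alpha parameter here is the surface roughness that the renders below sweep from 0.005 to 0.5.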


alpha = 0.005.
The shiniest, most reflective dragon.
alpha = 0.05.
Not as shiny, but diffusing light into the camera.
alpha = 0.25.
Diffused light is more apparent.
alpha = 0.5.
Diffused light is most apparent.

As alpha increases, the lobe of diffused light grows, so a ray of light is spread over a wider area. However, this also decreases the mirror-like properties: the dragon with alpha = 0.005 is the most reflective, and the dragon with alpha = 0.5 is the least perfectly reflective.

Cosine sampling.
Bunny is noisy and reflections are not apparent.
Importance sampling.
Reflections on bunny are better represented.

Cosine-weighted hemisphere sampling is less effective because we spread samples over the entire hemisphere of directions, regardless of the material. As such, we are likely to sample rays of light with very low or zero radiance, and the bunny appears dark and noisy. However, we know that for microfacet materials, the reflection lies within a lobe centred around the perfect reflection direction. With importance sampling, our random samples come mostly from this lobe, and we can better approximate the reflective properties of the microfacet material.
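A sketch of that importance-sampling step (Python, local frame with normal (0, 0, 1)): we sample a half vector from the Beckmann lobe by inverting its distribution, then reflect the outgoing direction about it. The accompanying pdf computation is omitted here.

```python
import math
import random

def sample_beckmann_h(alpha, rng=random.random):
    """Sample a half vector from the Beckmann lobe:
    theta_h = atan(sqrt(-alpha^2 * ln(1 - r1))), phi_h = 2*pi*r2."""
    r1, r2 = rng(), rng()
    theta = math.atan(math.sqrt(-alpha * alpha * math.log(1.0 - r1)))
    phi = 2.0 * math.pi * r2
    sin_t = math.sin(theta)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), math.cos(theta))

def sample_wi(wo, alpha, rng=random.random):
    """Reflect wo about the sampled half vector to get the incoming
    direction wi, which stays close to the mirror direction for small alpha."""
    h = sample_beckmann_h(alpha, rng)
    d = 2.0 * sum(a * b for a, b in zip(wo, h))
    return tuple(d * hc - oc for hc, oc in zip(h, wo))
```

For small alpha, the sampled theta_h stays near zero, so wi clusters tightly around the perfect reflection direction, which is exactly why the importance-sampled bunny converges faster.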

Cobalt dragon.

Part 3: Environment Light

In environment lighting, instead of declaring light sources, we use an image to represent a 360-degree sphere of light around the scene, typically captured with a 360-degree camera. The idea is that this picture records the colour of light arriving at the camera from every direction in the environment, so we can place an object, in this case our bunny, at the camera's position and simulate both direct and indirect lighting illuminating it. This is a convenient way to "transplant" virtual objects into such captured lighting environments.

In this part, I used the field.exr environment light.

The field.exr environment map, shown here in jpg format.


Probability debug image for the distribution.
Uniform sampling.
Bunny is noisy.
Importance sampling.
Much less noise.
Uniform sampling.
Bunny is noisy.
Importance sampling.
Much less noise.

Uniform sampling is less effective because we sample directions over the entire sphere, most of which carry little or no radiance (such as the dark ground), and thus the bunny appears noisy. With importance sampling, we sample directions in proportion to the radiance of the environment map, concentrating samples on bright regions like the sky. At the same sample count, this greatly reduces variance, and so there is much less noise on the bunny.
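A sketch of how the sampling distribution (visualized in the probability debug image above) can be built and inverted. Here `lum` is a small row-major grid of pixel luminances standing in for the environment map; each pixel's probability is weighted by sin(theta) to account for the sphere's area element, and sampling inverts the marginal and conditional CDFs:

```python
import bisect
import math

def build_dists(lum, w, h):
    """Build a marginal CDF over rows and a conditional CDF per row from
    pixel luminances weighted by sin(theta). For simplicity this sketch
    assumes every row contains some nonzero luminance."""
    pdf = [[lum[j][i] * math.sin(math.pi * (j + 0.5) / h) for i in range(w)]
           for j in range(h)]
    row_sums = [sum(row) for row in pdf]
    total = sum(row_sums)
    marginal, acc = [], 0.0
    for s in row_sums:
        acc += s / total
        marginal.append(acc)
    conditionals = []
    for j, row in enumerate(pdf):
        acc, cdf = 0.0, []
        for p in row:
            acc += p / row_sums[j]
            cdf.append(acc)
        conditionals.append(cdf)
    return marginal, conditionals

def sample_pixel(marginal, conditionals, r1, r2):
    """Inverse-CDF sampling: pick a row from the marginal distribution,
    then a column from that row's conditional distribution."""
    j = bisect.bisect_left(marginal, r1)
    i = bisect.bisect_left(conditionals[j], r2)
    return i, j
```

The sampled pixel is then mapped back to a direction on the sphere, and the pdf of that direction is used to weight its radiance contribution.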

Part 4: Depth of Field

A pinhole camera model accepts only the light passing through a single point, which is what we implemented in project 3-1. Now we implement a thin-lens model in order to bend light towards the sensor. The effect is that only objects at the focal distance are in focus; everything else appears blurry, since light rays from those areas do not converge to a single point on the sensor.
In order to generate a ray refracted by the lens that reaches the sensor, we sample a point on the lens, find where the original pinhole ray intersects the plane of focus, and trace a new ray from the lens sample through that intersection point.
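Those steps can be sketched in camera space as follows (Python; here (x, y) parameterise the pinhole ray direction (x, y, -1), and r1, r2 are uniform random samples — the identifiers are illustrative, not the project's):

```python
import math

def thin_lens_ray(x, y, lens_radius, focal_dist, r1, r2):
    """Generate a thin-lens camera ray in camera space (camera looks down -z)."""
    # 1. Sample a point on the lens, uniformly over a disc of lens_radius.
    rad = lens_radius * math.sqrt(r1)
    theta = 2.0 * math.pi * r2
    p_lens = (rad * math.cos(theta), rad * math.sin(theta), 0.0)
    # 2. Find where the pinhole ray (x, y, -1) hits the plane of focus
    #    z = -focal_dist; all rays from one sensor point meet there.
    p_focus = (x * focal_dist, y * focal_dist, -focal_dist)
    # 3. The new ray starts on the lens and passes through p_focus.
    d = tuple(f - l for f, l in zip(p_focus, p_lens))
    n = math.sqrt(sum(c * c for c in d))
    return p_lens, tuple(c / n for c in d)
```

With lens_radius = 0 this degenerates to the pinhole ray, which is a handy sanity check.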


Shortest focal distance of 2.
Dragon snout is in focus.
Focal distance of 2.3.
Dragon snout to body is in focus.
Focal distance of 2.6.
Snout is blurry, body in focus.
Focal distance of 3.2.
Only tail in focus.

We see sections of the dragon come in and out of focus as we move the focal distance further and further from the dragon's snout. Light rays leaving a point on the plane of focus converge back to a single point on the sensor, but for sections of the dragon not at the focal distance, the rays do not converge and the image comes out blurry.


Aperture of 0.01.
Whole image is in focus.
Aperture of 0.03.
Edges of the Cornell box not in focus.
Aperture of 0.05.
Only the centre portion at the dragon's mouth in focus.
Aperture of 0.1.
Only the dragon's lips in focus.

As we adjust the aperture, we control the area of the lens over which rays are sampled. With a very small aperture, the lens behaves almost like a pinhole and the whole image is in focus. As we increase the aperture, rays from points not at the focal distance spread over a larger area of the sensor, so everything away from the plane of focus becomes increasingly blurry.