After project 3-1, we had implemented a simple diffuse material, but had yet to include more advanced materials such as glass (transparent materials that both refract and reflect), mirror materials (perfect reflection), and microfacet materials.
In this project, we implement these materials, as well as a new environment light, which uses a texture (created from a 360-degree image of a scene) to sample that scene's lighting and uses it to illuminate objects
in the scene.
Finally, we also improve on our pinhole camera model by introducing a thin lens and an aperture, creating depth-of-field effects.
In order to implement glass and mirror materials, we implement two helper functions: reflect and refract.
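As a sketch of what these helpers compute, assuming the usual object-space convention where the surface normal is (0, 0, 1) (the renderer's actual vector types and signatures differ):

```python
import math

def reflect(wo):
    """Mirror a direction about the surface normal.
    In object space the normal is (0, 0, 1), so reflection
    just negates the x and y components."""
    return (-wo[0], -wo[1], wo[2])

def refract(wo, ior):
    """Refract wo through a surface with index of refraction `ior`
    using Snell's law. Returns None on total internal reflection.
    Convention: wo[2] > 0 means the ray is entering the surface,
    wo[2] < 0 means it is exiting back into air."""
    entering = wo[2] > 0
    eta = (1.0 / ior) if entering else ior  # ratio n_i / n_t
    # Snell's law: sin^2(theta_t) = eta^2 * sin^2(theta_i)
    cos2_t = 1.0 - eta * eta * (1.0 - wo[2] * wo[2])
    if cos2_t < 0:
        return None  # total internal reflection
    sign = -1.0 if entering else 1.0
    return (-eta * wo[0], -eta * wo[1], sign * math.sqrt(cos2_t))
```

Note how a grazing ray trying to exit glass (ior = 1.5) hits the total-internal-reflection case and returns None, which is exactly what lets rays bounce around inside the sphere before escaping.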
We only get direct lighting. |
We get direct and indirect lighting. |
Reflection off the spheres is apparent. |
Refraction is apparent as the rays exit the glass. |
The light rays can reflect off the ground. |
More rays can bounce inside the sphere, exiting after reflecting off the wall. |
Analysis of ray bounce and lighting in scene
In microfacet materials, we have to include multiple components in order to correctly represent both the macro-surface and micro-surface properties of the material.
The shiniest, most reflective dragon. |
Not as shiny, but diffusing light into the camera. |
Diffused light is more apparent. |
Diffused light is most apparent. |
As alpha increases, the lobe of diffused light grows wider, spreading a ray of light over a larger area. However, this also weakens the mirror-like behaviour: the first dragon (alpha = 0.005) is the most reflective, while the dragon with alpha = 0.5 is the least perfectly reflective.
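This trend can be checked numerically. The sketch below assumes a Beckmann normal distribution function (one common microfacet NDF; the project's exact choice may differ) and compares how much of the lobe's peak survives 10 degrees off the perfect mirror direction:

```python
import math

def beckmann_ndf(cos_theta_h, alpha):
    """Beckmann NDF D(h), where theta_h is the angle between the
    half-vector and the macro-surface normal."""
    cos2 = cos_theta_h * cos_theta_h
    tan2 = (1.0 - cos2) / cos2
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha * alpha * cos2 * cos2)

# Fraction of the peak value retained 10 degrees off-axis:
# a larger alpha keeps far more energy away from the perfect
# mirror direction, i.e. a wider, more diffuse lobe.
for alpha in (0.05, 0.5):
    ratio = beckmann_ndf(math.cos(math.radians(10)), alpha) / beckmann_ndf(1.0, alpha)
    print(alpha, ratio)
```

For the small alpha the off-axis ratio is essentially zero (a near-mirror spike), while for alpha = 0.5 most of the peak value remains 10 degrees away.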
Bunny is noisy and reflections are not apparent. |
Reflections on bunny are better represented. |
Cosine sampling is less effective as we are randomly sampling over the entire hemisphere of reflection. As such, we are likely to find rays of light with very low or 0 radiance and thus the bunny appears dark and noisy. However, we know that for microfacet materials, the reflection lies within a lobe that is centred around the perfect reflection direction. As such, in importance sampling, our random sample only comes from this lobe, and we can better approximate the reflective properties of the microfacet material.
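A minimal sketch of that lobe-restricted sampling, assuming the Beckmann distribution and its standard inversion formula (the project's own pdf and conventions may differ):

```python
import math, random

def sample_beckmann_h(alpha, rng=random):
    """Sample a half-vector angle theta_h proportional to the Beckmann
    lobe via the standard inversion tan^2(theta_h) = -alpha^2 ln(1 - xi).
    Returns (theta_h, phi_h); the reflected direction is then wo
    mirrored about this sampled half-vector."""
    xi1, xi2 = rng.random(), rng.random()
    theta_h = math.atan(math.sqrt(-alpha * alpha * math.log(1.0 - xi1)))
    phi_h = 2.0 * math.pi * xi2
    return theta_h, phi_h
```

With a small alpha, nearly every sampled half-vector hugs the normal, so almost all samples land inside the narrow reflective lobe instead of being wasted across the whole hemisphere.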
In environment lighting, instead of declaring light sources, we use an image to represent a 360-degree sphere of light around the scene. This is typically captured using a 360-degree camera. The idea is that this picture captures the colour of light arriving at the camera from every direction in the environment, so we can place an object, in this case our bunny, at the camera's position. We do this in order to simulate both direct and indirect lighting illuminating the object, and it is a convenient way to "transplant" virtual objects into such captured light environments.
In this part, I used the field.exr environment light.
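A sketch of the direction-to-texture lookup this requires, assuming an equirectangular (lat-long) layout like field.exr and a +z-up convention (the renderer's axes may differ):

```python
import math

def direction_to_uv(d):
    """Map a unit direction to (u, v) coordinates of an equirectangular
    environment map. u spans [0, 1) over azimuth phi; v spans [0, 1]
    from the top of the sphere (theta = 0) to the bottom (theta = pi)."""
    x, y, z = d
    phi = math.atan2(y, x)                      # azimuth in (-pi, pi]
    theta = math.acos(max(-1.0, min(1.0, z)))   # polar angle in [0, pi]
    u = (phi + math.pi) / (2.0 * math.pi)
    v = theta / math.pi
    return u, v
```

Any escaping ray's direction is converted this way and the texture is sampled (typically bilinearly) to get the incoming radiance.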
Bunny is noisy. |
Much less noise. |
Bunny is noisy. |
Much less noise. |
Uniform sampling is less effective because we randomly sample directions over the entire sphere of the environment map. Most of those directions carry very little radiance, so the estimate has high variance and the bunny appears noisy. With importance sampling, we instead draw directions with probability proportional to the radiance stored in the environment map, so bright regions such as the sun are sampled far more often and the noise drops significantly.
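One common way to realise environment-map importance sampling, sketched in 1D here (the full 2D method builds a marginal distribution over rows and a conditional distribution within each row; this is not the renderer's exact code):

```python
import bisect, random

def build_cdf(luminance):
    """Build a cumulative distribution over environment-map pixels,
    weighted by each pixel's luminance."""
    total = sum(luminance)
    cdf, acc = [], 0.0
    for w in luminance:
        acc += w / total
        cdf.append(acc)
    return cdf

def sample_pixel(cdf, xi):
    """Invert the CDF: bright pixels occupy a wider slice of [0, 1)
    and are therefore chosen proportionally more often."""
    return bisect.bisect_left(cdf, xi)
```

A map with one very bright pixel (the sun) gets sampled far more often than the dim pixels around it, which is exactly why the importance-sampled renders are so much less noisy at equal sample counts.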
A pinhole camera model accepts only the light passing through a single point, which is what we used in project 3. Now we implement a thin-lens model, which bends light from across the lens towards the sensor.
The effect is that only objects at the focal distance are in focus; light rays from everywhere else do not converge to a single point on the sensor, so those areas appear blurry.
In order to calculate the rays from the object refracted in the lens that reach the sensor, we perform these steps.
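One common version of those steps, sketched in camera space looking down -z (the renderer's conventions and signatures may differ): intersect the original pinhole ray with the plane of focus, sample a point on the lens disk, then shoot the new ray from the lens sample through that focus point.

```python
import math, random

def generate_thin_lens_ray(pinhole_dir, lens_radius, focal_distance, rng=random):
    """Thin-lens ray generation sketch. `pinhole_dir` is the direction of
    the ray through the sensor point and the lens centre. All rays
    generated for one sensor point reconverge only at focal_distance."""
    # (1) Intersect the pinhole ray with the plane of focus z = -focal_distance.
    t = focal_distance / -pinhole_dir[2]
    p_focus = (t * pinhole_dir[0], t * pinhole_dir[1], -focal_distance)
    # (2) Uniformly sample a point on the lens disk at z = 0.
    r = lens_radius * math.sqrt(rng.random())
    theta = 2.0 * math.pi * rng.random()
    origin = (r * math.cos(theta), r * math.sin(theta), 0.0)
    # (3) New ray from the lens sample toward the focus point.
    d = tuple(f - o for f, o in zip(p_focus, origin))
    norm = math.sqrt(sum(c * c for c in d))
    return origin, tuple(c / norm for c in d)
```

Every ray produced this way passes through the same focus point, so objects on the plane of focus stay sharp while everything else is averaged over the lens aperture.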
Dragon snout is in focus. |
Dragon snout to body is in focus. |
Snout is blurry, body in focus. |
Only tail in focus. |
We see sections of the dragon come in and out of focus as we adjust the focal distance further and further from the dragon's snout. Rays leaving a point on the plane of focus reconverge at a single point on the sensor, but for sections of the dragon not at the focal distance, the rays do not converge, and those parts of the image come out blurry.
Whole image is in focus. |
Edges of the Cornell box not in focus. |
Only the centre portion at the dragon's mouth in focus. |
Only the dragon's lips in focus. |
The aperture sets the radius of the lens disk over which we sample ray origins. When the aperture is very small, all sampled rays pass close to the lens centre, approximating a pinhole camera, so nearly the whole scene is in focus. As we widen the aperture, rays from a single scene point spread over a larger area of the lens; points on the plane of focus still reconverge on the sensor, but points off it are smeared over a growing circle of confusion, so everything outside the focal plane becomes progressively blurrier.
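The growth of blur with aperture can be made concrete with the circle-of-confusion diameter from similar triangles through a thin lens (a simplified sketch of the geometry; it ignores sensor magnification):

```python
def circle_of_confusion(aperture_diameter, focal_distance, depth):
    """Diameter of the blur circle, measured on the plane of focus,
    for a scene point at `depth`: rays from the point through opposite
    edges of the lens cross the focal plane this far apart."""
    return aperture_diameter * abs(depth - focal_distance) / depth
```

At the focal distance the blur is zero, and for any fixed depth the blur scales linearly with the aperture diameter, matching the renders above.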