We've now written all the functions needed to render a simple scene, both from the geometric perspective of manipulating objects with transformations and from the data side, by applying textures to triangles, as well as combinations of the two. Furthermore, we implemented some techniques to combat the artifacting that occurs due to sampling.
In essence, we've created the foundation for everything else we will be doing throughout the course. One particular skill I learnt is visual debugging, where the appearance of the image has to be interpreted to determine where the mistake in the code might be. For example, if the image appeared darker or brighter than expected, I would suspect that the math behind how I assigned colours to pixels was slightly off. If parts of the image seemed to be in the wrong position, I would check my texture indexing or my width/height/x/y values to see if I had mixed them up.
In DrawRend::rasterize_triangle, we are given a triangle's 3 vertices and the triangle's colour. We use this information to determine, in screen space, which pixels we should colour. We use the centre of each pixel as the location to test whether that pixel should be coloured or not, using the equations of the 3 lines making up the triangle. The following lists the steps I took to rasterize the triangle.
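Sketched in code, the pixel-centre test looks roughly like this (a minimal sketch with my own helper names, not the actual DrawRend code; it assumes screen-space vertex coordinates):

```cpp
#include <cassert>

// Signed "which side of the line" test for the directed edge
// (x0, y0) -> (x1, y1) against point (px, py). Zero means the point
// lies exactly on the line.
float edge_function(float x0, float y0, float x1, float y1,
                    float px, float py) {
  return (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0);
}

// A pixel centre is inside the triangle when all three edge tests
// agree in sign; accepting both signs handles either winding order.
bool inside_triangle(float x0, float y0, float x1, float y1,
                     float x2, float y2, float px, float py) {
  float e0 = edge_function(x0, y0, x1, y1, px, py);
  float e1 = edge_function(x1, y1, x2, y2, px, py);
  float e2 = edge_function(x2, y2, x0, y0, px, py);
  return (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
         (e0 <= 0 && e1 <= 0 && e2 <= 0);
}
```

Looping over the triangle's bounding box and calling `inside_triangle(..., x + 0.5f, y + 0.5f)` for each pixel then gives the basic rasterizer.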
Interesting Photo of Test 4
In Part 2, instead of sampling once at the centre of each pixel (adding 0.5 to each pixel row and column), we need to test every subpixel sample point.
Supersampling is useful as it creates a smoother transition at the edges of a triangle. A triangle is not made up of small squares (pixels), so without supersampling the edge pixels can only take one of 2 colours. With supersampling, the edge pixels take on colours that are a blend between white and the triangle colour. These intermediate pixels trick our eyes into thinking that the edge is a continuous line rather than a set of individual squares of pixels.
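The averaging step can be sketched as follows (a sketch under my own naming; the real renderer averages full colour samples into the framebuffer, whereas here a simple half-plane predicate stands in for the triangle test so only the coverage logic is shown):

```cpp
#include <cassert>
#include <cmath>

// Fraction of subpixel samples of pixel (x, y) that pass a coverage
// test, at `rate` samples per pixel (rate must be a perfect square).
// `covered` is any point-in-primitive predicate.
template <typename Pred>
float pixel_coverage(int x, int y, int rate, Pred covered) {
  int n = static_cast<int>(std::round(std::sqrt(rate)));
  int hits = 0;
  for (int i = 0; i < n; ++i)
    for (int j = 0; j < n; ++j) {
      // Subpixel centres sit at offsets (i + 0.5)/n within the pixel.
      float sx = x + (i + 0.5f) / n;
      float sy = y + (j + 0.5f) / n;
      if (covered(sx, sy)) ++hits;
    }
  return static_cast<float>(hits) / rate;
}
```

A pixel whose samples straddle an edge gets an intermediate coverage value, which is exactly the blending described above.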
Comparison between 1x, 4x and 16x sampling
From 1x to 16x, we see the "broken-ness" of the triangle decrease. At 16x, some of the subpixels tested lie in the very skinny region of the triangle, so those pixels get coloured. At 1x and 4x, however, the skinny triangle fell between the sample points, so no samples landed inside it and the pixels in that region were left uncoloured.
Barycentric coordinates are essentially another coordinate system that uses 3 values (alpha, beta and gamma) to define a point within a triangle. It is an area-based coordinate system: alpha, beta and gamma are the ratios of the areas of the 3 sub-triangles created at the given point to the area of the whole triangle. Since alpha, beta and gamma add up to 1, we can use them to interpolate any 3 values at the vertices for a given point in the triangle.
For example, the triangle above is drawn with very little data. I have only given the three vertices the colours red, green and blue, along with their positions. The barycentric system is able to use this data and interpolate from red to green (giving yellow on the left side) and to purple along the bottom, as I am interpolating between the data points to provide a smooth transition from pixel to pixel.
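The ratio-of-areas definition above can be sketched directly (my own helper names, not the staff code; the cross-product terms are twice the signed sub-triangle areas, and the factor of two cancels in the ratio):

```cpp
#include <cassert>
#include <cmath>

struct Color { float r, g, b; };

// Barycentric coordinates of p = (px, py) relative to triangle (a, b, c),
// computed as ratios of signed sub-triangle areas to the whole area.
void barycentric(float ax, float ay, float bx, float by,
                 float cx, float cy, float px, float py,
                 float &alpha, float &beta, float &gamma) {
  float area = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
  alpha = ((bx - px) * (cy - py) - (by - py) * (cx - px)) / area;
  beta  = ((cx - px) * (ay - py) - (cy - py) * (ax - px)) / area;
  gamma = 1.0f - alpha - beta;
}

// Weighted blend of the three vertex colours; the same formula works
// for any per-vertex attribute, not just colour.
Color interpolate(Color c0, Color c1, Color c2,
                  float alpha, float beta, float gamma) {
  return { alpha * c0.r + beta * c1.r + gamma * c2.r,
           alpha * c0.g + beta * c1.g + gamma * c2.g,
           alpha * c0.b + beta * c1.b + gamma * c2.b };
}
```

At the centroid all three weights are 1/3, which is why the centre of the red/green/blue triangle comes out grey.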
In Part 5, we had to take data from textures and use it to colour our pixels. With pixel sampling, instead of applying a single colour to the whole triangle, we map areas of the texture onto triangles in our scene and, using barycentric interpolation, find the appropriate pixel of the texture to sample from.
Texture sampling was implemented as follows:
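As a rough sketch of the two pixel-sampling modes (the `Color` struct, the `get_texel` helper, and the coordinate convention here are my assumptions, not the project's actual API):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Color { float r, g, b; };

// Hypothetical texel fetch on a w x h texture stored row-major;
// clamps indices to the texture bounds.
Color get_texel(const Color *tex, int w, int h, int x, int y) {
  x = std::min(std::max(x, 0), w - 1);
  y = std::min(std::max(y, 0), h - 1);
  return tex[y * w + x];
}

// Nearest neighbour: snap the continuous coordinate to one texel.
Color sample_nearest(const Color *tex, int w, int h, float u, float v) {
  return get_texel(tex, w, h, static_cast<int>(std::floor(u)),
                              static_cast<int>(std::floor(v)));
}

// Bilinear: blend the four surrounding texels by the sample point's
// fractional distance to their centres (at integer + 0.5).
Color sample_bilinear(const Color *tex, int w, int h, float u, float v) {
  float x = u - 0.5f, y = v - 0.5f;
  int x0 = static_cast<int>(std::floor(x));
  int y0 = static_cast<int>(std::floor(y));
  float s = x - x0, t = y - y0;
  Color c00 = get_texel(tex, w, h, x0,     y0);
  Color c10 = get_texel(tex, w, h, x0 + 1, y0);
  Color c01 = get_texel(tex, w, h, x0,     y0 + 1);
  Color c11 = get_texel(tex, w, h, x0 + 1, y0 + 1);
  auto lerp = [](float p, float q, float f) { return p + f * (q - p); };
  Color top = { lerp(c00.r, c10.r, s), lerp(c00.g, c10.g, s), lerp(c00.b, c10.b, s) };
  Color bot = { lerp(c01.r, c11.r, s), lerp(c01.g, c11.g, s), lerp(c01.b, c11.b, s) };
  return { lerp(top.r, bot.r, t), lerp(top.g, bot.g, t), lerp(top.b, bot.b, t) };
}
```

Nearest neighbour commits fully to one texel, which is why thin features like gridlines can vanish; bilinear always mixes in the neighbours.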
The largest difference is between nearest neighbour sampling and bilinear interpolation with no supersampling. It seems the sample points were lying closer to the pixels of the map rather than the pixels of the gridline, so a large part of the gridline was ignored when using nearest neighbour. Bilinear interpolation, however, takes the neighbouring white pixels (the gridline colour) into account, so the rendered pixels pick up a partial amount of white. Moving to 16x supersampling just reduces the jaggies and gives the continents a smooth transition of colour from pixel to pixel.
Level sampling is a method used to counteract the aliasing that happens when a scene includes both near and far objects that use textures. The further objects have a smaller screen footprint, so sampling the full, high-resolution texture causes aliasing. To combat this, we precompute versions of the texture from high to low resolution and select the best texture resolution for the pixel at the current point. Thus, we are able to avoid the aliasing for objects further away and yet still have high-definition textures for objects closer to the camera.
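The level selection can be sketched like this, assuming we already have the differences in texture coordinates between adjacent screen pixels, scaled into texel units (the names and signature are my own):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Continuous mipmap level from a pixel's footprint in texel units.
// (dudx, dvdx) and (dudy, dvdy) are the texture-space steps taken when
// moving one pixel in screen x and y respectively.
float mipmap_level(float dudx, float dvdx, float dudy, float dvdy,
                   int num_levels) {
  // Length of the longer footprint axis, in texels per pixel.
  float lx = std::sqrt(dudx * dudx + dvdx * dvdx);
  float ly = std::sqrt(dudy * dudy + dvdy * dvdy);
  // Each mipmap level halves the resolution, so the level is the log2
  // of the footprint; clamp into the range of precomputed levels.
  float level = std::log2(std::max(lx, ly));
  return std::min(std::max(level, 0.0f),
                  static_cast<float>(num_levels - 1));
}
```

A pixel that covers 4 texels per step lands on level 2, i.e. the texture downsampled twice, so one texel of that level roughly matches one screen pixel.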
Implementation of level sampling:
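For the linear-level mode, the blend between the two mipmap levels bracketing the continuous level can be sketched as follows (the per-level sampler is a stand-in so only the weighting logic is shown; a real trilinear sampler would bilinearly sample each level):

```cpp
#include <cassert>
#include <cmath>

// Trilinear filtering sketch: sample the two mip levels bracketing the
// continuous level d, then lerp between them by d's fractional part.
template <typename Sampler>
float sample_trilinear(float u, float v, float d, Sampler sample_level) {
  int lo = static_cast<int>(std::floor(d));
  float w = d - lo;  // weight for the upper (coarser) level
  float a = sample_level(u, v, lo);
  float b = sample_level(u, v, lo + 1);
  return (1.0f - w) * a + w * b;
}
```

Blending the levels hides the visible seams that nearest-level selection produces where the chosen level switches.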
It seems supersampling takes the most time but produces good images. However, a lot of the compute happens at runtime, which would not be ideal for gaming or animation. As such, I feel that using mipmaps and precomputing all the mipmap levels (trading space for time) is a good approach, so that animations or games can run smoothly. Nowadays, though, as our graphics cards become much faster and parallelize better, I think a good middle ground is bilinear interpolation with nearest-level sampling, employing supersampling when computation time permits.