Real-Time Ray Traced Voxel Global Illumination - Master's Dissertation

Programming Languages: C++, HLSL

Repository: https://github.com/TiagoJoseMagalhaes/DX_Renderer

Dissertation Document

Description

This work is my master's dissertation. Its initial goal was to develop a technique that hybridizes conventional rasterization with the new real-time ray tracing technology, in order to reach a middle ground between image quality and performance. This goal was very open-ended, so I spent 90% of my state-of-the-art research surveying the field of global illumination as a whole; in February I decided to pursue an approach based on voxels, hence the name of this work.

One of the major complaints about current implementations of DXR is that they become very costly as resolution increases, so I thought that, at the very least, decoupling the ray-tracing resolution from the video output resolution would be beneficial. I realized that I could use a volumetric scene representation to achieve this. By having rays light voxels instead of individual pixels, a single ray can light a much larger number of pixels, which should also allow us to reduce the total number of rays cast, or to cast more secondary rays.
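To illustrate the decoupling idea, here is a minimal sketch of mapping a world-space position into a regular voxel grid: nearby pixels whose shading points fall in the same voxel reuse the result of a single ray. The struct, function name, and grid parameters are hypothetical illustrations, not the values used in the dissertation.

```cpp
// Hypothetical sketch: map a world-space position into a regular 3D voxel
// grid, so many screen pixels can share one ray's radiance contribution.
struct VoxelGrid {
    float minBound[3];   // world-space minimum corner of the grid
    float maxBound[3];   // world-space maximum corner of the grid
    int   resolution;    // voxels per axis (regular 3D grid)
};

// Returns the flat index of the voxel containing `pos`, clamped to the grid.
int worldToVoxelIndex(const VoxelGrid& g, const float pos[3]) {
    int idx[3];
    for (int i = 0; i < 3; ++i) {
        float t = (pos[i] - g.minBound[i]) / (g.maxBound[i] - g.minBound[i]);
        int c = static_cast<int>(t * g.resolution);
        idx[i] = c < 0 ? 0 : (c >= g.resolution ? g.resolution - 1 : c);
    }
    // Flatten x/y/z into a single array index.
    return idx[0] + idx[1] * g.resolution + idx[2] * g.resolution * g.resolution;
}
```

Two shading points a fraction of a voxel apart map to the same index, which is exactly why one ray can light many pixels at once.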

In my work I developed a 3-stage pipeline. Stage 1 voxelizes the scene data required for the ray tracing process. Stage 2 executes ray tracing via DXR and injects radiance data into the voxel grid. Stage 3 is a final rendering pass that takes the data from the radiance grid and computes the final pixel colors. Voxelization is done via 3 draw calls per material, and the voxel grid is a regular 3D grid. Ray tracing is done from the light sources' perspective. Additionally, because the test scenes were in OBJ format, implementing a PBR material model was not possible. Keep in mind that while these aspects hold back the implementation analyzed in this work, the architecture of the solution itself does not prevent swapping them out for more efficient solutions; since there were only 2.5-3 months to research and develop an implementation, some shortcuts had to be taken.
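The 3-stage structure above can be sketched as a skeleton like the following. All class and method names here are hypothetical; in the real renderer these stages are driven through Direct3D 12 command lists and DXR dispatch calls. The `stageLog` member exists purely to make the stage ordering visible in this sketch.

```cpp
#include <string>
#include <vector>

// Illustrative skeleton of the 3-stage pipeline (names are hypothetical).
class VoxelGIPipeline {
public:
    std::vector<std::string> stageLog; // records stage order, demo only

    void render() {
        voxelizeScene();   // Stage 1: rasterize scene data into the voxel grid
        injectRadiance();  // Stage 2: trace rays from the lights via DXR and
                           //          deposit radiance into the grid
        finalPass();       // Stage 3: shade pixels by sampling the radiance grid
    }

private:
    void voxelizeScene()  { stageLog.push_back("voxelize"); } // 3 draw calls per material
    void injectRadiance() { stageLog.push_back("inject");   } // rays cast from light sources
    void finalPass()      { stageLog.push_back("shade");    } // final pixel colors
};
```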

At the end of this work, I believe that this approach has a lot of potential; personally, I wish to keep working on it and explore the various aspects that I did not have time to explore, be they in the architecture or in the current implementation. The current implementation was fully developed in C++ using the Direct3D 12 graphics API, and the libraries used were Dear ImGui and tinyobjloader.

The images generated by the implemented technique were then analyzed analytically via the structural similarity index (SSIM), and were submitted to a public user survey that aimed to understand how users perceived them compared to pre-rendered images, and whether they could perceive individual artifacts.

I would also like to add that in the near future I intend to write a blog post explaining this work in a simpler way, one that is not mired in the formalities of an academic document.

Some backstory on the implementation

I think it is interesting to mention that the underlying renderer started its life as a personal project I was working on to learn DirectX 12. However, as the search for an implementation platform went on, I realized that the easiest way to do this work would be to use this personal project and simply implement any features still missing at that point: texture support, model loading, and a GUI. It is interesting to think about what would have happened had I not made the somewhat random decision to start learning DirectX 12 in August of 2019; I guess life truly is a series of coincidences. The one great thing is that through this work I have definitely solidified my understanding of Direct3D 12 and am now very comfortable with it.

Demonstration