Final Project: Single Scattering Heterogeneous Volumetric Rendering

My hero image with 0.45 max extinction

  For my final project, I implemented single scattering heterogeneous volumetric rendering using delta tracking. In my proposal I had initially planned to use a sphere as a homogeneous volume. However, I soon discovered the disadvantage of using it as a volume: it is hard to model anything realistic when the density is the same throughout.
Homogeneous volume with absorption only
  Hence, I decided to find something that represents a real volume. I naively tried to model a volume with an OBJ file. I got this OBJ by modeling a volume in Houdini (which I had never used before starting this project), writing the volume out to a VDB, then converting the VDB to an OBJ. Although this looks good with absorption, it still cannot model a realistic volume, because the density is the same throughout.
The cloud I modeled with Houdini


The OBJ file I was trying to model with
  Thanks to Andrew and Nick, I was introduced to OpenVDB, which can load VDB files while preserving the density field. Hence, I loaded the VDB version of the OBJ file above. In order to march a ray through the volume, I had to write a ray-box intersection for the volume's bounds. I first rendered the heterogeneous volume with absorption only using ray marching.
Absorption only using ray marching
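  Below is a minimal sketch of how the density grid can be loaded with OpenVDB and sampled for absorption-only ray marching. The grid name "density", the step count, and the sigmaScale parameter are assumptions for illustration, not my exact code.

```cpp
#include <openvdb/openvdb.h>
#include <openvdb/tools/Interpolation.h>
#include <cmath>
#include <string>

// Load the "density" FloatGrid from a .vdb file (grid name assumed).
openvdb::FloatGrid::Ptr loadDensity(const std::string &path) {
    openvdb::initialize();
    openvdb::io::File file(path);
    file.open();
    openvdb::GridBase::Ptr base = file.readGrid("density");
    file.close();
    return openvdb::gridPtrCast<openvdb::FloatGrid>(base);
}

// Beer-Lambert transmittance by ray marching from tNear to tFar along o + t*d,
// where sigmaScale maps the stored density to an extinction coefficient.
float rayMarchTransmittance(const openvdb::FloatGrid &grid,
                            const openvdb::Vec3d &o, const openvdb::Vec3d &d,
                            double tNear, double tFar, float sigmaScale,
                            int steps = 128) {
    openvdb::tools::GridSampler<openvdb::FloatGrid, openvdb::tools::BoxSampler>
        sampler(grid);
    const double dt = (tFar - tNear) / steps;
    float opticalDepth = 0.0f;
    for (int i = 0; i < steps; ++i) {
        openvdb::Vec3d p = o + (tNear + (i + 0.5) * dt) * d;  // midpoint of each step
        opticalDepth += sigmaScale * sampler.wsSample(p) * float(dt);
    }
    return std::exp(-opticalDepth);  // T = exp(-optical depth)
}
```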
  After attending Andrew's discussion section, he suggested that I follow the SIGGRAPH 2017 Production Volume Rendering course notes. The notes recommend delta tracking over ray marching because it is both unbiased and more efficient.
  I separated the volume from all the other geometry. To check whether a ray intersects the volume, I implemented a ray-box intersection test.
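  The ray-box test is the standard slab method; a minimal sketch (using a bare std::array as the vector type) looks like this:

```cpp
#include <algorithm>
#include <array>
#include <limits>

using Vec3 = std::array<float, 3>;

// Slab-method ray/AABB intersection. Returns true if the ray hits the box and
// writes the parametric entry/exit distances to tNear/tFar.
bool intersectBox(const Vec3 &o, const Vec3 &d,
                  const Vec3 &boxMin, const Vec3 &boxMax,
                  float &tNear, float &tFar) {
    tNear = 0.0f;                                   // clip against the ray origin
    tFar  = std::numeric_limits<float>::max();
    for (int a = 0; a < 3; ++a) {
        float invD = 1.0f / d[a];                   // divide once per axis
        float t0 = (boxMin[a] - o[a]) * invD;
        float t1 = (boxMax[a] - o[a]) * invD;
        if (invD < 0.0f) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
        if (tNear > tFar) return false;             // slabs no longer overlap: miss
    }
    return true;
}
```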
  The general idea of my volume rendering is to shoot two rays at each bounce instead of one. The first ray (the green ray in the figure) checks whether the ray intersects a volume; if it does, I calculate the transmittance from the extinction using the Beer-Lambert law and gather color from the volume. After that, I add the volume's color to the final color, apply the transmittance to the throughput, and shoot the second ray (the blue ray), which intersects the surface as if the volume never existed. However, when doing next event estimation, we have to check whether the direction to the light passes through the volume; if it does, we also need the beam transmittance along that shadow ray.
The two rays and NEE of the surface and the volume; the direct light needs to take the transmittance of the volume into account
Transmittance only (without rendering the volume) applied to the lighting
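  For reference, the transmittance along the ray segment inside the volume is given by the Beer-Lambert law, with σ_t the (position-dependent) extinction coefficient:

```latex
T(a, b) = \exp\!\left(-\int_{a}^{b} \sigma_t\big(\mathbf{x}(t)\big)\,\mathrm{d}t\right)
```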
  In delta tracking, I stochastically terminate the ray inside the volume based on the maximum extinction of the entire volume and the extinction at the ray's current position. If the ray is terminated, the transmittance estimate is simply 0; if it makes it through the volume without terminating, the transmittance is 1. This requires multiple samples per pixel, otherwise the result is quite noisy.
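  A minimal sketch of that estimator (often called Woodcock tracking), assuming sigmaT(p) returns the extinction at a world-space point and sigmaMax is an upper bound on it over the whole volume; both are stand-ins for my renderer's density lookup:

```cpp
#include <cmath>
#include <random>
#include <openvdb/openvdb.h>   // only for the Vec3d type used below

// Hypothetical extinction lookup: extinction multiplier times the VDB density at p
// (declared here, defined elsewhere in the renderer).
float sigmaT(const openvdb::Vec3d &p);

// One delta-tracking sample of the transmittance between tNear and tFar.
// Returns 0 or 1; the average over many samples is an unbiased estimate.
float deltaTrackingTransmittance(const openvdb::Vec3d &o, const openvdb::Vec3d &d,
                                 double tNear, double tFar, float sigmaMax,
                                 std::mt19937 &rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    double t = tNear;
    while (true) {
        // Sample a tentative (free-flight) distance against the constant majorant.
        t += -std::log(1.0f - u(rng)) / sigmaMax;
        if (t >= tFar) return 1.0f;                 // escaped the volume: transmitted
        openvdb::Vec3d p = o + t * d;
        // Accept as a real collision with probability sigmaT(p)/sigmaMax,
        // otherwise treat it as a null collision and keep tracking.
        if (u(rng) < sigmaT(p) / sigmaMax) return 0.0f;
    }
}
```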
  In single scattering, I use an isotropic phase function and next event estimation to gather color. I did not implement multiple importance sampling or indirect lighting, as my hero image does not have much indirect light. In addition to the samples per pixel, I also have an iteration count for integrating the volumetric equation, because I found that volume rendering is noisier than surface rendering.
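  With an isotropic phase function, the phase value (and its sampling pdf) is the constant 1/(4π), so the in-volume next event estimate is just the light's radiance weighted by that constant, the shadow-ray transmittance, and the light pdf. The helper names and types below (Color, LightSample, sampleLight, transmittanceToLight) are made up for illustration:

```cpp
#include <cmath>
#include <random>
#include <openvdb/openvdb.h>

struct Color { float r, g, b; };
struct LightSample {
    openvdb::Vec3d wi;    // direction toward the light
    double dist;          // distance to the light sample
    Color  Le;            // emitted radiance
    float  pdf;           // probability of picking this light sample
};

// Hypothetical stand-ins for the renderer's light sampling and shadow-ray
// transmittance (the latter must include the volume, e.g. via delta tracking).
LightSample sampleLight(const openvdb::Vec3d &p, std::mt19937 &rng);
float transmittanceToLight(const openvdb::Vec3d &p, const openvdb::Vec3d &wi,
                           double dist, std::mt19937 &rng);

// Isotropic phase function: equal in all directions, integrates to 1 over the sphere.
inline float isotropicPhase() { return 1.0f / (4.0f * float(M_PI)); }

// Single-scattering NEE at a point p inside the volume.
Color volumeDirectLight(const openvdb::Vec3d &p, std::mt19937 &rng) {
    LightSample ls = sampleLight(p, rng);
    float Tr = transmittanceToLight(p, ls.wi, ls.dist, rng);
    float w  = isotropicPhase() * Tr / ls.pdf;
    return { ls.Le.r * w, ls.Le.g * w, ls.Le.b * w };
}
```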
  Hence, the general procedure is as follows (a condensed sketch follows the list):
  1. Shoot a surface-intersection ray to check whether it hits anything.
  2. Shoot a volume-intersection ray, apply the transmittance to the throughput, and add the volume color to the final color.
  3. Calculate the surface direct lighting, weight it by the throughput, and add it to the final color.
  4. Sample a new direction for indirect light.
  5. Repeat until the ray doesn't intersect any surface or the max depth has been reached.
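  The five steps above map roughly onto a loop like the one below. This is only a condensed sketch: Ray, Color, SurfaceHit, RNG, and the helpers intersectScene, intersectVolumeBox, integrateVolume, surfaceDirectLight, and sampleBSDF are all placeholders for my renderer's actual routines.

```cpp
#include <algorithm>

// One path per call; all types and helpers here are assumed placeholders.
Color radiance(Ray ray, RNG &rng) {
    Color L(0.0f), beta(1.0f);     // accumulated radiance and path throughput
                                   // (assumes Color has scalar/ componentwise ops)
    for (int depth = 0; depth < MAX_DEPTH; ++depth) {
        SurfaceHit hit;
        if (!intersectScene(ray, hit))                       // step 1: surface ray
            break;
        double t0, t1;
        if (intersectVolumeBox(ray, t0, t1)) {               // step 2: volume ray
            VolumeResult v = integrateVolume(ray, t0, std::min(t1, double(hit.t)), rng);
            L    = L + beta * v.inscattered;                 // single-scattered color
            beta = beta * v.transmittance;                   // attenuate the throughput
        }
        L = L + beta * surfaceDirectLight(hit, rng);         // step 3: surface NEE
        BSDFSample bs = sampleBSDF(hit, rng);                // step 4: indirect bounce
        beta = beta * bs.weight;
        ray  = Ray(hit.p, bs.wi);                            // step 5: continue the path
    }
    return L;
}
```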

  To get absorption and scattering, I use a multiplier on the density function to obtain the extinction coefficient. Here are some results using different multipliers (a small sketch of this mapping follows the images). The images are all rendered with 128 spp and 32 iterations for the volumetric rendering. Performance is quite slow: the dragon scene takes about 3000 seconds and the Cornell scene about 1000 seconds.
Hero image with 0.4 max extinction
Hero image with 0.5 max extinction; the cloud appears denser
Volume rendering with multiple lights
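  The mapping from density to extinction is just a scale factor. The names extinctionScale, maxGridDensity, and sampleDensity below are illustrative; 0.45 is the hero image's setting.

```cpp
// Extinction = user multiplier times the VDB density; the majorant used by
// delta tracking is the same multiplier times the grid's maximum density.
float extinctionScale = 0.45f;
float maxGridDensity  = 1.0f;      // assumed precomputed from the grid

float extinction(const openvdb::Vec3d &p) { return extinctionScale * sampleDensity(p); }
float majorant()                          { return extinctionScale * maxGridDensity; }
```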
  It is a shame that I didn't have enough time to try multiple scattering or experiment with other phase functions. There are a couple of things that I spent a lot of time trying to figure out:
  1. How to model a volume. I spent a week trying to model a volume with an OBJ until the TAs told me about OpenVDB.
  2. Installing OpenVDB's dependencies and incorporating it into CMake. Installing OpenVDB was much more convoluted than I expected, and I spent quite a while getting the correct paths and packages set up in CMake for my project.
  3. Setting up the OpenVDB density lookup. If Andrew hadn't provided me with some implementation, I would not have figured out how to access the voxels to extract the density.
  4. Rendering optimization. Rendering a volume really does take a while; being unfamiliar with threaded programming, it took around 30~40 minutes for a single acceptable-looking image. This was because I was calling an expensive initialization operation every time I looked up the density in the volume, and I was only using one thread. I kept doing this until one day before the due date, when Andrew once again helped me set up multithreading, which greatly improved the runtime (see the sketch after this list).
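  The fix was essentially to load the grid once up front and to split the image across threads, building a sampler per thread since OpenVDB's cached accessors are not meant to be shared between threads. A rough sketch (the row-based std::thread split is just one way to do it):

```cpp
#include <openvdb/openvdb.h>
#include <openvdb/tools/Interpolation.h>
#include <algorithm>
#include <thread>
#include <vector>

openvdb::FloatGrid::Ptr gDensity;   // loaded once at startup, not per lookup

void renderRows(int y0, int y1) {
    // One sampler per thread; sharing cached accessors across threads is unsafe.
    openvdb::tools::GridSampler<openvdb::FloatGrid, openvdb::tools::BoxSampler>
        sampler(*gDensity);
    for (int y = y0; y < y1; ++y) {
        // ... trace the pixels of row y, calling sampler.wsSample(...) as needed ...
    }
}

void renderImage(int height) {
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    const int rowsPerThread = (height + int(n) - 1) / int(n);
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i) {
        int y0 = int(i) * rowsPerThread;
        int y1 = std::min(height, y0 + rowsPerThread);
        if (y0 < y1) pool.emplace_back(renderRows, y0, y1);
    }
    for (auto &t : pool) t.join();
}
```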
