Posts

Final Project: Single Scattering Heterogeneous Volumetric Rendering

My hero image with 0.45 max extinction

For my final project, I implemented single-scattering heterogeneous volumetric rendering using delta tracking. In my initial proposal, I used a sphere as a homogeneous volume. However, I soon discovered the disadvantage of using it as a volume: it is hard to model anything realistic when the density is the same throughout.

Homogenous volume with absorption only

Hence, I decided to find something that represents a real volume. I naively tried to model a volume with an obj file. I got this obj by modeling a volume in Houdini (which I had never used before starting this project), exporting the volume to vdb, then converting the vdb to an obj. Although it looks good with absorption, it still cannot model a realistic volume, because the density is the same throughout.

The cloud I modeled with Houdini

The obj file I was trying to model with

Thanks to Andrew and Nick, I was introduced to OpenVDB, which can help load vdb file
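The delta-tracking step mentioned above can be sketched as follows. This is a minimal, self-contained illustration, not the post's actual code: the `density` function here is a made-up stand-in for the VDB grid lookup, and the 0.45 majorant is borrowed from the hero image's max extinction.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Hypothetical density field standing in for the OpenVDB grid lookup;
// the 0.45 majorant matches the max extinction of the hero image.
const double kSigmaMax = 0.45;

double density(double x, double y, double z) {
    double r2 = x * x + y * y + z * z;
    return kSigmaMax * std::exp(-r2);  // never exceeds the majorant
}

// Delta (Woodcock) tracking: sample a free-flight distance through a
// heterogeneous medium using the homogeneous majorant kSigmaMax.
// Returns the distance to a real collision, or tMax if the ray escapes.
double deltaTrack(double ox, double oy, double oz,
                  double dx, double dy, double dz,
                  double tMax, std::mt19937 &rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double t = 0.0;
    while (true) {
        // Tentative free flight against the constant majorant.
        t -= std::log(1.0 - u(rng)) / kSigmaMax;
        if (t >= tMax) return tMax;  // escaped the medium
        double sigmaT = density(ox + t * dx, oy + t * dy, oz + t * dz);
        if (u(rng) < sigmaT / kSigmaMax) return t;  // real collision
        // Otherwise it was a null collision: keep stepping.
    }
}
```

The key idea is that rejecting "null" collisions with probability proportional to the local density makes the sampled distances match the true heterogeneous medium, while each tentative step only needs the cheap constant majorant.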

Final Project: Volumetric Rendering Proposal

For my final project of CSE 168, I decided to do volumetric rendering. My end goal is to create either a fog or smoke environment image. With some research, I now know that there are four processes in volumetric rendering: absorption, out-scattering, emission, and in-scattering. As my first initial goal, I wanted to render an isotropic homogeneous volume. To render the effect of going through a participating medium, I have to create a volume that serves as the medium. To keep it simple, I used a sphere as the volume. I modified the test file from HW2 and created a sphere in front of a quadlight.

The brown sphere represents the medium

Then I started working on absorption. I defined an absorption coefficient of vec3(2, 2, 2). For now, I shoot a ray into the scene until it hits a surface; if it is a volume, I ignore it and shoot again until the ray hits a surface or leaves the scene. Then, I get the emission color and shoot another ray in the opposite direction, but this time on
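The absorption step described above follows the Beer–Lambert law: light is attenuated by exp(−σ_a·d) over the distance d traveled inside the medium, per RGB channel. A minimal sketch, using the vec3(2, 2, 2) coefficient from the post (the `Vec3` struct and function name are illustrative, not the post's actual code):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Beer–Lambert transmittance for an absorption-only homogeneous medium:
// T = exp(-sigma_a * dist), evaluated per RGB channel.
Vec3 transmittance(const Vec3 &sigmaA, double dist) {
    return { std::exp(-sigmaA.x * dist),
             std::exp(-sigmaA.y * dist),
             std::exp(-sigmaA.z * dist) };
}
```

The radiance reaching the camera is then the light's emission multiplied componentwise by this transmittance over the distance the ray spends inside the sphere.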

HW1

CSE 168 HW1 is, no doubt, as mentioned in the email, the hardest programming assignment I have ever done at UCSD. If I had not started during the spring break, I don't think I would have been able to finish it. This is probably the most physics-based assignment I have ever done. The part that took the longest for me was the acceleration structure. I will talk about the general implementation of my n×n×n grid.

The x and y coordinates represent the array indices of the grid

First of all, the grid itself is considered a bounding box with a vec3 min and vec3 max. The min and max are determined by the minimum and maximum coordinates of all objects in the scene. In the 2D example above, I have triangle a and sphere b, each with its own bounding box. The bounding box of the triangle is calculated using the minimum and maximum of the three vertices' coordinates. The bounding box of the sphere requires the radius after transformation. For an ellipsoid, however, the scaling factor
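The two computations described above, the triangle's bounding box from its vertices and mapping a point into a cell of the n×n×n grid, can be sketched like this. It is an illustrative sketch, not the post's actual code; the names are made up, and the clamp handles points exactly on the grid's max boundary.

```cpp
#include <algorithm>
#include <cassert>

struct Vec3 { double x, y, z; };

// AABB of a triangle: componentwise min/max of its three vertices.
void triangleBounds(const Vec3 v[3], Vec3 &mn, Vec3 &mx) {
    mn = { std::min({v[0].x, v[1].x, v[2].x}),
           std::min({v[0].y, v[1].y, v[2].y}),
           std::min({v[0].z, v[1].z, v[2].z}) };
    mx = { std::max({v[0].x, v[1].x, v[2].x}),
           std::max({v[0].y, v[1].y, v[2].y}),
           std::max({v[0].z, v[1].z, v[2].z}) };
}

// Map one coordinate of a point into a cell index of an n-cell axis of
// the grid spanning [gridMin, gridMax]; boundary points are clamped.
int cellIndex(double p, double gridMin, double gridMax, int n) {
    int i = static_cast<int>(n * (p - gridMin) / (gridMax - gridMin));
    return std::clamp(i, 0, n - 1);
}
```

Applying `cellIndex` per axis gives the (i, j, k) cell an object's bounding-box corner falls into, so each primitive can be inserted into every cell its AABB overlaps.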