On the road to the new material system

Not started yet.

I decided that I didn’t want errors in my reflection/transmission code to mess with the new material system, and that I first had to make sure that code was working correctly. So I copied/adapted the code I had in the raytracer into the path tracer and created a scene showing all 4 materials (emissive, diffuse, transmissive and reflective) in action. I already had those working a few years ago, so I was sure it would be straightforward.

FALSE!

Problems of all kinds emerged, and I had to spend some time tracking them down and fixing them. I’ve found bugs in how I was modulating the incident light and in how I was calculating the intersections, plus problems in the “shape lights”, in the Fresnel code and elsewhere. Not all of them have been fixed yet. For example, shape lights work as long as I use a square or a cube, but with more complex shapes (e.g. Suzanne from Blender) it looks like no light is emitted at all. From a first investigation, every emitted ray is then “blocked” by a shadow test.

On a more positive note, the output (at least for the simple scenes I’ve tested so far) now closely resembles what comes out of Cycles. I’ve also implemented a “true” sphere light to replace the previous one, which was really fast and handy for modeling point lights but was still a delta light. I don’t know how much this will improve things, but I already had a “shape light” which just “wrapped” a shape; I just needed to pass it a sphere instead of a mesh.
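For the record, sampling a point uniformly on the sphere is the easy part. A minimal, self-contained sketch of the idea (hypothetical names, not QuarkLight’s actual classes):

#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

// Uniformly sample a point on a sphere of given center/radius.
// pdfArea is the constant pdf with respect to surface area.
Vec3 SampleSphereLight(const Vec3& center, double radius,
                       std::mt19937& rng, double& pdfArea)
{
    const double PI = 3.14159265358979323846;
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double z = 1.0 - 2.0 * uni(rng);                  // uniform in [-1, 1]
    double phi = 2.0 * PI * uni(rng);
    double s = std::sqrt(std::max(0.0, 1.0 - z * z)); // sin(theta)
    Vec3 p{ center.x + radius * s * std::cos(phi),
            center.y + radius * s * std::sin(phi),
            center.z + radius * z };
    pdfArea = 1.0 / (4.0 * PI * radius * radius);     // uniform over the area
    return p;
}

The pdf is constant over the surface; the light sampling code then converts it to a solid-angle pdf as usual (squared distance over the cosine at the light).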


I hope to fix the shape lights over the weekend.

Before starting with cleanup…

Before I start with the code cleanup and with the new material, I just wanted to have one more render ready. I’ve decided to go with good old Sponza. The new BVH and the OIDN denoiser really helped to keep the rendering time much, much lower than 10 years ago, despite the CPU of my current notebook being approximately as powerful as the old Q6600 used to render the first Sponza picture. Rendering time: less than 5 minutes at 700×700 px. So here it is.

I had to scale the sky luminance down by a factor of 1000. I’m not sure why; maybe the calculations are wrong, or the resulting ray color is expressed in a unit which must be handled differently (I’m not really paying all that attention to theoretical details… and I sure didn’t 12 years ago or so).

In addition, this image is not the direct output of QuarkLight. I’ve used Picturenaut to tone map it, because the tone mapper I’ve implemented either doesn’t work as well, or the parameters I use are not optimal. Either way, I prefer to fine tune the resulting image with a specialized application. (Note to self: maybe I could use an external library to do that?)


Some cleanup (but what a journey!)

Moving everything into a new project was not one of my most exciting tasks, and yet I thought it was one of the most important at this stage. As usual, I did not anticipate that it would be so hard to accomplish (it’s just me maybe, never been all that smart anyway, and I’m getting older…)!

Coming from 10 years working with Java, I can describe the recipe to include an external library as follows:

  1. Get the jar files.
  2. Add them to the project.
  3. Have fun.

Usually, adding libraries to my previous C++ pet projects was only slightly more complex: I had to take include files and lib/dll binaries into account, but that was pretty much it. For QuarkLight I wanted to do things “the way they are meant to be done” and moved to a CMake solution (never done that before) using Clang. Here’s the recipe:

  1. Get the boost sources. I had compiled them more than once before, but this time it proved impossible: b2 never worked once; no messages, no errors, it would just show a blank console. I tried with both the Clang and Visual Studio toolsets. I spent a whole weekend trying to make it work, then realized that, with the exception of boost::thread, I did not need any compiled component, and that the new C++ specs already offer threading support. In addition, I could replace boost’s smart_ptr with one of the new alternatives provided by C++ (see the sketch after this list). So I just dropped boost and removed the references from the project. I’ll never know why it did not work, and I could not find an answer on the web.
  2. How to manage the other libraries? I could just get the lib/dll files, but I was looking for something more maintainable. I was learning something about CMake and found this “find_package” thing, which looked really interesting. In order to use it, I got the vcpkg package manager.
  3. Installing SDL, Assimp and FreeImage was not straightforward, unfortunately: vcpkg started to complain about an outdated ninja. What the heck is ninja, anyway? I spent several hours figuring out that, for some reason, vcpkg was using the version provided by Visual Studio instead of its own, and that it was an older one. I replaced the executable with the most recent version and it worked…
  4. …not completely, though. An error was reporting permission issues when accessing the files. A few more hours just to figure out that it was the antivirus (I had never had such a problem before, so I did not think of it immediately).
  5. Once I finally had my libraries ready, I still had to learn how to import them into my project. It was not all that difficult (just a couple of hours), but then I was not able to compile my project: for some reason, it was being linked against the dynamic runtime, and thus unable to link the statically built external libraries.
  6. A few more hours later, I finally had my first path traced cornell box. In Debug mode. Of course, Release did not work, for several reasons that I had to fix one by one.
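For completeness, the boost replacement itself really was trivial. Something like this minimal sketch (illustration only, not the actual QuarkLight code) covers what I was using boost::thread and boost’s smart pointers for:

#include <memory>
#include <thread>
#include <vector>

int main()
{
    // was boost::shared_ptr
    auto scene = std::make_shared<int>(42);

    // was boost::thread / boost::thread_group
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([scene, i] { /* render tile i of the scene */ });
    for (auto& t : workers)
        t.join();
}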

Somewhere in the process, I really wished I had used Java or C# instead of C++…

Truth be told (and forgive me if someone feels offended), I never truly understood why the C++ compilation model is still the same intricate mess. Why are compilers free to handle so many things the way they want, so that binaries are usually incompatible? Why the need to have header and implementation files? Why the unbelievably complex preprocessor/builder/linker pipeline, each step with its own parameters and learning curve? I mean, if you add the build tool, we have, what, five distinct building steps? cmake->make->preprocessor->compiler->linker?

(While I’m in complaint mode, I’d like to add that I’ve had a look at some of the new C++ syntax and… oh my, does it really need to get so cryptic?)

But here I am, with a new CMake project and a working exe 😀

Before starting to implement the new shader system (I’m seriously considering the Disney Principled Shader, or at least some subset of it) I want to clean the code some more. Some of it is 10+ years old, and the main rendering loop is only a draft in the main file. I also want to introduce some of the newest C++ features (optional return values look really cool) and idioms. I want to have a quick look at how and where I use smart pointers and check if that can be done better. I also want to check my new’s and verify that they are correctly freed. And I want to render some more scenes, both for fun and to check the output under different conditions: it will be easier to debug the new shader if I can trust the renderer. I don’t want to spend too much time on this cleanup; the code must be “good enough”, it’s not meant for production.
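Just to show what I mean by optional return values, a tiny sketch (hypothetical names): the intersection routine can return std::optional instead of a bool plus an out-parameter.

#include <cstdio>
#include <optional>

struct Hit { double t; };

// An empty optional means "no intersection"; no sentinel values needed.
std::optional<Hit> Intersect(double t)
{
    if (t < 0.0)
        return std::nullopt;   // missed
    return Hit{ t };
}

int main()
{
    if (auto hit = Intersect(1.5))
        std::printf("hit at t = %f\n", hit->t);
}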

On the bright side, I hope that all this time is going to pay off in terms of competence 😀

Sampling light directly…

I was honestly sure it would be close to trivial. I was not prepared for the structural changes, the mathematical formulas I had to constantly check, or the programming errors that kept me busy the whole weekend.

And despite the results not fully convincing me, the performance improvement is impressive. The following image took 36 seconds to render with no direct sampling:

This is with direct sampling; it took 27 seconds:

If everything were correct, I would expect the same result, only cleaner. Anyway, if I take the HDR image instead of the LDR produced by my (apparently crappy) tone mapper, and use a decent one instead, I get:

I can reduce the sampling rate, enable the denoiser and render the following image in 18 seconds:


Please ignore the black lamp on the ceiling: if I sample lights directly, I need to skip them in the indirect calculations, but I should still consider them for the eye rays (and speculars, AFAIK). I still have to change the code to take this into account.
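The fix should look roughly like this sketch (hypothetical names and simplified types, not the actual QuarkLight code): emission is added only for eye rays and specular chains, while diffuse bounces skip it, because the direct sampling already accounted for it.

struct Spectrum
{
    double r = 0, g = 0, b = 0;
    Spectrum operator+(const Spectrum& o) const { return { r + o.r, g + o.g, b + o.b }; }
};

struct Material { bool emissive = false; Spectrum emission; };
struct HitDesc  { const Material* material = nullptr; };

Spectrum SampleLightsDirectly(const HitDesc&);      // next-event estimation, defined elsewhere
Spectrum TraceIndirectBounce(const HitDesc&, int);  // passes cameraOrSpecular = false for diffuse bounces

Spectrum Shade(const HitDesc& hit, int depth, bool cameraOrSpecular)
{
    Spectrum color;
    // Emission counts only for eye rays and specular chains; for diffuse
    // bounces it is skipped, since direct sampling already counted it.
    if (hit.material->emissive && cameraOrSpecular)
        color = color + hit.material->emission;
    color = color + SampleLightsDirectly(hit);
    if (depth > 0)
        color = color + TraceIndirectBounce(hit, depth - 1);
    return color;
}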

So what’s next?

So what’s next? I honestly don’t know where I will take QuarkLight. But there are a few things I’d really like to see implemented. Among them, those I’d like to work on first are:

-Support for “mesh” lights. At the moment, only Point and Area/Sphere lights are supported by the raytracer and by the direct sampling in the path tracer. This means that emissive surfaces are ignored by the lighting calculations of the raytracer, and in the path tracer they only contribute when paths happen to hit them (i.e., no direct sampling for emissive surfaces). I can do better by supporting these surfaces directly with a new MeshLight class (which is already present but never really completed/tested).

-Support for general material types. At the moment, the raytracer supports perfectly diffuse, reflective and transparent surfaces. The path tracer only supports the diffuse ones. None of the surfaces in my last render show specular reflections. I’m not sure how to extend that. From a certain point of view this would be simple: just “compose” the 3 channels (diffuse, specular and transmissive) and give each a value that specifies how much it contributes to the material look. The path tracer may also use this value to randomly sample one of the available channels (see the channel-selection sketch after this list).

There are a few problems with this approach, however: this parameter should not be constant (textures may change it), but I must somehow ensure that the sum of all three is never > 1 (outgoing light can’t be more than incoming light). In addition, the Fresnel equations may determine how much reflected vs. refracted light we must handle, and this immediately suggests a layered approach. Layered models are fascinating, but which one to choose? Can I implement something simpler/quicker before adventuring there?

-New project. The current code is relatively clean, but the project is a mess. I should create a new one from scratch (and with debug libraries this time!). That’s not the most interesting part of working on QuarkLight; maybe that’s why I keep postponing it…

-IBL/Texture/Output improvements. At the moment, I have a “Background” class (with image background and sky implementations). I also have a BackgroundLight class that wraps a background and can be used for lighting calculations (IBL). In order to make it converge faster I should pre-filter the images (i.e. blur them) to reduce variance and speed up convergence. Other fixes to the imaging pipeline are needed too: textures must be transformed into linear space on load (see the sRGB sketch after this list), the alpha channel must be considered for intersection evaluation, and I want to separate the rendering channels so that composition can be done afterwards. I’ve already created a simple “render target” with albedo, normals, alpha etc. channels, because OIDN needs this additional information, but the renderers still put the final image into a single channel. I’d like to change that.

-“Progressive” rendering. I’d like to upgrade the rendering loop so that 1) more samples are generated only when/where variance is still high and 2) the intermediate result is saved, so that renderings can be paused and restarted. In addition, the path tracer currently doesn’t implement Russian roulette to terminate the recursion (a sketch is below): as soon as it reaches the recursion limit, it returns black. That’s not optimal, even though I don’t know how much it interferes with the resulting quality…
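The Russian roulette mentioned in the last item could look like this minimal sketch (hypothetical names): instead of returning black at the recursion limit, paths are killed probabilistically and the survivors are reweighted, which keeps the estimator unbiased.

#include <algorithm>
#include <random>

// After a few guaranteed bounces, kill the path with probability (1 - p)
// and divide the surviving throughput by p.
bool SurviveRoulette(double& throughput, int bounce, std::mt19937& rng)
{
    if (bounce < 3)
        return true;                               // always keep short paths
    double p = std::min(throughput, 0.95);         // survival probability
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    if (uni(rng) >= p)
        return false;                              // terminate the path
    throughput /= p;                               // compensate the survivors
    return true;
}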
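For the material channels idea above, the sampling side could be as simple as this sketch (hypothetical names; it assumes the three weights already sum to at most 1):

#include <random>

enum class Channel { Diffuse, Specular, Transmissive, Absorbed };

// Pick one of the three channels with probability equal to its weight
// (kd + ks + kt <= 1; the remainder is treated as absorption). The sampled
// contribution must then be divided by 'prob' to keep the estimator unbiased.
Channel PickChannel(double kd, double ks, double kt,
                    std::mt19937& rng, double& prob)
{
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double u = uni(rng);
    if (u < kd)           { prob = kd; return Channel::Diffuse; }
    if (u < kd + ks)      { prob = ks; return Channel::Specular; }
    if (u < kd + ks + kt) { prob = kt; return Channel::Transmissive; }
    prob = 1.0 - (kd + ks + kt);
    return Channel::Absorbed; // the path terminates here
}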
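And the “textures into linear space” fix from the IBL/texture item is just the standard sRGB decode applied per channel on load; for reference:

#include <cmath>

// Standard sRGB -> linear conversion, applied per channel (input in [0, 1]).
double SrgbToLinear(double c)
{
    return (c <= 0.04045) ? c / 12.92
                          : std::pow((c + 0.055) / 1.055, 2.4);
}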


And then? Well, using my own geometric library is nice, but there are better/faster ones out there. And what about evaluating OpenCL/DXR to speed up the rendering? That’s daydreaming, I know 😀

Oplà! New update!

Only a few years since the last update XD

I have to admit, 10 years ago my free time suddenly disappeared, and when I got some back, I genuinely thought it was too late to work on a CPU-based, not really advanced raytracer in the very few hours I had for things like these.

10 years have passed. Then the quarantine came, and I found myself with a lot of time. The children are older now; they can play without constant supervision. So I took my old backups and started looking at some of my old, never finished projects. I just wanted to compile my raytracer again, I swear, I never intended to start working on it again. I got the code, Visual Studio, and pushed “Run”.

Of course, this did not work. A couple of dependencies were broken: I recompiled boost and Assimp, and I installed FreeImage and SDL. And there it was, a simple raytraced scene in all its glory.

But, as I said, there was not much I could do in quarantine. In addition, my band is temporarily on hold (sad story, actually), so no new songs to learn, not much desire to play the guitar these days, and I almost never paint during the day (acrylics dry too quickly when I get disturbed).

So I wondered: what if I replaced my super-slow BIH with a fast BVH? I found a really good one: https://github.com/madmann91/bvh. I also moved from the MS compiler to Clang. These 2 changes alone made a huge difference, with my raytracer now being 5-10 times faster on my test scenes. With such a performance improvement, I thought, maybe I could add a true path tracer on top of my code, just a simple one, nothing too complex.

Well, to make it short: I firmly believed that my days on QuarkLight were long gone, but I discovered that working on it is still a lot of fun. I don’t know how long I will have time to spend on it, but in the meantime I’ve implemented a couple of interesting things, like a nice addition to point lights that creates soft shadows, extensive improvements to the Assimp wrapper in order to support more scene elements, and the integration of the OIDN denoiser (which works miracles!). And I managed to fix a lot of bugs too 😀

Too much talking; here’s a picture that shows how much better QuarkLight is now (compare it to my last renders). Bye!


There be Light

Shame on me. I began to feel a bit sad because everyone and his grandmother has a working self-made GI raytracer (or at least that’s my impression when I go on Ompf.org). Of course, QuarkLight has a few features that I never showed in the previous renders (a couple of procedural textures, texture operators, and a thin lens camera model for depth of field, just to name a few) but, c’mon, no GI.

I added the following code to my Trace routine (right after the direct light computation and just before the reflection stuff):

//Add GI: sample a diffuse bounce direction from the material
real gipdf;
vector4<real> gi_sample = mat->Sample(in, hitdesc, gipdf, SCATTERING_TYPE::DIFFUSE);
if(gipdf > real(0))
{
    gi_sample = toWorld * gi_sample; //bring the sampled direction into world space
    SurfacePoint gipoint;
    gipoint.previousIOR = hitdesc.previousIOR;

    //offset the origin along the normal to avoid self-intersection
    ray<vector4<real>> gi_r = ray<vector4<real>>(hitdesc.point.point + hitdesc.shadingGeometry.normal * Q_EPSILON, gi_sample);
    Spectrum gi = Trace(gi_r, gipoint, depth - 1);

    //Monte Carlo estimator: weight by the cosine with the shading normal and
    //divide by the pdf (the BRDF factor is assumed to be folded into Sample)
    color = color + gi * (hitdesc.shadingGeometry.normal.AbsDot3(gi_sample) / gipdf);
}

I would call it some sort of path tracing, but I fear that would be misleading. I don’t even know if this produces correct images…

This is what I got after 4 hours of rendering (1024 spp):

Just 2 considerations:

1) Yes, it is very noisy. I don’t have the time to leave the PC working for 8 hours (and my wife does not like to have it running heavily when nobody is at home, or during the night). I will try to move from uniform hemisphere sampling to cosine-weighted sampling (see the sketch below these notes), which should improve things. Once I’m sure this approach is ‘correct’ I might also try multiple importance sampling, but still, there are simply too many things to add/fix in the direct lighting raytracer before thinking about GI.

2) I’m currently using a very simple tone mapper. The results are not very good, but my other TM is broken, so this works better. I also have the HDR output, and I used the Qtpfsgui tool to get prettier pictures, but the noise increases. Once I have a cleaner image I could use that tool on it before posting next time on the blog.
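For reference, the cosine-weighted sampling I mentioned above can be done with Malley’s method: sample a unit disk uniformly and project up to the hemisphere. The pdf becomes cos(θ)/π, which conveniently cancels the cosine term of the rendering equation. A minimal, self-contained sketch:

#include <algorithm>
#include <cmath>
#include <random>

struct Dir { double x, y, z; };

// Malley's method: uniform point on the unit disk, projected onto the
// hemisphere. The direction is in the local frame (z = surface normal)
// and has pdf = cos(theta) / pi.
Dir SampleCosineHemisphere(std::mt19937& rng, double& pdf)
{
    const double PI = 3.14159265358979323846;
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double r = std::sqrt(uni(rng));
    double phi = 2.0 * PI * uni(rng);
    double x = r * std::cos(phi), y = r * std::sin(phi);
    double z = std::sqrt(std::max(0.0, 1.0 - x * x - y * y)); // cos(theta)
    pdf = z / PI;
    return { x, y, z };
}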



An old rendering

Nothing new about QuarkLight, really. Fact is, I’ve been busy looking for a new job and getting used to it once found. In addition, I have spent much time working on my site. There are so many features that must be implemented as soon as possible (Phong-like speculars, a better OBJ loader and a faster BIH) that I don’t even know where to start.

In the meantime I’ve found an old rendering I did several months ago, before implementing the BIH. I was sure I had lost it, and since the whole scene was hard coded and I deleted that code, I would never have been able to render it again without rewriting every single scene element by hand. I thought I’d add the image to the blog, so here it is:

The nice thing about this image is that it clearly shows reflections, refraction, normal mapping and speculars.

Academy Awards

Now that the Academy Awards have been assigned, I would like to say a couple of things about the special effects Oscar. First, there was no real choice: Avatar’s effects were simply amazing and IMHO no movie in the next three or four years will reach that level of quality. It is Hollywood’s equivalent of Crytek’s Crysis.

Second: surprisingly enough, there were no SFX nominations for Transformers: Revenge of the Fallen or Terminator Salvation, perhaps the two movies with the best effects after Avatar. This reminds me of the 2008 awards, when The Golden Compass was given the award instead of Transformers or Pirates of the Caribbean: At World’s End, which definitely deserved it more.

Once I thought that the Academy Awards were reliable at least in the technical categories. Now I’m a bit more skeptical. As a side note, I’m really happy that District 9, an uncommon and yet beautiful movie, was considered by the Oscar committee, though I suspect that this is due more to the social meaning of the plot than to the movie itself.

Good Old Sponza…

Two more renderings. This time no reflection/refraction stuff, but the good old Sponza Atrium (the new version by Crytek), because I was the only one left without a Sponza render. No GI though: sky + sun direct lighting, and a point light to ’emulate’ GI (QuarkLight does not support the ambient term). In the meantime I was able to speed up the rendering times by a factor of 2 thanks to the great support of the Ompf.org community. Rendering times are still quite high (10 min, more or less, for the two pics below) but I’m getting better 🙂

You might notice a couple of artifacts in one of the two images: I’m quite sure there is a problem in the bloom filter. I’ll check.