The render times below are not very representative of the final system: only very rudimentary acceleration structures are in place so far, and the currently implemented Monte Carlo estimators are - to be blunt - pants. There are some variance reduction techniques, but not nearly enough to give reasonable quality at much reduced render times. The final goal is to implement bi-directional path tracing, culminating in Metropolis Light Transport. Bi-directional path tracing requires a slightly different mindset to an all-out 'traditional' Monte Carlo renderer, which is why I haven't concentrated on accelerating the direct lighting yet (although that will matter eventually, since direct lighting is an important optimization for BPT).
Incidentally, I'm using a couple of models which I didn't make. I don't currently have contact details for the authors (though I am looking for them) to ask permission to use the models in these shots; if they are yours and you have a problem, please email me (if I don't mail you first). I did get them from a public site and I'm fairly sure the license allows free use, but I'd still like to credit the authors.
Details of the images are given below. They are in more-or-less chronological order and are (obviously) my favourite images. All renders are 512x512, but the images here are screenshots of the OCaml graphics window, so they come out slightly larger (the system does output image files, but they are CIE XYZ in a fixed-point format so aren't directly usable). The final images aren't really that important for this set of renders since so much of the system will change. The test machine is an Athlon XP 1800+ with 768 MB of RAM, which should give a rough idea of performance.
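For anyone who does grab the raw XYZ output: the fixed-point unpacking is specific to my format and not shown, but getting from linear CIE XYZ to something displayable is just the standard sRGB (D65) matrix followed by gamma encoding. A rough sketch of what a viewer would need to do, not the renderer's actual code:

(* Sketch: convert a linear CIE XYZ triple to displayable sRGB using the
   standard D65 matrix and the sRGB transfer curve. *)
let xyz_to_srgb (x, y, z) =
  let r = 3.2406 *. x -. 1.5372 *. y -. 0.4986 *. z in
  let g = 1.8758 *. y +. 0.0415 *. z -. 0.9689 *. x in
  let b = 0.0557 *. x +. 1.0570 *. z -. 0.2040 *. y in
  (* clamp to [0,1] and gamma-encode *)
  let encode c =
    let c = max 0.0 (min 1.0 c) in
    if c <= 0.0031308 then 12.92 *. c
    else 1.055 *. (c ** (1.0 /. 2.4)) -. 0.055
  in
  (encode r, encode g, encode b)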
I'm using JPEG compression for all the thumbnails and images here, which of course introduces some extra artifacts on top of the rendering error. Originally I had all of the images as PNG, giving perfect reproduction, but it meant the page took an age to load. If anyone wants the original PNGs (or even the XYZ files), or would like individual images put up as PNG, mail me and I'll see what I can do.
A simple exponential exposure function is used for tone mapping, so some of the images are washed out or too dark. The camera transform is also very wrong, so if the images look a little warped, that's why. It's currently a hack to get something visible; eventually a much more correct model will be used.
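For the curious, an exponential exposure function just maps each channel through L_out = 1 - exp(-k * L). Something along these lines, though not necessarily the exact code in the renderer:

(* Sketch of a simple exponential exposure function, applied per channel.
   [k] is the exposure constant; larger k brightens the image. *)
let expose k l = 1.0 -. exp (-. k *. l)

let tonemap k (r, g, b) = (expose k r, expose k g, expose k b)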
Three spheres with a Lambertian diffuse BRDF (in two colours). The ground has a grey Cook-Torrance BRDF, although it is still sampled using a cosine distribution (hence the massive variance). All lighting comes from the sky dome; there is an attempt at some indirect lighting, but it is so half-assed it's unnoticeable.
Fairly low-polygon model (5000 faces). Lighting comes from both the sky dome and a small triangular area light source. The lighting is very fake; it was supposed to roughly simulate sunlight, but didn't come out quite as it should. The ground plane uses the same Cook-Torrance BRDF as before, and the model uses a grey Ashikhmin BRDF.
The APC using a pure grey Ashikhmin BRDF, lit by a small white triangle. The ground plane is diffuse too.
Two white diffuse spheres and a Phong-shaded red one. Somewhat indirectly lit, with some reflection. Notice the high variance of the reflection, caused by sampling with a cosine distribution rather than the BRDF. I also don't think the reflection is quite accurate; it looks a bit skewed. I know I had incorrectly written the cosine distribution sampling code (bitten by lexical aliasing - naughty), which might be causing this (now fixed).
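For reference, cosine-weighted hemisphere sampling usually looks something like the sketch below (a local frame with the normal along +z, u1 and u2 uniform in [0,1)); this is the standard construction, not necessarily line-for-line what is in the renderer:

(* Cosine-weighted hemisphere sample in a local frame (normal along +z).
   The pdf of the returned direction is cos(theta) / pi, which is what a
   Lambertian surface wants. *)
let sample_cosine_hemisphere u1 u2 =
  let pi = 4.0 *. atan 1.0 in
  let r = sqrt u1 in
  let phi = 2.0 *. pi *. u2 in
  (r *. cos phi, r *. sin phi, sqrt (max 0.0 (1.0 -. u1)))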
Medium-low polygon ATAT walkers (10000 faces each, 40000 in total), lit by a large spherical light source. Each walker has a different material. This is the first demonstration of the geometry instancing and transformation system. This model appears to be an almost perfect match for the AABB tree acceleration: compared to the APC model the render times are almost halved, for 8x the faces.
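The instancing itself is the usual trick: keep one copy of the mesh and transform rays into each instance's object space, rather than duplicating the geometry per walker. A very rough sketch with made-up types (the real interfaces differ, and transforms are shown as plain functions for brevity):

(* Sketch of geometry instancing: one shared mesh, many placements. *)
type vec = float * float * float
type ray = { origin : vec; dir : vec }

type 'mesh instance = {
  mesh         : 'mesh;          (* shared geometry, stored once *)
  world_to_obj : ray -> ray;     (* inverse transform, applied to rays *)
  obj_to_world : vec -> vec;     (* forward transform, for hit points *)
}

(* Intersect by moving the ray into the instance's object space rather than
   duplicating the mesh; [intersect_mesh] is whatever mesh intersector is in
   use (here it returns the hit point, or None). *)
let intersect_instance intersect_mesh inst ray =
  match intersect_mesh inst.mesh (inst.world_to_obj ray) with
  | None -> None
  | Some hit -> Some (inst.obj_to_world hit)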
Same scene, higher quality. I had quite a lot of problems getting this render to complete - it kept crashing - but I think that was down to my system, since it worked perfectly after a reboot.
Same scene, different light and camera. The light is slightly yellow. Based on the previous renderings, there appears to be little visible difference between 20 and 50 samples per pixel - as expected for naive Monte Carlo, where the error only falls off as 1/sqrt(N), so 2.5x the samples cuts the noise by only about 37%. 30 samples for 14 minutes seems a reasonable compromise.
A bunch of spheres and a single (visible) light source, with the light source size varied. Render times for both images are the same (to within seconds) because the same number of rays are traced (although that isn't 100% true once you account for shadow rays, which early-out as soon as any intersection is found).
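The early-out works because a shadow ray only needs a yes/no answer - is anything between the point and the light - so the traversal can stop at the first blocker instead of finding the closest hit. Something like this, with placeholder names:

(* Sketch of the shadow-ray early-out.  [intersect] returns the hit
   distance along the ray, if any; List.exists short-circuits, which is
   the early-out. *)
let occluded intersect objects shadow_ray max_dist =
  List.exists
    (fun obj ->
      match intersect obj shadow_ray with
      | Some t -> t > 0.0 && t < max_dist   (* any hit before the light? *)
      | None -> false)
    objects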
All spheres have a grey diffuse material. They are lit by two light sources situated roughly on the camera plane, just above the viewpoint: a red one on the right and a green one off to the left. The variance is higher for the same computation time because one light source is randomly selected for each sample (at least, I think that is the cause). The image on the right is the same rendering with 10 samples per pixel and the radiance normalized so just the colour is visible. I'm pretty sure these images aren't correct: I think I've missed a 0.5 scale factor (i.e. they are 2x too bright), which could be why the error doesn't look so bad. The third image below has this factor applied and is rendered the same as the first image. The final image is the third image with the 'max RGB' GIMP filter applied, which shows the approximate contributions from each light source.
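Picking one light uniformly at random per sample and scaling its contribution by the number of lights keeps the estimator unbiased, but trades that for extra variance compared to sampling every light, which is the effect above. A sketch, with estimate_direct standing in for whatever the direct lighting routine actually is:

(* Sketch of random light selection for direct lighting. *)
let sample_one_light estimate_direct lights hit =
  match Array.length lights with
  | 0 -> 0.0
  | n ->
    let i = Random.int n in
    (* compensate for only sampling 1 of n lights *)
    float_of_int n *. estimate_direct lights.(i) hit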
A little bit of smartening-up later and the quality has shot up (for the same render times).
I thought I'd better add HDR input/output early, so I did :) These images are the same rendered .hdr file, viewed at different exposures in HDRview. The lighting uses grace_probe.hdr from www.debevec.com.
These aren't the best demonstration, but since the indirect lighting is sucky at the moment, I've hacked in a specular reflection for all of the materials. It's only one bounce though, which is why the reflections of the spheres and ground plane are only directly lit.
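The hacked-in reflection is just the mirror direction r = d - 2(d.n)n traced for one extra bounce. A sketch, assuming a unit-length normal:

(* Sketch of the one-bounce mirror reflection direction.  Only one
   recursion level is allowed, which is why the reflections above show
   direct lighting only. *)
let reflect (dx, dy, dz) (nx, ny, nz) =
  let d_dot_n = dx *. nx +. dy *. ny +. dz *. nz in
  (dx -. 2.0 *. d_dot_n *. nx,
   dy -. 2.0 *. d_dot_n *. ny,
   dz -. 2.0 *. d_dot_n *. nz)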
After having one of those 'smack my head I'm dumb' moments, I fixed up the solid angle sampling and it all suddenly looks *much* better.
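For a spherical light, solid-angle sampling means picking directions uniformly inside the cone the sphere subtends from the shading point, rather than picking points on its surface - that's what makes such a difference to the noise. The textbook construction is roughly this, not necessarily my exact code:

(* Sketch of solid-angle sampling for a spherical light.  [dist] is the
   distance to the light centre, [radius] its radius, u1/u2 uniform in
   [0,1).  Returns (cos_theta, phi) in a frame whose z axis points at the
   light centre, plus the (constant) solid-angle pdf. *)
let sample_sphere_light_cone ~dist ~radius u1 u2 =
  let pi = 4.0 *. atan 1.0 in
  let sin2_theta_max = (radius /. dist) *. (radius /. dist) in
  let cos_theta_max = sqrt (max 0.0 (1.0 -. sin2_theta_max)) in
  let cos_theta = 1.0 -. u1 *. (1.0 -. cos_theta_max) in
  let phi = 2.0 *. pi *. u2 in
  let pdf = 1.0 /. (2.0 *. pi *. (1.0 -. cos_theta_max)) in
  (cos_theta, phi, pdf)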
Finally implemented basic texturing (for meshes only). I also discovered the diffuse BRDF has been wrong all along, hehe. The images below are at the same exposure (the spheres have modified_phong reflectance); the top image is the incorrect one.
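For reference (and not a claim that this was the exact bug), a Lambertian diffuse BRDF should just be the constant albedo / pi:

(* The 1/pi is what keeps a Lambertian BRDF energy conserving over the
   hemisphere. *)
let lambertian_brdf albedo = albedo /. (4.0 *. atan 1.0)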
OK, it's not really a 'hack', but it is biased. Still, I can't really get much better quality than this with the current implementation without giving it a *long* time. These are just for fun - I haven't quite started the BPT implementation yet :)
Oh yeah, simple sphere texturing is visible (they've got a kind of marble texture, but it's far too reflective when applied to the Phong model). And the Cornell Box doesn't have accurate materials, so it looks a bit crappy (and obviously uses a spherical light source, because meshes can't be lights yet).
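The simple sphere texturing is just a lat/long mapping from the direction out of the sphere centre to (u, v) texture coordinates, roughly the sketch below (the marble pattern and the actual texture lookup aren't shown):

(* Sketch of lat/long sphere texturing: map the unit direction from the
   sphere centre to the surface point into (u, v) in [0,1]^2, with y up. *)
let sphere_uv (dx, dy, dz) =
  let pi = 4.0 *. atan 1.0 in
  let u = 0.5 +. atan2 dz dx /. (2.0 *. pi) in
  let v = acos (max (-1.0) (min 1.0 dy)) /. pi in
  (u, v)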