The following image illustrates how removing the recursion distributes the photons better over the visible parts of the scene when using camera sampling:
Figure 1: Camera sampling with and without the recursion
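In code, the non-recursive camera pass boils down to keeping only the first hit of every camera ray. Here is a minimal sketch of that idea; the types and the intersect/occluded callbacks are simplified placeholders of my own, not the actual PBRT classes:

```cpp
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 o, d; };
struct OcclusionPhoton { Vec3 p; };

// intersect: fills *hit with the first visible surface point, if any.
// occluded:  true if the light is blocked as seen from the given point.
using IntersectFn = std::function<bool(const Ray&, Vec3*)>;
using OccludedFn  = std::function<bool(const Vec3&)>;

// Keep only the first hit of every camera ray. Without the recursive
// bounce, photons can only land on surfaces the camera actually sees.
std::vector<OcclusionPhoton> CameraSamplePhotons(
        const std::vector<Ray>& cameraRays,
        const IntersectFn& intersect,
        const OccludedFn& occluded) {
    std::vector<OcclusionPhoton> photons;
    for (const Ray& ray : cameraRays) {
        Vec3 hit;
        if (!intersect(ray, &hit))
            continue;                  // the ray escapes the scene
        if (occluded(hit))             // the visible point lies in shadow
            photons.push_back({hit});
        // no secondary rays here: the recursion has been removed
    }
    return photons;
}
```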
The following images show renders made with the new, modified sampling techniques:
When we use light sampling, we can see that the density of the occlusion photons drops as the distance from the light source increases, which is what we expect. This is most noticeable when we compare the shadows of the killeroos.
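This falloff is easy to see in a sketch. The version below is just my reading of how such a light sampling pass can work, not necessarily the exact code behind these renders, and it uses placeholder types and callbacks: photons leave the light in uniform random directions, and the surfaces behind the first hit along each ray are the ones occluded from the light.

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 o, d; };
struct OcclusionPhoton { Vec3 p; };

// Returns the first surface point hit by the ray, if any.
using IntersectFn = std::function<bool(const Ray&, Vec3*)>;

// Uniform random direction on the unit sphere.
static Vec3 UniformSphere(std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.f, 1.f);
    float z   = 1.f - 2.f * u(rng);
    float r   = std::sqrt(std::max(0.f, 1.f - z * z));
    float phi = 2.f * 3.14159265f * u(rng);
    return {r * std::cos(phi), r * std::sin(phi), z};
}

// Shoot rays from the light in uniform random directions. The first
// surface a ray meets is lit; everything it meets after that is
// occluded from the light, so those points receive occlusion photons.
// Because the directions are uniform around the light, the photons
// reaching a patch of surface thin out roughly with the square of its
// distance from the light, which is the falloff seen in the shadows.
std::vector<OcclusionPhoton> LightSamplePhotons(
        const Vec3& lightPos, int photonCount,
        const IntersectFn& intersect, std::mt19937& rng) {
    const float kEps = 1e-4f;          // step past the previous hit
    std::vector<OcclusionPhoton> photons;
    for (int i = 0; i < photonCount; ++i) {
        Ray ray{lightPos, UniformSphere(rng)};
        Vec3 hit;
        bool firstHit = true;
        for (int s = 0; s < 16 && intersect(ray, &hit); ++s) {
            if (!firstHit)
                photons.push_back({hit});   // shadowed point
            firstHit = false;
            ray.o = {hit.x + kEps * ray.d.x,
                     hit.y + kEps * ray.d.y,
                     hit.z + kEps * ray.d.z};
        }
    }
    return photons;
}
```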
Uniform sampling succeeds in distributing the photons more evenly. This is also most noticeable in the shadows of the killeroos. The distribution does not seem very uniform on the killeroos themselves, but this is expected since not every position on a killeroo can generate an occlusion photon.
Note that for the photon plot I moved the camera back a little to highlight the fact that no occlusion photons are stored at invisible locations of the scene. That's why the photon density plot of the camera sampling technique cuts off the shadow of the right killeroo. As the photon density plot clearly shows, the occlusion photon locations exactly match the shadowed regions on the killeroo.
Conclusion
From the resulting images, camera sampling is clearly the best option in terms of quality, because it places the occlusion photons at the most relevant positions in the scene. The downside of camera sampling is that we have to rebuild the occlusion map whenever the camera moves. For interactive applications uniform sampling would be a better choice; however, it takes the longest to compute. This is because we use global line sampling (which misses a lot of the geometry) to generate uniform occlusion photon locations. This was necessary since PBRT doesn't allow you to directly access the geometry of the scene.
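For reference, here is roughly what such a global line sampling pass can look like. The bounding sphere, the placeholder types and the occlusion callback are all simplifications for illustration, not the actual implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <functional>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 o, d; };
struct OcclusionPhoton { Vec3 p; };

using IntersectFn = std::function<bool(const Ray&, Vec3*)>;
using OccludedFn  = std::function<bool(const Vec3&)>;

// Uniform random point on the unit sphere.
static Vec3 UniformSphere(std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.f, 1.f);
    float z   = 1.f - 2.f * u(rng);
    float r   = std::sqrt(std::max(0.f, 1.f - z * z));
    float phi = 2.f * 3.14159265f * u(rng);
    return {r * std::cos(phi), r * std::sin(phi), z};
}

// A global line is the segment between two independent uniform points
// on the scene's bounding sphere. Such lines cross every patch of
// surface with probability proportional to its area, which is what
// makes the stored photons roughly uniform over the geometry, but a
// large fraction of the lines miss the objects entirely. That wasted
// work is where the long computation time comes from.
std::vector<OcclusionPhoton> GlobalLinePhotons(
        const Vec3& c, float radius, int lineCount,
        const IntersectFn& intersect, const OccludedFn& occluded,
        std::mt19937& rng) {
    const float kEps = 1e-4f;
    std::vector<OcclusionPhoton> photons;
    for (int i = 0; i < lineCount; ++i) {
        Vec3 a = UniformSphere(rng), b = UniformSphere(rng);
        Ray ray{{c.x + radius * a.x, c.y + radius * a.y, c.z + radius * a.z},
                {radius * (b.x - a.x), radius * (b.y - a.y), radius * (b.z - a.z)}};
        // Walk the whole line, storing a photon at every shadowed hit.
        Vec3 hit;
        for (int s = 0; s < 16 && intersect(ray, &hit); ++s) {
            if (occluded(hit))
                photons.push_back({hit});
            ray.o = {hit.x + kEps * ray.d.x,
                     hit.y + kEps * ray.d.y,
                     hit.z + kEps * ray.d.z};
        }
    }
    return photons;
}
```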
PBRT allows accessing the geometry, but you need to go through the SurfaceIntegrator. If you have my source code, you can look at NoAccelerator.cpp for a simple example.
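For readers without the source at hand, the gist of a NoAccelerator is an aggregate that skips acceleration entirely and simply keeps its primitive list accessible. The sketch below only captures that idea with simplified placeholder types; it is not PBRT's actual Primitive/Aggregate interface:

```cpp
#include <limits>
#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 o, d; float tMax = std::numeric_limits<float>::infinity(); };
struct Hit  { Vec3 p; float t = 0.f; };

struct Primitive {
    virtual ~Primitive() = default;
    // Expected to ignore hits beyond r.tMax and to shrink r.tMax when
    // it records a closer hit in *h.
    virtual bool Intersect(Ray& r, Hit* h) const = 0;
};

class NoAccelerator : public Primitive {
public:
    explicit NoAccelerator(std::vector<std::shared_ptr<Primitive>> prims)
        : prims_(std::move(prims)) {}

    // Linear search: test every primitive; because each one respects
    // r.tMax, the loop ends with the closest hit stored in *h.
    bool Intersect(Ray& r, Hit* h) const override {
        bool hitAnything = false;
        for (const auto& prim : prims_)
            if (prim->Intersect(r, h))
                hitAnything = true;
        return hitAnything;
    }

    // The useful part: the raw primitive list stays exposed, so an
    // integrator or a photon-shooting pass can iterate the geometry
    // directly instead of going through an acceleration structure.
    const std::vector<std::shared_ptr<Primitive>>& Primitives() const {
        return prims_;
    }

private:
    std::vector<std::shared_ptr<Primitive>> prims_;
};
```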