SIGGRAPH 2012 – Sunday

SIGGRAPH started with the Fundamentals seminar which I skipped since the Level was Introductory and the Intended Audience was raw beginners.

My SIGGRAPH education started at Optimizing Realistic Rendering With Many-Light Methods (course notes here) which was heavily focussed on Virtual Point Light methods.

Instant radiosity

Alexander Keller started with a retrospective look at Instant Radiosity (1997 – original slides), from which the VPL concept could be said to originate. This presentation introduced 2 major ideas: lighting a scene from many individual points and accumulating the results; and the singularity caused by the inverse square distance component of the lighting equation – for example, when the distance from the light drops below 1.0 units, the inverse square term amplifies the light's contribution without bound.

The original technique just used OpenGL to rasterize the scene multiple times with different lights enabled. The initial example sampled the light positions from an area light and the second example added virtual point lights to simulate a light bounce. At that time OpenGL would clamp the lighting to 1.0 when writing to the render target, which clamped the singularity away. This was the first time the course highlighted the need to compensate for the energy lost by clamping to avoid overdarkening in scenes, and this is where a lot of the VPL research has focussed in the last few years.
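
To make the singularity concrete, here's a minimal sketch (my own illustration, not code from the course) of a single VPL's geometry term and the effect of clamping it:

    def vpl_contribution(dist, cos_receiver, cos_vpl, flux, clamp_value=None):
        # Geometry term between the shaded point and the VPL; the 1/d^2 factor
        # is the singularity that blows up as dist approaches zero.
        g = max(cos_receiver, 0.0) * max(cos_vpl, 0.0) / (dist * dist)
        if clamp_value is not None:
            # Clamping removes the spike but throws away energy, which is what
            # the bias compensation techniques later in the course try to restore.
            g = min(g, clamp_value)
        return flux * g

    # As the distance shrinks, the unclamped term explodes while the clamped
    # one saturates (and the difference is the lost energy).
    for d in (1.0, 0.1, 0.01):
        print(d, vpl_contribution(d, 1.0, 1.0, 0.05),
              vpl_contribution(d, 1.0, 1.0, 0.05, clamp_value=1.0))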

The last part of the talk went on to cover how to generate light paths, but appeared to be an advert for the Advanced (Quasi) Monte Carlo Methods for Image Synthesis course later in the week.

Overall, the talk explained the original technique and its limitations, but it was heavy on equations with minimal descriptions, which will make the slides (which lack notes) somewhat harder to read afterwards – an audience member even raised a query about a missing equation during the Q&A.

Virtual Spherical Lights (originally presented at SIGGRAPH Asia)

Virtual Spherical Lights are an extension to Virtual Point Lights to better handle glossy surfaces with sharply peaked BRDF lobes – the example given was a kitchen with specular responses on all of the materials. The presenter reiterated that clamping the contribution to avoid the singularity loses energy, and VSLs aim to reduce the need for clamping. Instead of integrating single points that can generate the singularities, VSLs integrate over the solid angle subtended by a virtual spherical light (like a pseudo area light). The results look much better in the kitchen scene, although in a contrived scene, “Anisotropic Tableau,” which contains strong directional lights and an anisotropic metallic plane, convergence requires a lot of lights.
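
As a rough sketch of the underlying idea (my own simplification, not the paper's estimator, and brdf_cos is just a placeholder for the BRDF-times-cosine term of the sampled direction), a VSL can be evaluated by Monte Carlo sampling directions inside the cone subtended by the sphere, so no 1/d^2 term ever appears:

    import math, random

    def vsl_estimate(dist, radius, brdf_cos, flux, num_samples=16):
        # Solid angle of a sphere of radius r seen from distance d (assumes d > r).
        sin_max = min(radius / dist, 1.0)
        cos_max = math.sqrt(max(0.0, 1.0 - sin_max * sin_max))
        solid_angle = 2.0 * math.pi * (1.0 - cos_max)
        # Average the placeholder BRDF-cosine term over directions sampled
        # uniformly inside the subtended cone; there is no singularity to clamp.
        total = 0.0
        for _ in range(num_samples):
            cos_theta = 1.0 - random.random() * (1.0 - cos_max)
            total += brdf_cos(cos_theta)
        # Spread the light's flux over the sphere's surface, as for an area light.
        area = 4.0 * math.pi * radius * radius
        return (flux / area) * solid_angle * total / num_samples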

The talk also highlighted how the work has been reused for participating media, extending VPLs to ray lights (presented at SIGGRAPH this year as Virtual Ray Lights for Rendering Scenes with Participating Media) and to beam lights (Progressive Virtual Beam Lights, as seen at EGSR).

During the Q&A, someone asked how you pick the radius. The presenter said it’s based on the 10 nearest lights. The original paper also mentions a user specified constant.

Overall, the technique seems to give better results than VPLs at a minor cost increase, but the seemingly arbitrary choice of local light count and the user-specified constant feel a bit too voodoo.

Improved VPL distribution

3 techniques for distributing VPLs were covered in this talk, each with its own example scene. The examples were a simple box with an object inside, a room shaped like an S curve with the light source at one end of the S and the camera at the other, and another kitchen scene with reflective walls. In each case, VPLs were needed in specific places to improve their contribution to the resulting image.

1) Rejection of VPLs based on their contribution versus the average result (Simple and Robust Iterative Importance Sampling of Virtual Point Lights by Iliyan Georgiev, Philipp Slusallek).

The average result is generated from an initial set of pilot VPLs. The presenter said that there's no need to be accurate, and that the mechanism is cheap and simple to implement. However, this distribution is not good for local interreflections due to the sparse distribution of pilot VPLs. This mechanism is used in Autodesk's 360 product, covered later.
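
A loose sketch of the rejection idea as I understood it (placeholder names, not the authors' code): estimate each candidate VPL's contribution to the image, then keep it with a probability based on how it compares to the pilot average.

    import random

    def distribute_vpls(candidates, estimate_contribution, num_pilots=32):
        # estimate_contribution(vpl) is a placeholder that shades a sparse set
        # of camera samples from the VPL and returns a scalar contribution.
        pilots = candidates[:num_pilots]
        average = sum(estimate_contribution(v) for v in pilots) / len(pilots)
        accepted = []
        for vpl in candidates:
            c = estimate_contribution(vpl)
            # Strong VPLs are always kept; weak ones survive only occasionally
            # (a full implementation re-weights them to keep the result consistent).
            if average == 0.0 or random.random() < min(1.0, c / average):
                accepted.append(vpl)
        return accepted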

2) Build light paths from the light to the camera and select the 2nd vertex from the camera as the VPL.

Based on something like Metropolis Instant Radiosity, you can mutate the paths using the Metropolis-Hastings algorithm to generate a range of samples. Although this technique works better than rejection in more complex cases, implementing Metropolis-Hastings is notoriously hard.
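
For reference, the Metropolis-Hastings acceptance step itself is simple – the hard part is designing good path mutations. A toy sketch with a symmetric mutation (nothing specific to the paper):

    import random

    def metropolis_samples(target, initial, mutate, num_steps):
        # target(x) returns the (unnormalized) contribution of a sample;
        # mutate(x) proposes a new sample via a symmetric transition.
        samples = []
        current, f_current = initial, target(initial)
        for _ in range(num_steps):
            proposal = mutate(current)
            f_proposal = target(proposal)
            # Accept with probability min(1, f'/f); rejected proposals keep the
            # current sample, so bright regions are revisited more often.
            if f_current == 0.0 or random.random() < min(1.0, f_proposal / f_current):
                current, f_current = proposal, f_proposal
            samples.append(current)
        return samples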

3) Sample VPLs from the camera

Appearing in Bidirectional Instant Radiosity and Combining Global and Local Virtual Lights for Detailed Glossy Illumination, this method generates local VPLs which influence local tiles, rejecting those with zero contribution. This produces a result that can compensate for the clamping applied when calculating the global VPL contribution. When compared with VSLs, the VSLs lose highlights while the local VPLs lose shadow definition.

Overall, the rejection method seemed to give the best results for the complexity of implementation, although the scene in the talk and those in the paper are both relatively simple. Looking at the S curve room, my first thought was how any real-time technique was going to be able to light it, and to be fair, since that scene came from the Metropolis example, that was the only technique that solved it. Personally, I think it should be adopted by all VPL techniques as a new Cornell box, although it's a scene that I'd expect to be better lit anyway.

Scalability with many lights I

This presentation introduced LightCuts and later Multidimensional LightCuts.

The basic principle is that each of the contributions from thousands or millions of VPLs is minimal, and therefore the contributions of multiple VPLs can be evaluated at a time. To support this, the VPLs in the scene are recursively clustered, with each cluster represented by a dominant VPL picked from within it. These clusters form a hierarchy called a light tree.

Once the light tree is built, you can light each pixel in the scene by creating a cut through the tree, where the cut is the list of best representative lights that you will evaluate to light that pixel. Starting at the root of the light tree, you evaluate the error in lighting the pixel with that node and descend to the child nodes if the error would be too great. The quoted error threshold is 2%, based on Weber's Law, which quantifies the smallest perceptible change – keeping the error below 2% implies that transitions between clusters should not be visible.
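
In rough pseudo-Python (my own sketch, not the authors' code), cut selection can be written as a priority queue that keeps refining whichever node currently has the largest error bound:

    import heapq

    def light_cut_estimate(root, estimate_node, error_bound, threshold=0.02):
        # estimate_node(node): light the pixel with the node's representative.
        # error_bound(node): conservative upper bound on the error of doing so
        # (zero for a leaf, since a single VPL is exact).
        total = estimate_node(root)
        counter = 0  # tie-breaker so the heap never compares node objects
        heap = [(-error_bound(root), counter, root)]
        while heap:
            neg_err, _, node = heap[0]
            if -neg_err <= threshold * total:
                break  # worst remaining error is within ~2% (Weber's Law)
            heapq.heappop(heap)
            if not node.children:
                continue  # a single light cannot be refined any further
            # Replace the node's estimate by the sum of its children's estimates.
            total -= estimate_node(node)
            for child in node.children:
                counter += 1
                total += estimate_node(child)
                heapq.heappush(heap, (-error_bound(child), counter, child))
        return total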

Multidimensional LightCuts extends LightCuts into additional dimensions, such as those handled by higher order rasterization – time, antialiasing samples, sampling the aperture of the camera, and even participating media were given as examples. The multidimensional extension discretizes the points at which you'll be gathering lighting contributions and then clusters those into a gather tree. The Cartesian product of the light and gather trees creates a product graph which represents the combinations of cuts through both trees. Rather than creating that huge graph explicitly, selecting cuts in both trees achieves the same results. The error also needs to be evaluated between the gather cluster and the light cluster, rather than point to cluster as in classical LightCuts.

Although I’d heard of the LightCuts technique, I’d not looked into its implementation, but this presentation made it easy to understand. There’s an obvious parallel with the Point Based Global Illumination techniques that cluster points and then select points for lighting based on a solid angle error metric, but I do prefer the more accurate error metric of LightCuts.

Scalability with many lights II

This presentation demonstrated alternatives to the LightCuts method.

The first example was based on interpreting the lights-to-points mapping as a matrix and evaluating only a few of its rows and columns (Matrix Sampling For Global Illumination). Rows are randomly selected and fully lit, and then the columns within the selected rows are clustered based on cost minimization. Once a set of clustered columns is selected, those columns are used to light all of the other pixels. The examples demonstrated good results except where there were lots of independent lighting contributions which the clustering couldn’t handle.
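
As a much-reduced sketch of the matrix view (importance sampling by reduced-column norms rather than the paper's cost-minimizing clustering, with shade() as a placeholder for one matrix entry), the flavour is something like:

    import math, random

    def representative_lights(num_pixels, num_lights, shade, num_rows=64, num_reps=32):
        # Sample a handful of rows (pixels) and shade them against every light.
        rows = random.sample(range(num_pixels), num_rows)
        norms = []
        for light in range(num_lights):
            column = [shade(pixel, light) for pixel in rows]
            norms.append(math.sqrt(sum(c * c for c in column)))
        total = sum(norms)
        # Pick representative lights with probability proportional to their
        # reduced-column norm; the paired weight keeps the full sum over all
        # lights consistent in expectation when only the representatives are used.
        chosen = random.choices(range(num_lights), weights=norms, k=num_reps)
        return [(light, total / (num_reps * norms[light])) for light in chosen]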

The technique can also be extended to support animation by stacking a 2D matrix per frame into a 3D matrix. In this case, the sampled rows become horizontal slices, and the clusters are smaller 3D matrices that are repeatedly split at the point of highest cost, with each split made along either the time or the light dimension depending on which is the better alternative.

The talk also introduced LightSlice: Matrix Slice Sampling for the Many-Lights Problem, which chooses different clusters for large slices of the matrix. Another alternative is clustering the visibility rather than the shading, as part of the previously mentioned Combining Global and Local Virtual Lights for Detailed Glossy Illumination. The talk also mentioned another clustered visibility method, Real-time Indirect Illumination with Clustered Visibility, presented by Dong in 2009.

Working with the problem as a matrix was an interesting way of looking at it. However, the first method, being fundamentally based on the selection of the initial rows, has an obvious parallel with randomly selecting points from the scene, and it's this randomness that makes it another technique that feels like voodoo. The matrix view does highlight how minimal the contribution of most VPLs is, which is exactly what these techniques need to exploit for scalability.

Real-time many-light rendering

This talk was presented by Carsten Dachsbacher who, as far as I am aware, has previously worked in the demo scene and has a good idea of what compromises are required to get from “hell slow” (a term used repeatedly in this presentation to describe the performance of offline implementations) to real-time.

The major problem with a real-time implementation of VPLs is the visibility computation between each shaded point and each light. In real time, this means focussing more on rasterization than raytracing and reducing the number of VPLs, looking at thousands rather than a million.

To start with, the simplest way to generate VPLs is rasterization from the point of view of the light into something like a Reflective Shadow Map (PDF), which is similar to the first stage of Light Propagation Volumes. Once the VPLs have been generated, there are lots of optimizations for using many VPLs with deferred shading, but the real bottleneck is the visibility check.

Using shadow maps to accelerate that visibility check was the focus of the next part of the talk. The majority of the cost of the tiny shadow map required per VPL is in the rasterization stage. The suggested mechanism was to store a precalculated set of random points in the scene (as {triangle ID, barycentric coordinate} pairs to allow dynamic scenes) and use those to render the shadow maps – resulting in a “crappy map” because the sparsity of the points leaves holes in the map. A technique called pull-push can reconstruct the missing parts of the map, and can be applied to all of the maps at once if they are stored in the same texture. An alternative to rendering a set of points like this is to render something like a QSplat hierarchy, as implemented in the Micro-Rendering for Scalable, Parallel Final Gathering paper using CUDA.
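
For the VPL generation step, a minimal sketch (placeholder buffer names, not a specific engine's API) of turning reflective shadow map texels into VPLs looks something like this:

    import random

    def spawn_vpls_from_rsm(rsm_position, rsm_normal, rsm_flux, width, height, num_vpls):
        # Each RSM texel stores the position, normal and reflected flux of the
        # surface seen from the light; a random subset of texels becomes VPLs.
        # A real implementation would importance-sample texels by flux instead.
        vpls = []
        scale = (width * height) / float(num_vpls)  # each VPL stands in for many texels
        for _ in range(num_vpls):
            x = random.randrange(width)
            y = random.randrange(height)
            vpls.append({
                "position": rsm_position[y][x],
                "normal": rsm_normal[y][x],
                "flux": [channel * scale for channel in rsm_flux[y][x]],
            })
        return vpls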

The next part of the course covered achieving high quality results, focusing on bias compensation and participating media. By reformulating the lighting equation, you can arrive at a version that bounds the transport to avoid singularities, plus a residual term which you iteratively accumulate in screen space. Only a few iterations are required, with more needed at discontinuities, which can be found by downsampling the G-buffers and creating a discontinuity buffer. As with all screen space techniques, this suffers from the lack of information about geometry that isn't captured in the G-buffer.
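
A small sketch of what such a discontinuity buffer might look like (depth only, made-up threshold, and it assumes the resolution is a multiple of the downsampling factor – normals would be handled similarly):

    def discontinuity_buffer(depth, width, height, factor=4, threshold=0.05):
        # Mark a low-resolution texel when the full-resolution depths it covers
        # vary too much; those texels receive more bias compensation iterations.
        out_w, out_h = width // factor, height // factor
        flags = [[False] * out_w for _ in range(out_h)]
        for oy in range(out_h):
            for ox in range(out_w):
                block = [depth[oy * factor + dy][ox * factor + dx]
                         for dy in range(factor) for dx in range(factor)]
                near, far = min(block), max(block)
                if far > 0.0 and (far - near) / far > threshold:
                    flags[oy][ox] = True
        return flags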

The participating media example referred to Approximate Bias Compensation for Rendering Scenes with Heterogeneous Participating Media. Handling participating media needs smarter bias compensation, since VPLs generate sparkly points in the medium, and sampling along a ray from the camera is an additional expense however you handle participating media. The underlying technique adds some real-time approximations to Raab et al.'s Unbiased Global Illumination with Participating Media.

The most important part of the presentation was the agreement that the visibility problem is the hardest part of the equation, something repeated from the PBGI work and something that needs an additional scene mechanism to handle, such as rasterization, raytracing or a voxel scene representation. I’m also glad to know that researchers are looking at the problem from a real-time point of view, although 8 fps isn’t sufficiently real-time yet. This presentation also left me with a large number of references to read!

Autodesk 360

Although very similar to the 360 presentation at HPG, this focused more on how the previously presented techniques from the course were integrated into and extended for Autodesk's 360 cloud rendering service. They went with Multidimensional LightCuts since they describe it as scalable, uniform and supporting advanced effects. There were 3 main areas in which they improved the algorithm:

1) Eye Ray Splitting to better handle specular effects on glossy objects.
2) VPL targeting to support good lighting when rendering small subsets of large architectural models.
3) Directional Variant Light support since many Autodesk products expose them (and they are now part of bidirectional lightcuts).

The presentation also listed 5 major advantages, all covered in sufficient depth on the slides themselves.

My Wrapup

By the end of the course it appeared that there were some important takeaways:

1) VPLs are an active area of research, mostly around the limitations caused by the bias and extensions to include other features.
2) Bias compensation is incredibly important due to the clamping used to avoid problems with the inverse distance squared relation. Participating media makes this even scarier due to the generation of sparkly points local to the VPL.
3) Specular reflections are incredibly complex with VPL techniques.
4) Real time usage is limited due to the visibility term in the equation.

Technical Papers Fast Forward

A staple of the SIGGRAPH experience, the Technical Papers Fast Forward session has all of the technical papers cued up and ready to show, with less than a minute per presenter to cover the salient points of their paper and attempt to coerce you into seeing their presentation. The video preview was made available prior to SIGGRAPH and was also shown in the Electronic Theater sessions. Typically songs, funny videos or overdramatic voiceovers are used to promote the papers, but this year it was mostly haikus.

These are the main papers that caught my eye:

Theory, Analysis and Applications of 2D Global Illumination
Based on the idea of understanding 2D GI better and extending that understanding to applications in 3D GI.

Reconstructing the Indirect Light Field for Global Illumination
This reconstruction method appears to improve image quality based on sparse samples which could be a possibility for low resolution rendering or sampling.

Animating bubble interactions in a liquid foam (from http://www.cse.ohio-state.edu/~tamaldey/papers.html)
The videos for this talk made it appear interesting and approachable, based on what looks like a simple variant of Voronoi diagrams.

Eulerian Video Magnification for Revealing Subtle Changes in the World
The 2 example videos for this paper demonstrated detecting a person's heart rate and an infant's breathing by amplifying subtle changes in video.

Tools for Placing Cuts and Transitions in Interview Video (Video)
This paper appeared interesting for its possible applications to evil by recutting interview clips to appear linear. Fortunately, a later paper appeared to be investigating techniques for forensic detection of modifications to video, which could lead to an interesting arms race.

Resolution Enhancement by Vibrating Displays (Video)
I was surprised by the novel use of vibration to increase the perceived resolution of an image.

Design of Self-supporting Surfaces
This seemed interesting due to the comparisons to the low technology methods used by architects such as Gaudi – hang weights from strings and you can model your building upside down!

Position-Correcting Tools for 2D Digital Fabrication
The video showed a milling tool manipulated by hand that seemed to autocorrect as the user followed the template guide.

Additional Links:

The Self Shadow blog is doing a fantastic job of collecting all of the links to complement Ke-Sen Huang’s Technical Papers page.
