SIGGRAPH 2012 – Tuesday

Beyond Programmable Shading (Slides / SIGGRAPH Page)

Five Major Challenges in Real-Time Rendering http://bps12.idav.ucdavis.edu/talks/02_johanAndersson_5MajorChallenges_bps2012.pdf
After the introduction to the BPS course, Johan led straight into a follow-up to his talk from the 2010 course. The five challenges from two years ago are still with us; however, each of them can now be broken down into different areas, giving roughly 25 topics spread across the five major groups:

  1. Cinematic Image Quality – Types of aliasing, anti-aliasing and blur.
  2. Illumination – Dynamic GI, shadows, reflections
  3. Programmability – Exposing a common front-end shader language for different backends (for example HSAIL, the Heterogeneous System Architecture Intermediate Language), supporting the GPU generating its own work, improving coherency between tasks with things like queues, and simpler CPU-GPU collaboration. Also programmable blending (as is being exposed by Apple in iOS 6 via APPLE_shader_framebuffer_fetch, discussed here).
  4. Production costs – Reduce iteration time. A great renderer won’t reduce costs on its own; the real need is to reduce the cost of creating content.
  5. Scaling – Power vs resolution.

I do know that prior to the talk Johan was soliciting feedback from others in the industry, so the challenges listed were more than one man’s personal list, and it was good to see the contributors’ names at the end of the talk.

Intersecting Lights with Pixels: Reasoning about Forward and Deferred Rendering http://bps12.idav.ucdavis.edu/talks/03_lauritzenIntersectingLights_bps2012.pdf

This was an interesting overview of the state of the art in generating the lists of lights to apply to pixels in a forward or deferred renderer, without getting into a forward-versus-deferred debate beyond some bandwidth comparisons that were nearly inconclusive because the figures came out roughly equal.

There were two main things that I thought made useful takeaways:
1) Testing the bounds of every light against every tile repeats a lot of the same work. Instead, something like a hierarchical quad-tree over the tiles should reduce the number of tests required and avoid the redundancy.
2) The suggestion of using bimodal Z clustering rather than the larger number of depth buckets used in Clustered Deferred and Forward Shading by Olsson et al. at HPG (see the sketch after this list). While reading the Olsson paper, I thought that heavily subdividing the Z range was overkill, and some Twitter discussion around that time highlighted that in some games there’s limited content between the end of the gun sights and the nearest wall, in which case a split into just two Z ranges would help.
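
Since the slides describe the idea rather than an implementation, here is a minimal sketch of how I picture bimodal Z culling working. Everything in it (the tile layout, the gap threshold, the view-space sphere-vs-rectangle test) is my own illustration, not code from the talk; a real implementation would build tile frusta from the projection matrix and run this in a compute shader.

```python
# Toy bimodal-Z light culling: each tile stores up to two depth intervals,
# so a light sitting in the empty gap between near and far geometry fails
# the depth test even though it overlaps the tile in XY.

def two_z_clusters(depths, gap_threshold=1.0):
    """Split a tile's depth samples into two intervals at the largest gap."""
    d = sorted(depths)
    if len(d) < 2:
        return [(d[0], d[0])]
    gaps = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    split = max(range(len(gaps)), key=gaps.__getitem__)
    if gaps[split] < gap_threshold:          # effectively unimodal tile
        return [(d[0], d[-1])]
    return [(d[0], d[split]), (d[split + 1], d[-1])]

def light_hits_tile(light, tile_xy, z_intervals):
    """light: ((x, y, z), radius) sphere in view space.
    tile_xy: (min_x, min_y, max_x, max_y) view-space bounds of the tile."""
    (cx, cy, cz), r = light
    # Reject lights that miss both depth clusters (i.e. lie in the gap).
    if not any(cz + r >= z0 and cz - r <= z1 for z0, z1 in z_intervals):
        return False
    # Conservative sphere-vs-rectangle test in XY.
    dx = max(tile_xy[0] - cx, 0.0, cx - tile_xy[2])
    dy = max(tile_xy[1] - cy, 0.0, cy - tile_xy[3])
    return dx * dx + dy * dy <= r * r

def build_light_lists(tiles, lights):
    """tiles: dicts with 'xy' bounds and 'z' interval list; returns indices."""
    return [[i for i, lt in enumerate(lights)
             if light_hits_tile(lt, t['xy'], t['z'])] for t in tiles]
```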

I’d recommend taking the time to read through the slides if you didn’t attend.

Dynamic Sparse Voxel Octrees for Next-Gen Real-Time Rendering http://bps12.idav.ucdavis.edu/talks/04_crassinVoxels_bps2012.pdf

With all of the buzz around Sparse Voxel Octrees following their use in Unreal Engine 4 (covered in the Advances in Real-Time Rendering course), I thought it would be good to see a presentation from Cyril Crassin, who has been doing and presenting much of the work in this area. Most of the talk was an overview of how the system works, how it fits in versus a polygonal geometry representation, and the advantages you can get from cone tracing an SVO.
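
To make the data structure concrete, here is a toy sketch of the sparse part of an SVO build: nodes are only allocated along paths that actually contain geometry. This is my own illustration and nothing like Crassin’s GPU implementation, which voxelizes triangles and stores filtered lighting data in brick textures.

```python
# Minimal sparse voxel octree over a point set: empty octants are simply
# absent from the children map, which is where the memory savings come from.

class Node:
    __slots__ = ('children',)
    def __init__(self):
        self.children = {}   # octant index (0-7) -> Node

def insert(root, point, center, half_size, levels):
    """Descend 'levels' times toward 'point', allocating nodes on demand.
    The root cell is centered at 'center' with half-extent 'half_size'."""
    node = root
    cx, cy, cz = center
    for _ in range(levels):
        half_size *= 0.5
        octant = ((point[0] >= cx)
                  | ((point[1] >= cy) << 1)
                  | ((point[2] >= cz) << 2))
        cx += half_size if point[0] >= cx else -half_size
        cy += half_size if point[1] >= cy else -half_size
        cz += half_size if point[2] >= cz else -half_size
        node = node.children.setdefault(octant, Node())
    return node   # leaf voxel; a real SVO stores albedo/normal/radiance here
```

A dense nine-level grid would be 512³ (about 134 million) cells, which is why only allocating occupied nodes matters so much for the storage figures below.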

Yet again, this was another SVO talk where processing cost and storage space were skated over, but since the slides are now available, you can see the numbers: nine levels of SVO come in at 200 MB to 1 GB, and the initial construction for the GL Sponza demo is 70 ms, with an update cost of 4-5 ms per frame for animated data. The performance figures were slightly confused later when it was said that the GL demo from the new OpenGL Insights book (released chapter here) can build Sponza in 15.44 ms. However, since the technique is being used in Unreal Engine 4, it was expected that The Technology Behind the “Unreal Engine 4 Elemental Demo” presentation on the following day would show where you could reduce the time and space requirements.

I do like SVOs for the fact that they provide a scalable solution to the visibility part of a GI system and their use in Unreal shows that they can have a runtime implementation. If only they didn’t cost so much to generate and store!

Power Friendly GPU Programming http://bps12.idav.ucdavis.edu/talks/05_ribblePowerRendering_bps2012.pdf

This was a Snapdragon-based presentation on general optimizations that can be applied to save power. Unfortunately, the generality of the optimizations (compress textures, draw front to back, and consider your render target changes) brought very little to the power-saving discussion. Once frame limiting was recommended as the best way to save power, I did wonder how helpful the content would be. I missed the end of the talk with the aim of returning for the panel.

From Publication to Product: How Recent Graphics Research has (and has not) Shaped the Industry

This panel, led by Kayvon Fatahalian, discussed the relationship between research departments and industry with a set of industry luminaries.

Some of the things that I took note of:

  • No one wants to learn a new language – whenever someone comes up with one, it’s unlikely to see much adoption given the languages already in use.
  • Papers need to use realistic workloads and industry needs to provide better workloads to facilitate this. Researchers working closely with industry typically get the most relevant workloads due to the requirements of the research.
  • The HLSL language was not expected to last this long – they thought it might run for 5 years or so.

Light Rays Technical Papers http://s2012.siggraph.org/attendees/sessions/100-59

Naïve Ray Tracing: A Divide-And-Conquer Approach (ACM Digital Library version)

This presentation started with a back-to-basics description of ray tracing: intersect a bunch of rays with a bunch of geometry. A lot of optimization effort has gone into the geometry intersection side using structures such as bounding volume hierarchies, which add time to build, memory to store, and complexity to create and traverse, and which typically ignore the distribution of the rays and can even increase cost with dynamic scenes. This new technique instead recursively splits the set of rays and the set of geometry until a naive set of intersection tests becomes feasible. It’s a simple algorithm, so most of the rest of the talk consisted of results. The performance looked good, and they said that the major limit was bandwidth as they reordered the rays and geometry.
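
The algorithm is simple enough to fit in a short sketch. This is my reading of the approach rather than the paper’s optimized code; the cutoffs, the AABB-based primitive test, and the depth guard are all illustrative choices of mine.

```python
# Divide-and-conquer ray tracing: no acceleration structure is built or
# stored; rays and primitives are recursively filtered into the two halves
# of the current bounds until a brute-force pass becomes cheap.

RAY_CUTOFF, PRIM_CUTOFF, MAX_DEPTH = 32, 16, 32

def ray_hits_box(origin, direction, box):
    """Standard slab test; box = ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    t_near, t_far = 0.0, float('inf')
    for axis in range(3):
        if direction[axis] == 0.0:   # parallel ray: inside the slab or miss
            if not box[0][axis] <= origin[axis] <= box[1][axis]:
                return False
            continue
        t0 = (box[0][axis] - origin[axis]) / direction[axis]
        t1 = (box[1][axis] - origin[axis]) / direction[axis]
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far

def boxes_overlap(a, b):
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def split(box):
    """Halve the box along its longest axis."""
    axis = max(range(3), key=lambda i: box[1][i] - box[0][i])
    mid = 0.5 * (box[0][axis] + box[1][axis])
    lo_max, hi_min = list(box[1]), list(box[0])
    lo_max[axis] = hi_min[axis] = mid
    return (box[0], tuple(lo_max)), (tuple(hi_min), box[1])

def trace(rays, prims, bounds, hit_test, depth=0):
    """rays: list of (origin, direction); prims: list of (aabb, payload).
    hit_test(ray, payload) records the nearest hit however the caller likes."""
    if (len(rays) <= RAY_CUTOFF or len(prims) <= PRIM_CUTOFF
            or depth >= MAX_DEPTH):
        for ray in rays:              # the naive all-pairs base case
            for _, payload in prims:
                hit_test(ray, payload)
        return
    for box in split(bounds):
        sub_rays = [r for r in rays if ray_hits_box(r[0], r[1], box)]
        sub_prims = [p for p in prims if boxes_overlap(p[0], box)]
        if sub_rays and sub_prims:
            trace(sub_rays, sub_prims, box, hit_test, depth + 1)
```

The filtering passes are exactly the bandwidth-heavy reordering the presenters mentioned: each recursion level rewrites the surviving ray and primitive lists.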

This was an interesting presentation since it could lead to a rethink of the way ray tracing is performed. I had imagined the tests to partition the sets of rays and geometry would be prohibitively expensive, but it sounds like it can be a good win. It’ll be interesting to see what comes of this technique and any further research. There’s a good write-up here too.

Manifold Exploration: Rendering Scenes With Difficult Specular Transport Site / PDF

This talk centered on a new way of dealing with specular transport that can be applied to Markov Chain Monte Carlo (MCMC) based rendering. My understanding is that once you know which points you want to transport light between and which surfaces to reflect from or refract through, you can construct the paths that fulfil that transport. This was best expressed by the images in the slides. There was also an extension that accounts for roughness using the area around the manifold, which works well in the highly reflective and rough scenes that they showed.

The source is available and includes an implementation of Veach’s Metropolis Light Transport which is notoriously difficult to implement – this meant an extra round of applause for the presenter.

Bidirectional Lightcuts http://www.cs.cornell.edu/~kb/publications/SIG12BidirLC.pdf

Continuing the theme of bias reduction in Virtual Point Light (VPL) systems, Bidirectional Lightcuts extends multidimensional lightcuts to use the same tracing mechanism that places VPLs to also place Virtual Sensor Points (VSPs). The paper introduces weighting mechanisms for the VPL/VSP pairings that allow the use of more advanced material features such as gloss, subsurface scattering, and anisotropic volumes.

Virtual Ray Lights for Rendering Scenes With Participating Media Site / PDF

Virtual Ray Lights are intended to fix the issues that arise when rendering participating media with a VPL technique (firefly-like singularities around each VPL). The ray lights are the path segments traced through the volume when placing VPLs; at render time, each is integrated against the camera ray to add its contribution, which gives very convincing results.

As mentioned before, this paper has since been superseded by beam lights (Progressive Virtual Beam Lights, as seen at EGSR), since the ray lights turn the firefly singularities into bright streaks.

Fun with Video http://s2012.siggraph.org/attendees/sessions/100-56

Video Deblurring for Hand-Held Cameras Using Patch-Based Synthesis http://cg.postech.ac.kr/research/video_deblur/

This paper discussed a method for fixing the motion blur that remains even after image stabilization. The algorithm is based on finding periods with sharp frames (shaky videos alternate between sharper and blurrier frames), then applying a patch-based process that finds neighbouring patches and blends them. They estimate blur kernels that match each frame’s blur and use them to match patches between the sharp and blurred frames.

I’d not realized how bad the remaining motion blur could be, but this was a very interesting presentation and covered a lot of previous work.

Eulerian Video Magnification for Revealing Subtle Changes in the World http://people.csail.mit.edu/mrub/vidmag/

The fast-forward version of this paper showed two example videos that demonstrated detecting a person’s heart rate and an infant’s breathing by amplifying subtle changes in video – the main reasons I wanted to see this. The previous-work section referenced Motion Magnification from SIGGRAPH 2005. In this technique, the video is spatially decomposed, then a temporal filter is applied to the luminance to isolate the changes of interest before they are amplified. Many equations are used to explain why this works, and the slides refer you to the paper for more details.
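
As a rough illustration (my reading of the pipeline, not the authors’ code), the sketch below band-passes a heavily blurred grayscale video around an assumed heart-rate band and amplifies the result; the real method uses a full spatial pyramid, per-level amplification limits, and different temporal filters per application.

```python
# Eulerian magnification, heavily simplified: spatially decompose (here just
# a strong Gaussian blur), temporally band-pass each pixel, amplify, add back.
import numpy as np
from scipy.ndimage import gaussian_filter

def magnify(frames, fps, f_lo=0.8, f_hi=1.2, alpha=50.0, sigma=10.0):
    """frames: float array of shape (T, H, W), grayscale video.
    f_lo/f_hi: pass band in Hz (0.8-1.2 Hz ~ a resting heartbeat)."""
    # Spatial decomposition: keep a heavily blurred low-frequency band.
    lowpass = np.stack([gaussian_filter(f, sigma) for f in frames])
    # Ideal temporal band-pass via an FFT along the time axis.
    spectrum = np.fft.rfft(lowpass, axis=0)
    freqs = np.fft.rfftfreq(len(frames), d=1.0 / fps)
    keep = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[~keep] = 0.0
    bandpassed = np.fft.irfft(spectrum, n=len(frames), axis=0)
    # Amplify the tiny periodic variation and add it back to the input.
    return frames + alpha * bandpassed
```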

More interesting results included detecting Bruce Wayne’s pulse in a Batman film when he’s supposed to be asleep, and a live demo where the user moved slightly while his eyes didn’t, which gave scary results.

Selectively De-Animating Video http://graphics.berkeley.edu/papers/Bai-SDV-2012-08/

The presentation demonstrated a user-driven tool that can warp parts of a video to make some elements appear stationary while leaving the motion of other parts intact; the results can be looped to create a cinemagraph. Features marked by the user are tracked through the video and used to define the required warp, and the warped version is composited with the original to remove any remaining motion. A major advantage was how little user input seemed to be required.

The main example was a beer being poured into a glass (a common SIGGRAPH video source) where the video was warped so the glass looked stationary. Other examples included a roulette wheel where the motion of the ball was faked or the wheel was held still, a video of a guitar player where the guitar was kept still, and a model whose body was held still while her hair and eyes continued to move.

SIGGRAPH Dailies – http://s2012.siggraph.org/attendees/siggraph-dailies

SIGGRAPH Dailies is an end-of-day session made up of many one-minute presentations on a wide variety of topics. My memory of the event is that it was very artist-driven, with a very vocal set of presenters from Texas A&M. The short format means the presentations have a very strong visual component, and it’s the artists presenting their work that I remember most.
