Konstantin Shkurko

Time Interval Ray Tracing for Motion Blur

Konstantin Shkurko, Cem Yuksel, Daniel Kopta, Ian Mallett, and Erik Brunvand

In IEEE Transactions on Visualization and Computer Graphics (TVCG), 24(12), 2018

Teaser image
Figure: A deforming slinky falling down a staircase generates complex interaction between blurred visibility and shadows. Our time interval ray tracing method produces noise-free motion blur in time similar to stratified sampling.

Abstract

We introduce a new motion blur computation method for ray tracing that provides an analytical approximation of motion blurred visibility per ray. Rather than relying on timestamped rays and Monte Carlo sampling to resolve the motion blur, we associate a time interval with rays and directly evaluate when and where each ray intersects with animated object faces. Based on our simplifications, the volume swept by each animated face is represented using a triangulation of the surface of this volume. Thus, we can resolve motion blur through ray intersections with stationary triangles, and we can use any standard ray tracing acceleration structure without modifications to account for the time dimension. Rays are intersected with these triangles to analytically determine the time interval and positions of the intersections with the moving objects. Furthermore, we explain an adaptive strategy to efficiently shade the intersection intervals. As a result, we can produce noise-free motion blur for both primary and secondary rays. We also provide a general framework for emulating various camera shutter mechanisms and an artistic modification that amplifies the visibility of moving objects for emphasizing the motion in videos or static images.


Links

Paper (pdf, 8.3 MB)
Paper (high res) (pdf, 40 MB)
Supplement (pdf, 7.1 MB)
Supplement (high res) (pdf, 41 MB)
Video, 1080p (mp4, 50 MB)
Publisher's Version

Description

We propose time interval ray tracing, providing an analytical approximation for computing motion-blurred visibility. Our approach is based on a simplification of the concept of intersecting rays with the volumes swept by moving triangles [2]. We invoke four simplifying assumptions that allow us to efficiently evaluate the spatiotemporal intersections of a given ray with a time interval (as opposed to a single timestamp) using traditional ray intersection tests with stationary triangles. Hence, we can use any ray tracing acceleration structure without modification to handle dynamic geometry. As a result, we can produce noise-free motion blur for both primary and secondary rays (including those for shadows, reflections, and global illumination) using only a single ray sample.
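As a rough sketch of the swept-volume construction described above (the function name and data layout are our own illustration, not the paper's implementation), the surface swept by a triangle moving linearly between two keyframes can be triangulated into stationary geometry: two cap triangles plus two triangles per swept edge quad.

```python
def sweep_prism(tri_start, tri_end):
    """Triangulate the surface of the volume swept by a triangle moving
    linearly from tri_start to tri_end (each a tuple of 3 vertices).
    Returns 8 stationary triangles: two caps plus two triangles for each
    of the three (possibly non-planar) side quads. The paper's actual
    construction additionally stores data to recover intersection times."""
    a0, b0, c0 = tri_start
    a1, b1, c1 = tri_end
    tris = [
        (a0, b0, c0),  # cap at the start of the motion
        (a1, c1, b1),  # cap at the end of the motion (reversed winding)
    ]
    # Each triangle edge sweeps a quad between the keyframes; split each
    # quad along a diagonal into two triangles.
    for p0, q0, p1, q1 in ((a0, b0, a1, b1),
                           (b0, c0, b1, c1),
                           (c0, a0, c1, a1)):
        tris.append((p0, q0, q1))
        tris.append((p0, q1, p1))
    return tris
```

Because the result is a set of ordinary stationary triangles, it can be inserted into any standard acceleration structure, which is the key property the method exploits.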

Below is a video showcasing the effectiveness of our method. It ends with a stress test for our triangulated prisms, in which a single triangle undergoes difficult displacement within a single frame; we compare triangulated prisms against prisms with bilinear patches and against time sampling.

Images

A few result images are shown below. Stratified sampling and our method take the same amount of render time.

Scene | Stratified Sampling | Ours (Anti-Aliased) | Reference
Slinky | Slinky, stratified | Slinky, ours | Slinky, reference
Dragon Sponza | Dragon Sponza, stratified | Dragon Sponza, ours | Dragon Sponza, reference
Stress test

Below is a selection of frames from the stress test, where a single triangle undergoes different transformations, each over the duration of a single frame. The Motion Geometry column shows the triangle keyframes: the black triangle indicates the color and position of the triangle at the start of the motion, and the colored triangle indicates the end of the motion. The Time Sampling Reference column shows the image produced by stratified sampling. The last two columns show the results of our method using bilinear patches and triangulated prisms. Using bilinear patches produces results virtually identical to the reference. While triangulation provides reasonable results for the majority of the tested motions, in the extreme cases shown here it can deviate from the reference.

Time interval ray tracing stress test
Stress test
Programmable camera shutters

We demonstrate the effect of various shutter functions in a figure below. Notice that the shutter function also affects secondary effects such as shadows. Our method can handle any shutter function without any apparent performance penalty, including numerically challenging ones, such as the sharp peak that produces results similar to the photography trick “second-curtain flash.”

The scene below applies various shutter functions to a teapot moving left-to-right. Insets show the outline of each shutter function. Subfigures (a) and (f) show the effect of a rolling shutter. Subfigures (b)-(d), (h) and (i) show typical shutters used in computer graphics. More artistically-driven shutters can generate wildly varying effects, from (g) a sharp peak to (j) oscillating shutters.

Programmable camera shutters
Shutter functions applied to a moving teapot.
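To illustrate how an arbitrary shutter function can weight the visibility intervals the method computes, here is a minimal sketch (the function names and the midpoint-rule quadrature are our own assumptions, not the paper's implementation): the contribution of an intersection visible over [t0, t1] is the integral of the shutter function over that interval, normalized by its integral over the full exposure.

```python
def shutter_weight(interval, shutter, n=64):
    """Weight of a visibility interval [t0, t1] under a shutter function
    defined on the exposure [0, 1]: integral of shutter over the interval,
    normalized by the integral over the whole exposure. Uses a simple
    midpoint-rule quadrature with n subintervals."""
    t0, t1 = interval

    def integrate(a, b):
        if b <= a:
            return 0.0
        h = (b - a) / n
        return h * sum(shutter(a + (i + 0.5) * h) for i in range(n))

    total = integrate(0.0, 1.0)
    return integrate(max(t0, 0.0), min(t1, 1.0)) / total

# A box shutter (fully open) weights intervals by their duration alone.
box = lambda t: 1.0
# A triangle shutter emphasizes the middle of the exposure.
tri = lambda t: 1.0 - abs(2.0 * t - 1.0)
```

Since the interval endpoints are known analytically per intersection, any shutter profile, including sharp peaks or oscillating functions, only changes the 1D integral being evaluated, which is consistent with the lack of a performance penalty noted above.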
Emissive geometry

Resolving the noise due to motion blur with time sampling is highly challenging for high-dynamic-range (HDR) rendering. The figure below shows an example where a thrown lightsaber moves across the image. This example requires an extremely large number of samples to resolve motion blur using time sampling. In comparison, our method quickly produces noise-free results.

The figure shows a lightsaber that moves across the image while rotating around its center of mass. It is frozen in the air for the last quarter of the time interval. The motion uses 128 keyframes, and the images include an image-space bloom effect as a post-process, producing the glow around the lightsaber.

Lightsaber flying across an image
Lightsaber flying across an image.

BibTeX

@article{8115176,
   author = {Shkurko, Konstantin and Yuksel, Cem and Kopta, Daniel and Mallett, Ian and Brunvand, Erik},
   journal = {IEEE Transactions on Visualization and Computer Graphics},
   title = {Time Interval Ray Tracing for Motion Blur},
   year = {2018},
   volume = {24},
   number = {12},
   pages = {3225-3238},
   doi = {10.1109/TVCG.2017.2775241}
}

Acknowledgements

This material is supported in part by the National Science Foundation under Grant No. 1409129. Thiago Ize and Peter Shirley provided helpful feedback. Cem Yuksel provided the Slinky, Clothball, and Lightsaber scenes, and combined the Sponza atrium by Marko Dabrovic with the Stanford Dragon for the Dragon-Sponza scene. We also thank the anonymous reviewers for their time and helpful feedback.

Updated: 01.15.18 © Konstantin Shkurko, 2010