tarlack wrote:@friedlinguini : don't you get structured aliasing patterns with your approach? I remember Blender doing this, and the number of images needed to avoid ghosting became large as soon as an object moved really fast relative to the aperture time. For instance, reproducing a long-exposure photo (typically with those nice, appealing curved headlight trails in an urban setting) requires continuous sampling (or a prohibitively large number of images), maybe with some time-domain filtering.
What I'm suggesting assumes that motion blur is being done properly within a single frame, but doesn't address how. I had envisioned Monte Carlo sampling in the time domain, along with all the other sampling dimensions.
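A minimal sketch of what "Monte Carlo sampling in the time domain" could mean per ray: each primary ray draws its own shutter time uniformly, just as it draws pixel-jitter or lens samples in the other dimensions. The function name and signature here are illustrative, not from any particular renderer.

```python
import random

def sample_ray_time(shutter_open, shutter_close, rng=random.random):
    """Draw a shutter time for one camera ray, uniform in [open, close).

    Every ray gets an independent time sample; the scene is then
    intersected with all animated geometry evaluated at that time,
    and averaging over many rays per pixel integrates the blur.
    """
    return shutter_open + (shutter_close - shutter_open) * rng()
```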
For motion blur and ray tracing, it seems to me that having two bounding boxes and interpolating between them doesn't correctly account for nonlinear transformations, such as rotors, a drifting car, or even the path of an object under a very long exposure. Maybe a more accurate way would be a single BVH in 4D? Or, from a more intuitive point of view, a standard 3D BVH where each object's bounding box is simply the union of that object's bounding boxes over all times during the aperture interval. Then, when you hit a bbox, you compute the exact intersection for your ray's t value, possibly using a per-object, time-aware acceleration structure to speed up that computation. This way I think it should be possible to handle any nonlinear motion blur.
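The "union of the bounding box at all times" idea can be sketched by sampling the motion path and growing one AABB over the samples. This is an assumed illustration (the `position_at` callback and step count are hypothetical); for strongly curved paths the step count must be high enough, or the path bounded analytically, for the union to stay conservative.

```python
def union_aabb_over_time(position_at, half_extent, t0, t1, steps=16):
    """Conservative 3D bound for a moving object: the union of its
    axis-aligned box sampled at `steps` times across [t0, t1].

    position_at(t) -> (x, y, z) gives the object's center at time t;
    half_extent is the object's static half-size along each axis.
    """
    lo = [float('inf')] * 3
    hi = [float('-inf')] * 3
    for i in range(steps):
        t = t0 + (t1 - t0) * i / (steps - 1)
        p = position_at(t)
        for axis in range(3):
            lo[axis] = min(lo[axis], p[axis] - half_extent[axis])
            hi[axis] = max(hi[axis], p[axis] + half_extent[axis])
    return lo, hi
```

As the post notes, a ray that hits this fat box still needs a time-exact intersection test against the object at the ray's own t value; the union box only serves as a cheap first reject.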
Sounds doable, though it makes the bounds less tight around the geometry, and taking the union of bounding boxes across a complex nonlinear motion path doesn't sound like fun. I seem to recall that RenderMan uses piecewise-linear motion with a default of one segment per frame, overridable on a per-object basis. Such an approach could preserve the tight bounds and easy computation of cessen's suggestion (maybe restricting to 2^N segments per object to avoid blowing up the number of segments for higher-level nodes).
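Within one linear segment, the two-keyframe interpolation scheme stays conservative: a vertex moving linearly between boxes A and B lies, at blend weight w, inside the componentwise lerp of A and B. A small sketch of that per-segment lookup, with an assumed keyframe layout of sorted `(time, lo, hi)` tuples:

```python
def interpolated_aabb(keyframed_bounds, t):
    """Linearly interpolate an AABB at time t from piecewise-linear
    keyframes, given as a time-sorted list of (time, lo, hi) tuples.

    For geometry that moves linearly inside each segment, the lerp of
    the endpoint boxes contains the geometry at every intermediate
    time, so the bound is both tight and conservative per segment.
    """
    for (ta, lo_a, hi_a), (tb, lo_b, hi_b) in zip(keyframed_bounds,
                                                  keyframed_bounds[1:]):
        if ta <= t <= tb:
            w = (t - ta) / (tb - ta)
            lo = [a + w * (b - a) for a, b in zip(lo_a, lo_b)]
            hi = [a + w * (b - a) for a, b in zip(hi_a, hi_b)]
            return lo, hi
    raise ValueError("t outside keyframe range")
```

Restricting every object to 2^N segments, as suggested above, means a parent node can merge its children's keyframes without the segment count exploding: coarser children are simply resampled at the finest child's keyframe times.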
Alternatively, average together a number of sub-frames, stratifying across the shutter time and using linear motion within each sub-frame.
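A sketch of the stratified sub-frame times this alternative implies: split the shutter interval into n equal strata and jitter one sample inside each, so the times stay well distributed even for small n. Motion within each sub-frame is then treated as linear. The function name is illustrative.

```python
import random

def stratified_shutter_times(n, shutter_open, shutter_close, rng=random.random):
    """One jittered time per sub-frame: stratum i covers
    [open + i*dt, open + (i+1)*dt), with dt the stratum width."""
    dt = (shutter_close - shutter_open) / n
    return [shutter_open + (i + rng()) * dt for i in range(n)]
```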