I thought I'd share some of my results, as they might be interesting to some of you here.
A technique I have been working on for a while now exploits temporal coherence between two consecutive rendered frames to speed up ray-casting of primary rays. The idea is to store the x-, y-, and z-coordinates of each pixel in a coordinate buffer and re-project them into the next frame using the differential view matrix. The resulting image looks like the uppermost screenshot.
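The reprojection step can be sketched as follows. This is a minimal CPU-side illustration, not the actual GPU implementation: the function names (`reproject`, `mat_vec`) and the pinhole-style NDC-to-pixel mapping are my own assumptions for the example; in practice this would run in a shader with the cached coordinate buffer.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def reproject(world_pos, view_proj, width, height):
    """Project a cached world-space pixel position (x, y, z) with the
    new frame's view-projection matrix.  Returns the target pixel, or
    None if the point leaves the screen (it then becomes a hole in the
    cache rather than a re-used pixel)."""
    x, y, z, w = mat_vec(view_proj, [world_pos[0], world_pos[1], world_pos[2], 1.0])
    if w <= 0.0:
        return None  # behind the camera
    ndc_x, ndc_y = x / w, y / w  # perspective divide to normalized device coords
    if not (-1.0 <= ndc_x <= 1.0 and -1.0 <= ndc_y <= 1.0):
        return None  # outside the view frustum
    px = int((ndc_x * 0.5 + 0.5) * width)
    py = int((ndc_y * 0.5 + 0.5) * height)
    return min(px, width - 1), min(py, height - 1)
```

Pixels that reproject successfully are copied from the cache; everything that returns `None` (or is occluded by a nearer reprojected sample) is left empty for the hole-filling pass.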
The method then gathers empty 2x2 pixel blocks on the screen and stores them in an index buffer for raycasting the holes; raycasting single pixels is too inefficient. Small holes remaining after the hole-filling pass are closed by a simple image filter. To improve overall quality, the method updates the screen in 8x4 tiles by raycasting an entire tile and overwriting the cache, so the entire cache is refreshed after 32 frames. Furthermore, a triple-buffer system is used: two image caches which are copied to alternately, and one buffer that is written to. This is done because a pixel is often overwritten in one frame but becomes visible again already in the next frame. Therefore, before the hole filling starts, both cache buffers are projected into the main image buffer.
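The hole-gathering step might look like this minimal sketch. The name `gather_holes` and the flat boolean coverage mask are assumptions for illustration; on the GPU this would be a compact/scan pass that emits the block list into an index buffer for the hole-filling raycast.

```python
def gather_holes(coverage, width, height):
    """Scan the screen in 2x2 pixel blocks and collect every block that
    contains at least one uncovered pixel.  `coverage` is a flat
    row-major list of booleans (True = pixel was re-used from the
    cache); width and height are assumed to be even."""
    index_buffer = []
    for by in range(0, height, 2):
        for bx in range(0, width, 2):
            block_covered = all(coverage[(by + dy) * width + (bx + dx)]
                                for dy in range(2) for dx in range(2))
            if not block_covered:
                index_buffer.append((bx, by))  # raycast this 2x2 block
    return index_buffer
```

Batching holes as 2x2 blocks rather than single pixels matches the point above: launching one raycast work item per isolated pixel wastes GPU occupancy, while whole blocks keep the hole-filling pass coherent.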
Most of the pixels can be re-used with this technique. As only a fraction of the original pixels needs to be raycast, the speedup is significant, up to 5x the original speed, depending on the scene (see the other images below). The resolution for that test was 1024x768, and the GPU was an NVIDIA GeForce GTX 765M.
Here are also two videos showing this technique in action:
(I uploaded them a while ago)
For further reading, I included some paper references in my original blog post:
Practical and theoretical implementation discussion.