Wednesday, October 27, 2010

OIT - Order Independent Transparency #3

hi
recently I needed to do OIT on DX9 hardware, and as you know (from my previous posts) I don't have the same access I have in DX11 (I can't create a per-pixel linked list and do the sorting afterwards), so what I can do is simple depth peeling...
A few words about depth peeling: depth peeling works by peeling geometry at a specific depth value. To create OIT we peel the geometry starting from the depth nearest the camera eye and moving toward the farthest depth; at each step we peel a few pixels and store them in a separate RT, and then we composite those RTs in back-to-front order to achieve the correct transparency ordering.
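The peel-then-composite flow can be simulated on the CPU without any GPU at all. A minimal sketch in Python (the fragment list, colors and alpha values are made up for illustration; this stands in for all fragments at one screen location):

```python
# Each fragment at one screen location: (depth, (r, g, b), alpha).
fragments = [(0.2, (1.0, 0.0, 0.0), 0.5),
             (0.5, (0.0, 1.0, 0.0), 0.5),
             (0.8, (0.0, 0.0, 1.0), 0.5)]

def peel_layers(frags):
    """Front-to-back depth peeling: each pass extracts the nearest
    fragment strictly behind the previously peeled depth."""
    layers, last_depth = [], -1.0
    while True:
        candidates = [f for f in frags if f[0] > last_depth]
        if not candidates:
            return layers
        nearest = min(candidates, key=lambda f: f[0])
        layers.append(nearest)
        last_depth = nearest[0]

def composite_back_to_front(layers, background):
    """Standard 'over' blending (SRCALPHA, INVSRCALPHA), farthest layer first."""
    color = background
    for depth, src, a in reversed(layers):  # back-to-front order
        color = tuple(a * s + (1.0 - a) * d for s, d in zip(src, color))
    return color

result = composite_back_to_front(peel_layers(fragments), (0.0, 0.0, 0.0))
```

On the GPU each `peel_layers` iteration is a render pass and each composite step is a blend into the back buffer, but the ordering logic is exactly this.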
So how do we implement that?
Well, from the above we can see we need a color RT to store the result, a depth buffer to compare pixel depths against, and a "second" depth buffer so we can reject pixels that were already peeled/visited.
1. color RT: supported by DX9 and up
2. first depth buffer: also supported, but...
3. second depth buffer: not supported, but...
So we have issues with 2 and 3. To implement the per-pixel depth comparison we need the ability to read the depth buffer in the pixel shader, so we can do the compare ourselves and reject already peeled/visited pixels. BUT DX9 doesn't even let us bind the depth buffer as a texture (even if I'm not writing to it - this can be done in DX10, but we need DX9), so we need a way to emulate a depth buffer that can be bound as a texture and still act as a depth buffer.
To do this we can use a floating point RT. For the OIT we will use two floating point RTs: the first is used for the first pass, the second for the second pass, then the first again and so on - these are our ping/pong depth RTs.
The reason we need this kind of ping/pong RT setup is to peel the correct layer on every pass.
for example:
to peel the first layer we need an empty depth buffer; then we render and store the result to our color RT
to peel the second layer, we need to use the depth buffer from the first pass, so we can ignore the pixels handled in the first layer
to peel the third layer, we need to use the depth buffer from the second pass, so we can ignore the pixels handled in the second layer (and the first layer)
and so on... you can see that we never need more than 2 depth buffers to manage all those passes.
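The ping/pong pattern above boils down to swapping two buffer indices per pass. A small sketch of the scheduling (the function and tuple layout are mine, just to show which RT is read and which is written each pass; the comments mark where the real D3D9 render calls would go):

```python
def peel_schedule(num_layers):
    """For each peeling pass, return (pass index, depth RT read, depth RT written).
    Pass 0 reads a cleared buffer; pass N reads whatever pass N-1 wrote."""
    schedule = []
    read_rt, write_rt = 0, 1
    for layer in range(num_layers):
        # Render pass 'layer' here: sample depth_rt[read_rt] in the pixel
        # shader to reject already-peeled pixels, and write the newly
        # peeled depths to depth_rt[write_rt].
        schedule.append((layer, read_rt, write_rt))
        read_rt, write_rt = write_rt, read_rt  # ping/pong swap
    return schedule

schedule = peel_schedule(4)
```

However many layers you peel, only the two indices ever appear - which is why two depth RTs suffice.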
OK, so now we can peel layers with our depth buffer RTs, but how can we do OIT for more than 8 layers? (we can only bind 8/16 textures). Here comes the reverse depth peeling (RDP) technique: instead of peeling in front-to-back order, peel in back-to-front order. Every time you peel a layer you can immediately composite it to the back buffer, so only one color buffer is needed.
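Reverse peeling can be sketched the same way as the front-to-back version: each pass extracts the not-yet-peeled fragment nearest the far plane and blends it over the accumulated color immediately, so no per-layer color RTs pile up (again a CPU-side sketch with made-up fragment data, not the post's shaders):

```python
# Each fragment at one screen location: (depth, (r, g, b), alpha).
fragments = [(0.2, (1.0, 0.0, 0.0), 0.5),
             (0.5, (0.0, 1.0, 0.0), 0.5),
             (0.8, (0.0, 0.0, 1.0), 0.5)]

def reverse_depth_peel(frags, background):
    """Peel back to front; composite each layer the moment it is peeled."""
    color, last_depth = background, float('inf')
    for _ in range(len(frags)):
        candidates = [f for f in frags if f[0] < last_depth]
        if not candidates:
            break
        depth, src, a = max(candidates, key=lambda f: f[0])  # farthest first
        # SRCALPHA / INVSRCALPHA: blend the peeled layer over what we have.
        color = tuple(a * s + (1.0 - a) * d for s, d in zip(src, color))
        last_depth = depth
    return color

result = reverse_depth_peel(fragments, (0.0, 0.0, 0.0))
```

Because the farthest layer is peeled first, the "over" blend happens in the correct back-to-front order on the fly, which is exactly why a single color buffer is enough.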
Total memory needed is one of the following:
1. 3 RTs: two 32-bit floating point and one RGBA
2. 2 RTs: one 32-bit floating point for both depth buffers, and one RGBA
NOTE: in deferred rendering you can get rid of the RGBA buffer.
So as you can see, the total memory cost is very low.
After all the talking, here comes a video that shows RDP in action:



Simple planes show the idea; this model doesn't need per-pixel sorting, but it shows that the algorithm works well.

Teapot model - needs per-pixel sorting


NOTE: the technique is implemented in RenderMonkey; the video shows a very simple mesh containing 8 planes so it's easy to see the effect.
Each peeled layer is rendered with a small alpha value, so you can see how the layers blend - when you look through all the layers you get the white color.
Some performance info: RM window 1272x856, ATI 5850 - 270+ FPS
that's it for now, cya...

8 comments:

pixelmager said...

I may be wrong, but the video shows additive layers. Additive layers don't need sorting?

orenk2k said...

hi
you are right, the planes don't need per-pixel sorting; this planes model makes it easy to see that the algorithm works well.
anyway, I updated my post and added two more images showing the teapot model, which needs per-pixel sorting.
cya

kore3d said...

You can implement depth peeling with the same memory requirements. The problem with reverse depth peeling is that it gives bad results when the scene has transparent layers which get skipped. A simple example is peeling a scene with a teapot into one layer. In that case you need to select the transparent pixels from the front layers, but reverse depth peeling ignores them.
If you have no idea how to implement it, I can describe it later.

orenk2k said...

hi
you probably know this, but reverse depth peeling is exactly what it sounds like: it peels layers in reverse order (back to front), so in your teapot example you will get the farthest layer (when peeling 1 layer) and not the front one as expected.
in a very complicated scene I didn't need to use more than 14 layers
I don't see how you can get the same memory requirements with traditional DP - I don't see how you can composite the layers in back-to-front order if you are not storing them somewhere...
would be happy to hear how ;P
cya

pixelmager said...

orenk2k - yes, but you still do what looks like additive blending. Additive blending, no matter the model, doesn't need sorting, as addition is commutative.

You need an image like this one from your dx11-post.

orenk2k said...

hi pixelmager
RenderMonkey isn't the best app for rendering each of the planes with a different color (a lot of extra passes are needed beyond the ones used for the peeling)
note that the same blending is used in both the DX11 and DX9 samples:
SRCALPHA, INVSRCALPHA blending
so the effect is the same; the difference is that in DX11 I sort the pixels, loop and blend on the GPU,
while in DX9 the blending is done with a full-screen quad for every peeled slice.
additive blending doesn't need sorting, but the blending used here isn't additive - that's why simple geometry like planes and particles needs to be sorted back to front, and why we need to peel parts of the model and blend them in the right order.
imagine a simple cube: in order to render it correctly, you need to render all the back faces first, and then blend the front ones over them.
if you just turn blending on and render the cube, the blending can be right from one view but incorrect from others.
in more complex models the problem is worse.
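(The order dependence of alpha "over" blending versus the order independence of additive blending is easy to check with a couple of lines - a quick single-channel sketch with made-up values, just to illustrate the point:)

```python
def over(src, dst, a):
    """SRCALPHA, INVSRCALPHA blending of one channel."""
    return a * src + (1.0 - a) * dst

def additive(src, dst):
    return src + dst

# Two fragments over a background, single channel, alpha 0.5.
near, far, bg = 1.0, 0.0, 0.25

correct = over(near, over(far, bg, 0.5), 0.5)   # back-to-front order
wrong   = over(far, over(near, bg, 0.5), 0.5)   # front-to-back order

add_ab = additive(near, additive(far, bg))      # either order: same sum
add_ba = additive(far, additive(near, bg))
```

Swapping the order changes the "over" result but not the additive one, which is why alpha-blended geometry has to be sorted (or peeled) while additive particles don't.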
hope this clears a few things up...
cya

pixelmager said...

After a healthy discussion with a colleague, our conclusion is that it's probably just kind of a bad example-scene :) I think just switching the background to white would make the images quite a bit more illustrative.

And thanks for taking the time to respond :)

orenk2k said...

hi pixelmager
no prob', thanks for reading it ;P