How do you display something using a DirectX 11 compute shader?
I want to write to a texture from my DirectX 11 compute shader. However, I have no idea how to display it on the screen, nor am I sure what sort of buffer I should use to do this.
Welcome to Stack Overflow :)
The resource type to use is RWTexture2D<float4>, since you can present it directly on screen via a swap chain.
You can look at the DirectX SDK OIT sample: it declares a RWTexture2D<float4> frameBuffer that is written in the SortAndRenderCS function of OIT_CS.hlsl:
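On the API side, the essential step is to create the swap chain with unordered-access usage and then get a UAV on its back buffer so the compute shader can write to it. A rough C++ sketch of that wiring (variable names, thread-group size, and error handling are my assumptions, not the sample's exact code; the swap chain must have been created with DXGI_USAGE_UNORDERED_ACCESS in BufferUsage and a UAV-capable format such as DXGI_FORMAT_R8G8B8A8_UNORM):

```cpp
// Sketch (assumed names): device, context, swapChain, computeShader,
// width and height are created elsewhere.
ID3D11Texture2D* backBuffer = nullptr;
swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);

// A UAV on the back buffer lets the compute shader write pixels directly.
ID3D11UnorderedAccessView* backBufferUAV = nullptr;
device->CreateUnorderedAccessView(backBuffer, nullptr, &backBufferUAV);

// Bind the UAV to the compute stage, run the shader, then present.
context->CSSetUnorderedAccessViews(0, 1, &backBufferUAV, nullptr);
context->CSSetShader(computeShader, nullptr, 0);
context->Dispatch((width + 7) / 8, (height + 7) / 8, 1); // assuming 8x8 thread groups

// Unbind the UAV before presenting so the resource is not bound for writing.
ID3D11UnorderedAccessView* nullUAV = nullptr;
context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
swapChain->Present(1, 0);
```

Alternatively, you can write to a separate RWTexture2D and then draw it with a full-screen quad, which also works when the back-buffer format does not support UAVs.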
// convert the color to floats
float4 color[3];
color[0].r = (r0 >> 0 & 0xFF) / 255.0f;
color[0].g = (r0 >> 8 & 0xFF) / 255.0f;
color[0].b = (r0 >> 16 & 0xFF) / 255.0f;
color[0].a = (r0 >> 24 & 0xFF) / 255.0f;
color[1].r = (r1 >> 0 & 0xFF) / 255.0f;
color[1].g = (r1 >> 8 & 0xFF) / 255.0f;
color[1].b = (r1 >> 16 & 0xFF) / 255.0f;
color[1].a = (r1 >> 24 & 0xFF) / 255.0f;
color[2].r = (r2 >> 0 & 0xFF) / 255.0f;
color[2].g = (r2 >> 8 & 0xFF) / 255.0f;
color[2].b = (r2 >> 16 & 0xFF) / 255.0f;
color[2].a = (r2 >> 24 & 0xFF) / 255.0f;
float4 result = lerp(lerp(lerp(0, color[2], color[2].a), color[1], color[1].a), color[0], color[0].a);
result.a = 1.0f;
frameBuffer[nDTid.xy] = result;
As you can see, r0, r1 and r2 are uint values that are actually packed RGBA colors (one byte per channel); the sample extracts each channel with shifts and masks and normalizes it to [0, 1]. You don't need to do that if you already have float4 values, of course.
Then they do those lerps to blend the layers. Again, you shouldn't need that.
What interests you is that they write to frameBuffer using array notation with a uint2 for the pixel coordinates.
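Stripped down to just that pattern, a minimal compute shader for your case could look like this (the texture name, register slot, thread-group size, and the 800x600 resolution are assumptions for illustration; in practice pass the real size via a constant buffer and match the register to your CSSetUnorderedAccessViews call):

```hlsl
// Minimal sketch: one thread per pixel, writing straight to the UAV.
RWTexture2D<float4> frameBuffer : register(u0);

[numthreads(8, 8, 1)]
void CSMain(uint3 DTid : SV_DispatchThreadID)
{
    // Array notation with uint2 coordinates, as in the OIT sample.
    // Here: a simple gradient, assuming an 800x600 target.
    frameBuffer[DTid.xy] = float4(DTid.x / 800.0f, DTid.y / 600.0f, 0.0f, 1.0f);
}
```

Dispatch this with enough 8x8 groups to cover the texture, e.g. Dispatch(ceil(width/8), ceil(height/8), 1).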