Depth Of Field demo
-------------------

Controls:

Mouse controls the camera orientation.
The Focal Distance parameter controls the focal depth of the simulated camera.


The implemented method:
-----------------------


Depth of Field with Simulation of Circle of Confusion

Multiple rays scattered from a given point on an object pass through the camera lens, forming a cone of light. If the object is in focus, all rays converge to a single point on the image plane. However, if a given point on an object in the scene is not near the focal distance, the cone of light rays intersects the image plane in an area shaped like a conic section. Typically, the conic section is approximated by a circle called the circle of confusion. The diameter b of the circle of confusion depends on the distance to the plane of focus and the lens aperture setting a (also known as the f-stop). For a known focus distance and lens parameters, the size of the circle of confusion can be calculated as:

    b = (D * f * (z_focus - z)) / (z_focus * (z - f))

where D is the lens diameter, D = f / a, and f is the focal length of the lens. Any circle of confusion greater than the smallest point the human eye can resolve contributes to the blurriness of the image that we perceive as depth of field.

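For illustration, the thin lens formula can be evaluated directly in host code. The following helper is a minimal sketch; the function name and parameters are ours, not part of the demo:

// Signed circle of confusion diameter from the thin lens model:
// b = (D * f * (z_focus - z)) / (z_focus * (z - f)), with D = f / a.
// All distances are in the same world-space units.
float CoCDiameter(float z, float zFocus, float focalLength, float fStop)
{
    float D = focalLength / fStop; // lens diameter
    return (D * focalLength * (zFocus - z)) / (zFocus * (z - focalLength));
}

Note that the result is signed (negative behind the plane of focus), so a blurriness measure would typically use its absolute value.
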
The methods presented in this section are all post-processing methods. This means that they consist of two main phases. In the first phase, the scene is rendered into an off-screen buffer one or more times. In the second phase, the final image is computed from the off-screen image buffers using some type of depth-controlled blurring.

Phase 1:
First, the whole scene is rendered, outputting, in addition to the resulting scene color, depth and a blurriness factor that describes how much each pixel should be blurred. This can be accomplished by rendering the scene to multiple buffers at once. DirectX® 9 has a useful feature called Multiple Render Targets (MRT) that allows simultaneous shader output into multiple renderable buffers. Using this feature gives us the ability to output all of the data channels (scene color, depth, and blurriness factor) in our first pass. One of the MRT restrictions on some hardware is the requirement for all render surfaces to have the same bit depth, while allowing the use of different surface formats. Guided by this requirement, we can pick the D3DFMT_A8R8G8B8 format for the scene color output and the two-channel D3DFMT_G16R16 format for depth and blurriness factor. Both formats are 32 bits per pixel and provide enough space for the necessary information at the desired precision.

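On the host side, creating and binding the two render targets might look like the following minimal sketch, assuming a valid IDirect3DDevice9 pointer pDevice (the variable names are ours; error checking omitted):

IDirect3DTexture9 *pColorTex = NULL, *pDepthTex = NULL;
IDirect3DSurface9 *pColorSurf = NULL, *pDepthSurf = NULL;

// Scene color: 8 bits per channel
pDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &pColorTex, NULL);
// Depth and blurriness factor: two 16-bit channels
pDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_G16R16, D3DPOOL_DEFAULT, &pDepthTex, NULL);

pColorTex->GetSurfaceLevel(0, &pColorSurf);
pDepthTex->GetSurfaceLevel(0, &pDepthSurf);

// Bind both targets for the scene rendering pass;
// the shader's COLOR0 goes to target 0, COLOR1 to target 1.
pDevice->SetRenderTarget(0, pColorSurf);
pDevice->SetRenderTarget(1, pDepthSurf);
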
The implementation of the first phase: The pixel shader of the scene rendering pass needs to compute the blurriness factor and output it along with the scene depth and color. To abstract from different display sizes and resolutions, the blurriness is defined to lie in the 0..1 range. A value of zero means the pixel is perfectly sharp, while a value of one corresponds to a pixel with the maximal circle of confusion size. The reason behind using the 0..1 range is twofold. First, the blurriness is not expressed in terms of pixels and can scale with resolution during the post-processing step. Second, the values can be directly used as sample weights when eliminating bleeding artifacts.
For each pixel of the scene, this shader computes the circle of confusion size based on the thin lens formula provided in the preceding discussion. Later in the process, the size of the circle of confusion is scaled by a factor corresponding to the size of the circle in pixels for a given resolution and display size. As a last step, the blurriness value is divided by the maximal desired circle of confusion size in pixels (variable maxCoC) and clamped to the 0..1 range. Sometimes it might be necessary to limit the circle of confusion size (through the variable maxCoC) to reasonable values (e.g., 10 pixels) to avoid sampling artifacts caused by an insufficient number of filter taps.

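Note that the demo shader shown below approximates this with a simple linear falloff around the focal distance. A blurriness computation that follows the thin lens model literally might look like this sketch, building on the hypothetical CoCDiameter helper above (pixelsPerUnit and maxCoC are assumed parameters):

// Maps the circle of confusion to the normalized 0..1 blurriness factor.
// pixelsPerUnit converts the world-space CoC diameter into pixels for the
// current resolution and display size; maxCoC is the largest allowed CoC.
float Blurriness(float z, float zFocus, float focalLength, float fStop,
                 float pixelsPerUnit, float maxCoC)
{
    float b = CoCDiameter(z, zFocus, focalLength, fStop);
    if (b < 0.0f) b = -b;                 // use the absolute diameter
    float pixels = b * pixelsPerUnit;     // CoC size in pixels
    float blur = pixels / maxCoC;         // normalize by the maximal CoC
    return blur > 1.0f ? 1.0f : blur;     // clamp to the 0..1 range
}
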
struct VS_INPUT {
    float4 Position  : POSITION;
    float3 Normal    : NORMAL;
    float3 Binormal  : BINORMAL;
    float3 Tangent   : TANGENT;
    float4 TexCoord0 : TEXCOORD0;
};

struct VS_OUTPUT {
    float4 hPosition : POSITION;  // point in normalized device space before homogeneous division
    float2 TexCoord  : TEXCOORD0; // texture coordinates
    float3 tView     : TEXCOORD1; // tangent space view vector
    float3 tLight    : TEXCOORD2; // tangent space light vector
    float  Depth     : TEXCOORD3; // depth used for the blurriness computation
};

struct PS_OUTPUT {
    float4 Color : COLOR0; // scene color (first render target)
    float4 Depth : COLOR1; // depth in x, blurriness factor in y (second render target)
};

VS_OUTPUT BumpVS(VS_INPUT IN)
{
    VS_OUTPUT output;

    // object-space tangent matrix
    float3x3 Tan = float3x3(normalize(IN.Tangent), normalize(IN.Binormal), IN.Normal);
    // position in view-space (not used further in this shader)
    float3 P = mul(IN.Position, WorldView);
    // model-space view vector
    float3 mView = mCameraPos - IN.Position.xyz;
    // model-space light vector
    float3 mLight = mLightPos - IN.Position.xyz;
    // tangent-space view vector
    output.tView = mul(Tan, mView);
    // tangent-space light vector
    output.tLight = mul(Tan, mLight);
    // vertex position before homogeneous division
    output.hPosition = mul(IN.Position, WorldViewProj);
    // tex coordinates passed to pixel shader
    output.TexCoord = IN.TexCoord0;
    // depth passed to the pixel shader for the blurriness computation
    output.Depth = output.hPosition.z;

    return output;
}

PS_OUTPUT BumpPS(VS_OUTPUT IN)
{
    PS_OUTPUT output;
    // needs normalization because of linear interpolation
    float3 View = normalize(IN.tView);
    // needs normalization because of linear interpolation
    float3 Light = normalize(IN.tLight);
    // get tangent-space normal from normal map
    float3 Normal = tex2D(BumpMapSampler, IN.TexCoord).rgb;
    // illumination calculation
    output.Color = Illumination(Light, Normal, View, IN.TexCoord, Attenuation(IN.tLight));

    // blurriness factor: zero at the focal distance, growing linearly
    // with the distance from it, clamped to the 0..1 range
    float blur = saturate(abs(IN.Depth - focalDist) * focalRange);
    // second render target: depth in x, blurriness factor in y
    output.Depth = float4(IN.Depth, blur, 0, 0);

    return output;
}

Phase 2:
During the post-processing phase, the results of the previous rendering are processed and the color image is blurred based on the blurriness factor computed in the first phase. Blurring is performed using a variable-sized filter representing the circle of confusion. To perform image filtering, a simple screen-aligned quadrilateral is drawn, textured with the results of the first phase.
The filter kernel in the post-processing step has 13 samples: a center sample and 12 outer samples. The number of samples can vary, but 12 represents the maximum number of samples that can be processed by a 2.0 pixel shader in a single pass.
The post-processing pixel shader computes filter sample positions based on 2D offsets stored in the filterTaps array, initialized by the following function:

void SetupFilterKernel()
{
    // size of one texel in texture coordinates
    FLOAT dx = 1.0f / (FLOAT)m_ScreenWidth;
    FLOAT dy = 1.0f / (FLOAT)m_ScreenHeight;

    // 12 tap offsets distributed within the unit disk, scaled to one texel
    D3DXVECTOR4 v[12];
    v[0]  = D3DXVECTOR4(-0.326212f * dx, -0.405805f * dy, 0.0f, 0.0f);
    v[1]  = D3DXVECTOR4(-0.840144f * dx, -0.07358f  * dy, 0.0f, 0.0f);
    v[2]  = D3DXVECTOR4(-0.695914f * dx,  0.457137f * dy, 0.0f, 0.0f);
    v[3]  = D3DXVECTOR4(-0.203345f * dx,  0.620716f * dy, 0.0f, 0.0f);
    v[4]  = D3DXVECTOR4( 0.96234f  * dx, -0.194983f * dy, 0.0f, 0.0f);
    v[5]  = D3DXVECTOR4( 0.473434f * dx, -0.480026f * dy, 0.0f, 0.0f);
    v[6]  = D3DXVECTOR4( 0.519456f * dx,  0.767022f * dy, 0.0f, 0.0f);
    v[7]  = D3DXVECTOR4( 0.185461f * dx, -0.893124f * dy, 0.0f, 0.0f);
    v[8]  = D3DXVECTOR4( 0.507431f * dx,  0.064425f * dy, 0.0f, 0.0f);
    v[9]  = D3DXVECTOR4( 0.89642f  * dx,  0.412458f * dy, 0.0f, 0.0f);
    v[10] = D3DXVECTOR4(-0.32194f  * dx, -0.932615f * dy, 0.0f, 0.0f);
    v[11] = D3DXVECTOR4(-0.791559f * dx, -0.597705f * dy, 0.0f, 0.0f);

    g_pPostEffect->SetVectorArray("filterTaps", v, 12);
}

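Because the tap offsets are expressed in texels via m_ScreenWidth and m_ScreenHeight, SetupFilterKernel presumably has to be called again whenever the render target resolution changes.
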
One of the problems with all post-filtering methods is leaking, or "bleeding", of color from sharp objects onto blurry backgrounds, which results in faint halos around sharp objects. The color leaking happens because the filter for the blurry background samples color from the nearby sharp object due to the large filter size. To solve this problem, we discard the outer samples that can contribute to leaking according to the following criterion: if an outer sample is in focus and lies in front of the blurry center sample, it should not contribute to the blurred color (as implemented by the tapContribution computation in the pixel shader below). This can introduce a minor popping effect when objects go into or out of focus. To combat sample popping, the outer sample's blurriness factor is used as a sample weight to fade out its contribution gradually.

The vertex shader is fed a fullscreen quad without texture coordinates. The coordinates are calculated in the vertex shader and passed to the pixel shader.

struct VS_INPUT {
    float3 Position : POSITION;
};

struct VS_OUTPUT {
    float4 hPosition : POSITION; // point in normalized device space before homogeneous division
    float2 TexCoord  : TEXCOORD0;
};

VS_OUTPUT DepthVS(VS_INPUT IN)
{
    VS_OUTPUT output;

    output.hPosition = float4(IN.Position, 1);

    // map the quad from [-1,1] clip space to [0,1] texture space,
    // flipping y to account for the inverted texture-space y axis
    output.TexCoord = IN.Position.xy * float2(0.5f, -0.5f) + 0.5f;

    return output;
}
//------------------------------------------------------------------------------------
//
// DoF pixel shader
//
//------------------------------------------------------------------------------------
const float maxCoC = 5;

float4 DepthPS(VS_OUTPUT IN) : COLOR
{
    // Get center sample
    float4 colorSum = tex2D(ColorMapSampler, IN.TexCoord);
    float2 centerDepthBlur = tex2D(DepthMapSampler, IN.TexCoord).xy;

    // Compute CoC size (in pixels) based on blurriness
    float sizeCoC = centerDepthBlur.y * maxCoC;

    float totalContribution = 1.0f;

    // Run through all taps (NUM_DOF_TAPS is 12 for this kernel)
    for (int i = 0; i < NUM_DOF_TAPS; i++)
    {
        // Compute tap coordinates
        float2 tapCoord = IN.TexCoord + filterTaps[i].xy * sizeCoC;

        // Fetch tap sample
        float4 tapColor = tex2D(ColorMapSampler, tapCoord);
        float2 tapDepthBlur = tex2D(DepthMapSampler, tapCoord).xy;

        // Compute tap contribution: taps behind the center sample always
        // contribute fully; taps in front are weighted by their own
        // blurriness to prevent sharp foreground objects from bleeding
        float tapContribution = (tapDepthBlur.x > centerDepthBlur.x) ? 1.0f : tapDepthBlur.y;

        // Accumulate color and contribution
        colorSum += tapColor * tapContribution;
        totalContribution += tapContribution;
    }

    // Normalize to get proper luminance
    float4 finalColor = colorSum / totalContribution;

    return finalColor;
}
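
For completeness, driving phase 2 from the host might look like the following sketch. The effect parameter names ("ColorMap", "DepthMap") and the quad layout are our assumptions based on the samplers and vertex shader above; error checking is omitted:

// Restore the back buffer as the render target and unbind MRT slot 1
pDevice->SetRenderTarget(0, pBackBufferSurf);
pDevice->SetRenderTarget(1, NULL);

// Feed the phase 1 results to the post-processing effect
g_pPostEffect->SetTexture("ColorMap", pColorTex);
g_pPostEffect->SetTexture("DepthMap", pDepthTex);

// Fullscreen quad in clip space; DepthVS derives the texture coordinates.
// Culling is assumed to be disabled for the post-processing pass.
const float quad[4][3] = {
    { -1.0f, -1.0f, 0.0f },
    { -1.0f,  1.0f, 0.0f },
    {  1.0f, -1.0f, 0.0f },
    {  1.0f,  1.0f, 0.0f },
};

UINT numPasses = 0;
pDevice->SetFVF(D3DFVF_XYZ);
g_pPostEffect->Begin(&numPasses, 0);
g_pPostEffect->BeginPass(0);
pDevice->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, quad, 3 * sizeof(float));
g_pPostEffect->EndPass();
g_pPostEffect->End();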