1) change shader system: look into fx files

2) tone mapping / bloom (Gaussian blur):
   use downsampling: first 4x4 to luminance, then 3 further passes down to 1x1;
   use 3 new render targets for this

3) MRT channels:
   mrt 1: rgba color + emissive / luminance
   mrt 2: world pos + projected depth
   mrt 3: normals + w coordinate
   mrt 4: flip-flop for mrt 1

4) idea: downsample the color texture for AO

5) use the depth formula on projected depth (or use linear depth) in order to
   combine the AO + diffuse lookup into one texture lookup
   reordered MRTs:
   mrt 1: rgba color + projected depth
   mrt 2: luminance
   mrt 3: normals + w coordinate
   mrt 4: flip-flop with mrt 1 / store luminance after SSAO

6) render sun disc

ssao:

1) implement bilateral filtering

2) check physical properties paper of 2007
   Arikan formula: SW(P, C, r) = 2 * pi * (1 - cos(asin(r / |PC|)))
   but not working properly!

3) normal mapping problems

4) update of converged regions: something could become visible
   a) from outside the frame
   b) from previously occluded regions
   for a) could use the ratio of samples outside the current frame vs. the
   last frame, but that slows down the code

5) dynamic objects:
   a) make AO stick to the object: this should be possible somehow, as the
      information is still available!
      tried to use the difference in AO intensity between the previous and
      the current frame to find out if a pixel's AO is no longer valid,
      but this introduced some flickering, and the update of dynamic objects
      was not fast enough (annoying grey fade effect)
   b) fix the contact shadow on the floor: check when a pixel is no longer
      valid; do that by checking for each sample whether it was invalidated
      recently. if so, invalidate the current pixel's AO.
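The SW(P, C, r) formula above is the solid angle a sphere of radius r centred at C subtends at point P. A minimal sketch, assuming that interpretation (function name and the clamp are mine); note that asin blows up (NaN) whenever r > |PC|, which is one plausible cause of the "not working properly":

```python
import math

def solid_angle(dist, r):
    """Solid angle (steradians) subtended at distance dist by a sphere of
    radius r, i.e. 2*pi*(1 - cos(asin(r / dist))).

    Clamps r/dist to 1 so asin's argument stays in [-1, 1]; without the
    clamp, samples closer than r produce NaN.
    """
    s = min(r / dist, 1.0)
    # cos(asin(x)) == sqrt(1 - x*x), which avoids the inverse trig entirely
    # and is cheaper in a shader.
    return 2.0 * math.pi * (1.0 - math.sqrt(1.0 - s * s))
```

At dist == r the sphere fills a full hemisphere (2*pi steradians), and the value falls off toward 0 as dist grows.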
   for each point, you theoretically don't have to compare the depth of the
   current pixel, but the depth of the samples taken for AO

idea for incorporating dynamic objects: store an object id in a render
target. for each object we know the transformation, so when doing the back
projection:
=> as usual, we have the world-space position of the current pixel and find
   the pixel from the last frame using the old projection-view transform,
   but now we first apply the inverse of the transformation that brought the
   last pixel to the current pixel, and only then the old projection-view!
=> then we do the equality comparison as usual

for reducing flickering: keep the chain of kernels constant based on the
state of convergence => only use a single kernel, rotated based on a noise
texture; we use a fixed offset into the noise texture based on the number of
frames the samples were accumulated

problem:
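The back-projection order described above (undo the object's motion first, then apply the old projection-view) can be sketched as follows; this is an illustrative pure-Python version with row-major 4x4 matrices and column vectors, and all names are mine:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def reproject_dynamic(world_pos, inv_object_delta, old_proj_view):
    """Find last frame's position of a pixel that lies on a moving object.

    world_pos        : current world-space position (x, y, z, 1)
    inv_object_delta : inverse of the per-object transform that moved it
                       from the previous frame to the current one (looked
                       up via the object id stored in the render target)
    old_proj_view    : previous frame's projection-view matrix
    """
    # Step 1: undo the object's motion to get last frame's world position.
    prev_world = mat_vec(inv_object_delta, world_pos)
    # Step 2: only now apply the OLD projection-view, as in the static case.
    clip = mat_vec(old_proj_view, prev_world)
    # Perspective divide yields the coordinates at which to sample last
    # frame's AO / depth for the equality comparison.
    w = clip[3]
    return (clip[0] / w, clip[1] / w, clip[2] / w)
```

With a static object, `inv_object_delta` is the identity and this degenerates to the usual reprojection; swapping the two steps would compare against the wrong pixel for anything that moved.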