Timestamp:
09/16/05 10:41:52
Author:
bittner
Message:

changes in the structure: renamed tools to algorithms

File:
1 edited

Legend:

Unchanged lines are shown without a prefix; lines prefixed with '-' were removed in r277; lines prefixed with '+' were added in r277. Elided context is marked with '…'.
  • trunk/VUT/doc/SciReport/sampling.tex

r273 → r277

- \chapter{Global Visibility Sampling Tool}
-
+ \chapter{Global Visibility Sampling}
+
+ \label{chap:sampling}

The proposed visibility preprocessing framework consists of two major
…
     
\begin{itemize}
- \item The first step is an aggresive visibility sampling which gives
+ \item The first step is an aggressive visibility sampling which gives
an initial estimate about global visibility in the scene. The sampling
itself involves several strategies which will be described below.
- The imporant property of the aggresive sampling step is that it
+ The important property of the aggressive sampling step is that it
provides a fast progressive solution to global visibility and thus it
can be easily integrated into the game development cycle. The
- aggresive sampling will terminate when the average contribution of new
+ aggressive sampling will terminate when the average contribution of new
ray samples falls below a predefined threshold (a code sketch of this
loop follows below).

\item The second step is mutual visibility verification. This step
- turns the previous aggresive visibility solution into either exact,
- conservative or error bound aggresive solution. The choice of the
+ turns the previous aggressive visibility solution into either an exact,
+ a conservative, or an error-bound solution. The choice of the
particular verifier is left to the user in order to select the best
one for a particular scene, application context and time
constraints. For example, in scenes like a forest an error-bound
- aggresive visibility can be the best compromise between the resulting
- size of the PVS (and framerate) and the visual quality. The exact or
+ aggressive visibility can be the best compromise between the resulting
+ size of the PVS (and frame rate) and the visual quality. The exact or
conservative algorithm can however be chosen for urban scenes where
- ommision of even small objects can be more distructing for the
- user. The mutual visibility tool will be described in the next chapter.
+ omission of even small objects can be more distracting for the
+ user. The mutual visibility verification will be described in the next
+ chapter.

\end{itemize}
…
     
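The termination rule of the first step (stop once the average contribution of new ray samples falls below a threshold) can be made concrete. A minimal C++ sketch with a stubbed-out ray caster; none of these names come from the project:

{{{
// Progressive aggressive sampling: cast rays in batches and stop once
// the average contribution of a batch falls below the threshold.
// CastSampleRayAndUpdatePVS stands in for the real ray caster.
#include <cstddef>

static int CastSampleRayAndUpdatePVS()
{
    static int callCount = 0;
    // Fake diminishing returns: early rays discover many new
    // (view cell, object) visibility pairs, later ones almost none.
    return 1000 / ++callCount;
}

void AggressiveSampling(double threshold, std::size_t batchSize)
{
    double avgContribution = 0.0;
    do {
        long total = 0;
        for (std::size_t i = 0; i < batchSize; ++i)
            total += CastSampleRayAndUpdatePVS();
        avgContribution = static_cast<double>(total) / batchSize;
    } while (avgContribution > threshold);
}
}}}

Because the loop only ever adds visibility pairs, stopping early yields a subset of the exact PVS, which is what makes the solution aggressive.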
In traditional visibility preprocessing the view space is
subdivided into view cells and for each view cell the set of visible
- objects --- potentially visible set (PVS) is computed. This framewoirk
- has been used for conservative, aggresive and exact algorithms.
+ objects --- the potentially visible set (PVS) --- is computed. This framework
+ has been used for conservative, aggressive and exact algorithms.

We propose a different strategy which has several advantages for
- sampling based aggresive visibility preprocessing. The stategy is
+ sampling-based aggressive visibility preprocessing. The strategy is
based on the following fundamental ideas:
\begin{itemize}
…
     
the partitioning. The major drawback of this approach is that for
polygonal scenes with $n$ polygons there can be $\Theta(n^9)$ cells in
- the partitioning for unrestricted viewspace. A {\em scale space}
+ the partitioning for unrestricted view space. A {\em scale space}
aspect graph~\cite{bb12595,bb12590} improves the robustness of the method
by merging similar features according to the given scale.
…
     
The exact mutual visibility method presented later in the report is
based on a method exploiting \plucker coordinates of
- lines~\cite{bittner02phd,nirenstein:02:egwr,haumont2005}. This
+ lines~\cite{bittner:02:phd,nirenstein:02:egwr,haumont2005:egsr}. This
algorithm uses \plucker coordinates to compute visibility in shafts
defined by each polygon in the scene.
…
     
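As a brief reminder of the standard definitions behind the cited methods:

{{{
\plucker{} coordinates of the directed line $\ell$ through points
$p, q \in \mathbb{R}^3$ are
\[
  \pi_\ell = (d, m) = (q - p,\; p \times q) \in \mathbb{R}^6 .
\]
Two lines $\ell_1 = (d_1, m_1)$ and $\ell_2 = (d_2, m_2)$ are incident if
and only if their side product vanishes:
\[
  \mathrm{side}(\ell_1, \ell_2) = d_1 \cdot m_2 + d_2 \cdot m_1 = 0 .
\]
The sign of the side product encodes the relative orientation of the two
lines, which is what allows stabbing a sequence of polygons in a shaft to
be expressed as linear constraints in \plucker{} space.
}}}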
- \begin{figure}[htb]
-   \centerline{
-     \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_01}
-      \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_07}
-     }
-
-   \caption{(left) Vienna viewcells. (right) The view cells are prisms with a triangular base. }
-   \label{fig:vienna_viewcells}
- \end{figure}
…
     
partition of the scene into view cells is an essential part of every
visibility system. If they are chosen too large, the PVS (Potentially
- Visibible Set)  of a view cells is too large for efficient culling. If
+ Visible Set) of a view cell is too large for efficient culling. If
they are chosen too small or without consideration, then neighbouring
view cells contain redundant PVS information, hence boosting the PVS

…

height to form a view cell prism.

+ \begin{figure}[htb]
+   \centerline{
+     \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_01}
+      \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_07}
+     }
+
+   \caption{(left) Vienna view cells. (right) The view cells are prisms with a triangular base. }
+   \label{fig:vienna_viewcells}
+ \end{figure}
+
In order to efficiently use view cells with our sampling method, we
require a view cell representation which is
…
     
\end{itemize}

- We meet these requirements by employing spatial subdivisions (i.e.,
- KD trees and BSP trees), to store the view cells. The initial view cells are
- associated with the leaves. The reason why we chose BSP trees as view cell representation
- is that they are very flexible. View cells forming arbitrary closed meshes can be closely matched.
- Therefore we are able to find a view cells with only a few view ray-plane
- intersections.  Furthermore, the hierarchical structures can be
- exploited as hierarchy of view cells. Interior nodes form larger view cells
- containing the children. If neccessary, a leaf can be easily subdivided into smaller view cells.
-
- Currently we consider three different approaches to generate the initial view cell BSP tree.
- The third method is not restricted to BSP trees, but BSP trees are prefered
- because of their greater flexibility.
+ We meet these requirements by employing spatial subdivisions (i.e.,
+ KD trees and BSP trees) to store the view cells. The initial view
+ cells are associated with the leaves. The reason why we chose BSP
+ trees as the view cell representation is that they are very
+ flexible. View cells forming arbitrary closed meshes can be closely
+ matched. Therefore we are able to find a view cell with only a few
+ ray-plane intersections. Furthermore, the hierarchical structure can
+ be exploited as a hierarchy of view cells: interior nodes form larger
+ view cells containing their children, and if necessary, a leaf can
+ easily be subdivided into smaller view cells.
+
+ Currently we consider three different approaches to generate the
+ initial view cell BSP tree. The third method is not restricted to BSP
+ trees, but BSP trees are preferred because of their greater
+ flexibility.
…
     
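The "few ray-plane intersections" property comes from simple point filtering down the tree. A minimal sketch of locating the view cell containing a point; the types are assumptions, not the project's classes:

{{{
// Filter a point down the BSP tree; one plane test per level yields the
// leaf (and thus the view cell) containing the point.
#include <array>

using Point = std::array<float, 3>;

struct Plane {
    std::array<float, 3> n;    // plane normal
    float d;                   // offset: the plane is n . p + d = 0
    float Eval(const Point& p) const {
        return n[0]*p[0] + n[1]*p[1] + n[2]*p[2] + d;
    }
};

struct ViewCell;               // leaf payload: PVS, statistics, ...

struct BspNode {
    Plane plane;               // splitting plane (interior nodes only)
    BspNode* front = nullptr;  // subtree on the positive side
    BspNode* back = nullptr;   // subtree on the negative side
    ViewCell* viewCell = nullptr;  // set only in leaves
    bool IsLeaf() const { return !front && !back; }
};

ViewCell* LocateViewCell(BspNode* node, const Point& p)
{
    while (node && !node->IsLeaf())
        node = node->plane.Eval(p) >= 0.0f ? node->front : node->back;
    return node ? node->viewCell : nullptr;
}
}}}

Interior nodes double as the coarser cells of the view cell hierarchy: cutting the descent short at any interior node returns the larger view cell containing all of its children.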
  \#splits & 25 & 188936\\\hline\hline
  max tree depth & 13 & 27\\\hline\hline
-   avg tree depth & 9.747 & 21.11\\\hline\hlien
+   avg tree depth & 9.747 & 21.11\\\hline\hline

\end{tabular}
…
     
\begin{itemize}

- \item We use a number of input view cells given in advance. As input view cell
- any closed mesh can be applied. The only requirement is that the
- any two view cells do not overlap.
- First the view cell polygons are extracted, and the BSP is build from
- these polygons using some global optimizations like tree balancing or
- least splits. Then one view cell after the other
- is inserted into the tree to find out the leafes where they are contained
- in. The polygons of the view cell are filtered down the tree,
- guiding the insertion process. Once we reach a leaf and there are no
- more polygons left, we terminate the tree subdivision. If we are on
- the inside of the last split plane (i.e., the leaf represents the
- inside of the view cell), we associate the leaf with the view cell
- (i.e., add a pointer to the view cell). One input view cell can
- be associated with many leafes, whereas each leafs has only one view cell.
- Some statistics about using this method on the vienna view cells set are given in table~\ref{tab:viewcell_bsp}.
- However, sometimes a good set of view cells is not available. Or the scene
- is changed frequently, and the designer does not want to create new view cells on each change.
- In such a case one of the following two methods should rather be chosen, which generate view cells
- automatically.
+ \item We use a number of input view cells given in advance. Any
+ closed mesh can be used as an input view cell; the only requirement is
+ that no two view cells overlap. First the view cell polygons are
+ extracted, and the BSP is built from these polygons using global
+ optimizations like tree balancing or fewest splits. Then the view
+ cells are inserted into the tree one after the other to find the
+ leaves in which they are contained. The polygons of the view cell are
+ filtered down the tree, guiding the insertion process. Once we reach a
+ leaf and there are no more polygons left, we terminate the tree
+ subdivision. If we are on the inside of the last split plane (i.e.,
+ the leaf represents the inside of the view cell), we associate the
+ leaf with the view cell (i.e., add a pointer to the view cell). One
+ input view cell can be associated with many leaves, whereas each leaf
+ has only one view cell. Some statistics about using this method on the
+ Vienna view cell set are given in table~\ref{tab:viewcell_bsp}.
+ However, sometimes a good set of view cells is not available, or the
+ scene changes frequently and the designer does not want to create new
+ view cells on each change. In such cases one of the following two
+ methods, which generate view cells automatically, should be chosen
+ instead.

\item We apply a BSP tree subdivision to the scene geometry. Whenever
- the subdivision terminates in a leaf, a view cell is associated with the leaf node.
- This simple approach is justified because it places the view cell borders
- along some discontinuities in the visibility function.
-
- \item  The view cell generation can be guided by the sampling process. We start with
- with a single initial view cell representing the whole space. If a given threshold
- is reached during the preprocessing (e.g., the view cell is pierced by too many rays
- resulting in a large PVS), the view cell is subdivided into smaller cells using some criteria.
-
- In order to evaluate the best split plane, we first have to define the characteristics of a
- good view cell partition:  The view cells should be quite large, while their PVS stays rather small.
- The PVS of each two view cells should be as distinct as possible, otherwise they could be
- merged into a larger view cell if the PVSs are too similar.
- E.g., for a building, the perfect view cells are usually the single rooms connected by portals.
-
- Hence we can define some useful criteria for the split: 1) the number of rays should be roughly equal among the new view cells. 2) The split plane should be chosen in a way that the rays are maximal distinct, i.e., the number
- of rays contributing to more than one cell should be minimized => the PVSs are also distinct. 3) For BSP trees,
- the split plane should be aligned with some scene geometry which is large enough to contribute
- a lot of occlusion power. This criterium can be naturally combined with the second one.
- As termination criterium we can choose the minimum PVS / piercing ray size or the maximal tree depth.
+ the subdivision terminates in a leaf, a view cell is associated with
+ the leaf node. This simple approach is justified because it places the
+ view cell borders along discontinuities of the visibility function.
+
+ \item The view cell generation can be guided by the sampling
+ process. We start with a single initial view cell representing the
+ whole space. If a given threshold is reached during preprocessing
+ (e.g., the view cell is pierced by too many rays, resulting in a large
+ PVS), the view cell is subdivided into smaller cells using some
+ criteria.
+
+ In order to evaluate the best split plane, we first have to define the
+ characteristics of a good view cell partition: the view cells should
+ be quite large, while their PVS stays rather small, and the PVSs of
+ any two view cells should be as distinct as possible, otherwise they
+ could be merged into a larger view cell. E.g., for a building, the
+ perfect view cells are usually the single rooms connected by portals.
+
+ Hence we can define some useful criteria for the split: 1) the number
+ of rays should be roughly equal among the new view cells; 2) the split
+ plane should be chosen in a way that the ray sets are disjoint, i.e.,
+ the number of rays contributing to more than one cell should be
+ minimized, so that the PVSs are also distinct; 3) for BSP trees, the
+ split plane should be aligned with scene geometry which is large
+ enough to contribute a lot of occlusion power. This criterion can be
+ naturally combined with the second one. As a termination criterion we
+ can use a minimum PVS size, a minimum number of piercing rays, or the
+ maximal tree depth (a code sketch of this split evaluation follows the
+ list).
\end{itemize}

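Criteria 1) and 2) combine naturally into a single cost function over the rays piercing the cell. A sketch under assumed types; the weights are illustrative, not tuned values from the report:

{{{
// Score a candidate split plane for view cell subdivision: penalize
// imbalanced ray counts (criterion 1) and rays that straddle the plane
// and would contribute to both new cells (criterion 2).
#include <array>
#include <cmath>
#include <vector>

using Point = std::array<float, 3>;
struct Ray { Point a, b; };    // piercing ray clipped to the view cell

struct Plane {
    std::array<float, 3> n; float d;
    float Eval(const Point& p) const {
        return n[0]*p[0] + n[1]*p[1] + n[2]*p[2] + d;
    }
};

double SplitCost(const Plane& plane, const std::vector<Ray>& rays,
                 double imbalanceWeight = 1.0, double straddleWeight = 2.0)
{
    long front = 0, back = 0, straddle = 0;
    for (const Ray& r : rays) {
        const float sa = plane.Eval(r.a), sb = plane.Eval(r.b);
        if (sa >= 0 && sb >= 0)    ++front;     // entirely in front cell
        else if (sa < 0 && sb < 0) ++back;      // entirely in back cell
        else                       ++straddle;  // contributes to both cells
    }
    return imbalanceWeight * std::fabs(double(front - back))
         + straddleWeight * straddle;
}
}}}

For criterion 3), candidate planes would be drawn from large scene polygons and the lowest-cost candidate chosen; the termination test (PVS size, piercing rays, or tree depth) decides whether to evaluate candidates at all.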
- In the future we aim to extend the view cell construction by using
- feedback from the PVS computation: the view cells which contain many
- visibility changes will be subdivided further and neighboring view
- cells with similar PVSs will be merged. We want to gain a more precise
- information about visibility by selectivly storing rays with the
- viewcells and computing visibility statistics for subsets of rays
- which intersect subregions of the given view cell.
+ % In the future we aim to extend the view cell construction by using
+ % feedback from the PVS computation: the view cells which contain many
+ % visibility changes will be subdivided further and neighboring view
+ % cells with similar PVSs will be merged. We want to gain a more precise
+ % information about visibility by selectively storing rays with the
+ % view cells and computing visibility statistics for subsets of rays
+ % which intersect subregions of the given view cell.

\subsection{From-object based visibility}
…
     
At this phase of the computation we not only start the samples from
the objects, but we also store the PVS information centered at the
- objects. Instead of storing a PVS consting of objects visible from
+ objects. Instead of storing a PVS consisting of objects visible from
view cells, every object maintains a PVS consisting of potentially
visible view cells. While these representations contain exactly the
…
     
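The object-centered storage simply inverts the usual mapping. A minimal sketch with assumed identifier types:

{{{
// Object-centered PVS: each object records the view cells it is
// potentially visible from. Per-view-cell object PVSs can be recovered
// by inverting this mapping, so the two representations carry the same
// information.
#include <unordered_set>
#include <vector>

using ViewCellId = int;

struct ObjectPVS {
    std::unordered_set<ViewCellId> visibleFromCells;
};

// A sample ray cast from the object pierced these view cells: register
// them all with the object's PVS.
void RecordSample(ObjectPVS& pvs, const std::vector<ViewCellId>& pierced)
{
    pvs.visibleFromCells.insert(pierced.begin(), pierced.end());
}
}}}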
of the algorithm visits scene objects sequentially. For every scene
object we randomly choose a point on its surface. Then a ray is cast
- from the selected point according to the randomly chosen direction. We
- use a uniform distribution of the ray directions with respect to the
- halfspace given by the surface normal. Using this strategy the samples
- at deterministicaly placed at every object, with a randomization of
- the location on the object surface. The uniformly distributed
- direction is a simple and fast strategy to gain initial visibility
- information.
+ from the selected point in a randomly chosen direction (see
+ Figure~\ref{fig:sampling}). We use a uniform distribution of the ray
+ directions with respect to the half space given by the surface
+ normal. Using this strategy the samples are deterministically placed
+ at every object, with a randomization of the location on the object
+ surface. The uniformly distributed direction is a simple and fast
+ strategy to gain initial visibility information.

\begin{figure}%[htb]
-   \includegraphics[width=0.6\textwidth, draft=\DRAFTFIGS]{figs/sampling} \\
+   \centerline{
+     \includegraphics[width=0.4\textwidth, draft=\DRAFTFIGS]{figs/sampling}
+   }
+
%\label{tab:online_culling_example}
  \caption{Three objects and a set of view cells corresponding to leaves

…

    samples cast from a selected object (shown in red). Note that most
    samples contribute to more than one view cell.  }
-   \label{fig:online_culling_example}
+   \label{fig:sampling}
\end{figure}
…
     
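Drawing a direction uniformly over the hemisphere around the surface normal is the only non-obvious step of this basic strategy. A self-contained sketch, not the project's code:

{{{
// Uniform direction on the unit sphere via the cylinder mapping, then
// flipped into the half space of the surface normal, which yields a
// uniform hemisphere distribution.
#include <algorithm>
#include <array>
#include <cmath>
#include <random>

using Vec3 = std::array<double, 3>;

Vec3 UniformHemisphereDirection(const Vec3& normal, std::mt19937& rng)
{
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    const double pi = std::acos(-1.0);
    const double z = 2.0 * u01(rng) - 1.0;          // uniform in [-1, 1]
    const double phi = 2.0 * pi * u01(rng);
    const double r = std::sqrt(std::max(0.0, 1.0 - z * z));
    Vec3 d{r * std::cos(phi), r * std::sin(phi), z};
    // Mirror directions that point into the surface.
    if (d[0]*normal[0] + d[1]*normal[1] + d[2]*normal[2] < 0.0)
        for (double& c : d) c = -c;
    return d;
}
}}}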
local view cell directional density. This means placing more samples in
directions where more view cells are located: we select a random
- viecell which lies at the halfpace given by the surface normal at the
+ view cell which lies in the half space given by the surface normal at the
chosen point. We pick a random point inside the view cell and cast a
ray towards this point.
…
     
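The view-cell-directed variant replaces the uniform direction by aiming at a randomly chosen cell. A sketch with view cells simplified to boxes; all names are assumptions:

{{{
// Pick a random view cell on the normal's side of the surface point and
// return a random point inside it as the target of the sample ray.
// Assumes the cell list is non-empty and at least one cell lies in that
// half space.
#include <array>
#include <cstddef>
#include <random>
#include <vector>

using Vec3 = std::array<double, 3>;
struct ViewCellBox { Vec3 lo, hi; };   // view cell bounds, simplified

Vec3 SampleRayTarget(const Vec3& origin, const Vec3& normal,
                     const std::vector<ViewCellBox>& cells,
                     std::mt19937& rng)
{
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    std::uniform_int_distribution<std::size_t> pick(0, cells.size() - 1);
    for (;;) {
        const ViewCellBox& c = cells[pick(rng)];
        Vec3 p;                        // random point inside the cell
        for (int i = 0; i < 3; ++i)
            p[i] = c.lo[i] + u01(rng) * (c.hi[i] - c.lo[i]);
        const double side = (p[0]-origin[0]) * normal[0]
                          + (p[1]-origin[1]) * normal[1]
                          + (p[2]-origin[2]) * normal[2];
        if (side > 0.0)
            return p;                  // cast the ray from origin towards p
    }
}
}}}

Directions containing more view cells receive more samples because each cell is equally likely to be chosen, which is exactly the density bias the text describes.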
\subsection{Accounting for Visibility Events}

- Visibility events correspond to appearance and disapearance of
+ Visibility events correspond to appearance and disappearance of
objects with respect to a moving view point. In polygonal scenes the
events are defined by event surfaces given by three distinct scene
edges. Depending on the edge configuration we distinguish between
- vertex-edge events (VE) and tripple edge (EEE) events. The VE surfaces
+ vertex-edge events (VE) and triple edge (EEE) events. The VE surfaces
are planes, whereas the EEE surfaces are in general quadric surfaces.
…
     
connectivity information we select a random silhouette edge from this
object and cast a sample which is tangent to that object at the
- selectededge .
+ selected edge.

The second strategy works as follows: we randomly pick two objects
…
     
\section{Summary}

- This chapter described a global visibility sampling tool which forms a
- core of the visibility preprocessing framework. The global visibility
- sampling computes aggresive visibility, i.e. it computes a subset of
- the exact PVS for each view cell. The aggresive sampling provides a
- fast progressive solution and thus it can be easily integrated into
- the game development cycle. The sampling itself involves several
- strategies which aim to pregressively discover more visibility
- relationships in the scene.
+ This chapter described the global visibility sampling algorithm which
+ forms the core of the visibility preprocessing framework. The global
+ visibility sampling computes aggressive visibility, i.e., it computes a
+ subset of the exact PVS for each view cell. The aggressive sampling
+ provides a fast progressive solution and thus can be easily integrated
+ into the game development cycle. The sampling itself involves several
+ strategies which aim to progressively discover more visibility
+ relationships in the scene.

The methods presented in this chapter give a good initial estimate
about visibility in the scene, which can be verified by the mutual
- visibility tool described in the next chapter.
+ visibility algorithms described in the next chapter.