Timestamp:
09/15/05 18:31:13 (19 years ago)
Author:
bittner
Message:

sampling conflict resolved

File:
1 edited

  • trunk/VUT/doc/SciReport/sampling.tex

    r272 r273  
    88\item The first step is an aggressive visibility sampling which gives 
    99an initial estimate of global visibility in the scene. The sampling 
    10 itself involves several strategies which will be described in 
    11 section~\ref{sec:sampling}. The important property of the aggressive 
    12 sampling step is that it provides a fast progressive solution to 
    13 global visibility and thus it can be easily integrated into the game 
    14 development cycle. 
     10itself involves several strategies which will be described below. 
     11The important property of the aggressive sampling step is that it 
     12provides a fast progressive solution to global visibility and thus it 
     13can be easily integrated into the game development cycle. The 
     14aggressive sampling terminates when the average contribution of new 
     15ray samples falls below a predefined threshold. 
    1516 
    1617\item The second step is mutual visibility verification. This step 
     
    2829\end{itemize} 
    2930 
    30  
    3131In traditional visibility preprocessing the view space is 
    3232subdivided into view cells and for each view cell the set of visible 
     
    4343 
    4444Both these points will be addressed in this chapter in more detail. 
    45  
    46  
    4745 
    4846\section{Related work} 
     
    241239strategies. 
    242240 
    243 \subsection{View Cell Representation} 
     241\subsection{View Space Partitioning} 
     242 
    244243 
    245244\begin{figure}[htb] 
     
    254253 
    255254 
    256 A good partition of the scene into view cells is an essential part of every visibility 
    257 system. If they are chosen too large, the PVS (Potentially Visible Set)  
    258 of a view cell is too large for efficient culling. If they are chosen too small or 
    259 without consideration, then neighbouring view cells contain redundant PVS information, 
    260 hence boosting the PVS computation and storage costs for the scene. In the left image of figure~\ref{fig:vienna_viewcells} we see view cells of the Vienna model, generated by triangulation of the streets.  
    261 In the closeup (right image) we can see that each triangle is extruded to a given height to form a view cell prism. 
     255Before the visibility computation itself, we subdivide the space of 
     256all possible viewpoints and viewing directions into view cells. A good 
     257partition of the scene into view cells is an essential part of every 
     258visibility system. If they are chosen too large, the PVS (Potentially 
     259Visible Set) of a view cell is too large for efficient culling. If 
     260they are chosen too small or without consideration, then neighbouring 
     261view cells contain redundant PVS information, hence boosting the PVS 
     262computation and storage costs for the scene. In the left image of 
     263figure~\ref{fig:vienna_viewcells} we see view cells of the Vienna 
     264model, generated by triangulation of the streets. In the closeup 
     265(right image) we can see that each triangle is extruded to a given 
     266height to form a view cell prism. 
    262267 
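The triangle-to-prism extrusion described above can be sketched as follows. This is an illustrative Python fragment; the function name and the vertex-tuple representation are assumptions, not code from the report:

```python
# Hypothetical sketch: extruding a 2D street triangle into a view cell
# prism, as done for the Vienna view cells. Representation is assumed.

def extrude_triangle(triangle_2d, base_z, height):
    """Turn a 2D triangle into a prism: 3 bottom + 3 top vertices."""
    bottom = [(x, y, base_z) for (x, y) in triangle_2d]
    top = [(x, y, base_z + height) for (x, y) in triangle_2d]
    return bottom + top

prism = extrude_triangle([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                         base_z=0.0, height=2.0)
```

Each prism then serves as one view cell for the sampling described later in the section.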
    263268In order to efficiently use view cells with our sampling method, we 
     
    294299  view cell insertion time & 0.016s & 7.984s \\\hline\hline 
    295300  \#nodes & 1137 & 597933 \\\hline\hline 
    296         \#interior nodes & 568 & 298966\\\hline\hline 
    297         \#leaf nodes  & 569 & 298967\\\hline\hline 
    298         \#splits & 25 & 188936\\\hline\hline 
    299         max tree depth & 13 & 27\\\hline\hline 
    300         avg tree depth & 9.747 & 21.11\\\hline\hline 
    301          
     301  \#interior nodes & 568 & 298966\\\hline\hline 
     302  \#leaf nodes  & 569 & 298967\\\hline\hline 
     303  \#splits & 25 & 188936\\\hline\hline 
     304  max tree depth & 13 & 27\\\hline\hline 
     305  avg tree depth & 9.747 & 21.11\\\hline\hline 
     306   
    302307 \end{tabular} 
    303308 \caption{Statistics for view cell BSP tree on the Vienna view cells and a selection of the Vienna view cells.}\label{tab:viewcell_bsp} 
     
    351356\end{itemize} 
    352357 
     358In the future we aim to extend the view cell construction by using 
     359feedback from the PVS computation: the view cells which contain many 
     360visibility changes will be subdivided further and neighboring view 
     361cells with similar PVSs will be merged. We want to gain more precise 
     362information about visibility by selectively storing rays with the 
     363view cells and computing visibility statistics for subsets of rays 
     364which intersect subregions of the given view cell. 
     365 
     366 
    353367\subsection{From-object based visibility} 
    354368 
     
    380394 
    381395 
    382 \subsection{Basic Randomized Sampling} 
    383  
    384  
    385 The first phase of the sampling works as follows: in every pass the 
    386 algorithm visits scene objects sequentially. For every scene object we 
    387 randomly choose a point on its surface. Then a ray is cast from the 
    388 selected point in a randomly chosen direction. We use a 
    389 uniform distribution of the ray directions with respect to the 
     396\subsection{Naive Randomized Sampling} 
     397 
     398The naive global visibility sampling works as follows: in every 
     399pass the algorithm visits scene objects sequentially. For every scene 
     400object we randomly choose a point on its surface. Then a ray is cast 
     401from the selected point in a randomly chosen direction. We 
     402use a uniform distribution of the ray directions with respect to the 
    390403halfspace given by the surface normal. Using this strategy the samples 
    391404are deterministically placed at every object, with a randomization of 
     
    394407information. 
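Sampling a ray direction uniformly over the halfspace given by the surface normal can be sketched as follows. This is illustrative Python, not the framework's implementation; it uses rejection sampling in the unit ball, one of several standard ways to get a uniform direction:

```python
import math
import random

def random_hemisphere_direction(normal, rng=random):
    """Uniformly sample a unit direction in the halfspace of `normal`."""
    while True:
        # Rejection-sample a point in the unit ball, then normalize
        # to get a uniform direction on the sphere.
        d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        n2 = d[0] * d[0] + d[1] * d[1] + d[2] * d[2]
        if 0.0 < n2 <= 1.0:
            inv = 1.0 / math.sqrt(n2)
            d = (d[0] * inv, d[1] * inv, d[2] * inv)
            break
    # Flip into the hemisphere around the surface normal.
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = (-d[0], -d[1], -d[2])
    return d
```

The ray cast from the randomly chosen surface point then uses such a direction, so every object is sampled with randomized outgoing rays.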
    395408 
    396  
    397  The described algorithm accounts for the irregular distribution of the 
     409\begin{figure}%[htb] 
     410  \includegraphics[width=0.6\textwidth, draft=\DRAFTFIGS]{figs/sampling} \\ 
     411%\label{tab:online_culling_example} 
     412  \caption{Three objects and a set of view cells corresponding to leaves 
     413    of an axis aligned BSP tree. The figure depicts several random 
     414    samples cast from a selected object (shown in red). Note that most 
     415    samples contribute to multiple view cells.  } 
     416  \label{fig:online_culling_example} 
     417\end{figure} 
     418 
     419The described algorithm accounts for the irregular distribution of the 
    398420objects: more samples are placed at locations containing more 
    399421objects. Additionally every object is sampled many times depending on 
     
    402424of the view cells from which it is visible. 
    403425 
     426Each ray sample can contribute by associating a number of view cells 
     427with the object from which the sample was cast. If the ray does not 
     428leave the scene it also contributes by associating the pierced view 
     429cells with the terminating object. Thus as the ray samples are cast we 
     430can measure the average contribution of a certain number of the most 
     431recent samples. If this contribution falls below a predefined 
     432constant we move on to the next sampling strategy, which aims to 
     433discover more complicated visibility relations. 
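The termination test (average contribution of the most recent samples compared against a predefined constant) might look like the sketch below; the window size and threshold are assumed parameters, not values from the report:

```python
from collections import deque

def should_switch_strategy(contributions, window=1000, threshold=0.5):
    """True once the average contribution of the last `window` ray
    samples falls below `threshold` (both values are assumptions)."""
    recent = deque(contributions, maxlen=window)
    if len(recent) < window:
        return False  # not enough samples yet to judge
    return sum(recent) / window < threshold
```

Here `contributions` would hold, per cast ray, the number of new object/view-cell associations it produced.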
     434  
    404435 
    405436\subsection{Accounting for View Cell Distribution} 
     
    421452 
    422453 Visibility events correspond to appearance and disappearance of 
    423  objects with respect to a moving view point. In polygonal scenes the 
     454objects with respect to a moving view point. In polygonal scenes the 
    424455events are defined by event surfaces formed by three distinct scene 
    425456edges. Depending on the edge configuration we distinguish between 
     
    431462objects. 
    432463 
    433  
     464The first strategy starts similarly to the above described sampling 
     465methods: we randomly select an object and a point on its surface. Then 
     466we randomly pick an object from its PVS. If we have mesh 
     467connectivity information we select a random silhouette edge from this 
     468object and cast a sample which is tangent to that object at the 
     469selected edge. 
     470 
     471The second strategy works as follows: we randomly pick two objects 
     472which are likely to see each other. Then we determine a ray which is 
     473tangent to both objects. For simple meshes the determination of such 
     474rays can be computed geometrically, for more complicated ones it is 
     475based again on random sampling. The selection of the two objects works 
     476as follows: first we randomly select the first object and a random 
     477non-empty view cell for which we know that it can see the object. Then 
     478we randomly select an object associated with that view cell as the 
     479second object. 
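The object-pair selection for the second strategy can be sketched as follows. This is an illustrative fragment; the dictionary representation of the PVS data is an assumption:

```python
import random

def pick_object_pair(objects, pvs_viewcells, viewcell_objects, rng=random):
    """Pick two objects likely to see each other, via a shared view cell.

    pvs_viewcells[obj]   -> view cells known to see `obj`
    viewcell_objects[vc] -> objects associated with view cell `vc`
    (hypothetical data layout, for illustration only)
    """
    first = rng.choice(objects)
    # Non-empty view cells from which `first` is known to be visible.
    cells = [vc for vc in pvs_viewcells[first] if viewcell_objects[vc]]
    vc = rng.choice(cells)
    second = rng.choice(viewcell_objects[vc])
    return first, second
```

A tangent ray between the two returned objects would then be determined geometrically for simple meshes, or by further random sampling otherwise, as described above.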
     480 
     481 
     482  
     483\section{Summary} 
     484 
     485This chapter described a global visibility sampling tool which forms 
     486the core of the visibility preprocessing framework. The global 
     487visibility sampling computes aggressive visibility, i.e. it computes a 
     488subset of the exact PVS for each view cell. The aggressive sampling 
     489provides a fast progressive solution and thus it can be easily 
     490integrated into the game development cycle. The sampling itself 
     491involves several strategies which aim to progressively discover more 
     492visibility relationships in the scene. 
     493 
     494The methods presented in this chapter give a good initial estimate 
     495about visibility in the scene, which can be verified by the mutual 
     496visibility tool described in the next chapter. 
     497 
     498 
     499 
     500 