Changeset 273


Timestamp:
09/15/05 18:31:13 (19 years ago)
Author:
bittner
Message:

sampling conflict resolved

Location:
trunk/VUT/doc/SciReport
Files:
5 edited

  • trunk/VUT/doc/SciReport/introduction.tex

    r266 r273  
    1 \chapter{Introduction to visibility problems and algorithms}%\chapter 
     1\chapter{Introduction}%\chapter 
    22 
    33\label{chap:overview} 
     
    66 This chapter provides an introduction to visibility problems and 
    77algorithms for computer graphics. In particular, it presents a taxonomy 
    8 of visibility problems based on the {\em problem domain} and the {\em 
    9 type of the answer}. The taxonomy helps to understand the nature of a 
    10 particular visibility problem and provides a tool for grouping 
    11 problems of similar complexity independently of their target 
    12 application. We discuss typical visibility problems encountered in 
    13 computer graphics including their relation to the presented 
    14 taxonomy. Finally, we classify the latert described visibility tools 
    15 according to the presented taxonomy. 
     8of visibility problems and algorithms. We discuss typical visibility 
     9problems encountered in computer graphics including their relation to 
     10the presented taxonomy. Finally, we classify the visibility tools 
     11described later according to the presented taxonomy. 
     12 
     13 
     14\section{Structure of the report} 
     15 
    1616 
    1717 
     
    2424 
    2525 
    26 \subsection{Problem domain} 
     26\section{Domain of visibility problems} 
    2727\label{sec:prob_domain} 
    2828 
  • trunk/VUT/doc/SciReport/mutual.tex

    r272 r273  
    4040 
    4141 
    42 \section{Exact Mutual Visibility Test} 
    43  
    44 The exact mutual visibility test computes exact visibility between two 
    45 polyhedrons in the scene. This is computed by testing visibility 
     42\section{Exact Verifier} 
     43 
     44The exact mutual visibility verifier computes exact visibility between 
     45two polyhedra in the scene. This is computed by testing visibility 
    4646between all pairs of potentially visible polygons of these 
    4747polyhedra. For each pair of tested polygons the computation is 
     
    4949used to determine the set of relevant occluders~\cite{Haines94}. 
    5050 
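The pairwise reduction described above can be sketched as follows. This is an illustrative sketch, not the report's implementation: `polygon_visible` stands in for the exact polygon-to-polygon test, and each polyhedron is represented simply as an iterable of its potentially visible polygons.

```python
from itertools import product

def polyhedra_mutually_visible(polygons_a, polygons_b, polygon_visible):
    """Exact mutual visibility reduced to polygon-pair tests.

    polygons_a, polygons_b: potentially visible polygons of the two
    polyhedra. polygon_visible: assumed callable performing the exact
    polygon-to-polygon test (e.g. occlusion-tree based).
    """
    for pa, pb in product(polygons_a, polygons_b):
        if polygon_visible(pa, pb):
            return True   # early exit: one mutually visible pair suffices
    return False
```

In practice the conservative occluder preselection mentioned above would prune most polygon pairs before the exact test is invoked.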
    51 \section{Occlusion tree} 
     51\subsection{Occlusion tree} 
    5252\label{sec:rot3d_ot} 
    5353 
     
    8989 
    9090 
    91 \section{Occlusion tree construction} 
     91\subsection{Occlusion tree construction} 
    9292 
    9393\label{sec:rot3d_construct} 
     
    109109 
    110110 
    111 \subsection{Insertion with splitting} 
     111\subsubsection{Insertion with splitting} 
    112112 
    113113\label{sec:rot3_insertwithsplit} 
     
    136136the new subtree. 
    137137 
    138 \subsection{Insertion without splitting} 
     138\subsubsection{Insertion without splitting} 
    139139\label{sec:insertion_nosplit} 
    140140 
     
    209209and combining the resulting visibility states. 
    210210 
    211 \subsection{Exact visibility test} 
    212  
    213  The exact visibility for a given polyhedral region proceeds as 
     211%\subsection{Exact visibility test} 
     212 
     213 The exact visibility test for a given polyhedral region proceeds as 
    214214follows: for each face of the region facing the given source polygon 
    215215we construct a blocker polyhedron. The blocker polyhedron is then 
     
    226226 
    227227 
    228 \subsection{Conservative visibility test} 
    229  
    230 \label{sec:rot3_conserva} 
    231  
    232  The conservative visibility test provides a fast estimation of 
    233 visibility of the given region since it does not require the 5D 
    234 polyhedra enumeration. Visibility of the given face of the region is 
    235 determined by a traversal of the occlusion tree and testing the 
    236 position of the corresponding blocker polyhedron with respect to the 
    237 encountered hyperplanes as described in 
    238 Section~\ref{sec:insertion_nosplit}.  If the blocker polyhedron 
    239 reaches only $in$-leaves and the face is behind the polygon(s) 
    240 associated with the leaves, the face is classified invisible 
    241 . Otherwise, it is conservatively classified as visible.  The 
    242 visibility of the whole region is determined using a combination of 
    243 visibility of its faces as mentioned in the previous section. 
    244  
    245 \section{Optimizations} 
     228% \subsection{Conservative visibility test} 
     229 
     230% \label{sec:rot3_conserva} 
     231 
     232% The conservative visibility test provides a fast estimation of 
     233% visibility of the given region since it does not require the 5D 
     234% polyhedra enumeration. Visibility of the given face of the region is 
     235% determined by a traversal of the occlusion tree and testing the 
     236% position of the corresponding blocker polyhedron with respect to the 
     237% encountered hyperplanes as described in 
     238% Section~\ref{sec:insertion_nosplit}.  If the blocker polyhedron 
     239% reaches only $in$-leaves and the face is behind the polygon(s) 
     240% associated with the leaves, the face is classified invisible 
     241% . Otherwise, it is conservatively classified as visible.  The 
     242% visibility of the whole region is determined using a combination of 
     243% visibility of its faces as mentioned in the previous section. 
     244 
     245\subsection{Optimizations} 
    246246 
    247247\label{sec:rot3d_optimizations} 
     
    278278 
    279279 
    280 \subsection{Visibility estimation} 
     280\subsubsection{Visibility estimation} 
    281281 
    282282 The visibility estimation aims to eliminate the polyhedron 
     
    312312 
    313313 
    314 \subsection{Visibility merging} 
     314\subsubsection{Visibility merging} 
    315315 
    316316 Visibility merging aims to propagate visibility classifications from 
  • trunk/VUT/doc/SciReport/online.tex

    r266 r273  
    724724\begin{figure} 
    725725\centering  
    726 \includegraphics[width=0.35\textwidth,draft=\DRAFTIMAGES]{images/chc_shadows} 
     726\includegraphics[width=0.4\textwidth,draft=\DRAFTIMAGES]{images/chc_shadows} 
    727727\caption{We can correctly handle shadow volumes together with CHC.} 
    728728\label{fig:chc_shadows} 
     
    810810 
    811811\begin{figure} 
    812 \centering \includegraphics[width=0.32\textwidth,draft=\DRAFTIMAGES]{images/city_vis} 
     812\centering \includegraphics[width=0.4\textwidth,draft=\DRAFTIMAGES]{images/city_vis} 
    813813\caption{Visibility classification of the kD-tree nodes in the city scene. 
    814814  The orange nodes were found visible, all the other depicted nodes are invisible. 
     
    10751075 
    10761076 
    1077 \section{Conclusion} 
     1077\section{Summary} 
    10781078\label{sec:conclusion} 
    10791079 
     
    11061106 
    11071107 
    1108   The major potential in improving the method is a better estimation 
    1109  of changes in the visibility classification of hierarchy nodes. If 
    1110  nodes tend to be mostly visible, we could automatically decrease the 
    1111  frequency of occlusion tests and thus better adapt the method to the 
    1112  actual occlusion in the scene. Another possibility for improvement is 
    1113  better tuning for a particular graphics hardware by means of more 
    1114  accurate rendering cost estimation. Skipping occlusion tests for 
    1115  simpler geometry  can be faster than issuing comparably expensive 
    1116  occlusion queries. 
     1108%   The major potential in improving the method is a better estimation 
     1109% of changes in the visibility classification of hierarchy nodes. If 
     1110% nodes tend to be mostly visible, we could automatically decrease the 
     1111% frequency of occlusion tests and thus better adapt the method to the 
     1112% actual occlusion in the scene. Another possibility for improvement is 
     1113% better tuning for a particular graphics hardware by means of more 
     1114% accurate rendering cost estimation. Skipping occlusion tests for 
     1115% simpler geometry  can be faster than issuing comparably expensive 
     1116% occlusion queries. 
    11171117 
    11181118 
    11191119\begin{figure*}[htb] 
    11201120  \centerline{ 
    1121      \hfill 
    1122      \includegraphics[height=0.2\textwidth,draft=\DRAFTFIGS]{images/teapots} 
    1123      \hfill 
    1124      \includegraphics[height=0.2\textwidth,draft=\DRAFTFIGS]{images/city} 
    1125      \hfill 
    1126      \includegraphics[height=0.2\textwidth,draft=\DRAFTFIGS]{images/pplant} 
    1127      \hfill 
    1128    } 
     1121     \includegraphics[width=0.4\textwidth,draft=\DRAFTFIGS]{images/teapots} 
     1122     } 
     1123  \centerline{ 
     1124     \includegraphics[width=0.4\textwidth,draft=\DRAFTFIGS]{images/city} 
     1125     } 
     1126  \centerline{ 
     1127     \includegraphics[width=0.4\textwidth,draft=\DRAFTFIGS]{images/pplant} 
     1128     } 
     1129    
    11291130   \caption{The test scenes: the teapots, the city, and the power plant.} 
    11301131   \label{fig:scenes} 
  • trunk/VUT/doc/SciReport/report.tex

    r250 r273  
    132132\begin{document} 
    133133 
     134\tableofcontents 
    134135 
    135136% --------------------------------------------------------------------- 
  • trunk/VUT/doc/SciReport/sampling.tex

    r272 r273  
    88\item The first step is an aggressive visibility sampling which gives 
    99an initial estimate of global visibility in the scene. The sampling 
    10 itself involves several strategies which will be described in 
    11 section~\ref{sec:sampling}. The imporant property of the aggresive 
    12 sampling step is that it provides a fast progressive solution to 
    13 global visibility and thus it can be easily integrated into the game 
    14 development cycle. 
     10itself involves several strategies which will be described below. 
     11The important property of the aggressive sampling step is that it 
     12provides a fast progressive solution to global visibility and thus it 
     13can be easily integrated into the game development cycle. The 
     14aggressive sampling terminates when the average contribution of new 
     15ray samples falls below a predefined threshold. 
    1516 
    1617\item The second step is mutual visibility verification. This step 
     
    2829\end{itemize} 
    2930 
    30  
    3131In traditional visibility preprocessing the view space is 
    3232subdivided into view cells and for each view cell the set of visible 
     
    4343 
    4444Both these points will be addressed in this chapter in more detail. 
    45  
    46  
    4745 
    4846\section{Related work} 
     
    241239strategies. 
    242240 
    243 \subsection{View Cell Representation} 
     241\subsection{View Space Partitioning} 
     242 
    244243 
    245244\begin{figure}[htb] 
     
    254253 
    255254 
    256 A good partition of the scene into view cells is an essential part of every visibility 
    257 system. If they are chosen too large, the PVS (Potentially Visibible Set)  
    258 of a view cells is too large for efficient culling. If they are chosen too small or 
    259 without consideration, then neighbouring view cells contain redundant PVS information, 
    260 hence boosting the PVS computation and storage costs for the scene. In the left image of figure~\ref{fig:vienna_viewcells} we see view cells of the Vienna model, generated by triangulation of the streets.  
    261 In the closeup (right image) we can see that each triangle is extruded to a given height to form a view cell prism. 
     255Before the visibility computation itself, we subdivide the space of 
     256all possible viewpoints and viewing directions into view cells. A good 
     257partition of the scene into view cells is an essential part of every 
     258visibility system. If the cells are chosen too large, the PVS (Potentially 
     259Visible Set) of a view cell is too large for efficient culling. If 
     260they are chosen too small or without consideration, then neighbouring 
     261view cells contain redundant PVS information, which boosts the PVS 
     262computation and storage costs for the scene. In the left image of 
     263figure~\ref{fig:vienna_viewcells} we see view cells of the Vienna 
     264model, generated by triangulation of the streets.  In the closeup 
     265(right image) we can see that each triangle is extruded to a given 
     266height to form a view cell prism. 
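The extrusion of a street triangle into a view cell prism can be sketched as follows (illustrative only; the vertex format and the fixed extrusion height are assumptions, not the report's code):

```python
def triangle_to_prism(triangle, height):
    """Extrude a ground-level triangle into a view cell prism.

    triangle: three (x, y) vertices at z = 0 (e.g. from the street
    triangulation); height: the assumed extrusion height.
    Returns the six (x, y, z) prism vertices, bottom face first.
    """
    bottom = [(x, y, 0.0) for (x, y) in triangle]
    top = [(x, y, float(height)) for (x, y) in triangle]
    return bottom + top
```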
    262267 
    263268In order to efficiently use view cells with our sampling method, we 
     
    294299  view cell insertion time & 0.016s & 7.984s \\\hline\hline 
    295300  \#nodes & 1137 & 597933 \\\hline\hline 
    296         \#interior nodes & 568 & 298966\\\hline\hline 
    297         \#leaf nodes  & 569 & 298967\\\hline\hline 
    298         \#splits & 25 & 188936\\\hline\hline 
    299         max tree depth & 13 & 27\\\hline\hline 
    300         avg tree depth & 9.747 & 21.11\\\hline\hlien 
    301          
     301  \#interior nodes & 568 & 298966\\\hline\hline 
     302  \#leaf nodes  & 569 & 298967\\\hline\hline 
     303  \#splits & 25 & 188936\\\hline\hline 
     304  max tree depth & 13 & 27\\\hline\hline 
      305  avg tree depth & 9.747 & 21.11\\\hline\hline 
     306   
    302307 \end{tabular} 
    303308 \caption{Statistics for view cell BSP tree on the Vienna view cells and a selection of the Vienna view cells.}\label{tab:viewcell_bsp} 
     
    351356\end{itemize} 
    352357 
     358In the future we aim to extend the view cell construction by using 
     359feedback from the PVS computation: the view cells which contain many 
     360visibility changes will be subdivided further and neighboring view 
     361cells with similar PVSs will be merged. We want to gain more precise 
     362information about visibility by selectively storing rays with the 
     363view cells and computing visibility statistics for subsets of rays 
     364which intersect subregions of the given view cell. 
     365 
     366 
    353367\subsection{From-object based visibility} 
    354368 
     
    380394 
    381395 
    382 \subsection{Basic Randomized Sampling} 
    383  
    384  
    385 The first phase of the sampling works as follows: At every pass of the 
    386 algorithm visits scene objects sequentially. For every scene object we 
    387 randomly choose a point on its surface. Then a ray is cast from the 
    388 selected point according to the randomly chosen direction. We use a 
    389 uniform distribution of the ray directions with respect to the 
     396\subsection{Naive Randomized Sampling} 
     397 
     398The naive global visibility sampling works as follows: at every pass 
     399the algorithm visits scene objects sequentially. For every scene 
     400object we randomly choose a point on its surface. Then a ray is cast 
     401from the selected point in a randomly chosen direction. We 
     402use a uniform distribution of the ray directions with respect to the 
    390403halfspace given by the surface normal. Using this strategy the samples 
    391404are deterministically placed at every object, with a randomization of 
     
    394407information. 
    395408 
    396  
    397  The described algorithm accounts for the irregular distribution of the 
     409\begin{figure}%[htb] 
     410  \includegraphics[width=0.6\textwidth, draft=\DRAFTFIGS]{figs/sampling} \\ 
     411%\label{tab:online_culling_example} 
     412  \caption{Three objects and a set of view cells corresponding to leaves 
     413    of an axis aligned BSP tree. The figure depicts several random 
     414    samples cast from a selected object (shown in red). Note that most 
     415    samples contribute to multiple view cells.} 
     416  \label{fig:online_culling_example} 
     417\end{figure} 
     418 
     419The described algorithm accounts for the irregular distribution of the 
    398420objects: more samples are placed at locations containing more 
    399421objects. Additionally every object is sampled many times depending on 
     
    402424of the view cells from which it is visible. 
    403425 
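The pass described above can be sketched as follows. This is a hedged illustration: `random_surface_point`, `cast_ray`, and the `pvs` attribute are assumed interfaces, not the tool's actual code; only the uniform distribution of directions over the normal's halfspace follows the text.

```python
import math
import random

def uniform_hemisphere_direction(normal):
    """Uniform random direction in the halfspace given by the surface normal."""
    while True:
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d))
        if length < 1e-9:
            continue                          # degenerate draw, retry
        d = [c / length for c in d]
        # flip the direction into the normal's halfspace
        if sum(c * n for c, n in zip(d, normal)) < 0.0:
            d = [-c for c in d]
        return d

def sampling_pass(objects, cast_ray):
    """One pass: visit objects sequentially, cast one random ray each."""
    for obj in objects:
        point, normal = obj.random_surface_point()
        direction = uniform_hemisphere_direction(normal)
        for cell in cast_ray(point, direction):
            cell.pvs.add(obj)                 # obj is visible from these cells
```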
     426Each ray sample can contribute by associating a number of view cells 
     427with the object from which the sample was cast. If the ray does not 
     428leave the scene, it also contributes by associating the pierced view 
     429cells with the terminating object. Thus, as the ray samples are cast, we 
     430can measure the average contribution of a certain number of the most 
     431recent samples. If this contribution falls below a predefined 
     432constant, we move on to the next sampling strategy, which aims to 
     433discover more complicated visibility relations. 
     434  
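The termination criterion, measuring the average contribution of the most recent samples, might look like this sketch (the window size and threshold are illustrative parameters, not values from the report):

```python
from collections import deque

class ContributionMonitor:
    """Moving-average contribution of the most recent ray samples."""

    def __init__(self, window=1000, threshold=0.1):
        self.samples = deque(maxlen=window)   # most recent contributions
        self.threshold = threshold

    def record(self, contribution):
        """contribution: view cells newly associated by one ray sample."""
        self.samples.append(contribution)

    def should_switch_strategy(self):
        """True once the window is full and the average falls below threshold."""
        if len(self.samples) < self.samples.maxlen:
            return False                      # not enough samples yet
        return sum(self.samples) / len(self.samples) < self.threshold
```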
    404435 
    405436\subsection{Accounting for View Cell Distribution} 
     
    421452 
    422453 Visibility events correspond to appearance and disapearance of 
    423  objects with respect to a moving view point. In polygonal scenes the 
     454objects with respect to a moving view point. In polygonal scenes the 
    424455events are defined by event surfaces formed by three distinct scene 
    425456edges. Depending on the edge configuration we distinguish between 
     
    431462objects. 
    432463 
    433  
     464The first strategy starts similarly to the sampling methods described 
     465above: we randomly select an object and a point on its surface. Then 
     466we randomly pick an object from its PVS. If we have mesh 
     467connectivity information, we select a random silhouette edge from this 
     468object and cast a sample which is tangent to that object at the 
     469selected edge. 
     470 
     471The second strategy works as follows: we randomly pick two objects 
     472which are likely to see each other. Then we determine a ray which is 
     473tangent to both objects. For simple meshes the determination of such 
     474rays can be computed geometrically, for more complicated ones it is 
     475based again on random sampling. The selection of the two objects works 
     476as follows: first we randomly select the first object and a random 
     477non-empty view cell for which we know that it can see the object. Then 
     478we randomly select an object associated with that view cell as the 
     479second object. 
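The object-pair selection of the second strategy can be sketched as follows (a hypothetical illustration; the mappings `object_to_cells` and `cell_to_objects` are assumed bookkeeping built up during sampling, not the report's data structures):

```python
import random

def pick_mutually_likely_pair(objects, object_to_cells, cell_to_objects):
    """Pick two objects which are likely to see each other."""
    first = random.choice(objects)
    # non-empty view cells known to see the first object
    cells = [c for c in object_to_cells[first] if cell_to_objects[c]]
    if not cells:
        return None               # nothing is known to see `first` yet
    cell = random.choice(cells)
    second = random.choice(cell_to_objects[cell])
    return first, second
```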
     480 
     481 
     482  
     483\section{Summary} 
     484 
     485This chapter described a global visibility sampling tool which forms 
     486the core of the visibility preprocessing framework. The global visibility 
     487sampling computes aggressive visibility, i.e., it computes a subset of 
     488the exact PVS for each view cell. The aggressive sampling provides a 
     489fast progressive solution and thus can be easily integrated into 
     490the game development cycle. The sampling itself involves several 
     491strategies which aim to progressively discover more visibility 
     492relationships in the scene. 
     493 
     494The methods presented in this chapter give a good initial estimate 
     495about visibility in the scene, which can be verified by the mutual 
     496visibility tool described in the next chapter. 
     497 
     498 
     499 
     500 