Timestamp:
09/13/05 18:44:54
Author:
bittner
Message:

structural changes

File:
1 edited

  • trunk/VUT/doc/SciReport/sampling.tex

--- trunk/VUT/doc/SciReport/sampling.tex (r255)
+++ trunk/VUT/doc/SciReport/sampling.tex (r266)
@@ -1,6 +1,3 @@
 \chapter{Global Visibility Sampling Tool}
-
-
-\section{Introduction}
 
 
     
@@ -21,11 +18,11 @@
 conservative or error bound aggressive solution. The choice of the
 particular verifier is left to the user in order to select the best
-for a particular scene, application context and time constrains. For
-example, in scenes like a forest an error bound aggresive visibility
-can be the best compromise between the resulting size of the PVS (and
-framerate) and the visual quality. The exact or conservative algorithm
-can however be chosen for urban scenes where of even small objects can
-be more distructing for the user. The mutual visibility tool will be
-described in the next chapter.
+one for a particular scene, application context and time
+constraints. For example, in scenes like a forest an error bound
+aggressive visibility can be the best compromise between the resulting
+size of the PVS (and framerate) and the visual quality. The exact or
+conservative algorithm can however be chosen for urban scenes, where
+omission of even small objects can be more distracting to the
+user. The mutual visibility tool will be described in the next chapter.
 
 \end{itemize}
     
@@ -35,5 +32,5 @@
 subdivided into viewcells and for each view cell the set of visible
 objects --- the potentially visible set (PVS) --- is computed. This framework
-has bee used for conservative, aggresive and exact algorithms.
+has been used for conservative, aggressive and exact algorithms.
 
 We propose a different strategy which has several advantages for
     
@@ -41,6 +38,6 @@
 based on the following fundamental ideas:
 \begin{itemize}
-\item Replace the roles of view cells and objects
 \item Compute progressive global visibility instead of sequential from-region visibility
+\item Replace the roles of view cells and objects for some parts of the computation
 \end{itemize}
 
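To make the idea of progressive global sampling concrete, a minimal C++ sketch of per-view-cell PVS accumulation follows; every type and function name here is an illustrative assumption, not code from the project:

{{{
#!cpp
#include <set>
#include <vector>

struct Object;                       // scene object (mesh, etc.)

struct ViewCell {
    std::set<Object*> pvs;           // potentially visible set of this cell
};

// One visibility sample: a ray that left 'source', pierced the view
// cells in 'cells' and was finally blocked by 'hit' (may be null).
struct Sample {
    Object*                source;
    Object*                hit;
    std::vector<ViewCell*> cells;
};

// Progressive global sampling: every cast ray contributes to *all*
// view cells it passes through, instead of answering only a single
// sequential from-region query.
inline void addContribution(const Sample& s) {
    for (ViewCell* vc : s.cells) {
        vc->pvs.insert(s.source);    // source is visible from vc
        if (s.hit) vc->pvs.insert(s.hit);
    }
}
}}}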
     
@@ -52,6 +49,5 @@
 \label{VFR3D_RELATED_WORK}
 
-
- Below we briefly discuss the related work on visibility preprocessing
+Below we briefly discuss the related work on visibility preprocessing
 in several application areas. In particular we focus on computing
 from-region visibility, which has been at the core of most previous visibility
     
@@ -63,5 +59,5 @@
 The first algorithms dealing with from-region visibility belong to the
 area of computer vision. The {\em aspect
-graph}~\cite{Gigus90,Plantinga:1990:RTH, Sojka:1995:AGT} partitions
+  graph}~\cite{Gigus90,Plantinga:1990:RTH, Sojka:1995:AGT} partitions
 the view space into cells that group viewpoints from which the
 projection of the scene is qualitatively equivalent. The aspect graph
@@ -237,5 +233,11 @@
 attenuation, reflection and time delays.
 
-\section{Algorithm Setup}
+\section{Algorithm Description}
+
+This section first describes the setup of the global visibility
+sampling algorithm. In particular we describe the view cell
+representation and the novel concept of from-object based
+visibility. Then we outline the different visibility sampling
+strategies.
 
 \subsection{View Cell Representation}
     
@@ -247,5 +249,5 @@
 \item optimized for viewcell--ray intersection.
 \item flexible, i.e., it can represent arbitrary geometry.
-\item naturally suited for an hierarchical approach. %(i.e., there is a root view cell containing all others)
+\item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
 \end{itemize}
 
     
@@ -257,7 +259,9 @@
 subdivide a BSP leaf view cell quite easily.
 
-Currently we use two approaches to generate the initial BSP view cell tree.
+Currently we use two approaches to generate the initial BSP view cell
+tree.
 
 \begin{itemize}
+
 \item We use a number of dedicated input view cells. Any closed
 mesh can be used as an input view cell. The only requirement is that the
     
@@ -270,7 +274,9 @@
 (i.e., add a pointer to the view cell). Hence a number of leaves can
 be associated with the same input view cell.
+
 \item We apply the BSP tree subdivision to the scene geometry. When
 the subdivision terminates, the leaf nodes also represent the view
 cells.
+
 \end{itemize}
 
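A BSP view cell tree along these lines might be represented as follows; this is a sketch under the assumptions above (all names are hypothetical), reflecting that several leaves may share one input view cell:

{{{
#!cpp
#include <memory>

struct Plane;                        // splitting plane: normal + offset
struct ViewCell;                     // as sketched earlier

// Interior nodes split space; leaves reference a view cell. After
// filtering a closed input mesh down the tree, multiple leaves can
// point to the same input view cell.
struct BspNode {
    std::unique_ptr<Plane>   plane;               // set for interior nodes
    std::unique_ptr<BspNode> front, back;
    ViewCell*                viewCell = nullptr;  // non-null in leaves
    bool isLeaf() const { return !front && !back; }
};
}}}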
@@ -277,5 +283,5 @@
 \subsection{From-object based visibility}
 
-Our framework is based on the idea of sampling visibility by casting
+ Our framework is based on the idea of sampling visibility by casting
 rays through the scene and collecting their contributions. A
 visibility sample is computed by casting a ray from an object towards
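The from-object sampling step could look roughly like this; `randomSurfacePoint`, `surfaceNormal`, `randomHemisphereDir` and `castRay` are hypothetical helpers, and `Sample` is the illustrative struct sketched earlier:

{{{
#!cpp
#include <random>
#include <vector>

struct Point  { double x, y, z; };
struct Vector { double x, y, z; };

// Hypothetical helpers, not from the report:
Point   randomSurfacePoint(const Object& o, std::mt19937& rng);
Vector  surfaceNormal(const Object& o, const Point& p);
Vector  randomHemisphereDir(const Vector& normal, std::mt19937& rng);
Object* castRay(const Point& from, const Vector& dir,
                std::vector<ViewCell*>* piercedCells);

// One from-object visibility sample: a ray leaves a random point on
// the object's surface in a random direction above the surface; the
// pierced view cells and the blocking object receive its contribution.
Sample castSample(Object* obj, std::mt19937& rng) {
    Sample s;
    s.source = obj;
    Point  p = randomSurfacePoint(*obj, rng);
    Vector d = randomHemisphereDir(surfaceNormal(*obj, p), rng);
    s.hit    = castRay(p, d, &s.cells);
    return s;
}
}}}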
     
@@ -304,5 +310,5 @@
 
 
-\section{Basic Randomized Sampling}
+\subsection{Basic Randomized Sampling}
 
 
     
@@ -319,5 +325,5 @@
 
 
-The described algorithm accounts for the irregular distribution of the
+ The described algorithm accounts for the irregular distribution of the
 objects: more samples are placed at locations containing more
 objects. Additionally every object is sampled many times depending on
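A minimal sketch of the basic randomized loop, assuming an area-weighted object pick and reusing the `castSample`/`addContribution` sketches above (`surfaceArea()` is an assumed accessor on `Object`):

{{{
#!cpp
#include <random>
#include <vector>

// Basic randomized sampling: picking objects with probability
// proportional to surface area places more samples where more
// geometry is, and still samples every object repeatedly over time.
void basicRandomizedSampling(std::vector<Object*>& objects, int numSamples) {
    std::mt19937 rng{std::random_device{}()};

    std::vector<double> areas;
    for (const Object* o : objects)
        areas.push_back(o->surfaceArea());           // assumed accessor
    std::discrete_distribution<std::size_t> byArea(areas.begin(), areas.end());

    for (int i = 0; i < numSamples; ++i)
        addContribution(castSample(objects[byArea(rng)], rng));
}
}}}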
     
@@ -327,14 +333,14 @@
 
 
-\section{Accounting for View Cell Distribution}
-
-The first modification to the basic algorithm accounts for irregular
-distribution of the viewcells. Such a case in common for example in
+\subsection{Accounting for View Cell Distribution}
+
+ The first modification to the basic algorithm accounts for the irregular
+distribution of the viewcells. Such a case is common for example in
 urban scenes where the viewcells are mostly distributed in a
 horizontal direction and more viewcells are placed at denser parts of
 the city. The modification involves replacing the uniformly
 distributed ray direction by directions distributed according to the
-local view cell directional density. It means placing more samples at
-directions where more view cells are located. We select a random
+local view cell directional density. This means placing more samples in
+directions where more view cells are located: we select a random
 viewcell which lies in the halfspace given by the surface normal at the
 chosen point. We pick a random point inside the view cell and cast a
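The modified direction choice might be sketched as follows; `pickCellInHalfspace` and `randomPointInside` are hypothetical helpers standing in for the view cell directional density machinery:

{{{
#!cpp
// Hypothetical helpers:
ViewCell* pickCellInHalfspace(const Point& p, const Vector& normal,
                              const std::vector<ViewCell*>& cells,
                              std::mt19937& rng);
Point     randomPointInside(const ViewCell& vc, std::mt19937& rng);
Vector    normalize(const Vector& v);
Vector    operator-(const Point& a, const Point& b);

// Direction choice driven by view cell density: aim at a random point
// inside a random view cell lying in the halfspace of the surface
// normal, so more rays go where more view cells are.
Vector viewCellDrivenDir(const Point& p, const Vector& normal,
                         const std::vector<ViewCell*>& cells,
                         std::mt19937& rng) {
    ViewCell* vc  = pickCellInHalfspace(p, normal, cells, rng);
    Point target  = randomPointInside(*vc, rng);
    return normalize(target - p);
}
}}}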
     
@@ -342,16 +348,16 @@
 
 
-\section{Accounting for Visibility Events}
+\subsection{Accounting for Visibility Events}
 
 Visibility events correspond to the appearance and disappearance of
 objects with respect to a moving view point. In polygonal scenes the
- events defined by event surfaces defined by three distinct scene
- edges. Depending on the edge configuration we distinguish between
- vertex-edge events (VE) and tripple edge (EEE) events. The VE surfaces
- are planar planes whereas the EEE are in general quadratic surfaces.
+events are defined by event surfaces formed by three distinct scene
+edges. Depending on the edge configuration we distinguish between
+vertex-edge (VE) events and triple-edge (EEE) events. The VE surfaces
+are planes, whereas the EEE surfaces are in general quadric surfaces.
 
 To account for these events we explicitly place samples passing by the
- object edges which are directed to edges and/or vertices of other
- objects.
-
-
+object edges which are directed to edges and/or vertices of other
+objects.
+
+
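A sketch of this event-driven sample placement, assuming hypothetical mesh accessors; rays start on an edge of one object and are aimed at vertices and edge points of another, as the passage describes:

{{{
#!cpp
struct Edge { Point a, b; };

// Hypothetical helpers:
std::vector<Edge>  edgesOf(const Object& o);
std::vector<Point> verticesOf(const Object& o);
Point              randomPointOnEdge(const Edge& e, std::mt19937& rng);
Sample             castSampleThrough(Object* src, const Point& from,
                                     const Point& towards);

// Event-aware sampling: rays grazing an edge of one object are aimed
// at vertices (VE events) and edge points (EEE events) of another,
// concentrating samples near visibility event surfaces.
void castEventSamples(Object* a, Object* b, std::mt19937& rng) {
    for (const Edge& e : edgesOf(*a)) {
        Point from = randomPointOnEdge(e, rng);
        for (const Point& v : verticesOf(*b))
            addContribution(castSampleThrough(a, from, v));
        for (const Edge& e2 : edgesOf(*b))
            addContribution(castSampleThrough(a, from,
                                              randomPointOnEdge(e2, rng)));
    }
}
}}}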