Changeset 266 for trunk/VUT/doc/SciReport/sampling.tex
Timestamp: 09/13/05 18:44:54
File: trunk/VUT/doc/SciReport/sampling.tex (1 edited)
r255 → r266:

\chapter{Global Visibility Sampling Tool}

…

conservative or error bound aggressive solution. The choice of the
particular verifier is left to the user in order to select the best
one for a particular scene, application context and time
constraints. For example, in scenes like a forest an error bound
aggressive visibility can be the best compromise between the resulting
size of the PVS (and framerate) and the visual quality. The exact or
conservative algorithm can however be chosen for urban scenes where
omission of even small objects can be more distracting for the
user. The mutual visibility tool will be described in the next
chapter.

\end{itemize}

…

subdivided into viewcells and for each view cell the set of visible
objects --- the potentially visible set (PVS) --- is computed. This
framework has been used for conservative, aggressive and exact
algorithms.
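The bookkeeping this framework implies (one PVS per view cell, filled in as samples arrive) can be sketched with a minimal, hypothetical data structure; the class and identifiers below are illustrative and not part of the tool:

```python
from collections import defaultdict

class PVSTable:
    """Minimal sketch of per-view-cell PVS storage: each view cell id
    maps to the set of object ids found visible from it."""

    def __init__(self):
        self.pvs = defaultdict(set)  # view cell id -> set of visible object ids

    def add_sample(self, viewcell_id, object_id):
        """Record that object_id was seen from viewcell_id.
        Duplicate samples collapse, since the PVS is a set."""
        self.pvs[viewcell_id].add(object_id)

    def pvs_size(self, viewcell_id):
        return len(self.pvs[viewcell_id])

table = PVSTable()
table.add_sample("cell_0", "tree_12")
table.add_sample("cell_0", "tree_12")  # repeated sighting, no growth
table.add_sample("cell_0", "house_3")
print(table.pvs_size("cell_0"))  # -> 2
```

A conservative, aggressive, or exact algorithm would differ only in which samples are allowed into this table, not in the table itself.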
We propose a different strategy which has several advantages for
… based on the following fundamental ideas:

\begin{itemize}
\item Compute progressive global visibility instead of sequential
from-region visibility
\item Replace the roles of view cells and objects for some parts of
the computation
\end{itemize}

…

\label{VFR3D_RELATED_WORK}

Below we briefly discuss the related work on visibility preprocessing
in several application areas. In particular we focus on computing
from-region visibility, which has been the core of most previous
visibility …

The first algorithms dealing with from-region visibility belong to
the area of computer vision. The {\em aspect
graph}~\cite{Gigus90,Plantinga:1990:RTH,Sojka:1995:AGT} partitions
the view space into cells that group viewpoints from which the
projection of the scene is qualitatively equivalent. The aspect graph
…

attenuation, reflection and time delays.

\section{Algorithm Description}

This section first describes the setup of the global visibility
sampling algorithm. In particular we describe the view cell
representation and the novel concept of from-object based
visibility. Then we outline the different visibility sampling
strategies.

\subsection{View Cell Representation}

…

\item optimized for viewcell--ray intersection.
\item flexible, i.e., it can represent arbitrary geometry.
\item naturally suited for a hierarchical approach.
%(i.e., there is a root view cell containing all others)
\end{itemize}

…

subdivide a BSP leaf view cell quite easily.

Currently we use two approaches to generate the initial BSP view cell
tree.

\begin{itemize}

\item We use a number of dedicated input view cells. As input view
cell any closed mesh can be applied. The only requirement is that the
… (i.e., add a pointer to the view cell). Hence a number of leaves
can be associated with the same input view cell.

\item We apply the BSP tree subdivision to the scene geometry. When
the subdivision terminates, the leaf nodes also represent the view
cells.

\end{itemize}

\subsection{From-object based visibility}

Our framework is based on the idea of sampling visibility by casting
rays through the scene and collecting their contributions. A
visibility sample is computed by casting a ray from an object towards
…

\subsection{Basic Randomized Sampling}

…

The described algorithm accounts for the irregular distribution of
the objects: more samples are placed at locations containing more
objects. Additionally every object is sampled many times depending on
…
\subsection{Accounting for View Cell Distribution}

The first modification to the basic algorithm accounts for the
irregular distribution of the viewcells. Such a case is common for
example in urban scenes where the viewcells are mostly distributed in
a horizontal direction and more viewcells are placed at denser parts
of the city. The modification involves replacing the uniformly
distributed ray direction by directions distributed according to the
local view cell directional density. This means placing more samples
at directions where more view cells are located: we select a random
viewcell which lies in the halfspace given by the surface normal at
the chosen point. We pick a random point inside the view cell and
cast a …

\subsection{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving view point. In polygonal scenes the
events are defined by event surfaces given by three distinct scene
edges. Depending on the edge configuration we distinguish between
vertex-edge (VE) events and triple edge (EEE) events. The VE surfaces
are planar whereas the EEE surfaces are in general quadratic.

To account for these events we explicitly place samples passing by
the object edges which are directed to edges and/or vertices of other
objects.
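The view-cell-weighted direction sampling described above (pick a random view cell in the halfspace of the surface normal, pick a random point inside it, aim the ray there) can be sketched as follows. View cells are simplified to axis-aligned boxes as a stand-in for the BSP view cells; all names are illustrative, not the tool's API:

```python
import random

def sample_direction_toward_viewcells(point, normal, viewcells):
    """Pick a ray direction by sampling view cells instead of sampling
    directions uniformly. viewcells is a list of axis-aligned boxes
    (min_corner, max_corner); regions with more view cells therefore
    receive proportionally more rays."""
    def center(vc):
        lo, hi = vc
        return tuple((a + b) / 2.0 for a, b in zip(lo, hi))

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    # keep only view cells lying in the halfspace given by the surface normal
    candidates = [vc for vc in viewcells
                  if dot(tuple(c - p for c, p in zip(center(vc), point)),
                         normal) > 0]
    if not candidates:
        return None

    lo, hi = random.choice(candidates)
    # random point inside the chosen view cell
    target = tuple(random.uniform(a, b) for a, b in zip(lo, hi))
    direction = tuple(t - p for t, p in zip(target, point))
    length = sum(d * d for d in direction) ** 0.5
    return tuple(d / length for d in direction)
```

The real algorithm would cast the resulting ray through the scene and record the hit object in the PVS of the chosen view cell; sampling the event surfaces (VE and EEE) would replace the random target with points near object edges and vertices.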