Changeset 277 for trunk/VUT/doc/SciReport/sampling.tex
Timestamp: 09/16/05 10:41:52
File: 1 edited
trunk/VUT/doc/SciReport/sampling.tex
\chapter{Global Visibility Sampling}
\label{chap:sampling}

The proposed visibility preprocessing framework consists of two major

…

\begin{itemize}
\item The first step is an aggressive visibility sampling which gives
an initial estimate about global visibility in the scene. The sampling
itself involves several strategies which will be described below. The
important property of the aggressive sampling step is that it provides
a fast progressive solution to global visibility and thus it can be
easily integrated into the game development cycle. The aggressive
sampling will terminate when the average contribution of new ray
samples falls below a predefined threshold.

\item The second step is mutual visibility verification. This step
turns the previous aggressive visibility solution into either an
exact, conservative or error-bound aggressive solution. The choice of
the particular verifier is left to the user in order to select the
best one for a particular scene, application context and time
constraints. For example, in scenes like a forest an error-bound
aggressive visibility can be the best compromise between the resulting
size of the PVS (and frame rate) and the visual quality. The exact or
conservative algorithm can however be chosen for urban scenes where
the omission of even small objects can be more distracting for the
user. The mutual visibility verification will be described in the next
chapter.

\end{itemize}

…

In traditional visibility preprocessing the view space is subdivided
into view cells and for each view cell the set of visible objects ---
the potentially visible set (PVS) --- is computed. This framework has
been used for conservative, aggressive and exact algorithms.

We propose a different strategy which has several advantages for
sampling-based aggressive visibility preprocessing. The strategy is
based on the following fundamental ideas:
\begin{itemize}

…

the partitioning. The major drawback of this approach is that for
polygonal scenes with $n$ polygons there can be $\Theta(n^9)$ cells in
the partitioning for unrestricted view space. A {\em scale space}
aspect graph~\cite{bb12595,bb12590} improves robustness of the method
by merging similar features according to the given scale.

…

The exact mutual visibility method presented later in the report is
based on a method exploiting \plucker coordinates of
lines~\cite{bittner:02:phd,nirenstein:02:egwr,haumont2005:egsr}. This
algorithm uses \plucker coordinates to compute visibility in shafts
defined by each polygon in the scene.
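To make the underlying line algebra concrete, the following minimal
sketch computes \plucker\ coordinates of a line through two points and
the permuted inner product ("side") of two lines. This is an
illustrative aside, not code from the tool described in this report;
the helper names are this sketch's own.

```python
def plucker(a, b):
    """Plucker coordinates (direction d, moment m) of the oriented
    line through points a and b (3-tuples): d = b - a, m = a x b."""
    d = tuple(b[i] - a[i] for i in range(3))
    m = (a[1] * b[2] - a[2] * b[1],
         a[2] * b[0] - a[0] * b[2],
         a[0] * b[1] - a[1] * b[0])
    return d, m

def side(l1, l2):
    """Permuted inner product d1.m2 + d2.m1 of two lines in Plucker
    coordinates. Its sign gives the relative orientation of the two
    lines; it is zero iff the lines are coplanar (they intersect or
    are parallel)."""
    (d1, m1), (d2, m2) = l1, l2
    return sum(d1[i] * m2[i] + d2[i] * m1[i] for i in range(3))
```

The sign test is what shaft-based methods build on: a line stabs a
shaft only if its side with respect to each bounding edge line is
consistent.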
…

partition of the scene into view cells is an essential part of every
visibility system. If they are chosen too large, the PVS (Potentially
Visible Set) of a view cell is too large for efficient culling. If
they are chosen too small or without consideration, then neighbouring
view cells contain redundant PVS information, hence boosting the PVS

…

height to form a view cell prism.

\begin{figure}[htb]
\centerline{
\includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_01}
\includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_07}
}
\caption{(left) Vienna view cells. (right) The view cells are prisms
with a triangular base.}
\label{fig:vienna_viewcells}
\end{figure}

In order to efficiently use view cells with our sampling method, we
require a view cell representation which is

…

\end{itemize}

We meet these requirements by employing spatial subdivisions (i.e.,
KD trees and BSP trees) to store the view cells. The initial view
cells are associated with the leaves. The reason why we chose BSP
trees as the view cell representation is that they are very flexible:
view cells forming arbitrary closed meshes can be closely matched.
Therefore we are able to find a view cell with only a few view
ray-plane intersections. Furthermore, the hierarchical structure can
be exploited as a hierarchy of view cells. Interior nodes form larger
view cells containing their children. If necessary, a leaf can be
easily subdivided into smaller view cells.

Currently we consider three different approaches to generate the
initial view cell BSP tree. The third method is not restricted to BSP
trees, but BSP trees are preferred because of their greater
flexibility.

…

\#splits & 25 & 188936\\\hline\hline
max tree depth & 13 & 27\\\hline\hline
avg tree depth & 9.747 & 21.11\\\hline\hline
\end{tabular}

…

\begin{itemize}

\item We use a number of input view cells given in advance. As input
view cells any closed meshes can be applied. The only requirement is
that no two view cells overlap. First the view cell polygons are
extracted, and the BSP is built from these polygons using some global
optimizations like tree balancing or least splits.
Then one view cell after the other is inserted into the tree to find
out the leaves which contain it. The polygons of the view cell are
filtered down the tree, guiding the insertion process. Once we reach a
leaf and there are no more polygons left, we terminate the tree
subdivision. If we are on the inside of the last split plane (i.e.,
the leaf represents the inside of the view cell), we associate the
leaf with the view cell (i.e., add a pointer to the view cell). One
input view cell can be associated with many leaves, whereas each leaf
has only one view cell.
Some statistics about using this method on the Vienna view cell set
are given in table~\ref{tab:viewcell_bsp}. However, sometimes a good
set of view cells is not available, or the scene is changed frequently
and the designer does not want to create new view cells on each
change. In such a case one of the following two methods should rather
be chosen, which generate view cells automatically.


\item We apply a BSP tree subdivision to the scene geometry. Whenever
the subdivision terminates in a leaf, a view cell is associated with
the leaf node. This simple approach is justified because it places the
view cell borders along some discontinuities in the visibility
function.

\item The view cell generation can be guided by the sampling
process. We start with a single initial view cell representing the
whole space. If a given threshold is reached during the preprocessing
(e.g., the view cell is pierced by too many rays resulting in a large
PVS), the view cell is subdivided into smaller cells using some
criteria.

In order to evaluate the best split plane, we first have to define the
characteristics of a good view cell partition: the view cells should
be quite large, while their PVS stays rather small. The PVSs of any
two view cells should be as distinct as possible; otherwise they could
be merged into a larger view cell. E.g., for a building, the perfect
view cells are usually the single rooms connected by portals.

Hence we can define some useful criteria for the split: 1) The number
of rays should be roughly equal among the new view cells. 2) The split
plane should be chosen in a way that the ray sets are disjoint, i.e.,
the number of rays contributing to more than one cell should be
minimized. 3) For BSP trees, the split plane should be aligned with
some scene geometry which is large enough to contribute a lot of
occlusion power. This criterion can be naturally combined with the
second one.
As termination criterion we can choose the minimum PVS size, the
minimum number of piercing rays, or the maximal tree depth.
\end{itemize}

% In the future we aim to extend the view cell construction by using
% feedback from the PVS computation: the view cells which contain many
% visibility changes will be subdivided further and neighboring view
% cells with similar PVSs will be merged. We want to gain more precise
% information about visibility by selectively storing rays with the
% view cells and computing visibility statistics for subsets of rays
% which intersect subregions of the given view cell.

\subsection{From-object based visibility}

…

At this phase of the computation we not only start the samples from
the objects, but we also store the PVS information centered at the
objects. Instead of storing a PVS consisting of objects visible from
view cells, every object maintains a PVS consisting of potentially
visible view cells. While these representations contain exactly the

…

of the algorithm visits scene objects sequentially. For every scene
object we randomly choose a point on its surface. Then a ray is cast
from the selected point according to a randomly chosen direction (see
Figure~\ref{fig:sampling}). We use a uniform distribution of the ray
directions with respect to the half space given by the surface normal.
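The uniform direction sampling over the half space of the surface
normal can be sketched as follows. This is a minimal illustrative
version, not code from the tool; it relies on the hat-box theorem
(a uniform $z$ gives a uniform distribution on the sphere) and then
mirrors the sample into the normal's hemisphere.

```python
import math
import random

def uniform_hemisphere_direction(normal):
    """Return a unit direction uniformly distributed over the half
    space given by `normal` (a unit 3-tuple). Sample the full sphere
    uniformly (z uniform in [-1, 1]), then flip the sample into the
    normal's hemisphere; flipping preserves uniformity."""
    z = 2.0 * random.random() - 1.0          # cos(theta), uniform
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * random.random()
    d = (r * math.cos(phi), r * math.sin(phi), z)
    if sum(di * ni for di, ni in zip(d, normal)) < 0.0:
        d = tuple(-di for di in d)           # mirror into the half space
    return d
```

Rejection sampling (discarding directions below the surface) would
also work, but the mirroring variant never wastes a sample.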
Using this strategy the samples are deterministically placed at every
object, with a randomization of the location on the object
surface. The uniformly distributed direction is a simple and fast
strategy to gain initial visibility information.

\begin{figure}%[htb]
\centerline{
\includegraphics[width=0.4\textwidth, draft=\DRAFTFIGS]{figs/sampling}
}

%\label{tab:online_culling_example}
\caption{Three objects and a set of view cells corresponding to leaves

…

samples cast from a selected object (shown in red). Note that most
samples contribute to several view cells.}
\label{fig:sampling}
\end{figure}

…

local view cell directional density. This means placing more samples
in directions where more view cells are located: we select a random
view cell which lies in the half space given by the surface normal at
the chosen point. We pick a random point inside the view cell and cast
a ray towards this point.


\subsection{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving view point.
In polygonal scenes the
events are defined by event surfaces given by three distinct scene
edges. Depending on the edge configuration we distinguish between
vertex-edge (VE) events and triple-edge (EEE) events. The VE surfaces
are planar whereas the EEE surfaces are in general quadratic surfaces.

…

connectivity information we select a random silhouette edge from this
object and cast a sample which is tangent to that object at the
selected edge.

The second strategy works as follows: we randomly pick two objects

…

\section{Summary}

This chapter described the global visibility sampling algorithm which
forms the core of the visibility preprocessing framework. The global
visibility sampling computes aggressive visibility, i.e., it computes
a subset of the exact PVS for each view cell. The aggressive sampling
provides a fast progressive solution and thus it can be easily
integrated into the game development cycle. The sampling itself
involves several strategies which aim to progressively discover more
visibility relationships in the scene.
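The progressive character of the sampling, with termination once the
average contribution of new ray samples falls below a predefined
threshold, can be sketched as follows. This is an illustrative
skeleton, not the tool's implementation; `cast_sample` is a
hypothetical callback that casts one ray and returns the number of new
(view cell, object) visibility pairs it added to the PVS.

```python
def aggressive_sampling(cast_sample, threshold=0.01, batch=1000):
    """Cast rays in batches; stop when the average contribution of
    a batch of new ray samples falls below `threshold`. Returns the
    total number of visibility pairs discovered."""
    total_pairs = 0
    while True:
        new_pairs = sum(cast_sample() for _ in range(batch))
        total_pairs += new_pairs
        if new_pairs / batch < threshold:  # average contribution per ray
            return total_pairs
```

Because the intermediate PVS is usable at any time, the loop can also
be interrupted after any batch, which is what makes the approach fit a
game development cycle.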
The methods presented in this chapter give a good initial estimate
about visibility in the scene, which can be verified by the mutual
visibility algorithms described in the next chapter.