Changeset 273 for trunk/VUT/doc/SciReport/sampling.tex
Timestamp: 09/15/05 18:31:13
File: 1 edited
trunk/VUT/doc/SciReport/sampling.tex
\item The first step is an aggressive visibility sampling which gives
an initial estimate of global visibility in the scene. The sampling
itself involves several strategies which will be described below. The
important property of the aggressive sampling step is that it provides
a fast progressive solution to global visibility and thus it can be
easily integrated into the game development cycle. The aggressive
sampling will terminate when the average contribution of new ray
samples falls below a predefined threshold.

\item The second step is mutual visibility verification. This step
% …
\end{itemize}

In traditional visibility preprocessing the view space is
subdivided into view cells and for each view cell the set of visible
% …

Both these points will be addressed in this chapter in more detail.

\section{Related work}
% …
strategies.

\subsection{View Space Partitioning}

\begin{figure}[htb]
% …
\end{figure}

Before the visibility computation itself, we subdivide the space of
all possible viewpoints and viewing directions into view cells. A good
partition of the scene into view cells is an essential part of every
visibility system. If the view cells are chosen too large, the PVS
(Potentially Visible Set) of a view cell is too large for efficient
culling. If they are chosen too small or without consideration, then
neighbouring view cells contain redundant PVS information, which
inflates the PVS computation and storage costs for the scene. In the
left image of Figure~\ref{fig:vienna_viewcells} we see view cells of
the Vienna model, generated by triangulation of the streets. In the
closeup (right image) we can see that each triangle is extruded to a
given height to form a view cell prism.
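The extrusion of a street triangle into a view cell prism can be
sketched as follows. This is a minimal illustration, not the report's
implementation; the \texttt{ViewCellPrism} type, the input format, and
the chosen height are assumptions.

```python
# Hypothetical sketch: extruding a 2D street triangle into a view cell
# prism, as described for the Vienna model above.
from dataclasses import dataclass
from typing import List, Tuple

Point2 = Tuple[float, float]
Point3 = Tuple[float, float, float]

@dataclass
class ViewCellPrism:
    # 6 vertices: bottom triangle followed by top triangle
    vertices: List[Point3]

def extrude_triangle(tri: List[Point2], base_z: float, height: float) -> ViewCellPrism:
    """Extrude a 2D triangle to a prism spanning [base_z, base_z + height]."""
    bottom = [(x, y, base_z) for (x, y) in tri]
    top = [(x, y, base_z + height) for (x, y) in tri]
    return ViewCellPrism(vertices=bottom + top)

prism = extrude_triangle([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], base_z=0.0, height=1.8)
print(len(prism.vertices))  # 6
```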
In order to efficiently use view cells with our sampling method, we
% …
view cell insertion time & 0.016s & 7.984s \\\hline\hline
\#nodes & 1137 & 597933 \\\hline\hline
\#interior nodes & 568 & 298966 \\\hline\hline
\#leaf nodes & 569 & 298967 \\\hline\hline
\#splits & 25 & 188936 \\\hline\hline
max tree depth & 13 & 27 \\\hline\hline
avg tree depth & 9.747 & 21.11 \\\hline
\end{tabular}
\caption{Statistics for the view cell BSP tree on the Vienna view
cells and a selection of the Vienna view cells.}\label{tab:viewcell_bsp}
% …
\end{itemize}

In the future we aim to extend the view cell construction by using
feedback from the PVS computation: view cells which contain many
visibility changes will be subdivided further, and neighboring view
cells with similar PVSs will be merged.
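To make the role of the view cell BSP tree concrete, locating the view
cell containing a given point might look as follows. This is a hedged
sketch; the node layout, field names, and front/back convention are
assumptions, not the system's actual data structure.

```python
# Hypothetical sketch: point location in an axis-aligned BSP tree whose
# leaves reference view cells (cf. the tree statistics above).
from dataclasses import dataclass
from typing import Optional

@dataclass
class BspNode:
    axis: int = 0                       # split axis: 0=x, 1=y, 2=z (interior only)
    position: float = 0.0               # split plane offset (interior only)
    front: Optional["BspNode"] = None   # subtree with point[axis] >= position
    back: Optional["BspNode"] = None    # subtree with point[axis] < position
    view_cell_id: Optional[int] = None  # set only at leaf nodes

def locate_view_cell(node: BspNode, point) -> Optional[int]:
    """Descend from the root to the leaf containing `point`."""
    while node.view_cell_id is None:
        node = node.front if point[node.axis] >= node.position else node.back
    return node.view_cell_id

# Tiny example tree: one split at x = 0.5, two leaf view cells.
root = BspNode(axis=0, position=0.5,
               back=BspNode(view_cell_id=1),
               front=BspNode(view_cell_id=2))
print(locate_view_cell(root, (0.25, 0.0, 0.0)))  # 1
print(locate_view_cell(root, (0.75, 0.0, 0.0)))  # 2
```

The average tree depths reported in Table~\ref{tab:viewcell_bsp}
(9.747 and 21.11) bound the number of loop iterations such a lookup
performs on average.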
We want to gain more precise information about visibility by
selectively storing rays with the view cells and computing visibility
statistics for subsets of rays which intersect subregions of the given
view cell.

\subsection{From-object based visibility}
% …

\subsection{Naive Randomized Sampling}

The naive global visibility sampling works as follows: at every pass
the algorithm visits the scene objects sequentially. For every scene
object we randomly choose a point on its surface. Then a ray is cast
from the selected point in a randomly chosen direction. We use a
uniform distribution of the ray directions with respect to the
halfspace given by the surface normal. Using this strategy the samples
are deterministically placed at every object, with a randomization of
% …
information.

\begin{figure}%[htb]
\includegraphics[width=0.6\textwidth, draft=\DRAFTFIGS]{figs/sampling} \\
%\label{tab:online_culling_example}
\caption{Three objects and a set of view cells corresponding to leaves
of an axis-aligned BSP tree. The figure depicts several random samples
cast from a selected object (shown in red). Note that most samples
contribute to several view cells.
}
\label{fig:online_culling_example}
\end{figure}

The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more
objects. Additionally, every object is sampled many times depending on
% …
of the view cells from which it is visible.

Each ray sample can contribute by associating a number of view cells
with the object from which the sample was cast. If the ray does not
leave the scene, it also contributes by associating the pierced view
cells with the terminating object. Thus, as the ray samples are cast,
we can measure the average contribution of a certain number of the
most recent samples. If this contribution falls below a predefined
constant, we move on to the next sampling strategy, which aims to
discover more complicated visibility relations.

\subsection{Accounting for View Cell Distribution}
% …

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving viewpoint. In polygonal scenes these
events are given by event surfaces defined by three distinct scene
edges. Depending on the edge configuration we distinguish between
% …
objects.

The first strategy starts similarly to the above described sampling
methods: we randomly select an object and a point on its surface. Then
we randomly pick an object from its PVS. If we have mesh connectivity
information, we select a random silhouette edge of this object and
cast a sample which is tangent to that object at the selected edge.

The second strategy works as follows: we randomly pick two objects
which are likely to see each other. Then we determine a ray which is
tangent to both objects.
For simple meshes the determination of such rays can be computed
geometrically; for more complicated ones it is again based on random
sampling. The selection of the two objects works as follows: first we
randomly select the first object and a random non-empty view cell for
which we know that it can see the object. Then we randomly select an
object associated with that view cell as the second object.


\section{Summary}

This chapter described a global visibility sampling tool which forms
the core of the visibility preprocessing framework. The global
visibility sampling computes aggressive visibility, i.e.\ it computes
a subset of the exact PVS for each view cell. The aggressive sampling
provides a fast progressive solution and thus it can be easily
integrated into the game development cycle. The sampling itself
involves several strategies which aim to progressively discover more
visibility relationships in the scene.

The methods presented in this chapter give a good initial estimate of
visibility in the scene, which can be verified by the mutual
visibility tool described in the next chapter.