Changeset 273

Timestamp: 09/15/05 18:31:13
Location:  trunk/VUT/doc/SciReport
Files:     5 edited

Legend: lines prefixed with `-` were removed, lines prefixed with `+` were added; unprefixed lines are unchanged context, and `…` marks elided context between hunks.
trunk/VUT/doc/SciReport/introduction.tex (r266 → r273)

-\chapter{Introduction to visibility problems and algorithms}
+\chapter{Introduction}
 \label{chap:overview}
 …
 This chapter provides an introduction to visibility problems and
 algorithms for computer graphics. In particular it presents a taxonomy
-of visibility problems based on the {\em problem domain} and the {\em
-type of the answer}. The taxonomy helps to understand the nature of a
-particular visibility problem and provides a tool for grouping
-problems of similar complexity independently of their target
-application. We discuss typical visibility problems encountered in
-computer graphics including their relation to the presented
-taxonomy. Finally, we classify the latert described visibility tools
-according to the presented taxonomy.
+of visibility problems and algorithms. We discuss typical visibility
+problems encountered in computer graphics including their relation to
+the presented taxonomy. Finally, we classify the later described
+visibility tools according to the presented taxonomy.
+
+\section{Structure of the report}
 …
-\subsection{Problem domain}
+\section{Domain of visibility problems}
 \label{sec:prob_domain}
trunk/VUT/doc/SciReport/mutual.tex (r272 → r273)

-\section{Exact Mutual Visibility Test}
+\section{Exact Verifier}

-The exact mutual visibility test computes exact visibility between two
-polyhedrons in the scene. This is computed by testing visibility
+The exact mutual visibility verifier computes exact visibility between
+two polyhedrons in the scene. This is computed by testing visibility
 between all pairs of potentially visible polygons of these
 polyhedrons. For each pair of tested polygons the computation is
 …
 used to determine the set of relevant occluders~\cite{Haines94}.

-\section{Occlusion tree}
+\subsection{Occlusion tree}
 \label{sec:rot3d_ot}
 …
-\section{Occlusion tree construction}
+\subsection{Occlusion tree construction}
 \label{sec:rot3d_construct}
 …
-\subsection{Insertion with splitting}
+\subsubsection{Insertion with splitting}
 \label{sec:rot3_insertwithsplit}
 …
 the new subtree.

-\subsection{Insertion without splitting}
+\subsubsection{Insertion without splitting}
 \label{sec:insertion_nosplit}
 …
 and combining the resulting visibility states.

-\subsection{Exact visibility test}
+%\subsection{Exact visibility test}

-The exact visibility for a given polyhedral region proceeds as
+The exact visibility test for a given polyhedral region proceeds as
 follows: for each face of the region facing the given source polygon
 we construct a blocker polyhedron. The blocker polyhedron is then
 …

The whole `Conservative visibility test' subsection is commented out in r273:

-\subsection{Conservative visibility test}
-\label{sec:rot3_conserva}
-The conservative visibility test provides a fast estimation of
-visibility of the given region since it does not require the 5D
-polyhedra enumeration. Visibility of the given face of the region is
-determined by a traversal of the occlusion tree and testing the
-position of the corresponding blocker polyhedron with respect to the
-encountered hyperplanes as described in
-Section~\ref{sec:insertion_nosplit}. If the blocker polyhedron
-reaches only $in$-leaves and the face is behind the polygon(s)
-associated with the leaves, the face is classified invisible.
-Otherwise, it is conservatively classified as visible. The
-visibility of the whole region is determined using a combination of
-visibility of its faces as mentioned in the previous section.
+% \subsection{Conservative visibility test}
+% … (the same text, commented out line by line)
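The insertion-without-splitting traversal described above can be sketched as follows. This is a hedged, simplified illustration rather than the report's implementation: `Node` and the 3D point-versus-plane test stand in for the 5D Plücker-space hyperplanes of the actual occlusion tree, and the depth test of the face against the leaf polygons is omitted. The blocker polyhedron is pushed down the tree without being split; at each interior node it is classified against the node's hyperplane and sent to one or both subtrees, and the visibility states of the reached leaves are combined.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    plane: Optional[tuple] = None     # (normal, d) at interior nodes
    front: Optional["Node"] = None
    back: Optional["Node"] = None
    state: Optional[str] = None       # "in" or "out" at leaves

def classify(verts, plane):
    """Classify a polyhedron (given by its vertices) against a hyperplane."""
    normal, d = plane
    sides = [sum(n * p for n, p in zip(normal, v)) + d for v in verts]
    if all(s >= 0 for s in sides):
        return "front"
    if all(s <= 0 for s in sides):
        return "back"
    return "both"

def reached_states(node, verts) -> List[str]:
    """Collect the leaf states reached by the polyhedron (no splitting:
    a straddling polyhedron simply visits both subtrees)."""
    if node.state is not None:
        return [node.state]
    c = classify(verts, node.plane)
    states = []
    if c in ("front", "both"):
        states += reached_states(node.front, verts)
    if c in ("back", "both"):
        states += reached_states(node.back, verts)
    return states

def conservative_visible(node, verts) -> bool:
    # Invisible only if every reached leaf is an 'in'-leaf (the additional
    # depth condition from the text is omitted in this sketch).
    return any(s == "out" for s in reached_states(node, verts))
```

A polyhedron lying entirely behind the splitting plane reaches only the back subtree; one straddling the plane visits both subtrees and its leaf states are combined.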
-\section{Optimizations}
+\subsection{Optimizations}
 \label{sec:rot3d_optimizations}
 …
-\subsection{Visibility estimation}
+\subsubsection{Visibility estimation}

 The visibility estimation aims to eliminate the polyhedron
 …
-\subsection{Visibility merging}
+\subsubsection{Visibility merging}

 Visibility merging aims to propagate visibility classifications from
 …
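The visibility merging mentioned above (propagating classifications upward from the leaves) can be sketched as a post-order pass over a binary tree. This is a hypothetical minimal illustration, not the report's data structure: an interior node whose two subtrees end up with the same classification simply adopts it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TreeNode:
    front: Optional["TreeNode"] = None
    back: Optional["TreeNode"] = None
    state: Optional[str] = None   # "visible"/"invisible" at leaves

def merge_visibility(node) -> Optional[str]:
    """Propagate visibility classifications from the leaves up: if both
    subtrees carry the same classification, the interior node adopts it,
    so later queries can stop at the interior node."""
    if node.front is None and node.back is None:
        return node.state
    a = merge_visibility(node.front)
    b = merge_visibility(node.back)
    if a is not None and a == b:
        node.state = a
    return node.state
```

A node with mixed children keeps no classification of its own, so queries must still descend into its subtrees.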
trunk/VUT/doc/SciReport/online.tex (r266 → r273)

 \begin{figure}
 \centering
-\includegraphics[width=0.35\textwidth,draft=\DRAFTIMAGES]{images/chc_shadows}
+\includegraphics[width=0.4\textwidth,draft=\DRAFTIMAGES]{images/chc_shadows}
 \caption{We can correctly handle shadow volumes together with CHC.}
 \label{fig:chc_shadows}
 …
 \begin{figure}
-\centering \includegraphics[width=0.32\textwidth,draft=\DRAFTIMAGES]{images/city_vis}
+\centering \includegraphics[width=0.4\textwidth,draft=\DRAFTIMAGES]{images/city_vis}
 \caption{Visibility classification of the kD-tree nodes in the city scene.
 The orange nodes were found visible, all the other depicted nodes are invisible.
 …
-\section{Conclusion}
+\section{Summary}
 \label{sec:conclusion}
 …

The closing paragraph on possible improvements is commented out in r273:

-The major potential in improving the method is a better estimation
-of changes in the visibility classification of hierarchy nodes. If
-nodes tend to be mostly visible, we could automatically decrease the
-frequency of occlusion tests and thus better adapt the method to the
-actual occlusion in the scene. Another possibility for improvement is
-better tuning for a particular graphics hardware by means of more
-accurate rendering cost estimation. Skipping occlusion tests for
-simpler geometry can be faster than issuing comparably expensive
-occlusion queries.
+% The major potential in improving the method is … (the same text, commented out)

The three test-scene images are stacked vertically instead of placed side by side:

 \begin{figure*}[htb]
 \centerline{
-\hfill
-\includegraphics[height=0.2\textwidth,draft=\DRAFTFIGS]{images/teapots}
-\hfill
-\includegraphics[height=0.2\textwidth,draft=\DRAFTFIGS]{images/city}
-\hfill
-\includegraphics[height=0.2\textwidth,draft=\DRAFTFIGS]{images/pplant}
-\hfill
-}
+\includegraphics[width=0.4\textwidth,draft=\DRAFTFIGS]{images/teapots}
+}
+\centerline{
+\includegraphics[width=0.4\textwidth,draft=\DRAFTFIGS]{images/city}
+}
+\centerline{
+\includegraphics[width=0.4\textwidth,draft=\DRAFTFIGS]{images/pplant}
+}
 \caption{The test scenes: the teapots, the city, and the power plant.}
 \label{fig:scenes}
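The online culling this chapter summarizes classifies hierarchy nodes as visible or invisible using occlusion queries. As a hedged sketch of the general idea only, the traversal below queries a node and recurses into its children only when the query reports it visible; `query_visible` is a placeholder for a hardware occlusion query, and CHC's temporal coherence and query latency hiding are deliberately omitted.

```python
def cull(node, query_visible, render):
    """Simplified hierarchical occlusion culling: test a node, descend
    only if it is reported visible, render visible leaves."""
    if not query_visible(node):
        node["state"] = "invisible"   # whole subtree is culled
        return
    node["state"] = "visible"
    children = node.get("children")
    if children:
        for child in children:        # ideally in front-to-back order
            cull(child, query_visible, render)
    else:
        render(node)
```

An invisible interior node culls its entire subtree with a single query, which is where the method saves work over per-object testing.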
trunk/VUT/doc/SciReport/report.tex (r250 → r273)

 \begin{document}

+\tableofcontents

 % ---------------------------------------------------------------------
trunk/VUT/doc/SciReport/sampling.tex (r272 → r273)

 \item The first step is an aggressive visibility sampling which gives
 an initial estimate about global visibility in the scene. The sampling
-itself involves several strategies which will be described in
-section~\ref{sec:sampling}. The important property of the aggressive
-sampling step is that it provides a fast progressive solution to
-global visibility and thus it can be easily integrated into the game
-development cycle.
+itself involves several strategies which will be described below.
+The important property of the aggressive sampling step is that it
+provides a fast progressive solution to global visibility and thus it
+can be easily integrated into the game development cycle. The
+aggressive sampling will terminate when the average contribution of
+new ray samples falls below a predefined threshold.

 \item The second step is mutual visibility verification. This step
 …
 \end{itemize}

 In traditional visibility preprocessing the view space is
 subdivided into view cells and for each view cell the set of visible
 …
 Both these points will be addressed in this chapter in more detail.

 \section{Related work}
 …
 strategies.

-\subsection{View Cell Representation}
+\subsection{View Space Partitioning}

 \begin{figure}[htb]
 …

+Before the visibility computation itself, we subdivide the space of
+all possible viewpoints and viewing directions into view cells.
 A good partition of the scene into view cells is an essential part of
 every visibility system. If they are chosen too large, the PVS
 (Potentially Visible Set) of a view cell is too large for efficient
 culling. If they are chosen too small or without consideration, then
 neighbouring view cells contain redundant PVS information, hence
 boosting the PVS computation and storage costs for the scene. In the
 left image of Figure~\ref{fig:vienna_viewcells} we see view cells of
 the Vienna model, generated by triangulation of the streets. In the
 closeup (right image) we can see that each triangle is extruded to a
 given height to form a view cell prism.

 In order to efficiently use view cells with our sampling method, we
 …
 view cell insertion time & 0.016s & 7.984s \\\hline\hline
 \#nodes & 1137 & 597933 \\\hline\hline
 \#interior nodes & 568 & 298966 \\\hline\hline
 \#leaf nodes & 569 & 298967 \\\hline\hline
 \#splits & 25 & 188936 \\\hline\hline
 max tree depth & 13 & 27 \\\hline\hline
 avg tree depth & 9.747 & 21.11 \\\hline\hline
 \end{tabular}
 \caption{Statistics for the view cell BSP tree on the Vienna view cells and a selection of the Vienna view cells.}\label{tab:viewcell_bsp}
 …
 \end{itemize}

+In the future we aim to extend the view cell construction by using
+feedback from the PVS computation: the view cells which contain many
+visibility changes will be subdivided further and neighboring view
+cells with similar PVSs will be merged.
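The prism construction mentioned above (each street triangle extruded to a given height to form a view cell prism) can be sketched as follows. This is a minimal illustration with an assumed vertex layout (three bottom vertices at ground level followed by three top vertices), not the report's actual data structure.

```python
def extrude_triangle(tri, height):
    """Extrude a 2D street triangle into a view cell prism: the three
    bottom vertices lie at z = 0, the three top vertices at z = height."""
    bottom = [(x, y, 0.0) for (x, y) in tri]
    top = [(x, y, float(height)) for (x, y) in tri]
    return bottom + top
```

Each view cell prism is then inserted into the view cell BSP tree whose statistics Table~\ref{tab:viewcell_bsp} reports.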
+We want to gain more precise information about visibility by
+selectively storing rays with the view cells and computing visibility
+statistics for subsets of rays which intersect subregions of the
+given view cell.

 \subsection{From-object based visibility}
 …

-\subsection{Basic Randomized Sampling}
-
-The first phase of the sampling works as follows: At every pass of the
-algorithm visits scene objects sequentially. For every scene object we
-randomly choose a point on its surface. Then a ray is cast from the
-selected point according to the randomly chosen direction. We use a
-uniform distribution of the ray directions with respect to the
+\subsection{Naive Randomized Sampling}
+
+The naive global visibility sampling works as follows: At every pass
+the algorithm visits scene objects sequentially. For every scene
+object we randomly choose a point on its surface. Then a ray is cast
+from the selected point according to a randomly chosen direction. We
+use a uniform distribution of the ray directions with respect to the
 halfspace given by the surface normal. Using this strategy the samples
 are deterministically placed at every object, with a randomization of
 …
 information.

+\begin{figure}%[htb]
+\includegraphics[width=0.6\textwidth, draft=\DRAFTFIGS]{figs/sampling} \\
+\caption{Three objects and a set of view cells corresponding to leaves
+of an axis aligned BSP tree. The figure depicts several random
+samples cast from a selected object (shown in red). Note that most
+samples contribute to more than one view cell.}
+\label{fig:online_culling_example}
+\end{figure}

 The described algorithm accounts for the irregular distribution of the
 objects: more samples are placed at locations containing more
 objects. Additionally every object is sampled many times depending on
 …
 of the view cells from which it is visible.

+Each ray sample can contribute by associating a number of view cells
+with the object from which the sample was cast. If the ray does not
+leave the scene it also contributes by associating the pierced view
+cells to the terminating object. Thus as the ray samples are cast we
+can measure the average contribution of a certain number of the most
+recent samples. If this contribution falls below a predefined
+constant we move on to the next sampling strategy, which aims to
+discover more complicated visibility relations.

 \subsection{Accounting for View Cell Distribution}
 …

 Visibility events correspond to appearance and disappearance of
 objects with respect to a moving view point. In polygonal scenes the
 events are defined by event surfaces defined by three distinct scene
 edges. Depending on the edge configuration we distinguish between
 …
 objects.

+The first strategy starts similarly to the above described sampling
+methods: we randomly select an object and a point on its surface. Then
+we randomly pick an object from its PVS. If we have mesh connectivity
+information we select a random silhouette edge from this object and
+cast a sample which is tangent to that object at the selected edge.
+
+The second strategy works as follows: we randomly pick two objects
+which are likely to see each other. Then we determine a ray which is
+tangent to both objects.
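The naive sampling pass relies on ray directions distributed uniformly over the halfspace given by the surface normal. One standard way to generate such a direction, shown below as a sketch and not necessarily the report's sampler, is to draw a uniform direction on the unit sphere (via normalized Gaussian components) and flip it into the positive halfspace.

```python
import math
import random

def random_direction_in_halfspace(normal):
    """Uniform random unit direction in the halfspace of `normal`:
    sample a uniform direction on the sphere, then flip it if it points
    against the surface normal."""
    while True:
        v = (random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
        length = math.sqrt(sum(c * c for c in v))
        if length > 1e-9:          # reject the (rare) degenerate sample
            break
    d = tuple(c / length for c in v)
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = tuple(-c for c in d)   # flip into the normal's halfspace
    return d
```

A sampling pass then casts, from a random point on each object's surface, one ray with such a direction.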
+For simple meshes the determination of such rays can be computed
+geometrically, for more complicated ones it is based again on random
+sampling. The selection of the two objects works as follows: first we
+randomly select the first object and a random non-empty view cell for
+which we know that it can see the object. Then we randomly select an
+object associated with that view cell as the second object.
+
+\section{Summary}
+
+This chapter described a global visibility sampling tool which forms a
+core of the visibility preprocessing framework. The global visibility
+sampling computes aggressive visibility, i.e. it computes a subset of
+the exact PVS for each view cell. The aggressive sampling provides a
+fast progressive solution and thus it can be easily integrated into
+the game development cycle. The sampling itself involves several
+strategies which aim to progressively discover more visibility
+relationships in the scene.
+
+The methods presented in this chapter give a good initial estimate
+about visibility in the scene, which can be verified by the mutual
+visibility tool described in the next chapter.
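The termination criterion used by the aggressive sampling (stop, or switch strategy, when the average contribution of recent samples falls below a predefined threshold) can be sketched as follows. `cast_sample` is a placeholder standing in for one ray cast, returning the number of new view-cell/object associations that sample contributed; the windowed-average mechanics are an assumption of this sketch.

```python
from collections import deque

def sample_until_converged(cast_sample, window=100, threshold=0.5,
                           max_samples=100_000):
    """Cast samples until the average contribution of the last `window`
    samples drops below `threshold`; returns the number of samples cast."""
    recent = deque(maxlen=window)   # contributions of the most recent samples
    total = 0
    while total < max_samples:
        recent.append(cast_sample())
        total += 1
        if len(recent) == window and sum(recent) / window < threshold:
            break                   # diminishing returns: stop this strategy
    return total
```

In the two-phase pipeline described above, reaching this point would hand the current PVS estimate over to the next sampling strategy, and finally to the mutual visibility verification step.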