Changeset 266 for trunk


Timestamp:
09/13/05 18:44:54 (19 years ago)
Author:
bittner
Message:

structural changes

Location:
trunk/VUT/doc/SciReport
Files:
9 added
5 edited

  • trunk/VUT/doc/SciReport/analysis.tex

    r255 r266  
    11\chapter{Analysis of Visibility in Polygonal Scenes} 
    22 
    3  
    4  
    5  
    6 \section{Analysis of 2D line space} 
     3This chapter provides an analysis of visibility in polygonal scenes, 
     4which are the input for all developed tools. The visibility analysis 
     5uncovers the complexity of the from-region and global visibility 
     6problems and thus it is especially important for a good design of the 
     7global visibility preprocessor. To better understand the visibility 
     8relationships in primal space we use a mapping to line space, where 
     9the visibility interactions can be easily observed as interactions of 
     10sets of points. Additionally, for the sake of clarity, we first analyze 
     11visibility in 2D and then extend the analysis to 3D polygonal scenes. 
     12 
     13\section{Analysis of visibility in 2D} 
    714 
    815\label{LINE_SPACE} 
     
    10171024to handle numerical inaccuracies. 
    10181025 
     1026 
     1027\section{Summary} 
     1028 
     1029 In this chapter we discussed the relation between sets of lines in 3D 
     1030and polyhedra in \plucker coordinates. We proposed a general size 
     1031measure for a set of lines described by a blocker polyhedron and a 
     1032size measure designed for the computation of a PVS. 
     1033 
  • trunk/VUT/doc/SciReport/introduction.tex

    r255 r266  
    1 \chapter{Overview of visibility problems and algorithms}%\chapter 
     1\chapter{Introduction to visibility problems and algorithms}%\chapter 
    22 
    33\label{chap:overview} 
    44\label{chap:classes} 
    55 
    6  This chapter provides a taxonomy of visibility problems encountered 
    7 in computer graphics based on the {\em problem domain} and the {\em 
      6 This chapter provides an introduction to visibility problems and 
     7algorithms for computer graphics. In particular it presents a taxonomy 
     8of visibility problems based on the {\em problem domain} and the {\em 
    89type of the answer}. The taxonomy helps to understand the nature of a 
    910particular visibility problem and provides a tool for grouping 
    1011problems of similar complexity independently of their target 
    1112application. We discuss typical visibility problems encountered in 
    12 computer graphics and identify their relation to the proposed 
    13 taxonomy. A visibility problem can be solved by means of various 
    14 visibility algorithms. We classify visibility algorithms according to 
    15 several important criteria and discuss important concepts in the 
    16 design of a visibility algorithm. The taxonomy and the discussion of 
    17 the algorithm design sums up ideas and concepts that are independent 
    18 of any specific algorithm. This can help algorithm designers to 
    19 transfer the existing algorithms to solve visibility problems in other 
    20 application areas. 
    21  
     13computer graphics including their relation to the presented 
      14taxonomy. Finally, we classify the visibility tools described later in 
      15the report according to the presented taxonomy. 
    2216 
    2317 
     
    314308most general visibility problems. In 3D the ${\cal L}_R$ is a 4D 
    315309subset of the 4D line space. In 2D the ${\cal L}_R$ is a 2D subset of 
    316 the 2D line space. Consequently, in the proposed classification 
     310the 2D line space. Consequently, in the presented classification 
    317311visibility from a region in 2D is equivalent to visibility from a line 
    318312segment in 2D. 
     
    518512is neither its subset nor its superset for all possible inputs. 
    519513 
     514 A more precise quality measure of algorithms computing PVSs can be 
     515expressed by the {\em relative overestimation} and the {\em relative 
     516underestimation} of the PVS with respect to the exact PVS.  We can 
     517define a quality measure of an algorithm $A$ on input $I$ as a tuple 
     518$\mbi{Q}^A(I)$: 
     519 
     520\begin{eqnarray} 
     521  \mbi{Q}^A(I) & = & (Q^A_o(I), Q^A_u(I)), \qquad  I \in {\cal D} \\ 
     522  Q^A_o(I) & = & {|S^A(I) \setminus S^{\cal E}(I)| \over |S^{\cal E}(I)|} \\ 
     523  Q^A_u(I) & = & {|S^{\cal E}(I) \setminus S^A(I) | \over |S^{\cal E}(I)|} 
     524\end{eqnarray} 
     525 
     526where $I$ is an input from the input domain ${\cal D}$, $S^A(I)$ is 
     527the PVS determined by the algorithm $A$ for input $I$ and $S^{\cal 
     528E}(I)$ is the exact PVS for the given input. $Q^A_o(I)$ expresses the 
     529{\em relative overestimation} of the PVS, $Q^A_u(I)$ is the {\em 
     530relative underestimation}. 
     531 
     532The expected quality of the algorithm over all possible inputs can be 
     533given as: 
     534 
     535\begin{eqnarray} 
     536Q^A & = & E[\| \mbi{Q}^A(I) \|] \\ 
      537    & = & \sum_{\forall I \in {\cal D}} f(I)\,\sqrt{Q^A_o(I)^2 + Q^A_u(I)^2} 
     538\end{eqnarray} 
     539 
      540where $f(I)$ is the probability density function expressing the 
     541probability of occurrence of input $I$. The quality measure 
     542$\mbi{Q}^A(I)$ can be used to classify a PVS algorithm into one of the 
     543four accuracy classes according to Section~\ref{sec:accuracy}: 
     544 
     545\begin{enumerate} 
     546\item exact\\ 
     547  $\forall I \in {\cal D} :Q_o^A(I) = 0 \wedge Q_u^A(I) = 0$ 
     548\item conservative\\ 
     549  $\forall I \in {\cal D} : Q_o^A(I) \geq 0 \wedge Q_u^A(I) = 0$ 
     550\item aggressive \\ 
     551  $\forall I \in {\cal D} : Q_o^A(I) = 0 \wedge Q_u^A(I) \geq 0$ 
     552\item approximate \\ 
     553  $\qquad \exists I_j, I_k \in {\cal D}: Q_o^A(I_j) > 0 \wedge Q_u^A(I_k) > 0$ 
     554\end{enumerate} 
     555 
     556 
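The quality measures above translate directly into set operations on PVSs. The following is a minimal C++ sketch (not part of the described tools): it assumes a PVS is represented as a set of object identifiers and that the exact PVS is non-empty, and it classifies a single measurement rather than the whole input domain.

\begin{verbatim}
#include <algorithm>
#include <iterator>
#include <set>
#include <string>

// A PVS modelled as a set of object identifiers (assumption of this sketch).
typedef std::set<int> PVS;

struct Quality { double over; double under; };

// Relative overestimation Q_o and underestimation Q_u of a computed PVS
// with respect to the exact PVS (assumes the exact PVS is non-empty).
Quality pvsQuality(const PVS& computed, const PVS& exact)
{
  PVS over, under;
  std::set_difference(computed.begin(), computed.end(), exact.begin(),
                      exact.end(), std::inserter(over, over.begin()));
  std::set_difference(exact.begin(), exact.end(), computed.begin(),
                      computed.end(), std::inserter(under, under.begin()));
  Quality q;
  q.over  = double(over.size())  / double(exact.size());
  q.under = double(under.size()) / double(exact.size());
  return q;
}

// Accuracy class of a single measurement; the classes of the taxonomy are
// defined over all inputs, so this is only an indication for one input.
std::string accuracyClass(const Quality& q)
{
  if (q.over == 0.0 && q.under == 0.0) return "exact";
  if (q.under == 0.0)                  return "conservative";
  if (q.over == 0.0)                   return "aggressive";
  return "approximate";
}
\end{verbatim}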
    520557 
    521558\subsection{Solution space} 
     
    542579 
    543580 The classification according to the solution space is easily 
    544 demonstrated on visible surface algorithms (these algorithms will be 
    545 discussed in Section~\ref{chap:traditional}). The 
     581demonstrated on visible surface algorithms: The 
    546582z-buffer~\cite{Catmull:1975:CDC} is a common example of a discrete 
    547583algorithm. The Weiler-Atherton algorithm~\cite{Weiler:1977:HSR} is an 
     
    583619%***************************************************************************** 
    584620 
    585 \section{Visibility algorithm design} 
    586  
    587  Visibility algorithm design can be decoupled into a series of 
    588 important design decisions. The first step is to clearly formulate a 
    589 problem that should be solved by the algorithm. The  taxonomy stated 
    590 above can help to understand the difficulty of solving the given 
    591 problem and its relationship to other visibility problems in computer 
    592 graphics. The following sections summarize important steps in the 
    593 design of a visibility algorithm and discuss some commonly used 
    594 techniques. 
    595  
    596  
    597 \subsection{Scene structuring} 
    598  
    599  We discuss two issues dealing with structuring of the scene: 
    600 identifying occluders and occludees, and spatial data structures for 
    601 scene description. 
    602  
    603 \subsubsection{Occluders and occludees} 
    604 %occluders, occludees,  
    605  
    606 Many visibility algorithms restructure the scene description to 
    607 distinguish between {\em occluders} and {\em occludees}.  Occluders 
    608 are objects that cause changes in visibility (occlusion). Occludees 
    609 are objects that do not cause occlusion, but are sensitive to 
    610 visibility changes. In other words the algorithm studies visibility of 
    611 occludees with respect to occluders. 
    612  
    613 The concept of occluders and occludees is used to increase the 
    614 performance of the algorithm in both the running time and the accuracy 
    615 of the algorithm by reducing the number of primitives used for 
    616 visibility computations (the performance measures of visibility 
    617 algorithms will be discussed in 
    618 Section~\ref{sec:performance}). Typically, the number of occluders and 
    619 occludees is significantly smaller than the total number of objects in 
    620 the scene. Additionally, both the occluders and the occludees can be 
    621 accompanied by topological (connectivity) information that might 
    622 be necessary for the efficient functioning of the algorithm. 
    623  
    624  The concept of occluders is applicable to most visibility 
    625 algorithms. The concept of occludees is useful for algorithms 
    626 providing answers (1) and (2) according to the taxonomy of 
    627 Section~\ref{sec:answers}. Some visibility algorithms do not 
    628 distinguish between occluders and occludees at all. For example all 
    629 traditional visible surface algorithms use all scene objects as both 
    630 occluders and occludees. 
    631  
    632  Both the occluders and the occludees can be represented by {\em 
    633 virtual objects} constructed from the scene primitives: the occluders 
    634 as simplified inscribed objects, occludees as simplified circumscribed 
    635 objects such as bounding boxes. Algorithms can be classified according 
    636 to the type of occluders they deal with. The classification follows 
    637 the scene restrictions discussed in 
    638 Section~\ref{sec:scene_restrictions} and adds classes specific to 
    639 occluder restrictions: 
    640  
    641 \begin{itemize} 
    642 \item 
    643   vertical prisms, 
    644 \item 
    645   axis-aligned polygons, 
    646 \item 
    647   axis-aligned rectangles. 
    648 \end{itemize} 
    649    
    650  The vertical prisms are specifically important for computing 
    651 visibility in \m25d scenes. Some visibility algorithms can deal only 
    652 with axis-aligned polygons or, even more restrictively, with 
    653 axis-aligned rectangles. 
    654  
    655  
    656 \begin{figure}[htb] 
    657 \centerline{ 
    658 \includegraphics[width=0.7\textwidth,draft=\DRAFTIMAGES]{images/houses}} 
    659 \caption{Occluders in an urban scene. In urban scenes the occluders 
    660 can be considered vertical prisms erected above the ground.} 
    661 \label{fig:houses} 
    662 \end{figure} 
    663  
    664 Other important criteria for evaluating algorithms according to 
    665 occluder restrictions include: 
    666  
    667  
    668 \begin{itemize} 
    669 \item 
    670   connectivity information, 
    671 \item 
    672   intersecting occluders. 
    673 \end{itemize} 
    674  
    675  
    676 The explicit knowledge of the connectivity is crucial for efficient 
    677 performance of some visibility algorithms (performance measures will 
    678 be discussed in the Section~\ref{sec:performance}). Intersecting 
    679 occluders cannot be handled properly by some visibility algorithms. 
    680 In such a case the possible occluder intersections should be resolved 
    681 in preprocessing. 
    682  
    683  A similar classification can be applied to occludees. However, the 
    684 visibility algorithms typically pose fewer restrictions on occludees 
    685 since they are not used to describe visibility but only to check 
    686 visibility with respect to the description provided by the occluders. 
    687  
    688  
    689 %occluder selection, occluder LOD, virtual occluders, 
    690  
    691 \subsubsection{Scene description} 
    692  
    693  The scene is typically represented by a collection of objects. For 
    694 purposes of visibility computations it can be advantageous to transform 
    695 the object centered representation to a spatial representation by 
    696 means of a spatial data structure.  For example the scene can be 
    697 represented by an octree where full voxels correspond to opaque parts 
    698 of the scene. Such a data structure is then used as an input to the 
    699 visibility algorithm. The spatial data structures for the scene 
    700 description are used for the following reasons: 
    701  
    702 \begin{itemize} 
    703  
    704 \item {\em Regularity}. A spatial data structure typically provides a 
    705   regular description of the scene that avoids complicated 
    706   configurations or overly detailed input. Furthermore, the 
    707   representation can be rather independent of the total scene 
    708   complexity. 
    709    
    710 \item {\em Efficiency}. The algorithm can be more efficient in both 
    711   the running time and the accuracy of the result. 
    712  
    713 \end{itemize} 
    714  
    715  
    716 Additionally, spatial data structures can be applied to structure the 
    717 solution space and/or to represent the desired solution. Another 
    718 application of spatial data structures is the acceleration of the 
    719 algorithm by providing spatial indexing. These applications of spatial 
    720 data structures will be discussed in 
    721 Sections~\ref{sec:solution_space_ds} 
    722 and~\ref{sec:acceleration_ds}. Note that a visibility algorithm can 
    723 use a single data structure for all three purposes (scene 
    724 structuring, solution space structuring, and spatial indexing) while 
    725 another visibility algorithm can use three conceptually different data 
    726 structures. 
    727  
    728  
    729 % gernots alg. 
    730 %used as solution space DS and/or acceleration DS 
    731  
    732 \subsection{Solution space data structures} 
    733 \label{sec:solution_space_ds} 
    734  
    735 A solution space data structure is used to maintain an intermediate 
    736 result during the operation of the algorithm and it is used to 
    737 generate the result of the algorithm. We distinguish between the 
    738 following solution space data structures: 
    739  
    740 \begin{itemize} 
    741  
    742 \item general data structures 
    743  
    744   single value (ray shooting), winged edge, active edge table, etc. 
    745    
    746 \item primal space (spatial) data structures 
    747    
    748   uniform grid, BSP tree (shadow volumes), 
    749   bounding volume hierarchy, kD-tree, octree, etc. 
    750    
    751 \item image space data structures 
    752  
    753   2D uniform grid (shadow map), 2D BSP tree, quadtree, kD-tree, etc. 
    754  
    755 \item line space data structures 
    756  
    757   regular grid, kD-tree, BSP tree, etc. 
    758    
    759 \end{itemize} 
    760  
    761  The image space data structures can be considered a special case of 
    762 the line space data structures since a point in the image represents a 
    763 ray through that point (see also Section~\ref{sec:solspace}). 
    764  
    765  If the dimension of the solution space matches the dimension of the 
    766 problem-relevant line set, the visibility problem can often be solved 
    767 with high accuracy by a single sweep through the scene. If the 
    768 dimensions do not match, the algorithm typically needs more passes to 
    769 compute a result with satisfying accuracy~\cite{EVL-2000-60,wonka00} 
    770 or neglects some visibility effects altogether~\cite{EVL-2000-59}. 
    771  
    772  
    773 %ray shooting none 
    774 %visible surface algorithms - list of scan-line intersections, BSP tree, 
    775 % z-buffer (graphics hardware) 
    776 %shadow computation shadow volumes, BSP tree, shadow map (graphics hardware) 
    777 %PVS for view cell - occlusion tree 
    778  
    779  
    780 \subsection{Performance} 
    781 \label{sec:performance} 
    782  
    783 %output sensitivity, memory consumption, running time, scalability 
    784  
    785  
    786 The performance of a visibility algorithm can be evaluated by measuring 
    787 the quality of the result, the running time and the memory consumption. 
    788 In this section we discuss several concepts related to these 
    789 performance criteria. 
    790  
    791  
    792 \subsubsection{Quality of the result} 
    793  
    794  One of the important performance measures of a visibility algorithm 
    795 is the quality of the result. The quality measure depends on the type 
    796 of the answer to the problem. Generally, the quality of the result 
    797 can be expressed as a distance from an exact result in the solution 
    798 space.  Such a quality measure can be seen as a more precise 
    799 expression of the accuracy of the algorithm discussed in 
    800 Section~\ref{sec:accuracy}. 
    801  
    802 For example a quality measure of algorithms computing a PVS can be 
    803 expressed by the {\em relative overestimation} and the {\em relative 
    804 underestimation} of the PVS with respect to the exact PVS.  We can 
    805 define a quality measure of an algorithm $A$ on input $I$ as a tuple 
    806 $\mbi{Q}^A(I)$: 
    807  
    808 \begin{eqnarray} 
    809   \mbi{Q}^A(I) & = & (Q^A_o(I), Q^A_u(I)), \qquad  I \in {\cal D} \\ 
    810   Q^A_o(I) & = & {|S^A(I) \setminus S^{\cal E}(I)| \over |S^{\cal E}(I)|} \\ 
    811   Q^A_u(I) & = & {|S^{\cal E}(I) \setminus S^A(I) | \over |S^{\cal E}(I)|} 
    812 \end{eqnarray} 
    813  
    814 where $I$ is an input from the input domain ${\cal D}$, $S^A(I)$ is 
    815 the PVS determined by the algorithm $A$ for input $I$ and $S^{\cal 
    816 E}(I)$ is the exact PVS for the given input. $Q^A_o(I)$ expresses the 
    817 {\em relative overestimation} of the PVS, $Q^A_u(I)$ is the {\em 
    818 relative underestimation}. 
    819  
    820 The expected quality of the algorithm over all possible inputs can be 
    821 given as: 
    822  
    823 \begin{eqnarray} 
    824 Q^A & = & E[\| \mbi{Q}^A(I) \|] \\ 
    825     & = & \sum_{\forall I \in {\cal D}} f(I)\,\sqrt{Q^A_o(I)^2 + Q^A_u(I)^2} 
    826 \end{eqnarray} 
    827  
    828 where $f(I)$ is the probability density function expressing the 
    829 probability of occurrence of input $I$. The quality measure 
    830 $\mbi{Q}^A(I)$ can be used to classify a PVS algorithm into one of the 
    831 four accuracy classes according to Section~\ref{sec:accuracy}: 
    832  
    833 \begin{enumerate} 
    834 \item exact\\ 
    835   $\forall I \in {\cal D} :Q_o^A(I) = 0 \wedge Q_u^A(I) = 0$ 
    836 \item conservative\\ 
    837   $\forall I \in {\cal D} : Q_o^A(I) \geq 0 \wedge Q_u^A(I) = 0$ 
    838 \item aggressive \\ 
    839   $\forall I \in {\cal D} : Q_o^A(I) = 0 \wedge Q_u^A(I) \geq 0$ 
    840 \item approximate \\ 
    841   $\qquad \exists I_j, I_k \in {\cal D}: Q_o^A(I_j) > 0 \wedge Q_u^A(I_k) > 0$ 
    842 \end{enumerate} 
    843  
    844  
    845  
    846 \subsubsection{Scalability} 
    847  
    848  Scalability expresses the ability of the visibility algorithm to cope 
    849 with larger inputs. A more precise definition of scalability of an 
    850 algorithm depends on the problem for which the algorithm is 
    851 designed. The scalability of an algorithm can be studied with respect 
    852 to the size of the scene (e.g. number of scene objects). Another 
    853 measure might consider the dependence of the algorithm on the number 
    854 of only the visible objects. Scalability can also be studied 
    855 according to the given domain restrictions, e.g. volume of the view 
    856 cell. 
    857  
    858  A well designed visibility algorithm should be scalable with respect 
    859 to the number of structural changes or discontinuities of 
    860 visibility. Furthermore, its performance should be given by the 
    861 complexity of the visible part of the scene. These two important 
    862 measures of scalability of an algorithm are discussed in the next two 
    863 sections. 
    864  
    865 \subsubsection{Use of coherence} 
    866  
    867  Scenes in computer graphics typically consist of objects whose 
    868 properties vary smoothly from one part to another. A view of such a 
    869 scene contains regions of smooth changes (changes in color, depth, 
    870 texture, etc.) and discontinuities that let us distinguish between 
    871 objects. The degree to which the scene or its projection exhibit local 
    872 similarities is called {\em coherence}~\cite{Foley90}. 
    873  
    874  Coherence can be exploited by reusing calculations made for one part 
    875 of the scene for nearby parts. Algorithms exploiting coherence are 
    876 typically more efficient than algorithms computing the result from 
    877 scratch. 
    878  
    879  Sutherland et al.~\cite{Sutherland:1974:CTH} identified several 
    880 different types of coherence in the context of visible surface 
    881 algorithms. We simplify the classification proposed by Sutherland et 
    882 al. to reflect general visibility problems and distinguish between the 
    883 following three types of {\em visibility coherence}: 
    884  
    885 \begin{itemize} 
    886  
    887 \item {\em Spatial coherence}. Visibility of points in space tends to 
    888   be coherent in the sense that the visible part of the scene consists 
    889   of compact sets (regions) of visible and invisible points. We can 
    890   reuse calculations made for a given region for the neighboring 
    891   regions or its subregions. 
    892  
    893 \item {\em Line-space coherence}. Sets of similar rays tend to have the 
    894   same visibility classification, i.e. the rays intersect the same 
    895   object. We can reuse calculations for the given set of rays for its 
    896   subsets or the sets of nearby rays. 
    897  
    898 \item {\em Temporal coherence}. Visibility at two successive moments is 
    899   likely to be similar despite small changes in the scene or a 
    900   region/point of interest. Calculations made for one frame can be 
    901   reused for the next frame in a sequence. 
    902  
    903 \end{itemize} 
    904   
    905  The degree to which the algorithm exploits various types of coherence 
    906 is one of the major design paradigms in research of new visibility 
    907 algorithms. The importance of exploiting coherence is emphasized by 
    908 the large amount of data that need to be processed by the current 
    909 rendering algorithms. 
    910  
    911  
    912 \subsubsection{Output sensitivity} 
    913  
    914  
    915 An algorithm is said to be {\em output-sensitive} if its running time 
    916 is sensitive to the size of output. In the computer graphics community 
    917 the term output-sensitive algorithm is used in a broader meaning than 
    918 in computational geometry~\cite{berg:97}. The attention is paid to a 
    919 practical usage of the algorithm, i.e. to an efficient implementation 
    920 in terms of the practical average case performance. The algorithms are 
    921 usually evaluated experimentally using test data and measuring the 
    922 running time and the size of output of the algorithm. The formal 
    923 average case analysis is usually not carried out for the following two 
    924 reasons: 
    925  
    926 \begin{enumerate} 
    927  
    928 \item {\em The algorithm is too obscured}. Visibility algorithms 
    929 exploit data structures that are built according to various heuristics 
    930 and it is difficult to derive proper bounds even on the expected size 
    931 of these supporting data structures. 
    932  
    933 \item {\em It is difficult to properly model the input data}. In 
    934 general it is difficult to create a reasonable model that captures 
    935 properties of real world scenes as well as the probability of 
    936 occurrence of a particular configuration. 
    937  
    938 \end{enumerate} 
    939  
    940  A visibility algorithm can often be divided into the {\em offline} 
    941 phase and the {\em online} phase. The offline phase is also called 
    942 preprocessing. The preprocessing is often amortized over many 
    943 executions of the algorithm and therefore it is advantageous to 
    944 express it separately from the online running time. 
    945  
    946  For example an ideal output-sensitive visible surface algorithm runs 
    947 in $O(n\log n + k^2)$, where $n$ is the number of scene polygons (size 
    948 of input) and $k$ is the number of visible polygons (in the worst case 
    949 $k$ visible polygons induce $O(k^2)$ visible polygon fragments). 
    950  
    951  
    952  
    953 \subsubsection{Acceleration data structures} 
    954 \label{sec:acceleration_ds} 
    955  
    956  Acceleration data structures are often used to achieve the performance 
    957 goals of a visibility algorithm. These data structures allow efficient 
    958 point location, proximity queries, or scene traversal required by many 
    959 visibility algorithms. 
    960  
    961  With a few exceptions the acceleration data structures provide a {\em 
    962 spatial index} for the scene by means of a spatial data structure. 
    963 The spatial data structures group scene objects according to the 
    964 spatial proximity. In contrast, line space data structures group 
    965 rays according to their proximity in line space. 
    966  
    967 The common acceleration data structures can be divided into the 
    968 following  categories: 
    969  
    970 \begin{itemize} 
    971 \item Spatial data structures 
    972   \begin{itemize} 
    973   \item {\em Spatial subdivisions} 
    974  
    975     uniform grid, hierarchical grid, kD-tree, BSP tree, octree, quadtree, etc. 
    976      
    977   \item {\em Bounding volume hierarchies} 
    978  
    979     hierarchy of bounding spheres, 
    980     hierarchy of bounding boxes, etc. 
    981      
    982   \item {\em Hybrid} 
    983  
    984     hierarchy of uniform grids, hierarchy of kD-trees, etc. 
    985  
    986   \end{itemize} 
    987    
    988 \item Line space data structures 
    989   \begin{itemize} 
    990   \item {\em General} 
    991  
    992     regular grid, kD-tree, BSP tree, etc. 
    993   \end{itemize} 
    994    
    995 \end{itemize} 
    996  
    997  
    998  
    999 \subsubsection{Use of graphics hardware} 
    1000  
    1001  Visibility algorithms can be accelerated by exploiting dedicated 
    1002 graphics hardware. The hardware implementation of the z-buffer 
    1003 algorithm that is common even on a low-end graphics hardware can be 
    1004 used to accelerate solutions to other visibility problems. Recall that the 
    1005 z-buffer algorithm solves the visibility from a point problem by 
    1006 providing a discrete approximation of the visible surfaces. 
    1007 %$(3-D-2b(i), A-3b)$  
    1008  
    1009 A visibility algorithm can be accelerated by the graphics hardware if 
    1010 it can be decomposed so that the decomposition includes the 
    1011 problem solved by the z-buffer or a series of such problems. 
    1012 %$(3-D-2b(i), A-3b)$ 
    1013 Prospectively, the recent features of the graphics hardware, such as 
    1014 the pixel and vertex shaders  allow easier application of the graphics 
    1015 hardware for solving specific visibility tasks. The software interface 
    1016 between the graphics hardware and the CPU is usually provided by 
    1017 OpenGL~\cite{Moller02-RTR}. 
    1018  
    1019  
    1020 \section{Visibility in urban environments} 
    1021  
    1022  Urban environments constitute an important class of real world scenes 
    1023 computer graphics deals with. We can identify two fundamental 
    1024 subclasses of urban scenes. Firstly, we consider {\em outdoor} scenes, 
    1025 i.e. urban scenes as observed from streets, parks, rivers, or a 
    1026 bird's-eye view. Secondly, we consider {\em indoor} scenes, i.e. urban 
    1027 scenes representing building interiors. In the following two sections 
    1028 we discuss the essential characteristics of visibility in both the 
    1029 outdoor and the indoor scenes. The discussion is followed by 
    1030 summarizing the suitable visibility techniques. 
    1031  
    1032  
    1033 \subsection{Analysis of visibility in outdoor urban areas} 
    1034  
    1035 \label{sec:analysis_ue} 
    1036 \label{sec:ANALYSIS_UE} 
    1037  
    1038  
    1039 Outdoor urban scenes are viewed using two different scenarios. In a 
    1040 {\em flyover} scenario the scene is observed from the bird's eye 
    1041 view. A large part of the scene is visible. Visibility is  mainly 
    1042 restricted due to the structure of the terrain, atmospheric 
    1043 constraints (fog, clouds) and the finite resolution of human 
    1044 retina. Rendering of the flyover scenarios is usually accelerated 
    1045 using LOD, image-based rendering and terrain visibility algorithms, 
    1046 but there is no significant potential for visibility culling. 
    1047  
    1048 In a {\em walkthrough} scenario the scene is observed from a 
    1049 pedestrian's point of view and the visibility is often very 
    1050 restricted. In the remainder of this section we discuss the walkthrough 
    1051 scenario in more detail. 
    1052  
    1053 Due to technological and physical restrictions urban scenes viewed 
    1054 from outdoor closely resemble a 2D {\em height function}, i.e. a 
    1055 function expressing the height of the scene elements above the ground. 
    1056 The height function cannot capture certain objects such as bridges, 
    1057 passages, subways, or detailed objects such as trees.  Nevertheless 
    1058 buildings, usually the most important part of the scene, can be 
    1059 captured accurately by the height function in most cases.  For the 
    1060 sake of visibility computations the objects that cannot be represented 
    1061 by the height function can be ignored. The resulting scene is then 
    1062 called a {\em \m25d scene}. 
    1063  
    1064  In a dense urban area with high buildings visibility is very 
    1065 restricted when the scene is viewed from a street (see 
    1066 Figure~\ref{fig:outdoor}-a). Only buildings from nearby streets are 
    1067 visible. Often there are no buildings visible above roofs of buildings 
    1068 close to the viewpoint. In such a case visibility is essentially 
    1069 two-dimensional, i.e. it could be solved accurately using a 2D 
    1070 footprint of the scene and a 2D visibility algorithm. In areas with 
    1071 smaller houses of different shapes visibility is not so severely 
    1072 restricted since some objects can be visible by looking over other 
    1073 objects. The view complexity increases (measured in number of visible 
    1074 objects) and the height structure becomes increasingly 
    1075 important. Complex views with far visibility can be seen also near 
    1076 rivers, squares, and parks (see Figure~\ref{fig:outdoor}-b). 
    1077  
    1078 \begin{figure}[htb] 
    1079   \centerline{ 
    1080     \hfill 
    1081     \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_street1} 
    1082     \hfill 
    1083     \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_castle1} 
    1084     \hfill 
    1085   } 
    1086   \caption{Visibility in outdoor urban areas. (left) In the center of a city 
    1087   visibility is typically restricted to a few nearby streets. (right) 
    1088   Near river banks typically a large part of the city is visible. Note 
    1089   that many distant objects are visible due to the terrain gradation.} 
    1090   \label{fig:outdoor} 
    1091 \end{figure} 
    1092  
    1093  In scenes with large differences in terrain height the view complexity 
    1094 is often very high. Many objects can be visible that are situated for 
    1095 example on a hill or on a slope behind a river. Especially in areas 
    1096 with smaller housing visibility is largely determined by the terrain itself. 
    1097  
    1098 We can summarize the observations as follows (based on 
    1099 Wonka~\cite{wonka_phd}) : 
    1100  
    1101 \begin{itemize} 
    1102    
    1103 \item 
    1104   Outdoor urban environments have basically \m25d structure and 
    1105   consequently visibility is restricted accordingly. 
    1106    
    1107 \item 
    1108   The view is very restricted in certain areas, such as in the 
    1109   city center. However the complexity of the view can vary 
    1110   significantly.  It is not always the case that only a few objects 
    1111   are visible. 
    1112    
    1113 \item 
    1114   If there are large height differences in the terrain, many 
    1115   objects are visible for most viewpoints. 
    1116    
    1117 \item 
    1118   In the same view a close object can be visible next to a very 
    1119   distant one. 
    1120  
    1121 \end{itemize} 
    1122  
    1123  
    1124  In the simplest case the outdoor scene consists only of the terrain 
    1125 populated by a few buildings. Then the visibility can be calculated on 
    1126 the terrain itself with satisfying 
    1127 accuracy~\cite{Floriani:1995:HCH,Cohen-Or:1995:VDZ, Stewart:1997:HVT}. 
    1128 Outdoor urban environments have a structure similar to that of terrains: 
    1129 buildings can be treated as a terrain with {\em many discontinuities} 
    1130 in the height function (assuming that the buildings do not contain 
    1131 holes or significant variations in their fa{\c{c}}ades). To 
    1132 accurately capture visibility in such an environment specialized 
    1133 algorithms have been developed that compute visibility from a given 
    1134 viewpoint~\cite{downs:2001:I3DG} or view 
    1135 cell~\cite{wonka00,koltun01,bittner:2001:PG}. 
    1136  
    1137 %  The methods presented later in the thesis make use of the specific 
    1138 % structure of the outdoor scenes to efficiently compute a PVS for the 
    1139 % given view cell.  The key observation is that the PVS for a view cell 
    1140 % in a \m25d can be determined by computing visibility from its top 
    1141 % boundary edges. This problem becomes a restricted variant of the 
    1142 % visibility from a line segment in 3D with $d({\cal L}_R) = 3$. 
    1143  
    1144  
    1145 \subsection{Analysis of indoor visibility} 
    1146  
    1147 Building interiors constitute another important class of real world 
    1148 scenes.  A typical building consists of rooms, halls, corridors, and 
    1149 stairways. It is possible to see from one room to another through an 
    1150 open door or window. Similarly it is possible to see from one corridor 
    1151 to another one through a door or other connecting structure.  In 
    1152 general the scene can be subdivided into cells corresponding to the 
    1153 rooms, halls, corridors, etc., and transparent portals that connect 
    1154 the cells~\cite{Airey90,Teller:1991:VPI}. Some portals 
    1155 correspond to the real doors and windows, others provide only a 
    1156 virtual connection between cells. For example an L-shaped corridor 
    1157 can be represented by two cells and one virtual portal connecting them. 
    1158  
    1159 Visibility in a building interior is often significantly restricted 
    1160 (see Figure~\ref{fig:indoor}).  We can see the room we are located in 
    1161 and possibly a few other rooms visible through open doors. Due to the 
    1162 natural partition of the scene into cells and portals visibility can 
    1163 be solved by determining which cells can be seen through a given set of 
    1164 portals and their sequences. A sequence of portals that we can see 
    1165 through is called {\em feasible}. 
    1166  
    1167 \begin{figure}[htb] 
    1168   \centerline{ 
    1169     \hfill 
    1170     \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_chodba1} 
    1171     \hfill 
    1172     \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_sloupy2} 
    1173     \hfill 
    1174   }  
    1175   \caption{Indoor visibility. (left) Visibility in indoor scenes is typically 
    1176     restricted to a few rooms or corridors. (right) In scenes with more complex 
    1177     interior structure visibility gets more complicated. 
    1178   } 
    1179   \label{fig:indoor} 
    1180 \end{figure} 
    1181  
    1182  
    1183  Many algorithms for computing indoor visibility~\cite{Airey90, 
    1184 Teller92phd, Luebke:1995:PMS} exploit the cell/portal structure of the 
    1185 scene. The potential problem of this approach is its strong 
    1186 sensitivity to the arrangement of the environment. In a scene with a 
    1187 complicated structure with many portals there are many feasible portal 
    1188 sequences.  Imagine a hall with columns arranged on a grid. The number 
    1189 of feasible portal sequences rapidly increases with the distance from 
    1190 the given view cell~\cite{Teller92phd} if the columns are sufficiently 
    1191 small (see Figure~\ref{fig:portal_explosion}).  Paradoxically most of 
    1192 the scene is visible and there is almost no benefit of using any 
    1193 visibility culling algorithm. 
    1194  
    1195  The method presented later in the report partially avoids this 
    1196 problem since it does not rely on finding feasible portal sequences 
    1197 even in indoor scenes. Instead of determining what {\em can} be 
    1198 visible through a transparent complement of the scene (portals) the 
    1199 method determines what {\em cannot} be visible due to the scene 
    1200 objects themselves (occluders). This approach also avoids the explicit 
    1201 enumeration of portals and the construction of the cell/portal graph. 
    1202  
    1203  
    1204  
    1205 \begin{figure}[htb]      
    1206   \centerline{ 
    1207     \includegraphics[width=0.45\textwidth,draft=\DRAFTFIGS]{figs/portals_explosion}} 
    1208     \caption{In sparsely occluded scenes the cell/portal algorithm can 
    1209     exhibit a combinatorial explosion in the number of feasible portal 
    1210     sequences. Paradoxically visibility culling provides almost no 
    1211     benefit in such scenes.} 
    1212     \label{fig:portal_explosion} 
    1213 \end{figure} 
    1214  
    1215621 
    1216622 
    1217623\section{Summary} 
    1218624 
    1219  Visibility problems and algorithms penetrate a large part of computer 
    1220 graphics research. The proposed taxonomy aims to classify visibility 
    1221 problems independently of their target application. The 
    1222 classification should help to understand the nature of the given 
    1223 problem and it should assist in finding relationships between 
    1224 visibility problems and algorithms in different application areas. 
    1225 The tools address the following classes of visibility problems: 
     625The presented taxonomy classifies visibility problems independently of 
     626their target application. The classification should help to understand 
     627the nature of the given problem and it should assist in finding 
     628relationships between visibility problems and algorithms in different 
     629application areas.  The tools address the following classes of 
     630visibility problems: 
    1226631 
    1227632\begin{itemize} 
     
    1242647\item Domain: 
    1243648  \begin{itemize} 
    1244   \item viewpoint (Chapter~\ref{chap:online}), 
    1245   \item polygon or polyhedron (Chapters~\ref{chap:sampling,chap:mutual}) 
     649  \item viewpoint (online culling tool), 
     650  \item global visibility (global visibility sampling tool) 
     651  \item polygon or polyhedron (mutual visibility tool) 
    1246652  \end{itemize} 
    1247653\item Scene restrictions (occluders): 
     
    1251657\item Scene restrictions (group objects): 
    1252658  \begin{itemize} 
    1253   \item bounding boxes (Chapters~\ref{chap:rtviscull},~\ref{chap:vismap},~Chapter~\ref{chap:rot25d} and~\ref{chap:rot3d}), 
     659  \item bounding boxes 
    1254660  \end{itemize} 
    1255661\item Output: 
    1256662  \begin{itemize} 
    1257   \item Visibility classification of objects or hierarchy nodes 
    1258663  \item PVS 
    1259664  \end{itemize} 
     
    1264669  \item aggressive 
    1265670  \end{itemize} 
    1266   \item Solution space: 
     671\item Solution space: 
    1267672  \begin{itemize} 
    1268   \item discrete (Chapters~\ref{chap:online},~\ref{chap:sampling}) 
    1269   \item continuous, line space / primal space (Chapter~\ref{chap:rot25d}) 
     673  \item discrete (online culling tool, global visibility sampling tool, conservative and approximate algorithm from the mutual visibility tool) 
     674  \item continuous (exact algorithm from mutual visibility tool) 
    1270675  \end{itemize} 
    1271   \item Solution space data structures: viewport (Chapter~\ref{chap:online}), ray stack (Chapter~\ref{chap:sampling}), ray stack or BSP tree (Chapter~\ref{chap:mutual}) 
     676  \item Solution space data structures: viewport (online culling tool), ray stack (global visibility sampling tool, conservative and approximate algorithm from the mutual visibility tool), BSP tree (exact algorithm from the mutual visibility tool) 
    1272677  \item Use of coherence of visibility: 
    1273678    \begin{itemize} 
    1274     \item spatial coherence (all methods) 
    1275     \item temporal coherence (Chapter~\ref{chap:online}) 
     679    \item spatial coherence (all tools) 
     680    \item temporal coherence (online culling tool) 
    1276681    \end{itemize} 
    1277   \item Output sensitivity: expected in practice (all methods) 
    1278   \item Acceleration data structure: kD-tree (all methods) 
    1279   \item Use of graphics hardware: Chapter~\ref{chap:online} 
    1280      
     682  \item Output sensitivity: expected in practice (all tools) 
     683  \item Acceleration data structure: kD-tree (all tools) 
     684  \item Use of graphics hardware: online culling tool 
    1281685  \end{itemize} 
    1282686 
  • trunk/VUT/doc/SciReport/mutual.tex

    r249 r266  
    11\chapter{Mutual Visibility Tool} 
    22 
    3 \section{Introduction} 
    43 
    54If a fast visibility solution is required such as during the 
     
    1312cell pairs which are classified as mutually invisible. We process the 
    1413objects sequentially and for every given object we determine the view 
    15 cell which form the boundary of the invisible view cell set. 
     14cells which form the boundary of the invisible view cell set. 
    1615 
    1716This boundary is based on the current view cell visibility 
     
    3433simpler case both are defined by an axis aligned bounding box. 
    3534 
      35Below, we describe three different mutual visibility verification 
      36algorithms. The first algorithm, which is described in most detail, 
      37computes exact visibility. The second one is a conservative algorithm 
      38and the third one is an approximate algorithm with a guaranteed error 
      39bound. 
    3640 
    3741 
     
    4044The exact mutual visibility test computes exact visibility between two 
    4145polyhedrons in the scene. This is computed by testing visibility 
    42 between all pairs of potentially polygons of these polyhedrons. 
    43  
     46between all pairs of potentially visible polygons of these 
     47polyhedrons.  For each pair of tested polygons the computation is 
     48localized into the shaft defined by their convex hull. This shaft is 
     49used to determine the set of relevant occluders~\cite{Haines94}. 
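To make the structure of the exact test concrete, the following C++ sketch shows one possible top-level loop. All types and helper functions (Polygon, Shaft, convexHullShaft, polygonsInShaft, polygonPairVisible) are hypothetical placeholders, not the actual interface of the tool.

\begin{verbatim}
#include <cstddef>
#include <vector>

// Hypothetical placeholder types; the real tool's interface may differ.
struct Polygon { /* vertices omitted */ };
struct Shaft   { /* bounding planes omitted */ };
struct Scene {
  std::vector<const Polygon*> polygonsInShaft(const Shaft&) const;
};
Shaft convexHullShaft(const Polygon&, const Polygon&);
bool polygonPairVisible(const Polygon&, const Polygon&,
                        const std::vector<const Polygon*>& occluders);

// Visibility between two polyhedrons reduces to pairwise polygon tests,
// each localized to the shaft spanned by the convex hull of the pair.
bool exactlyVisible(const std::vector<Polygon>& source,
                    const std::vector<Polygon>& target,
                    const Scene& scene)
{
  for (std::size_t i = 0; i < source.size(); ++i)
    for (std::size_t j = 0; j < target.size(); ++j) {
      Shaft shaft = convexHullShaft(source[i], target[j]);
      if (polygonPairVisible(source[i], target[j],
                             scene.polygonsInShaft(shaft)))
        return true;   // an unoccluded ray between the polyhedrons exists
    }
  return false;        // every polygon pair is mutually occluded
}
\end{verbatim}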
    4450 
    4551\section{Occlusion tree} 
     
    5965 Leaves of the occlusion tree are classified $in$ or $out$. If $N$ is 
    6066an $out$-leaf, ${\mathcal Q^*_N}$ represents unoccluded rays emerging 
    61 from the source polygon $P_S$. If $N$ is an $in$-leaf, it is associated 
    62 with an occluder $O_N$ that blocks the corresponding set of rays 
    63 ${\mathcal Q^*_N}$. Additionally $N$ stores a fragment of the blocker 
    64 polyhedron $B_N$ representing ${\mathcal Q^*_N}$. The intersection of 
    65 $B_N$ and the \plucker quadric corresponds to a set of stabbers ${\cal 
    66 S}_N$ through which $O_N$ is visible from $P_S$. 
     67from the source polygon $P_S$. If $N$ is an $in$-leaf, it is 
     68associated with an occluder $O_N$ that blocks the corresponding set of 
     69rays ${\mathcal Q^*_N}$. Additionally $N$ stores a fragment of the 
     70blocker polyhedron $B_N$ representing ${\mathcal Q^*_N}$. The 
     71intersection of $B_N$ and the \plucker quadric corresponds to a set of 
     72stabbers ${\cal S}_N$ through which $O_N$ is visible from $P_S$. A 2D 
      73example of an occlusion tree is shown in Figure~\ref{fig:ot}. 
     74 
     75\begin{figure}[htb] 
     76\centerline{ 
     77\includegraphics[width=0.9\textwidth,draft=\DRAFTFIGS]{figs/ot1}} 
     78\caption{A 2D example of an occlusion tree. Rays intersecting scene polygons 
     79are represented by polyhedra P1, P2 and P3. The occlusion tree represents a union 
     80of the polyhedra. Note that P2 has been split during the insertion process.} 
      81\label{fig:ot} 
     82\end{figure} 
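As an illustration of the data structure described above, a possible node layout is sketched below in C++; the field names and the Plane5D type are assumptions of this sketch, not the actual implementation.

\begin{verbatim}
// Occlusion tree node: a BSP tree over blocker polyhedra in Plucker coordinates.
struct Polygon;                      // scene polygon (occluder)
struct Polyhedron;                   // fragment of a blocker polyhedron
struct Plane5D { float coef[6]; };   // hyperplane in Plucker coordinates

struct OcclusionTreeNode {
  enum Type { INTERIOR, IN_LEAF, OUT_LEAF } type;

  // interior node: splitting hyperplane and the two subtrees
  Plane5D splitPlane;
  OcclusionTreeNode *front, *back;

  // in-leaf: the occluder O_N blocking the rays of this node and the
  // corresponding fragment B_N of its blocker polyhedron
  const Polygon *occluder;
  Polyhedron    *blockerFragment;
};
\end{verbatim}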
     83 
    6784 
    6885%%  Consider a source polygon and a single occluder such as in 
     
    7794 
    7895 The occlusion tree is constructed incrementally by inserting blocker 
    79 polyhedra in the order given by the approximate occlusion sweep of the 
    80 scene polygons. When processing a polygon $P_j$ the algorithm inserts 
    81 a polyhedron $B_{P_SP_j}$ representing the feasible line set between 
    82 the source polygon $P_S$ and the polygon $P_j$. The polyhedron is 
    83 split into fragments that represent either occluded or unoccluded rays. 
     96polyhedra in the order given by the size of the polygons. When 
     97processing a polygon $P_j$ the algorithm inserts a polyhedron 
     98$B_{P_SP_j}$ representing the feasible line set between the source 
     99polygon $P_S$ and the polygon $P_j$. The polyhedron is split into 
     100fragments that represent either occluded or unoccluded rays. 
    84101 
    85102 We describe two methods that can be used to insert a blocker 
     
    113130classification.  If $N_{c}$ is an $out$-leaf then $B_{c}$ is visible 
    114131and $N_c$ is replaced by the elementary occlusion tree of $B_{c}$. If 
    115 $N_{c}$ is an $in$-leaf, the mutual position of the currently 
    116 processed polygon $B_j$ and the polygon $B_{N_{c}}$ associated with 
    117 $N_{c}$ is determined. This test will be described in 
    118 Section~\ref{sec:position}. If $B_{c}$ is behind $B_{N_{c}}$ it is 
    119 invisible and no modification to the tree necessary. Otherwise we need 
    120 to {\em merge} $B_{c}$ into the tree. The merging replaces $N_{c}$ by 
    121 the elementary occlusion tree of $B_c$ and inserts the old fragment 
    122 $B_{N_{c}}$ in the new subtree. 
      132$N_{c}$ is an $in$-leaf, it corresponds to already occluded rays and no 
      133modification of the tree is necessary. Otherwise we need to {\em merge} 
     134$B_{c}$ into the tree. The merging replaces $N_{c}$ by the elementary 
     135occlusion tree of $B_c$ and inserts the old fragment $B_{N_{c}}$ in 
     136the new subtree. 
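A compact sketch of this insertion procedure is given below; it reuses the hypothetical OcclusionTreeNode layout from the previous sketch, and buildElementaryTree and splitPolyhedron are assumed helpers.

\begin{verbatim}
// Assumed helpers (see the node layout sketched earlier).
OcclusionTreeNode* buildElementaryTree(Polyhedron* blocker,
                                       const Polygon* occluder);
void splitPolyhedron(Polyhedron* poly, const Plane5D& plane,
                     Polyhedron*& front, Polyhedron*& back);

// Insertion with splitting: the blocker polyhedron is filtered down the
// tree and split at every hyperplane it straddles.
void insertBlocker(OcclusionTreeNode*& node, Polyhedron* blocker,
                   const Polygon* occluder)
{
  switch (node->type) {
  case OcclusionTreeNode::OUT_LEAF:
    // unoccluded rays: replace the leaf by the elementary tree of the blocker
    node = buildElementaryTree(blocker, occluder);
    break;
  case OcclusionTreeNode::IN_LEAF:
    // rays already occluded: no modification of the tree is necessary
    break;
  case OcclusionTreeNode::INTERIOR: {
    Polyhedron *front = 0, *back = 0;
    splitPolyhedron(blocker, node->splitPlane, front, back);
    if (front) insertBlocker(node->front, front, occluder);
    if (back)  insertBlocker(node->back,  back,  occluder);
    break;
  }
  }
}
\end{verbatim}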
    123137 
    124138\subsection{Insertion without splitting} 
     
    142156continues in the left subtree. If $B_c$ intersects both halfspaces the 
    143157algorithm proceeds in both subtrees of $N_c$ and $\pplane_{N_c}$ is 
    144 added to the list of splitting hyperplanes with a correct sign for each 
    145 subtree. Reaching an $out$-leaf the list of splitting hyperplanes and 
    146 the associated signs correspond to halfspaces bounding the 
     158added to the list of splitting hyperplanes with a correct sign for 
     159each subtree. Reaching an $out$-leaf the list of splitting hyperplanes 
     160and the associated signs correspond to halfspaces bounding the 
    147161corresponding polyhedron fragment. The polyhedron enumeration 
    148162algorithm is  applied using these halfspaces and the original 
     
    151165halfspaces is empty. Such a case occurs due to the conservative 
    152166traversal of the tree that only tests the position of the inserted 
    153 polyhedron with respect to the splitting hyperplanes\footnote{Such a 
    154 traversal was also used in Section~\ref{sec:rtviscull_conserva} in the 
    155 context of the from-point visibility culling.}. If the fragment is not 
    156 empty, the tree is extended as described in the previous section. 
     167polyhedron with respect to the splitting hyperplanes. If the fragment 
     168is not empty, the tree is extended as described in the previous 
     169section. 
    157170 
    158171Reaching an $in$-leaf the polygon positional test is applied. If the 
     
    186199 
    187200 
    188 \subsection{Polygon positional test} 
    189 \label{sec:position} 
    190  
    191 The polygon positional test aims to determine the order of two 
    192 polygons $P_i$ and $P_j$ with respect to the source polygon 
    193 $P_S$~\cite{Chrysantho1996a}. We assume that polygons $P_i$ and $P_j$ 
    194 do not intersect, i.e. all potential polygon intersections were 
    195 resolved in preprocessing. The test is applied to determine which of 
    196 the two polygons is closer to $P_S$ with respect to the given set of 
    197 rays intersecting both polygons. The test proceeds as follows: 
    198  
    199 \begin{enumerate} 
    200 \item If $P_i$ lays in the positive/negative halfspace defined by 
    201 $P_j$, it is before/behind $P_j$. Otherwise, proceed with the next step. 
    202 \item If $P_j$ lays in the positive/negative halfspace defined by 
    203 $P_i$, it is before/behind $P_i$. Otherwise, proceed with the next step. 
    204 \item Find a ray from the given set of rays that intersects both 
    205 polygons. Compute an intersection of the ray with $P_i$ and $P_j$ and 
    206 determine the position of the polygons according to the order of 
    207 intersection points along the ray. 
    208 \end{enumerate} 
    209  
    210 The first two steps aim to determine the absolute priority of one of 
    211 the polygons. If these steps fail, the order of the polygons is 
    212 determined using a sample ray in step 3. 
    213201 
    214202\section{Visibility test} 
     
    263251alter the accuracy of the visibility algorithm. 
    264252  
    265 \subsection{Shaft culling} 
    266  
    267 \label{sec:rot3_shaft} 
    268  
    269 The algorithm as described computes and maintains visibility of all 
    270 polygons facing the given source polygon. In complex scenes the 
    271 description of visibility from the source polygon can reach $O(n^4)$ 
    272 complexity in the worse 
    273 case~\cite{Teller92phd,p-rsls-97,Durand99-phd}. The size of the 
    274 occlusion tree is then bound by $O(n^5)$. Although such a 
    275 configuration occurs rather rare in practice we need to apply some 
    276 general technique to avoid the worst-case memory complexity. If the 
    277 algorithm should provide an exact analytic solution to the problem, we 
    278 cannot avoid the $O(n^4\log n)$ worst-case computational complexity, 
    279 but the computation can be decoupled to reduce memory requirements. 
    280  
    281  We propose to use the {\em shaft culling}~\cite{Haines94} method that 
    282 divides the computation into series of from-region visibility 
    283 problems restricted by a given shaft of lines. Ideally the shafts are 
    284 determined so that the {\em visibility complexity} in the shaft is 
    285 bound by a given constant. This is however the from-region visibility 
    286 problem itself. We can use an estimate based on a number of polygons 
    287 in the given shaft. First, the hemicube is erected over the source 
    288 polygon and it is used to construct five shafts corresponding to the 
    289 five faces  of the hemicube. The shafts are then subdivided as 
    290 follows: if a given shaft intersects more than a predefined number of 
    291 polygons the corresponding hemicube face is split in two and the new 
    292 shafts are processed recursively.  Visibility in the shaft is 
    293 evaluated using all polygons intersecting the shaft. When the 
    294 computation for the shaft is finished the occlusion tree is 
    295 destroyed. The algorithm then proceeds with the next shaft until all 
    296 generated shafts have been processed. See Figure~\ref{fig:rot3_shafts} 
    297 for an example of a regular subdivision of the problem-relevant line 
    298 set into 80 shafts. 
    299  
    300  
    301 This technique shares a similarity with the algorithm recently 
    302 published by Nirenstein et al.~\cite{nirenstein:02:egwr} that uses 
    303 shafts between the source polygon and each polygon in the 
    304 scene. Our technique provides the following benefits: 
    305  
    306 \begin{itemize} 
    307 \item The shaft can contain thousands of polygons, which allows to 
    308 better exploit the coherence of visibility. The hierarchical line 
    309 space subdivision allows efficient searches and updates. 
    310  
    311 \item The method is applicable even in scenes with big differences in 
    312 polygon sizes. Unlike the method of Nirenstein et 
    313 al.~\cite{nirenstein:02:egwr}, the proposed technique generates shafts 
    314 to find balance between the use of coherence and the memory 
    315 requirements. Due to the hierarchical subdivision of the 
    316 problem-relevant line set our method can handle the case when there is 
    317 a huge number of polygons in a single shaft between two large scene 
    318 polygons. 
    319  
    320 \item Visibility in the whole shaft is described in a unified data 
    321   structure, which can serve for further computations (e.g. extraction 
    322   of event surfaces, hierarchical representation of visibility, etc.). 
    323 \end{itemize} 
    324  
    325  
    326  
    327 \subsection{Occluder sorting} 
    328  
    329  Occluder sorting aims to increase the accuracy of the front-to-back 
    330 ordering determined by the approximate occlusion sweep. Higher 
    331 accuracy of the ordering decreases the number of the late merging of 
    332 blocker polyhedra in the tree.  Recall that the merging extends the 
    333 occlusion tree by a blocker polyhedron corresponding to an occluder 
    334 that hides an already processed occluder. 
    335  
    336  The occluder sorting is applied when reaching a leaf node of the 
    337 spatial hierarchy during the approximate occlusion sweep. Given a leaf 
    338 node $N$ the occluder sorting proceeds as follows: 
    339  
    340 \begin{enumerate} 
    341 \item Determine a ray $r$ piercing the center of the source polygon 
    342 $P_S$ and the node $N$. 
    343 \item Compute intersections of $r$ and all supporting planes of the 
    344 polygons associated with $N$. 
    345 \item Sort the polygons according to the front-to-back order of the 
    346 corresponding intersection points along $r$. 
    347 \end{enumerate} 
    348  
    349 The occluder sorting provides an exact priority order within the given 
    350 leaf in the case that the polygons are pierced by the computed ray $r$ 
    351 and they do not intersect. 
     253% \subsection{Occluder sorting} 
     254 
     255%  Occluder sorting aims to increase the accuracy of the front-to-back 
     256% ordering determined by the approximate occlusion sweep. Higher 
     257% accuracy of the ordering decreases the number of the late merging of 
     258% blocker polyhedra in the tree.  Recall that the merging extends the 
     259% occlusion tree by a blocker polyhedron corresponding to an occluder 
     260% that hides an already processed occluder. 
     261 
     262%  The occluder sorting is applied when reaching a leaf node of the 
     263% spatial hierarchy during the approximate occlusion sweep. Given a leaf 
     264% node $N$ the occluder sorting proceeds as follows: 
     265 
     266% \begin{enumerate} 
     267% \item Determine a ray $r$ piercing the center of the source polygon 
     268% $P_S$ and the node $N$. 
     269% \item Compute intersections of $r$ and all supporting planes of the 
     270% polygons associated with $N$. 
     271% \item Sort the polygons according to the front-to-back order of the 
     272% corresponding intersection points along $r$. 
     273% \end{enumerate} 
     274 
     275% The occluder sorting provides an exact priority order within the given 
     276% leaf in the case that the polygons are pierced by the computed ray $r$ 
     277% and they do not intersect. 
    352278 
    353279 
     
    413339\section{Conservative Verifier} 
    414340 
    415  
    416 \section{Error Bound Verifier} 
     341A conservative verifier is a faster alternative to the exact 
     342visibility verifier described above. The verifier is an extension of 
     343the strong occlusion algorithm of Cohen-Or et 
     344al.~\cite{cohen-egc-98}. In particular, our verifier refines the search 
     345for a strong occluder by using a hierarchical subdivision of the space 
     346of lines connecting the two regions tested for mutual 
     347visibility. Initially, the shaft bounding the tested regions is 
     348constructed. The rays bounding the shaft are traced through the scene and 
     349we compute all intersections with the scene objects between the tested 
     350regions. Then the algorithm proceeds as follows: 
     351 
     352\begin{itemize} 
     353\item If any ray does not intersect any object, the 
     354tested regions are classified as visible and the algorithm 
     355terminates. 
     356\item If all rays intersect the same convex object (at any 
     357depth), this object is a strong occluder with respect to the shaft and 
     358thus it blocks all rays in the corresponding shaft. 
     359\item If the rays are not all blocked by a single convex object, four new 
     360shafts are constructed by splitting both regions in half and the 
     361process is repeated recursively. 
     362\end{itemize} 
     363 
     364If the subdivision does not terminate before reaching a predefined 
     365maximal subdivision depth, we conservatively classify the tested regions as 
     366mutually visible, as illustrated by the sketch below. 
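
The following C++ sketch summarizes this recursion. It is a schematic
illustration under simplifying assumptions rather than the actual
implementation: \texttt{Region}, \texttt{castBoundaryRays()} and
\texttt{split()} are hypothetical placeholders for the corresponding
operations of the verifier.

\begin{verbatim}
// Schematic sketch of the conservative mutual-visibility test (illustration
// only). castBoundaryRays() and split() are placeholders assumed to be
// provided by the scene representation.

#include <algorithm>
#include <vector>

struct Region { /* convex region: viewcell, bounding box, ... */ };

// Ids of all convex objects intersected by one boundary ray of the shaft
// between the two tested regions (empty if the ray is unoccluded).
typedef std::vector<int> RayHits;

// Trace the rays bounding the shaft between a and b (placeholder).
std::vector<RayHits> castBoundaryRays(const Region &a, const Region &b);

// Split a region into two halves (placeholder).
void split(const Region &r, Region &left, Region &right);

// True if some convex object is intersected by every boundary ray, i.e.,
// it acts as a strong occluder for the whole shaft.
static bool hasStrongOccluder(const std::vector<RayHits> &rays) {
  if (rays.empty()) return false;
  for (int id : rays.front()) {
    bool inAll = true;
    for (const RayHits &r : rays)
      if (std::find(r.begin(), r.end(), id) == r.end()) { inAll = false; break; }
    if (inAll) return true;
  }
  return false;
}

// Returns true if the regions a and b have to be classified as visible.
bool conservativelyVisible(const Region &a, const Region &b,
                           int depth, int maxDepth) {
  const std::vector<RayHits> rays = castBoundaryRays(a, b);

  for (const RayHits &r : rays)
    if (r.empty())                  // an unoccluded boundary ray:
      return true;                  // the regions see each other

  if (hasStrongOccluder(rays))      // a strong occluder blocks the shaft
    return false;

  if (depth >= maxDepth)            // predefined depth reached:
    return true;                    // conservatively classify as visible

  Region a1, a2, b1, b2;            // refine: four new shafts
  split(a, a1, a2);
  split(b, b1, b2);
  return conservativelyVisible(a1, b1, depth + 1, maxDepth) ||
         conservativelyVisible(a1, b2, depth + 1, maxDepth) ||
         conservativelyVisible(a2, b1, depth + 1, maxDepth) ||
         conservativelyVisible(a2, b2, depth + 1, maxDepth);
}
\end{verbatim}
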
     367 
     368 
     369\section{Error Bound Approximate Verifier} 
     370 
     371The approximate verifier will be based on a similar idea to the 
     372conservative one. However, it will behave differently at the finer 
     373subdivision levels of the ray shafts. The idea is to use the above algorithm 
     374until the shafts get small enough that we can guarantee that even 
     375if the shaft is not blocked by the scene objects, the pixel error 
     376induced by the omission of objects potentially visible behind the shaft 
     377is below a given threshold. 
     378 
     379For the computation of the maximal error due to the current shaft we 
     380assume that one tested region is a viewcell, whereas the other is an 
     381object bounding box or a cell of the spatial hierarchy. The error is 
     382estimated as follows: We first triangulate the farthest intersection 
     383points in the shaft as seen from the viewcell side of the shaft. Then 
     384for each computed triangle we calculate a point in the viewcell which 
     385maximizes the projected area of the triangle. The conservative 
     386estimate of the maximal error is then given by the sum of the computed 
     387projected areas. 
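
For illustration, the projected area of a depth triangle can be expressed
as the solid angle it subtends at a point $\mathbf{o}$ of the viewcell. For
a triangle with vertices $\mathbf{p}_0$, $\mathbf{p}_1$, $\mathbf{p}_2$ the
solid angle $\Omega$ is given by the formula of Van~Oosterom and Strackee,
\[
\tan\frac{\Omega}{2} =
\frac{|\vec{a} \cdot (\vec{b} \times \vec{c})|}
     {abc + (\vec{a}\cdot\vec{b})\,c + (\vec{a}\cdot\vec{c})\,b + (\vec{b}\cdot\vec{c})\,a},
\]
where $\vec{a}$, $\vec{b}$, $\vec{c}$ are the vectors from $\mathbf{o}$ to
the triangle vertices and $a$, $b$, $c$ their lengths. The C++ sketch below
sums this quantity over the depth triangles of a shaft; evaluating only the
viewcell corners when searching for the maximizing point is a
simplification made for the sake of the example, and all names are
placeholders rather than the tool's actual interface.

\begin{verbatim}
// Sketch of the conservative error estimate for a shaft (illustration only).
// The depth triangles are built from the farthest intersection points of the
// shaft; the error is the sum of their maximal solid angles over candidate
// points of the viewcell (here simply its corners).

#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) {
  return { a.x - b.x, a.y - b.y, a.z - b.z };
}
static double dot(const Vec3 &a, const Vec3 &b) {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 cross(const Vec3 &a, const Vec3 &b) {
  return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static double len(const Vec3 &a) { return std::sqrt(dot(a, a)); }

// Solid angle of triangle (p0, p1, p2) seen from o (Van Oosterom & Strackee).
double triangleSolidAngle(const Vec3 &o, const Vec3 &p0,
                          const Vec3 &p1, const Vec3 &p2) {
  const Vec3 a = sub(p0, o), b = sub(p1, o), c = sub(p2, o);
  const double la = len(a), lb = len(b), lc = len(c);
  const double num = std::fabs(dot(a, cross(b, c)));
  const double den = la * lb * lc + dot(a, b) * lc + dot(a, c) * lb + dot(b, c) * la;
  return 2.0 * std::atan2(num, den);
}

// Conservative error estimate of a shaft: for each depth triangle take the
// maximal solid angle over the candidate viewcell points and sum the maxima.
double shaftErrorEstimate(const std::vector<Vec3> &viewcellCorners,
                          const std::vector<std::array<Vec3, 3> > &depthTriangles) {
  double error = 0.0;
  for (const std::array<Vec3, 3> &t : depthTriangles) {
    double maxOmega = 0.0;
    for (const Vec3 &o : viewcellCorners)
      maxOmega = std::max(maxOmega, triangleSolidAngle(o, t[0], t[1], t[2]));
    error += maxOmega;        // compared against the user-given threshold
  }
  return error;
}
\end{verbatim}
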
    417388 
    418389%Option: - if there is sufficient number of samples from 1 + 2 and some 
     
    421392%from there values. 
    422393 
    423 a) is perhaps the most interesting since this has not been done so far at all. I thought about different methods of doing this (e.g. maintaining a triangulated front visible layer), but I always found some problem. I believe that the method which is almost implemented now is solid and I just wonder what performance it will achieve. 
    424  
    425 The basic idea is the following: 
    426  
    427 Use a two-plane parametrization for describing all lines intersecting the object and the viewcell. The line sets will be described by aligned rectangles on the two planes, which allows us to construct convex shafts always bounded by only 4 lines. After determining the initial rectangles bounding the whole line set, perform recursive subdivision as follows: 
    428  
    429 1. Cast the 4 corner rays and determine ALL intersections with the occluding objects 
    430  
    431 2. If any ray is unoccluded, terminate the whole test with VISIBLE and extend the PVS of the object with the new viewcell (and possibly more if the ray goes further). Start the verification for the new viewcells in the occluded layer. 
    432  
    433 3. If all 4 rays pierce the same convex mesh (or mesh triangle for non-convex meshes), terminate this branch with INVISIBLE. Note that the terminating mesh need not be the front-most one. 
    434  
    435 4. Triangulate the depth of the shaft as seen from the viewcell side of the shaft and compute the maximal projected area of the 2 computed triangles. This determines the maximal pixel error we can make by terminating this branch as invisible. 
    436 Note that here it would be best to select those intersections for the rays which have the most similar depths to better estimate the maximal error, i.e. the "depth triangles" need not be formed by the first visible layer. 
    437  
    438 5. If step 4 does not terminate, subdivide either the viewcell or the object rectangle depending on the sample depths and the computed error. 
    439  
    440 It is actually quite simple to create a conservative variant of the algorithm: subdivide only to a given depth and avoid test 4. Then the samples can only be terminated by a "STRONG occluder" which hides the whole shaft. This is similar to Danny's method, but here a hierarchical shaft subdivision is performed which should deliver much more accuracy. 
    441  
    442  
    443 > Partly: the area subdivision algorithm does not only cast "corner" rays, but maintains lists of the triangles for the given area. However the basic idea is the same: subdivide the set of rays until there is a trivial decision (a single convex object intersected by all corner rays) or the error is too small. The second criterion is different from the from-point setup, since the error cannot be evaluated in a single projection plane. 
    444 > That is why this triangulation would be done: Imagine that the 4 corner rays of a shaft intersect some object(s) in the scene. Now connect these intersection points by creating two triangles. These triangles do not exist in the scene, but define the largest possible windows through which we can see further in the shaft. Now it is easy to find two points at the origin of the shaft which maximize the projected area of the two triangles. The sum of these areas gives a conservative estimate of the spatial angle error which can be made by not subdividing the shaft any further. 
    445 > Is this description clearer? 
    446  
    447  
    448 partly - I just don't see it in front of me now. How is the result influenced by which triangles are chosen? Is it so that the nearer the intersections, the bigger the error? 
     394% a) is perhaps the most interesting since this has not been done so far at all. I though about different methods how to do that (e.g. maintaining a triangulated front visible layer), but I always found some problem. I believe that the method which is almost implemented now is solid and I just wonder what performance it will achieve. 
     395 
     396% The basic idea is the following: 
     397 
     398% Use 2 plane parametrization for describing all lines intersecting the object and the viewcell. The line sets will be described by aligned rectangles on the two planes which allows to construct convex shafts bounded always by only 4 lines. After determing the initial rectangles bounding the whole line set perform recursive subdivsion as follows: 
     399 
     400% 1. Cast the 4 corner rays and deterine ALL intersections with the occluding objects 
     401 
     402% 2. If any ray is unoccluded termite the whole test with VISIBLE and extend the PVS of the object with the new viecell (and possibly more if the rays goes further). Start the verification for the new viewcells in the occluded layer. 
     403 
     404% 3. If all 4 rays pierce the same convex mesh, (or mesh triangle for non convex meshes) terminate this branch with INVISIBLE. Note the terminating mesh need not be the front most one. 
     405 
     406% 4. Triangulate the depth of the shaft as seen from the viecell side of the shaft and compute the maximal projected area of the 2 computed triangles. This determines the maximal pixel error we can make by terminating this branch as invisible. 
     407% Note that here it would be the best to selected those intersections for the rays which have the most similar depths to better estimate the maximal error, i.e. the "depth triangles" need not be formed by the first visible layer. 
     408 
     409% 5. If 4 does not terminate subdivide either the viewcell or the object rectangle depending on the sample depths and error computed. 
     410 
     411% It is actually quite simple to create a conservative variant of the algorithm: subdivide only to a given depth and avoid test 4. Then the samples can only be terminated by a "STRONG occluder" which hides the whole shaft. This is similar to Dannys method, but here a hierarchical shaft subdivision is performed which should deliver much more accuracy. 
     412 
     413 
     414% > Partly: area subdivision algorithm does not only cast "corner" rays, but maintains lists of the triangles for the given area. However the basic idea is the same: subdivide the set or rays until there is a trivial decision (a single convex object intersected by all corner rays) or the error is too small. The second criterion is different from the from-poin t setup, since the error cannot be evaluated in a single projection plane. 
     415% > That is why this triangulation would be done: Imagine that the 4 corner rays of a shaft intersect some object(s) in the scene. Now connect these intersection points by creating two triangles. These triangles do not exist in the scene, but define the largest possible windows through which we can see further in the shaft. Now it is easy to find two points at the origin of the shaft which maximize the projected area of the two triangles. The sum of these areas gives a conservative estimate of the spatial angle error which can be made by not subdividing the shaft any further. 
     416% > Is this description clearer? 
     417 
     418 
     419% partly - I just don't see it in front of me now. How is the result influenced by which triangles are chosen? Is it so that the nearer the intersections, the bigger the error? 
    449420 
    450421 
     
    454425\label{sec:rot3d_summary} 
    455426 
    456  This chapter presented a new method for computing from-region 
    457 visibility in polygonal scenes. The method is based on the concept of 
    458 line space subdivision, approximate occlusion sweep and hierarchical 
    459 visibility tests. The key idea is a hierarchical subdivision of the 
    460 problem-relevant line set using \plucker coordinates and the occlusion 
    461 tree.  \plucker coordinates allow to perform operations on sets of 
    462 lines by means of set theoretical operations on the 5D polyhedra. The 
    463 occlusion tree is used to maintain a union of the polyhedra that 
    464 represent lines occluded from the given region (polygon). 
    465  
    466  We discussed the relation of sets of lines in 3D and the polyhedra in 
    467 \plucker coordinates. We proposed a general size measure for a set of 
    468 lines described by a blocker polyhedron and a size measure designed 
    469 for the computation of PVS. The chapter presented two algorithms for 
    470 construction of the occlusion tree by incremental insertion of blocker 
    471 polyhedra. The occlusion tree was used to test visibility of a given 
    472 polygon or region with respect to the source polygon/region. We 
    473 proposed several optimization of the algorithm that make the approach 
    474 applicable to large scenes. The chapter discussed three possible 
    475 applications of the method: discontinuity meshing, visibility culling, 
    476 and occluder synthesis. 
    477  
    478 The implementation of the method was evaluated on scenes consisting of 
    479 randomly generated triangles, structured scenes, and a real-world 
    480 urban model. The evaluation was focused on the application method for 
    481 PVS computation.  By computing a PVS in a model of the city of Graz it 
    482 was shown that the approach is suitable for visibility preprocessing 
    483 in urban scenes. The principal advantage of the method is that it does 
    484 not rely on various tuning parameters that are characterizing many 
    485 conservative or approximate algorithms. On the other hand the 
    486 exactness of the method requires higher computational demands and 
    487 implementation effort. 
    488  
     427 This chapter presented a mutual visibility tool, which determines 
     428 whether two regions in the scene are mutually visible. 
     429 
     430 First, we have described an advanced exact visibility algorithm which 
     431is a simple modification of an algorithm for computing from-region 
     432visibility in polygonal scenes. The key idea is a hierarchical 
     433subdivision of the problem-relevant line set using \plucker 
     434coordinates and the occlusion tree.  \plucker coordinates allow us to 
     435perform operations on sets of lines by means of set theoretical 
     436operations on the 5D polyhedra. The occlusion tree is used to maintain 
     437a union of the polyhedra that represent lines occluded from the given 
     438region (polygon). We described two algorithms for construction of the 
     439occlusion tree by incremental insertion of blocker polyhedra. The 
     440occlusion tree was used to test visibility of a given polygon or 
     441region with respect to the source polygon/region. We proposed several 
     442optimizations of the algorithm that make the approach applicable to 
     443large scenes. The principal advantage of the exact method over the 
     444conservative and the approximate ones is that it does not rely on 
     445various tuning parameters that characterize many conservative or 
     446approximate algorithms. On the other hand, the exactness of the method 
     447requires higher computational demands and implementation effort. 
     448 
     449Second, we have described a conservative verifier which is an 
     450extension of the algorithm based on strong occluders. The conservative 
     451verifier requires a specification of the shaft size at which the 
     452tested regions are conservatively classified as visible. 
     453 
     454Third, we have described an approximate error bound verifier which 
     455extends the conservative verifier. Instead of using a fixed subdivision 
     456depth, the shaft subdivision is terminated once the estimated error caused 
     457by classifying a shaft as occluded falls below a given threshold. 
     458 
  • trunk/VUT/doc/SciReport/online.tex

    r253 r266  
    4444\begin{figure}%[htb] 
    4545  %\centerline 
    46         \centering \footnotesize 
    47         \begin{tabular}{c} 
    48         \includegraphics[width=0.6\textwidth, draft=\DRAFTFIGS]{images/ogre_terrain} \\ 
    49         %\hline  
    50         \includegraphics[width=0.3\textwidth, draft=\DRAFTFIGS]{images/vis_viewfrustum} \hfill \includegraphics[width=0.3\textwidth, draft=\DRAFTFIGS]{images/vis_chc} \\ 
    51         \end{tabular} 
     46  \centering \footnotesize 
     47  \begin{tabular}{c} 
     48  \includegraphics[width=0.6\textwidth, draft=\DRAFTFIGS]{images/ogre_terrain} \\ 
     49  %\hline  
     50  \includegraphics[width=0.3\textwidth, draft=\DRAFTFIGS]{images/vis_viewfrustum} \hfill \includegraphics[width=0.3\textwidth, draft=\DRAFTFIGS]{images/vis_chc} \\ 
     51  \end{tabular} 
    5252%\label{tab:online_culling_example} 
    5353  \caption{(top) The rendered terrain scene. (bottom) Visualization of the rendered / culled objects. 
    5454    Using view frustum culling (left image) vs. occlusion queries (right image). 
    5555    The yellow boxes show the actually rendered scene objects. The 
    56         red boxes depict the view frustum culled hierarchy nodes, the blue boxes depict the 
    57         occlusion query culled hierarchy nodes.} 
     56    red boxes depict the view frustum culled hierarchy nodes, the blue boxes depict the 
     57    occlusion query culled hierarchy nodes.} 
    5858  \label{fig:online_culling_example} 
    5959\end{figure} 
    6060 
    6161Recent graphics hardware natively supports an \emph{occlusion query} 
    62 to detect the visibility of an object against the current contents of the 
    63 z-buffer.  Although the query itself is processed quickly using the 
    64 raw power of the graphics processing unit (GPU), its result is not 
     62to detect the visibility of an object against the current contents of 
     63the z-buffer.  Although the query itself is processed quickly using 
     64the raw power of the graphics processing unit (GPU), its result is not 
    6565available immediately due to the delay between issuing the query and 
    6666its actual processing in the graphics pipeline. As a result, a naive 
     
    7474simplicity and versatility: the method can be easily integrated in 
    7575existing real-time rendering packages on architectures supporting the 
    76 underlying occlusion query. 
    77 In figure~\ref{fig:online_culling_example}, the same scene (top row) is rendered using view frustum 
    78 culling (visualization in the bottom left image) versus online culling using occlusion queries (visualization  
    79 in the bottom right image). It can be seen that with view frustum culling only many objects are still rendered. 
     76underlying occlusion query.  In 
     77figure~\ref{fig:online_culling_example}, the same scene (top row) is 
     78rendered using view frustum culling (visualization in the bottom left 
     79image) versus online culling using occlusion queries (visualization 
     80in the bottom right image). It can be seen that with view frustum 
     81culling alone, many occluded objects are still rendered. 
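
For readers unfamiliar with the query mechanism, the following fragment
illustrates the basic usage pattern and the latency mentioned above using
the OpenGL~1.5 / ARB\_occlusion\_query interface. It is a generic
illustration rather than part of the proposed method; an extension loader
(e.g.\ GLEW) is assumed to resolve the entry points, and
\texttt{renderBoundingBox()} and \texttt{renderObject()} are placeholders
for the application's drawing code.

\begin{verbatim}
// Naive use of a hardware occlusion query (illustration only).

#include <GL/glew.h>

extern void renderBoundingBox();   // draws the bounding volume (placeholder)
extern void renderObject();        // draws the actual geometry (placeholder)

void queryAndRender()
{
  GLuint query;
  glGenQueries(1, &query);

  // Issue the query: rasterize the bounding box against the current z-buffer
  // with color and depth writes disabled.
  glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
  glDepthMask(GL_FALSE);
  glBeginQuery(GL_SAMPLES_PASSED, query);
  renderBoundingBox();
  glEndQuery(GL_SAMPLES_PASSED);
  glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
  glDepthMask(GL_TRUE);

  // The result is not available immediately; a naive algorithm that
  // busy-waits for each result in this way stalls the pipeline.
  GLint available = 0;
  while (!available)
    glGetQueryObjectiv(query, GL_QUERY_RESULT_AVAILABLE, &available);

  GLuint visiblePixels = 0;
  glGetQueryObjectuiv(query, GL_QUERY_RESULT, &visiblePixels);
  if (visiblePixels > 0)
    renderObject();                // render only if some fragments passed

  glDeleteQueries(1, &query);
}
\end{verbatim}
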
    8082%Using spatial and assuming temporal coherence 
    8183 
     
    105107z-buffer allows one to quickly determine if the geometry in question is 
    106108occluded. To a certain extent this idea is used in the current 
    107 generation of graphics hardware by applying early z-tests of 
    108 fragments in the graphics pipeline (e.g., Hyper-Z technology of ATI or 
    109 Z-cull of NVIDIA). However, the geometry still needs to be sent to the 
    110 GPU, transformed, and coarsely rasterized even if it is later 
    111 determined invisible. 
     109generation of graphics hardware by applying early z-tests of fragments 
     110in the graphics pipeline (e.g., Hyper-Z technology of ATI or Z-cull of 
     111NVIDIA). However, the geometry still needs to be sent to the GPU, 
     112transformed, and coarsely rasterized even if it is later determined 
     113invisible. 
    112114 
    113115 Zhang~\cite{EVL-1997-163} proposed hierarchical occlusion maps, which 
     
    774776%shown in the accompanying video. 
    775777 
    776 The complete power plant model is quite challenging even to load into memory, 
    777 but on the other hand it offers good 
    778 occlusion. This scene is an interesting candidate for testing not only 
    779 due to its size, but also due to significant changes in visibility and depth complexity in 
     778The complete power plant model is quite challenging even to load into 
     779memory, but on the other hand it offers good occlusion. This scene is 
     780an interesting candidate for testing not only due to its size, but 
     781also due to significant changes in visibility and depth complexity in 
    780782its different parts. 
    781783 
     
    915917\caption{Statistics for the three test scenes. VFC is rendering with only view-frustum culling, S\&W is the 
    916918  hierarchical stop and wait method, CHC is our new method, and Ideal 
    917 is a perfect method with respect to the given hierarchy. All values are averages over 
    918 all frames (including the speedup).} 
     919is a perfect method with respect to the given hierarchy. All values 
     920are averages over all frames (including the speedup).} 
    919921\label{tab:averages} 
    920922\end{table*} 
  • trunk/VUT/doc/SciReport/sampling.tex

    r255 r266  
    11\chapter{Global Visibility Sampling Tool} 
    2  
    3  
    4 \section{Introduction} 
    52 
    63 
     
    2118conservative or error bound aggressive solution. The choice of the 
    2219particular verifier is left to the user in order to select the best 
    23 for a particular scene, application context and time constrains. For 
    24 example, in scenes like a forest an error bound aggresive visibility 
    25 can be the best compromise between the resulting size of the PVS (and 
    26 framerate) and the visual quality. The exact or conservative algorithm 
    27 can however be chosen for urban scenes where of even small objects can 
    28 be more distructing for the user. The mutual visibility tool will be 
    29 described in the next chapter. 
     20one for a particular scene, application context and time 
     21constraints. For example, in scenes like a forest an error bound 
     22aggressive solution can be the best compromise between the resulting 
     23size of the PVS (and framerate) and the visual quality. The exact or 
     24conservative algorithm can however be chosen for urban scenes where the 
     25omission of even small objects can be more distracting for the 
     26user. The mutual visibility tool will be described in the next chapter. 
    3027 
    3128\end{itemize} 
     
    3532subdivided into viewcells and for each view cell the set of visible 
    3633objects --- potentially visible set (PVS) is computed. This framework 
    37 has bee used for conservative, aggresive and exact algorithms. 
     34has been used for conservative, aggressive and exact algorithms. 
    3835 
    3936We propose a different strategy which has several advantages for 
     
    4138based on the following fundamental ideas: 
    4239\begin{itemize} 
    43 \item Replace the roles of view cells and objects 
    4440\item Compute progressive global visibility instead of sequential from-region visibility 
     41\item Replace the roles of view cells and objects for some parts of the computation 
    4542\end{itemize} 
    4643 
     
    5249\label{VFR3D_RELATED_WORK} 
    5350 
    54  
    55  Below we briefly discuss the related work on visibility preprocessing 
     51Below we briefly discuss the related work on visibility preprocessing 
    5652in several application areas. In particular we focus on computing 
    5753from-region which has been a core of most previous visibility 
     
    6359The first algorithms dealing with from-region visibility belong to the 
    6460area of computer vision. The {\em aspect 
    65 graph}~\cite{Gigus90,Plantinga:1990:RTH, Sojka:1995:AGT} partitions 
     61  graph}~\cite{Gigus90,Plantinga:1990:RTH, Sojka:1995:AGT} partitions 
    6662the view space into cells that group viewpoints from which the 
    6763projection of the scene is qualitatively equivalent. The aspect graph 
     
    237233attenuation, reflection and time delays. 
    238234 
    239 \section{Algorithm Setup} 
     235\section{Algorithm Description} 
     236 
     237This section first describes the setup of the global visibility 
     238sampling algorithm. In particular we describe the view cell 
     239representation and the novel concept of from-object based 
     240visibility. Then we outline the different visibility sampling 
     241strategies. 
    240242 
    241243\subsection{View Cell Representation} 
     
    247249\item optimized for viewcell - ray intersection. 
    248250\item flexible, i.e., it can represent arbitrary geometry. 
    249 \item naturally suited for an hierarchical approach. %(i.e., there is a root view cell containing all others) 
     251\item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others) 
    250252\end{itemize} 
    251253 
     
    257259subdivide a BSP leaf view cell quite easily. 
    258260 
    259 Currently we use two approaches to generate the initial BSP view cell tree. 
     261Currently we use two approaches to generate the initial BSP view cell 
     262tree (a schematic node layout is sketched after the following list). 
    260263 
    261264\begin{itemize} 
     265 
    262266\item We use a number of dedicated input view cells. As input view 
    263267cell any closed mesh can be applied. The only requirement is that the 
     
    270274(i.e., add a pointer to the view cell). Hence a number of leaves can 
    271275be associated with the same input view cell. 
     276 
    272277\item We apply the BSP tree subdivision to the scene geometry. When 
    273278the subdivision terminates, the leaf nodes also represent the view 
    274279cells. 
     280 
    275281\end{itemize} 
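
For illustration, a minimal node layout for such a BSP view cell tree could
look as follows. The structure is a sketch only; the names do not
correspond to the actual classes of the tool.

\begin{verbatim}
// Illustrative sketch of a BSP view cell tree node (not the tool's classes).
// Leaves reference a view cell; several leaves may share the same input
// view cell, while leaves obtained from the geometry-based subdivision
// carry their own view cell.

struct Plane3   { float a, b, c, d; };        // splitting plane
struct ViewCell { /* PVS data, associated mesh, ... */ };

struct BspNode {
  Plane3    splitPlane;                       // valid for interior nodes
  BspNode  *front    = nullptr;               // child in the positive halfspace
  BspNode  *back     = nullptr;               // child in the negative halfspace
  ViewCell *viewCell = nullptr;               // set in leaves only; may be
                                              // shared by several leaves
  bool isLeaf() const { return front == nullptr && back == nullptr; }
};
\end{verbatim}
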
    276282 
    277283\subsection{From-object based visibility} 
    278284 
    279 Our framework is based on the idea of sampling visibility by casting 
     285 Our framework is based on the idea of sampling visibility by casting 
    280286rays through the scene and collecting their contributions. A 
    281287visibility sample is computed by casting a ray from an object towards 
     
    304310 
    305311 
    306 \section{Basic Randomized Sampling} 
     312\subsection{Basic Randomized Sampling} 
    307313 
    308314 
     
    319325 
    320326 
    321 The described algorithm accounts for the irregular distribution of the 
     327 The described algorithm accounts for the irregular distribution of the 
    322328objects: more samples are placed at locations containing more 
    323329objects. Additionally every object is sampled many times depending on 
     
    327333 
    328334 
    329 \section{Accounting for View Cell Distribution} 
    330  
    331 The first modification to the basic algorithm accounts for irregular 
    332 distribution of the viewcells. Such a case in common for example in 
     335\subsection{Accounting for View Cell Distribution} 
     336 
     337 The first modification to the basic algorithm accounts for irregular 
     338distribution of the viewcells. Such a case is common for example in 
    333339urban scenes where the viewcells are mostly distributed in a 
    334340horizontal direction and more viewcells are placed at denser parts of 
    335341the city. The modification involves replacing the uniformly 
    336342distributed ray direction by directions distributed according to the 
    337 local view cell directional density. It means placing more samples at 
    338 directions where more view cells are located. We select a random 
     343local view cell directional density. This means placing more samples at 
     344directions where more view cells are located: We select a random 
    339345viewcell which lies in the halfspace given by the surface normal at the 
    340346chosen point. We pick a random point inside the view cell and cast a 
     
    342348 
    343349 
    344 \section{Accounting for Visibility Events} 
     350\subsection{Accounting for Visibility Events} 
    345351 
    346352 Visibility events correspond to appearance and disappearance of 
    347353 objects with respect to a moving view point. In polygonal scenes the 
    348  events defined by event surfaces defined by three distinct scene 
    349  edges. Depending on the edge configuration we distinguish between 
    350  vertex-edge events (VE) and tripple edge (EEE) events. The VE surfaces 
    351  are planar planes whereas the EEE are in general quadratic surfaces. 
     354events are defined by event surfaces given by three distinct scene 
     355edges. Depending on the edge configuration we distinguish between 
     356vertex-edge (VE) events and triple-edge (EEE) events. The VE event 
     357surfaces are planes whereas the EEE event surfaces are in general quadratic surfaces. 
    352358 
    353359 To account for these events we explicitly place samples passing by the 
    354  object edges which are directed to edges and/or vertices of other 
    355  objects. 
    356  
    357  
     360object edges which are directed towards edges and/or vertices of other 
     361objects, as sketched below. 
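
A schematic sketch of this sample placement is given below. It only
illustrates the vertex--edge (VE) case; \texttt{castSample()} is a
hypothetical placeholder for the sampling step of the tool, and handling
EEE events would require directing the rays by pairs of foreign edges
instead of single vertices.

\begin{verbatim}
// Illustrative sketch: samples accounting for vertex-edge (VE) events.
// Rays originate at points on the edges of the sampled object and are
// directed towards vertices of the other objects. All names are placeholders.

#include <vector>

struct Vec3   { float x, y, z; };
struct Edge   { Vec3 a, b; };
struct Object { std::vector<Edge> edges; std::vector<Vec3> vertices; };

// Point on the edge at parameter t in [0,1].
static Vec3 pointOnEdge(const Edge &e, float t) {
  return { e.a.x + t * (e.b.x - e.a.x),
           e.a.y + t * (e.b.y - e.a.y),
           e.a.z + t * (e.b.z - e.a.z) };
}

// Cast a sample ray from origin towards target and record its PVS
// contribution (placeholder for the sampling step of the tool).
void castSample(const Vec3 &origin, const Vec3 &target);

// Place event-aware samples for one object.
void sampleVisibilityEvents(const Object &object,
                            const std::vector<Object> &others,
                            int samplesPerEdge) {
  for (const Edge &e : object.edges)
    for (int i = 0; i < samplesPerEdge; ++i) {
      const float t = (i + 0.5f) / samplesPerEdge;  // points along the edge
      const Vec3 origin = pointOnEdge(e, t);
      for (const Object &o : others)
        for (const Vec3 &v : o.vertices)
          castSample(origin, v);                    // ray towards a vertex
    }
}
\end{verbatim}
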
     362 
     363 