\chapter{Introduction}%\chapter
\label{chap:introduction}

\section{Structure of the report}

The report consists of two introductory chapters, which provide a theoretical background for the description of the algorithms, and three chapters dealing with the actual visibility algorithms. This chapter provides an introduction to visibility by means of a taxonomy of visibility problems and algorithms. The taxonomy is used to classify the visibility algorithms described later in the report. Chapter~\ref{chap:analysis} provides an analysis of visibility in 2D and 3D polygonal scenes. This analysis also includes a formal description of visibility using \plucker coordinates of lines. \plucker coordinates are exploited later in the algorithms for mutual visibility verification (Chapter~\ref{chap:mutual}). Chapter~\ref{chap:online} describes a visibility culling algorithm used to implement the online visibility culling module. This algorithm can be used to accelerate rendering of fully dynamic scenes using recent graphics hardware. Chapter~\ref{chap:sampling} describes the global visibility sampling algorithm which forms the core of the PVS computation module. This chapter also describes the view space partitioning algorithms used in close relation with the PVS computation. Finally, Chapter~\ref{chap:mutual} describes the mutual visibility verification algorithms, which are used by the PVS computation module to generate the final solution for precomputed visibility.

%% summarize the state of the art
%% visibility algorithms and their relation to the proposed taxonomy of
%% visibility problems. The second part of the survey should serve as a
%% catalogue of visibility algorithms that is indexed by the proposed
%% taxonomy of visibility problems.

\section{Domain of visibility problems}
\label{sec:prob_domain}

Computer graphics deals with visibility problems in the context of 2D, \m25d, or 3D scenes. The actual problem domain is given by restricting the set of rays for which visibility should be determined.
Below we list commonly used problem domains and the corresponding domain restrictions:

\begin{enumerate}
\item {\em visibility along a line}
  \begin{enumerate}
  \item line
  \item ray (origin + direction)
  \end{enumerate}
\newpage
\item {\em visibility from a point} ({\em from-point visibility})
  \begin{enumerate}
  \item point
  \item point + restricted set of rays
    \begin{enumerate}
    \item point + raster image (discrete form)
    \item point + beam (continuous form)
    \end{enumerate}
  \end{enumerate}
\item {\em visibility from a line segment} ({\em from-segment visibility})
  \begin{enumerate}
  \item line segment
  \item line segment + restricted set of rays
  \end{enumerate}
\item {\em visibility from a polygon} ({\em from-polygon visibility})
  \begin{enumerate}
  \item polygon
  \item polygon + restricted set of rays
  \end{enumerate}
\item {\em visibility from a region} ({\em from-region visibility})
  \begin{enumerate}
  \item region
  \item region + restricted set of rays
  \end{enumerate}
\item {\em global visibility}
  \begin{enumerate}
  \item no further input (all rays in the scene)
  \item restricted set of rays
  \end{enumerate}
\end{enumerate}

The domain restrictions can be given independently of the dimension of the scene, but the impact of the restrictions differs depending on the scene dimension. For example, visibility from a polygon is equivalent to visibility from a (polygonal) region in 2D, but not in 3D.

%*****************************************************************************

\section{Dimension of the problem-relevant line set}

The six domains of visibility problems stated in Section~\ref{sec:prob_domain} can be characterized by the {\em problem-relevant line set} denoted ${\cal L}_R$. We give a classification of visibility problems according to the dimension of the problem-relevant line set. We discuss why this classification is important for understanding the nature of the given visibility problem and for identifying its relation to other problems.
For the following discussion we assume that a line in {\em primal space} can be mapped to a point in {\em line space}. For the purposes of the classification we define the line space as a vector space where a point corresponds to a line in the primal space\footnote{A classical mathematical definition says: Line space is a direct product of two Hilbert spaces~\cite{Weisstein:1999:CCE}. However, this definition differs from the common understanding of line space in computer graphics~\cite{Durand99-phd}.}.

\subsection{Parametrization of lines in 2D}

There are two independent parameters that specify a 2D line and thus the corresponding set of lines is two-dimensional. There is a natural duality between lines and points in 2D. For example, a line expressed as $l:y=ax+c$ is dual to a point $p=(-c,a)$. This particular duality cannot handle vertical lines.
%See Figure~\ref{fig:duality2d} for an example of other dual mappings in the plane.
To avoid the singularity in the mapping, a line $l:ax+by+c=0$ can be represented as a point $p_l=(a,b,c)$ in 2D projective space ${\cal P}^2$~\cite{Stolfi:1991:OPG}. By multiplying $p_l$ by a non-zero scalar we obtain a vector that represents the same line $l$. More details about this singularity-free mapping will be discussed in Chapter~\ref{chap:analysis}.

%\begin{figure}[!htb]
%\centerline{
%\includegraphics[width=0.9\textwidth,draft=\DRAFTFIGS]{figs/duality2d}}
%\caption{Duality between points and lines in 2D.}
%\label{fig:duality2d}
%\end{figure}

To sum up: In 2D there are two degrees of freedom in the description of a line and the corresponding line space is two-dimensional. The problem-relevant line set ${\cal L}_R$ then forms a $k$-dimensional subset of ${\cal P}^2$, where $0\leq k \leq 2$. An illustration of the concept of the problem-relevant line set is depicted in Figure~\ref{fig:classes}.
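The singularity-free mapping can be made concrete by a small sketch (an illustrative Python fragment; the helper names are not from the text): a 2D line is stored as a homogeneous triple $(a,b,c)$, incidence of a point reduces to a dot product, and vertical lines pose no problem.

```python
# Sketch of the mapping l: ax + by + c = 0  <->  p_l = (a, b, c),
# defined up to a non-zero scalar factor.

def line_through(p, q):
    """Homogeneous coordinates (a, b, c) of the line through 2D points p, q."""
    (px, py), (qx, qy) = p, q
    return (py - qy, qx - px, px * qy - qx * py)

def incident(line, point, eps=1e-9):
    """A point (x, y) lies on the line (a, b, c) iff a*x + b*y + c = 0."""
    a, b, c = line
    x, y = point
    return abs(a * x + b * y + c) < eps

# A vertical line x = 1: it has no slope-intercept form y = ax + c,
# but the homogeneous triple represents it without any singularity.
v = line_through((1.0, 0.0), (1.0, 5.0))
assert incident(v, (1.0, -3.0))

# Multiplying the triple by a non-zero scalar represents the same line.
assert incident(tuple(2.0 * t for t in v), (1.0, -3.0))
```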
%\begin{figure}[htb]
%\centerline{
%\includegraphics[width=0.8\textwidth,draft=\DRAFTFIGS]{figs/classes}}
%\caption{The problem-relevant set of lines in 2D. The ${\cal L}_R$ for
%visibility along a line is formed by a single point that is a mapping
%of the given line. The ${\cal L}_R$ for visibility from a point $p$ is
%formed by points lying on a line. This line is a dual mapping of the
%point $p$. ${\cal L}_R$ for visibility from a line segment is formed
%by a 2D region bounded by dual mappings of endpoints of the given
%segment.}
%\label{fig:classes}
%\end{figure}

\subsection{Parametrization of lines in 3D}

Lines in 3D form a four-parametric space~\cite{p-rsls-97}. A line intersecting a given scene can be described by two points on a sphere enclosing the scene. Since the surface of the sphere is a two-parametric space, we need four parameters to describe the line.

The {\em two plane parametrization} of 3D lines describes a line by its points of intersection with the given two planes~\cite{Gu:1997:PGT}. This parametrization exhibits a singularity since it cannot describe lines parallel to these planes. See Figure~\ref{fig:3dlines} for illustrations of the spherical and the two plane parametrizations.

%\begin{figure}[htb]
%\centerline{
%\includegraphics[width=0.78\textwidth,draft=\DRAFTFIGS]{figs/3dlines}}
%\caption{Parametrization of lines in 3D. (left) A line can be
% described by two points on a sphere enclosing the scene. (right) The
% two plane parametrization describes a line by point of intersection
% with two given planes.}
%\label{fig:3dlines}
%\end{figure}

Another common parametrization of 3D lines uses the {\em \plucker coordinates}. \plucker coordinates of an oriented 3D line form a six-tuple that can be understood as a point in 5D oriented projective space~\cite{Stolfi:1991:OPG}. There are six coordinates in the \plucker representation of a line although we know that the ${\cal L}_R$ is four-dimensional.
This can be explained as follows:
\begin{itemize}
\item Firstly, \plucker coordinates are {\em homogeneous coordinates} of a 5D point. By multiplying the coordinates by any positive scalar we obtain a mapping of the same line.
\item Secondly, only a 4D subset of the 5D oriented projective space corresponds to real lines. This subset is a 4D ruled quadric called the {\em \plucker quadric} or the {\em Grassmann manifold}~\cite{Stolfi:1991:OPG,Pu98-DSGIV}.
\end{itemize}

Although the \plucker coordinates need more coefficients, they have no singularity and preserve some linearities: lines intersecting a set of lines in 3D correspond to an intersection of 5D hyperplanes. More details on \plucker coordinates will be discussed in Chapter~\ref{chap:analysis} and Chapter~\ref{chap:mutual}, where they are used to solve the from-region visibility problem.

To sum up: In 3D there are four degrees of freedom in the description of a line and thus the corresponding line space is four-dimensional. By fixing certain line parameters (e.g.\ the direction), the problem-relevant line set, denoted ${\cal L}_R$, forms a $k$-dimensional subset of ${\cal P}^4$, where $0\leq k \leq 4$.

\subsection{Visibility along a line}

The simplest visibility problems deal with visibility along a single line. The problem-relevant line set is zero-dimensional, i.e.\ it is fully specified by the given line. A typical example of a visibility along a line problem is {\em ray shooting}.

A similar problem to ray shooting is the {\em point-to-point} visibility. The point-to-point visibility determines whether the line segment between two points is occluded, i.e.\ whether it intersects an opaque object in the scene. Point-to-point visibility provides a visibility classification (answer 1a), whereas ray shooting determines a visible object (answer 2a) and/or a point of intersection (answer 3a). Note that the {\em point-to-point} visibility can be solved easily by means of ray shooting.
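The reduction of point-to-point visibility to ray shooting can be sketched as follows (an illustrative Python fragment, not the report's implementation; the scene of opaque spheres and all helper names are invented for illustration): a ray is shot from the first point towards the second, and the points are mutually visible iff no hit lies inside the segment.

```python
import math

# Sketch: point-to-point visibility solved by ray shooting against a
# scene of opaque spheres, each given as (center, radius).

def ray_sphere_t(origin, direction, center, radius):
    """Smallest t > 0 with origin + t*direction on the sphere, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    for t in ((-b - math.sqrt(disc)) / (2.0 * a),
              (-b + math.sqrt(disc)) / (2.0 * a)):
        if t > 1e-9:          # first intersection in front of the origin
            return t
    return None

def point_to_point_visible(p, q, spheres):
    """Shoot a ray from p towards q; occluded iff a hit lies before q (t < 1)."""
    direction = [qi - pi for pi, qi in zip(p, q)]
    for center, radius in spheres:
        t = ray_sphere_t(p, direction, center, radius)
        if t is not None and t < 1.0 - 1e-9:
            return False
    return True

scene = [((0.0, 0.0, 0.0), 1.0)]   # a single opaque unit sphere at the origin
assert not point_to_point_visible((-3.0, 0.0, 0.0), (3.0, 0.0, 0.0), scene)
assert point_to_point_visible((-3.0, 2.0, 0.0), (3.0, 2.0, 0.0), scene)
```

Note that the same ray shooting routine answers both problems: the constructive answer (the nearest hit) subsumes the yes/no classification.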
Another constructive visibility along a line problem is determining the {\em maximal free line segments} on a given line. See Figure~\ref{fig:val} for an illustration of typical visibility along a line problems.

%\begin{figure}[htb]
%\centerline{
%\includegraphics[width=0.85\textwidth,draft=\DRAFTFIGS]{figs/val}}
%\caption{Visibility along a line. (left) Ray shooting. (center) Point-to-point visibility. (right) Maximal free line segments between two points.}
%\label{fig:val}
%\end{figure}

\subsection{Visibility from a point}

Lines intersecting a point in 3D can be described by two parameters. For example, the lines can be expressed by an intersection with a unit sphere centered at the given point. The most common parametrization describes a line by a point of intersection with a given viewport. Note that this parametrization accounts only for a subset of lines that intersect the viewport (see Figure~\ref{fig:vfp}).

%\begin{figure}[htb]
%\centerline{
%\includegraphics[width=0.6\textwidth,draft=\DRAFTFIGS]{figs/vfp}}
%\caption{Visibility from a point. Lines intersecting a point can be described by a
% point of intersection with the given viewport.}
%\label{fig:vfp}
%\end{figure}

In 3D the problem-relevant line set ${\cal L}_R$ is a 2D subset of the 4D line space. In 2D the ${\cal L}_R$ is a 1D subset of the 2D line space. The typical visibility from a point problem is visible surface determination. Due to its importance, visible surface determination is covered by the majority of existing visibility algorithms. Another visibility from a point problem is the construction of the {\em visibility map} or the {\em point-to-region visibility}, which classifies a region as visible, invisible, or partially visible with respect to the given point.

\subsection{Visibility from a line segment}

Lines intersecting a line segment in 3D can be described by three parameters.
One parameter fixes the intersection of the line with the segment; the other two express the direction of the line. The problem-relevant line set ${\cal L}_R$ is three-dimensional and it can be understood as a 2D cross section of ${\cal L}_R$ swept along the given line segment (see Figure~\ref{fig:vls}).

%\begin{figure}[htb]
%\centerline{
%\includegraphics[width=0.8\textwidth,draft=\DRAFTFIGS]{figs/vls}}
%\caption{Visibility from a line segment. (left) Line segment, a
% spherical object $O$, and its projections $O^*_0$, $O^*_{0.5}$, $O^*_{1}$
% with respect to the three points on the line segment. (right)
% A possible parametrization of lines that stacks up 2D planes.
% Each plane corresponds to mappings of lines intersecting a given
% point on the line segment.}
%\label{fig:vls}
%\end{figure}

In 2D lines intersecting a line segment form a two-dimensional problem-relevant line set. Thus for the 2D case the ${\cal L}_R$ is a two-dimensional subset of the 2D line space.

\subsection{Visibility from a region}

Visibility from a region (or from-region visibility) involves the most general visibility problems. In 3D the ${\cal L}_R$ is a 4D subset of the 4D line space. In 2D the ${\cal L}_R$ is a 2D subset of the 2D line space. Consequently, in the presented classification visibility from a region in 2D is equivalent to visibility from a line segment in 2D.

A typical visibility from a region problem is the problem of {\em region-to-region} visibility that aims to determine whether the two given regions in the scene are visible, invisible, or partially visible (see Figure~\ref{fig:vfr}). Another visibility from a region problem is the computation of a {\em potentially visible set} (PVS) with respect to a given view cell. The PVS consists of the objects that are potentially visible from at least one point inside the view cell.
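The notion of a PVS can be illustrated by a small sampling sketch (a hypothetical Python fragment; the 2D scene of vertical wall segments and all names are invented for illustration and do not represent the report's global visibility sampling module): rays cast from random points of a view cell collect the objects they hit first, yielding a subset of the exact PVS.

```python
import random

# Illustrative scene: opaque vertical walls (x, y_lo, y_hi) in 2D.
walls = [(2.0, -1.0, 1.0),   # wall 0: near, partly blocks the far wall
         (4.0, -3.0, 3.0)]   # wall 1: far, visible past wall 0's endpoints

def first_hit(origin, direction, walls):
    """Index of the nearest wall hit by the ray, or None."""
    ox, oy = origin
    dx, dy = direction
    best_t, best_i = float("inf"), None
    for i, (x, y_lo, y_hi) in enumerate(walls):
        if abs(dx) < 1e-12:
            continue                     # ray parallel to the walls
        t = (x - ox) / dx
        if t <= 1e-9 or t >= best_t:
            continue                     # behind the origin or farther away
        y = oy + t * dy
        if y_lo <= y <= y_hi:
            best_t, best_i = t, i
    return best_i

def sampled_pvs(cell_y0, cell_y1, walls, n_samples=20000, seed=0):
    """View cell = segment x = 0, cell_y0 <= y <= cell_y1; rays go to x > 0."""
    rng = random.Random(seed)
    pvs = set()
    for _ in range(n_samples):
        origin = (0.0, rng.uniform(cell_y0, cell_y1))
        direction = (1.0, rng.uniform(-2.0, 2.0))
        hit = first_hit(origin, direction, walls)
        if hit is not None:
            pvs.add(hit)
    return pvs

# Both walls end up in the PVS: wall 1 is hidden behind wall 0 only
# for rays passing between wall 0's endpoints.
assert sampled_pvs(-0.5, 0.5, walls) == {0, 1}
```

Since only sampled rays contribute, such a sketch can miss objects visible through small gaps; this is exactly the aggressive behaviour discussed in the classification of algorithm accuracy below.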
Further visibility from a region problems include the computation of form factors between two polygons, soft shadow computation, or discontinuity meshing.

%\begin{figure}[htb]
%\centerline{
%\includegraphics[width=0.6\textwidth,draft=\DRAFTFIGS]{figs/vfr}}
%\caption{Visibility from a region --- an example of the region-to-region
% visibility. Two regions and two occluders $A$, $B$
% in a 2D scene. In line space the region-to-region visibility can be
% solved by subtracting the sets of lines $A^*$ and $B^*$
% intersecting objects $A$ and $B$ from the lines intersecting both
% regions.}
%\label{fig:vfr}
%\end{figure}

\subsection{Global visibility}

According to the classification, the global visibility problems can be seen as an extension of the from-region visibility problems. The dimension of the problem-relevant line set is the same ($k=2$ for 2D and $k=4$ for 3D scenes). Nevertheless, the global visibility problems typically deal with a much larger set of rays, i.e.\ all rays that penetrate the scene. Additionally, there is no given set of reference points from which visibility is studied and hence there is no given priority ordering of objects along each particular line from ${\cal L}_R$. Therefore, an additional parameter must be used to describe visibility (the visible object) along each ray.

\subsection{Summary}

The classification of visibility problems according to the dimension of the problem-relevant line set is summarized in Table~\ref{table:classification3D}. This classification provides a means of understanding how difficult it is to compute, describe, and maintain visibility for a particular class of problems. For example, a data structure representing the visible or occluded parts of the scene for the visibility from a point problem needs to partition a 2D ${\cal L}_R$ into visible and occluded sets of lines.
This observation conforms with the traditional visible surface algorithms: they partition a 2D viewport into empty/nonempty regions and associate each nonempty region (pixel) with a visible object. In this case the viewport represents the ${\cal L}_R$, as each point of the viewport corresponds to a line through the viewpoint. To analytically describe visibility from a region, a subdivision of the 4D ${\cal L}_R$ must be performed. This is much more difficult than the 2D subdivision. Moreover, the description of visibility from a region involves non-linear subdivisions of both primal space and line space even for polygonal scenes~\cite{Teller:1992:CAA,Durand99-phd}.

\begin{table*}[htb]
\begin{small}
\begin{center}
\begin{tabular}{|l|c|l|}
\hline
\multicolumn{3}{|c|}{2D} \\
\hline
\mc{domain} & $d({\cal L}_R)$ & \mc{problems} \\
\hline
\hline
\begin{tabular}{l}visibility along a line\end{tabular} & 0 & \begin{tabular}{l}ray shooting, point-to-point visibility\end{tabular}\\
\hline
\begin{tabular}{l}visibility from a point\end{tabular} & 1 & \begin{tabular}{l}view around a point, point-to-region visibility\end{tabular}\\
\hline
\begin{tabular}{l} visibility from a line segment \\ visibility from a region \\ global visibility \end{tabular} & 2 & \begin{tabular}{l} region-to-region visibility, PVS \end{tabular}\\
\hline
\hline
\multicolumn{3}{|c|}{3D} \\
\hline
\mc{domain} & $d({\cal L}_R)$ & \mc{problems} \\
\hline
\hline
\begin{tabular}{l}visibility along a line\end{tabular} & 0 & \begin{tabular}{l} ray shooting, point-to-point visibility \end{tabular}\\
\hline
\begin{tabular}{l}visibility from a point in a surface\end{tabular} & 1 & \begin{tabular}{l} see visibility from a point in 2D \end{tabular}\\
\hline
\begin{tabular}{l}visibility from a point\end{tabular} & 2 & \begin{tabular}{l} visible (hidden) surfaces, point-to-region visibility,\\ visibility map, hard shadows \end{tabular} \\
\hline
\begin{tabular}{l}visibility from a line segment\end{tabular} & 3 & \begin{tabular}{l}
segment-to-region visibility (rare) \end{tabular}\\
\hline
\begin{tabular}{l}visibility from a region\\global visibility\end{tabular} & 4 & \begin{tabular}{l} region-to-region visibility, PVS, aspect graph,\\ soft shadows, discontinuity meshing \end{tabular} \\
\hline
\end{tabular}
\end{center}
\end{small}
\caption{Classification of visibility problems in 2D and 3D according to the dimension of the problem-relevant line set.}
\label{table:classification3D}
\end{table*}

\section{Classification of visibility algorithms}

The taxonomy of visibility problems groups similar visibility problems in the same class. A visibility problem can be solved by means of various visibility algorithms. A visibility algorithm poses further restrictions on the input and output data. These restrictions can be seen as a more precise definition of the visibility problem that is solved by the algorithm.

Above we classified visibility problems according to the problem domain and the desired answers. In this section we provide a classification of visibility algorithms according to other important criteria characterizing a particular visibility algorithm.

\subsection{Scene restrictions}
\label{sec:scene_restrictions}

Visibility algorithms can be classified according to the restrictions they pose on the scene description. The type of the scene description influences the difficulty of solving the given problem: it is simpler to implement an algorithm computing a visibility map for scenes consisting of triangles than for scenes with NURBS surfaces. We list common restrictions on the scene primitives suitable for visibility computations:

\begin{itemize}
\item triangles, convex polygons, concave polygons,
\item volumetric data,
\item points,
\item general parametric, implicit, or procedural surfaces.
\end{itemize}

Some attributes of scene objects further increase the complexity of the visibility computation:

\begin{itemize}
\item transparent objects,
\item dynamic objects.
\end{itemize}

The majority of analytic visibility algorithms deals with static polygonal scenes without transparency. The polygons are often subdivided into triangles for easier manipulation and representation.

\subsection{Accuracy}
\label{sec:accuracy}

Visibility algorithms can be classified according to the accuracy of the result as:
\begin{itemize}
\item exact,
\item conservative,
\item aggressive,
\item approximate.
\end{itemize}

An exact algorithm provides an exact analytic result for the given problem (in practice, however, this result is typically influenced by the finite precision of the floating point arithmetic). A conservative algorithm overestimates visibility, i.e.\ it never misses any visible object, surface, or point. An aggressive algorithm always underestimates visibility, i.e.\ it never reports an invisible object, surface, or point as visible. An approximate algorithm provides only an approximation of the result, i.e.\ it can overestimate visibility for one input and underestimate visibility for another input.

The classification according to the accuracy is best illustrated by the computation of a PVS: an exact algorithm computes the exact PVS. A conservative algorithm computes a superset of the exact PVS. An aggressive algorithm determines a subset of the exact PVS. An approximate algorithm computes an approximation of the exact PVS that is neither its subset nor its superset for all possible inputs.

A more precise quality measure of algorithms computing PVSs can be expressed by the {\em relative overestimation} and the {\em relative underestimation} of the PVS with respect to the exact PVS.
We can define a quality measure of an algorithm $A$ on input $I$ as a tuple $\mbi{Q}^A(I)$:

\begin{eqnarray}
\mbi{Q}^A(I) & = & (Q^A_o(I), Q^A_u(I)), \qquad I \in {\cal D} \\
Q^A_o(I) & = & {|S^A(I) \setminus S^{\cal E}(I)| \over |S^{\cal E}(I)|} \\
Q^A_u(I) & = & {|S^{\cal E}(I) \setminus S^A(I)| \over |S^{\cal E}(I)|}
\end{eqnarray}

where $I$ is an input from the input domain ${\cal D}$, $S^A(I)$ is the PVS determined by the algorithm $A$ for input $I$, and $S^{\cal E}(I)$ is the exact PVS for the given input. $Q^A_o(I)$ expresses the {\em relative overestimation} of the PVS, $Q^A_u(I)$ is the {\em relative underestimation}.

The expected quality of the algorithm over all possible inputs can be given as:

\begin{eqnarray}
Q^A & = & E[\| \mbi{Q}^A(I) \|] \\
& = & \sum_{I \in {\cal D}} f(I) \cdot \sqrt{Q^A_o(I)^2 + Q^A_u(I)^2}
\end{eqnarray}

where $f(I)$ is the probability density function expressing the probability of occurrence of input $I$. The quality measure $\mbi{Q}^A(I)$ can be used to classify a PVS algorithm into one of the four accuracy classes according to Section~\ref{sec:accuracy}:

\begin{enumerate}
\item exact\\
$\forall I \in {\cal D} : Q_o^A(I) = 0 \wedge Q_u^A(I) = 0$
\item conservative\\
$\forall I \in {\cal D} : Q_o^A(I) \geq 0 \wedge Q_u^A(I) = 0$
\item aggressive\\
$\forall I \in {\cal D} : Q_o^A(I) = 0 \wedge Q_u^A(I) \geq 0$
\item approximate\\
$\exists I_j, I_k \in {\cal D}: Q_o^A(I_j) > 0 \wedge Q_u^A(I_k) > 0$
\end{enumerate}

\subsection{Solution space}
\label{sec:solspace}

The solution space is the domain in which the algorithm determines the desired result. Note that the solution space does not need to match the domain of the result. The algorithms can be classified as:
\begin{itemize}
\item discrete,
\item continuous,
\item hybrid.
\end{itemize}

A discrete algorithm solves the problem using a discrete solution space; the solution is typically an approximation of the result.
A continuous algorithm works in a continuous domain and often computes an analytic solution to the given problem. A hybrid algorithm uses both the discrete and the continuous solution space.

The classification according to the solution space is easily demonstrated on visible surface algorithms: the z-buffer~\cite{Catmull:1975:CDC} is a common example of a discrete algorithm, whereas the Weiler-Atherton algorithm~\cite{Weiler:1977:HSR} is an example of a continuous one. A hybrid solution space is used by scan-line algorithms that solve the problem in discrete steps (scan-lines) and provide a continuous solution (spans) for each step.

Further classification reflects the semantics of the solution space. According to this criterion we can classify the algorithms as:
\begin{itemize}
\item primal space (object space),
\item line space,
  \begin{itemize}
  \item image space,
  \item general,
  \end{itemize}
\item hybrid.
\end{itemize}

A primal space algorithm solves the problem by studying the visibility between objects without a transformation to a different solution space. A line space algorithm studies visibility using a transformation of the problem to line space. Image space algorithms can be seen as an important subclass of line space algorithms for solving visibility from a point problems in 3D. These algorithms cover all visible surface algorithms and many visibility culling algorithms. They solve visibility in a given image plane that represents the problem-relevant line set ${\cal L}_R$: each ray originating at the viewpoint corresponds to a point in the image plane.

The described classification differs from the sometimes mentioned understanding of image space and object space algorithms that incorrectly considers all image space algorithms discrete and all object space algorithms continuous.
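The quality measure $\mbi{Q}^A(I)$ and the accuracy classes of Section~\ref{sec:accuracy} can be illustrated on plain sets (a minimal Python sketch; the example PVSs are invented for illustration):

```python
# Relative overestimation Q_o and underestimation Q_u of a computed
# PVS s_a with respect to the exact PVS s_e, both given as sets.

def pvs_quality(s_a, s_e):
    """Return (Q_o, Q_u) for one input, following the definitions above."""
    return (len(s_a - s_e) / len(s_e),   # Q_o: reported but not visible
            len(s_e - s_a) / len(s_e))   # Q_u: visible but not reported

exact = {"a", "b", "c", "d"}

q_cons = pvs_quality({"a", "b", "c", "d", "e"}, exact)  # superset of exact
q_aggr = pvs_quality({"a", "b"}, exact)                 # subset of exact
q_appr = pvs_quality({"a", "b", "e"}, exact)            # neither

assert q_cons == (0.25, 0.0)   # conservative: Q_u = 0
assert q_aggr == (0.0, 0.5)    # aggressive:   Q_o = 0
assert q_appr == (0.25, 0.5)   # approximate:  both positive
```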
%*****************************************************************************

\section{Summary}

The presented taxonomy classifies visibility problems independently of their target application. The classification should help to understand the nature of the given problem and it should assist in finding relationships between visibility problems and algorithms in different application areas. The algorithms address the following classes of visibility problems:

\begin{itemize}
\item Visibility from a point in 3D, $d({\cal L}_R)=2$.
\item Global visibility in 3D, $d({\cal L}_R)=4$.
\item Visibility from a region in 3D, $d({\cal L}_R)=4$.
\end{itemize}

This chapter discussed several important criteria for the classification of visibility algorithms. This classification can be seen as a finer structuring of the taxonomy of visibility problems. We discussed important steps in the design of a visibility algorithm that should also assist in understanding the quality of a given algorithm. According to the classification, the visibility algorithms described later in the report have the following properties:

\begin{itemize}
\item Domain:
  \begin{itemize}
  \item viewpoint (online visibility culling),
  \item global visibility (global visibility sampling),
  \item polygon or polyhedron (mutual visibility verification).
  \end{itemize}
\item Scene restrictions (occluders):
  \begin{itemize}
  \item meshes consisting of convex polygons.
  \end{itemize}
\item Scene restrictions (group objects):
  \begin{itemize}
  \item bounding boxes.
  \end{itemize}
\item Output:
  \begin{itemize}
  \item PVS.
  \end{itemize}
\item Accuracy:
  \begin{itemize}
  \item conservative,
  \item exact,
  \item aggressive.
  \end{itemize}
\item Solution space:
  \begin{itemize}
  \item discrete (online visibility culling, global visibility sampling, and the conservative and approximate algorithms from the mutual visibility verification),
  \item continuous (exact algorithm from the mutual visibility verification).
  \end{itemize}
\item Solution space
data structures: viewport (online visibility culling), ray stack (global visibility sampling, conservative and approximate algorithm from the mutual visibility verification), BSP tree (exact algorithm from the mutual visibility verification) \item Use of coherence of visibility: \begin{itemize} \item spatial coherence (all algorithms) \item temporal coherence (online visibility culling) \end{itemize} \item Output sensitivity: expected in practice (all algorithms) \item Acceleration data structure: kD-tree (all algorithms) \item Use of graphics hardware: online visibility culling \end{itemize}