% source: trunk/VUT/doc/SciReport/introduction.tex @ 266
% Revision 266, 27.0 KB, checked in by bittner, 19 years ago: structural changes
\chapter{Introduction to visibility problems and algorithms}%\chapter

\label{chap:overview}
\label{chap:classes}

 This chapter provides an introduction to visibility problems and
algorithms for computer graphics. In particular, it presents a taxonomy
of visibility problems based on the {\em problem domain} and the {\em
type of the answer}. The taxonomy helps to understand the nature of a
particular visibility problem and provides a tool for grouping
problems of similar complexity independently of their target
application. We discuss typical visibility problems encountered in
computer graphics, including their relation to the presented
taxonomy. Finally, we classify the later described visibility tools
according to the presented taxonomy.


%% summarize the state of the art
%% visibility algorithms and their relation to the proposed taxonomy of
%% visibility problems. The second part of the survey should serve as a
%% catalogue of visibility algorithms that is indexed by the proposed
%% taxonomy of visibility problems.


\subsection{Problem domain}
\label{sec:prob_domain}

 Computer graphics deals with visibility problems in the context of
2D, \m25d, or 3D scenes. The actual problem domain is given by
restricting the set of rays for which visibility should be
determined.

Below we list the common problem domains and the corresponding domain
restrictions:

\begin{enumerate}
\item
  {\em visibility along a line}
  \begin{enumerate}
  \item line
  \item ray (origin + direction)
  \end{enumerate}
  \newpage
\item
  {\em visibility from a point} ({\em from-point visibility})
  \begin{enumerate}
  \item point
  \item point + restricted set of rays
    \begin{enumerate}
    \item point + raster image (discrete form)
    \item point + beam (continuous form)
    \end{enumerate}
  \end{enumerate}

\item
  {\em visibility from a line segment} ({\em from-segment visibility})
  \begin{enumerate}
  \item line segment
  \item line segment + restricted set of rays
  \end{enumerate}

\item
  {\em visibility from a polygon} ({\em from-polygon visibility})
  \begin{enumerate}
  \item polygon
  \item polygon + restricted set of rays
  \end{enumerate}

\item
  {\em visibility from a region} ({\em from-region visibility})
  \begin{enumerate}
    \item region
    \item region + restricted set of rays
  \end{enumerate}

\item
  {\em global visibility}
  \begin{enumerate}
    \item no further input (all rays in the scene)
    \item restricted set of rays
  \end{enumerate}
\end{enumerate}

 The domain restrictions can be given independently of the dimension
of the scene, but the impact of the restrictions differs depending on
the scene dimension. For example, visibility from a polygon is
equivalent to visibility from a (polygonal) region in 2D, but not in
3D.

%*****************************************************************************

\section{Dimension of the problem-relevant line set}

 The six domains of visibility problems stated in
Section~\ref{sec:prob_domain} can be characterized by the {\em
problem-relevant line set} denoted ${\cal L}_R$. We give a
classification of visibility problems according to the dimension of
the problem-relevant line set. We discuss why this classification is
important for understanding the nature of the given visibility problem
and for identifying its relation to other problems.


 For the following discussion we assume that a line in {\em primal
space} can be mapped to a point in {\em line space}. For purposes of
the classification we define the line space as a vector space where a
point corresponds to a line in the primal space\footnote{A classical
mathematical definition says: line space is a direct product of two
Hilbert spaces~\cite{Weisstein:1999:CCE}. However, this definition
differs from the common understanding of line space in computer
graphics~\cite{Durand99-phd}.}.

\subsection{Parametrization of lines in 2D}

 There are two independent parameters that specify a 2D line and thus
the corresponding set of lines is two-dimensional. There is a natural
duality between lines and points in 2D. For example, a line expressed
as $l:y=ax+c$ is dual to a point $p=(-c,a)$. This particular duality
cannot handle vertical lines. See Figure~\ref{fig:duality2d} for an
example of other dual mappings in the plane. To avoid the singularity
in the mapping, a line $l:ax+by+c=0$ can be represented as a point
$p_l=(a,b,c)$ in 2D projective space ${\cal
P}^2$~\cite{Stolfi:1991:OPG}. Multiplying $p_l$ by a non-zero scalar
we obtain a vector that represents the same line $l$. More details
about this singularity-free mapping will be discussed in
Chapter~\ref{chap:vfr2d}.

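The singularity-free mapping can be sketched in a few lines of code (our illustration, not part of the report; the function names are hypothetical): a 2D line is stored as the homogeneous triple $(a,b,c)$, and two triples describe the same line exactly when one is a non-zero scalar multiple of the other, so vertical lines cause no problem.

```python
def line_through(p, q):
    """Homogeneous coordinates (a, b, c) of the 2D line a*x + b*y + c = 0
    passing through points p and q."""
    (x0, y0), (x1, y1) = p, q
    a = y0 - y1
    b = x1 - x0
    c = x0 * y1 - x1 * y0
    return (a, b, c)

def same_line(l1, l2, eps=1e-9):
    """Two homogeneous triples describe the same line iff they are
    scalar multiples of each other (all pairwise cross products vanish)."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    return (abs(a1*b2 - a2*b1) < eps and
            abs(a1*c2 - a2*c1) < eps and
            abs(b1*c2 - b2*c1) < eps)
```

Note that the vertical line $x=2$ maps to the triple $(-1,0,2)$ without any special case, which is exactly the advantage over the $l:y=ax+c$ duality.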
\begin{figure}[!htb]
\centerline{
\includegraphics[width=0.9\textwidth,draft=\DRAFTFIGS]{figs/duality2d}}
\caption{Duality between points and lines in 2D.}
\label{fig:duality2d}
\end{figure}

 To sum up: In 2D there are two degrees of freedom in the description
of a line and the corresponding line space is two-dimensional. The
problem-relevant line set ${\cal L}_R$ then forms a $k$-dimensional
subset of ${\cal P}^2$, where $0\leq k \leq 2$. An illustration of the
concept of the problem-relevant line set is depicted in
Figure~\ref{fig:classes}.

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.8\textwidth,draft=\DRAFTFIGS]{figs/classes}}
\caption{The problem-relevant set of lines in 2D. The ${\cal L}_R$ for
visibility along a line is formed by a single point that is a mapping
of the given line. The ${\cal L}_R$ for visibility from a point $p$ is
formed by points lying on a line. This line is a dual mapping of the
point $p$. ${\cal L}_R$ for visibility from a line segment is formed
by a 2D region bounded by dual mappings of endpoints of the given
segment.}
\label{fig:classes}
\end{figure}

\subsection{Parametrization of lines in 3D}


Lines in 3D form a four-parametric space~\cite{p-rsls-97}. A line
intersecting a given scene can be described by two points on a sphere
enclosing the scene. Since the surface of the sphere is a
two-parametric space, we need four parameters to describe the line.

 The {\em two plane parametrization} of 3D lines describes a line by
its points of intersection with the given two
planes~\cite{Gu:1997:PGT}. This parametrization exhibits a singularity
since it cannot describe lines parallel to these planes. See
Figure~\ref{fig:3dlines} for illustrations of the spherical and the
two plane parametrizations.

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.78\textwidth,draft=\DRAFTFIGS]{figs/3dlines}}
\caption{Parametrization of lines in 3D. (left) A line can be
  described by two points on a sphere enclosing the scene. (right) The
  two plane parametrization describes a line by its points of intersection
  with two given planes.}
\label{fig:3dlines}
\end{figure}

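A minimal sketch of the two plane parametrization (our illustration; the choice of the planes $z=0$ and $z=1$ is an assumption made here for concreteness, the report leaves the two planes general): a line is reduced to the four parameters $(x_0,y_0,x_1,y_1)$ of its intersections with the two planes, and lines parallel to the planes are exactly the ones the parametrization cannot represent.

```python
def two_plane_params(p, d, eps=1e-12):
    """Parametrize the line p + t*d by its intersections with the planes
    z = 0 and z = 1. Returns (x0, y0, x1, y1), or None for lines parallel
    to the planes -- the singularity of this parametrization."""
    px, py, pz = p
    dx, dy, dz = d
    if abs(dz) < eps:          # line parallel to both planes: not representable
        return None
    t0 = (0.0 - pz) / dz       # parameter where the line hits z = 0
    t1 = (1.0 - pz) / dz       # parameter where the line hits z = 1
    return (px + t0*dx, py + t0*dy, px + t1*dx, py + t1*dy)
```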
 Another common parametrization of 3D lines is given by the {\em \plucker
coordinates}. The \plucker coordinates of an oriented 3D line form a six
tuple that can be understood as a point in 5D oriented projective
space~\cite{Stolfi:1991:OPG}. There are six coordinates in the \plucker
representation of a line although we know that ${\cal L}_R$ is
four-dimensional. This can be explained as follows:

\begin{itemize}
  \item Firstly, \plucker coordinates are {\em homogeneous
    coordinates} of a 5D point. By multiplying the coordinates
    by any positive scalar we get a mapping of the same line.
  \item Secondly, only a 4D subset of the 5D oriented projective space
  corresponds to real lines. This subset is a 4D ruled quadric called
  the {\em \plucker quadric} or the {\em Grassmann
  manifold}~\cite{Stolfi:1991:OPG,Pu98-DSGIV}.
\end{itemize}

 Although the \plucker coordinates need more coefficients, they have no
singularity and preserve some linearities: lines intersecting a set of
lines in 3D correspond to an intersection of 5D hyperplanes. More details
on \plucker coordinates will be discussed in
Chapters~\ref{chap:vfr25d} and~\ref{chap:vfr3d}, where they are used to
solve the from-region visibility problem.

  To sum up: In 3D there are four degrees of freedom in the
description of a line and thus the corresponding line space is
four-dimensional. Fixing certain line parameters (e.g. direction), the
problem-relevant line set, denoted ${\cal L}_R$, forms a
$k$-dimensional subset of ${\cal P}^4$, where $0\leq k \leq 4$.


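The two properties above can be verified with a short sketch (ours, not part of the report; we use the direction/moment convention $(d, m)$ with $d = q - p$, $m = p \times q$, while sign conventions vary across the literature). The self side product of any real line vanishes, which is the \plucker{} quadric constraint, and the side product of two lines is zero exactly when they are coplanar, i.e. intersect or are parallel.

```python
def plucker(p, q):
    """Plucker coordinates (six-tuple) of the oriented line from p to q:
    direction d = q - p followed by moment m = p x q."""
    d = tuple(qi - pi for pi, qi in zip(p, q))
    m = (p[1]*q[2] - p[2]*q[1],
         p[2]*q[0] - p[0]*q[2],
         p[0]*q[1] - p[1]*q[0])
    return d + m

def side(l1, l2):
    """Permuted inner product of two Plucker six-tuples; zero iff the
    lines are coplanar. side(l, l) = 0 expresses that every real line
    lies on the Plucker quadric."""
    d1, m1 = l1[:3], l1[3:]
    d2, m2 = l2[:3], l2[3:]
    return sum(a*b for a, b in zip(d1, m2)) + sum(a*b for a, b in zip(d2, m1))
```

For example, the $x$-axis and $y$-axis intersect at the origin (side product zero), while the $x$-axis and a line through $(0,0,1)$ parallel to the $y$-axis are skew (non-zero side product).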
\subsection{Visibility along a line}

 The simplest visibility problems deal with visibility along a single
line. The problem-relevant line set is zero-dimensional, i.e. it is
fully specified by the given line. A typical example of a visibility
along a line problem is {\em ray shooting}.

 A problem similar to ray shooting is the {\em point-to-point}
visibility.  The point-to-point visibility determines whether the
line segment between two points is occluded, i.e. it has an
intersection with an opaque object in the scene. Point-to-point
visibility provides a visibility classification (answer 1a), whereas
ray shooting determines a visible object (answer 2a) and/or a point of
intersection (answer 3a). Note that the {\em point-to-point}
visibility can be solved easily by means of ray shooting. Another
constructive visibility along a line problem is determining the {\em
maximal free line segments} on a given line. See Figure~\ref{fig:val}
for an illustration of typical visibility along a line problems.

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.85\textwidth,draft=\DRAFTFIGS]{figs/val}}
\caption{Visibility along a line. (left) Ray shooting. (center) Point-to-point visibility. (right) Maximal free line segments between two points.}
\label{fig:val}
\end{figure}
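The reduction of point-to-point visibility to ray shooting can be sketched as follows (our illustration, not the report's method; spherical occluders are an assumption made for brevity): the segment between the two points is tested for an intersection with every occluder, and the points are mutually visible iff no intersection lies strictly between them.

```python
import math

def ray_hits_sphere(orig, dest, center, radius):
    """True if the open segment orig->dest intersects the given sphere.
    Solves |orig + t*(dest-orig) - center| = radius for t in (0, 1)."""
    d = [b - a for a, b in zip(orig, dest)]
    f = [a - c for a, c in zip(orig, center)]
    a = sum(x*x for x in d)
    b = 2.0 * sum(x*y for x, y in zip(f, d))
    c = sum(x*x for x in f) - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return False
    s = math.sqrt(disc)
    t1, t2 = (-b - s) / (2*a), (-b + s) / (2*a)
    return (0.0 < t1 < 1.0) or (0.0 < t2 < 1.0)

def point_to_point_visible(p, q, spheres):
    """Point-to-point visibility by shooting a ray from p towards q:
    p sees q iff no occluder blocks the segment between them."""
    return not any(ray_hits_sphere(p, q, c, r) for c, r in spheres)
```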

\subsection{Visibility from a point}

 Lines intersecting a point in 3D can be described by two
parameters. For example, the lines can be expressed by an intersection
with a unit sphere centered at the given point. The most common
parametrization describes a line by a point of intersection with a
given viewport. Note that this parametrization accounts only for a
subset of lines that intersect the viewport (see Figure~\ref{fig:vfp}).

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.6\textwidth,draft=\DRAFTFIGS]{figs/vfp}}
\caption{Visibility from a point. Lines intersecting a point can be described by a
  point of intersection with the given viewport.}
\label{fig:vfp}
\end{figure}

 In 3D the problem-relevant line set ${\cal L}_R$ is a 2D subset of
the 4D line space. In 2D the ${\cal L}_R$ is a 1D subset of the 2D
line space. The typical visibility from a point problem is the visible
surface determination. Due to its importance the visible surface
determination is covered by the majority of existing visibility
algorithms. Other visibility from a point problems are the construction
of the {\em visibility map} and the {\em point-to-region visibility}
that classifies a region as visible, invisible, or partially visible
with respect to the given point.


\subsection{Visibility from a line segment}

 Lines intersecting a line segment in 3D can be described by three
parameters. One parameter fixes the intersection of the line with the
segment, the other two express the direction of the line. The
problem-relevant line set ${\cal L}_R$ is three-dimensional and it can
be understood as a 2D cross section of ${\cal L}_R$ swept according
to the translation on the given line segment (see
Figure~\ref{fig:vls}).


\begin{figure}[htb]
\centerline{
\includegraphics[width=0.8\textwidth,draft=\DRAFTFIGS]{figs/vls}}
\caption{Visibility from a line segment. (left) Line segment, a
  spherical object $O$, and its projections $O^*_0$, $O^*_{0.5}$, $O^*_{1}$
  with respect to the three points on the line segment. (right)
  A possible parametrization of lines that stacks up 2D planes.
  Each plane corresponds to mappings of lines intersecting a given
  point on the line segment.}
\label{fig:vls}
\end{figure}

In 2D lines intersecting a line segment form a two-dimensional
problem-relevant line set. Thus for the 2D case the ${\cal L}_R$ is a
two-dimensional subset of 2D line space.


\subsection{Visibility from a region}

 Visibility from a region (or from-region visibility) involves the
most general visibility problems. In 3D the ${\cal L}_R$ is a 4D
subset of the 4D line space. In 2D the ${\cal L}_R$ is a 2D subset of
the 2D line space. Consequently, in the presented classification
visibility from a region in 2D is equivalent to visibility from a line
segment in 2D.

 A typical visibility from a region problem is the problem of {\em
region-to-region} visibility that aims to determine if the two given
regions in the scene are visible, invisible, or partially visible (see
Figure~\ref{fig:vfr}). Another from-region visibility problem is the
computation of a {\em potentially visible set} (PVS) with respect to a
given view cell. The PVS consists of a set of objects that are
potentially visible from any point inside the view cell. Further
visibility from a region problems include computing form factors
between two polygons, soft shadow algorithms, or discontinuity meshing.


\begin{figure}[htb]
\centerline{
\includegraphics[width=0.6\textwidth,draft=\DRAFTFIGS]{figs/vfr}}
\caption{Visibility from a region --- an example of the region-to-region
  visibility. Two regions and two occluders $A$, $B$
  in a 2D scene.  In line space the region-to-region visibility can be
  solved by subtracting the sets of lines $A^*$ and $B^*$
  intersecting objects $A$ and $B$ from the lines intersecting both
  regions.}
\label{fig:vfr}
\end{figure}


\subsection{Global visibility}

 According to the classification the global visibility problems can be
seen as an extension of the from-region visibility problems. The
dimension of the problem-relevant line set is the same ($k=2$ for 2D
and $k=4$ for 3D scenes). Nevertheless, the global visibility problems
typically deal with a much larger set of rays, i.e. all rays that
penetrate the scene. Additionally, there is no given set of reference
points from which visibility is studied and hence there is no given
priority ordering of objects along each particular line from ${\cal
L}_R$. Therefore an additional parameter must be used to describe
visibility (the visible object) along each ray.


\subsection{Summary}

 The classification of visibility problems according to the dimension
of the problem-relevant line set is summarized in
Table~\ref{table:classification3D}.  This classification provides
means for understanding how difficult it is to compute, describe, and
maintain visibility for a particular class of problems. For example, a
data structure representing the visible or occluded parts of the scene
for the visibility from a point problem needs to partition a 2D ${\cal
L}_R$ into visible and occluded sets of lines. This observation
conforms with the traditional visible surface algorithms -- they
partition a 2D viewport into empty/nonempty regions and associate each
nonempty region (pixel) with a visible object. In this case the
viewport represents the ${\cal L}_R$ as each point of the viewport
corresponds to a line through that point. To analytically describe
visibility from a region, a subdivision of the 4D ${\cal L}_R$ should be
performed. This is much more difficult than the 2D
subdivision. Moreover, the description of visibility from a region
involves non-linear subdivisions of both primal space and line space
even for polygonal scenes~\cite{Teller:1992:CAA,Durand99-phd}.

\begin{table*}[htb]
\begin{small}
\begin{center}
\begin{tabular}{|l|c|l|}
\hline
\multicolumn{3}{|c|}{2D} \\
\hline
\mc{domain} & $d({\cal L}_R)$ & \mc{problems} \\
\hline
\hline
\begin{tabular}{l}visibility along a line\end{tabular}         & 0 &  \begin{tabular}{l}ray shooting, point-to-point visibility\end{tabular}\\
\hline
\begin{tabular}{l}visibility from a point\end{tabular}         & 1 &  \begin{tabular}{l}view around a point, point-to-region visibility\end{tabular}\\
\hline
\begin{tabular}{l} visibility from a line segment \\ visibility from a region \\ global visibility \end{tabular}
   & 2 & \begin{tabular}{l} region-to-region visibility, PVS \end{tabular}\\
\hline
\hline
\multicolumn{3}{|c|}{3D} \\
\hline
\mc{domain} & $d({\cal L}_R)$ & \mc{problems} \\
\hline
\hline
\begin{tabular}{l}visibility along a line\end{tabular}  & 0 & \begin{tabular}{l} ray shooting, point-to-point visibility \end{tabular}\\
\hline
\begin{tabular}{l}visibility from a point in a surface\end{tabular}  & 1 & \begin{tabular}{l} see visibility from a point in 2D \end{tabular}\\
\hline
\begin{tabular}{l}visibility from a point\end{tabular}  & 2 & \begin{tabular}{l} visible (hidden) surfaces, point-to-region visibility,\\
                                      visibility map, hard shadows
                               \end{tabular} \\
\hline
\begin{tabular}{l}visibility from a line segment\end{tabular} & 3 & \begin{tabular}{l} segment-to-region visibility (rare) \end{tabular}\\
\hline
\begin{tabular}{l}visibility from a region\\global visibility\end{tabular}       & 4 & \begin{tabular}{l} region-to-region visibility, PVS, aspect graph,\\
                                            soft shadows, discontinuity meshing
                                     \end{tabular} \\

\hline
\end{tabular}
\end{center}
\end{small}
\caption{Classification of visibility problems in 2D and 3D according
to the dimension of the problem-relevant line set.}
\label{table:classification3D}
\end{table*}




\section{Classification of visibility algorithms}


The taxonomy of visibility problems groups similar visibility problems
in the same class.  A visibility problem can be solved by means of
various visibility algorithms. A visibility algorithm poses further
restrictions on the input and output data. These restrictions can be
seen as a more precise definition of the visibility problem that is
solved by the algorithm.

 Above we classified visibility problems according to the problem
domain and the desired answers. In this section we provide a
classification of visibility algorithms according to other
important criteria characterizing a particular visibility algorithm.


\subsection{Scene restrictions}
\label{sec:scene_restrictions}

Visibility algorithms can be classified according to the restrictions
they pose on the scene description.  The type of the scene description
influences the difficulty of solving the given problem: it is simpler
to implement an algorithm computing a visibility map for scenes
consisting of triangles than for scenes with NURBS surfaces. We list
common restrictions on the scene primitives suitable for visibility
computations:


\begin{itemize}
\item
  triangles, convex polygons, concave polygons,

\item
  volumetric data,

\item
  points,

\item
  general parametric, implicit, or procedural surfaces.

\end{itemize}

Some attributes of scene objects further increase the complexity of the visibility computation:

\begin{itemize}

\item
  transparent objects,

\item
  dynamic objects.

\end{itemize}

The majority of analytic visibility algorithms deals with static
polygonal scenes without transparency. The polygons are often
subdivided into triangles for easier manipulation and representation.

\subsection{Accuracy}
\label{sec:accuracy}

 Visibility algorithms can be classified according to the accuracy of
the result as:

\begin{itemize}
\item exact,
\item conservative,
\item aggressive,
\item approximate.
\end{itemize}


 An exact algorithm provides an exact analytic result for the given
problem (in practice, however, this result is typically influenced by
the finite precision of floating point arithmetic). A
conservative algorithm overestimates visibility, i.e. it never
misses any visible object, surface, or point. An aggressive algorithm
always underestimates visibility, i.e. it never reports an invisible
object, surface, or point as visible.  An approximate algorithm
provides only an approximation of the result, i.e. it can overestimate
visibility for one input and underestimate visibility for another
input.

 The classification according to the accuracy is best illustrated on
computing a PVS: an exact algorithm computes an exact PVS. A
conservative algorithm computes a superset of the exact PVS.  An
aggressive algorithm determines a subset of the exact PVS.  An
approximate algorithm computes an approximation to the exact PVS that
is neither its subset nor its superset for all possible inputs.

 A more precise quality measure of algorithms computing PVSs can be
expressed by the {\em relative overestimation} and the {\em relative
underestimation} of the PVS with respect to the exact PVS.  We can
define a quality measure of an algorithm $A$ on input $I$ as a tuple
$\mbi{Q}^A(I)$:

\begin{eqnarray}
  \mbi{Q}^A(I) & = & (Q^A_o(I), Q^A_u(I)), \qquad  I \in {\cal D} \\
  Q^A_o(I) & = & {|S^A(I) \setminus S^{\cal E}(I)| \over |S^{\cal E}(I)|} \\
  Q^A_u(I) & = & {|S^{\cal E}(I) \setminus S^A(I) | \over |S^{\cal E}(I)|}
\end{eqnarray}

where $I$ is an input from the input domain ${\cal D}$, $S^A(I)$ is
the PVS determined by the algorithm $A$ for input $I$, and $S^{\cal
E}(I)$ is the exact PVS for the given input. $Q^A_o(I)$ expresses the
{\em relative overestimation} of the PVS, $Q^A_u(I)$ the {\em
relative underestimation}.

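With the PVSs represented as plain sets, $Q^A_o$ and $Q^A_u$ reduce to two set differences (a minimal sketch of ours; the function name is hypothetical):

```python
def pvs_quality(pvs, exact):
    """Relative overestimation Q_o and underestimation Q_u of a computed
    PVS with respect to the exact PVS, both given as sets of object ids."""
    pvs, exact = set(pvs), set(exact)
    q_o = len(pvs - exact) / len(exact)   # objects reported but not visible
    q_u = len(exact - pvs) / len(exact)   # visible objects that were missed
    return q_o, q_u
```

For example, for an exact PVS $\{1,2,3\}$ and a computed PVS $\{2,3,4\}$ both measures equal $1/3$: one invisible object was reported and one visible object was missed.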
The expected quality of the algorithm over all possible inputs can be
given as:

\begin{eqnarray}
Q^A & = & E[\| \mbi{Q}^A(I) \|] \\
    & = & \sum_{\forall I \in {\cal D}} f(I).\sqrt{Q^A_o(I)^2 + Q^A_u(I)^2}
\end{eqnarray}

where $f(I)$ is the probability density function expressing the
probability of occurrence of input $I$. The quality measure
$\mbi{Q}^A(I)$ can be used to classify a PVS algorithm into one of the
four accuracy classes according to Section~\ref{sec:accuracy}:

\begin{enumerate}
\item exact\\
  $\forall I \in {\cal D} :Q_o^A(I) = 0 \wedge Q_u^A(I) = 0$
\item conservative\\
  $\forall I \in {\cal D} : Q_o^A(I) \geq 0 \wedge Q_u^A(I) = 0$
\item aggressive \\
  $\forall I \in {\cal D} : Q_o^A(I) = 0 \wedge Q_u^A(I) \geq 0$
\item approximate \\
  $\qquad \exists I_j, I_k \in {\cal D}: Q_o^A(I_j) > 0 \wedge Q_u^A(I_k) > 0$
\end{enumerate}



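The four conditions can be mirrored directly in code operating on measured $(Q^A_o, Q^A_u)$ pairs (a sketch of ours; note that from finitely many sampled inputs only the "approximate" verdict is conclusive, since the other three classes quantify over the whole input domain):

```python
def classify(samples):
    """Classify a PVS algorithm from (Q_o, Q_u) pairs measured over a set
    of inputs, following the four accuracy classes above."""
    over = any(q_o > 0 for q_o, _ in samples)    # some overestimation seen
    under = any(q_u > 0 for _, q_u in samples)   # some underestimation seen
    if over and under:
        return "approximate"
    if over:
        return "conservative"
    if under:
        return "aggressive"
    return "exact"
```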
\subsection{Solution space}

\label{sec:solspace}

 The solution space is the domain in which the algorithm determines
the desired result. Note that the solution space does not need to
match the domain of the result.

The algorithms can be classified as:

\begin{itemize}
  \item discrete,
  \item continuous,
  \item hybrid.
\end{itemize}

 A discrete algorithm solves the problem using a discrete solution
space; the solution is typically an approximation of the result. A
continuous algorithm works in a continuous domain and often computes an
analytic solution to the given problem. A hybrid algorithm uses both
the discrete and the continuous solution space.

 The classification according to the solution space is easily
demonstrated on visible surface algorithms: the
z-buffer~\cite{Catmull:1975:CDC} is a common example of a discrete
algorithm. The Weiler-Atherton algorithm~\cite{Weiler:1977:HSR} is an
example of a continuous one. A hybrid solution space is used by
scan-line algorithms that solve the problem in discrete steps
(scan-lines) and for each step they provide a continuous solution
(spans).

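The discrete nature of the z-buffer can be captured in a few lines (a minimal sketch of ours, not an account of \cite{Catmull:1975:CDC}; rasterization is omitted and fragments are given directly): the solution space is the discrete grid of pixels, and for each pixel the nearest fragment wins.

```python
def zbuffer(width, height, fragments):
    """Minimal z-buffer sketch. Fragments are (x, y, depth, object_id)
    tuples; the result stores, per pixel, the id of the nearest object
    or None where nothing was drawn."""
    depth = [[float("inf")] * width for _ in range(height)]
    ids = [[None] * width for _ in range(height)]
    for x, y, z, obj in fragments:
        if 0 <= x < width and 0 <= y < height and z < depth[y][x]:
            depth[y][x] = z    # nearer fragment: overwrite depth and id
            ids[y][x] = obj
    return ids
```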
Further classification reflects the semantics of the solution
space. According to this criterion we can classify the algorithms as:

\begin{itemize}
  \item primal space (object space),
  \item line space,
    \begin{itemize}
    \item image space,
    \item general,
    \end{itemize}
  \item hybrid.
\end{itemize}

 A primal space algorithm solves the problem by studying the
visibility between objects without a transformation to a different
solution space. A line space algorithm studies visibility using a
transformation of the problem to line space. Image space algorithms
can be seen as an important subclass of line space algorithms for
solving visibility from a point problems in 3D. These algorithms cover
all visible surface algorithms and many visibility culling
algorithms. They solve visibility in a given image plane that
represents the problem-relevant line set ${\cal L}_R$ --- each ray
originating at the viewpoint corresponds to a point in the image plane.

 The described classification differs from the sometimes mentioned
understanding of image space and object space algorithms that
incorrectly considers all image space algorithms discrete and all
object space algorithms continuous.


%*****************************************************************************



\section{Summary}

The presented taxonomy classifies visibility problems independently of
their target application. The classification should help to understand
the nature of the given problem and it should assist in finding
relationships between visibility problems and algorithms in different
application areas.  The tools address the following classes of
visibility problems:

\begin{itemize}
\item Visibility from a point in 3D, $d({\cal L}_R)=2$.
\item Global visibility in 3D, $d({\cal L}_R)=4$.
\item Visibility from a region in 3D, $d({\cal L}_R)=4$.
\end{itemize}

 This chapter discussed several important criteria for the
classification of visibility algorithms. This classification can be
seen as a finer structuring of the taxonomy of visibility problems. We
discussed important steps in the design of a visibility algorithm that
should also assist in understanding the quality of a visibility
algorithm. According to the classification the tools address
algorithms with the following properties:

\begin{itemize}
\item Domain:
  \begin{itemize}
  \item viewpoint (online culling tool),
  \item global visibility (global visibility sampling tool),
  \item polygon or polyhedron (mutual visibility tool).
  \end{itemize}
\item Scene restrictions (occluders):
  \begin{itemize}
  \item meshes consisting of convex polygons.
  \end{itemize}
\item Scene restrictions (group objects):
  \begin{itemize}
  \item bounding boxes.
  \end{itemize}
\item Output:
  \begin{itemize}
  \item PVS.
  \end{itemize}
\item Accuracy:
  \begin{itemize}
  \item conservative,
  \item exact,
  \item aggressive.
  \end{itemize}
\item Solution space:
  \begin{itemize}
  \item discrete (online culling tool, global visibility sampling tool, conservative and approximate algorithm from the mutual visibility tool),
  \item continuous (exact algorithm from the mutual visibility tool).
  \end{itemize}
\item Solution space data structures: viewport (online culling tool), ray stack (global visibility sampling tool, conservative and approximate algorithm from the mutual visibility tool), BSP tree (exact algorithm from the mutual visibility tool).
\item Use of coherence of visibility:
  \begin{itemize}
  \item spatial coherence (all tools),
  \item temporal coherence (online culling tool).
  \end{itemize}
\item Output sensitivity: expected in practice (all tools).
\item Acceleration data structure: kD-tree (all tools).
\item Use of graphics hardware: online culling tool.
\end{itemize}
