\chapter{Overview of visibility problems and algorithms}%\chapter

\label{chap:overview}
\label{chap:classes}

This chapter provides a taxonomy of visibility problems encountered
in computer graphics based on the {\em problem domain} and the {\em
type of the answer}. The taxonomy helps to understand the nature of a
particular visibility problem and provides a tool for grouping
problems of similar complexity independently of their target
application. We discuss typical visibility problems encountered in
computer graphics and identify their relation to the proposed
taxonomy. A visibility problem can be solved by means of various
visibility algorithms. We classify visibility algorithms according to
several important criteria and discuss important concepts in the
design of a visibility algorithm. The taxonomy and the discussion of
algorithm design sum up ideas and concepts that are independent
of any specific algorithm. This can help algorithm designers to
transfer existing algorithms to solve visibility problems in other
application areas.


%% summarize the state of the art
%% visibility algorithms and their relation to the proposed taxonomy of
%% visibility problems. The second part of the survey should serve as a
%% catalogue of visibility algorithms that is indexed by the proposed
%% taxonomy of visibility problems.


\subsection{Problem domain}
\label{sec:prob_domain}

Computer graphics deals with visibility problems in the context of
2D, \m25d, or 3D scenes. The actual problem domain is given by
restricting the set of rays for which visibility should be
determined.

Below we list the common problem domains and the corresponding domain
restrictions:

\begin{enumerate}
\item
  {\em visibility along a line}
  \begin{enumerate}
  \item line
  \item ray (origin + direction)
  \end{enumerate}
  \newpage
\item
  {\em visibility from a point} ({\em from-point visibility})
  \begin{enumerate}
  \item point
  \item point + restricted set of rays
    \begin{enumerate}
    \item point + raster image (discrete form)
    \item point + beam (continuous form)
    \end{enumerate}
  \end{enumerate}

\item
  {\em visibility from a line segment} ({\em from-segment visibility})
  \begin{enumerate}
  \item line segment
  \item line segment + restricted set of rays
  \end{enumerate}

\item
  {\em visibility from a polygon} ({\em from-polygon visibility})
  \begin{enumerate}
  \item polygon
  \item polygon + restricted set of rays
  \end{enumerate}

\item
  {\em visibility from a region} ({\em from-region visibility})
  \begin{enumerate}
    \item region
    \item region + restricted set of rays
  \end{enumerate}

\item
  {\em global visibility}
  \begin{enumerate}
    \item no further input (all rays in the scene)
    \item restricted set of rays
  \end{enumerate}
\end{enumerate}

The domain restrictions can be given independently of the dimension
of the scene, but the impact of the restrictions differs depending on
the scene dimension. For example, visibility from a polygon is
equivalent to visibility from a (polygonal) region in 2D, but not in
3D.

%*****************************************************************************

\section{Dimension of the problem-relevant line set}

The six domains of visibility problems stated in
Section~\ref{sec:prob_domain} can be characterized by the {\em
problem-relevant line set} denoted ${\cal L}_R$. We give a
classification of visibility problems according to the dimension of
the problem-relevant line set. We discuss why this classification is
important for understanding the nature of the given visibility problem
and for identifying its relation to other problems.

For the following discussion we assume that a line in {\em primal
space} can be mapped to a point in {\em line space}. For the purposes of
the classification we define line space as a vector space where a
point corresponds to a line in the primal space\footnote{A classical
mathematical definition says: Line space is a direct product of two
Hilbert spaces~\cite{Weisstein:1999:CCE}. However, this definition
differs from the common understanding of line space in computer
graphics~\cite{Durand99-phd}.}.



\subsection{Parametrization of lines in 2D}

There are two independent parameters that specify a 2D line and thus
the corresponding set of lines is two-dimensional. There is a natural
duality between lines and points in 2D. For example a line expressed
as $l:y=ax+c$ is dual to a point $p=(-c,a)$. This particular duality
cannot handle vertical lines. See Figure~\ref{fig:duality2d} for an
example of other dual mappings in the plane. To avoid the singularity
in the mapping, a line $l:ax+by+c=0$ can be represented as a point
$p_l=(a,b,c)$ in 2D projective space ${\cal
P}^2$~\cite{Stolfi:1991:OPG}. Multiplying $p_l$ by a non-zero scalar
we obtain a vector that represents the same line $l$. More details
about this singularity-free mapping will be discussed in
Chapter~\ref{chap:vfr2d}.
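
To illustrate the first duality above: the line $l:y=2x+3$ (i.e.
$a=2$, $c=3$) maps to the dual point $p=(-3,2)$. Conversely, the
pencil of all non-vertical lines through a fixed point $(x_0,y_0)$
satisfies $c=y_0-ax_0$, so writing $(u,v)=(-c,a)$ for the dual
coordinates its image is the set
\begin{displaymath}
\{(u,v) \;|\; u = x_0 v - y_0\},
\end{displaymath}
which is itself a line in the dual plane: the duality exchanges points
and lines.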

\begin{figure}[!htb]
\centerline{
\includegraphics[width=0.9\textwidth,draft=\DRAFTFIGS]{figs/duality2d}}
\caption{Duality between points and lines in 2D.}
\label{fig:duality2d}
\end{figure}

To sum up: In 2D there are two degrees of freedom in the description of a
line and the corresponding line space is two-dimensional. The
problem-relevant line set ${\cal L}_R$ then forms a $k$-dimensional
subset of ${\cal P}^2$, where $0\leq k \leq 2$. An illustration of the
concept of the problem-relevant line set is depicted in
Figure~\ref{fig:classes}.

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.8\textwidth,draft=\DRAFTFIGS]{figs/classes}}
\caption{The problem-relevant set of lines in 2D. The ${\cal L}_R$ for
visibility along a line is formed by a single point that is a mapping
of the given line. The ${\cal L}_R$ for visibility from a point $p$ is
formed by points lying on a line. This line is a dual mapping of the
point $p$. The ${\cal L}_R$ for visibility from a line segment is formed
by a 2D region bounded by the dual mappings of the endpoints of the given
segment.}
\label{fig:classes}
\end{figure}


\subsection{Parametrization of lines in 3D}

Lines in 3D form a four-parametric space~\cite{p-rsls-97}. A line
intersecting a given scene can be described by two points on a sphere
enclosing the scene. Since the surface of the sphere is a
two-parametric space, we need four parameters to describe the line.

The {\em two-plane parametrization} of 3D lines describes a line by
its points of intersection with two given
planes~\cite{Gu:1997:PGT}. This parametrization exhibits a singularity
since it cannot describe lines parallel to these planes. See
Figure~\ref{fig:3dlines} for illustrations of the spherical and the
two-plane parametrizations.
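
For concreteness, the following is a minimal sketch of the two-plane
parametrization, assuming the two parallel planes $z=0$ and $z=1$; the
names and types are illustrative and not taken from any particular
library:

\begin{verbatim}
#include <array>
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

// Map the line p(t) = origin + t*dir to its intersections
// (u1,v1,u2,v2) with the planes z = 0 and z = 1. Lines parallel
// to the planes (dir.z == 0) hit the singularity and cannot be
// represented by this parametrization.
std::optional<std::array<double, 4>>
twoPlaneParam(const Vec3& origin, const Vec3& dir)
{
    if (std::fabs(dir.z) < 1e-12)
        return std::nullopt;              // singular: parallel line
    double t0 = (0.0 - origin.z) / dir.z; // parameter at plane z = 0
    double t1 = (1.0 - origin.z) / dir.z; // parameter at plane z = 1
    return std::array<double, 4>{
        origin.x + t0 * dir.x, origin.y + t0 * dir.y,
        origin.x + t1 * dir.x, origin.y + t1 * dir.y };
}
\end{verbatim}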

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.78\textwidth,draft=\DRAFTFIGS]{figs/3dlines}}
\caption{Parametrization of lines in 3D. (left) A line can be
  described by two points on a sphere enclosing the scene. (right) The
  two-plane parametrization describes a line by its points of intersection
  with two given planes.}
\label{fig:3dlines}
\end{figure}

Another common parametrization of 3D lines is given by the {\em \plucker
coordinates}. The \plucker coordinates of an oriented 3D line form a
six-tuple that can be understood as a point in 5D oriented projective
space~\cite{Stolfi:1991:OPG}. There are six coordinates in the \plucker
representation of a line although we know that ${\cal L}_R$ is
four-dimensional. This can be explained as follows:

\begin{itemize}
  \item Firstly, \plucker coordinates are {\em homogeneous
    coordinates} of a 5D point. Multiplying the coordinates
    by any positive scalar we obtain a mapping of the same line.
  \item Secondly, only a 4D subset of the 5D oriented projective space
  corresponds to real lines. This subset is a 4D ruled quadric called
  the {\em \plucker quadric} or the {\em Grassmann
  manifold}~\cite{Stolfi:1991:OPG,Pu98-DSGIV}.
\end{itemize}

Although the \plucker coordinates need more coefficients, they have no
singularity and preserve some linearities: lines intersecting a set of
lines in 3D correspond to an intersection of 5D hyperplanes. More details
on \plucker coordinates will be discussed in
Chapters~\ref{chap:vfr25d} and~\ref{chap:vfr3d} where they are used to
solve the from-region visibility problem.
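
To make the six-tuple concrete, the following minimal sketch computes
the \plucker coordinates of an oriented line through two points (using
the common convention $d=b-a$, $m=a\times b$) together with the
permuted inner product; the names are illustrative and not tied to the
algorithms of the later chapters:

\begin{verbatim}
struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct PluckerLine { Vec3 d, m; };   // the six-tuple (d | m)

// Oriented line through the points a and b.
PluckerLine pluckerFromPoints(Vec3 a, Vec3 b) {
    return { sub(b, a), cross(a, b) };
}

// Permuted inner product ("side" operator): its sign encodes the
// relative orientation of two lines and it is zero exactly when
// the lines are coplanar, i.e. they intersect or are parallel.
double side(const PluckerLine& p, const PluckerLine& q) {
    return dot(p.d, q.m) + dot(p.m, q.d);
}
\end{verbatim}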

To sum up: In 3D there are four degrees of freedom in the
description of a line and thus the corresponding line space is
four-dimensional. Fixing certain line parameters (e.g. the direction), the
problem-relevant line set ${\cal L}_R$ forms a
$k$-dimensional subset of ${\cal P}^4$, where $0\leq k \leq 4$.


\subsection{Visibility along a line}

The simplest visibility problems deal with visibility along a single
line. The problem-relevant line set is zero-dimensional, i.e. it is
fully specified by the given line. A typical example of a visibility
along a line problem is {\em ray shooting}.

A problem similar to ray shooting is the {\em point-to-point}
visibility problem. Point-to-point visibility determines whether the
line segment between two points is occluded, i.e. whether it
intersects an opaque object in the scene. Point-to-point
visibility provides a visibility classification (answer 1a), whereas
ray shooting determines a visible object (answer 2a) and/or a point of
intersection (answer 3a). Note that point-to-point
visibility can be solved easily by means of ray shooting. Another
constructive visibility along a line problem is determining the {\em
maximal free line segments} on a given line. See Figure~\ref{fig:val}
for an illustration of typical visibility along a line problems.
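
As a small illustration of the reduction, the following sketch solves
point-to-point visibility by a single ray shooting query;
\texttt{rayShoot()} is a hypothetical scene query returning the
parametric distance of the first intersection (infinity on a miss) and
stands in for whatever ray shooting routine is available:

\begin{verbatim}
#include <limits>

struct Vec3 { double x, y, z; };
struct Scene;  // opaque scene handle (assumed)

// Assumed: first hit along a + t*dir, t measured in units of |dir|;
// returns infinity if nothing is hit.
double rayShoot(const Scene& scene, Vec3 a, Vec3 dir);

bool pointToPointVisible(const Scene& scene, Vec3 a, Vec3 b)
{
    // Parametrize the segment as a + t*(b - a), t in [0,1]; the
    // points see each other iff the first hit lies beyond b (the
    // epsilon guards against self-intersection at the endpoints).
    Vec3 dir { b.x - a.x, b.y - a.y, b.z - a.z };
    return rayShoot(scene, a, dir) > 1.0 - 1e-6;
}
\end{verbatim}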

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.85\textwidth,draft=\DRAFTFIGS]{figs/val}}
\caption{Visibility along a line. (left) Ray shooting. (center) Point-to-point visibility. (right) Maximal free line segments between two points.}
\label{fig:val}
\end{figure}


\subsection{Visibility from a point}

Lines intersecting a point in 3D can be described by two
parameters. For example, the lines can be expressed by their intersection
with a unit sphere centered at the given point. The most common
parametrization describes a line by its point of intersection with a
given viewport. Note that this parametrization accounts only for the
subset of lines that intersect the viewport (see Figure~\ref{fig:vfp}).

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.6\textwidth,draft=\DRAFTFIGS]{figs/vfp}}
\caption{Visibility from a point. Lines intersecting a point can be described by a
  point of intersection with the given viewport.}
\label{fig:vfp}
\end{figure}

In 3D the problem-relevant line set ${\cal L}_R$ is a 2D subset of
the 4D line space. In 2D the ${\cal L}_R$ is a 1D subset of the 2D
line space. The typical visibility from a point problem is visible
surface determination. Due to its importance visible surface
determination is covered by the majority of existing visibility
algorithms. Other visibility from a point problems include the construction
of the {\em visibility map} and the {\em point-to-region visibility}
problem that classifies a region as visible, invisible, or partially visible
with respect to the given point.


\subsection{Visibility from a line segment}

Lines intersecting a line segment in 3D can be described by three
parameters. One parameter fixes the intersection of the line with the
segment, the other two express the direction of the line. The
problem-relevant line set ${\cal L}_R$ is three-dimensional and it can
be understood as a 2D cross section of ${\cal L}_R$ swept according
to the translation on the given line segment (see
Figure~\ref{fig:vls}).


\begin{figure}[htb]
\centerline{
\includegraphics[width=0.8\textwidth,draft=\DRAFTFIGS]{figs/vls}}
\caption{Visibility from a line segment. (left) Line segment, a
  spherical object $O$, and its projections $O^*_0$, $O^*_{0.5}$, $O^*_{1}$
  with respect to the three points on the line segment. (right)
  A possible parametrization of lines that stacks up 2D planes.
  Each plane corresponds to mappings of lines intersecting a given
  point on the line segment.}
\label{fig:vls}
\end{figure}

In 2D lines intersecting a line segment form a two-dimensional
problem-relevant line set. Thus for the 2D case the ${\cal L}_R$ is a
two-dimensional subset of the 2D line space.


\subsection{Visibility from a region}

Visibility from a region (or from-region visibility) involves the
most general visibility problems. In 3D the ${\cal L}_R$ is a 4D
subset of the 4D line space. In 2D the ${\cal L}_R$ is a 2D subset of
the 2D line space. Consequently, in the proposed classification
visibility from a region in 2D is equivalent to visibility from a line
segment in 2D.

A typical visibility from a region problem is the {\em
region-to-region} visibility problem that aims to determine whether two given
regions in the scene are mutually visible, invisible, or partially visible (see
Figure~\ref{fig:vfr}). Another visibility from a region problem is the
computation of a {\em potentially visible set} (PVS) with respect to a
given view cell. The PVS consists of a set of objects that are
potentially visible from any point inside the view cell. Further
visibility from a region problems include computing form factors
between two polygons, soft shadow algorithms, and discontinuity meshing.


\begin{figure}[htb]
\centerline{
\includegraphics[width=0.6\textwidth,draft=\DRAFTFIGS]{figs/vfr}}
\caption{Visibility from a region --- an example of the region-to-region
  visibility. Two regions and two occluders $A$, $B$
  in a 2D scene.  In line space the region-to-region visibility can be
  solved by subtracting the sets of lines $A^*$ and $B^*$
  intersecting objects $A$ and $B$ from the lines intersecting both
  regions.}
\label{fig:vfr}
\end{figure}


\subsection{Global visibility}

According to the classification the global visibility problems can be
seen as an extension of the from-region visibility problems. The
dimension of the problem-relevant line set is the same ($k=2$ for 2D
and $k=4$ for 3D scenes). Nevertheless, the global visibility problems
typically deal with a much larger set of rays, i.e. all rays that
penetrate the scene. Additionally, there is no given set of reference
points from which visibility is studied and hence there is no given
priority ordering of objects along each particular line from ${\cal
L}_R$. Therefore an additional parameter must be used to describe
visibility (the visible object) along each ray.


\subsection{Summary}

The classification of visibility problems according to the dimension
of the problem-relevant line set is summarized in
Table~\ref{table:classification3D}. This classification provides
a means for understanding how difficult it is to compute, describe, and
maintain visibility for a particular class of problems. For example a
data structure representing the visible or occluded parts of the scene
for the visibility from a point problem needs to partition a 2D ${\cal
L}_R$ into visible and occluded sets of lines. This observation
conforms with the traditional visible surface algorithms -- they
partition a 2D viewport into empty/nonempty regions and associate each
nonempty region (pixel) with a visible object. In this case the
viewport represents the ${\cal L}_R$ as each point of the viewport
corresponds to a line through that point. To analytically describe
visibility from a region, a subdivision of the 4D ${\cal L}_R$ should be
performed. This is much more difficult than the 2D
subdivision. Moreover, the description of visibility from a region
involves non-linear subdivisions of both primal space and line space
even for polygonal scenes~\cite{Teller:1992:CAA,Durand99-phd}.

\begin{table*}[htb]
\begin{small}
\begin{center}
\begin{tabular}{|l|c|l|}
\hline
\multicolumn{3}{|c|}{2D} \\
\hline
\mc{domain} & $d({\cal L}_R)$ & \mc{problems} \\
\hline
\hline
\begin{tabular}{l}visibility along a line\end{tabular}         & 0 &  \begin{tabular}{l}ray shooting, point-to-point visibility\end{tabular}\\
\hline
\begin{tabular}{l}visibility from a point\end{tabular}         & 1 &  \begin{tabular}{l}view around a point, point-to-region visibility\end{tabular}\\
\hline
\begin{tabular}{l} visibility from a line segment \\ visibility from a region \\ global visibility \end{tabular}
   & 2 & \begin{tabular}{l} region-to-region visibility, PVS \end{tabular}\\
\hline
\hline
\multicolumn{3}{|c|}{3D} \\
\hline
\mc{domain} & $d({\cal L}_R)$ & \mc{problems} \\
\hline
\hline
\begin{tabular}{l}visibility along a line\end{tabular}  & 0 & \begin{tabular}{l} ray shooting, point-to-point visibility \end{tabular}\\
\hline
\begin{tabular}{l}visibility from a point in a surface\end{tabular}  & 1 & \begin{tabular}{l} see visibility from a point in 2D \end{tabular}\\
\hline
\begin{tabular}{l}visibility from a point\end{tabular}  & 2 & \begin{tabular}{l} visible (hidden) surfaces, point-to-region visibility,\\
                                      visibility map, hard shadows
                               \end{tabular} \\
\hline
\begin{tabular}{l}visibility from a line segment\end{tabular} & 3 & \begin{tabular}{l} segment-to-region visibility (rare) \end{tabular}\\
\hline
\begin{tabular}{l}visibility from a region\\global visibility\end{tabular}       & 4 & \begin{tabular}{l} region-to-region visibility, PVS, aspect graph,\\
                                            soft shadows, discontinuity meshing
                                     \end{tabular} \\

\hline
\end{tabular}
\end{center}
\end{small}
\caption{Classification of visibility problems in 2D and 3D according
to the dimension of the problem-relevant line set.}
\label{table:classification3D}
\end{table*}




\section{Classification of visibility algorithms}

The taxonomy of visibility problems groups similar visibility problems
in the same class. A visibility problem can be solved by means of
various visibility algorithms. A visibility algorithm poses further
restrictions on the input and output data. These restrictions can be
seen as a more precise definition of the visibility problem that is
solved by the algorithm.

Above we classified visibility problems according to the problem
domain and the desired answers. In this section we provide a
classification of visibility algorithms according to other
important criteria characterizing a particular visibility algorithm.


\subsection{Scene restrictions}
\label{sec:scene_restrictions}

Visibility algorithms can be classified according to the restrictions
they pose on the scene description. The type of the scene description
influences the difficulty of solving the given problem: it is simpler
to implement an algorithm computing a visibility map for scenes
consisting of triangles than for scenes with NURBS surfaces. We list
common restrictions on the scene primitives suitable for visibility
computations:

\begin{itemize}
\item
  triangles, convex polygons, concave polygons,

\item
  volumetric data,

\item
  points,

\item
  general parametric, implicit, or procedural surfaces.

\end{itemize}

Some attributes of scene objects further increase the complexity of the visibility computation:

\begin{itemize}

\item
  transparent objects,

\item
  dynamic objects.

\end{itemize}

The majority of analytic visibility algorithms deal with static
polygonal scenes without transparency. The polygons are often
subdivided into triangles for easier manipulation and representation.

\subsection{Accuracy}
\label{sec:accuracy}

Visibility algorithms can be classified according to the accuracy of
the result as:

\begin{itemize}
\item exact,
\item conservative,
\item aggressive,
\item approximate.
\end{itemize}


An exact algorithm provides an exact analytic result for the given
problem (in practice, however, this result is typically influenced by
the finite precision of floating point arithmetic). A
conservative algorithm overestimates visibility, i.e. it never
misses any visible object, surface, or point. An aggressive algorithm
always underestimates visibility, i.e. it never reports an invisible
object, surface, or point as visible. An approximate algorithm
provides only an approximation of the result, i.e. it can overestimate
visibility for one input and underestimate visibility for another
input.

The classification according to accuracy is best illustrated on
computing a PVS: an exact algorithm computes the exact PVS. A
conservative algorithm computes a superset of the exact PVS. An
aggressive algorithm determines a subset of the exact PVS. An
approximate algorithm computes an approximation of the exact PVS that
is neither its subset nor its superset for all possible inputs.


\subsection{Solution space}

\label{sec:solspace}

The solution space is the domain in which the algorithm determines
the desired result. Note that the solution space does not need to
match the domain of the result.

The algorithms can be classified as:

\begin{itemize}
  \item discrete,
  \item continuous,
  \item hybrid.
\end{itemize}

A discrete algorithm solves the problem using a discrete solution
space; the solution is typically an approximation of the result. A
continuous algorithm works in a continuous domain and often computes an
analytic solution to the given problem. A hybrid algorithm uses both
the discrete and the continuous solution space.

The classification according to the solution space is easily
demonstrated on visible surface algorithms (these algorithms will be
discussed in Chapter~\ref{chap:traditional}). The
z-buffer~\cite{Catmull:1975:CDC} is a common example of a discrete
algorithm. The Weiler-Atherton algorithm~\cite{Weiler:1977:HSR} is an
example of a continuous one. A hybrid solution space is used by
scan-line algorithms that solve the problem in discrete steps
(scan-lines) and for each step they provide a continuous solution
(spans).
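
As an illustration of a discrete solution space, the following minimal
sketch shows the core of the z-buffer depth test; the rasterization of
primitives into fragments is omitted and all names are illustrative:

\begin{verbatim}
#include <limits>
#include <vector>

struct Fragment { int x, y; double depth; int objectId; };

struct ZBuffer {
    int width, height;
    std::vector<double> depth; // per-pixel nearest depth so far
    std::vector<int>    id;    // per-pixel visible object (-1 = none)

    ZBuffer(int w, int h)
        : width(w), height(h),
          depth(w * h, std::numeric_limits<double>::infinity()),
          id(w * h, -1) {}

    // Keep a fragment iff it is closer than everything rasterized
    // to its pixel so far; the discrete grid of pixels is the
    // solution space.
    void resolve(const Fragment& f) {
        int i = f.y * width + f.x;
        if (f.depth < depth[i]) { depth[i] = f.depth; id[i] = f.objectId; }
    }
};
\end{verbatim}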

Further classification reflects the semantics of the solution
space. According to this criterion we can classify the algorithms as:

\begin{itemize}
  \item primal space (object space),
  \item line space,
    \begin{itemize}
    \item image space,
    \item general,
    \end{itemize}
  \item hybrid.
\end{itemize}

A primal space algorithm solves the problem by studying the
visibility between objects without a transformation to a different
solution space. A line space algorithm studies visibility using a
transformation of the problem to line space. Image space algorithms
can be seen as an important subclass of line space algorithms for
solving visibility from a point problems in 3D. These algorithms cover
all visible surface algorithms and many visibility culling
algorithms. They solve visibility in a given image plane that
represents the problem-relevant line set ${\cal L}_R$ --- each ray
originating at the viewpoint corresponds to a point in the image plane.

The described classification differs from the understanding of image
space and object space algorithms sometimes mentioned in the literature,
which incorrectly considers all image space algorithms discrete and all
object space algorithms continuous.


%*****************************************************************************

\section{Visibility algorithm design}

Visibility algorithm design can be decoupled into a series of
important design decisions. The first step is to clearly formulate the
problem that should be solved by the algorithm. The taxonomy stated
above can help to understand the difficulty of solving the given
problem and its relationship to other visibility problems in computer
graphics. The following sections summarize important steps in the
design of a visibility algorithm and discuss some commonly used
techniques.


\subsection{Scene structuring}

We discuss two issues dealing with structuring of the scene:
identifying occluders and occludees, and spatial data structures for
the scene description.

\subsubsection{Occluders and occludees}
%occluders, occludees,

Many visibility algorithms restructure the scene description to
distinguish between {\em occluders} and {\em occludees}. Occluders
are objects that cause changes in visibility (occlusion). Occludees
are objects that do not cause occlusion, but are sensitive to
visibility changes. In other words the algorithm studies the visibility of
occludees with respect to occluders.

The concept of occluders and occludees is used to improve the
performance of the algorithm in terms of both running time and accuracy
by reducing the number of primitives used for
visibility computations (the performance measures of visibility
algorithms will be discussed in
Section~\ref{sec:performance}). Typically, the number of occluders and
occludees is significantly smaller than the total number of objects in
the scene. Additionally, both the occluders and the occludees can be
accompanied by topological (connectivity) information that might
be necessary for the efficient operation of the algorithm.

The concept of occluders is applicable to most visibility
algorithms. The concept of occludees is useful for algorithms
providing answers (1) and (2) according to the taxonomy of
Section~\ref{sec:answers}. Some visibility algorithms do not
distinguish between occluders and occludees at all. For example all
traditional visible surface algorithms use all scene objects as both
occluders and occludees.

Both the occluders and the occludees can be represented by {\em
virtual objects} constructed from the scene primitives: the occluders
as simplified inscribed objects, the occludees as simplified circumscribed
objects such as bounding boxes. Algorithms can be classified according
to the type of occluders they deal with. The classification follows
the scene restrictions discussed in
Section~\ref{sec:scene_restrictions} and adds classes specific to
occluder restrictions:

\begin{itemize}
\item
  vertical prisms,
\item
  axis-aligned polygons,
\item
  axis-aligned rectangles.
\end{itemize}

Vertical prisms are specifically important for computing
visibility in \m25d scenes. Some visibility algorithms can deal only
with axis-aligned polygons or the even more restrictive axis-aligned
rectangles.
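
As an illustration, a \m25d occluder can be stored as a ground-plan
footprint erected to a given height; the sketch below uses illustrative
names and is not tied to any particular algorithm of the later
chapters:

\begin{verbatim}
#include <vector>

struct Point2 { double x, y; };

// Vertical prism occluder for 2.5D scenes: a footprint polygon in
// the ground plane extruded vertically up to a given height.
struct VerticalPrism {
    std::vector<Point2> footprint; // counter-clockwise ground plan
    double height;                 // height above the ground plane
};
\end{verbatim}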

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.7\textwidth,draft=\DRAFTIMAGES]{images/houses}}
\caption{Occluders in an urban scene. In urban scenes the occluders
can be considered vertical prisms erected above the ground.}
\label{fig:houses}
\end{figure}

Other important criteria for evaluating algorithms according to
occluder restrictions include:

\begin{itemize}
\item
  connectivity information,
\item
  intersecting occluders.
\end{itemize}

The explicit knowledge of connectivity is crucial for the efficient
performance of some visibility algorithms (performance measures will
be discussed in Section~\ref{sec:performance}). Intersecting
occluders cannot be handled properly by some visibility algorithms.
In such a case the possible occluder intersections should be resolved
in preprocessing.

A similar classification can be applied to occludees. However, the
visibility algorithms typically pose fewer restrictions on occludees
since they are not used to describe visibility but only to check
visibility with respect to the description provided by the occluders.


%occluder selection, occluder LOD, virtual occluders,

\subsubsection{Scene description}

The scene is typically represented by a collection of objects. For the
purposes of visibility computations it can be advantageous to transform
the object centered representation to a spatial representation by
means of a spatial data structure. For example the scene can be
represented by an octree where full voxels correspond to opaque parts
of the scene, as sketched after the following list. Such a data structure
is then used as an input to the visibility algorithm. The spatial data
structures for the scene description are used for the following reasons:

\begin{itemize}

\item {\em Regularity}. A spatial data structure typically provides a
  regular description of the scene that avoids complicated
  configurations or overly detailed input. Furthermore, the
  representation can be rather independent of the total scene
  complexity.

\item {\em Efficiency}. The algorithm can be more efficient in both
  the running time and the accuracy of the result.

\end{itemize}
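
For concreteness, a minimal sketch of such an octree scene description
follows; full leaf voxels mark the opaque parts of the scene and the
names are illustrative only:

\begin{verbatim}
#include <array>
#include <memory>

struct AABB { double min[3], max[3]; };

struct OctreeNode {
    enum class State { Empty, Full, Interior };
    State state = State::Empty;
    AABB  box{};
    // Child octants exist only for interior nodes.
    std::array<std::unique_ptr<OctreeNode>, 8> children;
};

// A visibility algorithm can traverse the octree instead of the
// original objects, e.g. testing whether a voxel is opaque:
bool isOpaque(const OctreeNode& n) {
    return n.state == OctreeNode::State::Full;
}
\end{verbatim}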

Additionally, spatial data structures can be applied to structure the
solution space and/or to represent the desired solution. Another
application of spatial data structures is the acceleration of the
algorithm by providing spatial indexing. These applications of spatial
data structures will be discussed in
Sections~\ref{sec:solution_space_ds}
and~\ref{sec:acceleration_ds}. Note that a visibility algorithm can
use a single data structure for all three purposes (scene
structuring, solution space structuring, and spatial indexing) while
another visibility algorithm can use three conceptually different data
structures.


% gernots alg.
%used as solution space DS and/or acceleration DS
\subsection{Solution space data structures}
\label{sec:solution_space_ds}

A solution space data structure is used to maintain an intermediate
result during the operation of the algorithm and it is used to
generate the final result of the algorithm. We distinguish between the
following solution space data structures:

\begin{itemize}

\item general data structures:

  single value (ray shooting), winged edge, active edge table, etc.

\item primal space (spatial) data structures:

  uniform grid, BSP tree (shadow volumes),
  bounding volume hierarchy, kD-tree, octree, etc.

\item image space data structures:

  2D uniform grid (shadow map), 2D BSP tree, quadtree, kD-tree, etc.

\item line space data structures:

  regular grid, kD-tree, BSP tree, etc.

\end{itemize}

The image space data structures can be considered a special case of
the line space data structures since a point in the image represents a
ray through that point (see also Section~\ref{sec:solspace}).

If the dimension of the solution space matches the dimension of the
problem-relevant line set, the visibility problem can often be solved
with high accuracy by a single sweep through the scene. If the
dimensions do not match, the algorithm typically needs more passes to
compute a result with satisfactory accuracy~\cite{EVL-2000-60,wonka00}
or neglects some visibility effects altogether~\cite{EVL-2000-59}.


%ray shooting none
%visible surface algorithms - list of scan-line intersections, BSP tree,
% z-buffer (graphics hardware)
%shadow computation shadow volumes, BSP tree, shadow map (graphics hardware)
%PVS for view cell - occlusion tree

\subsection{Performance}
\label{sec:performance}

%output sensitivity, memory consumption, running time, scalability

The performance of a visibility algorithm can be evaluated by measuring
the quality of the result, the running time, and the memory consumption.
In this section we discuss several concepts related to these
performance criteria.


\subsubsection{Quality of the result}

One of the important performance measures of a visibility algorithm
is the quality of the result. The quality measure depends on the type
of the answer to the problem. Generally, the quality of the result
can be expressed as a distance from the exact result in the solution
space. Such a quality measure can be seen as a more precise
expression of the accuracy of the algorithm discussed in
Section~\ref{sec:accuracy}.

For example a quality measure of algorithms computing a PVS can be
expressed by the {\em relative overestimation} and the {\em relative
underestimation} of the PVS with respect to the exact PVS. We can
define a quality measure of an algorithm $A$ on input $I$ as a tuple
$\mbi{Q}^A(I)$:

\begin{eqnarray}
  \mbi{Q}^A(I) & = & (Q^A_o(I), Q^A_u(I)), \qquad  I \in {\cal D} \\
  Q^A_o(I) & = & {|S^A(I) \setminus S^{\cal E}(I)| \over |S^{\cal E}(I)|} \\
  Q^A_u(I) & = & {|S^{\cal E}(I) \setminus S^A(I) | \over |S^{\cal E}(I)|}
\end{eqnarray}

where $I$ is an input from the input domain ${\cal D}$, $S^A(I)$ is
the PVS determined by the algorithm $A$ for input $I$, and $S^{\cal
E}(I)$ is the exact PVS for the given input. $Q^A_o(I)$ expresses the
{\em relative overestimation} of the PVS, $Q^A_u(I)$ is the {\em
relative underestimation}.
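
A minimal sketch of evaluating the two measures from sets of object
ids follows (illustrative names; it assumes a non-empty exact PVS):

\begin{verbatim}
#include <algorithm>
#include <iterator>
#include <set>
#include <vector>

struct PVSQuality { double overestimation, underestimation; };

PVSQuality pvsQuality(const std::set<int>& computed, // S^A(I)
                      const std::set<int>& exact)    // S^E(I)
{
    std::vector<int> over, under;
    // S^A \ S^E: objects reported although they are invisible.
    std::set_difference(computed.begin(), computed.end(),
                        exact.begin(), exact.end(),
                        std::back_inserter(over));
    // S^E \ S^A: visible objects that were missed.
    std::set_difference(exact.begin(), exact.end(),
                        computed.begin(), computed.end(),
                        std::back_inserter(under));
    double n = static_cast<double>(exact.size());
    return { over.size() / n, under.size() / n };
}
\end{verbatim}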

The expected quality of the algorithm over all possible inputs can be
given as:

\begin{eqnarray}
Q^A & = & E[\| \mbi{Q}^A(I) \|] \\
    & = & \sum_{\forall I \in {\cal D}} f(I)\cdot\sqrt{Q^A_o(I)^2 + Q^A_u(I)^2}
\end{eqnarray}

where $f(I)$ is the probability density function expressing the
probability of occurrence of input $I$. The quality measure
$\mbi{Q}^A(I)$ can be used to classify a PVS algorithm into one of the
four accuracy classes according to Section~\ref{sec:accuracy}:

\begin{enumerate}
\item exact\\
  $\forall I \in {\cal D} : Q_o^A(I) = 0 \wedge Q_u^A(I) = 0$
\item conservative\\
  $\forall I \in {\cal D} : Q_o^A(I) \geq 0 \wedge Q_u^A(I) = 0$
\item aggressive \\
  $\forall I \in {\cal D} : Q_o^A(I) = 0 \wedge Q_u^A(I) \geq 0$
\item approximate \\
  $\qquad \exists I_j, I_k \in {\cal D}: Q_o^A(I_j) > 0 \wedge Q_u^A(I_k) > 0$
\end{enumerate}


\subsubsection{Scalability}

Scalability expresses the ability of the visibility algorithm to cope
with larger inputs. A more precise definition of the scalability of an
algorithm depends on the problem for which the algorithm is
designed. The scalability of an algorithm can be studied with respect
to the size of the scene (e.g. the number of scene objects). Another
measure might consider the dependence of the algorithm on the number
of visible objects only. Scalability can also be studied
according to the given domain restrictions, e.g. the volume of the view
cell.

A well designed visibility algorithm should be scalable with respect
to the number of structural changes or discontinuities of
visibility. Furthermore, its performance should be given by the
complexity of the visible part of the scene. These two important
measures of the scalability of an algorithm are discussed in the next two
sections.

\subsubsection{Use of coherence}

Scenes in computer graphics typically consist of objects whose
properties vary smoothly from one part to another. A view of such a
scene contains regions of smooth changes (changes in color, depth,
texture, etc.) and discontinuities that let us distinguish between
objects. The degree to which the scene or its projection exhibits local
similarities is called {\em coherence}~\cite{Foley90}.

Coherence can be exploited by reusing calculations made for one part
of the scene for nearby parts. Algorithms exploiting coherence are
typically more efficient than algorithms computing the result from
scratch.

Sutherland et al.~\cite{Sutherland:1974:CTH} identified several
different types of coherence in the context of visible surface
algorithms. We simplify the classification proposed by Sutherland et
al. to reflect general visibility problems and distinguish between the
following three types of {\em visibility coherence}:

\begin{itemize}

\item {\em Spatial coherence}. Visibility of points in space tends to
  be coherent in the sense that the visible part of the scene consists
  of compact sets (regions) of visible and invisible points. We can
  reuse calculations made for a given region for the neighboring
  regions or its subregions.

\item {\em Line-space coherence}. Sets of similar rays tend to have the
  same visibility classification, i.e. the rays intersect the same
  object. We can reuse calculations for the given set of rays for its
  subsets or the sets of nearby rays.

\item {\em Temporal coherence}. Visibility at two successive moments is
  likely to be similar despite small changes in the scene or a
  region/point of interest. Calculations made for one frame can be
  reused for the next frame in a sequence.

\end{itemize}

The degree to which the algorithm exploits various types of coherence
is one of the major design paradigms in the research of new visibility
algorithms. The importance of exploiting coherence is emphasized by
the large amount of data that needs to be processed by current
rendering algorithms.


\subsubsection{Output sensitivity}

An algorithm is said to be {\em output-sensitive} if its running time
is sensitive to the size of the output. In the computer graphics community
the term output-sensitive algorithm is used in a broader meaning than
in computational geometry~\cite{berg:97}. Attention is paid to the
practical usage of the algorithm, i.e. to an efficient implementation
in terms of the practical average case performance. The algorithms are
usually evaluated experimentally using test data by measuring the
running time and the size of the output of the algorithm. The formal
average case analysis is usually not carried out for the following two
reasons:

\begin{enumerate}

\item {\em The algorithm is too complicated}. Visibility algorithms
exploit data structures that are built according to various heuristics
and it is difficult to derive proper bounds even on the expected size
of these supporting data structures.

\item {\em It is difficult to properly model the input data}. In
general it is difficult to create a reasonable model that captures
the properties of real world scenes as well as the probability of
occurrence of a particular configuration.

\end{enumerate}

A visibility algorithm can often be divided into an {\em offline}
phase and an {\em online} phase. The offline phase is also called
preprocessing. The preprocessing is often amortized over many
executions of the algorithm and therefore it is advantageous to
express it separately from the online running time.

For example an ideal output-sensitive visible surface algorithm runs
in $O(n\log n + k^2)$ time, where $n$ is the number of scene polygons (the size
of the input) and $k$ is the number of visible polygons (in the worst case
$k$ visible polygons induce $O(k^2)$ visible polygon fragments).


\subsubsection{Acceleration data structures}
\label{sec:acceleration_ds}

Acceleration data structures are often used to achieve the performance
goals of a visibility algorithm. These data structures allow efficient
point location, proximity queries, or scene traversal required by many
visibility algorithms.

With a few exceptions the acceleration data structures provide a {\em
spatial index} for the scene by means of a spatial data structure.
The spatial data structures group scene objects according to their
spatial proximity. In contrast, line space data structures group
rays according to their proximity in line space.

The common acceleration data structures can be divided into the
following categories (a kD-tree example is sketched after the list):

\begin{itemize}
\item Spatial data structures
  \begin{itemize}
  \item {\em Spatial subdivisions}

    uniform grid, hierarchical grid, kD-tree, BSP tree, octree, quadtree, etc.

  \item {\em Bounding volume hierarchies}

    hierarchy of bounding spheres,
    hierarchy of bounding boxes, etc.

  \item {\em Hybrid}

    hierarchy of uniform grids, hierarchy of kD-trees, etc.

  \end{itemize}

\item Line space data structures
  \begin{itemize}
  \item {\em General}

    regular grid, kD-tree, BSP tree, etc.
  \end{itemize}

\end{itemize}
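
As an illustration of a spatial index, a minimal kD-tree with
point-location traversal might look as follows; the names are
illustrative and do not refer to a specific implementation used in the
later chapters:

\begin{verbatim}
#include <memory>
#include <vector>

struct KdNode {
    int axis = -1;         // 0/1/2 = x/y/z split axis, -1 = leaf
    double splitPos = 0.0; // position of the splitting plane
    std::unique_ptr<KdNode> left, right; // children (interior only)
    std::vector<int> objects;            // object ids (leaves only)

    bool isLeaf() const { return axis < 0; }
};

// Point location: descend to the leaf containing the point p.
const KdNode* locate(const KdNode* node, const double p[3]) {
    while (node && !node->isLeaf())
        node = (p[node->axis] < node->splitPos) ? node->left.get()
                                                : node->right.get();
    return node;
}
\end{verbatim}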


\subsubsection{Use of graphics hardware}

Visibility algorithms can be accelerated by exploiting dedicated
graphics hardware. The hardware implementation of the z-buffer
algorithm that is common even on low-end graphics hardware can be
used to accelerate solutions to other visibility problems. Recall that the
z-buffer algorithm solves the visibility from a point problem by
providing a discrete approximation of the visible surfaces.
%$(3-D-2b(i), A-3b)$

A visibility algorithm can be accelerated by the graphics hardware if
it can be decomposed so that the decomposition includes the
problem solved by the z-buffer or a series of such problems.
%$(3-D-2b(i), A-3b)$
Prospectively, the recent features of graphics hardware, such as
pixel and vertex shaders, allow easier application of graphics
hardware to specific visibility tasks. The software interface
between the graphics hardware and the CPU is usually provided by
OpenGL~\cite{Moller02-RTR}.
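
For example, hardware occlusion queries let an algorithm ask how many
samples of a rasterized bounding box pass the depth test. The
simplified sketch below uses the OpenGL~1.5 query interface;
extension-loading and header details are omitted and
\texttt{drawBoundingBox()} stands in for application code issuing the
box geometry:

\begin{verbatim}
#include <GL/gl.h>

void drawBoundingBox(); // assumed application routine

bool boxPotentiallyVisible()
{
    GLuint query;
    glGenQueries(1, &query);

    glDepthMask(GL_FALSE); // do not modify the z-buffer
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);

    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawBoundingBox();     // rasterize the occludee proxy
    glEndQuery(GL_SAMPLES_PASSED);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    GLuint samples = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples); // may stall
    glDeleteQueries(1, &query);
    return samples > 0;    // any sample passed => potentially visible
}
\end{verbatim}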


\section{Visibility in urban environments}

Urban environments constitute an important class of real world scenes
computer graphics deals with. We can identify two fundamental
subclasses of urban scenes. Firstly, we consider {\em outdoor} scenes,
i.e. urban scenes as observed from streets, parks, rivers, or a
bird's-eye view. Secondly, we consider {\em indoor} scenes, i.e. urban
scenes representing building interiors. In the following two sections
we discuss the essential characteristics of visibility in both the
outdoor and the indoor scenes. The discussion is followed by a
summary of the suitable visibility techniques.


\subsection{Analysis of visibility in outdoor urban areas}

\label{sec:analysis_ue}
\label{sec:ANALYSIS_UE}

Outdoor urban scenes are viewed using two different scenarios. In a
{\em flyover} scenario the scene is observed from the bird's-eye
view. A large part of the scene is visible. Visibility is mainly
restricted due to the structure of the terrain, atmospheric
constraints (fog, clouds) and the finite resolution of the human
retina. Rendering of flyover scenarios is usually accelerated
using LOD, image-based rendering, and terrain visibility algorithms,
but there is no significant potential for visibility culling.

In a {\em walkthrough} scenario the scene is observed from a
pedestrian's point of view and the visibility is often very
restricted. In the remainder of this section we discuss the walkthrough
scenario in more detail.

Due to technological and physical restrictions urban scenes viewed
from the outside closely resemble a 2D {\em height function}, i.e. a
function expressing the height of the scene elements above the ground.
The height function cannot capture certain objects such as bridges,
passages, subways, or detailed objects such as trees. Nevertheless
buildings, usually the most important part of the scene, can be
captured accurately by the height function in most cases. For the
sake of visibility computations the objects that cannot be represented
by the height function can be ignored. The resulting scene is then
called a {\em \m25d scene}.

In a dense urban area with high buildings visibility is very
restricted when the scene is viewed from a street (see
Figure~\ref{fig:outdoor}-a). Only buildings from nearby streets are
visible. Often there are no buildings visible above the roofs of buildings
close to the viewpoint. In such a case visibility is essentially
two-dimensional, i.e. it could be solved accurately using a 2D
footprint of the scene and a 2D visibility algorithm. In areas with
smaller houses of different shapes visibility is not so severely
restricted since some objects can be visible by looking over other
objects. The view complexity (measured in the number of visible
objects) increases and the height structure becomes increasingly
important. Complex views with far visibility can also be seen near
rivers, squares, and parks (see Figure~\ref{fig:outdoor}-b).

\begin{figure}[htb]
  \centerline{
    \hfill
    \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_street1}
    \hfill
    \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_castle1}
    \hfill
  }
  \caption{Visibility in outdoor urban areas. (left) In the center of a city
  visibility is typically restricted to a few nearby streets. (right)
  Near river banks typically a large part of the city is visible. Note
  that many distant objects are visible due to the terrain gradation.}
  \label{fig:outdoor}
\end{figure}

In scenes with large differences in terrain height the view complexity
is often very high. Many objects can be visible that are situated for
example on a hill or on a slope behind a river. Especially in areas
with smaller housing, visibility is largely determined by the terrain itself.

We can summarize the observations as follows (based on
Wonka~\cite{wonka_phd}):

\begin{itemize}

\item
  Outdoor urban environments have a basically \m25d structure and
  consequently visibility is restricted accordingly.

\item
  The view is very restricted in certain areas, such as in the
  city center. However the complexity of the view can vary
  significantly. It is not always the case that only a few objects are
  visible.

\item
  If there are large height differences in the terrain, many
  objects are visible from most viewpoints.

\item
  In the same view a close object can be visible next to a very
  distant one.

\end{itemize}


In the simplest case the outdoor scene consists only of a terrain
populated by a few buildings. Then the visibility can be calculated on
the terrain itself with satisfactory
accuracy~\cite{Floriani:1995:HCH,Cohen-Or:1995:VDZ,Stewart:1997:HVT}.
Outdoor urban environments have a structure similar to terrains:
buildings can be treated as a terrain with {\em many discontinuities}
in the height function (assuming that the buildings do not contain
holes or significant variations in their fa{\c{c}}ades). To
accurately capture visibility in such an environment specialized
algorithms have been developed that compute visibility from a given
viewpoint~\cite{downs:2001:I3DG} or view
cell~\cite{wonka00,koltun01,bittner:2001:PG}.

%  The methods presented later in the thesis make use of the specific
% structure of the outdoor scenes to efficiently compute a PVS for the
% given view cell.  The key observation is that the PVS for a view cell
% in a \m25d can be determined by computing visibility from its top
% boundary edges. This problem becomes a restricted variant of the
% visibility from a line segment in 3D with $d({\cal L}_R) = 3$.


\subsection{Analysis of indoor visibility}

Building interiors constitute another important class of real world
scenes. A typical building consists of rooms, halls, corridors, and
stairways. It is possible to see from one room to another through an
open door or window. Similarly it is possible to see from one corridor
to another one through a door or other connecting structure. In
general the scene can be subdivided into cells corresponding to the
rooms, halls, corridors, etc., and transparent portals that connect
the cells~\cite{Airey90,Teller:1991:VPI}. Some portals
correspond to real doors and windows, others provide only a
virtual connection between cells. For example an L-shaped corridor
can be represented by two cells and one virtual portal connecting them.

Visibility in a building interior is often significantly restricted
(see Figure~\ref{fig:indoor}). We can see the room we are located in
and possibly a few other rooms visible through open doors. Due to the
natural partition of the scene into cells and portals, visibility can
be solved by determining which cells can be seen through a given set of
portals and their sequences. A sequence of portals that we can see
through is called {\em feasible}.
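
A simplified sketch of the resulting recursive traversal follows; it
is written in the spirit of the cell/portal methods cited above, and
both \texttt{Frustum} and the clipping helper
\texttt{clipFrustumToPortal()} are hypothetical placeholders:

\begin{verbatim}
#include <set>
#include <vector>

struct Frustum { /* bounding planes omitted in this sketch */ };
struct Cell;
struct Portal { Cell* target; /* portal polygon omitted */ };
struct Cell   { int id; std::vector<Portal> portals; };

// Hypothetical helper: shrink `in' to the part passing through
// portal p; returns false if the narrowed frustum is empty.
bool clipFrustumToPortal(const Frustum& in, const Portal& p,
                         Frustum& out);

// Collect the ids of all cells reachable through feasible portal
// sequences, starting from the cell containing the viewer. The
// frustum shrinks at every portal, which terminates the recursion
// in practice.
void traverse(const Cell& cell, const Frustum& frustum,
              std::set<int>& visible)
{
    visible.insert(cell.id);
    for (const Portal& p : cell.portals) {
        Frustum narrowed;
        if (clipFrustumToPortal(frustum, p, narrowed))
            traverse(*p.target, narrowed, visible);
    }
}
\end{verbatim}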

\begin{figure}[htb]
  \centerline{
    \hfill
    \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_chodba1}
    \hfill
    \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_sloupy2}
    \hfill
  }
  \caption{Indoor visibility. (left) Visibility in indoor scenes is typically
    restricted to a few rooms or corridors. (right) In scenes with more complex
    interior structure visibility gets more complicated.
  }
  \label{fig:indoor}
\end{figure}


Many algorithms for computing indoor visibility~\cite{Airey90,
Teller92phd, Luebke:1995:PMS} exploit the cell/portal structure of the
scene. The potential problem of this approach is its strong
sensitivity to the arrangement of the environment. In a scene with a
complicated structure with many portals there are many feasible portal
sequences. Imagine a hall with columns arranged on a grid. The number
of feasible portal sequences rapidly increases with the distance from
the given view cell~\cite{Teller92phd} if the columns are sufficiently
small (see Figure~\ref{fig:portal_explosion}). Paradoxically most of
the scene is visible and there is almost no benefit in using any
visibility culling algorithm.

The methods presented later in the report partially avoid this
problem since they do not rely on finding feasible portal sequences
even in indoor scenes. Instead of determining what {\em can} be
visible through a transparent complement of the scene (portals), the
methods determine what {\em cannot} be visible due to the scene
objects themselves (occluders). This approach also avoids the explicit
enumeration of portals and the construction of the cell/portal graph.


\begin{figure}[htb]
  \centerline{
    \includegraphics[width=0.45\textwidth,draft=\DRAFTFIGS]{figs/portals_explosion}}
    \caption{In sparsely occluded scenes the cell/portal algorithm can
    exhibit a combinatorial explosion in the number of feasible portal
    sequences. Paradoxically, visibility culling provides almost no
    benefit in such scenes.}
    \label{fig:portal_explosion}
\end{figure}


\section{Summary}

Visibility problems and algorithms penetrate a large part of computer
graphics research. The proposed taxonomy aims to classify visibility
problems independently of their target application. The
classification should help to understand the nature of the given
problem and it should assist in finding relationships between
visibility problems and algorithms in different application areas.
The tools address the following classes of visibility problems:

\begin{itemize}
\item Visibility from a point in 3D, $d({\cal L}_R)=2$.
\item Global visibility in 3D, $d({\cal L}_R)=4$.
\item Visibility from a region in 3D, $d({\cal L}_R)=4$.
\end{itemize}

This chapter discussed several important criteria for the
classification of visibility algorithms. This classification can be
seen as a finer structuring of the taxonomy of visibility problems. We
discussed important steps in the design of a visibility algorithm that
should also assist in understanding the quality of a visibility
algorithm. According to the classification the tools address
algorithms with the following properties:

\begin{itemize}
\item Domain:
  \begin{itemize}
  \item viewpoint (Chapter~\ref{chap:online}),
  \item polygon or polyhedron (Chapters~\ref{chap:sampling} and~\ref{chap:mutual})
  \end{itemize}
\item Scene restrictions (occluders):
  \begin{itemize}
  \item meshes consisting of convex polygons
  \end{itemize}
\item Scene restrictions (group objects):
  \begin{itemize}
  \item bounding boxes (Chapters~\ref{chap:rtviscull},~\ref{chap:vismap},~\ref{chap:rot25d}, and~\ref{chap:rot3d})
  \end{itemize}
\item Output:
  \begin{itemize}
  \item visibility classification of objects or hierarchy nodes
  \item PVS
  \end{itemize}
\item Accuracy:
  \begin{itemize}
  \item conservative
  \item exact
  \item aggressive
  \end{itemize}
\item Solution space:
  \begin{itemize}
  \item discrete (Chapters~\ref{chap:online} and~\ref{chap:sampling})
  \item continuous, line space / primal space (Chapter~\ref{chap:rot25d})
  \end{itemize}
\item Solution space data structures: viewport (Chapter~\ref{chap:online}), ray stack (Chapter~\ref{chap:sampling}), ray stack or BSP tree (Chapter~\ref{chap:mutual})
\item Use of coherence of visibility:
  \begin{itemize}
  \item spatial coherence (all methods)
  \item temporal coherence (Chapter~\ref{chap:online})
  \end{itemize}
\item Output sensitivity: expected in practice (all methods)
\item Acceleration data structure: kD-tree (all methods)
\item Use of graphics hardware: Chapter~\ref{chap:online}

\end{itemize}