\chapter{Global Visibility Sampling}

\label{chap:sampling}

The proposed visibility preprocessing framework consists of two major
steps.

\begin{itemize}
\item The first step is an aggressive visibility sampling which gives
an initial estimate of global visibility in the scene. The sampling
itself involves several strategies which will be described below.
The important property of the aggressive sampling step is that it
provides a fast progressive solution to global visibility and thus it
can be easily integrated into the game development cycle. The
aggressive sampling terminates when the average contribution of new
ray samples falls below a predefined threshold.

\item The second step is mutual visibility verification. This step
turns the previous aggressive visibility solution into either an
exact, a conservative, or an error-bound aggressive solution. The
choice of the particular verifier is left to the user in order to
select the best one for a particular scene, application context, and
time constraints. For example, in scenes like a forest an error-bound
aggressive solution can be the best compromise between the resulting
size of the PVS (and frame rate) and the visual quality. An exact or
conservative algorithm can however be chosen for urban scenes, where
the omission of even small objects can be more distracting for the
user. The mutual visibility verification will be described in the next
chapter.

\end{itemize}

In traditional visibility preprocessing the view space is subdivided
into view cells, and for each view cell the set of visible objects ---
the potentially visible set (PVS) --- is computed. This framework has
been used for conservative, aggressive, and exact algorithms.

We propose a different strategy which has several advantages for
sampling-based aggressive visibility preprocessing. The strategy is
based on the following fundamental ideas:
\begin{itemize}
\item Compute progressive global visibility instead of sequential from-region visibility.
\item Swap the roles of view cells and objects for some parts of the computation.
\end{itemize}

Both these points will be addressed in this chapter in more detail.

\section{Related work}
\label{VFR3D_RELATED_WORK}

Below we briefly discuss the related work on visibility preprocessing
in several application areas. In particular we focus on computing
from-region visibility, which has been at the core of most previous
visibility preprocessing techniques.


\subsection{Aspect graph}

The first algorithms dealing with from-region visibility belong to the
area of computer vision. The {\em aspect
  graph}~\cite{Gigus90,Plantinga:1990:RTH, Sojka:1995:AGT} partitions
the view space into cells that group viewpoints from which the
projection of the scene is qualitatively equivalent. The aspect graph
is a graph describing the view of the scene (aspect) for each cell of
the partitioning. The major drawback of this approach is that for
polygonal scenes with $n$ polygons there can be $\Theta(n^9)$ cells in
the partitioning for unrestricted view space. A {\em scale space}
aspect graph~\cite{bb12595,bb12590} improves the robustness of the
method by merging similar features according to the given scale.


\subsection{Potentially visible sets}

In the computer graphics community, Airey~\cite{Airey90} introduced
the concept of {\em potentially visible sets} (PVS). Airey assumes
the existence of a natural subdivision of the environment into
cells. For models of building interiors these cells roughly correspond
to rooms and corridors. For each cell the PVS is formed by cells
visible from any point of that cell. Airey uses ray shooting to
approximate visibility between cells of the subdivision and so the
computed PVS is not conservative.

This concept was further elaborated by Teller et
al.~\cite{Teller92phd,Teller:1991:VPI} to establish a conservative
PVS. The PVS is constructed by testing the existence of a stabbing
line through a sequence of polygonal portals between cells. Teller
proposed an exact solution to this problem using \plucker
coordinates~\cite{Teller:1992:CAA} and a simpler and more robust
conservative solution~\cite{Teller92phd}. The portal-based methods
are well suited to static densely occluded environments with a
particular structure. For less structured models they can face a
combinatorial explosion of complexity~\cite{Teller92phd}. Yagel and
Ray~\cite{Yagel95a} present an algorithm that uses a regular spatial
subdivision. Their approach is not sensitive to the structure of the
model in terms of complexity, but its efficiency is affected by the
discrete representation of the scene.

Plantinga proposed a PVS algorithm based on a conservative viewspace
partitioning by evaluating visual
events~\cite{Plantinga:1993:CVP}. The construction of viewspace
partitioning was further studied by Chrysanthou et
al.~\cite{Chrysanthou:1998:VP}, Cohen-Or et al.~\cite{cohen-egc-98},
and Sadagic~\cite{Sadagic}. Sudarsky and
Gotsman~\cite{Sudarsky:1996:OVA} proposed an output-sensitive
visibility algorithm for dynamic scenes. Cohen-Or et
al.~\cite{COZ-gi98} developed a conservative algorithm determining
visibility of an $\epsilon$-neighborhood of a given viewpoint that was
used for network-based walkthroughs.

Conservative algorithms for computing PVS developed by Durand et
al.~\cite{EVL-2000-60} and Schaufler et al.~\cite{EVL-2000-59} make
use of several simplifying assumptions to avoid the usage of 4D data
structures. Wang et al.~\cite{Wang98} proposed an algorithm that
precomputes visibility within beams originating from the restricted
viewpoint region. The approach is very similar to the 5D subdivision
for ray tracing~\cite{Simiakakis:1994:FAS} and so it exhibits similar
problems, namely inadequate memory and preprocessing complexities.
Specialized algorithms for computing PVS in \m25d scenes were proposed
by Wonka et al.~\cite{wonka00}, Koltun et al.~\cite{koltun01}, and
Bittner et al.~\cite{bittner:2001:PG}.

The exact mutual visibility method presented later in the report is
based on a method exploiting \plucker coordinates of
lines~\cite{bittner:02:phd,nirenstein:02:egwr,haumont2005:egsr}. This
algorithm uses \plucker coordinates to compute visibility in shafts
defined by each polygon in the scene.


\subsection{Rendering of shadows}

The from-region visibility problems include the computation of soft
shadows due to an area light source. Continuous algorithms for
real-time soft shadow generation were studied by Chin and
Feiner~\cite{Chin:1992:FOP}, Loscos and
Drettakis~\cite{Loscos:1997:IHS}, Chrysanthou~\cite{Chrysantho1996a},
and Chrysanthou and Slater~\cite{Chrysanthou:1997:IUS}. Discrete
solutions have been proposed by Nishita~\cite{Nishita85}, Brotman and
Badler~\cite{Brotman:1984:GSS}, and Soler and Sillion~\cite{SS98}. An
exact algorithm computing the antipenumbra of an area light source was
developed by Teller~\cite{Teller:1992:CAA}.


\subsection{Discontinuity meshing}

Discontinuity meshing is used in the context of the radiosity global
illumination algorithm and for computing soft shadows due to area
light sources. The first approximate discontinuity meshing algorithms
were studied by Campbell~\cite{Campbell:1990:AMG, Campbell91},
Lischinski~\cite{lischinski92a}, and Heckbert~\cite{Heckbert92discon}.
More elaborate methods were developed by
Drettakis~\cite{Drettakis94-SSRII, Drettakis94-FSAAL}, and Stewart and
Ghali~\cite{Stewart93-OSACS, Stewart:1994:FCSb}. These methods are
capable of creating a complete discontinuity mesh that encodes all
visual events involving the light source.

Classical radiosity is based on an evaluation of form factors between
two patches~\cite{Schroder:1993:FFB}. The visibility computation is a
crucial step in the form factor
evaluation~\cite{Teller:1993:GVA,Haines94,Teller:1994:POL,
Nechvile:1996:FFE,Teichmann:WV}. Similar visibility computations take
place in the scope of hierarchical radiosity
algorithms~\cite{Soler:1996:AEB, Drettakis:1997:IUG, Daubert:1997:HLS}.



\subsection{Global visibility}

The aim of {\em global visibility} computations is to capture and
describe visibility in the whole scene~\cite{Durand:1996:VCN}. The
global visibility algorithms are typically based on some form of {\em
line space subdivision} that partitions lines or rays into equivalence
classes according to their visibility classification. Each class
corresponds to a continuous set of rays with a common visibility
classification. The techniques differ mainly in how the line space
subdivision is computed and maintained. A practical application of
most of the proposed global visibility structures for 3D scenes is
still an open problem. Prospectively, these techniques provide an
elegant method for ray shooting acceleration --- the ray shooting
problem can be reduced to a point location in the line space
subdivision.


Pocchiola and Vegter introduced the visibility complex~\cite{pv-vc-93}
that describes global visibility in 2D scenes. The visibility complex
has been applied to solve various 2D visibility
problems~\cite{r-tsvcp-95,r-wvcav-97, r-dvpsv-97,Orti96-UVCRC}. The
approach was generalized to 3D by Durand et
al.~\cite{Durand:1996:VCN}. Nevertheless, no implementation of the 3D
visibility complex is currently known. Durand et
al.~\cite{Durand:1997:VSP} introduced the {\em visibility skeleton},
which is a graph describing a skeleton of the 3D visibility
complex. The visibility skeleton was verified experimentally and the
results indicate that its behavior in practice is much better than its
$O(n^4\log n)$ worst-case complexity suggests. Pu~\cite{Pu98-DSGIV}
developed a method similar to the one presented in this chapter. He
uses a BSP tree in \plucker coordinates to represent a global
visibility map for a given set of polygons. The computation is
performed considering all rays piercing the scene and so the method
exhibits unacceptable memory complexity even for scenes of moderate
size. Recently, Duguet and Drettakis~\cite{duguet:02:sig} developed a
robust variant of the visibility skeleton algorithm that uses robust
epsilon-visibility predicates.

Discrete methods aiming to describe visibility in a 4D data structure
were presented by Chrysanthou et al.~\cite{chrysanthou:cgi:98} and
Blais and Poulin~\cite{blais98a}. These data structures are closely
related to the {\em lumigraph}~\cite{Gortler:1996:L,buehler2001} or
{\em light field}~\cite{Levoy:1996:LFR}. An interesting discrete
hierarchical visibility algorithm for two-dimensional scenes was
developed by Hinkenjann and M\"uller~\cite{EVL-1996-10}. One of the
biggest problems of the discrete solution space data structures is the
memory consumption required to achieve reasonable
accuracy. Prospectively, the scene complexity
measures~\cite{Cazals:3204:1997} provide a useful estimate of the
required sampling density and the size of the solution space data
structure.


\subsection{Other applications}

Certain from-point visibility problems determining visibility over a
period of time can be transformed to a static from-region visibility
problem. Such a transformation is particularly useful for antialiasing
purposes~\cite{grant85a}. The from-region visibility can also be used
in the context of simulating sound
propagation~\cite{Funkhouser98}. The sound propagation algorithms
typically require lower resolution than the algorithms simulating the
propagation of light, but they need to account for attenuation,
reflection, and time delays.

\section{Algorithm Description}

This section first describes the setup of the global visibility
sampling algorithm. In particular we describe the view cell
representation and the novel concept of from-object based
visibility. Then we outline the different visibility sampling
strategies.

\subsection{View Space Partitioning}

Before the visibility computation itself, we subdivide the space of
all possible viewpoints and viewing directions into view cells. A good
partition of the scene into view cells is an essential part of every
visibility system. If they are chosen too large, the PVS of a view
cell is too large for efficient culling. If they are chosen too small
or without consideration, then neighbouring view cells contain
redundant PVS information, hence inflating the PVS computation and
storage costs for the scene. In the left image of
figure~\ref{fig:vienna_viewcells} we see view cells of the Vienna
model, generated by triangulation of the streets. In the closeup
(right image) we can see that each triangle is extruded to a given
height to form a view cell prism.

\begin{figure}[htb]
  \centerline{
    \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_01}
     \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_07}
    }

  \caption{(left) Vienna view cells. (right) The view cells are prisms with a triangular base.}
  \label{fig:vienna_viewcells}
\end{figure}

In order to efficiently use view cells with our sampling method, we
require a view cell representation which is

\begin{itemize}
\item optimized for view cell--ray intersections,
\item flexible, i.e., able to represent arbitrary geometry, and
\item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
\end{itemize}

We meet these requirements by employing spatial subdivisions (i.e.,
KD trees and BSP trees) to store the view cells. The initial view
cells are associated with the leaves. We chose BSP trees as the view
cell representation because they are very flexible: view cells forming
arbitrary closed meshes can be closely matched. We are therefore able
to find a view cell with only a few view ray-plane intersections.
Furthermore, the hierarchical structure can be exploited as a
hierarchy of view cells: interior nodes form larger view cells
containing their children, and if necessary, a leaf can be easily
subdivided into smaller view cells.
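
As an illustration, the following sketch shows how the view cell
containing a point can be located in such a BSP tree with a single
plane test per level. This is a minimal sketch under assumed data
structures (\texttt{BspNode}, \texttt{Plane}, \texttt{ViewCell}); it
is not the actual implementation of the framework.

\begin{verbatim}
#include <array>

struct ViewCell;                         // leaf payload (defined elsewhere)

struct Plane { std::array<float,3> n; float d; };  // n.p + d = 0

struct BspNode {
    Plane     plane;                     // splitting plane (interior nodes)
    BspNode  *front = nullptr;           // subtree in the positive half space
    BspNode  *back  = nullptr;           // subtree in the negative half space
    ViewCell *cell  = nullptr;           // non-null only in leaves
    bool isLeaf() const { return !front && !back; }
};

// Descends the tree with one plane test per level. Interior nodes can
// also be interpreted as larger view cells containing their children.
ViewCell *locateViewCell(BspNode *node, const std::array<float,3> &p) {
    while (node && !node->isLeaf()) {
        const Plane &pl = node->plane;
        float s = pl.n[0]*p[0] + pl.n[1]*p[1] + pl.n[2]*p[2] + pl.d;
        node = (s >= 0.0f) ? node->front : node->back;
    }
    return node ? node->cell : nullptr;  // null: leaf outside all view cells
}
\end{verbatim}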

Currently we consider three different approaches to generate the
initial view cell BSP tree. The third method is not restricted to BSP
trees, but BSP trees are preferred because of their greater
flexibility.


\begin{itemize}

\item A number of input view cells is given in advance, and we insert
them into a BSP tree (i.e., we change their representation for fast
BSP tree lookups). Any closed mesh can serve as an input view cell;
the only requirement is that no two view cells overlap. The view cell
polygons are extracted, storing with each polygon a pointer to its
parent view cell. The BSP tree is built from these polygons using
global optimizations such as tree balancing or minimizing the number
of splits. The polygons guide the split process as they are filtered
down the tree. The subdivision terminates when there is only one
polygon left, which is coincident with the last split plane. Then two
leaves are created and the view cell pointer (stored with the polygon)
is inserted into the leaf representing the inside of the view cell.
One input view cell can be associated with many leaves in case the
view cell was split during the traversal. On the other hand, each leaf
corresponds to at most one view cell.

However, sometimes a good set of view cells is not available, or the
scene changes frequently and the designer does not want to create new
view cells on each change. In such cases one of the following two
methods, which generate view cells automatically, should be chosen.


\item We apply a BSP tree subdivision to the scene geometry. Whenever
the subdivision terminates in a leaf, a view cell is associated with
the leaf node. This simple approach is justified because it places
the view cell borders along some discontinuities in the visibility
function.

\begin{figure}[htb]
  \centerline{
    \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{figs/viewcell_part}
      }
  \caption{A good view cell partition minimizes the number of sample rays
  piercing more than one view cell. During subdivision, this can be achieved
  by aligning the split plane with one of the long sides of occluder $O$.}
  \label{fig:viewcell_part}
\end{figure}

\item The view cell generation can be guided by the sampling
process. We start with a single initial view cell representing the
whole space. If a given threshold is reached during the preprocessing
(e.g., the view cell is pierced by too many rays, resulting in a large
PVS), the view cell is subdivided into smaller cells using the
criteria discussed below.

\begin{table}
\centering \footnotesize
\begin{tabular}{|l|c|c|c|}
  \hline\hline
  Input & Vienna view cells selection & Vienna view cells full & Vienna simple scene\\\hline\hline
  method & insert input view cells & insert input view cells & generate from scene polygons\\\hline\hline
  \#view cells & 105 & 16447 & 4867\\\hline\hline
  \#input polygons & 525 & 82235 & 16151\\\hline\hline
  BSP tree generation time & 0.016s & 10.328s & 0.61s\\\hline\hline
  %%view cell insertion time & 0.016s & 7.984s \\\hline\hline
  \#nodes & 1137 & 597933 & 9733\\\hline\hline
  \#interior nodes & 568 & 298966 & 4866\\\hline\hline
  \#leaf nodes  & 569 & 298967 & 4867\\\hline\hline
  \#splits & 25 & 188936 & 2010\\\hline\hline
  max tree depth & 13 & 27 & 17\\\hline\hline
  avg tree depth & 9.747 & 21.11 & 12.48\\\hline\hline
 \end{tabular}
 \caption{Statistics for the view cell BSP tree. In the first column we insert a selection of the given view cells from the Vienna scene
 into a BSP tree. In the second column we do the same for the full Vienna view cell set. In the third column we generate
 new view cells using a BSP tree subdivision of the Vienna simple scene. The termination criterion was to stop subdivision when there were
 3 or fewer polygons per node.}\label{tab:viewcell_bsp}
\end{table}


In order to evaluate the best split plane, we first have to define the
characteristics of a good view cell partition: the view cells should
be quite large, while their PVS stays rather small. The PVSs of any
two view cells should be as distinct as possible; view cells with
similar PVSs could instead be merged into a larger one. E.g., for a
building, the perfect view cells are usually the individual rooms
connected by portals.

Hence we can define some useful criteria for the split: 1) the number
of rays should be roughly equal among the new view cells. 2) The split
plane should be chosen so that the ray sets are disjoint, i.e., the
number of rays contributing to more than one cell should be
minimized. 3) For BSP trees, the split plane should be aligned with
some scene geometry which is large enough to contribute significant
occlusion power. This criterion can be naturally combined with the
second one. As termination criteria we can use a minimum PVS size or
piercing ray count, or a maximal tree depth. An illustration of a good
and a bad choice of a split plane is given in
figure~\ref{fig:viewcell_part}; a cost function combining these
criteria is sketched after this list.
\end{itemize}
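
To make the three criteria concrete, the following sketch scores a
candidate split plane against the cast ray samples. The structures,
the weights, and the \texttt{occluderArea} term are illustrative
assumptions, not the cost function actually used by the preprocessor.

\begin{verbatim}
#include <cmath>
#include <vector>

struct Vec3   { float x, y, z; };
struct Plane  { Vec3 n; float d; };                // n.p + d = 0
struct RaySeg { Vec3 a, b; };                      // a cast sample ray segment

static float side(const Plane &pl, const Vec3 &p) {
    return pl.n.x*p.x + pl.n.y*p.y + pl.n.z*p.z + pl.d;
}

// Returns a cost for the candidate plane; lower is better.
// occluderArea models criterion 3: the amount of large occluding
// geometry coincident with the plane (normalized to [0,1]).
float splitCost(const Plane &pl, const std::vector<RaySeg> &rays,
                float occluderArea) {
    if (rays.empty()) return 0.0f;
    int front = 0, back = 0, crossing = 0;
    for (const RaySeg &r : rays) {
        float sa = side(pl, r.a), sb = side(pl, r.b);
        if (sa >= 0 && sb >= 0)    ++front;        // entirely in front
        else if (sa < 0 && sb < 0) ++back;         // entirely behind
        else                       ++crossing;     // contributes to both cells
    }
    float n = float(rays.size());
    float balance = std::fabs(front - back) / n;   // criterion 1: equal counts
    float shared  = crossing / n;                  // criterion 2: disjoint sets
    return 0.4f * balance + 0.4f * shared - 0.2f * occluderArea;
}
\end{verbatim}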


Some statistics about the first two methods (i.e., the insertion of
the view cells into the BSP tree, and the automatic generation from
the scene polygons using a BSP tree subdivision) are given in
table~\ref{tab:viewcell_bsp}. We used a selection of the given view
cells of the Vienna scene for the first column, the full set for the
second column, and the Vienna simple scene geometry for the automatic
view cell generation. The measurements were conducted on a PC with a
3.4GHz P4 CPU.

% In the future we aim to extend the view cell construction by using
% feedback from the PVS computation: the view cells which contain many
% visibility changes will be subdivided further and neighboring view
% cells with similar PVSs will be merged. We want to gain a more precise
% information about visibility by selectively storing rays with the
% view cells and computing visibility statistics for subsets of rays
% which intersect subregions of the given view cell.

\subsection{From-Object Based Visibility}

Our framework is based on the idea of sampling visibility by casting
rays through the scene and collecting their contributions. A
visibility sample is computed by casting a ray from an object towards
the view cells and computing the nearest intersection with the scene
objects. All view cells pierced by the ray segment can see the object
and thus the object can be added to their PVS. If the ray is
terminated at another scene object, the PVS of the pierced view cells
can also be extended by this terminating object. Thus a single ray can
make a number of contributions to the progressively computed PVSs. A
ray sample piercing $n$ view cells which is bounded by two distinct
objects contributes by at most $2n$ entries to the current PVSs. Apart
from this performance benefit there is also a benefit in terms of the
sampling density: assuming that view cells are much larger than
objects (which is typically the case), starting the sampling
deterministically from the objects increases the probability of small
objects being captured in the PVS.
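
The per-sample PVS update can be summarized by the following sketch.
The type and function names are hypothetical; the returned count of
new PVS entries is the quantity the termination test of the sampling
step measures.

\begin{verbatim}
#include <set>
#include <vector>

struct Object   { int id; };
struct ViewCell { std::set<const Object*> pvs; };  // set avoids duplicates

// Adds the contributions of one ray sample: the source object and, if
// present, the terminating object are added to the PVS of every view
// cell pierced by the ray segment. A ray bounded by two distinct
// objects and piercing n view cells adds at most 2n entries.
int addSampleContribution(const Object *source,
                          const Object *terminator,  // null: ray leaves scene
                          const std::vector<ViewCell*> &pierced) {
    int newEntries = 0;
    for (ViewCell *vc : pierced) {
        newEntries += vc->pvs.insert(source).second;  // cell sees the source
        if (terminator)
            newEntries += vc->pvs.insert(terminator).second;
    }
    return newEntries;   // used to measure the average sample contribution
}
\end{verbatim}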

At this phase of the computation we not only start the samples from
the objects, but we also store the PVS information centered at the
objects. Instead of storing a PVS consisting of objects visible from
view cells, every object maintains a PVS consisting of potentially
visible view cells. While these representations contain exactly the
same information, the object-centered PVS is, as we shall see later,
better suited for the importance sampling phase as well as the
visibility verification phase.


\subsection{Naive Randomized Sampling}

The naive global visibility sampling works as follows: in every pass
the algorithm visits the scene objects sequentially. For every scene
object we randomly choose a point on its surface. Then a ray is cast
from the selected point in a randomly chosen direction (see
Figure~\ref{fig:sampling}). We use a uniform distribution of the ray
directions with respect to the half space given by the surface
normal. Using this strategy the samples are deterministically placed
at every object, with a randomization of the location on the object
surface. The uniformly distributed direction is a simple and fast
strategy to gain initial visibility information; a sketch of the
direction sampling is given below.
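
The following sketch draws a direction uniformly distributed over the
hemisphere around the surface normal. The local-frame construction is
a standard technique; the code is an illustrative assumption, not the
framework's actual sampler.

\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Uniform direction over the hemisphere around the unit normal n:
// cos(theta) is uniform in [0,1), phi uniform in [0,2*pi).
Vec3 uniformHemisphereDir(const Vec3 &n, std::mt19937 &rng) {
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    const float kPi = 3.14159265f;
    float z   = u01(rng);                          // height above the plane
    float r   = std::sqrt(std::max(0.0f, 1.0f - z*z));
    float phi = 2.0f * kPi * u01(rng);
    // Build an orthonormal frame (t, b, n) around the normal.
    Vec3 a = std::fabs(n.x) > 0.9f ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 b = cross(n, a);
    float bl = std::sqrt(b.x*b.x + b.y*b.y + b.z*b.z);
    b = { b.x/bl, b.y/bl, b.z/bl };
    Vec3 t = cross(b, n);
    // Map the local direction (r cos phi, r sin phi, z) to world space.
    float lx = r * std::cos(phi), ly = r * std::sin(phi);
    return { t.x*lx + b.x*ly + n.x*z,
             t.y*lx + b.y*ly + n.y*z,
             t.z*lx + b.z*ly + n.z*z };
}
\end{verbatim}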
448
449\begin{figure}%[htb]
450  \centerline{
451    \includegraphics[width=0.4\textwidth, draft=\DRAFTFIGS]{figs/sampling}
452  }
453 
454%\label{tab:online_culling_example}
455  \caption{Three objects and a set of view cells corresponding to leaves
456    of an axis aligned BSP tree. The figure depicts several random
457    samples cast from a selected object (shown in red). Note that most
458    samples contribute to more view cells.  }
459  \label{fig:sampling}
460\end{figure}

The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more
objects. Additionally, every object is sampled many times depending on
the number of passes in which this sampling strategy is applied. This
increases the chance of even a small object being captured in the PVS
of the view cells from which it is visible.

Each ray sample can contribute by associating a number of view cells
with the object from which the sample was cast. If the ray does not
leave the scene it also contributes by associating the pierced view
cells with the terminating object. Thus as the ray samples are cast we
can measure the average contribution of a certain number of the most
recent samples. If this contribution falls below a predefined
constant we move on to the next sampling strategy, which aims to
discover more complicated visibility relations.
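
This termination test can be implemented as a running mean over a
fixed window of recent sample contributions, as in the sketch below;
the window size and threshold are illustrative assumptions.

\begin{verbatim}
#include <cstddef>
#include <deque>
#include <numeric>

class ContributionMonitor {
    std::deque<int> window_;             // contributions of recent samples
    std::size_t     maxSize_;
public:
    explicit ContributionMonitor(std::size_t windowSize = 1000)
        : maxSize_(windowSize) {}

    // Record, e.g., the value returned by addSampleContribution().
    void record(int newPvsEntries) {
        window_.push_back(newPvsEntries);
        if (window_.size() > maxSize_) window_.pop_front();
    }

    // True once the window is full and the mean contribution is small;
    // the caller then moves on to the next sampling strategy.
    bool shouldSwitchStrategy(double threshold = 0.1) const {
        if (window_.size() < maxSize_) return false;
        double sum = std::accumulate(window_.begin(), window_.end(), 0.0);
        return sum / double(window_.size()) < threshold;
    }
};
\end{verbatim}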


\subsection{Accounting for View Cell Distribution}

The first modification of the basic algorithm accounts for the
irregular distribution of the view cells. Such a case is common for
example in urban scenes, where the view cells are mostly distributed
in a horizontal direction and more view cells are placed at denser
parts of the city. The modification involves replacing the uniformly
distributed ray direction by directions distributed according to the
local view cell directional density. This means placing more samples
in directions where more view cells are located: we select a random
view cell which lies in the half space given by the surface normal at
the chosen point. We pick a random point inside the view cell and cast
a ray towards this point.
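
A sketch of this direction selection follows, with the view cell
approximated by its bounding box. The helper \texttt{randomPointIn},
the rejection loop, and the retry count are illustrative assumptions;
a density-proportional cell selection could replace the uniform pick.

\begin{verbatim}
#include <cstddef>
#include <random>
#include <vector>

struct Vec3     { float x, y, z; };
struct ViewCell { Vec3 min, max; };      // bounding box stand-in for the cell

static Vec3 randomPointIn(const ViewCell &vc, std::mt19937 &rng) {
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    return { vc.min.x + u01(rng) * (vc.max.x - vc.min.x),
             vc.min.y + u01(rng) * (vc.max.y - vc.min.y),
             vc.min.z + u01(rng) * (vc.max.z - vc.min.z) };
}

// Picks a view cell, aims at a random point inside it, and accepts the
// direction if it lies in the positive half space of the normal n at
// the surface point p; after a few failed retries the caller falls
// back to uniform hemisphere sampling.
bool viewCellDirection(const Vec3 &p, const Vec3 &n,
                       const std::vector<ViewCell> &cells,
                       std::mt19937 &rng, Vec3 &outDir) {
    if (cells.empty()) return false;
    std::uniform_int_distribution<std::size_t> pick(0, cells.size() - 1);
    for (int attempt = 0; attempt < 16; ++attempt) {
        Vec3 q = randomPointIn(cells[pick(rng)], rng);
        Vec3 d { q.x - p.x, q.y - p.y, q.z - p.z };
        if (d.x*n.x + d.y*n.y + d.z*n.z > 0.0f) {   // front-facing direction?
            outDir = d;                             // caller normalizes
            return true;
        }
    }
    return false;
}
\end{verbatim}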


\subsection{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving viewpoint. In polygonal scenes the
events are induced by event surfaces defined by three distinct scene
edges. Depending on the edge configuration we distinguish between
vertex-edge (VE) events and triple-edge (EEE) events. The VE surfaces
are planar, whereas the EEE surfaces are in general quadric surfaces.

To account for these events we explicitly place samples passing by
the object edges which are directed at edges and/or vertices of other
objects. In this way we perform stochastic sampling at boundaries of
the visibility complex~\cite{Durand:1996:VCN}.

The first strategy starts similarly to the above described sampling
methods: we randomly select an object and a point on its surface. Then
we randomly pick an object from its PVS. If we have mesh connectivity
information, we select a random silhouette edge of this object and
cast a sample which is tangent to that object at the selected edge.

The second strategy works as follows: we randomly pick two objects
which are likely to see each other. Then we determine a ray which is
tangent to both objects. For simple meshes such tangent rays can be
determined geometrically; for more complicated ones the determination
is again based on random sampling. The selection of the two objects
works as follows: first we randomly select the first object and a
random non-empty view cell for which we know that it can see the
object. Then we randomly select an object associated with that view
cell as the second object. A sketch of this selection is given below.
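
The object-pair selection maps directly onto the object-centered PVS
introduced earlier, as the following sketch illustrates; the container
layout and names are assumptions for illustration.

\begin{verbatim}
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

struct ViewCell;
struct Object   { std::vector<ViewCell*> pvs; }; // cells seeing this object
struct ViewCell { std::vector<Object*>   pvs; }; // objects seen from this cell

template <class T>
T *pickRandom(const std::vector<T*> &v, std::mt19937 &rng) {
    // Assumes v is non-empty.
    std::uniform_int_distribution<std::size_t> d(0, v.size() - 1);
    return v[d(rng)];
}

// Picks object A, then a view cell known to see A (via the
// object-centered PVS), then a second object seen from that cell;
// a ray tangent to both objects is constructed afterwards.
std::pair<Object*, Object*> pickObjectPair(const std::vector<Object*> &objects,
                                           std::mt19937 &rng) {
    Object *a = pickRandom(objects, rng);
    if (a->pvs.empty()) return {a, nullptr};     // no cell sees A yet
    ViewCell *vc = pickRandom(a->pvs, rng);
    if (vc->pvs.empty()) return {a, nullptr};
    return {a, pickRandom(vc->pvs, rng)};
}
\end{verbatim}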


\section{Summary}

This chapter described the global visibility sampling algorithm which
forms the core of the visibility preprocessing framework. The global
visibility sampling computes aggressive visibility, i.e., it computes
a subset of the exact PVS for each view cell. The aggressive sampling
provides a fast progressive solution and thus it can be easily
integrated into the game development cycle. The sampling itself
involves several strategies which aim to progressively discover more
visibility relationships in the scene.

The methods presented in this chapter give a good initial estimate of
visibility in the scene, which can be verified by the mutual
visibility algorithms described in the next chapter.