\chapter{Global Visibility Sampling Tool}


The proposed visibility preprocessing framework consists of two major
steps.

\begin{itemize}
\item The first step is an aggressive visibility sampling which gives
an initial estimate of global visibility in the scene. The sampling
itself involves several strategies which will be described below.
The important property of the aggressive sampling step is that it
provides a fast progressive solution to global visibility and thus it
can be easily integrated into the game development cycle. The
aggressive sampling terminates when the average contribution of new
ray samples falls below a predefined threshold.

\item The second step is mutual visibility verification. This step
turns the previous aggressive visibility solution into either an
exact, a conservative, or an error-bound aggressive solution. The
choice of the particular verifier is left to the user in order to
select the best one for a particular scene, application context, and
time constraints. For example, in scenes like a forest an error-bound
aggressive visibility can be the best compromise between the resulting
size of the PVS (and framerate) and the visual quality. An exact or
conservative algorithm can however be chosen for urban scenes, where
the omission of even small objects can be more disturbing for the
user. The mutual visibility tool will be described in the next chapter.

\end{itemize}

In traditional visibility preprocessing the view space is
subdivided into view cells and for each view cell the set of visible
objects --- the potentially visible set (PVS) --- is computed. This framework
has been used for conservative, aggressive, and exact algorithms.

We propose a different strategy which has several advantages for
sampling-based aggressive visibility preprocessing. The strategy is
based on the following fundamental ideas:
\begin{itemize}
\item Compute progressive global visibility instead of sequential from-region visibility.
\item Exchange the roles of view cells and objects for some parts of the computation.
\end{itemize}

Both these points will be addressed in this chapter in more detail.

\section{Related work}
\label{VFR3D_RELATED_WORK}

Below we briefly discuss related work on visibility preprocessing
in several application areas. In particular we focus on computing
from-region visibility, which has been at the core of most previous
visibility preprocessing techniques.


\subsection{Aspect graph}

The first algorithms dealing with from-region visibility belong to the
area of computer vision. The {\em aspect
  graph}~\cite{Gigus90,Plantinga:1990:RTH, Sojka:1995:AGT} partitions
the view space into cells that group viewpoints from which the
projection of the scene is qualitatively equivalent. The aspect graph
is a graph describing the view of the scene (aspect) for each cell of
the partitioning. The major drawback of this approach is that for
polygonal scenes with $n$ polygons there can be $\Theta(n^9)$ cells in
the partitioning for unrestricted view space. A {\em scale space}
aspect graph~\cite{bb12595,bb12590} improves the robustness of the
method by merging similar features according to the given scale.


\subsection{Potentially visible sets}


In the computer graphics community Airey~\cite{Airey90} introduced
the concept of {\em potentially visible sets} (PVS).  Airey assumes
the existence of a natural subdivision of the environment into
cells. For models of building interiors these cells roughly correspond
to rooms and corridors.  For each cell the PVS is formed by cells
visible from any point of that cell.  Airey uses ray shooting to
approximate visibility between cells of the subdivision and so the
computed PVS is not conservative.

This concept was further elaborated by Teller et
al.~\cite{Teller92phd,Teller:1991:VPI} to establish a conservative
PVS.  The PVS is constructed by testing the existence of a stabbing
line through a sequence of polygonal portals between cells. Teller
proposed an exact solution to this problem using \plucker
coordinates~\cite{Teller:1992:CAA} and a simpler and more robust
conservative solution~\cite{Teller92phd}.  The portal based methods
are well suited to static densely occluded environments with a
particular structure.  For less structured models they can face a
combinatorial explosion of complexity~\cite{Teller92phd}. Yagel and
Ray~\cite{Yagel95a} present an algorithm that uses a regular spatial
subdivision. Their approach is not sensitive to the structure of the
model in terms of complexity, but its efficiency is limited by the
discrete representation of the scene.

Plantinga proposed a PVS algorithm based on a conservative view space
partitioning by evaluating visual
events~\cite{Plantinga:1993:CVP}. The construction of view space
partitionings was further studied by Chrysanthou et
al.~\cite{Chrysanthou:1998:VP}, Cohen-Or et al.~\cite{cohen-egc-98},
and Sadagic~\cite{Sadagic}.  Sudarsky and
Gotsman~\cite{Sudarsky:1996:OVA} proposed an output-sensitive
visibility algorithm for dynamic scenes.  Cohen-Or et
al.~\cite{COZ-gi98} developed a conservative algorithm determining
visibility of an $\epsilon$-neighborhood of a given viewpoint that was
used for network-based walkthroughs.

Conservative algorithms for computing PVS developed by Durand et
al.~\cite{EVL-2000-60} and Schaufler et al.~\cite{EVL-2000-59} make
use of several simplifying assumptions to avoid the use of 4D data
structures.  Wang et al.~\cite{Wang98} proposed an algorithm that
precomputes visibility within beams originating from a restricted
viewpoint region. The approach is very similar to the 5D subdivision
for ray tracing~\cite{Simiakakis:1994:FAS} and so it exhibits similar
problems, namely excessive memory and preprocessing complexities.
Specialized algorithms for computing PVS in \m25d scenes were proposed
by Wonka et al.~\cite{wonka00}, Koltun et al.~\cite{koltun01}, and
Bittner et al.~\cite{bittner:2001:PG}.

The exact mutual visibility method presented later in the report is
based on a method exploiting \plucker coordinates of
lines~\cite{bittner02phd,nirenstein:02:egwr,haumont2005}. This
algorithm uses \plucker coordinates to compute visibility in shafts
defined by each polygon in the scene.


\subsection{Rendering of shadows}


The from-region visibility problems include the computation of soft
shadows due to an area light source. Continuous algorithms for
real-time soft shadow generation were studied by Chin and
Feiner~\cite{Chin:1992:FOP}, Loscos and
Drettakis~\cite{Loscos:1997:IHS}, Chrysanthou~\cite{Chrysantho1996a},
and Chrysanthou and
Slater~\cite{Chrysanthou:1997:IUS}. Discrete solutions have been
proposed by Nishita~\cite{Nishita85}, Brotman and
Badler~\cite{Brotman:1984:GSS}, and Soler and Sillion~\cite{SS98}. An
exact algorithm computing the antipenumbra of an area light source was
developed by Teller~\cite{Teller:1992:CAA}.


\subsection{Discontinuity meshing}


Discontinuity meshing is used in the context of the radiosity global
illumination algorithm or for computing soft shadows due to area light
sources.  The first approximate discontinuity meshing algorithms were
studied by Campbell~\cite{Campbell:1990:AMG, Campbell91},
Lischinski~\cite{lischinski92a}, and Heckbert~\cite{Heckbert92discon}.
More elaborate methods were developed by
Drettakis~\cite{Drettakis94-SSRII, Drettakis94-FSAAL}, and Stewart and
Ghali~\cite{Stewart93-OSACS, Stewart:1994:FCSb}. These methods are
capable of creating a complete discontinuity mesh that encodes all
visual events involving the light source.

Classical radiosity is based on an evaluation of form factors
between two patches~\cite{Schroder:1993:FFB}. The visibility
computation is a crucial step in the form factor
evaluation~\cite{Teller:1993:GVA,Haines94,Teller:1994:POL,
Nechvile:1996:FFE,Teichmann:WV}. Similar visibility computations take
place in the scope of hierarchical radiosity
algorithms~\cite{Soler:1996:AEB, Drettakis:1997:IUG, Daubert:1997:HLS}.



\subsection{Global visibility}

The aim of {\em global visibility} computations is to capture and
describe visibility in the whole scene~\cite{Durand:1996:VCN}. The
global visibility algorithms are typically based on some form of {\em
line space subdivision} that partitions lines or rays into equivalence
classes according to their visibility classification. Each class
corresponds to a continuous set of rays with a common visibility
classification. The techniques differ mainly in how the line
space subdivision is computed and maintained. A practical application
of most of the proposed global visibility structures for 3D scenes is
still an open problem.  Prospectively, these techniques provide an
elegant method for ray shooting acceleration --- the ray shooting
problem can be reduced to a point location in the line space
subdivision.


Pocchiola and Vegter introduced the visibility complex~\cite{pv-vc-93}
that describes global visibility in 2D scenes. The visibility complex
has been applied to solve various 2D visibility
problems~\cite{r-tsvcp-95,r-wvcav-97, r-dvpsv-97,Orti96-UVCRC}.  The
approach was generalized to 3D by Durand et
al.~\cite{Durand:1996:VCN}. Nevertheless, no implementation of the 3D
visibility complex is currently known. Durand et
al.~\cite{Durand:1997:VSP} introduced the {\em visibility skeleton},
a graph describing a skeleton of the 3D visibility
complex. The visibility skeleton was verified experimentally and the
results indicate that the observed complexity is much lower in
practice than the $O(n^4\log n)$ worst case. Pu~\cite{Pu98-DSGIV}
developed a method similar to the one presented in this chapter. He
uses a BSP tree in \plucker coordinates to represent a global
visibility map for a given set of polygons. The computation is
performed considering all rays piercing the scene and so the method
exhibits unacceptable memory complexity even for scenes of moderate
size. Recently, Duguet and Drettakis~\cite{duguet:02:sig} developed a
robust variant of the visibility skeleton algorithm that uses robust
epsilon-visibility predicates.

Discrete methods aiming to describe visibility in a 4D data structure
were presented by Chrysanthou et al.~\cite{chrysanthou:cgi:98} and
Blais and Poulin~\cite{blais98a}.  These data structures are closely
related to the {\em lumigraph}~\cite{Gortler:1996:L,buehler2001} or
{\em light field}~\cite{Levoy:1996:LFR}. An interesting discrete
hierarchical visibility algorithm for two-dimensional scenes was
developed by Hinkenjann and M\"uller~\cite{EVL-1996-10}. One of the
biggest problems of the discrete solution space data structures is
the memory consumption required to achieve a reasonable
accuracy. Prospectively, the scene complexity
measures~\cite{Cazals:3204:1997} provide a useful estimate of the
required sampling density and the size of the solution space data
structure.


\subsection{Other applications}

Certain from-point visibility problems determining visibility over a
period of time can be transformed to a static from-region visibility
problem. Such a transformation is particularly useful for antialiasing
purposes~\cite{grant85a}. The from-region visibility can also be used
in the context of simulation of sound
propagation~\cite{Funkhouser98}. The sound propagation algorithms
typically require lower resolution than the algorithms simulating the
propagation of light, but they need to account for attenuation,
reflection, and time delays.

\section{Algorithm Description}

This section first describes the setup of the global visibility
sampling algorithm. In particular we describe the view cell
representation and the novel concept of from-object based
visibility. Then we outline the different visibility sampling
strategies.

\subsection{View Space Partitioning}


\begin{figure}[htb]
  \centerline{
    \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_01}
    \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_07}
    }

  \caption{(left) Vienna view cells. (right) The view cells are prisms with a triangular base. }
  \label{fig:vienna_viewcells}
\end{figure}


Before the visibility computation itself, we subdivide the space of
all possible viewpoints and viewing directions into view cells. A good
partition of the scene into view cells is an essential part of every
visibility system. If they are chosen too large, the PVS (Potentially
Visible Set) of a view cell is too large for efficient culling. If
they are chosen too small or without consideration, then neighbouring
view cells contain redundant PVS information, hence inflating the PVS
computation and storage costs for the scene. In the left image of
figure~\ref{fig:vienna_viewcells} we see view cells of the Vienna
model, generated by triangulation of the streets.  In the closeup
(right image) we can see that each triangle is extruded to a given
height to form a view cell prism.

In order to efficiently use view cells with our sampling method, we
require a view cell representation which is

\begin{itemize}
\item optimized for view cell--ray intersection.
\item flexible, i.e., able to represent arbitrary geometry.
\item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
\end{itemize}

We meet these requirements by employing spatial subdivisions (i.e.,
KD trees and BSP trees) to store the view cells. The initial view cells
are associated with the leaves. The reason why we chose BSP trees as the
view cell representation is that they are very flexible: view cells
forming arbitrary closed meshes can be closely matched. Therefore we are
able to locate a view cell with only a few ray-plane intersections.
Furthermore, the hierarchical structure can be exploited as a hierarchy
of view cells: interior nodes form larger view cells containing their
children, and, if necessary, a leaf can easily be subdivided into
smaller view cells.

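The following sketch illustrates the point location query on such a
tree; the class and member names are illustrative assumptions, not the
actual implementation.

\begin{verbatim}
// A minimal sketch of point location in the view cell BSP tree.
struct Plane  { float a, b, c, d; };     // plane: ax + by + cz + d = 0
struct Point  { float x, y, z; };
struct ViewCell;                         // opaque view cell type

struct BspNode {
  Plane     plane;                       // valid for interior nodes
  BspNode  *front, *back;                // NULL for leaves
  ViewCell *viewCell;                    // leaf only; NULL if outside
};

inline float Side(const Plane &pl, const Point &p)
{ return pl.a * p.x + pl.b * p.y + pl.c * p.z + pl.d; }

// Descend from the root; each visited node costs one plane evaluation.
ViewCell *LocateViewCell(BspNode *node, const Point &p)
{
  while (node->front)                    // interior node
    node = (Side(node->plane, p) >= 0.0f) ? node->front : node->back;
  return node->viewCell;                 // may be NULL (outside all cells)
}
\end{verbatim}
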
Currently we consider three different approaches to generate the initial
view cell BSP tree. The third method is not restricted to BSP trees, but
BSP trees are preferred because of their greater flexibility.


\begin{table}
\centering \footnotesize
\begin{tabular}{|l|c|c|}
  \hline\hline
  View cells & Vienna selection & Vienna full \\\hline\hline
  \#view cells & 105 & 16447 \\\hline\hline
  \#input polygons & 525 & 82235 \\\hline\hline
  BSP tree generation time & 0.016s & 10.328s \\\hline\hline
  view cell insertion time & 0.016s & 7.984s \\\hline\hline
  \#nodes & 1137 & 597933 \\\hline\hline
  \#interior nodes & 568 & 298966\\\hline\hline
  \#leaf nodes  & 569 & 298967\\\hline\hline
  \#splits & 25 & 188936\\\hline\hline
  max tree depth & 13 & 27\\\hline\hline
  avg tree depth & 9.747 & 21.11\\\hline\hline

 \end{tabular}
 \caption{Statistics for the view cell BSP tree on the full Vienna view cell set and on a selection of the Vienna view cells.}\label{tab:viewcell_bsp}
\end{table}


\begin{itemize}

\item We use a number of input view cells given in advance. Any closed
mesh can be used as an input view cell; the only requirement is that no
two view cells overlap.
First the view cell polygons are extracted, and the BSP tree is built
from these polygons using global optimizations like tree balancing or
minimizing the number of splits. Then the view cells are inserted into
the tree one after the other to find the leaves in which they are
contained. The polygons of the view cell are filtered down the tree,
guiding the insertion process. Once we reach a leaf and there are no
more polygons left, we terminate the tree subdivision. If we are on
the inside of the last split plane (i.e., the leaf represents the
inside of the view cell), we associate the leaf with the view cell
(i.e., add a pointer to the view cell). One input view cell can
be associated with many leaves, whereas each leaf holds at most one view cell.
Some statistics about using this method on the Vienna view cell set are
given in table~\ref{tab:viewcell_bsp}.
However, sometimes a good set of view cells is not available, or the
scene is changed frequently and the designer does not want to create new
view cells on each change. In such cases one of the following two
methods, which generate view cells automatically, should rather be chosen.


\item We apply a BSP tree subdivision to the scene geometry. Whenever
the subdivision terminates in a leaf, a view cell is associated with the
leaf node. This simple approach is justified because it places the view
cell borders along some discontinuities of the visibility function.

\item The view cell generation can be guided by the sampling process. We start
with a single initial view cell representing the whole space. If a given threshold
is reached during the preprocessing (e.g., the view cell is pierced by too many rays,
resulting in a large PVS), the view cell is subdivided into smaller cells using some criteria.

In order to evaluate the best split plane, we first have to define the characteristics of a
good view cell partition: the view cells should be quite large, while their PVS stays rather small.
The PVSs of any two view cells should be as distinct as possible; otherwise the
cells could be merged into a larger view cell.
E.g., for a building, the perfect view cells are usually the single rooms connected by portals.

Hence we can define some useful criteria for the split: 1) the number of
rays should be roughly equal among the new view cells; 2) the split
plane should be chosen so that the rays are maximally distinct, i.e.,
the number of rays contributing to more than one cell should be
minimized, which also makes the PVSs distinct; 3) for BSP trees, the
split plane should be aligned with some scene geometry which is large
enough to contribute a lot of occlusion power. The third criterion can
be naturally combined with the second one.
As termination criterion we can choose the minimum PVS / piercing ray
size or the maximal tree depth. A minimal sketch of such a split
evaluation is given after this list.
\end{itemize}
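
The following sketch shows how a candidate split plane could be scored
against criteria 1) and 2); the types, helper routines, and cost
weights are illustrative assumptions, not the actual implementation.

\begin{verbatim}
#include <cmath>
#include <vector>

struct Plane { float a, b, c, d; };        // plane: ax + by + cz + d = 0
struct Ray   { float orig[3], term[3]; };  // ray segment endpoints

static float Side(const Plane &pl, const float p[3])
{ return pl.a * p[0] + pl.b * p[1] + pl.c * p[2] + pl.d; }

// Lower cost is better: balanced ray counts (criterion 1) and few rays
// contributing to both new cells (criterion 2).
float SplitCost(const Plane &plane, const std::vector<Ray> &rays)
{
  int front = 0, back = 0, straddling = 0;
  for (size_t i = 0; i < rays.size(); ++i) {
    const float so = Side(plane, rays[i].orig);
    const float st = Side(plane, rays[i].term);
    if (so >= 0.0f && st >= 0.0f)    ++front;
    else if (so < 0.0f && st < 0.0f) ++back;
    else                             ++straddling;  // pierces both cells
  }
  const float n = (float)rays.size();
  const float balance  = std::fabs((float)(front - back)) / n;
  const float straddle = (float)straddling / n;
  return 0.5f * balance + 0.5f * straddle;   // weights are assumptions
}
\end{verbatim}

The plane with the lowest cost among the candidates (e.g., planes of
large occluding polygons, following criterion 3) would then be used for
the subdivision.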

In the future we aim to extend the view cell construction by using
feedback from the PVS computation: view cells which contain many
visibility changes will be subdivided further and neighboring view
cells with similar PVSs will be merged. We want to gain more precise
information about visibility by selectively storing rays with the
view cells and computing visibility statistics for subsets of rays
which intersect subregions of the given view cell.



\subsection{From-object based visibility}

Our framework is based on the idea of sampling visibility by casting
rays through the scene and collecting their contributions. A
visibility sample is computed by casting a ray from an object towards
the view cells and computing the nearest intersection with the scene
objects. All view cells pierced by the ray segment can see the object
and thus the object can be added to their PVSs. If the ray is terminated
at another scene object, the PVSs of the pierced view cells can also be
extended by this terminating object. Thus a single ray can make a
number of contributions to the progressively computed PVSs: a ray
sample piercing $n$ view cells which is bounded by two distinct objects
contributes at most $2n$ entries to the current PVSs. Apart from
this performance benefit there is also a benefit in terms of the
sampling density: assuming that the view cells are usually much larger
than the objects (which is typically the case), starting the sampling
deterministically from the objects increases the probability of small
objects being captured in the PVS.

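The following sketch outlines how a single from-object sample could
distribute its contributions; the scene interface and all helper names
are illustrative assumptions, not the actual framework API.

\begin{verbatim}
#include <cstddef>
#include <vector>

// One from-object visibility sample. A ray piercing n view cells that
// is bounded by two distinct objects adds at most 2n PVS entries.
// Scene, Object, ViewCell, Point, Vector and the helper routines are
// assumed placeholders.
void CastVisibilitySample(Object *source, const Point &origin,
                          const Vector &dir, Scene &scene)
{
  // Nearest intersection; terminator is NULL if the ray leaves the
  // scene, in which case `hit` is assumed to be the boundary exit point.
  Point   hit;
  Object *terminator = scene.CastRay(origin, dir, &hit);

  // Every view cell pierced by the segment sees both endpoint objects.
  std::vector<ViewCell *> pierced = scene.ViewCellsAlongSegment(origin, hit);
  for (std::size_t i = 0; i < pierced.size(); ++i) {
    pierced[i]->AddObjectToPVS(source);    // view-cell-centered PVS
    source->AddViewCellToPVS(pierced[i]);  // object-centered PVS
    if (terminator) {
      pierced[i]->AddObjectToPVS(terminator);
      terminator->AddViewCellToPVS(pierced[i]);
    }
  }
}
\end{verbatim}
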
At this phase of the computation we not only start the samples from
the objects, but we also store the PVS information centered at the
objects. Instead of storing a PVS consisting of objects visible from a
view cell, every object maintains a PVS consisting of potentially
visible view cells. While these representations contain exactly the
same information, as we shall see later the object-centered PVS is
better suited for the importance sampling phase as well as the
visibility verification phase.


\subsection{Naive Randomized Sampling}

The naive global visibility sampling works as follows: in every pass
the algorithm visits scene objects sequentially. For every scene
object we randomly choose a point on its surface. Then a ray is cast
from the selected point in a randomly chosen direction. We
use a uniform distribution of the ray directions with respect to the
halfspace given by the surface normal. Using this strategy the samples
are deterministically placed at every object, with a randomization of
the location on the object surface. The uniformly distributed
direction is a simple and fast strategy to gain initial visibility
information.

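A direction uniformly distributed over the halfspace of the surface
normal can be generated as in the following self-contained sketch;
mirroring a uniform spherical direction into the halfspace preserves
the uniform distribution over the hemisphere. The vector type and
random helper are assumptions.

\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <cstdlib>

struct Vector { float x, y, z; };

static float RandomUnit() { return rand() / (RAND_MAX + 1.0f); }
static float DotProd(const Vector &a, const Vector &b)
{ return a.x * b.x + a.y * b.y + a.z * b.z; }

// Uniformly distributed direction in the hemisphere above `normal`.
Vector UniformHemisphereDirection(const Vector &normal)
{
  const float z   = 1.0f - 2.0f * RandomUnit();         // uniform in [-1,1]
  const float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
  const float phi = 2.0f * 3.14159265f * RandomUnit();  // uniform azimuth
  Vector d = { r * std::cos(phi), r * std::sin(phi), z };

  // Mirror the direction into the halfspace of the surface normal.
  if (DotProd(d, normal) < 0.0f) { d.x = -d.x; d.y = -d.y; d.z = -d.z; }
  return d;
}
\end{verbatim}
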
\begin{figure}%[htb]
  \includegraphics[width=0.6\textwidth, draft=\DRAFTFIGS]{figs/sampling} \\
%\label{tab:online_culling_example}
  \caption{Three objects and a set of view cells corresponding to leaves
    of an axis-aligned BSP tree. The figure depicts several random
    samples cast from a selected object (shown in red). Note that most
    samples contribute to several view cells.  }
  \label{fig:online_culling_example}
\end{figure}

The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more
objects. Additionally, every object is sampled many times depending on
the number of passes in which this sampling strategy is applied. This
increases the chance of even a small object being captured in the PVS
of the view cells from which it is visible.

Each ray sample can contribute by associating a number of view cells
with the object from which the sample was cast. If the ray does not
leave the scene, it also contributes by associating the pierced view
cells with the terminating object. Thus, as the ray samples are cast, we
can measure the average contribution of a certain number of the most
recent samples.  If this contribution falls below a predefined
constant, we move on to the next sampling strategy, which aims to
discover more complicated visibility relations.

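This termination test can be implemented with a sliding window over the
most recent sample contributions, as in the following sketch; the
window size and threshold values are assumptions.

\begin{verbatim}
#include <cstddef>
#include <deque>

// Sliding-window average of recent sample contributions; once the
// average falls below the threshold we switch to the next strategy.
class ContributionMonitor {
public:
  ContributionMonitor(std::size_t window, float threshold)
    : mWindow(window), mThreshold(threshold), mSum(0) {}

  // contribution: number of new PVS entries added by the latest sample.
  bool ShouldSwitchStrategy(int contribution)
  {
    mRecent.push_back(contribution);
    mSum += contribution;
    if (mRecent.size() > mWindow) {      // keep only the newest samples
      mSum -= mRecent.front();
      mRecent.pop_front();
    }
    // Compare the window average against the predefined constant.
    return mRecent.size() == mWindow && mSum < mThreshold * mWindow;
  }

private:
  std::deque<int> mRecent;
  std::size_t     mWindow;
  float           mThreshold;
  long            mSum;
};
\end{verbatim}
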
\subsection{Accounting for View Cell Distribution}

The first modification to the basic algorithm accounts for the irregular
distribution of the view cells. Such a case is common for example in
urban scenes, where the view cells are mostly distributed in a
horizontal direction and more view cells are placed at denser parts of
the city. The modification involves replacing the uniformly
distributed ray direction by directions distributed according to the
local view cell directional density. This means placing more samples in
directions where more view cells are located: we select a random
view cell which lies in the halfspace given by the surface normal at the
chosen point. We pick a random point inside the view cell and cast a
ray towards this point.


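The following sketch illustrates this view-cell-driven direction
choice; the types and helper routines are illustrative assumptions,
not the actual implementation.

\begin{verbatim}
#include <vector>

// Pick a ray direction according to the local view cell density:
// aim at a random point inside a random view cell lying in the
// halfspace of the surface normal. ViewCell, Point, Vector, RandomInt,
// Normalize and DotProd are assumed helpers.
Vector ViewCellDrivenDirection(const Point &point, const Vector &normal,
                               const std::vector<ViewCell *> &viewCells)
{
  for (;;) {  // a real implementation would bound the number of retries
    // Denser regions are chosen more often simply because they
    // contain more view cells.
    ViewCell *vc = viewCells[RandomInt((int)viewCells.size())];
    const Point  target = vc->RandomInteriorPoint();
    const Vector dir    = Normalize(target - point);
    if (DotProd(dir, normal) > 0.0f)  // keep the visible halfspace only
      return dir;
  }
}
\end{verbatim}
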
\subsection{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving viewpoint. In polygonal scenes the
events are induced by event surfaces defined by three distinct scene
edges. Depending on the edge configuration we distinguish between
vertex-edge (VE) events and triple-edge (EEE) events. The VE surfaces
are planar, whereas the EEE surfaces are in general quadratic.

To account for these events we explicitly cast samples which pass near
object edges and are directed at edges and/or vertices of other
objects.

The first strategy starts similarly to the above described sampling
methods: we randomly select an object and a point on its surface. Then
we randomly pick an object from its PVS. If we have mesh
connectivity information, we select a random silhouette edge of this
object and cast a sample which is tangent to that object at the
selected edge.

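A sketch of this silhouette-based strategy follows; the mesh and scene
interfaces are illustrative assumptions, not the actual framework API.

\begin{verbatim}
// First event-driven strategy: graze a silhouette edge of an object
// taken from the source object's PVS. All types and helpers are
// assumed placeholders.
void CastSilhouetteSample(Object *source, Scene &scene)
{
  const Point origin = source->RandomSurfacePoint();

  // An object we already know to be visible from `source`.
  Object *target = source->RandomObjectFromPVS();

  // With mesh connectivity we can pick an edge on the silhouette of
  // `target` as seen from `origin`; a ray through a point on this edge
  // is tangent to the target there.
  Edge        edge = target->RandomSilhouetteEdge(origin);
  const Point to   = edge.RandomPoint();

  scene.CastVisibilitySampleRay(origin, Normalize(to - origin));
}
\end{verbatim}
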
The second strategy works as follows: we randomly pick two objects
which are likely to see each other. Then we determine a ray which is
tangent to both objects. For simple meshes such rays can be determined
geometrically; for more complicated ones the determination is again
based on random sampling. The selection of the two objects works
as follows: first we randomly select the first object and a random
non-empty view cell for which we know that it can see the object. Then
we randomly select an object associated with that view cell as the
second object.

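The object pair selection exploits both PVS representations, as the
following sketch indicates; the helper names are illustrative
assumptions.

\begin{verbatim}
#include <utility>

// Select two objects which are likely to see each other: the
// object-centered PVS links the first object to a view cell that sees
// it, and that view cell's PVS supplies the second object. All helper
// names are assumed placeholders.
std::pair<Object *, Object *> SelectMutuallyVisiblePair(Scene &scene)
{
  Object   *first = scene.RandomObject();

  // A non-empty view cell known to see the first object.
  ViewCell *vc = first->RandomVisibleViewCell();

  // An object from that view cell's PVS; with high probability the two
  // objects see each other, so a mutually tangent ray is worth casting.
  Object   *second = vc->RandomObjectFromPVS();

  return std::make_pair(first, second);
}
\end{verbatim}
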
\section{Summary}

This chapter described a global visibility sampling tool which forms the
core of the visibility preprocessing framework. The global visibility
sampling computes aggressive visibility, i.e., it computes a subset of
the exact PVS for each view cell. The aggressive sampling provides a
fast progressive solution and thus it can be easily integrated into
the game development cycle. The sampling itself involves several
strategies which aim to progressively discover more visibility
relationships in the scene.

The methods presented in this chapter give a good initial estimate
about visibility in the scene, which can be verified by the mutual
visibility tool described in the next chapter.