\chapter{Global Visibility Sampling Tool}


The proposed visibility preprocessing framework consists of two major
steps.

\begin{itemize}
\item The first step is an aggressive visibility sampling which gives
an initial estimate of global visibility in the scene. The sampling
itself involves several strategies which will be described in
section~\ref{sec:sampling}. The important property of the aggressive
sampling step is that it provides a fast progressive solution to
global visibility and thus it can be easily integrated into the game
development cycle.

\item The second step is mutual visibility verification. This step
turns the previous aggressive visibility solution into either an
exact, a conservative, or an error-bound aggressive solution. The
choice of the particular verifier is left to the user in order to
select the best one for a particular scene, application context and
time constraints. For example, in scenes like a forest an error-bound
aggressive visibility can be the best compromise between the resulting
size of the PVS (and framerate) and the visual quality. An exact or
conservative algorithm can however be chosen for urban scenes where
the omission of even small objects can be more disturbing for the
user. The mutual visibility tool will be described in the next chapter.

\end{itemize}


In traditional visibility preprocessing the view space is subdivided
into view cells and for each view cell the set of visible objects ---
the potentially visible set (PVS) --- is computed. This framework has
been used for conservative, aggressive and exact algorithms.

We propose a different strategy which has several advantages for
sampling-based aggressive visibility preprocessing. The strategy is
based on the following fundamental ideas:
\begin{itemize}
\item Compute progressive global visibility instead of sequential from-region visibility.
\item Exchange the roles of view cells and objects for some parts of the computation.
\end{itemize}

Both these points will be addressed in this chapter in more detail.



\section{Related work}
\label{VFR3D_RELATED_WORK}

Below we briefly discuss the related work on visibility preprocessing
in several application areas. In particular we focus on computing
from-region visibility, which has been at the core of most previous
visibility preprocessing techniques.

\subsection{Aspect graph}

The first algorithms dealing with from-region visibility belong to the
area of computer vision. The {\em aspect
  graph}~\cite{Gigus90,Plantinga:1990:RTH, Sojka:1995:AGT} partitions
the view space into cells that group viewpoints from which the
projection of the scene is qualitatively equivalent. The aspect graph
is a graph describing the view of the scene (aspect) for each cell of
the partitioning. The major drawback of this approach is that for
polygonal scenes with $n$ polygons there can be $\Theta(n^9)$ cells in
the partitioning for an unrestricted view space. A {\em scale space}
aspect graph~\cite{bb12595,bb12590} improves the robustness of the
method by merging similar features according to the given scale.


\subsection{Potentially visible sets}


In the computer graphics community Airey~\cite{Airey90} introduced
the concept of {\em potentially visible sets} (PVS).  Airey assumes
the existence of a natural subdivision of the environment into
cells. For models of building interiors these cells roughly correspond
to rooms and corridors.  For each cell the PVS is formed by cells
visible from any point of that cell.  Airey uses ray shooting to
approximate visibility between cells of the subdivision and so the
computed PVS is not conservative.

This concept was further elaborated by Teller et
al.~\cite{Teller92phd,Teller:1991:VPI} to establish a conservative
PVS.  The PVS is constructed by testing the existence of a stabbing
line through a sequence of polygonal portals between cells. Teller
proposed an exact solution to this problem using \plucker
coordinates~\cite{Teller:1992:CAA} and a simpler and more robust
conservative solution~\cite{Teller92phd}.  The portal-based methods
are well suited to static densely occluded environments with a
particular structure.  For less structured models they can face a
combinatorial explosion of complexity~\cite{Teller92phd}. Yagel and
Ray~\cite{Yagel95a} present an algorithm that uses a regular spatial
subdivision. Their approach is not sensitive to the structure of the
model in terms of complexity, but its efficiency is affected by the
discrete representation of the scene.

Plantinga proposed a PVS algorithm based on a conservative view space
partitioning built by evaluating visual
events~\cite{Plantinga:1993:CVP}. The construction of view space
partitionings was further studied by Chrysanthou et
al.~\cite{Chrysanthou:1998:VP}, Cohen-Or et al.~\cite{cohen-egc-98}
and Sadagic~\cite{Sadagic}.  Sudarsky and
Gotsman~\cite{Sudarsky:1996:OVA} proposed an output-sensitive
visibility algorithm for dynamic scenes.  Cohen-Or et
al.~\cite{COZ-gi98} developed a conservative algorithm determining
visibility of an $\epsilon$-neighborhood of a given viewpoint that was
used for network-based walkthroughs.

Conservative algorithms for computing PVS developed by Durand et
al.~\cite{EVL-2000-60} and Schaufler et al.~\cite{EVL-2000-59} make
use of several simplifying assumptions to avoid the usage of 4D data
structures.  Wang et al.~\cite{Wang98} proposed an algorithm that
precomputes visibility within beams originating from a restricted
viewpoint region. The approach is very similar to the 5D subdivision
for ray tracing~\cite{Simiakakis:1994:FAS} and so it exhibits similar
problems, namely excessive memory and preprocessing complexities.
Specialized algorithms for computing PVS in \m25d scenes were proposed
by Wonka et al.~\cite{wonka00}, Koltun et al.~\cite{koltun01}, and
Bittner et al.~\cite{bittner:2001:PG}.

The exact mutual visibility method presented later in the report is
based on a method exploiting \plucker coordinates of
lines~\cite{bittner02phd,nirenstein:02:egwr,haumont2005}. This
algorithm uses \plucker coordinates to compute visibility in shafts
defined by each polygon in the scene.


\subsection{Rendering of shadows}


The from-region visibility problems include the computation of soft
shadows due to an area light source. Continuous algorithms for
real-time soft shadow generation were studied by Chin and
Feiner~\cite{Chin:1992:FOP}, Loscos and
Drettakis~\cite{Loscos:1997:IHS}, Chrysanthou~\cite{Chrysantho1996a},
and Chrysanthou and Slater~\cite{Chrysanthou:1997:IUS}. Discrete
solutions have been proposed by Nishita~\cite{Nishita85}, Brotman and
Badler~\cite{Brotman:1984:GSS}, and Soler and Sillion~\cite{SS98}. An
exact algorithm computing the antipenumbra of an area light source was
developed by Teller~\cite{Teller:1992:CAA}.


\subsection{Discontinuity meshing}


Discontinuity meshing is used in the context of the radiosity global
illumination algorithm or for computing soft shadows due to area light
sources.  The first approximate discontinuity meshing algorithms were
studied by Campbell~\cite{Campbell:1990:AMG, Campbell91},
Lischinski~\cite{lischinski92a}, and Heckbert~\cite{Heckbert92discon}.
More elaborate methods were developed by
Drettakis~\cite{Drettakis94-SSRII, Drettakis94-FSAAL}, and Stewart and
Ghali~\cite{Stewart93-OSACS, Stewart:1994:FCSb}. These methods are
capable of creating a complete discontinuity mesh that encodes all
visual events involving the light source.

Classical radiosity is based on an evaluation of form factors
between two patches~\cite{Schroder:1993:FFB}. The visibility
computation is a crucial step in the form factor
evaluation~\cite{Teller:1993:GVA,Haines94,Teller:1994:POL,
Nechvile:1996:FFE,Teichmann:WV}. Similar visibility computations take
place in the scope of hierarchical radiosity
algorithms~\cite{Soler:1996:AEB, Drettakis:1997:IUG, Daubert:1997:HLS}.



\subsection{Global visibility}

The aim of {\em global visibility} computations is to capture and
describe visibility in the whole scene~\cite{Durand:1996:VCN}. The
global visibility algorithms are typically based on some form of {\em
line space subdivision} that partitions lines or rays into equivalence
classes according to their visibility classification. Each class
corresponds to a continuous set of rays with a common visibility
classification. The techniques differ mainly in how the line
space subdivision is computed and maintained. A practical application
of most of the proposed global visibility structures to 3D scenes is
still an open problem.  Prospectively these techniques provide an
elegant method for ray shooting acceleration --- the ray shooting
problem can be reduced to a point location in the line space
subdivision.


Pocchiola and Vegter introduced the visibility complex~\cite{pv-vc-93}
that describes global visibility in 2D scenes. The visibility complex
has been applied to solve various 2D visibility
problems~\cite{r-tsvcp-95,r-wvcav-97, r-dvpsv-97,Orti96-UVCRC}.  The
approach was generalized to 3D by Durand et
al.~\cite{Durand:1996:VCN}. Nevertheless, no implementation of the 3D
visibility complex is currently known. Durand et
al.~\cite{Durand:1997:VSP} introduced the {\em visibility skeleton},
a graph describing a skeleton of the 3D visibility
complex. The visibility skeleton was verified experimentally and the
results indicate that its practical behavior is much better than its
$O(n^4\log n)$ worst-case complexity suggests. Pu~\cite{Pu98-DSGIV}
developed a method similar to the one presented in this chapter. He
uses a BSP tree in \plucker coordinates to represent a global
visibility map for a given set of polygons. The computation is
performed considering all rays piercing the scene and so the method
exhibits unacceptable memory complexity even for scenes of moderate
size. Recently, Duguet and Drettakis~\cite{duguet:02:sig} developed a
robust variant of the visibility skeleton algorithm that uses robust
epsilon-visibility predicates.

Discrete methods aiming to describe visibility in a 4D data structure
were presented by Chrysanthou et al.~\cite{chrysanthou:cgi:98} and
Blais and Poulin~\cite{blais98a}.  These data structures are closely
related to the {\em lumigraph}~\cite{Gortler:1996:L,buehler2001} or
{\em light field}~\cite{Levoy:1996:LFR}. An interesting discrete
hierarchical visibility algorithm for two-dimensional scenes was
developed by Hinkenjann and M\"uller~\cite{EVL-1996-10}. One of the
biggest problems of the discrete solution space data structures is
the memory consumption required to achieve a reasonable
accuracy. Prospectively, the scene complexity
measures~\cite{Cazals:3204:1997} provide a useful estimate of the
required sampling density and the size of the solution space data
structure.


\subsection{Other applications}

Certain from-point visibility problems determining visibility over a
period of time can be transformed to a static from-region visibility
problem. Such a transformation is particularly useful for antialiasing
purposes~\cite{grant85a}. From-region visibility can also be used
in the context of simulating sound
propagation~\cite{Funkhouser98}. The sound propagation algorithms
typically require lower resolution than the algorithms simulating the
propagation of light, but they need to account for attenuation,
reflection and time delays.

\section{Algorithm Description}

This section first describes the setup of the global visibility
sampling algorithm. In particular we describe the view cell
representation and the novel concept of from-object based
visibility. Then we outline the different visibility sampling
strategies.

\subsection{View Cell Representation}

In order to efficiently use view cells with our sampling method, we
require a view cell representation which is

\begin{itemize}
\item optimized for view cell--ray intersection,
\item flexible, i.e., able to represent arbitrary geometry,
\item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
\end{itemize}

We meet these requirements by using a view cell BSP tree, where the
BSP leaves are associated with the view cells.  Using the BSP tree, we
are able to find the initial view cells with only a few ray-plane
intersections.  The hierarchical structure of the BSP tree can be
exploited as a hierarchy of view cells. If necessary, we could further
subdivide a BSP leaf view cell quite easily.
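
The following minimal sketch illustrates such a lookup for the origin of
a view ray; the {\tt Plane} and {\tt ViewCell} types and the node layout
are illustrative assumptions, not the actual implementation:

\begin{verbatim}
#include <memory>

struct Vec3  { float x, y, z; };
struct Plane {                        // plane: dot(n, p) + d = 0
  Vec3 n; float d;
  float distance(const Vec3 &p) const {
    return n.x*p.x + n.y*p.y + n.z*p.z + d;
  }
};
struct ViewCell;                      // defined elsewhere

struct BspNode {
  Plane plane;                        // splitting plane (interior nodes)
  std::unique_ptr<BspNode> front, back;
  ViewCell *viewCell = nullptr;       // associated view cell (leaves only)
  bool isLeaf() const { return !front && !back; }
};

// Locate the view cell containing a point: one plane test per level.
ViewCell *locateViewCell(const BspNode *node, const Vec3 &p) {
  while (!node->isLeaf())
    node = (node->plane.distance(p) >= 0.0f) ? node->front.get()
                                             : node->back.get();
  return node->viewCell;              // nullptr for empty space
}
\end{verbatim}

Traversing a whole ray works analogously, except that the ray is clipped
against each splitting plane and both subtrees are visited where needed.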

Currently we use two approaches to generate the initial BSP view cell
tree.

\begin{itemize}

\item We use a number of dedicated input view cells. Any closed mesh
can be used as an input view cell. The only requirement is that the
view cells do not overlap. We insert one view cell after the other
into the tree. The polygons of a view cell are filtered down the tree,
guiding the insertion process. Once we reach a leaf and there are no
more polygons left, we terminate the tree subdivision. If we are on
the inside of the last split plane (i.e., the leaf represents the
inside of the view cell), we associate the leaf with the view cell
(i.e., add a pointer to the view cell). Hence a number of leaves can
be associated with the same input view cell (see the sketch after
this list).

\item We apply the BSP tree subdivision to the scene geometry. When
the subdivision terminates, the leaf nodes also represent the view
cells.

\end{itemize}
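
The first approach could be sketched as follows, reusing the {\tt
BspNode} type from above; the {\tt Polygon} type, the {\tt
splitPolygons} helper (which classifies and splits polygons against a
plane and consumes coplanar ones) and the split-plane selection are
simplifying assumptions, and the view cell polygons are assumed to have
outward-facing normals:

\begin{verbatim}
#include <memory>
#include <vector>

struct Polygon { Plane plane; /* vertex loop, ... */ };

// Assumed helper: distribute polygons to the front and back of the
// plane, splitting straddling ones and consuming coplanar ones.
void splitPolygons(const Plane &plane, const std::vector<Polygon> &polys,
                   std::vector<Polygon> &front, std::vector<Polygon> &back);

// Filter the boundary polygons of one input view cell down the tree.
void insertViewCell(BspNode *node, const std::vector<Polygon> &polys,
                    ViewCell *vc, bool inside) {
  if (polys.empty()) {                // no polygons left: stop subdividing
    if (node->isLeaf() && inside)     // inside of the last split plane:
      node->viewCell = vc;            // associate leaf with the view cell
    return;
  }
  if (node->isLeaf()) {               // extend the tree with the plane of
    node->plane = polys.front().plane;  // one of the remaining polygons
    node->front = std::make_unique<BspNode>();
    node->back  = std::make_unique<BspNode>();
  }
  std::vector<Polygon> front, back;
  splitPolygons(node->plane, polys, front, back);
  insertViewCell(node->front.get(), front, vc, false); // outside the plane
  insertViewCell(node->back.get(),  back,  vc, true);  // inside the plane
}
\end{verbatim}

Several leaves may thus end up pointing to the same input view cell,
which is exactly the behavior described above.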

\subsection{From-object based visibility}

Our framework is based on the idea of sampling visibility by casting
rays through the scene and collecting their contributions. A
visibility sample is computed by casting a ray from an object towards
the view cells and computing the nearest intersection with the scene
objects. All view cells pierced by the ray segment can see the object
and thus the object can be added to their PVS. If the ray is terminated
at another scene object, the PVS of the pierced view cells can also be
extended by this terminating object. Thus a single ray can make a
number of contributions to the progressively computed PVSs. A ray
sample piercing $n$ view cells which is bounded by two distinct objects
contributes at most $2n$ entries to the current PVSs. Apart from
this performance benefit there is also a benefit in terms of the
sampling density: assuming that the view cells are usually much larger
than the objects (which is typically the case), starting the sampling
deterministically from the objects increases the probability of small
objects being captured in the PVS.
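
A single sample could be processed along the following lines; {\tt
castRay}, {\tt viewCellsPierced} (collecting the view cell BSP leaves
pierced by a segment) and {\tt sceneBoundExit} are assumed helpers, and
a view-cell-centred PVS is used for brevity:

\begin{verbatim}
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };
struct Object;
struct ViewCell { std::vector<Object*> pvs; };

struct Hit { Object *object; float t; };  // nearest intersection

// Assumed helpers, not the actual interface:
std::optional<Hit> castRay(const Vec3 &origin, const Vec3 &dir);
std::vector<ViewCell*> viewCellsPierced(const Vec3 &from, const Vec3 &to);
Vec3 sceneBoundExit(const Vec3 &origin, const Vec3 &dir);

// One sample cast from 'source': every pierced view cell sees the
// source object and, if the ray hits something, also the terminating
// object -- at most 2n PVS entries for n pierced view cells.
void addSampleContributions(Object *source, const Vec3 &origin,
                            const Vec3 &dir) {
  std::optional<Hit> hit = castRay(origin, dir);
  Vec3 end = hit ? Vec3{origin.x + hit->t*dir.x,
                        origin.y + hit->t*dir.y,
                        origin.z + hit->t*dir.z}
                 : sceneBoundExit(origin, dir);
  for (ViewCell *vc : viewCellsPierced(origin, end)) {
    vc->pvs.push_back(source);                // cell sees the source
    if (hit) vc->pvs.push_back(hit->object);  // ... and the hit object
  }
}
\end{verbatim}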

At this phase of the computation we not only start the samples from
the objects, but we also store the PVS information centered at the
objects. Instead of storing PVSs consisting of objects visible from
view cells, every object maintains a PVS consisting of potentially
visible view cells. While these representations contain exactly the
same information, as we shall see later the object-centered PVS is
better suited for the importance sampling phase as well as the
visibility verification phase.
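
The difference between the two equivalent representations can be
summarized by the following illustrative declarations:

\begin{verbatim}
#include <set>

struct Object;
struct ViewCell;

// Conventional view-cell-centred PVS: objects visible from the cell.
struct ViewCellPVS { std::set<Object*> visibleObjects; };

// Object-centred PVS used in this phase: view cells from which the
// object is potentially visible.
struct ObjectPVS { std::set<ViewCell*> visibleFromCells; };
\end{verbatim}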


\subsection{Basic Randomized Sampling}


The first phase of the sampling works as follows: in every pass, the
algorithm visits scene objects sequentially. For every scene object we
randomly choose a point on its surface. Then a ray is cast from the
selected point in a randomly chosen direction. We use a
uniform distribution of the ray directions with respect to the
halfspace given by the surface normal. Using this strategy the samples
are deterministically placed at every object, with a randomization of
the location on the object surface. The uniformly distributed
direction is a simple and fast strategy to gain initial visibility
information.
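
One pass of this strategy could look as follows; {\tt
randomSurfacePoint} (returning an area-weighted random surface point
together with its normal) is an assumed helper, and the uniform
directions are generated from normal deviates and flipped into the
normal's halfspace:

\begin{verbatim}
#include <cmath>
#include <random>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Object;

float dot(const Vec3 &a, const Vec3 &b) {
  return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Assumed helpers:
std::pair<Vec3, Vec3> randomSurfacePoint(const Object &obj,
                                         std::mt19937 &rng);
void addSampleContributions(Object *source, const Vec3 &origin,
                            const Vec3 &dir);

// Uniform direction on the unit sphere (via normal deviates), flipped
// into the halfspace given by the surface normal.
Vec3 uniformHemisphereDir(const Vec3 &normal, std::mt19937 &rng) {
  std::normal_distribution<float> g(0.0f, 1.0f);
  Vec3 d; float len;
  do {                                 // guard the degenerate case
    d = {g(rng), g(rng), g(rng)};
    len = std::sqrt(dot(d, d));
  } while (len < 1e-6f);
  d = {d.x/len, d.y/len, d.z/len};
  if (dot(d, normal) < 0.0f) d = {-d.x, -d.y, -d.z};
  return d;
}

// One pass: every object emits one randomized sample.
void basicSamplingPass(std::vector<Object*> &objects, std::mt19937 &rng) {
  for (Object *obj : objects) {
    auto [p, n] = randomSurfacePoint(*obj, rng);
    addSampleContributions(obj, p, uniformHemisphereDir(n, rng));
  }
}
\end{verbatim}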


The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more
objects. Additionally, every object is sampled many times depending on
the number of passes in which this sampling strategy is applied. This
increases the chance of even a small object being captured in the PVS
of the view cells from which it is visible.


\subsection{Accounting for View Cell Distribution}

The first modification of the basic algorithm accounts for the
irregular distribution of the view cells. Such a case is common for
example in urban scenes where the view cells are mostly distributed in
a horizontal direction and more view cells are placed at denser parts
of the city. The modification involves replacing the uniformly
distributed ray direction by directions distributed according to the
local view cell directional density. This means placing more samples in
directions where more view cells are located: we select a random
view cell which lies in the halfspace given by the surface normal at
the chosen point. We pick a random point inside the view cell and cast
a ray towards this point.
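
A sketch of the modified direction choice; {\tt
pickViewCellInHalfspace} (choosing a random view cell lying in the
positive halfspace of the surface normal, possibly weighted by the
local directional density) and {\tt randomPointInCell} are assumed
helpers:

\begin{verbatim}
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct Object;
struct ViewCell;

float dot(const Vec3 &a, const Vec3 &b);

// Assumed helpers:
ViewCell *pickViewCellInHalfspace(const std::vector<ViewCell*> &cells,
                                  const Vec3 &p, const Vec3 &n,
                                  std::mt19937 &rng);
Vec3 randomPointInCell(const ViewCell &vc, std::mt19937 &rng);

// Direction distributed according to the local view cell density:
// aim the sample at a random point inside a randomly chosen view cell.
Vec3 viewCellDirectedDir(const Vec3 &p, const Vec3 &n,
                         const std::vector<ViewCell*> &cells,
                         std::mt19937 &rng) {
  ViewCell *vc = pickViewCellInHalfspace(cells, p, n, rng);
  Vec3 t = randomPointInCell(*vc, rng);
  Vec3 d{t.x - p.x, t.y - p.y, t.z - p.z};
  float len = std::sqrt(dot(d, d));
  return {d.x/len, d.y/len, d.z/len};
}
\end{verbatim}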


\subsection{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving viewpoint. In polygonal scenes the
events are given by event surfaces defined by three distinct scene
edges. Depending on the edge configuration we distinguish between
vertex-edge (VE) events and triple-edge (EEE) events. The VE surfaces
are planar whereas the EEE surfaces are in general quadric surfaces.

To account for these events we explicitly cast samples that pass close
to object edges and are directed at edges and/or vertices of other
objects.
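
For VE events this could be sketched as follows; {\tt edges}, {\tt
vertices} and the exhaustive enumeration of edge/vertex pairs are
assumed simplifications (in practice only a subset of the pairs would
be sampled per pass):

\begin{verbatim}
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct Edge { Vec3 a, b; };
struct Object;

float dot(const Vec3 &a, const Vec3 &b);
void addSampleContributions(Object *source, const Vec3 &origin,
                            const Vec3 &dir);

// Assumed accessors:
const std::vector<Edge> &edges(const Object &obj);
const std::vector<Vec3> &vertices(const Object &obj);

// Rays leaving an edge of the source object towards a vertex of
// another object graze a potential VE event surface.
void sampleVertexEdgeEvents(Object *source,
                            const std::vector<Object*> &others,
                            std::mt19937 &rng) {
  std::uniform_real_distribution<float> u(0.0f, 1.0f);
  for (const Edge &e : edges(*source)) {
    float s = u(rng);                  // random point on the edge
    Vec3 o{e.a.x + s*(e.b.x - e.a.x),
           e.a.y + s*(e.b.y - e.a.y),
           e.a.z + s*(e.b.z - e.a.z)};
    for (const Object *other : others)
      for (const Vec3 &v : vertices(*other)) {
        Vec3 d{v.x - o.x, v.y - o.y, v.z - o.z};
        float len = std::sqrt(dot(d, d));
        if (len > 0.0f)
          addSampleContributions(source, o,
                                 {d.x/len, d.y/len, d.z/len});
      }
  }
}
\end{verbatim}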