\chapter{Global Visibility Sampling Tool}


\section{Introduction}


The proposed visibility preprocessing framework consists of two major
steps.

\begin{itemize}
\item The first step is an aggressive visibility sampling which gives
an initial estimate of global visibility in the scene. The sampling
itself involves several strategies which will be described in
section~\ref{sec:sampling}. The important property of the aggressive
sampling step is that it provides a fast progressive solution to
global visibility and thus it can easily be integrated into the game
development cycle.

\item The second step is mutual visibility verification. This step
turns the previous aggressive visibility solution into either an
exact, a conservative, or an error-bound aggressive solution. The
choice of the particular verifier is left to the user in order to
select the best one for a particular scene, application context, and
time constraints. For example, in scenes like a forest an error-bound
aggressive visibility solution can be the best compromise between the
resulting size of the PVS (and framerate) and the visual quality. The
exact or conservative algorithm can however be chosen for urban
scenes, where errors involving even small objects can be more
disturbing for the user. The mutual visibility tool will be described
in the next chapter.

\end{itemize}


In traditional visibility preprocessing the view space is subdivided
into view cells and for each view cell the set of visible objects ---
the potentially visible set (PVS) --- is computed. This framework has
been used for conservative, aggressive, and exact algorithms.

We propose a different strategy which has several advantages for
sampling-based aggressive visibility preprocessing. The strategy is
based on the following fundamental ideas:
\begin{itemize}
\item Replace the roles of view cells and objects.
\item Compute progressive global visibility instead of sequential from-region visibility.
\end{itemize}

Both these points will be addressed in this chapter in more detail.



\section{Related work}
\label{VFR3D_RELATED_WORK}


Below we briefly discuss the related work on visibility preprocessing
in several application areas. In particular, we focus on from-region
visibility computation, which has been at the core of most previous
visibility preprocessing techniques.


\subsection{Aspect graph}

The first algorithms dealing with from-region visibility belong to the
area of computer vision. The {\em aspect
graph}~\cite{Gigus90,Plantinga:1990:RTH, Sojka:1995:AGT} partitions
the view space into cells that group viewpoints from which the
projection of the scene is qualitatively equivalent. The aspect graph
is a graph describing the view of the scene (aspect) for each cell of
the partitioning. The major drawback of this approach is that for
polygonal scenes with $n$ polygons there can be $\Theta(n^9)$ cells in
the partitioning for an unrestricted view space. A {\em scale space}
aspect graph~\cite{bb12595,bb12590} improves the robustness of the
method by merging similar features according to the given scale.


\subsection{Potentially visible sets}


In the computer graphics community Airey~\cite{Airey90} introduced
the concept of {\em potentially visible sets} (PVS). Airey assumes
the existence of a natural subdivision of the environment into
cells. For models of building interiors these cells roughly correspond
to rooms and corridors. For each cell the PVS is formed by cells
visible from any point of that cell. Airey uses ray shooting to
approximate visibility between cells of the subdivision and so the
computed PVS is not conservative.

This concept was further elaborated by Teller et
al.~\cite{Teller92phd,Teller:1991:VPI} to establish a conservative
PVS. The PVS is constructed by testing the existence of a stabbing
line through a sequence of polygonal portals between cells. Teller
proposed an exact solution to this problem using \plucker
coordinates~\cite{Teller:1992:CAA} and a simpler and more robust
conservative solution~\cite{Teller92phd}. The portal-based methods
are well suited to static densely occluded environments with a
particular structure. For less structured models they can face a
combinatorial explosion of complexity~\cite{Teller92phd}. Yagel and
Ray~\cite{Yagel95a} present an algorithm that uses a regular spatial
subdivision. Their approach is not sensitive to the structure of the
model in terms of complexity, but its efficiency is limited by the
discrete representation of the scene.

Plantinga proposed a PVS algorithm based on a conservative viewspace
partitioning by evaluating visual
events~\cite{Plantinga:1993:CVP}. The construction of the viewspace
partitioning was further studied by Chrysanthou et
al.~\cite{Chrysanthou:1998:VP}, Cohen-Or et al.~\cite{cohen-egc-98},
and Sadagic~\cite{Sadagic}. Sudarsky and
Gotsman~\cite{Sudarsky:1996:OVA} proposed an output-sensitive
visibility algorithm for dynamic scenes. Cohen-Or et
al.~\cite{COZ-gi98} developed a conservative algorithm determining
visibility of an $\epsilon$-neighborhood of a given viewpoint that was
used for network-based walkthroughs.

Conservative algorithms for computing PVS developed by Durand et
al.~\cite{EVL-2000-60} and Schaufler et al.~\cite{EVL-2000-59} make
use of several simplifying assumptions to avoid the usage of 4D data
structures. Wang et al.~\cite{Wang98} proposed an algorithm that
precomputes visibility within beams originating from the restricted
viewpoint region. The approach is very similar to the 5D subdivision
for ray tracing~\cite{Simiakakis:1994:FAS} and so it exhibits similar
problems, namely excessive memory and preprocessing costs.
Specialized algorithms for computing PVS in \m25d scenes were proposed
by Wonka et al.~\cite{wonka00}, Koltun et al.~\cite{koltun01}, and
Bittner et al.~\cite{bittner:2001:PG}.

The exact mutual visibility method presented later in the report is
based on a method exploiting \plucker coordinates of
lines~\cite{bittner02phd,nirenstein:02:egwr,haumont2005}. This
algorithm uses \plucker coordinates to compute visibility in shafts
defined by each polygon in the scene.


\subsection{Rendering of shadows}


The from-region visibility problems include the computation of soft
shadows due to an area light source. Continuous algorithms for
real-time soft shadow generation were studied by Chin and
Feiner~\cite{Chin:1992:FOP}, Loscos and
Drettakis~\cite{Loscos:1997:IHS}, Chrysanthou~\cite{Chrysantho1996a},
and Chrysanthou and Slater~\cite{Chrysanthou:1997:IUS}. Discrete
solutions have been proposed by Nishita~\cite{Nishita85}, Brotman and
Badler~\cite{Brotman:1984:GSS}, and Soler and Sillion~\cite{SS98}. An
exact algorithm computing the antipenumbra of an area light source was
developed by Teller~\cite{Teller:1992:CAA}.


\subsection{Discontinuity meshing}


Discontinuity meshing is used in the context of the radiosity global
illumination algorithm or for computing soft shadows due to area light
sources. The first approximate discontinuity meshing algorithms were
studied by Campbell~\cite{Campbell:1990:AMG, Campbell91},
Lischinski~\cite{lischinski92a}, and Heckbert~\cite{Heckbert92discon}.
More elaborate methods were developed by
Drettakis~\cite{Drettakis94-SSRII, Drettakis94-FSAAL}, and Stewart and
Ghali~\cite{Stewart93-OSACS, Stewart:1994:FCSb}. These methods are
capable of creating a complete discontinuity mesh that encodes all
visual events involving the light source.

Classical radiosity is based on an evaluation of form factors
between two patches~\cite{Schroder:1993:FFB}. The visibility
computation is a crucial step in the form factor
evaluation~\cite{Teller:1993:GVA,Haines94,Teller:1994:POL,
Nechvile:1996:FFE,Teichmann:WV}. Similar visibility computations take
place in the scope of hierarchical radiosity
algorithms~\cite{Soler:1996:AEB, Drettakis:1997:IUG, Daubert:1997:HLS}.



\subsection{Global visibility}

The aim of {\em global visibility} computations is to capture and
describe visibility in the whole scene~\cite{Durand:1996:VCN}. The
global visibility algorithms are typically based on some form of {\em
line space subdivision} that partitions lines or rays into equivalence
classes according to their visibility classification. Each class
corresponds to a continuous set of rays with a common visibility
classification. The techniques differ mainly in how the line space
subdivision is computed and maintained. A practical application of
most of the proposed global visibility structures for 3D scenes is
still an open problem. Prospectively these techniques provide an
elegant method for ray shooting acceleration --- the ray shooting
problem can be reduced to a point location in the line space
subdivision.


Pocchiola and Vegter introduced the visibility complex~\cite{pv-vc-93}
that describes global visibility in 2D scenes. The visibility complex
has been applied to solve various 2D visibility
problems~\cite{r-tsvcp-95,r-wvcav-97, r-dvpsv-97,Orti96-UVCRC}. The
approach was generalized to 3D by Durand et
al.~\cite{Durand:1996:VCN}. Nevertheless, no implementation of the 3D
visibility complex is currently known. Durand et
al.~\cite{Durand:1997:VSP} introduced the {\em visibility skeleton},
which is a graph describing a skeleton of the 3D visibility
complex. The visibility skeleton was verified experimentally and the
results indicate that its $O(n^4\log n)$ worst-case complexity is far
from being reached in practice. Pu~\cite{Pu98-DSGIV} developed a
method similar to the one presented in this chapter. He uses a BSP
tree in \plucker coordinates to represent a global visibility map for
a given set of polygons. The computation is performed considering all
rays piercing the scene and so the method exhibits unacceptable memory
complexity even for scenes of moderate size. Recently, Duguet and
Drettakis~\cite{duguet:02:sig} developed a robust variant of the
visibility skeleton algorithm that uses robust epsilon-visibility
predicates.

Discrete methods aiming to describe visibility in a 4D data structure
were presented by Chrysanthou et al.~\cite{chrysanthou:cgi:98} and
Blais and Poulin~\cite{blais98a}. These data structures are closely
related to the {\em lumigraph}~\cite{Gortler:1996:L,buehler2001} or
{\em light field}~\cite{Levoy:1996:LFR}. An interesting discrete
hierarchical visibility algorithm for two-dimensional scenes was
developed by Hinkenjann and M\"uller~\cite{EVL-1996-10}. One of the
biggest problems of the discrete solution space data structures is
the memory consumption required to achieve a reasonable
accuracy. Prospectively, the scene complexity
measures~\cite{Cazals:3204:1997} provide a useful estimate of the
required sampling density and the size of the solution space data
structure.


\subsection{Other applications}

Certain from-point visibility problems determining visibility over a
period of time can be transformed to a static from-region visibility
problem. Such a transformation is particularly useful for antialiasing
purposes~\cite{grant85a}. The from-region visibility can also be used
in the context of simulation of sound
propagation~\cite{Funkhouser98}. The sound propagation algorithms
typically require lower resolution than the algorithms simulating the
propagation of light, but they need to account for attenuation,
reflection, and time delays.

\section{Algorithm Setup}

\subsection{View Cell Representation}

In order to efficiently use view cells with our sampling method, we
require a view cell representation which is

\begin{itemize}
\item optimized for view cell--ray intersection.
\item flexible, i.e., it can represent arbitrary geometry.
\item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
\end{itemize}

We meet these requirements by using a view cell BSP tree, where the
BSP leaves are associated with the view cells. Using the BSP tree, we
are able to find the initial view cells with only a few ray-plane
intersections. The hierarchical structure of the BSP tree can be
exploited as a hierarchy of view cells. If necessary, we could further
subdivide a BSP leaf view cell quite easily.

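The following sketch illustrates how such a tree can be traversed to
collect the view cells pierced by a ray segment. It is a minimal
sketch only: the types \texttt{BspNode}, \texttt{ViewCell},
\texttt{Plane}, and \texttt{Vector3} are hypothetical placeholders,
not the actual implementation.

\begin{verbatim}
// Sketch (hypothetical types): collect the view cells associated
// with the BSP leaves pierced by the ray segment [a, b].
#include <vector>

struct ViewCell;

struct BspNode {
  bool      isLeaf;
  Plane     plane;         // splitting plane (interior nodes only)
  BspNode  *front, *back;  // children (interior nodes only)
  ViewCell *viewCell;      // associated view cell (leaves, may be null)
};

void CollectViewCells(BspNode *node, const Vector3 &a, const Vector3 &b,
                      std::vector<ViewCell *> &result)
{
  if (node->isLeaf) {
    if (node->viewCell)
      result.push_back(node->viewCell);
    return;
  }
  const float da = node->plane.Distance(a);  // signed distances
  const float db = node->plane.Distance(b);

  if (da >= 0 && db >= 0)
    CollectViewCells(node->front, a, b, result);
  else if (da < 0 && db < 0)
    CollectViewCells(node->back, a, b, result);
  else {
    // the segment straddles the plane: split it and recurse
    const Vector3 m = a + (b - a) * (da / (da - db));
    CollectViewCells(da >= 0 ? node->front : node->back, a, m, result);
    CollectViewCells(da >= 0 ? node->back : node->front, m, b, result);
  }
}
\end{verbatim}
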
Currently we use two approaches to generate the initial BSP view cell tree.

\begin{itemize}
\item We use a number of dedicated input view cells (see the sketch
after this list). Any closed mesh can serve as an input view cell; the
only requirement is that the view cells do not overlap. We insert one
view cell after the other into the tree. The polygons of a view cell
are filtered down the tree, guiding the insertion process. Once we
reach a leaf and there are no more polygons left, we terminate the
tree subdivision. If we are on the inside of the last split plane
(i.e., the leaf represents the inside of the view cell), we associate
the leaf with the view cell (i.e., add a pointer to the view
cell). Hence a number of leaves can be associated with the same input
view cell.
\item We apply the BSP tree subdivision to the scene geometry. When
the subdivision terminates, the leaf nodes also represent the view
cells.
\end{itemize}

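A minimal sketch of the first approach follows, under stated
assumptions: \texttt{SubdivideLeaf} and \texttt{SplitPolygons} are
assumed helpers (splitter selection and polygon clipping), and the
interior of a view cell is assumed to lie on the back side of its
outward-facing polygons.

\begin{verbatim}
// Sketch (assumed interfaces): filter the polygons of one view cell
// down the BSP tree and associate the interior leaves with the cell.
void InsertViewCell(BspNode *node, std::vector<Polygon> polys,
                    ViewCell *vc, bool inside)
{
  if (polys.empty()) {
    // no polygons left: subdivision terminates; the leaf represents
    // the cell interior if we are on the inside (back side) of the
    // last split plane
    if (node->isLeaf && inside)
      node->viewCell = vc;
    return;
  }
  if (node->isLeaf)
    SubdivideLeaf(node, polys);  // assumed: picks a splitter plane

  std::vector<Polygon> front, back;
  SplitPolygons(node->plane, polys, front, back);  // assumed: clipping

  InsertViewCell(node->front, front, vc, false);
  InsertViewCell(node->back,  back,  vc, true);    // back = inside
}
\end{verbatim}
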
\subsection{From-object based visibility}

Our framework is based on the idea of sampling visibility by casting
rays through the scene and collecting their contributions. A
visibility sample is computed by casting a ray from an object towards
the view cells and computing the nearest intersection with the scene
objects. All view cells pierced by the ray segment can see the object
and thus the object can be added to their PVS. If the ray is
terminated at another scene object, the PVS of the pierced view cells
can also be extended by this terminating object. Thus a single ray can
make a number of contributions to the progressively computed PVSs. A
ray sample piercing $n$ view cells which is bounded by two distinct
objects contributes at most $2n$ entries to the current PVSs. Apart
from this performance benefit there is also a benefit in terms of the
sampling density: assuming that the view cells are usually much larger
than the objects (which is typically the case), starting the sampling
deterministically from the objects increases the probability of small
objects being captured in the PVS.

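The following sketch shows how a single sample distributes its at most
$2n$ contributions. It builds on the hypothetical
\texttt{CollectViewCells} above; \texttt{CastRay},
\texttt{Intersection}, and the remaining scene types are likewise
assumptions, not the actual interface.

\begin{verbatim}
// Sketch: one from-object visibility sample. A ray starting at a
// point on 'obj' is cast in direction 'dir'; every view cell pierced
// by the segment between the origin and the nearest hit sees 'obj'
// and, if the ray was blocked, also the terminating object.
void CastVisibilitySample(Object *obj, const Vector3 &origin,
                          const Vector3 &dir, float maxDist)
{
  Intersection hit;                                  // assumed type
  const bool blocked = CastRay(origin, dir, maxDist, hit);
  const Vector3 end = blocked ? hit.point
                              : origin + dir * maxDist;

  std::vector<ViewCell *> cells;
  CollectViewCells(bspRoot, origin, end, cells);     // see above

  for (ViewCell *vc : cells) {
    vc->pvs.insert(obj);                  // the cell sees the origin
    if (blocked)
      vc->pvs.insert(hit.object);         // ... and the blocker
  }
  // n pierced cells, two distinct objects: at most 2n new entries
}
\end{verbatim}
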
At this phase of the computation we not only start the samples from
the objects, but we also store the PVS information centered at the
objects. Instead of storing PVSs consisting of objects visible from
view cells, every object maintains a PVS consisting of potentially
visible view cells. While these representations contain exactly the
same information, as we shall see later the object-centered PVS is
better suited for the importance sampling phase as well as the
visibility verification phase.

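Schematically, the two representations are inverse mappings of each
other (again with hypothetical types), and the object-centered form
can be converted into the conventional view-cell-centered form in a
single pass:

\begin{verbatim}
#include <set>
#include <vector>

struct Object;
struct ViewCell { std::set<Object *>   pvs; };  // objects seen from cell
struct Object   { std::set<ViewCell *> pvs; };  // cells seeing the object

// Invert the object-centered PVS into the view-cell-centered one.
void BuildViewCellPVS(const std::vector<Object *> &objects)
{
  for (Object *obj : objects)
    for (ViewCell *vc : obj->pvs)
      vc->pvs.insert(obj);
}
\end{verbatim}
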
\section{Basic Randomized Sampling}


The first phase of the sampling works as follows: in every pass, the
algorithm visits the scene objects sequentially. For every scene
object we randomly choose a point on its surface. Then a ray is cast
from the selected point in a randomly chosen direction. We use a
uniform distribution of the ray directions with respect to the
halfspace given by the surface normal. Using this strategy the samples
are deterministically placed at every object, with a randomization of
the location on the object surface. The uniformly distributed
direction is a simple and fast strategy to gain initial visibility
information.

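A common way to generate such directions, shown below as a sketch
(\texttt{RandomFloat} returning uniform values in $[0,1)$ and the
\texttt{Vector3} operations are assumptions), is to sample the unit
sphere uniformly and flip the result into the halfspace of the normal:

\begin{verbatim}
#include <algorithm>
#include <cmath>

// Sketch: uniform random direction in the hemisphere given by the
// surface normal n (assumed to be of unit length).
Vector3 UniformHemisphereDir(const Vector3 &n)
{
  const float u = RandomFloat();           // assumed: uniform [0,1)
  const float v = RandomFloat();
  const float z = 1.0f - 2.0f * u;         // cos(theta), uniform in [-1,1]
  const float r = std::sqrt(std::max(0.0f, 1.0f - z * z));
  const float phi = 2.0f * 3.14159265f * v;

  Vector3 d(r * std::cos(phi), r * std::sin(phi), z);  // uniform on sphere

  // flip into the halfspace of the normal
  return (DotProd(d, n) < 0.0f) ? -d : d;
}
\end{verbatim}
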
The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more
objects. Additionally every object is sampled many times depending on
the number of passes in which this sampling strategy is applied. This
increases the chance of even a small object being captured in the PVS
of the view cells from which it is visible.


\section{Accounting for View Cell Distribution}

The first modification to the basic algorithm accounts for the
irregular distribution of the view cells. Such a case is common for
example in urban scenes, where the view cells are mostly distributed
in a horizontal direction and more view cells are placed at denser
parts of the city. The modification involves replacing the uniformly
distributed ray direction by directions distributed according to the
local view cell directional density. This means placing more samples
in directions where more view cells are located. We select a random
view cell which lies in the halfspace given by the surface normal at
the chosen point. We then pick a random point inside the view cell and
cast a ray towards this point.

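A sketch of this strategy, with \texttt{RandomInt},
\texttt{RandomPointInViewCell} (e.g., rejection sampling in the
cell's bounding box), and \texttt{Normalize} as assumed helpers:

\begin{verbatim}
// Sketch: direction sampling driven by the view cell distribution.
// 'candidates' holds the view cells lying in the halfspace given by
// the surface normal at point p.
Vector3 ViewCellDirectedDir(const Vector3 &p,
                            const std::vector<ViewCell *> &candidates)
{
  // pick a random candidate view cell ...
  ViewCell *vc = candidates[RandomInt(candidates.size())];
  // ... and a random point inside it (assumed helper)
  const Vector3 target = RandomPointInViewCell(vc);
  // cast the ray towards this point
  return Normalize(target - p);
}
\end{verbatim}
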
\section{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving viewpoint. In polygonal scenes the
events are defined by event surfaces formed by three distinct scene
edges. Depending on the edge configuration we distinguish between
vertex-edge (VE) events and triple-edge (EEE) events. The VE surfaces
are planar, whereas the EEE surfaces are in general quadric surfaces.

To account for these events we explicitly place samples passing by the
object edges which are directed to edges and/or vertices of other
objects.

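One way such a sample might be generated for a VE event is sketched
below; the \texttt{Edge} type, the \texttt{eps} offset, the
\texttt{sceneDiameter} ray length, and the reuse of the hypothetical
\texttt{CastVisibilitySample} from above are all assumptions.

\begin{verbatim}
// Sketch: place a sample along a vertex-edge (VE) event surface.
// The ray grazes an edge of one object and aims at a vertex of
// another object.
void CastVertexEdgeSample(const Edge &e, const Vector3 &vertex)
{
  const float t = RandomFloat();                  // point on the edge
  const Vector3 onEdge = e.a + (e.b - e.a) * t;
  const Vector3 dir = Normalize(vertex - onEdge);

  // start slightly before the edge so the ray passes by it
  const float eps = 1e-3f;                        // assumed offset
  const Vector3 origin = onEdge - dir * eps;

  // sceneDiameter: assumed bound on the scene extent
  CastVisibilitySample(e.object, origin, dir, sceneDiameter);
}
\end{verbatim}
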