\chapter{Global Visibility Sampling Tool}


The proposed visibility preprocessing framework consists of two major
steps.

\begin{itemize}
\item The first step is an aggressive visibility sampling which gives
an initial estimate of global visibility in the scene. The sampling
itself involves several strategies which will be described below.
An important property of the aggressive sampling step is that it
provides a fast progressive solution to global visibility and thus it
can be easily integrated into the game development cycle. The
aggressive sampling terminates when the average contribution of new
ray samples falls below a predefined threshold.

\item The second step is mutual visibility verification. This step
turns the previous aggressive visibility solution into either an exact,
a conservative, or an error-bound aggressive solution. The choice of the
particular verifier is left to the user in order to select the best
one for a particular scene, application context, and time
constraints. For example, in scenes like a forest, error-bound
aggressive visibility can be the best compromise between the resulting
size of the PVS (and thus the frame rate) and the visual quality. An exact or
conservative algorithm can however be chosen for urban scenes, where the
omission of even small objects can be more disturbing to the
user. The mutual visibility tool will be described in the next chapter.

\end{itemize}

In traditional visibility preprocessing the view space is
subdivided into view cells and for each view cell the set of visible
objects --- the potentially visible set (PVS) --- is computed. This framework
has been used for conservative, aggressive, and exact algorithms.

We propose a different strategy which has several advantages for
sampling-based aggressive visibility preprocessing. The strategy is
based on the following fundamental ideas:
\begin{itemize}
\item Compute progressive global visibility instead of sequential from-region visibility.
\item Swap the roles of view cells and objects for some parts of the computation.
\end{itemize}

Both these points will be addressed in this chapter in more detail.

\section{Related work}
\label{VFR3D_RELATED_WORK}

Below we briefly discuss the related work on visibility preprocessing
in several application areas. In particular we focus on computing
from-region visibility, which has been at the core of most previous
visibility preprocessing techniques.


\subsection{Aspect graph}

The first algorithms dealing with from-region visibility belong to the
area of computer vision. The {\em aspect
graph}~\cite{Gigus90,Plantinga:1990:RTH,Sojka:1995:AGT} partitions
the view space into cells that group viewpoints from which the
projection of the scene is qualitatively equivalent. The aspect graph
is a graph describing the view of the scene (aspect) for each cell of
the partitioning. The major drawback of this approach is that for
polygonal scenes with $n$ polygons there can be $\Theta(n^9)$ cells in
the partitioning for an unrestricted view space. A {\em scale space}
aspect graph~\cite{bb12595,bb12590} improves the robustness of the method
by merging similar features according to the given scale.


\subsection{Potentially visible sets}


In the computer graphics community Airey~\cite{Airey90} introduced
the concept of {\em potentially visible sets} (PVS). Airey assumes
the existence of a natural subdivision of the environment into
cells. For models of building interiors these cells roughly correspond
to rooms and corridors. For each cell the PVS is formed by cells
visible from any point of that cell. Airey uses ray shooting to
approximate visibility between cells of the subdivision and so the
computed PVS is not conservative.

81 |
|
---|
82 | This concept was further elaborated by Teller et
|
---|
83 | al.~\cite{Teller92phd,Teller:1991:VPI} to establish a conservative
|
---|
84 | PVS. The PVS is constructed by testing the existence of a stabbing
|
---|
85 | line through a sequence of polygonal portals between cells. Teller
|
---|
86 | proposed an exact solution to this problem using \plucker
|
---|
87 | coordinates~\cite{Teller:1992:CAA} and a simpler and more robust
|
---|
88 | conservative solution~\cite{Teller92phd}. The portal based methods
|
---|
89 | are well suited to static densely occluded environments with a
|
---|
90 | particular structure. For less structured models they can face a
|
---|
91 | combinatorial explosion of complexity~\cite{Teller92phd}. Yagel and
|
---|
92 | Ray~\cite{Yagel95a} present an algorithm, that uses a regular spatial
|
---|
93 | subdivision. Their approach is not sensitive to the structure of the
|
---|
94 | model in terms of complexity, but its efficiency is altered by the
|
---|
95 | discrete representation of the scene.
|
---|
96 |
|
---|
Plantinga proposed a PVS algorithm based on a conservative viewspace
partitioning by evaluating visual
events~\cite{Plantinga:1993:CVP}. The construction of viewspace
partitioning was further studied by Chrysanthou et
al.~\cite{Chrysanthou:1998:VP}, Cohen-Or et al.~\cite{cohen-egc-98}
and Sadagic~\cite{Sadagic}. Sudarsky and
Gotsman~\cite{Sudarsky:1996:OVA} proposed an output-sensitive
visibility algorithm for dynamic scenes. Cohen-Or et
al.~\cite{COZ-gi98} developed a conservative algorithm determining
visibility of an $\epsilon$-neighborhood of a given viewpoint that was
used for network based walkthroughs.

Conservative algorithms for computing PVS developed by Durand et
al.~\cite{EVL-2000-60} and Schaufler et al.~\cite{EVL-2000-59} make
use of several simplifying assumptions to avoid the usage of 4D data
structures. Wang et al.~\cite{Wang98} proposed an algorithm that
precomputes visibility within beams originating from the restricted
viewpoint region. The approach is very similar to the 5D subdivision
for ray tracing~\cite{Simiakakis:1994:FAS} and so it exhibits similar
problems, namely prohibitive memory and preprocessing complexities.
Specialized algorithms for computing PVS in \m25d scenes were proposed
by Wonka et al.~\cite{wonka00}, Koltun et al.~\cite{koltun01}, and
Bittner et al.~\cite{bittner:2001:PG}.

The exact mutual visibility method presented later in the report is
based on a method exploiting \plucker coordinates of
lines~\cite{bittner02phd,nirenstein:02:egwr,haumont2005}. This
algorithm uses \plucker coordinates to compute visibility in shafts
defined by each polygon in the scene.


\subsection{Rendering of shadows}


The from-region visibility problems include the computation of soft
shadows due to an area light source. Continuous algorithms for
real-time soft shadow generation were studied by Chin and
Feiner~\cite{Chin:1992:FOP}, Loscos and
Drettakis~\cite{Loscos:1997:IHS}, Chrysanthou~\cite{Chrysantho1996a},
and Chrysanthou and
Slater~\cite{Chrysanthou:1997:IUS}. Discrete solutions have been
proposed by Nishita~\cite{Nishita85}, Brotman and
Badler~\cite{Brotman:1984:GSS}, and Soler and Sillion~\cite{SS98}. An
exact algorithm computing an antipenumbra of an area light source was
developed by Teller~\cite{Teller:1992:CAA}.

\subsection{Discontinuity meshing}


Discontinuity meshing is used in the context of the radiosity global
illumination algorithm or for computing soft shadows due to area light
sources. The first approximate discontinuity meshing algorithms were
studied by Campbell~\cite{Campbell:1990:AMG, Campbell91},
Lischinski~\cite{lischinski92a}, and Heckbert~\cite{Heckbert92discon}.
More elaborate methods were developed by
Drettakis~\cite{Drettakis94-SSRII, Drettakis94-FSAAL}, and Stewart and
Ghali~\cite{Stewart93-OSACS, Stewart:1994:FCSb}. These methods are
capable of creating a complete discontinuity mesh that encodes all
visual events involving the light source.

Classical radiosity is based on an evaluation of form factors
between two patches~\cite{Schroder:1993:FFB}. The visibility
computation is a crucial step in the form factor
evaluation~\cite{Teller:1993:GVA,Haines94,Teller:1994:POL,
Nechvile:1996:FFE,Teichmann:WV}. Similar visibility computations take
place in the scope of hierarchical radiosity
algorithms~\cite{Soler:1996:AEB, Drettakis:1997:IUG, Daubert:1997:HLS}.


\subsection{Global visibility}

The aim of {\em global visibility} computations is to capture and
describe visibility in the whole scene~\cite{Durand:1996:VCN}. The
global visibility algorithms are typically based on some form of {\em
line space subdivision} that partitions lines or rays into equivalence
classes according to their visibility classification. Each class
corresponds to a continuous set of rays with a common visibility
classification. The techniques differ mainly in the way the line
space subdivision is computed and maintained. A practical application
of most of the proposed global visibility structures for 3D scenes is
still an open problem. Prospectively these techniques provide an
elegant method for ray shooting acceleration --- the ray shooting
problem can be reduced to a point location in the line space
subdivision.


Pocchiola and Vegter introduced the visibility complex~\cite{pv-vc-93}
that describes global visibility in 2D scenes. The visibility complex
has been applied to solve various 2D visibility
problems~\cite{r-tsvcp-95,r-wvcav-97,r-dvpsv-97,Orti96-UVCRC}. The
approach was generalized to 3D by Durand et
al.~\cite{Durand:1996:VCN}. Nevertheless, no implementation of the 3D
visibility complex is currently known. Durand et
al.~\cite{Durand:1997:VSP} introduced the {\em visibility skeleton},
a graph describing a skeleton of the 3D visibility
complex. The visibility skeleton was verified experimentally and the
results indicate that its $O(n^4\log n)$ worst case complexity is much
better in practice. Pu~\cite{Pu98-DSGIV} developed a method similar to
the one presented in this chapter. He uses a BSP tree in \plucker
coordinates to represent a global visibility map for a given set of
polygons. The computation is performed considering all rays piercing
the scene and so the method exhibits unacceptable memory complexity
even for scenes of moderate size. Recently, Duguet and
Drettakis~\cite{duguet:02:sig} developed a robust variant of the
visibility skeleton algorithm that uses robust epsilon-visibility
predicates.

Discrete methods aiming to describe visibility in a 4D data structure
were presented by Chrysanthou et al.~\cite{chrysanthou:cgi:98} and
Blais and Poulin~\cite{blais98a}. These data structures are closely
related to the {\em lumigraph}~\cite{Gortler:1996:L,buehler2001} or
{\em light field}~\cite{Levoy:1996:LFR}. An interesting discrete
hierarchical visibility algorithm for two-dimensional scenes was
developed by Hinkenjann and M\"uller~\cite{EVL-1996-10}. One of the
biggest problems of the discrete solution space data structures is the
memory consumption required to achieve a reasonable
accuracy. Prospectively, the scene complexity
measures~\cite{Cazals:3204:1997} provide a useful estimate of the
required sampling density and the size of the solution space data
structure.


\subsection{Other applications}

Certain from-point visibility problems determining visibility over a
period of time can be transformed to a static from-region visibility
problem. Such a transformation is particularly useful for antialiasing
purposes~\cite{grant85a}. The from-region visibility can also be used
in the context of simulating sound
propagation~\cite{Funkhouser98}. The sound propagation algorithms
typically require lower resolution than the algorithms simulating the
propagation of light, but they need to account for simulation of
attenuation, reflection, and time delays.

\section{Algorithm Description}

This section first describes the setup of the global visibility
sampling algorithm. In particular we describe the view cell
representation and the novel concept of from-object based
visibility. Then we outline the different visibility sampling
strategies.

\subsection{View Space Partitioning}


\begin{figure}[htb]
\centerline{
\includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_01}
\includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_07}
}

\caption{(left) View cells of the Vienna model. (right) The view cells are prisms with a triangular base.}
\label{fig:vienna_viewcells}
\end{figure}


Before the visibility computation itself, we subdivide the space of
all possible viewpoints and viewing directions into view cells. A good
partition of the scene into view cells is an essential part of every
visibility system. If they are chosen too large, the PVS (Potentially
Visible Set) of a view cell is too large for efficient culling. If
they are chosen too small or without consideration, then neighbouring
view cells contain redundant PVS information, unnecessarily increasing
the PVS computation and storage costs for the scene. In the left image of
figure~\ref{fig:vienna_viewcells} we see view cells of the Vienna
model, generated by triangulation of the streets. In the closeup
(right image) we can see that each triangle is extruded to a given
height to form a view cell prism.

In order to efficiently use view cells with our sampling method, we
require a view cell representation which is

\begin{itemize}
\item optimized for view cell--ray intersection,
\item flexible, i.e., able to represent arbitrary geometry, and
\item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
\end{itemize}

We meet these requirements by employing spatial subdivisions (i.e.,
kD-trees and BSP trees) to store the view cells. The initial view cells are
associated with the leaves. The reason why we chose BSP trees as the view cell
representation is that they are very flexible: view cells forming arbitrary
closed meshes can be closely matched. Therefore we are able to find a view
cell with only a few ray-plane intersections. Furthermore, the hierarchical
structure can be exploited as a hierarchy of view cells: interior nodes form
larger view cells containing their children, and if necessary, a leaf can
easily be subdivided into smaller view cells.
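This lookup can be sketched as follows. The sketch is our own minimal illustration, not the actual implementation; the node layout and the \texttt{pierced\_view\_cells} helper are assumptions. Each interior BSP node stores a splitting plane, each leaf optionally references a view cell, and a ray segment is recursively clipped against the planes so that only the pierced leaves are visited.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec = Tuple[float, float, float]

def dot(a: Vec, b: Vec) -> float:
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

@dataclass
class BspNode:
    # Interior node: splitting plane (normal, d) with front/back children.
    # Leaf node: plane is None; view_cell holds the associated cell id
    # (or None for leaves outside any input view cell).
    plane: Optional[Tuple[Vec, float]] = None
    front: Optional["BspNode"] = None
    back: Optional["BspNode"] = None
    view_cell: Optional[int] = None

def pierced_view_cells(node, a, b, out):
    """Collect the view cells of all leaves pierced by segment a-b."""
    if node.plane is None:
        if node.view_cell is not None and node.view_cell not in out:
            out.append(node.view_cell)
        return
    n, d = node.plane
    da, db = dot(n, a) - d, dot(n, b) - d
    if da >= 0 and db >= 0:
        pierced_view_cells(node.front, a, b, out)
    elif da < 0 and db < 0:
        pierced_view_cells(node.back, a, b, out)
    else:
        # Segment straddles the plane: split it at the intersection
        # point and descend into both subtrees.
        t = da / (da - db)
        m = tuple(a[i] + t * (b[i] - a[i]) for i in range(3))
        near, far = (node.front, node.back) if da >= 0 else (node.back, node.front)
        pierced_view_cells(near, a, m, out)
        pierced_view_cells(far, m, b, out)
```

Only the subtrees actually crossed by the segment are descended into, which is why a ray query touches few planes for well-balanced trees.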
285 |
|
---|
286 | Currently we consider three different approaches to generate the initial view cell BSP tree.
|
---|
287 | The third method is not restricted to BSP trees, but BSP trees are prefered
|
---|
288 | because of their greater flexibility.
|
---|
289 |
|
---|
290 |
|
---|
\begin{table}
\centering \footnotesize
\begin{tabular}{|l|c|c|}
\hline\hline
View cells & Vienna selection & Vienna full \\\hline\hline
\#view cells & 105 & 16447 \\\hline\hline
\#input polygons & 525 & 82235 \\\hline\hline
BSP tree generation time & 0.016s & 10.328s \\\hline\hline
view cell insertion time & 0.016s & 7.984s \\\hline\hline
\#nodes & 1137 & 597933 \\\hline\hline
\#interior nodes & 568 & 298966\\\hline\hline
\#leaf nodes & 569 & 298967\\\hline\hline
\#splits & 25 & 188936\\\hline\hline
max tree depth & 13 & 27\\\hline\hline
avg tree depth & 9.747 & 21.11\\\hline\hline
\end{tabular}
\caption{Statistics for the view cell BSP tree on the Vienna view cells and a selection of the Vienna view cells.}\label{tab:viewcell_bsp}
\end{table}

\begin{itemize}

\item We use a number of input view cells given in advance. Any closed
mesh can be used as an input view cell; the only requirement is that
no two view cells overlap.
First the view cell polygons are extracted, and the BSP tree is built from
these polygons using some global optimizations like tree balancing or
least splits. Then the view cells are inserted into the tree one after the
other to find the leaves in which they are contained.
The polygons of the view cell are filtered down the tree,
guiding the insertion process. Once we reach a leaf and there are no
more polygons left, we terminate the tree subdivision. If we are on
the inside of the last split plane (i.e., the leaf represents the
inside of the view cell), we associate the leaf with the view cell
(i.e., add a pointer to the view cell). One input view cell can
be associated with many leaves, whereas each leaf has only one view cell.
Some statistics for this method applied to the Vienna view cell set are given in table~\ref{tab:viewcell_bsp}.
However, sometimes a good set of view cells is not available, or the scene
is changed frequently and the designer does not want to create new view cells
on each change. In such a case one of the following two methods, which
generate view cells automatically, should rather be chosen.

\item We apply a BSP tree subdivision to the scene geometry. Whenever
the subdivision terminates in a leaf, a view cell is associated with the leaf node.
This simple approach is justified because it places the view cell borders
along some discontinuities in the visibility function.

\item The view cell generation can be guided by the sampling process. We start
with a single initial view cell representing the whole space. If a given threshold
is reached during the preprocessing (e.g., the view cell is pierced by too many rays
resulting in a large PVS), the view cell is subdivided into smaller cells using some criteria.

In order to evaluate the best split plane, we first have to define the characteristics of a
good view cell partition: the view cells should be quite large, while their PVS stays rather small.
The PVSs of any two view cells should be as distinct as possible; if the PVSs are too similar,
the cells could instead be merged into a larger view cell.
E.g., for a building, the perfect view cells are usually the single rooms connected by portals.

Hence we can define some useful criteria for the split: 1) the number of rays should be
roughly equal among the new view cells; 2) the split plane should be chosen in such a way that
the rays are maximally distinct, i.e., the number of rays contributing to more than one cell
should be minimized, so that the PVSs are also distinct; 3) for BSP trees,
the split plane should be aligned with some scene geometry which is large enough to contribute
a lot of occlusion power. This criterion can be naturally combined with the second one.
As termination criterion we can choose a minimum PVS size or number of piercing rays, or the
maximal tree depth.
\end{itemize}
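The first two split criteria can be made concrete with a toy cost function. The following sketch is our own illustration under simplifying assumptions (hypothetical names, rays reduced to their extents along the split axis): a candidate plane is scored by the imbalance of rays on either side plus a penalty for rays straddling the plane.

```python
def split_cost(ray_segments, plane_x, cross_penalty=2.0):
    """Score a candidate axis-aligned split at x = plane_x.

    ray_segments is a list of (x_start, x_end) intervals: the extents
    along the split axis of the rays piercing the current view cell.
    The cost favours (1) an equal number of rays on both sides and
    (2) few rays straddling the plane.
    """
    left = right = cross = 0
    for x0, x1 in ray_segments:
        lo, hi = min(x0, x1), max(x0, x1)
        if hi < plane_x:
            left += 1
        elif lo > plane_x:
            right += 1
        else:
            cross += 1  # ray would contribute to both new view cells
    return abs(left - right) + cross_penalty * cross

def best_split(ray_segments, candidates):
    """Pick the candidate plane position with minimal cost."""
    return min(candidates, key=lambda x: split_cost(ray_segments, x))
```

In a full implementation the candidate planes would come from the scene geometry (criterion 3), so that the chosen plane also carries occlusion power.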

In the future we aim to extend the view cell construction by using
feedback from the PVS computation: view cells which contain many
visibility changes will be subdivided further and neighboring view
cells with similar PVSs will be merged. We want to gain more precise
information about visibility by selectively storing rays with the
view cells and computing visibility statistics for subsets of rays
which intersect subregions of the given view cell.


\subsection{From-object based visibility}

Our framework is based on the idea of sampling visibility by casting
rays through the scene and collecting their contributions. A
visibility sample is computed by casting a ray from an object towards
the view cells and computing the nearest intersection with the scene
objects. All view cells pierced by the ray segment can see the object and
thus the object can be added to their PVS. If the ray is terminated at
another scene object the PVS of the pierced view cells can also be
extended by this terminating object. Thus a single ray can make a
number of contributions to the progressively computed PVSs. A ray
sample piercing $n$ view cells which is bounded by two distinct objects
contributes at most $2n$ entries to the current PVSs. Apart from
this performance benefit there is also a benefit in terms of the
sampling density: assuming that the view cells are usually much larger
than the objects (which is typically the case), starting the sampling
deterministically from the objects increases the probability of small
objects being captured in the PVS.
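The bookkeeping for a single ray sample can be sketched as follows (an illustrative snippet with assumed data structures; it maintains both the per-cell PVS and an object-centered PVS, and returns the number of new entries, which is bounded by $2n$ for $n$ pierced cells):

```python
def add_ray_contributions(source_obj, terminator_obj, pierced_cells,
                          cell_pvs, object_pvs):
    """Register one ray sample cast from source_obj.

    pierced_cells: the view cells pierced by the ray segment.
    terminator_obj: the object the ray hit (None if it left the scene).
    Returns the number of new PVS entries, at most 2 * len(pierced_cells).
    """
    new_entries = 0
    for cell in pierced_cells:
        for obj in (source_obj, terminator_obj):
            if obj is None:
                continue  # ray left the scene: only the source contributes
            if obj not in cell_pvs.setdefault(cell, set()):
                cell_pvs[cell].add(obj)
                # Object-centered PVS: view cells known to see this object.
                object_pvs.setdefault(obj, set()).add(cell)
                new_entries += 1
    return new_entries
```

Counting `new_entries` per sample is also what later allows the sampler to measure the diminishing contribution of recent rays.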

At this phase of the computation we not only start the samples from
the objects, but we also store the PVS information centered at the
objects. Instead of storing a PVS consisting of objects visible from
view cells, every object maintains a PVS consisting of potentially
visible view cells. While these representations contain exactly the
same information, as we shall see later the object-centered PVS is
better suited for the importance sampling phase as well as the
visibility verification phase.


\subsection{Naive Randomized Sampling}

The naive global visibility sampling works as follows: in every pass
the algorithm visits the scene objects sequentially. For every scene
object we randomly choose a point on its surface. Then a ray is cast
from the selected point in a randomly chosen direction. We
use a uniform distribution of the ray directions with respect to the
halfspace given by the surface normal. Using this strategy the samples
are deterministically placed at every object, with a randomization of
the location on the object surface. The uniformly distributed
direction is a simple and fast strategy to gain initial visibility
information.
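The uniform direction sampling over the halfspace of the surface normal can be sketched like this (a minimal illustration; the function name is ours). A direction is drawn uniformly on the unit sphere and flipped into the hemisphere of the normal, which preserves the uniform distribution over that hemisphere.

```python
import math
import random

def random_hemisphere_direction(normal, rng=random):
    """Return a unit direction uniformly distributed over the
    halfspace given by the surface normal."""
    # Uniform direction on the sphere via uniform z and azimuth.
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    d = (r * math.cos(phi), r * math.sin(phi), z)
    # Flip into the hemisphere around the normal.
    s = d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2]
    if s < 0.0:
        d = (-d[0], -d[1], -d[2])
    return d
```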

\begin{figure}%[htb]
\includegraphics[width=0.6\textwidth, draft=\DRAFTFIGS]{figs/sampling} \\
%\label{tab:online_culling_example}
\caption{Three objects and a set of view cells corresponding to leaves
of an axis aligned BSP tree. The figure depicts several random
samples cast from a selected object (shown in red). Note that most
samples contribute to multiple view cells.}
\label{fig:online_culling_example}
\end{figure}

The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more
objects. Additionally, every object is sampled many times depending on
the number of passes in which this sampling strategy is applied. This
increases the chance of even a small object being captured in the PVS
of the view cells from which it is visible.

Each ray sample can contribute by associating a number of view cells
with the object from which the sample was cast. If the ray does not
leave the scene it also contributes by associating the pierced view
cells with the terminating object. Thus as the ray samples are cast we
can measure the average contribution of a certain number of the most
recent samples. If this contribution falls below a predefined
constant we move on to the next sampling strategy, which aims to
discover more complicated visibility relations.
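This termination test can be sketched as a moving average over the most recent samples (our own illustrative helper; the window size and threshold are assumed parameters):

```python
from collections import deque

class ContributionMonitor:
    """Track the average PVS contribution of the most recent ray
    samples and signal when it falls below a predefined threshold."""

    def __init__(self, window=1000, threshold=0.1):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, new_pvs_entries):
        self.window.append(new_pvs_entries)

    def should_switch_strategy(self):
        # Only decide once the window is full, so that the first few
        # samples cannot trigger a premature switch.
        if len(self.window) < self.window.maxlen:
            return False
        avg = sum(self.window) / len(self.window)
        return avg < self.threshold
```

The sampler would call `record()` with the number of new PVS entries of each ray and switch strategies once `should_switch_strategy()` returns true.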


\subsection{Accounting for View Cell Distribution}

The first modification to the basic algorithm accounts for the irregular
distribution of the view cells. Such a case is common for example in
urban scenes, where the view cells are mostly distributed in a
horizontal direction and more view cells are placed at denser parts of
the city. The modification involves replacing the uniformly
distributed ray direction by directions distributed according to the
local view cell directional density. This means placing more samples in
directions where more view cells are located: we select a random
view cell which lies in the halfspace given by the surface normal at the
chosen point. We pick a random point inside the view cell and cast a
ray towards this point.
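A minimal sketch of this strategy follows; it is our own illustration under simplifying assumptions, in particular the view cells are represented by axis-aligned boxes as a stand-in for their real geometry, and the names are hypothetical.

```python
import random

def sample_towards_view_cells(origin, normal, view_cell_boxes, rng=random):
    """Pick a random view cell in the halfspace of the surface normal
    and return a unit ray direction towards a random point inside it.

    view_cell_boxes: list of axis-aligned boxes (min_xyz, max_xyz),
    used here as a stand-in for the view cell geometry.
    Returns None if no view cell lies in front of the surface.
    """
    def center(box):
        lo, hi = box
        return tuple((lo[i] + hi[i]) / 2.0 for i in range(3))

    def in_front(p):
        return sum((p[i] - origin[i]) * normal[i] for i in range(3)) > 0.0

    candidates = [b for b in view_cell_boxes if in_front(center(b))]
    if not candidates:
        return None
    lo, hi = rng.choice(candidates)
    target = tuple(rng.uniform(lo[i], hi[i]) for i in range(3))
    d = tuple(target[i] - origin[i] for i in range(3))
    length = sum(x * x for x in d) ** 0.5
    return tuple(x / length for x in d)
```

Because view cells are drawn uniformly, denser clusters of view cells automatically receive proportionally more ray samples.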


\subsection{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving viewpoint. In polygonal scenes the
events are defined by event surfaces formed by three distinct scene
edges. Depending on the edge configuration we distinguish between
vertex-edge (VE) events and triple-edge (EEE) events. The VE surfaces
are planar, whereas the EEE surfaces are in general quadric surfaces.

To account for these events we explicitly cast samples that pass by
object edges and are directed at edges and/or vertices of other
objects.

The first strategy starts similarly to the above described sampling
methods: we randomly select an object and a point on its surface. Then
we randomly pick an object from its PVS. If we have mesh
connectivity information we select a random silhouette edge from this
object and cast a sample which is tangent to that object at the
selected edge.

The second strategy works as follows: we randomly pick two objects
which are likely to see each other. Then we determine a ray which is
tangent to both objects. For simple meshes such rays can be
determined geometrically; for more complicated ones the determination is
based again on random sampling. The selection of the two objects works
as follows: first we randomly select the first object and a random
non-empty view cell for which we know that it can see the object. Then
we randomly select an object associated with that view cell as the
second object.
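The object-pair selection directly exploits the object-centered PVS built earlier. The following sketch (hypothetical names, assumed dictionary-of-sets data structures) shows the two-step random choice:

```python
import random

def pick_object_pair(objects, object_pvs, cell_pvs, rng=random):
    """Randomly pick two objects which are likely to see each other.

    object_pvs maps an object to the view cells known to see it (the
    object-centered PVS); cell_pvs maps a view cell to the objects in
    its PVS. Returns None if no pair can be formed yet.
    """
    first = rng.choice(objects)
    # View cells that see the first object and already have a PVS.
    cells = [c for c in object_pvs.get(first, ()) if cell_pvs.get(c)]
    if not cells:
        return None
    cell = rng.choice(cells)
    second_candidates = [o for o in cell_pvs[cell] if o != first]
    if not second_candidates:
        return None
    return first, rng.choice(second_candidates)
```

Both objects are visible from the same view cell, so a ray tangent to both has a good chance of revealing a genuine visibility event.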



\section{Summary}

This chapter described a global visibility sampling tool which forms a
core of the visibility preprocessing framework. The global visibility
sampling computes aggressive visibility, i.e., it computes a subset of
the exact PVS for each view cell. The aggressive sampling provides a
fast progressive solution and thus it can be easily integrated into
the game development cycle. The sampling itself involves several
strategies which aim to progressively discover more visibility
relationships in the scene.

The methods presented in this chapter give a good initial estimate
of the visibility in the scene, which can be verified by the mutual
visibility tool described in the next chapter.