\chapter{Global Visibility Sampling Tool}


\section{Introduction}


The proposed visibility preprocessing framework consists of two major
steps.

\begin{itemize}
\item The first step is an aggressive visibility sampling which gives
an initial estimate of global visibility in the scene. The sampling
itself involves several strategies which will be described in
section~\ref{sec:sampling}. The important property of the aggressive
sampling step is that it provides a fast progressive solution to
global visibility and thus it can be easily integrated into the game
development cycle.

\item The second step is mutual visibility verification. This step
turns the previous aggressive visibility solution into an exact,
conservative or error-bound aggressive solution. The choice of the
particular verifier is left to the user, who can select the one best
suited to a particular scene, application context and time
constraints. For example, in scenes like a forest an error-bound
aggressive visibility solution can be the best compromise between the
resulting size of the PVS (and framerate) and the visual quality. The
exact or conservative algorithm can however be chosen for urban
scenes, where errors in the visibility of even small objects can be
more distracting to the user. The mutual visibility tool will be
described in the next chapter.

\end{itemize}


In traditional visibility preprocessing the view space is subdivided
into view cells and for each view cell the set of visible objects ---
the potentially visible set (PVS) --- is computed. This framework has
been used for conservative, aggressive and exact algorithms.

We propose a different strategy which has several advantages for
sampling-based aggressive visibility preprocessing. The strategy is
based on the following fundamental ideas:
\begin{itemize}
\item Replace the roles of view cells and objects
\item Compute progressive global visibility instead of sequential from-region visibility
\end{itemize}

Both these points will be addressed in this chapter in more detail.


\section{Related work}
\label{VFR3D_RELATED_WORK}


Below we briefly discuss the related work on visibility preprocessing
in several application areas. In particular we focus on computing
from-region visibility, which has been at the core of most previous
visibility preprocessing techniques.

\subsection{Aspect graph}

The first algorithms dealing with from-region visibility belong to the
area of computer vision. The {\em aspect
graph}~\cite{Gigus90,Plantinga:1990:RTH,Sojka:1995:AGT} partitions
the view space into cells that group viewpoints from which the
projection of the scene is qualitatively equivalent. The aspect graph
is a graph describing the view of the scene (aspect) for each cell of
the partitioning. The major drawback of this approach is that for
polygonal scenes with $n$ polygons there can be $\Theta(n^9)$ cells in
the partitioning for unrestricted view space. A {\em scale space}
aspect graph~\cite{bb12595,bb12590} improves the robustness of the
method by merging similar features according to the given scale.

\subsection{Potentially visible sets}


In the computer graphics community Airey~\cite{Airey90} introduced
the concept of {\em potentially visible sets} (PVS). Airey assumes
the existence of a natural subdivision of the environment into
cells. For models of building interiors these cells roughly correspond
to rooms and corridors. For each cell the PVS is formed by cells
visible from any point of that cell. Airey uses ray shooting to
approximate visibility between cells of the subdivision and so the
computed PVS is not conservative.

This concept was further elaborated by Teller et
al.~\cite{Teller92phd,Teller:1991:VPI} to establish a conservative
PVS. The PVS is constructed by testing the existence of a stabbing
line through a sequence of polygonal portals between cells. Teller
proposed an exact solution to this problem using \plucker
coordinates~\cite{Teller:1992:CAA} and a simpler and more robust
conservative solution~\cite{Teller92phd}. The portal based methods
are well suited to static densely occluded environments with a
particular structure. For less structured models they can face a
combinatorial explosion of complexity~\cite{Teller92phd}. Yagel and
Ray~\cite{Yagel95a} present an algorithm that uses a regular spatial
subdivision. Their approach is not sensitive to the structure of the
model in terms of complexity, but its efficiency is limited by the
discrete representation of the scene.

Plantinga proposed a PVS algorithm based on a conservative view space
partitioning by evaluating visual
events~\cite{Plantinga:1993:CVP}. The construction of view space
partitioning was further studied by Chrysanthou et
al.~\cite{Chrysanthou:1998:VP}, Cohen-Or et al.~\cite{cohen-egc-98}
and Sadagic~\cite{Sadagic}. Sudarsky and
Gotsman~\cite{Sudarsky:1996:OVA} proposed an output-sensitive
visibility algorithm for dynamic scenes. Cohen-Or et
al.~\cite{COZ-gi98} developed a conservative algorithm determining
visibility of an $\epsilon$-neighborhood of a given viewpoint that was
used for network based walkthroughs.

Conservative algorithms for computing PVS developed by Durand et
al.~\cite{EVL-2000-60} and Schaufler et al.~\cite{EVL-2000-59} make
use of several simplifying assumptions to avoid the usage of 4D data
structures. Wang et al.~\cite{Wang98} proposed an algorithm that
precomputes visibility within beams originating from the restricted
viewpoint region. The approach is very similar to the 5D subdivision
for ray tracing~\cite{Simiakakis:1994:FAS} and so it exhibits similar
problems, namely inadequate memory and preprocessing complexities.
Specialized algorithms for computing PVS in \m25d scenes were proposed
by Wonka et al.~\cite{wonka00}, Koltun et al.~\cite{koltun01}, and
Bittner et al.~\cite{bittner:2001:PG}.

The exact mutual visibility method presented later in the report is
based on a method exploiting \plucker coordinates of
lines~\cite{bittner02phd,nirenstein:02:egwr,haumont2005}. This
algorithm uses \plucker coordinates to compute visibility in shafts
defined by each polygon in the scene.


\subsection{Rendering of shadows}


The from-region visibility problems include the computation of soft
shadows due to an area light source. Continuous algorithms for
real-time soft shadow generation were studied by Chin and
Feiner~\cite{Chin:1992:FOP}, Loscos and
Drettakis~\cite{Loscos:1997:IHS}, Chrysanthou~\cite{Chrysantho1996a},
and Chrysanthou and Slater~\cite{Chrysanthou:1997:IUS}. Discrete
solutions have been proposed by Nishita~\cite{Nishita85}, Brotman and
Badler~\cite{Brotman:1984:GSS}, and Soler and Sillion~\cite{SS98}. An
exact algorithm computing the antipenumbra of an area light source was
developed by Teller~\cite{Teller:1992:CAA}.

\subsection{Discontinuity meshing}


Discontinuity meshing is used in the context of the radiosity global
illumination algorithm or for computing soft shadows due to area light
sources. The first approximate discontinuity meshing algorithms were
studied by Campbell~\cite{Campbell:1990:AMG, Campbell91},
Lischinski~\cite{lischinski92a}, and Heckbert~\cite{Heckbert92discon}.
More elaborate methods were developed by
Drettakis~\cite{Drettakis94-SSRII, Drettakis94-FSAAL}, and Stewart and
Ghali~\cite{Stewart93-OSACS, Stewart:1994:FCSb}. These methods are
capable of creating a complete discontinuity mesh that encodes all
visual events involving the light source.

Classical radiosity is based on an evaluation of form factors between
two patches~\cite{Schroder:1993:FFB}. The visibility computation is a
crucial step in the form factor
evaluation~\cite{Teller:1993:GVA,Haines94,Teller:1994:POL,
Nechvile:1996:FFE,Teichmann:WV}. Similar visibility computations take
place in the scope of hierarchical radiosity
algorithms~\cite{Soler:1996:AEB, Drettakis:1997:IUG, Daubert:1997:HLS}.


\subsection{Global visibility}

The aim of {\em global visibility} computations is to capture and
describe visibility in the whole scene~\cite{Durand:1996:VCN}. The
global visibility algorithms are typically based on some form of {\em
line space subdivision} that partitions lines or rays into equivalence
classes according to their visibility classification. Each class
corresponds to a continuous set of rays with a common visibility
classification. The techniques differ mainly in the way the line
space subdivision is computed and maintained. A practical application
of most of the proposed global visibility structures to 3D scenes is
still an open problem. Prospectively these techniques provide an
elegant method for ray shooting acceleration --- the ray shooting
problem can be reduced to a point location in the line space
subdivision.

Pocchiola and Vegter introduced the visibility complex~\cite{pv-vc-93}
that describes global visibility in 2D scenes. The visibility complex
has been applied to solve various 2D visibility
problems~\cite{r-tsvcp-95,r-wvcav-97, r-dvpsv-97,Orti96-UVCRC}. The
approach was generalized to 3D by Durand et
al.~\cite{Durand:1996:VCN}. Nevertheless, no implementation of the 3D
visibility complex is currently known. Durand et
al.~\cite{Durand:1997:VSP} introduced the {\em visibility skeleton},
which is a graph describing a skeleton of the 3D visibility
complex. The visibility skeleton was verified experimentally and the
results indicate that its $O(n^4\log n)$ worst-case complexity is much
better in practice. Pu~\cite{Pu98-DSGIV} developed a method similar to
the one presented in this chapter. He uses a BSP tree in \plucker
coordinates to represent a global visibility map for a given set of
polygons. The computation is performed considering all rays piercing
the scene and so the method exhibits unacceptable memory complexity
even for scenes of moderate size. Recently, Duguet and
Drettakis~\cite{duguet:02:sig} developed a robust variant of the
visibility skeleton algorithm that uses robust epsilon-visibility
predicates.

Discrete methods aiming to describe visibility in a 4D data structure
were presented by Chrysanthou et al.~\cite{chrysanthou:cgi:98} and
Blais and Poulin~\cite{blais98a}. These data structures are closely
related to the {\em lumigraph}~\cite{Gortler:1996:L,buehler2001} or
{\em light field}~\cite{Levoy:1996:LFR}. An interesting discrete
hierarchical visibility algorithm for two-dimensional scenes was
developed by Hinkenjann and M\"uller~\cite{EVL-1996-10}. One of the
biggest problems of the discrete solution space data structures is
the memory consumption required to achieve a reasonable
accuracy. Prospectively, the scene complexity
measures~\cite{Cazals:3204:1997} provide a useful estimate of the
required sampling density and the size of the solution space data
structure.


\subsection{Other applications}

Certain from-point visibility problems determining visibility over a
period of time can be transformed to a static from-region visibility
problem. Such a transformation is particularly useful for antialiasing
purposes~\cite{grant85a}. The from-region visibility can also be used
in the context of simulation of sound
propagation~\cite{Funkhouser98}. The sound propagation algorithms
typically require lower resolution than the algorithms simulating the
propagation of light, but they need to account for simulation of
attenuation, reflection and time delays.

\section{Algorithm Setup}

\subsection{View Cell Representation}

In order to efficiently use view cells with our sampling method, we
require a view cell representation which is

\begin{itemize}
\item optimized for view cell--ray intersection.
\item flexible, i.e., it can represent arbitrary geometry.
\item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
\end{itemize}

We meet these requirements by using a view cell BSP tree, where the
BSP leaves are associated with the view cells. Using the BSP tree, we
are able to find the initial view cells with only a few view ray-plane
intersections. The hierarchical structure of the BSP tree can be
exploited as a hierarchy of view cells. If necessary, we could further
subdivide a BSP leaf view cell quite easily.

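
The leaf lookup described above can be sketched as follows. This is a minimal illustration, not the tool's actual data structure: the names \texttt{BspNode} and \texttt{locate\_leaf} are hypothetical, and planes are stored as a (normal, offset) pair with the front side on the positive halfspace.

```python
# Minimal sketch of locating the view cell containing a point in a BSP
# tree. BspNode/locate_leaf are illustrative names, not the tool's API.

class BspNode:
    def __init__(self, plane=None, front=None, back=None, view_cell=None):
        self.plane = plane          # (normal, d) for interior nodes
        self.front = front          # subtree on the positive side
        self.back = back            # subtree on the negative side
        self.view_cell = view_cell  # set only for leaves

def locate_leaf(node, p):
    """Descend from the root to the leaf (view cell) containing point p."""
    while node.plane is not None:
        n, d = node.plane
        side = n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d
        node = node.front if side >= 0 else node.back
    return node
```

A ray is located analogously: instead of one point test per node, the ray is clipped against the node planes it crosses, which is why only a few ray-plane intersections are needed per query.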
Currently we use two approaches to generate the initial BSP view cell
tree.

\begin{itemize}
\item We use a number of dedicated input view cells. Any closed mesh
can be used as an input view cell; the only requirement is that the
view cells do not overlap. We insert one view cell after the other
into the tree. The polygons of a view cell are filtered down the tree,
guiding the insertion process. Once we reach a leaf and there are no
more polygons left, we terminate the tree subdivision. If we are on
the inside of the last split plane (i.e., the leaf represents the
inside of the view cell), we associate the leaf with the view cell
(i.e., add a pointer to the view cell). Hence a number of leaves can
be associated with the same input view cell.
\item We apply the BSP tree subdivision to the scene geometry. When
the subdivision terminates, the leaf nodes also represent the view
cells.
\end{itemize}

\subsection{From-object based visibility}

Our framework is based on the idea of sampling visibility by casting
rays through the scene and collecting their contributions. A
visibility sample is computed by casting a ray from an object towards
the view cells and computing the nearest intersection with the scene
objects. All view cells pierced by the ray segment can see the object
and thus the object can be added to their PVS. If the ray is
terminated at another scene object, the PVS of the pierced view cells
can also be extended by this terminating object. Thus a single ray can
make a number of contributions to the progressively computed PVSs. A
ray sample piercing $n$ view cells which is bounded by two distinct
objects contributes at most $2n$ entries to the current PVSs. Apart
from this performance benefit there is also a benefit in terms of the
sampling density: assuming that the view cells are usually much larger
than the objects (which is typically the case), starting the sampling
deterministically from the objects increases the probability of small
objects being captured in the PVS.
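
The per-ray bookkeeping can be sketched as follows. This is a hedged illustration: the pierced cells and the two endpoint objects are assumed to come from a ray caster, and only the PVS updates described above are shown.

```python
# Sketch of collecting PVS contributions from one ray sample: every view
# cell pierced by the ray segment sees the source object, and also the
# terminating object if the ray hit one. Names are illustrative.

def add_ray_contributions(pvs, pierced_cells, source_obj, hit_obj=None):
    """Update the per-cell PVS dictionary; return the number of new entries."""
    added = 0
    for cell in pierced_cells:
        for obj in (source_obj, hit_obj):
            if obj is not None and obj not in pvs.setdefault(cell, set()):
                pvs[cell].add(obj)
                added += 1
    return added
```

A ray bounded by two distinct objects and piercing $n$ cells yields at most $2n$ new entries, matching the bound stated above.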

At this phase of the computation we not only start the samples from
the objects, but we also store the PVS information centered at the
objects. Instead of storing PVSs consisting of objects visible from
view cells, every object maintains a PVS consisting of potentially
visible view cells. While these representations contain exactly the
same information, as we shall see later the object-centered PVS is
better suited for the importance sampling phase as well as the
visibility verification phase.


\section{Basic Randomized Sampling}

The first phase of the sampling works as follows: in every pass, the
algorithm visits the scene objects sequentially. For every scene
object we randomly choose a point on its surface. Then a ray is cast
from the selected point in a randomly chosen direction. We use a
uniform distribution of the ray directions with respect to the
halfspace given by the surface normal. Using this strategy the samples
are deterministically placed at every object, with a randomization of
the location on the object surface. The uniformly distributed
direction is a simple and fast strategy to gain initial visibility
information.

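
The direction choice above can be sketched as follows, assuming a unit surface normal; the function name is illustrative. A direction is drawn uniformly on the sphere and mirrored into the normal's halfspace, which preserves the uniform distribution over the hemisphere.

```python
# Sketch of the uniform hemisphere direction used in the basic sampling
# pass: sample the unit sphere uniformly, then flip the direction into
# the halfspace of the surface normal if needed.

import math
import random

def uniform_hemisphere_direction(normal):
    """Uniform direction in the halfspace given by unit vector `normal`."""
    z = 2.0 * random.random() - 1.0          # uniform on the unit sphere
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    d = (r * math.cos(phi), r * math.sin(phi), z)
    # Mirror into the hemisphere of the normal if it points the wrong way.
    if sum(di * ni for di, ni in zip(d, normal)) < 0.0:
        d = tuple(-di for di in d)
    return d
```
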

The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more
objects. Additionally every object is sampled many times depending on
the number of passes in which this sampling strategy is applied. This
increases the chance of even a small object being captured in the PVS
of the view cells from which it is visible.

\section{Accounting for View Cell Distribution}

The first modification of the basic algorithm accounts for the
irregular distribution of the view cells. Such a case is common for
example in urban scenes, where the view cells are mostly distributed
in a horizontal direction and more view cells are placed at denser
parts of the city. The modification involves replacing the uniformly
distributed ray direction by directions distributed according to the
local view cell directional density. This means placing more samples
in directions where more view cells are located. We select a random
view cell which lies in the halfspace given by the surface normal at
the chosen point. We then pick a random point inside the view cell and
cast a ray towards this point.

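
The view-cell-driven direction choice can be sketched as below. This is a simplified illustration, not the tool's implementation: view cells are reduced to axis-aligned boxes, and a cell is deemed to lie in the normal's halfspace when its center does (the BSP view cells of the actual tool would be queried instead).

```python
# Sketch of sampling a ray direction according to the local view cell
# distribution: pick a random eligible view cell, then aim at a random
# point inside it. Cells are (lo, hi) axis-aligned boxes for simplicity.

import random

def sample_direction_towards_cells(point, normal, view_cells):
    """Return a ray direction from `point` towards a random eligible cell,
    or None if no cell lies in front of the surface."""
    def center(box):
        lo, hi = box
        return tuple((l + h) / 2.0 for l, h in zip(lo, hi))
    eligible = [c for c in view_cells
                if sum((ci - pi) * ni for ci, pi, ni in
                       zip(center(c), point, normal)) > 0.0]
    if not eligible:
        return None
    lo, hi = random.choice(eligible)
    target = tuple(random.uniform(l, h) for l, h in zip(lo, hi))
    return tuple(t - p for t, p in zip(target, point))
```

Because cells are chosen uniformly from the eligible set, directions towards regions containing more view cells are automatically sampled more often.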
\section{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving viewpoint. In polygonal scenes the
events are defined by event surfaces induced by three distinct scene
edges. Depending on the edge configuration we distinguish between
vertex-edge (VE) events and triple-edge (EEE) events. The VE surfaces
are planar whereas the EEE surfaces are in general quadric surfaces.

To account for these events we explicitly place samples along rays
that graze object edges, i.e., rays directed at edges and/or vertices
of other objects.
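
One way to generate such event-aware samples for VE events is sketched below. This is an assumption-laden illustration, not the tool's algorithm: edges are (start, end) point pairs, rays start at a random point on one object's edge and aim at a vertex of another object, and all names are hypothetical.

```python
# Sketch of generating vertex-edge (VE) event sample rays: each ray
# starts on an object edge and is directed at a vertex of another
# object, so the sample grazes the corresponding event surface.

import random

def ve_event_rays(edges, vertices, samples_per_pair=1):
    """Yield (origin, direction) rays from edge points towards vertices."""
    rays = []
    for (a, b) in edges:
        for v in vertices:
            for _ in range(samples_per_pair):
                t = random.random()  # random point on the edge
                origin = tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))
                direction = tuple(vi - oi for vi, oi in zip(v, origin))
                if any(c != 0.0 for c in direction):
                    rays.append((origin, direction))
    return rays
```

EEE events would be handled analogously by aiming rays from one edge through points on a second edge towards a third.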