\chapter{Visibility Preprocessing}


\section{Introduction}


\section{Related Work}


\section{Overview of Visibility Preprocessor}

The proposed visibility preprocessing framework consists of two major
steps; a minimal code sketch of the pipeline follows the list below.
\begin{itemize}
\item The first step is an aggressive visibility sampling which gives
an initial estimate of the global visibility in the scene. The sampling
itself involves several strategies which will be described in
Section~\ref{sec:sampling}. The important property of the aggressive
sampling step is that it provides a fast progressive solution to
global visibility and thus it can be easily integrated into the
game development cycle.

\item The second step is visibility verification. This step turns the
previous aggressive visibility solution into an exact, a conservative,
or an error-bound aggressive solution. The choice of the particular
verifier is left to the user in order to select the best one for a
particular scene, application context, and time constraints. For
example, in scenes like a forest an error-bound aggressive visibility
solution can be the best compromise between the resulting size of the
PVS (and framerate) and the visual quality. An exact or conservative
algorithm can however be chosen for urban scenes, where missing even
small objects can be more distracting for the user.
\end{itemize}

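The sketch below illustrates this two-phase structure. The type and
function names (\texttt{Scene}, \texttt{ViewCellContainer},
\texttt{SampleGlobalVisibility}, the verifier routines) are
placeholders chosen for illustration only and are not prescribed by
the framework itself.

\begin{verbatim}
// Hypothetical driver for the two-phase preprocessing pipeline.
struct Scene;
struct ViewCellContainer;

// Placeholder declarations for the two phases of the framework.
void SampleGlobalVisibility(Scene &scene, ViewCellContainer &viewCells);
void VerifyExact(Scene &scene, ViewCellContainer &viewCells);
void VerifyConservative(Scene &scene, ViewCellContainer &viewCells);
void VerifyErrorBound(Scene &scene, ViewCellContainer &viewCells);

enum class VerifierType { Exact, Conservative, ErrorBound };

void PreprocessVisibility(Scene &scene, ViewCellContainer &viewCells,
                          VerifierType verifier, int samplingPasses)
{
  // Step 1: aggressive sampling gives a fast, progressive estimate
  // of the global visibility in the scene.
  for (int pass = 0; pass < samplingPasses; ++pass)
    SampleGlobalVisibility(scene, viewCells);

  // Step 2: verification turns the aggressive solution into an exact,
  // conservative or error-bound one, as chosen by the user.
  switch (verifier) {
  case VerifierType::Exact:        VerifyExact(scene, viewCells);        break;
  case VerifierType::Conservative: VerifyConservative(scene, viewCells); break;
  case VerifierType::ErrorBound:   VerifyErrorBound(scene, viewCells);   break;
  }
}
\end{verbatim}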


\section{Aggressive Global Visibility Sampling}
\label{sec:sampling}

In traditional visibility preprocessing the view space is subdivided
into view cells and for each view cell the set of visible objects ---
the potentially visible set (PVS) --- is computed. This framework has
been used for conservative, aggressive, and exact algorithms.

We propose a different strategy which has several advantages for
sampling-based aggressive visibility preprocessing. The strategy is
based on the following fundamental ideas:
\begin{itemize}
\item Swap the roles of view cells and objects.
\item Compute progressive global visibility instead of sequential from-region visibility.
\end{itemize}

Both of these points are addressed below in more detail.

\subsection{From-Object Based Visibility}

Our framework is based on the idea of sampling visibility by casting
rays through the scene and collecting their contributions. A
visibility sample is computed by casting a ray from an object towards
the view cells and computing the nearest intersection with the scene
objects. All view cells pierced by the ray segment can see the object,
and thus the object can be added to their PVS. If the ray is terminated
at another scene object, the PVS of the pierced view cells can also be
extended by this terminating object. Thus a single ray can make a
number of contributions to the progressively computed PVSs. A ray
sample piercing $n$ view cells which is bounded by two distinct objects
contributes at most $2n$ entries to the current PVSs. Apart from
this performance benefit there is also a benefit in terms of the
sampling density: assuming that the view cells are usually much larger
than the objects (which is typically the case), starting the sampling
deterministically from the objects increases the probability of small
objects being captured in the PVS.
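The sketch below shows how a single ray sample can contribute to the
PVSs of all pierced view cells. The helpers \texttt{CastRay} and
\texttt{PiercedViewCells} as well as the data layout are assumptions
made for this sketch only; the text above does not prescribe a
particular implementation.

\begin{verbatim}
#include <vector>

struct Vec3 { float x, y, z; };
struct Object;                                  // scene object (opaque here)
struct ViewCell { std::vector<Object*> pvs; };  // view cell and its PVS

// Assumed helpers (illustrative only): CastRay returns the nearest
// object hit (or nullptr) and is assumed to set hitPoint to the scene
// boundary when nothing is hit; PiercedViewCells returns all view
// cells pierced by the given ray segment.
Object *CastRay(const Vec3 &origin, const Vec3 &dir, Vec3 &hitPoint);
std::vector<ViewCell*> PiercedViewCells(const Vec3 &from, const Vec3 &to);

// One visibility sample: every pierced view cell sees the source object
// and, if the ray hits another object, the terminating object as well.
// A ray piercing n view cells thus adds at most 2n PVS entries.
void AddRaySample(Object *sourceObject, const Vec3 &origin, const Vec3 &dir)
{
  Vec3 hit;
  Object *terminator = CastRay(origin, dir, hit);

  for (ViewCell *vc : PiercedViewCells(origin, hit)) {
    vc->pvs.push_back(sourceObject);     // the source object is visible
    if (terminator && terminator != sourceObject)
      vc->pvs.push_back(terminator);     // so is the terminating object
  }
  // (A real implementation would avoid duplicate PVS entries.)
}
\end{verbatim}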

At this phase of the computation we not only start the samples from
the objects, but we also store the PVS information centered at the
objects. Instead of storing PVSs consisting of objects visible from
view cells, every object maintains a PVS consisting of potentially
visible view cells. While these representations contain exactly the
same information, as we shall see later the object-centered PVS is
better suited for the importance sampling phase as well as the
visibility verification phase.
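A minimal sketch of this object-centered representation is shown
below; the member and function names are illustrative only.

\begin{verbatim}
#include <set>

struct ViewCell;

// Object-centered PVS: instead of view cells listing visible objects,
// each object keeps the set of view cells it is potentially visible from.
struct Object {
  std::set<ViewCell*> visibleViewCells;   // potentially visible view cells
};

// Recording a sample contribution then simply becomes:
inline void RecordContribution(Object &obj, ViewCell *vc)
{
  obj.visibleViewCells.insert(vc);        // the set removes duplicates
}
\end{verbatim}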


\subsection{Basic Randomized Sampling}


The first phase of the sampling works as follows: in every pass, the
algorithm visits the scene objects sequentially. For every scene object
we randomly choose a point on its surface. Then a ray is cast from the
selected point in a randomly chosen direction. We use a uniform
distribution of the ray directions with respect to the halfspace given
by the surface normal. Using this strategy the samples are
deterministically placed at every object, with a randomization of the
location on the object surface. The uniformly distributed direction is
a simple and fast strategy to gain initial visibility information.
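One common way to generate such directions is to draw a uniform
direction on the unit sphere and mirror it into the halfspace of the
surface normal. The sketch below assumes a simple \texttt{Vec3} type;
it is one possible illustration of this sampling step, not the only
implementation.

\begin{verbatim}
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

inline float Dot(const Vec3 &a, const Vec3 &b)
{ return a.x * b.x + a.y * b.y + a.z * b.z; }

// Uniformly distributed direction in the halfspace given by 'normal':
// draw a uniform direction on the unit sphere and mirror it if it
// points into the opposite halfspace.
Vec3 UniformHemisphereDirection(const Vec3 &normal, std::mt19937 &rng)
{
  std::uniform_real_distribution<float> u01(0.0f, 1.0f);

  float z   = 1.0f - 2.0f * u01(rng);                   // cos(theta)
  float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
  float phi = 2.0f * 3.14159265f * u01(rng);
  Vec3 d { r * std::cos(phi), r * std::sin(phi), z };

  if (Dot(d, normal) < 0.0f) { d.x = -d.x; d.y = -d.y; d.z = -d.z; }
  return d;
}
\end{verbatim}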


The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more
objects. Additionally, every object is sampled many times depending on
the number of passes in which this sampling strategy is applied. This
increases the chance of even a small object being captured in the PVS
of the view cells from which it is visible.


\subsection{Accounting for View Cell Distribution}

The first modification to the basic algorithm accounts for the
irregular distribution of the view cells. Such a case is common for
example in urban scenes, where the view cells are mostly distributed in
a horizontal direction and more view cells are placed at denser parts
of the city. The modification involves replacing the uniformly
distributed ray direction by a direction distribution according to the
local view cell density. We select a random view cell which lies in the
halfspace given by the surface normal at the chosen point. We pick a
random point inside the view cell and cast a ray towards this point.
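A sketch of this view cell driven direction selection is given below.
The helpers \texttt{ViewCellsInHalfspace} and \texttt{RandomPointInside}
are assumptions made for this sketch; how candidate view cells are
enumerated is left open by the text above.

\begin{verbatim}
#include <cstddef>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct ViewCell;

// Assumed helpers (illustrative only): the view cells lying in the
// halfspace of the surface normal at 'point', and a uniformly chosen
// point inside a given view cell.
std::vector<ViewCell*> ViewCellsInHalfspace(const Vec3 &point,
                                            const Vec3 &normal);
Vec3 RandomPointInside(const ViewCell &vc, std::mt19937 &rng);

// Direction distributed according to the local view cell density:
// pick a random candidate view cell and aim at a random point inside it.
// (Assumes at least one candidate view cell exists.)
Vec3 ViewCellDrivenDirection(const Vec3 &point, const Vec3 &normal,
                             std::mt19937 &rng)
{
  std::vector<ViewCell*> cells = ViewCellsInHalfspace(point, normal);
  std::uniform_int_distribution<std::size_t> pick(0, cells.size() - 1);

  Vec3 target = RandomPointInside(*cells[pick(rng)], rng);
  return Vec3 { target.x - point.x, target.y - point.y, target.z - point.z };
  // (Normalize before casting if the ray caster expects unit directions.)
}
\end{verbatim}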


\subsection{Accounting for Visibility Events}


\subsection{View Cell Representation}

In order to efficiently use view cells with our sampling method, we
require a view cell representation which is

\begin{itemize}
\item optimized for view cell--ray intersection.
\item flexible, i.e., it can represent arbitrary geometry.
\item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
\end{itemize}

We meet these requirements by using a view cell BSP tree, where the
BSP leaves are associated with the view cells. Using the BSP tree, we
are able to find the initial view cells with only a few ray--plane
intersections. The hierarchical structure of the BSP tree can be
exploited as a hierarchy of view cells. If necessary, the BSP approach
makes it very easy to further subdivide a view cell.
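The sketch below shows how a leaf of such a view cell BSP tree can be
located with one plane test per level of the tree; the node layout is
illustrative and not prescribed by the text above.

\begin{verbatim}
struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };            // plane: dot(n, p) + d = 0
struct ViewCell;

// BSP node: interior nodes hold a split plane and two children,
// leaves may reference the view cell they belong to.
struct BspNode {
  Plane     plane;
  BspNode  *front    = nullptr;               // halfspace dot(n, p) + d >= 0
  BspNode  *back     = nullptr;
  ViewCell *viewCell = nullptr;               // non-null only in leaves
};

inline float Distance(const Plane &pl, const Vec3 &p)
{ return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d; }

// Locate the view cell containing a point (e.g. a ray origin) by a
// simple descent: one plane test per tree level.  Further view cells
// pierced by a ray are found by stepping across the split planes.
ViewCell *LocateViewCell(BspNode *node, const Vec3 &p)
{
  while (node->front && node->back)           // interior node
    node = (Distance(node->plane, p) >= 0.0f) ? node->front : node->back;
  return node->viewCell;                      // may be null (no view cell here)
}
\end{verbatim}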

Currently we use two approaches to generate the initial BSP view cell
tree.

\begin{itemize}
\item We use a number of dedicated input view cells. Any closed mesh
can be used as an input view cell. The only requirement is that the
view cells do not overlap. We insert one view cell after the other
into the tree. The polygons of a view cell are filtered down the tree,
guiding the insertion process. Once we reach a leaf and there are no
more polygons left, we terminate the tree subdivision. If we are on
the inside of the last split plane (i.e., the leaf represents the
inside of the view cell), we associate the leaf with the view cell
(i.e., add a pointer to the view cell). Hence a number of leaves can
be associated with the same input view cell.
\item We apply the BSP tree subdivision to the scene geometry. When
the subdivision terminates, the leaf nodes also represent the view
cells.
\end{itemize}



\section{Visibility Verification}


\subsection{Exact Verifier}

The exact verifier computes the exact mutual visibility between two
polyhedra in the scene. This is computed by testing visibility
between all pairs of potentially visible polygons of these polyhedra.
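A sketch of this pairwise reduction is given below. The
polygon-to-polygon test \texttt{PolygonsMutuallyVisible} is only
declared here; the text above does not specify how that primitive is
implemented.

\begin{verbatim}
#include <vector>

struct Vec3       { float x, y, z; };
struct Polygon    { std::vector<Vec3> vertices; };
struct Polyhedron { std::vector<Polygon> polygons; };

// Assumed primitive for this sketch: an exact polygon-to-polygon
// mutual visibility test, whose implementation is outside this sketch.
bool PolygonsMutuallyVisible(const Polygon &a, const Polygon &b);

// Two polyhedra are mutually visible as soon as any pair of their
// polygons is mutually visible.
bool PolyhedraMutuallyVisible(const Polyhedron &a, const Polyhedron &b)
{
  for (const Polygon &pa : a.polygons)
    for (const Polygon &pb : b.polygons)
      if (PolygonsMutuallyVisible(pa, pb))
        return true;                          // one visible pair suffices
  return false;                               // no polygon pair sees the other
}
\end{verbatim}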


\subsection{Conservative Verifier}


\subsection{Error Bound Verifier}
