\chapter{Global Visibility Sampling Tool}


\section{Introduction}

The proposed visibility preprocessing framework consists of two major
steps.
\begin{itemize}
\item The first step is an aggressive visibility sampling which gives
an initial estimate of global visibility in the scene. The sampling
itself involves several strategies, which will be described in
Section~\ref{sec:sampling}. The important property of the aggressive
sampling step is that it provides a fast progressive solution to
global visibility and thus can easily be integrated into the game
development cycle.

\item The second step is visibility verification. This step turns the
previous aggressive visibility solution into an exact, conservative,
or error-bound aggressive solution. The choice of the particular
verifier is left to the user, who can select the one best suited to a
particular scene, application context, and time constraints. For
example, in scenes such as a forest, error-bound aggressive visibility
can be the best compromise between the resulting size of the PVS (and
thus the framerate) and the visual quality. An exact or conservative
algorithm can, however, be chosen for urban scenes, where the omission
of even small objects can be more distracting to the user.
\end{itemize}


In traditional visibility preprocessing the view space is subdivided
into view cells, and for each view cell the set of visible objects ---
the potentially visible set (PVS) --- is computed. This framework has
been used for conservative, aggressive, and exact algorithms.

We propose a different strategy, which has several advantages for
sampling-based aggressive visibility preprocessing. The strategy is
based on the following fundamental ideas:
\begin{itemize}
\item Exchange the roles of view cells and objects.
\item Compute progressive global visibility instead of sequential from-region visibility.
\end{itemize}

Both of these points are addressed below in more detail.

\subsection{From-object based visibility}

Our framework is based on the idea of sampling visibility by casting
rays through the scene and collecting their contributions. A
visibility sample is computed by casting a ray from an object towards
the view cells and computing the nearest intersection with the scene
objects. All view cells pierced by the ray segment can see the object,
and thus the object can be added to their PVS. If the ray is
terminated at another scene object, the PVS of the pierced view cells
can also be extended by this terminating object. Thus a single ray can
make a number of contributions to the progressively computed PVSs. A
ray sample piercing $n$ view cells which is bounded by two distinct
objects contributes at most $2n$ entries to the current PVSs. Apart
from this performance benefit there is also a benefit in terms of the
sampling density: assuming that the view cells are usually much larger
than the objects (which is typically the case), starting the sampling
deterministically from the objects increases the probability of small
objects being captured in the PVS.
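The contribution of a single ray sample can be sketched as follows. This is a minimal illustration, not the tool's actual interface: the objects, view cells, and PVS containers are stand-ins, and the ray casting itself is assumed to have already produced the list of pierced view cells and the terminating object.

```python
def add_ray_contributions(source_obj, hit_obj, pierced_viewcells, pvs):
    """Extend the PVS of every view cell pierced by the ray segment.

    Each pierced cell can see the object the ray started from and, if the
    ray was terminated, the terminating object as well.  Returns the
    number of new PVS entries made by this single ray sample.
    """
    contributions = 0
    for cell in pierced_viewcells:
        if source_obj not in pvs[cell]:
            pvs[cell].add(source_obj)
            contributions += 1
        if hit_obj is not None and hit_obj not in pvs[cell]:
            pvs[cell].add(hit_obj)
            contributions += 1
    return contributions
```

A ray bounded by two distinct objects and piercing $n$ view cells thus makes at most $2n$ contributions, as stated above.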

At this phase of the computation we not only start the samples from
the objects, but we also store the PVS information centered at the
objects: instead of storing a PVS consisting of objects visible from a
view cell, every object maintains a PVS consisting of potentially
visible view cells. While these representations contain exactly the
same information, as we shall see later the object-centered PVS is
better suited for the importance sampling phase as well as for the
visibility verification phase.
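The equivalence of the two representations can be made concrete with a small sketch (the dictionary-of-sets layout is an assumption for illustration): inverting the object-centered mapping recovers the traditional cell-centered PVS, so no information is lost by storing visibility at the objects.

```python
def invert_object_pvs(object_pvs):
    """Turn an object-centered PVS {object: set(viewcells)} into the
    traditional cell-centered PVS {viewcell: set(objects)}."""
    cell_pvs = {}
    for obj, cells in object_pvs.items():
        for cell in cells:
            cell_pvs.setdefault(cell, set()).add(obj)
    return cell_pvs
```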


\subsection{Basic Randomized Sampling}

The first phase of the sampling works as follows: at every pass the
algorithm visits the scene objects sequentially. For every scene
object we randomly choose a point on its surface. Then a ray is cast
from the selected point in a randomly chosen direction. We use a
uniform distribution of the ray directions with respect to the
halfspace given by the surface normal. Using this strategy the samples
are deterministically placed at every object, with a randomization of
the location on the object surface. The uniformly distributed
direction is a simple and fast strategy to gain initial visibility
information.
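The direction choice of this basic strategy can be sketched as follows, assuming the normal is given as a unit 3-vector; rejection sampling in the unit sphere followed by a mirror into the normal's halfspace yields directions uniformly distributed over that hemisphere.

```python
import math
import random

def uniform_hemisphere_direction(normal):
    """Sample a direction uniformly over the hemisphere around `normal`
    (a unit 3-vector given as a tuple)."""
    while True:
        # Rejection-sample a non-degenerate point inside the unit sphere.
        d = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        n2 = sum(c * c for c in d)
        if 1e-6 < n2 <= 1.0:
            break
    inv = 1.0 / math.sqrt(n2)
    d = tuple(c * inv for c in d)           # normalize to the sphere
    dot = sum(a * b for a, b in zip(d, normal))
    if dot < 0.0:                           # wrong halfspace: mirror it
        d = tuple(-c for c in d)
    return d
```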


The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more
objects. Additionally, every object is sampled many times, depending
on the number of passes in which this sampling strategy is applied.
This increases the chance of even a small object being captured in the
PVS of the view cells from which it is visible.
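The multi-pass structure itself is straightforward; the following sketch (function names are placeholders) only shows how the per-object sample count grows with the number of passes.

```python
def run_sampling_passes(objects, num_passes, sample_object):
    """Visit every scene object once per pass, collecting one visibility
    sample per visit; each object is sampled `num_passes` times."""
    samples = []
    for _ in range(num_passes):
        for obj in objects:
            samples.append(sample_object(obj))
    return samples
```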


\subsection{Accounting for View Cell Distribution}

The first modification to the basic algorithm accounts for an
irregular distribution of the view cells. Such a case is common, for
example, in urban scenes, where the view cells are mostly distributed
in the horizontal direction and more view cells are placed in denser
parts of the city. The modification involves replacing the uniformly
distributed ray directions by directions distributed according to the
local view cell directional density, i.e., placing more samples in
directions where more view cells are located. We select a random view
cell which lies in the halfspace given by the surface normal at the
chosen point. We then pick a random point inside the view cell and
cast a ray towards this point.
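A sketch of this view-cell-driven direction choice follows. The axis-aligned box representation of the view cells and the use of the cell centre for the halfspace test are simplifying assumptions made for illustration.

```python
import random

def sample_towards_viewcells(point, normal, viewcells):
    """Return a ray direction from `point` towards a random point inside
    a randomly chosen view cell lying in the positive halfspace of
    `normal`.  Each view cell is an axis-aligned box given as
    ((xmin, ymin, zmin), (xmax, ymax, zmax)); returns None if no view
    cell lies in that halfspace."""
    def centre(cell):
        lo, hi = cell
        return tuple((a + b) * 0.5 for a, b in zip(lo, hi))

    # Keep only the view cells in front of the surface at `point`.
    candidates = [c for c in viewcells
                  if sum(n * (ci - pi) for n, ci, pi
                         in zip(normal, centre(c), point)) > 0.0]
    if not candidates:
        return None
    lo, hi = random.choice(candidates)
    target = tuple(random.uniform(a, b) for a, b in zip(lo, hi))
    return tuple(t - p for t, p in zip(target, point))
```

More view cells in a given direction thus automatically attract proportionally more samples.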


\subsection{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving view point. In polygonal scenes the
events are defined by event surfaces induced by three distinct scene
edges. Depending on the edge configuration we distinguish between
vertex-edge (VE) events and triple-edge (EEE) events. The VE surfaces
are planar, whereas the EEE surfaces are in general quadric surfaces.

To account for these events we explicitly place samples passing close
to object edges and directed towards edges and/or vertices of other
objects.
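Constructing such an event-directed sample can be sketched as follows for the VE case; the parametric edge representation is an assumption, and the returned ray grazes the planar event surface spanned by the edge and the vertex.

```python
def ve_event_ray(edge, vertex, t=0.5):
    """Build a sample ray for a vertex-edge (VE) event: the origin lies
    on `edge` (a pair of 3D points) at parameter `t` in [0, 1], and the
    ray is directed towards `vertex` of another object."""
    a, b = edge
    origin = tuple(pa + t * (pb - pa) for pa, pb in zip(a, b))
    direction = tuple(v - o for v, o in zip(vertex, origin))
    return origin, direction
```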