\chapter{Mutual Visibility Tool}

\section{Introduction}

If a fast visibility solution is required, such as during the
development cycle, then the aggressive visibility sampling discussed in
the previous chapter is a good candidate. However, for the final
(production) solution we should either be sure that there can be no
visible error due to the preprocessed visibility, or the error should be
sufficiently small and thus not noticeable.

The verification starts with identifying a minimal set of object/view
cell pairs which are classified as mutually invisible. We process the
objects sequentially and for every given object we determine the view
cells which form the boundary of the invisible view cell set.

This boundary is based on the current view cell visibility
classifications. We assume that there is a defined connectivity
between the view cells, which is the case for both BSP-tree and kD-tree
based view cells. Then, given a PVS consisting of visible view cells
(with respect to an object), we can find all neighbors of the
visible view cells which are invisible. In other words, a view cell
belongs to this boundary if it is classified invisible and has at
least one visible neighbor. If all view cells from this boundary are
proven invisible, no other view cell (behind this boundary) can be
visible. If the verification determines some view cells as visible,
the current invisible boundary is extended and the verification is
applied on the new view cells forming the boundary.

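The sketch below illustrates the boundary computation, assuming a
hypothetical adjacency representation in which each view cell id maps to
the ids of its neighbors; it illustrates the rule above rather than the
actual implementation.

\begin{verbatim}
# Minimal sketch: the invisible boundary consists of all view cells that are
# classified invisible but have at least one visible neighbor.
def invisible_boundary(adjacency, visible):
    boundary = []
    for cell, neighbors in adjacency.items():
        if cell in visible:
            continue                    # only invisible cells can form the boundary
        if any(n in visible for n in neighbors):
            boundary.append(cell)
    return boundary

# Example: four view cells in a row, only cell 0 visible -> cell 1 is the boundary.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(invisible_boundary(adjacency, visible={0}))    # [1]
\end{verbatim}
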
The basic operation of the verification is the mutual visibility test
between an object and a view cell. This test works with a bounding
volume of the object and the boundary of the view cell. In the most
general case both are bounded by a convex polyhedron; in a simpler case
both are defined by an axis-aligned bounding box.


\section{Exact Mutual Visibility Test}

The exact mutual visibility test computes exact visibility between two
polyhedra in the scene. This is computed by testing visibility
between all pairs of potentially visible polygons of these polyhedra.

\section{Occlusion tree}
\label{sec:rot3d_ot}

The occlusion tree for the visibility from region problem is a 5D BSP
tree maintaining a collection of the 5D blocker polyhedra. The tree is
constructed for each source polygon $P_S$ and represents all rays
emerging from $P_S$. Each node $N$ of the tree represents a subset of
line space ${\mathcal Q^*_N}$. The root of the tree represents the
whole problem-relevant line set ${\mathcal L}_R$. If $N$ is an
interior node, it is associated with a \plucker plane $\pplane_N$.
The left child of $N$ represents ${\mathcal Q^*_N} \cap \pplane^-_N$,
the right child ${\mathcal Q^*_N} \cap \pplane^+_N$, where $\pplane^-_N$
and $\pplane^+_N$ are the halfspaces induced by $\pplane_N$.

Leaves of the occlusion tree are classified $in$ or $out$. If $N$ is
an $out$-leaf, ${\mathcal Q^*_N}$ represents unoccluded rays emerging
from the source polygon $P_S$. If $N$ is an $in$-leaf, it is associated
with an occluder $O_N$ that blocks the corresponding set of rays
${\mathcal Q^*_N}$. Additionally, $N$ stores a fragment of the blocker
polyhedron $B_N$ representing ${\mathcal Q^*_N}$. The intersection of
$B_N$ and the \plucker quadric corresponds to a set of stabbers ${\cal
S}_N$ through which $O_N$ is visible from $P_S$.

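For concreteness, the following sketch shows one possible node layout
mirroring the description above; the attribute names are illustrative
assumptions, not the actual data structure of the implementation.

\begin{verbatim}
# Illustrative node layout for the occlusion tree described above.
class OcclusionTreeNode:
    def __init__(self, plane=None, leaf=None, occluder=None, fragment=None):
        self.plane = plane        # Pluecker hyperplane (interior nodes only)
        self.left = None          # rays in the negative halfspace of the plane
        self.right = None         # rays in the positive halfspace of the plane
        self.leaf = leaf          # 'in', 'out', or None for interior nodes
        self.occluder = occluder  # occluder O_N blocking the rays (in-leaves)
        self.fragment = fragment  # blocker polyhedron fragment B_N (in-leaves)

    def is_interior(self):
        return self.leaf is None
\end{verbatim}
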
%%  Consider a source polygon and a single occluder such as in
%% Figure~\ref{lines3}-(a). The corresponding elementary occlusion tree
%% has interior nodes associated with \plucker hyperplanes corresponding
%% to edges of the two polygons.


\section{Occlusion tree construction}

\label{sec:rot3d_construct}

The occlusion tree is constructed incrementally by inserting blocker
polyhedra in the order given by the approximate occlusion sweep of the
scene polygons. When processing a polygon $P_j$ the algorithm inserts
a polyhedron $B_{P_SP_j}$ representing the feasible line set between
the source polygon $P_S$ and the polygon $P_j$. The polyhedron is
split into fragments that represent either occluded or unoccluded rays.

We describe two methods that can be used to insert a blocker
polyhedron into the occlusion tree. The first method inserts the
polyhedron by splitting it using hyperplanes encountered during the
traversal of the tree. The second method identifies hyperplanes that
split the polyhedron and uses them later for the construction of
polyhedron fragments in leaf nodes.


\subsection{Insertion with splitting}

\label{sec:rot3_insertwithsplit}

The polyhedron insertion algorithm maintains two variables --- the
current node $N_c$ and the current polyhedron fragment
$B_c$. Initially $N_c$ is set to the root of the tree and $B_c$ equals
$B_{P_SP_j}$. The insertion of a polyhedron in the tree proceeds as
follows: If $N_c$ is an interior node, we determine the position of
$B_c$ with respect to the hyperplane $\pplane_{N_c}$ associated with $N_c$. If
$B_c$ lies in the positive halfspace induced by $\pplane_{N_c}$, the
algorithm continues in the right subtree. Similarly, if $B_c$ lies in
the negative halfspace induced by $\pplane_{N_c}$, the algorithm
continues in the left subtree. If $B_c$ intersects both halfspaces, it
is split by $\pplane_{N_{c}}$ into two parts $B^+_{c}$ and $B^-_{c}$
and the algorithm proceeds in both subtrees of $N_c$ with the relevant
fragments of $B_{c}$.

If $N_{c}$ is a leaf node, we make a decision depending on its
classification. If $N_{c}$ is an $out$-leaf, $B_{c}$ is visible
and $N_c$ is replaced by the elementary occlusion tree of $B_{c}$. If
$N_{c}$ is an $in$-leaf, the mutual position of the currently
processed polygon $P_j$ and the occluder $O_{N_{c}}$ associated with
$N_{c}$ is determined. This test will be described in
Section~\ref{sec:position}. If $P_j$ is behind $O_{N_{c}}$, the fragment
$B_{c}$ is invisible and no modification of the tree is necessary.
Otherwise we need to {\em merge} $B_{c}$ into the tree. The merging
replaces $N_{c}$ by the elementary occlusion tree of $B_c$ and inserts
the old fragment $B_{N_{c}}$ in the new subtree.

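A compact sketch of this insertion procedure is given below. The
geometric operations on the 5D polyhedra (classification against a
hyperplane, splitting, building an elementary tree, the positional test
and merging) are abstracted into a kernel object whose method names are
assumptions made for the sake of illustration.

\begin{verbatim}
# Sketch of the insertion with splitting; 'kernel' supplies the geometric
# operations on the 5D blocker polyhedra.
def insert_with_splitting(node, fragment, kernel):
    if node.is_interior():
        side = kernel.classify(fragment, node.plane)  # 'positive', 'negative', 'both'
        if side == 'positive':
            insert_with_splitting(node.right, fragment, kernel)
        elif side == 'negative':
            insert_with_splitting(node.left, fragment, kernel)
        else:                                         # fragment straddles the plane
            front, back = kernel.split(fragment, node.plane)
            insert_with_splitting(node.right, front, kernel)
            insert_with_splitting(node.left, back, kernel)
    elif node.leaf == 'out':                          # unoccluded rays: extend the tree
        kernel.replace_by_elementary_tree(node, fragment)
    elif kernel.positional_test(fragment, node) != 'behind':
        kernel.merge(node, fragment)                  # new occluder hides the stored one
\end{verbatim}
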
\subsection{Insertion without splitting}
\label{sec:insertion_nosplit}

The above described polyhedron insertion algorithm requires that the
polyhedron is split by the hyperplanes encountered during the
traversal of the occlusion tree. Another possibility is an algorithm
that only tests the position of the polyhedron with respect to the
hyperplanes and remembers the hyperplanes that split the polyhedron on
the path from the root to the leaves. When reaching a leaf node, these
hyperplanes are used to construct the corresponding polyhedron
fragment using a polyhedron enumeration algorithm.

The splitting-free polyhedron insertion algorithm proceeds as
follows: we determine the position of the blocker polyhedron with respect to the
hyperplane $\pplane_{N_c}$ associated with the current node $N_c$. If
$B_c$ lies in the positive halfspace induced by $\pplane_{N_c}$, the
algorithm continues in the right subtree. Similarly, if $B_c$ lies in
the negative halfspace induced by $\pplane_{N_c}$, the algorithm
continues in the left subtree. If $B_c$ intersects both halfspaces, the
algorithm proceeds in both subtrees of $N_c$ and $\pplane_{N_c}$ is
added to the list of splitting hyperplanes with the correct sign for each
subtree. When reaching an $out$-leaf, the list of splitting hyperplanes and
the associated signs correspond to halfspaces bounding the
corresponding polyhedron fragment. The polyhedron enumeration
algorithm is applied using these halfspaces and the original
halfspaces defining the blocker polyhedron. Note that it is possible
that no feasible polyhedron exists since the intersection of the
halfspaces is empty. Such a case occurs due to the conservative
traversal of the tree, which only tests the position of the inserted
polyhedron with respect to the splitting hyperplanes\footnote{Such a
traversal was also used in Section~\ref{sec:rtviscull_conserva} in the
context of the from-point visibility culling.}. If the fragment is not
empty, the tree is extended as described in the previous section.

When reaching an $in$-leaf, the polygon positional test is applied. If the
inserted polygon is closer than the polygon associated with the leaf,
the polyhedron fragment is constructed and merged into the tree as
described in the previous section. The splitting-free polyhedron
insertion algorithm has the following properties (a sketch of the
algorithm follows the list):

\begin{itemize}
\item If the polyhedron reaches only $in$-leaves, the 5D set operations
  on the polyhedron are avoided completely.
\item If the polyhedron reaches only a few leaves, the application of the
  polyhedron enumeration algorithm is potentially more efficient than
  the sequential splitting. On the contrary, when reaching many
  $out$-leaves the splitting-free method makes less use of coherence,
  i.e. the polyhedron enumeration algorithm is applied independently
  in each leaf even if the corresponding polyhedra are bounded by
  coherent sets of hyperplanes.
\item An existing implementation of the polyhedron enumeration
  algorithm can be used~\cite{cdd_site,lrs_site}.
\end{itemize}
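
The following sketch summarizes the splitting-free traversal described
above; as before, the kernel object stands in for the geometric
operations and is an assumption of this illustration. The fragment is
enumerated only in the reached leaves, from the original halfspaces of
the blocker polyhedron and the signed splitting hyperplanes collected
along the path.

\begin{verbatim}
# Sketch of the insertion without splitting: splitting hyperplanes are
# remembered with their signs and the fragment is enumerated only in leaves.
def insert_without_splitting(node, blocker, split_planes, kernel):
    if node.is_interior():
        side = kernel.classify(blocker, node.plane)
        if side == 'positive':
            insert_without_splitting(node.right, blocker, split_planes, kernel)
        elif side == 'negative':
            insert_without_splitting(node.left, blocker, split_planes, kernel)
        else:   # remember the splitting hyperplane with the proper sign per subtree
            insert_without_splitting(node.right, blocker,
                                     split_planes + [(node.plane, '+')], kernel)
            insert_without_splitting(node.left, blocker,
                                     split_planes + [(node.plane, '-')], kernel)
    elif node.leaf == 'out':
        # the intersection may be empty due to the conservative traversal
        fragment = kernel.enumerate_fragment(blocker, split_planes)
        if fragment is not None:
            kernel.replace_by_elementary_tree(node, fragment)
    elif kernel.positional_test(blocker, node) != 'behind':
        fragment = kernel.enumerate_fragment(blocker, split_planes)
        if fragment is not None:
            kernel.merge(node, fragment)
\end{verbatim}
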

The polyhedron enumeration algorithm constructs the polyhedron as an
intersection of a set of halfspaces. The polyhedron is described as a
set of {\em vertices} and {\em rays} and their adjacency to the
hyperplanes bounding the polyhedron~\cite{Fukuda:1996:DDM,avis96}. The
adjacency information is used to construct a 1D skeleton of the
polyhedron that is required for computation of the intersection with
the \plucker quadric.

\subsection{Polygon positional test}
\label{sec:position}

The polygon positional test aims to determine the order of two
polygons $P_i$ and $P_j$ with respect to the source polygon
$P_S$~\cite{Chrysantho1996a}. We assume that polygons $P_i$ and $P_j$
do not intersect, i.e. all potential polygon intersections were
resolved in preprocessing. The test is applied to determine which of
the two polygons is closer to $P_S$ with respect to the given set of
rays intersecting both polygons. The test proceeds as follows:

\begin{enumerate}
\item If $P_i$ lies in the positive/negative halfspace defined by
$P_j$, it is before/behind $P_j$. Otherwise, proceed with the next step.
\item If $P_j$ lies in the positive/negative halfspace defined by
$P_i$, it is before/behind $P_i$. Otherwise, proceed with the next step.
\item Find a ray from the given set of rays that intersects both
polygons. Compute the intersections of the ray with $P_i$ and $P_j$ and
determine the position of the polygons according to the order of the
intersection points along the ray.
\end{enumerate}

The first two steps aim to determine the absolute priority of one of
the polygons. If these steps fail, the order of the polygons is
determined using a sample ray in step 3.

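A direct transcription of this test is sketched below; the helpers
side_of_plane (classifying a polygon against the supporting plane of
another polygon) and ray_param (the ray parameter of the intersection
with a polygon) are hypothetical and only serve to make the three steps
explicit.

\begin{verbatim}
# Sketch of the polygon positional test following the three steps above.
# 'side_of_plane' and 'ray_param' are supplied by the caller.
def positional_test(p_i, p_j, sample_ray, side_of_plane, ray_param):
    side = side_of_plane(p_i, p_j)          # step 1: P_i against the plane of P_j
    if side == 'positive':
        return 'P_i before P_j'
    if side == 'negative':
        return 'P_i behind P_j'
    side = side_of_plane(p_j, p_i)          # step 2: P_j against the plane of P_i
    if side == 'positive':
        return 'P_j before P_i'
    if side == 'negative':
        return 'P_j behind P_i'
    # step 3: order the intersection points of a ray piercing both polygons
    t_i = ray_param(sample_ray, p_i)
    t_j = ray_param(sample_ray, p_j)
    return 'P_i before P_j' if t_i < t_j else 'P_j before P_i'
\end{verbatim}
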
\section{Visibility test}

\label{sec:rot3d_vistest}

The visibility test classifies visibility of a given polygon with respect
to the source polygon. The test can be used to classify visibility of
a polyhedral region by applying it on the boundary faces of the region
and combining the resulting visibility states.

\subsection{Exact visibility test}

The exact visibility test for a given polyhedral region proceeds as
follows: for each face of the region facing the given source polygon
we construct a blocker polyhedron. The blocker polyhedron is then
tested for visibility by a traversal of the occlusion tree. The
visibility test proceeds as the algorithms described in
Section~\ref{sec:rot3d_construct}, but no modifications to the tree are
performed. If the polyhedron is classified as visible in all reached
leaves, the current face is fully visible. If the polyhedron is
invisible in all reached leaves, the face is invisible. Otherwise it
is partially visible, since some rays connecting the face and the
source polygon are occluded and some are unoccluded. The visibility of
the whole region is computed using a combination of the visibility states
of its boundary faces according to Table~\ref{table:vis_states}.

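Since Table~\ref{table:vis_states} is not reproduced here, the sketch
below shows one plausible reading of the combination rule: the region is
invisible only if all faces are invisible, fully visible only if all
faces are fully visible, and partially visible otherwise. This is an
assumption for illustration and should be checked against the table.

\begin{verbatim}
# Hedged sketch of combining per-face visibility states into a region state.
def combine_face_states(states):
    if all(s == 'invisible' for s in states):
        return 'invisible'
    if all(s == 'fully_visible' for s in states):
        return 'fully_visible'
    return 'partially_visible'
\end{verbatim}
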
\subsection{Conservative visibility test}

\label{sec:rot3_conserva}

The conservative visibility test provides a fast estimation of
visibility of the given region, since it does not require the 5D
polyhedra enumeration. Visibility of the given face of the region is
determined by a traversal of the occlusion tree and testing the
position of the corresponding blocker polyhedron with respect to the
encountered hyperplanes, as described in
Section~\ref{sec:insertion_nosplit}. If the blocker polyhedron
reaches only $in$-leaves and the face is behind the polygon(s)
associated with the leaves, the face is classified invisible.
Otherwise, it is conservatively classified as visible. The
visibility of the whole region is determined using a combination of the
visibility of its faces as mentioned in the previous section.

\section{Optimizations}

\label{sec:rot3d_optimizations}

Below we discuss several optimization techniques that can be used to
improve the performance of the algorithm. The optimizations do not
alter the accuracy of the visibility algorithm.

\subsection{Shaft culling}

\label{sec:rot3_shaft}

The algorithm as described computes and maintains visibility of all
polygons facing the given source polygon. In complex scenes the
description of visibility from the source polygon can reach $O(n^4)$
complexity in the worst
case~\cite{Teller92phd,p-rsls-97,Durand99-phd}. The size of the
occlusion tree is then bound by $O(n^5)$. Although such a
configuration occurs rather rarely in practice, we need to apply some
general technique to avoid the worst-case memory complexity. If the
algorithm should provide an exact analytic solution to the problem, we
cannot avoid the $O(n^4\log n)$ worst-case computational complexity,
but the computation can be decoupled to reduce memory requirements.

We propose to use the {\em shaft culling}~\cite{Haines94} method that
divides the computation into a series of from-region visibility
problems restricted by a given shaft of lines. Ideally, the shafts are
determined so that the {\em visibility complexity} in the shaft is
bound by a given constant. This is, however, the from-region visibility
problem itself. Instead, we can use an estimate based on the number of
polygons in the given shaft. First, a hemicube is erected over the source
polygon and it is used to construct five shafts corresponding to the
five faces of the hemicube. The shafts are then subdivided as
follows: if a given shaft intersects more than a predefined number of
polygons, the corresponding hemicube face is split in two and the new
shafts are processed recursively. Visibility in the shaft is
evaluated using all polygons intersecting the shaft. When the
computation for the shaft is finished, the occlusion tree is
destroyed. The algorithm then proceeds with the next shaft until all
generated shafts have been processed. See Figure~\ref{fig:rot3_shafts}
for an example of a regular subdivision of the problem-relevant line
set into 80 shafts.

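The shaft generation can be summarized by the following sketch; the
helpers polygons_in_shaft and split_face, as well as the representation
of a hemicube face, are assumptions made for illustration.

\begin{verbatim}
# Sketch of the recursive shaft subdivision driven by a polygon-count estimate.
# 'polygons_in_shaft' and 'split_face' are supplied by the caller.
def generate_shafts(hemicube_faces, scene, max_polygons,
                    polygons_in_shaft, split_face):
    shafts = []
    stack = list(hemicube_faces)             # the five initial hemicube faces
    while stack:
        face = stack.pop()
        polygons = polygons_in_shaft(face, scene)
        if len(polygons) > max_polygons:
            stack.extend(split_face(face))   # split the face in two and recurse
        else:
            shafts.append((face, polygons))  # one from-region problem per shaft;
                                             # its occlusion tree is destroyed after use
    return shafts
\end{verbatim}
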
This technique shares a similarity with the algorithm recently
published by Nirenstein et al.~\cite{nirenstein:02:egwr} that uses
shafts between the source polygon and each polygon in the
scene. Our technique provides the following benefits:

\begin{itemize}
\item The shaft can contain thousands of polygons, which allows us to
better exploit the coherence of visibility. The hierarchical line
space subdivision allows efficient searches and updates.

\item The method is applicable even in scenes with big differences in
polygon sizes. Unlike the method of Nirenstein et
al.~\cite{nirenstein:02:egwr}, the proposed technique generates shafts
to find a balance between the use of coherence and the memory
requirements. Due to the hierarchical subdivision of the
problem-relevant line set, our method can handle the case when there is
a huge number of polygons in a single shaft between two large scene
polygons.

\item Visibility in the whole shaft is described in a unified data
  structure, which can serve for further computations (e.g. extraction
  of event surfaces, hierarchical representation of visibility, etc.).
\end{itemize}


\subsection{Occluder sorting}

Occluder sorting aims to increase the accuracy of the front-to-back
ordering determined by the approximate occlusion sweep. Higher
accuracy of the ordering decreases the number of late mergings of
blocker polyhedra in the tree. Recall that the merging extends the
occlusion tree by a blocker polyhedron corresponding to an occluder
that hides an already processed occluder.

The occluder sorting is applied when reaching a leaf node of the
spatial hierarchy during the approximate occlusion sweep. Given a leaf
node $N$, the occluder sorting proceeds as follows:

\begin{enumerate}
\item Determine a ray $r$ piercing the center of the source polygon
$P_S$ and the node $N$.
\item Compute intersections of $r$ and all supporting planes of the
polygons associated with $N$.
\item Sort the polygons according to the front-to-back order of the
corresponding intersection points along $r$.
\end{enumerate}

The occluder sorting provides an exact priority order within the given
leaf in the case that the polygons are pierced by the computed ray $r$
and do not intersect each other. A sketch of this procedure follows.

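The sketch assumes each polygon exposes its supporting plane as a normal
vector and an offset (plane $n\cdot x + d = 0$); these attribute names
are illustrative.

\begin{verbatim}
# Sketch of occluder sorting within a leaf: sort polygons front to back along
# the ray from the source polygon's center to the leaf node's center.
def sort_occluders(source_center, node_center, polygons):
    direction = [b - a for a, b in zip(source_center, node_center)]
    def depth(polygon):
        n, d = polygon.plane_normal, polygon.plane_offset
        denom = sum(ni * di for ni, di in zip(n, direction))
        if denom == 0:                       # plane parallel to the ray
            return float('inf')
        t = -(sum(ni * si for ni, si in zip(n, source_center)) + d) / denom
        return t                             # ray parameter of the intersection
    return sorted(polygons, key=depth)
\end{verbatim}
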
\subsection{Visibility estimation}

The visibility estimation aims to eliminate the polyhedron
enumeration in the leaves of the occlusion tree. If we find out that
the currently processed polygon is potentially visible in the given
leaf node (it is an $out$-leaf, or it is an $in$-leaf and the
positional test reports the polygon as the closest), we estimate its
visibility by shooting random rays. We can use the current occlusion
tree to perform ray shooting in line space. We select a random ray
connecting the source polygon and the currently processed
polygon. This ray is mapped to a \plucker point and this point is
tested for inclusion in the halfspaces defined by the \plucker planes
splitting the polyhedron on the path from the root to the given
leaf. If the point is contained in all tested halfspaces, the
corresponding ray is unoccluded and the algorithm inserts the blocker
polyhedron into the tree. Otherwise it continues by selecting another
random ray until a predefined number of rays has been tested.

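The per-ray test can be made explicit as follows. The \plucker mapping
of a ray through two points is written here in the (direction, moment)
convention; both this convention and the sign handling are assumptions
of the illustration and must match the convention used for the \plucker
planes.

\begin{verbatim}
# Sketch of testing a sample ray against the signed splitting hyperplanes
# collected on the path from the root to the leaf.
def plucker_point(p, q):
    d = [q[i] - p[i] for i in range(3)]              # direction
    m = [p[1]*q[2] - p[2]*q[1],                      # moment p x q
         p[2]*q[0] - p[0]*q[2],
         p[0]*q[1] - p[1]*q[0]]
    return d + m

def ray_unoccluded(p, q, signed_planes):
    point = plucker_point(p, q)
    for coeffs, sign in signed_planes:               # coeffs: 6 plane coefficients
        side = sum(c * x for c, x in zip(coeffs, point))
        if (side >= 0) != (sign == '+'):
            return False                             # ray leaves the required halfspace
    return True
\end{verbatim}
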
The insertion of the blocker polyhedron deserves further
discussion. Since the polyhedron was not enumerated, we do not know
which of its bounding hyperplanes really bound the polyhedron fragment
and which are redundant for the given leaf. Considering all
hyperplanes defining the blocker polyhedron could lead to the inclusion of
many redundant nodes in the tree. We used a simple conservative
algorithm that tests if the given hyperplane bounds the
(unconstructed) polyhedron fragment. For each hyperplane $H_i$
bounding the blocker polyhedron, the algorithm tests the position of the
extremal lines embedded in this hyperplane with respect to each
hyperplane splitting the polyhedron. If the mappings of all extremal lines
lie in the same open halfspace defined by a splitting hyperplane,
hyperplane $H_i$ does not bound the current polyhedron fragment and
thus it can be culled.


\subsection{Visibility merging}

Visibility merging aims to propagate visibility classifications from
the leaves of the occlusion tree up into the interior nodes of the
hierarchy. Visibility merging is connected with the approximate
occlusion sweep, which simplifies the treatment of the depth of the
scene polygons.

The algorithm classifies an interior node of the occlusion tree as
occluded ($in$) if the following conditions hold:

\begin{itemize}
\item Both its children are $in$-nodes.
\item The occluders associated with both children are strictly closer than
  the closest unswept node of the spatial hierarchy.
\end{itemize}

The first condition ensures that both child nodes correspond to
occluded nodes. The second condition ensures that any unprocessed
occluder is behind the occluders associated with the children. Using
this procedure, the effective depth of the occlusion becomes
progressively smaller as more and more rays become occluded.
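
A sketch of the corresponding pull-up step is given below; the attribute
occluder_depth and the depth of the closest unswept node are assumed to
be available from the approximate occlusion sweep, and the collapse of
the child subtrees is an assumption of this illustration.

\begin{verbatim}
# Sketch of visibility merging: pull an 'in' classification up to an interior
# node when both children are occluded by occluders in front of all unswept
# geometry.
def try_merge(node, closest_unswept_depth):
    children = (node.left, node.right)
    if any(c is None for c in children):
        return
    if all(c.leaf == 'in' for c in children) and \
       all(c.occluder_depth < closest_unswept_depth for c in children):
        node.leaf = 'in'                  # the whole node becomes occluded
        node.left = node.right = None     # the subtrees can be released
\end{verbatim}
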


\section{Conservative Verifier}


\section{Error Bound Verifier}

%Option: - if there is sufficient number of samples from 1 + 2 and some
%of their statistic (local spatial and directional density + distances
%to the visible objects), maybe an error estimate could only be derived
%from there values.

This verifier is perhaps the most interesting part of the tool, since
error-bound verification has not been done so far at all. We considered
several alternative methods (e.g., maintaining a triangulated front
visible layer), but each of them exhibited problems. We believe that
the method described below is solid; its practical performance remains
to be evaluated.

The basic idea is the following: we use a two-plane parametrization for
describing all lines intersecting the object and the view cell. The
line sets are described by axis-aligned rectangles on the two planes,
which allows us to construct convex shafts always bounded by only four
lines. After determining the initial rectangles bounding the whole line
set, we perform a recursive subdivision as follows:

\begin{enumerate}
\item Cast the four corner rays and determine {\em all} intersections with the
  occluding objects.
\item If any ray is unoccluded, terminate the whole test with VISIBLE and extend
  the PVS of the object with the new view cell (and possibly further view cells
  if the ray goes further). Start the verification for the new view cells in the
  occluded layer.
\item If all four rays pierce the same convex mesh (or the same mesh triangle for
  non-convex meshes), terminate this branch with INVISIBLE. Note that the
  terminating mesh need not be the front-most one.
\item Triangulate the depth of the shaft as seen from the view cell side of the
  shaft and compute the maximal projected area of the two computed triangles.
  This determines the maximal pixel error we can make by terminating this branch
  as invisible. Note that it is best to select those ray intersections which have
  the most similar depths in order to better estimate the maximal error, i.e. the
  ``depth triangles'' need not be formed by the first visible layer.
\item If step 4 does not terminate the branch, subdivide either the view cell
  rectangle or the object rectangle, depending on the sample depths and the
  computed error.
\end{enumerate}

It is actually quite simple to create a conservative variant of the
algorithm: subdivide only to a given depth and omit test 4. Then a
branch can only be terminated by a ``strong occluder'' which hides the
whole shaft. This is similar to Danny's method, but here a hierarchical
shaft subdivision is performed, which should deliver much more accuracy.

The approach is related to classical area subdivision algorithms. An
area subdivision algorithm does not only cast ``corner'' rays, but also
maintains lists of the triangles for the given area; the basic idea is
nevertheless the same: subdivide the set of rays until there is a
trivial decision (a single convex object intersected by all corner rays)
or the error is sufficiently small. The error criterion differs from the
from-point setup, since the error cannot be evaluated in a single
projection plane.

This is the purpose of the triangulation in step 4: the four corner rays
of a shaft intersect some object(s) in the scene, and connecting these
intersection points creates two triangles. These triangles do not exist
in the scene, but they define the largest possible windows through which
we can see further into the shaft. It is then easy to find two points at
the origin of the shaft which maximize the projected areas of the two
triangles. The sum of these areas gives a conservative estimate of the
solid angle error which can be made by not subdividing the shaft any
further.

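The recursive subdivision can be organized as in the following sketch.
The helpers for casting corner rays, testing whether all four hits lie
on the same convex mesh, estimating the projected-area error, and
splitting a rectangle are hypothetical placeholders; the sketch only
mirrors the control flow of steps 1--5 above.

\begin{verbatim}
# Sketch of the recursive shaft subdivision over the two-plane parametrization.
def verify_shaft(viewcell_rect, object_rect, scene, max_error, kernel):
    rays = kernel.corner_rays(viewcell_rect, object_rect)         # step 1
    hits = [kernel.all_intersections(r, scene) for r in rays]
    if any(len(h) == 0 for h in hits):                            # step 2
        return 'VISIBLE'
    if kernel.same_convex_blocker(hits):                          # step 3
        return 'INVISIBLE'
    error = kernel.projected_triangle_error(viewcell_rect, hits)  # step 4
    if error <= max_error:
        return 'INVISIBLE'
    a, b = kernel.split(viewcell_rect, object_rect, hits, error)  # step 5
    for vc_rect, obj_rect in (a, b):
        if verify_shaft(vc_rect, obj_rect, scene, max_error, kernel) == 'VISIBLE':
            return 'VISIBLE'
    return 'INVISIBLE'
\end{verbatim}
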
\section{Summary}

\label{sec:rot3d_summary}

This chapter presented a new method for computing from-region
visibility in polygonal scenes. The method is based on the concepts of
line space subdivision, approximate occlusion sweep and hierarchical
visibility tests. The key idea is a hierarchical subdivision of the
problem-relevant line set using \plucker coordinates and the occlusion
tree. \plucker coordinates allow us to perform operations on sets of
lines by means of set theoretical operations on the 5D polyhedra. The
occlusion tree is used to maintain a union of the polyhedra that
represent lines occluded from the given region (polygon).

We discussed the relation of sets of lines in 3D and the polyhedra in
\plucker coordinates. We proposed a general size measure for a set of
lines described by a blocker polyhedron and a size measure designed
for the computation of PVS. The chapter presented two algorithms for the
construction of the occlusion tree by incremental insertion of blocker
polyhedra. The occlusion tree was used to test visibility of a given
polygon or region with respect to the source polygon/region. We
proposed several optimizations of the algorithm that make the approach
applicable to large scenes. The chapter discussed three possible
applications of the method: discontinuity meshing, visibility culling,
and occluder synthesis.

The implementation of the method was evaluated on scenes consisting of
randomly generated triangles, structured scenes, and a real-world
urban model. The evaluation was focused on the application of the method
for PVS computation. By computing a PVS in a model of the city of Graz it
was shown that the approach is suitable for visibility preprocessing
in urban scenes. The principal advantage of the method is that it does
not rely on the various tuning parameters that characterize many
conservative or approximate algorithms. On the other hand, the
exactness of the method requires higher computational demands and
implementation effort.