\chapter{Mutual Visibility Tool}


If a fast visibility solution is required, such as during the
development cycle, the aggressive visibility sampling discussed in
the previous chapter is a good candidate. However, for the final
(production) solution we should either be sure that the preprocessed
visibility cannot cause any visible error, or that the error is
sufficiently small and thus not noticeable.

The verification starts with identifying a minimal set of object/view
cell pairs which are classified as mutually invisible. We process the
objects sequentially and for every given object we determine the view
cells which form the boundary of the invisible view cell set.

This boundary is based on the current view cell visibility
classifications. We assume that there is a defined connectivity
between the view cells, which is the case for both BSP-tree and
kD-tree based view cells. Then, given a PVS consisting of visible view
cells (with respect to an object), we can find all the neighbors of
the visible view cells which are invisible. In other words, a view
cell belongs to this boundary if it is classified invisible and has at
least one visible neighbor. If all view cells from this boundary are
proven invisible, no other view cell (behind this boundary) can be
visible. If the verification determines some view cells as visible,
the current invisible boundary is extended and the verification is
applied to the new view cells forming the boundary.
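
To illustrate the boundary computation, the following fragment sketches
the extraction of the invisible boundary from the view cell adjacency;
the {\tt ViewCell} structure and its visibility flag are illustrative
placeholders rather than the actual data structures of the tool.

\begin{verbatim}
// Sketch: collect invisible view cells that have at least one visible
// neighbor (the invisible boundary of the PVS of a given object).
#include <set>
#include <vector>

struct ViewCell {
  bool visible;                      // current classification w.r.t. the object
  std::vector<ViewCell*> neighbors;  // adjacency from the BSP/kD subdivision
};

std::vector<ViewCell*> InvisibleBoundary(const std::vector<ViewCell*> &pvs)
{
  std::set<ViewCell*> boundary;
  for (ViewCell *vc : pvs)           // pvs: visible view cells of the object
    for (ViewCell *n : vc->neighbors)
      if (!n->visible)               // invisible cell adjacent to a visible one
        boundary.insert(n);
  return std::vector<ViewCell*>(boundary.begin(), boundary.end());
}
\end{verbatim}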

The basic operation of the verification is a mutual visibility test
between an object and a view cell. This test works with a bounding
volume of the object and the boundary of the view cell. In the most
general case both are bounded by a convex polyhedron; in a simpler
case both are defined by an axis-aligned bounding box.

Below, we describe three different mutual visibility verification
algorithms. The first algorithm, which is described in most detail,
computes exact visibility. The second one is a conservative algorithm
and the third one is an approximate algorithm with a guaranteed error
bound.


\section{Exact Mutual Visibility Test}

The exact mutual visibility test computes exact visibility between two
polyhedra in the scene. This is computed by testing visibility
between all pairs of potentially visible polygons of these
polyhedra. For each pair of tested polygons the computation is
localized into the shaft defined by their convex hull. This shaft is
used to determine the set of relevant occluders~\cite{Haines94}.
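
The occluder selection can be illustrated by a standard shaft culling
test. The sketch below assumes that the planes bounding the shaft have
already been derived from the convex hull of the two polygons; the
types are illustrative only.

\begin{verbatim}
// Sketch: cull objects whose bounding box lies completely outside one
// of the planes bounding the shaft between the two tested polygons.
#include <vector>

struct Vec3  { double x, y, z; };
struct Plane { Vec3 n; double d; };  // n.p + d = 0, n points out of the shaft
struct AABB  { Vec3 min, max; };

// True if the box lies entirely on the positive (outside) side of the plane.
static bool OutsidePlane(const AABB &b, const Plane &p)
{
  // Pick the box corner that is most negative with respect to the normal.
  Vec3 v = { p.n.x > 0 ? b.min.x : b.max.x,
             p.n.y > 0 ? b.min.y : b.max.y,
             p.n.z > 0 ? b.min.z : b.max.z };
  return p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d > 0;
}

bool IntersectsShaft(const AABB &box, const std::vector<Plane> &shaftPlanes)
{
  for (const Plane &p : shaftPlanes)
    if (OutsidePlane(box, p))        // outside a single plane: irrelevant occluder
      return false;
  return true;                       // potentially relevant occluder
}
\end{verbatim}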

\section{Occlusion tree}
\label{sec:rot3d_ot}

The occlusion tree for the visibility from region problem is a 5D BSP
tree maintaining a collection of the 5D blocker polyhedra. The tree is
constructed for each source polygon $P_S$ and represents all rays
emerging from $P_S$. Each node $N$ of the tree represents a subset of
line space ${\mathcal Q^*_N}$. The root of the tree represents the
whole problem-relevant line set ${\mathcal L}_R$. If $N$ is an
interior node, it is associated with a \plucker plane $\pplane_N$.
The left child of $N$ represents ${\mathcal Q^*_N} \cap \pplane^-_N$,
the right child ${\mathcal Q^*_N} \cap \pplane^+_N$, where $\pplane^-_N$
and $\pplane^+_N$ are the halfspaces induced by $\pplane_N$.

Leaves of the occlusion tree are classified $in$ or $out$. If $N$ is
an $out$-leaf, ${\mathcal Q^*_N}$ represents unoccluded rays emerging
from the source polygon $P_S$. If $N$ is an $in$-leaf, it is
associated with an occluder $O_N$ that blocks the corresponding set of
rays ${\mathcal Q^*_N}$. Additionally, $N$ stores a fragment of the
blocker polyhedron $B_N$ representing ${\mathcal Q^*_N}$. The
intersection of $B_N$ and the \plucker quadric corresponds to a set of
stabbers ${\cal S}_N$ through which $O_N$ is visible from $P_S$. A 2D
example of an occlusion tree is shown in Figure~\ref{fig:ot}.
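
A possible layout of the occlusion tree nodes is sketched below; the
types are illustrative placeholders and do not correspond to the actual
implementation.

\begin{verbatim}
// Sketch: node layout of the 5D occlusion tree (one tree per source polygon P_S).
struct PluckerPlane { double coef[6]; };  // hyperplane in Plucker coordinates
struct BlockerPolyhedron { /* 5D polyhedron fragment; details omitted */ };
struct Polygon { /* scene occluder polygon; details omitted */ };

struct OTNode {
  enum Type { INTERIOR, IN_LEAF, OUT_LEAF };
  Type type;

  // Interior node: splitting Plucker hyperplane and the two subtrees.
  PluckerPlane plane;                 // valid if type == INTERIOR
  OTNode *left;                       // negative halfspace of plane
  OTNode *right;                      // positive halfspace of plane

  // In-leaf: occluder O_N blocking the rays of this node and the
  // polyhedron fragment B_N representing them.
  Polygon *occluder;                  // valid if type == IN_LEAF
  BlockerPolyhedron *fragment;        // valid if type == IN_LEAF
};
\end{verbatim}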

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.9\textwidth,draft=\DRAFTFIGS]{figs/ot1}}
\caption{A 2D example of an occlusion tree. Rays intersecting scene polygons
are represented by polyhedra P1, P2 and P3. The occlusion tree represents a union
of the polyhedra. Note that P2 has been split during the insertion process.}
\label{fig:ot}
\end{figure}


%%  Consider a source polygon and a single occluder such as in
%% Figure~\ref{lines3}-(a). The corresponding elementary occlusion tree
%% has interior nodes associated with \plucker hyperplanes corresponding
%% to edges of the two polygons.


\section{Occlusion tree construction}

\label{sec:rot3d_construct}

The occlusion tree is constructed incrementally by inserting blocker
polyhedra in the order given by the size of the polygons. When
processing a polygon $P_j$ the algorithm inserts a polyhedron
$B_{P_SP_j}$ representing the feasible line set between the source
polygon $P_S$ and the polygon $P_j$. The polyhedron is split into
fragments that represent either occluded or unoccluded rays.

We describe two methods that can be used to insert a blocker
polyhedron into the occlusion tree. The first method inserts the
polyhedron by splitting it using hyperplanes encountered during the
traversal of the tree. The second method identifies hyperplanes that
split the polyhedron and uses them later for the construction of
polyhedron fragments in leaf nodes.



\subsection{Insertion with splitting}

\label{sec:rot3_insertwithsplit}

The polyhedron insertion algorithm maintains two variables --- the
current node $N_c$ and the current polyhedron fragment
$B_c$. Initially $N_c$ is set to the root of the tree and $B_c$ equals
$B_{P_SP_j}$. The insertion of a polyhedron in the tree proceeds as
follows: If $N_c$ is an interior node, we determine the position of
$B_c$ with respect to the hyperplane $\pplane_{N_c}$ associated with
$N_c$. If $B_c$ lies in the positive halfspace induced by
$\pplane_{N_c}$ the algorithm continues in the right subtree.
Similarly, if $B_c$ lies in the negative halfspace induced by
$\pplane_{N_c}$, the algorithm continues in the left subtree. If $B_c$
intersects both halfspaces, it is split by $\pplane_{N_{c}}$ into two
parts $B^+_{c}$ and $B^-_{c}$ and the algorithm proceeds in both
subtrees of $N_c$ with the relevant fragments of $B_{c}$.

If $N_{c}$ is a leaf node then we make a decision depending on its
classification. If $N_{c}$ is an $out$-leaf then $B_{c}$ is visible
and $N_c$ is replaced by the elementary occlusion tree of $B_{c}$. If
$N_{c}$ is an $in$-leaf it corresponds to already occluded rays and no
modification to the tree is necessary. Otherwise we need to {\em merge}
$B_{c}$ into the tree. The merging replaces $N_{c}$ by the elementary
occlusion tree of $B_c$ and inserts the old fragment $B_{N_{c}}$ in
the new subtree.
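
The control flow of the insertion with splitting can be sketched as
follows, building on the node layout sketched in
Section~\ref{sec:rot3d_ot}; the helpers for classifying, splitting and
merging polyhedra are assumed interfaces, not the actual implementation.

\begin{verbatim}
// Sketch of the insertion with splitting (control flow only).
enum Position { POSITIVE, NEGATIVE, BOTH };

Position Classify(const BlockerPolyhedron *B, const PluckerPlane &p); // side of B
void Split(const BlockerPolyhedron *B, const PluckerPlane &p,
           BlockerPolyhedron *&bPos, BlockerPolyhedron *&bNeg);       // split B by p
OTNode *ElementaryTree(BlockerPolyhedron *B);          // elementary occlusion tree of B
bool CloserThan(const BlockerPolyhedron *a, const BlockerPolyhedron *b); // depth test
void Merge(OTNode *&leaf, BlockerPolyhedron *B);       // replace leaf, reinsert old fragment

void Insert(OTNode *&node, BlockerPolyhedron *B)
{
  if (node->type == OTNode::INTERIOR) {
    switch (Classify(B, node->plane)) {
      case POSITIVE: Insert(node->right, B); break;    // B in positive halfspace
      case NEGATIVE: Insert(node->left, B);  break;    // B in negative halfspace
      case BOTH: {                                     // B straddles the hyperplane
        BlockerPolyhedron *bPos, *bNeg;
        Split(B, node->plane, bPos, bNeg);
        Insert(node->right, bPos);
        Insert(node->left, bNeg);
        break;
      }
    }
  } else if (node->type == OTNode::OUT_LEAF) {
    node = ElementaryTree(B);                   // unoccluded rays become occluded by B
  } else if (CloserThan(B, node->fragment)) {   // in-leaf: positional test
    Merge(node, B);                             // otherwise the rays stay occluded
  }
}
\end{verbatim}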

\subsection{Insertion without splitting}
\label{sec:insertion_nosplit}

The above described polyhedron insertion algorithm requires that the
polyhedron is split by the hyperplanes encountered during the
traversal of the occlusion tree. Another possibility is an algorithm
that only tests the position of the polyhedron with respect to the
hyperplane and remembers the hyperplanes that split the polyhedron on
the path from the root to the leaves. Upon reaching a leaf node these
hyperplanes are used to construct the corresponding polyhedron
fragment using a polyhedron enumeration algorithm.

The splitting-free polyhedron insertion algorithm proceeds as
follows: we determine the position of the blocker polyhedron with
respect to the hyperplane $\pplane_{N_c}$ associated with the current
node $N_c$. If $B_c$ lies in the positive halfspace induced by
$\pplane_{N_c}$ the algorithm continues in the right subtree.
Similarly, if $B_c$ lies in the negative halfspace induced by
$\pplane_{N_c}$ the algorithm continues in the left subtree. If $B_c$
intersects both halfspaces the algorithm proceeds in both subtrees of
$N_c$ and $\pplane_{N_c}$ is added to the list of splitting
hyperplanes with the correct sign for each subtree. Upon reaching an
$out$-leaf the list of splitting hyperplanes and the associated signs
correspond to halfspaces bounding the corresponding polyhedron
fragment. The polyhedron enumeration algorithm is applied using these
halfspaces and the original halfspaces defining the blocker
polyhedron. Note that it is possible that no feasible polyhedron
exists since the intersection of the halfspaces is empty. Such a case
occurs due to the conservative traversal of the tree, which only tests
the position of the inserted polyhedron with respect to the splitting
hyperplanes. If the fragment is not empty, the tree is extended as
described in the previous section.

Upon reaching an $in$-leaf the polygon positional test is applied. If the
inserted polygon is closer than the polygon associated with the leaf,
the polyhedron fragment is constructed and merged into the tree as
described in the previous section. The splitting-free polyhedron
insertion algorithm has the following properties (a sketch of the
traversal is given after the list):

\begin{itemize}
\item If the polyhedron reaches only $in$-leaves the 5D set operations
  on the polyhedron are avoided completely.
\item If the polyhedron reaches only a few leaves the application of the
  polyhedron enumeration algorithm is potentially more efficient than
  the sequential splitting. On the contrary, when reaching many
  $out$-leaves the splitting-free method makes less use of coherence,
  i.e., the polyhedron enumeration algorithm is applied independently
  in each leaf even if the corresponding polyhedra are bounded by
  coherent sets of hyperplanes.
\item An existing implementation of the polyhedron enumeration
algorithm can be used~\cite{cdd_site,lrs_site}.
\end{itemize}
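
The splitting-free traversal can be sketched as follows; it reuses the
helpers assumed in the previous sketch and only accumulates the
splitting hyperplanes together with their signs along the path. The
enumeration and tree-extension helpers are assumptions as well.

\begin{verbatim}
// Sketch: traversal without splitting; the fragment is only enumerated
// in the reached leaves from the collected halfspaces.
#include <utility>
#include <vector>

typedef std::vector<std::pair<PluckerPlane, int> > HalfspaceList; // plane + sign

BlockerPolyhedron *EnumerateFragment(const BlockerPolyhedron *B,
                                     const HalfspaceList &halfspaces);
void ExtendTree(OTNode *&outLeaf, BlockerPolyhedron *fragment);

void InsertNoSplit(OTNode *&node, BlockerPolyhedron *B, HalfspaceList path)
{
  if (node->type == OTNode::INTERIOR) {
    Position pos = Classify(B, node->plane);
    if (pos == BOTH) {                               // plane splits the polyhedron
      HalfspaceList right = path, left = path;
      right.push_back(std::make_pair(node->plane, +1));
      left.push_back(std::make_pair(node->plane, -1));
      InsertNoSplit(node->right, B, right);
      InsertNoSplit(node->left, B, left);
    } else if (pos == POSITIVE) {
      InsertNoSplit(node->right, B, path);           // plane does not split B
    } else {
      InsertNoSplit(node->left, B, path);
    }
  } else if (node->type == OTNode::OUT_LEAF) {
    // Enumerate the fragment from the collected halfspaces and the halfspaces
    // defining B; the intersection may be empty (conservative traversal).
    BlockerPolyhedron *frag = EnumerateFragment(B, path);
    if (frag) ExtendTree(node, frag);
  } else if (CloserThan(B, node->fragment)) {        // in-leaf positional test
    BlockerPolyhedron *frag = EnumerateFragment(B, path);
    if (frag) Merge(node, frag);
  }
}
\end{verbatim}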


The polyhedron enumeration algorithm constructs the polyhedron as an
intersection of a set of halfspaces. The polyhedron is described as a
set of {\em vertices} and {\em rays} and their adjacency to the
hyperplanes bounding the polyhedron~\cite{Fukuda:1996:DDM,avis96}. The
adjacency information is used to construct a 1D skeleton of the
polyhedron that is required for the computation of the intersection
with the \plucker quadric.



\section{Visibility test}

\label{sec:rot3d_vistest}

The visibility test classifies visibility of a given polygon with respect
to the source polygon. The test can be used to classify visibility of
a polyhedral region by applying it to the boundary faces of the region
and combining the resulting visibility states.

\subsection{Exact visibility test}

The exact visibility test for a given polyhedral region proceeds as
follows: for each face of the region facing the given source polygon
we construct a blocker polyhedron. The blocker polyhedron is then
tested for visibility by a traversal of the occlusion tree. The
visibility test proceeds like the algorithms described in
Section~\ref{sec:rot3d_construct}, but no modifications to the tree are
performed. If the polyhedron is classified as visible in all reached
leaves, the current face is fully visible. If the polyhedron is
invisible in all reached leaves, the face is invisible. Otherwise it
is partially visible since some rays connecting the face and the
source polygon are occluded and some are unoccluded. The visibility of
the whole region is computed using a combination of the visibility
states of its boundary faces according to Table~\ref{table:vis_states}.
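
Since Table~\ref{table:vis_states} is not reproduced in this section,
the following sketch only illustrates one plausible combination rule:
a region is fully visible if all of its faces are fully visible,
invisible if all of its faces are invisible, and partially visible
otherwise.

\begin{verbatim}
// Sketch: combining the visibility states of the boundary faces of a region.
// The combination rules are an assumption (see the text above).
#include <vector>

enum Visibility { INVISIBLE, PARTIALLY_VISIBLE, FULLY_VISIBLE };

Visibility CombineFaceStates(const std::vector<Visibility> &faces)
{
  bool allVisible = true, allInvisible = true;
  for (Visibility v : faces) {
    if (v != FULLY_VISIBLE) allVisible = false;
    if (v != INVISIBLE)     allInvisible = false;
  }
  if (allVisible)   return FULLY_VISIBLE;
  if (allInvisible) return INVISIBLE;
  return PARTIALLY_VISIBLE;
}
\end{verbatim}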


\subsection{Conservative visibility test}

\label{sec:rot3_conserva}

The conservative visibility test provides a fast estimation of the
visibility of the given region since it does not require the 5D
polyhedron enumeration. Visibility of the given face of the region is
determined by a traversal of the occlusion tree and testing the
position of the corresponding blocker polyhedron with respect to the
encountered hyperplanes as described in
Section~\ref{sec:insertion_nosplit}. If the blocker polyhedron
reaches only $in$-leaves and the face is behind the polygon(s)
associated with the leaves, the face is classified invisible.
Otherwise, it is conservatively classified as visible. The
visibility of the whole region is determined using a combination of
the visibility of its faces as mentioned in the previous section.

\section{Optimizations}

\label{sec:rot3d_optimizations}

Below we discuss several optimization techniques that can be used to
improve the performance of the algorithm. The optimizations do not
alter the accuracy of the visibility algorithm.

% \subsection{Occluder sorting}

%  Occluder sorting aims to increase the accuracy of the front-to-back
% ordering determined by the approximate occlusion sweep. Higher
% accuracy of the ordering decreases the number of the late merging of
% blocker polyhedra in the tree.  Recall that the merging extends the
% occlusion tree by a blocker polyhedron corresponding to an occluder
% that hides an already processed occluder.

%  The occluder sorting is applied when reaching a leaf node of the
% spatial hierarchy during the approximate occlusion sweep. Given a leaf
% node $N$ the occluder sorting proceeds as follows:

% \begin{enumerate}
% \item Determine a ray $r$ piercing the center of the source polygon
% $P_S$ and the node $N$.
% \item Compute intersections of $r$ and all supporting planes of the
% polygons associated with $N$.
% \item Sort the polygons according to the front-to-back order of the
% corresponding intersection points along $r$.
% \end{enumerate}

% The occluder sorting provides an exact priority order within the given
% leaf in the case that the polygons are pierced by the computed ray $r$
% and they do not intersect.


\subsection{Visibility estimation}

The visibility estimation aims to eliminate the polyhedron
enumeration in the leaves of the occlusion tree. If we find out that
the currently processed polygon is potentially visible in the given
leaf node (it is an $out$-leaf, or it is an $in$-leaf and the
positional test reports the polygon as the closest), we estimate its
visibility by shooting random rays. We can use the current occlusion
tree to perform ray shooting in line space. We select a random ray
connecting the source polygon and the currently processed
polygon. This ray is mapped to a \plucker point and this point is
tested for inclusion in the halfspaces defined by the \plucker planes
splitting the polyhedron on the path from the root to the given
leaf. If the point is contained in all tested halfspaces the
corresponding ray is unoccluded and the algorithm inserts the blocker
polyhedron into the tree. Otherwise it continues by selecting another
random ray until a predefined number of rays has been tested.
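
The inclusion test of a sampled ray can be sketched as follows. The
mapping of a line to \plucker coordinates and the sign of the permuted
inner product are standard; the representation of the splitting
hyperplanes and their signs along the path is an illustrative
assumption.

\begin{verbatim}
// Sketch: map a ray (line through points p and q) to Plucker coordinates
// and test on which side of a Plucker hyperplane it lies.
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };
struct PluckerCoord { double c[6]; };   // (d, m): direction and moment

PluckerCoord MapLine(const Vec3 &p, const Vec3 &q)
{
  PluckerCoord r;
  r.c[0] = q.x - p.x;                   // direction d = q - p
  r.c[1] = q.y - p.y;
  r.c[2] = q.z - p.z;
  r.c[3] = p.y * q.z - p.z * q.y;       // moment m = p x q
  r.c[4] = p.z * q.x - p.x * q.z;
  r.c[5] = p.x * q.y - p.y * q.x;
  return r;
}

// Permuted inner product: its sign gives the side of the line with respect
// to the hyperplane induced by the line 'plane'.
double Side(const PluckerCoord &a, const PluckerCoord &plane)
{
  return a.c[0] * plane.c[3] + a.c[1] * plane.c[4] + a.c[2] * plane.c[5] +
         a.c[3] * plane.c[0] + a.c[4] * plane.c[1] + a.c[5] * plane.c[2];
}

// The sampled ray is unoccluded in the leaf if it lies in all halfspaces
// recorded on the path from the root (signs as stored during the traversal).
bool ContainedInAll(const PluckerCoord &ray,
                    const std::vector<std::pair<PluckerCoord, int> > &path)
{
  for (const std::pair<PluckerCoord, int> &h : path)
    if (Side(ray, h.first) * h.second < 0)   // ray lies on the wrong side
      return false;
  return true;                               // ray unoccluded in this leaf
}
\end{verbatim}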

The insertion of the blocker polyhedron deserves further
discussion. Since the polyhedron was not enumerated we do not know
which of its bounding hyperplanes really bound the polyhedron fragment
and which are redundant for the given leaf. Considering all
hyperplanes defining the blocker polyhedron could lead to the inclusion
of many redundant nodes in the tree. We use a simple conservative
algorithm that tests whether a given hyperplane bounds the
(unconstructed) polyhedron fragment. For each hyperplane $H_i$
bounding the blocker polyhedron the algorithm tests the position of
the extremal lines embedded in this hyperplane with respect to each
hyperplane splitting the polyhedron. If the mappings of all extremal
lines lie in the same open halfspace defined by a splitting hyperplane,
hyperplane $H_i$ does not bound the current polyhedron fragment and
thus it can be culled.


\subsection{Visibility merging}

Visibility merging aims to propagate visibility classifications from
the leaves of the occlusion tree up into the interior nodes of the
hierarchy. Visibility merging is connected with the approximate
occlusion sweep, which simplifies the treatment of the depth of the
scene polygons.

The algorithm classifies an interior node of the occlusion tree as
occluded ($in$) if the following conditions hold:

\begin{itemize}
\item Both its children are $in$-nodes.
\item The occluders associated with both children are strictly closer than
  the closest unswept node of the spatial hierarchy.
\end{itemize}

The first condition ensures that both child nodes correspond to
occluded nodes. The second condition ensures that any unprocessed
occluder is behind the occluders associated with the children. Using
this procedure the effective depth of the occlusion becomes
progressively smaller as more and more rays become occluded.
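
The pull-up of the $in$-classification can be sketched as follows; the
depth queries and the representation of the closest unswept node are
placeholders for whatever the approximate occlusion sweep provides.

\begin{verbatim}
// Sketch: propagate the 'in' classification from the children to an interior
// node during the approximate occlusion sweep. The helpers are assumptions.
struct OTNode;                          // occlusion tree node (see earlier sketch)

bool IsInNode(const OTNode *n);         // leaf classified 'in' or merged interior
double OccluderDepth(const OTNode *n);  // maximal depth of the occluders below n
const OTNode *LeftChild(const OTNode *n);
const OTNode *RightChild(const OTNode *n);
void MarkAsIn(OTNode *n);

// closestUnsweptDepth: distance to the closest node of the spatial hierarchy
// that has not been swept yet.
bool TryMerge(OTNode *node, double closestUnsweptDepth)
{
  const OTNode *l = LeftChild(node), *r = RightChild(node);
  if (!IsInNode(l) || !IsInNode(r))
    return false;                       // both children must be occluded
  if (OccluderDepth(l) >= closestUnsweptDepth ||
      OccluderDepth(r) >= closestUnsweptDepth)
    return false;                       // an unswept occluder could still be in front
  MarkAsIn(node);                       // classify the interior node as occluded
  return true;
}
\end{verbatim}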



\section{Conservative Verifier}

A conservative verifier is a faster alternative to the exact
visibility verifier described above. The verifier is an extension of
the strong occlusion algorithm of Cohen-Or et
al.~\cite{cohen-egc-98}. In particular, our verifier refines the search
for a strong occluder by using a hierarchical subdivision of the space
of lines connecting the two regions tested for mutual
visibility. Initially the shaft bounding the tested regions is
constructed. Rays bounding the shaft are traced through the scene and
we compute all their intersections with the scene objects between the
tested regions. Then the algorithm proceeds as follows (a sketch of the
subdivision is given below):

\begin{itemize}
\item If any ray does not intersect any object, the
tested regions are classified as mutually visible and the algorithm
terminates.
\item If all rays intersect the same convex object (at any
depth), this object is a strong occluder with respect to the shaft and
thus it also hides all rays in the corresponding shaft.
\item If the rays do not intersect a single common convex object, four new
shafts are constructed by splitting both regions in half and the
process is repeated recursively.
\end{itemize}

If the subdivision does not terminate before reaching a predefined
subdivision depth, we conservatively classify the tested regions as
mutually visible.
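
The recursive shaft subdivision can be sketched as follows; the region,
ray and intersection helpers are hypothetical and stand for the
corresponding parts of the implementation.

\begin{verbatim}
// Sketch of the hierarchical shaft subdivision of the conservative verifier.
#include <vector>

struct Ray    { double orig[3], dir[3]; };  // corner ray of the shaft
struct Region { /* convex region: view cell or object bounding volume */ };

std::vector<Ray> CornerRays(const Region &a, const Region &b); // rays bounding the shaft
bool Unoccluded(const Ray &r);                        // no hit between the regions
bool HitSameConvexObject(const std::vector<Ray> &rays); // common convex blocker at any depth
void SplitInHalf(const Region &r, Region &r1, Region &r2);

// Returns true if the two regions are (conservatively) mutually visible.
bool MutuallyVisible(const Region &a, const Region &b, int depth, int maxDepth)
{
  std::vector<Ray> rays = CornerRays(a, b);
  for (const Ray &r : rays)
    if (Unoccluded(r)) return true;            // an unblocked ray: visible
  if (HitSameConvexObject(rays)) return false; // strong occluder blocks the shaft
  if (depth >= maxDepth) return true;          // conservative termination

  Region a1, a2, b1, b2;                       // split both regions in half
  SplitInHalf(a, a1, a2);
  SplitInHalf(b, b1, b2);
  return MutuallyVisible(a1, b1, depth + 1, maxDepth) ||
         MutuallyVisible(a1, b2, depth + 1, maxDepth) ||
         MutuallyVisible(a2, b1, depth + 1, maxDepth) ||
         MutuallyVisible(a2, b2, depth + 1, maxDepth);
}
\end{verbatim}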


\section{Error Bound Approximate Verifier}

The approximate verifier will be based on a similar idea as the
conservative one. However, it will behave differently in the finer
subdivision of the ray shafts. The idea is to use the above algorithm
until the shafts get small enough that we can guarantee that, even
if the shaft is not blocked by the scene objects, the pixel error
induced by the omission of objects potentially visible behind the shaft
is below a given threshold.

For the computation of the maximal error due to the current shaft we
assume that one tested region is a view cell, whereas the other is an
object bounding box or a cell of the spatial hierarchy. The threshold is
computed as follows: We first triangulate the farthest intersection
points in the shaft as seen from the view cell side of the shaft. Then
for each computed triangle we calculate a point in the view cell which
maximizes the projected area of the triangle. The conservative
estimate of the maximal error is then given by the sum of the computed
projected areas.
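
The following sketch illustrates one possible realization of the error
estimate. It assumes that the farthest intersection points have already
been triangulated and approximates the maximal projected area of each
triangle by its solid angle as seen from the view cell corner that
maximizes it; this is a simplification of the maximization described
above, not the actual implementation.

\begin{verbatim}
// Sketch: conservative error estimate for a shaft from the triangulated
// farthest intersection points (the "depth triangles").
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
static Vec3 Sub(const Vec3 &a, const Vec3 &b) { return Vec3{a.x-b.x, a.y-b.y, a.z-b.z}; }
static double Dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 Cross(const Vec3 &a, const Vec3 &b) {
  return Vec3{a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double Len(const Vec3 &a) { return std::sqrt(Dot(a, a)); }

struct Triangle { Vec3 v[3]; };

// Solid angle of a triangle seen from point p (Van Oosterom & Strackee).
double SolidAngle(const Triangle &t, const Vec3 &p)
{
  Vec3 a = Sub(t.v[0], p), b = Sub(t.v[1], p), c = Sub(t.v[2], p);
  double la = Len(a), lb = Len(b), lc = Len(c);
  double num = Dot(a, Cross(b, c));
  double den = la*lb*lc + Dot(a,b)*lc + Dot(a,c)*lb + Dot(b,c)*la;
  return std::fabs(2.0 * std::atan2(num, den));
}

// Sum over all depth triangles of the maximal solid angle over the corners.
double ShaftErrorEstimate(const std::vector<Triangle> &depthTriangles,
                          const std::vector<Vec3> &viewCellCorners)
{
  double error = 0.0;
  for (const Triangle &t : depthTriangles) {
    double maxOmega = 0.0;
    for (const Vec3 &c : viewCellCorners)
      maxOmega = std::max(maxOmega, SolidAngle(t, c));
    error += maxOmega;                 // conservative contribution of this triangle
  }
  return error;
}
\end{verbatim}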

%Option: - if there is sufficient number of samples from 1 + 2 and some
%of their statistic (local spatial and directional density + distances
%to the visible objects), maybe an error estimate could only be derived
%from there values.

% a) is perhaps the most interesting since this has not been done so far at all. I though about different methods how to do that (e.g. maintaining a triangulated front visible layer), but I always found some problem. I believe that the method which is almost implemented now is solid and I just wonder what performance it will achieve.

% The basic idea is the following:

% Use 2 plane parametrization for describing all lines intersecting the object and the viewcell. The line sets will be described by aligned rectangles on the two planes which allows to construct convex shafts bounded always by only 4 lines. After determing the initial rectangles bounding the whole line set perform recursive subdivsion as follows:

% 1. Cast the 4 corner rays and deterine ALL intersections with the occluding objects

% 2. If any ray is unoccluded termite the whole test with VISIBLE and extend the PVS of the object with the new viecell (and possibly more if the rays goes further). Start the verification for the new viewcells in the occluded layer.

% 3. If all 4 rays pierce the same convex mesh, (or mesh triangle for non convex meshes) terminate this branch with INVISIBLE. Note the terminating mesh need not be the front most one.

% 4. Triangulate the depth of the shaft as seen from the viecell side of the shaft and compute the maximal projected area of the 2 computed triangles. This determines the maximal pixel error we can make by terminating this branch as invisible.
% Note that here it would be the best to selected those intersections for the rays which have the most similar depths to better estimate the maximal error, i.e. the "depth triangles" need not be formed by the first visible layer.

% 5. If 4 does not terminate subdivide either the viewcell or the object rectangle depending on the sample depths and error computed.

% It is actually quite simple to create a conservative variant of the algorithm: subdivide only to a given depth and avoid test 4. Then the samples can only be terminated by a "STRONG occluder" which hides the whole shaft. This is similar to Dannys method, but here a hierarchical shaft subdivision is performed which should deliver much more accuracy.


% > Partly: area subdivision algorithm does not only cast "corner" rays, but maintains lists of the triangles for the given area. However the basic idea is the same: subdivide the set or rays until there is a trivial decision (a single convex object intersected by all corner rays) or the error is too small. The second criterion is different from the from-poin t setup, since the error cannot be evaluated in a single projection plane.
% > That is why this triangulation would be done: Imagine that the 4 corner rays of a shaft intersect some object(s) in the scene. Now connect these intersection points by creating two triangles. These triangles do not exist in the scene, but define the largest possible windows through which we can see further in the shaft. Now it is easy to find two points at the origin of the shaft which maximize the projected area of the two triangles. The sum of these areas gives a conservative estimate of the spatial angle error which can be made by not subdividing the shaft any further.
% > Is this description clearer?


% partly - I just don't see it in front of me now. How is the result influenced by which triangles are chosen? Is it so that the nearer the intersections, the bigger the error?



\section{Summary}

\label{sec:rot3d_summary}

This chapter presented a mutual visibility tool, which determines
whether two regions in the scene are mutually visible.

First, we have described an advanced exact visibility algorithm which
is a simple modification of an algorithm for computing from-region
visibility in polygonal scenes. The key idea is a hierarchical
subdivision of the problem-relevant line set using \plucker
coordinates and the occlusion tree. \plucker coordinates allow us to
perform operations on sets of lines by means of set-theoretical
operations on the 5D polyhedra. The occlusion tree is used to maintain
a union of the polyhedra that represent lines occluded from the given
region (polygon). We described two algorithms for the construction of
the occlusion tree by incremental insertion of blocker polyhedra. The
occlusion tree was used to test visibility of a given polygon or
region with respect to the source polygon/region. We proposed several
optimizations of the algorithm that make the approach applicable to
large scenes. The principal advantage of the exact method over the
conservative and the approximate ones is that it does not rely on the
various tuning parameters that characterize many conservative or
approximate algorithms. On the other hand, the exactness of the method
comes at the cost of higher computational demands and implementation
effort.

Second, we have described a conservative verifier which is an
extension of the algorithm based on strong occluders. The conservative
verifier requires a specification of the shaft size at which the
tested regions are conservatively classified as visible.

Third, we have described an approximate error bound verifier which
extends the conservative verifier. Instead of relying on a fixed
subdivision depth, it terminates the subdivision of a shaft once the
estimated pixel error induced by omitting the objects potentially
visible through the shaft falls below a given threshold.
