\chapter{Mutual Visibility Tool} If a fast visibility solution is required, such as during the development cycle, then the aggressive visibility sampling discussed in the previous chapter is a good candidate. However, for the final (production) solution we should either be sure that there can be no visible error due to the preprocessed visibility, or that the error is sufficiently small and thus not noticeable. The verification starts with identifying a minimal set of object/view cell pairs which are classified as mutually invisible. We process the objects sequentially and for every given object we determine the view cells which form the boundary of the invisible view cell set. This boundary is based on the current view cell visibility classifications. We assume that there is a defined connectivity between the view cells, which is the case for both BSP-tree and kD-tree based view cells. Then, given a PVS consisting of visible view cells (with respect to an object), we can find all the neighbors of the visible view cells which are invisible. In other words, a view cell belongs to this boundary if it is classified invisible and has at least one visible neighbor. If all view cells from this boundary are proven invisible, no other view cell (behind this boundary) can be visible. If the verification determines some view cells as visible, the current invisible boundary is extended and the verification is applied on the new view cells forming the boundary. The basic operation of the verification is a mutual visibility test between an object and a view cell. This test works with a bounding volume of the object and the boundary of the view cell. In the most general case both are bounded by a convex polyhedron; in a simpler case both are defined by an axis-aligned bounding box. Below, we describe three different mutual visibility verification algorithms. The first algorithm, which is described in most detail, computes exact visibility.
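The identification of the invisible boundary described above amounts to a single pass over the view cell adjacency graph. A minimal Python sketch, assuming the connectivity is given as an adjacency map and the PVS of an object as a set of view cells currently classified visible (all names are illustrative, not taken from an actual implementation):

```python
# Sketch of the invisible-boundary computation: a view cell belongs to the
# boundary if it is classified invisible and has at least one visible
# neighbor. 'adjacency' and 'visible' are illustrative stand-ins for the
# view cell connectivity and the current PVS classification.

def invisible_boundary(adjacency, visible):
    """Return the invisible view cells adjacent to at least one visible one."""
    boundary = set()
    for cell, neighbors in adjacency.items():
        if cell in visible:
            continue  # boundary cells are invisible by definition
        if any(n in visible for n in neighbors):
            boundary.add(cell)
    return boundary

# Example: a strip of five view cells 0-1-2-3-4 where only cell 0 is
# visible from the object; the invisible boundary is then just cell 1.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(invisible_boundary(adjacency, visible={0}))  # -> {1}
```

If the verification later proves a boundary cell visible, it is added to the visible set and the boundary is recomputed, exactly as described above.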
The second one is a conservative algorithm and the third one is an approximate algorithm with a guaranteed error bound. \section{Exact Mutual Visibility Test} The exact mutual visibility test computes exact visibility between two polyhedra in the scene. This is computed by testing visibility between all pairs of potentially visible polygons of these polyhedra. For each pair of tested polygons the computation is localized into the shaft defined by their convex hull. This shaft is used to determine the set of relevant occluders~\cite{Haines94}. \section{Occlusion tree} \label{sec:rot3d_ot} The occlusion tree for the visibility-from-region problem is a 5D BSP tree maintaining a collection of 5D blocker polyhedra. The tree is constructed for each source polygon $P_S$ and represents all rays emerging from $P_S$. Each node $N$ of the tree represents a subset of line space ${\mathcal Q^*_N}$. The root of the tree represents the whole problem-relevant line set ${\mathcal L}_R$. If $N$ is an interior node, it is associated with a \plucker plane $\pplane_N$. The left child of $N$ represents ${\mathcal Q^*_N} \cap \pplane^-_N$, the right child ${\mathcal Q^*_N} \cap \pplane^+_N$, where $\pplane^-_N$ and $\pplane^+_N$ are the halfspaces induced by $\pplane_N$. Leaves of the occlusion tree are classified as $in$ or $out$. If $N$ is an $out$-leaf, ${\mathcal Q^*_N}$ represents unoccluded rays emerging from the source polygon $P_S$. If $N$ is an $in$-leaf, it is associated with an occluder $O_N$ that blocks the corresponding set of rays ${\mathcal Q^*_N}$. Additionally, $N$ stores a fragment of the blocker polyhedron $B_N$ representing ${\mathcal Q^*_N}$. The intersection of $B_N$ and the \plucker quadric corresponds to the set of stabbers ${\cal S}_N$ through which $O_N$ is visible from $P_S$. A 2D example of an occlusion tree is shown in Figure~\ref{fig:ot}.
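The operations on the occlusion tree rest on mapping lines to \plucker points and testing them against \plucker planes. A small self-contained sketch of this machinery follows; the coordinate convention used is one common choice and not necessarily the one of an actual implementation:

```python
# A line through points p and q maps to 6D Pluecker coordinates
# (direction part, moment part). The relative orientation of two lines
# is the sign of the permuted inner product, which is zero exactly when
# the two lines are incident (coplanar: intersecting or parallel).

def pluecker(p, q):
    """Pluecker coordinates of the oriented line p -> q."""
    d = [q[i] - p[i] for i in range(3)]          # direction part
    m = [p[1]*q[2] - p[2]*q[1],                  # moment part: p x q
         p[2]*q[0] - p[0]*q[2],
         p[0]*q[1] - p[1]*q[0]]
    return d + m

def side(a, b):
    """Permuted inner product of two Pluecker points a and b."""
    return sum(a[i]*b[3+i] + a[3+i]*b[i] for i in range(3))

# Two lines meeting at the origin are incident, so side() vanishes:
l1 = pluecker((0, 0, 0), (1, 0, 0))
l2 = pluecker((0, 0, 0), (0, 1, 0))
print(side(l1, l2))  # -> 0
```

The hyperplanes $\pplane_N$ stored in interior nodes are exactly such \plucker planes induced by the edges of the scene polygons, and the traversal decisions reduce to the sign of `side()`.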
\begin{figure}[htb] \centerline{ \includegraphics[width=0.9\textwidth,draft=\DRAFTFIGS]{figs/ot1}} \caption{A 2D example of an occlusion tree. Rays intersecting scene polygons are represented by polyhedra P1, P2 and P3. The occlusion tree represents a union of the polyhedra. Note that P2 has been split during the insertion process.} \label{fig:ot} \end{figure} \section{Occlusion tree construction} \label{sec:rot3d_construct} The occlusion tree is constructed incrementally by inserting blocker polyhedra in the order given by the size of the polygons. When processing a polygon $P_j$, the algorithm inserts a polyhedron $B_{P_SP_j}$ representing the feasible line set between the source polygon $P_S$ and the polygon $P_j$. The polyhedron is split into fragments that represent either occluded or unoccluded rays. We describe two methods that can be used to insert a blocker polyhedron into the occlusion tree. The first method inserts the polyhedron by splitting it using the hyperplanes encountered during the traversal of the tree. The second method identifies the hyperplanes that split the polyhedron and uses them later for the construction of the polyhedron fragments in the leaf nodes. \subsection{Insertion with splitting} \label{sec:rot3_insertwithsplit} The polyhedron insertion algorithm maintains two variables --- the current node $N_c$ and the current polyhedron fragment $B_c$. Initially $N_c$ is set to the root of the tree and $B_c$ equals $B_{P_SP_j}$. The insertion of a polyhedron into the tree proceeds as follows: If $N_c$ is an interior node, we determine the position of $B_c$ with respect to the hyperplane $\pplane_{N_c}$ associated with $N_c$.
If $B_c$ lies in the positive halfspace induced by $\pplane_{N_c}$, the algorithm continues in the right subtree. Similarly, if $B_c$ lies in the negative halfspace induced by $\pplane_{N_c}$, the algorithm continues in the left subtree. If $B_c$ intersects both halfspaces, it is split by $\pplane_{N_{c}}$ into two parts $B^+_{c}$ and $B^-_{c}$ and the algorithm proceeds in both subtrees of $N_c$ with the relevant fragments of $B_{c}$. If $N_{c}$ is a leaf node, we make a decision depending on its classification. If $N_{c}$ is an $out$-leaf, then $B_{c}$ is visible and $N_c$ is replaced by the elementary occlusion tree of $B_{c}$. If $N_{c}$ is an $in$-leaf and the polygon associated with the leaf is closer than the inserted polygon, the corresponding rays are already occluded and no modification of the tree is necessary. Otherwise, the inserted polygon is closer and we need to {\em merge} $B_{c}$ into the tree. The merging replaces $N_{c}$ by the elementary occlusion tree of $B_c$ and inserts the old fragment $B_{N_{c}}$ into the new subtree. \subsection{Insertion without splitting} \label{sec:insertion_nosplit} The polyhedron insertion algorithm described above requires that the polyhedron is split by the hyperplanes encountered during the traversal of the occlusion tree. Another possibility is an algorithm that only tests the position of the polyhedron with respect to each hyperplane and remembers the hyperplanes that split the polyhedron on the path from the root to the leaves. On reaching a leaf node, these hyperplanes are used to construct the corresponding polyhedron fragment using a polyhedron enumeration algorithm. The splitting-free polyhedron insertion algorithm proceeds as follows: we determine the position of the blocker polyhedron with respect to the hyperplane $\pplane_{N_c}$ associated with the current node $N_c$. If $B_c$ lies in the positive halfspace induced by $\pplane_{N_c}$, the algorithm continues in the right subtree. Similarly, if $B_c$ lies in the negative halfspace induced by $\pplane_{N_c}$, the algorithm continues in the left subtree.
If $B_c$ intersects both halfspaces, the algorithm proceeds in both subtrees of $N_c$ and $\pplane_{N_c}$ is added to the list of splitting hyperplanes with the appropriate sign for each subtree. On reaching an $out$-leaf, the list of splitting hyperplanes and the associated signs correspond to the halfspaces bounding the corresponding polyhedron fragment. The polyhedron enumeration algorithm is applied using these halfspaces and the original halfspaces defining the blocker polyhedron. Note that it is possible that no feasible polyhedron exists because the intersection of the halfspaces is empty. Such a case occurs due to the conservative traversal of the tree, which only tests the position of the inserted polyhedron with respect to the splitting hyperplanes. If the fragment is not empty, the tree is extended as described in the previous section. On reaching an $in$-leaf, the polygon positional test is applied. If the inserted polygon is closer than the polygon associated with the leaf, the polyhedron fragment is constructed and merged into the tree as described in the previous section. The splitting-free polyhedron insertion algorithm has the following properties: \begin{itemize} \item If the polyhedron reaches only $in$-leaves, the 5D set operations on the polyhedron are avoided completely. \item If the polyhedron reaches only a few leaves, the application of the polyhedron enumeration algorithm is potentially more efficient than the sequential splitting. In contrast, when reaching many $out$-leaves, the splitting-free method makes less use of coherence, i.e., the polyhedron enumeration algorithm is applied independently in each leaf even if the corresponding polyhedra are bounded by coherent sets of hyperplanes. \item An existing implementation of the polyhedron enumeration algorithm can be used~\cite{cdd_site,lrs_site}. \end{itemize} The polyhedron enumeration algorithm constructs the polyhedron as an intersection of a set of halfspaces.
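The control flow of the insertion with splitting (Section~\ref{sec:rot3_insertwithsplit}) can be made concrete on a drastically simplified 1D analogue, where blocker polyhedra become intervals and \plucker hyperplanes become split values. This is only an illustration of the tree manipulation, not of the 5D geometry, and the depth-dependent merging step is omitted (an $in$-leaf is simply left unchanged):

```python
# 1D analogue of the occlusion tree: an interior node stores a split
# value, a leaf is classified 'in' (occluded) or 'out' (unoccluded).
# Inserting an interval splits it at the encountered split values; an
# out-leaf is replaced by the "elementary tree" of the fragment.

class Node:
    def __init__(self, split=None, left=None, right=None, leaf=None):
        self.split, self.left, self.right, self.leaf = split, left, right, leaf

def elementary(lo, hi):
    """Elementary tree of the fragment [lo, hi]: in between lo and hi."""
    inner = Node(split=hi, left=Node(leaf='in'), right=Node(leaf='out'))
    return Node(split=lo, left=Node(leaf='out'), right=inner)

def insert(node, lo, hi):
    if node.leaf == 'out':
        return elementary(lo, hi)   # visible fragment: extend the tree
    if node.leaf == 'in':
        return node                 # already occluded: no change (merging omitted)
    if hi <= node.split:
        node.left = insert(node.left, lo, hi)
    elif lo >= node.split:
        node.right = insert(node.right, lo, hi)
    else:                           # fragment straddles the split: split it
        node.left = insert(node.left, lo, node.split)
        node.right = insert(node.right, node.split, hi)
    return node

def query(node, x):
    while node.leaf is None:
        node = node.left if x < node.split else node.right
    return node.leaf

tree = Node(leaf='out')
tree = insert(tree, 1, 3)   # first blocker
tree = insert(tree, 2, 5)   # overlaps: split at 3, [3, 5] extends the tree
print(query(tree, 4), query(tree, 0))  # -> in out
```

The splitting-free variant would traverse the same tree while only recording the split values and signs, deferring the construction of the fragment to the reached leaf.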
The polyhedron is described as a set of {\em vertices} and {\em rays} and their adjacency to the hyperplanes bounding the polyhedron~\cite{Fukuda:1996:DDM,avis96}. The adjacency information is used to construct a 1D skeleton of the polyhedron, which is required for the computation of the intersection with the \plucker quadric. \section{Visibility test} \label{sec:rot3d_vistest} The visibility test classifies the visibility of a given polygon with respect to the source polygon. The test can be used to classify the visibility of a polyhedral region by applying it to the boundary faces of the region and combining the resulting visibility states. \subsection{Exact visibility test} The exact visibility test for a given polyhedral region proceeds as follows: for each face of the region facing the given source polygon we construct a blocker polyhedron. The blocker polyhedron is then tested for visibility by a traversal of the occlusion tree. The visibility test proceeds like the algorithms described in Section~\ref{sec:rot3d_construct}, but no modifications of the tree are performed. If the polyhedron is classified as visible in all reached leaves, the current face is fully visible. If the polyhedron is invisible in all reached leaves, the face is invisible. Otherwise it is partially visible, since some rays connecting the face and the source polygon are occluded and some are unoccluded. The visibility of the whole region is computed using a combination of the visibility states of its boundary faces according to Table~\ref{table:vis_states}. \subsection{Conservative visibility test} \label{sec:rot3_conserva} The conservative visibility test provides a fast estimation of the visibility of the given region, since it does not require the 5D polyhedron enumeration.
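The combination of the visibility states of the boundary faces can be sketched as follows; the rule shown (invisible only if all faces are invisible, fully visible only if all faces are fully visible, partial otherwise) is one plausible reading of Table~\ref{table:vis_states}, which is not reproduced here:

```python
# Combining per-face visibility states into a state for the whole region.
# The states and the combination rule below are an illustrative reading of
# the table referenced in the text, not a verbatim reproduction of it.

def combine(states):
    """states: iterable of 'visible' / 'invisible' / 'partial', one per face."""
    states = list(states)
    if all(s == 'invisible' for s in states):
        return 'invisible'
    if all(s == 'visible' for s in states):
        return 'visible'
    return 'partial'

print(combine(['visible', 'partial']))      # -> partial
print(combine(['invisible', 'invisible']))  # -> invisible
```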
Visibility of the given face of the region is determined by a traversal of the occlusion tree and testing the position of the corresponding blocker polyhedron with respect to the encountered hyperplanes, as described in Section~\ref{sec:insertion_nosplit}. If the blocker polyhedron reaches only $in$-leaves and the face is behind the polygon(s) associated with the leaves, the face is classified as invisible. Otherwise, it is conservatively classified as visible. The visibility of the whole region is determined using a combination of the visibility of its faces as mentioned in the previous section. \section{Optimizations} \label{sec:rot3d_optimizations} Below we discuss several optimization techniques that can be used to improve the performance of the algorithm. The optimizations do not alter the accuracy of the visibility algorithm.
\subsection{Visibility estimation} The visibility estimation aims to eliminate the polyhedron enumeration in the leaves of the occlusion tree. If we find that the currently processed polygon is potentially visible in the given leaf node (it is an $out$-leaf, or it is an $in$-leaf and the positional test reports the polygon as the closest), we estimate its visibility by shooting random rays. We can use the current occlusion tree to perform ray shooting in line space. We select a random ray connecting the source polygon and the currently processed polygon. This ray is mapped to a \plucker point and this point is tested for inclusion in the halfspaces defined by the \plucker planes splitting the polyhedron on the path from the root to the given leaf. If the point is contained in all tested halfspaces, the corresponding ray is unoccluded and the algorithm inserts the blocker polyhedron into the tree. Otherwise it continues by selecting another random ray, until a predefined number of rays has been tested. The insertion of the blocker polyhedron deserves further discussion. Since the polyhedron was not enumerated, we do not know which of its bounding hyperplanes really bound the polyhedron fragment and which are redundant for the given leaf. Considering all hyperplanes defining the blocker polyhedron could lead to the inclusion of many redundant nodes in the tree. We use a simple conservative algorithm that tests whether a given hyperplane bounds the (unconstructed) polyhedron fragment. For each hyperplane $H_i$ bounding the blocker polyhedron, the algorithm tests the position of the extremal lines embedded in this hyperplane with respect to each hyperplane splitting the polyhedron. If the mappings of all extremal lines lie in the same open halfspace defined by a splitting hyperplane, hyperplane $H_i$ does not bound the current polyhedron fragment and thus it can be culled.
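The ray-sampling step can be sketched as follows. The sampling of the polygons (corner points only) and the representation of the recorded halfspaces as (\plucker plane, sign) pairs are simplified placeholders for the actual implementation:

```python
import random

# Sample rays connecting the source and the tested polygon, map each to a
# Pluecker point and test it against the halfspaces recorded on the path
# from the root to the leaf; one ray inside all halfspaces is enough to
# report the fragment as containing an unoccluded ray.

def pluecker(p, q):
    d = [q[i] - p[i] for i in range(3)]
    m = [p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0]]
    return d + m

def side(a, b):
    return sum(a[i]*b[3+i] + a[3+i]*b[i] for i in range(3))

def contains_unoccluded_ray(src_pts, dst_pts, halfspaces, samples=16,
                            rng=random):
    """halfspaces: list of (pluecker_plane, sign) pairs from the tree path.
    Placeholder sampling: random pairs of the given polygon points."""
    for _ in range(samples):
        p, q = rng.choice(src_pts), rng.choice(dst_pts)
        ray = pluecker(p, q)
        if all(sign * side(plane, ray) >= 0 for plane, sign in halfspaces):
            return True
    return False
```

With an empty halfspace list every sampled ray trivially passes; in the tree, a successful sample triggers the (conservatively pruned) insertion of the blocker polyhedron described above.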
\subsection{Visibility merging} Visibility merging aims to propagate visibility classifications from the leaves of the occlusion tree up into the interior nodes of the hierarchy. Visibility merging is connected with the approximate occlusion sweep, which simplifies the treatment of the depth of the scene polygons. The algorithm classifies an interior node of the occlusion tree as occluded ($in$) if the following conditions hold: \begin{itemize} \item Both its children are $in$-nodes. \item The occluders associated with both children are strictly closer than the closest unswept node of the spatial hierarchy. \end{itemize} The first condition ensures that both child nodes correspond to occluded nodes. The second condition ensures that any unprocessed occluder is behind the occluders associated with the children. Using this procedure, the effective depth of the occlusion becomes progressively smaller as more and more rays become occluded. \section{Conservative Verifier} A conservative verifier is a faster alternative to the exact visibility verifier described above. The verifier is an extension of the strong occlusion algorithm of Cohen-Or et al.~\cite{cohen-egc-98}. In particular, our verifier refines the search for a strong occluder by using a hierarchical subdivision of the space of lines connecting the two regions tested for mutual visibility. Initially, the shaft bounding the tested regions is constructed. The rays bounding the shaft are traced through the scene and we compute all their intersections with the scene objects between the tested regions. The algorithm then proceeds as follows: \begin{itemize} \item If any ray does not intersect any object, the tested regions are classified as mutually visible and the algorithm terminates. \item If the rays intersect the same convex object (at any depth), this object is a strong occluder with respect to the shaft and thus it also hides all rays in the corresponding shaft.
\item If the rays do not intersect a single common convex object, four new shafts are constructed by splitting both regions in half and the process is repeated recursively. \end{itemize} If the subdivision does not terminate before reaching a predefined subdivision depth, we conservatively classify the tested regions as mutually visible. \section{Error Bound Approximate Verifier} The approximate verifier is based on a similar idea to the conservative one. However, it behaves differently at the finer subdivisions of the ray shafts. The idea is to use the above algorithm until the shafts get small enough that we can guarantee that, even if the shaft is not blocked by the scene objects, the pixel error induced by the omission of objects potentially visible behind the shaft is below a given threshold. For the computation of the maximal error due to the current shaft we assume that one tested region is a view cell, whereas the other is an object bounding box or a cell of the spatial hierarchy. The error is estimated as follows: we first triangulate the farthest intersection points in the shaft as seen from the view cell side of the shaft. Then for each computed triangle we calculate a point in the view cell which maximizes the projected area of the triangle. A conservative estimate of the maximal error is then given by the sum of the computed projected areas.
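A 2D toy version of the conservative verifier can be sketched as follows, with line segments standing in both for the tested regions and for the convex occluders, and a plain segment intersection test standing in for ray casting. This is an illustration of the recursive shaft subdivision only, under these simplifying assumptions:

```python
# Conservative verifier, 2D toy: trace the rays bounding a shaft, stop
# when some bounding ray is unoccluded (visible) or one occluder blocks
# them all (invisible); otherwise split both regions and recurse, and
# past max depth conservatively report the regions as visible.

def orient(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def blocks(occ, a, b):
    """Proper intersection test between occluder segment occ and ray a->b."""
    p, q = occ
    return (orient(a, b, p) * orient(a, b, q) < 0 and
            orient(p, q, a) * orient(p, q, b) < 0)

def midpoint(a, b):
    return ((a[0]+b[0]) / 2, (a[1]+b[1]) / 2)

def mutually_visible(src, dst, occluders, depth=4):
    rays = [(s, d) for s in src for d in dst]
    hits = [[o for o in occluders if blocks(o, a, b)] for a, b in rays]
    if any(not h for h in hits):
        return True                 # some bounding ray is unoccluded
    common = set(map(id, hits[0]))
    for h in hits[1:]:
        common &= set(map(id, h))
    if common:
        return False                # one occluder blocks the whole shaft
    if depth == 0:
        return True                 # conservative: classify as visible
    sm, dm = midpoint(*src), midpoint(*dst)
    return any(mutually_visible(s, d, occluders, depth - 1)
               for s in [(src[0], sm), (sm, src[1])]
               for d in [(dst[0], dm), (dm, dst[1])])

src, dst = ((0, 0), (0, 1)), ((2, 0), (2, 1))
wall = (((1, -1), (1, 2)),)         # one occluder spanning the shaft
print(mutually_visible(src, dst, wall))  # -> False
print(mutually_visible(src, dst, ()))    # -> True
```

The error bound verifier would replace the fixed `depth == 0` cutoff with the projected-area estimate described above.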
\section{Summary} \label{sec:rot3d_summary} This chapter presented a mutual visibility tool, which determines whether two regions in the scene are mutually visible. First, we have described an advanced exact visibility algorithm which is a simple modification of an algorithm for computing from-region visibility in polygonal scenes. The key idea is a hierarchical subdivision of the problem-relevant line set using \plucker coordinates and the occlusion tree. \plucker coordinates make it possible to perform operations on sets of lines by means of set-theoretical operations on 5D polyhedra. The occlusion tree is used to maintain a union of the polyhedra that represent lines occluded from the given region (polygon). We described two algorithms for the construction of the occlusion tree by incremental insertion of blocker polyhedra.
The occlusion tree was used to test the visibility of a given polygon or region with respect to the source polygon/region. We proposed several optimizations of the algorithm that make the approach applicable to large scenes. The principal advantage of the exact method over the conservative and the approximate ones is that it does not rely on the various tuning parameters that characterize many conservative or approximate algorithms. On the other hand, the exactness of the method comes at the cost of higher computational demands and implementation effort. Second, we have described a conservative verifier which is an extension of the algorithm based on strong occluders. The conservative verifier requires a specification of the shaft size at which the tested regions are conservatively classified as visible. Third, we have described an approximate error bound verifier. Instead of a fixed subdivision depth, it subdivides the shafts until the error caused by omitting objects potentially visible behind a shaft falls below a given threshold.