\chapter{Mutual Visibility Tool}

\section{Introduction}

If a fast visibility solution is required, such as during the development cycle, the aggressive visibility sampling discussed in the previous chapter is a good candidate. For the final (production) solution, however, we should either be sure that no visible error can occur due to the preprocessed visibility, or that the error is sufficiently small and thus not noticeable.

The verification starts by identifying a minimal set of object/view cell pairs which are classified as mutually invisible. We process the objects sequentially and for every given object we determine the view cells which form the boundary of the invisible view cell set. This boundary is based on the current view cell visibility classifications. We assume that there is a defined connectivity between the view cells, which is the case for both BSP-tree and kD-tree based view cells. Given a PVS consisting of the view cells visible with respect to an object, we can find all neighbors of the visible view cells which are invisible. In other words, a view cell belongs to this boundary if it is classified invisible and has at least one visible neighbor. If all view cells from this boundary are proven invisible, no other view cell (behind this boundary) can be visible. If the verification classifies some view cells as visible, the invisible boundary is extended and the verification is applied to the new view cells forming the boundary.

The basic operation of the verification is a mutual visibility test between an object and a view cell. This test works with a bounding volume of the object and the boundary of the view cell. In the most general case both are bounded by convex polyhedra; in a simpler case both are defined by axis-aligned bounding boxes.

\section{Exact Mutual Visibility Test}

The exact mutual visibility test computes exact visibility between two polyhedra in the scene. This is done by testing visibility between all pairs of potentially visible polygons of these polyhedra.

\section{Occlusion tree}
\label{sec:rot3d_ot}

The occlusion tree for the visibility-from-region problem is a 5D BSP tree maintaining a collection of 5D blocker polyhedra. The tree is constructed for each source polygon $P_S$ and represents all rays emerging from $P_S$. Each node $N$ of the tree represents a subset of line space ${\mathcal Q^*_N}$. The root of the tree represents the whole problem-relevant line set ${\mathcal L}_R$.

If $N$ is an interior node, it is associated with a \plucker plane $\pplane_N$. The left child of $N$ represents ${\mathcal Q^*_N} \cap \pplane^-_N$, the right child ${\mathcal Q^*_N} \cap \pplane^+_N$, where $\pplane^-_N$ and $\pplane^+_N$ are the halfspaces induced by $\pplane_N$.

Leaves of the occlusion tree are classified $in$ or $out$. If $N$ is an $out$-leaf, ${\mathcal Q^*_N}$ represents unoccluded rays emerging from the source polygon $P_S$. If $N$ is an $in$-leaf, it is associated with an occluder $O_N$ that blocks the corresponding set of rays ${\mathcal Q^*_N}$. Additionally, $N$ stores a fragment of the blocker polyhedron $B_N$ representing ${\mathcal Q^*_N}$. The intersection of $B_N$ and the \plucker quadric corresponds to the set of stabbers ${\cal S}_N$ through which $O_N$ is visible from $P_S$.
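To make the structure concrete, the following is a minimal C++ sketch of an occlusion tree node under the above definitions (all type names are hypothetical and not taken from an existing implementation):

\begin{verbatim}
// 5D BSP node of the occlusion tree. An interior node stores a
// Plucker plane; an in-leaf stores the blocking occluder O_N and
// the blocker polyhedron fragment B_N; an out-leaf stores nothing.
struct PluckerPlane { float coef[6]; };  // hyperplane in 5D line space

struct Polyhedron;   // 5D blocker polyhedron (set of halfspaces)
struct Polygon3;     // scene polygon

struct OcclusionTreeNode {
  enum Type { INTERIOR, IN_LEAF, OUT_LEAF } type;
  PluckerPlane plane;          // splitting plane (interior nodes only)
  OcclusionTreeNode *left;     // negative halfspace of plane
  OcclusionTreeNode *right;    // positive halfspace of plane
  const Polygon3 *occluder;    // O_N (in-leaves only)
  Polyhedron *fragment;        // B_N (in-leaves only)
};
\end{verbatim}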
\section{Occlusion tree construction}
\label{sec:rot3d_construct}

The occlusion tree is constructed incrementally by inserting blocker polyhedra in the order given by the approximate occlusion sweep of the scene polygons. When processing a polygon $P_j$, the algorithm inserts a polyhedron $B_{P_SP_j}$ representing the feasible line set between the source polygon $P_S$ and the polygon $P_j$. The polyhedron is split into fragments that represent either occluded or unoccluded rays.

We describe two methods that can be used to insert a blocker polyhedron into the occlusion tree. The first method inserts the polyhedron by splitting it using the hyperplanes encountered during the traversal of the tree. The second method identifies the hyperplanes that split the polyhedron and uses them later for the construction of the polyhedron fragments in the leaf nodes.

\subsection{Insertion with splitting}
\label{sec:rot3_insertwithsplit}

The polyhedron insertion algorithm maintains two variables --- the current node $N_c$ and the current polyhedron fragment $B_c$. Initially, $N_c$ is set to the root of the tree and $B_c$ equals $B_{P_SP_j}$. The insertion of a polyhedron into the tree proceeds as follows: If $N_c$ is an interior node, we determine the position of $B_c$ with respect to the hyperplane $\pplane_{N_c}$ associated with $N_c$. If $B_c$ lies in the positive halfspace induced by $\pplane_{N_c}$, the algorithm continues in the right subtree. Similarly, if $B_c$ lies in the negative halfspace induced by $\pplane_{N_c}$, the algorithm continues in the left subtree. If $B_c$ intersects both halfspaces, it is split by $\pplane_{N_c}$ into two parts $B^+_{c}$ and $B^-_{c}$ and the algorithm proceeds in both subtrees of $N_c$ with the relevant fragments of $B_{c}$.

If $N_{c}$ is a leaf node, we make a decision depending on its classification. If $N_{c}$ is an $out$-leaf, $B_{c}$ is visible and $N_c$ is replaced by the elementary occlusion tree of $B_{c}$. If $N_{c}$ is an $in$-leaf, the mutual position of the currently processed polygon $P_j$ and the occluder $O_{N_{c}}$ associated with $N_{c}$ is determined. This test will be described in Section~\ref{sec:position}. If $P_j$ is behind $O_{N_{c}}$, the fragment $B_{c}$ corresponds to occluded rays and no modification of the tree is necessary. Otherwise, we need to {\em merge} $B_{c}$ into the tree. The merging replaces $N_{c}$ by the elementary occlusion tree of $B_c$ and inserts the old fragment $B_{N_{c}}$ into the new subtree.
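Before turning to the splitting-free variant, a compact C++ sketch of this recursive insertion may be useful. It continues the node structure given earlier; \texttt{splitPolyhedron}, \texttt{buildElementaryTree}, \texttt{isBehind} and \texttt{mergeFragment} are hypothetical helpers standing in for the 5D set operations, the elementary tree construction, the positional test of Section~\ref{sec:position} and the merging step:

\begin{verbatim}
// Hypothetical helpers for the operations described in the text.
void splitPolyhedron(Polyhedron *B, const PluckerPlane &plane,
                     Polyhedron **neg, Polyhedron **pos);
OcclusionTreeNode *buildElementaryTree(Polyhedron *B,
                                       const Polygon3 *occluder);
bool isBehind(const Polygon3 *tested, const Polygon3 *reference);
OcclusionTreeNode *mergeFragment(OcclusionTreeNode *leaf,
                                 Polyhedron *B, const Polygon3 *occluder);

// Insertion with splitting: B is the current fragment B_c.
OcclusionTreeNode *insert(OcclusionTreeNode *node, Polyhedron *B,
                          const Polygon3 *occluder) {
  if (node->type == OcclusionTreeNode::INTERIOR) {
    Polyhedron *neg = 0, *pos = 0;
    splitPolyhedron(B, node->plane, &neg, &pos); // one side may be empty
    if (neg) node->left  = insert(node->left,  neg, occluder);
    if (pos) node->right = insert(node->right, pos, occluder);
    return node;
  }
  if (node->type == OcclusionTreeNode::OUT_LEAF)  // B_c is visible:
    return buildElementaryTree(B, occluder);      // extend the tree
  if (isBehind(occluder, node->occluder))         // in-leaf, occluded:
    return node;                                  // no modification
  return mergeFragment(node, B, occluder);        // merge B_c into tree
}
\end{verbatim}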
\subsection{Insertion without splitting}
\label{sec:insertion_nosplit}

The polyhedron insertion algorithm described above requires splitting the polyhedron by the hyperplanes encountered during the traversal of the occlusion tree. Another possibility is an algorithm that only tests the position of the polyhedron with respect to each hyperplane and remembers the hyperplanes that split the polyhedron on the path from the root to the leaves. When a leaf node is reached, these hyperplanes are used to construct the corresponding polyhedron fragment using a polyhedron enumeration algorithm.

The splitting-free polyhedron insertion algorithm proceeds as follows: we determine the position of the blocker polyhedron with respect to the hyperplane $\pplane_{N_c}$ associated with the current node $N_c$. If $B_c$ lies in the positive halfspace induced by $\pplane_{N_c}$, the algorithm continues in the right subtree. Similarly, if $B_c$ lies in the negative halfspace induced by $\pplane_{N_c}$, the algorithm continues in the left subtree. If $B_c$ intersects both halfspaces, the algorithm proceeds in both subtrees of $N_c$ and $\pplane_{N_c}$ is added to the list of splitting hyperplanes with the corresponding sign for each subtree.

When an $out$-leaf is reached, the list of splitting hyperplanes and the associated signs correspond to the halfspaces bounding the corresponding polyhedron fragment. The polyhedron enumeration algorithm is applied using these halfspaces and the original halfspaces defining the blocker polyhedron. Note that it is possible that no feasible polyhedron exists because the intersection of the halfspaces is empty. Such a case occurs due to the conservative traversal of the tree, which only tests the position of the inserted polyhedron with respect to the splitting hyperplanes\footnote{Such a traversal was also used in Section~\ref{sec:rtviscull_conserva} in the context of from-point visibility culling.}. If the fragment is not empty, the tree is extended as described in the previous section.

When an $in$-leaf is reached, the polygon positional test is applied. If the inserted polygon is closer than the polygon associated with the leaf, the polyhedron fragment is constructed and merged into the tree as described in the previous section.

The splitting-free polyhedron insertion algorithm has the following properties:

\begin{itemize}
\item If the polyhedron reaches only $in$-leaves, the 5D set operations on the polyhedron are avoided completely.
\item If the polyhedron reaches only a few leaves, the application of the polyhedron enumeration algorithm is potentially more efficient than the sequential splitting. Conversely, when reaching many $out$-leaves, the splitting-free method makes less use of coherence, i.e., the polyhedron enumeration algorithm is applied independently in each leaf even if the corresponding polyhedra are bounded by coherent sets of hyperplanes.
\item An existing implementation of the polyhedron enumeration algorithm can be used~\cite{cdd_site,lrs_site}.
\end{itemize}

The polyhedron enumeration algorithm constructs the polyhedron as an intersection of a set of halfspaces. The polyhedron is described as a set of {\em vertices} and {\em rays} and their adjacency to the hyperplanes bounding the polyhedron~\cite{Fukuda:1996:DDM,avis96}. The adjacency information is used to construct the 1D skeleton of the polyhedron that is required for the computation of the intersection with the \plucker quadric.

\subsection{Polygon positional test}
\label{sec:position}

The polygon positional test aims to determine the order of two polygons $P_i$ and $P_j$ with respect to the source polygon $P_S$~\cite{Chrysantho1996a}. We assume that the polygons do not intersect, i.e., all potential polygon intersections were resolved in preprocessing. The test is applied to determine which of the two polygons is closer to $P_S$ with respect to the given set of rays intersecting both polygons. The test proceeds as follows:

\begin{enumerate}
\item If $P_i$ lies in the positive/negative halfspace defined by $P_j$, it is before/behind $P_j$. Otherwise, proceed with the next step.
\item If $P_j$ lies in the positive/negative halfspace defined by $P_i$, it is before/behind $P_i$. Otherwise, proceed with the next step.
\item Find a ray from the given set of rays that intersects both polygons. Compute the intersections of the ray with $P_i$ and $P_j$ and determine the position of the polygons according to the order of the intersection points along the ray.
\end{enumerate}

The first two steps aim to determine an absolute priority of one of the polygons. If these steps fail, the order of the polygons is determined using a sample ray in step 3.
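In code, the test might look as follows. This is only a sketch: \texttt{Plane3}, \texttt{Ray} and the helpers \texttt{supportingPlane}, \texttt{sideOfPlane} and \texttt{rayParam} are hypothetical, and the supporting planes are assumed to be oriented so that their positive halfspace faces the source polygon:

\begin{verbatim}
struct Polygon3;
struct Ray;
struct Plane3 { float a, b, c, d; };

Plane3 supportingPlane(const Polygon3 &p);
// +1: polygon entirely in the positive halfspace, -1: negative,
//  0: the polygon straddles the plane.
int sideOfPlane(const Polygon3 &p, const Plane3 &plane);
float rayParam(const Ray &r, const Polygon3 &p); // intersection param

enum Order { BEFORE, BEHIND };  // order of P_i with respect to P_j

Order positionalTest(const Polygon3 &Pi, const Polygon3 &Pj,
                     const Ray &sample) {
  int s = sideOfPlane(Pi, supportingPlane(Pj));   // step 1
  if (s > 0) return BEFORE;
  if (s < 0) return BEHIND;
  s = sideOfPlane(Pj, supportingPlane(Pi));       // step 2
  if (s > 0) return BEHIND;                       // Pj is before Pi
  if (s < 0) return BEFORE;
  // step 3: order of intersection points along a sample ray
  return rayParam(sample, Pi) < rayParam(sample, Pj) ? BEFORE : BEHIND;
}
\end{verbatim}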
\section{Visibility test}
\label{sec:rot3d_vistest}

The visibility test classifies the visibility of a given polygon with respect to the source polygon. The test can be used to classify the visibility of a polyhedral region by applying it to the boundary faces of the region and combining the resulting visibility states.

\subsection{Exact visibility test}

The exact visibility test for a given polyhedral region proceeds as follows: for each face of the region facing the given source polygon, we construct a blocker polyhedron. The blocker polyhedron is then tested for visibility by a traversal of the occlusion tree. The visibility test proceeds like the algorithms described in Section~\ref{sec:rot3d_construct}, but no modifications of the tree are performed. If the polyhedron is classified as visible in all reached leaves, the current face is fully visible. If the polyhedron is invisible in all reached leaves, the face is invisible. Otherwise, it is partially visible, since some rays connecting the face and the source polygon are occluded and some are unoccluded. The visibility of the whole region is computed using a combination of the visibility states of its boundary faces according to Table~\ref{table:vis_states}.

\subsection{Conservative visibility test}
\label{sec:rot3_conserva}

The conservative visibility test provides a fast estimation of the visibility of the given region since it does not require the 5D polyhedra enumeration. The visibility of a given face of the region is determined by a traversal of the occlusion tree and testing the position of the corresponding blocker polyhedron with respect to the encountered hyperplanes as described in Section~\ref{sec:insertion_nosplit}. If the blocker polyhedron reaches only $in$-leaves and the face is behind the polygon(s) associated with the leaves, the face is classified invisible. Otherwise, it is conservatively classified as visible. The visibility of the whole region is determined using a combination of the visibility of its faces as mentioned in the previous section.

\section{Optimizations}
\label{sec:rot3d_optimizations}

Below we discuss several optimization techniques that can be used to improve the performance of the algorithm. The optimizations do not alter the accuracy of the visibility algorithm.

\subsection{Shaft culling}
\label{sec:rot3_shaft}

The algorithm as described computes and maintains the visibility of all polygons facing the given source polygon. In complex scenes the description of visibility from the source polygon can reach $O(n^4)$ complexity in the worst case~\cite{Teller92phd,p-rsls-97,Durand99-phd}. The size of the occlusion tree is then bounded by $O(n^5)$. Although such a configuration occurs rather rarely in practice, we need to apply a general technique to avoid the worst-case memory complexity. If the algorithm should provide an exact analytic solution to the problem, we cannot avoid the $O(n^4\log n)$ worst-case computational complexity, but the computation can be decomposed to reduce the memory requirements.

We propose to use the {\em shaft culling}~\cite{Haines94} method that divides the computation into a series of from-region visibility problems restricted by a given shaft of lines. Ideally, the shafts are determined so that the {\em visibility complexity} in each shaft is bounded by a given constant. This is, however, the from-region visibility problem itself. Instead, we can use an estimate based on the number of polygons in the given shaft. First, a hemicube is erected over the source polygon and used to construct five shafts corresponding to the five faces of the hemicube. The shafts are then subdivided as follows: if a given shaft intersects more than a predefined number of polygons, the corresponding hemicube face is split in two and the new shafts are processed recursively. Visibility in each shaft is evaluated using all polygons intersecting the shaft. When the computation for the shaft is finished, the occlusion tree is destroyed. The algorithm then proceeds with the next shaft until all generated shafts have been processed. See Figure~\ref{fig:rot3_shafts} for an example of a regular subdivision of the problem-relevant line set into 80 shafts.
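A minimal sketch of this recursive shaft subdivision follows; \texttt{Shaft}, \texttt{collectPolygons}, \texttt{splitHemicubeFace} and \texttt{computeVisibility} are hypothetical stand-ins for the geometric routines described above:

\begin{verbatim}
#include <vector>

struct Polygon3;
struct Shaft { /* hemicube (sub)face and source polygon */ };
struct OcclusionTree { /* root node, see Section above */ };

std::vector<Polygon3 *> collectPolygons(const Shaft &s);
void splitHemicubeFace(const Shaft &s, Shaft *a, Shaft *b);
void computeVisibility(OcclusionTree &t,
                       const std::vector<Polygon3 *> &polys);

void processShaft(const Shaft &shaft, size_t maxPolygons) {
  std::vector<Polygon3 *> polys = collectPolygons(shaft);
  if (polys.size() > maxPolygons) {      // too complex: subdivide
    Shaft a, b;
    splitHemicubeFace(shaft, &a, &b);    // split hemicube face in two
    processShaft(a, maxPolygons);
    processShaft(b, maxPolygons);
    return;
  }
  OcclusionTree tree;                    // occlusion sweep restricted
  computeVisibility(tree, polys);        // to the polygons in the shaft
}                                        // tree is destroyed here
\end{verbatim}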
This technique shares a similarity with the algorithm recently published by Nirenstein et al.~\cite{nirenstein:02:egwr} that uses shafts between the source polygon and each polygon in the scene. Our technique provides the following benefits:

\begin{itemize}
\item A shaft can contain thousands of polygons, which allows us to better exploit the coherence of visibility. The hierarchical line space subdivision allows efficient searches and updates.
\item The method is applicable even in scenes with large differences in polygon sizes. Unlike the method of Nirenstein et al.~\cite{nirenstein:02:egwr}, the proposed technique generates shafts that balance the use of coherence against the memory requirements. Due to the hierarchical subdivision of the problem-relevant line set, our method can handle the case when there is a huge number of polygons in a single shaft between two large scene polygons.
\item Visibility in the whole shaft is described in a unified data structure, which can serve for further computations (e.g., extraction of event surfaces, hierarchical representation of visibility, etc.).
\end{itemize}

\subsection{Occluder sorting}

Occluder sorting aims to increase the accuracy of the front-to-back ordering determined by the approximate occlusion sweep. Higher accuracy of the ordering decreases the number of late merges of blocker polyhedra in the tree. Recall that a merge extends the occlusion tree by a blocker polyhedron corresponding to an occluder that hides an already processed occluder.

The occluder sorting is applied when reaching a leaf node of the spatial hierarchy during the approximate occlusion sweep. Given a leaf node $N$, the occluder sorting proceeds as follows:

\begin{enumerate}
\item Determine a ray $r$ piercing the center of the source polygon $P_S$ and the node $N$.
\item Compute the intersections of $r$ with all supporting planes of the polygons associated with $N$.
\item Sort the polygons according to the front-to-back order of the corresponding intersection points along $r$.
\end{enumerate}

The occluder sorting provides an exact priority order within the given leaf provided that the polygons are pierced by the computed ray $r$ and do not mutually intersect.
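The sorting step reduces to an ordinary comparison sort keyed on the ray parameter of each supporting plane, as in the following sketch (\texttt{Vec3}, \texttt{Ray}, \texttt{makeRay}, \texttt{supportingPlane} and \texttt{planeParam} are hypothetical helpers):

\begin{verbatim}
#include <algorithm>
#include <vector>

struct Polygon3;
struct Plane3 { float a, b, c, d; };
struct Vec3 { float x, y, z; };
struct Ray { Vec3 origin, dir; };

Ray makeRay(const Vec3 &origin, const Vec3 &target);
Plane3 supportingPlane(const Polygon3 &p);
// parameter t of the intersection of ray r with the given plane
float planeParam(const Ray &r, const Plane3 &plane);

void sortOccluders(std::vector<Polygon3 *> &polys,
                   const Vec3 &sourceCenter, const Vec3 &nodeCenter) {
  Ray r = makeRay(sourceCenter, nodeCenter);  // ray piercing P_S and N
  std::sort(polys.begin(), polys.end(),
            [&r](const Polygon3 *a, const Polygon3 *b) {
              return planeParam(r, supportingPlane(*a)) <
                     planeParam(r, supportingPlane(*b));
            });
}
\end{verbatim}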
\subsection{Visibility estimation}

The visibility estimation aims to eliminate the polyhedron enumeration in the leaves of the occlusion tree. If we find out that the currently processed polygon is potentially visible in the given leaf node (it is an $out$-leaf, or it is an $in$-leaf and the positional test reports the polygon as the closest), we estimate its visibility by shooting random rays. We can use the current occlusion tree to perform the ray shooting in line space. We select a random ray connecting the source polygon and the currently processed polygon. This ray is mapped to a \plucker point and the point is tested for inclusion in the halfspaces defined by the \plucker planes splitting the polyhedron on the path from the root to the given leaf. If the point is contained in all tested halfspaces, the corresponding ray is unoccluded and the algorithm inserts the blocker polyhedron into the tree. Otherwise, it continues by selecting another random ray until a predefined number of rays has been tested.

The insertion of the blocker polyhedron deserves further discussion. Since the polyhedron was not enumerated, we do not know which of its bounding hyperplanes really bound the polyhedron fragment and which are redundant for the given leaf. Considering all hyperplanes defining the blocker polyhedron could lead to the inclusion of many redundant nodes in the tree. We use a simple conservative algorithm that tests whether a given hyperplane bounds the (unconstructed) polyhedron fragment. For each hyperplane $H_i$ bounding the blocker polyhedron, the algorithm tests the position of the extremal lines embedded in this hyperplane with respect to each hyperplane splitting the polyhedron. If the mappings of all extremal lines lie in the same open halfspace defined by a splitting hyperplane, hyperplane $H_i$ does not bound the current polyhedron fragment and thus can be culled.
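The core of the ray test is the mapping of a ray to its \plucker point and the sign of the permuted inner product with each splitting hyperplane. A self-contained sketch, using one common sign convention for \plucker coordinates (conventions differ between texts, so this is an assumption):

\begin{verbatim}
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
  return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Plucker point of the ray through p and q: direction and moment.
void pluckerPoint(Vec3 p, Vec3 q, float pi[6]) {
  Vec3 d = sub(q, p), m = cross(p, q);
  pi[0] = d.x; pi[1] = d.y; pi[2] = d.z;
  pi[3] = m.x; pi[4] = m.y; pi[5] = m.z;
}

// Permuted inner product deciding on which side of hyperplane b
// the mapped point a lies.
float side(const float a[6], const float b[6]) {
  return a[0]*b[3] + a[1]*b[4] + a[2]*b[5]
       + a[3]*b[0] + a[4]*b[1] + a[5]*b[2];
}

// Splitting hyperplane on the root-to-leaf path with its sign.
struct SignedPlane { float coef[6]; float sign; };

// True if the mapped ray lies in all halfspaces bounding the fragment,
// i.e., the sampled ray reaches the given leaf unoccluded.
bool rayReachesLeaf(const float ray[6],
                    const std::vector<SignedPlane> &path) {
  for (size_t i = 0; i < path.size(); ++i)
    if (side(ray, path[i].coef) * path[i].sign < 0.0f)
      return false;
  return true;
}
\end{verbatim}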
\subsection{Visibility merging}

Visibility merging aims to propagate the visibility classifications from the leaves of the occlusion tree up into the interior nodes of the hierarchy. Visibility merging is connected with the approximate occlusion sweep, which simplifies the treatment of the depth of the scene polygons. The algorithm classifies an interior node of the occlusion tree as occluded ($in$) if the following conditions hold:

\begin{itemize}
\item Both its children are $in$-nodes.
\item The occluders associated with both children are strictly closer than the closest unswept node of the spatial hierarchy.
\end{itemize}

The first condition ensures that both child nodes correspond to occluded sets of rays. The second condition ensures that any unprocessed occluder is behind the occluders associated with the children. Using this procedure, the effective depth of the occlusion tree becomes progressively smaller as more and more rays become occluded.

\section{Conservative Verifier}

The conservative verifier is a simplified variant of the error bound verifier described in the next section: the shaft subdivision is performed only down to a predefined depth and the error estimation step (step~4 below) is omitted. A branch of the subdivision can then only be terminated as invisible by a {\em strong occluder}, i.e., a single convex occluder that blocks the whole shaft. Compared to a strong-occluder test applied to a single shaft per object, the hierarchical shaft subdivision should deliver considerably higher accuracy.

\section{Error Bound Verifier}

The error bound verifier classifies an object/view cell pair either as visible, or as invisible together with a bound on the maximal error committed by this classification. Several alternative formulations, such as maintaining a triangulated front visible layer, turned out to be problematic; the method described below avoids these problems.

The basic idea is the following: we use a two-plane parametrization to describe all lines intersecting the object and the view cell. Line sets are described by axis-aligned rectangles on the two planes, which allows us to construct convex shafts that are always bounded by only four lines. After determining the initial rectangles bounding the whole line set, we perform a recursive subdivision as follows:

\begin{enumerate}
\item Cast the four corner rays of the shaft and determine {\em all} their intersections with the occluding objects.
\item If any corner ray is unoccluded, terminate the whole test with the classification VISIBLE and extend the PVS of the object by the new view cell (and possibly further view cells if the ray continues beyond them). The verification then proceeds with the view cells that now form the invisible boundary.
\item If all four rays pierce the same convex mesh (or the same mesh triangle in the case of non-convex meshes), terminate this branch of the subdivision with the classification INVISIBLE. Note that the terminating mesh need not be the front-most one.
\item Triangulate the depths of the shaft as seen from the view cell side and compute the maximal projected area of the two resulting triangles. This determines the maximal pixel error we can commit by terminating this branch as invisible. Ideally, we select those ray intersections which have the most similar depths in order to tighten the estimate, i.e., the ``depth triangles'' need not be formed by the first visible layer.
\item If step~4 does not terminate the branch, subdivide either the view cell rectangle or the object rectangle, depending on the sample depths and the computed error.
\end{enumerate}

The triangulation in step~4 serves the following purpose: imagine that the four corner rays of a shaft intersect some object(s) in the scene. Connecting the four intersection points yields two triangles. These triangles do not exist in the scene, but they define the largest possible windows through which we can see further along the shaft. It is then easy to find two points at the origin of the shaft which maximize the projected areas of the two triangles; the sum of these areas gives a conservative estimate of the solid angle error committed by not subdividing the shaft any further. Note that the nearer the intersection points lie to the view cell, the larger the projected areas and thus the error bound.

The method is related to area subdivision algorithms known from from-point visibility; such an algorithm does not only cast corner rays but also maintains lists of the triangles relevant for the given area. The basic idea is, however, the same: subdivide the set of rays until there is a trivial decision (a single convex object intersected by all corner rays) or the error is sufficiently small. The second criterion differs from the from-point setup, since the error cannot be evaluated in a single projection plane.
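The recursive shaft test can be summarized by the following sketch; \texttt{Rect} denotes an axis-aligned rectangle on one of the two parametrization planes, and all helpers (\texttt{makeCornerRays}, \texttt{intersectAll}, \texttt{commonConvexBlocker}, \texttt{maxProjectedArea}, \texttt{preferViewCellSplit}, \texttt{splitRect}) are hypothetical stand-ins for the steps described above:

\begin{verbatim}
#include <vector>

struct Polygon3;
struct Vec3 { float x, y, z; };
struct Ray { Vec3 origin, dir; };
struct Rect { float xmin, ymin, xmax, ymax; };
struct Hit { float depth; const Polygon3 *object; };
typedef std::vector<Hit> HitList;   // all occluder hits along one ray

void makeCornerRays(const Rect &v, const Rect &o, Ray corner[4]);
HitList intersectAll(const Ray &r);
bool commonConvexBlocker(const HitList hits[4]);               // step 3
float maxProjectedArea(const Rect &v, const HitList hits[4]);  // step 4
bool preferViewCellSplit(const HitList hits[4]);     // step 5 heuristic
void splitRect(const Rect &r, Rect *a, Rect *b);

enum Result { VISIBLE, INVISIBLE };

Result testShaft(const Rect &viewRect, const Rect &objRect,
                 float maxError) {
  Ray corner[4];
  makeCornerRays(viewRect, objRect, corner);
  HitList hits[4];
  for (int i = 0; i < 4; ++i) {
    hits[i] = intersectAll(corner[i]);          // step 1
    if (hits[i].empty()) return VISIBLE;        // step 2
  }
  if (commonConvexBlocker(hits)) return INVISIBLE;            // step 3
  if (maxProjectedArea(viewRect, hits) < maxError)
    return INVISIBLE;                           // step 4: bounded error
  Rect a, b;                                    // step 5: subdivide
  if (preferViewCellSplit(hits)) {
    splitRect(viewRect, &a, &b);
    return testShaft(a, objRect, maxError) == VISIBLE
         ? VISIBLE : testShaft(b, objRect, maxError);
  }
  splitRect(objRect, &a, &b);
  return testShaft(viewRect, a, maxError) == VISIBLE
       ? VISIBLE : testShaft(viewRect, b, maxError);
}
\end{verbatim}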
\section{Summary}
\label{sec:rot3d_summary}

This chapter presented a new method for computing from-region visibility in polygonal scenes. The method is based on the concepts of line space subdivision, the approximate occlusion sweep, and hierarchical visibility tests. The key idea is a hierarchical subdivision of the problem-relevant line set using \plucker coordinates and the occlusion tree. \plucker coordinates allow us to perform operations on sets of lines by means of set-theoretical operations on 5D polyhedra. The occlusion tree is used to maintain a union of the polyhedra that represent the lines occluded from the given region (polygon).

We discussed the relation of sets of lines in 3D and polyhedra in \plucker coordinates. We proposed a general size measure for a set of lines described by a blocker polyhedron and a size measure designed for the computation of PVS. The chapter presented two algorithms for the construction of the occlusion tree by incremental insertion of blocker polyhedra. The occlusion tree was used to test the visibility of a given polygon or region with respect to the source polygon/region. We proposed several optimizations of the algorithm that make the approach applicable to large scenes. The chapter discussed three possible applications of the method: discontinuity meshing, visibility culling, and occluder synthesis.

The implementation of the method was evaluated on scenes consisting of randomly generated triangles, structured scenes, and a real-world urban model. The evaluation was focused on the application of the method for PVS computation. By computing a PVS in a model of the city of Graz, it was shown that the approach is suitable for visibility preprocessing in urban scenes. The principal advantage of the method is that it does not rely on the various tuning parameters that characterize many conservative or approximate algorithms. On the other hand, the exactness of the method results in higher computational demands and implementation effort.