\chapter{Global Visibility Sampling}
\label{chap:sampling}

The proposed visibility preprocessing framework consists of two major steps.

\begin{itemize}
\item The first step is an aggressive visibility sampling which gives an
initial estimate of global visibility in the scene. The sampling itself
involves several strategies which will be described below. The important
property of the aggressive sampling step is that it provides a fast
progressive solution to global visibility and thus it can be easily
integrated into the game development cycle. The aggressive sampling
terminates when the average contribution of new ray samples falls below a
predefined threshold.

\item The second step is mutual visibility verification. This step turns the
previous aggressive visibility solution into either an exact, a conservative,
or an error-bound aggressive solution. The choice of the particular verifier
is left to the user, who can select the one best suited to the scene, the
application context, and the time constraints. For example, in scenes like a
forest, error-bound aggressive visibility can be the best compromise between
the resulting size of the PVS (and thus the frame rate) and the visual
quality. An exact or conservative algorithm can, however, be chosen for urban
scenes, where the omission of even small objects can be more distracting for
the user. The mutual visibility verification will be described in the next
chapter.
\end{itemize}

In traditional visibility preprocessing the view space is subdivided into
view cells, and for each view cell the set of visible objects --- the
potentially visible set (PVS) --- is computed. This framework has been used
for conservative, aggressive, and exact algorithms. We propose a different
strategy which has several advantages for sampling-based aggressive
visibility preprocessing. The strategy is based on the following fundamental
ideas:

\begin{itemize}
\item Compute progressive global visibility instead of sequential
from-region visibility.
\item Exchange the roles of view cells and objects for some parts of the
computation.
\end{itemize}

Both these points will be addressed in this chapter in more detail.

\section{Related work}
\label{VFR3D_RELATED_WORK}

Below we briefly discuss the related work on visibility preprocessing in
several application areas. In particular we focus on computing from-region
visibility, which has been at the core of most previous visibility
preprocessing techniques.

\subsection{Aspect graph}

The first algorithms dealing with from-region visibility belong to the area
of computer vision. The {\em aspect
graph}~\cite{Gigus90,Plantinga:1990:RTH,Sojka:1995:AGT} partitions the view
space into cells that group viewpoints from which the projection of the
scene is qualitatively equivalent. The aspect graph is a graph describing
the view of the scene (aspect) for each cell of the partitioning. The major
drawback of this approach is that for polygonal scenes with $n$ polygons
there can be $\Theta(n^9)$ cells in the partitioning for unrestricted view
space. A {\em scale space} aspect graph~\cite{bb12595,bb12590} improves the
robustness of the method by merging similar features according to a given
scale.

\subsection{Potentially visible sets}

In the computer graphics community, Airey~\cite{Airey90} introduced the
concept of {\em potentially visible sets} (PVS). Airey assumes the existence
of a natural subdivision of the environment into cells. For models of
building interiors these cells roughly correspond to rooms and corridors.
For each cell, the PVS is formed by the cells visible from any point of that
cell. Airey uses ray shooting to approximate visibility between the cells of
the subdivision, and so the computed PVS is not conservative.

This concept was further elaborated by Teller et
al.~\cite{Teller92phd,Teller:1991:VPI} to establish a conservative PVS. The
PVS is constructed by testing the existence of a stabbing line through a
sequence of polygonal portals between cells. Teller proposed an exact
solution to this problem using \plucker coordinates~\cite{Teller:1992:CAA}
and a simpler and more robust conservative solution~\cite{Teller92phd}. The
portal-based methods are well suited to static, densely occluded
environments with a particular structure. For less structured models they
can face a combinatorial explosion of complexity~\cite{Teller92phd}.

Yagel and Ray~\cite{Yagel95a} present an algorithm that uses a regular
spatial subdivision. Their approach is not sensitive to the structure of the
model in terms of complexity, but its efficiency is limited by the discrete
representation of the scene.

Plantinga proposed a PVS algorithm based on a conservative viewspace
partitioning obtained by evaluating visual
events~\cite{Plantinga:1993:CVP}. The construction of viewspace
partitionings was further studied by Chrysanthou et
al.~\cite{Chrysanthou:1998:VP}, Cohen-Or et al.~\cite{cohen-egc-98}, and
Sadagic~\cite{Sadagic}.

Sudarsky and Gotsman~\cite{Sudarsky:1996:OVA} proposed an output-sensitive
visibility algorithm for dynamic scenes. Cohen-Or et al.~\cite{COZ-gi98}
developed a conservative algorithm determining visibility of an
$\epsilon$-neighborhood of a given viewpoint that was used for network-based
walkthroughs.

Conservative algorithms for computing PVS developed by Durand et
al.~\cite{EVL-2000-60} and Schaufler et al.~\cite{EVL-2000-59} make use of
several simplifying assumptions to avoid the usage of 4D data structures.
Wang et al.~\cite{Wang98} proposed an algorithm that precomputes visibility
within beams originating from a restricted viewpoint region. The approach is
very similar to the 5D subdivision for ray
tracing~\cite{Simiakakis:1994:FAS} and so it exhibits similar problems,
namely inadequate memory and preprocessing complexities. Specialized
algorithms for computing PVS in \m25d scenes were proposed by Wonka et
al.~\cite{wonka00}, Koltun et al.~\cite{koltun01}, and Bittner et
al.~\cite{bittner:2001:PG}.

The exact mutual visibility method presented later in the report is based on
methods exploiting \plucker coordinates of
lines~\cite{bittner:02:phd,nirenstein:02:egwr,haumont2005:egsr}. This
approach uses \plucker coordinates to compute visibility in shafts defined
by each polygon in the scene.

\subsection{Rendering of shadows}

The from-region visibility problems include the computation of soft shadows
due to an area light source. Continuous algorithms for real-time soft shadow
generation were studied by Chin and Feiner~\cite{Chin:1992:FOP}, Loscos and
Drettakis~\cite{Loscos:1997:IHS}, Chrysanthou~\cite{Chrysantho1996a}, and
Chrysanthou and Slater~\cite{Chrysanthou:1997:IUS}. Discrete solutions have
been proposed by Nishita~\cite{Nishita85}, Brotman and
Badler~\cite{Brotman:1984:GSS}, and Soler and Sillion~\cite{SS98}. An exact
algorithm computing the antipenumbra of an area light source was developed
by Teller~\cite{Teller:1992:CAA}.

\subsection{Discontinuity meshing}

Discontinuity meshing is used in the context of the radiosity global
illumination algorithm or for computing soft shadows due to area light
sources.
The first approximate discontinuity meshing algorithms were studied by
Campbell~\cite{Campbell:1990:AMG,Campbell91},
Lischinski~\cite{lischinski92a}, and Heckbert~\cite{Heckbert92discon}. More
elaborate methods were developed by
Drettakis~\cite{Drettakis94-SSRII,Drettakis94-FSAAL}, and Stewart and
Ghali~\cite{Stewart93-OSACS,Stewart:1994:FCSb}. These methods are capable of
creating a complete discontinuity mesh that encodes all visual events
involving the light source.

Classical radiosity is based on an evaluation of form factors between two
patches~\cite{Schroder:1993:FFB}. The visibility computation is a crucial
step in the form factor
evaluation~\cite{Teller:1993:GVA,Haines94,Teller:1994:POL,Nechvile:1996:FFE,Teichmann:WV}.
Similar visibility computations take place in the scope of hierarchical
radiosity
algorithms~\cite{Soler:1996:AEB,Drettakis:1997:IUG,Daubert:1997:HLS}.

\subsection{Global visibility}

The aim of {\em global visibility} computations is to capture and describe
visibility in the whole scene~\cite{Durand:1996:VCN}. Global visibility
algorithms are typically based on some form of {\em line space subdivision}
that partitions lines or rays into equivalence classes according to their
visibility classification. Each class corresponds to a continuous set of
rays with a common visibility classification. The techniques differ mainly
in how the line space subdivision is computed and maintained. A practical
application of most of the proposed global visibility structures to 3D
scenes is still an open problem. Prospectively, these techniques provide an
elegant method for ray shooting acceleration --- the ray shooting problem
can be reduced to a point location in the line space subdivision.

Pocchiola and Vegter introduced the visibility complex~\cite{pv-vc-93} that
describes global visibility in 2D scenes. The visibility complex has been
applied to solve various 2D visibility
problems~\cite{r-tsvcp-95,r-wvcav-97,r-dvpsv-97,Orti96-UVCRC}. The approach
was generalized to 3D by Durand et al.~\cite{Durand:1996:VCN}. Nevertheless,
no implementation of the 3D visibility complex is currently known.

Durand et al.~\cite{Durand:1997:VSP} introduced the {\em visibility
skeleton}, a graph describing a skeleton of the 3D visibility complex. The
visibility skeleton was verified experimentally and the results indicate
that its $O(n^4\log n)$ worst case complexity is much better in practice.
Pu~\cite{Pu98-DSGIV} developed a method similar to the one presented later
in this report. He uses a BSP tree in \plucker coordinates to represent a
global visibility map for a given set of polygons. The computation is
performed considering all rays piercing the scene, and so the method
exhibits unacceptable memory complexity even for scenes of moderate size.
Recently, Duguet and Drettakis~\cite{duguet:02:sig} developed a robust
variant of the visibility skeleton algorithm that uses robust
epsilon-visibility predicates.

Discrete methods aiming to describe visibility in a 4D data structure were
presented by Chrysanthou et al.~\cite{chrysanthou:cgi:98} and Blais and
Poulin~\cite{blais98a}. These data structures are closely related to the
{\em lumigraph}~\cite{Gortler:1996:L,buehler2001} or {\em light
field}~\cite{Levoy:1996:LFR}. An interesting discrete hierarchical
visibility algorithm for two-dimensional scenes was developed by Hinkenjann
and M\"uller~\cite{EVL-1996-10}.
One of the biggest problems of the discrete solution space data structures
is the memory consumption required to achieve a reasonable accuracy.
Prospectively, scene complexity measures~\cite{Cazals:3204:1997} provide a
useful estimate of the required sampling density and the size of the
solution space data structure.

\subsection{Other applications}

Certain from-point visibility problems determining visibility over a period
of time can be transformed to a static from-region visibility problem. Such
a transformation is particularly useful for antialiasing
purposes~\cite{grant85a}. From-region visibility can also be used in the
context of simulating sound propagation~\cite{Funkhouser98}. Sound
propagation algorithms typically require a lower resolution than the
algorithms simulating the propagation of light, but they need to account for
attenuation, reflection, and time delays.

\section{Algorithm Description}

This section first describes the setup of the global visibility sampling
algorithm. In particular, we describe the view cell representation and the
novel concept of from-object based visibility. Then we outline the different
visibility sampling strategies.

\subsection{View Space Partitioning}

Before the visibility computation itself, we subdivide the space of all
possible viewpoints and viewing directions into view cells. A good partition
of the scene into view cells is an essential part of every visibility
system. If the view cells are chosen too large, the PVS (potentially visible
set) of a view cell is too large for efficient culling. If they are chosen
too small or without consideration, then neighboring view cells contain
redundant PVS information, hence increasing the PVS computation and storage
costs for the scene. In the left image of
Figure~\ref{fig:vienna_viewcells} we see view cells of the Vienna model,
generated by triangulation of the streets. In the closeup (right image) we
can see that each triangle is extruded to a given height to form a view cell
prism.

\begin{figure}[htb]
\centerline{
\includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_01}
\includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_07}
}
\caption{(left) Vienna view cells. (right) The view cells are prisms with a
triangular base.}
\label{fig:vienna_viewcells}
\end{figure}

In order to use view cells efficiently with our sampling method, we require
a view cell representation which is

\begin{itemize}
\item optimized for view cell--ray intersection,
\item flexible, i.e., able to represent arbitrary geometry, and
\item naturally suited for a hierarchical approach.
%(i.e., there is a root view cell containing all others)
\end{itemize}

We meet these requirements by employing spatial subdivisions (i.e., kd-trees
and BSP trees) to store the view cells. The initial view cells are
associated with the leaves. The reason why we chose BSP trees as the view
cell representation is that they are very flexible: view cells forming
arbitrary closed meshes can be closely matched. Therefore we are able to
find a view cell with only a few ray-plane intersections. Furthermore, the
hierarchical structure can be exploited as a hierarchy of view cells:
interior nodes form larger view cells containing their children, and, if
necessary, a leaf can easily be subdivided into smaller view cells.
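The central operation on this representation is finding all view cells
pierced by a ray segment. The following C++ sketch illustrates how such a
traversal could look; it is a minimal illustration under the assumption of a
simple BSP node layout, and all type and function names are ours rather than
those of an actual implementation.

\begin{verbatim}
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { float nx, ny, nz, d; };    // plane equation: n.p + d = 0
struct ViewCell;                          // opaque view cell record

struct BspNode {
    Plane     plane;                      // splitting plane (interior nodes)
    BspNode*  front    = nullptr;         // subtree on the positive side
    BspNode*  back     = nullptr;         // subtree on the negative side
    ViewCell* viewCell = nullptr;         // associated view cell (leaves)
    bool isLeaf() const { return !front && !back; }
};

static float Side(const Plane& p, const Vec3& v)
{
    return p.nx * v.x + p.ny * v.y + p.nz * v.z + p.d;
}

// Collect the view cells pierced by the segment [a,b]: recurse into the
// child subtrees, splitting the segment at the plane when it straddles.
static void CollectPiercedViewCells(const BspNode* node, Vec3 a, Vec3 b,
                                    std::vector<ViewCell*>& out)
{
    if (!node) return;
    if (node->isLeaf()) {
        if (node->viewCell) out.push_back(node->viewCell);
        return;
    }
    const float sa = Side(node->plane, a);
    const float sb = Side(node->plane, b);
    if (sa >= 0.0f && sb >= 0.0f) {                  // entirely in front
        CollectPiercedViewCells(node->front, a, b, out);
    } else if (sa < 0.0f && sb < 0.0f) {             // entirely behind
        CollectPiercedViewCells(node->back, a, b, out);
    } else {                                         // straddles: split
        const float t = sa / (sa - sb);              // intersection parameter
        const Vec3 m { a.x + t * (b.x - a.x),
                       a.y + t * (b.y - a.y),
                       a.z + t * (b.z - a.z) };
        CollectPiercedViewCells(sa >= 0.0f ? node->front : node->back,
                                a, m, out);
        CollectPiercedViewCells(sa >= 0.0f ? node->back : node->front,
                                m, b, out);
    }
}
\end{verbatim}

Each interior node costs a single plane evaluation, so the traversal reaches
the pierced leaves with only a few ray-plane intersections and enumerates
all view cells along the segment in one pass.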
Currently we consider three different approaches to generate the initial
view cell BSP tree. The third method is not restricted to BSP trees, but BSP
trees are preferred because of their greater flexibility.

\begin{table}
\centering \footnotesize
\begin{tabular}{|l|c|c|}
\hline\hline
View cells & Vienna selection & Vienna full \\\hline\hline
\#view cells & 105 & 16447 \\\hline\hline
\#input polygons & 525 & 82235 \\\hline\hline
BSP tree generation time & 0.016s & 10.328s \\\hline\hline
view cell insertion time & 0.016s & 7.984s \\\hline\hline
\#nodes & 1137 & 597933 \\\hline\hline
\#interior nodes & 568 & 298966\\\hline\hline
\#leaf nodes & 569 & 298967\\\hline\hline
\#splits & 25 & 188936\\\hline\hline
max tree depth & 13 & 27\\\hline\hline
avg tree depth & 9.747 & 21.11\\\hline\hline
\end{tabular}
\caption{Statistics for the view cell BSP tree on the full Vienna view cell
set and on a selection of the Vienna view cells.}\label{tab:viewcell_bsp}
\end{table}

\begin{itemize}

\item We use a number of input view cells given in advance. Any closed mesh
can serve as an input view cell; the only requirement is that no two view
cells overlap. First the view cell polygons are extracted, and the BSP tree
is built from these polygons using global optimizations like tree balancing
or minimizing the number of splits. Then the view cells are inserted into
the tree one after the other in order to find the leaves in which they are
contained. The polygons of a view cell are filtered down the tree, guiding
the insertion process. Once we reach a leaf and there are no more polygons
left, we terminate the tree subdivision. If we are on the inside of the last
split plane (i.e., the leaf represents the inside of the view cell), we
associate the leaf with the view cell (i.e., add a pointer to the view
cell). One input view cell can be associated with many leaves, whereas each
leaf has only one view cell. Some statistics on using this method on the
Vienna view cell set are given in table~\ref{tab:viewcell_bsp}.

However, sometimes a good set of view cells is not available, or the scene
changes frequently and the designer does not want to create new view cells
after each change. In such a case, one of the following two methods, which
generate view cells automatically, should be chosen.

\item We apply a BSP tree subdivision to the scene geometry. Whenever the
subdivision terminates in a leaf, a view cell is associated with the leaf
node. This simple approach is justified because it places the view cell
borders along discontinuities in the visibility function.

\item The view cell generation can be guided by the sampling process. We
start with a single initial view cell representing the whole space. If a
given threshold is reached during the preprocessing (e.g., the view cell is
pierced by too many rays, resulting in a large PVS), the view cell is
subdivided into smaller cells using some criteria.

In order to evaluate the best split plane, we first have to define the
characteristics of a good view cell partition: the view cells should be
quite large, while their PVS stays rather small, and the PVSs of any two
view cells should be as distinct as possible (otherwise the cells could be
merged into a larger view cell). E.g., for a building, the perfect view
cells are usually the single rooms connected by portals.

Hence we can define some useful criteria for the split (a sketch combining
these criteria into a split cost is given after this list): 1) the number of
rays should be roughly equal among the new view cells; 2) the split plane
should be chosen in a way that the ray sets are disjoint, i.e., the number
of rays contributing to more than one cell should be minimized; 3) for BSP
trees, the split plane should be aligned with scene geometry which is large
enough to contribute significant occlusion --- this criterion can be
naturally combined with the second one. As a termination criterion we can
choose a minimum number of piercing rays or PVS entries, or a maximal tree
depth.

\end{itemize}
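To make the criteria above concrete, the following sketch shows one possible
way of combining criteria 1) and 2) into a single cost for a candidate split
plane. The ray representation, the classification, and the weights are
illustrative assumptions, not values taken from a tuned system; criterion 3)
would additionally restrict the candidate planes to those supporting large
scene polygons.

\begin{verbatim}
#include <cstdlib>
#include <vector>

struct Plane { float nx, ny, nz, d; };
struct Ray   { float ox, oy, oz;      // origin
               float dx, dy, dz;      // direction
               float tmax; };         // parameter of the termination point

// Classify a ray segment against a candidate split plane:
// +1 = front cell only, -1 = back cell only, 0 = contributes to both.
static int ClassifyRay(const Plane& p, const Ray& r)
{
    const float sa = p.nx * r.ox + p.ny * r.oy + p.nz * r.oz + p.d;
    const float sb = p.nx * (r.ox + r.tmax * r.dx)
                   + p.ny * (r.oy + r.tmax * r.dy)
                   + p.nz * (r.oz + r.tmax * r.dz) + p.d;
    if (sa >= 0.0f && sb >= 0.0f) return +1;
    if (sa <  0.0f && sb <  0.0f) return -1;
    return 0;
}

// Lower cost is better: penalize an unbalanced distribution of rays
// (criterion 1) and rays contributing to both new cells (criterion 2).
static float SplitCost(const Plane& p, const std::vector<Ray>& rays,
                       float balanceWeight = 1.0f, float splitWeight = 2.0f)
{
    if (rays.empty()) return 0.0f;
    int front = 0, back = 0, both = 0;
    for (const Ray& r : rays) {
        const int c = ClassifyRay(p, r);
        if (c > 0) ++front; else if (c < 0) ++back; else ++both;
    }
    const float n = static_cast<float>(rays.size());
    return balanceWeight * std::abs(front - back) / n
         + splitWeight   * both / n;
}
\end{verbatim}

The candidate plane with the lowest cost would then be selected for the
subdivision of the view cell.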
% In the future we aim to extend the view cell construction by using
% feedback from the PVS computation: the view cells which contain many
% visibility changes will be subdivided further and neighboring view
% cells with similar PVSs will be merged. We want to gain a more precise
% information about visibility by selectively storing rays with the
% view cells and computing visibility statistics for subsets of rays
% which intersect subregions of the given view cell.

\subsection{From-object based visibility}

Our framework is based on the idea of sampling visibility by casting rays
through the scene and collecting their contributions. A visibility sample is
computed by casting a ray from an object towards the view cells and
computing the nearest intersection with the scene objects. All view cells
pierced by the ray segment can see the object, and thus the object can be
added to their PVS. If the ray is terminated at another scene object, the
PVS of the pierced view cells can also be extended by this terminating
object. Thus a single ray can make a number of contributions to the
progressively computed PVSs. A ray sample piercing $n$ view cells which is
bounded by two distinct objects contributes at most $2n$ entries to the
current PVSs. Apart from this performance benefit there is also a benefit in
terms of sampling density: assuming that the view cells are usually much
larger than the objects (which is typically the case), starting the sampling
deterministically from the objects increases the probability of small
objects being captured in the PVS.

At this phase of the computation we not only start the samples from the
objects, but we also store the PVS information centered at the objects.
Instead of storing a PVS consisting of objects visible from view cells,
every object maintains a PVS consisting of potentially visible view cells.
While these representations contain exactly the same information, as we
shall see later, the object-centered PVS is better suited for the importance
sampling phase as well as the visibility verification phase.

\subsection{Naive Randomized Sampling}

The naive global visibility sampling works as follows: in every pass, the
algorithm visits the scene objects sequentially. For every scene object we
randomly choose a point on its surface. Then a ray is cast from the selected
point in a randomly chosen direction (see Figure~\ref{fig:sampling}). We use
a uniform distribution of the ray directions with respect to the half space
given by the surface normal. Using this strategy, the samples are placed
deterministically at every object, with a randomization of the location on
the object surface. The uniformly distributed direction is a simple and fast
strategy to gain initial visibility information.

\begin{figure}%[htb]
\centerline{
\includegraphics[width=0.4\textwidth, draft=\DRAFTFIGS]{figs/sampling}
}
\caption{Three objects and a set of view cells corresponding to leaves of an
axis-aligned BSP tree. The figure depicts several random samples cast from a
selected object (shown in red). Note that most samples contribute to
multiple view cells.}
\label{fig:sampling}
\end{figure}
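A single pass of this strategy can be summarized by the following sketch.
The geometric helpers (RandomSurfacePoint, RandomHemisphereDir, CastRay, and
a segment-based variant of the view cell traversal sketched earlier) are
assumed to be provided by the scene representation; their names are
illustrative.

\begin{verbatim}
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct ViewCell;
struct Object { std::vector<ViewCell*> pvs; };   // object-centered PVS

// Assumed helpers provided by the scene representation (illustrative):
void RandomSurfacePoint(const Object& o, Vec3& point, Vec3& normal);
Vec3 RandomHemisphereDir(const Vec3& normal);    // uniform over half space
Vec3 CastRay(const Vec3& from, const Vec3& dir, Object** hitObject);
void CollectPiercedViewCells(const Vec3& a, const Vec3& b,
                             std::vector<ViewCell*>& out);

// Add a view cell to an object-centered PVS; returns 1 for a new entry.
static int AddToPVS(Object* obj, ViewCell* vc)
{
    auto& pvs = obj->pvs;
    if (std::find(pvs.begin(), pvs.end(), vc) != pvs.end()) return 0;
    pvs.push_back(vc);
    return 1;
}

// One pass of the naive sampling over all scene objects.
static int NaiveSamplingPass(std::vector<Object*>& objects)
{
    int contributions = 0;
    for (Object* obj : objects) {
        Vec3 point, normal;
        RandomSurfacePoint(*obj, point, normal);
        const Vec3 dir = RandomHemisphereDir(normal);
        Object* hit = nullptr;                        // terminating object
        const Vec3 end = CastRay(point, dir, &hit);   // nearest hit or exit
        std::vector<ViewCell*> cells;
        CollectPiercedViewCells(point, end, cells);
        for (ViewCell* vc : cells) {
            contributions += AddToPVS(obj, vc);       // vc sees obj
            if (hit) contributions += AddToPVS(hit, vc); // and the hit object
        }
    }
    return contributions;
}
\end{verbatim}

The returned number of new PVS entries drives the termination test: the
caller tracks the average contribution of the most recent samples and
switches to the next strategy once it falls below the predefined constant.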
The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more objects.
Additionally, every object is sampled many times depending on the number of
passes in which this sampling strategy is applied. This increases the chance
of even a small object being captured in the PVS of the view cells from
which it is visible.

Each ray sample can contribute by associating a number of view cells with
the object from which the sample was cast. If the ray does not leave the
scene, it also contributes by associating the pierced view cells with the
terminating object. Thus, as the ray samples are cast, we can measure the
average contribution of a certain number of the most recent samples. If this
contribution falls below a predefined constant, we move on to the next
sampling strategies, which aim to discover more complicated visibility
relations.

\subsection{Accounting for View Cell Distribution}

The first modification of the basic algorithm accounts for the irregular
distribution of the view cells. Such a case is common, for example, in urban
scenes, where the view cells are mostly distributed in a horizontal
direction and more view cells are placed at denser parts of the city. The
modification involves replacing the uniformly distributed ray direction by
directions distributed according to the local view cell directional density.
This means placing more samples in directions where more view cells are
located: we select a random view cell which lies in the half space given by
the surface normal at the chosen point. We pick a random point inside the
view cell and cast a ray towards this point.

\subsection{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of objects
with respect to a moving viewpoint. In polygonal scenes the events are
defined by event surfaces formed by three distinct scene edges. Depending on
the edge configuration we distinguish between vertex-edge (VE) events and
triple-edge (EEE) events. The VE surfaces are planar, whereas the EEE
surfaces are in general quadric surfaces. To account for these events we
explicitly cast samples which pass along object edges and are directed at
edges and/or vertices of other objects.

The first strategy starts similarly to the sampling methods described above:
we randomly select an object and a point on its surface. Then we randomly
pick an object from its PVS. If we have mesh connectivity information, we
select a random silhouette edge from this object and cast a sample which is
tangent to that object at the selected edge.

The second strategy works as follows: we randomly pick two objects which are
likely to see each other. Then we determine a ray which is tangent to both
objects. For simple meshes such tangent rays can be determined
geometrically; for more complicated ones the determination is again based on
random sampling. The selection of the two objects works as follows (see the
sketch below): first we randomly select the first object and a random
non-empty view cell for which we know that it can see the object. Then we
randomly select an object associated with that view cell as the second
object.
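The following sketch illustrates this selection step, using the
object-centered PVS together with the objects associated with each view
cell; the Rand helper and the container layout are assumptions of the
sketch.

\begin{verbatim}
#include <cstdlib>
#include <utility>
#include <vector>

struct Object;
struct ViewCell { std::vector<Object*> objects; }; // objects seen from cell
struct Object   { std::vector<ViewCell*> pvs; };   // cells seeing the object

static size_t Rand(size_t n) { return std::rand() % n; } // uniform index

// Pick two objects that are likely to see each other: take a random view
// cell known to see the first object, then a random object seen from that
// cell. Returns {a, nullptr} if no candidate is found.
static std::pair<Object*, Object*>
PickMutuallyVisiblePair(std::vector<Object*>& objects)
{
    Object* a = objects[Rand(objects.size())];
    if (a->pvs.empty()) return {a, nullptr};
    ViewCell* vc = a->pvs[Rand(a->pvs.size())];
    if (vc->objects.empty()) return {a, nullptr};
    Object* b = vc->objects[Rand(vc->objects.size())];
    return {a, b};
}
\end{verbatim}

A ray tangent to both selected objects is then constructed; if no candidate
pair is found, the sample is simply skipped.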
\section{Summary}

This chapter described the global visibility sampling algorithm which forms
the core of the visibility preprocessing framework. The global visibility
sampling computes aggressive visibility, i.e., it computes a subset of the
exact PVS for each view cell. The aggressive sampling provides a fast
progressive solution and thus it can be easily integrated into the game
development cycle. The sampling itself involves several strategies which aim
to progressively discover more visibility relationships in the scene.

The methods presented in this chapter give a good initial estimate of
visibility in the scene, which can be verified by the mutual visibility
algorithms described in the next chapter.