\chapter{Global Visibility Sampling Tool}

\section{Introduction}

The proposed visibility preprocessing framework consists of two major steps.

\begin{itemize}
\item The first step is an aggressive visibility sampling which gives an initial estimate of global visibility in the scene. The sampling itself involves several strategies which will be described in section~\ref{sec:sampling}. The important property of the aggressive sampling step is that it provides a fast progressive solution to global visibility and thus it can be easily integrated into the game development cycle.
\item The second step is visibility verification. This step turns the previous aggressive visibility solution into either an exact, a conservative, or an error-bound aggressive solution. The choice of the particular verifier is left to the user in order to select the best one for a particular scene, application context, and time constraints. For example, in scenes such as a forest, an error-bound aggressive visibility solution can be the best compromise between the resulting size of the PVS (and thus the frame rate) and the visual quality. An exact or conservative algorithm can however be chosen for urban scenes, where missing even small objects can be more distracting for the user.
\end{itemize}

In traditional visibility preprocessing the view space is subdivided into view cells and for each view cell the set of visible objects --- the potentially visible set (PVS) --- is computed. This framework has been used for conservative, aggressive, and exact algorithms. We propose a different strategy, which has several advantages for sampling-based aggressive visibility preprocessing. The strategy is based on the following fundamental ideas:
\begin{itemize}
\item Swap the roles of view cells and objects.
\item Compute progressive global visibility instead of sequential from-region visibility.
\end{itemize}

Both of these points are addressed below in more detail.
\subsection{From-object based visibility}

Our framework is based on the idea of sampling visibility by casting rays through the scene and collecting their contributions. A visibility sample is computed by casting a ray from an object towards the view cells and computing the nearest intersection with the scene objects. All view cells pierced by the ray segment can see the object, and thus the object can be added to their PVS. If the ray is terminated at another scene object, the PVS of the pierced view cells can also be extended by this terminating object. Thus a single ray can make a number of contributions to the progressively computed PVSs. A ray sample piercing $n$ view cells which is bounded by two distinct objects contributes at most $2n$ entries to the current PVSs. Apart from this performance benefit there is also a benefit in terms of the sampling density: assuming that the view cells are usually much larger than the objects (which is typically the case), starting the sampling deterministically from the objects increases the probability of small objects being captured in the PVS.

At this phase of the computation we not only start the samples from the objects, but we also store the PVS information centered at the objects. Instead of storing PVSs consisting of objects visible from view cells, every object maintains a PVS consisting of potentially visible view cells. While these representations contain exactly the same information, as we shall see later the object-centered PVS is better suited for the importance sampling phase as well as the visibility verification phase.

\subsection{Basic Randomized Sampling}

The first phase of the sampling works as follows: every pass of the algorithm visits the scene objects sequentially. For every scene object we randomly choose a point on its surface. Then a ray is cast from the selected point in a randomly chosen direction.
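The per-ray bookkeeping of the from-object scheme described above can be sketched as follows. This is a minimal illustration, not part of the described tool: the function name, the dictionary-of-sets representation of the object-centered PVS, and the argument names are all hypothetical.

```python
# Sketch: how a single ray sample updates the object-centered PVSs.
# pvs maps each object to the set of view cells from which it is
# potentially visible. A ray piercing n view cells, bounded by two
# distinct objects, adds at most 2*n entries in total.

def add_ray_contributions(source_obj, term_obj, pierced_cells, pvs):
    """Distribute the contributions of one ray sample and return the
    number of new PVS entries it produced."""
    contributions = 0
    for cell in pierced_cells:
        # Every pierced view cell can see the object the ray started from.
        if cell not in pvs.setdefault(source_obj, set()):
            pvs[source_obj].add(cell)
            contributions += 1
        # If the ray was terminated by another object, that object is
        # also visible from every pierced view cell.
        if term_obj is not None and cell not in pvs.setdefault(term_obj, set()):
            pvs[term_obj].add(cell)
            contributions += 1
    return contributions
```

For a ray bounded by two distinct objects and piercing three view cells, the call contributes six entries; rays that leave the scene (no terminating object) contribute at most one entry per pierced cell.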
We use a uniform distribution of the ray directions with respect to the halfspace given by the surface normal. Using this strategy the samples are deterministically placed at every object, with a randomization of the location on the object surface. The uniformly distributed direction is a simple and fast strategy to gain initial visibility information.

The described algorithm accounts for the irregular distribution of the objects: more samples are placed at locations containing more objects. Additionally, every object is sampled many times depending on the number of passes in which this sampling strategy is applied. This increases the chance of even a small object being captured in the PVS of the view cells from which it is visible.

\subsection{Accounting for View Cell Distribution}

The first modification to the basic algorithm accounts for the irregular distribution of the view cells. Such a case is common for example in urban scenes, where the view cells are mostly distributed in a horizontal direction and more view cells are placed at denser parts of the city. The modification involves replacing the uniformly distributed ray direction by directions distributed according to the local view cell directional density. This means placing more samples in directions where more view cells are located. We select a random view cell which lies in the halfspace given by the surface normal at the chosen point. We then pick a random point inside the view cell and cast a ray towards this point.

\subsection{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of objects with respect to a moving viewpoint. In polygonal scenes the events are defined by event surfaces formed by three distinct scene edges. Depending on the edge configuration we distinguish between vertex-edge (VE) events and triple-edge (EEE) events. The VE event surfaces are planar, whereas the EEE surfaces are in general quadratic surfaces.
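For the planar VE case, the event surface is simply the plane spanned by the vertex and the edge. A minimal sketch, assuming 3D points represented as coordinate tuples (the function names and representation are illustrative, not part of the described tool):

```python
# Sketch: the supporting plane of a vertex-edge (VE) visibility event.
# The event surface is spanned by the vertex v and the edge (e0, e1).

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ve_event_plane(v, e0, e1):
    """Return (normal, d) of the plane through vertex v and edge (e0, e1),
    i.e. all points p satisfying dot(normal, p) + d == 0."""
    normal = cross(sub(e0, v), sub(e1, v))
    d = -dot(normal, v)
    return normal, d
```

The EEE surfaces have no such simple plane representation, which is one reason the VE events are the easier ones to target with explicit samples.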
To account for these events we explicitly place samples passing through the object edges, directed towards edges and/or vertices of other objects.
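One way to realize this placement for the VE case can be sketched as follows: each generated ray starts at a random point on an edge of the sampled object and is aimed at a vertex of another object, so that the ray lies on a VE event surface. All names and the point-tuple representation are hypothetical illustrations, not the actual tool's interface.

```python
import random

def edge_event_samples(edge, target_vertices, count=4):
    """Generate sample rays lying on VE event surfaces.

    edge is a pair of 3D points (tuples); target_vertices is an
    iterable of vertices of other objects. Each returned ray is an
    (origin, direction) pair whose origin is a random point on the
    edge and whose direction points at one of the target vertices.
    """
    (x0, y0, z0), (x1, y1, z1) = edge
    rays = []
    for v in target_vertices:
        for _ in range(count):
            t = random.random()  # random parameter along the edge
            origin = (x0 + t * (x1 - x0),
                      y0 + t * (y1 - y0),
                      z0 + t * (z1 - z0))
            direction = (v[0] - origin[0],
                         v[1] - origin[1],
                         v[2] - origin[2])
            rays.append((origin, direction))
    return rays
```

The generated rays would then be cast and their contributions collected exactly as in the basic randomized sampling phase.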