Changeset 272 for trunk/VUT/doc

Timestamp: 09/15/05 18:17:46
Author: mattausch
Message: added view cell stuff to the report
Location: trunk/VUT/doc/SciReport
Files: 13 edited

  • trunk/VUT/doc/SciReport/figs/basic.eps (r251 → r272)

      %%Title: basic.fig
      %%Creator: fig2dev Version 3.2 Patchlevel 4
    - %%CreationDate: Tue Aug 23 20:55:48 2005
    + %%CreationDate: Thu Sep 15 12:53:12 2005
      %%For: matt@sakura (matt,U-COMPUTERGRAPHIK\matt,S-1-5-21-1259991482-2646404242-2158041560-3508)
      %%BoundingBox: 0 0 793 244
  • trunk/VUT/doc/SciReport/figs/cut.eps (r251 → r272)

      %%Title: cut.fig
      %%Creator: fig2dev Version 3.2 Patchlevel 4
    - %%CreationDate: Tue Aug 23 20:55:48 2005
    + %%CreationDate: Thu Sep 15 12:53:13 2005
      %%For: matt@sakura (matt,U-COMPUTERGRAPHIK\matt,S-1-5-21-1259991482-2646404242-2158041560-3508)
      %%BoundingBox: 0 0 979 330
  • trunk/VUT/doc/SciReport/figs/dummy.eps (r251 → r272)

      %%Title: dummy.fig
      %%Creator: fig2dev Version 3.2 Patchlevel 4
    - %%CreationDate: Tue Aug 23 20:55:48 2005
    + %%CreationDate: Thu Sep 15 12:53:14 2005
      %%For: matt@sakura (matt,U-COMPUTERGRAPHIK\matt,S-1-5-21-1259991482-2646404242-2158041560-3508)
      %%BoundingBox: 0 0 474 443
  • trunk/VUT/doc/SciReport/figs/latency.eps (r251 → r272)

      %%Title: latency.fig
      %%Creator: fig2dev Version 3.2 Patchlevel 4
    - %%CreationDate: Tue Aug 23 20:55:48 2005
    + %%CreationDate: Thu Sep 15 12:53:15 2005
      %%For: matt@sakura (matt,U-COMPUTERGRAPHIK\matt,S-1-5-21-1259991482-2646404242-2158041560-3508)
      %%BoundingBox: 0 0 1153 326
  • trunk/VUT/doc/SciReport/figs/latency2.eps (r251 → r272)

      %%Title: latency2.fig
      %%Creator: fig2dev Version 3.2 Patchlevel 4
    - %%CreationDate: Tue Aug 23 20:55:48 2005
    + %%CreationDate: Thu Sep 15 12:53:16 2005
      %%For: matt@sakura (matt,U-COMPUTERGRAPHIK\matt,S-1-5-21-1259991482-2646404242-2158041560-3508)
      %%BoundingBox: 0 0 992 393
  • trunk/VUT/doc/SciReport/mutual.tex (r269 → r272)

      This boundary is based on the current view cell visibility
      classifications. We assume that there is a defined connectivity
    - between the viewcells which is the case for both BSP-tree or kD-tree
    - based view cells. Then given a PVS consisting of visible viewcells
    + between the view cells, which is the case for both BSP-tree and kD-tree
    + based view cells. Then, given a PVS consisting of visible view cells
      (with respect to an object) we can find all the neighbors of the
    - visible viewcells which are invisible. In order words a view cell
    + visible view cells which are invisible. In other words, a view cell
      belongs to this boundary if it is classified invisible and has at
      least one visible neighbor. If all view cells from this boundary are
    - proven invisible, no other viewcell (behind this boundary) can be
    + proven invisible, no other view cell (behind this boundary) can be
      visible. If the verification determines some view cells as visible,
      the current invisible boundary is extended and the verification is
     
      For the computation of the maximal error due to the current shaft we
    - assume that one tested region is a viewcell, whereas the other is an
    + assume that one tested region is a view cell, whereas the other is an
      object bounding box or cell of the spatial hierarchy. The threshold is
      computed as follows: We first triangulate the farthest intersection
      points in the shaft as seen from the view cell side of the shaft. Then
    - for each computed triangle we calculate a point in the viewcell which
    + for each computed triangle we calculate a point in the view cell which
      maximizes the projected area of the triangle. The conservative
      estimate of the maximal error is then given by a sum of the computed
     
      % The basic idea is the following:

    - % Use 2 plane parametrization for describing all lines intersecting the object and the viewcell. The line sets will be described by aligned rectangles on the two planes which allows to construct convex shafts bounded always by only 4 lines. After determing the initial rectangles bounding the whole line set perform recursive subdivsion as follows:
    + % Use a 2-plane parametrization to describe all lines intersecting the object and the view cell. The line sets are described by aligned rectangles on the two planes, which allows us to construct convex shafts always bounded by only 4 lines. After determining the initial rectangles bounding the whole line set, perform recursive subdivision as follows:

      % 1. Cast the 4 corner rays and determine ALL intersections with the occluding objects

    - % 2. If any ray is unoccluded termite the whole test with VISIBLE and extend the PVS of the object with the new viecell (and possibly more if the rays goes further). Start the verification for the new viewcells in the occluded layer.
    + % 2. If any ray is unoccluded, terminate the whole test with VISIBLE and extend the PVS of the object with the new view cell (and possibly more if the ray goes further). Start the verification for the new view cells in the occluded layer.

      % 3. If all 4 rays pierce the same convex mesh (or mesh triangle for non-convex meshes), terminate this branch with INVISIBLE. Note the terminating mesh need not be the front-most one.

      % Note that here it would be best to select those intersections for the rays which have the most similar depths to better estimate the maximal error, i.e., the "depth triangles" need not be formed by the first visible layer.

    - % 5. If 4 does not terminate subdivide either the viewcell or the object rectangle depending on the sample depths and error computed.
    + % 5. If 4 does not terminate, subdivide either the view cell or the object rectangle depending on the sample depths and error computed.

      % It is actually quite simple to create a conservative variant of the algorithm: subdivide only to a given depth and avoid test 4. Then the samples can only be terminated by a "STRONG occluder" which hides the whole shaft. This is similar to Danny's method, but here a hierarchical shaft subdivision is performed which should deliver much more accuracy.
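    The invisible-boundary rule changed in this file (a view cell belongs to the boundary if it is classified invisible and has at least one visible neighbor) can be sketched roughly as follows. This is an illustrative sketch only, not the report's implementation; the `neighbors` mapping and cell ids are invented.

    ```python
    def invisible_boundary(visible, neighbors):
        """Cells classified invisible that touch at least one visible cell.

        visible:   set of view cell ids currently classified visible
        neighbors: dict mapping each cell id to its adjacent cell ids
                   (the 'defined connectivity' between view cells)
        """
        boundary = set()
        for cell in visible:
            for n in neighbors[cell]:
                if n not in visible:   # invisible cell with a visible neighbor
                    boundary.add(n)
        return boundary

    # Toy example: a chain of cells 0-1-2-3, with cells 0 and 1 visible.
    nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(invisible_boundary({0, 1}, nbrs))  # -> {2}
    ```

    If every cell returned here is later proven invisible, no cell behind the boundary (cell 3 above) needs to be tested, matching the argument in the hunk.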
  • trunk/VUT/doc/SciReport/preprocessing.tex (r269 → r272)

      In traditional visibility preprocessing the view space is
    - subdivided into viewcells and for each view cell the set of visible
    + subdivided into view cells and for each view cell the set of visible
      objects --- the potentially visible set (PVS) --- is computed. This framework
      has been used for conservative, aggressive and exact algorithms.
     
      casting rays through the scene and collecting their contributions. A
      visibility sample is computed by casting a ray from an object towards
    - the viewcells and computing the nearest intersection with the scene
    + the view cells and computing the nearest intersection with the scene
      objects. All view cells pierced by the ray segment can see the object and
      thus the object can be added to their PVS. If the ray is terminated at
     
      extended by this terminating object. Thus a single ray can make a
      number of contributions to the progressively computed PVSs. A ray
    - sample piercing $n$ viewcells which is bound by two distinct objects
    + sample piercing $n$ view cells which is bound by two distinct objects
      contributes by at most $2*n$ entries to the current PVSs. Apart from
      this performance benefit there is also a benefit in terms of the
     
      The first modification to the basic algorithm accounts for the
    - irregular distribution of the viewcells. Such a case in common for
    - example in urban scenes where the viewcells are mostly distributed in
    - a horizontal direction and more viewcells are placed at denser parts
    + irregular distribution of the view cells. Such a case is common, for
    + example, in urban scenes where the view cells are mostly distributed in
    + a horizontal direction and more view cells are placed at denser parts
      of the city. The modification involves replacing the uniformly
      distributed ray direction by a direction distribution according to the
     
      \begin{itemize}
    - \item optimized for viewcell - ray intersection.
    + \item optimized for view cell - ray intersection.
      \item flexible, i.e., it can represent arbitrary geometry.
      \item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
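    The at-most-$2*n$ bound stated in this file (a ray piercing $n$ view cells, bounded by two distinct objects, contributes at most $2*n$ PVS entries) can be illustrated with a small sketch. The data structures (`pvs_of_cell`, `pvs_of_object`) are invented for illustration and are not the project's code; the object-centered PVS mirrors the representation described in sampling.tex.

    ```python
    def add_ray_sample(pierced_cells, endpoint_objects,
                       pvs_of_cell, pvs_of_object):
        """Distribute one ray sample's contributions to the PVSs.

        pierced_cells:    the n view cells the ray segment passes through
        endpoint_objects: the (at most two) objects bounding the ray
        Each cell gains at most 2 objects, hence at most 2*n entries total.
        """
        added = 0
        for cell in pierced_cells:
            for obj in endpoint_objects:
                if obj not in pvs_of_cell[cell]:
                    pvs_of_cell[cell].add(obj)    # object visible from cell
                    pvs_of_object[obj].add(cell)  # object-centered PVS
                    added += 1
        return added

    cells = ["c1", "c2", "c3"]
    pvs_c = {c: set() for c in cells}
    pvs_o = {"objA": set(), "objB": set()}
    n = add_ray_sample(cells, ["objA", "objB"], pvs_c, pvs_o)
    print(n)  # -> 6, i.e., 2 * 3 entries from a single ray
    ```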
  • trunk/VUT/doc/SciReport/sampling.tex (r266 → r272)

      In traditional visibility preprocessing the view space is
    - subdivided into viewcells and for each view cell the set of visible
    + subdivided into view cells and for each view cell the set of visible
      objects --- the potentially visible set (PVS) --- is computed. This framework
      has been used for conservative, aggressive and exact algorithms.
     
      \subsection{View Cell Representation}

    + \begin{figure}[htb]
    +   \centerline{
    +     \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_01}
    +     \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_07}
    +   }
    +   \caption{(left) Vienna view cells. (right) The view cells are prisms with a triangular base.}
    +   \label{fig:vienna_viewcells}
    + \end{figure}
    +
    + A good partition of the scene into view cells is an essential part of every visibility
    + system. If they are chosen too large, the PVS (Potentially Visible Set)
    + of a view cell is too large for efficient culling. If they are chosen too small or
    + without consideration, then neighbouring view cells contain redundant PVS information,
    + hence inflating the PVS computation and storage costs for the scene. In the left image of figure~\ref{fig:vienna_viewcells} we see view cells of the Vienna model, generated by triangulation of the streets.
    + In the closeup (right image) we can see that each triangle is extruded to a given height to form a view cell prism.
    +
      In order to efficiently use view cells with our sampling method, we
      require a view cell representation which is

      \begin{itemize}
    - \item optimized for viewcell - ray intersection.
    + \item optimized for view cell - ray intersection.
      \item flexible, i.e., it can represent arbitrary geometry.
      \item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
      \end{itemize}

    - We meet these requirements by using a view cell BSP tree, where the
    - BSP leafs are associated with the view cells.  Using the BSP tree, we
    - are able to find the initial view cells with only a few view ray-plane
    - intersections.  The hierarchical structure of the BSP tree can be
    - exploited as hierarchy of view cells. If neccessary, we could further
    - subdivide a BSP leaf view cell quite easily.
    -
    - Currently we use two approaches to generate the initial BSP view cell
    - tree.
    + We meet these requirements by employing spatial subdivisions (i.e.,
    + kD-trees and BSP trees) to store the view cells. The initial view cells are
    + associated with the leaves. The reason why we chose BSP trees as the view cell representation
    + is that they are very flexible: view cells forming arbitrary closed meshes can be closely matched.
    + Therefore we are able to find a view cell with only a few ray-plane
    + intersections. Furthermore, the hierarchical structure can be
    + exploited as a hierarchy of view cells. Interior nodes form larger view cells
    + containing their children. If necessary, a leaf can easily be subdivided into smaller view cells.
    +
    + Currently we consider three different approaches to generate the initial view cell BSP tree.
    + The third method is not restricted to BSP trees, but BSP trees are preferred
    + because of their greater flexibility.
    +
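    The claim above (finding a view cell with only a few ray-plane intersections) boils down to a standard BSP descent: one plane test per tree level. The following is a hypothetical minimal sketch of locating the leaf view cell containing a point; the node layout and names are invented, not the report's code.

    ```python
    # Hypothetical sketch: point location in a view cell BSP tree.

    class BspNode:
        def __init__(self, plane=None, front=None, back=None, view_cell=None):
            self.plane = plane          # (normal, d): dot(normal, p) + d = 0
            self.front, self.back = front, back
            self.view_cell = view_cell  # set on leaves only

    def find_view_cell(node, p):
        """Filter the point down the tree: one plane test per level."""
        while node.plane is not None:
            normal, d = node.plane
            side = sum(ni * pi for ni, pi in zip(normal, p)) + d
            node = node.front if side >= 0 else node.back
        return node.view_cell

    # Two view cells separated by the plane x = 1 (normal (1,0,0), d = -1).
    root = BspNode(plane=((1, 0, 0), -1),
                   front=BspNode(view_cell="cellA"),
                   back=BspNode(view_cell="cellB"))
    print(find_view_cell(root, (2, 0, 0)))  # -> cellA
    ```

    Extending this from a point to a ray segment gives the sequence of pierced view cells used by the sampling method; interior nodes naturally act as the larger view cells mentioned above.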
    + \begin{table}
    + \centering \footnotesize
    + \begin{tabular}{|l|c|c|}
    +   \hline\hline
    +   View cells & Vienna selection & Vienna full \\\hline\hline
    +   \#view cells & 105 & 16447 \\\hline\hline
    +   \#input polygons & 525 & 82235 \\\hline\hline
    +   BSP tree generation time & 0.016s & 10.328s \\\hline\hline
    +   view cell insertion time & 0.016s & 7.984s \\\hline\hline
    +   \#nodes & 1137 & 597933 \\\hline\hline
    +   \#interior nodes & 568 & 298966 \\\hline\hline
    +   \#leaf nodes & 569 & 298967 \\\hline\hline
    +   \#splits & 25 & 188936 \\\hline\hline
    +   max tree depth & 13 & 27 \\\hline\hline
    +   avg tree depth & 9.747 & 21.11 \\\hline\hline
    + \end{tabular}
    + \caption{Statistics for the view cell BSP tree on the full Vienna view cells and a selection of the Vienna view cells.}\label{tab:viewcell_bsp}
    + \end{table}
     305 
    263306 
      \begin{itemize}

    - \item We use a number of dedicated input view cells. As input view
    - cell any closed mesh can be applied. The only requirement is that the
    - view cells do not overlap. We insert one view cell after the other
    - into the tree. The polygons of a view cell are filtered down the tree,
    + \item We use a number of input view cells given in advance. As input view cell
    + any closed mesh can be applied. The only requirement is that
    + no two view cells overlap.
    + First the view cell polygons are extracted, and the BSP tree is built from
    + these polygons using some global optimizations like tree balancing or
    + fewest splits. Then one view cell after the other
    + is inserted into the tree to find the leaves in which it is contained.
    + The polygons of the view cell are filtered down the tree,
      guiding the insertion process. Once we reach a leaf and there are no
      more polygons left, we terminate the tree subdivision. If we are on
    - the inside of the last split plane (i.e., the leaf is representing the
    + the inside of the last split plane (i.e., the leaf represents the
      inside of the view cell), we associate the leaf with the view cell
    - (i.e., add a pointer to the view cell). Hence a number of leafes can
    - be associated with the same input view cell.
    -
    - \item We apply the BSP tree subdivision to the scene geometry. When
    - the subdivision terminates, the leaf nodes also represent the view
    - cells.
    -
    + (i.e., add a pointer to the view cell). One input view cell can
    + be associated with many leaves, whereas each leaf has only one view cell.
    + Some statistics about using this method on the Vienna view cell set are given in table~\ref{tab:viewcell_bsp}.
    + However, sometimes a good set of view cells is not available, or the scene
    + is changed frequently and the designer does not want to create new view cells on each change.
    + In such a case one of the following two methods should rather be chosen, which generate view cells
    + automatically.
    +
    + \item We apply a BSP tree subdivision to the scene geometry. Whenever
    + the subdivision terminates in a leaf, a view cell is associated with the leaf node.
    + This simple approach is justified because it places the view cell borders
    + along some discontinuities in the visibility function.
    +
    + \item The view cell generation can be guided by the sampling process. We start
    + with a single initial view cell representing the whole space. If a given threshold
    + is reached during the preprocessing (e.g., the view cell is pierced by too many rays
    + resulting in a large PVS), the view cell is subdivided into smaller cells using some criteria.
    +
    + In order to evaluate the best split plane, we first have to define the characteristics of a
    + good view cell partition: the view cells should be quite large, while their PVS stays rather small.
    + The PVSs of any two view cells should be as distinct as possible; otherwise they could be
    + merged into a larger view cell if their PVSs are too similar.
    + E.g., for a building, the perfect view cells are usually the single rooms connected by portals.
    +
    + Hence we can define some useful criteria for the split: 1) The number of rays should be roughly equal among the new view cells. 2) The split plane should be chosen in a way that the rays are maximally distinct, i.e., the number
    + of rays contributing to more than one cell should be minimized, so that the PVSs are also distinct. 3) For BSP trees,
    + the split plane should be aligned with some scene geometry which is large enough to contribute
    + a lot of occlusion power. This criterion can be naturally combined with the second one.
    + As termination criterion we can choose the minimum PVS / piercing ray size or the maximal tree depth.
      \end{itemize}
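    Split criteria 1) and 2) from the last item could, for instance, be scored by counting how many ray samples fall on each side of a candidate plane and how many straddle it. This is a hypothetical sketch with invented names and weights, not the report's implementation.

    ```python
    # Hypothetical sketch of split-plane scoring: criterion 1) prefers
    # balanced ray counts, criterion 2) penalizes straddling rays whose
    # contributions would be duplicated in both child PVSs.

    def score_split(rays, axis, position):
        """Score an axis-aligned candidate split plane. Lower is better.

        rays: list of segments (start_point, end_point), points as 3D tuples.
        """
        left = right = straddle = 0
        for start, end in rays:
            a, b = start[axis], end[axis]
            if max(a, b) <= position:
                left += 1
            elif min(a, b) >= position:
                right += 1
            else:
                straddle += 1      # ray pierces both new view cells
        return abs(left - right) + 2 * straddle

    rays = [((0, 0, 0), (1, 0, 0)), ((3, 0, 0), (4, 0, 0)),
            ((1, 1, 0), (4, 1, 0))]
    # Splitting at x = 2 separates the first two rays; one ray straddles.
    print(score_split(rays, 0, 2.0))  # -> abs(1-1) + 2*1 = 2
    ```

    Criterion 3) (aligning the plane with large occluding geometry) would enter as an additional term favoring candidate planes taken from the scene polygons.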
    282352 
     
      casting rays through the scene and collecting their contributions. A
      visibility sample is computed by casting a ray from an object towards
    - the viewcells and computing the nearest intersection with the scene
    + the view cells and computing the nearest intersection with the scene
      objects. All view cells pierced by the ray segment can see the object and
      thus the object can be added to their PVS. If the ray is terminated at
     
      extended by this terminating object. Thus a single ray can make a
      number of contributions to the progressively computed PVSs. A ray
    - sample piercing $n$ viewcells which is bound by two distinct objects
    - contributes by at most $2*n$ entries to the current PVSs. Appart from
    + sample piercing $n$ view cells which is bound by two distinct objects
    + contributes by at most $2*n$ entries to the current PVSs. Apart from
      this performance benefit there is also a benefit in terms of the
      sampling density: Assuming that the view cells are usually much larger
     
      At this phase of the computation we not only start the samples from
      the objects, but we also store the PVS information centered at the
    - objects. Instead of storing a PVSs consting of objects visible from
    + objects. Instead of storing a PVS consisting of objects visible from
      view cells, every object maintains a PVS consisting of potentially
      visible view cells. While these representations contain exactly the
     
      The first modification to the basic algorithm accounts for irregular
    - distribution of the viewcells. Such a case is common for example in
    - urban scenes where the viewcells are mostly distributed in a
    - horizontal direction and more viewcells are placed at denser parts of
    + distribution of the view cells. Such a case is common for example in
    + urban scenes where the view cells are mostly distributed in a
    + horizontal direction and more view cells are placed at denser parts of
      the city. The modification involves replacing the uniformly
      distributed ray direction by directions distributed according to the