Timestamp:
    09/15/05 18:17:46 (19 years ago)
Location:
    trunk/VUT/doc/SciReport
Files:
    13 edited
trunk/VUT/doc/SciReport/figs/basic.eps (r251 → r272, regenerated with fig2dev; only the %%CreationDate header changed)
    - %%CreationDate: Tue Aug 23 20:55:48 2005
    + %%CreationDate: Thu Sep 15 12:53:12 2005

trunk/VUT/doc/SciReport/figs/cut.eps (r251 → r272)
    - %%CreationDate: Tue Aug 23 20:55:48 2005
    + %%CreationDate: Thu Sep 15 12:53:13 2005

trunk/VUT/doc/SciReport/figs/dummy.eps (r251 → r272)
    - %%CreationDate: Tue Aug 23 20:55:48 2005
    + %%CreationDate: Thu Sep 15 12:53:14 2005

trunk/VUT/doc/SciReport/figs/latency.eps (r251 → r272)
    - %%CreationDate: Tue Aug 23 20:55:48 2005
    + %%CreationDate: Thu Sep 15 12:53:15 2005

trunk/VUT/doc/SciReport/figs/latency2.eps (r251 → r272)
    - %%CreationDate: Tue Aug 23 20:55:48 2005
    + %%CreationDate: Thu Sep 15 12:53:16 2005
trunk/VUT/doc/SciReport/mutual.tex (r269 → r272)

Trailing whitespace removed and typos fixed. Affected passages:

lines 16-26:

    This boundary is based on the current view cell visibility
    classifications. We assume that there is a defined connectivity
    between the view cells, which is the case for both BSP-tree and
    kD-tree based view cells. Then, given a PVS consisting of visible
    view cells (with respect to an object), we can find all the
    neighbors of the visible view cells which are invisible. In other
    words, a view cell belongs to this boundary if it is classified
    invisible and has at least one visible neighbor. If all view cells
    from this boundary are proven invisible, no other view cell (behind
    this boundary) can be visible. If the verification determines some
    view cells as visible, the current invisible boundary is extended
    and the verification is ...

lines 378-385:

    For the computation of the maximal error due to the current shaft we
    assume that one tested region is a view cell, whereas the other is
    an object bounding box or a cell of the spatial hierarchy. The
    threshold is computed as follows: we first triangulate the farthest
    intersection points in the shaft as seen from the view cell side of
    the shaft. Then, for each computed triangle, we calculate a point in
    the view cell which maximizes the projected area of the triangle.
    The conservative estimate of the maximal error is then given by a
    sum of the computed ...

lines 396-411 (commented-out notes on a hierarchical shaft subdivision
algorithm):

    % The basic idea is the following:

    % Use a two-plane parametrization for describing all lines
    % intersecting the object and the view cell. The line sets are
    % described by aligned rectangles on the two planes, which allows
    % constructing convex shafts always bounded by only 4 lines. After
    % determining the initial rectangles bounding the whole line set,
    % perform a recursive subdivision as follows:

    % 1. Cast the 4 corner rays and determine ALL intersections with
    % the occluding objects.

    % 2. If any ray is unoccluded, terminate the whole test with
    % VISIBLE and extend the PVS of the object with the new view cell
    % (and possibly more if the ray goes further). Start the
    % verification for the new view cells in the occluded layer.

    % 3. If all 4 rays pierce the same convex mesh (or mesh triangle
    % for non-convex meshes), terminate this branch with INVISIBLE.
    % Note that the terminating mesh need not be the front-most one.

    ...
    % Note that here it would be best to select those intersections for
    % the rays which have the most similar depths, to better estimate
    % the maximal error, i.e., the "depth triangles" need not be formed
    % by the first visible layer.

    % 5. If 4 does not terminate, subdivide either the view cell or the
    % object rectangle, depending on the sample depths and the computed
    % error.

    % It is actually quite simple to create a conservative variant of
    % the algorithm: subdivide only to a given depth and avoid test 4.
    % Then the samples can only be terminated by a "STRONG occluder"
    % which hides the whole shaft. This is similar to Danny's method,
    % but here a hierarchical shaft subdivision is performed, which
    % should deliver much more accuracy.
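The invisible-boundary rule described in mutual.tex (a view cell belongs to the boundary iff it is classified invisible and has at least one visible neighbor) can be sketched in a few lines of Python. The dict-of-neighbor-lists adjacency representation and the function name are illustrative assumptions, not the report's actual data structures.

```python
def invisible_boundary(neighbors, visible):
    """Find the invisible boundary of the visible view cell set.

    neighbors: dict mapping view cell id -> iterable of adjacent cell ids
               (the defined connectivity between view cells).
    visible:   set of view cell ids classified visible w.r.t. an object.

    A cell is on the boundary if it is invisible but adjacent to a
    visible cell; if all boundary cells are proven invisible, no cell
    behind the boundary can be visible.
    """
    return {
        cell
        for cell, adj in neighbors.items()
        if cell not in visible and any(n in visible for n in adj)
    }
```

If the verification later finds some boundary cell visible, it is moved into the visible set and the boundary is recomputed, which matches the "boundary is extended" step in the text.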
trunk/VUT/doc/SciReport/preprocessing.tex (r269 → r272)

Trailing whitespace removed and typos fixed. Affected passages:

lines 37-41:

    In traditional visibility preprocessing the view space is
    subdivided into view cells, and for each view cell the set of
    visible objects --- the potentially visible set (PVS) --- is
    computed. This framework has been used for conservative, aggressive
    and exact algorithms.

lines 56-66:

    ... casting rays through the scene and collecting their
    contributions. A visibility sample is computed by casting a ray
    from an object towards the view cells and computing the nearest
    intersection with the scene objects. All view cells pierced by the
    ray segment can see the object, and thus the object can be added to
    their PVS. If the ray is terminated at ... extended by this
    terminating object. Thus a single ray can make a number of
    contributions to the progressively computed PVSs. A ray sample
    piercing $n$ view cells which is bounded by two distinct objects
    contributes at most $2*n$ entries to the current PVSs. Apart from
    this performance benefit there is also a benefit in terms of the ...

lines 106-111:

    The first modification to the basic algorithm accounts for the
    irregular distribution of the view cells. Such a case is common,
    for example, in urban scenes, where the view cells are mostly
    distributed in a horizontal direction and more view cells are
    placed at denser parts of the city. The modification involves
    replacing the uniformly distributed ray directions by a direction
    distribution according to the ...

lines 124-128:

    \begin{itemize}
    \item optimized for view cell--ray intersection.
    \item flexible, i.e., it can represent arbitrary geometry.
    \item naturally suited for a hierarchical approach. %(i.e., there is a root view cell containing all others)
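The sampling step described in preprocessing.tex can be sketched as follows: each ray cast from a source object updates the PVS of every pierced view cell with both the source object and the object that terminated the ray, so a ray piercing $n$ view cells bounded by two distinct objects contributes at most $2*n$ entries. The dict-of-sets PVS store and the function name are illustrative assumptions.

```python
def add_ray_sample(pvs, pierced_cells, source_obj, hit_obj=None):
    """Record one visibility sample in the progressively computed PVSs.

    pvs:           dict mapping view cell id -> set of visible object ids.
    pierced_cells: view cells pierced by the ray segment.
    source_obj:    object the ray was cast from.
    hit_obj:       object at the nearest intersection, or None if the
                   ray left the scene unoccluded.

    Returns the number of new PVS entries contributed by this ray
    (at most 2 * len(pierced_cells)).
    """
    added = 0
    for cell in pierced_cells:
        entries = pvs.setdefault(cell, set())
        for obj in (source_obj, hit_obj):
            if obj is not None and obj not in entries:
                entries.add(obj)
                added += 1
    return added
```

Repeated samples through the same cells contribute nothing new, which is why a few well-placed rays can fill the PVSs quickly.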
trunk/VUT/doc/SciReport/sampling.tex (r266 → r272)

Trailing whitespace removed, typos fixed, and the view cell
representation section substantially extended. Affected passages:

lines 30-34:

    In traditional visibility preprocessing the view space is
    subdivided into view cells, and for each view cell the set of
    visible objects --- the potentially visible set (PVS) --- is
    computed. This framework has been used for conservative, aggressive
    and exact algorithms.

lines 243-269 (new figure and introductory paragraph added to the View
Cell Representation subsection):

    \subsection{View Cell Representation}

    \begin{figure}[htb]
    \centerline{
    \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_01}
    \includegraphics[height=0.35\textwidth,draft=\DRAFTFIGS]{images/vienna_viewcells_07}
    }
    \caption{(left) Vienna view cells. (right) The view cells are
    prisms with a triangular base.}
    \label{fig:vienna_viewcells}
    \end{figure}

    A good partition of the scene into view cells is an essential part
    of every visibility system. If they are chosen too large, the PVS
    (Potentially Visible Set) of a view cell is too large for efficient
    culling. If they are chosen too small or without consideration,
    then neighbouring view cells contain redundant PVS information,
    boosting the PVS computation and storage costs for the scene. In
    the left image of figure~\ref{fig:vienna_viewcells} we see view
    cells of the Vienna model, generated by triangulation of the
    streets. In the closeup (right image) we can see that each triangle
    is extruded to a given height to form a view cell prism.

    In order to efficiently use view cells with our sampling method, we
    require a view cell representation which is

    \begin{itemize}
    \item optimized for view cell--ray intersection.
    \item flexible, i.e., it can represent arbitrary geometry.
    \item naturally suited for a hierarchical approach.
    %(i.e., there is a root view cell containing all others)
    \end{itemize}

lines 272-283 (rewritten; replaces the previous description of the view
cell BSP tree):

    We meet these requirements by employing spatial subdivisions (i.e.,
    kD-trees and BSP trees) to store the view cells. The initial view
    cells are associated with the leaves. The reason why we chose BSP
    trees as the view cell representation is that they are very
    flexible: view cells forming arbitrary closed meshes can be closely
    matched. Therefore we are able to find a view cell with only a few
    ray-plane intersections. Furthermore, the hierarchical structure
    can be exploited as a hierarchy of view cells: interior nodes form
    larger view cells containing their children. If necessary, a leaf
    can easily be subdivided into smaller view cells.

    Currently we consider three different approaches to generate the
    initial view cell BSP tree. The third method is not restricted to
    BSP trees, but BSP trees are preferred because of their greater
    flexibility.

lines 286-305 (new table):

    \begin{table}
    \centering \footnotesize
    \begin{tabular}{|l|c|c|}
    \hline\hline
    View cells & Vienna selection & Vienna full \\\hline\hline
    \#view cells & 105 & 16447 \\\hline\hline
    \#input polygons & 525 & 82235 \\\hline\hline
    BSP tree generation time & 0.016s & 10.328s \\\hline\hline
    view cell insertion time & 0.016s & 7.984s \\\hline\hline
    \#nodes & 1137 & 597933 \\\hline\hline
    \#interior nodes & 568 & 298966 \\\hline\hline
    \#leaf nodes & 569 & 298967 \\\hline\hline
    \#splits & 25 & 188936 \\\hline\hline
    max tree depth & 13 & 27 \\\hline\hline
    avg tree depth & 9.747 & 21.11 \\\hline\hline
    \end{tabular}
    \caption{Statistics for the view cell BSP tree on the Vienna view
    cells and a selection of the Vienna view
    cells.}\label{tab:viewcell_bsp}
    \end{table}

lines 307-351 (the itemized list of view cell generation approaches,
extended from two to three methods):

    \begin{itemize}

    \item We use a number of input view cells given in advance. Any
    closed mesh can be applied as an input view cell; the only
    requirement is that no two view cells overlap. First the view cell
    polygons are extracted, and the BSP tree is built from these
    polygons using some global optimizations like tree balancing or
    least splits. Then the view cells are inserted into the tree one
    after the other to find the leaves in which they are contained. The
    polygons of the view cell are filtered down the tree, guiding the
    insertion process. Once we reach a leaf and there are no more
    polygons left, we terminate the tree subdivision. If we are on the
    inside of the last split plane (i.e., the leaf represents the
    inside of the view cell), we associate the leaf with the view cell
    (i.e., add a pointer to the view cell). One input view cell can be
    associated with many leaves, whereas each leaf has only one view
    cell. Some statistics about using this method on the Vienna view
    cell set are given in table~\ref{tab:viewcell_bsp}. However,
    sometimes a good set of view cells is not available, or the scene
    is changed frequently and the designer does not want to create new
    view cells on each change. In such a case one of the following two
    methods should rather be chosen, which generate view cells
    automatically.

    \item We apply a BSP tree subdivision to the scene geometry.
    Whenever the subdivision terminates in a leaf, a view cell is
    associated with the leaf node. This simple approach is justified
    because it places the view cell borders along some discontinuities
    in the visibility function.

    \item The view cell generation can be guided by the sampling
    process. We start with a single initial view cell representing the
    whole space. If a given threshold is reached during the
    preprocessing (e.g., the view cell is pierced by too many rays,
    resulting in a large PVS), the view cell is subdivided into smaller
    cells using some criteria.

    In order to evaluate the best split plane, we first have to define
    the characteristics of a good view cell partition: the view cells
    should be quite large while their PVS stays rather small, and the
    PVSs of any two view cells should be as distinct as possible
    (otherwise they could be merged into a larger view cell). E.g., for
    a building, the perfect view cells are usually the single rooms
    connected by portals.

    Hence we can define some useful criteria for the split: 1) the
    number of rays should be roughly equal among the new view cells;
    2) the split plane should be chosen in a way that the rays are
    maximally distinct, i.e., the number of rays contributing to more
    than one cell should be minimized, so that the PVSs are also
    distinct; 3) for BSP trees, the split plane should be aligned with
    some scene geometry which is large enough to contribute a lot of
    occlusion power. This criterion can naturally be combined with the
    second one. As termination criterion we can choose the minimum
    PVS / piercing-ray size or the maximal tree depth.
    \end{itemize}

lines 356-375:

    ... casting rays through the scene and collecting their
    contributions. A visibility sample is computed by casting a ray
    from an object towards the view cells and computing the nearest
    intersection with the scene objects. All view cells pierced by the
    ray segment can see the object, and thus the object can be added to
    their PVS. If the ray is terminated at ... extended by this
    terminating object. Thus a single ray can make a number of
    contributions to the progressively computed PVSs. A ray sample
    piercing $n$ view cells which is bounded by two distinct objects
    contributes at most $2*n$ entries to the current PVSs. Apart from
    this performance benefit there is also a benefit in terms of the
    sampling density: assuming that the view cells are usually much
    larger ...

    At this phase of the computation we not only start the samples from
    the objects, but we also store the PVS information centered at the
    objects. Instead of storing a PVS consisting of objects visible
    from view cells, every object maintains a PVS consisting of
    potentially visible view cells. While these representations contain
    exactly the ...

lines 406-412:

    The first modification to the basic algorithm accounts for the
    irregular distribution of the view cells. Such a case is common,
    for example, in urban scenes, where the view cells are mostly
    distributed in a horizontal direction and more view cells are
    placed at denser parts of the city. The modification involves
    replacing the uniformly distributed ray directions by directions
    distributed according to the
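Split criterion 2) in the sampling.tex changes (choose the split plane so that the number of rays contributing to more than one of the new cells is minimized) can be sketched by reducing each ray to its interval along the candidate split axis and counting the rays a split position would cut. The 1D reduction and the function names are illustrative assumptions, not the report's implementation.

```python
def rays_crossing(rays, split):
    """Count ray intervals (a, b) that would contribute to both new
    view cells, i.e., that strictly straddle the split position."""
    return sum(1 for a, b in rays if min(a, b) < split < max(a, b))

def best_split(rays, candidates):
    """Pick the candidate split position that leaves the ray sets of
    the two new view cells (and hence their PVSs) most distinct."""
    return min(candidates, key=lambda s: rays_crossing(rays, s))
```

Criterion 3) would then restrict `candidates` to planes aligned with large scene polygons, combining occlusion power with ray distinctness.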