- Timestamp: 09/13/05 18:44:54 (19 years ago)
- Location: trunk/VUT/doc/SciReport
- Files: 9 added, 5 edited
Legend: passages below are marked Unmodified, Added, or Removed.
trunk/VUT/doc/SciReport/analysis.tex
r255 r266

Unmodified:
\chapter{Analysis of Visibility in Polygonal Scenes}

Removed:
\section{Analysis of 2D line space}

Added:
This chapter provides an analysis of visibility in polygonal scenes,
which are the input for all developed tools. The visibility analysis
uncovers the complexity of the from-region and global visibility
problems and is thus especially important for a good design of the
global visibility preprocessor. To better understand the visibility
relationships in primal space we use a mapping to line space, where the
visibility interactions can be observed easily as interactions of sets
of points. Additionally, for the sake of clarity, we first analyze
visibility in 2D and then extend the analysis to 3D polygonal scenes.

\section{Analysis of visibility in 2D}

Unmodified:
\label{LINE_SPACE}

…

Unmodified:
to handle numerical inaccuracies.

Added:
\section{Summary}

In this chapter we discussed the relation of sets of lines in 3D and
the polyhedra in \plucker coordinates. We proposed a general size
measure for a set of lines described by a blocker polyhedron and a
size measure designed for the computation of PVS.

-
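The summary above refers to the mapping of 3D lines to \plucker coordinates. As a small illustration, the following is a minimal sketch (under the usual direction/moment convention, not code from the report) of the \plucker coordinates of a line through two points and of the side product whose sign relates the mutual orientation of two lines; all names are assumptions for the example.

\begin{verbatim}
#include <array>

// Sketch: Plucker coordinates of the oriented 3D line through points a and b.
// pi[0..2] is the direction part (b - a), pi[3..5] the moment part (a x b).
struct Plucker {
    std::array<double, 6> pi;
};

Plucker pluckerFromPoints(const std::array<double,3>& a,
                          const std::array<double,3>& b) {
    Plucker p;
    p.pi[0] = b[0] - a[0];
    p.pi[1] = b[1] - a[1];
    p.pi[2] = b[2] - a[2];
    p.pi[3] = a[1]*b[2] - a[2]*b[1];
    p.pi[4] = a[2]*b[0] - a[0]*b[2];
    p.pi[5] = a[0]*b[1] - a[1]*b[0];
    return p;
}

// Side product of two lines: zero if the lines are coplanar (they intersect
// or are parallel); otherwise the sign gives their relative orientation.
double side(const Plucker& p, const Plucker& q) {
    return p.pi[0]*q.pi[3] + p.pi[1]*q.pi[4] + p.pi[2]*q.pi[5]
         + p.pi[3]*q.pi[0] + p.pi[4]*q.pi[1] + p.pi[5]*q.pi[2];
}
\end{verbatim}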
trunk/VUT/doc/SciReport/introduction.tex
r255 r266

Removed:
\chapter{Overview of visibility problems and algorithms}%\chapter

Added:
\chapter{Introduction to visibility problems and algorithms}%\chapter

Unmodified:
\label{chap:overview}
\label{chap:classes}

Removed:
This chapter provides a taxonomy of visibility problems encountered
in computer graphics based on the {\em problem domain} and the {\em

Added:
This chapter provides an introduction to visibility problems and
algorithms for computer graphics. In particular it presents a taxonomy
of visibility problems based on the {\em problem domain} and the {\em

Unmodified:
type of the answer}. The taxonomy helps to understand the nature of a
particular visibility problem and provides a tool for grouping
problems of similar complexity independently of their target
application. We discuss typical visibility problems encountered in

Removed:
computer graphics and identify their relation to the proposed
taxonomy. A visibility problem can be solved by means of various
visibility algorithms. We classify visibility algorithms according to
several important criteria and discuss important concepts in the
design of a visibility algorithm. The taxonomy and the discussion of
the algorithm design sum up ideas and concepts that are independent
of any specific algorithm. This can help algorithm designers to
transfer the existing algorithms to solve visibility problems in other
application areas.

Added:
computer graphics including their relation to the presented
taxonomy. Finally, we classify the later described visibility tools
according to the presented taxonomy.

…

Unmodified:
most general visibility problems. In 3D the ${\cal L}_R$ is a 4D
subset of the 4D line space. In 2D the ${\cal L}_R$ is a 2D subset of

Removed:
the 2D line space. Consequently, in the proposed classification

Added:
the 2D line space. Consequently, in the presented classification

Unmodified:
visibility from a region in 2D is equivalent to visibility from a line
segment in 2D.

…

Unmodified:
is neither its subset nor its superset for all possible inputs.

Added:
A more precise quality measure of algorithms computing PVSs can be
expressed by the {\em relative overestimation} and the {\em relative
underestimation} of the PVS with respect to the exact PVS. We can
define a quality measure of an algorithm $A$ on input $I$ as a tuple
$\mbi{Q}^A(I)$:

\begin{eqnarray}
\mbi{Q}^A(I) & = & (Q^A_o(I), Q^A_u(I)), \qquad I \in {\cal D} \\
Q^A_o(I) & = & {|S^A(I) \setminus S^{\cal E}(I)| \over |S^{\cal E}(I)|} \\
Q^A_u(I) & = & {|S^{\cal E}(I) \setminus S^A(I)| \over |S^{\cal E}(I)|}
\end{eqnarray}

where $I$ is an input from the input domain ${\cal D}$, $S^A(I)$ is
the PVS determined by the algorithm $A$ for input $I$ and $S^{\cal E}(I)$
is the exact PVS for the given input. $Q^A_o(I)$ expresses the
{\em relative overestimation} of the PVS, $Q^A_u(I)$ is the {\em
relative underestimation}.

The expected quality of the algorithm over all possible inputs can be
given as:

\begin{eqnarray}
Q^A & = & E[\| \mbi{Q}^A(I) \|] \\
    & = & \sum_{\forall I \in {\cal D}} f(I) \cdot \sqrt{Q^A_o(I)^2 + Q^A_u(I)^2}
\end{eqnarray}

where $f(I)$ is the probability density function expressing the
probability of occurrence of input $I$.
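As an illustration of the overestimation and underestimation measures just defined, a minimal C++ sketch (a hypothetical helper operating on sets of object ids, not part of the described tools) could compute $Q^A_o(I)$ and $Q^A_u(I)$ as follows; it assumes the exact PVS is non-empty.

\begin{verbatim}
#include <cstddef>
#include <set>

// S_A: PVS computed by algorithm A for one input, S_E: exact PVS for it.
struct PvsQuality {
    double overestimation;   // Q_o = |S_A \ S_E| / |S_E|
    double underestimation;  // Q_u = |S_E \ S_A| / |S_E|
};

PvsQuality pvsQuality(const std::set<int>& sA, const std::set<int>& sE) {
    std::size_t over = 0, under = 0;
    for (int id : sA)
        if (!sE.count(id)) ++over;    // reported visible, actually invisible
    for (int id : sE)
        if (!sA.count(id)) ++under;   // actually visible, but missed
    const double exactSize = static_cast<double>(sE.size());
    return { over / exactSize, under / exactSize };
}
\end{verbatim}

A conservative algorithm, for instance, yields underestimation zero for every input, while an exact one yields both values zero.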
The quality measure 542 $\mbi{Q}^A(I)$ can be used to classify a PVS algorithm into one of the 543 four accuracy classes according to Section~\ref{sec:accuracy}: 544 545 \begin{enumerate} 546 \item exact\\ 547 $\forall I \in {\cal D} :Q_o^A(I) = 0 \wedge Q_u^A(I) = 0$ 548 \item conservative\\ 549 $\forall I \in {\cal D} : Q_o^A(I) \geq 0 \wedge Q_u^A(I) = 0$ 550 \item aggressive \\ 551 $\forall I \in {\cal D} : Q_o^A(I) = 0 \wedge Q_u^A(I) \geq 0$ 552 \item approximate \\ 553 $\qquad \exists I_j, I_k \in {\cal D}: Q_o^A(I_j) > 0 \wedge Q_u^A(I_k) > 0$ 554 \end{enumerate} 555 556 520 557 521 558 \subsection{Solution space} … … 542 579 543 580 The classification according to the solution space is easily 544 demonstrated on visible surface algorithms (these algorithms will be 545 discussed in Section~\ref{chap:traditional}). The 581 demonstrated on visible surface algorithms: The 546 582 z-buffer~\cite{Catmull:1975:CDC} is a common example of a discrete 547 583 algorithm. The Weiler-Atherton algorithm~\cite{Weiler:1977:HSR} is an … … 583 619 %***************************************************************************** 584 620 585 \section{Visibility algorithm design}586 587 Visibility algorithm design can be decoupled into a series of588 important design decisions. The first step is to clearly formulate a589 problem that should be solved by the algorithm. The taxonomy stated590 above can help to understand the difficulty of solving the given591 problem and its relationship to other visibility problems in computer592 graphics. The following sections summarize important steps in the593 design of a visibility algorithm and discuss some commonly used594 techniques.595 596 597 \subsection{Scene structuring}598 599 We discuss two issues dealing with structuring of the scene:600 identifying occluders and occludees, and spatial data structures for601 scene description.602 603 \subsubsection{Occluders and occludees}604 %occluders, occludees,605 606 Many visibility algorithms restructure the scene description to607 distinguish between {\em occluders} and {\em occludees}. Occluders608 are objects that cause changes in visibility (occlusion). Occludees609 are objects that do not cause occlusion, but are sensitive to610 visibility changes. In other words the algorithm studies visibility of611 occludees with respect to occluders.612 613 The concept of occluders and occludees is used to increase the614 performance of the algorithm in both the running time and the accuracy615 of the algorithm by reducing the number of primitives used for616 visibility computations (the performance measures of visibility617 algorithms will be discussed in618 Section~\ref{sec:performance}). Typically, the number of occluders and619 occludees is significantly smaller than the total number of objects in620 the scene. Additionally, both the occluders and the occludees can be621 accompanied with a topological (connectivity) information that might622 be necessary for an efficient functionality of the algorithm.623 624 The concept of occluders is applicable to most visibility625 algorithms. The concept of occludees is useful for algorithms626 providing answers (1) and (2) according to the taxonomy of627 Section~\ref{sec:answers}. Some visibility algorithms do not628 distinguish between occluders and occludees at all. 
For example all629 traditional visible surface algorithms use all scene objects as both630 occluders and occludees.631 632 Both the occluders and the occludees can be represented by {\em633 virtual objects} constructed from the scene primitives: the occluders634 as simplified inscribed objects, occludees as simplified circumscribed635 objects such as bounding boxes. Algorithms can be classified according636 to the type of occluders they deal with. The classification follows637 the scene restrictions discussed in638 Section~\ref{sec:scene_restrictions} and adds classes specific to639 occluder restrictions:640 641 \begin{itemize}642 \item643 vertical prisms,644 \item645 axis-aligned polygons,646 \item647 axis-aligned rectangles.648 \end{itemize}649 650 The vertical prisms that are specifically important for computing651 visibility in \m25d scenes. Some visibility algorithms can deal only652 with axis-aligned polygons or even more restrictive axis-aligned653 rectangles.654 655 656 \begin{figure}[htb]657 \centerline{658 \includegraphics[width=0.7\textwidth,draft=\DRAFTIMAGES]{images/houses}}659 \caption{Occluders in an urban scene. In urban scenes the occluders660 can be considered vertical prisms erected above the ground.}661 \label{fig:houses}662 \end{figure}663 664 Other important criteria for evaluating algorithms according to665 occluder restrictions include:666 667 668 \begin{itemize}669 \item670 connectivity information,671 \item672 intersecting occluders.673 \end{itemize}674 675 676 The explicit knowledge of the connectivity is crucial for efficient677 performance of some visibility algorithms (performance measures will678 be discussed in the Section~\ref{sec:performance}). Intersecting679 occluders cannot be handled properly by some visibility algorithms.680 In such a case the possible occluder intersections should be resolved681 in preprocessing.682 683 A similar classification can be applied to occludees. However, the684 visibility algorithms typically pose less restrictions on occludees685 since they are not used to describe visibility but only to check686 visibility with respect to the description provided by the occluders.687 688 689 %occluder selection, occluder LOD, virtual occluders,690 691 \subsubsection{Scene description}692 693 The scene is typically represented by a collection of objects. For694 purposes of visibility computations it can be advantageous to transform695 the object centered representation to a spatial representation by696 means of a spatial data structure. For example the scene can be697 represented by an octree where full voxels correspond to opaque parts698 of the scene. Such a data structure is then used as an input to the699 visibility algorithm. The spatial data structures for the scene700 description are used for the following reasons:701 702 \begin{itemize}703 704 \item {\em Regularity}. A spatial data structure typically provides a705 regular description of the scene that avoids complicated706 configurations or overly detailed input. Furthermore, the707 representation can be rather independent of the total scene708 complexity.709 710 \item {\em Efficiency}. The algorithm can be more efficient in both711 the running time and the accuracy of the result.712 713 \end{itemize}714 715 716 Additionally, spatial data structures can be applied to structure the717 solution space and/or to represent the desired solution. Another718 application of spatial data structures is the acceleration of the719 algorithm by providing spatial indexing. 
These applications of spatial720 data structures will be discussed in721 Sections~\ref{sec:solution_space_ds}722 and~\ref{sec:acceleration_ds}. Note that a visibility algorithm can723 use a single data structure for all three purposes (scene724 structuring, solution space structuring, and spatial indexing) while725 another visibility algorithm can use three conceptually different data726 structures.727 728 729 % gernots alg.730 %used as solution space DS and/or acceleration DS731 732 \subsection{Solution space data structures}733 \label{sec:solution_space_ds}734 735 A solution space data structure is used to maintain an intermediate736 result during the operation of the algorithm and it is used to737 generate the result of the algorithm. We distinguish between the738 following solution space data structures:739 740 \begin{itemize}741 742 \item general data structures743 744 single value (ray shooting), winged edge, active edge table, etc.745 746 \item primal space (spatial) data structures747 748 uniform grid, BSP tree (shadow volumes),749 bounding volume hierarchy, kD-tree, octree, etc.750 751 \item image space data structures752 753 2D uniform grid (shadow map), 2D BSP tree, quadtree, kD-tree, etc.754 755 \item line space data structures756 757 regular grid, kD-tree, BSP tree, etc.758 759 \end{itemize}760 761 The image space data structures can be considered a special case of762 the line space data structures since a point in the image represents a763 ray through that point (see also Section~\ref{sec:solspace}).764 765 If the dimension of the solution space matches the dimension of the766 problem-relevant line set, the visibility problem can often be solved767 with high accuracy by a single sweep through the scene. If the768 dimensions do not match, the algorithm typically needs more passes to769 compute a result with satisfying accuracy~\cite{EVL-2000-60,wonka00}770 or neglects some visibility effects altogether~\cite{EVL-2000-59}.771 772 773 %ray shooting none774 %visible surface algorithms - list of scan-line intersections, BSP tree,775 % z-buffer (graphics hardware)776 %shadow computation shadow volumes, BSP tree, shadow map (graphics hardware)777 %PVS for view cell - occlusion tree778 779 780 \subsection{Performance}781 \label{sec:performance}782 783 %output sensitivity, memory consumption, running time, scalability784 785 786 The performance of a visibility algorithm can be evaluated by measuring787 the quality of the result, the running time and the memory consumption.788 In this section we discuss several concepts related to these789 performance criteria.790 791 792 \subsubsection{Quality of the result}793 794 One of the important performance measures of a visibility algorithm795 is the quality of the result. The quality measure depends on the type796 of the answer to the problem. Generally, the quality of the result797 can be expressed as a distance from an exact result in the solution798 space. Such a quality measure can be seen as a more precise799 expression of the accuracy of the algorithm discussed in800 Section~\ref{sec:accuracy}.801 802 For example a quality measure of algorithms computing a PVS can be803 expressed by the {\em relative overestimation} and the {\em relative804 underestimation} of the PVS with respect to the exact PVS. 
We can805 define a quality measure of an algorithm $A$ on input $I$ as a tuple806 $\mbi{Q}^A(I)$:807 808 \begin{eqnarray}809 \mbi{Q}^A(I) & = & (Q^A_o(I), Q^A_u(I)), \qquad I \in {\cal D} \\810 Q^A_o(I) & = & {|S^A(I) \setminus S^{\cal E}(I)| \over |S^{\cal E}(I)|} \\811 Q^A_u(I) & = & {|S^{\cal E}(I) \setminus S^A(I) | \over |S^{\cal E}(I)|}812 \end{eqnarray}813 814 where $I$ is an input from the input domain ${\cal D}$, $S^A(I)$ is815 the PVS determined by the algorithm $A$ for input $I$ and $S^{\cal816 E}(I)$ is the exact PVS for the given input. $Q^A_o(I)$ expresses the817 {\em relative overestimation} of the PVS, $Q^A_u(I)$ is the {\em818 relative underestimation}.819 820 The expected quality of the algorithm over all possible inputs can be821 given as:822 823 \begin{eqnarray}824 Q^A & = & E[\| \mbi{Q}^A(I) \|] \\825 & = & \sum_{\forall I \in {\cal D}} f(I).\sqrt{Q^A_o(I)^2 + Q^A_o(I)^2}826 \end{eqnarray}827 828 where f(I) is the probability density function expressing the829 probability of occurrence of input $I$. The quality measure830 $\mbi{Q}^A(I)$ can be used to classify a PVS algorithm into one of the831 four accuracy classes according to Section~\ref{sec:accuracy}:832 833 \begin{enumerate}834 \item exact\\835 $\forall I \in {\cal D} :Q_o^A(I) = 0 \wedge Q_u^A(I) = 0$836 \item conservative\\837 $\forall I \in {\cal D} : Q_o^A(I) \geq 0 \wedge Q_u^A(I) = 0$838 \item aggressive \\839 $\forall I \in {\cal D} : Q_o^A(I) = 0 \wedge Q_u^A(I) \geq 0$840 \item approximate \\841 $\qquad \exists I_j, I_k \in {\cal D}: Q_o^A(I_j) > 0 \wedge Q_u^A(I_k) > 0$842 \end{enumerate}843 844 845 846 \subsubsection{Scalability}847 848 Scalability expresses the ability of the visibility algorithm to cope849 with larger inputs. A more precise definition of scalability of an850 algorithm depends on the problem for which the algorithm is851 designed. The scalability of an algorithm can be studied with respect852 to the size of the scene (e.g. number of scene objects). Another853 measure might consider the dependence of the algorithm on the number854 of only the visible objects. Scalability can also be studied855 according to the given domain restrictions, e.g. volume of the view856 cell.857 858 A well designed visibility algorithm should be scalable with respect859 to the number of structural changes or discontinuities of860 visibility. Furthermore, its performance should be given by the861 complexity of the visible part of the scene. These two important862 measures of scalability of an algorithm are discussed in the next two863 sections.864 865 \subsubsection{Use of coherence}866 867 Scenes in computer graphics typically consist of objects whose868 properties vary smoothly from one part to another. A view of such a869 scene contains regions of smooth changes (changes in color, depth,870 texture,etc.) and discontinuities that let us distinguish between871 objects. The degree to which the scene or its projection exhibit local872 similarities is called {\em coherence}~\cite{Foley90}.873 874 Coherence can be exploited by reusing calculations made for one part875 of the scene for nearby parts. Algorithms exploiting coherence are876 typically more efficient than algorithms computing the result from the877 scratch.878 879 Sutherland et al.~\cite{Sutherland:1974:CTH} identified several880 different types of coherence in the context of visible surface881 algorithms. We simplify the classification proposed by Sutherland et882 al. 
to reflect general visibility problems and distinguish between the883 following three types of {\em visibility coherence}:884 885 \begin{itemize}886 887 \item {\em Spatial coherence}. Visibility of points in space tends to888 be coherent in the sense that the visible part of the scene consists889 of compact sets (regions) of visible and invisible points. We can890 reuse calculations made for a given region for the neighboring891 regions or its subregions.892 893 \item {\em Line-space coherence}. Sets of similar rays tend to have the894 same visibility classification, i.e. the rays intersect the same895 object. We can reuse calculations for the given set of rays for its896 subsets or the sets of nearby rays.897 898 \item {\em Temporal coherence}. Visibility at two successive moments is899 likely to be similar despite small changes in the scene or a900 region/point of interest. Calculations made for one frame can be901 reused for the next frame in a sequence.902 903 \end{itemize}904 905 The degree to which the algorithm exploits various types of coherence906 is one of the major design paradigms in research of new visibility907 algorithms. The importance of exploiting coherence is emphasized by908 the large amount of data that need to be processed by the current909 rendering algorithms.910 911 912 \subsubsection{Output sensitivity}913 914 915 An algorithm is said to be {\em output-sensitive} if its running time916 is sensitive to the size of output. In the computer graphics community917 the term output-sensitive algorithm is used in a broader meaning than918 in computational geometry~\cite{berg:97}. The attention is paid to a919 practical usage of the algorithm, i.e. to an efficient implementation920 in terms of the practical average case performance. The algorithms are921 usually evaluated experimentally using test data and measuring the922 running time and the size of output of the algorithm. The formal923 average case analysis is usually not carried out for the following two924 reasons:925 926 \begin{enumerate}927 928 \item {\em The algorithm is too obscured}. Visibility algorithms929 exploit data structures that are built according to various heuristics930 and it is difficult to derive proper bounds even on the expected size931 of these supporting data structures.932 933 \item {\em It is difficult to properly model the input data}. In934 general it is difficult to create a reasonable model that captures935 properties of real world scenes as well as the probability of936 occurrence of a particular configuration.937 938 \end{enumerate}939 940 A visibility algorithm can often be divided into the {\em offline}941 phase and the {\em online} phase. The offline phase is also called942 preprocessing. The preprocessing is often amortized over many943 executions of the algorithm and therefore it is advantageous to944 express it separately from the online running time.945 946 For example an ideal output-sensitive visible surface algorithm runs947 in $O(n\log n + k^2)$, where $n$ is the number of scene polygons (size948 of input) and $k$ is the number of visible polygons (in the worst case949 $k$ visible polygons induce $O(k^2)$ visible polygon fragments).950 951 952 953 \subsubsection{Acceleration data structures}954 \label{sec:acceleration_ds}955 956 Acceleration data structures are often used to achieve the performance957 goals of a visibility algorithm. 
These data structures allow efficient958 point location, proximity queries, or scene traversal required by many959 visibility algorithms.960 961 With a few exceptions the acceleration data structures provide a {\em962 spatial index} for the scene by means of a spatial data structure.963 The spatial data structures group scene objects according to the964 spatial proximity. On the contrary line space data structures group965 rays according to their proximity in line space.966 967 The common acceleration data structures can be divided into the968 following categories:969 970 \begin{itemize}971 \item Spatial data structures972 \begin{itemize}973 \item {\em Spatial subdivisions}974 975 uniform grid, hierarchical grid, kD-tree, BSP tree, octree, quadtree, etc.976 977 \item {\em Bounding volume hierarchies}978 979 hierarchy of bounding spheres,980 hierarchy of bounding boxes, etc.981 982 \item {\em Hybrid}983 984 hierarchy of uniform grids, hierarchy of kD-trees, etc.985 986 \end{itemize}987 988 \item Line space data structures989 \begin{itemize}990 \item {\em General}991 992 regular grid, kD-tree, BSP tree, etc.993 \end{itemize}994 995 \end{itemize}996 997 998 999 \subsubsection{Use of graphics hardware}1000 1001 Visibility algorithms can be accelerated by exploiting dedicated1002 graphics hardware. The hardware implementation of the z-buffer1003 algorithm that is common even on a low-end graphics hardware can be1004 used to accelerate solutions to other visibility problems. Recall that the1005 z-buffer algorithm solves the visibility from a point problem by1006 providing a discrete approximation of the visible surfaces.1007 %$(3-D-2b(i), A-3b)$1008 1009 A visibility algorithm can be accelerated by the graphics hardware if1010 it can be decomposed so that the decomposition includes the1011 problem solved by the z-buffer or a series of such problems.1012 %$(3-D-2b(i), A-3b)$1013 Prospectively, the recent features of the graphics hardware, such as1014 the pixel and vertex shaders allow easier application of the graphics1015 hardware for solving specific visibility tasks. The software interface1016 between the graphics hardware and the CPU is usually provided by1017 OpenGL~\cite{Moller02-RTR}.1018 1019 1020 \section{Visibility in urban environments}1021 1022 Urban environments constitute an important class of real world scenes1023 computer graphics deals with. We can identify two fundamental1024 subclasses of urban scenes. Firstly, we consider {\em outdoor} scenes,1025 i.e. urban scenes as observed from streets, parks, rivers, or a1026 bird's-eye view. Secondly, we consider {\em indoor} scenes, i.e. urban1027 scenes representing building interiors. In the following two sections1028 we discuss the essential characteristics of visibility in both the1029 outdoor and the indoor scenes. The discussion is followed by1030 summarizing the suitable visibility techniques.1031 1032 1033 \subsection{Analysis of visibility in outdoor urban areas}1034 1035 \label{sec:analysis_ue}1036 \label{sec:ANALYSIS_UE}1037 1038 1039 Outdoor urban scenes are viewed using two different scenarios. In a1040 {\em flyover} scenario the scene is observed from the bird's eye1041 view. A large part of the scene is visible. Visibility is mainly1042 restricted due to the structure of the terrain, atmospheric1043 constraints (fog, clouds) and the finite resolution of human1044 retina. 
Rendering of the flyover scenarios is usually accelerated1045 using LOD, image-based rendering and terrain visibility algorithms,1046 but there is no significant potential for visibility culling.1047 1048 In a {\em walkthrough} scenario the scene is observed from a1049 pedestrians point of view and the visibility is often very1050 restricted. In the remainder of this section we discuss the walkthrough1051 scenario in more detail.1052 1053 Due to technological and physical restrictions urban scenes viewed1054 from outdoor closely resemble a 2D {\em height function}, i.e. a1055 function expressing the height of the scene elements above the ground.1056 The height function cannot capture certain objects such as bridges,1057 passages, subways, or detailed objects such as trees. Nevertheless1058 buildings, usually the most important part of the scene, can be1059 captured accurately by the height function in most cases. For the1060 sake of visibility computations the objects that cannot be represented1061 by the height function can be ignored. The resulting scene is then1062 called a {\em \m25d scene}.1063 1064 In a dense urban area with high buildings visibility is very1065 restricted when the scene is viewed from a street (see1066 Figure~\ref{fig:outdoor}-a). Only buildings from nearby streets are1067 visible. Often there are no buildings visible above roofs of buildings1068 close to the viewpoint. In such a case visibility is essentially1069 two-dimensional, i.e. it could be solved accurately using a 2D1070 footprint of the scene and a 2D visibility algorithm. In areas with1071 smaller houses of different shapes visibility is not so severely1072 restricted since some objects can be visible by looking over other1073 objects. The view complexity increases (measured in number of visible1074 objects) and the height structure becomes increasingly1075 important. Complex views with far visibility can be seen also near1076 rivers, squares, and parks (see Figure~\ref{fig:outdoor}-b).1077 1078 \begin{figure}[htb]1079 \centerline{1080 \hfill1081 \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_street1}1082 \hfill1083 \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_castle1}1084 \hfill1085 }1086 \caption{Visibility in outdoor urban areas. (left) In the center of a city1087 visibility is typically restricted to a few nearby streets. (right)1088 Near river banks typically a large part of the city is visible. Note1089 that many distant objects are visible due to the terrain gradation.}1090 \label{fig:outdoor}1091 \end{figure}1092 1093 In scenes with large differences in terrain height the view complexity1094 is often very high. Many objects can be visible that are situated for1095 example on a hill or on a slope behind a river. Especially in areas1096 with smaller housing visibility is much defined by the terrain itself.1097 1098 We can summarize the observations as follows (based on1099 Wonka~\cite{wonka_phd}) :1100 1101 \begin{itemize}1102 1103 \item1104 Outdoor urban environments have basically \m25d structure and1105 consequently visibility is restricted accordingly.1106 1107 \item1108 The view is very restricted in certain areas, such as in the1109 city center. However the complexity of the view can vary1110 significantly. 
It is always not the case that only few objects are1111 visible.1112 1113 \item1114 If there are large height differences in the terrain, many1115 objects are visible for most viewpoints.1116 1117 \item1118 In the same view a close object can be visible next to a very1119 distant one.1120 1121 \end{itemize}1122 1123 1124 In the simplest case the outdoor scene consists only of the terrain1125 populated by a few buildings. Then the visibility can be calculated on1126 the terrain itself with satisfying1127 accuracy~\cite{Floriani:1995:HCH,Cohen-Or:1995:VDZ, Stewart:1997:HVT}.1128 Outdoor urban environments have a similar structure as terrains:1129 buildings can be treated as a terrain with {\em many discontinuities}1130 in the height function (assuming that the buildings do not contain1131 holes or significant variations in their fa{\c{c}}ades). To1132 accurately capture visibility in such an environment specialized1133 algorithms have been developed that compute visibility from a given1134 viewpoint~\cite{downs:2001:I3DG} or view1135 cell~\cite{wonka00,koltun01,bittner:2001:PG}.1136 1137 % The methods presented later in the thesis make use of the specific1138 % structure of the outdoor scenes to efficiently compute a PVS for the1139 % given view cell. The key observation is that the PVS for a view cell1140 % in a \m25d can be determined by computing visibility from its top1141 % boundary edges. This problem becomes a restricted variant of the1142 % visibility from a line segment in 3D with $d({\cal L}_R) = 3$.1143 1144 1145 \subsection{Analysis of indoor visibility}1146 1147 Building interiors constitute another important class of real world1148 scenes. A typical building consists of rooms, halls, corridors, and1149 stairways. It is possible to see from one room to another through an1150 open door or window. Similarly it is possible to see from one corridor1151 to another one through a door or other connecting structure. In1152 general the scene can be subdivided into cells corresponding to the1153 rooms, halls, corridors, etc., and transparent portals that connect1154 the cells~\cite{Airey90,Teller:1991:VPI}. Some portals1155 correspond to the real doors and windows, others provide only a1156 virtual connection between cells. For example an L-shaped corridor1157 can be represented by two cells and one virtual portal connecting them.1158 1159 Visibility in a building interior is often significantly restricted1160 (see Figure~\ref{fig:indoor}). We can see the room we are located at1161 and possibly few other rooms visible through open doors. Due to the1162 natural partition of the scene into cells and portals visibility can1163 be solved by determining which cells can be seen through a give set of1164 portals and their sequences. A sequence of portals that we can see1165 through is called {\em feasible}.1166 1167 \begin{figure}[htb]1168 \centerline{1169 \hfill1170 \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_chodba1}1171 \hfill1172 \includegraphics[width=0.45\textwidth,draft=\DRAFTIMAGES]{images/foto_sloupy2}1173 \hfill1174 }1175 \caption{Indoor visibility. (left) Visibility in indoor scenes is typically1176 restricted to a few rooms or corridors. (right) In scenes with more complex1177 interior structure visibility gets more complicated.1178 }1179 \label{fig:indoor}1180 \end{figure}1181 1182 1183 Many algorithms for computing indoor visibility~\cite{Airey90,1184 Teller92phd, Luebke:1995:PMS} exploit the cell/portal structure of the1185 scene. 
The potential problem of this approach is its strong1186 sensitivity to the arrangement of the environment. In a scene with a1187 complicated structure with many portals there are many feasible portal1188 sequences. Imagine a hall with columns arranged on a grid. The number1189 of feasible portal sequences rapidly increases with the distance from1190 the given view cell~\cite{Teller92phd} if the columns are sufficiently1191 small (see Figure~\ref{fig:portal_explosion}). Paradoxically most of1192 the scene is visible and there is almost no benefit of using any1193 visibility culling algorithm.1194 1195 The methods presented later in the report partially avoids this1196 problem since it does not rely on finding feasible portal sequences1197 even in the indoor scenes. Instead of determining what {\em can} be1198 visible through a transparent complement of the scene (portals) the1199 method determines what {\em cannot} be visible due to the scene1200 objects themselves (occluders). This approach also avoids the explicit1201 enumeration of portals and the construction of the cell/portal graph.1202 1203 1204 1205 \begin{figure}[htb]1206 \centerline{1207 \includegraphics[width=0.45\textwidth,draft=\DRAFTFIGS]{figs/portals_explosion}}1208 \caption{In sparsely occluded scenes the cell/portal algorithm can1209 exhibit a combinatorial explosion in number of feasible portal1210 sequences. Paradoxically visibility culling provides almost no1211 benefit in such scenes.}1212 \label{fig:portal_explosion}1213 \end{figure}1214 1215 621 1216 622 1217 623 \section{Summary} 1218 624 1219 Visibility problems and algorithms penetrate a large part of computer 1220 graphics research. The proposed taxonomy aims to classify visibility 1221 problems independently of their target application. The 1222 classification should help to understand the nature of the given 1223 problem and it should assist in finding relationships between 1224 visibility problems and algorithms in different application areas. 1225 The tools address the following classes of visibility problems: 625 The presented taxonomy classifies visibility problems independently of 626 their target application. The classification should help to understand 627 the nature of the given problem and it should assist in finding 628 relationships between visibility problems and algorithms in different 629 application areas. 
The tools address the following classes of 630 visibility problems: 1226 631 1227 632 \begin{itemize} … … 1242 647 \item Domain: 1243 648 \begin{itemize} 1244 \item viewpoint (Chapter~\ref{chap:online}), 1245 \item polygon or polyhedron (Chapters~\ref{chap:sampling,chap:mutual}) 649 \item viewpoint (online culling tool), 650 \item global visibility (global visibility sampling tool) 651 \item polygon or polyhedron (mutual visibility tool) 1246 652 \end{itemize} 1247 653 \item Scene restrictions (occluders): … … 1251 657 \item Scene restrictions (group objects): 1252 658 \begin{itemize} 1253 \item bounding boxes (Chapters~\ref{chap:rtviscull},~\ref{chap:vismap},~Chapter~\ref{chap:rot25d} and~\ref{chap:rot3d}),659 \item bounding boxes 1254 660 \end{itemize} 1255 661 \item Output: 1256 662 \begin{itemize} 1257 \item Visibility classification of objects or hierarchy nodes1258 663 \item PVS 1259 664 \end{itemize} … … 1264 669 \item aggresive 1265 670 \end{itemize} 1266 671 \item Solution space: 1267 672 \begin{itemize} 1268 \item discrete ( Chapters~\ref{chap:online},~\ref{chap:sampling})1269 \item continuous , line space / primal space (Chapter~\ref{chap:rot25d})673 \item discrete (online culling tool, global visibility sampling tool, conservative and approximate algorithm from the mutual visibility tool) 674 \item continuous (exact algorithm from mutual visibility tool) 1270 675 \end{itemize} 1271 \item Solution space data structures: viewport ( Chapter~\ref{chap:online}), ray stack (Chapter~\ref{chap:sampling}), ray stack or BSP tree (Chapter~\ref{chap:mutual})676 \item Solution space data structures: viewport (online culling tool), ray stack (global visibility sampling tool, conservative and approximate algorithm from the mutual visibility tool), BSP tree (exact algorithm from the mutual visibility tool) 1272 677 \item Use of coherence of visibility: 1273 678 \begin{itemize} 1274 \item spatial coherence (all methods)1275 \item temporal coherence ( Chapter~\ref{chap:online})679 \item spatial coherence (all tools) 680 \item temporal coherence (online culling tool) 1276 681 \end{itemize} 1277 \item Output sensitivity: expected in practice (all methods) 1278 \item Acceleration data structure: kD-tree (all methods) 1279 \item Use of graphics hardware: Chapter~\ref{chap:online} 1280 682 \item Output sensitivity: expected in practice (all tools) 683 \item Acceleration data structure: kD-tree (all tools) 684 \item Use of graphics hardware: online culling tool 1281 685 \end{itemize} 1282 686 -
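The classification above lists a kD-tree as the acceleration data structure shared by all tools. The following is a minimal, hypothetical sketch of such a spatial index node and the point-location query it supports (not the actual classes used by the tools).

\begin{verbatim}
#include <memory>
#include <vector>

// Hypothetical kD-tree node for axis-aligned spatial indexing of the scene.
struct KdNode {
    int splitAxis = -1;             // 0,1,2 for x,y,z; -1 marks a leaf
    double splitPosition = 0.0;     // coordinate of the split plane
    std::unique_ptr<KdNode> back;   // child below the split plane
    std::unique_ptr<KdNode> front;  // child above the split plane
    std::vector<int> objects;       // object ids stored in a leaf
    bool isLeaf() const { return splitAxis < 0; }
};

// Point location: descend to the leaf containing the query point.
const KdNode* locateLeaf(const KdNode* node, const double point[3]) {
    while (!node->isLeaf())
        node = (point[node->splitAxis] < node->splitPosition)
                   ? node->back.get() : node->front.get();
    return node;
}
\end{verbatim}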
trunk/VUT/doc/SciReport/mutual.tex
r249 r266

Unmodified:
\chapter{Mutual Visibility Tool}

Removed:
\section{Introduction}

Unmodified:
If a fast visibility solution is required such as during the
…
cell pairs which are classified as mutually invisible. We process the
objects sequentially and for every given object we determine the view

Removed:
cell which form the boundary of the invisible view cell set.

Added:
cells which form the boundary of the invisible view cell set.

Unmodified:
This boundary is based on the current view cell visibility
…
simpler case both are defined by an axis aligned bounding box.

Added:
Below, we describe three different mutual visibility verification
algorithms. The first algorithm, which is described in most detail,
computes exact visibility. The second one is a conservative algorithm
and the third one is an approximate algorithm with a guaranteed error
bound.

…

Unmodified:
The exact mutual visibility test computes exact visibility between two
polyhedrons in the scene. This is computed by testing visibility

Removed:
between all pairs of potentially polygons of these polyhedrons.

Added:
between all pairs of potentially visible polygons of these
polyhedrons. For each pair of tested polygons the computation is
localized into the shaft defined by their convex hull. This shaft is
used to determine the set of relevant occluders~\cite{Haines94}.

Unmodified:
\section{Occlusion tree}
…
Leaves of the occlusion tree are classified $in$ or $out$. If $N$ is
an $out$-leaf, ${\mathcal Q^*_N}$ represents unoccluded rays emerging
from the source polygon $P_S$. If $N$ is an $in$-leaf, it is
associated with an occluder $O_N$ that blocks the corresponding set of
rays ${\mathcal Q^*_N}$. Additionally $N$ stores a fragment of the
blocker polyhedron $B_N$ representing ${\mathcal Q^*_N}$. The
intersection of $B_N$ and the \plucker quadric corresponds to a set of
stabbers ${\cal S}_N$ through which $O_N$ is visible from $P_S$.

Added:
A 2D example of an occlusion tree is shown in Figure~\ref{fig:ot}.

\begin{figure}[htb]
\centerline{
\includegraphics[width=0.9\textwidth,draft=\DRAFTFIGS]{figs/ot1}}
\caption{A 2D example of an occlusion tree. Rays intersecting scene polygons
are represented by polyhedra P1, P2 and P3. The occlusion tree represents a union
of the polyhedra. Note that P2 has been split during the insertion process.}
\label{fig:duality2d}
\end{figure}

Unmodified:
%% Consider a source polygon and a single occluder such as in
…
The occlusion tree is constructed incrementally by inserting blocker

Removed:
polyhedra in the order given by the approximate occlusion sweep of the
scene polygons.

Added:
polyhedra in the order given by the size of the polygons.
When 97 processing a polygon $P_j$ the algorithm inserts a polyhedron 98 $B_{P_SP_j}$ representing the feasible line set between the source 99 polygon $P_S$ and the polygon $P_j$. The polyhedron is split into 100 fragments that represent either occluded or unoccluded rays. 84 101 85 102 We describe two methods that can be used to insert a blocker … … 113 130 classification. If $N_{c}$ is an $out$-leaf then $B_{c}$ is visible 114 131 and $N_c$ is replaced by the elementary occlusion tree of $B_{c}$. If 115 $N_{c}$ is an $in$-leaf, the mutual position of the currently 116 processed polygon $B_j$ and the polygon $B_{N_{c}}$ associated with 117 $N_{c}$ is determined. This test will be described in 118 Section~\ref{sec:position}. If $B_{c}$ is behind $B_{N_{c}}$ it is 119 invisible and no modification to the tree necessary. Otherwise we need 120 to {\em merge} $B_{c}$ into the tree. The merging replaces $N_{c}$ by 121 the elementary occlusion tree of $B_c$ and inserts the old fragment 122 $B_{N_{c}}$ in the new subtree. 132 $N_{c}$ is an $in$-leaf it corresponds to already occluded rays and no 133 modification to the tree necessary. Otherwise we need to {\em merge} 134 $B_{c}$ into the tree. The merging replaces $N_{c}$ by the elementary 135 occlusion tree of $B_c$ and inserts the old fragment $B_{N_{c}}$ in 136 the new subtree. 123 137 124 138 \subsection{Insertion without splitting} … … 142 156 continues in the left subtree. If $B_c$ intersects both halfspaces the 143 157 algorithm proceeds in both subtrees of $N_c$ and $\pplane_{N_c}$ is 144 added to the list of splitting hyperplanes with a correct sign for each145 subtree. Reaching an $out$-leaf the list of splitting hyperplanes and 146 the associated signs correspond to halfspaces bounding the158 added to the list of splitting hyperplanes with a correct sign for 159 each subtree. Reaching an $out$-leaf the list of splitting hyperplanes 160 and the associated signs correspond to halfspaces bounding the 147 161 corresponding polyhedron fragment. The polyhedron enumeration 148 162 algorithm is applied using these halfspaces and the original … … 151 165 halfspaces is empty. Such a case occurs due to the conservative 152 166 traversal of the tree that only tests the position of the inserted 153 polyhedron with respect to the splitting hyperplanes\footnote{Such a 154 traversal was also used in Section~\ref{sec:rtviscull_conserva} in the 155 context of the from-point visibility culling.}. If the fragment is not 156 empty, the tree is extended as described in the previous section. 167 polyhedron with respect to the splitting hyperplanes. If the fragment 168 is not empty, the tree is extended as described in the previous 169 section. 157 170 158 171 Reaching an $in$-leaf the polygon positional test is applied. If the … … 186 199 187 200 188 \subsection{Polygon positional test}189 \label{sec:position}190 191 The polygon positional test aims to determine the order of two192 polygons $P_i$ and $P_j$ with respect to the source polygon193 $P_S$~\cite{Chrysantho1996a}. We assume that polygons $P_i$ and $P_j$194 do not intersect, i.e. all potential polygon intersections were195 resolved in preprocessing. The test is applied to determine which of196 the two polygons is closer to $P_S$ with respect to the given set of197 rays intersecting both polygons. The test proceeds as follows:198 199 \begin{enumerate}200 \item If $P_i$ lays in the positive/negative halfspace defined by201 $P_j$, it is before/behind $P_j$. 
Otherwise, proceed with the next step.202 \item If $P_j$ lays in the positive/negative halfspace defined by203 $P_i$, it is before/behind $P_i$. Otherwise, proceed with the next step.204 \item Find a ray from the given set of rays that intersects both205 polygons. Compute an intersection of the ray with $P_i$ and $P_j$ and206 determine the position of the polygons according to the order of207 intersection points along the ray.208 \end{enumerate}209 210 The first two steps aim to determine the absolute priority of one of211 the polygons. If these steps fail, the order of the polygons is212 determined using a sample ray in step 3.213 201 214 202 \section{Visibility test} … … 263 251 alter the accuracy of the visibility algorithm. 264 252 265 \subsection{Shaft culling} 266 267 \label{sec:rot3_shaft} 268 269 The algorithm as described computes and maintains visibility of all 270 polygons facing the given source polygon. In complex scenes the 271 description of visibility from the source polygon can reach $O(n^4)$ 272 complexity in the worse 273 case~\cite{Teller92phd,p-rsls-97,Durand99-phd}. The size of the 274 occlusion tree is then bound by $O(n^5)$. Although such a 275 configuration occurs rather rare in practice we need to apply some 276 general technique to avoid the worst-case memory complexity. If the 277 algorithm should provide an exact analytic solution to the problem, we 278 cannot avoid the $O(n^4\log n)$ worst-case computational complexity, 279 but the computation can be decoupled to reduce memory requirements. 280 281 We propose to use the {\em shaft culling}~\cite{Haines94} method that 282 divides the computation into series of from-region visibility 283 problems restricted by a given shaft of lines. Ideally the shafts are 284 determined so that the {\em visibility complexity} in the shaft is 285 bound by a given constant. This is however the from-region visibility 286 problem itself. We can use an estimate based on a number of polygons 287 in the given shaft. First, the hemicube is erected over the source 288 polygon and it is used to construct five shafts corresponding to the 289 five faces of the hemicube. The shafts are then subdivided as 290 follows: if a given shaft intersects more than a predefined number of 291 polygons the corresponding hemicube face is split in two and the new 292 shafts are processed recursively. Visibility in the shaft is 293 evaluated using all polygons intersecting the shaft. When the 294 computation for the shaft is finished the occlusion tree is 295 destroyed. The algorithm then proceeds with the next shaft until all 296 generated shafts have been processed. See Figure~\ref{fig:rot3_shafts} 297 for an example of a regular subdivision of the problem-relevant line 298 set into 80 shafts. 299 300 301 This technique shares a similarity with the algorithm recently 302 published by Nirenstein et al.~\cite{nirenstein:02:egwr} that uses 303 shafts between the source polygon and each polygon in the 304 scene. Our technique provides the following benefits: 305 306 \begin{itemize} 307 \item The shaft can contain thousands of polygons, which allows to 308 better exploit the coherence of visibility. The hierarchical line 309 space subdivision allows efficient searches and updates. 310 311 \item The method is applicable even in scenes with big differences in 312 polygon sizes. 
Unlike the method of Nirenstein et 313 al.~\cite{nirenstein:02:egwr}, the proposed technique generates shafts 314 to find balance between the use of coherence and the memory 315 requirements. Due to the hierarchical subdivision of the 316 problem-relevant line set our method can handle the case when there is 317 a huge number of polygons in a single shaft between two large scene 318 polygons. 319 320 \item Visibility in the whole shaft is described in a unified data 321 structure, which can serve for further computations (e.g. extraction 322 of event surfaces, hierarchical representation of visibility, etc.). 323 \end{itemize} 324 325 326 327 \subsection{Occluder sorting} 328 329 Occluder sorting aims to increase the accuracy of the front-to-back 330 ordering determined by the approximate occlusion sweep. Higher 331 accuracy of the ordering decreases the number of the late merging of 332 blocker polyhedra in the tree. Recall that the merging extends the 333 occlusion tree by a blocker polyhedron corresponding to an occluder 334 that hides an already processed occluder. 335 336 The occluder sorting is applied when reaching a leaf node of the 337 spatial hierarchy during the approximate occlusion sweep. Given a leaf 338 node $N$ the occluder sorting proceeds as follows: 339 340 \begin{enumerate} 341 \item Determine a ray $r$ piercing the center of the source polygon 342 $P_S$ and the node $N$. 343 \item Compute intersections of $r$ and all supporting planes of the 344 polygons associated with $N$. 345 \item Sort the polygons according to the front-to-back order of the 346 corresponding intersection points along $r$. 347 \end{enumerate} 348 349 The occluder sorting provides an exact priority order within the given 350 leaf in the case that the polygons are pierced by the computed ray $r$ 351 and they do not intersect. 253 % \subsection{Occluder sorting} 254 255 % Occluder sorting aims to increase the accuracy of the front-to-back 256 % ordering determined by the approximate occlusion sweep. Higher 257 % accuracy of the ordering decreases the number of the late merging of 258 % blocker polyhedra in the tree. Recall that the merging extends the 259 % occlusion tree by a blocker polyhedron corresponding to an occluder 260 % that hides an already processed occluder. 261 262 % The occluder sorting is applied when reaching a leaf node of the 263 % spatial hierarchy during the approximate occlusion sweep. Given a leaf 264 % node $N$ the occluder sorting proceeds as follows: 265 266 % \begin{enumerate} 267 % \item Determine a ray $r$ piercing the center of the source polygon 268 % $P_S$ and the node $N$. 269 % \item Compute intersections of $r$ and all supporting planes of the 270 % polygons associated with $N$. 271 % \item Sort the polygons according to the front-to-back order of the 272 % corresponding intersection points along $r$. 273 % \end{enumerate} 274 275 % The occluder sorting provides an exact priority order within the given 276 % leaf in the case that the polygons are pierced by the computed ray $r$ 277 % and they do not intersect. 352 278 353 279 … … 413 339 \section{Conservative Verifier} 414 340 415 416 \section{Error Bound Verifier} 341 A conservative verifier is a faster alternative to the exact 342 visibility verifier described above. The verifier is an extension of 343 the strong occlusion algorithm of Cohen-Or et 344 al.~\cite{cohen-egc-98}. 
In particular our verifier refines the search for a strong occluder by
using a hierarchical subdivision of the space of lines connecting the
two regions tested for mutual visibility. Initially the shaft bounding
the tested regions is constructed. Rays bounding the shaft are traced
through the scene and we compute all intersections with the scene
objects between the tested regions. The algorithm then proceeds as
follows:

\begin{itemize}
\item In the case that any ray does not intersect any object, the
tested regions are classified as visible and the algorithm terminates.
\item If the rays intersect the same convex object (at any depth),
this object is a strong occluder with respect to the shaft and thus it
also hides all rays in the corresponding shaft.
\item If the rays do not intersect a single convex object, four new
shafts are constructed by splitting both regions in half and the
process is repeated recursively.
\end{itemize}

If the subdivision does not terminate before reaching a predefined
subdivision depth, we conservatively classify the tested regions as
mutually visible.


\section{Error Bound Approximate Verifier}

The approximate verifier will be based on a similar idea as the
conservative one. However, it will behave differently in the finer
subdivision of the ray shafts. The idea is to use the above algorithm
until the shafts get small enough that we can guarantee that, even if
the shaft is not blocked by the scene objects, the pixel error induced
by the omission of objects potentially visible behind the shaft is
below a given threshold.

For the computation of the maximal error due to the current shaft we
assume that one tested region is a view cell, whereas the other is an
object bounding box or a cell of the spatial hierarchy. The threshold
is computed as follows: We first triangulate the farthest intersection
points in the shaft as seen from the view cell side of the shaft. Then
for each computed triangle we calculate a point in the view cell which
maximizes the projected area of the triangle. The conservative
estimate of the maximal error is then given by the sum of the computed
projected areas.

%Option: - if there is sufficient number of samples from 1 + 2 and some
…
%from there values.

% a) is perhaps the most interesting since this has not been done so far at all. I thought about different methods how to do that (e.g. maintaining a triangulated front visible layer), but I always found some problem. I believe that the method which is almost implemented now is solid and I just wonder what performance it will achieve.

% The basic idea is the following:

% Use 2 plane parametrization for describing all lines intersecting the object and the viewcell. The line sets will be described by aligned rectangles on the two planes which allows to construct convex shafts bounded always by only 4 lines. After determining the initial rectangles bounding the whole line set perform recursive subdivision as follows:

% 1. Cast the 4 corner rays and determine ALL intersections with the occluding objects

% 2. If any ray is unoccluded terminate the whole test with VISIBLE and extend the PVS of the object with the new viewcell (and possibly more if the ray goes further). Start the verification for the new viewcells in the occluded layer.

% 3.
If all 4 rays pierce the same convex mesh, (or mesh triangle for non convex meshes) terminate this branch with INVISIBLE. Note the terminating mesh need not be the front most one.434 435 4. Triangulate the depth of the shaft as seen from the viecell side of the shaft and compute the maximal projected area of the 2 computed triangles. This determines the maximal pixel error we can make by terminating this branch as invisible.436 Note that here it would be the best to selected those intersections for the rays which have the most similar depths to better estimate the maximal error, i.e. the "depth triangles" need not be formed by the first visible layer.437 438 5. If 4 does not terminate subdivide either the viewcell or the object rectangle depending on the sample depths and error computed.439 440 It is actually quite simple to create a conservative variant of the algorithm: subdivide only to a given depth and avoid test 4. Then the samples can only be terminated by a "STRONG occluder" which hides the whole shaft. This is similar to Dannys method, but here a hierarchical shaft subdivision is performed which should deliver much more accuracy.441 442 443 > Partly: area subdivision algorithm does not only cast "corner" rays, but maintains lists of the triangles for the given area. However the basic idea is the same: subdivide the set or rays until there is a trivial decision (a single convex object intersected by all corner rays) or the error is too small. The second criterion is different from the from-poin t setup, since the error cannot be evaluated in a single projection plane.444 > That is why this triangulation would be done: Imagine that the 4 corner rays of a shaft intersect some object(s) in the scene. Now connect these intersection points by creating two triangles. These triangles do not exist in the scene, but define the largest possible windows through which we can see further in the shaft. Now it is easy to find two points at the origin of the shaft which maximize the projected area of the two triangles. The sum of these areas gives a conservative estimate of the spatial angle error which can be made by not subdividing the shaft any further.445 > Is this description clearer?446 447 448 partly - I just don't see it in front of me now. How is the result influenced by which triangles are chosen? Is it so that the nearer the intersections, the bigger the error?394 % a) is perhaps the most interesting since this has not been done so far at all. I though about different methods how to do that (e.g. maintaining a triangulated front visible layer), but I always found some problem. I believe that the method which is almost implemented now is solid and I just wonder what performance it will achieve. 395 396 % The basic idea is the following: 397 398 % Use 2 plane parametrization for describing all lines intersecting the object and the viewcell. The line sets will be described by aligned rectangles on the two planes which allows to construct convex shafts bounded always by only 4 lines. After determing the initial rectangles bounding the whole line set perform recursive subdivsion as follows: 399 400 % 1. Cast the 4 corner rays and deterine ALL intersections with the occluding objects 401 402 % 2. If any ray is unoccluded termite the whole test with VISIBLE and extend the PVS of the object with the new viecell (and possibly more if the rays goes further). Start the verification for the new viewcells in the occluded layer. 403 404 % 3. 
If all 4 rays pierce the same convex mesh (or mesh triangle for non-convex meshes), terminate this branch with INVISIBLE. Note that the terminating mesh need not be the front-most one.

% 4. Triangulate the depth of the shaft as seen from the viewcell side of the shaft and compute the maximal projected area of the 2 computed triangles. This determines the maximal pixel error we can make by terminating this branch as invisible.
% Note that here it would be best to select those intersections for the rays which have the most similar depths to better estimate the maximal error, i.e. the "depth triangles" need not be formed by the first visible layer.

% 5. If 4 does not terminate, subdivide either the viewcell or the object rectangle depending on the sample depths and the error computed.

% It is actually quite simple to create a conservative variant of the algorithm: subdivide only to a given depth and avoid test 4. Then the samples can only be terminated by a "STRONG occluder" which hides the whole shaft. This is similar to Danny's method, but here a hierarchical shaft subdivision is performed which should deliver much more accuracy.

% > Partly: the area subdivision algorithm does not only cast "corner" rays, but maintains lists of the triangles for the given area. However the basic idea is the same: subdivide the set of rays until there is a trivial decision (a single convex object intersected by all corner rays) or the error is too small. The second criterion is different from the from-point setup, since the error cannot be evaluated in a single projection plane.
% > That is why this triangulation would be done: Imagine that the 4 corner rays of a shaft intersect some object(s) in the scene. Now connect these intersection points by creating two triangles. These triangles do not exist in the scene, but define the largest possible windows through which we can see further in the shaft. Now it is easy to find two points at the origin of the shaft which maximize the projected area of the two triangles. The sum of these areas gives a conservative estimate of the spatial angle error which can be made by not subdividing the shaft any further.
% > Is this description clearer?

% partly - I just don't see it in front of me now. How is the result influenced by which triangles are chosen? Is it so that the nearer the intersections, the bigger the error?


\section{Summary}

\label{sec:rot3d_summary}

This chapter presented a new method for computing from-region
visibility in polygonal scenes. The method is based on the concept of
line space subdivision, approximate occlusion sweep and hierarchical
visibility tests. The key idea is a hierarchical subdivision of the
problem-relevant line set using \plucker coordinates and the occlusion
tree. \plucker coordinates allow to perform operations on sets of
lines by means of set-theoretical operations on the 5D polyhedra. The
occlusion tree is used to maintain a union of the polyhedra that
represent lines occluded from the given region (polygon).

We discussed the relation of sets of lines in 3D and the polyhedra in
\plucker coordinates. We proposed a general size measure for a set of
lines described by a blocker polyhedron and a size measure designed
for the computation of PVS. The chapter presented two algorithms for
construction of the occlusion tree by incremental insertion of blocker
polyhedra. 
The occlusion tree was used to test visibility of a given
polygon or region with respect to the source polygon/region. We
proposed several optimizations of the algorithm that make the approach
applicable to large scenes. The chapter discussed three possible
applications of the method: discontinuity meshing, visibility culling,
and occluder synthesis.

The implementation of the method was evaluated on scenes consisting of
randomly generated triangles, structured scenes, and a real-world
urban model. The evaluation was focused on the application of the
method to PVS computation. By computing a PVS in a model of the city
of Graz it was shown that the approach is suitable for visibility
preprocessing in urban scenes. The principal advantage of the method
is that it does not rely on the various tuning parameters that
characterize many conservative or approximate algorithms. On the other
hand, the exactness of the method requires higher computational
demands and implementation effort.

This chapter presented a mutual visibility tool, which determines
whether two regions in the scene are mutually visible.

First, we described an advanced exact visibility algorithm which
is a simple modification of an algorithm for computing from-region
visibility in polygonal scenes. The key idea is a hierarchical
subdivision of the problem-relevant line set using \plucker
coordinates and the occlusion tree. \plucker coordinates allow to
perform operations on sets of lines by means of set-theoretical
operations on the 5D polyhedra. The occlusion tree is used to maintain
a union of the polyhedra that represent lines occluded from the given
region (polygon). We described two algorithms for construction of the
occlusion tree by incremental insertion of blocker polyhedra. The
occlusion tree was used to test visibility of a given polygon or
region with respect to the source polygon/region. We proposed several
optimizations of the algorithm that make the approach applicable to
large scenes. The principal advantage of the exact method over the
conservative and the approximate ones is that it does not rely on
the various tuning parameters that characterize many conservative or
approximate algorithms. On the other hand, the exactness of the method
requires higher computational demands and implementation effort.

Second, we described a conservative verifier which is an
extension of the algorithm based on strong occluders. The conservative
verifier requires a specification of the maximal subdivision depth
(and thus the minimal shaft size) at which the tested regions are
conservatively classified as visible.

Third, we described an approximate error bound verifier which is
also an extension of the algorithm based on strong occluders. Instead
of a fixed subdivision depth, it requires a specification of the
maximal pixel error below which an unresolved shaft may be classified
as invisible.
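As an illustration of the \plucker mapping referred to in this
summary, the following self-contained fragment computes the \plucker
coordinates of the line through two points and the side operator used
to classify the mutual orientation of two lines. The sign convention
is one of several used in the literature and may differ from the one
in the actual implementation; the names are chosen for illustration
only.

\begin{verbatim}
#include <array>

using Vec3 = std::array<double, 3>;

static Vec3 Sub(const Vec3 &a, const Vec3 &b)   { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 Cross(const Vec3 &a, const Vec3 &b) { return {a[1]*b[2]-a[2]*b[1],
                                                          a[2]*b[0]-a[0]*b[2],
                                                          a[0]*b[1]-a[1]*b[0]}; }
static double Dot(const Vec3 &a, const Vec3 &b) { return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; }

// Pluecker coordinates of the oriented line through points p and q:
// a directional part d = q - p and a moment part m = p x q.
struct PlueckerLine {
  Vec3 d, m;
  PlueckerLine(const Vec3 &p, const Vec3 &q) : d(Sub(q, p)), m(Cross(p, q)) {}
};

// Side operator (permuted inner product): its sign tells on which side
// line b passes around line a; it is zero if the two lines intersect
// or are parallel.
double Side(const PlueckerLine &a, const PlueckerLine &b)
{
  return Dot(a.d, b.m) + Dot(b.d, a.m);
}
\end{verbatim}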
trunk/VUT/doc/SciReport/online.tex
r253 r266

\begin{figure}%[htb]
%\centerline
\centering \footnotesize
\begin{tabular}{c}
\includegraphics[width=0.6\textwidth, draft=\DRAFTFIGS]{images/ogre_terrain} \\
%\hline
\includegraphics[width=0.3\textwidth, draft=\DRAFTFIGS]{images/vis_viewfrustum} \hfill \includegraphics[width=0.3\textwidth, draft=\DRAFTFIGS]{images/vis_chc} \\
\end{tabular}
%\label{tab:online_culling_example}
\caption{(top) The rendered terrain scene. (bottom) Visualization of the rendered / culled objects
using view frustum culling (left image) vs. occlusion queries (right image).
The yellow boxes show the actually rendered scene objects. The
red boxes depict the view-frustum-culled hierarchy nodes, the blue boxes depict the
occlusion-query-culled hierarchy nodes.}
\label{fig:online_culling_example}
\end{figure}

Recent graphics hardware natively supports an \emph{occlusion query}
to detect the visibility of an object against the current contents of
the z-buffer. Although the query itself is processed quickly using
the raw power of the graphics processing unit (GPU), its result is not
available immediately due to the delay between issuing the query and
its actual processing in the graphics pipeline. As a result, a naive
…
simplicity and versatility: the method can be easily integrated in
existing real-time rendering packages on architectures supporting the
underlying occlusion query. In
figure~\ref{fig:online_culling_example}, the same scene (top row) is
rendered using view frustum culling (visualization in the bottom left
image) versus online culling using occlusion queries (visualization
in the bottom right image). It can be seen that with view frustum
culling alone many objects are still rendered.
%Using spatial and assuming temporal coherence

…

z-buffer allows to quickly determine if the geometry in question is
occluded. To a certain extent this idea is used in the current
generation of graphics hardware by applying early z-tests of fragments
in the graphics pipeline (e.g., Hyper-Z technology of ATI or Z-cull of
NVIDIA). However, the geometry still needs to be sent to the GPU,
transformed, and coarsely rasterized even if it is later determined
invisible.

Zhang~\cite{EVL-1997-163} proposed hierarchical occlusion maps, which
…
%shown in the accompanying video.
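To make the role of the hardware occlusion query concrete, the
following fragment sketches how a query can be issued for the bounding
box of a hierarchy node using the OpenGL
\texttt{GL\_ARB\_occlusion\_query} extension. \texttt{HierarchyNode}
and \texttt{RenderBoundingBox} are hypothetical helpers, and the
fragment deliberately omits the latency-hiding logic (query queue,
reuse of the previous frame's results) of the full method.

\begin{verbatim}
// Assumes an extension loader (e.g. GLEW) so that the ARB entry points are
// available; HierarchyNode and RenderBoundingBox() are hypothetical helpers.
#include <GL/glew.h>

GLuint IssueOcclusionQuery(const HierarchyNode &node)
{
  GLuint query = 0;
  glGenQueriesARB(1, &query);

  // The bounding box is rasterized only for the depth test; it must not
  // modify the color or depth buffer.
  glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
  glDepthMask(GL_FALSE);

  glBeginQueryARB(GL_SAMPLES_PASSED_ARB, query);
  RenderBoundingBox(node);              // draw the node's bounding box only
  glEndQueryARB(GL_SAMPLES_PASSED_ARB);

  glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
  glDepthMask(GL_TRUE);
  return query;
}

bool QueryVisible(GLuint query)
{
  // Reading the result too early stalls the pipeline; this is exactly the
  // latency that the culling algorithm tries to hide by doing other work
  // in the meantime.
  GLuint samplesPassed = 0;
  glGetQueryObjectuivARB(query, GL_QUERY_RESULT_ARB, &samplesPassed);
  return samplesPassed > 0;
}
\end{verbatim}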
The complete power plant model is quite challenging even to load into
memory, but on the other hand it offers good occlusion. This scene is
an interesting candidate for testing not only due to its size, but
also due to significant changes in visibility and depth complexity in
its different parts.

…

\caption{Statistics for the three test scenes. VFC is rendering with only view-frustum culling, S\&W is the
hierarchical stop and wait method, CHC is our new method, and Ideal
is a perfect method with respect to the given hierarchy. All values
are averages over all frames (including the speedup).}
\label{tab:averages}
\end{table*}
trunk/VUT/doc/SciReport/sampling.tex
r255 r266

\chapter{Global Visibility Sampling Tool}

…

conservative or error bound aggressive solution. The choice of the
particular verifier is left to the user in order to select the best
one for a particular scene, application context and time
constraints. For example, in scenes like a forest an error bound
aggressive visibility can be the best compromise between the resulting
size of the PVS (and framerate) and the visual quality. The exact or
conservative algorithm can however be chosen for urban scenes where
the omission of even small objects can be more disturbing for the
user. The mutual visibility tool will be described in the next chapter.

\end{itemize}

…

subdivided into viewcells and for each view cell the set of visible
objects --- the potentially visible set (PVS) --- is computed. This framework
has been used for conservative, aggressive and exact algorithms.

We propose a different strategy which has several advantages for
…
based on the following fundamental ideas:
\begin{itemize}
\item Compute progressive global visibility instead of sequential from-region visibility
\item Replace the roles of view cells and objects for some parts of the computation
\end{itemize}

…

\label{VFR3D_RELATED_WORK}

Below we briefly discuss the related work on visibility preprocessing
in several application areas. In particular we focus on computing
from-region visibility, which has been at the core of most previous visibility
…

The first algorithms dealing with from-region visibility belong to the
area of computer vision. The {\em aspect
graph}~\cite{Gigus90,Plantinga:1990:RTH, Sojka:1995:AGT} partitions
the view space into cells that group viewpoints from which the
projection of the scene is qualitatively equivalent. The aspect graph
…
attenuation, reflection and time delays.

\section{Algorithm Description}

This section first describes the setup of the global visibility
sampling algorithm. In particular we describe the view cell
representation and the novel concept of from-object based
visibility. Then we outline the different visibility sampling
strategies.

\subsection{View Cell Representation}

…

\item optimized for view cell--ray intersection.
\item flexible, i.e., it can represent arbitrary geometry.
\item naturally suited for a hierarchical approach. 
%(i.e., there is a root view cell containing all others)
\end{itemize}

…

subdivide a BSP leaf view cell quite easily.

Currently we use two approaches to generate the initial BSP view cell
tree.

\begin{itemize}

\item We use a number of dedicated input view cells. As input view
cell any closed mesh can be applied. The only requirement is that the
…
(i.e., add a pointer to the view cell). Hence a number of leaves can
be associated with the same input view cell.

\item We apply the BSP tree subdivision to the scene geometry. When
the subdivision terminates, the leaf nodes also represent the view
cells.

\end{itemize}

\subsection{From-object based visibility}

Our framework is based on the idea of sampling visibility by casting
rays through the scene and collecting their contributions. A
visibility sample is computed by casting a ray from an object towards
…

\subsection{Basic Randomized Sampling}

…

The described algorithm accounts for the irregular distribution of the
objects: more samples are placed at locations containing more
objects. Additionally, every object is sampled many times depending on
…

\subsection{Accounting for View Cell Distribution}

The first modification to the basic algorithm accounts for the irregular
distribution of the view cells. Such a case is common for example in
urban scenes where the view cells are mostly distributed in a
horizontal direction and more view cells are placed at denser parts of
the city. The modification involves replacing the uniformly
distributed ray direction by directions distributed according to the
local view cell directional density. This means placing more samples at
directions where more view cells are located: we select a random
view cell which lies in the halfspace given by the surface normal at the
chosen point. We pick a random point inside the view cell and cast a
…

\subsection{Accounting for Visibility Events}

Visibility events correspond to the appearance and disappearance of
objects with respect to a moving view point. In polygonal scenes the
events are defined by event surfaces induced by three distinct scene
edges. Depending on the edge configuration we distinguish between
vertex-edge (VE) events and triple-edge (EEE) events. The VE surfaces
are planar, whereas the EEE surfaces are in general quadratic.

To account for these events we explicitly place samples passing near the
object edges and directed towards edges and/or vertices of other
362 363