\input texinfo @c -*-texinfo-*-
@c %**start of header
@setfilename manual.info
@settitle OGRE Manual v1.2.0 ('Dagon')
@c %**end of header

@titlepage
@title OGRE Manual
@author Steve Streeting
@page
@vskip 0pt plus 1filll
Copyright @copyright{} The OGRE Team@*@*

Permission is granted to make and distribute verbatim
copies of this manual provided the copyright notice and
this permission notice are preserved on all copies.@*@*

Permission is granted to copy and distribute modified
versions of this manual under the conditions for verbatim
copying, provided that the entire resulting derived work is
distributed under the terms of a permission notice
identical to this one.@*@*
@end titlepage

@node Top
@top OGRE Manual
Copyright @copyright{} The OGRE Team@*@*


This work is licensed under the Creative Commons Attribution-ShareAlike 2.5 License. To view a copy of this licence, visit @url{http://creativecommons.org/licenses/by-sa/2.5/} or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.@*@*

@ifinfo
@menu
* Introduction::
* The Core Objects::
* Scripts::
* Mesh Tools::
* Hardware Buffers::
* External Texture Sources::
* Shadows::
* Animation::
@detailmenu
@end detailmenu
@end menu
@end ifinfo

@c -------------------------------------------
@node Introduction
@chapter Introduction
This chapter is intended to give you an overview of the main components of OGRE and why they have been put together that way.
@c -------------------------------------------
@node Object Orientation - more than just a buzzword
@section Object Orientation - more than just a buzzword
The name is a dead giveaway. It says Object-Oriented Graphics Rendering Engine, and that's exactly what it is. Ok, but why? Why did I choose to make such a big deal about this?@*@*

Well, nowadays graphics engines are like any other large software system. They start small, but soon they balloon into monstrously complex beasts which just can't all be understood at once. It's pretty hard to manage systems of this size, and even harder to make changes to them reliably, and that's pretty important in a field where new techniques and approaches seem to appear every other week. Designing systems around huge files full of C function calls just doesn't cut it anymore - even if the whole thing is written by one person (not likely) they will find it hard to locate that elusive bit of code after a few months and even harder to work out how it all fits together.@*@*

Object orientation is a very popular approach to addressing the complexity problem. It's a step up from decomposing your code into separate functions: it groups functions and state data together in classes which are designed to represent real concepts. It allows you to hide complexity inside easily recognised packages with a conceptually simple interface, giving them the feel of 'building blocks' which you can plug together again later. You can also organise these blocks so that some of them look the same on the outside, but have very different ways of achieving their objectives on the inside, again reducing the complexity for the developers because they only have to learn one interface.@*@*

I'm not going to teach you OO here - that's a subject for many other books - but suffice to say I'd seen enough benefits of OO in business systems that I was surprised most graphics code seemed to be written in a C function style. I was interested to see whether I could apply my design experience in other types of software to an area which has long held a place in my heart - 3D graphics engines. Some people I spoke to were of the opinion that using full C++ wouldn't be fast enough for a real-time graphics engine, but others (including me) were of the opinion that, with care, an object-oriented framework can be performant. We were right.

In summary, here are the benefits an object-oriented approach brings to OGRE:
@table @asis
@item Abstraction
Common interfaces hide the differences between 3D API and operating system implementations
@item Encapsulation
There are a lot of state management and context-specific actions to be done in a graphics engine - encapsulation allows me to put the code and data nearest to where it is used, which makes the code cleaner and easier to understand, and more reliable because duplication is avoided
@item Polymorphism
The behaviour of methods changes depending on the type of object you are using, even if you only learn one interface, e.g. a class specialised for managing indoor levels behaves completely differently from the standard scene manager, but looks identical to other classes in the system and has the same methods called on it
@end table
@c -------------------------------------------
@node Multi-everything
@section Multi-everything
I wanted to do more than create a 3D engine that ran on one 3D API, on one platform, with one type of scene (indoor levels are most popular). I wanted OGRE to be able to extend to any kind of scene (yet still implement scene-specific optimisations under the surface), any platform and any 3D API.@*@*

Therefore all the 'visible' parts of OGRE are completely independent of platform, 3D API and scene type. There are no dependencies on Windows types, no assumptions about the type of scene you are creating, and the principles of the 3D aspects are based on core maths texts rather than one particular API implementation.@*@*

Now of course somewhere OGRE has to get down to the nitty-gritty of the specifics of the platform, API and scene; it does this in subclasses specially designed for the environment in question, which still expose the same interface as the abstract versions.@*@*

For example, there is a 'Win32Window' class which handles all the details about rendering windows on a Win32 platform - however the application designer only has to manipulate it via the superclass interface 'RenderWindow', which will be the same across all platforms.

Similarly the 'SceneManager' class looks after the arrangement of objects in the scene and their rendering sequence. Applications only have to use this interface, but there is a 'BspSceneManager' class which optimises the scene management for indoor levels, meaning you get both performance and an easy to learn interface. All applications have to do is hint about the kind of scene they will be creating and let OGRE choose the most appropriate implementation - this is covered in a later tutorial.@*@*

OGRE's object-oriented nature makes all this possible. Currently OGRE runs on both Windows and Linux, using plugins to drive the underlying rendering API (currently Direct3D or OpenGL). Applications use OGRE at the abstract level, thus ensuring that they automatically operate on all platforms and rendering subsystems that OGRE provides without any need for platform or API specific code.@*@*

@node The Core Objects
@chapter The Core Objects
@heading Introduction

This tutorial gives you a quick summary of the core objects that you will use in OGRE and what they are used for.

@heading A Word About Namespaces

OGRE uses a C++ feature called namespaces. This lets you put classes, enums, structures, anything really, within a 'namespace' scope, which is an easy way to prevent name clashes, i.e. situations where two things have the same name. Since OGRE is designed to be used inside other applications, I wanted to be sure that name clashes would not be a problem. Some people prefix their classes/types with a short code because some compilers don't support namespaces, but I chose to use them because they are the 'right' way to do it. Sorry if you have a non-compliant compiler, but hey, the C++ standard has been defined for years, so compiler writers really have no excuse anymore. If your compiler doesn't support namespaces then it's probably because it's sh*t - get a better one. ;)

This means every class, type etc should be prefixed with 'Ogre::', e.g. 'Ogre::Camera', 'Ogre::Vector3' etc, which means that if you have used a Vector3 type elsewhere in your application you won't get name clashes. To avoid lots of extra typing you can add a 'using namespace Ogre;' statement to your code, which means you don't have to type the 'Ogre::' prefix unless there is ambiguity (in the situation where you have another definition with the same name).
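
For illustration, here is a minimal sketch of the two styles; the class names are standard OGRE types, while the variable names are just examples:
@example
// Fully qualified style - safe even if another library defines a Vector3
Ogre::Vector3 position(0.0f, 10.0f, 0.0f);

// Or pull the whole namespace in (watch out for ambiguity with other libraries)
using namespace Ogre;
Vector3 direction = Vector3::UNIT_Z;
@end example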

@heading UML Diagram

Shown below is a UML diagram of the core objects and how they relate to each other. Even if you don't know UML I'm sure you can work out the gist...

@image{images/uml-overview}

More details on these objects can be found in the following sections.

@node The Root Object
@section The Root object
The 'Root' object is the entry point to the OGRE system. This object MUST be the first one to be created, and the last one to be destroyed. In the example applications I chose to make an instance of Root a member of my application object, which ensured that it was created as soon as my application object was, and deleted when the application object was deleted.@*@*

The root object lets you configure the system, for example through the showConfigDialog() method, an extremely handy method which detects all the available render system options and shows a dialog for the user to customise resolution, colour depth, full screen options etc. It also applies the options the user selects so that you can initialise the system directly afterwards.@*@*

The root object is also your method for obtaining pointers to other objects in the system, such as the SceneManager, RenderSystem and various other resource managers. See below for details.@*@*

Finally, if you run OGRE in continuous rendering mode, i.e. you want to always refresh all the rendering targets as fast as possible (the norm for games and demos, but not for windowed utilities), the root object has a method called startRendering which, when called, enters a continuous rendering loop that only ends when all rendering windows are closed, or when any FrameListener objects indicate that they want to stop the cycle (see below for details of FrameListener objects).@*@*
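
As a hedged illustration of this startup sequence (the window title is made up, and error handling is omitted), a minimal sketch might look like this:
@example
// Create the Root object first (and destroy it last)
Ogre::Root* root = new Ogre::Root();

// Let the user pick a render system, resolution etc., then apply it
if (root->showConfigDialog())
{
    // Initialise the system and auto-create a render window
    root->initialise(true, "My OGRE Application");

    // ... scene setup goes here ...

    // Enter the continuous rendering loop
    root->startRendering();
}

delete root;
@end example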

@node The RenderSystem object
@section The RenderSystem object

The RenderSystem object is actually an abstract class which defines the interface to the underlying 3D API. It is responsible for sending rendering operations to the API and setting all the various rendering options. This class is abstract because all the implementation is rendering API specific - there are API-specific subclasses for each rendering API (e.g. D3DRenderSystem for Direct3D). After the system has been initialised through Root::initialise, the RenderSystem object for the selected rendering API is available via the Root::getRenderSystem() method.@*@*

However, a typical application should not normally need to manipulate the RenderSystem object directly - everything you need for rendering objects and customising settings should be available on the SceneManager, Material and other scene-oriented classes. It's only if you want to create multiple rendering windows (completely separate windows in this case, not multiple viewports like a split-screen effect which is done via the RenderWindow class) or access other advanced features that you need access to the RenderSystem object.@*@*

For this reason I will not discuss the RenderSystem object further in these tutorials. You can assume the SceneManager handles the calls to the RenderSystem at the appropriate times.@*@*

@node The SceneManager object
@section The SceneManager object

Apart from the Root object, this is probably the most critical part of the system from the application's point of view. Certainly it will be the object which is most used by the application. The SceneManager is in charge of the contents of the scene which is to be rendered by the engine. It is responsible for organising the contents using whatever technique it deems best, for creating and managing all the cameras, movable objects (entities), lights and materials (surface properties of objects), and for managing the 'world geometry' which is the sprawling static geometry usually used to represent the immovable parts of a scene.@*@*

It is to the SceneManager that you go when you want to create a camera for the scene. It's also where you go to retrieve a material which is used by an object, or to remove a light from the scene. There is no need for your application to keep lists of objects; the SceneManager keeps a named set of all of the scene objects for you to access, should you need them. Look in the main documentation under the getCamera, getMaterial, getLight etc methods.@*@*

The SceneManager also sends the scene to the RenderSystem object when it is time to render the scene. You never have to call the SceneManager::_renderScene method directly though - it is called automatically whenever a rendering target is asked to update.@*@*

So most of your interaction with the SceneManager is during scene setup. You're likely to call a great number of methods (perhaps driven by some input file containing the scene data) in order to set up your scene. You can also modify the contents of the scene dynamically during the rendering cycle if you create your own FrameListener object (see later).@*@*
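
As a sketch of that dynamic approach (the class, node and rotation speed are invented for the example), a FrameListener might look like this:
@example
// A listener that spins a scene node a little every frame
class SpinningNodeListener : public Ogre::FrameListener
{
public:
    SpinningNodeListener(Ogre::SceneNode* node) : mNode(node) {}

    bool frameStarted(const Ogre::FrameEvent& evt)
    {
        // Rotate at 30 degrees per second, scaled by the frame time
        mNode->yaw(Ogre::Degree(30 * evt.timeSinceLastFrame));
        return true; // return false to stop the rendering loop
    }
private:
    Ogre::SceneNode* mNode;
};

// Register it with Root before calling startRendering():
// root->addFrameListener(new SpinningNodeListener(myNode));
@end example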

Because different scene types require very different algorithmic approaches to deciding which objects get sent to the RenderSystem in order to attain good rendering performance, the SceneManager class is designed to be subclassed for different scene types. The default SceneManager object will render a scene, but it does little or no scene organisation and you should not expect the results to be high performance in the case of large scenes. The intention is that specialisations will be created for each type of scene such that under the surface the subclass will optimise the scene organisation for best performance given assumptions which can be made for that scene type. An example is the BspSceneManager which optimises rendering for large indoor levels based on a Binary Space Partition (BSP) tree.@*@*

The application using OGRE does not have to know which subclasses are available. The application simply calls Root::getSceneManager(..) passing as a parameter one of a number of scene types (e.g. ST_GENERIC, ST_INTERIOR etc). OGRE will automatically use the best SceneManager subclass available for that scene type, or default to the basic SceneManager if a specialist one is not available. This allows the developers of OGRE to add new scene specialisations later and thus optimise previously unoptimised scene types without the user applications having to change any code.@*@*
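
Following the Root::getSceneManager call described above, a minimal sketch (assuming 'root' is the Root instance created earlier):
@example
// Ask OGRE for the best SceneManager for a general-purpose scene;
// a specialised implementation (e.g. the BSP one) would be picked
// automatically for other scene types such as ST_INTERIOR
Ogre::SceneManager* sceneMgr = root->getSceneManager(Ogre::ST_GENERIC);

// The SceneManager is then used to populate the scene
Ogre::Camera* cam = sceneMgr->createCamera("MainCamera");
Ogre::Light* light = sceneMgr->createLight("MainLight");
@end example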


@node The ResourceManager Objects
@section The ResourceManager Objects

The ResourceManager class is actually just a base class for a number of other classes which are used to manage resources. In this context, resources are sets of data which must be loaded from somewhere to provide OGRE with the data it needs. Examples are textures, meshes and maps. There is a subclass of ResourceManager to manage each of the types of resources, e.g. TextureManager for loading textures, MeshManager for loading mesh objects.@*@*

ResourceManagers ensure that resources are only loaded once and shared throughout the OGRE engine. They also manage the memory requirements of the resources they look after. They can also search in a number of locations for the resources they need, including multiple search paths and compressed archives (ZIP files).@*@*

Most of the time you won't interact with resource managers directly. Resource managers will be called by other parts of the OGRE system as required; for example, when you request a texture to be added to a Material, the TextureManager will be called for you. If you like, you can call the appropriate resource manager directly to preload resources (if, for example, you want to prevent disk access later on), but most of the time it's fine to let OGRE decide when to do it.@*@*

Probably the only time you will need to call a ResourceManager is when you want to tell it where to look for resources. You can do this by calling the addSearchPath and addArchive methods of the resource manager, which will cause it to also look in the folder/archive you specify next time it searches for files.@*@*

The above methods only affect the particular resource manager you call (e.g. it will only affect texture loading if you call it on TextureManager). Alternatively you can also call the static method ResourceManager::addCommonSearchPath or ResourceManager::addCommonArchive if you want ALL resource managers to look in the folder/archive you specify.@*@*

Because there is only ever one instance of each resource manager in the engine, if you do want to get a reference to a resource manager, use the following syntax:
@example
TextureManager::getSingleton().someMethod()
MeshManager::getSingleton().someMethod()
@end example
@*@*

@node The Mesh Object
@section The Mesh Object

A Mesh object represents a discrete model, a set of geometry which is self-contained and is typically fairly small on a world scale. Mesh objects are assumed to represent movable objects and are not used for the sprawling level geometry typically used to create backgrounds.@*@*

Mesh objects are a type of resource, and are managed by the MeshManager resource manager. They are typically loaded from OGRE's custom object format, the '.mesh' format. Mesh files are typically created by exporting from a modelling tool (@xref{Exporters}) and can be manipulated through various @ref{Mesh Tools}.@*@*

You can also create Mesh objects manually by calling the MeshManager::createManual method. This way you can define the geometry yourself, but this is outside the scope of this manual.@*@*

Mesh objects are the basis for the individual movable objects in the world, which are called @ref{Entities}.@*@*

Mesh objects can also be animated using skeletal animation (@xref{Skeletal Animation}).

@node Entities
@section Entities

An entity is an instance of a movable object in the scene. It could be a car, a person, a dog, a shuriken, whatever. The only assumption is that it does not necessarily have a fixed position in the world.@*@*

Entities are based on discrete meshes, i.e. collections of geometry which are self-contained and typically fairly small on a world scale, which are represented by the Mesh object. Multiple entities can be based on the same mesh, since often you want to create multiple copies of the same type of object in a scene.@*@*

You create an entity by calling the SceneManager::createEntity method, giving it a name and specifying the name of the mesh object which it will be based on (e.g. 'muscleboundhero.mesh'). The SceneManager will ensure that the mesh is loaded by calling the MeshManager resource manager for you. Only one copy of the Mesh will be loaded.@*@*

Entities are not deemed to be a part of the scene until you attach them to a SceneNode (see the section below). By attaching entities to SceneNodes, you can create complex hierarchical relationships between the positions and orientations of entities. You then modify the positions of the nodes to indirectly affect the entity positions.@*@*
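
Putting those steps together, a minimal sketch (the entity and node names are illustrative, and 'sceneMgr' is the SceneManager obtained earlier):
@example
// Create an entity based on a mesh; the MeshManager loads the mesh
// the first time it is needed
Ogre::Entity* hero = sceneMgr->createEntity("Hero", "muscleboundhero.mesh");

// The entity only becomes part of the scene once attached to a SceneNode
Ogre::SceneNode* heroNode =
    sceneMgr->getRootSceneNode()->createChildSceneNode("HeroNode");
heroNode->attachObject(hero);

// Moving the node indirectly moves the entity
heroNode->setPosition(100.0f, 0.0f, 0.0f);
@end example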

When a Mesh is loaded, it automatically comes with a number of materials defined. It is possible to have more than one material attached to a mesh - different parts of the mesh may use different materials. Any entity created from the mesh will automatically use the default materials. However, you can change this on a per-entity basis if you like, so you can create a number of entities based on the same mesh but with different textures etc.@*@*

To understand how this works, you have to know that all Mesh objects are actually composed of SubMesh objects, each of which represents a part of the mesh using one Material. If a Mesh uses only one Material, it will only have one SubMesh.@*@*

When an Entity is created based on this Mesh, it is composed of (possibly) multiple SubEntity objects, each matching one-for-one with the SubMesh objects from the original Mesh. You can access the SubEntity objects using the Entity::getSubEntity method. Once you have a reference to a SubEntity, you can change the material it uses by calling its setMaterialName method. In this way you can make an Entity deviate from the default materials and thus create an individual-looking version of it.@*@*
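
For example, continuing the earlier entity sketch (the material name is made up):
@example
// Give the first SubEntity of this particular entity its own material,
// leaving other entities based on the same mesh unchanged
Ogre::SubEntity* subEnt = hero->getSubEntity(0);
subEnt->setMaterialName("MyMaterials/HeroSkinRed");
@end example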

@node Materials
@section Materials

The Material object controls how objects in the scene are rendered. It specifies what basic surface properties objects have such as reflectance of colours, shininess etc, how many texture layers are present, what images are on them and how they are blended together, what special effects are applied such as environment mapping, what culling mode is used, how the textures are filtered etc.@*@*

Materials can either be set up programmatically, by calling SceneManager::createMaterial and tweaking the settings, or by specifying them in a 'script' which is loaded at runtime. @xref{Material Scripts} for more info.@*@*

Basically everything about the appearance of an object apart from its shape is controlled by the Material class.@*@*

The SceneManager class manages the master list of materials available to the scene. The list can be added to by the application by calling SceneManager::createMaterial, or by loading a Mesh (which will in turn load material properties). Whenever materials are added to the SceneManager, they start off with a default set of properties; these are defined by OGRE as the following:@*@*

@itemize @bullet
@item
ambient reflectance = ColourValue::White (full)
@item
diffuse reflectance = ColourValue::White (full)
@item
specular reflectance = ColourValue::Black (none)
@item
emissive = ColourValue::Black (none)
@item
shininess = 0 (not shiny)
@item
No texture layers (& hence no textures)
@item
SourceBlendFactor = SBF_ONE, DestBlendFactor = SBF_ZERO (opaque)
@item
Depth buffer checking on
@item
Depth buffer writing on
@item
Depth buffer comparison function = CMPF_LESS_EQUAL
@item
Culling mode = CULL_CLOCKWISE
@item
Ambient lighting in scene = ColourValue(0.5, 0.5, 0.5) (mid-grey)
@item
Dynamic lighting enabled
@item
Gouraud shading mode
@item
Solid polygon mode
@item
Bilinear texture filtering
@end itemize


You can alter these settings by calling SceneManager::getDefaultMaterialSettings() and making the required changes to the Material which is returned.

Entities automatically have Materials associated with them if they use a Mesh object, since the Mesh object typically sets up its required materials on loading. You can also customise the material used by an entity as described in @ref{Entities}: just create a new Material, set it up how you like (you can copy an existing material into it if you like, using a standard assignment statement) and point the SubEntity entries at it using SubEntity::setMaterialName().
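
As a sketch of that per-entity customisation, using Material::clone as one way to copy an existing material (the material names here are invented for the example):
@example
// Copy an existing material under a new name; the copy can then be tweaked
Ogre::MaterialPtr base =
    Ogre::MaterialManager::getSingleton().getByName("Examples/BaseWall");
Ogre::MaterialPtr custom = base->clone("Examples/BaseWallGreen");

// Point one SubEntity at the customised copy
hero->getSubEntity(0)->setMaterialName("Examples/BaseWallGreen");
@end example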


@node Overlays
@section Overlays

Overlays allow you to render 2D and 3D elements on top of the normal scene contents to create effects like heads-up displays (HUDs), menu systems, status panels etc. The frame rate statistics panel which comes as standard with OGRE is an example of an overlay. Overlays can contain 2D or 3D elements. 2D elements are used for HUDs, and 3D elements can be used to create cockpits or any other 3D object which you wish to be rendered on top of the rest of the scene.@*@*

You can create overlays either through the SceneManager::createOverlay method, or you can define them in an .overlay script. In reality the latter is likely to be the most practical because it is easier to tweak (without the need to recompile the code). Note that you can define as many overlays as you like: they all start off life hidden, and you display them by calling their 'show()' method. You can also show multiple overlays at once, and their Z order is determined by the Overlay::setZOrder() method.@*@*
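
For example, assuming an overlay called 'MyOverlays/StatsPanel' has been defined in a script, a minimal sketch of displaying it:
@example
// Look up an overlay defined in an .overlay script and display it
Ogre::Overlay* stats =
    Ogre::OverlayManager::getSingleton().getByName("MyOverlays/StatsPanel");
stats->setZOrder(500);   // drawn above overlays with lower Z order values
stats->show();
@end example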

@heading Creating 2D Elements

The OverlayElement class abstracts the details of 2D elements which are added to overlays. All items which can be added to overlays are derived from this class. It is possible (and encouraged) for users of OGRE to define their own custom subclasses of OverlayElement in order to provide their own user controls. The key common features of all OverlayElements are things like size, position, basic material name etc. Subclasses extend this behaviour to include more complex properties and behaviour.@*@*

An important built-in subclass of OverlayElement is OverlayContainer. OverlayContainer is the same as an OverlayElement, except that it can contain other OverlayElements, grouping them together (allowing them to be moved together for example) and providing them with a local coordinate origin for easier lineup.@*@*

The third important class is OverlayManager. Whenever an application wishes to create a 2D element to add to an overlay (or a container), it should call OverlayManager::createOverlayElement. The type of element you wish to create is identified by a string, the reason being that it allows plugins to register new types of OverlayElement for you to create without you having to link specifically to those libraries. For example, to create a panel (a plain rectangular area which can contain other OverlayElements) you would call OverlayManager::getSingleton().createOverlayElement("Panel", "myNewPanel");@*@*

@heading Adding 2D Elements to the Overlay

Only OverlayContainers can be added directly to an overlay. The reason is that each level of container establishes the Z order of the elements contained within it, so if you nest several containers, inner containers have a higher Z order than outer ones to ensure they are displayed correctly. To add a container (such as a Panel) to the overlay, simply call Overlay::add2D.@*@*

If you wish to add child elements to that container, call OverlayContainer::addChild. Child elements can be OverlayElements or OverlayContainer instances themselves. Remember that the position of a child element is relative to the top-left corner of its parent.@*@*
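
Putting the creation and attachment steps together, a hedged sketch (the element names are invented, 'stats' is the overlay from the previous sketch, and the cast is needed because createOverlayElement returns the base OverlayElement type):
@example
Ogre::OverlayManager& omgr = Ogre::OverlayManager::getSingleton();

// Create a container (a Panel) and a child element (a text area)
Ogre::OverlayContainer* panel = static_cast<Ogre::OverlayContainer*>(
    omgr.createOverlayElement("Panel", "MyGui/StatusPanel"));
Ogre::OverlayElement* text =
    omgr.createOverlayElement("TextArea", "MyGui/StatusText");

// Only containers go directly onto the overlay; other elements
// are added as children of a container
panel->addChild(text);
stats->add2D(panel);
@end example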

@heading A word about 2D coordinates

OGRE allows you to place and size elements based on two coordinate systems: @strong{relative} and @strong{pixel} based.
@table @asis
@item Pixel Mode
This mode is useful when you want to specify an exact size for your overlay items, and you don't mind if those items get smaller on the screen if you increase the screen resolution (in fact you might want this). In this mode the only way to put something in the middle or at the right or bottom of the screen reliably in any resolution is to use the aligning options, whilst in relative mode you can do it just by using the right relative coordinates. This mode is very simple: the top-left of the screen is (0,0) and the bottom-right of the screen depends on the resolution. As mentioned above, you can use the aligning options to make the horizontal and vertical coordinate origins the right, bottom or center of the screen if you want to place pixel items in these locations without knowing the resolution.
@item Relative Mode
This mode is useful when you want items in the overlay to be the same size on the screen no matter what the resolution. In relative mode, the top-left of the screen is (0,0) and the bottom-right is (1,1). So if you place an element at (0.5, 0.5), its top-left corner is placed exactly in the center of the screen, no matter what resolution the application is running in. The same principle applies to sizes; if you set the width of an element to 0.5, it covers half the width of the screen. Note that because the aspect ratio of the screen is typically 1.3333 : 1 (width : height), an element with dimensions (0.25, 0.25) will not be square, but it will take up exactly 1/16th of the screen in area terms. If you want square-looking areas you will have to compensate using the typical aspect ratio, e.g. use (0.1875, 0.25) instead.
@end table

@heading Transforming Overlays

Another nice feature of overlays is being able to rotate, scroll and scale them as a whole. You can use this for zooming in / out menu systems, dropping them in from off screen and other nice effects. See the Overlay::scroll, Overlay::rotate and Overlay::scale methods for more information.

@heading Scripting overlays
Overlays can also be defined in scripts. @xref{Overlay Scripts} for details.

@heading GUI systems
Overlays are only really designed for non-interactive screen elements, although you can use them as a crude GUI. For a far more complete GUI solution, we recommend CEGui (@url{http://www.cegui.org.uk}), as demonstrated in the sample Demo_Gui.

@node Scripts
@chapter Scripts
OGRE drives many of its features through scripts in order to make it easier to set up. The scripts are simply plain text files which can be edited in any standard text editor, and modifying them takes effect immediately in your OGRE-based applications, without any need to recompile. This makes prototyping a lot faster. Here are the items that OGRE lets you script:
@itemize @bullet
@item
@ref{Material Scripts}
@item
@ref{Compositor Scripts}
@item
@ref{Particle Scripts}
@item
@ref{Overlay Scripts}
@item
@ref{Font Definition Scripts}
@end itemize
@node Material Scripts
@section Material Scripts

Material scripts offer you the ability to define complex materials in a script which can be reused easily. Whilst you could set up all materials for a scene in code using the methods of the Material and TextureLayer classes, in practice it's a bit unwieldy. Instead you can store material definitions in text files which can then be loaded whenever required.@*@*

@heading Loading scripts

Material scripts are loaded when resource groups are initialised: OGRE looks in all resource locations associated with the group (see Root::addResourceLocation) for files with the '.material' extension and parses them. If you want to parse files manually, use MaterialSerializer::parseScript.@*@*

It's important to realise that materials are not loaded completely by this parsing process: only the definition is loaded, no textures or other resources are loaded. This is because it is common to have a large library of materials, but only use a relatively small subset of them in any one scene. To load every material completely in every script would therefore cause unnecessary memory overhead. You can access a 'deferred load' Material in the normal way (MaterialManager::getSingleton().getByName()), but you must call the 'load' method before trying to use it. Ogre does this for you when using the normal material assignment methods of entities etc.@*@*
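
As a brief sketch of that manual, deferred-load path (the material name is taken from the example script below):
@example
// The definition has been parsed, but textures etc. are not yet loaded
Ogre::MaterialPtr mat =
    Ogre::MaterialManager::getSingleton().getByName("walls/funkywall1");

// Load it explicitly before using it outside the normal entity path
if (!mat.isNull())
    mat->load();
@end example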

Another important factor is that material names must be unique throughout ALL scripts loaded by the system, since materials are always identified by name.@*@*

@heading Format

Several materials may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ('{', '}'), and comments indicated by starting a line with '//' (note that nested-form comments are not allowed). The general format is shown in the example below (note that to start with, we only consider fixed-function materials which don't use vertex or fragment programs; these are covered later):@*@*
@example
// This is a comment
material walls/funkywall1
{
    // first, preferred technique
    technique
    {
        // first pass
        pass
        {
            ambient 0.5 0.5 0.5
            diffuse 1.0 1.0 1.0

            // Texture unit 0
            texture_unit
            {
                texture wibbly.jpg
                scroll_anim 0.1 0.0
                wave_xform scale sine 0.0 0.7 0.0 1.0
            }
            // Texture unit 1 (this is a multitexture pass)
            texture_unit
            {
                texture wobbly.png
                rotate_anim 0.25
                colour_op add
            }
        }
    }

    // Second technique, can be used as a fallback or LOD level
    technique
    {
        // .. and so on
    }

}
@end example

Every material in the script must be given a name, which is the line 'material <blah>' before the first opening '{'. This name must be globally unique. It can include path characters (as in the example) to logically divide up your materials, and also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a string.@*@*

@strong{ NOTE: ':' is the delimiter for specifying material copying in the script, so it can't be used as part of the material name.}
@*@*

A material can copy from a previously defined material by using a @emph{colon} @strong{:} after the material name followed by the name of the reference material to copy. If the reference material cannot be found, it is ignored. (@xref{Copying Materials})@*@*

A material can be made up of many techniques (@xref{Techniques}) - a technique is one way of achieving the effect you are looking for. You can supply more than one technique in order to provide fallback approaches where a card does not have the ability to render the preferred technique, or where you wish to define lower level-of-detail versions of the material in order to conserve rendering power when objects are more distant. @*@*

Each technique can be made up of many passes (@xref{Passes}), that is, a complete render of the object can be performed multiple times with different settings in order to produce composite effects. Ogre may also split the passes you have defined into many passes at runtime, if you define a pass which uses too many texture units for the card you are currently running on (note that it can only do this if you are not using a fragment program). Each pass has a number of top-level attributes such as 'ambient' to set the amount & colour of the ambient light reflected by the material. Some of these options do not apply if you are using vertex programs, @xref{Passes} for more details. @*@*

Within each pass, there can be zero or many texture units in use (@xref{Texture Units}). These define the texture to be used, and optionally some blending operations (which use multitexturing) and texture effects.@*@*

You can also reference vertex and fragment programs (or vertex and pixel shaders, if you want to use that terminology) in a pass with a given set of parameters. Programs themselves are declared in separate .program scripts (@xref{Declaring Vertex and Fragment Programs}) and are used as described in @ref{Using Vertex and Fragment Programs in a Pass}.

@subheading Top-level material attributes
The outermost section of a material definition does not have a lot of attributes of its own (most of the configurable parameters are within the child sections). However, it does have some, and here they are:@*@*
@anchor{lod_distances}
@subheading lod_distances
This attribute controls the distances at which different Techniques can come into effect. @xref{Techniques} for a full discussion of this option.
@*@*
@anchor{receive_shadows}
@subheading receive_shadows
This attribute controls whether objects using this material can have shadows cast upon them.@*@*

Format: receive_shadows <on|off>@*
Default: on@*@*

Whether or not an object receives a shadow is the combination of a number of factors, @xref{Shadows} for full details; however this allows you to make a material opt out of receiving shadows if required. Note that transparent materials never receive shadows so this option only has an effect on solid materials.

@anchor{transparency_casts_shadows}
@subheading transparency_casts_shadows
This attribute controls whether transparent materials can cast certain kinds of shadow.@*@*

Format: transparency_casts_shadows <on|off>@*
Default: off@*@*
Whether or not an object casts a shadow is the combination of a number of factors, @xref{Shadows} for full details; however this allows you to make a transparent material cast shadows, when it would otherwise not. For example, when using texture shadows, transparent materials are normally not rendered into the shadow texture because they should not block light. This flag overrides that.

@anchor{set_texture_alias}
@subheading set_texture_alias
This attribute associates a texture alias with a texture name.@*@*

Format: set_texture_alias <alias name> <texture name>@*@*

This attribute is used to set the textures used in texture unit states that were copied from another material. (@xref{Copying Materials})@*@*

@node Techniques
@subsection Techniques

A "technique" section in your material script encapsulates a single method of rendering an object. The simplest of material definitions only contains a single technique; however, since PC hardware varies quite greatly in its capabilities, you can only do this if you are sure that every card you intend to target with your application will support the capabilities which your technique requires. In addition, it can be useful to define simpler ways to render a material if you wish to use material LOD, such that more distant objects use a simpler, less performance-hungry technique.@*@*

When a material is used for the first time, it is 'compiled'. That involves scanning the techniques which have been defined, and marking which of them are supportable using the current rendering API and graphics card. If no techniques are supportable, your material will render as blank white. The compilation examines a number of things, such as:
@itemize @bullet
@item The number of texture_unit entries in each pass@*
Note that if the number of texture_unit entries exceeds the number of texture units in the current graphics card, the technique may still be supportable so long as a fragment program is not being used. In this case, Ogre will split the pass which has too many entries into multiple passes for the less capable card, and the multitexture blend will be turned into a multipass blend (@xref{colour_op_multipass_fallback}).
@item Whether vertex or fragment programs are used, and if so which syntax they use (e.g. vs_1_1, ps_2_x, arbfp1 etc)
@item Other effects like cube mapping and dot3 blending
@end itemize
@*
In a material script, techniques must be listed in order of preference, i.e. the earlier techniques are preferred over the later techniques. This normally means you will list your most advanced, most demanding techniques first in the script, and list fallbacks afterwards.@*@*

To help clearly identify what each technique is used for, the technique can be named, but this is optional. Techniques not named within the script will take on a name that is the technique index number; for example, the first technique in a material is index 0, so its name would be "0" if it was not given a name in the script. The technique name must be unique within the material, or else the final technique is the resulting merge of all techniques with the same name in the material; a warning message is posted in the Ogre.log if this occurs. Named techniques can help when copying a material and modifying an existing technique (@xref{Copying Materials})@*@*

Format: technique name@*@*

Techniques have only a small number of attributes of their own: the 'scheme' (@xref{scheme}) they belong to, and the LOD index within that scheme (@xref{lod_index}). We also mention an extra Material attribute called @ref{lod_distances} which isn't a Technique attribute but is directly related to the lod_index attribute, so it's listed here for convenience.@*@*

@anchor{scheme}
@subheading scheme

Sets the 'scheme' this Technique belongs to. Material schemes are used to control top-level switching from one set of techniques to another. For example, you might use this to define 'high', 'medium' and 'low' complexity levels on materials to allow a user to pick a performance / quality ratio. Another possibility is that you have a fully HDR-enabled pipeline for top machines, rendering all objects using unclamped shaders, and a simpler pipeline for others; this can be implemented using schemes. The active scheme is typically controlled at a viewport level, and the active one defaults to 'Default'.@*@*

Format: scheme <name>@*
Example: scheme hdr@*
Default: scheme Default@*@*


@anchor{lod_index}
@subheading lod_index

Sets the level-of-detail (LOD) index this Technique belongs to. @*@*

Format: lod_index <number>@*
NB valid values range from 0 (highest level of detail) to 65535, although you are unlikely to need anywhere near that many. You should not leave gaps in the LOD indexes between Techniques.@*@*

Example: lod_index 1@*@*

All techniques must belong to a LOD index; by default they all belong to index 0, i.e. the highest LOD. Increasing indexes denote lower levels of detail. You can (and often will) assign more than one technique to the same LOD index; what this means is that OGRE will pick the best technique of the ones listed at the same LOD index. For readability, it is advised that you list your techniques in order of LOD, then in order of preference, although the latter is the only prerequisite (OGRE determines which one is 'best' by which one is listed first). You must always have at least one Technique at lod_index 0.@*@*
The distance at which a LOD level is applied is determined by the lod_distances attribute of the containing material, @xref{lod_distances} for details.@*@*

Default: lod_index 0@*@*

@anchor{lod_distances}
@subheading lod_distances
@strong{Note: this attribute must be specified in the outer material section (i.e. the parent of all the techniques), but it's specified here since it is most relevant to this section.}@*@*

By setting this attribute, you indicate that you want this material to alter the Technique that it uses based on distance from the camera. You must give it a list of distances, in ascending order, each one indicating the distance at which the material will switch to the next LOD. Implicitly, all materials activate LOD index 0 for distances less than the smallest of these. You must ensure that there is at least one Technique with a @ref{lod_index} value for each distance in the list (so if you specify 3 distances, you must have techniques for indexes 1, 2 and 3). Note you must always have at least one Technique at lod_index 0.@*@*

Format: lod_distances <distance_1> [<distance_2> ... <distance_n>]@*

Example: lod_distances 300.0 600.5 1200@*@*

The above example would cause the material to use the best Technique at lod_index 0 up to a distance of 300 world units, the best from lod_index 1 from 300 up to 600, lod_index 2 from 600 to 1200, and lod_index 3 from 1200 upwards.@*@*

Techniques also contain one or more passes (and there must be at least one), @xref{Passes}.



@node Passes
@subsection Passes
A pass is a single render of the geometry in question; a single call to the rendering API with a certain set of rendering properties. A technique can have between one and 16 passes, although clearly the more passes you use, the more expensive the technique will be to render.@*@*

To help clearly identify what each pass is used for, the pass can be named, but this is optional. Passes not named within the script will take on a name that is the pass index number; for example, the first pass in a technique is index 0, so its name would be "0" if it was not given a name in the script. The pass name must be unique within the technique, or else the final pass is the resulting merge of all passes with the same name in the technique; a warning message is posted in the Ogre.log if this occurs. Named passes can help when copying a material and modifying an existing pass (@xref{Copying Materials})@*@*

Passes have a set of global attributes (described below), zero or more nested texture_unit entries (@xref{Texture Units}), and optionally a reference to a vertex and / or a fragment program (@xref{Using Vertex and Fragment Programs in a Pass}).

@*@*
Here are the attributes you can use in a 'pass' section of a .material script:

@itemize @bullet
@item
@ref{ambient}
@item
@ref{diffuse}
@item
@ref{specular}
@item
@ref{emissive}
@item
@ref{scene_blend}
@item
@ref{depth_check}
@item
@ref{depth_write}
@item
@ref{depth_func}
@item
@ref{depth_bias}
@item
@ref{alpha_rejection}
@item
@ref{cull_hardware}
@item
@ref{cull_software}
@item
@ref{lighting}
@item
@ref{shading}
@item
@ref{polygon_mode}
@item
@ref{fog_override}
@item
@ref{colour_write}
@item
@ref{max_lights}
@item
@ref{iteration}
@item
@ref{point_size}
@item
@ref{point_sprites}
@item
@ref{point_size_attenuation}
@item
@ref{point_size_min}
@item
@ref{point_size_max}
@end itemize

@heading Attribute Descriptions
@anchor{ambient}
@subheading ambient

Sets the ambient colour reflectance properties of this pass. @strong{This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.} @*@*

Format: ambient (<red> <green> <blue> [<alpha>]| vertexcolour)@*
NB valid colour values are between 0.0 and 1.0.@*@*

Example: ambient 0.0 0.8 0.0@*@*

The base colour of a pass is determined by how much red, green and blue light it reflects at each vertex. This property determines how much ambient light (directionless global light) is reflected.
It is also possible to make the ambient reflectance track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values.
The default is full white, meaning objects are completely globally illuminated. Reduce this if you want to see diffuse or specular light effects, or change the blend of colours to make the object have a base colour other than white. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.@*@*

Default: ambient 1.0 1.0 1.0 1.0@*@*

@anchor{diffuse}
@subheading diffuse

Sets the diffuse colour reflectance properties of this pass. @strong{This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.}@*@*

Format: diffuse (<red> <green> <blue> [<alpha>]| vertexcolour)@*
NB valid colour values are between 0.0 and 1.0.@*@*

Example: diffuse 1.0 0.5 0.5@*@*

The base colour of a pass is determined by how much red, green and blue light it reflects at each vertex. This property determines how much diffuse light (light from instances of the Light class in the scene) is reflected.
It is also possible to make the diffuse reflectance track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values.
The default is full white, meaning objects reflect the maximum white light they can from Light objects. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.@*@*

Default: diffuse 1.0 1.0 1.0 1.0@*@*

@anchor{specular}
@subheading specular

Sets the specular colour reflectance properties of this pass. @strong{This attribute has no effect if an asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.}@*@*

Format: specular (<red> <green> <blue> [<alpha>]| vertexcolour) <shininess>@*
NB valid colour values are between 0.0 and 1.0. Shininess can be any value greater than 0.@*@*

Example: specular 1.0 1.0 1.0 12.5@*@*

The base colour of a pass is determined by how much red, green and blue light it reflects at each vertex. This property determines how much specular light (highlights from instances of the Light class in the scene) is reflected.
It is also possible to make the specular reflectance track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values.
The default is to reflect no specular light. The colour of the specular highlights is determined by the colour parameters, and the size of the highlights by the separate shininess parameter. The higher the value of the shininess parameter, the sharper the highlight, i.e. the radius is smaller. Beware of using shininess values in the range of 0 to 1 since this causes the specular colour to be applied to the whole surface that has the material applied to it. When the viewing angle to the surface changes, ugly flickering will also occur when shininess is in the range of 0 to 1. Shininess values between 1 and 128 work best in both DirectX and OpenGL renderers. This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.@*@*

Default: specular 0.0 0.0 0.0 0.0 0.0@*@*

| 572 | @anchor{emissive} |
---|
| 573 | @subheading emissive |
---|
| 574 | |
---|
| 575 | Sets the amount of self-illumination an object has. @strong{This attribute has no effect if a asm, CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.}@*@* |
---|
| 576 | |
---|
| 577 | Format: emissive (<red> <green> <blue> [<alpha>]| vertexcolour)@* |
---|
| 578 | NB valid colour values are between 0.0 and 1.0.@*@* |
---|
| 579 | |
---|
| 580 | Example: emissive 1.0 0.0 0.0@*@* |
---|
| 581 | |
---|
| 582 | If an object is self-illuminating, it does not need external sources to light it, ambient or otherwise. It's like the object has its own personal ambient light. Despite what the name suggests, the object doesn't act as a light source for other objects in the scene (if you want it to, you have to create a light which is centered on the object). |
---|
| 583 | It is also possible to make the emissive colour track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour values. |
---|
| 584 | This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.@*@* |
---|
| 585 | |
---|
| 586 | Default: emissive 0.0 0.0 0.0 0.0@*@* |
---|
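| | Taken together, the colour reflectance attributes above might be combined in a simple fixed-function pass as in the following sketch (the material name and values are hypothetical): |
---|
| | @example |
---|
| | material Examples/GreenPlastic |
---|
| | { |
---|
| |   technique |
---|
| |   { |
---|
| |     pass |
---|
| |     { |
---|
| |       // dark green base lit by scene lights |
---|
| |       ambient 0.1 0.2 0.1 |
---|
| |       diffuse 0.2 0.6 0.2 |
---|
| |       // tight white highlight |
---|
| |       specular 1.0 1.0 1.0 32 |
---|
| |       // no self-illumination |
---|
| |       emissive 0.0 0.0 0.0 |
---|
| |     } |
---|
| |   } |
---|
| | } |
---|
| | @end example |
---|
| | |
---|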
| 587 | |
---|
| 588 | @anchor{scene_blend} |
---|
| 589 | @subheading scene_blend |
---|
| 590 | |
---|
| 591 | Sets the kind of blending this pass has with the existing contents of the scene. Whereas the texture blending operations seen in the texture_unit entries are concerned with blending between texture layers, this blending is about combining the output of this pass as a whole with the existing contents of the rendering target. This blending therefore allows object transparency and other special effects. There are 2 formats, one using predefined blend types, the other allowing a roll-your-own approach using source and destination factors.@*@* |
---|
| 592 | |
---|
| 593 | Format1: scene_blend <add|modulate|alpha_blend|colour_blend>@*@* |
---|
| 594 | |
---|
| 595 | Example: scene_blend add@*@* |
---|
| 596 | |
---|
| 597 | This is the simpler form, where the most commonly used blending modes are enumerated using a single parameter. Valid <blend_type> parameters are: |
---|
| 598 | @table @asis |
---|
| 599 | @item add |
---|
| 600 | The colour of the rendering output is added to the scene. Good for explosions, flares, lights, ghosts etc. Equivalent to 'scene_blend one one'. |
---|
| 601 | @item modulate |
---|
| 602 | The colour of the rendering output is multiplied with the scene contents. Generally colours and darkens the scene, good for smoked glass, semi-transparent objects etc. Equivalent to 'scene_blend dest_colour zero'. |
---|
| 603 | @item colour_blend |
---|
| 604 | Colour the scene based on the brightness of the input colours, but don't darken. Equivalent to 'scene_blend src_colour one_minus_src_colour' |
---|
| 605 | @item alpha_blend |
---|
| 606 | The alpha value of the rendering output is used as a mask. Equivalent to 'scene_blend src_alpha one_minus_src_alpha' |
---|
| 607 | @end table |
---|
| 608 | @* |
---|
| 609 | Format2: scene_blend <src_factor> <dest_factor>@*@* |
---|
| 610 | |
---|
| 611 | Example: scene_blend one one_minus_dest_alpha@*@* |
---|
| 612 | |
---|
| 613 | This version of the method allows complete control over the blending operation, by specifying the source and destination blending factors. The resulting colour which is written to the rendering target is (texture * sourceFactor) + (scene_pixel * destFactor). Valid values for both parameters are: |
---|
| 614 | @table @asis |
---|
| 615 | @item one |
---|
| 616 | Constant value of 1.0 |
---|
| 617 | @item zero |
---|
| 618 | Constant value of 0.0 |
---|
| 619 | @item dest_colour |
---|
| 620 | The existing pixel colour |
---|
| 621 | @item src_colour |
---|
| 622 | The texture pixel (texel) colour |
---|
| 623 | @item one_minus_dest_colour |
---|
| 624 | 1 - (dest_colour) |
---|
| 625 | @item one_minus_src_colour |
---|
| 626 | 1 - (src_colour) |
---|
| 627 | @item dest_alpha |
---|
| 628 | The existing pixel alpha value |
---|
| 629 | @item src_alpha |
---|
| 630 | The texel alpha value |
---|
| 631 | @item one_minus_dest_alpha |
---|
| 632 | 1 - (dest_alpha) |
---|
| 633 | @item one_minus_src_alpha |
---|
| 634 | 1 - (src_alpha) |
---|
| 635 | @end table |
---|
| 636 | @* |
---|
| 637 | Default: scene_blend one zero (opaque) |
---|
| 638 | @* |
---|
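| | For instance, a typical alpha-blended transparency setup might look like the following sketch (the texture name is hypothetical); transparent passes usually also disable depth writing, as described under depth_write below: |
---|
| | @example |
---|
| | pass |
---|
| | { |
---|
| |   scene_blend alpha_blend |
---|
| |   // don't occlude objects behind the transparent surface |
---|
| |   depth_write off |
---|
| | |
---|
| |   texture_unit |
---|
| |   { |
---|
| |     texture glass.png |
---|
| |   } |
---|
| | } |
---|
| | @end example |
---|
| | |
---|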
| 639 | @anchor{depth_check} |
---|
| 640 | @subheading depth_check |
---|
| 641 | |
---|
| 642 | Sets whether this pass renders with depth-buffer checking on or not.@*@* |
---|
| 643 | |
---|
| 644 | Format: depth_check <on|off>@*@* |
---|
| 645 | |
---|
| 646 | If depth-buffer checking is on, whenever a pixel is about to be written to the frame buffer the depth buffer is checked to see if the pixel is in front of all other pixels written at that point. If not, the pixel is not written. If depth checking is off, pixels are written no matter what has been rendered before. Also see depth_func for more advanced depth check configuration.@*@* |
---|
| 647 | |
---|
| 648 | Default: depth_check on@*@* |
---|
| 649 | |
---|
| 650 | @anchor{depth_write} |
---|
| 651 | @subheading depth_write |
---|
| 652 | |
---|
| 653 | Sets whether this pass renders with depth-buffer writing on or not.@* |
---|
| 654 | |
---|
| 655 | Format: depth_write <on|off>@*@* |
---|
| 656 | |
---|
| 657 | If depth-buffer writing is on, whenever a pixel is written to the frame buffer the depth buffer is updated with the depth value of that new pixel, thus affecting future rendering operations if future pixels are behind this one. If depth writing is off, pixels are written without updating the depth buffer. Depth writing should normally be on but can be turned off when rendering static backgrounds or when rendering a collection of transparent objects at the end of a scene so that they overlap each other correctly.@*@* |
---|
| 658 | |
---|
| 659 | Default: depth_write on@* |
---|
| 660 | |
---|
| 661 | @anchor{depth_func} |
---|
| 662 | @subheading depth_func |
---|
| 663 | |
---|
| 664 | Sets the function used to compare depth values when depth checking is on.@*@* |
---|
| 665 | |
---|
| 666 | Format: depth_func <func>@*@* |
---|
| 667 | |
---|
| 668 | If depth checking is enabled (see depth_check) a comparison occurs between the depth value of the pixel to be written and the current contents of the buffer. This comparison is normally less_equal, i.e. the pixel is written if it is closer (or at the same distance) than the current contents. The possible functions are: |
---|
| 669 | @table @asis |
---|
| 670 | @item always_fail |
---|
| 671 | Never writes a pixel to the render target |
---|
| 672 | @item always_pass |
---|
| 673 | Always writes a pixel to the render target |
---|
| 674 | @item less |
---|
| 675 | Write if (new_Z < existing_Z) |
---|
| 676 | @item less_equal |
---|
| 677 | Write if (new_Z <= existing_Z) |
---|
| 678 | @item equal |
---|
| 679 | Write if (new_Z == existing_Z) |
---|
| 680 | @item not_equal |
---|
| 681 | Write if (new_Z != existing_Z) |
---|
| 682 | @item greater_equal |
---|
| 683 | Write if (new_Z >= existing_Z) |
---|
| 684 | @item greater |
---|
| 685 | Write if (new_Z >existing_Z) |
---|
| 686 | @end table |
---|
| 687 | @* |
---|
| 688 | Default: depth_func less_equal |
---|
| 689 | |
---|
| 690 | @anchor{depth_bias} |
---|
| 691 | @subheading depth_bias |
---|
| 692 | |
---|
| 693 | Sets the bias applied to the depth value of this pass. Can be used to make coplanar polygons appear on top of others e.g. for decals. @*@* |
---|
| 694 | |
---|
| 695 | Format: depth_bias <value>@*@* |
---|
| 696 | |
---|
| 697 | Where <value> is between 0 and 16, the default being 0. The higher the value, the greater the offset (useful if you want to layer multiple overlapping decals).@*@* |
---|
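| | As a sketch, a decal rendered over coplanar wall geometry might use something like this (the texture name is hypothetical): |
---|
| | @example |
---|
| | pass |
---|
| | { |
---|
| |   // bias the decal so it wins the depth test against the coplanar wall |
---|
| |   depth_bias 1 |
---|
| |   scene_blend alpha_blend |
---|
| |   depth_write off |
---|
| | |
---|
| |   texture_unit |
---|
| |   { |
---|
| |     texture scorch_mark.png |
---|
| |   } |
---|
| | } |
---|
| | @end example |
---|
| | |
---|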
| 698 | |
---|
| 699 | @anchor{alpha_rejection} |
---|
| 700 | @subheading alpha_rejection |
---|
| 701 | |
---|
| 702 | Sets the way the pass will use alpha to totally reject pixels from the pipeline.@*@* |
---|
| 703 | |
---|
| 704 | Format: alpha_rejection <function> <value>@*@* |
---|
| 705 | |
---|
| 706 | Example: alpha_rejection greater_equal 128@*@* |
---|
| 707 | |
---|
| 708 | The function parameter can be any of the options listed in the depth_func attribute above. The value parameter can theoretically be any value between 0 and 255, but is best limited to 0 or 128 for hardware compatibility.@*@* |
---|
| 709 | |
---|
| 710 | Default: alpha_rejection always_pass@*@* |
---|
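| | A common use is 'cut-out' textures such as foliage or fences, where each pixel is either fully opaque or fully transparent; a minimal sketch (the texture name is hypothetical): |
---|
| | @example |
---|
| | pass |
---|
| | { |
---|
| |   // discard texels whose alpha is below 128; no blending or sorting needed |
---|
| |   alpha_rejection greater_equal 128 |
---|
| |   // render both sides of the leaves (see cull_hardware below) |
---|
| |   cull_hardware none |
---|
| | |
---|
| |   texture_unit |
---|
| |   { |
---|
| |     texture leaves.png |
---|
| |   } |
---|
| | } |
---|
| | @end example |
---|
| | |
---|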
| 711 | |
---|
| 712 | @anchor{cull_hardware} |
---|
| 713 | @subheading cull_hardware |
---|
| 714 | |
---|
| 715 | Sets the hardware culling mode for this pass.@*@* |
---|
| 716 | |
---|
| 717 | Format: cull_hardware <clockwise|anticlockwise|none>@*@* |
---|
| 718 | |
---|
| 719 | A typical way for the hardware rendering engine to cull triangles is based on the 'vertex winding' of triangles. Vertex winding refers to the direction in which the vertices are passed or indexed to in the rendering operation as viewed from the camera, and will either be clockwise or anticlockwise (that's 'counterclockwise' for you Americans out there ;). If the option 'cull_hardware clockwise' is set, all triangles whose vertices are viewed in clockwise order from the camera will be culled by the hardware. 'anticlockwise' is the reverse (obviously), and 'none' turns off hardware culling so all triangles are rendered (useful for creating 2-sided passes).@*@* |
---|
| 720 | |
---|
| 721 | Default: cull_hardware clockwise@* |
---|
| 722 | NB this is the same as OpenGL's default but the opposite of Direct3D's default (because Ogre uses a right-handed coordinate system like OpenGL). |
---|
| 723 | |
---|
| 724 | @anchor{cull_software} |
---|
| 725 | @subheading cull_software |
---|
| 726 | |
---|
| 727 | Sets the software culling mode for this pass.@*@* |
---|
| 728 | |
---|
| 729 | Format: cull_software <back|front|none>@*@* |
---|
| 730 | |
---|
| 731 | In some situations the engine will also cull geometry in software before sending it to the hardware renderer. This setting only takes effect on SceneManagers that use it (it is best applied to large groups of planar world geometry rather than to movable geometry, where it would be expensive), but if used it can cull geometry before it is sent to the hardware. In this case the culling is based on whether the 'back' or 'front' of the triangle is facing the camera - this definition is based on the face normal (a vector which sticks out of the front side of the polygon perpendicular to the face). Since Ogre expects face normals to be on the anticlockwise side of the face, 'cull_software back' is the software equivalent of the 'cull_hardware clockwise' setting, which is why they are both the default. The naming is different to reflect the way the culling is done though, since most of the time face normals are precalculated and they don't have to be the way Ogre expects - you could set 'cull_hardware none' and completely cull in software based on your own face normals, if you have the right SceneManager which uses them.@*@* |
---|
| 732 | |
---|
| 733 | Default: cull_software back@*@* |
---|
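| | For fully two-sided geometry you would therefore normally disable both modes, since software culling could otherwise still discard back faces on SceneManagers that use it; a minimal sketch: |
---|
| | @example |
---|
| | pass |
---|
| | { |
---|
| |   // render both sides of each triangle |
---|
| |   cull_hardware none |
---|
| |   cull_software none |
---|
| | } |
---|
| | @end example |
---|
| | |
---|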
| 734 | |
---|
| 735 | @anchor{lighting} |
---|
| 736 | @subheading lighting |
---|
| 737 | |
---|
| 738 | Sets whether or not dynamic lighting is turned on for this pass. If lighting is turned off, all objects rendered using the pass will be fully lit. @strong{This attribute has no effect if a vertex program is used.}@*@* |
---|
| 739 | |
---|
| 740 | Format: lighting <on|off>@*@* |
---|
| 741 | |
---|
| 742 | Turning dynamic lighting off makes any ambient, diffuse, specular, emissive and shading properties for this pass redundant. When lighting is turned on, objects are lit according to their vertex normals for diffuse and specular light, and globally for ambient and emissive.@*@* |
---|
| 743 | |
---|
| 744 | Default: lighting on@*@* |
---|
| 745 | |
---|
| 746 | @anchor{shading} |
---|
| 747 | @subheading shading |
---|
| 748 | |
---|
| 749 | Sets the kind of shading which should be used for representing dynamic lighting for this pass.@*@* |
---|
| 750 | |
---|
| 751 | Format: shading <flat|gouraud|phong>@*@* |
---|
| 752 | |
---|
| 753 | When dynamic lighting is turned on, the effect is to generate colour values at each vertex. Whether these values are interpolated across the face (and how) depends on this setting.@*@* |
---|
| 754 | @table @asis |
---|
| 755 | @item flat |
---|
| 756 | No interpolation takes place. Each face is shaded with a single colour determined from the first vertex in the face. |
---|
| 757 | @item gouraud |
---|
| 758 | Colour at each vertex is linearly interpolated across the face. |
---|
| 759 | @item phong |
---|
| 760 | Vertex normals are interpolated across the face, and these are used to determine colour at each pixel. Gives a more natural lighting effect but is more expensive and works better at high levels of tessellation. Not supported on all hardware. |
---|
| 761 | @end table |
---|
| 762 | Default: shading gouraud@*@* |
---|
| 763 | |
---|
| 764 | @anchor{polygon_mode} |
---|
| 765 | @subheading polygon_mode |
---|
| 766 | |
---|
| 767 | Sets how polygons should be rasterised, i.e. whether they should be filled in, or just drawn as lines or points.@*@* |
---|
| 768 | |
---|
| 769 | Format: polygon_mode <solid|wireframe|points>@*@* |
---|
| 770 | |
---|
| 771 | @table @asis |
---|
| 772 | @item solid |
---|
| 773 | The normal situation - polygons are filled in. |
---|
| 774 | @item wireframe |
---|
| 775 | Polygons are drawn in outline only. |
---|
| 776 | @item points |
---|
| 777 | Only the points of each polygon are rendered. |
---|
| 778 | @end table |
---|
| 779 | Default: polygon_mode solid@*@* |
---|
| 780 | |
---|
| 781 | |
---|
| 782 | @anchor{fog_override} |
---|
| 783 | @subheading fog_override |
---|
| 784 | |
---|
| 785 | Tells the pass whether it should override the scene fog settings, and enforce its own. Very useful for things that you don't want to be affected by fog when the rest of the scene is fogged, or vice versa.@*@* |
---|
| 786 | |
---|
| 787 | Format: fog_override <override?> [<type> <colour> <density> <start> <end>]@*@* |
---|
| 788 | |
---|
| 789 | Default: fog_override false@*@* |
---|
| 790 | |
---|
| 791 | If you specify 'true' for the first parameter and you supply the rest of the parameters, you are telling the pass to use these fog settings in preference to the scene settings, whatever they might be. If you specify 'true' but provide no further parameters, you are telling this pass to never use fogging no matter what the scene says. Here is an explanation of the parameters:@* |
---|
| 792 | @table @asis |
---|
| 793 | @item type |
---|
| 794 | @strong{none} = No fog, equivalent of just using 'fog_override true'@* |
---|
| 795 | @strong{linear} = Linear fog from the <start> and <end> distances@* |
---|
| 796 | @strong{exp} = Fog increases exponentially from the camera (fog = 1/e^(distance * density)), use <density> param to control it@* |
---|
| 797 | @strong{exp2} = Fog increases at the square of the 'exp' mode, i.e. even quicker (fog = 1/e^(distance * density)^2), use <density> param to control it |
---|
| 798 | @item colour |
---|
| 799 | Sequence of 3 floating point values from 0 to 1 indicating the red, green and blue intensities |
---|
| 800 | @item density |
---|
| 801 | The density parameter used in the 'exp' or 'exp2' fog types. Not used in linear mode but param must still be there as a placeholder |
---|
| 802 | @item start |
---|
| 803 | The start distance from the camera of linear fog. Must still be present in other modes, even though it is not used. |
---|
| 804 | @item end |
---|
| 805 | The end distance from the camera of linear fog. Must still be present in other modes, even though it is not used. |
---|
| 806 | @end table |
---|
| 807 | @* |
---|
| 808 | Example: fog_override true exp 1 1 1 0.002 100 10000 |
---|
| 809 | |
---|
| 810 | @anchor{colour_write} |
---|
| 811 | @subheading colour_write |
---|
| 812 | |
---|
| 813 | Sets whether this pass renders with colour writing on or not.@* |
---|
| 814 | |
---|
| 815 | Format: colour_write <on|off>@*@* |
---|
| 816 | |
---|
| 817 | If colour writing is off no visible pixels are written to the screen during this pass. You might think this is useless, but if you render with colour writing off, and with very minimal other settings, you can use this pass to initialise the depth buffer before subsequently rendering other passes which fill in the colour data. This can give you significant performance boosts on some newer cards, especially when using complex fragment programs, because if the depth check fails then the fragment program is never run. @*@* |
---|
| 818 | |
---|
| 819 | Default: colour_write on@* |
---|
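| | A sketch of the depth-priming technique described above, using two passes; the second pass is where the expensive texture units and program references would go: |
---|
| | @example |
---|
| | technique |
---|
| | { |
---|
| |   // first pass: fill the depth buffer only, no colour output |
---|
| |   pass |
---|
| |   { |
---|
| |     colour_write off |
---|
| |   } |
---|
| | |
---|
| |   // second pass: full shading, only runs where the primed depth matches |
---|
| |   pass |
---|
| |   { |
---|
| |     depth_func equal |
---|
| |     // ... texture units / program references here |
---|
| |   } |
---|
| | } |
---|
| | @end example |
---|
| | |
---|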
| 820 | |
---|
| 821 | @anchor{max_lights} |
---|
| 822 | @subheading max_lights |
---|
| 823 | |
---|
| 824 | Sets the maximum number of lights which will be considered for use with this pass.@*@* |
---|
| 825 | Format: max_lights <number>@*@* |
---|
| 826 | |
---|
| 827 | The maximum number of lights which can be used when rendering fixed-function materials is set by the rendering system, and is typically set at 8. When you are using the programmable pipeline (@xref{Using Vertex and Fragment Programs in a Pass}) this limit is dependent on the program you are running, or, if you use 'iteration once_per_light' (@xref{iteration}), it is effectively only bounded by the number of passes you are willing to use. Whichever method you use, however, the max_lights limit applies.@*@* |
---|
| 828 | |
---|
| 829 | Default: max_lights 8@* |
---|
| 830 | |
---|
| 831 | @anchor{iteration} |
---|
| 832 | @subheading iteration |
---|
| 833 | |
---|
| 834 | Sets whether or not this pass is iterated, i.e. issued more than once.@*@* |
---|
| 835 | |
---|
| 836 | Basic Format: iteration <once | once_per_light> [lightType]@*@* |
---|
| 837 | Advanced Format: iteration <number> [<per_light> [lightType]]@*@* |
---|
| 838 | Examples: |
---|
| 839 | @table @asis |
---|
| 840 | @item iteration once |
---|
| 841 | The pass is only executed once which is the default behaviour. |
---|
| 842 | @item iteration once_per_light point |
---|
| 843 | The pass is executed once for each point light. |
---|
| 844 | @item iteration 5 |
---|
| 845 | The render state for the pass will be set up and then the draw call will execute 5 times. |
---|
| 846 | @item iteration 5 per_light point |
---|
| 847 | The render state for the pass will be set up and then the draw call will execute 5 times. This will be done for each point light. |
---|
| 848 | @end table |
---|
| 849 | @* |
---|
| 850 | |
---|
| 851 | By default, passes are only issued once. However, if you use the programmable pipeline, or you wish to exceed the normal limits on the number of lights which are supported, you might want to use the once_per_light option. In this case, only light index 0 is ever used, and the pass is issued multiple times, each time with a different light in light index 0. Clearly this will make the pass more expensive, but it may be the only way to achieve certain effects such as per-pixel lighting effects which take into account 1..n lights.@*@* |
---|
| 852 | |
---|
| 853 | Using a number instead of "once" instructs the pass to iterate more than once after the render state is set up. The render state is not changed after the initial setup so repeated draw calls are very fast and ideal for passes using programmable shaders that must iterate more than once with the same render state, e.g. shaders that do fur, motion blur, or special filtering.@*@* |
---|
| 854 | |
---|
| 855 | If you use once_per_light, you should also add an ambient pass to the technique before this pass, otherwise when no lights are in range of this object it will not get rendered at all; this is important even when you have no ambient light in the scene, because you would still want the object's silhouette to appear.@*@* |
---|
| 856 | |
---|
| 857 | The second parameter to the attribute only applies if you use once_per_light or per_light, and restricts the pass to being run for lights of a single type (either 'point', 'directional' or 'spot'). In the example, the pass will be run once per point light. This can be useful because when you're writing a vertex / fragment program it is a lot better if you can assume the kind of lights you'll be dealing with. |
---|
| 858 | @*@* |
---|
| 859 | Default: iteration once@*@* |
---|
| 860 | |
---|
| 861 | @anchor{fur_example} |
---|
| 862 | Example: Simple Fur shader material script that uses a second pass with 10 iterations to grow the fur: |
---|
| 863 | @example |
---|
| 864 | // GLSL simple Fur |
---|
| 865 | vertex_program GLSLDemo/FurVS glsl |
---|
| 866 | { |
---|
| 867 | source fur.vert |
---|
| 868 | default_params |
---|
| 869 | { |
---|
| 870 | param_named_auto lightPosition light_position_object_space 0 |
---|
| 871 | param_named_auto eyePosition camera_position_object_space |
---|
| 872 | param_named_auto passNumber pass_number |
---|
| 873 | param_named_auto multiPassNumber pass_iteration_number |
---|
| 874 | param_named furLength float 0.15 |
---|
| 875 | } |
---|
| 876 | } |
---|
| 877 | |
---|
| 878 | fragment_program GLSLDemo/FurFS glsl |
---|
| 879 | { |
---|
| 880 | source fur.frag |
---|
| 881 | default_params |
---|
| 882 | { |
---|
| 883 | param_named Ka float 0.2 |
---|
| 884 | param_named Kd float 0.5 |
---|
| 885 | param_named Ks float 0.0 |
---|
| 886 | param_named furTU int 0 |
---|
| 887 | } |
---|
| 888 | } |
---|
| 889 | |
---|
| 890 | material Fur |
---|
| 891 | { |
---|
| 892 | technique GLSL |
---|
| 893 | { |
---|
| 894 | pass base_coat |
---|
| 895 | { |
---|
| 896 | ambient 0.7 0.7 0.7 |
---|
| 897 | diffuse 0.5 0.8 0.5 |
---|
| 898 | specular 1.0 1.0 1.0 1.5 |
---|
| 899 | |
---|
| 900 | vertex_program_ref GLSLDemo/FurVS |
---|
| 901 | { |
---|
| 902 | } |
---|
| 903 | |
---|
| 904 | fragment_program_ref GLSLDemo/FurFS |
---|
| 905 | { |
---|
| 906 | } |
---|
| 907 | |
---|
| 908 | texture_unit |
---|
| 909 | { |
---|
| 910 | texture Fur.tga |
---|
| 911 | tex_coord_set 0 |
---|
| 912 | filtering trilinear |
---|
| 913 | } |
---|
| 914 | |
---|
| 915 | } |
---|
| 916 | |
---|
| 917 | pass grow_fur |
---|
| 918 | { |
---|
| 919 | ambient 0.7 0.7 0.7 |
---|
| 920 | diffuse 0.8 1.0 0.8 |
---|
| 921 | specular 1.0 1.0 1.0 64 |
---|
| 922 | depth_write off |
---|
| 923 | |
---|
| 924 | scene_blend src_alpha one |
---|
| 925 | iteration 10 |
---|
| 926 | |
---|
| 927 | vertex_program_ref GLSLDemo/FurVS |
---|
| 928 | { |
---|
| 929 | } |
---|
| 930 | |
---|
| 931 | fragment_program_ref GLSLDemo/FurFS |
---|
| 932 | { |
---|
| 933 | } |
---|
| 934 | |
---|
| 935 | texture_unit |
---|
| 936 | { |
---|
| 937 | texture Fur.tga |
---|
| 938 | tex_coord_set 0 |
---|
| 939 | filtering trilinear |
---|
| 940 | } |
---|
| 941 | } |
---|
| 942 | } |
---|
| 943 | } |
---|
| 944 | @end example |
---|
| 945 | Note: use gpu program auto parameters @ref{pass_number} and @ref{pass_iteration_number} to tell the vertex or fragment program the pass number and iteration number.@*@* |
---|
| 946 | |
---|
| 947 | @anchor{point_size} |
---|
| 948 | @subheading point_size |
---|
| 949 | |
---|
| 950 | This setting allows you to change the size of points when rendering a point list, or a list of point sprites. The interpretation of this command depends on the @ref{point_size_attenuation} option - if it is off (the default), the point size is in screen pixels; if it is on, it is expressed as normalised screen coordinates (1.0 is the height of the screen) when the point is at the origin. @*@* |
---|
| 951 | |
---|
| 952 | NOTE: Some drivers have an upper limit on the size of points they support - this can even vary between APIs on the same card! Don't rely on point sizes that cause the points to get very large on screen, since they may get clamped on some cards. Upper sizes can range from 64 to 256 pixels.@*@* |
---|
| 953 | |
---|
| 954 | Format: point_size <size>@*@* |
---|
| 955 | Default: point_size 1.0@*@* |
---|
| 956 | |
---|
| 957 | @anchor{point_sprites} |
---|
| 958 | @subheading point_sprites |
---|
| 959 | |
---|
| 960 | This setting specifies whether or not hardware point sprite rendering is enabled for this pass. Enabling it means that a point list is rendered as a list of quads rather than a list of dots. This option is very useful if you're using a billboardset and only need point-oriented billboards which are all of the same size. You can also use it for any other point list render. @*@* |
---|
| 961 | |
---|
| 962 | Format: point_sprites <on|off>@*@* |
---|
| 963 | Default: point_sprites off@*@* |
---|
| 964 | |
---|
| 965 | @anchor{point_size_attenuation} |
---|
| 966 | @subheading point_size_attenuation |
---|
| 967 | |
---|
| 968 | Defines whether point size is attenuated with view space distance, and in what fashion. This option is especially useful when you're using point sprites (@xref{point_sprites}) since it defines how they reduce in size as they get further away from the camera. You can also disable this option to make point sprites a constant screen size (like points), or enable it for points so they change size with distance.@*@* |
---|
| 969 | |
---|
| 970 | You only have to provide the final 3 parameters if you turn attenuation on. The formula for attenuation is that the size of the point is multiplied by 1 / (constant + linear * dist + quadratic * d^2); therefore turning it off is equivalent to (constant = 1, linear = 0, quadratic = 0) and standard perspective attenuation is (constant = 0, linear = 1, quadratic = 0). The latter is assumed if you leave out the final 3 parameters when you specify 'on'.@*@* |
---|
| 971 | |
---|
| 972 | Note that the resulting attenuated size is clamped to the minimum and maximum point size, see the next section.@*@* |
---|
| 973 | |
---|
| 974 | Format: point_size_attenuation <on|off> [constant linear quadratic] |
---|
| 975 | Default: point_size_attenuation off |
---|
| 976 | |
---|
| 977 | @anchor{point_size_min} |
---|
| 978 | @subheading point_size_min |
---|
| 979 | |
---|
| 980 | Sets the minimum point size after attenuation (@ref{point_size_attenuation}). For details on the size metrics, @xref{point_size}.@*@* |
---|
| 981 | |
---|
| 982 | Format: point_size_min <size> |
---|
| 983 | Default: point_size_min 0 |
---|
| 984 | |
---|
| 985 | @anchor{point_size_max} |
---|
| 986 | @subheading point_size_max |
---|
| 987 | |
---|
| 988 | Sets the maximum point size after attenuation (@ref{point_size_attenuation}). For details on the size metrics, @xref{point_size}. A value of 0 means the maximum is set to the same as the max size reported by the current card. @*@* |
---|
| 989 | |
---|
| 990 | Format: point_size_max <size> |
---|
| 991 | Default: point_size_max 0 |
---|
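| | Putting the point attributes above together, a perspective-attenuated point sprite pass might look like the following sketch (the texture name and size values are hypothetical): |
---|
| | @example |
---|
| | pass |
---|
| | { |
---|
| |   point_sprites on |
---|
| |   // size is in normalised screen coordinates because attenuation is on |
---|
| |   point_size 0.05 |
---|
| |   // standard perspective attenuation (constant 0, linear 1, quadratic 0) |
---|
| |   point_size_attenuation on |
---|
| |   point_size_min 0.01 |
---|
| |   point_size_max 0.2 |
---|
| | |
---|
| |   texture_unit |
---|
| |   { |
---|
| |     texture spark.png |
---|
| |   } |
---|
| | } |
---|
| | @end example |
---|
| | |
---|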
| 992 | |
---|
| 993 | @node Texture Units |
---|
| 994 | @subsection Texture Units |
---|
| 995 | |
---|
| 996 | Here are the attributes you can use in a 'texture_unit' section of a .material script: |
---|
| 997 | |
---|
| 998 | @heading Available Texture Layer Attributes |
---|
| 999 | @itemize @bullet |
---|
| 1000 | @item |
---|
| 1001 | @ref{texture_alias} |
---|
| 1002 | @item |
---|
| 1003 | @ref{texture} |
---|
| 1004 | @item |
---|
| 1005 | @ref{anim_texture} |
---|
| 1006 | @item |
---|
| 1007 | @ref{cubic_texture} |
---|
| 1008 | @item |
---|
| 1009 | @ref{tex_coord_set} |
---|
| 1010 | @item |
---|
| 1011 | @ref{tex_address_mode} |
---|
| 1012 | @item |
---|
| 1013 | @ref{tex_border_colour} |
---|
| 1014 | @item |
---|
| 1015 | @ref{filtering} |
---|
| 1016 | @item |
---|
| 1017 | @ref{max_anisotropy} |
---|
| 1018 | @item |
---|
| 1019 | @ref{colour_op} |
---|
| 1020 | @item |
---|
| 1021 | @ref{colour_op_ex} |
---|
| 1022 | @item |
---|
| 1023 | @ref{colour_op_multipass_fallback} |
---|
| 1024 | @item |
---|
| 1025 | @ref{alpha_op_ex} |
---|
| 1026 | @item |
---|
| 1027 | @ref{env_map} |
---|
| 1028 | @item |
---|
| 1029 | @ref{scroll} |
---|
| 1030 | @item |
---|
| 1031 | @ref{scroll_anim} |
---|
| 1032 | @item |
---|
| 1033 | @ref{rotate} |
---|
| 1034 | @item |
---|
| 1035 | @ref{rotate_anim} |
---|
| 1036 | @item |
---|
| 1037 | @ref{scale} |
---|
| 1038 | @item |
---|
| 1039 | @ref{wave_xform} |
---|
| 1040 | @item |
---|
| 1041 | @ref{transform} |
---|
| 1042 | @end itemize |
---|
| 1043 | |
---|
| 1044 | You can also use a nested 'texture_source' section in order to use a special add-in as a source of texture data, @xref{External Texture Sources} for details. |
---|
| 1045 | |
---|
| 1046 | @heading Attribute Descriptions |
---|
| 1047 | @anchor{texture_alias} |
---|
| 1048 | @subheading texture_alias |
---|
| 1049 | |
---|
| 1050 | Sets the alias name for this texture unit.@*@* |
---|
| 1051 | |
---|
| 1052 | Format: texture_alias <name>@*@* |
---|
| 1053 | |
---|
| 1054 | Example: texture_alias NormalMap@*@* |
---|
| 1055 | |
---|
| 1056 | Setting the texture alias name is useful if this material is to be copied by other materials and only the textures will be changed in the new material. (@xref{Copying Materials})@*@* |
---|
| 1057 | Default: If a texture_unit has a name then the texture_alias defaults to the texture_unit name. |
---|
| 1058 | |
---|
| 1059 | @anchor{texture} |
---|
| 1060 | @subheading texture |
---|
| 1061 | |
---|
| 1062 | Sets the name of the static texture image this layer will use.@*@* |
---|
| 1063 | |
---|
| 1064 | Format: texture <texturename> [<type>] [numMipMaps] [alpha]@*@* |
---|
| 1065 | |
---|
| 1066 | Example: texture funkywall.jpg@*@* |
---|
| 1067 | |
---|
| 1068 | This setting is mutually exclusive with the anim_texture attribute. Note that the texture file cannot include spaces. Those of you Windows users who like spaces in filenames, please get over it and use underscores instead.@*@* |
---|
| 1069 | The 'type' parameter allows you to specify the type of texture to create - the default is '2d', but you can override this; here's the full list: |
---|
| 1070 | @table @asis |
---|
| 1071 | @item 1d |
---|
| 1072 | A 1-dimensional texture; that is, a texture which is only 1 pixel high. These kinds of textures can be useful when you need to encode a function in a texture and use it as a simple lookup, perhaps in a fragment program. It is important that you use this setting when you use a fragment program which uses 1-dimensional texture coordinates, since GL requires you to use a texture type that matches (D3D will let you get away with it, but you ought to plan for cross-compatibility). Your texture widths should still be a power of 2 for best compatibility and performance. |
---|
| 1073 | @item 2d |
---|
| 1074 | The default type which is assumed if you omit it. Your texture has a width and a height, both of which should preferably be powers of 2, and if you can, make them square because this will look best on most hardware. These can be addressed with 2D texture coordinates. |
---|
| 1075 | @item 3d |
---|
| 1076 | A 3-dimensional texture, i.e. a volume texture. Your texture has a width, a height, both of which should be powers of 2, and a depth. These can be addressed with 3D texture coordinates, e.g. through a pixel shader. |
---|
| 1077 | @item cubic |
---|
| 1078 | This texture is made up of 6 2D textures which are pasted around the inside of a cube. Can be addressed with 3D texture coordinates and are useful for cubic reflection maps and normal maps. |
---|
| 1079 | @end table |
---|
| 1080 | The 'numMipMaps' option allows you to specify the number of mipmaps to generate for this texture. The default is 'unlimited' which means mips down to 1x1 size are generated. You can specify a fixed number (even 0) if you like instead. Note that if you use the same texture in many material scripts, the number of mipmaps generated will conform to the number specified in the first texture_unit used to load the texture - so be consistent with your usage.@*@* |
---|
| 1081 | |
---|
| 1082 | Finally, the 'alpha' option allows you to specify that a single channel (luminance) texture should be loaded as alpha, rather than the default which is to load it into the red channel. This can be helpful if you want to use alpha-only textures in the fixed function pipeline. |
---|
| 1083 | |
---|
| 1084 | Default: none@*@* |
---|
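| | For example, declaring a volume texture, or limiting mipmap generation on a 2D texture, might look like the following sketch (file names are hypothetical); these blocks sit inside a pass: |
---|
| | @example |
---|
| | texture_unit |
---|
| | { |
---|
| |   // a 3D volume texture addressed with 3D texture coordinates |
---|
| |   texture noise_volume.dds 3d |
---|
| | } |
---|
| | |
---|
| | texture_unit |
---|
| | { |
---|
| |   // a 2D texture with only 4 mipmaps generated |
---|
| |   texture detail.png 2d 4 |
---|
| | } |
---|
| | @end example |
---|
| | |
---|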
| 1085 | |
---|
| 1086 | @anchor{anim_texture} |
---|
| 1087 | @subheading anim_texture |
---|
| 1088 | |
---|
| 1089 | Sets the images to be used in an animated texture layer. In this case an animated texture layer means one which has multiple frames, each of which is a separate image file. There are 2 formats, one for implicitly determined image names, one for explicitly named images.@*@* |
---|
| 1090 | |
---|
| 1091 | Format1 (short): anim_texture <base_name> <num_frames> <duration>@*@* |
---|
| 1092 | |
---|
| 1093 | Example: anim_texture flame.jpg 5 2.5@*@* |
---|
| 1094 | |
---|
| 1095 | This sets up an animated texture layer made up of 5 frames named flame_0.jpg, flame_1.jpg, flame_2.jpg etc, with an animation length of 2.5 seconds (2fps). If duration is set to 0, then no automatic transition takes place and frames must be changed manually in code.@*@* |
---|
| 1096 | |
---|
| 1097 | Format2 (long): anim_texture <frame1> <frame2> ... <duration>@*@* |
---|
| 1098 | |
---|
| 1099 | Example: anim_texture flamestart.jpg flamemore.png flameagain.jpg moreflame.jpg lastflame.tga 2.5@*@* |
---|
| 1100 | |
---|
| 1101 | This sets up the same duration animation but from 5 separately named image files. The first format is more concise, but the second is provided if you cannot make your images conform to the naming standard required for it. @*@* |
---|
| 1102 | |
---|
| 1103 | Default: none@*@* |
---|
| 1104 | |
---|
| 1105 | @anchor{cubic_texture} |
---|
| 1106 | @subheading cubic_texture |
---|
| 1107 | |
---|
| 1108 | Sets the images used in a cubic texture, i.e. one made up of 6 individual images making up the faces of a cube. These kinds of textures are used for reflection maps (if hardware supports cubic reflection maps) or skyboxes. There are 2 formats, a brief format expecting image names of a particular format and a more flexible but longer format for arbitrarily named textures.@*@* |
---|
| 1109 | |
---|
| 1110 | Format1 (short): cubic_texture <base_name> <combinedUVW|separateUV>@*@* |
---|
| 1111 | |
---|
| 1112 | The base_name in this format is something like 'skybox.jpg', and the system will expect you to provide skybox_fr.jpg, skybox_bk.jpg, skybox_up.jpg, skybox_dn.jpg, skybox_lf.jpg, and skybox_rt.jpg for the individual faces.@*@* |
---|
| 1113 | |
---|
| 1114 | Format2 (long): cubic_texture <front> <back> <left> <right> <up> <down> separateUV@*@* |
---|
| 1115 | |
---|
| 1116 | In this case each face is specified explicitly, in case you don't want to conform to the image naming standards above. You can only use this for the separateUV version since the combinedUVW version requires a single texture name to be assigned to the combined 3D texture (see below).@*@* |
---|
| 1117 | |
---|
| 1118 | In both cases the final parameter means the following: |
---|
| 1119 | @table @asis |
---|
| 1120 | @item combinedUVW |
---|
| 1121 | The 6 textures are combined into a single 'cubic' texture map which is then addressed using 3D texture coordinates with U, V and W components. Necessary for reflection maps since you never know which face of the box you are going to need. Note that not all cards support cubic environment mapping. |
---|
| 1122 | @item separateUV |
---|
| 1123 | The 6 textures are kept separate but are all referenced by this single texture layer. One texture at a time is active (they are actually stored as 6 frames), and they are addressed using standard 2D UV coordinates. This type is good for skyboxes since only one face is rendered at one time and this has more guaranteed hardware support on older cards. |
---|
| 1124 | @end table |
---|
| 1125 | @* |
---|
| 1126 | Default: none |
---|
| 1127 | |
---|
| 1128 | @anchor{tex_coord_set} |
---|
| 1129 | @subheading tex_coord_set |
---|
| 1130 | |
---|
| 1131 | Sets which texture coordinate set is to be used for this texture layer. A mesh can define multiple sets of texture coordinates, this sets which one this material uses.@*@* |
---|
| 1132 | |
---|
| 1133 | Format: tex_coord_set <set_num>@*@* |
---|
| 1134 | |
---|
| 1135 | Example: tex_coord_set 2@*@* |
---|
| 1136 | |
---|
| 1137 | Default: tex_coord_set 0@*@* |
---|
| 1138 | |
---|
| 1139 | @anchor{tex_address_mode} |
---|
| 1140 | @subheading tex_address_mode |
---|
| 1141 | Defines what happens when texture coordinates exceed 1.0 for this texture layer. You can use the simple format to specify the addressing mode for all 3 potential texture coordinates at once, or you can use the 2/3 parameter extended format to specify a different mode per texture coordinate. @*@* |
---|
| 1142 | |
---|
| 1143 | Simple Format: tex_address_mode <uvw_mode> @* |
---|
| 1144 | Extended Format: tex_address_mode <u_mode> <v_mode> [<w_mode>] |
---|
| 1145 | @table @asis |
---|
| 1146 | @item wrap |
---|
| 1147 | Any value beyond 1.0 wraps back to 0.0. Texture is repeated. |
---|
| 1148 | @item clamp |
---|
| 1149 | Values beyond 1.0 are clamped to 1.0. Texture 'streaks' beyond 1.0 since last line of pixels is used across the rest of the address space. Useful for textures which need exact coverage from 0.0 to 1.0 without the 'fuzzy edge' wrap gives when combined with filtering. |
---|
| 1150 | @item mirror |
---|
| 1151 | Texture flips every boundary, meaning texture is mirrored every 1.0 u or v |
---|
| 1152 | @item border |
---|
| 1153 | Values outside the range [0.0, 1.0] are set to the border colour, you might also set the @ref{tex_border_colour} attribute too. |
---|
| 1154 | @end table |
---|
| 1155 | @* |
---|
| 1156 | Default: tex_address_mode wrap@*@* |
---|
| 1157 | |
---|
| 1158 | @anchor{tex_border_colour} |
---|
| 1159 | @subheading tex_border_colour |
---|
| 1160 | |
---|
| 1161 | Sets the border colour of border texture address mode (see @ref{tex_address_mode}). @*@* |
---|
| 1162 | |
---|
| 1163 | Format: tex_border_colour <red> <green> <blue> [<alpha>]@* |
---|
| 1164 | NB valid colour values are between 0.0 and 1.0.@*@* |
---|
| 1165 | |
---|
| 1166 | Example: tex_border_colour 0.0 1.0 0.3@*@* |
---|
| 1167 | |
---|
| 1168 | Default: tex_border_colour 0.0 0.0 0.0 1.0@*@* |
---|
| 1169 | |
---|
| 1170 | @anchor{filtering} |
---|
| 1171 | @subheading filtering |
---|
| 1172 | |
---|
| 1173 | Sets the type of texture filtering used when magnifying or minifying a texture. There are 2 formats to this attribute, the simple format where you simply specify the name of a predefined set of filtering options, and the complex format, where you individually set the minification, magnification, and mip filters yourself.@*@* |
---|
| 1174 | @strong{Simple Format}@* |
---|
| 1175 | Format: filtering <none|bilinear|trilinear|anisotropic>@* |
---|
| 1176 | Default: filtering bilinear@*@* |
---|
| 1177 | With this format, you only need to provide a single parameter which is one of the following: |
---|
| 1178 | @table @asis |
---|
| 1179 | @item none |
---|
| 1180 | No filtering or mipmapping is used. This is equivalent to the complex format 'filtering point point none'. |
---|
| 1181 | @item bilinear |
---|
| 1182 | 2x2 box filtering is performed when magnifying or reducing a texture, and a mipmap is picked from the list but no filtering is done between the levels of the mipmaps. This is equivalent to the complex format 'filtering linear linear point'. |
---|
| 1183 | @item trilinear |
---|
| 1184 | 2x2 box filtering is performed when magnifying and reducing a texture, and the closest 2 mipmaps are filtered together. This is equivalent to the complex format 'filtering linear linear linear'. |
---|
| 1185 | @item anisotropic |
---|
| 1186 | This is the same as 'trilinear', except the filtering algorithm takes account of the slope of the triangle in relation to the camera rather than simply doing a 2x2 pixel filter in all cases. This makes triangles at acute angles look less fuzzy. Equivalent to the complex format 'filtering anisotropic anisotropic linear'. Note that in order for this to make any difference, you must also set the @ref{max_anisotropy} attribute too. |
---|
| 1187 | @end table |
---|
| 1188 | @*@* |
---|
| 1189 | @strong{Complex Format}@* |
---|
| 1190 | Format: filtering <minification> <magnification> <mip>@* |
---|
| 1191 | Default: filtering linear linear point@*@* |
---|
| 1192 | This format gives you complete control over the minification, magnification, and mip filters. Each parameter can be one of the following: |
---|
| 1193 | @table @asis |
---|
| 1194 | @item none |
---|
| 1195 | Nothing - only a valid option for the 'mip' filter, since this turns mipmapping off completely. The lowest setting for min and mag is 'point'. |
---|
| 1196 | @item point |
---|
| 1197 | Pick the closest pixel in min or mag modes. In mip mode, this picks the closest matching mipmap. |
---|
| 1198 | @item linear |
---|
| 1199 | Filter a 2x2 box of pixels around the closest one. In the 'mip' filter this enables filtering between mipmap levels. |
---|
| 1200 | @item anisotropic |
---|
| 1201 | Only valid for min and mag modes, makes the filter compensate for camera-space slope of the triangles. Note that in order for this to make any difference, you must also set the @ref{max_anisotropy} attribute too. |
---|
| 1202 | @end table |
---|
| 1203 | |
---|
| 1204 | @anchor{max_anisotropy} |
---|
| 1205 | @subheading max_anisotropy |
---|
| 1206 | |
---|
| 1207 | Sets the maximum degree of anisotropy that the renderer will try to compensate for when filtering textures. The degree of anisotropy is the ratio between the height of the texture segment visible in a screen space region versus the width - so for example a floor plane, which stretches on into the distance and thus the vertical texture coordinates change much faster than the horizontal ones, has a higher anisotropy than a wall which is facing you head on (which has an anisotropy of 1 if your line of sight is perfectly perpendicular to it). You should set the max_anisotropy value to something greater than 1 to begin compensating; higher values can compensate for more acute angles.@*@* |
---|
| 1208 | In order for this to be used, you have to set the minification and/or the magnification @ref{filtering} option on this texture to anisotropic. |
---|
| 1209 | |
---|
| 1210 | Format: max_anisotropy <value>@* |
---|
| 1211 | Default: max_anisotropy 1 |
---|
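| | For instance, a ground texture seen at shallow angles might use the following sketch (the texture name is hypothetical); the simple 'anisotropic' keyword used here is equivalent to 'filtering anisotropic anisotropic linear': |
---|
| | @example |
---|
| | texture_unit |
---|
| | { |
---|
| |   texture ground.jpg |
---|
| |   filtering anisotropic |
---|
| |   // compensate for viewing angles up to 8:1 anisotropy |
---|
| |   max_anisotropy 8 |
---|
| | } |
---|
| | @end example |
---|
| | |
---|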
| 1212 | |
---|
| 1213 | @anchor{colour_op} |
---|
| 1214 | @subheading colour_op |
---|
| 1215 | |
---|
| 1216 | Determines how the colour of this texture layer is combined with the one below it (or the lighting effect on the geometry if this is the first layer).@*@* |
---|
| 1217 | |
---|
| 1218 | Format: colour_op <replace|add|modulate|alpha_blend>@*@* |
---|
| 1219 | |
---|
| 1220 | This method is the simplest way to blend texture layers, because it requires only one parameter, gives you the most common blending types, and automatically sets up 2 blending methods: one for if single-pass multitexturing hardware is available, and another for if it is not and the blending must be achieved through multiple rendering passes. It is, however, quite limited and does not expose the more flexible multitexturing operations, simply because these can't be automatically supported in multipass fallback mode. If you want to use the fancier options, use @ref{colour_op_ex}, but you'll either have to be sure that enough multitexturing units will be available, or you should explicitly set a fallback using @ref{colour_op_multipass_fallback}.@* |
---|
| 1221 | @table @asis |
---|
| 1222 | @item replace |
---|
| 1223 | Replace all colour with the texture colour, with no adjustment. |
---|
| 1224 | @item add |
---|
| 1225 | Add colour components together. |
---|
| 1226 | @item modulate |
---|
| 1227 | Multiply colour components together. |
---|
| 1228 | @item alpha_blend |
---|
| 1229 | Blend based on texture alpha. |
---|
| 1230 | @end table |
---|
| 1231 | @* |
---|
| 1232 | Default: colour_op modulate |
---|
| 1233 | |
---|
| 1234 | @anchor{colour_op_ex} |
---|
| 1235 | @subheading colour_op_ex |
---|
| 1236 | |
---|
| 1237 | This is an extended version of the @ref{colour_op} attribute which allows extremely detailed control over the blending applied between this and earlier layers. Multitexturing hardware can apply more complex blending operations than multipass blending, but you are limited to the number of texture units which are available in hardware.@*@* |
---|
| 1238 | |
---|
| 1239 | Format: colour_op_ex <operation> <source1> <source2> [<manual_factor>] [<manual_colour1>] [<manual_colour2>]@*@* |
---|
| 1240 | |
---|
| 1241 | Example: colour_op_ex add_signed src_manual src_current 0.5@*@* |
---|
| 1242 | |
---|
| 1243 | See the IMPORTANT note below about the issues between multipass and multitexturing that using this method can create. Texture colour operations determine how the final colour of the surface appears when rendered. Texture units are used to combine colour values from various sources (e.g. the diffuse colour of the surface from lighting calculations, combined with the colour of the texture). This method allows you to specify the 'operation' to be used, i.e. the calculation such as adds or multiplies, and which values to use as arguments, such as a fixed value or a value from a previous calculation.@*@* |
---|
| 1244 | |
---|
| 1245 | @table @asis |
---|
| 1246 | @item Operation options |
---|
| 1247 | @table @asis |
---|
| 1248 | @item source1 |
---|
| 1249 | Use source1 without modification |
---|
| 1250 | @item source2 |
---|
| 1251 | Use source2 without modification |
---|
| 1252 | @item modulate |
---|
| 1253 | Multiply source1 and source2 together. |
---|
| 1254 | @item modulate_x2 |
---|
| 1255 | Multiply source1 and source2 together, then by 2 (brightening). |
---|
| 1256 | @item modulate_x4 |
---|
| 1257 | Multiply source1 and source2 together, then by 4 (brightening). |
---|
| 1258 | @item add |
---|
| 1259 | Add source1 and source2 together. |
---|
| 1260 | @item add_signed |
---|
| 1261 | Add source1 and source2 then subtract 0.5. |
---|
| 1262 | @item add_smooth |
---|
| 1263 | Add source1 and source2, subtract the product |
---|
| 1264 | @item subtract |
---|
| 1265 | Subtract source2 from source1 |
---|
| 1266 | @item blend_diffuse_alpha |
---|
| 1267 | Use interpolated alpha value from vertices to scale source1, then add source2 scaled by (1-alpha). |
---|
| 1268 | @item blend_texture_alpha |
---|
| 1269 | As blend_diffuse_alpha but use alpha from texture |
---|
| 1270 | @item blend_current_alpha |
---|
| 1271 | As blend_diffuse_alpha but use current alpha from previous stages (same as blend_diffuse_alpha for first layer) |
---|
| 1272 | @item blend_manual |
---|
| 1273 | As blend_diffuse_alpha but use a constant manual alpha value specified in <manual> |
---|
| 1274 | @item dotproduct |
---|
| 1275 | The dot product of source1 and source2 |
---|
| 1276 | @item blend_diffuse_colour |
---|
| 1277 | Use interpolated colour value from vertices to scale source1, then add source2 scaled by (1-colour). |
---|
| 1278 | @end table |
---|
| 1279 | @item Source1 and source2 options |
---|
| 1280 | @table @asis |
---|
| 1281 | @item src_current |
---|
| 1282 | The colour as built up from previous stages. |
---|
| 1283 | @item src_texture |
---|
| 1284 | The colour derived from the texture assigned to this layer. |
---|
| 1285 | @item src_diffuse |
---|
| 1286 | The interpolated diffuse colour from the vertices (same as 'src_current' for first layer). |
---|
| 1287 | @item src_specular |
---|
| 1288 | The interpolated specular colour from the vertices. |
---|
| 1289 | @item src_manual |
---|
| 1290 | The manual colour specified at the end of the command. |
---|
| 1291 | @end table |
---|
| 1292 | @end table |
---|
| 1293 | @* |
---|
| 1294 | For example 'modulate' takes the colour results of the previous layer, and multiplies them with the new texture being applied. Bear in mind that colours are RGB values from 0.0-1.0 so multiplying them together will result in values in the same range, 'tinted' by the multiply. Note however that a straight multiply normally has the effect of darkening the textures - for this reason there are brightening operations like modulate_x2. Note that because of the limitations on some underlying APIs (Direct3D included) the 'texture' argument can only be used as the first argument, not the second. @*@* |
---|
| 1295 | |
---|
| 1296 | Note that the last parameter is only required if you decide to pass a value manually into the operation. Hence you only need to fill these in if you use the 'blend_manual' operation.@*@* |
---|
| 1297 | |
---|
| 1298 | IMPORTANT: Ogre tries to use multitexturing hardware to blend texture layers together. However, if it runs out of texturing units (e.g. 2 on a GeForce2, 4 on a GeForce3) it has to fall back on multipass rendering, i.e. rendering the same object multiple times with different textures. This is less efficient, and a smaller range of blending operations can be performed. For this reason, if you use this method you really should set the colour_op_multipass_fallback attribute to specify which effect you want to fall back on if sufficient hardware is not available (the default is just 'modulate' which is unlikely to be what you want if you're doing swanky blending here). If you wish to avoid having to do this, use the simpler colour_op attribute which allows less flexible blending options but sets up the multipass fallback automatically, since it only allows operations which have direct multipass equivalents.@*@* |
---|
| 1299 | |
---|
| 1300 | Default: none (colour_op modulate)@* |
---|
| 1301 | |
---|
| 1302 | @anchor{colour_op_multipass_fallback} |
---|
| 1303 | @subheading colour_op_multipass_fallback |
---|
| 1304 | |
---|
| 1305 | Sets the multipass fallback operation for this layer, if you used colour_op_ex and not enough multitexturing hardware is available.@*@* |
---|
| 1306 | |
---|
| 1307 | Format: colour_op_multipass_fallback <src_factor> <dest_factor>@*@* |
---|
| 1308 | |
---|
| 1309 | Example: colour_op_multipass_fallback one one_minus_dest_alpha@*@* |
---|
| 1310 | |
---|
| 1311 | Because some of the effects you can create using colour_op_ex are only supported under multitexturing hardware, if the hardware is lacking the system must fallback on multipass rendering, which unfortunately doesn't support as many effects. This attribute is for you to specify the fallback operation which most suits you.@*@* |
---|
| 1312 | |
---|
| 1313 | The parameters are the same as in the scene_blend attribute; this is because multipass rendering IS effectively scene blending, since each layer is rendered on top of the last using the same mechanism as making an object transparent, it's just being rendered in the same place repeatedly to get the multitexture effect. If you use the simpler (and less flexible) colour_op attribute you don't need to call this as the system sets up the fallback for you.@*@* |
---|
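| | As a sketch, a brightened detail layer with an explicit fallback might be declared as follows (the texture name is hypothetical); since add_signed has no exact multipass equivalent, a plain additive fallback is specified for hardware without enough texture units: |
---|
| | @example |
---|
| | texture_unit |
---|
| | { |
---|
| |   texture detail.png |
---|
| |   // brightened combination with the layers below |
---|
| |   colour_op_ex add_signed src_texture src_current |
---|
| |   // approximate the effect with a plain add if we fall back to multipass |
---|
| |   colour_op_multipass_fallback one one |
---|
| | } |
---|
| | @end example |
---|
| | |
---|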
| 1314 | |
---|
| 1315 | @anchor{alpha_op_ex} |
---|
| 1316 | @subheading alpha_op_ex |
---|
| 1317 | |
---|
| 1318 | Behaves in exactly the same way as @ref{colour_op_ex} except that it determines how alpha values are combined between texture layers rather than colour values. The only difference is that the 2 manual colours at the end of colour_op_ex are just single floating-point values in alpha_op_ex. |
---|
| 1319 | |
---|
| 1320 | @anchor{env_map} |
---|
| 1321 | @subheading env_map |
---|
| 1322 | |
---|
| 1323 | Turns on/off texture coordinate effect that makes this layer an environment map.@*@* |
---|
| 1324 | |
---|
| 1325 | Format: env_map <off|spherical|planar|cubic_reflection|cubic_normal>@*@* |
---|
| 1326 | |
---|
| 1327 | Environment maps make an object look reflective by using automatic texture coordinate generation depending on the relationship between the object's vertices or normals and the eye.@*@* |
---|
| 1328 | @table @asis |
---|
| 1329 | @item spherical |
---|
| 1330 | A spherical environment map. Requires a single texture which is either a fish-eye lens view of the reflected scene, or some other texture which looks good as a spherical map (a texture of glossy highlights is popular especially in car sims). This effect is based on the relationship between the eye direction and the vertex normals of the object, so works best when there are a lot of gradually changing normals, i.e. curved objects. |
---|
| 1331 | @item planar |
---|
| 1332 | Similar to the spherical environment map, but the effect is based on the position of the vertices in the viewport rather than vertex normals. This effect is therefore useful for planar geometry (where a spherical env_map would not look good because the normals are all the same) or objects without normals. |
---|
| 1333 | @item cubic_reflection |
---|
| 1334 | A more advanced form of reflection mapping which uses a group of 6 textures making up the inside of a cube, each of which is a view of the scene down each axis. Works extremely well in all cases but has a higher technical requirement from the card than spherical mapping. Requires that you bind a @ref{cubic_texture} to this texture unit and use the 'combinedUVW' option. |
---|
| 1335 | @item cubic_normal |
---|
| 1336 | Generates 3D texture coordinates containing the camera space normal vector from the normal information held in the vertex data. Again, full use of this feature requires a @ref{cubic_texture} with the 'combinedUVW' option. |
---|
| 1337 | |
---|
| 1338 | @end table |
---|
| 1339 | @* |
---|
| 1340 | Default: env_map off@* |
---|
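| | A sketch of a reflective texture unit combining @ref{cubic_texture} and env_map (the base texture name is hypothetical): |
---|
| | @example |
---|
| | texture_unit |
---|
| | { |
---|
| |   // 6 faces combined into one cube map, addressed with generated 3D coordinates |
---|
| |   cubic_texture chrome.jpg combinedUVW |
---|
| |   env_map cubic_reflection |
---|
| |   // add the reflection on top of the layers below |
---|
| |   colour_op add |
---|
| | } |
---|
| | @end example |
---|
| | |
---|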
| 1341 | |
---|
| 1342 | @anchor{scroll} |
---|
| 1343 | @subheading scroll |
---|
| 1344 | |
---|
| 1345 | |
---|
| 1346 | Sets a fixed scroll offset for the texture.@*@* |
---|
| 1347 | |
---|
| 1348 | Format: scroll <x> <y>@*@* |
---|
| 1349 | |
---|
| 1350 | This method offsets the texture in this layer by a fixed amount. Useful for small adjustments without altering texture coordinates in models. However if you wish to have an animated scroll effect, see the @ref{scroll_anim} attribute.@*@* |
---|
| 1351 | |
---|
| 1352 | @anchor{scroll_anim} |
---|
| 1353 | @subheading scroll_anim |
---|
| 1354 | |
---|
| 1355 | Sets up an animated scroll for the texture layer. Useful for creating fixed-speed scrolling effects on a texture layer (for varying scroll speeds, see @ref{wave_xform}).@*@* |
---|
| 1356 | |
---|
| 1357 | Format: scroll_anim <xspeed> <yspeed>@* |
---|
| 1358 | |
---|
| 1359 | @anchor{rotate} |
---|
| 1360 | @subheading rotate |
---|
| 1361 | |
---|
| 1362 | Rotates a texture to a fixed angle. This attribute changes the rotational orientation of a texture to a fixed angle, useful for fixed adjustments. If you wish to animate the rotation, see @ref{rotate_anim}.@*@* |
---|
| 1363 | |
---|
| 1364 | Format: rotate <angle>@*@* |
---|
| 1365 | |
---|
| 1366 | The parameter is an anticlockwise angle in degrees.@*@* |
---|
| 1367 | |
---|
| 1368 | @anchor{rotate_anim} |
---|
| 1369 | @subheading rotate_anim |
---|
| 1370 | |
---|
| 1371 | Sets up an animated rotation effect of this layer. Useful for creating fixed-speed rotation animations (for varying speeds, see @ref{wave_xform}).@*@* |
---|
| 1372 | |
---|
| 1373 | Format: rotate_anim <revs_per_second>@*@* |
---|
| 1374 | |
---|
| 1375 | The parameter is a number of anticlockwise revolutions per second.@*@* |
---|
| 1376 | |
---|
| 1377 | @anchor{scale} |
---|
| 1378 | @subheading scale |
---|
| 1379 | |
---|
| 1380 | Adjusts the scaling factor applied to this texture layer. Useful for adjusting the size of textures without making changes to geometry. This is a fixed scaling factor, if you wish to animate this see @ref{wave_xform}.@*@* |
---|
| 1381 | |
---|
| 1382 | Format: scale <x_scale> <y_scale>@*@* |
---|
| 1383 | |
---|
| 1384 | Valid scale values are greater than 0, with a scale factor of 2 making the texture twice as big in that dimension etc.@*@* |
---|
| 1385 | |
---|
| 1386 | @anchor{wave_xform} |
---|
| 1387 | @subheading wave_xform |
---|
| 1388 | |
---|
| 1389 | Sets up a transformation animation based on a wave function. Useful for more advanced texture layer transform effects. You can add multiple instances of this attribute to a single texture layer if you wish.@*@* |
---|
| 1390 | |
---|
| 1391 | Format: wave_xform <xform_type> <wave_type> <base> <frequency> <phase> <amplitude>@*@* |
---|
| 1392 | |
---|
| 1393 | Example: wave_xform scale_x sine 1.0 0.2 0.0 5.0@*@* |
---|
| 1394 | @table @asis |
---|
| 1395 | @item xform_type |
---|
| 1396 | @table @asis |
---|
| 1397 | @item scroll_x |
---|
| 1398 | Animate the x scroll value |
---|
| 1399 | @item scroll_y |
---|
| 1400 | Animate the y scroll value |
---|
| 1401 | @item rotate |
---|
| 1402 | Animate the rotate value |
---|
| 1403 | @item scale_x |
---|
| 1404 | Animate the x scale value |
---|
| 1405 | @item scale_y |
---|
| 1406 | Animate the y scale value |
---|
| 1407 | @end table |
---|
| 1408 | @item wave_type |
---|
| 1409 | @table @asis |
---|
| 1410 | @item sine |
---|
| 1411 | A typical sine wave which smoothly loops between min and max values |
---|
| 1412 | @item triangle |
---|
| 1413 | An angled wave which increases & decreases at constant speed, changing instantly at the extremes |
---|
| 1414 | @item square |
---|
| 1415 | Max for half the wavelength, min for the rest with instant transition between |
---|
| 1416 | @item sawtooth |
---|
| 1417 | Gradual steady increase from min to max over the period with an instant return to min at the end. |
---|
| 1418 | @item inverse_sawtooth |
---|
| 1419 | Gradual steady decrease from max to min over the period, with an instant return to max at the end. |
---|
| 1420 | @end table |
---|
| 1421 | @item base |
---|
| 1422 | The base value, the minimum if amplitude > 0, the maximum if amplitude < 0 |
---|
| 1423 | @item frequency |
---|
| 1424 | The number of wave iterations per second, i.e. speed |
---|
| 1425 | @item phase |
---|
| 1426 | Offset of the wave start |
---|
| 1427 | @item amplitude |
---|
| 1428 | The size of the wave |
---|
| 1429 | @end table |
---|
| 1430 | @* |
---|
| 1431 | The range of the output of the wave will be [base, base + amplitude]. So the example above scales the texture in the x direction between 1 (normal size) and 6 along a sine wave at one cycle every 5 seconds (0.2 waves per second).@*@* |
---|
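|  | To see how these transform attributes combine, here is a minimal, illustrative texture_unit block (the texture name water.jpg is just a placeholder) which pairs a constant scroll with a gentle sine-wave wobble on the x scale: |
---|
|  | @example |
---|
|  | texture_unit |
---|
|  | { |
---|
|  |     texture water.jpg |
---|
|  |     scroll_anim 0.1 0.0 |
---|
|  |     wave_xform scale_x sine 1.0 0.5 0.0 0.1 |
---|
|  | } |
---|
|  | @end example |
---|
|  | Here the x scale oscillates between 1.0 and 1.1 (the [base, base + amplitude] range described above), completing one cycle every 2 seconds.@*@* |
---|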
| 1432 | |
---|
| 1433 | @anchor{transform} |
---|
| 1434 | @subheading transform |
---|
| 1435 | |
---|
| 1436 | This attribute allows you to specify a static 4x4 transformation matrix for the texture unit, thus replacing the individual scroll, rotate and scale attributes mentioned above. @*@* |
---|
| 1437 | |
---|
| 1438 | Format: transform m00 m01 m02 m03 m10 m11 m12 m13 m20 m21 m22 m23 m30 m31 m32 m33@*@* |
---|
| 1439 | |
---|
| 1440 | The indexes of the 4x4 matrix value above are expressed as m<row><col>. |
---|
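|  | For example, supplying the identity matrix (shown here purely for illustration) leaves the texture coordinates unchanged:@*@* |
---|
|  | @example |
---|
|  | transform 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 |
---|
|  | @end example |
---|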
| 1441 | |
---|
| 1442 | @node Declaring Vertex and Fragment Programs |
---|
| 1443 | @subsection Declaring Vertex and Fragment Programs |
---|
| 1444 | |
---|
| 1445 | In order to use a vertex or fragment program in your materials (@xref{Using Vertex and Fragment Programs in a Pass}), you first have to define them. A single program definition can be used by any number of materials; the only prerequisite is that a program must be defined before being referenced in the pass section of a material.@*@* |
---|
| 1446 | |
---|
| 1447 | The definition of a program can either be embedded in the .material script itself (in which case it must precede any references to it in the script), or if you wish to use the same program across multiple .material files, you can define it in an external .program script. You define the program in exactly the same way whether you use a .program script or a .material script; the only difference is that all .program scripts are guaranteed to have been parsed before @strong{all} .material scripts, so you can be sure that your program has been defined before any .material script that might use it. Just like .material scripts, .program scripts will be read from any location which is on your resource path, and you can define many programs in a single script.@*@* |
---|
| 1448 | |
---|
| 1449 | Vertex and fragment programs can be low-level (i.e. assembler code written to the specification of a given low level syntax such as vs_1_1 or arbfp1) or high-level such as nVidia's Cg language (@xref{High-level Programs}). High level languages give you a number of advantages, such as being able to write more intuitive code, and possibly being able to target multiple architectures in a single program (for example, the same Cg program might be able to be used in both D3D and GL, whilst the equivalent low-level programs would require separate techniques, each targeting a different API). High-level programs also allow you to use named parameters instead of simply indexed ones; note that parameters are not defined here, they are specified when the program is used in a pass.@*@* |
---|
| 1450 | |
---|
| 1451 | Here is an example of a definition of a low-level vertex program: |
---|
| 1452 | @example |
---|
| 1453 | vertex_program myVertexProgram asm |
---|
| 1454 | { |
---|
| 1455 | source myVertexProgram.asm |
---|
| 1456 | syntax vs_1_1 |
---|
| 1457 | } |
---|
| 1458 | @end example |
---|
| 1459 | As you can see, that's very simple, and defining a fragment program is exactly the same, just with vertex_program replaced with fragment_program. You give the program a name in the header, followed by the word 'asm' to indicate that this is a low-level program. Inside the braces, you specify where the source is going to come from (and this is loaded from any of the resource locations as with other media), and also indicate the syntax being used. You might wonder why the syntax specification is required when many of the assembler syntaxes have a header identifying them anyway - the reason is that the engine needs to know what syntax the program is in before reading it, because during compilation of the material, we want to quickly skip programs which use an unsupported syntax, without loading the program first.@*@* |
---|
| 1460 | |
---|
| 1461 | The current supported syntaxes are: |
---|
| 1462 | @table @asis |
---|
| 1463 | @item vs_1_1 |
---|
| 1464 | This is one of the DirectX vertex shader assembler syntaxes. @* |
---|
| 1465 | Supported on cards from: ATI Radeon 8500, nVidia GeForce 3 @* |
---|
| 1466 | @item vs_2_0 |
---|
| 1467 | Another one of the DirectX vertex shader assembler syntaxes. @* |
---|
| 1468 | Supported on cards from: ATI Radeon 9600, nVidia GeForce FX 5 series @* |
---|
| 1469 | @item vs_2_x |
---|
| 1470 | Another one of the DirectX vertex shader assembler syntaxes. @* |
---|
| 1471 | Supported on cards from: ATI Radeon X series, nVidia GeForce FX 6 series @* |
---|
| 1472 | @item vs_3_0 |
---|
| 1473 | Another one of the DirectX vertex shader assembler syntaxes. @* |
---|
| 1474 | Supported on cards from: nVidia GeForce FX 6 series |
---|
| 1475 | @item arbvp1 |
---|
| 1476 | This is the OpenGL standard assembler format for vertex programs. It's roughly equivalent to DirectX vs_1_1. |
---|
| 1477 | @item vp20 |
---|
| 1478 | This is an nVidia-specific OpenGL vertex shader syntax which is a superset of vs 1.1. |
---|
| 1479 | @item vp30 |
---|
| 1480 | Another nVidia-specific OpenGL vertex shader syntax. It is a superset of vs 2.0, which is supported on nVidia GeForce FX 5 series and higher. |
---|
| 1481 | @item vp40 |
---|
| 1482 | Another nVidia-specific OpenGL vertex shader syntax. It is a superset of vs 3.0, which is supported on nVidia GeForce FX 6 series and higher. |
---|
| 1483 | @item ps_1_1, ps_1_2, ps_1_3 |
---|
| 1484 | DirectX pixel shader (ie fragment program) assembler syntax. @* |
---|
| 1485 | Supported on cards from: ATI Radeon 8500, nVidia GeForce 3 @* |
---|
| 1486 | NOTE: for ATI 8500, 9000, 9100, 9200 hardware, this profile can also be used in OpenGL. The ATI 8500 to 9200 do not support arbfp1, but do support the atifs extension in OpenGL, which is very similar in function to ps_1_4 in DirectX. Ogre has a built-in ps_1_x to atifs compiler that is automatically invoked when ps_1_x is used in OpenGL on ATI hardware. |
---|
| 1487 | @item ps_1_4 |
---|
| 1488 | DirectX pixel shader (ie fragment program) assembler syntax. @* |
---|
| 1489 | Supported on cards from: ATI Radeon 8500, nVidia GeForce FX 5 series @* |
---|
| 1490 | NOTE: for ATI 8500, 9000, 9100, 9200 hardware, this profile can also be used in OpenGL. The ATI 8500 to 9200 do not support arbfp1, but do support the atifs extension in OpenGL, which is very similar in function to ps_1_4 in DirectX. Ogre has a built-in ps_1_x to atifs compiler that is automatically invoked when ps_1_x is used in OpenGL on ATI hardware. |
---|
| 1491 | @item ps_2_0 |
---|
| 1492 | DirectX pixel shader (ie fragment program) assembler syntax. @* |
---|
| 1493 | Supported cards: ATI Radeon 9600, nVidia GeForce FX 5 series@* |
---|
| 1494 | @item ps_2_x |
---|
| 1495 | DirectX pixel shader (ie fragment program) assembler syntax. This is basically |
---|
| 1496 | ps_2_0 with a higher number of instructions. @* |
---|
| 1497 | Supported cards: ATI Radeon X series, nVidia GeForce FX 6 series@* |
---|
| 1498 | @item ps_3_0 |
---|
| 1499 | DirectX pixel shader (ie fragment program) assembler syntax. @* |
---|
| 1500 | Supported cards: nVidia GeForce FX 6 series@* |
---|
| 1501 | @item ps_3_x |
---|
| 1502 | DirectX pixel shader (ie fragment program) assembler syntax. @* |
---|
| 1503 | Supported cards: nVidia GeForce FX 7 series@* |
---|
| 1504 | @item arbfp1 |
---|
| 1505 | This is the OpenGL standard assembler format for fragment programs. It's roughly equivalent to ps_2_0, which means that not all cards that support basic pixel shaders under DirectX support arbfp1 (for example, neither the GeForce3 nor the GeForce4 supports arbfp1, but they do support ps_1_1). |
---|
| 1506 | @item fp20 |
---|
| 1507 | This is an nVidia-specific OpenGL fragment syntax which is a superset of ps 1.3. It allows you to use the 'nvparse' format for basic fragment programs. It actually uses NV_texture_shader and NV_register_combiners to provide functionality equivalent to DirectX's ps_1_1 under GL, but only for nVidia cards. However, since ATI cards adopted arbfp1 a little earlier than nVidia, it is mainly nVidia cards like the GeForce3 and GeForce4 that this will be useful for. You can find more information about nvparse at http://developer.nvidia.com/object/nvparse.html. |
---|
| 1508 | @item fp30 |
---|
| 1509 | Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps 2.0, which is supported on nVidia GeForce FX 5 series and higher. |
---|
| 1510 | @item fp40 |
---|
| 1511 | Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps 3.0, which is supported on nVidia GeForce FX 6 series and higher. |
---|
| 1512 | |
---|
| 1513 | @end table |
---|
| 1514 | |
---|
| 1515 | You can get a definitive list of the syntaxes supported by the current card by calling GpuProgramManager::getSingleton().getSupportedSyntax().@*@* |
---|
| 1516 | |
---|
| 1517 | @anchor{Default Program Parameters} |
---|
| 1518 | @subheading Default Program Parameters |
---|
| 1519 | While defining a vertex or fragment program, you can also specify the default parameters to be used for materials which use it, unless they specifically override them. You do this by including a nested 'default_params' section, like so: |
---|
| 1520 | @example |
---|
| 1521 | vertex_program Ogre/CelShadingVP cg |
---|
| 1522 | { |
---|
| 1523 | source Example_CelShading.cg |
---|
| 1524 | entry_point main_vp |
---|
| 1525 | profiles vs_1_1 arbvp1 |
---|
| 1526 | |
---|
| 1527 | default_params |
---|
| 1528 | { |
---|
| 1529 | param_named_auto lightPosition light_position_object_space 0 |
---|
| 1530 | param_named_auto eyePosition camera_position_object_space |
---|
| 1531 | param_named_auto worldViewProj worldviewproj_matrix |
---|
| 1532 | param_named shininess float 10 |
---|
| 1533 | } |
---|
| 1534 | } |
---|
| 1535 | @end example |
---|
| 1536 | The syntax of the parameter definition is exactly the same as when you define parameters when using programs, @xref{Program Parameter Specification}. Defining default parameters allows you to avoid rebinding common parameters repeatedly (clearly in the above example, all but 'shininess' are unlikely to change between uses of the program) which makes your material declarations shorter. |
---|
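|  | To illustrate overriding a default, a material using the program above might supply its own shininess while inheriting the auto-bound parameters unchanged (the material name below is purely hypothetical; see @ref{Using Vertex and Fragment Programs in a Pass} for the pass syntax): |
---|
|  | @example |
---|
|  | material Examples/MyCelShadedThing |
---|
|  | { |
---|
|  |     technique |
---|
|  |     { |
---|
|  |         pass |
---|
|  |         { |
---|
|  |             vertex_program_ref Ogre/CelShadingVP |
---|
|  |             { |
---|
|  |                 param_named shininess float 35 |
---|
|  |             } |
---|
|  |         } |
---|
|  |     } |
---|
|  | } |
---|
|  | @end example |
---|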
| 1537 | |
---|
| 1538 | @node High-level Programs |
---|
| 1539 | @heading High-level Programs |
---|
| 1540 | Support for high level vertex and fragment programs is provided through plugins; this is to make sure that an application using OGRE can use as little or as much of the high-level program functionality as it likes. OGRE currently supports 3 high-level program types: Cg (an API- and card-independent, high-level language which lets you write programs for both OpenGL and DirectX for lots of cards), DirectX 9 High-Level Shader Language (HLSL), and OpenGL Shader Language (GLSL). HLSL is provided for people who only want to deploy in DirectX, or who don't want to include the Cg plugin for whatever reason. GLSL is only for the OpenGL API and cannot be used with the DirectX API. To support both DirectX and OpenGL you could write your shaders in both HLSL and GLSL, along with having separate techniques in the material script. To be honest, Cg is a better bet because it lets you stay API independent - and don't be put off by the fact that it's made by nVidia, it will happily compile programs down to vendor-independent standards like DirectX and OpenGL's assembler formats, so you're not limited to nVidia cards.@*@* |
---|
| 1541 | |
---|
| 1542 | @subheading Cg programs |
---|
| 1543 | In order to define Cg programs, you have to load Plugin_CgProgramManager.so/.dll at startup, either through plugins.cfg or through your own plugin loading code. They are very easy to define: |
---|
| 1544 | @example |
---|
| 1545 | fragment_program myCgFragmentProgram cg |
---|
| 1546 | { |
---|
| 1547 | source myCgFragmentProgram.cg |
---|
| 1548 | entry_point main |
---|
| 1549 | profiles ps_2_0 arbfp1 |
---|
| 1550 | } |
---|
| 1551 | @end example |
---|
| 1552 | There are a few differences between this and the assembler program - to begin with, we declare that the fragment program is of type 'cg' rather than 'asm', which indicates that it's a high-level program using Cg. The 'source' parameter is the same, except this time it's referencing a Cg source file instead of a file of assembler. @*@* |
---|
| 1553 | Here is where things start to change. Firstly, we need to define an 'entry_point', which is the name of a function in the Cg program which will be the first one called as part of the fragment program. Unlike assembler programs, which just run top-to-bottom, Cg programs can include multiple functions and as such you must specify the one which starts the ball rolling.@*@* |
---|
| 1554 | Next, instead of a fixed 'syntax' parameter, you specify one or more 'profiles'; profiles are how Cg compiles a program down to the low-level assembler. The profiles have the same names as the assembler syntax codes mentioned above; the main difference is that you can list more than one, thus allowing the program to be compiled down to more than one low-level syntax, so you can write a single high-level program which runs on both D3D and GL. You are advised to just enter the simplest profiles under which your programs can be compiled in order to give them the maximum compatibility. The ordering also matters; if a card supports more than one syntax then the one listed first will be used. |
---|
| 1555 | |
---|
| 1556 | @subheading DirectX9 HLSL |
---|
| 1557 | DirectX9 HLSL has a very similar language syntax to Cg but is tied to the DirectX API. The only benefit over Cg is that it only requires the DirectX 9 render system plugin, not any additional plugins. Declaring a DirectX9 HLSL program is very similar to Cg. Here's an example: |
---|
| 1558 | @example |
---|
| 1559 | vertex_program myHLSLVertexProgram hlsl |
---|
| 1560 | { |
---|
| 1561 | source myHLSLVertexProgram.txt |
---|
| 1562 | entry_point main |
---|
| 1563 | target vs_2_0 |
---|
| 1564 | } |
---|
| 1565 | @end example |
---|
| 1566 | As you can see, the syntax is almost identical, except that instead of 'profiles' with a list of assembler formats, you have a 'target' parameter which allows a single assembler target to be specified - obviously this has to be a DirectX assembler format syntax code.@*@* |
---|
| 1567 | |
---|
| 1568 | @include glsl.inc |
---|
| 1569 | |
---|
| 1570 | @subheading Skeletal Animation in Vertex Programs |
---|
| 1571 | You can implement skeletal animation in hardware by writing a vertex program which uses the per-vertex blending indices and blending weights, together with an array of world matrices (which will be provided for you by Ogre if you bind the automatic parameter 'world_matrix_array_3x4'). However, you need to communicate this support to Ogre so it does not perform skeletal animation in software for you. You do this by adding the following attribute to your vertex_program definition: |
---|
| 1572 | @example |
---|
| 1573 | includes_skeletal_animation true |
---|
| 1574 | @end example |
---|
| 1575 | When you do this, any skeletally animated entity which uses this material will forgo the usual animation blend and will expect the vertex program to do it, for both vertex positions and normals. Note that ALL submeshes must be assigned a material which implements this, and that if you combine skeletal animation with vertex animation (@xref{Animation}) then all techniques must be hardware accelerated for any to be. |
---|
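|  | As a sketch (the program, source file and parameter names here are hypothetical, and the Cg source is assumed to exist), a hardware skinning declaration might look like this, binding the auto parameters described under @ref{param_indexed_auto}: |
---|
|  | @example |
---|
|  | vertex_program Examples/HardwareSkinningVP cg |
---|
|  | { |
---|
|  |     source hardware_skinning.cg |
---|
|  |     entry_point main_vp |
---|
|  |     profiles vs_1_1 arbvp1 |
---|
|  |     includes_skeletal_animation true |
---|
|  |     default_params |
---|
|  |     { |
---|
|  |         param_named_auto worldMatrix3x4Array world_matrix_array_3x4 |
---|
|  |         param_named_auto viewProjectionMatrix viewproj_matrix |
---|
|  |     } |
---|
|  | } |
---|
|  | @end example |
---|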
| 1576 | |
---|
| 1577 | @subheading Morph Animation in Vertex Programs |
---|
| 1578 | You can implement morph animation in hardware by writing a vertex program which linearly blends between the first and second position keyframes (passed in as the vertex position and the first free texture coordinate set respectively), and by binding the animation_parametric value to a parameter (which tells you how far to interpolate between the two). However, you need to communicate this support to Ogre so it does not perform morph animation in software for you. You do this by adding the following attribute to your vertex_program definition: |
---|
| 1579 | @example |
---|
| 1580 | includes_morph_animation true |
---|
| 1581 | @end example |
---|
| 1582 | When you do this, any entity using morph animation which uses this material will forgo the usual software morph and will expect the vertex program to do it. Note that if your model includes both skeletal animation and morph animation, they must both be implemented in the vertex program if either is to be hardware accelerated. Note that ALL submeshes must be assigned a material which implements this, and that if you combine skeletal animation with vertex animation (@xref{Animation}) then all techniques must be hardware accelerated for any to be. |
---|
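|  | Again as a sketch (names hypothetical), a morph animation program declaration might look like the following; the trailing 0 on the animation_parametric binding selects the first parametric constant, as described under @ref{animation_parametric}: |
---|
|  | @example |
---|
|  | vertex_program Examples/MorphAnimVP cg |
---|
|  | { |
---|
|  |     source morph_anim.cg |
---|
|  |     entry_point main_vp |
---|
|  |     profiles vs_1_1 arbvp1 |
---|
|  |     includes_morph_animation true |
---|
|  |     default_params |
---|
|  |     { |
---|
|  |         param_named_auto worldViewProj worldviewproj_matrix |
---|
|  |         param_named_auto morphWeight animation_parametric 0 |
---|
|  |     } |
---|
|  | } |
---|
|  | @end example |
---|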
| 1583 | |
---|
| 1584 | @subheading Pose Animation in Vertex Programs |
---|
| 1585 | You can implement pose animation (blending between multiple poses based on weight) in a vertex program by pulling in the original vertex data (bound to position), and as many pose offset buffers as you've defined in your 'includes_pose_animation' declaration, which will be in the first free texture unit upwards. You must also use the animation_parametric parameter to define the starting point of the constants which will contain the pose weights; they will start at the parameter you define and fill 'n' constants, where 'n' is the max number of poses this shader can blend, ie the parameter to includes_pose_animation. |
---|
| 1586 | @example |
---|
| 1587 | includes_pose_animation 4 |
---|
| 1588 | @end example |
---|
| 1589 | Note that ALL submeshes must be assigned a material which implements this, and that if you combine skeletal animation with vertex animation (@xref{Animation}) then all techniques must be hardware accelerated for any to be. |
---|
| 1590 | |
---|
| 1591 | @subheading Vertex Programs With Shadows |
---|
| 1592 | When using shadows (@xref{Shadows}), the use of vertex programs can add some additional complexities, because Ogre can only automatically deal with everything when using the fixed-function pipeline. If you use vertex programs, and you are also using shadows, you may need to make some adjustments. @*@* |
---|
| 1593 | |
---|
| 1594 | If you use @strong{stencil shadows}, then any vertex programs which do vertex deformation can be a problem, because stencil shadows are calculated on the CPU, which does not have access to the modified vertices. If the vertex program is doing standard skeletal animation, this is ok (see section above) because Ogre knows how to replicate the effect in software, but any other vertex deformation cannot be replicated, and you will either have to accept that the shadow will not reflect this deformation, or you should turn off shadows for that object. @*@* |
---|
| 1595 | |
---|
| 1596 | If you use @strong{texture shadows}, then vertex deformation is acceptable; however, when rendering the object into a shadow texture (the shadow caster pass), the shadow has to be rendered in a solid colour (linked to the ambient colour for modulative shadows, black for additive shadows). You must therefore provide an alternative vertex program, so Ogre provides you with a way of specifying one to use when rendering the caster, @xref{Shadows and Vertex Programs}. |
---|
| 1597 | |
---|
| 1598 | |
---|
| 1599 | @node Using Vertex and Fragment Programs in a Pass |
---|
| 1600 | @subsection Using Vertex and Fragment Programs in a Pass |
---|
| 1601 | |
---|
| 1602 | Within a pass section of a material script, you can reference a vertex and / or a fragment program which has been defined in a .program script (@xref{Declaring Vertex and Fragment Programs}). The programs are defined separately from the usage of them in the pass, since the programs are very likely to be reused between many separate materials, probably across many different .material scripts, so this approach lets you define the program only once and use it many times.@*@* |
---|
| 1603 | |
---|
| 1604 | As well as naming the program in question, you can also provide parameters to it. Here's a simple example: |
---|
| 1605 | @example |
---|
| 1606 | vertex_program_ref myVertexProgram |
---|
| 1607 | { |
---|
| 1608 | param_indexed_auto 0 worldviewproj_matrix |
---|
| 1609 | param_indexed 4 float4 10.0 0 0 0 |
---|
| 1610 | } |
---|
| 1611 | @end example |
---|
| 1612 | In this example, we bind a vertex program called 'myVertexProgram' (which will be defined elsewhere) to the pass, and give it 2 parameters. One is an 'auto' parameter, meaning we do not have to supply a value as such, just a recognised code (in this case it's the world/view/projection matrix which is kept up to date automatically by Ogre). The second parameter is a manually specified parameter, a 4-element float. The indexes are described later.@*@* |
---|
| 1613 | |
---|
| 1614 | The syntax of the link to a vertex program and to a fragment program is identical; the only difference is that 'fragment_program_ref' is used instead of 'vertex_program_ref'. |
---|
| 1615 | @anchor{Program Parameter Specification} |
---|
| 1616 | @subheading Parameter specification |
---|
| 1617 | Parameters can be specified using one of 4 commands as shown below. The same syntax is used whether you are defining a parameter just for this particular use of the program, or when specifying the @ref{Default Program Parameters}. Parameters set in the specific use of the program override the defaults. |
---|
| 1618 | @itemize @bullet |
---|
| 1619 | @item @ref{param_indexed} |
---|
| 1620 | @item @ref{param_indexed_auto} |
---|
| 1621 | @item @ref{param_named} |
---|
| 1622 | @item @ref{param_named_auto} |
---|
| 1623 | @end itemize |
---|
| 1624 | |
---|
| 1625 | @anchor{param_indexed} |
---|
| 1626 | @subheading param_indexed |
---|
| 1627 | This command sets the value of an indexed parameter. @*@* |
---|
| 1628 | |
---|
| 1629 | format: param_indexed <index> <type> <value>@*@* |
---|
| 1630 | example: param_indexed 0 float4 10.0 0 0 0@*@* |
---|
| 1631 | |
---|
| 1632 | The 'index' is simply a number representing the position in the parameter list to which the value should be written, and you should derive this from your program definition. The index is relative to the way constants are stored on the card, which is in 4-element blocks. For example if you defined a float4 parameter at index 0, the next index would be 1. If you defined a matrix4x4 at index 0, the next usable index would be 4, since a 4x4 matrix takes up 4 indexes.@*@* |
---|
| 1633 | |
---|
| 1634 | The value of 'type' can be float4, matrix4x4, float<n>, int4, int<n>. Note that 'int' parameters are only available on some more advanced program syntaxes, check the D3D or GL vertex / fragment program documentation for full details. Typically the most useful ones will be float4 and matrix4x4. Note that if you use a type which is not a multiple of 4, then the remaining values up to the multiple of 4 will be filled with zeroes for you (since GPUs always use banks of 4 floats per constant even if only one is used).@*@* |
---|
| 1635 | |
---|
| 1636 | 'value' is simply a space or tab-delimited list of values which can be converted into the type you have specified. |
---|
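|  | For instance, following the index rule just described, a matrix4x4 placed at index 0 occupies indexes 0-3, so the next manually specified parameter must start at index 4 (the values here are purely illustrative): |
---|
|  | @example |
---|
|  | param_indexed 0 matrix4x4 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 |
---|
|  | param_indexed 4 float4 10.0 0 0 0 |
---|
|  | @end example |
---|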
| 1637 | |
---|
| 1638 | @anchor{param_indexed_auto} |
---|
| 1639 | @subheading param_indexed_auto |
---|
| 1640 | |
---|
| 1641 | This command tells Ogre to automatically update a given parameter with a derived value. This frees you from writing code to update program parameters every frame when they are always changing.@*@* |
---|
| 1642 | |
---|
| 1643 | format: param_indexed_auto <index> <value_code> <extra_params>@*@* |
---|
| 1644 | example: param_indexed_auto 0 worldviewproj_matrix@*@* |
---|
| 1645 | |
---|
| 1646 | 'index' has the same meaning as @ref{param_indexed}; note this time you do not have to specify the size of the parameter because the engine knows this already. In the example, the world/view/projection matrix is being used so this is implicitly a matrix4x4.@*@* |
---|
| 1647 | |
---|
| 1648 | 'value_code' is one of a list of recognised values:@* |
---|
| 1649 | @table @asis |
---|
| 1650 | @item world_matrix |
---|
| 1651 | The current world matrix. |
---|
| 1652 | @item inverse_world_matrix |
---|
| 1653 | The inverse of the current world matrix. |
---|
| 1654 | @item transpose_world_matrix |
---|
| 1655 | The transpose of the world matrix |
---|
| 1656 | @item inverse_transpose_world_matrix |
---|
| 1657 | The inverse transpose of the world matrix |
---|
| 1658 | |
---|
| 1659 | @item world_matrix_array_3x4 |
---|
| 1660 | An array of world matrices, each represented as only a 3x4 matrix (3 rows of 4 columns), usually for doing hardware skinning. You should make enough entries available in your vertex program for the number of bones in use, ie an array of numBones*3 float4's. |
---|
| 1661 | |
---|
| 1662 | @item view_matrix |
---|
| 1663 | The current view matrix. |
---|
| 1664 | @item inverse_view_matrix |
---|
| 1665 | The inverse of the current view matrix. |
---|
| 1666 | @item transpose_view_matrix |
---|
| 1667 | The transpose of the view matrix |
---|
| 1668 | @item inverse_transpose_view_matrix |
---|
| 1669 | The inverse transpose of the view matrix |
---|
| 1670 | |
---|
| 1671 | @item projection_matrix |
---|
| 1672 | The current projection matrix. |
---|
| 1673 | @item inverse_projection_matrix |
---|
| 1674 | The inverse of the projection matrix |
---|
| 1675 | @item transpose_projection_matrix |
---|
| 1676 | The transpose of the projection matrix |
---|
| 1677 | @item inverse_transpose_projection_matrix |
---|
| 1678 | The inverse transpose of the projection matrix |
---|
| 1679 | |
---|
| 1680 | @item worldview_matrix |
---|
| 1681 | The current world and view matrices concatenated. |
---|
| 1682 | @item inverse_worldview_matrix |
---|
| 1683 | The inverse of the current concatenated world and view matrices. |
---|
| 1684 | @item transpose_worldview_matrix |
---|
| 1685 | The transpose of the world and view matrices |
---|
| 1686 | @item inverse_transpose_worldview_matrix |
---|
| 1687 | The inverse transpose of the current concatenated world and view matrices. |
---|
| 1688 | |
---|
| 1689 | @item viewproj_matrix |
---|
| 1690 | The current view and projection matrices concatenated. |
---|
| 1691 | @item inverse_viewproj_matrix |
---|
| 1692 | The inverse of the view & projection matrices |
---|
| 1693 | @item transpose_viewproj_matrix |
---|
| 1694 | The transpose of the view & projection matrices |
---|
| 1695 | @item inverse_transpose_viewproj_matrix |
---|
| 1696 | The inverse transpose of the view & projection matrices |
---|
| 1697 | |
---|
| 1698 | @item worldviewproj_matrix |
---|
| 1699 | The current world, view and projection matrices concatenated. |
---|
| 1700 | @item inverse_worldviewproj_matrix |
---|
| 1701 | The inverse of the world, view and projection matrices |
---|
| 1702 | @item transpose_worldviewproj_matrix |
---|
| 1703 | The transpose of the world, view and projection matrices |
---|
| 1704 | @item inverse_transpose_worldviewproj_matrix |
---|
| 1705 | The inverse transpose of the world, view and projection matrices |
---|
| 1706 | |
---|
| 1707 | @item render_target_flipping |
---|
| 1708 | The value used to adjust the transformed y position if you are bypassing the projection matrix transform. It's -1 if the render target requires texture flipping, +1 otherwise. |
---|
| 1709 | |
---|
| 1710 | @item light_diffuse_colour |
---|
| 1711 | The diffuse colour of a given light; this requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (ie 0 refers to the closest light - note that directional lights are always first in the list and always present). NB if there are no lights this close, then the parameter will be set to black. |
---|
| 1712 | @item light_specular_colour |
---|
| 1713 | The specular colour of a given light; this requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (ie 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to black. |
---|
| 1714 | @item light_attenuation |
---|
| 1715 | A float4 containing the 4 light attenuation variables for a given light. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (ie 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. The order of the parameters is range, constant attenuation, linear attenuation, quadratic attenuation. |
---|
| 1716 | @item light_position |
---|
| 1717 | The position of a given light in world space. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (ie 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. Note that this property will work with all kinds of lights, even directional lights, since the parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work consistently on both. |
---|
| 1718 | @item light_direction |
---|
| 1719 | The direction of a given light in world space. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (ie 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. DEPRECATED - this property only works on directional lights, and we recommend that you use light_position instead since that returns a generic 4D vector. |
---|
| 1720 | @item light_position_object_space |
---|
| 1721 | The position of a given light in object space (ie when the object is at (0,0,0)). This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (ie 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. Note that this property will work with all kinds of lights, even directional lights, since the parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work consistently on both. |
---|
| 1722 | @item light_direction_object_space |
---|
| 1723 | The direction of a given light in object space (ie when the object is at (0,0,0)). This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (ie 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. DEPRECATED, except for spotlights - for directional lights we recommend that you use light_position_object_space instead since that returns a generic 4D vector. |
---|
| 1724 | @item light_position_view_space |
---|
| 1725 | The position of a given light in view space (ie when the camera is at (0,0,0)). This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (ie 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. Note that this property will work with all kinds of lights, even directional lights, since the parameter is set as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work consistently on both. |
---|
| 1726 | @item light_direction_view_space |
---|
| 1727 | The direction of a given light in view space (ie when the camera is at (0,0,0)). This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (ie 0 refers to the closest light). NB if there are no lights this close, then the parameter will be set to all zeroes. DEPRECATED, except for spotlights - for directional lights we recommend that you use light_position_view_space instead since that returns a generic 4D vector. |
---|
| 1728 | @item light_power |
---|
| 1729 | The 'power' scaling for a given light, useful in HDR rendering. This requires an index in the 'extra_params' field, and relates to the 'nth' closest light which could affect this object (ie 0 refers to the closest light). |
---|
| 1730 | @item ambient_light_colour |
---|
| 1731 | The colour of the ambient light currently set in the scene. |
---|
| 1732 | @item fog_colour |
---|
| 1733 | The colour of the fog currently set in the scene. |
---|
| 1734 | @item fog_params |
---|
| 1735 | The parameters of the fog currently set in the scene. Packed as (exp_density, linear_start, linear_end, 1.0 / (linear_end - linear_start)). |
---|
| 1736 | @item camera_position |
---|
| 1737 | The current camera's position in world space. |
---|
| 1738 | @item camera_position_object_space |
---|
| 1739 | The current camera's position in object space (ie when the object is at (0,0,0)). |
---|
| 1740 | @item time |
---|
| 1741 | The current time, factored by the optional parameter (or 1.0f if not supplied). |
---|
| 1742 | @item time_0_x |
---|
| 1743 | Single float time value, which repeats itself based on "cycle time" given as an 'extra_params' field |
---|
| 1744 | @item costime_0_x |
---|
| 1745 | Cosine of time_0_x |
---|
| 1746 | @item sintime_0_x |
---|
| 1747 | Sine of time_0_x |
---|
| 1748 | @item tantime_0_x |
---|
| 1749 | Tangent of time_0_x |
---|
| 1750 | @item time_0_x_packed |
---|
| 1751 | 4-element vector of time0_x, sintime0_x, costime0_x, tantime0_x |
---|
| 1752 | @item time_0_1 |
---|
| 1753 | As time0_x but scaled to [0..1] |
---|
| 1754 | @item costime_0_1 |
---|
| 1755 | As costime0_x but scaled to [0..1] |
---|
| 1756 | @item sintime_0_1 |
---|
| 1757 | As sintime0_x but scaled to [0..1] |
---|
| 1758 | @item tantime_0_1 |
---|
| 1759 | As tantime0_x but scaled to [0..1] |
---|
| 1760 | @item time_0_1_packed |
---|
| 1761 | As time0_x_packed but all values scaled to [0..1] |
---|
| 1762 | @item time_0_2pi |
---|
| 1763 | As time0_x but scaled to [0..2*Pi] |
---|
| 1764 | @item costime_0_2pi |
---|
| 1765 | As costime0_x but scaled to [0..2*Pi] |
---|
| 1766 | @item sintime_0_2pi |
---|
| 1767 | As sintime0_x but scaled to [0..2*Pi] |
---|
| 1768 | @item tantime_0_2pi |
---|
| 1769 | As tantime0_x but scaled to [0..2*Pi] |
---|
| 1770 | @item time_0_2pi_packed |
---|
| 1771 | As time0_x_packed but scaled to [0..2*Pi] |
---|
| 1772 | @item frame_time |
---|
| 1773 | The current frame time, factored by the optional parameter (or 1.0f if not supplied). |
---|
| 1774 | @item fps |
---|
| 1775 | The current frames per second |
---|
| 1776 | @item viewport_width |
---|
| 1777 | The current viewport width in pixels |
---|
| 1778 | @item viewport_height |
---|
| 1779 | The current viewport height in pixels |
---|
| 1780 | @item inverse_viewport_width |
---|
| 1781 | 1.0/the current viewport width in pixels |
---|
| 1782 | @item inverse_viewport_height |
---|
| 1783 | 1.0/the current viewport height in pixels |
---|
| 1784 | @item viewport_size |
---|
| 1785 | 4-element vector of viewport_width, viewport_height, inverse_viewport_width, inverse_viewport_height |
---|
| 1786 | @item view_direction |
---|
| 1787 | View direction vector in object space |
---|
| 1788 | @item view_side_vector |
---|
| 1789 | View local X axis |
---|
| 1790 | @item view_up_vector |
---|
| 1791 | View local Y axis |
---|
| 1792 | @item fov |
---|
| 1793 | Vertical field of view, in radians |
---|
| 1794 | @item near_clip_distance |
---|
| 1795 | Near clip distance, in world units |
---|
| 1796 | @item far_clip_distance |
---|
| 1797 | Far clip distance, in world units (may be 0 for infinite view projection) |
---|
| 1798 | @item texture_viewproj_matrix |
---|
| 1799 | Only applicable to vertex programs which have been specified as the 'shadow receiver' vertex program alternative; this provides details of the view/projection matrix for the current shadow projector. |
---|
| 1800 | @anchor{pass_number} |
---|
| 1801 | @item pass_number |
---|
| 1802 | Sets the active pass index number in a gpu parameter. The first pass in a technique has an index of 0, the second an index of 1 and so on. This is useful for multipass shaders (ie fur or blur shaders) that need to know which pass they are in. By setting up this auto parameter in a @ref{Default Program Parameters} list in a program definition, there is no need to set the pass number parameter in each pass and risk losing track. (@xref{fur_example}) |
---|
| 1803 | @anchor{pass_iteration_number} |
---|
| 1804 | @item pass_iteration_number |
---|
| 1805 | Useful for GPU programs that need to know the current pass iteration number. The first iteration of a pass is numbered 0. The last iteration number is one less than what is set for the pass iteration number. If a pass has its iteration attribute set to 5 then the last iteration number (5th execution of the pass) is 4. (@xref{iteration}) |
---|
| 1806 | @anchor{animation_parametric} |
---|
| 1807 | @item animation_parametric |
---|
| 1808 | Useful for hardware vertex animation. For morph animation, sets the parametric value (0..1) representing the distance between the first position keyframe (bound to positions) and the second position keyframe (bound to the first free texture coordinate) so that the vertex program can interpolate between them. For pose animation, indicates a group of up to 4 parametric weight values applying to a sequence of up to 4 poses (each one bound to x, y, z and w of the constant), one for each pose. The original positions are held in the usual position buffer, and the offsets to take those positions to the pose where weight == 1.0 are in the first 'n' free texture coordinates; 'n' being determined by the value passed to includes_pose_animation. If more than 4 simultaneous poses are required, then you'll need more than 1 shader constant to hold the parametric values, in which case you should use this binding more than once, referencing a different constant entry; the second one will contain the parametrics for poses 5-8, the third for poses 9-12, and so on. |
---|
| 1809 | @item custom |
---|
| 1810 | This allows you to map a custom parameter on an individual Renderable (see Renderable::setCustomParameter) to a parameter on a GPU program. It requires that you complete the 'extra_params' field with the index that was used in the Renderable::setCustomParameter call, and this will ensure that whenever this Renderable is used, it will have its custom parameter mapped in. It's very important that this parameter has been defined on all Renderables that are assigned the material that contains this automatic mapping, otherwise the process will fail. |
---|
| 1811 | @end table |
---|
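|  | Putting a few of these codes together, a pass might bind the combined transform plus some per-light information like this (the program name is the one from the earlier example, and the indexes follow the 4-element block rule described under @ref{param_indexed}): |
---|
|  | @example |
---|
|  | vertex_program_ref myVertexProgram |
---|
|  | { |
---|
|  |     param_indexed_auto 0 worldviewproj_matrix |
---|
|  |     param_indexed_auto 4 light_position_object_space 0 |
---|
|  |     param_indexed_auto 5 ambient_light_colour |
---|
|  | } |
---|
|  | @end example |
---|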
| 1812 | |
---|
| 1813 | @anchor{param_named} |
---|
| 1814 | @subheading param_named |
---|
| 1815 | This is the same as param_indexed, but uses a named parameter instead of an index. This can only be used with high-level programs which include parameter names; if you're using an assembler program then you have no choice but to use indexes. Note that you can use indexed parameters for high-level programs too, but it is less portable since if you reorder your parameters in the high-level program the indexes will change.@*@* |
---|
| 1816 | format: param_named <name> <type> <value>@*@* |
---|
| 1817 | example: param_named shininess float4 10.0 0 0 0@*@* |
---|
| 1818 | The type is required because the program is not compiled and loaded when the material script is parsed, so at this stage we have no idea what types the parameters are. Programs are only loaded and compiled when they are used, to save memory. |
---|
| 1819 | |
---|
| 1820 | @anchor{param_named_auto} |
---|
| 1821 | @subheading param_named_auto |
---|
| 1822 | |
---|
| 1823 | This is the named equivalent of param_indexed_auto, for use with high-level programs.@*@* |
---|
| 1824 | Format: param_named_auto <name> <value_code> <extra_params>@*@* |
---|
| 1825 | Example: param_named_auto worldViewProj worldviewproj_matrix@*@* |
---|
| 1826 | |
---|
| 1827 | The allowed value codes and the meaning of extra_params are detailed in @ref{param_indexed_auto}. |
---|
| 1828 | |
---|
| 1829 | @anchor{Shadows and Vertex Programs} |
---|
| 1830 | @subheading Shadows and Vertex Programs |
---|
| 1831 | When using shadows (@xref{Shadows}), the use of vertex programs can add some additional complexities, because Ogre can only automatically deal with everything when using the fixed-function pipeline. If you use vertex programs, and you are also using shadows, you may need to make some adjustments. @*@* |
---|
| 1832 | |
---|
| 1833 | If you use @strong{stencil shadows}, then any vertex programs which do vertex deformation can be a problem, because stencil shadows are calculated on the CPU, which does not have access to the modified vertices. If the vertex program is doing standard skeletal animation, this is ok (see section above) because Ogre knows how to replicate the effect in software, but any other vertex deformation cannot be replicated, and you will either have to accept that the shadow will not reflect this deformation, or you should turn off shadows for that object. @*@* |
---|
| 1834 | |
---|
| 1835 | If you use @strong{texture shadows}, then vertex deformation is acceptable; however, when rendering the object into the shadow texture (the shadow caster pass), the shadow has to be rendered in a solid colour (linked to the ambient colour). You must therefore provide an alternative vertex program, so Ogre provides you with a way of specifying one to use when rendering the caster. Basically you link an alternative vertex program, using exactly the same syntax as the original vertex program link: |
---|
| 1836 | |
---|
| 1837 | @example |
---|
| 1838 | shadow_caster_vertex_program_ref myShadowCasterVertexProgram |
---|
| 1839 | { |
---|
| 1840 | param_indexed_auto 0 worldviewproj_matrix |
---|
| 1841 | param_indexed_auto 4 ambient_light_colour |
---|
| 1842 | } |
---|
| 1843 | @end example |
---|
| 1844 | |
---|
| 1845 | When rendering a shadow caster, Ogre will automatically use the alternate program. You can bind the same or different parameters to the program - the most important thing is that you bind @strong{ambient_light_colour}, since this determines the colour of the shadow in modulative texture shadows. If you don't supply an alternate program, Ogre will fall back on a fixed-function material which will not reflect any vertex deformation you do in your vertex program. @*@* |
---|
| 1846 | |
---|
| 1847 | In addition, when rendering the shadow receivers with shadow textures, Ogre needs to project the shadow texture. It does this automatically in fixed function mode, but if the receivers use vertex programs, they need to have a shadow receiver program which does the usual vertex deformation, but also generates projective texture coordinates. The additional program is linked into the pass like this: |
---|
| 1848 | |
---|
| 1849 | @example |
---|
| 1850 | shadow_receiver_vertex_program_ref myShadowReceiverVertexProgram |
---|
| 1851 | { |
---|
| 1852 | param_indexed_auto 0 worldviewproj_matrix |
---|
| 1853 | param_indexed_auto 4 texture_viewproj_matrix |
---|
| 1854 | } |
---|
| 1855 | @end example |
---|
| 1856 | |
---|
| 1857 | For the purposes of writing this alternate program, there is an automatic parameter binding of 'texture_viewproj_matrix' which provides the program with texture projection parameters. The vertex program should do its normal vertex processing, and generate texture coordinates using this matrix and place them in texture coord sets 0 and 1, since some shadow techniques use 2 texture units. The colour of the vertices output by this vertex program must always be white, so as not to affect the final colour of the rendered shadow. @*@* |
---|
| 1858 | |
---|
| 1859 | When using additive texture shadows, the shadow pass render is actually the lighting render, so if you perform any fragment program lighting you also need to pull in a custom fragment program. You use the shadow_receiver_fragment_program_ref for this: |
---|
| 1860 | @example |
---|
| 1861 | shadow_receiver_fragment_program_ref myShadowReceiverFragmentProgram |
---|
| 1862 | { |
---|
| 1863 | param_named_auto lightDiffuse light_diffuse_colour 0 |
---|
| 1864 | } |
---|
| 1865 | @end example |
---|
| 1866 | You should pass the projected shadow coordinates from the custom vertex program. As for textures, texture unit 0 will always be the shadow texture. Any other textures which you bind in your pass will be carried across too, but will be moved up by 1 unit to make room for the shadow texture. Therefore your shadow receiver fragment program is likely to be the same as the bare lighting pass of your normal material, except that you insert an extra texture sampler at index 0, which you will use to adjust the result (modulating the diffuse and specular components). |
---|
| 1867 | |
---|
| 1868 | @include MaterialScriptCopy.inc |
---|
| 1869 | |
---|
| 1870 | @include CompositorScript.inc |
---|
| 1871 | |
---|
| 1872 | @node Particle Scripts |
---|
| 1873 | @section Particle Scripts |
---|
| 1874 | |
---|
| 1875 | Particle scripts allow you to define particle systems to be instantiated in your code without having to hard-code the settings themselves in your source code, allowing a very quick turnaround on any changes you make. Particle systems which are defined in scripts are used as templates, and multiple actual systems can be created from them at runtime.@*@* |
---|
| 1876 | |
---|
| 1877 | @heading Loading scripts |
---|
| 1878 | |
---|
| 1879 | Particle system scripts are loaded at initialisation time by the system: by default it looks in all common resource locations (see Root::addResourceLocation) for files with the '.particle' extension and parses them. If you want to parse files with a different extension, use the ParticleSystemManager::getSingleton().parseAllSources method with your own extension, or if you want to parse an individual file, use ParticleSystemManager::getSingleton().parseScript.@*@* |
---|
| 1880 | |
---|
| 1881 | Once scripts have been parsed, your code is free to instantiate systems based on them using the ParticleSystemManager::getSingleton().createSystem() method which can take both a name for the new system, and the name of the template to base it on (this template name is in the script).@*@* |
---|
| 1882 | |
---|
| 1883 | @heading Format |
---|
| 1884 | |
---|
| 1885 | Several particle systems may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ({}), and comments indicated by starting a line with '//' (note that no nested comment form is allowed). The general format is shown below in a typical example: |
---|
| 1886 | @example |
---|
| 1887 | // A sparkly purple fountain |
---|
| 1888 | Examples/PurpleFountain |
---|
| 1889 | { |
---|
| 1890 | material Examples/Flare2 |
---|
| 1891 | particle_width 20 |
---|
| 1892 | particle_height 20 |
---|
| 1893 | cull_each false |
---|
| 1894 | quota 10000 |
---|
| 1895 | billboard_type oriented_self |
---|
| 1896 | |
---|
| 1897 | // Point emitter |
---|
| 1898 | emitter Point |
---|
| 1899 | { |
---|
| 1900 | angle 15 |
---|
| 1901 | emission_rate 75 |
---|
| 1902 | time_to_live 3 |
---|
| 1903 | direction 0 1 0 |
---|
| 1904 | velocity_min 250 |
---|
| 1905 | velocity_max 300 |
---|
| 1906 | colour_range_start 1 0 0 |
---|
| 1907 | colour_range_end 0 0 1 |
---|
| 1908 | } |
---|
| 1909 | |
---|
| 1910 | // Gravity |
---|
| 1911 | affector LinearForce |
---|
| 1912 | { |
---|
| 1913 | force_vector 0 -100 0 |
---|
| 1914 | force_application add |
---|
| 1915 | } |
---|
| 1916 | |
---|
| 1917 | // Fader |
---|
| 1918 | affector ColourFader |
---|
| 1919 | { |
---|
| 1920 | red -0.25 |
---|
| 1921 | green -0.25 |
---|
| 1922 | blue -0.25 |
---|
| 1923 | } |
---|
| 1924 | } |
---|
| 1925 | @end example |
---|
| 1926 | @*@* |
---|
| 1927 | Every particle system in the script must be given a name, which is the line before the first opening '{', in the example this is 'Examples/PurpleFountain'. This name must be globally unique. It can include path characters (as in the example) to logically divide up your particle systems, and also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a string.@*@* |
---|
| 1928 | |
---|
| 1929 | A system can have top-level attributes set using the scripting commands available, such as 'quota' to set the maximum number of particles allowed in the system. Emitters (which create particles) and affectors (which modify particles) are added as nested definitions within the script. The parameters available in the emitter and affector sections are entirely dependent on the type of emitter / affector.@*@* |
---|
| 1930 | |
---|
| 1931 | For a detailed description of the core particle system attributes, see the list below: |
---|
| 1932 | |
---|
| 1933 | @subheading Available Particle System Attributes |
---|
| 1934 | @itemize @bullet |
---|
| 1935 | @item |
---|
| 1936 | @ref{quota} |
---|
| 1937 | @item |
---|
| 1938 | @ref{particle_material, material} |
---|
| 1939 | @item |
---|
| 1940 | @ref{particle_width} |
---|
| 1941 | @item |
---|
| 1942 | @ref{particle_height} |
---|
| 1943 | @item |
---|
| 1944 | @ref{cull_each} |
---|
| 1945 | @item |
---|
| 1946 | @ref{billboard_type} |
---|
| 1947 | @item |
---|
| 1948 | @ref{billboard_origin} |
---|
| 1949 | @item |
---|
| 1950 | @ref{billboard_rotation_type} |
---|
| 1951 | @item |
---|
| 1952 | @ref{common_direction} |
---|
| 1953 | @item |
---|
| 1954 | @ref{common_up_vector} |
---|
| 1955 | @item |
---|
| 1956 | @ref{particle_renderer, renderer} |
---|
| 1957 | @item |
---|
| 1958 | @ref{particle_sorted, sorted} |
---|
| 1959 | @item |
---|
| 1960 | @ref{particle_localspace, local_space} |
---|
| 1961 | @item |
---|
| 1962 | @ref{particle_point_rendering, point_rendering} |
---|
| 1963 | @item |
---|
| 1964 | @ref{particle_accurate_facing, accurate_facing} |
---|
| 1965 | @item |
---|
| 1966 | @ref{iteration_interval} |
---|
| 1967 | @item |
---|
| 1968 | @ref{nonvisible_update_timeout} |
---|
| 1969 | @end itemize |
---|
| 1970 | See also: @ref{Particle Emitters}, @ref{Particle Affectors} |
---|
| 1971 | |
---|
| 1972 | @node Particle System Attributes |
---|
| 1973 | @subsection Particle System Attributes |
---|
| 1974 | This section describes the attributes which you can set on every particle system using scripts. All attributes have default values so all settings are optional in your script.@*@* |
---|
| 1975 | |
---|
| 1976 | @anchor{quota} |
---|
| 1977 | @subheading quota |
---|
| 1978 | |
---|
| 1979 | Sets the maximum number of particles this system is allowed to contain at one time. When this limit is exhausted, the emitters will not be allowed to emit any more particles until some are destroyed (e.g. through their time_to_live running out). Note that you will almost always want to change this, since it defaults to a very low value (particle pools are only ever increased in size, never decreased).@*@* |
---|
| 1980 | |
---|
| 1981 | format: quota <max_particles>@* |
---|
| 1982 | example: quota 10000@* |
---|
| 1983 | default: 10@* |
---|
| 1984 | |
---|
| 1985 | @anchor{particle_material} |
---|
| 1986 | @subheading material |
---|
| 1987 | |
---|
| 1988 | Sets the name of the material which all particles in this system will use. All particles in a system use the same material, although each particle can tint this material through the use of its colour property.@*@* |
---|
| 1989 | |
---|
| 1990 | format: material <material_name>@* |
---|
| 1991 | example: material Examples/Flare@* |
---|
| 1992 | default: none (blank material)@* |
---|
| 1993 | |
---|
| 1994 | @anchor{particle_width} |
---|
| 1995 | @subheading particle_width |
---|
| 1996 | |
---|
| 1997 | Sets the width of particles in world coordinates. Note that this property is absolute when billboard_type (see below) is set to 'point' or 'perpendicular_self', but is scaled by the length of the direction vector when billboard_type is 'oriented_common', 'oriented_self' or 'perpendicular_common'.@* |
---|
| 1998 | |
---|
| 1999 | format: particle_width <width>@* |
---|
| 2000 | example: particle_width 20@* |
---|
| 2001 | default: 100@* |
---|
| 2002 | |
---|
| 2003 | @anchor{particle_height} |
---|
| 2004 | @subheading particle_height |
---|
| 2005 | |
---|
| 2006 | Sets the height of particles in world coordinates. Note that this property is absolute when billboard_type (see below) is set to 'point' or 'perpendicular_self', but is scaled by the length of the direction vector when billboard_type is 'oriented_common', 'oriented_self' or 'perpendicular_common'.@* |
---|
| 2007 | |
---|
| 2008 | format: particle_height <height>@* |
---|
| 2009 | example: particle_height 20@* |
---|
| 2010 | default: 100@* |
---|
| 2011 | |
---|
| 2012 | @anchor{cull_each} |
---|
| 2013 | @subheading cull_each |
---|
| 2014 | |
---|
| 2015 | All particle systems are culled by the bounding box which contains all the particles in the system. This is normally sufficient for fairly locally constrained particle systems where most particles are either visible or not visible together. However, for those that spread particles over a wider area (e.g. a rain system), you may want to actually cull each particle individually to save on time, since it is far more likely that only a subset of the particles will be visible. You do this by setting the cull_each parameter to true.@*@* |
---|
| 2016 | |
---|
| 2017 | format: cull_each <true|false>@* |
---|
| 2018 | example: cull_each true@* |
---|
| 2019 | default: false@* |
---|
| 2020 | |
---|
| 2021 | @anchor{particle_renderer} |
---|
| 2022 | @subheading renderer |
---|
| 2023 | |
---|
| 2024 | Particle systems do not render themselves, they do it through ParticleRenderer classes. Those classes are registered with a manager in order to provide particle systems with a particular 'look'. OGRE comes configured with a default billboard-based renderer, but more can be added through plugins. Particle renderers are registered with a unique name, and you can use that name in this attribute to determine the renderer to use. The default is 'billboard'.@*@* |
---|
| 2025 | |
---|
| 2026 | Particle renderers can have attributes, which can be passed by setting them on the root particle system.@*@* |
---|
| 2027 | |
---|
| 2028 | format: renderer <renderer_name>@* |
---|
| 2029 | default: billboard@* |
---|
| 2030 | |
---|
| 2031 | @anchor{particle_sorted} |
---|
| 2032 | @subheading sorted |
---|
| 2033 | |
---|
| 2034 | By default, particles are not sorted. By setting this attribute to 'true', the particles will be sorted with respect to the camera, furthest first. This can make certain rendering effects look better at a small sorting expense.@*@* |
---|
| 2035 | |
---|
| 2036 | format: sorted <true|false>@* |
---|
| 2037 | default: false@* |
---|
| 2038 | |
---|
| 2039 | @anchor{particle_localspace} |
---|
| 2040 | @subheading local_space |
---|
| 2041 | |
---|
| 2042 | By default, particles are emitted into world space, such that if you transform the node to which the system is attached, it will not affect the particles (only the emitters). This tends to give the normal expected behaviour, which is to model how real world particles travel independently from the objects they are emitted from. However, to create some effects you may want the particles to remain attached to the local space the emitter is in and to follow it directly. This option allows you to do that.@*@* |
---|
| 2043 | |
---|
| 2044 | format: local_space <true|false>@* |
---|
| 2045 | default: false@* |
---|
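As an illustration, the system-level attributes described above are simply listed at the top of the particle system block, before any emitters or affectors. A minimal sketch for a rain-style system (the system and material names are placeholders, and the values are purely illustrative):
@example
// inside a .particle script
Examples/RainSketch
{
    material         Examples/RainDrop      // placeholder material
    quota            1000
    particle_width   4
    particle_height  60
    cull_each        true                   // particles spread over a wide area
    renderer         billboard              // the default renderer
    sorted           false
    local_space      false                  // particles travel in world space

    // emitters and affectors would follow here
}
@end example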
| 2046 | |
---|
| 2047 | @anchor{billboard_type} |
---|
| 2048 | @subheading billboard_type |
---|
| 2049 | |
---|
| 2050 | This is actually an attribute of the 'billboard' particle renderer (the default), and is an example of passing attributes to a particle renderer by declaring them directly within the system declaration. Particles using the default renderer are rendered using billboards, which are rectangles formed by 2 triangles which rotate to face the given direction. However, there is more than one way to orient a billboard. The classic approach is for the billboard to directly face the camera: this is the default behaviour. However this arrangement only looks good for particles which are representing something vaguely spherical like a light flare. For more linear effects like laser fire, you actually want the particle to have an orientation of its own.@*@* |
---|
| 2051 | |
---|
| 2052 | format: billboard_type <point|oriented_common|oriented_self|perpendicular_common|perpendicular_self>@* |
---|
| 2053 | example: billboard_type oriented_self@* |
---|
| 2054 | default: point@* |
---|
| 2055 | |
---|
| 2056 | The options for this parameter are: |
---|
| 2057 | @table @asis |
---|
| 2058 | @item point |
---|
| 2059 | The default arrangement, this approximates spherical particles and the billboards always fully face the camera. |
---|
| 2060 | @item oriented_common |
---|
| 2061 | Particles are oriented around a common, typically fixed direction vector (see @ref{common_direction}), which acts as their local Y axis. The billboard rotates only around this axis, giving the particle some sense of direction. Good for rainstorms, starfields etc where the particles will be travelling in one direction - this is slightly faster than oriented_self (see below). |
---|
| 2062 | @item oriented_self |
---|
| 2063 | Particles are oriented around their own direction vector, which acts as their local Y axis. As the particle changes direction, so the billboard reorients itself to face this way. Good for laser fire, fireworks and other 'streaky' particles that should look like they are travelling in their own direction. |
---|
| 2064 | @item perpendicular_common |
---|
| 2065 | Particles are perpendicular to a common, typically fixed direction vector (see @ref{common_direction}), which acts as their local Z axis, with their local Y axis coplanar with the common direction and the common up vector (see @ref{common_up_vector}). The billboard never rotates to face the camera, so you may want to use a double-sided material to ensure particles are not removed by back-face culling. Good for aureolas, rings etc where the particles will be perpendicular to the ground - this is slightly faster than perpendicular_self (see below). |
---|
| 2066 | @item perpendicular_self |
---|
| 2067 | Particles are perpendicular to their own direction vector, which acts as their local Z axis, with their local Y axis coplanar with their own direction vector and the common up vector (see @ref{common_up_vector}). The billboard never rotates to face the camera, so you may want to use a double-sided material to ensure particles are not removed by back-face culling. Good for ring stacks etc where the particles will be perpendicular to their direction of travel. |
---|
| 2068 | @end table |
---|
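For example, a 'streaky' system such as laser fire might combine an elongated particle size with the oriented_self billboard type. A sketch, with placeholder names and illustrative values:
@example
Examples/LaserBolts
{
    material         Examples/BoltFlare     // placeholder material
    particle_width   4
    particle_height  30
    billboard_type   oriented_self          // billboards align to each particle's direction

    // emitters and affectors would follow here
}
@end example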
| 2069 | |
---|
| 2070 | @anchor{billboard_origin} |
---|
| 2071 | @subheading billboard_origin |
---|
| 2072 | |
---|
| 2073 | Specifies the point which acts as the origin for all billboard particles, i.e. it fine-tunes where a billboard particle appears in relation to its position.@*@* |
---|
| 2074 | |
---|
| 2075 | format: billboard_origin <top_left|top_center|top_right|center_left|center|center_right|bottom_left|bottom_center|bottom_right>@* |
---|
| 2076 | example: billboard_origin top_right@* |
---|
| 2077 | default: center@* |
---|
| 2078 | |
---|
| 2079 | The options for this parameter are: |
---|
| 2080 | @table @asis |
---|
| 2081 | @item top_left |
---|
| 2082 | The billboard origin is the top-left corner. |
---|
| 2083 | @item top_center |
---|
| 2084 | The billboard origin is the center of top edge. |
---|
| 2085 | @item top_right |
---|
| 2086 | The billboard origin is the top-right corner. |
---|
| 2087 | @item center_left |
---|
| 2088 | The billboard origin is the center of left edge. |
---|
| 2089 | @item center |
---|
| 2090 | The billboard origin is the center. |
---|
| 2091 | @item center_right |
---|
| 2092 | The billboard origin is the center of right edge. |
---|
| 2093 | @item bottom_left |
---|
| 2094 | The billboard origin is the bottom-left corner. |
---|
| 2095 | @item bottom_center |
---|
| 2096 | The billboard origin is the center of bottom edge. |
---|
| 2097 | @item bottom_right |
---|
| 2098 | The billboard origin is the bottom-right corner. |
---|
| 2099 | @end table |
---|
| 2100 | |
---|
| 2101 | @anchor{billboard_rotation_type} |
---|
| 2102 | @subheading billboard_rotation_type |
---|
| 2103 | |
---|
| 2104 | By default, billboard particles rotate their texture coordinates to match the particle rotation. However, rotating texture coordinates has some disadvantages, e.g. the corners of the texture are lost after rotation, and the corners of the billboard are filled with unwanted texture area when using the wrap address mode or sub-texture sampling. This setting allows you to specify a different rotation type.@*@* |
---|
| 2105 | |
---|
| 2106 | format: billboard_rotation_type <vertex|texcoord>@* |
---|
| 2107 | example: billboard_rotation_type vertex@* |
---|
| 2108 | default: texcoord@* |
---|
| 2109 | |
---|
| 2110 | The options for this parameter are: |
---|
| 2111 | @table @asis |
---|
| 2112 | @item vertex |
---|
| 2113 | Billboard particles rotate their vertices around their facing direction to match the particle rotation. Rotating the vertices guarantees that the texture corners exactly match the billboard corners, avoiding the disadvantages mentioned above, but it takes a little more time to generate the vertices. |
---|
| 2114 | @item texcoord |
---|
| 2115 | Billboard particles rotate their texture coordinates to match the particle rotation. Rotating texture coordinates is faster than rotating vertices, but has the disadvantages mentioned above. |
---|
| 2116 | @end table |
---|
| 2117 | |
---|
| 2118 | @anchor{common_direction} |
---|
| 2119 | @subheading common_direction |
---|
| 2120 | |
---|
| 2121 | Only required if @ref{billboard_type} is set to oriented_common or perpendicular_common, this vector is the common direction vector used to orient all particles in the system.@*@* |
---|
| 2122 | |
---|
| 2123 | format: common_direction <x> <y> <z>@* |
---|
| 2124 | example: common_direction 0 -1 0@* |
---|
| 2125 | default: 0 0 1@* |
---|
| 2126 | @*@* |
---|
| 2127 | See also: @ref{Particle Emitters}, @ref{Particle Affectors} |
---|
| 2128 | |
---|
| 2129 | @anchor{common_up_vector} |
---|
| 2130 | @subheading common_up_vector |
---|
| 2131 | |
---|
| 2132 | Only required if @ref{billboard_type} is set to perpendicular_self or perpendicular_common, this vector is the common up vector used to orient all particles in the system.@*@* |
---|
| 2133 | |
---|
| 2134 | format: common_up_vector <x> <y> <z>@* |
---|
| 2135 | example: common_up_vector 0 1 0@* |
---|
| 2136 | default: 0 1 0@* |
---|
| 2137 | @*@* |
---|
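As a sketch of how these vectors work together, a system of flat rings lying on the ground could use the perpendicular_common billboard type with an upward common direction (placeholder names, illustrative values):
@example
Examples/GroundRings
{
    material          Examples/Ring          // placeholder material
    billboard_type    perpendicular_common
    common_direction  0 1 0                  // billboards lie in the plane perpendicular to 'up'
    common_up_vector  0 0 1                  // defines the billboards' local Y within that plane
}
@end example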
| 2138 | See also: @ref{Particle Emitters}, @ref{Particle Affectors} |
---|
| 2139 | |
---|
| 2140 | @anchor{particle_point_rendering} |
---|
| 2141 | @subheading point_rendering |
---|
| 2142 | |
---|
| 2143 | This is actually an attribute of the 'billboard' particle renderer (the default), and sets whether or not the billboardset will use point rendering rather than manually generated quads.@*@* |
---|
| 2144 | |
---|
| 2145 | By default a billboardset is rendered by generating geometry for a textured quad in memory, taking into account the size and orientation settings, and uploading it to the video card. The alternative is to use hardware point rendering, which means that only one position needs to be sent per billboard rather than 4 and the hardware sorts out how this is rendered based on the render state.@*@* |
---|
| 2146 | |
---|
| 2147 | Using point rendering is faster than generating quads manually, but is more restrictive. The following restrictions apply: |
---|
| 2148 | @itemize @bullet |
---|
| 2149 | @item Only the 'point' orientation type is supported |
---|
| 2150 | @item Size and appearance of each particle is controlled by the material pass (@ref{point_size}, @ref{point_size_attenuation}, @ref{point_sprites}) |
---|
| 2151 | @item Per-particle size is not supported (stems from the above) |
---|
| 2152 | @item Per-particle rotation is not supported; rotation can only be controlled through texture unit rotation in the material definition |
---|
| 2153 | @item Only 'center' origin is supported |
---|
| 2154 | @item Some drivers have an upper limit on the size of points they support - this can even vary between APIs on the same card! Don't rely on point sizes that cause the point sprites to get very large on screen, since they may get clamped on some cards. Upper sizes can range from 64 to 256 pixels. |
---|
| 2155 | @end itemize |
---|
| 2156 | You will almost certainly want to enable both point attenuation and point sprites in your material pass if you use this option.@*@* |
---|
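A sketch of how point rendering might be set up, with the size and appearance controlled from the material pass rather than the particle system (all names are placeholders, and the material would normally live in a separate .material script):
@example
Examples/Sparks
{
    material        Examples/SparkPoint     // placeholder material
    point_rendering true                    // per-particle size, rotation and origin are ignored
}

// in a .material script
material Examples/SparkPoint
{
    technique
    {
        pass
        {
            point_sprites          on
            point_size             4
            point_size_attenuation on
        }
    }
}
@end example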
| 2157 | |
---|
| 2158 | |
---|
| 2159 | @anchor{particle_accurate_facing} |
---|
| 2160 | @subheading accurate_facing |
---|
| 2161 | |
---|
| 2162 | This is actually an attribute of the 'billboard' particle renderer (the default), and sets whether or not the billboardset will use a slower but more accurate calculation for facing the billboard to the camera. By default it uses the camera direction, which is faster but means the billboards don't stay in the same orientation as you rotate the camera. The 'accurate_facing true' option makes the calculation based on a vector from each billboard to the camera, which means the orientation is constant even whilst the camera rotates. @*@* |
---|
| 2163 | |
---|
| 2164 | format: accurate_facing on|off@* |
---|
| 2165 | default: accurate_facing off@* |
---|
| 2166 | @*@* |
---|
| 2167 | |
---|
| 2168 | |
---|
| 2169 | @anchor{iteration_interval} |
---|
| 2170 | @subheading iteration_interval |
---|
| 2171 | Usually particle systems are updated based on the frame rate; however this can give variable results with more extreme frame rate ranges, particularly at lower frame rates. You can use this option to make the update frequency a fixed interval, whereby at lower frame rates, the particle update will be repeated at the fixed interval until the frame time is used up. A value of 0 means the default frame time iteration. @*@* |
---|
| 2172 | |
---|
| 2173 | format: iteration_interval <secs>@* |
---|
| 2174 | example: iteration_interval 0.01@* |
---|
| 2175 | default: iteration_interval 0@* |
---|
| 2176 | @*@* |
---|
| 2177 | |
---|
| 2178 | @anchor{nonvisible_update_timeout} |
---|
| 2179 | @subheading nonvisible_update_timeout |
---|
| 2180 | Sets when the particle system should stop updating after it hasn't been visible for a while. By default, particle systems update all the time, even when not in view. This means that they are guaranteed to be consistent when they do enter view. However, this comes at a cost, since updating particle systems can be expensive, especially if they are perpetual. |
---|
| 2181 | @*@* |
---|
| 2182 | This option lets you set a 'timeout' on the particle system, so that if it isn't visible for this amount of time, it will stop updating until it is next visible. A value of 0 disables the timeout and always updates.@*@* |
---|
| 2183 | |
---|
| 2184 | format: nonvisible_update_timeout <secs>@* |
---|
| 2185 | example: nonvisible_update_timeout 10@* |
---|
| 2186 | default: nonvisible_update_timeout 0@* |
---|
| 2187 | @*@* |
---|
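For example, a system could be updated in fixed 10ms steps and allowed to sleep after being off-screen for a few seconds. A sketch (placeholder names, illustrative values):
@example
Examples/FountainSketch
{
    material                   Examples/Droplet   // placeholder material
    iteration_interval         0.01               // fixed 10ms update steps
    nonvisible_update_timeout  5                  // stop updating after 5s out of view
}
@end example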
| 2188 | |
---|
| 2189 | @node Particle Emitters |
---|
| 2190 | @subsection Particle Emitters |
---|
| 2191 | Particle emitters are classified by 'type' e.g. 'Point' emitters emit from a single point whilst 'Box' emitters emit randomly from an area. New emitters can be added to Ogre by creating plugins. You add an emitter to a system by nesting another section within it, headed with the keyword 'emitter' followed by the name of the type of emitter (case sensitive). Ogre currently supports 'Point', 'Box', 'Cylinder', 'Ellipsoid', 'HollowEllipsoid' and 'Ring' emitters. |
---|
| 2192 | |
---|
| 2193 | @subheading Particle Emitter Universal Attributes |
---|
| 2194 | @itemize @bullet |
---|
| 2195 | @item |
---|
| 2196 | @ref{angle} |
---|
| 2197 | @item |
---|
| 2198 | @ref{colour} |
---|
| 2199 | @item |
---|
| 2200 | @ref{colour_range_start} |
---|
| 2201 | @item |
---|
| 2202 | @ref{colour_range_end} |
---|
| 2203 | @item |
---|
| 2204 | @ref{direction} |
---|
| 2205 | @item |
---|
| 2206 | @ref{emission_rate} |
---|
| 2207 | @item |
---|
| 2208 | @ref{position} |
---|
| 2209 | @item |
---|
| 2210 | @ref{velocity} |
---|
| 2211 | @item |
---|
| 2212 | @ref{velocity_min} |
---|
| 2213 | @item |
---|
| 2214 | @ref{velocity_max} |
---|
| 2215 | @item |
---|
| 2216 | @ref{time_to_live} |
---|
| 2217 | @item |
---|
| 2218 | @ref{time_to_live_min} |
---|
| 2219 | @item |
---|
| 2220 | @ref{time_to_live_max} |
---|
| 2221 | @item |
---|
| 2222 | @ref{duration} |
---|
| 2223 | @item |
---|
| 2224 | @ref{duration_min} |
---|
| 2225 | @item |
---|
| 2226 | @ref{duration_max} |
---|
| 2227 | @item |
---|
| 2228 | @ref{repeat_delay} |
---|
| 2229 | @item |
---|
| 2230 | @ref{repeat_delay_min} |
---|
| 2231 | @item |
---|
| 2232 | @ref{repeat_delay_max} |
---|
| 2233 | @end itemize |
---|
| 2234 | @*@* |
---|
| 2235 | See also: @ref{Particle Scripts}, @ref{Particle Affectors} |
---|
| 2236 | |
---|
| 2237 | |
---|
| 2238 | @node Particle Emitter Attributes |
---|
| 2239 | @subsection Particle Emitter Attributes |
---|
| 2240 | This section describes the common attributes of all particle emitters. Specific emitter types may also support their own extra attributes.@*@* |
---|
| 2241 | |
---|
| 2242 | @anchor{angle} |
---|
| 2243 | @subheading angle |
---|
| 2244 | |
---|
| 2245 | Sets the maximum angle (in degrees) which emitted particles may deviate from the direction of the emitter (see direction). Setting this to 10 allows particles to deviate up to 10 degrees in any direction away from the emitter's direction. A value of 180 means emit in any direction, whilst 0 means emit always exactly in the direction of the emitter.@*@* |
---|
| 2246 | |
---|
| 2247 | format: angle <degrees>@* |
---|
| 2248 | example: angle 30@* |
---|
| 2249 | default: 0@* |
---|
| 2250 | |
---|
| 2251 | @anchor{colour} |
---|
| 2252 | @subheading colour |
---|
| 2253 | |
---|
| 2254 | Sets a static colour for all particles emitted. Also see the colour_range_start and colour_range_end attributes for setting a range of colours. The format of the colour parameter is "r g b a", where each component is a value from 0 to 1, and the alpha value is optional (assumes 1 if not specified).@*@* |
---|
| 2255 | |
---|
| 2256 | format: colour <r> <g> <b> [<a>]@* |
---|
| 2257 | example: colour 1 0 0 1@* |
---|
| 2258 | default: 1 1 1 1@* |
---|
| 2259 | |
---|
| 2260 | @anchor{colour_range_start} @anchor{colour_range_end} |
---|
| 2261 | @subheading colour_range_start & colour_range_end |
---|
| 2262 | |
---|
| 2263 | As the 'colour' attribute, except these 2 attributes must be specified together, and indicate the range of colours available to emitted particles. The actual colour will be randomly chosen between these 2 values.@*@* |
---|
| 2264 | |
---|
| 2265 | format: as colour@* |
---|
| 2266 | example (generates random colours between red and blue):@* |
---|
| 2267 | @ @ @ @ colour_range_start 1 0 0@* |
---|
| 2268 | @ @ @ @ colour_range_end 0 0 1@* |
---|
| 2269 | default: both 1 1 1 1@* |
---|
| 2270 | |
---|
| 2271 | @anchor{direction} |
---|
| 2272 | @subheading direction |
---|
| 2273 | |
---|
| 2274 | Sets the direction of the emitter. This is relative to the SceneNode which the particle system is attached to, meaning that as with other movable objects changing the orientation of the node will also move the emitter.@*@* |
---|
| 2275 | |
---|
| 2276 | format: direction <x> <y> <z>@* |
---|
| 2277 | example: direction 0 1 0@* |
---|
| 2278 | default: 1 0 0@* |
---|
| 2279 | |
---|
| 2280 | @anchor{emission_rate} |
---|
| 2281 | @subheading emission_rate |
---|
| 2282 | |
---|
| 2283 | Sets how many particles per second should be emitted. The specific emitter does not have to emit these in a continuous burst - this is a relative parameter |
---|
| 2284 | and the emitter may choose to emit all of a second's worth of particles every half-second, for example; the exact behaviour depends on the emitter. The emission rate will also be limited by the particle system's 'quota' setting.@*@* |
---|
| 2285 | |
---|
| 2286 | format: emission_rate <particles_per_second>@* |
---|
| 2287 | example: emission_rate 50@* |
---|
| 2288 | default: 10@* |
---|
| 2289 | |
---|
| 2290 | @anchor{position} |
---|
| 2291 | @subheading position |
---|
| 2292 | |
---|
| 2293 | Sets the position of the emitter relative to the SceneNode the particle system is attached to.@*@* |
---|
| 2294 | |
---|
| 2295 | format: position <x> <y> <z>@* |
---|
| 2296 | example: position 10 0 40@* |
---|
| 2297 | default: 0 0 0@* |
---|
| 2298 | |
---|
| 2299 | @anchor{velocity} |
---|
| 2300 | @subheading velocity |
---|
| 2301 | |
---|
| 2302 | Sets a constant velocity for all particles at emission time. See also the velocity_min and velocity_max attributes which allow you to set a range of velocities instead of a fixed one.@*@* |
---|
| 2303 | |
---|
| 2304 | format: velocity <world_units_per_second>@* |
---|
| 2305 | example: velocity 100@* |
---|
| 2306 | default: 1@* |
---|
| 2307 | |
---|
| 2308 | @anchor{velocity_min} @anchor{velocity_max} |
---|
| 2309 | @subheading velocity_min & velocity_max |
---|
| 2310 | |
---|
| 2311 | As 'velocity' except these attributes set a velocity range and each particle is emitted with a random velocity within this range.@*@* |
---|
| 2312 | |
---|
| 2313 | format: as velocity@* |
---|
| 2314 | example:@* |
---|
| 2315 | @ @ @ @ velocity_min 50@* |
---|
| 2316 | @ @ @ @ velocity_max 100@* |
---|
| 2317 | default: both 1@* |
---|
| 2318 | |
---|
| 2319 | @anchor{time_to_live} |
---|
| 2320 | @subheading time_to_live |
---|
| 2321 | |
---|
| 2322 | Sets the number of seconds each particle will 'live' for before being destroyed. NB it is possible for particle affectors to alter this in flight, but this is the value given to particles on emission. See also the time_to_live_min and time_to_live_max attributes which let you set a lifetime range instead of a fixed one.@*@* |
---|
| 2323 | |
---|
| 2324 | format: time_to_live <seconds>@* |
---|
| 2325 | example: time_to_live 10@* |
---|
| 2326 | default: 5@* |
---|
| 2327 | |
---|
| 2328 | @anchor{time_to_live_min} @anchor{time_to_live_max} |
---|
| 2329 | @subheading time_to_live_min & time_to_live_max |
---|
| 2330 | As time_to_live, except this sets a range of lifetimes and each particle gets a random value in between on emission.@*@* |
---|
| 2331 | |
---|
| 2332 | format: as time_to_live@* |
---|
| 2333 | example:@* |
---|
| 2334 | @ @ @ @ time_to_live_min 2@* |
---|
| 2335 | @ @ @ @ time_to_live_max 5@* |
---|
| 2336 | default: both 5@* |
---|
| 2337 | @* |
---|
| 2338 | |
---|
| 2339 | @anchor{duration} |
---|
| 2340 | @subheading duration |
---|
| 2341 | |
---|
| 2342 | Sets the number of seconds the emitter is active. The emitter can be started again, see @ref{repeat_delay}. A value of 0 means infinite duration. See also the duration_min and duration_max attributes which let you set a duration range instead of a fixed one.@*@* |
---|
| 2343 | |
---|
| 2344 | format: duration <seconds>@* |
---|
| 2345 | example:@* |
---|
| 2346 | @ @ @ @ duration 2.5@* |
---|
| 2347 | default: 0@* |
---|
| 2348 | @* |
---|
| 2349 | |
---|
| 2350 | @anchor{duration_min} @anchor{duration_max} |
---|
| 2351 | @subheading duration_min & duration_max |
---|
| 2352 | |
---|
| 2353 | As duration, except these attributes set a variable time range between the min and max values each time the emitter is started.@*@* |
---|
| 2354 | |
---|
| 2355 | format: as duration@* |
---|
| 2356 | example:@* |
---|
| 2357 | @ @ @ @ duration_min 2@* |
---|
| 2358 | @ @ @ @ duration_max 5@* |
---|
| 2359 | default: both 0@* |
---|
| 2360 | @* |
---|
| 2361 | |
---|
| 2362 | @anchor{repeat_delay} |
---|
| 2363 | @subheading repeat_delay |
---|
| 2364 | |
---|
| 2365 | Sets the number of seconds to wait before the emission is repeated when stopped by a limited @ref{duration}. See also the repeat_delay_min and repeat_delay_max attributes which allow you to set a range of repeat_delays instead of a fixed one.@*@* |
---|
| 2366 | |
---|
| 2367 | format: repeat_delay <seconds>@* |
---|
| 2368 | example:@* |
---|
| 2369 | @ @ @ @ repeat_delay 2.5@* |
---|
| 2370 | default: 0@* |
---|
| 2371 | @* |
---|
| 2372 | |
---|
| 2373 | @anchor{repeat_delay_min} @anchor{repeat_delay_max} |
---|
| 2374 | @subheading repeat_delay_min & repeat_delay_max |
---|
| 2375 | |
---|
| 2376 | As repeat_delay, except this sets a range of repeat delays and each time the emitter is started it gets a random value in between.@*@* |
---|
| 2377 | |
---|
| 2378 | format: as repeat_delay@* |
---|
| 2379 | example:@* |
---|
| 2380 | @ @ @ @ repeat_delay_min 2@* |
---|
| 2381 | @ @ @ @ repeat_delay_max 5@* |
---|
| 2382 | default: both 0@* |
---|
| 2383 | @* |
---|
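Putting a number of these attributes together, a complete emitter block might look like the following sketch (the values are purely illustrative):
@example
emitter Point
{
    angle               15
    emission_rate       75
    direction           0 1 0
    velocity_min        50
    velocity_max        100
    time_to_live_min    2
    time_to_live_max    4
    colour_range_start  1 0.8 0.2
    colour_range_end    1 0.2 0.2
    duration            5        // emit for 5 seconds...
    repeat_delay        2        // ...then pause for 2 seconds and repeat
}
@end example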
| 2384 | |
---|
| 2385 | See also: @ref{Standard Particle Emitters}, @ref{Particle Scripts}, @ref{Particle Affectors} |
---|
| 2386 | |
---|
| 2387 | |
---|
| 2388 | @node Standard Particle Emitters |
---|
| 2389 | @subsection Standard Particle Emitters |
---|
| 2390 | Ogre comes preconfigured with a few particle emitters. New ones can be added by creating plugins: see the Plugin_ParticleFX project as an example of how you would do this (this is where these emitters are implemented). |
---|
| 2391 | |
---|
| 2392 | @itemize @bullet |
---|
| 2393 | @item |
---|
| 2394 | @ref{Point Emitter} |
---|
| 2395 | @item |
---|
| 2396 | @ref{Box Emitter} |
---|
| 2397 | @item |
---|
| 2398 | @ref{Cylinder Emitter} |
---|
| 2399 | @item |
---|
| 2400 | @ref{Ellipsoid Emitter} |
---|
| 2401 | @item |
---|
| 2402 | @ref{Hollow Ellipsoid Emitter} |
---|
| 2403 | @item |
---|
| 2404 | @ref{Ring Emitter} |
---|
| 2405 | @end itemize |
---|
| 2406 | @*@* |
---|
| 2407 | @anchor{Point Emitter} |
---|
| 2408 | @subheading Point Emitter |
---|
| 2409 | |
---|
| 2410 | This emitter emits particles from a single point, which is its position. This emitter has no additional attributes over and above the standard emitter attributes.@*@* |
---|
| 2411 | |
---|
| 2412 | To create a point emitter, include a section like this within your particle system script: |
---|
| 2413 | @example |
---|
| 2414 | |
---|
| 2415 | emitter Point |
---|
| 2416 | { |
---|
| 2417 | // Settings go here |
---|
| 2418 | } |
---|
| 2419 | @end example |
---|
| 2420 | @* |
---|
| 2421 | Please note that the name of the emitter ('Point') is case-sensitive. |
---|
| 2422 | |
---|
| 2423 | @anchor{Box Emitter} |
---|
| 2424 | @subheading Box Emitter |
---|
| 2425 | |
---|
| 2426 | This emitter emits particles from a random location within a 3-dimensional box. Its extra attributes are:@*@* |
---|
| 2427 | @table @asis |
---|
| 2428 | @item width |
---|
| 2429 | Sets the width of the box (this is the size of the box along its local X axis, which is dependent on the 'direction' attribute which forms the box's local Z).@* |
---|
| 2430 | format: width <units>@* |
---|
| 2431 | example: width 250@* |
---|
| 2432 | default: 100@* |
---|
| 2433 | @item height |
---|
| 2434 | Sets the height of the box (this is the size of the box along its local Y axis, which is dependent on the 'direction' attribute which forms the box's local Z).@* |
---|
| 2435 | format: height <units>@* |
---|
| 2436 | example: height 250@* |
---|
| 2437 | default: 100@* |
---|
| 2438 | @item depth |
---|
| 2439 | Sets the depth of the box (this is the size of the box along its local Z axis, which is the same as the 'direction' attribute).@* |
---|
| 2440 | format: depth <units>@* |
---|
| 2441 | example: depth 250@* |
---|
| 2442 | default: 100@* |
---|
| 2443 | @end table |
---|
| 2444 | @* |
---|
| 2445 | To create a box emitter, include a section like this within your particle system script: |
---|
| 2446 | @example |
---|
| 2447 | emitter Box |
---|
| 2448 | { |
---|
| 2449 | // Settings go here |
---|
| 2450 | } |
---|
| 2451 | @end example |
---|
| 2452 | |
---|
| 2453 | @anchor{Cylinder Emitter} |
---|
| 2454 | @subheading Cylinder Emitter |
---|
| 2455 | |
---|
| 2456 | This emitter emits particles in a random direction from within a cylinder area, where the cylinder is oriented along the Z-axis. This emitter has exactly the same parameters as the @ref{Box Emitter} so there are no additional parameters to consider here - the width and height determine the shape of the cylinder along its axis (if they are different, the cross-section is elliptical), and the depth determines the length of the cylinder. |
---|
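To create a cylinder emitter, include a section like this within your particle system script (the values are illustrative):
@example
emitter Cylinder
{
    direction  0 1 0   // the cylinder's length (depth) runs along this axis
    width      30
    height     30      // equal width and height give a circular cross-section
    depth      10
}
@end example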
| 2457 | |
---|
| 2458 | @anchor{Ellipsoid Emitter} |
---|
| 2459 | @subheading Ellipsoid Emitter |
---|
| 2460 | This emitter emits particles from within an ellipsoid shaped area, ie a sphere or squashed-sphere area. The parameters are again identical to the @ref{Box Emitter}, except that the dimensions describe the widest points along each of the axes. |
---|
| 2461 | |
---|
| 2462 | @anchor{Hollow Ellipsoid Emitter} |
---|
| 2463 | @subheading Hollow Ellipsoid Emitter |
---|
| 2464 | This emitter is just like @ref{Ellipsoid Emitter} except that there is a hollow area in the centre of the ellipsoid from which no particles are emitted. Therefore it has 3 extra parameters in order to define this area: |
---|
| 2465 | |
---|
| 2466 | @table @asis |
---|
| 2467 | @item inner_width |
---|
| 2468 | The width of the inner area which does not emit any particles. |
---|
| 2469 | @item inner_height |
---|
| 2470 | The height of the inner area which does not emit any particles. |
---|
| 2471 | @item inner_depth |
---|
| 2472 | The depth of the inner area which does not emit any particles. |
---|
| 2473 | @end table |
---|
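To create a hollow ellipsoid emitter, include a section like this within your particle system script. The outer dimensions use the same width/height/depth attributes as the @ref{Box Emitter}; the inner_* values below are written as fractions of the outer size, which is how the stock plugin appears to interpret them (treat the exact values as illustrative):
@example
emitter HollowEllipsoid
{
    width         100
    height        100
    depth         100
    inner_width   0.5   // hollow core from which no particles are emitted
    inner_height  0.5
    inner_depth   0.5
}
@end example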
| 2474 | |
---|
| 2475 | @anchor{Ring Emitter} |
---|
| 2476 | @subheading Ring Emitter |
---|
| 2477 | This emitter emits particles from a ring-shaped area, ie a little like @ref{Hollow Ellipsoid Emitter} except only in 2 dimensions. |
---|
| 2478 | |
---|
| 2479 | @table @asis |
---|
| 2480 | @item inner_width |
---|
| 2481 | The width of the inner area which does not emit any particles. |
---|
| 2482 | @item inner_height |
---|
| 2483 | The height of the inner area which does not emit any particles. |
---|
| 2484 | @end table |
---|
| 2485 | @*@* |
---|
| 2486 | |
---|
| 2487 | See also: @ref{Particle Scripts}, @ref{Particle Emitters} |
---|
| 2488 | |
---|
| 2489 | @node Particle Affectors |
---|
| 2490 | @subsection Particle Affectors |
---|
| 2491 | |
---|
| 2492 | Particle affectors modify particles over their lifetime. They are classified by 'type' e.g. 'LinearForce' affectors apply a force to all particles, whilst 'ColourFader' affectors alter the colour of particles in flight. New affectors can be added to Ogre by creating plugins. You add an affector to a system by nesting another section within it, headed with the keyword 'affector' followed by the name of the type of affector (case sensitive). Ogre currently supports 'LinearForce' and 'ColourFader' affectors.@*@* |
---|
| 2493 | |
---|
| 2494 | Particle affectors actually have no universal attributes; they are all specific to the type of affector.@*@* |
---|
| 2495 | |
---|
| 2496 | See also: @ref{Standard Particle Affectors}, @ref{Particle Scripts}, @ref{Particle Emitters} |
---|
| 2497 | |
---|
| 2498 | @node Standard Particle Affectors |
---|
| 2499 | @subsection Standard Particle Affectors |
---|
| 2500 | Ogre comes preconfigured with a few particle affectors. New ones can be added by creating plugins: see the Plugin_ParticleFX project as an example of how you would do this (this is where these affectors are implemented). |
---|
| 2501 | |
---|
| 2502 | @itemize @bullet |
---|
| 2503 | @item |
---|
| 2504 | @ref{Linear Force Affector} |
---|
| 2505 | @item |
---|
| 2506 | @ref{ColourFader Affector} |
---|
| 2507 | @item |
---|
| 2508 | @ref{ColourFader2 Affector} |
---|
| 2509 | @item |
---|
| 2510 | @ref{Scaler Affector} |
---|
| 2511 | @item |
---|
| 2512 | @ref{Rotator Affector} |
---|
| 2513 | @item |
---|
| 2514 | @ref{ColourInterpolator Affector} |
---|
| 2515 | @item |
---|
| 2516 | @ref{ColourImage Affector} |
---|
| 2517 | @end itemize |
---|
| 2518 | |
---|
| 2519 | @anchor{Linear Force Affector} |
---|
| 2520 | @subheading Linear Force Affector |
---|
| 2521 | |
---|
| 2522 | This affector applies a force vector to all particles to modify their trajectory. It can be used for gravity, wind, or any other linear force. Its extra attributes are:@*@* |
---|
| 2523 | @table @asis |
---|
| 2524 | @item force_vector |
---|
| 2525 | Sets the vector for the force to be applied to every particle. The magnitude of this vector determines how strong the force is.@* |
---|
| 2526 | @ @ @ @ format: force_vector <x> <y> <z>@* |
---|
| 2527 | @ @ @ @ example: force_vector 50 0 -50@* |
---|
| 2528 | @ @ @ @ default: 0 -100 0 (a fair gravity effect)@* |
---|
| 2529 | @item force_application |
---|
| 2530 | |
---|
| 2531 | Sets the way in which the force vector is applied to particle momentum.@* |
---|
| 2532 | @ @ @ @ format: force_application <add|average>@* |
---|
| 2533 | @ @ @ @ example: force_application average@* |
---|
| 2534 | @ @ @ @ default: add@* |
---|
| 2535 | The options are: |
---|
| 2536 | @table @asis |
---|
| 2537 | @item average |
---|
| 2538 | The resulting momentum is the average of the force vector and the particle's current motion. This is self-stabilising, but the speed at which the particle changes direction is non-linear. |
---|
| 2539 | @item add |
---|
| 2540 | The resulting momentum is the particle's current motion plus the force vector. This is traditional force acceleration but can potentially result in unlimited velocity. |
---|
| 2541 | @end table |
---|
| 2542 | @end table |
---|
| 2543 | @* |
---|
| 2544 | To create a linear force affector, include a section like this within your particle system script: |
---|
| 2545 | @example |
---|
| 2546 | affector LinearForce |
---|
| 2547 | { |
---|
| 2548 | // Settings go here |
---|
| 2549 | } |
---|
| 2550 | @end example |
---|
| 2551 | Please note that the name of the affector type ('LinearForce') is case-sensitive. |
---|
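For instance, a combined wind-and-gravity effect might look like this (the values are illustrative):
@example
affector LinearForce
{
    force_vector       20 -90 0   // a little sideways wind plus downward gravity
    force_application  add
}
@end example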
| 2552 | |
---|
| 2553 | @anchor{ColourFader Affector} |
---|
| 2554 | @subheading ColourFader Affector |
---|
| 2555 | |
---|
| 2556 | This affector modifies the colour of particles in flight. Its extra attributes are: |
---|
| 2557 | @table @asis |
---|
| 2558 | @item red |
---|
| 2559 | Sets the adjustment to be made to the red component of the particle colour per second.@* |
---|
| 2560 | @ @ @ @ format: red <delta_value>@* |
---|
| 2561 | @ @ @ @ example: red -0.1@* |
---|
| 2562 | @ @ @ @ default: 0@* |
---|
| 2563 | @item green |
---|
| 2564 | Sets the adjustment to be made to the green component of the particle colour per second.@* |
---|
| 2565 | @ @ @ @ format: green <delta_value>@* |
---|
| 2566 | @ @ @ @ example: green -0.1@* |
---|
| 2567 | @ @ @ @ default: 0@* |
---|
| 2568 | @item blue |
---|
| 2569 | Sets the adjustment to be made to the blue component of the particle colour per second.@* |
---|
| 2570 | @ @ @ @ format: blue <delta_value>@* |
---|
| 2571 | @ @ @ @ example: blue -0.1@* |
---|
| 2572 | @ @ @ @ default: 0@* |
---|
| 2573 | @item alpha |
---|
| 2574 | Sets the adjustment to be made to the alpha component of the particle colour per second.@* |
---|
| 2575 | @ @ @ @ format: alpha <delta_value>@* |
---|
| 2576 | example: alpha -0.1@* |
---|
| 2577 | default: 0@* |
---|
| 2578 | @end table |
---|
| 2579 | To create a colour fader affector, include a section like this within your particle system script: |
---|
| 2580 | @example |
---|
| 2581 | affector ColourFader |
---|
| 2582 | { |
---|
| 2583 | // Settings go here |
---|
| 2584 | } |
---|
| 2585 | @end example |
---|
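For example, the following settings would fade a particle that starts fully white and opaque down to black and fully transparent over roughly four seconds (values illustrative):
@example
affector ColourFader
{
    red    -0.25
    green  -0.25
    blue   -0.25
    alpha  -0.25
}
@end example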
| 2586 | |
---|
| 2587 | @anchor{ColourFader2 Affector} |
---|
| 2588 | @subheading ColourFader2 Affector |
---|
| 2589 | |
---|
| 2590 | This affector is similar to the @ref{ColourFader Affector}, except it introduces two states of colour changes as opposed to just one. The second colour change state is activated once a specified amount of time remains in the particle's life. |
---|
| 2591 | @table @asis |
---|
| 2592 | @item red1 |
---|
| 2593 | Sets the adjustment to be made to the red component of the particle colour per second for the first state.@* |
---|
| 2594 | @ @ @ @ format: red <delta_value>@* |
---|
| 2595 | @ @ @ @ example: red -0.1@* |
---|
| 2596 | @ @ @ @ default: 0@* |
---|
| 2597 | @item green1 |
---|
| 2598 | Sets the adjustment to be made to the green component of the particle colour per second for the first state.@* |
---|
| 2599 | @ @ @ @ format: green <delta_value>@* |
---|
| 2600 | @ @ @ @ example: green -0.1@* |
---|
| 2601 | @ @ @ @ default: 0@* |
---|
| 2602 | @item blue1 |
---|
| 2603 | Sets the adjustment to be made to the blue component of the particle colour per second for the first state.@* |
---|
| 2604 | @ @ @ @ format: blue <delta_value>@* |
---|
| 2605 | @ @ @ @ example: blue -0.1@* |
---|
| 2606 | @ @ @ @ default: 0@* |
---|
| 2607 | @item alpha1 |
---|
| 2608 | Sets the adjustment to be made to the alpha component of the particle colour per second for the first state.@* |
---|
| 2609 | @ @ @ @ format: alpha <delta_value>@* |
---|
| 2610 | example: alpha -0.1@* |
---|
| 2611 | default: 0@* |
---|
| 2612 | @item red2 |
---|
| 2613 | Sets the adjustment to be made to the red component of the particle colour per second for the second state.@* |
---|
| 2614 | @ @ @ @ format: red <delta_value>@* |
---|
| 2615 | @ @ @ @ example: red -0.1@* |
---|
| 2616 | @ @ @ @ default: 0@* |
---|
| 2617 | @item green2 |
---|
| 2618 | Sets the adjustment to be made to the green component of the particle colour per second for the second state.@* |
---|
| 2619 | @ @ @ @ format: green <delta_value>@* |
---|
| 2620 | @ @ @ @ example: green -0.1@* |
---|
| 2621 | @ @ @ @ default: 0@* |
---|
| 2622 | @item blue2 |
---|
| 2623 | Sets the adjustment to be made to the blue component of the particle colour per second for the second state.@* |
---|
| 2624 | @ @ @ @ format: blue <delta_value>@* |
---|
| 2625 | @ @ @ @ example: blue -0.1@* |
---|
| 2626 | @ @ @ @ default: 0@* |
---|
| 2627 | @item alpha2 |
---|
| 2628 | Sets the adjustment to be made to the alpha component of the particle colour per second for the second state.@* |
---|
| 2629 | @ @ @ @ format: alpha <delta_value>@* |
---|
| 2630 | example: alpha -0.1@* |
---|
| 2631 | default: 0@* |
---|
| 2632 | @item state_change |
---|
| 2633 | When a particle has this much time left to live, it will switch to state 2.@* |
---|
| 2634 | @ @ @ @ format: state_change <seconds>@* |
---|
| 2635 | example: state_change 2@* |
---|
| 2636 | default: 1@* |
---|
| 2637 | @end table |
---|
| 2638 | To create a ColourFader2 affector, include a section like this within your particle system script: |
---|
| 2639 | @example |
---|
| 2640 | affector ColourFader2 |
---|
| 2641 | { |
---|
| 2642 | // Settings go here |
---|
| 2643 | } |
---|
| 2644 | @end example |
---|
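For example, a particle could darken slowly for most of its life and then fade out quickly during its final second (values illustrative):
@example
affector ColourFader2
{
    // state one: darken slowly while plenty of lifetime remains
    red1    -0.1
    green1  -0.1
    blue1   -0.1
    // state two: fade out rapidly once only 1 second of life is left
    alpha2  -1
    state_change 1
}
@end example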
| 2645 | |
---|
| 2646 | @anchor{Scaler Affector} |
---|
| 2647 | @subheading Scaler Affector |
---|
| 2648 | |
---|
| 2649 | This affector scales particles in flight. Its extra attributes are: |
---|
| 2650 | @table @asis |
---|
| 2651 | @item rate |
---|
| 2652 | The amount by which to scale the particles in both the x and y direction per second. |
---|
| 2653 | @end table |
---|
| 2654 | To create a scale affector, include a section like this within your particle system script: |
---|
| 2655 | @example |
---|
| 2656 | affector Scaler |
---|
| 2657 | { |
---|
| 2658 | // Settings go here |
---|
| 2659 | } |
---|
| 2660 | @end example |
---|
| 2661 | |
---|
| 2662 | @anchor{Rotator Affector} |
---|
| 2663 | @subheading Rotator Affector |
---|
| 2664 | |
---|
| 2665 | This affector rotates particles in flight. This is done by rotating the texture. Its extra attributes are: |
---|
| 2666 | @table @asis |
---|
| 2667 | @item rotation_speed_range_start |
---|
| 2668 | The start of a range of rotation speeds to be assigned to emitted particles.@* |
---|
| 2669 | @ @ @ @ format: rotation_speed_range_start <degrees_per_second>@* |
---|
| 2670 | example: rotation_speed_range_start 90@* |
---|
| 2671 | default: 0@* |
---|
| 2672 | @item rotation_speed_range_end |
---|
| 2673 | The end of a range of rotation speeds to be assigned to emitted particles.@* |
---|
| 2674 | @ @ @ @ format: rotation_speed_range_end <degrees_per_second>@* |
---|
| 2675 | example: rotation_speed_range_end 180@* |
---|
| 2676 | default: 0@* |
---|
| 2677 | @item rotation_range_start |
---|
| 2678 | The start of a range of rotation angles to be assigned to emitted particles.@* |
---|
| 2679 | @ @ @ @ format: rotation_range_start <degrees>@* |
---|
| 2680 | example: rotation_range_start 0@* |
---|
| 2681 | default: 0@* |
---|
| 2682 | @item rotation_range_end |
---|
| 2683 | The end of a range of rotation angles to be assigned to emitted particles.@* |
---|
| 2684 | @ @ @ @ format: rotation_range_end <degrees>@* |
---|
| 2685 | example: rotation_range_end 360@* |
---|
| 2686 | default: 0@* |
---|
| 2687 | @end table |
---|
| 2688 | To create a rotate affector, include a section like this within your particle system script: |
---|
| 2689 | @example |
---|
| 2690 | affector Rotator |
---|
| 2691 | { |
---|
| 2692 | // Settings go here |
---|
| 2693 | } |
---|
| 2694 | @end example |
---|
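For example, to give each emitted particle a random initial orientation and a gentle spin in either direction (values illustrative):
@example
affector Rotator
{
    rotation_range_start        0     // random start angle anywhere in a full turn
    rotation_range_end          360
    rotation_speed_range_start  -60   // spin up to 60 degrees per second either way
    rotation_speed_range_end    60
}
@end example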
| 2695 | |
---|
| 2696 | @anchor{ColourInterpolator Affector} |
---|
| 2697 | @subheading ColourInterpolator Affector |
---|
| 2698 | |
---|
| 2699 | Similar to the ColourFader and ColourFader2 affectors, this affector modifies the colour of particles in flight, except it has a variable number of defined stages. It changes the particle colour through several stages over the life of a particle and interpolates between them. Its extra attributes are: |
---|
| 2700 | @table @asis |
---|
| 2701 | @item time0 |
---|
| 2702 | The point in time of stage 0.@* |
---|
| 2703 | @ @ @ @ format: time0 <0-1 based on lifetime>@* |
---|
| 2704 | example: time0 0@* |
---|
| 2705 | default: 1@* |
---|
| 2706 | @item colour0 |
---|
| 2707 | The colour at stage 0.@* |
---|
| 2708 | @ @ @ @ format: colour0 <r> <g> <b> [<a>]@* |
---|
| 2709 | example: colour0 1 0 0 1@* |
---|
| 2710 | default: 0.5 0.5 0.5 0.0@* |
---|
| 2711 | @item time1 |
---|
| 2712 | The point in time of stage 1.@* |
---|
| 2713 | @ @ @ @ format: time1 <0-1 based on lifetime>@* |
---|
| 2714 | example: time1 0.5@* |
---|
| 2715 | default: 1@* |
---|
| 2716 | @item colour1 |
---|
| 2717 | The colour at stage 1.@* |
---|
| 2718 | @ @ @ @ format: colour1 <r> <g> <b> [<a>]@* |
---|
| 2719 | example: colour1 0 1 0 1@* |
---|
| 2720 | default: 0.5 0.5 0.5 0.0@* |
---|
| 2721 | @item time2 |
---|
| 2722 | The point in time of stage 2.@* |
---|
| 2723 | @ @ @ @ format: time2 <0-1 based on lifetime>@* |
---|
| 2724 | example: time2 1@* |
---|
| 2725 | default: 1@* |
---|
| 2726 | @item colour2 |
---|
| 2727 | The colour at stage 2.@* |
---|
| 2728 | @ @ @ @ format: colour2 <r> <g> <b> [<a>]@* |
---|
| 2729 | example: colour2 0 0 1 1@* |
---|
| 2730 | default: 0.5 0.5 0.5 0.0@* |
---|
| 2731 | @item [...] |
---|
| 2732 | @end table |
---|
| 2733 | The number of stages is variable. The maximum number of stages is 6, with time5 and colour5 being the last possible parameters. |
---|
| 2734 | To create a colour interpolation affector, include a section like this within your particle system script: |
---|
| 2735 | @example |
---|
| 2736 | affector ColourInterpolator |
---|
| 2737 | { |
---|
| 2738 | // Settings go here |
---|
| 2739 | } |
---|
| 2740 | @end example |
---|
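For example, a fire-like particle could start out yellow, turn orange halfway through its life and end as transparent dark smoke (values illustrative):
@example
affector ColourInterpolator
{
    time0    0
    colour0  1 1 0.5 1
    time1    0.5
    colour1  1 0.3 0 1
    time2    1
    colour2  0.2 0.2 0.2 0
}
@end example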
| 2741 | |
---|
| 2742 | @anchor{ColourImage Affector} |
---|
| 2743 | @subheading ColourImage Affector |
---|
| 2744 | |
---|
| 2745 | This is another affector that modifies the colour of particles in flight, but instead of programmatically defining colours, the colours are taken from a specified image file. The range of colour values begins at the left side of the image and moves to the right over the lifetime of the particle; therefore only the horizontal dimension of the image is used. Its extra attributes are: |
---|
| 2746 | @table @asis |
---|
| 2747 | @item image |
---|
| 2748 | The name of the image file providing the colour values to be applied to emitted particles over their lifetime.@* |
---|
| 2749 | @ @ @ @ format: image <image_name>@* |
---|
| 2750 | example: image rainbow.png@* |
---|
| 2751 | default: none@* |
---|
| 2752 | @end table |
---|
| 2753 | To create a ColourImage affector, include a section like this within your particle system script: |
---|
| 2754 | @example |
---|
| 2755 | affector ColourImage |
---|
| 2756 | { |
---|
| 2757 | // Settings go here |
---|
| 2758 | } |
---|
| 2759 | @end example |
---|
| 2760 | |
---|
| 2761 | |
---|
| 2762 | @node Overlay Scripts |
---|
| 2763 | @section Overlay Scripts |
---|
| 2764 | |
---|
| 2765 | Overlay scripts offer you the ability to define overlays in a script which can be reused easily. Whilst you could set up all overlays for a scene in code using the methods of the SceneManager, Overlay and OverlayElement classes, in practice it's a bit unwieldy. Instead you can store overlay definitions in text files which can then be loaded whenever required.@*@* |
---|
| 2766 | |
---|
| 2767 | @heading Loading scripts |
---|
| 2768 | |
---|
| 2769 | Overlay scripts are loaded at initialisation time by the system: by default it looks in all common resource locations (see Root::addResourceLocation) for files with the '.overlay' extension and parses them. If you want to parse files with a different extension, use the OverlayManager::getSingleton().parseAllSources method with your own extension, or if you want to parse an individual file, use OverlayManager::getSingleton().parseScript.@*@* |
---|
| 2770 | |
---|
| 2771 | @heading Format |
---|
| 2772 | |
---|
| 2773 | Several overlays may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ({}), comments indicated by starting a line with '//' (note that nested-form comments are not allowed), and inheritance through the use of templates. The general format is shown below in a typical example: |
---|
| 2774 | @example |
---|
| 2775 | // The name of the overlay comes first |
---|
| 2776 | MyOverlays/ANewOverlay |
---|
| 2777 | { |
---|
| 2778 | zorder 200 |
---|
| 2779 | |
---|
| 2780 | container Panel(MyOverlayElements/TestPanel) |
---|
| 2781 | { |
---|
| 2782 | // Center it horizontally, put it at the top |
---|
| 2783 | left 0.25 |
---|
| 2784 | top 0 |
---|
| 2785 | width 0.5 |
---|
| 2786 | height 0.1 |
---|
| 2787 | material MyMaterials/APanelMaterial |
---|
| 2788 | |
---|
| 2789 | // Another panel nested in this one |
---|
| 2790 | container Panel(MyOverlayElements/AnotherPanel) |
---|
| 2791 | { |
---|
| 2792 | left 0 |
---|
| 2793 | top 0 |
---|
| 2794 | width 0.1 |
---|
| 2795 | height 0.1 |
---|
| 2796 | material MyMaterials/NestedPanel |
---|
| 2797 | } |
---|
| 2798 | } |
---|
| 2799 | |
---|
| 2800 | } |
---|
| 2801 | @end example |
---|
| 2802 | |
---|
| 2803 | The above example defines a single overlay called 'MyOverlays/ANewOverlay', with 2 panels in it, one nested under the other. It uses relative metrics (the default if no metrics_mode option is found).@*@* |
---|
| 2804 | |
---|
| 2805 | Every overlay in the script must be given a name, which is the line before the first opening '{'. This name must be globally unique. It can include path characters (as in the example) to logically divide up your overlays, and also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a string. Within the braces are the properties of the overlay, and any nested elements. The overlay itself only has a single property 'zorder' which determines how 'high' it is in the stack of overlays if more than one is displayed at the same time. Overlays with higher zorder values are displayed on top.@*@* |
---|
| 2806 | |
---|
| 2807 | @heading Adding elements to the overlay |
---|
| 2808 | |
---|
| 2809 | Within an overlay, you can include any number of 2D or 3D elements. You do this by defining a nested block headed by: |
---|
| 2810 | @table @asis |
---|
| 2811 | @item 'element' |
---|
| 2812 | if you want to define a 2D element which cannot have children of its own |
---|
| 2813 | @item 'container' |
---|
| 2814 | if you want to define a 2D container object (which may itself have nested containers or elements) |
---|
| 2815 | @end table |
---|
| 2816 | @* |
---|
| 2817 | The element and container blocks are virtually identical apart from the latter's ability to store nested blocks. |
---|
| 2818 | |
---|
| 2819 | @heading 'container' / 'element' blocks |
---|
| 2820 | |
---|
| 2821 | These are delimited by curly braces. The format for the header preceding the first brace is:@*@* |
---|
| 2822 | |
---|
| 2823 | [container | element] <type_name> (<instance_name>) [: <template_name>]@* |
---|
| 2824 | { ...@*@* |
---|
| 2825 | @table @asis |
---|
| 2826 | @item type_name |
---|
| 2827 | Must resolve to the name of an OverlayElement type which has been registered with the OverlayManager. Plugins register with the OverlayManager to advertise their ability to create elements, and at this time advertise the name of the type. OGRE comes preconfigured with types 'Panel', 'BorderPanel' and 'TextArea'. |
---|
| 2828 | @item instance_name |
---|
| 2829 | Must be a name unique among all other elements / containers by which to identify the element. Note that you can obtain a pointer to any named element by calling OverlayManager::getSingleton().getOverlayElement(name). |
---|
| 2830 | @item template_name |
---|
| 2831 | Optional template on which to base this item. See templates. |
---|
| 2832 | @end table |
---|
| 2833 | |
---|
| 2834 | The properties which can be included within the braces depend on the custom type. However the following are always valid: |
---|
| 2835 | @itemize @bullet |
---|
| 2836 | @item |
---|
| 2837 | @ref{metrics_mode} |
---|
| 2838 | @item |
---|
| 2839 | @ref{horz_align} |
---|
| 2840 | @item |
---|
| 2841 | @ref{vert_align} |
---|
| 2842 | @item |
---|
| 2843 | @ref{left} |
---|
| 2844 | @item |
---|
| 2845 | @ref{top} |
---|
| 2846 | @item |
---|
| 2847 | @ref{width} |
---|
| 2848 | @item |
---|
| 2849 | @ref{height} |
---|
| 2850 | @item |
---|
| 2851 | @ref{overlay_material, material} |
---|
| 2852 | @item |
---|
| 2853 | @ref{caption} |
---|
| 2854 | @end itemize |
---|
| 2855 | |
---|
| 2856 | |
---|
| 2857 | @heading Templates |
---|
| 2858 | |
---|
| 2859 | You can use templates to create numerous elements with the same properties. A template is an abstract element and it is not added to an overlay. It acts as a base class from which elements can inherit, acquiring its default properties. To create a template, the keyword 'template' must be the first word in the element definition (before container or element). The template element is created in the topmost scope - it is NOT specified in an Overlay. It is recommended that you define templates in a separate script, though this is not essential. Having templates defined in a separate file will allow different look & feels to be easily substituted.@*@* |
---|
| 2860 | |
---|
| 2861 | Elements can inherit a template in a similar way to C++ inheritance - by using the : operator on the element definition. The : operator is placed after the closing bracket of the name (separated by a space). The name of the template to inherit is then placed after the : operator (also separated by a space).@*@* |
---|
| 2862 | |
---|
| 2863 | A template can contain template children which are created when the template is subclassed and instantiated. Using the template keyword for the children of a template is optional but recommended for clarity, as the children of a template are always going to be templates themselves.@*@* |
---|
| 2864 | @example |
---|
| 2865 | template container BorderPanel(MyTemplates/BasicBorderPanel) |
---|
| 2866 | { |
---|
| 2867 | left 0 |
---|
| 2868 | top 0 |
---|
| 2869 | width 1 |
---|
| 2870 | height 1 |
---|
| 2871 | |
---|
| 2872 | // setup the texture UVs for a borderpanel |
---|
| 2873 | |
---|
| 2874 | // do this in a template so it doesn't need to be redone everywhere |
---|
| 2875 | material Core/StatsBlockCenter |
---|
| 2876 | border_size 0.05 0.05 0.06665 0.06665 |
---|
| 2877 | border_material Core/StatsBlockBorder |
---|
| 2878 | border_topleft_uv 0.0000 1.0000 0.1914 0.7969 |
---|
| 2879 | border_top_uv 0.1914 1.0000 0.8086 0.7969 |
---|
| 2880 | border_topright_uv 0.8086 1.0000 1.0000 0.7969 |
---|
| 2881 | border_left_uv 0.0000 0.7969 0.1914 0.2148 |
---|
| 2882 | border_right_uv 0.8086 0.7969 1.0000 0.2148 |
---|
| 2883 | border_bottomleft_uv 0.0000 0.2148 0.1914 0.0000 |
---|
| 2884 | border_bottom_uv 0.1914 0.2148 0.8086 0.0000 |
---|
| 2885 | border_bottomright_uv 0.8086 0.2148 1.0000 0.0000 |
---|
| 2886 | } |
---|
| 2887 | template container Button(MyTemplates/BasicButton) : MyTemplates/BasicBorderPanel |
---|
| 2888 | { |
---|
| 2889 | left 0.82 |
---|
| 2890 | top 0.45 |
---|
| 2891 | width 0.16 |
---|
| 2892 | height 0.13 |
---|
| 2893 | material Core/StatsBlockCenter |
---|
| 2894 | border_up_material Core/StatsBlockBorder/Up |
---|
| 2895 | border_down_material Core/StatsBlockBorder/Down |
---|
| 2896 | } |
---|
| 2897 | template element TextArea(MyTemplates/BasicText) |
---|
| 2898 | { |
---|
| 2899 | font_name Ogre |
---|
| 2900 | char_height 0.08 |
---|
| 2901 | colour_top 1 1 0 |
---|
| 2902 | colour_bottom 1 0.2 0.2 |
---|
| 2903 | left 0.03 |
---|
| 2904 | top 0.02 |
---|
| 2905 | width 0.12 |
---|
| 2906 | height 0.09 |
---|
| 2907 | } |
---|
| 2908 | |
---|
| 2909 | MyOverlays/AnotherOverlay |
---|
| 2910 | { |
---|
| 2911 | zorder 490 |
---|
| 2912 | container BorderPanel(MyElements/BackPanel) : MyTemplates/BasicBorderPanel |
---|
| 2913 | { |
---|
| 2914 | left 0 |
---|
| 2915 | top 0 |
---|
| 2916 | width 1 |
---|
| 2917 | height 1 |
---|
| 2918 | |
---|
| 2919 | container Button(MyElements/HostButton) : MyTemplates/BasicButton |
---|
| 2920 | { |
---|
| 2921 | left 0.82 |
---|
| 2922 | top 0.45 |
---|
| 2923 | caption MyTemplates/BasicText HOST |
---|
| 2924 | } |
---|
| 2925 | |
---|
| 2926 | container Button(MyElements/JoinButton) : MyTemplates/BasicButton |
---|
| 2927 | { |
---|
| 2928 | left 0.82 |
---|
| 2929 | top 0.60 |
---|
| 2930 | caption MyTemplates/BasicText JOIN |
---|
| 2931 | } |
---|
| 2932 | } |
---|
| 2933 | } |
---|
| 2934 | @end example |
---|
| 2935 | The above example uses templates to define a button. Note that the button template inherits from the BorderPanel template. This reduces the number of attributes needed to instantiate a button.@*@* |
---|
| 2936 | |
---|
| 2937 | Also note that the instantiation of a Button needs a template name for the caption attribute. So templates can also be used by elements that need dynamic creation of children elements (the button creates a TextAreaElement in this case for its caption).@*@* |
---|
| 2938 | |
---|
| 2939 | @xref{OverlayElement Attributes}, @ref{Standard OverlayElements} |
---|
| 2940 | |
---|
| 2941 | @node OverlayElement Attributes |
---|
| 2942 | @subsection OverlayElement Attributes |
---|
| 2943 | |
---|
| 2944 | These attributes are valid within the braces of a 'container' or 'element' block in an overlay script. They must each be on their own line. Ordering is unimportant.@*@* |
---|
| 2945 | |
---|
| 2946 | @anchor{metrics_mode} |
---|
| 2947 | @subheading metrics_mode |
---|
| 2948 | |
---|
| 2949 | Sets the units which will be used to size and position this element.@*@* |
---|
| 2950 | |
---|
| 2951 | Format: metrics_mode <pixels|relative>@* |
---|
| 2952 | Example: metrics_mode pixels@* |
---|
| 2953 | |
---|
| 2954 | This can be used to change the way that all measurement attributes in the rest of this element are interpreted. In relative mode, they are interpreted as being a parametric value from 0 to 1, as a proportion of the width / height of the screen. In pixels mode, they are simply pixel offsets.@*@* |
---|
| 2955 | |
---|
| 2956 | Default: metrics_mode relative@* |
---|
| 2957 | |
---|
| 2958 | @anchor{horz_align} |
---|
| 2959 | @subheading horz_align |
---|
| 2960 | |
---|
| 2961 | Sets the horizontal alignment of this element, in terms of where the horizontal origin is.@*@* |
---|
| 2962 | |
---|
| 2963 | Format: horz_align <left|center|right>@* |
---|
| 2964 | Example: horz_align center@*@* |
---|
| 2965 | |
---|
| 2966 | This can be used to change where the origin is deemed to be for the purposes of any horizontal positioning attributes of this element. By default the origin is deemed to be the left edge of the screen, but if you change this you can center or right-align your elements. Note that setting the alignment to center or right does not automatically force your elements to appear in the center or the right edge, you just have to treat that point as the origin and adjust your coordinates appropriately. This is more flexible because you can choose to position your element anywhere relative to that origin. For example, if your element was 10 pixels wide, you would use a 'left' property of -10 to align it exactly to the right edge, or -20 to leave a gap but still make it stick to the right edge.@*@* |
---|
| 2967 | |
---|
| 2968 | Note that you can use this property in both relative and pixel modes, but it is most useful in pixel mode.@*@* |
---|
| 2969 | |
---|
| 2970 | Default: horz_align left@* |
---|
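As a sketch of the scenario described above, the following panel (all names are placeholders) is anchored to the right-hand edge of the screen using pixel metrics:
@example
container Panel(MyElements/RightAlignedPanel)
{
    metrics_mode  pixels
    horz_align    right
    left          -110   // 100 pixels wide with a 10 pixel gap from the right edge
    top           10
    width         100
    height        20
    material      MyMaterials/PanelBackground
}
@end example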
| 2971 | |
---|
| 2972 | @anchor{vert_align} |
---|
| 2973 | @subheading vert_align |
---|
| 2974 | |
---|
| 2975 | Sets the vertical alignment of this element, in terms of where the vertical origin is.@*@* |
---|
| 2976 | |
---|
| 2977 | Format: vert_align <top|center|bottom>@* |
---|
| 2978 | Example: vert_align center@*@* |
---|
| 2979 | |
---|
| 2980 | This can be used to change where the origin is deemed to be for the purposes of any vertical positioning attributes of this element. By default the origin is deemed to be the top edge of the screen, but if you change this you can center or bottom-align your elements. Note that setting the alignment to center or bottom does not automatically force your elements to appear in the center or the bottom edge, you just have to treat that point as the origin and adjust your coordinates appropriately. This is more flexible because you can choose to position your element anywhere relative to that origin. For example, if your element was 50 pixels high, you would use a 'top' property of -50 to align it exactly to the bottom edge, or -70 to leave a gap but still make it stick to the bottom edge.@*@* |
---|
| 2981 | |
---|
| 2982 | Note that you can use this property in both relative and pixel modes, but it is most useful in pixel mode.@*@* |
---|
| 2983 | |
---|
| 2984 | Default: vert_align top@* |
---|
| 2985 | |
---|
| 2986 | @anchor{left} |
---|
| 2987 | @subheading left |
---|
| 2988 | |
---|
| 2989 | Sets the horizontal position of the element relative to its parent.@*@* |
---|
| 2990 | |
---|
| 2991 | Format: left <value>@* |
---|
| 2992 | Example: left 0.5@*@* |
---|
| 2993 | |
---|
| 2994 | Positions are relative to the parent (the top-left of the screen if the parent is an overlay, the top-left of the parent otherwise) and are expressed in terms of a proportion of screen size. Therefore 0.5 is half-way across the screen.@*@* |
---|
| 2995 | |
---|
| 2996 | Default: left 0@* |
---|
| 2997 | |
---|
| 2998 | @anchor{top} |
---|
| 2999 | @subheading top |
---|
| 3000 | |
---|
| 3001 | Sets the vertical position of the element relative to its parent.@*@* |
---|
| 3002 | |
---|
| 3003 | Format: top <value>@* |
---|
| 3004 | Example: top 0.5@*@* |
---|
| 3005 | |
---|
| 3006 | Positions are relative to the parent (the top-left of the screen if the parent is an overlay, the top-left of the parent otherwise) and are expressed in terms of a proportion of screen size. Therefore 0.5 is half-way down the screen.@*@* |
---|
| 3007 | |
---|
| 3008 | Default: top 0@* |
---|
| 3009 | |
---|
| 3010 | @anchor{width} |
---|
| 3011 | @subheading width |
---|
| 3012 | |
---|
| 3013 | Sets the width of the element as a proportion of the size of the screen.@*@* |
---|
| 3014 | |
---|
| 3015 | Format: width <value>@* |
---|
| 3016 | Example: width 0.25@*@* |
---|
| 3017 | |
---|
| 3018 | Sizes are relative to the size of the screen, so 0.25 is a quarter of the screen. Sizes are not relative to the parent; this is common in windowing systems where the top and left are relative but the size is absolute.@*@* |
---|
| 3019 | |
---|
| 3020 | Default: width 1@* |
---|
| 3021 | |
---|
| 3022 | @anchor{height} |
---|
| 3023 | @subheading height |
---|
| 3024 | |
---|
| 3025 | Sets the height of the element as a proportion of the size of the screen.@*@* |
---|
| 3026 | |
---|
| 3027 | Format: height <value>@* |
---|
| 3028 | Example: height 0.25@*@* |
---|
| 3029 | |
---|
| 3030 | Sizes are relative to the size of the screen, so 0.25 is a quarter of the screen. Sizes are not relative to the parent; this is common in windowing systems where the top and left are relative but the size is absolute.@*@* |
---|
| 3031 | |
---|
| 3032 | Default: height 1@* |
---|
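Tying the position and size attributes together, a minimal sketch in the default relative metrics (names are illustrative) which centres a half-width, quarter-height panel on the screen might look like this:
@example
container Panel(Illustrative/CentredPanel)
{
    left 0.25
    top 0.375
    width 0.5
    height 0.25
    material Examples/TestMaterial
}
@end example
Since the width is 0.5 and the height 0.25, offsets of 0.25 and 0.375 leave equal margins on each side.@*@*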
| 3033 | |
---|
| 3034 | @anchor{overlay_material} |
---|
| 3035 | @subheading material |
---|
| 3036 | |
---|
| 3037 | Sets the name of the material to use for this element.@*@* |
---|
| 3038 | |
---|
| 3039 | Format: material <name>@* |
---|
| 3040 | Example: material Examples/TestMaterial@*@* |
---|
| 3041 | |
---|
| 3042 | This sets the base material which this element will use. Each type of element may interpret this differently; for example the OGRE element 'Panel' treats this as the background of the panel, whilst 'BorderPanel' interprets this as the material for the center area only. Materials should be defined in .material scripts.@*@* |
---|
| 3043 | Note that using a material in an overlay element automatically disables lighting and depth checking on this material. Therefore you should not use the same material for an overlay as you use for real 3D objects.@*@*
---|
| 3044 | |
---|
| 3045 | Default: none@* |
---|
| 3046 | |
---|
| 3047 | @anchor{caption} |
---|
| 3048 | @subheading caption |
---|
| 3049 | |
---|
| 3050 | Sets a text caption for the element.@*@* |
---|
| 3051 | |
---|
| 3052 | Format: caption <string>@* |
---|
| 3053 | Example: caption This is a caption@*@* |
---|
| 3054 | |
---|
| 3055 | Not all elements support captions, so each element is free to disregard this if it wants. However, a general text caption is so common to many elements that it is included in the generic interface to make it simpler to use. This is a common feature in GUI systems.@*@* |
---|
| 3056 | |
---|
| 3057 | Default: blank@* |
---|
| 3058 | |
---|
| 3059 | |
---|
| 3060 | @anchor{rotation} |
---|
| 3061 | @subheading rotation |
---|
| 3062 | |
---|
| 3063 | Sets the rotation of the element.@*@* |
---|
| 3064 | |
---|
| 3065 | Format: rotation <angle_in_degrees> <axis_x> <axis_y> <axis_z>@*
---|
| 3066 | Example: rotation 30 0 0 1@*@*
---|
| 3067 | |
---|
| 3068 | Default: none@*
---|
| 3069 | |
---|
| 3070 | @node Standard OverlayElements |
---|
| 3071 | @subsection Standard OverlayElements |
---|
| 3072 | |
---|
| 3073 | Although OGRE's OverlayElement and OverlayContainer classes are designed to be extended by application developers, there are a few elements which come as standard with OGRE. These include:
---|
| 3074 | @itemize @bullet |
---|
| 3075 | @item |
---|
| 3076 | @ref{Panel} |
---|
| 3077 | @item |
---|
| 3078 | @ref{BorderPanel} |
---|
| 3079 | @item |
---|
| 3080 | @ref{TextArea} |
---|
| 3081 | @item |
---|
| 3082 | @ref{TextBox} |
---|
| 3083 | @end itemize |
---|
| 3084 | @* |
---|
| 3085 | This section describes how you define their custom attributes in an .overlay script, but you can also change these custom properties in code if you wish. You do this by calling setParameter(paramname, value). You may wish to use the StringConverter class to convert your types to and from strings. |
---|
| 3086 | |
---|
| 3087 | @anchor{Panel} |
---|
| 3088 | @subheading Panel (container) |
---|
| 3089 | |
---|
| 3090 | This is the most bog-standard container you can use. It is a rectangular area which can contain other elements (or containers) and may or may not have a background, which can be tiled however you like. The background material is determined by the material attribute, but is only displayed if transparency is off.@*@* |
---|
| 3091 | |
---|
| 3092 | Attributes: |
---|
| 3093 | @table @asis |
---|
| 3094 | @item transparent <true | false> |
---|
| 3095 | If set to 'true' the panel is transparent and is not rendered itself; it is just used as a grouping level for its children.
---|
| 3096 | @item tiling <layer> <x_tile> <y_tile> |
---|
| 3097 | Sets the number of times the texture(s) of the material are tiled across the panel in the x and y direction. <layer> is the texture layer, from 0 to the number of texture layers in the material minus one. By setting tiling per layer you can create some nice multitextured backdrops for your panels; this works especially well when you animate one of the layers.
---|
| 3098 | @item uv_coords <topleft_u> <topleft_v> <bottomright_u> <bottomright_v> |
---|
| 3099 | Sets the texture coordinates to use for this panel. |
---|
| 3100 | @end table |
---|
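As a sketch of a complete Panel definition using these attributes (the names and tiling values are illustrative, and the material is assumed to have a single texture layer):
@example
container Panel(Illustrative/TiledBackground)
{
    left 0
    top 0
    width 1
    height 1
    material Examples/TestMaterial
    transparent false
    tiling 0 4 4
    uv_coords 0 0 1 1
}
@end example
Here layer 0 of the material is tiled 4 times in each direction across a full-screen panel.@*@*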
| 3101 | |
---|
| 3102 | @anchor{BorderPanel} |
---|
| 3103 | @subheading BorderPanel (container) |
---|
| 3104 | |
---|
| 3105 | This is a slightly more advanced version of Panel, where instead of just a single flat panel, the panel has a separate border which resizes with the panel. It does this by taking an approach very similar to the use of HTML tables for bordered content: the panel is rendered as 9 square areas, with the center area being rendered with the main material (as with Panel) and the outer 8 areas (the 4 corners and the 4 edges) rendered with a separate border material. The advantage of rendering the corners separately from the edges is that the edge textures can be designed so that they can be stretched without distorting them, meaning the single texture can serve any size panel.@*@* |
---|
| 3106 | |
---|
| 3107 | Attributes: |
---|
| 3108 | @table @asis |
---|
| 3109 | @item border_size <left> <right> <top> <bottom> |
---|
| 3110 | The size of the border at each edge, as a proportion of the size of the screen. This lets you have different size borders at each edge if you like, or you can use the same value 4 times to create a constant size border. |
---|
| 3111 | @item border_material <name> |
---|
| 3112 | The name of the material to use for the border. This is normally a different material to the one used for the center area, because the center area is often tiled which means you can't put border areas in there. You must put all the images you need for all the corners and the sides into a single texture. |
---|
| 3113 | @item border_topleft_uv <u1> <v1> <u2> <v2> |
---|
| 3114 | [also border_topright_uv, border_bottomleft_uv, border_bottomright_uv]; |
---|
| 3115 | The texture coordinates to be used for the corner areas of the border. 4 coordinates are required, 2 for the top-left corner of the square, 2 for the bottom-right of the square. |
---|
| 3116 | @item border_left_uv <u1> <v1> <u2> <v2> |
---|
| 3117 | [also border_right_uv, border_top_uv, border_bottom_uv]; |
---|
| 3118 | The texture coordinates to be used for the edge areas of the border. 4 coordinates are required, 2 for the top-left corner, 2 for the bottom-right. Note that you should design the texture so that the left & right edges can be stretched / squashed vertically and the top and bottom edges can be stretched / squashed horizontally without detrimental effects. |
---|
| 3119 | @end table |
---|
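A hedged sketch of a BorderPanel definition follows. The names are illustrative, and the UV values assume a border texture in which the corner images occupy the quarter-sized squares at each corner and the edge images occupy the strips between them:
@example
container BorderPanel(Illustrative/Frame)
{
    left 0.1
    top 0.1
    width 0.8
    height 0.8
    material Examples/CenterMaterial
    border_size 0.01 0.01 0.01 0.01
    border_material Examples/BorderMaterial
    border_topleft_uv     0.00 0.00 0.25 0.25
    border_topright_uv    0.75 0.00 1.00 0.25
    border_bottomleft_uv  0.00 0.75 0.25 1.00
    border_bottomright_uv 0.75 0.75 1.00 1.00
    border_left_uv        0.00 0.25 0.25 0.75
    border_right_uv       0.75 0.25 1.00 0.75
    border_top_uv         0.25 0.00 0.75 0.25
    border_bottom_uv      0.25 0.75 0.75 1.00
}
@end example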
| 3120 | |
---|
| 3121 | |
---|
| 3122 | @anchor{TextArea} |
---|
| 3123 | @subheading TextArea (element) |
---|
| 3124 | |
---|
| 3125 | This is a generic element that you can use to render text. It uses fonts which can be defined in code using the FontManager and Font classes, or which have been predefined in .fontdef files. See the font definitions section for more information.@*@* |
---|
| 3126 | |
---|
| 3127 | Attributes: |
---|
| 3128 | @table @asis |
---|
| 3129 | @item font_name <name> |
---|
| 3130 | The name of the font to use. This font must be defined in a .fontdef file to ensure it is available at scripting time. |
---|
| 3131 | @item char_height <height> |
---|
| 3132 | The height of the letters as a proportion of the screen height. Character widths may vary because OGRE supports proportional fonts, but will be based on this constant height. |
---|
| 3133 | @item colour <red> <green> <blue> |
---|
| 3134 | A solid colour to render the text in. Often fonts are defined in monochrome, so this allows you to colour them in nicely and use the same texture for multiple different coloured text areas. The colour elements should all be expressed as values between 0 and 1. If you use predrawn fonts which are already full colour then you don't need this. |
---|
| 3135 | @item colour_bottom <red> <green> <blue> / colour_top <red> <green> <blue> |
---|
| 3136 | As an alternative to a solid colour, you can colour the text differently at the top and bottom to create a gradient colour effect which can be very effective. |
---|
| 3137 | @end table |
---|
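As a sketch of a TextArea definition in relative metrics (the font name is a hypothetical entry from a .fontdef file, and the colours are arbitrary):
@example
element TextArea(Illustrative/Caption)
{
    left 0.1
    top 0.1
    width 0.8
    height 0.1
    font_name MyFont
    char_height 0.05
    colour_top 1 1 0
    colour_bottom 1 0.5 0
    caption Some sample text
}
@end example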
| 3138 | |
---|
| 3139 | @anchor{TextBox} |
---|
| 3140 | @subheading TextBox (element) |
---|
| 3141 | |
---|
| 3142 | This element is a box that allows text input. It is composed of 2 elements: a TextArea, which defines the size, colour etc. of the text as it is typed, and a back panel, which is the box element on which the text is written.@*@*
---|
| 3143 | |
---|
| 3144 | Attributes: |
---|
| 3145 | @table @asis |
---|
| 3146 | @item text_area <template name> [<caption>] |
---|
| 3147 | The name of the TextArea template to be used as the basis for the TextBox font. The optional caption is the text the textbox is initialised with. |
---|
| 3148 | @item back_panel <template name> |
---|
| 3149 | The name of the back panel template (e.g. a @ref{BorderPanel}) to be used as the basis for the back panel on which the text is written. This needs to be a container. |
---|
| 3150 | @end table |
---|
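A sketch of how the two templates and the TextBox itself might fit together; this assumes the overlay template mechanism described earlier in this chapter, and all names, materials and sizes are illustrative:
@example
template element TextArea(Illustrative/Templates/TextBoxText)
{
    font_name MyFont
    char_height 0.04
    colour 1 1 1
}

template container BorderPanel(Illustrative/Templates/TextBoxBack)
{
    material Examples/CenterMaterial
    border_size 0.01 0.01 0.01 0.01
    border_material Examples/BorderMaterial
}

element TextBox(Illustrative/NameEntry)
{
    left 0.25
    top 0.4
    width 0.5
    height 0.1
    text_area Illustrative/Templates/TextBoxText Enter a name
    back_panel Illustrative/Templates/TextBoxBack
}
@end example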
| 3151 | |
---|
| 3152 | |
---|
| 3153 | @node Font Definition Scripts |
---|
| 3154 | @section Font Definition Scripts |
---|
| 3155 | |
---|
| 3156 | OGRE uses texture-based fonts to render the TextAreaOverlayElement. You can also use the Font object for your own purposes if you wish. The final form of a font is a Material object generated by the font, and a set of 'glyph' (character) texture coordinate information.@*@*
---|
| 3157 | |
---|
| 3158 | There are 2 ways you can get a font into OGRE: |
---|
| 3159 | @enumerate |
---|
| 3160 | @item Design a font texture yourself using an art package or font generator tool |
---|
| 3161 | @item Ask OGRE to generate a font texture based on a truetype font |
---|
| 3162 | @end enumerate |
---|
| 3163 | |
---|
| 3164 | The former gives you the most flexibility and the best performance (in terms of startup times), but the latter is convenient if you want to quickly use a font without having to generate the texture yourself. I suggest prototyping using the latter and changing to the former for your final solution.@*@*
---|
| 3165 | |
---|
| 3166 | All font definitions are held in .fontdef files, which are parsed by the system at startup time. Each .fontdef file can contain multiple font definitions. The basic format of an entry in the .fontdef file is:
---|
| 3167 | @example |
---|
| 3168 | <font_name> |
---|
| 3169 | { |
---|
| 3170 | type <image | truetype> |
---|
| 3171 | source <image file | truetype font file> |
---|
| 3172 | ... |
---|
| 3173 | ... custom attributes depending on type |
---|
| 3174 | } |
---|
| 3175 | @end example |
---|
| 3176 | |
---|
| 3177 | @heading Using an existing font texture |
---|
| 3178 | |
---|
| 3179 | If you have one or more artists working with you, no doubt they can produce you a very nice font texture. OGRE supports full colour font textures, or alternatively you can keep them monochrome / greyscale and use TextArea's colouring feature. Font textures should always have an alpha channel, preferably an 8-bit alpha channel such as that supported by TGA and PNG files, because it can result in much nicer edges. To use an existing texture, here are the settings you need: |
---|
| 3180 | @table @asis |
---|
| 3181 | @item type image |
---|
| 3182 | This just tells OGRE you want a pre-drawn font. |
---|
| 3183 | @item source <filename> |
---|
| 3184 | This is the name of the image file you want to load. This will be loaded from the standard TextureManager resource locations and can be of any type OGRE supports, although JPEG is not recommended because of the lack of alpha and the lossy compression. I recommend PNG format which has both good lossless compression and an 8-bit alpha channel. |
---|
| 3185 | @item glyph <character> <u1> <v1> <u2> <v2> |
---|
| 3186 | This provides the texture coordinates for the specified character. You must repeat this for every character you have in the texture. The first 2 numbers are the u and v of the top-left corner; the second two are the u and v of the bottom-right corner. Note that you really should use a common height for all characters, but widths can vary because of proportional fonts.
---|
| 3187 | @end table |
---|
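Putting these settings together, a hand-drawn font definition might look something like the following; the file name and coordinates are purely illustrative, and the glyph lines would continue for every character present in the texture:
@example
MyFont
{
    type image
    source my_font_texture.png

    glyph A 0.0 0.0 0.0625 0.125
    glyph B 0.0625 0.0 0.125 0.125
    glyph C 0.125 0.0 0.1875 0.125
    ...
}
@end example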
| 3188 | |
---|
| 3189 | A note for Windows users: I recommend using BitmapFontBuilder (@url{http://www.lmnopc.com/bitmapfontbuilder/}), a free tool which will generate a texture and export character widths for you. You can find a tool for converting the binary output from this into 'glyph' lines in the Tools folder.@*
---|
| 3190 | |
---|
| 3191 | @heading Generating a font texture |
---|
| 3192 | |
---|
| 3193 | You can also generate font textures on the fly using truetype fonts. I don't recommend heavy use of this in production work because rendering the texture can take several seconds per font, which adds to the loading times. However it is a very nice way of quickly getting text output in a font of your choice.@*@*
---|
| 3194 | |
---|
| 3195 | Here are the attributes you need to supply: |
---|
| 3196 | @table @asis |
---|
| 3197 | @item type truetype |
---|
| 3198 | Tells OGRE to generate the texture from a font |
---|
| 3199 | @item source <ttf file> |
---|
| 3200 | The name of the ttf file to load. This will be searched for in the common resource locations and in any resource locations added to FontManager. |
---|
| 3201 | @item size <size_in_points> |
---|
| 3202 | The size at which to generate the font, in standard points. Note this only affects how big the characters are in the font texture, not how big they are on the screen. You should tailor this depending on how large you expect to render the fonts because generating a large texture will result in blurry characters when they are scaled very small (because of the mipmapping), and conversely generating a small font will result in blocky characters if large text is rendered. |
---|
| 3203 | @item resolution <dpi> |
---|
| 3204 | The resolution in dots per inch; this is used in conjunction with the point size to determine the final size. 72 / 96 dpi is normal.
---|
| 3205 | @item antialias_colour <true|false> |
---|
| 3206 | This is an optional flag, which defaults to 'false'. The generator will antialias the font by default using the alpha component of the texture, which will look fine if you use alpha blending to render your text (this is the default assumed by TextAreaOverlayElement for example). If, however, you wish to use a colour-based blend like add or modulate in your own code, you should set this to 'true' so the colour values are antialiased too. If you set this to true and use alpha blending, you'll find the edges of your font fade out too quickly, resulting in a 'thin' look to your fonts, because not only is the alpha blending the edges, the colour is fading too. Leave this option at the default if in doubt.
---|
| 3207 | @end table |
---|
| 3208 | @*@* |
---|
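For completeness, a sketch of a truetype entry in a .fontdef file; the font file name is illustrative and must be somewhere OGRE's resource system can find it:
@example
MyTrueTypeFont
{
    type truetype
    source myfont.ttf
    size 16
    resolution 96
}
@end example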
| 3209 | You can also create new fonts at runtime by using the FontManager if you wish. |
---|
| 3210 | |
---|
| 3211 | @node Mesh Tools |
---|
| 3212 | @chapter Mesh Tools |
---|
| 3213 | There are a number of mesh tools available with OGRE to help you manipulate your meshes. |
---|
| 3214 | @table @asis |
---|
| 3215 | @item @ref{Exporters} |
---|
| 3216 | For getting data out of modellers and into OGRE. |
---|
| 3217 | @item @ref{XmlConverter} |
---|
| 3218 | For converting meshes and skeletons to/from XML. |
---|
| 3219 | @item @ref{MeshUpgrader} |
---|
| 3220 | For upgrading binary meshes from one version of OGRE to another. |
---|
| 3221 | @end table |
---|
| 3222 | |
---|
| 3223 | @node Exporters |
---|
| 3224 | @section Exporters |
---|
| 3225 | |
---|
| 3226 | Exporters are plugins to 3D modelling tools which write meshes and skeletal animation to file formats which OGRE can use for realtime rendering. The files the exporters write end in .mesh and .skeleton respectively.@*@* |
---|
| 3227 | |
---|
| 3228 | Each exporter has to be written specifically for the modeller in question, although they all use a common set of facilities provided by the classes MeshSerializer and SkeletonSerializer. They also normally require you to own the modelling tool.@*@* |
---|
| 3229 | |
---|
| 3230 | All the exporters here can be built from the source code, or you can download precompiled versions from the OGRE web site.@*@* |
---|
| 3231 | |
---|
| 3232 | @heading A Note About Modelling / Animation For OGRE |
---|
| 3233 | There are a few rules when creating an animated model for OGRE: |
---|
| 3234 | @itemize @bullet |
---|
| 3235 | @item You must have no more than 4 weighted bone assignments per vertex. If you have more, OGRE will eliminate the lowest weighted assignments and renormalise the other weights. This limit is imposed by hardware blending limitations. |
---|
| 3236 | @item All vertices must be assigned to at least one bone - assign static vertices to the root bone. |
---|
| 3237 | @item At the very least each bone must have a keyframe at the beginning and end of the animation. |
---|
| 3238 | @end itemize |
---|
| 3239 | If you're creating unanimated meshes, then you do not need to be concerned with the above. |
---|
| 3240 | |
---|
| 3241 | Full documentation for each exporter is provided along with the exporter itself, and there is a list of the currently supported modelling tools in the OGRE Wiki at @url{http://www.ogre3d.org/wiki/index.php/Exporters}. |
---|
| 3242 | |
---|
| 3243 | @node XmlConverter |
---|
| 3244 | @section XmlConverter |
---|
| 3245 | |
---|
| 3246 | The OgreXmlConverter tool can convert binary .mesh and .skeleton files to XML and back again - this is a very useful tool for debugging the contents of meshes, or for exchanging mesh data easily. Many of the modeller mesh exporters export to XML because it is simpler to do, and OgreXmlConverter can then produce a binary from it. Other than simplicity, the other advantage is that OgreXmlConverter can generate additional information for the mesh, like bounding regions and level-of-detail reduction.@*@*
---|
| 3247 | |
---|
| 3248 | Syntax: |
---|
| 3249 | @example |
---|
| 3250 | Usage: OgreXMLConverter sourcefile [destfile] |
---|
| 3251 | sourcefile = name of file to convert |
---|
| 3252 | destfile = optional name of file to write to. If you don't |
---|
| 3253 | specify this OGRE works it out through the extension |
---|
| 3254 | and the XML contents if the source is XML. For example |
---|
| 3255 | test.mesh becomes test.xml, test.xml becomes test.mesh |
---|
| 3256 | if the XML document root is <mesh> etc. |
---|
| 3257 | @end example |
---|
| 3258 | When converting XML to .mesh, you will be prompted to (re)generate level-of-detail (LOD) information for the mesh. You can choose to skip this part if you wish, but doing it will allow your mesh to reduce in detail automatically when it is loaded into the engine. The engine uses a complex algorithm to determine the best parts of the mesh to reduce in detail, depending on many factors such as the curvature of the surface, the edges of the mesh, and seams at the edges of textures and smoothing groups - taking advantage of it is advised to make your meshes more scalable in real scenes.
---|
| 3259 | |
---|
| 3260 | @node MeshUpgrader |
---|
| 3261 | @section MeshUpgrader |
---|
| 3262 | This tool is provided to allow you to upgrade your meshes when the binary format changes - sometimes we alter it to add new features and as such you need to keep your own assets up to date. This tool has a very simple syntax:
---|
| 3263 | @example |
---|
| 3264 | OgreMeshUpgrade <oldmesh> <newmesh> |
---|
| 3265 | @end example |
---|
| 3266 | The OGRE release notes will tell you when this is necessary for a given release.
---|
| 3267 | |
---|
| 3268 | @include vbos.inc |
---|
| 3269 | @include texturesource.inc |
---|
| 3270 | @include shadows.inc |
---|
| 3271 | @include animation.inc |
---|
| 3272 | |
---|
| 3273 | @bye |
---|
| 3274 | |
---|