source: OGRE/trunk/ogrenew/Docs/src/vbos.inc @ 657

Revision 657, 39.2 KB checked in by mattausch, 19 years ago

added ogre dependencies and patched ogre sources

@node Hardware Buffers
@chapter Hardware Buffers
Vertex buffers, index buffers and pixel buffers inherit most of their features from the HardwareBuffer class. The general premise of a hardware buffer is that it is an area of memory with which you can do whatever you like; there is no format (vertex or otherwise) associated with the buffer itself - that is entirely up to interpretation by the methods that use it. In that way, a HardwareBuffer is just like an area of memory you might allocate using 'malloc', the difference being that this memory is likely to be located in GPU or AGP memory.
@node The Hardware Buffer Manager
@section The Hardware Buffer Manager
The HardwareBufferManager class is the factory hub of all the objects in the new geometry system. You create and destroy the majority of the objects you use to define geometry through this class. It's a Singleton, so you access it by calling HardwareBufferManager::getSingleton() - however be aware that it is only guaranteed to exist after the RenderSystem has been initialised (after you call Root::initialise); this is because the objects created are invariably API-specific, although you will deal with them through one common interface.
@*@*
For example:
@example
VertexDeclaration* decl = HardwareBufferManager::getSingleton().createVertexDeclaration();
@end example
@example
HardwareVertexBufferSharedPtr vbuf =
        HardwareBufferManager::getSingleton().createVertexBuffer(
                3*sizeof(Real), // size of one whole vertex
                numVertices, // number of vertices
                HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
                false); // no shadow buffer
@end example
Don't worry about the details of the above; we'll cover them in later sections. The important thing to remember is to always create objects through the HardwareBufferManager - don't use 'new' (it won't work anyway in most cases).
@node Buffer Usage
@section Buffer Usage
Because the memory in a hardware buffer is likely to be under significant contention during the rendering of a scene, the kind of access you need to the buffer over the time it is used is extremely important; whether you need to update the contents of the buffer regularly, and whether you need to be able to read information back from it, are important factors in how the graphics card manages the buffer. The method and exact parameters used to create a buffer depend on whether you are creating an index or vertex buffer (@xref{Hardware Vertex Buffers} and @xref{Hardware Index Buffers}), however one creation parameter is common to them both - the 'usage'.
@*@*
The most optimal type of hardware buffer is one which is not updated often, and is never read from. The usage parameter of createVertexBuffer or createIndexBuffer can be one of the following:
@table @code
@item HBU_STATIC
This means you do not need to update the buffer very often, but you might occasionally want to read from it.

@item HBU_STATIC_WRITE_ONLY
This means you do not need to update the buffer very often, and you do not need to read from it. However, you may read from its shadow buffer if you set one up (@xref{Shadow Buffers}). This is the optimal buffer usage setting.

@item HBU_DYNAMIC
This means you expect to update the buffer often, and that you may wish to read from it. This is the least optimal buffer setting.

@item HBU_DYNAMIC_WRITE_ONLY
This means you expect to update the buffer often, but that you never want to read from it. However, you may read from its shadow buffer if you set one up (@xref{Shadow Buffers}). If you use this option, and replace the entire contents of the buffer every frame, then you should use HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE instead, since that has better performance characteristics on some platforms.

@item HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE
This means that you expect to replace the entire contents of the buffer on an extremely regular basis, most likely every frame. By selecting this option, you free the system up from having to be concerned about losing the existing contents of the buffer at any time, because if it does lose them, you will be replacing them next frame anyway. On some platforms this can make a significant performance difference, so you should try to use this whenever you have a buffer you need to update regularly. Note that if you create a buffer this way, you should use the HBL_DISCARD flag when locking the contents of it for writing.

@end table
Choosing the usage of your buffers carefully is important to getting optimal performance out of your geometry. If you have a situation where you need to update a vertex buffer often, consider whether you actually need to update @strong{all} of it, or just some parts. If it's the latter, consider using more than one buffer, with only the data you need to modify in the HBU_DYNAMIC buffer.
@*@*
Always try to use the _WRITE_ONLY forms. This just means that you cannot read @emph{directly} from the hardware buffer, which is good practice because reading from hardware buffers is very slow. If you really need to read data back, use a shadow buffer, described in the next section.

@node Shadow Buffers
@section Shadow Buffers
As discussed in the previous section, reading data back from a hardware buffer performs very badly. However, if you have a cast-iron need to read the contents of the vertex buffer, you should set the 'shadowBuffer' parameter of createVertexBuffer or createIndexBuffer to 'true'. This causes the hardware buffer to be backed with a system memory copy, which you can read from with no more penalty than reading ordinary memory. The catch is that when you write data into this buffer, it will first update the system memory copy, then it will update the hardware buffer, as a separate copying process - therefore this technique has an additional overhead when writing data. Don't use it unless you really need it.
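
Conceptually, a shadowed buffer is just a write-through pair of memory areas: writes touch the system-memory copy first and are then uploaded, while reads are served entirely from the copy. The sketch below is plain standalone C++ (a simplified illustration, not Ogre's actual implementation) showing why writes get more expensive while reads become cheap:
@example
#include <cstring>
#include <vector>

// Simplified model of a shadowed hardware buffer: 'hardware' stands in for
// GPU/AGP memory, 'shadow' is the system-memory copy you can read cheaply.
class ShadowedBuffer
{
public:
    explicit ShadowedBuffer(size_t size) : hardware(size), shadow(size) {}

    // A write updates the shadow copy first, then uploads it - two copies.
    void write(size_t offset, const void* src, size_t len)
    {
        std::memcpy(&shadow[offset], src, len);
        std::memcpy(&hardware[offset], &shadow[offset], len); // the 'upload'
    }

    // A read never touches the hardware copy.
    const unsigned char* read(size_t offset) const { return &shadow[offset]; }

private:
    std::vector<unsigned char> hardware;
    std::vector<unsigned char> shadow;
};
@end example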

@node Locking buffers
@section Locking buffers
In order to read or update a hardware buffer, you have to 'lock' it. This performs 2 functions - it tells the card that you want access to the buffer (which can have an effect on its rendering queue), and it returns a pointer which you can manipulate. Note that if you've asked to read the buffer (and remember, you really shouldn't unless you've set the buffer up with a shadow buffer), the contents of the hardware buffer will have been copied into system memory somewhere in order for you to get access to it. For the same reason, when you're finished with the buffer you must unlock it; if you locked the buffer for writing this will trigger the process of uploading the modified information to the graphics hardware.
@*@*
@subheading Lock parameters
When you lock a buffer, you call one of the following methods:
@example
// Lock the entire buffer
pBuffer->lock(lockType);
// Lock only part of the buffer
pBuffer->lock(start, length, lockType);
@end example
The first call locks the entire buffer, the second locks only the section from 'start' (as a byte offset), for 'length' bytes. This could be faster than locking the entire buffer since less data is transferred, but not if you later update the rest of the buffer too, because doing it in small chunks like this means you cannot use HBL_DISCARD (see below).
@*@*
The lockType parameter can have a large effect on the performance of your application, especially if you are not using a shadow buffer.
@table @code
@item HBL_NORMAL
This kind of lock allows reading and writing from the buffer - it's also the least optimal because basically you're telling the card you could be doing anything at all. If you're not using a shadow buffer, it requires the buffer to be transferred from the card and back again. If you're using a shadow buffer the effect is minimal.
@item HBL_READ_ONLY
This means you only want to read the contents of the buffer. Best used when you created the buffer with a shadow buffer, because in that case the data does not have to be downloaded from the card.
@item HBL_DISCARD
This means you are happy for the card to discard the @emph{entire current contents} of the buffer. Implicitly this means you are not going to read the data - it also means that the card can avoid any stalls if the buffer is currently being rendered from, because it will actually give you an entirely different one. Use this wherever possible when you are locking a buffer which was not created with a shadow buffer. If you are using a shadow buffer it matters less, although with a shadow buffer it's preferable to lock the entire buffer at once, because that allows the shadow buffer to use HBL_DISCARD when it uploads the updated contents to the real buffer.
@item HBL_NO_OVERWRITE
This is useful if you are locking just part of the buffer and thus cannot use HBL_DISCARD. It tells the card that you promise not to modify any section of the buffer which has already been used in a rendering operation this frame. Again this is only useful on buffers with no shadow buffer.
@end table

Once you have locked a buffer, you can use the pointer returned however you wish (just don't bother trying to read the data that's there if you've used HBL_DISCARD, or write the data if you've used HBL_READ_ONLY). Modifying the contents depends on the type of buffer; @xref{Hardware Vertex Buffers} and @xref{Hardware Index Buffers}.

@node Practical Buffer Tips
@section Practical Buffer Tips
The interplay of usage mode on creation, and locking options when reading / updating, is important for performance. Here are some tips:
@enumerate
@item
Aim for the 'perfect' buffer by creating with HBU_STATIC_WRITE_ONLY, with no shadow buffer, and locking all of it once only with HBL_DISCARD to populate it. Never touch it again.
@item
If you need to update a buffer regularly, you will have to compromise. Use HBU_DYNAMIC_WRITE_ONLY when creating (still no shadow buffer), and use HBL_DISCARD to lock the entire buffer, or if you can't then use HBL_NO_OVERWRITE to lock parts of it.
@item
If you really need to read data from the buffer, create it with a shadow buffer. Make sure you use HBL_READ_ONLY when locking for reading, because that avoids the upload normally associated with unlocking the buffer. You can also combine this with either of the 2 previous points - obviously try for static if you can. Remember that the _WRITE_ONLY part refers to the hardware buffer, so it can safely be used with a shadow buffer you read from.
@item
Split your vertex buffers up if you find that your usage patterns for different elements of the vertex are different. There is no point having one huge updateable buffer with all the vertex data in it, if all you need to update is the texture coordinates. Split that part out into its own buffer and make the rest HBU_STATIC_WRITE_ONLY.
@end enumerate
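
The last tip can be quantified with a little arithmetic. Assuming a hypothetical vertex made of a 3D position, a normal and one set of 2D texture coordinates, splitting the texture coordinates into their own dynamic buffer shrinks the per-frame upload like this (plain C++; the names and layout are illustrative only):
@example
#include <cstddef>

// Per-vertex sizes when the layout is split into two buffers:
// positions + normals stay in a HBU_STATIC_WRITE_ONLY buffer,
// texture coordinates get their own HBU_DYNAMIC_WRITE_ONLY buffer.
const size_t staticVertexSize  = (3 + 3) * sizeof(float); // position + normal = 24 bytes
const size_t dynamicVertexSize = 2 * sizeof(float);       // 2D texcoord = 8 bytes

// Bytes that have to be re-uploaded each frame for a given vertex count;
// with one combined dynamic buffer this would be
// (staticVertexSize + dynamicVertexSize) * numVertices instead.
size_t dynamicBytesPerFrame(size_t numVertices)
{
    return numVertices * dynamicVertexSize;
}
@end example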


@node Hardware Vertex Buffers
@section Hardware Vertex Buffers
This section covers specialised hardware buffers which contain vertex data. For a general discussion of hardware buffers, along with the rules for creating and locking them, see the @ref{Hardware Buffers} section.

@node The VertexData class
@subsection The VertexData class
The VertexData class collects together all the vertex-related information used to render geometry. The new RenderOperation requires a pointer to a VertexData object, and it is also used in Mesh and SubMesh to store the vertex positions, normals, texture coordinates etc. VertexData can either be used alone (in order to render unindexed geometry, where the stream of vertices defines the triangles), or in combination with IndexData, where the triangles are defined by indexes which refer to the entries in VertexData.
@*@*
It's worth noting that you don't necessarily have to use VertexData to store your application's geometry; all that is required is that you can build a VertexData structure when it comes to rendering. This is pretty easy since all of VertexData's members are pointers, so you could maintain your vertex buffers and declarations in alternative structures if you like, so long as you can convert them for rendering.@*@*
The VertexData class has a number of important members:
@table @asis
@item vertexStart
The position in the bound buffers to start reading vertex data from. This allows you to use a single buffer for many different renderables.
@item vertexCount
The number of vertices to process in this particular rendering group.
@item vertexDeclaration
A pointer to a VertexDeclaration object which defines the format of the vertex input; note this is created for you by VertexData. @xref{Vertex Declarations}
@item vertexBufferBinding
A pointer to a VertexBufferBinding object which defines which vertex buffers are bound to which sources - again, this is created for you by VertexData. @xref{Vertex Buffer Bindings}
@end table

@node Vertex Declarations
@subsection Vertex Declarations
Vertex declarations define the vertex inputs used to render the geometry you want to appear on the screen. Basically this means that for each vertex, you want to feed a certain set of data into the graphics pipeline, which (you hope) will affect how it all looks when the triangles are drawn. Vertex declarations let you pull items of data (which we call vertex elements, represented by the VertexElement class) from any number of buffers, both shared and dedicated to that particular element. It's your job to ensure that the contents of the buffers make sense when interpreted in the way that your VertexDeclaration indicates that they should.@*@*
To add an element to a VertexDeclaration, you call its addElement method. The parameters to this method are:
@table @asis
@item source
This tells the declaration which buffer the element is to be pulled from. Note that this is just an index, which may range from 0 to one less than the number of buffers which are being bound as sources of vertex data. @xref{Vertex Buffer Bindings} for information on how a real buffer is bound to a source index. Storing the source of the vertex element this way (rather than using a buffer pointer) allows you to rebind the source of a vertex very easily, without changing the declaration of the vertex format itself.
@item offset
Tells the declaration how far in bytes the element is offset from the start of each whole vertex in this buffer. This will be 0 if this is the only element being sourced from this buffer, but if other elements are there then it may be higher. A good way of thinking of this is the size of all vertex elements which precede this element in the buffer.
@item type
This defines the data type of the vertex input, including its size. This is an important element because as GPUs become more advanced, we can no longer assume that position input will always require 3 floating point numbers, because programmable vertex pipelines allow full control over the inputs and outputs. This part of the element definition covers the basic type and size, e.g. VET_FLOAT3 is 3 floating point numbers - the meaning of the data is dealt with in the next parameter.
@item semantic
This defines the meaning of the element - the GPU will use this to determine what to use this input for, and programmable vertex pipelines will use this to identify which semantic to map the input to. This can identify the element as positional data, normal data, texture coordinate data, etc. See the API reference for full details of all the options.
@item index
This parameter is only required when you supply more than one element of the same semantic in one vertex declaration. For example, if you supply more than one set of texture coordinates, you would set the first set's index to 0, and the second set's to 1.
@end table

You can repeat the call to addElement for as many elements as you have in your vertex input structures. There are also useful methods on VertexDeclaration for locating elements within a declaration - see the API reference for full details.
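
The offset parameter described above is easy to get wrong by hand; each element simply starts where the previous one ends. The standalone sketch below mirrors that bookkeeping for a position/normal/texcoord layout, all sourced from buffer 0 (the sizes stand in for VET_FLOAT3 and VET_FLOAT2; in Ogre itself, VertexDeclaration tracks this for you):
@example
#include <cstddef>
#include <vector>

// Stand-ins for the sizes of VET_FLOAT3 and VET_FLOAT2 elements.
const size_t SIZE_FLOAT3 = 3 * sizeof(float); // 12 bytes
const size_t SIZE_FLOAT2 = 2 * sizeof(float); // 8 bytes

struct Element { unsigned short source; size_t offset; size_t size; };

// Build a position/normal/texcoord declaration, all from source 0.
std::vector<Element> makeDeclaration()
{
    const size_t sizes[] = { SIZE_FLOAT3, SIZE_FLOAT3, SIZE_FLOAT2 };
    std::vector<Element> decl;
    size_t offset = 0;
    for (size_t i = 0; i < 3; ++i)
    {
        Element e = { 0, offset, sizes[i] };
        decl.push_back(e);
        offset += sizes[i]; // next element starts where this one ends
    }
    // offsets come out as 0, 12 and 24; the whole vertex is 32 bytes
    return decl;
}
@end example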

@subheading Important Considerations
Whilst in theory you have completely free rein over the format of your vertices, in reality there are some restrictions. Older DirectX hardware imposes a fixed ordering on the elements which are pulled from each buffer; specifically any hardware prior to DirectX 9 may impose the following restrictions:
@itemize @bullet
@item
VertexElements should be added in the following order, and the order of the elements within any shared buffer should be as follows:
@enumerate
@item Positions
@item Blending weights
@item Normals
@item Diffuse colours
@item Specular colours
@item Texture coordinates (starting at 0, listed in order, with no gaps)
@end enumerate
@item
You must not have unused gaps in your buffers which are not referenced by any VertexElement
@item
You must not cause the buffer & offset settings of 2 VertexElements to overlap
@end itemize
OpenGL and DirectX 9 compatible hardware are not required to follow these strict limitations, so you might find, for example, that if you broke these rules your application would run under OpenGL and under DirectX on recent cards, but it is not guaranteed to run on older hardware under DirectX unless you stick to the above rules. For this reason you're advised to abide by them!

@node Vertex Buffer Bindings
@subsection Vertex Buffer Bindings
Vertex buffer bindings are about associating a vertex buffer with a source index used in @ref{Vertex Declarations}.
@subheading Creating the Vertex Buffer
Firstly, let's look at how you create a vertex buffer:
@example
HardwareVertexBufferSharedPtr vbuf =
        HardwareBufferManager::getSingleton().createVertexBuffer(
                3*sizeof(Real), // size of one whole vertex
                numVertices, // number of vertices
                HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
                false); // no shadow buffer
@end example

Notice that we use @ref{The Hardware Buffer Manager} to create our vertex buffer, and that a class called HardwareVertexBufferSharedPtr is returned from the method, rather than a raw pointer. This is because vertex buffers are reference counted - you are able to use a single vertex buffer as a source for multiple pieces of geometry, therefore a standard pointer would not be good enough, because you would not know when all the different users of it had finished with it. The HardwareVertexBufferSharedPtr class manages the buffer's destruction by keeping a reference count of the number of times it is being used - when the last HardwareVertexBufferSharedPtr is destroyed, the buffer itself is automatically destroyed.@*@*

The parameters to the creation of a vertex buffer are as follows:
@table @asis
@item vertexSize
The size in bytes of a whole vertex in this buffer. A vertex may include multiple elements, and in fact the contents of the vertex data may be reinterpreted by different vertex declarations if you wish. Therefore you must tell the buffer manager how large a whole vertex is, but not the internal format of the vertex, since that is down to the declaration to interpret. In the above example, the size is set to the size of 3 floating point values - this would be enough to hold a standard 3D position or normal, or a 3D texture coordinate, per vertex.
@item numVertices
The number of vertices in this buffer. Remember, not all the vertices have to be used at once - it can be beneficial to create large buffers which are shared between many chunks of geometry, because changing vertex buffer bindings is a render state switch, and those are best minimised.
@item usage
This tells the system how you intend to use the buffer. @xref{Buffer Usage}
@item useShadowBuffer
Tells the system whether you want this buffer backed by a system-memory copy. @xref{Shadow Buffers}
@end table

@subheading Binding the Vertex Buffer
The second part of the process is to bind the buffer which you have created to a source index. To do this, you call:
@example
vertexBufferBinding->setBinding(0, vbuf);
@end example
This results in the vertex buffer you created earlier being bound to source index 0, so any vertex element which is pulling its data from source index 0 will retrieve data from this buffer. @*@*
There are also methods for retrieving buffers from the binding data - see the API reference for full details.

@node Updating Vertex Buffers
@subsection Updating Vertex Buffers
The complexity of updating a vertex buffer entirely depends on how its contents are laid out. You can lock a buffer (@xref{Locking buffers}), but how you write data into it very much depends on what it contains.@*@*
Let's start with a very simple example. Let's say you have a buffer which only contains vertex positions, so it only contains sets of 3 floating point numbers per vertex. In this case, all you need to do to write data into it is:
@example
Real* pReal = static_cast<Real*>(vbuf->lock(HardwareBuffer::HBL_DISCARD));
@end example
... then you just write positions in chunks of 3 reals. If you have other floating point data in there, it's a little more complex but the principle is largely the same; you just need to write alternate elements. But what if you have elements of different types, or you need to derive how to write the vertex data from the elements themselves? Well, there are some useful methods on the VertexElement class to help you out.@*@*
Firstly, you lock the buffer but assign the result to an unsigned char* rather than a specific type. Then, for each element which is sourcing from this buffer (which you can find out by calling VertexDeclaration::findElementsBySource) you call VertexElement::baseVertexPointerToElement. This offsets a pointer which points at the base of a vertex in a buffer to the beginning of the element in question, and allows you to use a pointer of the right type to boot. Here's a full example:
@example
// Get base pointer (HBL_NORMAL allows both reading and writing)
unsigned char* pVert = static_cast<unsigned char*>(vbuf->lock(HardwareBuffer::HBL_NORMAL));
Real* pReal;
// Get the elements which source from this buffer (only needs doing once)
VertexDeclaration::VertexElementList elems = decl->findElementsBySource(bufferIdx);
VertexDeclaration::VertexElementList::iterator i;
for (size_t v = 0; v < vertexCount; ++v)
{
        for (i = elems.begin(); i != elems.end(); ++i)
        {
                VertexElement& elem = *i;
                if (elem.getSemantic() == VES_POSITION)
                {
                        elem.baseVertexPointerToElement(pVert, &pReal);
                        // write position using pReal
                }
                ...
        }
        pVert += vbuf->getVertexSize();
}
vbuf->unlock();
@end example

See the API docs for full details of all the helper methods on VertexDeclaration and VertexElement to assist you in manipulating vertex buffer data pointers.
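
Under the hood, baseVertexPointerToElement is nothing more than pointer arithmetic: it adds the element's byte offset to the base vertex pointer and reinterprets the result. Here is a standalone sketch of the mechanism (a hypothetical free function, not Ogre's actual member):
@example
#include <cstddef>

// Mimics what VertexElement::baseVertexPointerToElement does: offset the
// base vertex pointer by the element's byte offset and hand back a typed
// pointer through the out-parameter.
template <typename T>
void baseVertexPointerToElement(unsigned char* pBase, size_t elemOffset, T** ppElem)
{
    *ppElem = reinterpret_cast<T*>(pBase + elemOffset);
}
@end example
For example, with a 32-byte vertex whose normal starts at byte offset 12, passing 12 yields a float* aimed exactly at the normal within that vertex.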

@node Hardware Index Buffers
@section Hardware Index Buffers
Index buffers are used to render geometry by building triangles out of vertices indirectly, by reference to their position in the buffer, rather than just building triangles by sequentially reading vertices. Index buffers are simpler than vertex buffers, since they are just a list of indexes at the end of the day. However, they can be held on the hardware and shared between multiple pieces of geometry in the same way vertex buffers can, so the rules on creation and locking are the same. @xref{Hardware Buffers} for information.

@node The IndexData class
@subsection The IndexData class
This class summarises the information required to use a set of indexes to render geometry. Its members are as follows:
@table @asis
@item indexStart
The first index used by this piece of geometry; this can be useful for sharing a single index buffer among several geometry pieces.
@item indexCount
The number of indexes used by this particular renderable.
@item indexBuffer
The index buffer which is used to source the indexes.
@end table

@subheading Creating an Index Buffer
Index buffers are created using @xref{The Hardware Buffer Manager} just like vertex buffers. Here's how:
@example
HardwareIndexBufferSharedPtr ibuf = HardwareBufferManager::getSingleton().
        createIndexBuffer(
                HardwareIndexBuffer::IT_16BIT, // type of index
                numIndexes, // number of indexes
                HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
                false); // no shadow buffer
@end example
Once again, notice that the return type is a class rather than a pointer; this is reference counted so that the buffer is automatically destroyed when no more references are made to it. The parameters to the index buffer creation are:
@table @asis
@item indexType
There are 2 types of index: 16-bit and 32-bit. They both perform the same way, except that the latter can address larger vertex buffers. If your buffer includes more than 65536 vertices, then you will need to use 32-bit indexes. Note that you should only use 32-bit indexes when you need to, since they incur more overhead than 16-bit indexes, and are not supported on some older hardware.
@item numIndexes
The number of indexes in the buffer. As with vertex buffers, you should consider whether you can use a shared index buffer which is used by multiple pieces of geometry, since there can be performance advantages to switching index buffers less often.
@item usage
This tells the system how you intend to use the buffer. @xref{Buffer Usage}
@item useShadowBuffer
Tells the system whether you want this buffer backed by a system-memory copy. @xref{Shadow Buffers}
@end table
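
The 16-bit versus 32-bit decision follows directly from the vertex count: a 16-bit index holds values 0 to 65535, so it can address at most 65536 distinct vertices. The rule as a tiny sketch (an illustrative helper, not an Ogre function):
@example
#include <cstddef>

// Returns true when 16-bit indexes can no longer address every vertex
// and HardwareIndexBuffer::IT_32BIT becomes necessary.
bool needs32BitIndexes(size_t numVertices)
{
    return numVertices > 65536; // 16-bit indexes cover values 0..65535
}
@end example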

@node Updating Index Buffers
@subsection Updating Index Buffers
Updating index buffers can only be done when you lock the buffer for writing; @xref{Locking buffers} for details. Locking returns a void pointer, which must be cast to the appropriate type; with index buffers this is either an unsigned short (for 16-bit indexes) or an unsigned long (for 32-bit indexes). For example:
@example
unsigned short* pIdx = static_cast<unsigned short*>(ibuf->lock(HardwareBuffer::HBL_DISCARD));
@end example
You can then write to the buffer using the usual pointer semantics; just remember to unlock the buffer when you're finished!
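
For instance, two triangles making up a quad of vertices 0-3 would be written through such a pointer like this (a plain function over the pointer, so the sketch is self-contained; in Ogre the pointer would come from the lock call above and ibuf->unlock() would follow):
@example
// Fill six 16-bit indexes with two triangles forming a quad (vertices 0-3).
void writeQuadIndexes(unsigned short* pIdx) // pIdx: the locked pointer
{
    // First triangle: vertices 0, 1, 2
    *pIdx++ = 0; *pIdx++ = 1; *pIdx++ = 2;
    // Second triangle: vertices 2, 1, 3 (shares the edge between 1 and 2)
    *pIdx++ = 2; *pIdx++ = 1; *pIdx++ = 3;
}
@end example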

@node Hardware Pixel Buffers
@section Hardware Pixel Buffers

Hardware pixel buffers are a special kind of buffer that stores graphical data in graphics card memory, generally for use as textures. Pixel buffers can represent a one dimensional, two dimensional or three dimensional image. A texture can consist of several of these buffers.

In contrast to vertex and index buffers, pixel buffers are not constructed directly. When creating a texture, the necessary pixel buffers to hold its data are constructed automatically.

@node Textures
@subsection Textures

A texture is an image that can be applied onto the surface of a three dimensional model. In Ogre, textures are represented by the Texture resource class.

@subheading Creating a texture

Textures are created through the TextureManager. In most cases they are created from image files directly by the Ogre resource system. If you are reading this, you most probably want to create a texture manually so that you can provide it with image data yourself. This is done through TextureManager::createManual:

@example
ptex = TextureManager::getSingleton().createManual(
    "MyManualTexture", // Name of texture
    "General", // Name of resource group in which the texture should be created
    TEX_TYPE_2D, // Texture type
    256, // Width
    256, // Height
    1, // Depth (Must be 1 for two dimensional textures)
    0, // Number of mipmaps
    PF_A8R8G8B8, // Pixel format
    TU_DYNAMIC_WRITE_ONLY // usage
);
@end example

This example creates a texture named @emph{MyManualTexture} in resource group @emph{General}. It is a square @emph{two dimensional} texture, with width 256 and height 256. It has @emph{no mipmaps}, internal format @emph{PF_A8R8G8B8} and usage @emph{TU_DYNAMIC_WRITE_ONLY}.

The different texture types will be discussed in @ref{Texture Types}. Pixel formats are summarised in @ref{Pixel Formats}.

@subheading Texture usages

In addition to the hardware buffer usages as described in @xref{Buffer Usage}, there are some usage flags specific to textures:
@table @asis
@item TU_AUTOMIPMAP
Mipmaps for this texture will be automatically generated by the graphics hardware. The exact algorithm used is not defined, but you can assume it to be a 2x2 box filter.
@item TU_RENDERTARGET
This texture will be a render target, i.e. used as a target for render to texture. Setting this flag will cause all other texture usages except TU_AUTOMIPMAP to be ignored.
@item TU_DEFAULT
This is actually a combination of usage flags, and is equivalent to TU_AUTOMIPMAP | TU_STATIC_WRITE_ONLY. The resource system uses these flags for textures that are loaded from images.

@end table

@subheading Getting a PixelBuffer

A Texture can consist of multiple PixelBuffers, one for each combination of mipmap level and face number. To get a PixelBuffer from a Texture object, the method Texture::getBuffer(face, mipmap) is used:

@emph{face} should be zero for non-cubemap textures. For cubemap textures it identifies the face to use, which is one of the cube faces described in @xref{Texture Types}.

@emph{mipmap} is zero for the zeroth mipmap level, one for the first mipmap level, and so on. On textures that have automatic mipmap generation (TU_AUTOMIPMAP) only level 0 should be accessed; the rest will be taken care of by the rendering API.

A simple example of using getBuffer is:
@example
// Get the PixelBuffer for face 0, mipmap 0.
HardwarePixelBufferSharedPtr ptr = tex->getBuffer(0,0);
@end example

@node Updating Pixel Buffers
@subsection Updating Pixel Buffers

Pixel buffers can be updated in two different ways: a simple, convenient way and a more difficult (but in some cases faster) method. Both methods make use of PixelBox objects (@xref{Pixel boxes}) to represent image data in memory.

@subheading blitFromMemory

The easy method to get an image into a PixelBuffer is by using HardwarePixelBuffer::blitFromMemory. This takes a PixelBox object and does all necessary pixel format conversion and scaling for you. For example, to create a manual texture and load an image into it, all you have to do is:

@example
// Manually loads an image and puts the contents in a manually created texture
Image img;
img.load("elephant.png", "General");
// Create RGB texture with 5 mipmaps
TexturePtr tex = TextureManager::getSingleton().createManual(
    "elephant",
    "General",
    TEX_TYPE_2D,
    img.getWidth(), img.getHeight(),
    5, PF_X8R8G8B8);
// Copy face 0 mipmap 0 of the image to face 0 mipmap 0 of the texture.
tex->getBuffer(0,0)->blitFromMemory(img.getPixelBox(0,0));
@end example
366
367@subheading Direct memory locking
368
369A more advanced method to transfer image data from and to a PixelBuffer is to use locking. By locking a PixelBuffer
370you can directly access its contents in whatever the internal format of the buffer inside the GPU is.
371
372@example
373/// Lock the buffer so we can write to it
374buffer->lock(HardwareBuffer::HBL_DISCARD);
375const PixelBox &pb = buffer->getCurrentLock();
376
377/// Update the contents of pb here
378/// Image data starts at pb.data and has format pb.format
379/// Here we assume data.format is PF_X8R8G8B8 so we can address pixels as uint32.
380uint32 *data = static_cast<uint32*>(pb.data);
381size_t height = pb.getHeight();
382size_t width = pb.getWidth();
383size_t rowSkip = pb.getRowSkip(); // Skip between rows of image
384for(size_t y=0; y<height; ++y)
385{
386    for(size_t x=0; x<width; ++x)
387{
388        // 0xRRGGBB -> fill the buffer with yellow pixels
389        data[rowSkip*y + x] = 0x00FFFF00;
390}
391}
392
393/// Unlock the buffer again (frees it for use by the GPU)
394buffer->unlock();
395@end example


@node Texture Types
@subsection Texture Types

There are four types of textures supported by current hardware. Three of them differ only in the number of dimensions
they have (one, two or three); the fourth one is special. The different texture types are:

@table @asis

@item TEX_TYPE_1D
One dimensional texture, used in combination with 1D texture coordinates.
@item TEX_TYPE_2D
Two dimensional texture, used in combination with 2D texture coordinates.
@item TEX_TYPE_3D
Three dimensional volume texture, used in combination with 3D texture coordinates.
@item TEX_TYPE_CUBE_MAP
Cube map (six two dimensional textures, one for each cube face), used in combination with 3D texture coordinates.

@end table

@subheading Cube map textures

The cube map texture type (TEX_TYPE_CUBE_MAP) is a different beast from the others; a cube map texture represents a series of six two dimensional images addressed by 3D texture coordinates.

@table @asis
@item +X (face 0)
Represents the positive x plane (right).
@item -X (face 1)
Represents the negative x plane (left).
@item +Y (face 2)
Represents the positive y plane (top).
@item -Y (face 3)
Represents the negative y plane (bottom).
@item +Z (face 4)
Represents the positive z plane (front).
@item -Z (face 5)
Represents the negative z plane (back).
@end table

@node Pixel Formats
@subsection Pixel Formats

A pixel format describes the storage format of pixel data; it defines the way pixels are encoded in memory. The following classes of pixel formats (PF_*) are defined:

@table @asis
@item Native endian formats (PF_A8R8G8B8 and other formats with bit counts)
These are native endian (16, 24 and 32 bit) integers in memory. This means that an image with format PF_A8R8G8B8 can be seen as an array of 32 bit integers, defined as 0xAARRGGBB in hexadecimal. The meaning of the letters is described below.

@item Byte formats (PF_BYTE_*)
These formats have one byte per channel, and their channels in memory are organized in the order they are specified in the format name. For example, PF_BYTE_RGBA consists of blocks of four bytes: one for red, one for green, one for blue, one for alpha.

@item Short formats (PF_SHORT_*)
These formats have one unsigned short (16 bit integer) per channel, and their channels in memory are organized in the order they are specified in the format name. For example, PF_SHORT_RGBA consists of blocks of four 16 bit integers: one for red, one for green, one for blue, one for alpha.

@item Float16 formats (PF_FLOAT16_*)
These formats have one 16 bit floating point number per channel, and their channels in memory are organized in the order they are specified in the format name. For example, PF_FLOAT16_RGBA consists of blocks of four 16 bit floats: one for red, one for green, one for blue, one for alpha. The 16 bit floats (also called half floats) are very similar to IEEE single-precision 32 bit floats, except that they have only 5 exponent bits and 10 mantissa bits. Note that there is no standard C++ data type or CPU support to work with these efficiently, but GPUs can
calculate with these much more efficiently than with 32 bit floats.

@item Float32 formats (PF_FLOAT32_*)
These formats have one 32 bit floating point number per channel, and their channels in memory are organized in the order they are specified in the format name. For example, PF_FLOAT32_RGBA consists of blocks of four 32 bit floats: one for red, one for green, one for blue, one for alpha. The C++ data type for these 32 bit floats is simply "float".

@item Compressed formats (PF_DXT[1-5])
S3TC compressed texture formats; a good description can be found at Wikipedia (http://en.wikipedia.org/wiki/S3TC)

@end table

@subheading Colour channels

The meaning of the channels R, G, B, A, L and X is defined as

@table @asis
@item R
Red colour component, usually ranging from 0.0 (no red) to 1.0 (full red).
@item G
Green colour component, usually ranging from 0.0 (no green) to 1.0 (full green).
@item B
Blue colour component, usually ranging from 0.0 (no blue) to 1.0 (full blue).
@item A
Alpha component, usually ranging from 0.0 (entirely transparent) to 1.0 (opaque).
@item L
Luminance component, usually ranging from 0.0 (black) to 1.0 (white). The luminance component is duplicated in the R, G, and B channels to achieve a greyscale image.
@item X
This component is completely ignored.
@end table

If a format defines none of the red, green, blue or luminance components, these default to 0. The alpha channel is different: if no alpha is defined, it defaults to 1.

@subheading Complete list of pixel formats

The pixel formats supported by the current version of Ogre are

@table @asis

@item Byte formats
PF_BYTE_RGB, PF_BYTE_BGR, PF_BYTE_BGRA, PF_BYTE_RGBA, PF_BYTE_L, PF_BYTE_LA, PF_BYTE_A

@item Short formats
PF_SHORT_RGBA

@item Float16 formats
PF_FLOAT16_R, PF_FLOAT16_RGB, PF_FLOAT16_RGBA

@item Float32 formats
PF_FLOAT32_R, PF_FLOAT32_RGB, PF_FLOAT32_RGBA

@item 8 bit native endian formats
PF_L8, PF_A8, PF_A4L4, PF_R3G3B2

@item 16 bit native endian formats
PF_L16, PF_R5G6B5, PF_B5G6R5, PF_A4R4G4B4, PF_A1R5G5B5

@item 24 bit native endian formats
PF_R8G8B8, PF_B8G8R8

@item 32 bit native endian formats
PF_A8R8G8B8, PF_A8B8G8R8, PF_B8G8R8A8, PF_R8G8B8A8, PF_X8R8G8B8, PF_X8B8G8R8, PF_A2R10G10B10, PF_A2B10G10R10

@item Compressed formats
PF_DXT1, PF_DXT2, PF_DXT3, PF_DXT4, PF_DXT5

@end table

@node Pixel boxes
@subsection Pixel boxes

All methods in Ogre that take or return raw image data use a PixelBox object.

A PixelBox is a primitive describing a volume (3D), image (2D) or line (1D) of pixels in CPU memory. It describes the location and data format of a region of memory used for image data, but does not do any memory management itself.

Inside the memory pointed to by the @emph{data} member of a pixel box, pixels are stored as a succession of "depth" slices (in Z), each containing "height" rows (Y) of "width" pixels (X).

Dimensions that are not used must be 1. For example, a one dimensional image will have extents (width,1,1). A two dimensional image has extents (width,height,1).

A PixelBox has the following members:
@table @asis
@item data
The pointer to the first component of the image data in memory.
@item format
The pixel format (@xref{Pixel Formats}) of the image data.
@item rowPitch
The number of elements between the leftmost pixel of one row and the leftmost pixel of the next. For compressed formats this value must always be equal to getWidth() (consecutive).
@item slicePitch
The number of elements between the top left pixel of one (depth) slice and the top left pixel of the next. It must be a multiple of rowPitch. For compressed formats this value must always be equal to getWidth()*getHeight() (consecutive).
@item left, top, right, bottom, front, back
Extents of the box in three dimensional integer space. Note that the left, top, and front edges are included but the right, bottom and back ones are not. @emph{left} must always be less than or equal to @emph{right}, @emph{top} must always be less than or equal to @emph{bottom}, and @emph{front} must always be less than or equal to @emph{back}.
@end table

It also has some useful methods:
@table @asis
@item getWidth()
Get the width of this box.
@item getHeight()
Get the height of this box. This is 1 for one dimensional images.
@item getDepth()
Get the depth of this box. This is 1 for one and two dimensional images.
@item setConsecutive()
Set the rowPitch and slicePitch so that the buffer is laid out consecutively in memory.
@item getRowSkip()
Get the number of elements between one past the rightmost pixel of one row and the leftmost pixel of the next row. This is zero if rows are consecutive.
@item getSliceSkip()
Get the number of elements between one past the bottom right pixel of one slice and the top left pixel of the next slice. This is zero if slices are consecutive.
@item isConsecutive()
Return whether this buffer is laid out consecutively in memory (i.e. the pitches are equal to the dimensions).
@item getConsecutiveSize()
Return the size (in bytes) this image would take if it were laid out consecutively in memory.
@item getSubVolume(const Box &def)
Return a subvolume of this PixelBox, as a PixelBox.
@end table

For more information about these methods consult the API documentation.
