Compressed Vertex Data


Vertex data is read by the shader in 32-byte chunks, and if the vertex data is indexed (which it most probably is) it is also better for it to be aligned to 32 or 64 bytes. Consequently, a 32-byte vertex is much better than, for example, a 36-byte vertex. And (this is the ugly part) a 32-byte vertex could even be better than, for example, a 20-byte vertex, because an unaligned layout can force the GPU to read two or three times the data in each fetch. However, if your vertex data is read mostly sequentially, having less than 32 bytes per vertex may not produce a performance penalty, because the data is cached, and you get to store your model using less memory.
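
For instance, a position + normal + UV layout fills a 32-byte vertex exactly (XNA's built-in VertexPositionNormalTexture already has this size). A minimal sketch of such a declaration, assuming XNA 4.0; the struct name is illustrative and not part of the engine:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Hypothetical 32-byte vertex: 12 (position) + 12 (normal) + 8 (UV) = 32 bytes,
    // i.e. exactly one fetch chunk per vertex.
    public struct VertexPositionNormalTexture32 : IVertexType
    {
        public Vector3 Position;          // 12 bytes
        public Vector3 Normal;            // 12 bytes
        public Vector2 TextureCoordinate; //  8 bytes

        public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
            new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
            new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0));

        VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
    }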

There are other cases in which aligned data is not necessarily better. For example, if I remove the binormal channel, my vertex data (for this kind of model) is stored in two 32-byte chunks, which is great. But if I put the binormal data back, my vertex data uses three chunks of unaligned data, and in some tests I found that this could still be better, because it avoids a binormal calculation in the shader that could cost more than the fetch itself.
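
The calculation being traded against the extra fetch is the usual tangent-space reconstruction, where the binormal is rebuilt from the normal and the tangent. A sketch of that math on the CPU side for illustration; the helper name and the handedness convention are assumptions, not taken from the engine's shaders:

    using Microsoft.Xna.Framework;

    // Reconstructing the binormal instead of storing it:
    // B = cross(N, T) * handedness, where handedness is +1 or -1
    // depending on the UV winding of the triangle.
    public static Vector3 ReconstructBinormal(Vector3 normal, Vector3 tangent, float handedness)
    {
        return Vector3.Cross(normal, tangent) * handedness;
    }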

It is not easy to find the right path, and things get worse when flexibility is also a goal.

What can be done?

Identify the common cases and try to pack the information for them efficiently. Some models could have only positions and normals. In old versions I made special shaders so that everything works well with these models. However, these models are actually very rare in production, and if they are not supported then I don't need extra shaders (and possibly a lot of extra GPU calls at runtime); this simple data will simply become positions, normals and UVs (32 bytes). I will try to create dummy UV data for these models to avoid unnecessary exceptions at runtime (a sketch of this is shown below).
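
A minimal sketch of how such a dummy UV channel could be added in a content processor, assuming the standard XNA content pipeline types; the helper class and method names are hypothetical and not part of the engine:

    using System.Linq;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Content.Pipeline.Graphics;

    static class DummyUvHelper
    {
        // Adds a zero-filled texture coordinate channel to every geometry
        // that lacks one, so all meshes share the same 32-byte layout.
        public static void AddDummyTextureCoordinates(MeshContent mesh)
        {
            foreach (GeometryContent geometry in mesh.Geometry)
            {
                string channelName = VertexChannelNames.TextureCoordinate(0);
                if (!geometry.Vertices.Channels.Contains(channelName))
                {
                    geometry.Vertices.Channels.Add<Vector2>(
                        channelName,
                        Enumerable.Repeat(Vector2.Zero, geometry.Vertices.VertexCount));
                }
            }
        }
    }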

For models with positions, normals and one UV set, the best option is to leave the data uncompressed and pack it in one 32-byte chunk per vertex; besides, texture coordinates outside the range (0, 1) could be needed.
The rest of the models are probably stored better with compressed data. The question is whether the binormal information is stored or not (see the sketch below).
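
For illustration, a compressed layout could use XNA's packed vector formats, keeping the position in full precision, packing the normal, tangent and UV, and reconstructing the binormal in the shader. This is only an assumption about one possible packing, not the layout actually used in VertexAndFragmentDeclarations.fxh:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Graphics.PackedVector;

    // Hypothetical compressed vertex: 12 + 8 + 8 + 4 = 32 bytes (one chunk),
    // with the binormal reconstructed in the shader instead of stored.
    public struct CompressedVertex : IVertexType
    {
        public Vector3 Position;              // 12 bytes, full precision
        public HalfVector4 Normal;            //  8 bytes (w unused or handedness)
        public HalfVector4 Tangent;           //  8 bytes
        public HalfVector2 TextureCoordinate; //  4 bytes

        public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
            new VertexElement(0,  VertexElementFormat.Vector3,     VertexElementUsage.Position, 0),
            new VertexElement(12, VertexElementFormat.HalfVector4, VertexElementUsage.Normal, 0),
            new VertexElement(20, VertexElementFormat.HalfVector4, VertexElementUsage.Tangent, 0),
            new VertexElement(28, VertexElementFormat.HalfVector2, VertexElementUsage.TextureCoordinate, 0));

        VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
    }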

XNA Final Engine

The vertex data compression is performed in the ProcessVertexChannel method of the RigidModelProcessor and SkinnedModelProcessor classes. You could improve this compression by analyzing your own application's constraints; however, a compression scheme that is both flexible and efficient is already implemented, so if performance is not an issue just leave these classes untouched.
Important: vertex declarations are placed in VertexAndFragmentDeclarations.fxh.
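
As a rough illustration of where such a change would go, a ProcessVertexChannel override can convert a channel to a packed format with the standard content pipeline API. This is a generic sketch; the processor name and the choice of packing UVs into 16-bit floats are assumptions, not the actual code of RigidModelProcessor or SkinnedModelProcessor:

    using Microsoft.Xna.Framework.Content.Pipeline;
    using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
    using Microsoft.Xna.Framework.Content.Pipeline.Processors;
    using Microsoft.Xna.Framework.Graphics.PackedVector;

    [ContentProcessor(DisplayName = "Compressed Model Processor (sketch)")]
    public class CompressedModelProcessor : ModelProcessor
    {
        protected override void ProcessVertexChannel(GeometryContent geometry,
                                                     int vertexChannelIndex,
                                                     ContentProcessorContext context)
        {
            string channelName = geometry.Vertices.Channels[vertexChannelIndex].Name;

            // Pack texture coordinates into 16-bit floats; leave every other
            // channel to the base ModelProcessor.
            if (channelName == VertexChannelNames.TextureCoordinate(0))
            {
                geometry.Vertices.Channels.ConvertChannelContent<HalfVector2>(vertexChannelIndex);
            }
            else
            {
                base.ProcessVertexChannel(geometry, vertexChannelIndex, context);
            }
        }
    }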

References


Shawn Hargreaves, Compressed Vertex Data, http://blogs.msdn.com/b/shawnhar/archive/2010/11/19/compressed-vertex-data.aspx
Richard Huddy, Optimizing DirectX9 Graphics, GDC 2006.
Shanon Drone, Windows to Reality - Getting the Most out of Direct3D 10 Graphics in your Games, Gamefest 2007.

