Index segments currently make up most of the mesh size. Different approaches to reducing their size have yielded insufficient results. The previous attempts were small, universal improvements that applied when saving all kinds of meshes. Some meshes can, however, be compressed further by increasing the size of one segment in order to shrink another. This is where SharedIndexSegment comes in.
A SharedIndexSegment is a segment that stores indices which are identical but would normally end up duplicated across completely different segments. A bitmask at the beginning of the segment signals which segments the indices belong to.
On its own this will not do a lot, since a large range of indices being shared across segments is very rare. This is where the cost of increasing the size of some other segments comes in.
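As a rough illustration of the bitmask idea, here is a minimal sketch in Python. The segment-kind names, their bit order, and the helper names are all assumptions for illustration, not the actual format:

```python
# Hypothetical sketch: each bit in the header bitmask marks one segment kind
# whose indices are stored in this shared segment. The kind list and its
# ordering are assumptions, not the real format.
SEGMENT_KINDS = ["vertex", "uv", "normal", "color"]

def encode_shared_header(shared_with):
    """Build the bitmask from the list of segment kinds that share the indices."""
    mask = 0
    for name in shared_with:
        mask |= 1 << SEGMENT_KINDS.index(name)
    return mask

def decode_shared_header(mask):
    """Recover which segment kinds the shared indices belong to."""
    return [name for bit, name in enumerate(SEGMENT_KINDS) if mask & (1 << bit)]

mask = encode_shared_header(["vertex", "uv"])
print(mask)                        # 3 (bits 0 and 1 set)
print(decode_shared_header(mask))  # ['vertex', 'uv']
```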
Let us imagine this hypothetical scenario:
We have a set of vertices:
[va,vb,vc,vd]
uvs:
[ua,ub,uc,ud]
combined into two triangles:
vertex index:
[0,1,2,3,1,2]
and uvs:
[1,2,3,0,1,3]
There are 5 unique combinations of vertex and uv indices:
[(0,1),(1,2),(2,3),(3,0),(1,1)]
If we change the vertex array to look like this:
[va,vb,vc,vd,vb]
and uv array to look like this:
[ub,uc,ud,ua,ub]
Each unique combination of uv and vertex data can be represented with a single index!
[0,1,2,3,4,2]
This has its downsides:
- Both the vertex and uv array now contain duplicate data.
- The highest value in the unified index array is now bigger.
- Computing the new data will increase write times.
This is why it will not fit every mesh, and it should be applied on a per-mesh basis. Additionally, reordering data may not be allowed for some user-generated meshes; someone might not want their mesh data reordered. This is why it will be a function: [unify_index_data].
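A minimal sketch of what such a function could do, in Python. The signature and return shape are assumptions; only the deduplication of (vertex index, uv index) pairs into a single unified index array follows from the description above:

```python
def unify_index_data(vertices, uvs, vertex_indices, uv_indices):
    """Sketch: collapse per-attribute index arrays into one unified index array.

    Each unique (vertex index, uv index) pair becomes one entry in the new
    vertex and uv arrays (duplicating data where needed), addressed by a
    single index. Names and layout are illustrative assumptions.
    """
    pair_to_unified = {}  # (vertex index, uv index) -> unified index
    new_vertices, new_uvs, unified = [], [], []
    for vi, ui in zip(vertex_indices, uv_indices):
        key = (vi, ui)
        if key not in pair_to_unified:
            pair_to_unified[key] = len(new_vertices)
            new_vertices.append(vertices[vi])
            new_uvs.append(uvs[ui])
        unified.append(pair_to_unified[key])
    return new_vertices, new_uvs, unified

# The worked example from above:
nv, nu, idx = unify_index_data(
    ["va", "vb", "vc", "vd"],
    ["ua", "ub", "uc", "ud"],
    [0, 1, 2, 3, 1, 2],
    [1, 2, 3, 0, 1, 3],
)
print(nv)   # ['va', 'vb', 'vc', 'vd', 'vb']
print(nu)   # ['ub', 'uc', 'ud', 'ua', 'ub']
print(idx)  # [0, 1, 2, 3, 4, 2]
```

Note that the order of the new arrays follows first occurrence of each pair, so the original triangle winding is preserved through the unified index array.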
This has the potential to drastically reduce the size of some meshes.
What is needed for this to work?