lukaskalbertodt / lox

Fast polygon mesh library with different data structures and traits to abstract over those.
Home Page: https://docs.rs/lox
License: Apache License 2.0
Criterion 0.3 soft-deprecated a couple of functions our benchmarks use. These should be updated to `BenchmarkGroup`, presumably.
The module is very involved and seems like it could be its own crate, which would also reduce the API surface area of `lox`.
Reading and writing files should be easy. Thus, this library should support a variety of mesh formats. These are the ones we absolutely want:
But we probably want to implement even more than that. Related Wiki page.
All map implementations should have an extensive test suite. Since all those tests are nearly identical, we should create a macro to quickly create a bunch of tests.
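Such a test-generating macro could look roughly like this (a generic sketch using `std` maps; the real map API in lox differs, and in the actual crate the generated functions would be `#[test]`s):

```rust
// Sketch: a macro that stamps out an identical test suite for every map
// implementation under test; the constructor expression is passed in.
macro_rules! gen_map_tests {
    ($fn_name:ident, $new_map:expr) => {
        fn $fn_name() {
            let mut map = $new_map;
            // Missing key before insertion...
            assert_eq!(map.get(&7), None);
            map.insert(7u32, "a");
            // ...and a successful lookup after.
            assert_eq!(map.get(&7), Some(&"a"));
        }
    };
}

// One invocation per implementation under test:
gen_map_tests!(test_hash_map, std::collections::HashMap::<u32, &str>::new());
gen_map_tests!(test_btree_map, std::collections::BTreeMap::<u32, &str>::new());

fn main() {
    test_hash_map();
    test_btree_map();
}
```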
TODO: Fill out later, once this library is more complete
The `EdgeMesh` trait lets users explicitly reference edges. We should also have a similar interface for half edges, as this is useful in several situations. However, this could be tricky, as different data structures deal with half edges differently: the half-edge mesh (HEM) has boundary half edges, for example, while the directed-edge mesh (DEM) does not.
We have two arrays: `vertices` and `faces`. For each element we can specify additional properties after the "core info". The question is how to specify multiple properties. Potential syntaxes:
As tuple:

```
vertices: [
    v0: (prop0, prop1, prop2), // multiple
    v1: (prop0),               // single
]
```

Simple, but feels like too many parentheses, especially for a single element.
As tuple, with special-casing for a single value:

```
vertices: [
    v0: (prop0, prop1, prop2), // multiple
    v1: prop0,                 // single
]
```
Problem: if someone tries to specify a position as a tuple (after all, `(f32, f32, f32)` implements `Pos3Like`), strange errors will occur: the tuple will be parsed as three properties instead of one. Generally, it feels like too much magic IMO.
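To illustrate that ambiguity: a bare tuple can itself be a full-fledged position type, so in the special-cased syntax `v1: (a, b, c)` could mean either one position property or three separate properties. A toy stand-in for the `Pos3Like` trait (the real trait in lox differs in detail):

```rust
// Toy stand-in for lox's `Pos3Like` trait; the real trait looks different.
trait Pos3Like {
    fn x(&self) -> f32;
    fn y(&self) -> f32;
    fn z(&self) -> f32;
}

// A bare 3-tuple is a perfectly valid position type...
impl Pos3Like for (f32, f32, f32) {
    fn x(&self) -> f32 { self.0 }
    fn y(&self) -> f32 { self.1 }
    fn z(&self) -> f32 { self.2 }
}

fn main() {
    let pos = (1.0f32, 2.0, 3.0);
    // ...so a macro seeing `v1: (1.0, 2.0, 3.0)` cannot tell whether the
    // tuple is one `Pos3Like` property or three `f32` properties.
    assert_eq!(pos.x(), 1.0);
    assert_eq!(pos.z(), 3.0);
}
```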
Comma-separated properties, with semicolon-separated elements:

```
vertices: [
    v0: prop0, prop1, prop2; // multiple
    v1: prop0;               // single
]
```
This diverges from the normal Rust list-like syntax: `[a, b, c]`.
Not sure how much of this still makes sense, but I wanted to transfer everything from private notes to GitHub.
- `Deref<Target=Handle>` for `ElementRef`
- Make the `{Element}Ref` thingies a single pointer? Also make them always valid, i.e. non-constructable by users.
- Add `add_vertex(pos)` and the like methods to fat meshes (automatically?)
- `derive(FatMesh)` to add several methods like `add_vertex(position)`.
- `(core mesh, position)` again, to make passing fat meshes into algorithms easier
- Rename `{Element}Ref` to `{Element}`?
- Remove the `derive_more` dependency
- `profile.release debug=true`
- Make `ErrorKind` nicer
- `Config::into_writer` should be in a trait (and then we can add `write_to_mem`)
- `hsize` and handles: `u64` as `hsize`
- `Handle` for structs with one `hsize` field: `Into<Handle>` or so
- `fat::EmptyMesh`, or implement `Mem*` for all core mesh types
- `try_add_face` which returns a result
- `--query is_solid` where it just prints `true` or `false`?

An issue to capture everything related to IO that needs or could need improvements.
Let the `MemSink` communicate with the `StreamingSource` before actually starting the transfer. This enables the following: the sink could state that it wants `f64` positions, which could tell the source to parse ASCII floats as `f64` directly. But this is crazy fancy stuff. Not important.

Returning iterators from trait methods is difficult as (a) usually every implementing type has a different iterator type and (b) this iterator type borrows from the collection. This is currently not (easily) expressible in Rust. The situation will be vastly improved by GATs. `impl Trait` for trait methods would make it even easier.
Until those features are implemented, we have to work around the problem. This is done in a number of different ways:
`Mesh::{faces, vertices, edges}`: here, there exists only one iterator type, which uses the `next_*_handle` and `last_*_handle` methods of the `Mesh` trait. This means we have extracted the iterator logic into two "normal" functions in the trait. This only works because the iterator state is the same for all implementing types: a handle as the current "position" is sufficient. So this workaround doesn't work for situations where different implementors need different iterator state or the iterator logic is more complex.
Adjacency queries of meshes: here, a funky type system trick was used to work around the lack of GATs. It produces a bunch of verbose filler code, but it works. Each implementor has its own type.
PropStore: here, dynamic dispatch is used for now. Currently there is no urgent need to improve the speed of this iteration. This workaround could probably be replaced by the one of the adjacency queries, but it's not worth it yet.
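The handle-based workaround for `Mesh::{faces, vertices, edges}` can be sketched roughly like this (names and signatures simplified; lox's actual trait methods differ):

```rust
// Simplified sketch of the handle-based iteration workaround.
trait Mesh {
    /// Returns the smallest live vertex handle >= `start`, if any.
    fn next_vertex_handle_from(&self, start: u32) -> Option<u32>;
    fn last_vertex_handle(&self) -> Option<u32>;
}

/// One iterator type works for *all* meshes, because its only state is
/// the current handle "position".
struct VertexHandles<'a, M: Mesh> {
    mesh: &'a M,
    current: u32,
}

impl<'a, M: Mesh> Iterator for VertexHandles<'a, M> {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        let h = self.mesh.next_vertex_handle_from(self.current)?;
        self.current = h + 1;
        Some(h)
    }
}

// A toy mesh with live vertex handles 0, 2 and 5 (1, 3, 4 were deleted).
struct ToyMesh;
impl Mesh for ToyMesh {
    fn next_vertex_handle_from(&self, start: u32) -> Option<u32> {
        [0u32, 2, 5].iter().copied().find(|&h| h >= start)
    }
    fn last_vertex_handle(&self) -> Option<u32> { Some(5) }
}

fn main() {
    let handles: Vec<u32> = VertexHandles { mesh: &ToyMesh, current: 0 }.collect();
    assert_eq!(handles, vec![0, 2, 5]);
}
```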
On platforms that support mmap, we should probably just mmap files and read from there. However, this is only useful when we use the slice directly, so that the `ParseBuf` overhead is completely gone.
Intra-doc links are still somewhat experimental and break from time to time. Breakage is especially bad in the `fev` crate, where things from other crates are reexported.
- `add_face`/`add_triangle`
- `add_vertex`
- `remove_face`
- `remove_isolated_vertex`
- `remove_vertex_with_adjacent_faces` (and maybe find a shorter name): this can have a default implementation. This should probably be bounded by `Self: FullAdj`, right?
- `remove_all_faces`/`remove_all_vertices`: improve documentation and fix semantics, especially regarding reuse of handles
- `split_face` (1-to-n split)
- `flip_edge`
- `split_edge_with_faces` (although: maybe rename)
- `collapse_edge` (`Ref`)
- `num_*`
- `contains_*`
- `valence_of_*`?
- `is_manifold_vertex`?

We (might) want to know about these useful properties:
To add a known property, changes in several places are necessary:

- the `MemSink` trait
- the `MemSource` trait
- the `derive(MemSink)` code
- the `derive(MemSource)` code
- `io::PropKind`
Blockers:
Iterating over all vertices/faces/edges of a mesh is currently implemented via `next_vertex_handle_from` and `last_vertex_handle`. This was originally done to work around the lack of GATs. Now we have GATs and could solve this properly... except for `vertex_handles_mut`: an iterator that iterates over vertex handles but also gives mutable access to the underlying mesh. This is not easy to implement due to borrowing: `vertices.iter()` borrows not only the handles immutably, but also the whole mesh. So it is necessary to pull out the iteration logic instead of using the provided iterator of the relevant `DenseMap`.
I'm not yet sure how to best solve this. Maybe use the GAT solution for `element_handles()` and `elements()`, but still use the manual iteration for the `mut` iterator. But that should probably be hidden as well, i.e. have an `IterMut<'s>` type in the trait and remove the `next_handle_from` functions from the `Mesh` trait.
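With GATs, the immutable side could look roughly like this (a sketch with simplified names and types, not lox's actual API):

```rust
// Sketch: a GAT lets each mesh type name its own borrowing iterator.
trait Mesh {
    type VertexHandleIter<'s>: Iterator<Item = u32>
    where
        Self: 's;

    fn vertex_handles(&self) -> Self::VertexHandleIter<'_>;
}

struct VecMesh {
    // `true` marks a live vertex; `false` a deleted slot.
    live: Vec<bool>,
}

// The concrete iterator type this implementor chooses.
struct VecMeshIter<'s> {
    live: &'s [bool],
    next: u32,
}

impl<'s> Iterator for VecMeshIter<'s> {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        while (self.next as usize) < self.live.len() {
            let h = self.next;
            self.next += 1;
            if self.live[h as usize] {
                return Some(h);
            }
        }
        None
    }
}

impl Mesh for VecMesh {
    type VertexHandleIter<'s> = VecMeshIter<'s> where Self: 's;

    fn vertex_handles(&self) -> VecMeshIter<'_> {
        VecMeshIter { live: &self.live, next: 0 }
    }
}

fn main() {
    let mesh = VecMesh { live: vec![true, false, true, true] };
    let handles: Vec<u32> = mesh.vertex_handles().collect();
    assert_eq!(handles, vec![0, 2, 3]);
}
```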
Unsorted, random things to do:
- `convert` tool for now (if that works, the sink/source system is probably fine)
- `PropMap`s where we can decide what kinds of properties we want (e.g. read this into a mesh and a position prop map)
- `Sink` due to source-determined ordering of `add` operations
- `map_vertex` and `map_face`
Currently, every file format has its own error type. Those error types are pretty similar, though. It might be better to create one `io::Error` type instead, to simplify the API and avoid duplicated code.
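A unified error type could be sketched like this (a hypothetical design with made-up variant names, not lox's current API):

```rust
use std::fmt;

// Hypothetical unified IO error type; variant names are illustrative only.
#[derive(Debug)]
enum IoError {
    /// The underlying reader/writer failed.
    Io(std::io::Error),
    /// The file violates its format specification.
    InvalidFormat { format: &'static str, msg: String },
    /// The data cannot be represented in the target format.
    Unrepresentable(String),
}

impl fmt::Display for IoError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            IoError::Io(e) => write!(f, "IO error: {}", e),
            IoError::InvalidFormat { format, msg } => {
                write!(f, "invalid {} file: {}", format, msg)
            }
            IoError::Unrepresentable(msg) => write!(f, "unrepresentable: {}", msg),
        }
    }
}

// `?` then works uniformly across all format readers/writers.
impl From<std::io::Error> for IoError {
    fn from(e: std::io::Error) -> Self {
        IoError::Io(e)
    }
}

fn main() {
    let e = IoError::InvalidFormat { format: "PLY", msg: "bad header".into() };
    assert_eq!(e.to_string(), "invalid PLY file: bad header");
}
```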
So the high-level interface for everything IO is defined by `StreamingSink` and `StreamingSource`. The question is how to name the types implementing those traits.

For `StreamingSource`, both are called `Reader` right now, which sounds OK to me. But for the `StreamingSink`, it's `stl::Sink`, while `stl::Writer` offers a more raw and non-abstract interface. PLY doesn't have such an implementation yet, but `ply::Writer` already exists and also offers the more raw interface.
Furthermore, there are two traits related to these raw writers, currently called `MeshWriter` and `IntoMeshWriter`. At least `MeshWriter` needs to continue to exist, but should probably be renamed. Maybe just `RawWriter`?
Wishlist for `#[derive(MemSink)]`:

- have a field named `mesh` (or so) but explicitly opt out of it being used
- `finish()` behavior: what is required and so on
- `#[lox(cast = "lossy")]`

Wishlist for `#[derive(MemSource)]`:
This issue collects all major parts of the library that can be improved with generic associated types. This library won't be released as `1.x` before GATs have landed and these improvements have been implemented, as they are central to a fast and convenient API.

- `Explicit{Edge, Face, Vertex}::{edges, faces, vertices}()` currently return `Box<Iterator>`. Instead, those traits should have a GAT `{Edge, Face, Vertex}Iter<'s>` which is returned by the methods.
- `PropMap`: the trait currently uses a hack from the `boo` module.
- `IntoMeshWriter` could also move its parameters to the function and a GAT

`fev-map` needs better documentation. An incomplete list of things we need to do:
As explained in the docs, IO code was thrown out to reduce the scope of the initial release. Of course, IO is important for many use cases, so it should be added back. The code lives in `old_code/` and was fully working (and fast!), but it still needs quite a bit of adjustment:
Here is an incomplete and unsorted list of things that need to be addressed:
- (`IntoMeshWriter`?)
- `set_vertex_positions`, `set_vertex_colors`, ... or `set_prop::<VertexPositions>()` or something like that.

A `Mem{Source, Sink}` is worth more than a `Stream{Source, Sink}`, as in: it can always do more. It would be really useful to:

- add `Stream*` as a super trait bound to the `Mem*` traits
- implement `Stream*` for all types `T: Mem*` (this implementation would likely be similar to PLY's impl: many function pointers)

This would make it possible to easily transfer data from one `MemSource` to a `MemSink`.
I should do that one step at a time, to not get overwhelmed with work again. Some things are certainly more important than others.
- `loxi`, so leave it for now.
- `loxi`: useful CLI tool for working with meshes. No high priority.

Later, when the project is more complete and not breaking all the time, missing docs should lead to a CI failure when merging into `master`.
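One common way to set this up (a standard Rust lint pattern, not something currently wired into lox) is a crate-level lint that CI escalates to a hard error:

```rust
// In lib.rs: warn locally so development isn't blocked; in CI, escalate
// with `cargo rustc -- -D missing_docs` so the warning fails the build.
#![warn(missing_docs)]

/// Adds two numbers; documented, so the lint is satisfied.
pub fn add(a: u32, b: u32) -> u32 {
    a + b
}

fn main() {
    assert_eq!(add(2, 2), 4);
}
```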
PLY reading and writing speed is not bad, but could be improved a lot. The difficult part is that PLY has a dynamic type system, so to say: we have to read the header to know what types to expect. This leads to a lot of complexity everywhere.
I have two main ideas for how to improve PLY speed.
(a) `RawSource` has to be changed. Currently it is forced to write to the "serializer" once for each property. That means we cannot do any optimizations such as "have a temporary buffer with all vertex properties and write to the writer only once per vertex". This is bad. So this certainly needs to be improved.
(b) And here is where it gets funny: write a JIT codegen. The optimal assembly code to read "normal" binary PLY files is actually super simple: it's mostly just a memcopy. However, because the data layout is dynamic, the Rust compiler can't do any optimizations in that regard. So the idea is to generate x86-64 machine code after reading the PLY header, optimized for that file's data layout. I bet this could increase raw read performance at least five-fold, if not a lot more. But it's a lot of work, `unsafe` code, platform-dependent stuff, and so on. Not sure if it's worth it -- it would be a super interesting project though.
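Idea (a) amounts to batching: instead of one call to the underlying writer per property, collect each vertex's properties into a small scratch buffer and flush once per vertex. A rough sketch under assumed types (not lox's real `RawSource` API):

```rust
use std::io::Write;

// Hypothetical sketch of per-vertex write buffering.
fn write_vertices<W: Write>(
    writer: &mut W,
    positions: &[[f32; 3]],
    normals: &[[f32; 3]],
) -> std::io::Result<()> {
    // Scratch buffer: 6 f32 properties per vertex, 4 bytes each.
    let mut buf = Vec::with_capacity(6 * 4);
    for (pos, normal) in positions.iter().zip(normals) {
        buf.clear();
        // Gather all properties of one vertex in the scratch buffer...
        for &c in pos.iter().chain(normal) {
            buf.extend_from_slice(&c.to_le_bytes());
        }
        // ...and hit the (possibly unbuffered) writer only once per vertex.
        writer.write_all(&buf)?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let mut out = Vec::new();
    write_vertices(&mut out, &[[1.0, 2.0, 3.0]], &[[0.0, 0.0, 1.0]])?;
    // One vertex: 6 f32 properties * 4 bytes = 24 bytes.
    assert_eq!(out.len(), 24);
    Ok(())
}
```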
Just looking through the `HalfEdge` implementation, it looks as though it should be possible to implement a niche optimization

Line 194 in 80c0e20

with `HalfEdge` defined as

Line 101 in 80c0e20

If that were switched from using `DenseMap` (which uses a `StableVec` internally) to a map using `InlineStableVec` (which would use `Option`), then `HalfEdge` could be defined using the `NonZero` variants, adding/subtracting one on creation and indexing.
It looks like trying it out would be a bit involved, but overall it doesn't really require any major refactors that would impact other parts of the code base; the main things needed are the alternative to `DenseMap` and the `NonZero` `Handle` implementation. I was curious if you had tried that, or if you feel it's worth trying.
Edit: What would be really nice here is if there were a `NonMax*` type, as mentioned in rust-lang/rust#45842. That would allow just checking when the handle is created, rather than having to do arithmetic whenever it is used.
Unfortunately, afaik there is nothing in `std` which implements niches for `_::MAX`. There is this crate that emulates it, https://crates.io/crates/nonmax, but it is just a wrapper around `NonZero*`.
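The niche optimization in question is the standard `NonZero` layout guarantee: `Option<NonZeroU32>` is the same size as `u32`, so a handle stored as `NonZeroU32` (with the +1/-1 offset discussed above) makes the `Option` case free. A minimal demonstration:

```rust
use std::mem::size_of;
use std::num::NonZeroU32;

// A handle stored with a +1 offset so that index 0 is representable.
#[derive(Copy, Clone, PartialEq, Debug)]
struct Handle(NonZeroU32);

impl Handle {
    fn new(idx: u32) -> Self {
        // Offset by one on creation (`idx == u32::MAX` would overflow);
        // this is the arithmetic a `NonMax` type would avoid.
        Handle(NonZeroU32::new(idx + 1).unwrap())
    }
    fn idx(self) -> u32 {
        // ...and subtract the offset on every use.
        self.0.get() - 1
    }
}

fn main() {
    // The niche makes `None` free: no extra discriminant byte.
    assert_eq!(size_of::<Option<Handle>>(), size_of::<u32>());
    let h = Handle::new(0);
    assert_eq!(h.idx(), 0);
}
```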
The `algo` module is sadly empty right now. It should be filled with lots of standard algorithms, maybe just the easy ones for now. Implementing something like √3 subdivision takes some time, but there surely is a lot of low-hanging fruit that's quick to implement and useful.
This is not trivial. It's questionable when and how we cast between numerical types. Going the strict path (think `std::convert::{From, Into}`) probably leads to an annoying API. Going the relaxed path and just casting everything when necessary, regardless of precision loss, might lead to surprisingly bad behavior.
We could let the user decide, but this could very well add trait bounds everywhere, making the API less easy to read. Here is a possible design for a conversion trait. It requires specialization to be generic, but works without it when restricting to primitives.
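The original code for that design is not preserved here; the following is a hedged reconstruction of what such a trait could look like when restricted to primitives (names are hypothetical):

```rust
// Hypothetical casting trait distinguishing exact from lossy casts.
// With specialization this could be implemented generically; without it,
// the impls for primitive combinations are written by hand.
trait CastFrom<T>: Sized {
    /// `Some` only if the value survives the cast exactly.
    fn cast_exact(src: T) -> Option<Self>;
    /// Always succeeds, possibly losing precision.
    fn cast_lossy(src: T) -> Self;
}

impl CastFrom<f64> for f32 {
    fn cast_exact(src: f64) -> Option<f32> {
        let v = src as f32;
        // Round-trip check: did the cast lose precision?
        if v as f64 == src { Some(v) } else { None }
    }
    fn cast_lossy(src: f64) -> f32 {
        src as f32
    }
}

fn main() {
    // 0.5 is exactly representable in f32; 0.1 is not.
    assert_eq!(f32::cast_exact(0.5f64), Some(0.5f32));
    assert_eq!(f32::cast_exact(0.1f64), None);
    assert_eq!(f32::cast_lossy(0.1f64), 0.1f32);
}
```

The user-facing API could then take `T: CastFrom<...>` bounds only where precision actually matters, keeping the relaxed path as the default elsewhere.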