leudz / shipyard
Entity Component System focused on usability and flexibility.
License: Other
To design an event there are three main questions:
What will the event be attached to?
- Storage (except unique¹)
- Pack (in theory)
- Pack (in practice) -> it would require reordering tuples or returning `Box<dyn Any>`; both seem terrible.
- System -> make it part of the system
- Workload -> make it a system
- Pipeline -> make it a workload
Listeners could also be useful but the only way I can see to implement them is to require them when adding/removing components like a pack. There might be a better way.
1. Unique storages store neither components nor entities, so they can't be eligible for creation or removal/deletion; see 2 for modification. ↩
2. Tracking modification isn't possible as far as I know; checking if the bits changed, for example, doesn't work if the component goes back to its original value before the end of the iteration. So modification events will trigger after every mutable access, whether or not a change happened. ↩
It's possible to add empty components, but they'll still use an `EntityId` per component and up to a `usize` per entity in the `World` (could go down with #10 if the tagged entities are in the right spot).
This could be far less: one bit per entity in the `World`, with the possibility to go even lower using pagination.
Since the storage wouldn't store `EntityId`, it won't be possible to call `with_id` on these storages, but with #52 we'll still be able to iterate them.
There would be a new `Tag` type, used like `Unique`, to access the storage.
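The one-bit-per-entity idea with pagination can be sketched as follows. This is a hypothetical illustration, not shipyard API: `TagStorage`, `PAGE_BITS`, and the method names are made up here.

```rust
// Hypothetical sketch: one bit per tagged entity, with lazily allocated
// pages so sparsely tagged worlds stay cheap.
const PAGE_BITS: usize = 4096;
const PAGE_WORDS: usize = PAGE_BITS / 64;

#[derive(Default)]
pub struct TagStorage {
    pages: Vec<Option<Box<[u64; PAGE_WORDS]>>>,
}

impl TagStorage {
    pub fn insert(&mut self, entity_index: usize) {
        let (page, bit) = (entity_index / PAGE_BITS, entity_index % PAGE_BITS);
        if page >= self.pages.len() {
            self.pages.resize_with(page + 1, || None);
        }
        let words = self.pages[page].get_or_insert_with(|| Box::new([0u64; PAGE_WORDS]));
        words[bit / 64] |= 1u64 << (bit % 64);
    }

    pub fn contains(&self, entity_index: usize) -> bool {
        let (page, bit) = (entity_index / PAGE_BITS, entity_index % PAGE_BITS);
        match self.pages.get(page).and_then(|p| p.as_deref()) {
            Some(words) => words[bit / 64] & (1u64 << (bit % 64)) != 0,
            None => false,
        }
    }
}
```

Since no `EntityId` is stored, `contains` is the whole read API; iteration would have to come from #52's mechanism.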
My actual use case is to be able to round-trip a Key to JS.
A few ideas come to mind:
- make some of the methods on `Key` `pub` instead of `pub(crate)` - so, for example, it would be easier to send a version/index pair across the wire.
- impl `Display` and/or `Debug`
- have an optional serde implementation gated by a feature

Btw I'm not totally clear on how `Key::new()` works... it seems to only take an index... isn't a version also required? :)
This is already on @leudz Ali Baba's Cave (todo list) - just opening an issue for convenience
As discussed on Gitter...
The idea is to have a `run!` macro that is similar in syntax to the system derive macro. The only major difference is that the world must be given as the first argument, and that it executes `world.run()` under the hood.
This would be valid syntax:
```rust
// create an entity
let entity = run!(&world, |mut entities: &mut Entities, mut labels: &mut Label| {
    entities.add_entity(&mut labels, Label::new("foo"))
});

// delete an entity
run!(&world, |mut all_storages: AllStorages| {
    all_storages.delete(entity);
});
```
Some storages only need one instance; treating them as regular storages would waste memory and hurt ergonomics.
I can think of three ways to implement it:
1. a `HashMap` of unique components instead of storages.
2. store them in the same `HashMap` and make the differentiation using an enum.
3. keep the `SparseArray` but only its data `Vec`; it would waste very little memory and not require any structural change.

To access the component in a system or `run`, a struct `Unique` could be used just like `Not`.
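Option 2 (an enum distinguishing unique components in the same map) can be sketched like this. The `Storage` enum and `AllStorages` shown here are simplified stand-ins, not shipyard's real types.

```rust
// Illustrative sketch of the enum-based differentiation: one map for
// everything, with a variant marking single-instance components.
use std::any::{Any, TypeId};
use std::collections::HashMap;

enum Storage {
    Regular(Box<dyn Any>), // would hold a SparseSet<T> in practice
    Unique(Box<dyn Any>),  // a single component, no EntityId bookkeeping
}

#[derive(Default)]
struct AllStorages {
    storages: HashMap<TypeId, Storage>,
}

impl AllStorages {
    fn add_unique<T: 'static>(&mut self, component: T) {
        self.storages
            .insert(TypeId::of::<T>(), Storage::Unique(Box::new(component)));
    }

    fn unique<T: 'static>(&self) -> Option<&T> {
        match self.storages.get(&TypeId::of::<T>()) {
            Some(Storage::Unique(any)) => any.downcast_ref::<T>(),
            _ => None,
        }
    }
}
```

The upside of this layout is that lookup stays a single `HashMap` access; the downside is every regular-storage access pays a match on the enum.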
Iterators only implement the minimum possible set of iterator traits and methods.
To be the most flexible and performant, other traits and methods can be implemented.
This is true both for `std` and `rayon`.
Putting together a basic wasm boilerplate, I tried adding shipyard for the first time in about a week or two... and got an error with the following in Cargo.toml:

```toml
shipyard = { version = "0.2", git = "https://github.com/leudz/shipyard.git", default-features = false }
```

Seems like all the flags for non-parallel environments are removed?
Having a `Default` bound on storages removes the need to register components.
This is something I'd like to keep, but if someone can think of a storage where this bound isn't met, please comment.
Keep in mind that only the storage has to be `Default`; for example `Vec<T>` is `Default` regardless of whether `T: Default` holds or not.
This function would be triggered the first time a system is added to the scheduler.
It'll be a second method on the `System` trait, alongside a second associated type `SetupData`.
The trait would become:
```rust
trait System<'a> {
    type Data: SystemData<'a>;
    type SetupData: SystemData<'a>;

    fn run(storage: <Self::Data as SystemData<'a>>::View);
    fn setup(_: <Self::SetupData as SystemData<'a>>::View) {}
}
```
Default associated types are not stable yet, so everyone will have to define `SetupData`; fortunately there'll be a macro:

```rust
#[system_with_setup]
impl InAcid {
    fn run(/*borrows*/) { ... }
    fn setup(/*setup borrows*/) { ... }
}
```
I just discovered the component_group crate, a crate to work on groups of components in specs.
I thought of something similar a while ago, but I was and still am a bit worried this pattern will be misused. One of the big strengths of an ECS is being able to mix and match components, sometimes even in surprising ways. This pattern goes against that but provides some big pluses.
The first thing required would be a proc macro to generate all code related to a bundle:
```rust
#[bundle]
struct Virus {
    #[NonSend] // just as an example
    position: Position,
    size: f32,
}
```
Two different implementations have been proposed; both would generate these structs, used for iteration and `get`:

```rust
struct VirusRef<'c> {
    position: &'c Position,
    size: &'c f32,
}

struct VirusMut<'c> {
    position: &'c mut Position,
    size: &'c mut f32,
}
```
The two methods differ in their views and iterators: method 1 generates a new type for each bundle, while method 2 uses a `BundleView[Mut]` and `BundleIter[Mut]`.
For the first method these structs would be generated:
```rust
struct VirusView<'v> {
    position: NonSend<View<'v, Position>>,
    size: View<'v, f32>,
}

struct VirusViewMut<'v> {
    position: NonSend<ViewMut<'v, Position>>,
    size: ViewMut<'v, f32>,
}
```
It'll be possible to access the inner views through the bundle: with method 1 it would be a field access, with method 2 a `TypeId` lookup.
I tried to implement both methods but rust-analyzer handles method 1 terribly while method 2 requires GAT or specialization.
Originally asked by @danaugrs on gitter.
Currently the highest abstraction is workloads; they are implemented very simply and work well. But we could go a step or two beyond.
The first step would be to merge multiple workloads, making it possible to break the barrier between them and enabling a bit more parallelism.
Next we can take advantage of more information. `!Send` and `!Sync` components, for example, have a "thread affinity" that must be respected and aren't parallelized at all at the moment. A better design would not keep a thread busy with systems that could run on other threads while a thread-specific system waits. This also applies to `AllStorages` in case there are systems without storage borrows.
There is also the case of systems defined in other languages; if this is implemented at some point, `try_run` could be used to check if the system can run or has to be deferred.
Async systems might also become a thing and could work with this model as well.
Most of this could be done with a rewrite of workloads; the last step, where pipeline gets its name, would be to continuously add systems or workloads to an execution list.
This could also be set as a loop, once all systems have been executed, run them again.
There could be a component to modify the execution list from systems to stop the pipeline, clear it, make it start over,...
This issue lists all future additions to Shipyard (totally unordered):
- a method on `World` that finds all invalid borrows in systems
- `ViewMut::swap`
- `map` flags all components with update pack
- `contains` works with multiple views
- `World::maintain`
- `Shiperator`
- `Not` pack
- `AllStoragesViewMut::clear_modified` and `AllStoragesViewMut::clear_inserted`
- `Unique` in `SparseSet`
Change the type of `Key` to `NonZeroUsize`.
Currently an old `Key` can add and remove components without a version check.
It can't trigger UB, but it is not supposed to happen.
The solution is to change dense arrays from Vec to Vec.
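One property a `NonZeroUsize`-backed `Key` buys, illustrated here with a deliberately simplified layout (shipyard's real `Key` packs index and version differently), is the niche optimization: `Option<Key>` costs no extra space, so dense arrays can represent "maybe dead" slots for free.

```rust
// Hypothetical Key layout demonstrating the NonZeroUsize niche:
// Option<Key> is guaranteed to be the same size as Key itself.
use std::num::NonZeroUsize;

#[derive(Clone, Copy, Debug, PartialEq)]
struct Key(NonZeroUsize);

impl Key {
    fn new(raw: usize) -> Option<Key> {
        // Zero is reserved as the niche, so raw keys start at 1.
        NonZeroUsize::new(raw).map(Key)
    }
}
```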
Extra storages are the ones we have to pass without doing any action on them on the user side; currently only packs need them, but in the future events could require them too.
I've been thinking about a way to remove them for a while; panicking would work but goes against the philosophy of the crate. Using legion-like systems would also work but I'd rather avoid it.
I'm not 100% sure this solution would work, but what we could do is store references to extra storages in the storages that require them.
This approach has multiple edge cases:
- what happens if the extra storage is already borrowed?
- what happens if multiple storages need a common extra storage?
- what happens when an extra storage is borrowed while stored in another storage?

In any case this issue needs #44 to work, as workloads use compile-time information whereas packs are a runtime concept.
Proposed by @skypjack.
A very sparse `SparseArray` could waste a lot of memory.
This could be mitigated by paginating `SparseArray`'s sparse array: instead of a `Vec<usize>`, it would become `Vec<Option<Box<[usize; PAGE_SIZE]>>>`.
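The pagination described above can be sketched as follows; `PagedSparse`, `ABSENT`, and the `PAGE_SIZE` value are illustrative choices, not the crate's actual code.

```rust
// Sketch of the proposal: pages of the sparse array are only allocated
// once an entity index actually lands in them.
const PAGE_SIZE: usize = 1024;
const ABSENT: usize = usize::MAX; // sentinel: no dense index stored

#[derive(Default)]
struct PagedSparse {
    pages: Vec<Option<Box<[usize; PAGE_SIZE]>>>,
}

impl PagedSparse {
    fn set(&mut self, index: usize, dense: usize) {
        let page = index / PAGE_SIZE;
        if page >= self.pages.len() {
            self.pages.resize_with(page + 1, || None);
        }
        let slots = self.pages[page].get_or_insert_with(|| Box::new([ABSENT; PAGE_SIZE]));
        slots[index % PAGE_SIZE] = dense;
    }

    fn get(&self, index: usize) -> Option<usize> {
        self.pages
            .get(index / PAGE_SIZE)?
            .as_ref()
            .map(|slots| slots[index % PAGE_SIZE])
            .filter(|&dense| dense != ABSENT)
    }

    // Only touched pages consume memory.
    fn allocated_pages(&self) -> usize {
        self.pages.iter().filter(|p| p.is_some()).count()
    }
}
```

Lookup cost goes from one indexed read to a division, a page fetch, and an indexed read; the trade is memory proportional to occupied pages instead of the highest entity index.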
@leudz mentioned that this is on the roadmap. Just adding an issue so it's easier to see when it's been added :)
Sparse sets allow two kinds of shared components without needing any additional memory.
The first kind is a single owner and any number of observers; the second is no owner and any number of observers.
A `bool` could be added to flag storages with shared components, and iteration would be modified accordingly. Iteration over a `ViewMut` should skip any observer entity to avoid UB.
Packs also couldn't apply to storages with shared components, since there wouldn't be any way to redirect the observers.
In the case where there's no owner, the components would remain in the storage indefinitely; a method could be added to remove orphan components. `World::maintain` could also remove them.
To make it possible, the first entity added to the removed linked list must have its index changed to a sentinel value (`usize::MAX` for example).
We can then use the fact that if `entities[index].index() == index`, the entity is alive.
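A minimal illustration of that alive check, assuming each slot stores the index it should point at when alive, while dead slots reuse the field as the next link of the removed list (with the list head marked `usize::MAX`):

```rust
// Simplified entity slot: real slots would also pack a version.
struct Slot {
    index: usize,
}

// A slot is alive exactly when it still points at itself.
fn is_alive(entities: &[Slot], index: usize) -> bool {
    entities[index].index == index
}
```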
Pack update would track insertion/modification and removal.
It would use two indices in the storage: inserted components at the beginning, followed by modified ones.
Components would be moved between categories the same way pack owned does it.
Removed components would be stored in a `Vec<(Key, T)>` to be able to remove multiple components from the same entity while still being able to tell where they came from.
Iterators over this storage should keep track of mutably accessed components and store their `Key`; when the iterator gets dropped, this list would update the storage.
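The two-index layout can be sketched like this, assuming the dense data is partitioned as `[inserted | modified | untouched]`. Real code would also move the matching sparse and dense entries; only the data `Vec` is shown, and all names are illustrative.

```rust
// Sketch of the update-pack partitioning with two region counters.
struct UpdatePacked<T> {
    dense: Vec<T>,
    inserted: usize, // dense[..inserted] were newly inserted
    modified: usize, // dense[inserted..inserted + modified] were mutated
}

impl<T> UpdatePacked<T> {
    // Called when dense[i] is accessed mutably outside the tracked zones.
    fn flag_modified(&mut self, i: usize) {
        let boundary = self.inserted + self.modified;
        if i >= boundary {
            // Swap into the modified region, like pack owned does.
            self.dense.swap(i, boundary);
            self.modified += 1;
        }
    }
}
```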
Add a procedural macro to get a nicer implementation of systems.
The system `InAcid` from the first example of the readme:

```rust
struct InAcid;
impl<'a> System<'a> for InAcid {
    type Data = (&'a Position, &'a mut Health);
    fn run(&self, (pos, mut health): <Self::Data as SystemData>::View) {
        // -- snip --
    }
}
```

should look like this with the macro:

```rust
#[system(InAcid)]
fn run(pos: &Position, mut health: &mut Health) {
    // -- snip --
}
```
Add the ability to add/remove storages from a pack, and to delete one.
Might have to change the `Arc`s into `RwLock`s.
Entities can safely be deleted by using `AllStorages`, but all storages have to be iterated over, which prevents any parallelization.
One way to get around this issue is to have a function on `Entities`. To make this function safe, a component counter could be added to `Entities`, but that would mean borrowing `Entities` to add/remove components, impacting the performance of many operations (adding/removing entities and components).
To avoid the API and performance issues, a function can be implemented without a counter.
`delete_unchecked` would rely on the user to remove all components from the entity before calling it. If this isn't respected, the user could access components they didn't remove through a deleted entity.
The question is: should `delete_unchecked` be `unsafe` or not?
I'm not the first person struggling with the issue rust-lang/api-guidelines#179 but currently no consensus has been reached.
Iterators borrowing only immutable storages can be cloned.
Currently one would have to make the same iterator twice.
Like `Not`, add the ability to iterate over `Option<View<T>>`.
`Option<View<T>>` would return `Option<T>`.
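The proposed semantics can be illustrated by hand: iterating `(View<A>, Option<View<B>>)` would yield every entity that has `A`, with `Some` when `B` is present and `None` otherwise, instead of filtering those entities out. Plain slices of `(entity, component)` pairs stand in for views here; nothing below is shipyard API.

```rust
// Hand-rolled illustration of iterating a view plus an optional view.
fn iter_with_optional<'a, A, B>(
    a: &'a [(usize, A)],
    b: &'a [(usize, B)],
) -> Vec<(usize, &'a A, Option<&'a B>)> {
    a.iter()
        .map(|(id, comp_a)| {
            // Linear probe stands in for the sparse-set lookup.
            let comp_b = b.iter().find(|(id_b, _)| id_b == id).map(|(_, c)| c);
            (*id, comp_a, comp_b)
        })
        .collect()
}
```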
The `NonPacked` iterator should take advantage of packed storages when it can.
If some of the storages iterated upon are packed, components outside the packed zone are certain not to have the components this iterator wants.
I thought typetag was the right tool to implement this issue, but I'd also need specialization. Since that isn't happening anytime soon and it likely wouldn't fit all use cases, here's another approach.
I didn't try to implement it, so it might be impossible to get it to compile.
To me there are two use cases for ser/de:
1. serializing a `World` and loading it
2. serializing part of a `World` and adding it to another one

The first one basically has no issues tied to it; the second one has a few:
I don't think a single behavior will fit all use cases so making both serialization and deserialization as modular as possible is the only way to go.
This issue is a big one but all methods don't have to be implemented at once. With this pattern adding new methods won't be a breaking change as long as the serialization format doesn't change.
The builder pattern seems a good way to go about it:

```rust
world.serialize::<View<T>>(serializer).build();
```

`serialize` would work with `Borrow`, so any view that can be requested by a system could be used here.
It has to be a view and not a simple type in order to differentiate unique storages and `!Send`/`!Sync` components.
`T` would have to implement both `Serialize` and `Deserialize`, of course.
This would serialize the whole view; then you could filter:

```rust
world.serialize::<View<T>>(serializer).filter(|t: &T| -> bool { /* do your filtering */ }).build();
world.serialize::<View<T>>(serializer).filter_with_id(|(id, t): (EntityId, &T)| -> bool { /* do your filtering */ }).build();
world.serialize::<(View<T>, View<U>)>(serializer).complex_filter(|(t, u): (&T, &U)| -> (bool, bool) { /* do your filtering */ }).build();
```

As opposed to `filter`, `complex_filter` returns a `bool` per component.

```rust
world.serialize::<(View<T>, View<U>)>(serializer).complex_filter_with_id(|(id, (t, u)): (EntityId, (&T, &U))| -> (bool, bool) { /* do your filtering */ }).build();
world.serialize::<View<T>>(serializer).without_shared().build();
world.serialize::<View<T>>(serializer).custom().build();
```

I explain this last one in the section below.
There will probably be some additional generics on `serialize`, so a few `_` to add.
Using multiple filters would be an error; there will be a `try_` and a panicking version.
```rust
world.deserialize::<View<T>>(deserializer).build();
```

The default behavior will be to create a new entity for entities that already exist.

```rust
world.deserialize::<View<T>>(deserializer).checked_entities()?.build();
```

Will first go through all entities and, if one of them already exists, return an error with the id.

```rust
world.deserialize::<View<T>>(deserializer).skip_entities()?.build();
```

Will skip any entity that already exists.

```rust
world.deserialize::<View<T>>(deserializer).checked_components()?.build();
```

It'll first check all storages to see if an entity already has a component that will be deserialized, and return an error with the `EntityId` and the component `TypeId` if it does.

```rust
world.deserialize::<View<T>>(deserializer).add_components().build();
```

If the exact entity exists, it'll add the components it doesn't already have. If the spot is taken by an entity with a different generation, it'll skip it.

```rust
world.deserialize::<View<T>>(deserializer).replace_components().build();
```

If the exact entity exists and already has some of the components to be deserialized, it'll replace them.

```rust
world.deserialize::<View<T>>(deserializer).custom().build();
```
I'm not sure what arguments this one will take. Here's the use case I imagine:
You have a tree inside the ECS and are using `EntityId` to link nodes. You want to transfer this tree to another `World`.
There's no way to do it properly with the methods above. If you skip entities, you'll have holes in the tree. If you use `check` it'll work sometimes and sometimes it won't. With `new_entity` your tree will be a mess.
`custom` would give you a context and allow powerful operations on storages. For example, you could ask which entity is the last one; from there you can safely offset all your nodes and the tree will end up correct.
I imagine this will require a few new views that give you more control over entity and component creation.
It could also be used to reset entities' generations when loading a fresh `World`.
The readme currently contains only two basic examples.
Explaining the internals would clarify the trade-offs made when packing storages.
Don't use a `SparseSet` for `Unique`.
Add a method `delete_all` to `AllComponents` that deletes all components of an entity.
Test fails here: 76f6161#diff-8f17a6886be0aae3b83a52405bb5e4b8R1785
@leudz mentioned on Gitter:
> it's indeed with_id, it's completely broken for non packed iterators
> this line
> it looks into the first component's array, whatever is the shortest one
These two methods will go through all storages and apply the method of the same name.
`clear_modified` is always fast but `clear_inserted` might take some time; a note in the methods' docs will be needed.
The implementation will likely go through `UnknowStorage`.
Add a method on World
to find systems with invalid borrows.
There's no established plan but it'll happen one way or another.
Is it possible to add sort functionality for packs and storages?
For comparison, it's available on EnTT groups as well as EnTT storage (there's another sort for storage too, comparing entities).
This could make it easier to impose something like a hierarchy... for example, a scene graph might be built by having Transform and Relationship components and making a pack of those two. Then even if new entities are added, `sort()` could be called at just that time to keep the vector in breadth-first order for quick traversal each tick.
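Sorting a sparse set in place can be sketched like this: compute a permutation from a comparator on the data, apply it to the dense arrays, then rewrite the sparse indices so lookups still resolve. None of these names are shipyard (or EnTT) API; this is just the mechanism.

```rust
// Illustrative sparse set with an in-place sort that preserves the
// entity -> dense mapping.
use std::cmp::Ordering;

struct SparseSetDemo<T> {
    sparse: Vec<usize>, // entity index -> dense index
    dense: Vec<usize>,  // dense index -> entity index
    data: Vec<T>,
}

impl<T> SparseSetDemo<T> {
    fn sort_by(&mut self, cmp: impl Fn(&T, &T) -> Ordering) {
        let mut order: Vec<usize> = (0..self.data.len()).collect();
        order.sort_by(|&a, &b| cmp(&self.data[a], &self.data[b]));

        let new_dense: Vec<usize> = order.iter().map(|&old| self.dense[old]).collect();
        let mut old_data: Vec<Option<T>> = self.data.drain(..).map(Some).collect();
        self.data = order.iter().map(|&old| old_data[old].take().unwrap()).collect();

        // Fix up sparse so entity -> dense lookups stay valid.
        for (new_index, &entity) in new_dense.iter().enumerate() {
            self.sparse[entity] = new_index;
        }
        self.dense = new_dense;
    }
}
```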
Currently there's no way to have data specific to a system; the `System` trait doesn't access `self` in any way.
To emulate it, it's possible to use `Unique`, but that's not its original purpose, so it doesn't convey the code's meaning correctly and isn't optimal.
@eldyer proposed `Local`, a per-system `Unique`.
Local storages would only be accessible from `World` or their system.
The `Local` type would take two generics: a system and a borrow.
In systems the first generic wouldn't actually be used, so it shouldn't be bound to `System`; this way `Local<(), &usize>` would be valid. The macro will even hide it.
The `AtomicRefCell` around storages would become per `SparseSet`. Each storage would have a general `AtomicRefCell<SparseSet>` in addition to a `HashMap<TypeId, AtomicRefCell<SparseSet>>` with an entry per system local.
Making `AtomicRefCell` only encapsulate `SparseSet` makes this system possible:

```rust
#[system(LocalTest)]
fn run(global: &usize, local: Local<&mut usize>) {}
```

This also makes it possible for other systems to run at the same time even when a local is an exclusive borrow.
The scheduler will be aware of how local borrows interact with global ones and will take care of conflicts if the same system is used multiple times in the same workload, while allowing valid borrows.
Systems and `run` have a hard cap on the number of storages they can request.
This cap can be removed by allowing nested tuples.
If it becomes possible to specify some generic arguments while using the impl syntax (rust-lang/rust#63066), `World::run` could be rewritten as:

```rust
fn run<'a, T: Run<'a>>(&'a self, f: impl FnOnce(T::Storage))
```

allowing `run` to be called without `, _`:

```rust
world.run::<&mut usize>(|mut usizes| {/* -- snip -- */})
```
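How nested tuples lift an arity cap can be shown with a toy trait: implement it for flat tuples up to some size, and because the element impls recurse, any depth composes. `Borrowable`, `View`, and `storage_count` are made-up stand-ins for the real borrow machinery.

```rust
// Toy illustration: flat impls for small tuples plus recursion through
// tuple elements means ((A, B), (C, (D, E))) works at any depth.
trait Borrowable {
    fn storage_count() -> usize;
}

struct View; // stand-in for a single storage view

impl Borrowable for View {
    fn storage_count() -> usize { 1 }
}

impl<A: Borrowable, B: Borrowable> Borrowable for (A, B) {
    fn storage_count() -> usize {
        A::storage_count() + B::storage_count()
    }
}

impl<A: Borrowable, B: Borrowable, C: Borrowable> Borrowable for (A, B, C) {
    fn storage_count() -> usize {
        A::storage_count() + B::storage_count() + C::storage_count()
    }
}
```

In the real crate the trait would do borrowing rather than counting, but the composition principle is the same: the flat-tuple cap stops mattering because users can nest.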
Systems created by the macro should mimic `run`'s visibility.
Private:

```rust
#[system(Test)]
fn run(_: &usize, _: &mut i32) {}
```

Public:

```rust
#[system(Test)]
pub fn run(_: &usize, _: &mut i32) {}
```