leudz / shipyard

Entity Component System focused on usability and flexibility.

License: Other

shipyard's Issues

Events

To design an event there are three main questions:

  • What will the event be attached to?
  • When will it trigger?
  • What can it access?
  • Storage (except unique1)

    • on creation, after the component has been inserted
      • id + entire storage for sequential
      • id + reference to the component for parallel
    • on removal/deletion
      • id + component + entire storage for sequential
      • id + the component for parallel
    • on modification2
      • id + entire storage for sequential
      • id + reference to the component for parallel
  • Pack (in theory)

    • on creation, after the component has been inserted
      • id + entire pack for sequential
      • id + reference to the components for parallel
    • on removal/deletion
      • id + entire pack for sequential
      • id + the components for parallel
  • Pack (in practice)
    -> it would require reordering tuples or returning Box<dyn Any>, and both options seem terrible.

  • System
    -> make it part of the system

  • Workload
    -> make it a system

  • Pipeline
    -> make it a workload

Listeners could also be useful but the only way I can see to implement them is to require them when adding/removing components like a pack. There might be a better way.

1. Unique storages store neither components per entity nor entity ids, so they aren't eligible for creation or removal/deletion; see 2 for modification.
2. Tracking modification isn't possible as far as I know; checking if the bits changed, for example, doesn't work if the component goes back to its original value before the end of the iteration. So modification events will trigger after every mutable access, whether or not a change happened.
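The storage hooks above can be sketched with a toy storage (names and types are illustrative, not shipyard's API). A sequential removal listener receives the id, the removed component, and the rest of the storage, matching the "id + component + entire storage" row; a parallel variant would only receive the component.

```rust
use std::collections::HashMap;

type EntityId = u64;

// Toy storage with an optional sequential removal listener.
struct Storage<T> {
    components: HashMap<EntityId, T>,
    // Sequential hook: id + removed component + the rest of the storage.
    on_removal: Option<Box<dyn FnMut(EntityId, &T, &HashMap<EntityId, T>)>>,
}

impl<T> Storage<T> {
    fn new() -> Self {
        Storage { components: HashMap::new(), on_removal: None }
    }

    fn insert(&mut self, id: EntityId, component: T) {
        self.components.insert(id, component);
    }

    fn remove(&mut self, id: EntityId) -> Option<T> {
        let removed = self.components.remove(&id);
        if let (Some(hook), Some(component)) = (self.on_removal.as_mut(), removed.as_ref()) {
            // Sequential listeners can look at the whole storage;
            // a parallel variant would only receive `component`.
            hook(id, component, &self.components);
        }
        removed
    }
}
```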

Tag component

It's possible to add empty components but they'll still use an EntityId per component and up to a usize per entity in the World (this could go down with #10 if the tagged entities are in the right spot).
This could be far less: one bit per entity in the World, with the possibility to go even lower using pagination.

Since the storage wouldn't store EntityId, it won't be possible to call with_id on these storages, but with #52 we'll still be able to iterate them.
There would be a new Tag type used like Unique to access the storage.
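A minimal sketch of the one-bit-per-entity idea (illustrative, not shipyard code): tag membership stored as a bitset, so N entities cost N bits instead of an EntityId each; pagination would be a further refinement on top of this.

```rust
// Tag membership as a plain bitset: one bit per entity index.
struct TagStorage {
    bits: Vec<u64>,
}

impl TagStorage {
    fn new() -> Self {
        TagStorage { bits: Vec::new() }
    }

    fn add(&mut self, entity_index: usize) {
        let (word, bit) = (entity_index / 64, entity_index % 64);
        if word >= self.bits.len() {
            self.bits.resize(word + 1, 0);
        }
        self.bits[word] |= 1 << bit;
    }

    fn contains(&self, entity_index: usize) -> bool {
        let (word, bit) = (entity_index / 64, entity_index % 64);
        self.bits.get(word).map_or(false, |w| w & (1 << bit) != 0)
    }
}
```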

Key serialization

My actual use case is to be able to round-trip a Key to JS.

A few ideas come to mind:

  1. make some of the methods on Key pub instead of pub(crate) - so for example it would be easier to send a version/index pair across the wire.

  2. impl Display and/or Debug

  3. have an optional serde implementation gated by feature

btw I'm not totally clear on how Key::new() works... seems to only take an index... isn't version also required? :)
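Idea 1 could be sketched roughly like this; the names and the 16-bit-version/48-bit-index split are purely hypothetical, not shipyard's actual Key layout. Exposing the pair makes the wire format trivial, and a serde impl (idea 3) could be built on the same two methods.

```rust
// Hypothetical Key layout: high 16 bits = version, low 48 bits = index.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Key(u64);

impl Key {
    fn version(self) -> u16 {
        (self.0 >> 48) as u16
    }

    fn index(self) -> u64 {
        self.0 & ((1 << 48) - 1)
    }

    // Rebuild a Key from a version/index pair received over the wire.
    fn from_parts(version: u16, index: u64) -> Self {
        Key(((version as u64) << 48) | (index & ((1 << 48) - 1)))
    }
}
```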

run macro

This is already on @leudz Ali Baba's Cave (todo list) - just opening an issue for convenience

As discussed on Gitter...

The idea is to have a run macro that is similar in syntax to the system derive macro. The only major difference is that the world must be given as the first argument - and that it executes world.run() under the hood.

This would be valid syntax:

//create an entity
let entity = run!(&world, |mut entities:&mut Entities, mut labels:&mut Label| {
    entities.add_entity(&mut labels, Label::new("foo"))
});

//delete an entity
run!(&world, |mut all_storages: AllStorages| {
    all_storages.delete(entity);
});
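Under the hood the macro would just forward to world.run(). A toy macro_rules sketch shows the shape (the stand-in World below is illustrative only; the real macro would be a proc macro matching the system derive's syntax):

```rust
// Stand-in World: run() hands the requested storage to the closure,
// loosely mimicking shipyard's World::run for a single storage.
struct World {
    value: usize,
}

impl World {
    fn run<R>(&mut self, f: impl FnOnce(&mut usize) -> R) -> R {
        f(&mut self.value)
    }
}

// The proposed run! macro: world first, then a system-like closure.
macro_rules! run {
    ($world:expr, $closure:expr) => {
        $world.run($closure)
    };
}
```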

Unique component

Some storages only need one instance; treating them as regular storages would waste a lot of memory and hurt ergonomics.
I can think of three ways to implement it:

  1. Add a second HashMap of unique components instead of storages.
  2. Use the current HashMap and make the differentiation using an enum.
  3. Use SparseArray but only its data Vec, it would waste very little memory and not require any structural change.

To access the component in a system or in run, a struct Unique could be used just like Not.
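Option 2 could look roughly like this (a sketch with heavily simplified types; a real implementation would put a SparseSet<T> in the Regular variant and proper borrow tracking around both):

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// One map for everything; the enum tells regular storages and
// unique components apart.
enum StorageKind {
    Regular(Box<dyn Any>), // would be a SparseSet<T> in practice
    Unique(Box<dyn Any>),  // a single component instance
}

struct AllStorages {
    storages: HashMap<TypeId, StorageKind>,
}

impl AllStorages {
    fn add_unique<T: 'static>(&mut self, component: T) {
        self.storages
            .insert(TypeId::of::<T>(), StorageKind::Unique(Box::new(component)));
    }

    fn unique<T: 'static>(&self) -> Option<&T> {
        match self.storages.get(&TypeId::of::<T>()) {
            Some(StorageKind::Unique(any)) => any.downcast_ref::<T>(),
            _ => None,
        }
    }
}
```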

wasm / non-parallel

Putting together a basic wasm boilerplate, I tried adding shipyard for the first time in about a week or two and got an error with the following in Cargo.toml:

shipyard = {version = "0.2", git= "https://github.com/leudz/shipyard.git", default-features = false}

Seems like all the flags for non-parallel environments are removed?

!Default storage

Having a Default bound on storages removes the need to register components.

This is something I'd like to keep but if someone can think of a storage where this bound isn't met, please comment.

Keep in mind that only the storage has to be Default; for example Vec<T> is Default whether T: Default or not.
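A quick illustration of why the bound is so permissive: Vec<T>'s Default impl has no T: Default requirement, so any component type gets an empty storage for free (the helper below is just for demonstration).

```rust
// A component that deliberately does NOT implement Default.
struct NoDefault(u32);

// Works for any T: Vec's Default impl doesn't require T: Default,
// so storages can always be created lazily without registration.
fn make_storage<T>() -> Vec<T> {
    Vec::default()
}
```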

System Setup

This function would be triggered the first time a system is added to the scheduler.

It'll be a second method on the System trait alongside a second type SetupData.
The trait would become:

trait System<'a> {
    type Data: SystemData<'a>;
    type SetupData: SystemData<'a>;
    fn run(storage: <Self::Data as SystemData<'a>>::View);
    fn setup(_: <Self::SetupData as SystemData<'a>>::View) {}
}

Default associated types are not stable yet so everyone will have to define SetupData; fortunately there'll be a macro:

#[system_with_setup]
impl InAcid {
    fn run(/*borrows*/) { ... }
    fn setup(/*setup borrows*/) { ... }
}

Components Bundle

I just discovered the component_group crate, a crate to work on groups of components in specs.

I thought of something similar a while ago but I was, and still am, a bit worried this pattern will be misused. One of the big strengths of an ECS is being able to mix and match components, sometimes even in surprising ways. This pattern goes against that but provides some big pluses.

The first thing required would be a proc macro to generate all code related to a bundle:

#[bundle]
struct Virus {
    #[NonSend] // just as example
    position: Position,
    size: f32,
}

Two different implementations have been proposed, both would generate these structs used for iteration and get:

struct VirusRef<'c> {
    position: &'c Position,
    size: &'c f32,
}
struct VirusMut<'c> {
    position: &'c mut Position,
    size: &'c mut f32,
}

The two methods differ for the views and iterators, method 1 generates a new type for each bundle while method 2 uses a BundleView[Mut] and BundleIter[Mut].

For the first method these structs would be generated:

struct VirusView<'v> {
    position: NonSend<View<'v, Position>>,
    size: View<'v, f32>,
}
struct VirusViewMut<'v> {
    position: NonSend<ViewMut<'v, Position>>,
    size: ViewMut<'v, f32>,
}

It'll be possible to access the inner views through the bundle. With method 1 it would be a field access, with method 2 a TypeId lookup.

I tried to implement both methods but rust-analyzer handles method 1 terribly while method 2 requires GAT or specialization.

Pipeline

Originally asked by @danaugrs on gitter.
Currently the highest abstraction is workloads; they are implemented very simply and work well. But we could go a step or two further.

The first step would be to merge multiple workloads, making it possible to break the barrier between them and enabling a bit more parallelism.

Next we can take advantage of more information. !Send and !Sync components for example have a "thread affinity" that must be respected and aren't parallelized at all at the moment. A better design wouldn't keep a thread busy with systems that could run on other threads while a thread-specific system waits. This also applies to AllStorages in case there are systems without storage borrows.

There is also the case of systems defined in other languages; if this is implemented at some point, try_run could be used to check if the system can run or has to be deferred.

Async systems might also become a thing and could work with this model as well.

Most of this could be done with a rewrite of workloads. The last step, and where pipeline gets its name, would be continuously adding systems or workloads to an execution list.
This could also be set as a loop: once all systems have been executed, run them again.
There could be a component to modify the execution list from systems, to stop the pipeline, clear it, make it start over, etc.

Ali Baba's Cave

This issue lists all future additions to Shipyard (totally unordered):

  • warnings behind feature flag
  • shared components
  • method on World that finds all invalid borrows in systems
  • remove component if trying to remove with newer key but don't return it
  • ViewMut::swap
  • list all features and explain them in the book and doc
  • paragraph about try method
  • paragraph on the reference in the iterator method not being the same as the final one
  • drain_filter
  • iterator blueprint
  • return from systems
  • explain that map flags all components with update pack
  • store direct pointer to storages in workloads
  • make contains work with multiple views
  • World::maintain
  • paragraph in the book talking about World::maintain
  • run macro
  • events
  • sparse set pagination
  • clone on some iterators
  • event that keeps storages sorted
  • more methods on Shiperator
  • take a storage out and put it back later
  • pipeline
  • register macro
  • Not pack
  • nested pack
  • serialization
  • AllStoragesViewMut::clear_modified and AllStoragesViewMut::clear_inserted
  • cheat sheet
  • system setup
  • system local
  • components bundle
  • add_component_unchecked
  • don't store Unique in SparseSet

Dense array

Currently an old Key can add and remove components without a version check.
It can't trigger UB but is not supposed to happen.
The solution is to change dense arrays from Vec<usize> to Vec<EntityId>.
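A sketch of the resulting check (EntityId layout and field names simplified for illustration): with the full id stored on the dense side, a stale version no longer matches and the access is rejected.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct EntityId {
    index: usize,
    version: u32,
}

struct SparseSet<T> {
    sparse: Vec<usize>,   // entity index -> dense index
    dense: Vec<EntityId>, // full ids, so the version can be compared
    data: Vec<T>,
}

impl<T> SparseSet<T> {
    fn get(&self, id: EntityId) -> Option<&T> {
        let dense_index = *self.sparse.get(id.index)?;
        // With a bare Vec<usize> this comparison couldn't see the
        // version, letting stale keys through.
        if self.dense.get(dense_index) == Some(&id) {
            Some(&self.data[dense_index])
        } else {
            None
        }
    }
}
```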

Remove extra storages

Extra storages are the ones we have to pass without performing any action on them on the user side; currently only packs need them but in the future events could require them too.

I've been thinking about a way to remove them for a while, panicking would work but goes against the philosophy of the crate. Using legion-like systems would also work but I'd rather avoid it.

I'm not 100% sure this solution would work, but what we could do is store references to extra storages in the storages that require them.
This approach has multiple edge cases:

  • what happens if the extra storage is already borrowed?
  • what happens if multiple storages need a common extra storage?
  • what happens when an extra storage is borrowed while stored in another storage?

In any case this issue needs #44 to work, as workloads use compile-time information whereas packs are a runtime concept.

SparseArray pagination

Proposed by @skypjack.
A very sparse SparseArray can waste a lot of memory.
This could be mitigated by making pages of indices for SparseArray's sparse array.
Instead of a Vec<usize>, the sparse array would become a Vec<Option<Box<[usize; PAGE_SIZE]>>>.
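The proposal in code form (a minimal sketch; PAGE_SIZE and the sentinel value are chosen arbitrarily here): pages are allocated on demand, so a very sparse storage only pays for the pages it actually touches.

```rust
const PAGE_SIZE: usize = 64;
const EMPTY: usize = usize::MAX; // sentinel for "no dense index"

struct PagedSparse {
    pages: Vec<Option<Box<[usize; PAGE_SIZE]>>>,
}

impl PagedSparse {
    fn new() -> Self {
        PagedSparse { pages: Vec::new() }
    }

    fn set(&mut self, sparse_index: usize, dense_index: usize) {
        let page = sparse_index / PAGE_SIZE;
        if page >= self.pages.len() {
            self.pages.resize_with(page + 1, || None);
        }
        // Allocate the page lazily, only when something lands in it.
        let slot = self.pages[page].get_or_insert_with(|| Box::new([EMPTY; PAGE_SIZE]));
        slot[sparse_index % PAGE_SIZE] = dense_index;
    }

    fn get(&self, sparse_index: usize) -> Option<usize> {
        let page = self.pages.get(sparse_index / PAGE_SIZE)?.as_ref()?;
        match page[sparse_index % PAGE_SIZE] {
            EMPTY => None,
            dense => Some(dense),
        }
    }
}
```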

Partially owned Packs

@leudz mentioned that this is on the roadmap. Just adding an issue so it's easier to see when it's been added :)

Shared components

Sparse sets allow two kinds of shared components without needing any additional memory.
The first kind is a single owner and any number of observers; the second is no owner and any number of observers.
A bool could be added to flag storages with shared components and iteration would be modified accordingly. Iteration over a ViewMut should skip any observer entity so as not to cause UB.
Packs also couldn't apply to storages with shared components since there wouldn't be any way to redirect the observers.
In the case where there's no owner, the components would remain in the storage indefinitely; a method could be added to remove orphan components. World::maintain could also remove them.

Make Entities iterable

To make this possible, the first entity added to the removed linked list must have its index changed to a value different from its own index (usize::MAX for example).
We can then use the fact that if entities[index].index() == index, the entity is alive.
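The aliveness rule can be sketched directly (EntityId reduced to its bare index here, which is an illustration rather than shipyard's representation): a slot is alive iff it still points back at its own index, while dead slots hold removed-list links.

```rust
struct Entities {
    ids: Vec<usize>, // EntityId simplified to its index part
}

impl Entities {
    // Alive iff the slot still points back at its own index; dead
    // slots store the next link of the removed list (or usize::MAX).
    fn is_alive(&self, index: usize) -> bool {
        self.ids.get(index).map_or(false, |&stored| stored == index)
    }

    // Iterate only live entities, which is what this issue asks for.
    fn iter(&self) -> impl Iterator<Item = usize> + '_ {
        (0..self.ids.len()).filter(move |&i| self.is_alive(i))
    }
}
```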

Pack update

Pack update would track insertion/modification and removal.
It would use two indices in the storage: inserted components at the beginning, followed by modified ones.
Components would be moved between categories the same way owned packs do it.
Removed components would be stored in a Vec<(Key, T)> to be able to remove multiple components from the same entity while still being able to tell where they came from.
Iterators over this storage should keep track of mutably accessed components and store their Key; when the iterator gets dropped, this list would update the storage.
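The two-index layout could be sketched as follows (a simplified illustration; the sparse-side redirection a real sparse set would also have to update is omitted). Flagging swaps a component to the boundary of the modified section, mirroring how owned packs move data.

```rust
// Dense layout: [inserted | modified | other], tracked by two counters.
struct UpdatePacked<T> {
    dense: Vec<T>,
    inserted: usize, // dense[..inserted] are newly inserted
    modified: usize, // dense[inserted..inserted + modified] are modified
}

impl<T> UpdatePacked<T> {
    // Flag a component as modified by swapping it to the end of the
    // modified section. Components already in the inserted or modified
    // sections stay where they are.
    fn flag_modified(&mut self, dense_index: usize) {
        if dense_index >= self.inserted + self.modified {
            self.dense.swap(dense_index, self.inserted + self.modified);
            self.modified += 1;
        }
    }
}
```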

Proc macro

Add a procedural macro to get a nicer implementation of systems.
The system InAcid present in the first example of the readme:

struct InAcid;
impl<'a> System<'a> for InAcid {
    type Data = (&'a Position, &'a mut Health);
    fn run(&self, (pos, mut health): <Self::Data as SystemData>::View) {
        // -- snip --
    }
}

should look like this with the macro:

#[system(InAcid)]
fn run(pos: &Position, mut health: &mut Health) {
    // -- snip --
}

More flexible packs

Add the ability to add/remove storages from a pack and to delete a pack.
Might have to change the Arcs into RwLocks.

Function with contract but not leading to UB, unsafe or not?

Entities can safely be deleted by using AllStorages, but all storages have to be iterated over, which prevents any parallelization.

One way to get around this issue is to have a function on Entities. To make this function safe, a component counter could be added to Entities, but that would mean borrowing Entities to add/remove components and would impact the performance of many operations (adding/removing entities and components).
To avoid the API and performance issues, the function can be implemented without a counter.

delete_unchecked would rely on the user to remove all components from the entity before calling it. In case this isn't respected, the user could access components they didn't remove through a deleted entity.

The question is: should delete_unchecked be unsafe or not?

I'm not the first person struggling with the issue rust-lang/api-guidelines#179 but currently no consensus has been reached.

Clone for some iterators

Iterators borrowing only immutable storages could implement Clone.
Currently one has to make the same iterator twice.

Use packed len with NonPacked iterator

The NonPacked iterator should take advantage of packed storages when it can.
If some of the storages iterated upon are packed, entities outside the packed zone are certain not to have all the components this iterator wants.

Serialization

I thought typetag was the right tool to implement this issue but I'd also need specialization. Since that isn't happening anytime soon, and it likely wouldn't fit all use cases anyway, here's another approach.
I didn't try to implement it so it might be impossible to get to compile 😄

To me there are two use cases for ser/de:

  1. saving (part of) a World and loading it
  2. saving (part of) a World and adding it to another one

The first one basically has no issues tied to it, the second one has a few:

  • what happens if the entity already exists?
  • what happens if the entity already has a component of this type?

I don't think a single behavior will fit all use cases so making both serialization and deserialization as modular as possible is the only way to go.

This issue is a big one but all methods don't have to be implemented at once. With this pattern adding new methods won't be a breaking change as long as the serialization format doesn't change.

Serialization

The builder pattern seems a good way to go about it:

world.serialize::<View<T>>(serializer).build();

serialize would work with Borrow so any view that can be requested by a system could be used here.
It has to be a view and not a simple type in order to differentiate unique storages and !Send/!Sync components.
T would have to implement both Serialize and Deserialize of course.

This would serialize the whole view, then you could filter:

world.serialize::<View<T>>(serializer).filter(|t: &T| -> bool { /* do your filtering */ }).build();
world.serialize::<View<T>>(serializer).filter_with_id(|(id, t): (EntityId, &T)| -> bool { /* do your filtering */ }).build();
world.serialize::<(View<T>, View<U>)>(serializer).complex_filter(|(t, u): (&T, &U)| -> (bool, bool) { /* do your filtering */ }).build();

As opposed to filter, complex_filter returns a bool per component.

world.serialize::<(View<T>, View<U>)>(serializer).complex_filter_with_id(|(id, (t, u)): (EntityId, (&T, &U))| -> (bool, bool) { /* do your filtering */ }).build();
world.serialize::<View<T>>(serializer).without_shared().build();
world.serialize::<View<T>>(serializer).custom().build();

I explain this one in the section below.

There will probably be some additional generics on serialize, so a few _ to add.
Using multiple filters would be an error; there will be a try_ and a panicking version.

Deserialization

world.deserialize::<View<T>>(deserializer).build();

The default behavior will be to create a new entity for entities that already exist.

world.deserialize::<View<T>>(deserializer).checked_entities()?.build();

It will first go through all entities and, if one of them already exists, return an error with the id.

world.deserialize::<View<T>>(deserializer).skip_entities()?.build();

It will skip any entity that already exists.

world.deserialize::<View<T>>(deserializer).checked_components()?.build();

It'll first check all storages to see if an entity already has a component that will be deserialized and, if it does, return an error with the EntityId and the component's TypeId.

world.deserialize::<View<T>>(deserializer).add_components().build();

If the exact entity exists, it'll add the components it doesn't already have. If the spot is taken by an entity with a different generation, it'll skip it.

world.deserialize::<View<T>>(deserializer).replace_components().build();

If the exact entity exists and already has some of the components to be deserialized, it'll replace them.

world.deserialize::<View<T>>(deserializer).custom().build();

I'm not sure what arguments this one will take. Here's the use case I imagine:
You have a tree inside the ECS and are using EntityId to link nodes. You want to transfer this tree to another World.
There's no way to do it properly with the methods above. If you skip entities, you'll have holes in the tree. If you use the checked methods it'll sometimes work and sometimes it won't. With new_entity your tree will be a mess.
custom would allow you to have a context and allow powerful operations on storage. For example you could ask which entity is the last one. Going from here you can safely offset all your nodes and it'll end up correct.
I imagine this will require a few new views that give you more control over entity and component creation.

It could also be used to reset entities' generation when loading a fresh World.

delete_all

Add a method delete_all to AllComponents that deletes all components of an entity.

add clear_inserted and clear_modified to AllStoragesViewMut

These two methods will go through all storages and apply the method of the same name.

clear_modified is always fast but clear_inserted might take some time; a note in the method's doc will be needed.

The implementation will likely go through UnknownStorage.

Sorting

Is it possible to add a sort functionality for packs and storages?

For comparison, it's available on EnTT groups as well as EnTT storages (there's another sort for storages too, comparing entities).

This could make it easier to impose something like a hierarchy... for example, a scene graph might be built via having Transform and Relationship components, and making a pack of those two. Then even if new entities are added, sort() could be called at just that time to keep the vector in a breadth-first order for quick traversal each tick.
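What sorting a storage amounts to, sketched without the sparse-side redirection a real implementation would also have to fix up: sort the dense components and keep the entity ids in sync so both arrays stay parallel.

```rust
// Sort a dense component array and carry the entity ids along with it.
// A real sparse set would also rewrite sparse[ids[i]] afterwards.
fn sort_storage<T: Ord>(ids: &mut Vec<usize>, dense: &mut Vec<T>) {
    // Pair ids with components, sort by component, then split back.
    let mut pairs: Vec<(usize, T)> = ids.drain(..).zip(dense.drain(..)).collect();
    pairs.sort_by(|a, b| a.1.cmp(&b.1));
    for (id, component) in pairs {
        ids.push(id);
        dense.push(component);
    }
}
```

A comparator-based variant (like EnTT's sort taking a predicate) would only change the sort_by call.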

System Local

Currently there's no way to have data specific to a system; the System trait doesn't access self in any way.

To emulate it, it's possible to use Unique, but that's not its original purpose, so it doesn't convey the code's meaning correctly and isn't optimal.

@eldyer proposed Local, a per system Unique.

Local storages would only be accessible from World or their system.
The Local type would take two generics, a system and a borrow.
In systems the first generic wouldn't actually be used, so it shouldn't be bound to System; this way Local<(), &usize> would be valid. The macro will even hide it.

The AtomicRefCell around storages would become per SparseSet. Each storage would have a general AtomicRefCell<SparseSet> in addition to a HashMap<TypeId, AtomicRefCell<SparseSet>> with an entry per system local.

Making AtomicRefCell only encapsulate SparseSet makes this system possible:

#[system(LocalTest)]
fn run(global: &usize, local: Local<&mut usize>) {}

This also makes it possible for other systems to run at the same time even when a local is an exclusive borrow.
The scheduler will be aware of how local borrows interact with global ones and will take care of conflicts if the same system is used multiple times in the same workload, while still allowing valid borrows.

Nested tuple for systems and run

Systems and run have a hard cap on the number of storages they can request.
This cap can be removed by allowing nested tuples.

Simpler run

If it becomes possible to specify some generic arguments while using the impl syntax (rust-lang/rust#63066) World::run could be rewritten as:

fn run<'a, T: Run<'a>>(&'a self, f: impl FnOnce(T::Storage))

Allowing run to be called without the trailing , _:

world.run::<&mut usize>(|mut usizes| {/* -- snip -- */})
