Specs: Parallel ECS

Specs is an Entity-Component System written in Rust. Unlike most other ECS libraries out there, it provides

  • easy parallelism
  • high flexibility
    • contains 5 different storages for components, which can be extended by the user
    • its types are mostly not coupled, so you can easily write some part yourself and still use Specs
    • Systems may read from and write to components and resources, may depend on each other, and barriers can be used to force several stages in system execution
  • high performance for real-world applications

Minimum Rust version: 1.70

Example

use specs::prelude::*;

// A component contains data
// which is associated with an entity.
#[derive(Debug)]
struct Vel(f32);

impl Component for Vel {
    type Storage = VecStorage<Self>;
}

#[derive(Debug)]
struct Pos(f32);

impl Component for Pos {
    type Storage = VecStorage<Self>;
}

struct SysA;

impl<'a> System<'a> for SysA {
    // These are the resources required for execution.
    // You can also define a struct and `#[derive(SystemData)]`,
    // see the `full` example.
    type SystemData = (WriteStorage<'a, Pos>, ReadStorage<'a, Vel>);

    fn run(&mut self, (mut pos, vel): Self::SystemData) {
        // The `.join()` combines multiple component storages,
        // so we get access to all entities which have
        // both a position and a velocity.
        for (pos, vel) in (&mut pos, &vel).join() {
            pos.0 += vel.0;
        }
    }
}

fn main() {
    // The `World` is our
    // container for components
    // and other resources.
    let mut world = World::new();
    world.register::<Pos>();
    world.register::<Vel>();

    // An entity may or may not contain some component.

    world.create_entity().with(Vel(2.0)).with(Pos(0.0)).build();
    world.create_entity().with(Vel(4.0)).with(Pos(1.6)).build();
    world.create_entity().with(Vel(1.5)).with(Pos(5.4)).build();

    // This entity does not have `Vel`, so it won't be dispatched.
    world.create_entity().with(Pos(2.0)).build();

    // This builds a dispatcher.
    // The third parameter of `with` specifies
    // logical dependencies on other systems.
    // Since we only have one, we don't depend on anything.
    // See the `full` example for dependencies.
    let mut dispatcher = DispatcherBuilder::new().with(SysA, "sys_a", &[]).build();
    // This will call the `setup` function of every system.
    // In this example this has no effect since we already registered our components.
    dispatcher.setup(&mut world);

    // This dispatches all the systems in parallel (but blocking).
    dispatcher.dispatch(&mut world);
}

Please look into the examples directory for more.

Public dependencies

  • hibitset
  • rayon
  • shred
  • shrev

Contribution

Contribution is very welcome! If you didn't contribute before, just filter for issues with "easy" or "good first issue" label. Please note that your contributions are assumed to be dual-licensed under Apache-2.0/MIT.


specs's Issues

Avoid iterating through all the entities

In a lot of cases, the set of entities that actually needs to be processed is just a fraction of all entities. We lose a lot of time checking every entity for the existence of the required components. This information could be cached and re-used.

We could get a small speedup (in general) by exiting early when one of the components is not found. Similarly, we could advise users to convert problematic runXwYr calls to custom run calls and early-exit manually.

As a last resort, one could separate the ECS into two, so that the component coverage is denser in each (like, don't mix particle entities with UI entities with game entities, etc).

One way to properly address this is to use tables. @csherratt reported the speedup from ~260us to ~50us by building on top of our Storage interface.

Another way would be to have Aspect as a special construct that tracks which entities have a required set of components, proposed by @OvermindDL1.

Storage::get and Storage::insert are slow

The functions Storage::get and Storage::insert are expensive and I'm not particularly sure why.

I am modeling a scene graph by giving entities a TransformParent(Entity) component; all of those entities also have a Transform and a WorldTransform. Evaluating a WorldTransform naively is O(N), so naively evaluating every entity's WorldTransform would be O(N^2). Instead, I made a dynamic programming algorithm that sorts entities into parent buckets and then creates a list of entities ordered by level in the "tree" (i.e. root entities first, then their children, then their children's children, and so forth), reducing the evaluation to O(N).

I evaluate the world transform at each step by getting the parent's world transform and the current entity's local transform, multiplying, and storing the world transform back, using the Storages fetched.

However, after measuring the loops done, the performance on get and insert is so slow that even for just 100 entities, regardless of hierarchy organization, I get less than 70hz. They are the main bottleneck, outside of the Joins themselves.

I get the impression that we are supposed to minimize calls to get and insert whenever possible, but after reading the code I don't understand why they are so expensive to call. The RwLock guards should already be established so it's not like they're locking on the masked storage each time? Each component uses VecStorage as well. The code below could be improved to make it access the storage less, but it still doesn't make sense to me why this is happening.

impl System<Ctx> for UpdateWorldTransforms {
    fn run(&mut self, arg: RunArg, _: Ctx) {
        let (transforms, mut world_transforms, parents, entities): (Storage<Transform, _, _>, Storage<WorldTransform, _, _>, Storage<TransformParent, _, _>, Entities) =
            arg.fetch(|w| {
                let t = w.read::<Transform>();
                let wt = w.write::<WorldTransform>();
                let p = w.read::<TransformParent>();
                let e = w.entities();
                (t, wt, p, e)
            });

        // Iterate over entities that have parents and local transforms (we don't use transform yet)
        for (entity, parent, _) in (&entities, &parents, &transforms).iter() {
            if !self.entity_children_map.contains_key(&parent.entity) {
                self.entity_children_map.insert(parent.entity, Vec::new());
            }

            let children: &mut Vec<Entity> = self.entity_children_map.get_mut(&parent.entity).expect("There should always be a vec here.");

            children.push(entity);
        }

        // Iterate over entities that do _not_ have parents and do have transforms and add their children to the vec
        for (entity, _, transform, world_transform) in (&entities, !(&parents), &transforms, &mut world_transforms).iter() {
            // Make sure they have a world transform (the local transform is the world in this case)
            world_transform.0 = transform.0;

            if let Some(children) = self.entity_children_map.get_mut(&entity) {
                self.entities_to_update.extend(children.drain(..));
            }
        }

        // Now, add their children, and their children, ad infinitum...
        let (mut begin, mut end): (usize, usize) = (0, self.entities_to_update.len());
        loop {
            for i in begin..end {
                let e = self.entities_to_update[i];
                if let Some(children) = self.entity_children_map.get_mut(&e) {
                    self.entities_to_update.extend(children.drain(..));
                }
            }
            if end == self.entities_to_update.len() {
                // no entities were added in this cycle, break
                break
            } else {
                begin = end;
                end = self.entities_to_update.len();
            }
        }

        // Finally, the actual step of updating all the entities' world transforms.
        for entity in self.entities_to_update.drain(..) {
            let parent = parents.get(entity).expect("This entity should have a parent.");
            let parent_world_transform: Matrix4<f32> = world_transforms.get(parent.entity).expect("The parent should have a world transform at this point.").0;
            let local_transform: &Transform = transforms.get(entity).expect("Why doesn't this entity have a local transform?");
            world_transforms.insert(entity, WorldTransform(local_transform.0 * parent_world_transform));
        }
    }
}

Make it possible to single-thread portions that need it

In particular, pretty much any part of SDL2 that interacts with the hardware (so, most of it) is defined to only work properly from the program's main thread. So, for example, drawing can't be made into a system that consumes components containing an SDL2 Texture.

Because of this, it would be nice to be able to "pin" components to a particular thread or some such, so they will only get run by that thread. The world is an imperfect place.

...though upon consideration, when doing drawing with SDL2 the order you draw things matters and specs can't really specify that anyway, so maybe trying to do drawing with an ECS is misguided in the first place.

Entity deletion

May require some sort of recycling with generational IDs.

Discussing alternative ways of specifying system priority

Expanding upon #71. Sorry for the wall of text; I've got a ton of thoughts about this issue and didn't have a lot of time to refine them down to a bite-sized blurb. Thanks for reading, anyway!

My thoughts, mostly boil down to two things:

  • That data dependencies aren't necessarily the same as runtime dependencies,
  • That maintaining the "knowledge" of who goes before who in more than one place, at least with respect to a given system, is a bit of a pain.

Data dependency is not always the same as run order dependency

It turns out, at least in my case, that a data dependency is not the same as the run ordering. If it were, I'd have cycles in my execution strategy. One example (from my client):

[network::AdapterSystem]
-> [some interpreting system] Via incoming network events
-> [player::InputSystem] Via some interpreted state
-> [network::AdapterSystem] Via outbound player input event

I resolve this in my particular case by just not having the network system depend on things that supply outgoing events. It emits them first thing on the next tick.

I've also got cases where I enforce a "dependency" that's not directly visible via the data dependencies. One example from my client.

[network::AdapterSystem]
-> [synchronization::System] reconstructs state emitted by server, writing (LOTS of) component state
~> [renderer::System] uses that component state.

There are alternate approaches to solve both problems. It would be possible to break the adapter system into two systems, or to have the renderer system depend on some sentinel value. In my case, it was more difficult at an implementation level to do the former due to my underlying socket, and the latter is actually more complicated due to a Send issue with my renderer.

Centralizing the "knowledge" of system priority.

A different, and for me more compelling, reason is that maintaining the explicit priorities is just a pain in the butt when specifying them the current way.

I've found, at least in my case, that keeping the knowledge of dependency ordering localized, even if its often redundant, is less overhead than the alternatives. Those alternatives as I see them are either maintaining a huge set of constants with numerical priorities, or making systems responsible for knowing their priority in the raw. The former gets pretty busy and doesn't separate concerns well, and the latter IMO isn't viable since that priority value is intrinsically dependent on the values of its dependencies, and the number of other systems -- it becomes a maintenance headache.

Thus, at least in my swing at the problem, I put off actually numbering the systems, and just resolve those values at the last possible second when every system, and their explicit dependencies, is available.

Why now?

In some sense, I think this problem is more salient when the friction in passing events is lessened. I hacked together a broadcasting pub sub system (that uses specs under the hood), that lets emitters and receivers be less aware of each other. That increase in ergonomics (IMO, anyway), made the pain of manual system juggling a bit more obvious.

For the morbidly curious, the implementation is embedded in my game repo (a test here). It's super naive and unoptimized, though.

Post-submit addendum

I didn't realize until after the fact, but the fact that I use pub/sub, and thus obscure the lineage of a piece of data, sort of necessitates a more direct way of specifying dependencies. I imagine there could be other cases as well where the data dependencies are similarly complicated.

Consider ThreadPool alternatives

I see 2 minor-ish issues with it:

  1. the need to specify the number of threads
  2. the overlapping logic of our systems going back and forth with their threads

Create Parallel Benchmark

We currently don't really have any way of proving the benefits of our design. Benchmarks are always a good way; they will also help shake out race conditions and unexpected bottlenecks (I suspect the scheduler waiting on threads could be problematic, for example).

What I would suggest is

  • Add an example that has 16 or more components
  • And 30 or more systems that touch one component or another
    • This should be a mix of pure readers, pure writers, and ones that read and write multiple parts
  • Creates 10K entities
    • Should be a mix of components; not all entities should contain every component
  • Include dynamic creation and deletion of entities
  • The main loop tracks how long the systems took to process and prints out a summary of how long it took to run N times, and how long each loop took.

Entity iterator

Currently our entities() method returns a lock to Vec<Entity>. This doesn't quite work for generational entities, where the world doesn't store entities directly but rather uses an optimized form, from which they can be iterated. Thus, we need to implement that iterator and return it.
This is related to choice 3 in #4

TrackedStorage (flagged component storage)

Having a TrackedStorage that would add/remove flags depending on if the component was modified would be a good bonus to performance in a lot of cases. For instance, if you only want to recalculate something once the component is modified.

One problem: if the user iterates a mutable storage instead of joining, it will flag all of the components. This could be solved by comparing the two results on mutable iterations.

Make registering components idempotent

I'd like to be able to have systems in my library automatically register any components they use upon initialisation, so that clients don't need to do that separately. The consequence of this is that multiple systems would attempt to register the same component type.

World::register can't be used in that way directly, by the look of it because it doesn't keep track of what components are registered before blindly "re-initialising" the storage for the component type.

I thought of checking this myself using something like TypeMap, then it occurred to me that you're effectively tracking the necessary information internally in Specs anyway, so it wouldn't be much of a stretch to make `register` idempotent.

Any objection to this, or reason I've missed that it shouldn't be idempotent? I'm happy to implement it and PR if no objections.

Deadlock when using create_iter()

The following program deadlocks under Mac OS 10.11:

extern crate specs;

use specs::{Component, VecStorage, World, Planner, System, RunArg};

struct Foo(i32);
impl Component for Foo { type Storage = VecStorage<Foo>; }

struct Bar(i32);
impl Component for Bar { type Storage = VecStorage<Bar>; }

type Ctx = ();

struct Spawner;

impl System<Ctx> for Spawner {
    fn run(&mut self, arg: RunArg, _: Ctx) {
        let (mut foo, mut bar, mut entities) = arg.fetch(|w| {
            let e = w.create_iter();
            let foo = w.write::<Foo>();
            let bar = w.write::<Bar>();
            (foo, bar, e)
        });

        for i in 0 .. 10 {
            let eid = entities.next().expect("Ran out of EIDs");
            foo.insert(eid, Foo(i));
            bar.insert(eid, Bar(i));
        }
    }
}

fn main() {
    let mut world = World::new();
    world.register::<Foo>();
    world.register::<Bar>();

    let mut planner = Planner::new(world, 3);
    planner.add_system(Spawner, "Spawner 1", 10);
    planner.add_system(Spawner, "Spawner 2", 9);

    for i in 0 .. 100 {
        planner.dispatch(());
        println!("Rendered frame {}", i);
    }
}

If run as written, Rust detects the deadlock and panics. If I move the create_iter() call after the calls to write(), the deadlock is not detected, and the program just freezes.

I haven't tested this under other operating systems.

Does Storage need to hold the generation?

Right now both storage types hold a generation alongside the data. This is used to validate that the value in get/get_mut is the same one being referenced. I think the check is needed; the question is whether we need to make every storage track this state.

Currently, entities are only created by systems using the dynamic create functionality. Deletes are done lazily via the dynamic delete. Because the deletes are lazy, the entity index is never reused twice between planner.wait() calls. And since the deletes are done by specs itself, we are guaranteed that the value will be removed during a .wait().

Because of this design I think it is reasonable to assume

  1. no Storage will actually be storing a value that is out of date with the generation
  2. specs only needs a single place that holds the latest generation.

What I propose is that we remove the generation from Storage and return every Storage wrapped in a magic type that joins the shared generation count with the type. This will implement get etc. but will validate that the object is both alive and that the generation is correct. The storage trait would be simplified a bit by referring to each get/insert/remove by Index rather than by Entity.

This will reduce the size of the components in VecStorage and HashMapStorage, which should make them faster since there is less cache pressure.

Name change

A lot of people are concerned about our name collision with https://wiki.haskell.org/Parsec. Even if the actual project is known from the context, searching anything online is difficult.

I suggest changing the name together with moving into an organization, e.g. "slide-rs/apecs".
cc @csherratt

Constructing entities manually

Would be nice to be able to store a u32 and i32 instead of storing an Entity, mainly for Atomic types.

Are there any problems related to making the declaration pub struct Generation(pub i32)? Or maybe having a constructor for it?

Consider using futures

Could you try futures instead of threadpool? Probably make it an optional dependency? I think it would also be interesting in terms of performance.

Additionally, amethyst could reuse the CpuPool then (Probably make it an Arc?).

Allow is_alive() to be called from the Entities struct

It would be much more ergonomic, and you wouldn't need to take a read lock on the whole world just for that. Most usage of is_alive() is in systems, and right now the only place you could call it there is inside arg.fetch. As far as I understand, running whole systems inside fetch would stop any parallel processing from happening.

I'd suggest the same thing for create_later() and delete_later() but I don't understand how they work yet.

Rename iter to join.

Currently the Join trait's join method is called iter, which is confusing if you are not familiar with the library.
And even once you know that it's for joining, it doesn't describe what is happening well.

Renaming it to join would make it clearer what is happening and make code written using the library easier to understand.

System data problem

Anything more than a simple demo will probably have some sort of a System trait, with objects implementing it being boxed and stored in a list to be executed automatically. It may look like this:

trait System {
    fn process(&mut self, &mut secs::Planner, TimeDelta);
}

Unfortunately, using secs under such an abstraction is not elegant:

  1. The system execution is now split into 3 different chunks, executed at different moments:
    • process() entry, executed upon queuing the system
    • fetch phase, executed at the start of the system on another thread
    • iter phase, executed after all the storage locks are acquired
  2. Any data that the system has (and wants to access in the execution closure) needs to be Clone + Sync. Imagine a mutable vector there: now you'd need to wrap it in Arc<Mutex<X>>, even though that should not be needed. This is inconvenient and bloats the code.

Optimally, I'd like to base ECS onto the system interface like this (edit: moved to #40):

trait System<C: Clone>: Send { //`C` stands for "Context"
    type Data<'a>; // some sort of HKT, which is not supported in current Rust
    fn fetch<'a>(&self, &'a secs::World) -> Self::Data<'a>;
    fn run(&mut self, Data<'a>, C);
}

Why this would be nice:

  • this system is required to implement Send: each time it needs to be executed, it's moved onto the thread, run, and then moved back (or not, if we can track this).
  • it allows any sort of data to be kept within the system without harsh restrictions or bloated wrappers
  • it clearly separates executable parts and their contexts. This enforces a fetch at compile time.
  • it allows users to start using it right away (without higher abstractions) in complex systems; one just needs to decide what C is (in the original example, C = TimeDelta)

Aspects

An Aspect is basically the configuration of components that are read or written by a particular system. It's an additional concept to work with, but it has a strong benefit: static knowledge about the system dependencies. With that information, we can know in advance whether two systems can safely run in parallel. Thus, we can schedule them more effectively.

Essentially, aspects would allow us to get rid of the RwLock (even though I consider its usage rather elegant as it stands now), since we would already know when something is available. It also seems like this would benefit from directly controlling the threads instead of using thread_pool (#42).

If we go further, we can cache entity lists per aspect as proposed by @OvermindDL1. This could be a replacement for our current BitSet-based system.

All in all, this is just something to think about. It seems like introducing Aspect would essentially drive the architecture in a different direction, so deserves to live in a separate project.

BTreeMap for all the storage

It appears that BTreeMap is efficient both at removing/inserting elements and at iteration, giving mostly linear access and combining the benefits of vectors and maps. It seems like an ideal storage for components (see #119), thus I wonder... if we can go extreme and force all the storages to be BTreeMap. This would simplify some of the internals and some of the user code.

I haven't seen a good example of a user-defined storage type yet, so that might be the biggest question. If it's not useful, we can cut this ability.

Optimize VecStorage

It's currently implemented as Vec<Option<T>>, thus blowing up the size by a machine word, with only one bit really used. This reduces cache efficiency. Instead, we can have a separate bit vector.

Feedback from porting Yasteroids

Speaking of kvark/yasteroids#2 - since it's the largest existing project using specs, I thought I'd share my impressions of using it at this scale:

  • system interface is needed (#30), so that the systems are moved between threads, and their contents don't need to be syncable
  • run shortcuts have very limited use - only 2 of my 6 systems use them. We should focus instead on custom run ergonomics, or empower the shortcuts with more abilities.
  • overall it was mostly a pleasure to use it after simplecs, although checking for components in custom run functions is still quite noisy.
  • it's non-trivial to get the BitSet performance by writing our custom runs
  • dynamic entity creation/deletion works extremely nicely and conveniently. I used to pool like-entities in order to avoid creating them (which was simply not supported), but now the code is absolutely clean and lean.
  • I had to fix a few places in specs in order for it to work. We need to establish strong rules about the order we lock things in (to avoid deadlocks).

Generic Entity

Looks like there are 3 ways here:

  1. generic over trait Entity, functions accept it by reference
  2. generic over trait Entity: Copy, functions accept it by copy
  3. custom generational IDs to satisfy everyone

no thread version to compile to asmjs/wasm

My game uses specs, and I also want to be able to compile to asmjs.
But the web doesn't support threads for now.

What do you think about using a fake thread pool when targeting those platforms?

Support for dynamic component types

This library seems promising and I would like to use it in a project of mine, but I really can't because I would like the ability to create custom component types at runtime.

The current implementation seems like it uses typeids to map component types into their relative data. It seems like it would be easy to change them from typeids into data of generic type that implements the necessary traits (like PartialEq, Eq, Clone, etc).

The problem is that I don't really know how to make one implementation that supports the current way of accessing components and this dynamic one at the same time. There could be two different World implementations with slightly different interfaces but that would be messy and unwanted.

I think I'll fork this project to try and make a dynamic version and see what I can do.

Undocumented constraint: custom systems must call `arg.fetch`.

In practice a system will probably always call arg.fetch before doing any other work, but it probably shouldn't be required. The code currently enforces it by waiting on the signal associated with the pulse in FetchArgs, ensuring that it was not dropped.

If fetch is never called the signal is dropped and the task panics with a bit of a confusing message:

if signal.wait().is_err() {
    panic!("task panicked before args were captured.")
}

lib.rs#L120

If the current behavior is desired it should be documented and maybe the panic should hint something to the effect of: "Maybe you forgot to call arg.fetch from within your system closure".

Entity creation/deletion APIs

This concern came up in #86.

We currently have:

  • create_now - needs &mut self and is thus only applicable to early initialization
  • create_later - doesn't allow building components conveniently, so its usage is rather limited
  • create_later_build - much more usable, but the name is long and confusing

I suggest we revise these APIs for specs-0.8. Basic proposal:

  • leave create_now as is, given that it's sufficiently expressive
  • rename create_later_build -> create, assuming it's what users actually want to use
  • rename create_later to create_pure, reflecting the fact that only the Entity is created, with no building.

Can we do better? cc @msiglreith @csherratt

Unable to use components that require lifetime parameters

Howdy!

I was attempting to play around with radiant and specs but I've run into some trouble. Component requires Any... which requires the static lifetime. So it isn't possible (or it just isn't clear how) to create components that hold runtime resources.

Any advice here?

RFC: dynamic entity creation and deletion

Dynamic operation means that it happens during a system execution as opposed to being outside of the system processing (this is already in).

Appendix

We could add RwLock<Appendix> to the Scheduler, where:

struct Appendix {
    next: Entity,
    queue: Vec<Entity>,
}

Note: World doesn't know anything about it.
Then, WorldArg would get methods insert() -> Entity and remove(Entity). These would lock the Appendix for writing and update both next and the queue.

Note: when removing an entity, the value added to the queue can have its generation inverted right away. This allows us to avoid having two separate queues for insertion/removal, or an additional enum type.
Note: we propose next: Entity instead of next_free: Index, because when an entity gets dynamically deleted, we can no longer use generations vector for its new generation.

Latency

We want newly created entities to start participating right away. The use case is: player input generates a bullet, and it gets its first movement and is rendered in the same frame.
Note: removing entities doesn't have to have low latency. E.g. if the player killed something, it's more important to show the explosion particles quickly than hiding the object itself.

Locking generations for writing is not an option, since it's being used simultaneously by multiple systems all the time (for reading). Thus, we record entity insertion and removal in the Appendix.

In order to be able to work with the new entities, we propose having entities_extra(&self) -> EntitiesExtraIter method in the WorldArg. It would lock the appendix for reading, go through the queue, and return those entities that were added. We can also update the run method to always process extra entities by default, or make it configurable.

Note: we don't force the user to iterate over extra entities in their custom systems. It's their decision, and they can even decide on that dynamically during the system execution.

Rest

Of course, we should not keep using the appendix across multiple frames without merging it into the world; otherwise we'd suffer from lock waits. To address this, we could have a rest() method in the scheduler. The implementation would lock both world.generations and the appendix for writing, thus waiting for all the systems to finish. Then we'd fold the queued changes into the generations and clear the queue.

The user is supposed to call rest() at the end of the frame, where waiting is allowed. How to enforce that is an open question; perhaps we can assert or warn when the size of the queue reaches a certain fraction of the world.
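A minimal sketch of rest(), again assuming the illustrative Entity/Appendix layout from above (generations stored as a plain Vec<i32>, negative meaning dead):

```rust
use std::sync::RwLock;

#[derive(Clone, Copy, Debug, PartialEq)]
struct Entity {
    index: usize,
    generation: i32,
}

struct Appendix {
    queue: Vec<Entity>,
}

// rest() takes both write locks, which blocks until every system has
// released its read locks, then records the queued changes into the
// generations vector and clears the queue for the next frame.
fn rest(generations: &RwLock<Vec<i32>>, appendix: &RwLock<Appendix>) {
    let mut gens = generations.write().unwrap();
    let mut app = appendix.write().unwrap();
    for e in app.queue.drain(..) {
        if e.index >= gens.len() {
            gens.resize(e.index + 1, 0);
        }
        gens[e.index] = e.generation; // negative = dead, positive = alive
    }
}
```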

The case for Lazybox ECS

@Liotitch wrote:

Even though today we are closer to specs, we still have some fundamental differences. With the traditional design we had issues integrating some features (like Box2D) that were not designed to be used in a multithreaded environment, so we introduced the concept of modules: independent parts which provide and own their components.

Also, we have the concept of Accessors, which statically ensure, both to the user and to the library, that the entity is alive and the reference valid, allowing us to avoid some checks most of the time.

We have Groups, which are like Aspects: they represent a set of entities that Systems can iterate over instead of recalculating it at each processing step, although this introduces its own caveats.

Our System Scheduler is based on what systems need to read and write instead of associating a priority with each system, but I suppose we could implement this on top of specs.

At last, we had a lot of trouble to handle serialization and our new ecs is (in theory) designed to ease that.

In order to use specs in lazybox we would currently either need some changes in specs that might not be in the scope of this library or at least find new alternatives on our side.

Ideal systems API

This is how I see our hypothetical API (forked from #30):

trait System<C: Clone>: Send { // `C` stands for "Context"
    /// A composition of ECS components, some writable, some not, that are processed.
    type Data<'a>: SystemInput; // a type that depends on an arbitrary lifetime 'a (HKT)
    /// Fetch the required components from the world.
    /// May block until the components become available.
    fn fetch<'a>(&self, world: &'a World) -> Self::Data<'a>;
    /// Main processing entry point, implemented by default so that users
    /// can implement only `on_each` and friends and get this automatically.
    fn run(&mut self, data: Self::Data<'_>, context: C) {
        self.on_start(context);
        for (entity, element) in data.iter() {
            self.on_each(entity, element);
        }
        self.on_finish();
    }
    fn on_start(&mut self, _context: C) {}
    fn on_each(&mut self, _entity: Entity, _element: Self::Data::Element) {}
    fn on_finish(&mut self) {}
}

Main concerns:

  1. Currently impossible, because it requires HKT over lifetimes. We need to find a workaround, perhaps by erasing the lifetime of the locks, or by using lower-level locks under the hood (StaticRwLock).
  2. May need a way to communicate between on_start, on_each, and on_finish. With this design, the only way to pass data through is via self, but it's possible.
  3. Need to automatically implement this trait for function closures that match on_each or run arguments. This may be non-trivial and require some magic dust.
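Concern 1 has since been resolved at the language level: generic associated types were stabilized in Rust 1.65, so `type Data<'a>` can be written directly. A minimal self-contained sketch of the pattern (the `World` and `System` here are toy stand-ins for illustration, not the real specs types):

```rust
struct World {
    positions: Vec<f32>,
}

trait System {
    // An associated type parameterized over a lifetime: exactly the
    // "HKT over lifetimes" the proposal needs, expressible since GATs
    // were stabilized in Rust 1.65.
    type Data<'a>;
    fn fetch<'a>(&self, world: &'a World) -> Self::Data<'a>;
    fn run(&mut self, data: Self::Data<'_>);
}

struct Sum {
    total: f32,
}

impl System for Sum {
    // Borrow the position data for as long as the world is borrowed.
    type Data<'a> = &'a [f32];

    fn fetch<'a>(&self, world: &'a World) -> Self::Data<'a> {
        &world.positions
    }

    fn run(&mut self, data: Self::Data<'_>) {
        self.total = data.iter().sum();
    }
}
```

Note that fetch ties the returned data's lifetime to the world borrow, not to &self, so the scheduler can fetch from several systems before running any of them.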

Ticket-based locking

@amaranth on IRC suggested an interesting locking method that has the potential to greatly improve our parallelization abilities. Basically, each system first takes tickets for the components it needs, without actually locking them, and then waits for the tickets to be served, in parallel.

Candidate: https://github.com/amaranth/queuedrwlock
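To illustrate the core idea with a toy single-resource example (this is a classic ticket lock sketch, not the queuedrwlock implementation): taking a ticket is a cheap atomic increment that never blocks, so a system can grab tickets for every component it needs up front and only then wait for each to be served.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

struct TicketLock {
    next_ticket: AtomicUsize, // ticket dispenser
    now_serving: AtomicUsize, // ticket currently allowed in
}

impl TicketLock {
    fn new() -> Self {
        TicketLock {
            next_ticket: AtomicUsize::new(0),
            now_serving: AtomicUsize::new(0),
        }
    }

    // Non-blocking: just reserve a place in line.
    fn take_ticket(&self) -> usize {
        self.next_ticket.fetch_add(1, Ordering::Relaxed)
    }

    // Block (spin, for simplicity) until our ticket is served.
    fn wait(&self, ticket: usize) {
        while self.now_serving.load(Ordering::Acquire) != ticket {
            std::hint::spin_loop();
        }
    }

    // Let the next ticket holder in.
    fn release(&self) {
        self.now_serving.fetch_add(1, Ordering::Release);
    }
}
```

A real implementation would park the thread instead of spinning and would distinguish read from write tickets, as a queued reader-writer lock does.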

Needs more docs

So I'm currently trying to use this, but I would really like to get some more docs on things. For instance, what does add_system do exactly, and how do I use it? What are the differences between anonymous systems and named systems, and the use cases for each? Is it even a good idea to use anonymous systems?

Optional Components

It would be good if there were a way to specify: for all entities that contain &A and &B, give me Some(&C) if C is present and None otherwise.

run_xxx(|a: &mut A, b: &B, c: Option<&C>| {
    // ...
});

Glium compatibility

I will try it myself, but I'm just curious to know in advance:

Will it work, in either single-threaded or multi-threaded mode, with Glium or Gfx?
