
Sequentity's Introduction

A single-file, immediate-mode sequencer widget for C++17, Dear ImGui and EnTT


Table of Contents



Try it

You can build it yourself, or download a pre-built version and fool around.

Quickstart

Key Description
QWER Switch between tools
Click + Drag Manipulate the coloured squares
Click + Drag Move events around in time
Alt + Click + Drag Pan in the Event Editor
Delete Delete all events
Space Play and pause
F1 Toggle the ImGui performance window
F2 Toggle the Theme Editor
Backspace (debug) Pause the rendering loop
Enter (debug) Redraw one frame


Overview

It's a sequence editor, in the spirit of MIDI authoring software like Ableton Live, Bitwig and FL Studio, where each event carries a start time, a duration and a handle to your custom application data.

Heads up!

This is a work in progress, alpha at best, and is going through changes (that you are welcome to participate in!)

What makes Sequentity different, and inspired its name, is that it is built as an Entity-Component-System (ECS), and that each event is a combination of start time, length and custom application data, as opposed to individual events for start and end. This makes it suitable for e.g. (1) creating a dynamic rigid body, (2) editing said body whilst maintaining a reference to what got created, and (3) deleting the body at the end of the event.

entt::registry registry;

auto entity = registry.create();
auto& track = registry.assign<Sequentity::Track>(entity, "My first track");
auto& channel = Sequentity::PushChannel(track, MyEventType, "My first channel");
auto& event = Sequentity::PushEvent(channel, 10, 5); // time, length

while (true) {
    ImGui::Begin("Event Editor");
    Sequentity::EventEditor(registry);
    ImGui::End();
}

What can I use it for?

If you need to record anything in your application, odds are you need to play something back. If so, you may also need to edit what got recorded, in which case you can use something like Sequentity.

I made this for recording user input in order to recreate application state exactly, such that I could record once more on top of the previous recording; much like how musicians record over themselves with various instruments to produce a complete song. You could theoretically decouple the clock-time aspect and use this as a playback mechanism for undo/redo, similar to what ZBrush does, and save that with your scene/file. Something I intend to experiment with!

Goals

  • Build upon the decades of UI/UX design found in DAWs like Ableton Live and Bitwig
  • Visualise 1-100'000 events simultaneously, with LOD if necessary
  • No more than 1 ms per call on an Intel-level GPU
  • Fine-grained edits to properties of individual events up close
  • Coarse-grained bulk-edits to thousands of events from afar

Is there anything similar?

I'm sure there are, however I was only able to find one standalone example, and only a few others embedded in open source applications.

If you know any more, please let me know by filing an issue!

Finally, there are others with a similar interface but different implementation and goal.



Features

  • Per-event application data Attach any of your application data to an event, and retrieve it later
  • Per-event, channel and track coloring To stay organised with lots of incoming data
  • Consolidated Events Individual events with a start and length, as opposed to separate events for start and end
  • Overlapping Events Author events that overlap in time
  • Single-file library Distributed as a single .h file of ~1'000 lines for easy inclusion into your project
  • Cross-fade Overlap the end of one event with the start of another for a cross-blend, to do e.g. linear interpolation
  • Event Priority When events overlap, determine the order in which they are processed
  • Group Events For drawing and manipulating multiple groups of entities together
  • Mini-map Like the one in Sublime Text; for when you've got lots of events
  • Event Scaling Sometimes, the input is good, but could use some fine-tuning
  • Event Cropping Other times, part of the input isn't worth keeping
  • Track Folding For when you have way too much going on
  • Track, Channel and Event Renaming Get better organised
  • Custom Event Tooltip Add a reminder for yourself or others about what an event is all about
  • Event Vertical Move Implement moving of events between tracks (i.e. vertically)
  • Zoom Panning works by holding ALT while click+dragging. Zooming needs something like that.
  • One-off events Some things happen instantaneously




Design Decisions

  • No class instance ImGui widgets generally don't require an instance, and neither does Sequentity
  • Events -> Channels -> Tracks Events are organised into these three groups
  • Integer Event Type Leaving definition and interpretation of types to the application author
  • Integer Time Time is represented as samples, rather than frames*
  • 1 Entity, 1 Track Events ultimately operate on components relative to some entity
  • void* for application data In search of a better alternative, as it complicates cleanup. Let me know!
  • No clock time The application is responsible for managing the event loop

* The difference being that a sample is a complete snapshot of your application/game state, whereas a frame is a (potentially fractional) point in time, e.g. 1.351f
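As an illustration of that distinction (a minimal sketch; these helper names are hypothetical, not part of Sequentity):

```cpp
// Hypothetical helpers, for illustration only
int to_sample(double seconds, int samples_per_second) {
    // A sample is a whole snapshot; the fractional part is discarded
    return static_cast<int>(seconds * samples_per_second);
}

double to_frame(double seconds, double frames_per_second) {
    // A frame may be fractional, e.g. 1.351
    return seconds * frames_per_second;
}
```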



Todo

These are going into GitHub issues shortly.

  • Stride There are a few values that work, but make no sense, like stride
  • Bug, hot-swap tool Translate something and switch tool without letting go
  • Bug, event at end Click to add an event on the end frame, and it'll create one erroneously
  • Cosmetics, transitions Duration of transitions is based on a solid 60 fps; it should be relative to wall-clock time
  • Refactor, Unify data types Data types in Sequentity are a mixture of native, Magnum and ImGui types.
  • Smooth Panning and Zooming Any change to these should have a nice smoothing effect
  • Drag Current Time You can, but it won't trigger the time-changed callback


Open Questions

I made Sequentity for another (commercial) project, but made it open source in order to seek help from the open source community. This is my first sequencer-like project and in fact my first C++ project (with <4 months of experience using the language), so I expect lots of things to be ripe for improvement.

Here are some of the things I'm actively looking for answers to and that you are welcome to strike up a dialog about in a new issue. (Thank you!)

  • Start, End and Beyond Events are currently authored and stored in memory like they appear in the editor; but in your typical MIDI editor the original events don't look like this. Instead, events are standalone and immutable. An editor, like Cubase, then draws each consecutive start and end pair as a single bar for easy selection and edits. But do they store it in memory like this? I found it challenging to keep events coming in from the application together. For example, if I click and drag with the mouse, and then click with my Wacom tablet whilst still holding down the mouse, I would get a new click event in the midst of drag events, without any discernible way to distinguish the origin of each move event. MIDI doesn't have this problem, as an editor typically pre-selects which device to expect input from. But I would very much like to facilitate multiple mice, simultaneous Wacom tablets, eye trackers and anything capable of generating interesting events.
  • How do we manage selection? Sequentity manages the currently selected event using a raw pointer in its own State component, is there a better way? We couldn't store selection state in an event itself, as they aren't the ones aware of whether their owner has got them selected or not. It's outside of their responsibility. And besides, it would mean we would need to iterate over all events to deselect before selecting another one, what a waste.

On top of these, there are some equivalent Application Open Questions for the Tools and Input handling which I would very much like your feedback on.



Install

Sequentity is distributed as a single-file library, with the .h and .cpp files combined.

  1. Copy Sequentity.h into your project
  2. #define SEQUENTITY_IMPLEMENTATION in one of your .cpp files
  3. #include <Sequentity.h>
  4. See below

Dependencies

  • ImGui Which is how drawing and user input is managed
  • EnTT An ECS framework, this is where and how data is stored.


Usage

Sequentity can draw events in time, and facilitate edits to be made to those events interactively by the user. It doesn't know nor care about playback, that part is up to you.

New to EnTT?

An EnTT Primer

Here's what you need to know about EnTT in order to use Sequentity.

  1. EnTT (pronounced "entity") is an ECS framework
  2. ECS stands for Entity-Component-System
  3. Entities are identifiers for "things" in your application, like a character, a sound or UI element
  4. Components carry the data for those things, like the Color, Position or Mesh
  5. Systems operate on that data in some way, such as adding +1 to Position.x each frame

It works like this.

// You create a "registry"
entt::registry registry;

// Along with an entity
auto entity = registry.create();

// Add some data..
struct Position {
    float x { 0.0f };
    float y { 0.0f };
};
registry.assign<Position>(entity, 5.0f, 1.0f);  // 2nd argument onwards passed to constructor

// ..and then iterate over that data
registry.view<Position>().each([](auto& position) {
    position.x += 1.0f;
});

A "registry" is what keeps track of what entities have which components assigned, and "systems" can be as simple as a free function. I like to think of each loop as its own system, like that one up there iterating over positions. Single responsibility, and able to perform complex operations that involve multiple components.

Speaking of which, here's how you combine components.

registry.view<Position, Color>().each([](auto& position, const auto& color) {
    position.x += color.r;
});

This function is called on every entity with both a position and color, and combines the two.

Sequentity then is just another component.

registry.assign<Sequentity::Track>(entity);

This component then stores all of the events related to this entity. When the entity is deleted, the Track is deleted alongside it, taking all of the events of this entity with it.

registry.destroy(entity);

You could also keep the entity, but erase the track.

registry.remove<Sequentity::Track>(entity);

And when you're fed up with entities and want to go home, then just:

registry.clear();

And that's about it as far as Sequentity goes, have a look at the EnTT Wiki along with my notes for more about EnTT. Have fun!

Here's how you draw.

// Author some data
entt::registry registry;
auto entity = registry.create();

// Events may carry application data and a type for you to identify it with
struct MyEventData {
    float value { 0.0f };
};

enum {
    MyEventType = 0
};

auto& track = registry.assign<Sequentity::Track>(entity); {
    track.label = "My first track";
    track.color = ImColor::HSV(0.0f, 0.5f, 0.75f);
}

auto& channel = Sequentity::PushChannel(track, MyEventType); {
    channel.label = "My first channel";
    channel.color = ImColor::HSV(0.33f, 0.5f, 0.75f);
}

auto& event = Sequentity::PushEvent(channel); {
    event.time = 1;
    event.length = 50;
    event.color = ImColor::HSV(0.66f, 0.5f, 0.75f);
}

// Draw it!
Sequentity::EventEditor(registry);

And here's how you query.

const int time { 13 };
Sequentity::Intersect(track, time, [](const auto& event) {
    if (event.type == MyEventType) {

        // Do something interesting
        event.time;
        event.length;
    }
});

The example application uses events for e.g. translations, storing a vector of integer pairs representing positions. Each frame, data per entity is retrieved from the current event and correlated to a position by computing the time relative to the start of the event.
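A sketch of that lookup, under the assumption (not Sequentity's actual API) that the payload is a vector of integer pairs indexed by event-relative time:

```cpp
#include <utility>
#include <vector>

// Hypothetical payload, mirroring the example application's
// "vector of integer pairs" for translation
struct TranslateData {
    std::vector<std::pair<int, int>> positions;  // one position per sample
};

// Return the recorded position at `time`, relative to the event's start.
// Assumes `positions` holds at least one entry.
std::pair<int, int> position_at(const TranslateData& data, int event_start, int time) {
    const int local = time - event_start;  // time relative to the event
    const int last  = static_cast<int>(data.positions.size()) - 1;
    const int index = local < 0 ? 0 : (local > last ? last : local);  // clamp
    return data.positions[index];
}
```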


Event Handlers

What you do with events is up to you, but I would recommend you establish so-called "event handlers" for the various types you define.

For example, if you define Translate, Rotate and Scale event types, then you would need:

  1. Something to produce these
  2. Something to consume these

Producers in the example applications are so-called "Tools" and operate based on user input like the current mouse position. The kind of tool isn't necessarily bound or even related to the type of event it produces. For example, a TranslateTool would likely generate events of type TranslateEvent with TranslateEventData, whereby you may establish an equivalent TranslateEventHandler to interpret this data.

enum EventTypes_ : Sequentity::EventType {
    TranslateEvent = 0,
};

struct TranslateEventData {
    int x;
    int y;
};

void TranslateEventHandler(entt::entity entity, const Sequentity::Event& event, int time) {
    auto& position = Registry.get<Position>(entity);
    auto* data = static_cast<TranslateEventData*>(event.data);
    // ...
}

Sorting

Tracks are sorted in the order of their EnTT pool.

Registry.sort<Sequentity::Track>([this](const entt::entity lhs, const entt::entity rhs) {
    return Registry.get<Index>(lhs) < Registry.get<Index>(rhs);
});

State

State - such as the zoom level, scroll position, current time and min/max range - is stored in your EnTT registry which is (optionally) accessible from anywhere. In the example application, it is used to draw the Transport panel with play, stop and visualisation of current time.

auto& state = registry.ctx<Sequentity::State>();

State is created automatically by Sequentity if you haven't already done so. You may want to create it manually for whatever reason, which you can do like this.

auto& state = registry.set<Sequentity::State>();
state.current_time = 10;

// E.g.
Sequentity::EventEditor(registry);

This draws the event editor with the current time set to 10.


Components

Sequentity provides 1 ECS component, and 2 additional inner data structures.

/**
 * @brief A Sequentity Event
 *
 */
struct Event {
    TimeType time { 0 };
    TimeType length { 0 };

    ImVec4 color { ImColor::HSV(0.0f, 0.0f, 1.0f) };

    // Map your custom data here, along with an optional type
    EventType type { EventType_Move };
    void* data { nullptr };

    /**
     * @brief Ignore start and end of event
     *
     * E.g. crop = { 2, 4 };
     *       ______________________________________
     *      |//|                              |////|
     *      |//|______________________________|////|
     *      |  |                              |    |
     *      |--|                              |----|
     *  2 cropped from start             4 cropped from end
     *
     */
    TimeType crop[2] { 0, 0 };

    /* Whether or not to consider this event */
    bool enabled { true };

    /* Events are never really deleted, just hidden from view and iterators */
    bool removed { false };

    /* Extend or reduce the length of an event */
    float scale { 1.0f };

    // Visuals, animation
    float height { 0.0f };
    float thickness { 0.0f };

};

/**
 * @brief A collection of events
 *
 */
struct Channel {
    const char* label { "Untitled channel" };

    ImVec4 color { ImColor::HSV(0.33f, 0.5f, 1.0f) };

    std::vector<Event> events;
};


/**
 * @brief A collection of channels
 *
 */
struct Track {
    const char* label { "Untitled track" };

    ImVec4 color { ImColor::HSV(0.66f, 0.5f, 1.0f) };

    bool solo { false };
    bool mute { false };

    std::unordered_map<EventType, Channel> channels;
    
    // Internal
    bool _notsoloed { false };
};

Serialisation

All data comes in the form of components with plain-old-data, including state like panning and zooming.

TODO


Roadmap

See Todo for now.

Sequentity's People

Contributors

alanjfs, microdee


Sequentity's Issues

Curve Editor

Goal

A general-purpose editor of the application data associated with an event.

Motivation

Events ultimately represent your application data, and you typically edit that elsewhere. But sometimes the data is general enough for it to be suitable for a basic graph editor, like position or rotation over time.

Inspiration

Where DAWs implement modulation/velocity editors, we could make a curve editor akin to Blender, Maya and Houdini.

Bitwig implements an editor for stepped keys, without interpolation between values, and thus doesn't really qualify as "curves". That works, though as a user I was never really a fan and typically re-record instead to avoid the finicky interface.


Live does the same.


Helio does the same as well, but with a handy slice-tool.


Cubase is getting closer to curves, whereby lines are drawn between events as opposed to boxes or emptiness.


What we want though is closer to that of Blender and Maya.


Implementation

In each of the references, I think a separate window/panel is a good fit, with an option to overlay the data on its parent event.

Newbie question... How to generate the VS project...?

Hey @alanjfs , thanks for sharing this. It looks really cool.

Sorry for this silly question,
Just learning to compile the project.
But could you write some lines about generating the projects?

What am I doing wrong?

  • Way 1:
    I downloaded Ninja and added it to the system path environment
    (Because it was mentioned into Example/CMakeSettings.json)

From the sequentity-master/example folder I run cmake -G Ninja, and I get these errors:

PS F:\openFrameworks\addons\ofxSurfingImGui\examples\5_Sequentity\MISC\sequentity-master\sequentity-master\Example> cmake -G Ninja

CMake Warning:
  No source or binary directory provided.  Both will be assumed to be the
  same as the current working directory, but note that this warning will
  become a fatal error in future CMake releases.

-- The CXX compiler identification is unknown
CMake Error at CMakeLists.txt:3 (project):
  No CMAKE_CXX_COMPILER could be found.

  Tell CMake where to find the compiler by setting either the environment
  variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
  to the compiler, or to the compiler name if it is in the PATH.

-- Configuring incomplete, errors occurred!
See also "F:/openFrameworks/addons/ofxSurfingImGui/examples/5_Sequentity/MISC/sequentity-master/sequentity-master/Example/CMakeFiles/CMakeOutput.log".
See also "F:/openFrameworks/addons/ofxSurfingImGui/examples/5_Sequentity/MISC/sequentity-master/sequentity-master/Example/CMakeFiles/CMakeError.log".
PS F:\openFrameworks\addons\ofxSurfingImGui\examples\5_Sequentity\MISC\sequentity-master\sequentity-master\Example>
  • Way 2:
    When using the CMake GUI I got further and created the VS2017/VS2019 projects,
    but I get errors when compiling:


Custom deleter for application data

Goal

Avoid memory leaks.

Implementation

Events carry your application data as a void* which means Sequentity can't help you with the removal of said data.

// Store
MyData* data = new MyData;
Sequentity::Event event;
event.data = static_cast<void*>(data);

// Retrieve
MyData* retrieved = static_cast<MyData*>(event.data);

Instead, your application is solely responsible for both creating and removing the related memory. That's no good..

One way of tackling this could be a unique_ptr with a custom deleter, see suggestion here
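For reference, one shape this could take (a sketch of the suggestion, not Sequentity's current API) is a `unique_ptr<void>` with a type-erased deleter:

```cpp
#include <memory>
#include <utility>

// Hypothetical: a unique_ptr<void> whose deleter remembers the concrete type
using EventData = std::unique_ptr<void, void(*)(void*)>;

template <typename T, typename... Args>
EventData make_event_data(Args&&... args) {
    return EventData(
        new T(std::forward<Args>(args)...),
        [](void* p) { delete static_cast<T*>(p); }  // deletes as T, not void
    );
}
```

An `Event` holding an `EventData` member instead of a raw `void*` would then free its payload automatically when the event is destroyed.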

License?

Hello,

Would you consider licensing the software under a standard BSD-2 license? As it stands, I wouldn't be permitted to provide a copy of the software, or of the software product being developed for a customer.

Thanks.

Input Handling 2.0

A discussion topic regarding the way input is managed in the example application and how it can be improved.


Goal

Translate input from any device - e.g. mouse, Wacom, Wii or XBox controller, eye tracker - to generic input, independent of device origin, with support for two or more devices operating in parallel, such as two mice.


Motivation

The coloured squares in the example application are currently controlled by click + dragging with your mouse. It should also work with a touch screen, courtesy of GLFW translating those hardware events into mouse events for us.

Next I'd like to "map" the position of a square to one input - such as the mouse position - and the rotation of another to another input - such as the angle of my Wacom pen - and the color of another to whether or not the H-key on my keyboard is pressed, red if it is, blue otherwise.

I figure there are a total of 4 different kinds of input that we as humans are able to provide the computer, irrespective of hardware, in either relative or absolute form, at various resolutions.

Types

  • On/Off for any number of keys
  • 1D Range
  • 2D Range
  • 3D Range

Examples

Event Type Mode Resolution
Mouse 2D 2D Range Rel 16-bit
Mouse 2D 2D Range Abs 0-screen
Mouse Key On/Off 3-10
Keyboard Key On/Off 20-50
Keyboard WASD 2D Range Rel 20-50
Wacom Position 2D Range Abs 16-bit
Wacom Pressure 1D Range Abs 0-4096
Wacom Angle 2D range Abs 0-4096
Playstation Key On/Off 4-12
Playstation D-Pad 2D Range Abs 4-12
Playstation Range 1D Range Abs 4-12
Playstation Touch 2D Range Rel 256
iPad Touch 2D Range Abs 256
iPad Gyro 2D Range Abs 256
Index 3D 3D Range Abs 16-bit
Index Key On/Off 5-10
Index Finger 1D Range Abs 256
GPS 2D Range Abs 32-bit
Midi Key On/Off 0-127
Midi Knob 1D Range Rel 127
Midi Slider 1D Range Abs 127
Midi 2D 2D Range Abs 127
Midi 2D 2D Range Rel 127
Midi Velocity 1D Range Abs 0-127
Midi Aftertouch 1D Range 0-127
Motion Capture 3D Range 16-bit
Gesture On/Off 1-n

I'd like to build my application around these 4 fundamental input types, and enable the user to pick any of these as sources from which to generate them.


Implementation

I'm not sure.

I figure there must at least be a translation layer. Something dedicated to interpreting the data coming from the device, like the mouse, Xbox or Valve Index controllers, the keyboard and so forth.

Your average application already provides two of these translation layers, for your mouse and for your keyboard.

void Application::mousePressEvent(...) {}
void Application::keyPressEvent(...) {}

That's great, we can translate these into our 4 general-purpose input handlers.

void Application::buttonEvent(...) {}
void Application::range1DEvent(...) {}
void Application::range2DEvent(...) {}
void Application::range3DEvent(...) {}

And now we can respond to these throughout our application, instead of to the mouse or keyboard directly. Then, when support for a new device is added, we can simply translate it to one or more of these 4 general purpose events.
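A minimal sketch of such a translation layer (the member names beyond the handlers quoted above are assumptions; counters stand in for real recording logic):

```cpp
// Stand-in for the application's 2D vector type
struct Vec2 { float x, y; };

struct Application {
    int buttons = 0;  // counts generic button events received
    int ranges  = 0;  // counts generic 2D range events received

    // Generic, device-independent handlers
    void buttonEvent(int /*button*/, bool /*pressed*/) { ++buttons; }
    void range2DEvent(Vec2 /*absolute*/, Vec2 /*relative*/) { ++ranges; }

    // Device-specific entry points merely translate and forward
    void mousePressEvent(int button)        { buttonEvent(button, true); }
    void mouseMoveEvent(Vec2 abs, Vec2 rel) { range2DEvent(abs, rel); }
    void keyPressEvent(int key)             { buttonEvent(key, true); }
};
```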

To poll or not to poll

This one is always tricky. We don't care for events that happen more often than once every frame, and when we do care we want them to happen either at the beginning or end of each iteration.

For example, if an event comes in before the scene is rendered, then we can take it into account. If it comes in during a render, it's somewhat pointless. But that's exactly what could happen in the current example application, as drawing and receiving events are entirely separate and happen independently. (As far as I can tell?)

So polling seems the better option; at least in terms of predictability, which I would happily trade performance for, if that is actually a tradeoff.

Unknowns

So input can come at any time, great. But some inputs have a distinct beginning and an end. Like dragging. Dragging is a combination of a button being pressed, a series of range2d's (in the case of a mouse) followed by a button being released.

Other input is a fire-and-forget type deal, like keyboard presses. Those are easier to conceptualise.
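The drag case above could be sketched as a tiny state machine over the generic events (hypothetical names, not part of Sequentity or the example application):

```cpp
// A drag is: button press, a stream of 2D ranges, then a release
struct DragTracker {
    bool dragging = false;
    int  samples  = 0;

    void buttonEvent(bool pressed) {
        if (pressed) { dragging = true; samples = 0; }  // drag begins
        else         { dragging = false; }              // drag complete
    }

    void range2DEvent(float /*x*/, float /*y*/) {
        if (dragging) ++samples;  // motion between press and release belongs to the drag
    }
};
```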

Safe guards for operator overloading?

Hi,

This might be due to the limits of what I know so I'm not sure if it's an issue or a support request :)

I'm using ImGui's math overloads by using the define IMGUI_DEFINE_MATH_OPERATORS

https://github.com/ocornut/imgui/blob/d6a5cc7934b4f4f9d5effffc4f1acee151247f51/imgui_internal.h#L4-L7

doing that conflicts with yours, and I have no idea how to overcome that... I'm using those in a bunch of places in my own libraries.

Is there a way to make the overloads available only inside a namespace for instance or a way to avoid clashes?

Thanks
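One possible direction (a sketch, not a confirmed fix for Sequentity): define the overloads inside a namespace and opt in locally with a `using` declaration, so they never collide at global scope. `Vec2` here is a stand-in for `ImVec2`:

```cpp
struct Vec2 { float x, y; };  // stand-in for ImVec2

namespace seq_ops {
    // The overload lives inside a namespace, not at global scope
    inline Vec2 operator+(const Vec2& a, const Vec2& b) {
        return { a.x + b.x, a.y + b.y };
    }
}

float sum_x(Vec2 a, Vec2 b) {
    using seq_ops::operator+;  // make the overload visible in this scope only
    return (a + b).x;
}
```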

Clip Editor

Goal

Facilitate storage and retrieval of events in a scratchpad-like area with drag'n'drop.

Motivation

Seeing and playing back events is great, but sometimes you'd like to experiment with parts of your data whilst keeping the rest intact, mixing and matching as you go, building up a little library of reusable events and groups of events.

Implementation

See the Clip Views of Ableton Live and Bitwig.

image

image

image

Event Log

An additional, textual interface to events. Like what Renoise has.

Sequentity::EventLog(registry);

image

Custom per-note visualisation

Goal

Gain an overview of what each event does.

Motivation

Seeing time and duration is great, but some events have a greater impact on your data than others and that's when it can come in handy to have those stand out.

Examples

  • If an event represents position over time, draw velocity as a subtle colouring, stronger color means greater velocity
  • For events that represent orientation, add an icon of a pair of clock-hands per animated axis
  • For arbitrary values over time, overlay a curve onto the events.

UI

See how Bitwig manages this for MPE (per-note data).

image

Implementation

The application could maybe assign an additional field of Sequentity-specific data to each Event, along with any supported type of visualisation.

Sequentity::Event event;
event.visualise[Sequentity::ClockHands] = { 0.5f, 0.4f, 0.5f, 0.6f, 0.7f };

Events as Entities

Problem

Currently, events are designed to fit with pre-existing entities in your game or application. For example, if your game character is an entity with Position and Renderable components, then you could attach a Sequentity::Track that carry events related to changes to those components.

registry.view<Sequentity::Track, Position>().each([](const auto& track, auto& position) {

    // Iterate over all events for this track, at the current time
    Sequentity::Intersect(track, current_time, [](const auto& event) {
        if (event.type == TranslateEvent) {
            auto* data = static_cast<TranslateEventData*>(event.data);

            // Do something with it
            event.time;
            event.length;
        }
    });
});

So far so good.

The consequence however is that each Track becomes a "mini-ECS" in that they themselves form a registry of additional entities, each one carrying a Sequentity::Channel component, which in turn form yet another registry of the Sequentity::Event. Except they don't carry the advantage of an ECS, in that they are limited to this one component each, and you can't operate on them like you would other entities in your application.

Solution

What if each of these were entities of the same registry, associated to your application entity indirectly?

bob = registry.create();

registry.assign<Name>(bob, "Bob");
registry.assign<Position>(bob);
registry.assign<Renderable>(bob, "BobMesh.obj");

Just your everyday ECS. Now let's get Sequentity in there.

track1 = registry.create();
channel1 = registry.create();
event1 = registry.create();

registry.assign<Name>(track1, "Track 1");
registry.assign<Sequentity::Track>(track1, bob); // With reference to another entity

registry.assign<Name>(channel1, "Channel 1");
registry.assign<Sequentity::Channel>(channel1, track1);  // With reference to its "parent" track

registry.assign<Sequentity::Event>(event1, channel1); // With reference to its "parent" channel
registry.assign<SomeData>(event1);  // Your data here

Benefits

  1. The immediate benefit is that we avoid the need to store a void* inside the Event struct itself, and instead rely on EnTT to take ownership and delete any memory alongside removal of the entity.
  2. Another is that we're now able to iterate over events independently.
  3. And another is that moving events from one track to another is now a matter of retargeting that .parent value, rather than physically moving a member of the channel.events vector like we must currently.
registry.view<Sequentity::Event>().each([](const auto& event) {
  // Do something with all events
});

We could also iterate over tracks and channels individually in the same way, if we needed just them and not the events (to draw the outliner, for example). And if we wanted, we could still reach out and fetch the associated channel and track from an event by traversing what is effectively a hierarchy.

registry.view<Sequentity::Event>().each([&registry](const auto& event) {
  const auto& channel = registry.get<Sequentity::Channel>(event.parent);
  const auto& track = registry.get<Sequentity::Track>(channel.parent);
});

Cost

The primary (and poor) reason for not going this route immediately was the perceived performance cost of introducing this hierarchy of entities and traversing between levels.

One of the (hypothetical) advantages of the current data layout is that it is tightly packed.

struct Track {
  std::vector<Channel> channels {
    struct Channel {
      std::vector<Event> events {
        struct Event { ... };
      };
    };
  };
};

Each track can be drawn as a whole, with its channels tightly laid out in memory, and its inner events tightly laid out in memory too. Hypothetically, this should be the absolute most optimal way of storing and iterating over the data, and it just so happens to also make for a suitable editing format; e.g. moving a track takes all data with it, and moving events in time along a single channel doesn't really make a difference, as the data is all the same and doesn't need to physically move (just a change to the .time integer value).

If each event is an entity, then in order to read from its channel we need to perform a separate lookup for its channel. That channel may reside elsewhere in memory, as may its track.

We could potentially sort these three pools such that tracks, its channels and its events reside next to each other, in which case accessing these should be almost as fast (?), except we still need to perform a lookup by entity as opposed to just moving the pointer in a vector.
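The grouping idea can be illustrated without EnTT; with EnTT it would be a `registry.sort` over the event pool, but here a plain vector stands in for the pool:

```cpp
#include <algorithm>
#include <vector>

// A plain vector stands in for the EnTT pool; `parent` is the hypothetical
// .parent reference to the owning channel, as proposed above
struct EventEntity {
    unsigned parent;
    int time;
};

void sort_by_parent(std::vector<EventEntity>& pool) {
    std::stable_sort(pool.begin(), pool.end(),
        [](const EventEntity& a, const EventEntity& b) {
            return a.parent < b.parent;  // events of one channel end up adjacent
        });
}
```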

Aside from this (hypothetical) performance cost however, is there any cost to API or UX? Worth exploring.

Tools 2.0

A discussion topic regarding the current implementation of the "tools" in the example application.

image

Goal

Overcome limitations of the current implementation.

  1. Tools are called every frame, even when not "active"
  2. Tools are free functions, to avoid type issues when storing an _activeTool in the application
  3. Tools do not support multiple inputs in parallel, e.g. Wacom tablet + mouse movements
  4. Tools need metadata, like a label and type, which are currently independent
  5. Tools can only manipulate the next frame, i.e. they do nothing (useful) during pause
  6. Tools could be entities, but are not
  7. Inputs are assigned to the entities being manipulated; should they be assigned to the tool instead?

Overview of Current Implementation

The user interacts with the world using "tools".

A tool doesn't immediately modify any data; instead, it generates events. Events are then interpreted by the application - via an "event handler" - which in turn modifies your data. The goal is to be able to play back the events and reproduce exactly what the user did with each tool.

Tool     Event    Application
 _    
| |        _
| |------>| |        _
| |       | |------>| |
|_|       | |       | |
          |_|       | |
                    | |
                    | |
                    |_|

The application has a notion of an "Active Tool" which is called once per frame.

// Handle any input coming from the above drawScene()
_activeTool.write();

The tool does nothing unless there is an entity with a particular component - Activated in the snippet below - along with some "input".

static void TranslateTool() {
    // Handle press input of type: 2D range, relative to anything with a position
    Registry.view<Name, Activated, InputPosition2D, Color, Position>().each([](
        auto entity, const auto& name, const auto& activated,
        const auto& input, const auto& color, auto& position) {
        /* ...generate a translate event for this entity... */
    });
}

The Active component is assigned by the UI layer, in this case ImGui, whenever an entity is clicked.

if (ImGui::IsItemActivated()) {
    Registry.assign<Activated>(entity, sqty.current_time);
    Registry.assign<InputPosition2D>(entity, absolutePosition, relativePosition);
}
else if (ImGui::IsItemActive()) {
    Registry.assign<Active>(entity);
    Registry.assign<InputPosition2D>(entity, absolutePosition, relativePosition);
}
else if (ImGui::IsItemDeactivated()) {
    Registry.assign<Deactivated>(entity);
}

These are the three states of any entity able to be manipulated with a Tool.

  • Activated the entity has transitioned from passive to active; this happens once
  • Active the entity is activated and being manipulated (e.g. dragged)
  • Deactivated the entity has transitioned from active back to passive; this happens once

Thoughts

Overall, I'm looking for thoughts on the current system. I expect similar things have been done many times before, perhaps with the exception of wanting to support (1) multiple inputs in parallel, e.g. two mice, and (2) letting the user assign an arbitrary input to an arbitrary tool, e.g. swapping the mouse for a Wii controller to affect position.

Ultimately, the application is meant to facilitate building of a physical "rig", where you have a number of physical input devices, each affecting some part of a character or world. Like a marionettist and her control bar.

Code wise, there are a few things I like about the current implementation, and some I dislike.

Likes

  • Tristate I like the Activated, Active and Deactivated aspect; I borrowed it from ImGui, where it seems to work quite nicely and is quite general.
  • Events rule I like that tools have only a single responsibility: generating events. This means I could generate events from anywhere, like from mouse-move events directly, and it would still integrate well with the sequencer and application behavior.
  • Encapsulation I also like that because events carry the final responsibility, manipulating events is straightforward and intuitive, and serialising them to disk is unambiguous.
  • Generic inputs And I like how inputs are somewhat general, but I'll touch on inputs in another issue to keep this one focused on tools.

Dislikes

  • UI and responsibility I don't like the disconnect between the UI layer and tool responsibility; Active is assigned by the UI layer rather than by the tool itself.
  • Inputs on the wrong entities I don't like how inputs, e.g. InputPosition2D, are associated with the entities being manipulated, like "hip" and "leftLeg", rather than with the tool itself, which seems more intuitive.

Arrangement Editor

Goal

Organise groups of events at a high level.

Motivation

DAWs like Ableton Live, Bitwig, FL Studio and others let you work with a group of events, called a "clip", in a loop, and then arrange that clip alongside other clips on a global timeline. That's very handy once you've got too many events to manage individually. Swap out the surgical knife for a chainsaw.

Implementation

There's lots of reference to draw from here. Bitwig does a good job of this, I find.

image

It can even combine that with effects and a clip view, all in one screen, without being overwhelming.

image

Live does something similar that also works well.

image

As does Helio.

image

And Cubase.

image

Both Cubase and Logic are able to draw the arrangement and event editor at the same time, drawing events from the currently selected clip in the arrangement view; I especially like this and think it could be a good fit for Sequentity as well.

image

Per-track input

Goal

Provide the means for specifying from where to listen for input, such as a device like the mouse or a Wacom tablet.

Motivation

The example application currently listens for input coming from the mouse, via ImGui::IsItemActive() and ImGui::GetMouseDragDelta(). But if you wanted input from somewhere else you're out of luck. Furthermore, there is no interface for choosing an input source.

DAWs lack a standard input like mouse and keyboard; instead, each track dedicates a section where the end user specifies which device that track should listen to once it comes time to record.

Here's what that looks like in e.g. Bitwig.

image

Explore

Would such an interface make sense for 2d/3d content creation?

Inputs could be:

  • Mouse
  • Keyboard
  • Wacom Pen
  • Touchscreen
  • Midi XY Pad
  • More

Tracks could then represent individual characters, or parts of a character like an arm or a hand. One hand is driven by Mouse 1, the other by Mouse 2. The feet could be driven by something more high-level, like a pre-authored animation loop of a walk cycle. The local time of that animation clip could be driven by a linear pedal capable of outputting values between e.g. 0-127. With that, you could potentially animate a walk cycle in real-time using two mice and a pedal.

Layout

Each track contains a number of channels. The more channels you have, the more space is made available within a track, behind each of the channels. Maybe that's a good spot?

 _____________________________
| m  s                track 1 |
|  ____________   channel 1 o |
| |            |  channel 2 o |
| | space here |  channel 3 o |
| |____________|  channel 4 o |
|_____________________________|

Device Capabilities

Some devices, like a mouse, are capable of providing 2D position data. A keyboard is somewhat able, if you consider the WASD or arrow keys. MIDI devices typically support both notes and modulation, pitch and such, and you still assign the whole shebang to a given track. The track then records each of these capabilities.

Is there an equivalent we could apply to computer peripherals like mouse and keyboard? The keyboard is able to provide both button presses and second-order data, like position via the arrow keys over time.

Minimap

Goal

Facilitate management of and edits to events in the 100s-1000s range.

Motivation

I'd like Sequentity to be your sketchpad for all data in an application, but lots of data can get overwhelming. Other software, like Sublime Text, also operates on lots of data but manages to keep you informed of where you are by providing a minimal rendition of all data in the form of a minimap. I think we can do that too.

Implementation

Since we're working primarily across time, horizontally, it makes sense for a minimap to also be horizontal. Look at Helio.

helio7
