

termlike's Issues

Transformations are not applied properly when using fill function

The example below shows how transformations (in this case, just rotation) are not applied properly, or at least not as expected, when using term_fillt:

[screenshot, 2018-08-02]

What we're looking at is two rows of cards.

The cards in the upper row are rendered by two sets of glyphs; one for the solid background, and one for the black inner frame. E.g.:

    char const * const back =
    "████\n"
    "████\n"
    "████\n"
    "████\n"
    "████";

    char const * const frame =
    "┌──┐\n"
    "│  │\n"
    "│  │\n"
    "│  │\n"
    "└──┘";

Cards in the lower row render the back through a single call to term_fillt, where the dimensions correspond to the resulting size of rendering back glyph by glyph, e.g.:

term_fillt(position, sized(4*8, 5*8), TERM_COLOR_WHITE, transform);

The inner frame is rendered glyph by glyph, identically to the upper row.

So both rows should look identical, but they do not. It almost looks as if the rotation is correct, but I'm not sure about that either, as I'd think both rows would at least be rasterized identically in that case, even if offset incorrectly.

Consider consolidating similar structs

For example, there's struct term_color, struct graphics_color and then another struct color. I'd like to preserve term_color, because that's part of the exposed and public API, but the latter two could probably be combined.

The reasoning for having all of these is sound enough, but I think with the size of this project we probably don't need so much abstraction per layer (e.g. term_color is public API, graphics_color relates to the abstract renderer API, and finally color relates to the concrete OpenGL renderer implementation).

I think we can get away with just having the renderer abstraction, but let's see if it makes sense.

Can gl3w be replaced?

As the required OpenGL features are very limited, look into whether the gl3w header loading code can be replaced by something better/smaller. If it could even be removed completely, that would of course be entirely preferable.

I know of a few alternatives:

Improve/refactor animation API

The one major improvement I want to apply here is to move away from these two:

struct term_animate * animated(float value);
void animate_release(struct term_animate *);

It needs to be simpler and must not require malloc/free. To avoid that, struct term_animate has to be public API rather than an opaque type. The reason I didn't go this way to start with was that I felt it's a complicated object, and the values it holds are meant for consumption by the accompanying API functions, not the caller; i.e. it might be confusing to use. But I think, in the end, there are more pros than cons to opening it up.

Also consider whether this module even should be part of the core termlike library. It could be an optional addition.

Refactor measurement for single glyphs/measuring base size

In quite a few cases I've wanted to get the base size of a single glyph. This can easily be done by passing any single character into term_measure.

However, when the intention is just to figure out the base size, the character you pass in does not matter, and that can be a point of confusion: which one do you put in? It really doesn't matter which one you measure, but you might stop and think twice about it.

I can think of a few solutions.

  1. Add term_get_glyph_size(struct term_dimens *)

This function returns the base size of a single glyph. It does no measuring, and thus, does not take transforms or other attributes into account.

  2. Allow NULL to be passed to term_measure

This would then be considered as a request to measure the base size of a glyph. But to avoid inconsistent behavior, it would take transforms and attributes into account.

  3. Add and pass a TERM_SINGLE_GLYPH pre-processor definition to term_measure

This would expand to either NULL (see option 2), or any arbitrary glyph (whitespace is probably a good fit).

I think I prefer option 3, because it makes the intent very clear when read, and doesn't add special behavior to the existing implementation.

It might also be worth doing a combination of solutions; particularly options 1 and 3. The question is whether the true base size is ever useful to have, since every print will take transforms and attributes into account.

Internal buffer optimization may drop strings

As referenced in 81b520e

termlike/src/buffer.c

Lines 56 to 59 in c234412

if (buffer->text == text) {
    // skip copy/decode; buffer should already contain this text
    return;
}

The optimization has an issue. The problem can be exemplified as follows:

// inside a draw function, executed every frame:
static char buffer[64];
static int32_t a = 0;

sprintf(buffer, "counting up: %d", a);

term_print(buffer, positioned(0, 0), TERM_COLOR_WHITE);

a += 1;

if (a > 10000) {
    a = 0;
}

This example will not display what you expect: since the buffer pointer remains the same, term_print believes the string has not changed.

[screenshot, 2018-08-21]

Optimize command sorting

I've noticed that we could gain a slight increase in FPS by additionally sorting our print commands by whether or not they require transformation. For example, consider this sequence of print commands:

GLYPH    ORDER  LAYER  TRANSFORM
(glyph)  0      1      NO
(glyph)  1      1      YES
(glyph)  2      1      NO
(glyph)  3      1      YES
(glyph)  4      1      NO
(glyph)  5      1      YES

This could be optimized as:

GLYPH    ORDER  LAYER  TRANSFORM
(glyph)  0      1      NO
(glyph)  2      1      NO
(glyph)  4      1      NO
(glyph)  1      1      YES
(glyph)  3      1      YES
(glyph)  5      1      YES

The additional flag is not easily added to command_index as it is, though, since the key must be packed into a uint64_t, and it already holds fields that take up all the available bits. I suppose we could shrink the call order field, but in theory that field should be capable of going as high as the maximum capacity of commands.

This whole thing can also simply be done client-side by ordering print calls in the optimal way, but if we could sort it internally as well, that would be best.

Edit:
Hmm, this actually won't work out without breaking layering guarantees. For example, in the above scenario, the glyph at order 1 should never be rendered after the glyph at order 2, because if those glyphs were drawn in the same place, the latter would be covered by the former. Order is more important than whether a glyph is transformed or not, which means the transform field would never have any effect (since order is always unique).
Edit again:
Honestly, thinking more about this, my initial hunch was wrong. The way I discovered an increase in performance was not by sorting differently; it was by printing in a different order, i.e. first printing a large batch of untransformed glyphs, then the same batch, but transformed.
The reason this was more performant is probably very simply that qsort had less work to do: the glyphs were already sorted properly. Adding an additional sorting property would not help sort any faster.

So I think this optimization must be left up to the program. But at least this issue might serve as a reminder.

Improve word-wrapping algorithm

I don't think it works correctly in all cases. For example, when it determines that it should break, it just keeps backtracking until it finds a whitespace, even if that whitespace occurs in a line that has already been wrapped.

It works OK, but I think it could be better.

There is a common and proven way to implement wrapping, and I think we should just implement that. See https://en.wikipedia.org/wiki/Line_wrap_and_word_wrap

Improve line measuring

I would like to refactor the measuring of line widths to something more performant, but also more capable.

See these lines:

termlike/src/termlike.c

Lines 54 to 60 in 8a9dedd

struct term_lines {
    // the maximum number of lines is *actually* equal to the maximum size of
    // the internal buffer (in the case where each character/byte is a newline)
    // but that is really a huge amount of lines, so we artificially limit it
    // to much less than that
    int32_t widths[MAX_LINES];
};

For any print, this struct is going to be used, copied and moved around. Even if it's just a single character.

I think there are some clear performance gains here, both in memory usage and in utilisation.

For one thing, the limit is silly, and should not be a thing:

#define MAX_LINES (128)

I think that, since there's only ever going to be one print/measure going on at a time (the project is not thread-safe!), we could basically just keep one stretchy buffer (see command.c#89) for measured line widths and use that for every print/measure. The allocated memory would then only ever grow as large as the longest print/measure command requires, and we would not need to copy it around.

Better examples and binary downloads

I'd like to provide binary downloads for each release, particularly also binaries of the example programs.

Being able to quickly download and see how this thing works and runs on your own system is a much better indication of whether the library is a fit for you. Screenshots and GIFs are fine, but being able to run it is much better.

Layering is not guaranteed when the number of glyphs exceeds the batch size

It all starts here:

#define MAX_GLYPHS 2048 // flush when reaching this limit

This should always be a hard limit. However, the exact number is dependent on a few things and is not strictly important to this issue.

The thing is, in order for the batcher to do its thing, glyphs must be flushed once reaching the limit.

This is all well and good; however, since glyphs are sorted by their z-index to be drawn in the correct order, we can reach a scenario where glyphs from one batch take precedence over glyphs from another, potentially rendering some glyphs invisible or applying transparency incorrectly.

I figure this might be solvable by fiddling with depth buffer settings, but I think the sorted order actually matters a lot (front-to-back vs. back-to-front?)

For example, try changing the batch limit to 1 and run this example:

static
void
draw(double const interp)
{
    (void)interp;
    
    char const * const pointer = "▓";
    
    struct term_cursor_state cursor;
    
    term_cursor(&cursor);
    
    int32_t w, h;

    term_measure(pointer, &w, &h);
    term_print(positionedz(cursor.location.x - (w / 2),
                           cursor.location.y - (h / 2),
                           layered(1)),
               TERM_COLOR_WHITE,
               pointer);
    
    term_print(positionedz(10, 10, layered(0)),
               colored(255, 0, 0),
               "A");
}

int32_t
main(void)
{
    if (!term_open(defaults("Termlike: Cursor"))) {
        exit(EXIT_FAILURE);
    }
    
    term_set_drawing(draw);
    
    while (!term_is_closing()) {
        if (term_key_down(TERM_KEY_ESCAPE)) {
            term_set_closing(true);
        }
        
        term_run(TERM_FREQUENCY_ONCE_A_SECOND);
    }
    
    term_close();
    
    return 0;
}

When you point the cursor over the 'A', some red should shine through, but it doesn't:

[screenshot, 2018-06-27]

Note that if you change the programmatic drawing order so that the 'A' is drawn first, then it looks correct:

[screenshot, 2018-06-27]

However, the whole point of layering is to not need to worry about which order you draw things in.

Cursor positioning broken after fullscreen toggle on macOS 10.14.1

Positioning seems correct as long as we stay in windowed mode, but becomes off as soon as we toggle to fullscreen (even when starting in fullscreen mode). Positioning becomes increasingly incorrect when toggling multiple times.

Seems to be related to pixel scaling? Might be something on the GLFW side, or just a side-effect of the macOS update. Or both.

Edit: turned out to be a side-effect of a caching optimization previously made (but clearly not tested properly), where an invalidated viewport was not stored correctly.

Cached prints

When starting this project, I was of the conviction that tile-based rendering was just an artificial limitation that should not be needed. If you can render glyphs anywhere, you have more freedom and can implement the tiling yourself.

However, I've come to realize that this approach, while perfectly fine until it runs too hot, lacks options for a program to control and optimize the rendering.

Essentially, what it lacks is a way for a program to imply that a print command will happen over and over, without change in parameters. This could be beneficial in terms of sorting and processing.

Most ASCII renderers solve this by implementing a tiling system. This essentially means they have a grid of cells across X layers that glyphs can be put into. In such a system, rendering time should be stable and constant because there is always the same number of glyphs to render; the concept of "dirty" cells is also commonly implemented so that only changes trigger re-drawing. It's essentially a big cache.

In Termlike, there's theoretically no limit (though realistically it is UINT16/32_MAX) to how many glyphs can be rendered, and so glyphs cannot be "cached"; essentially, everything is dirtied up, every frame.

I still think this approach is the right choice for Termlike, but an additional system for "cached" prints would be a useful addition. I'm not sure yet how to go about it, but something like term_put and term_putstr seems appropriate.

The ideal benefits would be that the string of glyphs would not need to be decoded, measured, transformed and sorted every frame.

Realistically, I think only the decoding (and maybe measuring) part can be avoided. The spritebatch will still need to transform each glyph.

I'm also considering other options: something like "compiling" a string of glyphs into a format that is decoded and measured ahead of time, so that you only need to do those things once.

In any case, any such additions should hook into the current rendering system (push command -> sort -> render) rather than becoming some monstrous thing running in parallel. Preferably it all goes through the same pipeline, just skipping some steps where possible.

Measuring is affected by current transformation; should it be?

For example, in the following snippet, we expect text drawn over a solid background:

char const * const name = "TARGETING DRONE MKII";

struct term_dimens name_dims;

term_measure(name, &name_dims);

term_fill(positioned(10, 10), name_dims, TERM_COLOR_WHITE);
term_print(name, positioned(10, 10), TERM_COLOR_BLACK);

And that is what we get:

[screenshot, 2018-11-09]

However, what if we want to scale the text down? We might do the following:

char const * const name = "TARGETING DRONE MKII";

term_set_transform(scaled(0.8f));

struct term_dimens name_dims;

term_measure(name, &name_dims);

term_fill(positioned(10, 10), name_dims, TERM_COLOR_WHITE);
term_print(name, positioned(10, 10), TERM_COLOR_BLACK);

But the output is probably not what was expected:

[screenshot, 2018-11-09]

Notice how the "MKII" is seemingly being cut off?

What happens here is that both the fill and the print are being scaled; but since the dimensions given to term_fill already take the scaling into account, the resulting rect is essentially scaled down twice.

So it is pretty easy to fix, but it might not be immediately obvious why it didn't work in the first place. Anyway, the easy fix is to measure the text before applying a transformation:

char const * const name = "TARGETING DRONE MKII";

struct term_dimens name_dims;

term_measure(name, &name_dims);

term_set_transform(scaled(0.8f));

term_fill(positioned(10, 10), name_dims, TERM_COLOR_WHITE);
term_print(name, positioned(10, 10), TERM_COLOR_BLACK);

[screenshot, 2018-11-09]

So this raises the question: should term_measure be affected by the current transformation?

Rotation is obviously a non-factor, but what about scaling? I think there are internal reasons to require the final resulting size to be known, but we can probably find a way to avoid that, or just keep it internal-only. I'm not sure of the consequences for typical usage if this were changed.

Basically, does it work as expected?

Optimize glyph index lookup

This function is a source of performance drops:

static
void
graphics_get_font_cell(struct graphics_context const * const context,
                       uint32_t const code,
                       uint16_t * const row,
                       uint16_t * const column)
{
    int32_t const table_size = context->font.columns * context->font.rows;
    int32_t table_index = -1;

    if (context->font.codepage != NULL) {
        for (int32_t i = 0; i < table_size; i++) {
            if (context->font.codepage[i] == code) {
                table_index = i;
                break;
            }
        }
    }

    if (table_index < 0 ||
        table_index > table_size) {
        table_index = -1;
    }

    if (table_index == -1) {
        return;
    }

    *row = (uint16_t)table_index / context->font.columns;
    *column = (uint16_t)table_index % context->font.columns;
}

Depending on which glyph you're looking up, you'll see varying performance; a glyph that appears near the end of the table will be looked up much more slowly than one that appears near the beginning.

Additionally, this isn't really a function specific to the OpenGL implementation, so it may be better suited elsewhere.

Consider reducing number of compilation units

For example, combining the smaller units like position.c, bounds.c, layer.c etc. into a single file (termlike.c?), or simply refactoring the functions as inline in their respective headers. These functions essentially just construct struct values; those kinds of functions might as well be inlined, I think.

This library is meant to be small and concise, but the number of files is starting to indicate that it isn't.

Edit:
Refer to https://stackoverflow.com/a/23699777/144433 for proper inlining.

For reference, here are the file sizes in release mode (CMake/Make, macOS):
libterm.a ~156KB
test-perf ~373KB

Inconsistent framerate after switching windowed/fullscreen modes

On macOS, I'm seeing odd framerate drops when switching between windowed/fullscreen modes.

For example, the following steps seem to make it happen pretty consistently:

  1. Run a program that starts in windowed mode
  2. Switch to fullscreen
  3. Switch back to windowed
  4. Observe framerate drops and large jumps (in both modes!)

Need to verify whether it's just a macOS issue, or whether it happens on Windows too.

glfw/glfw#772 and
glfw/glfw#857 may be related.

Update:
Interestingly, this only seems to occur at some window resolutions. For example, it doesn't happen with the performance test at 640x480. That program hits a framerate below the monitor's refresh rate (~50 fps, 60 Hz monitor). Maybe that is of importance.
Update: OK, so changing the resolutions around and hitting higher framerates in the performance test does not show the framerate inconsistency. It seems to only occur in the simpler programs (e.g. logo.c). What?!

Another fun thing I noticed, using the Quartz Debug tool:

[screenshot, 2018-08-01]

🤔

Edit:
An additional fun observation is that the FPS sometimes improves significantly after having been in fullscreen and coming back to windowed; like going from ~250 to 400+... oh boy.

Refactor API to be simple for the common use-case

I feel that the API has gotten out of hand. Just look at this (don't forget to scroll 😵):

void term_measure(char const * characters,
                  int32_t * width,
                  int32_t * height);
void term_measuret(char const * characters,
                   struct term_transform,
                   int32_t * width,
                   int32_t * height);
void term_measurec(int32_t * width,
                   int32_t * height);
void term_measurect(struct term_transform,
                    int32_t * width,
                    int32_t * height);
void term_measurestr(char const * text,
                     struct term_bounds,
                     int32_t * width,
                     int32_t * height);
void term_measurestrt(char const * text,
                      struct term_bounds,
                      struct term_transform,
                      int32_t * width,
                      int32_t * height);

That is way too many options and parameters (some of which are optional, hence the number of functions). The same goes for the print functions:

void term_print(struct term_position,
                struct term_color,
                char const * characters);
void term_printt(struct term_position,
                 struct term_color,
                 struct term_transform,
                 char const * characters);
void term_printstr(struct term_position,
                   struct term_color,
                   struct term_bounds,
                   char const * text);
void term_printstrt(struct term_position,
                    struct term_color,
                    struct term_bounds,
                    struct term_transform,
                    char const * text);

Ideally, there is just term_print and term_printstr; i.e. two options, differentiated only by the intent of what is rendered: either 1) a static set of glyphs (e.g. an object or a map), or 2) a wrapped piece of text for reading.

I don't want to lose the advanced options for transformations, though. So this would require a significant API restructuring.

Attribute structs

I have already gone down a few different roads trying to solve this; for example, the idea of wrapping parameters into a term_print_attributes struct, which could then have helper functions for each specific need, e.g.:

term_print("Hello", attribs(positioned(0, 0), colored(255, 255, 255)));

and

term_print("Again", attribst(positioned(0, 0), colored(255, 255, 255), rotated(rand() % 360, TERM_ROTATE_STRING)));

I was sort-of fine with this solution; it looks OK, and each print feels like an atomic command. That's good.

But then there are the measuring functions. For those to produce the expected output, they also need to be provided attributes; for now, just a transform, and only in the very specific case of scaling being applied, which is not at all the common case. Alas, you'd be forced to type all of this, every time:

term_measure("█", &size, (struct term_measure_attribs) {
        .transform = TERM_TRANSFORM_NONE
    });

Such a simple function, now hideously disfigured by a parameter that is only needed every once in a while.

Of course, there is the option of sticking with the multitude of different functions to keep options for every scenario. But that was exactly what we wanted to avoid in the first place.

Variadic functions

I also toyed with the idea of making the functions variadic, i.e. accepting a variable number of parameters. That way, I figured, you could just provide the stuff you needed; the rest would be defaults.

But there were too many downsides to this approach, and ultimately it was more confusing than it was handy. Something like:

term_print("Hello", POSITION_COLOR, positioned(0, 0), colored(255, 255, 255));

or

term_printstr("Hello", TERM_BOUNDS_NONE, POSITION_COLOR_TRANSFORM, positioned(0, 0), colored(255, 255, 255), scaled(2));

For example, one downside is that your IDE will have no idea what to suggest, leaving you guessing. Additionally, messing up the order would wreak havoc and probably crash things.

It is not well suited for this particular scenario.

Global State

So. Feeling like I've exhausted the clever ways of dealing with this issue (barring any macro-related ridiculousness), I think the solution is to introduce global state for transformations and similar uncommon attributes (things like line-spacing and padding, possibly). This is pretty commonly seen in other libraries (Allegro, Cocoa/UIKit etc.).

I don't particularly like this idea, but I think it will solve the problem.

You could argue that other things (like color/tinting) might as well become global state too, while we're at it. However, I feel it's important to note that I don't think keeping global state variables is a good thing; it is error-prone and can be difficult to debug, like when you forget to reset the transform and things suddenly act not at all as you expected. But in this particular case, I think the pros outweigh the cons.

So the result would be a simple:

term_set_transform(scaled(2));

and

struct term_transform t;

term_get_transform(&t);

However, if we expect to add other attributes like line-spacing etc., e.g:

term_set_attributes((struct term_attribs) { .linespacing = 2, .padding = 5 });

Then it would have been nice to consolidate that along with the transform. However, that takes us back to the original issue: being able to provide only the params you want (e.g. what if you only want to set line-spacing? An uninitialized transform is invalid, because scale would be 0).

So maybe the above is the correct approach. The additional attributes default nicely with zero-initialized values.

Anyway, an issue with keeping global state is that any slip-up will affect everything that follows, and always setting the transform you expect, while also preserving what was set, becomes a large bunch of boilerplate. For example, to ensure defaults before printing, but also reset to what it was:

struct term_transform previous;
struct term_attribs previous_attr;
term_get_transform(&previous);
term_get_attributes(&previous_attr);
term_set_transform(TERM_TRANSFORM_NONE);
term_set_attributes(TERM_ATTRIBUTES_DEFAULT);
// all your printing
term_set_transform(previous);
term_set_attributes(previous_attr);

Yuck.

I suppose a way to mitigate that would be to introduce a state structure that holds both the transform, and attributes, so you could save/restore both in one go. Something like:

struct term_state {
  struct term_transform transform;
  struct term_attributes attributes;
};
struct term_state state;
term_get_state(&state);
term_set_transform(TERM_TRANSFORM_NONE);
term_set_attributes(TERM_ATTRIBUTES_DEFAULT);
// all your printing
term_set_state(state);

This saves a few lines, but also convolutes the API by adding further objects and functions.

Fill function does not properly cover expected pixels

Specifically, when filling an area of an uneven size, the area covered may be rasterized in an unexpected way.

For example, in the picture below, the height of the background for the profiling overlay is set to fill 9 pixels (i.e. 1 more than the height of a glyph). Instead, something else happens (note the left and right sides of the bar):

[screenshot, 2018-07-31]

However, and this is the core of the problem: since the transformation behind the scenes anchors the glyph around its center, the glyph must be offset by half its size to account for the anchoring. But half the size of 9 is 4.5. This causes rasterization to become unpredictable, since we'll be drawing from, and to, half a pixel.

Similarly, in the above example, the bar should also fill 319 pixels (1 less than the full width of the window), however, it fills up only 318 pixels. Again, due to the half-pixel adjustment.

Should strings be copied on print?

In regards to this hardcoded limit:

#define BUFFER_SIZE_MAX 256

Some kind of limit is obviously necessary, but how low can we go? The current limit is not large enough; there's plenty of stuff I've wanted to print, only to have it cut off.

As an aside, keep in mind that this limit would extend further into each buffered command if we ever decide to implement string-copying on print (see 2c7a942), which could add up to a lot of memory and less performance, since it would likely involve malloc and some kind of stretchy buffer.

"Poor" performance overall

When running the performance test on Windows 10, I am getting approximately half the FPS that I'm getting on my MacBook Pro. It seems to me that, if anything, it should be the other way around (considering the MacBook also has way more pixels to light up with its Retina display).

I also think the performance I'm getting on the MacBook Pro is not good enough for what it is doing, so there's that too.

Considering that the PC I'm testing on is built for gaming and should, at least spec-wise, be the stronger machine, this makes me think there might be something fundamentally wrong somewhere (Edit: maybe not so fundamental; see observations below).

(Not entirely true, as the MacBook actually has a newer CPU, which may be more efficient; PC i5-6600K vs. MacBook i5-7360U. Interestingly, the MacBook also has fewer cores (2 vs. 4 on the PC), which may also be an indicator.)

Edit:
Though by looking here http://www.cpu-world.com/Compare/217/Intel_Core_i5_Mobile_i5-7360U_vs_Intel_Core_i5_i5-6600K.html it seems the PC CPU should totally be doing better...

Additionally, the PC has a dedicated GPU, which the MacBook does not (Iris™ Plus Graphics 640)

One thing I know for sure is that all the glyph transformation is heavily CPU-bound (the engine is generally CPU-bound), and it is what takes most cycles every frame; for example, getting rid of any transforming print commands in the performance test doubles the FPS immediately.

Observations:

The logo example runs incredibly fast on the PC, e.g. ~5000 FPS, but only ~500 on the MacBook.

This example program presents the simplest use-case, and I feel like that kind of performance is acceptable. So this suggests that the core render loop is OK, and that the problem is more likely related to handling large amounts of commands/transformations and utilizing the CPU more efficiently. This is an area I'm not too experienced in, so I bet there's a bunch of stuff that could be optimized; things like struct layouts and memory blocks.

Text strings are not wrapped correctly

Similar to #13, printing strings that should wrap inside a bounded area does not take scaling into account, causing results like those seen here, where the transform is scaled(2):

[screenshot, 2018-08-03]

Without scaling:

[screenshot, 2018-08-03]

Refactor #include directives with relative paths

It has come to my attention that it is generally considered bad practice to use relative paths in #include directives, and especially bad with paths that track backwards.

For example, what we're doing here:

#include "../renderer.h" // graphics_*

This seemed clear and correct to me, but MSVC is throwing warnings (C4464) for it.

I'd like to refactor the #include directives to be compliant with this, but is the solution really to add e.g. src/graphics/ (or maybe just src/) as a header search directory? I suppose it is.

Edit:
I think the way I want to do it is by -I src/include, e.g.:

Includes:

src/include/keys.h
src/include/graphics/renderer.h
src/include/graphics/viewport.h
src/include/platform/window.h
src/include/platform/timer.h
etc.

Implementations:

src/graphics/viewport.c
src/graphics/opengl/renderer.c
src/platform/glfw/window.c
src/platform/glfw/timer.c
etc.

This would mean that a file like renderer.c would no longer have #include "../viewport.h", but instead #include "graphics/viewport.h". Right? Or would it be #include <graphics/viewport.h>? In that case, maybe the better solution is to add a termlike directory inside src/include, so that we end up with #include <termlike/graphics/viewport.h>. That looks right, I think.

Add #define for toggling whether to include profiler

Currently, the profiling overlay is only available in debug builds (e.g. #ifdef DEBUG), but it is also useful to have in release builds.

So instead, there should be a #define somewhere that toggles whether it should be included in a build.

Add recording functions

It would be neat to be able to store in-game screenshots or videos with a single keypress. It would probably only be available in debug builds, and video recording would (possibly) be limited to X seconds.

A nice, but not necessary, feature could be showing a timer of the recording time. But it would have to be something that did not end up in the actual recording, so I'm not sure how that would work.

Consider adding unit tests for logically testable components

For example, the layers test is fine for visually determining whether commands are sorted, but it could be boiled down further to simply assert that the sorting function results in the expected outcome. The visuals could fail due to many other things. Similarly, word-wrapping is very testable in that we can assert whether newlines are inserted as expected.

Printing rotated characters or strings

Currently, term_print supports printing a string of characters in a color, at a layered location. This is, as I see it, the minimal amount of features needed for doing anything reasonably meaningful with this function.

However, being able to rotate a character, or a string, is potentially a very neat addition, as it would increase the number of possible effects (rotating sharply pixeled characters often results in odd-looking rasterization; you might discover that "☻" rotated 35 degrees looks more like a scary monster).

In the glyph renderer's current state, it would be somewhat trivial to add rotation for a single character; rotating an entire string will actually be problematic.

Issues:

  • Printing functions will increase in complexity (more parameters)
    • Thoughts on this: positioning (e.g. struct term_location) could include layering info, removing the need for a layer parameter; similarly for rotation: just include an angle. Something like located(0, 0, layered(1), angled(35, TERM_ANCHOR_CENTER)).
  • Measuring will be off (only axis-aligned bounding boxes)
  • Is anchoring also expected, or can we just rotate around centers?
    • Might be cumbersome to achieve vertical strings if only anchoring at center (e.g. easier if you could anchor from top-left)
    • Results will not always be what you expect without anchoring...
  • You could argue that rotation is not faithful to the traditional way of ASCII terminals, neither is non-grid-based positioning or layered composition, though...
  • What if we added term_printc, which is identical to term_print except that it only prints one character, but with the added option of rotation and scaling (see #3)?

Applying rotation to individual characters can create new looks:

[screenshot, 2018-06-20]

But actually rotating a string of text should look like this:

[screenshot, 2018-06-20]

Can we provide both options, somehow?

Consider making printstr functions variadic

Simply for convenience, it would be useful for the term_printstr family of functions to accept a variable list of arguments (like printf/sprintf etc.). Maybe all of the print functions, really.

However, this addition comes with some implications related to who owns the memory (see #5). In order to provide this feature, string copying (as mentioned in #5) would need to be implemented first.
