jhauberg / termlike
A game engine for CP437 enthusiasts
License: MIT License
The example below shows how transformations (in this case just rotation) are not applied properly, or at least not as expected, when using term_fillt:
What we're looking at is two rows of cards.
The cards in the upper row are rendered by two sets of glyphs; one for the solid background, and one for the black inner frame. E.g.:
char const * const back =
"████\n"
"████\n"
"████\n"
"████\n"
"████";
char const * const frame =
"┌──┐\n"
"│ │\n"
"│ │\n"
"│ │\n"
"└──┘";
Cards in the lower row render the back through a single use of term_fillt, where the dimensions correspond to the resulting size when rendering back glyph by glyph, e.g.:
term_fillt(position, sized(4*8, 5*8), TERM_COLOR_WHITE, transform);
The inner frame is rendered glyph by glyph, identically to the upper row.
So both rows should look identical. But they do not. It almost looks as if the rotation is correct, but I'm not sure about that either, as I'd think both rows would at least be rasterized identically in that case, even if offset incorrectly.
For example, there's struct term_color, struct graphics_color, and then another struct color. I'd like to preserve term_color, because that's part of the exposed, public API, but the latter two could probably be combined.
The reasoning for having all of these is sound enough, but I think with the size of this project we probably don't need so much abstraction per layer (e.g. term_color is public API, graphics_color relates to the abstract renderer API, and finally color relates to the concrete OpenGL renderer implementation).
I think we can get away with just having the renderer abstraction, but let's see if it makes sense.
Termlike is very heavy on CPU-usage. It might be very beneficial to multithread certain paths, if there are any where it's viable.
Be careful and think twice about this. For example, the order of glyphs being rendered is very important.
See https://timmurphy.org/2010/05/04/pthreads-in-c-a-minimal-working-example/ for reference
This is related to #2, as the considerations and implications are practically the same.
The one major improvement I want to apply here is to move away from these two:
struct term_animate * animated(float value);
void animate_release(struct term_animate *);
It needs to be simpler and not require malloc/free. To avoid that, struct term_animate must be public API, and not an opaque type. The reason I didn't go this way to start with was that I felt it's a complicated object, and the values it holds are meant for consumption by the accompanying API functions, not the caller; i.e. it might be confusing to use. But I think in the end there are more pros than cons to opening it up.
Also consider whether this module even should be part of the core termlike library. It could be an optional addition.
In quite a few cases I've wanted to get the base size of a single glyph. This can easily be done by passing any single character into term_measure.
However, when the intention is just to figure out the base size, the particular character does not matter, and that could be a point of confusion: which one do you put in? It really doesn't matter which one you measure, but you might stop and think twice about it.
I can think of a few solutions.
1) Add term_get_glyph_size(struct term_dimens *)
This function returns the base size of a single glyph. It does no measuring, and thus does not take transforms or other attributes into account.
2) Allow NULL to be passed to term_measure
This would then be considered a request to measure the base size of a glyph. But to avoid inconsistent behavior, it would take transforms and attributes into account.
3) Add a TERM_SINGLE_GLYPH pre-processor definition to pass to term_measure
This would expand to either NULL (see 2)), or any glyph (whitespace is probably a good fit).
I think I prefer 3) the most, because it makes the intent very clear when read, and doesn't add special behavior to the existing implementation.
It might also be worth doing a combination of solutions; particularly 1) and 3). The question is whether the true base size is ever useful to have, since every print will take transforms and attributes into account.
As referenced in 81b520e (Lines 56 to 59 in c234412):
The optimization has an issue. The problem can be exemplified as follows:
static char buffer[64];
static int32_t a = 0;

sprintf(buffer, "counting up: %d", a);

term_print(buffer, positioned(0, 0), TERM_COLOR_WHITE);

a += 1;

if (a > 10000) {
    a = 0;
}
This example will not display what you expect, because the buffer pointer remains the same, causing term_print to believe that the string has not changed.
I've noticed that we could gain a slight increase in FPS by additionally sorting our print commands by whether or not they require transformation. For example, consider this sequence of print commands:
GLYPH | ORDER | LAYER | TRANSFORM |
---|---|---|---|
○ | 0 | 1 | NO |
• | 1 | 1 | YES |
○ | 2 | 1 | NO |
• | 3 | 1 | YES |
○ | 4 | 1 | NO |
• | 5 | 1 | YES |
This could be optimized as:
GLYPH | ORDER | LAYER | TRANSFORM |
---|---|---|---|
○ | 0 | 1 | NO |
○ | 2 | 1 | NO |
○ | 4 | 1 | NO |
• | 1 | 1 | YES |
• | 3 | 1 | YES |
• | 5 | 1 | YES |
The additional flag is not easily added to command_index as it is, though, since it requires being packed into a uint64_t, which already holds fields that take up all the available bytes. I suppose we could reduce the call order field, but in theory that one should actually be capable of going as high as the maximum capacity of commands.
This whole thing can also simply be done client-side by ordering print calls in the optimal way, but if we can also sort it internally, that would be best.
Edit:
Hmm, this actually won't work out without breaking layering guarantees. For example, in the above scenario, the • at order 1 should never be rendered after the ○ at order 2, because if those glyphs were drawn in the same place, the latter would be covered by the former. Order is more important than whether a glyph is transformed or not, which means the transformed field would never have any effect (since order is always going to be unique).
Edit again:
Honestly, thinking more about this, my initial hunch was wrong. The way I discovered an increase in performance was not by sorting differently, it was by printing in a different order; i.e. first printing a large batch of untransformed glyphs, then the same bunch- but transformed.
The reason this was more performant is probably very simply that qsort had less work to do! The glyphs were already sorted properly. Adding an additional sorting property would not help to sort any faster.
So I think this optimization must be left up to the program. But at least this issue might serve as a reminder.
I think it doesn't work correctly in all cases. For example, if it determines that it should break, it just keeps backtracking until it finds a whitespace, even if that whitespace occurs in the middle of a previously wrapped sentence.
It works OK, but I think it could be better.
There is a common and proven way to implement wrapping, and I think we should just implement that. See https://en.wikipedia.org/wiki/Line_wrap_and_word_wrap
I would like to refactor the measuring of line widths to something more performant, but also more capable.
See these lines:
Lines 54 to 60 in 8a9dedd
For any print, this struct is going to be used, copied and moved around. Even if it's just a single character.
I think there's some clear performance gains here, both in terms of memory-usage but also utilisation.
For one thing, the limit is silly, and should not be a thing:
Line 41 in 8a9dedd
I think, since there's only ever going to be one print/measure going on at a time (project is not thread-safe!), we could basically just keep one stretchy buffer (see command.c#89) for measured line widths and refer to/use that for every print/measure. So the allocated memory will only ever be as high as the longest print/measure command, and we would not need to copy it around.
This makes measuring out of sync with print statements, as it does not report the actual outcome.
I'd like to provide binary downloads for each release, particularly also binaries of the example programs.
Being able to quickly download and try how this thing works and runs on your own system is a much better indication of whether the library is a fit for you or not. Screenshots and GIFs are fine, but being able to run it is much better.
It all starts here:
This should always be a hard limit. However, the exact number is dependent on a few things and is not strictly important to this issue.
The thing is, in order for the batcher to do its thing, glyphs must be flushed once reaching the limit.
This is all well and good, however, since glyphs are sorted by their z-index to be drawn in the correct order, we can reach a scenario where glyphs from one batch take precedence over glyphs from another batch, potentially rendering some glyphs invisible, or incorrectly apply transparency.
I figure this might be solvable by fiddling with depth buffer settings, but I think the sorted order actually matters a lot (front-to-back vs. back-to-front?)
For example, try changing the batch limit to 1 and run this example:
#include <stdint.h>  // int32_t
#include <stdlib.h>  // exit, EXIT_FAILURE
#include <stdbool.h> // true

#include <termlike/termlike.h>

static
void
draw(double const interp)
{
    (void)interp;

    char const * const pointer = "▓";

    struct term_cursor_state cursor;
    term_cursor(&cursor);

    int32_t w, h;
    term_measure(pointer, &w, &h);

    term_print(positionedz(cursor.location.x - (w / 2),
                           cursor.location.y - (h / 2),
                           layered(1)),
               TERM_COLOR_WHITE,
               pointer);

    term_print(positionedz(10, 10, layered(0)),
               colored(255, 0, 0),
               "A");
}
int32_t
main(void)
{
    if (!term_open(defaults("Termlike: Cursor"))) {
        exit(EXIT_FAILURE);
    }

    term_set_drawing(draw);

    while (!term_is_closing()) {
        if (term_key_down(TERM_KEY_ESCAPE)) {
            term_set_closing(true);
        }

        term_run(TERM_FREQUENCY_ONCE_A_SECOND);
    }

    term_close();

    return 0;
}
When you point the cursor over the 'A', some red should shine through- but doesn't:
Note that if you move the programmatic order of drawing the 'A' to be drawn first, then it looks correct:
However, the whole point of layering is to not need to worry about which order you draw things in.
Positioning seems correct as long as we stay in windowed mode, but becomes off as soon as we toggle to fullscreen (even when starting in fullscreen mode). Positioning becomes increasingly incorrect when toggling multiple times.
Seems to be related to pixel scaling? Might be something different on the GLFW side, or just a side-effect of macOS update. Or both.
Edit: turned out to be a side-effect of a caching optimization previously made (but clearly not tested properly), where an invalidated viewport was not stored correctly.
When starting this project, I was of the conviction that tile-based rendering was just an artificial limitation that should not be needed. If you could render glyphs anywhere, then you'd have more freedom and could just implement the tiling yourself.
However, i've come to realize that this approach, while perfectly fine until it runs too hot, lacks options for a program to control and optimize the rendering.
Essentially, what it lacks is a way for a program to imply that a print command will happen over and over, without change in parameters. This could be beneficial in terms of sorting and processing.
Most ASCII renderers solve this by implementing a tiling system. This essentially means that they have a grid of cells, for X layers, that glyphs can be put into. In such a system, rendering time should be stable and constant, because there is always the same number of glyphs to render; the concept of "dirty" cells is also commonly implemented, so that only changes trigger re-drawing. It's essentially a big cache.
In Termlike, there's theoretically no limit (though, realistically it is at UINT16/32_MAX) to how many glyphs can be rendered, and so glyphs cannot be "cached"; essentially dirtying everything up, every frame.
I still think this approach is the right choice for Termlike, but an additional system for "cached" prints would be a useful addition. I'm not sure yet how to go about it, but something like term_put and term_putstr seems appropriate.
The ideal benefits would be that the string of glyphs would not need to be decoded, measured, transformed and sorted every frame.
Realistically, I think only the decoding (and maybe measuring) part can be avoided. The spritebatch will still need to transform each glyph.
I'm also considering other options: something like "compiling" a string of glyphs into a format that is decoded and measured ahead of time, so that you only need to do those things once.
In any case, any such additions should hook into the current rendering system (push command -> sort -> render) rather than becoming some monstrous thing running in parallel. Preferably it is all going through the same pipeline, but just skipping some steps where able.
I stumbled upon this article which may provide a significant performance improvement to the current implementation: http://voidptr.io/blog/2016/04/28/ldEngine-Part-1.html
It requires some investigation, however, and may need to be an opt-in feature if the stuff is not supported on OpenGL Core 3.3.
Edit:
Ah, shucks. OpenGL on macOS is stuck at 4.1 (see https://support.apple.com/en-us/HT202823). We'd need 4.4 to implement this.
For example, in the following snippet, we expect a text with a solid background:
char const * const name = "TARGETING DRONE MKII";
struct term_dimens name_dims;
term_measure(name, &name_dims);
term_fill(positioned(10, 10), name_dims, TERM_COLOR_WHITE);
term_print(name, positioned(10, 10), TERM_COLOR_BLACK);
And that is what we get:
However, what if we want to scale the text down? We might do the following:
char const * const name = "TARGETING DRONE MKII";
term_set_transform(scaled(0.8f));
struct term_dimens name_dims;
term_measure(name, &name_dims);
term_fill(positioned(10, 10), name_dims, TERM_COLOR_WHITE);
term_print(name, positioned(10, 10), TERM_COLOR_BLACK);
But the output is probably not what was expected:
Notice how the "MKII" is seemingly being cut off?
What happens here is that both the fill and the print are being scaled, but since the dimensions given to term_fill already take the scaling into account, the resulting rect is essentially scaled down twice.
So it is pretty easy to fix, but it might not be immediately obvious why it didn't work in the first place. Anyway, the easy fix is to measure the text before applying a transformation:
char const * const name = "TARGETING DRONE MKII";
struct term_dimens name_dims;
term_measure(name, &name_dims);
term_set_transform(scaled(0.8f));
term_fill(positioned(10, 10), name_dims, TERM_COLOR_WHITE);
term_print(name, positioned(10, 10), TERM_COLOR_BLACK);
So this begs the question: should term_measure be affected by the current transformation?
Rotation is already obviously a non-factor, but what about scaling? I think there are reasons internally to require the final resulting size to be known, but we can probably find a way to avoid that, or just keep it internal-only. I'm not sure of the consequences for typical usage if this was to be changed.
Basically, does it work as expected?
This function is a source of drop in performance:
termlike/src/graphics/opengl/renderer.c
Lines 484 to 516 in e4c328a
Depending on which glyph you're looking up, you'll see varying performance; e.g. looking up a glyph that appears at the end of the table will be way slower than one that appears at the beginning. For example, there is a large difference between √ (slow) and ♦ (fast).
Additionally, this isn't really a function specific to the OpenGL implementation, so it may be better suited elsewhere.
For example, combining the smaller units like position.c, bounds.c, layer.c etc. into a single file (termlike.c?), or simply refactoring the functions as inline in their respective headers. The functions essentially just construct struct objects, and those kinds of functions might as well be inlined, I think.
This library is meant to be small and concise, but the number of files is starting to indicate that it isn't.
Edit:
Refer to https://stackoverflow.com/a/23699777/144433 for proper inlining.
For reference, here are the file sizes in release mode (CMake/Make, macOS):
libterm.a: ~156KB
test-perf: ~373KB
On macOS, I'm seeing odd framerate drops when switching between windowed/fullscreen modes.
For example, the following steps seem to pretty consistently make it happen:
Need to verify whether it's just a macOS issue, or Windows too.
glfw/glfw#772 and
glfw/glfw#857 may be related.
Update:
Interestingly, this only seems to occur on some window resolutions. For example, it doesn't happen with the performance test in 640x480. That program hits a framerate below the monitor's refresh rate (~50 fps on a 60Hz monitor). Maybe that is of importance.
Update: Ok, so changing around the resolutions and hitting higher framerates in the performance test does not show the framerate inconsistency. It seems to only occur in the simpler programs (e.g. logo.c). What?!
Another fun thing I noticed, using the Quartz Debug tool:
🤔
Edit:
An additional fun observation is that sometimes the FPS improves significantly after having been in fullscreen and coming back to windowed. Like going from ~250 to 400+... oh boy.
I feel that the API has gotten out of hand. Just look at this (don't forget to scroll 😵):
termlike/include/termlike/termlike.h
Lines 142 to 162 in dc8fea8
That is way too many options and parameters (of which some are optional, hence the number of functions). This is also the case for the print functions:
termlike/include/termlike/termlike.h
Lines 119 to 135 in dc8fea8
Ideally, there is just term_print and term_printstr; i.e., two options, differentiated only by the intent to render either 1) a static set of glyphs (e.g. an object or a map), or 2) a wrapped piece of text for reading.
I don't want to lose the advanced options for transformations, though. So this would require a significant API restructuring.
I have already gone down a few different roads to try and solve this; for example, the idea of wrapping parameters into a term_print_attributes struct, which could then have helper functions for the specific need, e.g.:
term_print("Hello", attribs(positioned(0, 0), colored(255, 255, 255)));
and
term_print("Again", attribst(positioned(0, 0), colored(255, 255, 255), rotated(rand() % 360, TERM_ROTATE_STRING)));
I was sort-of fine with this solution; it looks OK, and each print feels like an atomic command. That's good.
But then there's the measuring functions. For those to produce the expected output, they also need to be provided attributes; for now, just a transform. But that is only needed in the very specific case of scale being applied, which is not at all the common case. Alas, you'd be forced to type all this, every time:
term_measure("█", &size, (struct term_measure_attribs) {
    .transform = TERM_TRANSFORM_NONE
});
Such a simple function, now hideously disfigured because of a parameter that is only needed every once in a while.
Of course, there is the option of sticking with the multitude of different functions to keep options for every scenario. But that was exactly what we wanted to avoid in the first place.
I also toyed with the idea of making the functions variadic; i.e. accepting a variable number of parameters. This way, I figured, you could just provide the stuff you needed. Rest would be defaults.
But there were too many downsides to this approach, and ultimately it was more confusing than it was handy. Something like:
term_print("Hello", POSITION_COLOR, positioned(0, 0), colored(255, 255, 255));
or
term_printstr("Hello", TERM_BOUNDS_NONE, POSITION_COLOR_TRANSFORM, positioned(0, 0), colored(255, 255, 255), scaled(2));
For example, a downside is that your IDE will have no idea what to suggest, leaving you guessing. Additionally, messing with the order would wreak havoc and probably crash things.
It is not well suited for this particular scenario.
So. Feeling like I've exhausted the clever ways of dealing with this issue (barring any macro-related ridiculousness), I think the solution is to introduce a global state for transformations and similar uncommon attributes (stuff like line-spacing and padding, possibly). This is pretty commonly seen in other libraries (Allegro, Cocoa/UIKit etc.).
I don't particularly like this idea, but I think it will solve the problem.
You could argue that other stuff (like color/tinting) might then as well become global state too, while we're at it. However, I feel it's important to note that I don't think keeping global state variables is a good thing; it is error-prone and can be difficult to debug. Like, if you forget to reset the transform, things suddenly act not at all as you expected. But in this particular case I think the pros outweigh the cons.
So the result would be a simple:
term_set_transform(scaled(2));
and
struct term_transform t;
term_get_transform(&t);
However, if we expect to add other attributes like line-spacing etc., e.g:
term_set_attributes((struct term_attribs) { .linespacing = 2, .padding = 5 });
Then it would have been nice to consolidate that along with the transform. However, that will take us back into the original issue: being able to provide only the params you want to (e.g. what if you only want to set linespacing? An un-initialized transform is invalid, because scale would be 0).
So maybe the above is the correct approach. The additional attributes default nicely with un-initialized values.
Anyway, an issue with keeping global state is that any slip-up will affect everything that follows, and always setting the transform you expect while also preserving what was previously set becomes a large bunch of boilerplate. For example, to ensure defaults before printing, but also reset to what it was:
struct term_transform previous;
struct term_attribs previous_attr;
term_get_transform(&previous);
term_get_attributes(&previous_attr);
term_set_transform(TERM_TRANSFORM_NONE);
term_set_attributes(TERM_ATTRIBUTES_DEFAULT);
// all your printing
term_set_transform(previous);
term_set_attributes(previous_attr);
Yuck.
I suppose a way to mitigate that would be to introduce a state structure that holds both the transform, and attributes, so you could save/restore both in one go. Something like:
struct term_state {
    struct term_transform transform;
    struct term_attribs attributes;
};
struct term_state state;
term_get_state(&state);
term_set_transform(TERM_TRANSFORM_NONE);
term_set_attributes(TERM_ATTRIBUTES_DEFAULT);
// all your printing
term_set_state(state);
This saves a few lines, but also convolutes the API by adding further objects and functions.
Specifically, when filling an area of an uneven size, the area covered may be rasterized in an unexpected way.
For example, in below picture, the height of the background for the profiling overlay is set to fill 9 pixels (e.g. 1 more than the height of a glyph). Instead, something else happens (note the left and right sides of the bar):
However, and this is the core of the problem: since the transformation behind the scenes anchors the glyph around its center, the glyph must be offset by half its size to account for the anchoring. But. Half the size of 9 is 4.5. This causes rasterization to become unpredictable, since we'll be drawing from, and to, half a pixel.
Similarly, in the above example, the bar should also fill 319 pixels (1 less than the full width of the window), however, it fills up only 318 pixels. Again, due to the half-pixel adjustment.
In regards to this hardcoded limit:
Line 10 in 48f1c02
Some kind of limit is obviously necessary, but how low can we go? The current limit is not large enough. There's plenty of stuff I've wanted to print, only to have it cut off.
As an aside, keep in mind that this limit would be extended further into each buffered command if we ever decide to implement string-copying on print (see 2c7a942)- which could accumulate to a lot of memory and less performance, since it would likely involve malloc and some kind of stretchy buffer.
When running the performance test on Windows 10, I am getting approximately half the FPS that I'm getting on my MacBook Pro. Seems to me that, if anything, it should be the other way around (considering the MacBook also has way more pixels to light up with its Retina display).
I also think the performance I'm getting on the MacBook Pro is not good enough for what it is doing, so there's that too.
Considering that the PC I'm testing on is built for gaming, and should, at least spec-wise, be the stronger machine, this makes me think there might be something fundamentally wrong somewhere (Edit: maybe not so fundamental; see observations below).
(Not entirely true, as the MacBook actually has a newer CPU, which may be more efficient; e.g. PC i5-6600K vs. MacBook i5-7360U. Also, interestingly, the MacBook has fewer cores (2 vs. 4 on the PC), which may also be an indicator.)
Edit:
Though by looking here http://www.cpu-world.com/Compare/217/Intel_Core_i5_Mobile_i5-7360U_vs_Intel_Core_i5_i5-6600K.html it seems the PC CPU should totally be doing better...
Additionally, the PC has a dedicated GPU, which the MacBook does not (Iris™ Plus Graphics 640)
One thing I know for sure is that all the glyph transformation is heavily CPU-bound (the engine is generally CPU-bound), and is what takes most cycles every frame- for example, getting rid of any transforming print commands in the performance test doubles the FPS immediately.
Observations:
The logo example runs incredibly fast on PC, e.g. ~5000 FPS, and only ~500 on the MacBook.
This example program presents the simplest use-case, and I feel like that kind of performance is acceptable. So this suggests that the core render loop is OK, and the problem is more likely related to handling large amounts of commands/transformations and utilizing the CPU more efficiently. This is an area I'm not too experienced in, so I bet there's a bunch of stuff that could be optimized. Stuff like struct layouts and memory blocks.
Similar to #13, printing strings that should wrap inside a bounded area does not take scaling into account, causing issues like the one seen here, where the transform is scaled(2):
Without scaling:
It has come to my attention that it is, in general, considered bad practice to use relative paths in #include directives, and it is especially bad with those that track backwards.
For example, what we're doing here:
This seemed clear and correct to me, but MSVC is throwing warnings (C4464) for it.
I'd like to refactor the #include directives to be compliant with this, but is the solution really to add e.g. src/graphics/ as a header search directory (or maybe just src/)? I suppose it is.
Edit:
I think the way I want to do it is with -I src/include, e.g.:
Includes:
src/include/keys.h
src/include/graphics/renderer.h
src/include/graphics/viewport.h
src/include/platform/window.h
src/include/platform/timer.h
etc.
Implementations:
src/graphics/viewport.c
src/graphics/opengl/renderer.c
src/platform/glfw/window.c
src/platform/glfw/timer.c
etc.
This would mean that a file like renderer.c would no longer have #include "../viewport.h", but instead #include "graphics/viewport.h". Right? Or would it be #include <graphics/viewport.h>? In that case, maybe the better solution is to add a termlike directory inside src/include, so that we end up with #include <termlike/graphics/viewport.h>. This looks right, I think.
Currently, the profiling overlay is only available in debug builds (i.e. when #ifdef DEBUG applies), but it would also be useful to have in release builds.
So instead, there should be a #define somewhere that toggles whether it is included in a build.
It would be neat being able to store in-game screenshots or videos with a single keypress. It would probably only be available in debug builds, and video recording would (possibly) be limited to X seconds.
A nice, but not necessary, feature could be showing a timer of the recording time. But it would have to be a thing that did not end up in the actual recording, so I'm not sure how that would work.
For example, the layers test is fine for visually determining whether commands are sorted, but it could be boiled down further to simply assert that the sorting function results in the expected outcome. The visuals could fail due to many other things. Similarly, word-wrapping is very testable in that we can assert whether newlines are inserted as expected.
Currently, term_print supports printing a string of characters in a color, at a layered location. This is, as I see it, the minimal amount of features needed to do anything reasonably meaningful with this function.
However, being able to rotate a character, or a string, is potentially a very neat addition, as it would increase the number of possible effects (e.g. rotating sharply pixeled characters often results in odd-looking rasterization; you might discover that "☻" rotated 35 degrees looks more like a scary monster).
In the glyph renderer's current state it would be somewhat trivial to add rotation (for a single character- rotating an entire string will actually be problematic).
Issues:
- The location parameter (struct term_location) could include layering info, removing the need for a layer parameter. Similarly for rotation; just include an angle. Something like located(0, 0, layered(1), angled(35, TERM_ANCHOR_CENTER)).
- Add a term_printc, which is identical to term_print, except that it only prints one character, but with the added option of rotation and scaling (see #3)?

Applying rotation to individual characters can create new looks:
But actually rotating a string of text should look like this:
Can we provide both options, somehow?
Simply for convenience, it would be useful for the term_printstr function family to accept a variable list of arguments (like printf/sprintf etc.). I guess maybe all of the print functions, really.
However, this addition comes with some implications related to who owns the memory (see #5). In order to provide this feature, string copying (as mentioned in #5) would need to be implemented first.