
jonathanhogg / flitter


A functional programming language and declarative system for describing 2D and 3D visuals

Home Page: https://flitter.readthedocs.io

License: BSD 2-Clause "Simplified" License

Python 42.55% Cython 55.87% GLSL 1.58%
ableton-push2 glsl language livecoding opengl visuals live-performance live-visuals live-coding dmx

flitter's Introduction

About

Jonathan Hogg trained in software and electronics and worked for over 20 years, commercially and in academia, designing and developing systems and managing teams. In 2009 they moved into the creative industries, where they create interactive digital artworks, multi-sensory installations and participatory artworks, as well as designing and performing live digital visuals.

Their work involves hacking varied media including: high-level and embedded software; digital and analogue electronic circuitry; LED lighting; 2D and 3D digital graphics; audio and video; wood, metal and even occasionally paint. They have performed live visuals at the Royal Albert Hall, South Bank Centre, Bristol Arnolfini and at festivals, and their video installation work has been projected onto the Queen's House in Greenwich and featured at Coventry City of Culture.

Jonathan has extensive experience working in education as a creative practitioner delivering creative and technical projects across all age ranges, including experience working with children with special educational needs. They also mentor young people for Arts Emergency and are a visiting lecturer at University of the Arts London.

Output Arts

In November 2009, Andy D'Cruz, Hilary Sleiman and Jonathan Hogg formed arts collective Output Arts to create multisensory artworks for non-gallery venues. Since then, Output Arts have created a wide range of installation and participatory artworks and arts events.

flitter's People

Contributors

jonathanhogg

flitter's Issues

It would be great if point lights could have a radius

This would improve the realism of specular reflections where a light is relatively large compared to its distance from a surface. At the moment a light is treated as infinitely small (that being the definition of a "point", after all) and so the reflections look like small dots at close distances.

It would be useful to have distinct camera support in `!canvas3d`

At the moment, the camera is specified with the viewpoint, focus, up, fov and orthographic attributes. This is OK unless you want to apply transformations to the camera, at which point it becomes a pain in the arse. One can sort of get around this by applying transforms to the world instead, but it'd be handy to be able to just place a camera in the world at an arbitrary position. Even neater would be to place multiple cameras inside the world and then switch between them at will.

I suggest adding a !camera object that can be placed in the normal object graph, within !transforms as desired. This would probably have pretty much the same attributes as the current top-level ones, with perhaps renaming viewpoint to position (to be consistent with other objects) and adding a direction alongside focus.

If these cameras had something like an id attribute, then a top-level camera selection could be done with something like camera=id.

What about just throwing away queries?

Queries are a royal pain-in-the-arse to handle correctly in the language, with annoying side-effects and weird rules around attribute-setting and node appending.

I've not been using them much in my recent work and a glance at the places I've used them in the past (mostly stuff from Let My Country Awake) suggests that those could be replaced with template functions.

Maybe it is time to accept that it was an idea that just hasn't turned out to be as useful as I thought it would be?

What if symbols became numbers instead of strings?

At the moment a symbol, like :foo, is just a synonym of the equivalent string, "foo". However, what if we converted symbols into numbers instead?

I could have a symbol table that allocates new large/unusual numbers for each new symbol and maintains mappings both ways between these. The parser would replace all symbols with simple number vectors. Then, wherever I'm currently expecting symbol-like strings, e.g., the values of enumerated attributes like composite=, I could accept a number if it maps to a string through this table. For compatibility I could have Vector.as_string() do this mapping automagically.

This would have the advantage that state keys incorporating symbols and numbers would become simple numeric vectors, which are much faster to manipulate and – importantly – hash.

The downside is that the internal state would be less human-readable, but I don't think there are many places where I currently print out state keys. The smartest thing to do might be for Vector.__repr__() to look up all numbers in the symbol table in case they match a symbol, and print out the symbol instead.

I'd want to ensure that the symbol table uses numbers that are unlikely to collide with normal number usage in a program. Maybe large negative integers would be smart. Ideally you'd want this symbol table mapping to be stable, so alternatively it could just be the 64-bit string hash (as computed normally in Vector.hash()) interpreted as a double? This would surely turn up numbers that are unlikely to collide with normal usage of numbers?
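To make that concrete, here's roughly the sort of two-way table I have in mind, sketched in plain Python with large negative integers (the class and method names are just illustrative, not the actual Vector/parser API):

class SymbolTable:
    def __init__(self):
        self._by_name = {}
        self._by_number = {}
        self._next = -1 << 48  # start in a range very unlikely to occur in a program

    def intern(self, name):
        # Allocate a new number the first time a symbol is seen
        if name not in self._by_name:
            number = float(self._next)
            self._next += 1
            self._by_name[name] = number
            self._by_number[number] = name
        return self._by_name[name]

    def lookup(self, number):
        # Reverse mapping, for things like Vector.__repr__() printing the symbol back out
        return self._by_number.get(number)

symbols = SymbolTable()
assert symbols.lookup(symbols.intern('foo')) == 'foo'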

Should `Vector` support a second dimension?

There are a few places where I am implicitly treating Vector as having a second dimension:

  • In multiple-binding for loops (for a;b;c in v)
  • In zip(), which takes multiple vectors and produces a vector suitable for multiple-binding
  • In functions that take an optional group-size argument, e.g., sum() and accumulate()
  • In functions that produce or consume pairs, e.g., polar() and angle()
  • Explicitly in the Matrix33 and Matrix44 sub-classes of Vector

Would all of this be neater if I just supported a second dimension to vectors? I hesitate to call this a "matrix" as I feel that ought to be reserved for fixed-size objects rather than the variable-length objects that the first four of these usages operate on. I'm going to call these "arrayed vectors" for the moment, based on the C++ terminology of arrays being fixed-length and vectors being variable-length.

So the array part of an arrayed vector would be a size value that the vector length becomes a multiple of. Mostly no changes to the model would be required except for adding a new attribute that contains this array size and some changes to how indexing works.
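In plain Python terms, the indexing rule would amount to something like this (just an illustration of the idea, not the actual Vector implementation):

def arrayed_item(values, array_size, i):
    # Item i of an n-arrayed vector is the i-th group of n underlying values
    assert len(values) % array_size == 0
    return values[i * array_size:(i + 1) * array_size]

coords = [1.0, 0.0, 0.7, 0.7, 0.0, 1.0]   # a 2-arrayed vector of 3 items
assert arrayed_item(coords, 2, 1) == [0.7, 0.7]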

Let's say that zip consumes m flat vectors (i.e., vectors with an array size of 1) and returns an arrayed vector with an array size m. Indexing or iterating over this should produce m-vectors. If multiple-binding for loops were redefined such that the names are bound to each element of the vector produced by iterating over the loop source, then the overall behaviour of:

for a;b;c in zip(as, bs, cs)
    ...

stays the same. However, looping with multiple names over a flat vector would always result in binding each element to the first name and setting the other names to null.

Functions that implicitly return arrayed vectors, like polar(), could just explicitly do that and therefore could still be iterated over naturally:

let n = 12
    indices = 0..n

for x;y in polar(indices/n)
    ...

However, now one could also naturally index the results:

let n=12
    indices = 0..n
    coords = polar(indices/n)

for i in indices
    let x;y = coords[i]
    ...

which would improve the various places in my code where I have ugly indexing like coords[i*2..(i+1)*2]. Functions like angle() would take 2-arrayed vectors and return flat vectors. hypot() would take an n-arrayed m-item vector and return a flat m-item vector with the n-dimensional hypotenuse of each item. sum() and accumulate() would lose their second argument and just do The Right Thing with any n-arrayed vector they are given.

Matrix33 and Matrix44 are only used internally, but it would make sense for these to match 3-arrayed and 4-arrayed vectors in terms of their attributes. It might be neat to abstract out operations into Vector that can easily work with any size of matrix – like translate, scale, vmul, mmul and transpose.

What if `Context` objects became the interface between the engine and rendering?

Following on from an idea in #38 of putting references into Context so that @context_funcs can access them, what if the Context was also passed into renderers instead of the current mish-mash of engine, node, references and global names?

Other than possibly just being neater, another key advantage of doing this would be allowing renderers to update the warnings and errors sets. At the moment, there's no good way for renderers to log problems that wouldn't spam the console on each frame, so they tend to just eat errors and forget about them.

Forcing use of unsupported recursion causes the kind of horrible errors you'd expect

Flitter does not support recursion.

That said, you can of course make it recurse with this one hack:

func fib(fib, n)
  if n > 1
    fib(fib, n-1) + fib(fib, n-2)
  else
    n

debug(fib(fib, 10))

Sadly this will cause the simplifier to explode because it attempts to statically evaluate this recursion. The implementation of tree.IfElse.simplify() eagerly evaluates the then expression of the if and chokes on its own stack.

The graph model could be a proper DAG

The hokey reference stuff currently allows one to link one piece of the scene tree into another point. It'd be neater if the system just supported a proper DAG (directed acyclic graph). What I'd really like to do is this:

!shader fragment="screen.frag"
    !canvas#main
        ...
    !shader fragment="blur.frag" horizontal=false
        !shader fragment="blur.frag" horizontal=true
            {#main}

For this to work, a bunch of changes would need to be made:

  • drop the parent member from model.Node and make whatever changes elsewhere this implies
  • nodes need to be added into the context graph before child nodes are evaluated and appended (see more below)
  • the language needs a new mechanism for unlinking a child from a node (or perhaps just overwriting all children)
  • the language needs a mechanism for appending children to a node without accidentally appending that node into the main graph
  • the scene rendering will need to remember when a scene node has already been updated and not try to render it again

Evaluation changes

At the moment, the evaluation is strictly functional tree reduction with the exception of Top which evaluates each sub-expression (representing a top-level statement) one at a time and appends the results into the context graph. This would need to change to something more like an imperative model, as otherwise there'd be nothing in the graph to query in the example above.

My first thought is to add a new execute() generator method to the statement expressions. This would yield individual Vectors. So Sequence would yield from the execute() method of each sub-expression, IfElse would evaluate() its condition and then yield from one branch or the other, and so on. The default execute() method on most expressions would just yield the result of evaluate(). Simple expressions would still call evaluate() on their sub-expressions for speed (e.g., binary operators). If necessary, an evaluate() method on statement-like expressions would iterate over execute(), collecting the results together into one Vector.

A statement-like expression here would be: Sequence, For, IfElse, Node, Tag, Attributes, Append, Prepend. I guess Let and Function as well, logically, but they both yield nothing. I think partial-evaluation remains unchanged: this really only impacts Search, which cannot be partially evaluated.
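Roughly, in Python terms, the pattern I'm imagining looks like this (illustrative only – the real classes live in the compiler/VM and won't look exactly like this):

class Expression:
    def evaluate(self, context):
        raise NotImplementedError

    def execute(self, context):
        # Default: simple expressions just yield their single evaluated result
        yield self.evaluate(context)

class Sequence(Expression):
    def __init__(self, expressions):
        self.expressions = expressions

    def execute(self, context):
        # Statement-like expressions delegate to the execute() of their sub-expressions
        for expression in self.expressions:
            yield from expression.execute(context)

    def evaluate(self, context):
        # If a single result is needed, collect the yielded vectors together
        # (a plain list stands in for a Vector here)
        return [value for vector in self.execute(context) for value in vector]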

Language changes

At the moment, adding a Query below a Node reparents the matches. With a DAG there needs to be a new way to detach a node from one place and attach it in another so one can still do grafting. I'd propose a new append operator that overwrites the list of children instead of appending to it (perhaps !), and a new operator for discarding the results of a statement so that they don't get re-appended to the context graph root (I'm gonna use ? while I think about this):

? {window} !
    !shader fragment=read("feedback.frag")
        {window>*}

This needs some careful thinking about: if ! immediately drops the children of {window}, then the later {window>*} won't match anything and the previous contents of the window get garbage collected. Worse still, if !shader is appended immediately, then {window>*} creates a cycle.

A rounded box primitive would be a nice thing

Here's a functional implementation of what I mean:

func rbox(_, position=0, size=1, rotation=null, radius=null, segments=null)
    let radius=clamp(radius/size if radius != null else 0.25, 0;0;0, 0.5)
        inner=1-radius*2
        segments=max(1, segments//8)*8 if segments != null
    !union position=position size=size rotation=rotation
        if inner[1] and inner[2]
            !box size=1;inner[1];inner[2]
        if inner[0] and inner[2]
            !box size=inner[0];1;inner[2]
        if inner[0] and inner[1]
            !box size=inner[0];inner[1];1
        if radius[0] and radius[1] and radius[2]
            for x in (-1;1)*inner[0]/2
                for y in (-1;1)*inner[1]/2
                    for z in (-1;1)*inner[2]/2
                        !sphere segments=segments position=x;y;z size=radius
        if radius[0] and radius[1] and inner[2]
            for x in (-1;1)*inner[0]/2
                for y in (-1;1)*inner[1]/2
                    !transform translate=x;y;0 scale=radius[0];radius[1];inner[2]
                        !cylinder segments=segments
        if radius[0] and inner[1] and radius[2]
            for x in (-1;1)*inner[0]/2
                for z in (-1;1)*inner[2]/2
                    !transform translate=x;0;z scale=radius[0];inner[1];radius[2]
                        !cylinder segments=segments rotation=0.25;0;0
        if inner[0] and radius[1] and radius[2]
            for y in (-1;1)*inner[1]/2
                for z in (-1;1)*inner[2]/2
                    !transform translate=0;y;z scale=inner[0];radius[1];radius[2]
                        !cylinder segments=segments rotation=0;0.25;0

A rounded box primitive would also generalise !box, !cylinder and !sphere in that:

  • @rbox radius=0 is a !box
  • @rbox size=2;2;1 radius=1;1;0 is a !cylinder
  • @rbox size=2 radius=1 is a !sphere

The difference would come down to how the texture UV coordinates work.

Ideally an !rbox would use a box model of UV coordinates and carefully wrap each side a quarter of the way around the cylinders at the edges and over the eighth-sphere "corners". This ought to take into account the relative area of the flat side vs the curved parts so that the texture isn't weirdly stretched or compressed. To keep the mapping correct, there would need to be seams around each of the 6 "faces" of the !rbox. I guess this also then means that segments must be a multiple of 8.

Counters annoy me

I dislike that counter() is a function with side-effects. The proper thing should be that counters become very simple renderers. Something like:

!counter state=key rate=0 [ time=frame-time ] [ initial=0 ]

  • counter value would be accessed as $(key)
  • this means there's no point in trying to minimise state activity by remembering the current rate; instead, just remember the last value of time, perhaps as $(key;:time)
  • the time attribute name is to be consistent with !physics
  • adding initial makes sense as it avoids always having to add a starting point onto the counter value.

If key isn't found in the state then it is initialised with the value of initial and time is recorded at key;:time. Each frame I calculate the delta between time and the last value from the state and then multiply this by rate and add it to the current key value in the state. I can just use normal vector multiplication and addition here so n-vectors are supported automatically.
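In plain Python terms (with a dict standing in for the state and a (key, ':time') tuple standing in for key;:time), the per-frame update is just:

def update_counter(state, key, rate, time, initial=0):
    time_key = (key, ':time')
    if key not in state:
        # First time this counter is seen: initialise it
        state[key] = initial
    else:
        # Advance by rate multiplied by the elapsed time since the last frame
        delta = time - state[time_key]
        state[key] = state[key] + rate * delta
    state[time_key] = time
    return state[key]

state = {}
update_counter(state, 'beats', rate=2, time=0)      # initialised to 0
update_counter(state, 'beats', rate=2, time=0.5)    # now 1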

Interestingly, rate, time and initial cannot be explicitly null – any of them being null is the same as the attribute being unset and therefore the default value. However, any of them could be non-numeric, which would result in a null calculation, so I'm going to assume that non-numeric vectors are implicitly null, and ignore them.

If I'm going to change counters, I should do so before 1.0.0.

OpenGL is dying

My current use of pyglet and moderngl was always a hack and needs to end. There are, thankfully, now some decent alternatives to investigate.

The top candidate for replacing OpenGL has to be WebGPU – which thankfully isn't just for the web. The rust-based wgpu-native library provides a multi-platform translation API that, importantly for me, maps to Metal on macOS. There appear to be solid Python bindings available for this library.

I'm still going to need a windowing layer and GLFW may be the thinnest/simplest cross-platform API for this that is supported by wgpu-py.

Either fix or pull simplification on state

Program simplification on state is completely defeated by anything that frequently touches state, which includes physics systems or counters. I should either turn it off and pretend it never happened, or I should think about how to make it work in these cases.

There is an argument for turning it off, as running the simplifier is not zero-cost and can result in irritating frame stutters.

In terms of making it work, the idea that occurs to me would be to bisect the state into static and changing pieces: simplify on the former and continue to look up the latter dynamically, and dump the simplified program if anything on the static side changes.

Probably the simplest way to achieve this would be to keep a second dictionary mapping state keys to timestamps. One could then walk this every x seconds looking for things that have not changed in the last x seconds and run the simplifier for just those keys. Keep a record of which keys were simplified on. If the set of static keys hasn't changed on the next check, we'd keep the current simplified program. If any of the static keys changes during a frame then dump the simplified program.
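Something like this is the bookkeeping I have in mind, sketched in Python (names are illustrative, not the actual StateDict internals):

import time

class StateTracker:
    def __init__(self, interval=10):
        self.interval = interval
        self.last_changed = {}      # state key -> timestamp of last change
        self.simplified_on = set()  # keys the current simplified program assumed static

    def touch(self, key):
        # Called whenever a state key is written
        self.last_changed[key] = time.monotonic()

    def static_keys(self):
        now = time.monotonic()
        return {key for key, when in self.last_changed.items()
                if now - when >= self.interval}

    def needs_resimplify(self):
        # If the set of static keys has changed, dump the simplified program and redo it
        return self.static_keys() != self.simplified_on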

Functions and function calling need a cleanup

The function code is a bit of a mess, particularly when it comes to the division between native Python functions and Flitter functions. This is what I think needs doing:

  • debug() should just be a @context_func in functions.pyx and all special pleading for it removed
  • move references into the Context class and make sample() a @context_func as well
  • allow func to declare recursive functions – this should just be a case of adding the function name before the parameters onto the lnames list before compiling the body and then pushing it onto the lnames stack as part of the calling convention
  • box all Flitter functions as Python function-like objects – basically turn call_helper() into a callable class wrapper around func Programs
  • make all function calling use the Python call convention – dump CallFast and have Call just use PyObject_CallObject() or PyObject_Call() depending on whether any kwargs have been popped off the stack
  • let the normal Python call stack handle and throw recursion errors as necessary

An advantage of all of this is that it will allow for higher-order functions, like passing Flitter functions into native functions as arguments. It'll also let me Cython-optimise the currently quite slow sample() function.

This also paves the way for possibly allowing functions to be passed into the rendering pipeline as attributes of Nodes…
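For the boxing of Flitter functions mentioned above, I'm imagining something along these lines in pure-Python terms (the Program/run API here is made up for illustration, not the actual VM interface):

class BoxedFunction:
    def __init__(self, name, program, parameters, defaults):
        self.name = name
        self.program = program          # the compiled func body
        self.parameters = parameters
        self.defaults = defaults

    def __call__(self, *args, **kwargs):
        # Bind defaults, then positional arguments, then keyword arguments
        bindings = dict(zip(self.parameters, self.defaults))
        bindings.update(zip(self.parameters, args))
        bindings.update(kwargs)
        return self.program.run(bindings)

Because the result is an ordinary Python callable, native functions can take Flitter functions as arguments and the normal Python call stack handles recursion for free.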

Add some basic functional testing

Assuming that I can get #34 done, it'd be good to at least add functional tests based on the programs in /examples. What I'd need to do is rig them up to run for a set amount of time offscreen and then dump the framebuffer to an image file. If I add sample images for each of the examples into the repo (which would be good for documentation's sake anyway) then I could do some image comparison to check that the examples are rendering correctly.

Possible useful link for image comparison:

https://sewar.readthedocs.io/en/latest/#
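For a first pass, even a dumb numpy difference might be enough (a sketch only – the paths and tolerance are placeholders, and the sewar package linked above offers fancier metrics):

import numpy as np
from PIL import Image

def images_match(rendered_path, reference_path, tolerance=2.0):
    # Compare a dumped framebuffer against the checked-in sample image
    rendered = np.asarray(Image.open(rendered_path).convert('RGB'), dtype=float)
    reference = np.asarray(Image.open(reference_path).convert('RGB'), dtype=float)
    if rendered.shape != reference.shape:
        return False
    return float(np.abs(rendered - reference).mean()) <= tolerance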

Should indexing wrap?

There is an argument that indexing should wrap the same way that vector expansion in operators works, i.e., indices are modulo the length of the vector.

This would have a few benefits:

  • Places where I currently expand a vector manually out to a particular length to ensure indexing works (e.g., multiplying sizes by (1;1;1)) would just work
  • Negative indexing could be used to select from the other end of a vector without having to call len() and do a subtraction
  • A single number could be easily expanded out to an arbitrary-length vector with slicing, e.g., 5[..100] would produce a hundred-long vector consisting entirely of the number 5 (see the sketch below)
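In plain Python terms, the wrapped semantics would amount to this (an illustration, not the actual Vector code):

def wrapped_index(values, indices):
    # Indices are taken modulo the vector length
    n = len(values)
    return [values[int(i) % n] for i in indices]

assert wrapped_index([5], range(100)) == [5] * 100    # like 5[..100]
assert wrapped_index([1, 2, 3], [-1]) == [3]          # negative indexing from the other end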

Question: should multi-binding let statements also wrap a short vector or should they match the behaviour of a multi-binding for loop? It would seem to be useful if they did wrap, as then one could write:

let width;height=SIZE

and have it do The Right Thing without having to do SIZE * (1;1).

This feels like it would be a substantive enough change that I should do it before 1.0.0 or not do it at all.

Add support for offscreen GL contexts

It'd be great to be able to create a non-window context in Flitter, for containing scene nodes that will be used only as references and to allow generating images and videos in a non-windowed environment – which would be particularly useful for functional tests.

Why aren't `Context` and `StateDict` part of the model?

I think they used to be part of language.tree and I moved them into language.vm when I created that.

Anyway, they are clearly part of the contract between the language and render frameworks and so should be part of model.

This would also solve the contortion I've subsequently had to make in creating a language.context module to avoid an import cycle.

OSC support should be added to `render` framework

The OSC stuff that's currently in there to support the separate Ableton Push process should be moved into the render package and turned into a general-purpose mechanism for interfacing the language with external things via OSC.

The obvious thing to do is to support an OSC listener that can update the state. Maybe something like:

!osc_listen host='127.0.0.1' port=12345
    !endpoint address='/a/b/c' state=:abc

With whatever values are sent to the /a/b/c address being saved as a vector against the :abc state key. This will obviously work fine for ints, floats, bools and strings. Could consider the idea of grouping up the address path into a node/vector hierarchy, like:

!osc_listen host='127.0.0.1' port=12345
    !group prefix=:a;:b
        !endpoint address=:c state=:abc

This may be unnecessarily wordy though. There's probably no harm in it and in allowing addresses to be vectors that get joined using / characters.

Trickier is figuring out how to send OSC values. Perhaps something like:

!osc_send host='127.0.0.1' port=11000
    !group prefix=:live
        !endpoint address=:track;:set;:volume values=track_id;volume

This should presumably keep a cache of the last sent addresses/values and only send changes, with perhaps some kind of (per-endpoint) configurable timeout to resend the current value if it doesn't change.
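The send-side caching could be as simple as this, sketched in Python (send_message() here is a hypothetical stand-in for whatever actually puts the packet on the wire):

class SendCache:
    def __init__(self, resend_timeout=1):
        self.resend_timeout = resend_timeout
        self.last_sent = {}   # address -> (values, timestamp)

    def maybe_send(self, address, values, now, send_message):
        previous = self.last_sent.get(address)
        if previous is not None:
            last_values, last_time = previous
            # Skip the send if nothing changed and the resend timeout hasn't expired
            if values == last_values and now - last_time < self.resend_timeout:
                return False
        send_message(address, values)
        self.last_sent[address] = (values, now)
        return True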

Documentation is woefully inadequate

I need to fully document:

  • The full language
  • All functions
  • A tutorial
  • The various basic !window graph nodes: !shader, !video, !image, !record and !reference
  • All of !canvas
  • All of !canvas3d
  • All of !physics
  • The high-level !controller nodes: !rotary, !pad, !button and !slider
  • The specifics of the supported MIDI surfaces

Doing this might force me to stabilise some of the language… 🙄

Windowed mode not working on macOS Sonoma (Apple Silicon)

I checked out and tried to run flitter as described in the readme, and running the demo crashes:

(venv) $ flitter examples/hoops.fl
15:17:42.969 93674:.engine.control  | SUCCESS: Loaded page 0: examples/hoops.fl
/Users/michael/Dev/flitter/venv/lib/python3.11/site-packages/glfw/__init__.py:916: GLFWError: (65540) b'Invalid window size 0x0'
  warnings.warn(message, GLFWError)
15:17:43.238 93674:.engine.__main__ | ERROR: Unexpected exception in flitter
Traceback (most recent call last):
  File "/Users/michael/Dev/flitter/venv/bin/flitter", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/Users/michael/Dev/flitter/venv/lib/python3.11/site-packages/flitter/engine/__main__.py", line 65, in main
    asyncio.run(controller.run())
  File "/opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/[email protected]/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/michael/Dev/flitter/venv/lib/python3.11/site-packages/flitter/engine/control.py", line 226, in run
    self._references = await self.update_renderers(context.graph, **names)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/michael/Dev/flitter/venv/lib/python3.11/site-packages/flitter/engine/control.py", line 116, in update_renderers
    await asyncio.gather(*tasks)
  File "/Users/michael/Dev/flitter/venv/lib/python3.11/site-packages/flitter/render/window/__init__.py", line 121, in update
    self.create(engine, node, resized, **kwargs)
  File "/Users/michael/Dev/flitter/venv/lib/python3.11/site-packages/flitter/render/window/__init__.py", line 457, in create
    self.recalculate_viewport(new_window)
  File "/Users/michael/Dev/flitter/venv/lib/python3.11/site-packages/flitter/render/window/__init__.py", line 526, in recalculate_viewport
    if width / height > aspect_ratio:
       ~~~~~~^~~~~~~~
ZeroDivisionError: division by zero

The root cause of this issue is that the call to get the monitor working area returns (0, 44, 0, 0) – this is not an error (I can call glfw.get_error() and I get 0, and the 44 implies we know about the menu bar).

This response gives mw and mh as zero, which causes the following code:

            while width > mw * 0.95 or height > mh * 0.95:
                width = width * 2 // 3
                height = height * 2 // 3

to make width and height zero.

I did some spelunking, and if I comment that code out so I get a window, then even with a fully set-up and working GL environment the monitor area is still wrong. In fullscreen things work, as this check then has no effect.
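For what it's worth, guarding the shrink loop would at least avoid the division by zero (just an illustration of the idea, not a tested fix):

def fit_to_monitor(width, height, mw, mh):
    # Only shrink the requested window size if GLFW reported a plausible working area
    if mw > 0 and mh > 0:
        while width > mw * 0.95 or height > mh * 0.95:
            width = width * 2 // 3
            height = height * 2 // 3
    return width, height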

macOS version: 14.0 (23A344)
Python version: Python 3.11.6 (from Homebrew)
CPU: Apple M1 Pro

Convert canvas to Cython

The current canvas drawing code could almost certainly be quicker if it was implemented in Cython. This would also allow the helpers.pyx code to be folded into it. It'll need the Python 3.10 match/case code changed to equivalent if statements.

The enums are a bit of a mess as well, so it might be worth sorting those out too – I suspect that just using dictionaries would be easier.

Can I come up with a decent model of translucency?

I'm thinking two passes:

  • Dispatch the back-faces of all translucent objects and draw the fragment normal and distance into a texture
  • Dispatch the front-faces of everything as normal with the first pass texture as a uniform

In the normal shader, read the backface distance and normal from the first pass texture if this fragment is translucent. Use backface position ($\mathrm{viewpoint} + \vec{V} \cdot d_{backface}$) and normal to calculate how much of each light falling there is absorbed into the object. Use this light multiplied by a transmission colour (is that just the albedo colour?) and multiplied by the translucency raised to the power of the thickness (difference between front and backface distance) to determine an amount of transmitted light that can be added into the fragment colour.
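Written out, the transmitted contribution would be roughly this (a sketch of the intent, with $t$ being the per-unit-distance translucency and $d_{backface} - d_{front}$ the thickness along the view ray):

$$C_{transmitted} = L_{absorbed} \cdot C_{transmission} \cdot t^{\,d_{backface} - d_{front}}$$

where $L_{absorbed}$ is the amount of light arriving at, and entering, the backface position.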

This assumes that our new translucency material property is a fraction of light transmitted per unit distance.

There is an argument that this path should also be taken for transparent instances and that transparency should also take account of the material thickness. So then translucency is how much light from behind an object can be seen through it, and transparency is how much of the scene behind can be seen through it.
