
maddiem4 / concurrentree

A solution for concurrently editable rich text documents. Designed with P2P in mind, though it works just as well for server-only federation. Developer email: [email protected]

Home Page: http://orchard.crabdance.com/

Python 100.00%

concurrentree's People

Contributors

adajw, maddiem4


concurrentree's Issues

Update document schema

New schema includes top-level "about" property, and a restructuring of the "permissions" property. DDD support inside the permissions is planned to be in root["permissions"]["DDD"], and I have a vague idea what it will be, but I'm saving that work for another ticket entirely.

This is a subticket of Ticket 3.

#about
    $doctype (MIME, with some extensions)
    $docname
    #owners
        strinterface: bool
    #sources
        strinterface: priority (int)

#permissions
    // global overrides
    #read
        strinterface: bool
    #write
        strinterface: bool
    #graph
        #vertices
            name (str) : #
                #participants
                    strinterface: bool
                $threshold
        #edges
            str(["vertice1","vertice2"]) : # // always in alphabetic order
                #participants
                    strinterface: bool
                $threshold
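
For concreteness, here is a hedged example of a flattened document that would fit this schema, written as a Python dict. The interface strings, vertex names, and numbers are placeholders, not part of the spec.

# Illustrative only: the keys follow the schema above, the values are made up.
example_doc = {
    "about": {
        "doctype": "text/plain",                      # MIME, with some extensions
        "docname": "example-doc",
        "owners": {"<alice strinterface>": True},     # strinterface: bool
        "sources": {"<bob strinterface>": 0},         # strinterface: priority (int)
    },
    "permissions": {
        "read":  {"<alice strinterface>": True},      # global overrides
        "write": {"<alice strinterface>": True},
        "graph": {
            "vertices": {
                "editors": {"participants": {"<bob strinterface>": True}, "threshold": 1},
                "owners":  {"participants": {"<alice strinterface>": True}, "threshold": 1},
            },
            "edges": {
                '["editors","owners"]': {             # stringified pair, always in alphabetic order
                    "participants": {"<alice strinterface>": True},
                    "threshold": 2,
                },
            },
        },
    },
}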

Temporarily remove BCP from codebase

Until I get back to it - if I ever decide to, which seems less and less likely and/or necessary over time - BCP is a millstone around this library's neck. Maintaining it alongside the properly developing parts is a waste of effort the moment I have to expend any effort on it at all. It makes a lot more sense to cut it out for now and restore it later, thanks to the magic of tags.

Abstract secure key:value storage as another lib dependency

We can completely get rid of the ct.util namespace by moving encryptable key:value storage into its own project, or by reusing someone else's persistent storage abstraction work. In the meantime, in bug23, I will remove ct.util in favor of ct.storage, which will include the leftovers of the crypto library (which really belonged with storage all along). That way it's a single package to replace, which should make the eventual transition cleaner.

Remove SingleNode

After the writeup in #39 for @marcelklehr, and a bit of completely unrelated stewing on things, I figured out a way to completely remove SingleNodes from use in the codebase, the model, and the serialization standard.

Maps

This was the big reason to have SingleNodes in the first place; they were just reused in Lists for convenience. But we can take a better approach, analogous to BigTable's use of consecutive, immutable SSTables (which fit the CTree immutability model quite nicely).

A map will be defined by its set of keys (in the representation sense). So if I want to create a map representing { "hello" : "world", "goodbye" : "strudel" }, I will start with the following instruction:

[2, [...], 8, ["hello","goodbye"]]

This object now has three childsets open:

  • Childset 0 : Competing values for key "hello".
  • Childset 1 : Competing values for key "goodbye".
  • Childset 2 : Extension merge. Only accepts maps.

Note that the order of the childset-to-key associations is defined by the order of the keys in the map definition instruction (this array is the map's immutable value). We can now insert into these childsets in the traditional way - using integer positions and the node key of the actual value. For example:

[1, [..., 8, "{\\"hello\\,893893892"], 1, "strudel"]

This inserts the string "strudel" into the map at position 1, thus setting the value for representation key "goodbye". These childsets have competition semantics - only the object with the "winning" key will be used.

Childsets with no children, or at deleted positions, do not contribute a key presence to the flattened representation at all. In the above example, we have so far created a map, and set the value for "goodbye", even though we have an unused space for setting the value for "hello". The flattened rep, currently, would be { "goodbye": "strudel"}. If position 1 were marked deleted, the rep would become {}.

Map extensions work by ordered-replace semantics. All the maps in the extension childset are ordered from least- to most-successful (sorted by keys). For each one, the flat rep is computed, and used to update the parent map's flat-rep-in-progress. This is similar to a plain old dict.update(), except that keys that are explicitly non-present in the child map (whether from deletion, or an empty childset) are removed from the parent rep-in-progress. Thus, a key may be removed and set again multiple times if there is heavy competition in the extension childset.

In pseudocode, this can be expressed pretty simply:

def flatten(node):
    # Positional childsets: one per representation key.
    temp = {}
    for i in range(len(node)):
        # A key contributes only if its position isn't deleted
        # and its childset has a winning child.
        if (not node.deletions[i]) and len(node.children[i]):
            temp[node.keys[i]] = node.children[i].flatten()
    # The final childset holds extension maps, ordered least- to most-successful.
    for extension in node.children[len(node)]:
        temp.update(flatten(extension))
        # Keys explicitly non-present in the extension are removed (no-op if absent).
        for k in extension.deleted_keys:
            temp.pop(k, None)
    return temp

In the real version, of course, this function would be a method of the Map Node class, and for performance, you would collect child nodes in temp first (and flatten them as a final step), to avoid flattening things you won't actually use in the final result.
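
A hedged sketch of that deferred-flattening variant, reusing the made-up node attributes from the pseudocode above; winning_children() is a hypothetical helper that applies the same extension logic recursively but returns nodes instead of flat values.

def flatten_deferred(node):
    # Collect the winning child *nodes* first, without flattening anything yet.
    winners = {}
    for i in range(len(node)):
        if (not node.deletions[i]) and len(node.children[i]):
            winners[node.keys[i]] = node.children[i]
    for extension in node.children[len(node)]:
        winners.update(extension.winning_children())  # hypothetical helper
        for k in extension.deleted_keys:
            winners.pop(k, None)
    # Only the keys that survive into the final result ever get flattened.
    return {key: child.flatten() for key, child in winners.items()}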

Lists

Lists never "needed" SingleNodes in the first place. It's not that much code to have them enforce nodekey constraints themselves, and you might as well, since that's the only place you'd be using it anymore. Because a list can be extended from any position, and any element in a list can be marked deleted, all SingleNode-based replacement ever did was violate the SPOT rule, by defining multiple ways to replace a value beyond delete-and-insert.

Not much to say about this, really. Position childsets would simply hold the direct value. Unset positions would not be present in the flatrep.

Validation system consolidation

Right now there's all the stuff being actively developed in model/validation/*, but there's also the in-use-but-not-actually-doing-anything model/validator.py. Bug16, in fact, is all about migrating a lot of security logic into validation filters used by the former.

Pros (validation/*):

  • Already providing practical benefit
  • Under active development
  • Will be used a lot more in the future for sure

Cons (validation/*):

  • Doesn't have fine grained op validation yet.

Pros (validator):

  • Fine grained op validator
  • Can abort an op at any point before, during or after it is applied

Cons (validator):

  • Doesn't really line up with the validation/* vision.

As you can see, both are important, and their only cons are their lack of mesh-together-ness. So I have to find a way to resolve the confusing name collision, and get these things to couple nicely and purposefully. I'll work on ideas in the comments here until I come up with an implementation plan I'm happy with.

Hook up MCP code with callbacks to use from other code.

Make sure that request-type messages store callbacks for response. Do this on a per-request basis. Basically, anything in Writer.pull_* should be storing user callbacks. Explicitly:

  • pull_index (done)
  • pull_snapshot
  • pull_op (done)
  • pull_ops
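
A hedged sketch of the per-request bookkeeping, using a made-up registry keyed by ack token rather than the real Writer API:

class CallbackRegistry(object):
    # One callback per outstanding request token (illustrative only).

    def __init__(self):
        self.pending = {}

    def register(self, token, callback):
        # Called from the pull_* side when the request frame goes out.
        self.pending[token] = callback

    def resolve(self, token, response):
        # Called when the matching response frame comes back.
        callback = self.pending.pop(token, None)
        if callback is not None:
            callback(response)

pull_index and pull_op already do this kind of thing (marked done above); pull_snapshot and pull_ops would register their user callbacks the same way before sending.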

Modularize the MCP vertical integration test

Currently, it's one big and increasingly monolithic monster of a test series. I'd love to be able to factor out enough common setup code to test most of that stuff in the docstrings of the actual functions I'm testing. This ticket will be considered complete when the vertical integration test is less than 50 lines and/or looks reasonable.

BCP protocol design - negotiation.

BCP's design is... poor, in some spots. A big one is dealing with its "I'll do what I goddamn want" attitude towards document state. Confirmation/error data is pretty much disregarded and ignored, so if the two endpoints have a disagreement about an op, the likelihood of it already being applied on one end is high. And CTrees being what they are, op application is a one-way street.

A better implementation, IMHO, is to have a dual-tree approach. One "experimental" tree, where ops are applied without question, and a "stable" tree consisting of only the confirmed data, as a backup in case the experimental tree is determined to be "wrong." This can be accomplished with an op storage map for experimental ops, and an ophash set containing the hash of every op between stable and experimental. If an op is confirmed good, it is removed from the temp storage containers, and applied to the stable tree. If not, it's still removed, but the experimental tree is reconstructed from the stable tree + temp stored data.

On receipt of an op, an endpoint should send either a confirmation or an error that can be directly associated with the op by hash. The error should be an "op failed" error that is sent independent of any other related errors. Confirmations will be numerous, and therefore must be lightweight, consisting of only {"type":"c","hash":"..."}. Good remote ops should be applied to stable and confirmed at once. Bad remote ops should be reported and dropped. An op timeout should also be negotiated, which reminds me to create another ticket for BCP session properties.
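
A hedged sketch of the dual-tree bookkeeping described above; the tree and op interfaces (copy, apply, hash) are placeholders for the real CTree API, and the op storage map and ophash set are collapsed into one dict for brevity.

class DualTree(object):
    def __init__(self, stable):
        self.stable = stable               # confirmed data only
        self.experimental = stable.copy()  # ops applied without question
        self.pending_ops = {}              # ophash -> op, between stable and experimental

    def apply_unconfirmed(self, op):
        self.experimental.apply(op)
        self.pending_ops[op.hash()] = op

    def confirm(self, ophash):
        # Confirmed good: promote the op to the stable tree.
        op = self.pending_ops.pop(ophash)
        self.stable.apply(op)

    def reject(self, ophash):
        # Confirmed bad: drop it and rebuild experimental from stable + leftovers.
        self.pending_ops.pop(ophash, None)
        self.experimental = self.stable.copy()
        for op in self.pending_ops.values():
            self.experimental.apply(op)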

Create classes for op and request validation

In order to replace the current crummy want_docname system in MCP.gear, we're going to want to create a concept of a validation queue. This can be any regular python iterable (such as a generator, to interface with Queue objects) that returns ValidationRequest objects. These in turn come in subclasses for different types of actions, like invitations to documents.

So, we'll need a couple new files for this:

  • model/validation/queue.py (container for Python Queues so they can be treated as iterables)
  • model/validation/request.py (base class for things that need approval)
  • model/validation/invitation.py (subclass for document invitations)
  • model/validation/cmdline.py (stdin/stdout-based functions for manually validating requests in the command line)

Then we need to modify MCP.gear to use these, moving as much callback information into invitation.py as possible.
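
A hedged sketch of queue.py and request.py from the list above; the class layout, field names, and callbacks are guesses, not the final design.

from queue import Empty  # the "Queue" module on older Python

class ValidationRequest(object):
    # Base class for things that need approval.
    def approve(self):
        raise NotImplementedError
    def reject(self):
        raise NotImplementedError

class Invitation(ValidationRequest):
    # Document invitation; fields and callbacks are illustrative.
    def __init__(self, docname, sender, on_approve, on_reject):
        self.docname, self.sender = docname, sender
        self._on_approve, self._on_reject = on_approve, on_reject
    def approve(self):
        self._on_approve(self)
    def reject(self):
        self._on_reject(self)

def drain(backing_queue):
    # Generator view of a Queue object, so callers can treat it as a plain iterable.
    while True:
        try:
            yield backing_queue.get_nowait()
        except Empty:
            return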

This is a subticket of bug12

Some sort of wrapper issue with dicts and deletion

The wrapper dict, for some reason, has an odd idea of how to determine whether it contains a key. A value of None, which is supposed to be equivalent to the key not existing, does not have that effect, while a value of an empty dict makes __contains__ evaluate to False. It's not clear where else the algorithm is faulty; I haven't looked into it too deeply yet. It shouldn't be hard to test for, though, and the problem is manifesting in host_table.py.

This ticket requires three things to be done:

  • Unit tests revealing this bad behavior must be written into the dict wrapper.
  • The failing tests in host_table.py need to be re-enabled.
  • All tests must pass again after debugging.
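
A hedged doctest-style sketch of the expected semantics, written against a hypothetical wrapper instance w rather than the real class:

>>> w["gone"] = None     # None should behave like the key not existing
>>> "gone" in w
False
>>> w["empty"] = {}      # an empty dict is a real value and should count as present
>>> "empty" in w
True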

Consensus protocol for edges

Extend the consensus protocol from Bug21 so that collision edges can also be defined. Edge collisions can be resolved entirely by machines, and should alert the original author and all endorsers when an op has to be rejected entirely due to collision (as opposed to just deferred).

This is a subticket of Bug18.

Integrate HardLupa as a document verifier

I've been busy lately over in the HardLupa project, making a security-hardened wrapper for lupa. Well, it looks like it's time for that to pay off. We can finally have a Lua interface for validating operations.

API

# Provided as a global

ctree.get_value(jaddr)
-> Function that takes a list (jaddr, as in, JSON address) and returns the value of the thing in that part of the JSON.

ctree.get_type(jaddr)
-> Returns the type of an object in the doc JSON, with possibilities [ string | list | map | number | bool | nil ].

# Should be defined by graph code, otherwise assumed to pass everything

validate_instruction(instruction)
-> Returns a bool.
     Instructions passed to Lua are in a simplified format, given as a table:
        * jaddr - JSON address as list
        * mod - modification type, string with possibilities [ insert | delete | create | update ]
        * details - small table whose properties are dependent on the modification and the type of object at jaddr.
     Return true if the instruction is okay, false if it does something "illegal."

validate_state()
-> Returns a bool, whether the end result of the op is a sane state.

Note: every JSON address starts with "before" or "after" to represent the root node. While the Python end is validating an instruction, it will do a validate_instruction loop in Lua for every instruction, and in that context, it will treat the "before" tree as "after all prior instructions in this op, but before the one being validated", and "after" as "after applying the instruction currently being validated." After going through every instruction this way using a triple buffer setup, Python will call validate_state(), and in that context it will treat "before" as "before the op entirely" and "after" as "after applying the op entirely".

If validation fails at any point, it will immediately drop the temp trees from memory and fail the op completely.

Implementation on the Python side

Each gear will allocate its own HardLupa sandbox process, and within that, SBRuntimes as necessary for each graph object. Runtimes will only be allocated if they are immediately needed for validating an op, and if they have not already been instantiated. If dnhash is a string containing the hexadecimal SHA1 of the docname, and oname is the name of the graph vertex, then a runtime's name in the sandbox (and in a dict maintained by the gear) is dnhash + "-" + oname.
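
A small hedged sketch of that naming rule, using nothing beyond the standard library:

import hashlib

def runtime_name(docname, oname):
    # dnhash is the hex SHA1 of the docname; oname is the graph vertex name.
    dnhash = hashlib.sha1(docname.encode("utf-8")).hexdigest()
    return dnhash + "-" + oname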

In the MCP folder, create a new file for a class Vertex, which subclasses hardlupa.SBRuntime, and accepts an instance of SBRuntime in the constructor to copy its data over (as well as, optionally, the code to initialize the user-provided Lua validation funcs). This class will have the custom code that provides the global "ctree" variable, and has special convenience functions for validating ops and such. Using this, we can move a lot of validation code out of other places entirely.

Ticket scope

For the purposes of this ticket, just get a document-wide Vertex demonstration working. The Lua does not have to be stored in-document, in fact it should probably just be instantiated as part of the MCP demo.

Oversimplified hooks for local op approval

Make a general-purpose op validation class, and use it in gear.py. Then, for convenience, modify the validation queue code to include a filtering system for automatically approving, rejecting, or deferring a request.

Update documentation for v0.5

Primarily the README, which should at least state that other sources of documentation may be out of date. This is a good time to establish a policy of including the version number in documentation where it is not part of the codebase, so that users can judge for themselves how up-to-date it is based on version numbers, without so much manual intervention on my part.

This would also be a good time to clean up some of the really, really misleading stuff from the doc folder, maybe. If there's anything with no historic value, I'll delete it, but otherwise I think a warning in the README should suffice.

Documents sometimes get an inaccurate apply list for some reason

The cause of a couple of serious and hard-to-pin-down bugs was that document objects' lists of applied op hashes were having ops added that hadn't actually been applied. In a bit of quantumness, ops applied to one document would mysteriously affect the apply list of another document.

This is easy enough to patch temporarily now that it's been found (comment out the "already applied" check and try not to feel too bad, for the moment, about wasted cycles), because ops are idempotent. But it's still a serious issue; the root cause needs to be found and stomped. Also, I'm not sure whether the check is preventing any network feedback loops. I hope not, but it very well could be.

Update wiki post-renovation

Most people probably don't even know this project has a wiki. It should not only be updated after #40 and #41 and such, it should also be featured in the main project page.

Remove test code in favor of DoctestAll

After getting sick of having the code for recursive testing being copied everywhere, I turned it into DoctestAll. Now, it's time to use it. That means stripping out top-level testing and updating the README.

Separate MCP data layer into its own protocol, EJTP.

MCP is made of two layers, the transport normalization layer and the data layer. I think a very good thing for the project would be to split the transport layer off into a reusable, separate library and use that as a dependency in MCP. This means the name "MCP" will refer only to the data layer, which runs on top of EJTP (Encrypted JSON Transport Protocol).

The first step, obviously, is to create a new EJTP repository and set it up so that it works as a standalone library. Then, this ticket will be done when ConcurrenTree is migrated to use the EJTP lib as a dependency.

Invent and use validation language globally in document

Create a temporary part of the document schema that stores validation rules for all operations on the document. Also, determine validation rule format, in a way that can be expressed as JSON (including pure text as an option). This language is to be worked out in the comments in this ticket.

This is a subticket of Bug18.

Rewrite MCP tracking/negotiation to use document private property

MCP needs a bit of a rework since its "oversimplification", which tried to store ops in the node structure itself. Using the private space instead allows for persistent storage of negotiation data, and op tracking already handles parts of that functionality. This clears up quite a bit of mind-bending meta overhead and, in theory, makes everything quite a bit cleaner.

To do this, operation tracks have to be reintroduced to the protocol itself. Once that happens, issues with storage will mostly be simple, "fix what's right in front of you" busywork, without too much theoretical thought required. The tricky bit, of course, is designing the API for tracks and negotiation. Ops can be transferred with a "track" property easily enough; the rest could probably just be done as signature-based metadata frames.

Subtickets:

Make sure that ConcurrenTree supports EJTP v0.8.1 API

Hopefully this won't be too intense a project alongside the DEJE -> MCP merge, but for the most part there shouldn't be conflicts there. The big issues I foresee involve the new encryptor cache system, and having to implement hellos or some other key exchange system in MCP (since it's no longer handed to us by EJTP).

When the unit tests for ConcurrenTree run error-free with current stable EJTP, this ticket will be ready to close/merge into #27.

Concepts?

hey there,

I'm intrigued. I know OT fairly well, and all the rambling about it and how ctree is so much better makes me want to check out the concepts behind ctree.

But I can't find any enlightening explanations of how ctree works exactly. Could you help me out here?

Ditch install.sh in favor of distutils

After having worked on the EJTP Python lib, I've learned a lot about Python packaging from necessity that would be useful to this project. But it's definitely a full ticket's worth of work, worthwhile as that work will be.

Things involved

  • Moving lots of the repo around into subfolders
  • Writing a setup.py script (sketched below)
  • Adding all the subpackages into that script
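
A hedged sketch of what that setup.py might look like; the version number and package list are illustrative and would need to match the real layout after the move.

from distutils.core import setup

setup(
    name="ConcurrenTree",
    version="0.5",                          # illustrative
    description="Concurrently editable rich text documents",
    url="http://orchard.crabdance.com/",
    packages=[
        "ConcurrenTree",                    # top-level package
        "ConcurrenTree.model",              # plus every subpackage, listed explicitly
        "ConcurrenTree.model.validation",
    ],
)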

Turn ConcurrenTree.model into the top-level directory

After moving encrypted key:value persistent storage to another library, there will be no "competition" to the model directory, and therefore, no reason not to move all its contents to top-level and get rid of that layer of hierarchy altogether. Booyah, let's do this thing.

Depends on #25, of course. Ticket is done when tests pass again and ConcurrenTree.model no longer exists.

Change codebase to use 4-space tabs, not hard tabs

Doctest doesn't get any indigestion from hard tabs, but as a developer, I sure get some from having to mix the two styles in my code. Not to mention that hard tabs make it difficult to edit on my phone over SSH, or to paste code here and there with predictable formatting. Once upon a time I was in love with hard tabs for their convenience on the keyboard, but these days it's too painful for other uses to maintain. Lop 'em all out, and replace them with 4 spaces each.
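
A hedged one-off conversion script; the .py filter and in-place rewrite are assumptions, and it's naive in that it also touches tabs inside string literals.

import os

for root, dirs, files in os.walk("."):
    for name in files:
        if not name.endswith(".py"):
            continue
        path = os.path.join(root, name)
        with open(path) as f:
            text = f.read()
        with open(path, "w") as f:
            f.write(text.replace("\t", "    "))  # hard tab -> 4 spaces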

Unit test for MCP demo code

Since this is the area I'm currently working, a unit test would be invaluable for regression prevention. Unfortunately, there are some freshly-discovered issues that occur when trying to do the whole demo in a single process, which I'll do as subtickets and link here.

Subtickets

  • Apply list crossover
  • Even without apply list filtering, some ops will just not apply at all for mysterious reasons.

BCP protocol design - session properties.

I propose a "session properties" mechanism based on the shared document "?session". It applies to that BCP session only, allows for existing sync and confirmation mechanisms, and shall contain as little MCPdoc cruft as possible (we'll see where that lands during the implementation of this ticket).

Add reserved property "private" to documents.

The top-level property "private" is to be reserved for document metadata. Operations cannot access it, it's stored as plain structures instead of wrappers, and it's totally invisible (and against the rules to overwrite) from the outside. However, it is stored to disk and restored the same way as any component of the document.

More implementation details to come later, but basically, all metaproperties that are not part of the synced layer of the document belong in here, possibly including docname.

Move all sorts of stuff out of gear.py

I can't take it anymore. It's just too goddamn big. It's hard to find anything in there, and more complex and error-prone for it. Anything that is not absolutely core to a gear needs to be in a different class, and a different file, imported in. Anything involving cryptography, permissions, validation... break that out.

Stuff that belongs

  • Communication using one EJTP client
  • Backend storage (including host data)

Stuff that does not

  • Crypto
  • Permissions
  • Validation (helper functions and filters at least)
  • Utility functions like "owner"

Rename validation to verification

Kind of a minor change in some ways, but since the focus is now pulled toward starting with manual user verification and automating from there, it should make the intentions of that code clearer.

This bug involves changing all references to "validation" in the codebase and filenames to "verification." If there are any instances where this seems unreasonable, discuss in the comments.

MCP protocol design - alias negotiation for symmetric keys

MCP's encryption layer is built on the public key model, which is good in some ways but bad in others. Sending large amounts of data through RSA code is slow and bad for security. What I'd like to do is find a way to negotiate symmetric keys between two hosts.

The solution I'm leaning towards is a use/abuse of the user tag in interfaces. Every interface is composed of three parts:

[iface_type, iface_location, user_tag]

The first two are the information necessary to identify a distinct network endpoint (UDP port, IRC channel, email address), which stay the same during aliasing. The user tag is changed by appending "-$randstring" to the end, where $randstring is an alphanumeric code that may or may not be based on a hash of the symmetric key. This represents an alternate interface to the same remote endpoint and user.

Now, an alias is always specifically constructed for communication between two specific interfaces. Its key has no signature value. It's simply a transport layer "preferred route" that drops any messages not from its dedicated remote endpoint. Ideally, both ends set up aliases with relatively lightweight symmetric encryption that talk to each other.

This could be accomplished by adding a property to the "?hosts" table called "alias". Every stored host that has an "alias" property will use that property in the transport encryption layer. It will be stored as a 4-interface, where the fourth argument is the encryptor definition.
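
A hedged sketch of building such an alias entry; the three-part interface layout comes from above, while the suffix length and the shape of the encryptor definition are assumptions.

import binascii, os

def make_alias(interface, encryptor_def):
    # interface = [iface_type, iface_location, user_tag]
    iface_type, iface_location, user_tag = interface
    randstring = binascii.hexlify(os.urandom(8)).decode("ascii")
    aliased_tag = user_tag + "-" + randstring
    # Stored as a 4-interface: the fourth element is the encryptor definition.
    return [iface_type, iface_location, aliased_tag, encryptor_def]

A gear would then stash the result under the host's "alias" property in the "?hosts" table and prefer it in the transport encryption layer.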

Of course, we also need to define protocol messages that negotiate encryptor aliases. I'll comment further on this issue as inspiration strikes.

Asynchronous validation hooks for library API

Currently, there are no good or official mechanisms for asynchronous validation of operations or collisions. Much has been made of having manual, user-curated approvals of changes, but the library isn't really designed to handle those kinds of things.

So this ticket encompasses whatever redesign and reimplementation work is necessary to queue and retrieve pending operations for manual approval or rejection. The API should be as simple to hook into as possible, and should integrate the new graph space. The details of this can be worked out in the comments in this ticket.

This is a subticket of bug3.

Subtickets

Remove all networking code

One of the things I decided since working on this project before, is that the networking problem is actually outside the scope of CTree itself. So one of the biggest changes, getting back into the code whenever I do so, will be removing all the networking code.

All the code will still be available in git history, obviously, so it's not like it's really "gone" for historical purposes. Just cleaned up from more current workings.

Integrate invitation class into gear.py

Now that the validation framework is written, it's time to do our first integration with the Gear system. Setting up the asynchronous hooks and such should make for cleaner code in gear.py, or at the very least, less resistant to similar op-level changes.

Since this is a pretty simple ticket on the surface, I'll only go into detail about it in comments if I find that to be necessary.

This is a subticket of bug12.

Move some in-place validation to filters in gear.py

There is a lot going on in Gear.recv_json. Granted, there's a lot going on in that file in general, and the whole thing needs to be refactored into more manageable chunks soon; this is a good first step. There is so much validation happening in that one function alone that it can, and should, be split off into reusable filters.

This ticket is done when Gear.recv_json cannot be gutted any further into separate validation filters. It is a subticket of bug12.
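
A hedged sketch of what a pulled-out filter might look like; the message shape, the (message, document) filter signature, and the document.permissions access are assumptions, not the real Gear API.

def has_docname(message, document):
    # One self-contained check, reusable outside recv_json.
    return "docname" in message

def sender_can_write(message, document):
    # Illustrative permission filter; the real logic lives with the permissions code.
    return document.permissions.get("write", {}).get(message.get("sender"), False)

def passes_filters(message, document, filters):
    # recv_json shrinks to an early rejection if any filter fails.
    return all(f(message, document) for f in filters)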

Unit testing for less manual regression detection

I'm heavily leaning toward doctest, since it'll force/encourage me to write better documentation in the code itself. I definitely want to use something in the standard lib, and unittest seems way heavy-handed to me. A definite requirement is the incremental increase of test code - anything that takes a lot of code at the start to be useful at all is a non-starter with me.

So yeah, gonna try doctest first, unittest if that doesn't work for me.

RSA encryptor doesn't work properly, still unsure why.

On large messages of certain lengths, RSA encryption simply results in mangled garbage. I'm going to write a test program later to see if I can find an actual pattern to the breakage, which I suspect will be the key to diagnosing the problem itself.

Separate DEJE out of MCP

I'm going to create a new repository for DEJE based on some of the existing MCP code. A lot of it, actually. This ticket is whatever it takes to strip MCP down to its true CTree roots, which means figuring out what the end result ought to be, and how to get there.

Light MCP, at its heart, has to be all about each participant having their own set of signed ops that compose their view of the document. Everything else is flavor on top, even the notion of participants sharing a document.

Actions/Requests

All messages must be acked. Except for acks. Also, following EJTP good practice, all messages are objects with a "type" property starting with "mcp-".

An EJTP interface can try to subscribe to a document on another interface, but may be rejected at any time with an error. Subscription means that the remote interface will send you updates to their approval list as they happen, and keep rough track of what you've seen (through explicit mark-as-read semantics).

You can also request information at any time statelessly. There are three types of "bulk requests": the index, the contents, and operations (requested by their hash). For a quick snapshot of a remote's version of the doc, grab the "contents". For something more in-depth and structural, pull the index and then any ops you don't already have.

Privacy controls will be dependent on self-descriptive metadata in the document, format TBD. For the purposes of this ticket, this won't be implemented yet. In the absence of any appropriate metadata, the application should wait until a configurable timeout for manual user approval or rejection, before sending a timeout-related error. This does not preclude logging the request, and presenting it to the user as a notification the next time (s)he opens the application.

Protocol syntax

All frames

  • type : Indicates message type, always prefixed with "mcp-"
  • ackc : Acknowledgement token. An arbitrary string or number chosen by the sender. Only omitted in mcp-ack.

mcp-error

Report a protocol error.

  • code : a standardized error code of some sort
  • msg : short human-readable message for the code.
  • data : useful details about the error in a machine-readable format.

mcp-ack

Confirm that you received a frame.

  • ackr : Ack response. Contains a list of all tokens that this ack frame confirms.

mcp-pull-index

Retrieve the index from a remote source.

  • docname : includes owner, so we don't need a separate field for that.

mcp-pull-snapshot

Pulls a snapshot of the document.

  • docname

mcp-pull-ops

Retrieves a set of operations. Can request more than one at once, but only from one docname at a time, and response ops will come back individually.

  • docname
  • hashes : a list of op hashes.

mcp-index

A snapshot of a current index.

  • docname
  • hashes : a map of hash : sig, where the signatures are all by the owner of the docname.

mcp-snapshot

A flattened copy of the document at the time the request is processed. Contains no CTree structural data and is subject to obsolescence.

  • docname
  • contents

mcp-op

For portability, I've decided that only the instructions list should be taken into account for hashing purposes.

  • docname
  • instructions
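
A hedged sketch of the framing and op hashing described above; the hash algorithm, token format, and JSON canonicalization are assumptions, since the spec doesn't pin them down.

import hashlib
import json

def op_hash(instructions):
    # Only the instructions list counts toward the hash (see mcp-op above).
    canonical = json.dumps(instructions, separators=(",", ":"), sort_keys=True)
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

def pull_ops_frame(docname, hashes, ackc):
    # Every frame has an "mcp-" prefixed type and an ack token (except mcp-ack).
    return {"type": "mcp-pull-ops", "ackc": ackc, "docname": docname, "hashes": hashes}

def ack_frame(tokens):
    # mcp-ack omits ackc and confirms a whole list of tokens at once.
    return {"type": "mcp-ack", "ackr": list(tokens)}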
