goodboy / tractor

A distributed, structured concurrent runtime for Python (and friends)

License: GNU Affero General Public License v3.0

Python 100.00%
actor-model rpc distributed-systems multiprocessing multicore-programming trio async-await structured-concurrency streaming-data

tractor's People

Contributors: chrizzFTD, goodboy, kehrazy, overclockworked64


tractor's Issues

The bane of async generators

As per #56, discussion on gitter, and discussion in python-trio/trio#638, async generators aren't task safe and cause all sorts of subtle problems if used without some kind of higher level wrapping to conduct teardown, that is, an explicit aclose() call. This issue is mostly to keep track of the long running discussion and eventually integrate whatever consensus solution is chosen.

For now I've implemented a custom ReceiveChannel as a stopgap, but even it isn't sufficient to address premature stream break-ing/teardown currently. I would like to experiment with using a contextlib.AsyncExitStack to ensure, at least at the actor nursery level, that aclose() is always called on all streams.
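
As a rough sketch of that idea (the helper below is hypothetical, not current tractor internals), an AsyncExitStack can guarantee every stream's aclose() runs on block exit, cancelled or not:

from contextlib import AsyncExitStack
from async_generator import aclosing  # 3rd party aclosing() helper

async def consume_streams(streams):
    # guarantee aclose() on every stream even if we error or are
    # cancelled part way through consumption
    async with AsyncExitStack() as stack:
        agens = [
            await stack.enter_async_context(aclosing(stream))
            for stream in streams
        ]
        for agen in agens:
            async for item in agen:
                print(item)
    # on exit (normal or not) each agen's aclose() has been awaited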

The effort on this general front includes quite a few issues worth watching:

Stop forking forkservers

I haven't dug into it yet but it seems using the forkserver method causes a new server to be created for every new process spawned from a sub-process. Ideally we'd have only one server per tractor program and have each new (sub-sub-sub...etc.) process be spawned by the top level server.

The main task idea is really really wrong!

After an in-depth discussion with @vodik surrounding #11, it's so obvious now that the main kwarg to ActorNursery.start_actor() is just plain stupid and wrong. The idea of a main task is the same as starting an actor, submitting a single function/task to it, and then cancelling the actor once its lone task is complete. So instead of this inverted approach (making an actor with a main task the lower level API, thus severely complicating the internal implementation of Actor._async_main()), we agreed the nursery should instead get an ActorNursery.run_in_actor() method which internally is simply a wrapper around:

async def run_in_actor(self, name: str, main: Callable):
    # expose the module where `main` lives to the new actor
    portal = await self.start_actor(name, rpc_module_paths=[main.__module__])
    self._main_children.append(portal)
    try:
        # invoke the lone task and hand back its result
        return await portal.run(main.__module__, main.__name__)
    finally:
        await portal.cancel_actor()

When the nursery block ends this actor is waited upon for its final result, much like is done implicitly in the core...
Couple notes:

  • the Portal will need to learn a completed state that signals the far end task is done (in the case of an async generator this happens when either end closes - i.e. .aclose())
  • the completed state variable will likely need to be a trio.Event

Note this also avoids the confusion of how the main=my_func module in the remote actor gets figured out (hint: it doesn't, since this was using the multiprocessing.forkserver's internal pickling stuff to make it happen).

This lets us entirely drop the idea of Portal.result() afaik, which in turn means we lose all the future-like junk just like trio, and a bunch of implementation stuff should be severely simplified!

^ Hmm, not sure it should get dropped, otherwise it gets tricky to figure out how you can call run_in_actor() without blocking. Maybe it's ok for it to look somewhat like the Future returned from loop.run_in_executor() in asyncio? This also allows additional portal.run() calls to be made to that same actor while the nursery block is open.
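
A sketch of how that future-like usage might read under the proposed API (method names as proposed above; some_func stands in for any exposed async function):

async with tractor.open_nursery() as n:
    # returns a portal immediately, much like loop.run_in_executor()
    portal = await n.run_in_actor('worker', some_func)
    # ...do other work, maybe more portal.run() calls to the same actor...
    result = await portal.result()  # block only when the value is needed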

Spec out the async-ipc protocol

tractor utilizes a simple multiplexed protocol for conducting inter-process-task-communication (IPTC).

Each per-process trio task can invoke tasks in other processes and receive responses depending on the type of the remote callable. All packets are encoded as msgpack serialized dictionaries which I'll refer to as messages.

How it works

When an actor wants to invoke a remote routine it sends a cmd packet:
{'cmd': (ns, func, kwargs, uid, cid)} of type Dict[str, Tuple[str, str, Dict[str, Any], Tuple[str, str], str]]
Where:

  • ns is the remote module name
  • func is the remote function name
  • kwargs is a dict of keyword arguments to call the function with
  • uid is the unique id of the calling actor
  • cid is the unique id of the call by a specific task
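
Putting that together, building and encoding a cmd packet might look like this (pack_cmd is a hypothetical helper shown only to make the wire format concrete):

import msgpack

def pack_cmd(ns, func, kwargs, uid, cid):
    # assemble a cmd packet per the spec above and msgpack-encode it
    msg = {'cmd': (ns, func, kwargs, uid, cid)}
    return msgpack.packb(msg, use_bin_type=True)

wire_bytes = pack_cmd(
    ns='my_module',
    func='stream_data',
    kwargs={'rate': 2},
    uid=('caller_name', 'caller-uuid'),  # (name, uuid) actor id pair
    cid='per-task-call-uuid',
)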

The first response is a function type notifier msg:
{'functype': functype, 'cid': cid} of type Dict[str, str].
Where functype can take one of:

  • 'asyncfunc' for an asynchronous function
  • 'asyncgen' for a single direction stream implemented using either an async generator function or a @stream decorated async func
  • 'context' for an inter-actor, task-linked context. For now see #209.

Depending on the value of functype then the following message(s) are sent back to the caller:

  • 'asyncfunc':
    • a single packet with the remote routine's result {'return': result, 'cid': cid} of type Dict[str, Any]
  • 'asyncgen':
    • a stream of packets with the remote async generator's sequence of results {'yield': value, 'cid': cid} of type Dict[str, Any].
  • 'context':
    • a single 'started' message containing the first value returned from the Context.started() call in the remote task, followed by a possible stream of {'yield': value, 'cid': cid} messages if a bidir stream is opened on each side. Again see #209.

A remote task which is streaming over a channel can indicate completion using a 'stop' message:
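
The exact shape isn't spelled out in this writeup; presumably something minimal mirroring the other per-cid packets (an assumption):

{'stop': True, 'cid': cid}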

If a remote task errors it should capture its error output (still working out exactly what output) and send it in a message back to its caller:

  • {'error': {'tb_str': traceback.format_exc(), 'type_str': type(exc).__name__}}
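
For instance, the remote invocation wrapper might capture and relay failures like so (a sketch; chan.send() is the channel-send primitive seen in the tracebacks elsewhere in this doc):

import traceback

async def invoke_and_relay(chan, func, kwargs, cid):
    # run the requested routine; relay its result or error to the caller
    try:
        result = await func(**kwargs)
    except Exception as exc:
        await chan.send({
            'error': {
                'tb_str': traceback.format_exc(),
                'type_str': type(exc).__name__,
            },
            'cid': cid,
        })
    else:
        await chan.send({'return': result, 'cid': cid})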

A remote actor must have a system in place to cancel tasks spawned from the caller. The system to do this should be invoke-able using the existing protocol defined above and thus no extra "cancel" message should be required (I think).

  • an example is when a local caller cancels its currently consuming stream by calling an Actor._cancel_task() routine in the remote actor. This routine should have knowledge of the rpc system and be capable of looking up the caller's task-id to conduct cancellation.
  • any remote task should in theory be cancel-able in this way but there is not yet a "cross-actor cancel scope" system in place for generic tasks (this is maybe a todo)
    • likely we'll need our own tractor.CancelScope around calls to Portal.run() see #122

What should be done

  • spec out this protocol and define it somewhere in the code base with proper generic type definitions that can be used to type check related internal ipc apis (see @vodik's comment in #35).
  • work towards full bi-directional async generator support i.e. using .asend() also mentioned in an internal todo.
  • deeply consider how far to move forward with async generators given the outstanding problems with garbage collection and cleanup as per discussion, PEP 533, and the recommended use of async_generator.aclosing() - maybe an alternative reactive programming approach would end up being better?

Official alpha release and pypi package

Now that windows support has landed and a few users have had some moderately successful results (as well as myself running tractor in "production" for over a year), I think it's about time to get an initial package up on pypi.

Check out the milestone list!

Task synchronization func decorator

After working on the built-in pubsub system I've realized it may be useful to have a (set of) synchronization decorator(s) for limiting multi-task access to user defined functions.
This would make it easy to allow multiple actors to make calls to a common actor that defines functions which shouldn't have more than some finite number of tasks executing their bodies at a given time.
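
For example, one such decorator might just wrap the function body in a trio.Semaphore (a sketch of the idea only, not a settled API):

import functools
import trio

def max_tasks(n: int):
    # limit concurrent executions of the decorated async func to `n` tasks
    sema = trio.Semaphore(n)

    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            async with sema:
                return await func(*args, **kwargs)
        return wrapper

    return decorator

@max_tasks(4)
async def handle_request(data):
    ...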

Need to think about it a little more.

Do we need to support invoking sync functions?

@jtrakk made a good point in gitter that supporting both async and regular functions for remote invocation is a bit odd and probably unnecessary. I think I agree; there ain't nothing a regular func can do that an async one can't. Unless I'm forgetting something I think it should be fine to just remove support for regular functions?

Off hand it would allow removing the slew of checks in _actor._invoke().

Initial supervision tests

#42 brings in proper trio.MultiError support and more deterministic cancellation semantics.
More tests are needed to ensure this system is rock solid before moving onto adding different supervision strategies as in #22.

Some I can think of off-hand that aren't in the test suite are:

  • n actors via run_in_actor(), x via start_actor()
    • local error causes all to cancel
      • n don't error but complete quickly
      • n don't error but complete slowly
      • n all error
    • internal error in start_actor() and run_in_actor()
      • n all error
      • n some error
  • propagation of MultiError up subactor nursery trees

More to come...

pypy 3.6 testing

The rumour is pypy for py3.6 is working alright on the nightly build for which there are install instructions here.

I think I'd like to test this out before getting #21 in as there may be conflicts with the pypy version of the stdlib?

Bidirectional streaming?

I've already left some notes in the code about how we could do two-way streaming using the native received = yield sent generator semantics, but it's probably worth looking at how other projects are approaching it and whether it's even a good idea.
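
For reference, the generator semantics in question, shown with the async variant's .asend() (a toy example, not tractor code):

import trio

async def echo():
    # each `yield` both emits a value out and receives one back in
    sent = None
    while True:
        received = yield sent
        sent = f"echo: {received}"

async def main():
    agen = echo()
    await agen.asend(None)            # prime to the first yield
    print(await agen.asend('hello'))  # -> 'echo: hello'
    await agen.aclose()

trio.run(main)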

Some projects I found through different communities recently:

  • bidirectional streaming in google's grpc
  • streaming in faust (though I think this is more a demonstration of the need for a proper asyncitertools)

Another question is how to accomplish this with traditional messaging patterns like those found in nng. There were some suggestions in gitter about combining protocols/socket types.

More to come...

Update

The api I'm currently most convinced of is at this comment. Afaict it would also satisfy the needs of #122 to some degree since cancelling a "context" would effectively be like cancelling a cross-actor scope.

Typo in docs

The actor discovery section has a variable error - where does my_service come from?

Publish to subscribe pattern?

I was digging through the zmq docs for a friend and started wondering how one begins to approach the problem of subscribing to push data to some other actor. I.e. what is effectively the way to accomplish reverse pub-sub, or to represent push-pull zmq sockets, using async generator semantics? The reactive x paradigm supports this using...

One idea I had was to always use a pull approach such that a client actor can tell another to async for from it - a kind of "call back and iterate my async generator why don't you".

Something maybe like:

async def push_data():
    for i in range(10**6):
        yield i
        await trio.sleep(0.3)

async with tractor.connect('ml_core') as portal:
    async with portal.push(push_data, 'remote_func_name', mod=6):
        # other app code that can run alongside the push task
        ...
    # at close of the block the push stream is cancelled

And then on the server side the remote_func_name must be defined:

async def remote_func_name(pushed, mod):
    async for item in pushed:
        if item % mod == 0:
            # stash for downstream processing
            await tractor.current_actor().statespace['queue'].put(item)

Patch forkserver manual management into cpython stdlib

The solution to #6 required overriding some code in multiprocessing.forkserver (NB: the overridden code was taken from 3.8 and includes recent changes) but should likely be patched back into the stdlib for others who might find it useful.

The minor differences introduced in tractor's custom version are:

I personally think the stdlib should allow a user to avoid the implicit ensure_running() calls via some configuration mechanism - it doesn't seem to add any particularly sophisticated forkserver resiliency and is apt to hide spawning problems instead of failing loudly. Avoiding it also means not creating (arguably unnecessary) auxiliary processes per sub-sub-process.

Worker pool API?

@parity3 made a gitter request for a system to delegate work to an actor/task cluster or pool.

for my use case, I've got multiple "clients" that wish to use the same
set of workers, coming in at sporadic times. So they'd need to connect
to a running arbiter, and the arbiter needs to make sure that some
workers are free before starting on the client-specified set of task
chunks (as a method of resource management). It'd be nice to see an
example for that use case, or more built in api support for such case.

And with a little follow up requirement:

also, it's not clear to me how best to implement a pattern where actors
are all notified that there are no more unclaimed tasks of a set (for
the actors that were involved with the task set). It would be nice for
such actors to be notified so they can "wrap up" any transient data
associated with the tasks they performed, aggregate / summarize it, and
send it back to the client

That last part is probably down to a lack of docs on how actor cancellation can be done using one of ActorNursery.cancel(), Portal.cancel_actor() or a plain old trio.CancelScope.

This worker pool is something I've thought about a bit and it reminded me of a couple projects:

More discussion and brainstorming is greatly welcome!

Windows support?

Hi, this looks like a great project!

I wanted to give it a try but after installing via pip install git+git://github.com/tgoodlet/tractor.git, importing tractor fails because forkserver seems to not be available on my os (windows10):

Python 3.7.2 (default, Feb 21 2019, 17:35:59) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tractor
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\ProgramData\Miniconda3\envs\tractor19030502\lib\site-packages\tractor\__init__.py", line 18, in <module>
    from ._trionics import open_nursery
  File "C:\ProgramData\Miniconda3\envs\tractor19030502\lib\site-packages\tractor\_trionics.py", line 21, in <module>
    ctx = mp.get_context("forkserver")
  File "C:\ProgramData\Miniconda3\envs\tractor19030502\lib\multiprocessing\context.py", line 238, in get_context
    return super().get_context(method)
  File "C:\ProgramData\Miniconda3\envs\tractor19030502\lib\multiprocessing\context.py", line 192, in get_context
    raise ValueError('cannot find context for %r' % method) from None
ValueError: cannot find context for 'forkserver'
>>>

Any plan on adding windows support in the future?

Windows: multiprocessing causes KeyError: '__mp_main__'

This is an issue that can show up on windows without special construction of a __main__ script in a python program using tractor.

When running the first example in a script the traceback log will look something like the following output:

No actor could be found @ 127.0.0.1:1616
Alright... Action!
already have channel(s) for ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748'):[<Channel fd=148, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 60548)>]?
already have channel(s) for ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748'):[<Channel fd=148, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 60548)>, <Channel fd=636, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 60549)>]?
already have channel(s) for ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748'):[<Channel fd=640, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 60556)>]?
already have channel(s) for ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748'):[<Channel fd=640, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 60556)>, <Channel fd=624, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 60557)>]?
Actor errored:
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 127, in _invoke
    await chan.send({'return': await coro, 'cid': cid})
  File "A:\tractor_examples\ex01_trynamic_scene.py", line 14, in say_hello
    return await portal.run(_this_module, 'hi')
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 203, in run
    *(await self._submit(ns, func, kwargs))
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 184, in _submit
    raise unpack_error(first_msg, self.channel)
tractor._exceptions.RemoteActorError: ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748')
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 234, in _get_rpc_func
    return getattr(self._mods[ns], funcname)
KeyError: '__mp_main__'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 425, in _process_messages
    func = self._get_rpc_func(ns, funcname)
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 236, in _get_rpc_func
    raise ModuleNotExposed(*err.args)
tractor._exceptions.ModuleNotExposed: __mp_main__

Actor errored:
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 127, in _invoke
    await chan.send({'return': await coro, 'cid': cid})
  File "A:\tractor_examples\ex01_trynamic_scene.py", line 14, in say_hello
    return await portal.run(_this_module, 'hi')
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 203, in run
    *(await self._submit(ns, func, kwargs))
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 184, in _submit
    raise unpack_error(first_msg, self.channel)
tractor._exceptions.RemoteActorError: ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748')
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 234, in _get_rpc_func
    return getattr(self._mods[ns], funcname)
KeyError: '__mp_main__'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 425, in _process_messages
    func = self._get_rpc_func(ns, funcname)
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 236, in _get_rpc_func
    raise ModuleNotExposed(*err.args)
tractor._exceptions.ModuleNotExposed: __mp_main__

Nursery for ('arbiter', 'e1c90c88-3ff6-11e9-868f-605718da7748') errored with <class 'tractor._exceptions.RemoteActorError'>,
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 224, in _return_from_resptype
    return msg['return']
KeyError: 'return'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_trionics.py", line 329, in open_nursery
    yield nursery
  File "A:\tractor_examples\ex01_trynamic_scene.py", line 35, in main
    print(await gretchen.result())
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 261, in result
    raise self._result
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 254, in result
    *self._expect_result
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 228, in _return_from_resptype
    raise unpack_error(msg, self.channel)
tractor._exceptions.RemoteActorError: ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748')
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 127, in _invoke
    await chan.send({'return': await coro, 'cid': cid})
  File "A:\tractor_examples\ex01_trynamic_scene.py", line 14, in say_hello
    return await portal.run(_this_module, 'hi')
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 203, in run
    *(await self._submit(ns, func, kwargs))
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 184, in _submit
    raise unpack_error(first_msg, self.channel)
tractor._exceptions.RemoteActorError: ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748')
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 234, in _get_rpc_func
    return getattr(self._mods[ns], funcname)
KeyError: '__mp_main__'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 425, in _process_messages
    func = self._get_rpc_func(ns, funcname)
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 236, in _get_rpc_func
    raise ModuleNotExposed(*err.args)
tractor._exceptions.ModuleNotExposed: __mp_main__


Sending actor cancel request to ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748') on <Channel fd=148, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 60548)>
Sending actor cancel request to ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748') on <Channel fd=640, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 60556)>
Task <bound method Actor.cancel of <tractor._actor.Actor object at 0x0000021DE5606B38>> was likely cancelled before it was started
Task <bound method Actor.cancel of <tractor._actor.Actor object at 0x0000026F646D7C50>> was likely cancelled before it was started
already have channel(s) for ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748'):[<Channel fd=640, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 60556)>]?
already have channel(s) for ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748'):[<Channel fd=148, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 60548)>]?
May have failed to cancel ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748')
May have failed to cancel ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748')
This portal is already closed can't cancel
Cancelling existing result waiter task for ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748')
This portal is already closed can't cancel
Cancelling existing result waiter task for ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748')
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 224, in _return_from_resptype
    return msg['return']
KeyError: 'return'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_trionics.py", line 133, in exhaust_portal
    final = res = await portal.result()
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 261, in result
    raise self._result
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_trionics.py", line 329, in open_nursery
    yield nursery
  File "A:\tractor_examples\ex01_trynamic_scene.py", line 35, in main
    print(await gretchen.result())
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 261, in result
    raise self._result
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 254, in result
    *self._expect_result
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 228, in _return_from_resptype
    raise unpack_error(msg, self.channel)
tractor._exceptions.RemoteActorError: ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748')
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 127, in _invoke
    await chan.send({'return': await coro, 'cid': cid})
  File "A:\tractor_examples\ex01_trynamic_scene.py", line 14, in say_hello
    return await portal.run(_this_module, 'hi')
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 203, in run
    *(await self._submit(ns, func, kwargs))
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 184, in _submit
    raise unpack_error(first_msg, self.channel)
tractor._exceptions.RemoteActorError: ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748')
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 234, in _get_rpc_func
    return getattr(self._mods[ns], funcname)
KeyError: '__mp_main__'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 425, in _process_messages
    func = self._get_rpc_func(ns, funcname)
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 236, in _get_rpc_func
    raise ModuleNotExposed(*err.args)
tractor._exceptions.ModuleNotExposed: __mp_main__



During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "A:\tractor_examples\ex01_trynamic_scene.py", line 42, in <module>
    tractor.run(main, spawn_method='spawn')
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\__init__.py", line 104, in run
    return trio.run(_main, async_fn, args, kwargs, name, arbiter_addr)
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\trio\_core\_run.py", line 1444, in run
    raise runner.main_task_outcome.error
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\__init__.py", line 87, in _main
    actor, main, host, port, arbiter_addr=arbiter_addr)
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 832, in _start_actor
    result = await main()
  File "A:\tractor_examples\ex01_trynamic_scene.py", line 37, in main
    print("CUTTTT CUUTT CUT!!! Donny!! You're supposed to say...")
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\async_generator\_util.py", line 53, in __aexit__
    await self._agen.athrow(type, value, traceback)
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_trionics.py", line 329, in open_nursery
    yield nursery
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_trionics.py", line 300, in __aexit__
    await self.cancel()
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_trionics.py", line 280, in cancel
    await self.wait()
  File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_trionics.py", line 235, in wait
    raise trio.MultiError(errors)
trio.MultiError: RemoteActorError('(\'donny\', \'e1c97da6-3ff6-11e9-bb69-605718da7748\')\nTraceback (most recent call last):\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_actor.py", line 127, in _invoke\n    await chan.send({\'return\': await coro, \'cid\': cid})\n  File "A:\\tractor_examples\\ex01_trynamic_scene.py", line 14, in say_hello\n    return await portal.run(_this_module, \'hi\')\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_portal.py", line 203, in run\n    *(await self._submit(ns, func, kwargs))\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_portal.py", line 184, in _submit\n    raise unpack_error(first_msg, self.channel)\ntractor._exceptions.RemoteActorError: (\'gretchen\', \'e1eb5e08-3ff6-11e9-a581-605718da7748\')\nTraceback (most recent call last):\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_actor.py", line 234, in _get_rpc_func\n    return getattr(self._mods[ns], funcname)\nKeyError: \'__mp_main__\'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_actor.py", line 425, in _process_messages\n    func = self._get_rpc_func(ns, funcname)\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_actor.py", line 236, in _get_rpc_func\n    raise ModuleNotExposed(*err.args)\ntractor._exceptions.ModuleNotExposed: __mp_main__\n\n'), RemoteActorError('(\'gretchen\', \'e1eb5e08-3ff6-11e9-a581-605718da7748\')\nTraceback (most recent call last):\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_actor.py", line 127, in _invoke\n    await chan.send({\'return\': await coro, \'cid\': cid})\n  File "A:\\tractor_examples\\ex01_trynamic_scene.py", line 14, in say_hello\n    return await portal.run(_this_module, \'hi\')\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_portal.py", line 203, in run\n    *(await self._submit(ns, func, kwargs))\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_portal.py", line 184, in _submit\n    raise unpack_error(first_msg, self.channel)\ntractor._exceptions.RemoteActorError: (\'donny\', \'e1c97da6-3ff6-11e9-bb69-605718da7748\')\nTraceback (most recent call last):\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_actor.py", line 234, in _get_rpc_func\n    return getattr(self._mods[ns], funcname)\nKeyError: \'__mp_main__\'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_actor.py", line 425, in _process_messages\n    func = self._get_rpc_func(ns, funcname)\n  File "C:\\ProgramData\\Miniconda3\\envs\\tractor19030601\\lib\\site-packages\\tractor\\_actor.py", line 236, in _get_rpc_func\n    raise ModuleNotExposed(*err.args)\ntractor._exceptions.ModuleNotExposed: __mp_main__\n\n')

Details of embedded exception 1:

  Traceback (most recent call last):
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 224, in _return_from_resptype
      return msg['return']
  KeyError: 'return'

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_trionics.py", line 133, in exhaust_portal
      final = res = await portal.result()
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 261, in result
      raise self._result
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 254, in result
      *self._expect_result
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 228, in _return_from_resptype
      raise unpack_error(msg, self.channel)
  tractor._exceptions.RemoteActorError: ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748')
  Traceback (most recent call last):
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 127, in _invoke
      await chan.send({'return': await coro, 'cid': cid})
    File "A:\tractor_examples\ex01_trynamic_scene.py", line 14, in say_hello
      return await portal.run(_this_module, 'hi')
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 203, in run
      *(await self._submit(ns, func, kwargs))
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 184, in _submit
      raise unpack_error(first_msg, self.channel)
  tractor._exceptions.RemoteActorError: ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748')
  Traceback (most recent call last):
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 234, in _get_rpc_func
      return getattr(self._mods[ns], funcname)
  KeyError: '__mp_main__'

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 425, in _process_messages
      func = self._get_rpc_func(ns, funcname)
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 236, in _get_rpc_func
      raise ModuleNotExposed(*err.args)
  tractor._exceptions.ModuleNotExposed: __mp_main__



Details of embedded exception 2:

  Traceback (most recent call last):
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_trionics.py", line 133, in exhaust_portal
      final = res = await portal.result()
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 261, in result
      raise self._result
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_trionics.py", line 329, in open_nursery
      yield nursery
    File "A:\tractor_examples\ex01_trynamic_scene.py", line 35, in main
      print(await gretchen.result())
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 261, in result
      raise self._result
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 254, in result
      *self._expect_result
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 228, in _return_from_resptype
      raise unpack_error(msg, self.channel)
  tractor._exceptions.RemoteActorError: ('gretchen', 'e1eb5e08-3ff6-11e9-a581-605718da7748')
  Traceback (most recent call last):
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 127, in _invoke
      await chan.send({'return': await coro, 'cid': cid})
    File "A:\tractor_examples\ex01_trynamic_scene.py", line 14, in say_hello
      return await portal.run(_this_module, 'hi')
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 203, in run
      *(await self._submit(ns, func, kwargs))
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_portal.py", line 184, in _submit
      raise unpack_error(first_msg, self.channel)
  tractor._exceptions.RemoteActorError: ('donny', 'e1c97da6-3ff6-11e9-bb69-605718da7748')
  Traceback (most recent call last):
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 234, in _get_rpc_func
      return getattr(self._mods[ns], funcname)
  KeyError: '__mp_main__'

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 425, in _process_messages
      func = self._get_rpc_func(ns, funcname)
    File "C:\ProgramData\Miniconda3\envs\tractor19030601\lib\site-packages\tractor\_actor.py", line 236, in _get_rpc_func
      raise ModuleNotExposed(*err.args)
  tractor._exceptions.ModuleNotExposed: __mp_main__

The issue was first described by @chrizzFTD in #61. @rahulraj80 also experienced this but was able to use @chrizzFTD's solution to get a similar working result. This solution is based on this SO question.

We need to document this solution under the windows gotchas in the readme such that it's available for easy reference.

Stateful testing!

I want to put in some effort to ensure the cancellation system is rock-solid even across multiple hosts.

Hypothesis' stateful testing system seems like a good fit for finding all the race conditions that should be handled when the system fails unexpectedly.

Drop `Channel.aiter_recv()`

@vodik made a point before that Channel.aiter_recv() is kind of pointless and we could do the same as is currently done for the StreamQueue such that one can simply do:

async for packet in chan:
    ...  # process each packet

For one, __aiter__() already lets you do this.
It would basically just require the same technique: define what is currently aiter_recv() as a private method that's called and assigned as an instance variable in Channel.__init__(), and have __aiter__() always return that async generator instance (or is it fine as is, re-calling it every time?).
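
A sketch of that restructuring (attribute names here are made up; it just mirrors the StreamQueue approach mentioned above):

class Channel:
    def __init__(self):
        # create the receive agen once; __aiter__() always hands back
        # this same instance instead of building a fresh one per call
        self._agen = self._aiter_recv()

    async def recv(self):
        ...  # read and decode one packet off the wire

    async def _aiter_recv(self):
        # what is currently the public aiter_recv(), made private
        while True:
            yield await self.recv()

    def __aiter__(self):
        return self._agen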

PS: looking at how Go does something similar with range might be worth thinking about here.

Command-time message type checking?

tractor allows actors to invoke remote routines, receive messages, respond, etc. With Python's new type annotations I'm wondering if it would be beneficial to leverage type checking before commands are sent off over the network. An actor which spawns another can at startup receive type information about all allowed remote functions and then conduct run-time type checking before sending a cmd packet.

The question is whether this is superfluous and will just add overhead in the case where the network latency is lower than the cost of the type checks.
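
A rough sketch of what the pre-send check could look like using standard annotation introspection (check_cmd_types is hypothetical and only handles plain classes, not generics):

import typing

def check_cmd_types(func, kwargs):
    # validate kwargs against `func`'s annotations before sending the cmd
    hints = typing.get_type_hints(func)
    for name, value in kwargs.items():
        expected = hints.get(name)
        if isinstance(expected, type) and not isinstance(value, expected):
            raise TypeError(
                f"{func.__name__}() arg {name!r} expected "
                f"{expected.__name__}, got {type(value).__name__}"
            )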

Add a wait_for_actor() helper

As per #30 a tractor.wait_for_actor() helper would greatly simplify inter-actor synchronous coordination.

This should be mostly straightforward to implement as a blocking call and should be tested alongside the trio cancellation primitives.
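
Usage might read something like this (a sketch of the proposed helper; the exact signature is unsettled, and 'data_feed' / 'feed_mod' are placeholder names):

import tractor

async def consumer():
    # block until an actor named 'data_feed' registers with the arbiter
    portal = await tractor.wait_for_actor('data_feed')
    result = await portal.run('feed_mod', 'get_snapshot')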

Asyncio adapter?

As per some work I've been doing with QT in pikers/piker#52 (particularly to do with using quamash) I would like to see what can be whipped up for interacting with a foreign trio loop in another actor/process by running asyncio in a thread alongside a local actor.

Some things to check out:

Task already disappeared teardown error?

I got this testing out the new options data feed in piker:

27.0.0.1', 1616), raddr=('127.0.0.1', 59706)>
Nov 28 22:43:28 (MainProcess: MainThread) [TRACE] tractor.ipc _ipc.py:122 send `None`
Nov 28 22:43:28 (MainProcess: MainThread) [DEBUG] tractor _actor.py:416 Exiting msg loop for <Channel fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 59696)> from ('anonymous', 'da40804a-f387-11e8-80a0-a402b9cc051a')
Nov 28 22:43:28 (MainProcess: MainThread) [DEBUG] tractor _actor.py:272 Releasing channel <Channel fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 59696)> from ('anonymous', 'da40804a-f387-11e8-80a0-a402b9cc051a')
Nov 28 22:43:28 (MainProcess: MainThread) [DEBUG] tractor _actor.py:276 No more channels for ('anonymous', 'da40804a-f387-11e8-80a0-a402b9cc051a')
Nov 28 22:43:28 (MainProcess: MainThread) [DEBUG] tractor _actor.py:279 Peers is defaultdict(<class 'list'>, {})
Nov 28 22:43:28 (MainProcess: MainThread) [DEBUG] tractor _actor.py:283 Signalling no more peer channels
Nov 28 22:43:28 (MainProcess: MainThread) [DEBUG] tractor _actor.py:287 Disconnecting channel <Channel fd=7, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('127.0.0.1', 1616), raddr=('127.0.0.1', 59696)>
Nov 28 22:43:28 (MainProcess: MainThread) [TRACE] tractor.ipc _ipc.py:122 send `None`
Nov 28 22:43:28 (MainProcess: MainThread) [INFO] piker.broker.data data.py:371 No more subscriptions for questrade
Nov 28 22:43:28 (MainProcess: MainThread) [DEBUG] piker.broker-config config.py:33 Writing config file /home/tyler/.config/piker/brokers.ini
Nov 28 22:43:28 (MainProcess: MainThread) [DEBUG] tractor _actor.py:537 All peer channels are complete
Nov 28 22:43:28 (MainProcess: MainThread) [DEBUG] tractor _actor.py:619 Shutting down channel server
Traceback (most recent call last):
  File "/home/tyler/.local/share/virtualenvs/piker-PaB62peT/bin/pikerd", line 11, in <module>
    load_entry_point('piker', 'console_scripts', 'pikerd')()
  File "/home/tyler/.local/share/virtualenvs/piker-PaB62peT/lib/python3.7/site-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/home/tyler/.local/share/virtualenvs/piker-PaB62peT/lib/python3.7/site-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/home/tyler/.local/share/virtualenvs/piker-PaB62peT/lib/python3.7/site-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/tyler/.local/share/virtualenvs/piker-PaB62peT/lib/python3.7/site-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/home/tyler/repos/piker/piker/cli.py", line 37, in pikerd
    loglevel=loglevel if tl else None,
  File "/home/tyler/repos/tractor/tractor/__init__.py", line 116, in run_daemon
    return run(partial(trio.sleep, float('inf')), **kwargs)
  File "/home/tyler/repos/tractor/tractor/__init__.py", line 98, in run
    return trio.run(_main, async_fn, args, kwargs, name, arbiter_addr)
  File "/home/tyler/.local/share/virtualenvs/piker-PaB62peT/lib/python3.7/site-packages/trio/_core/_run.py", line 1337, in run
    raise runner.main_task_outcome.error
  File "/home/tyler/repos/tractor/tractor/__init__.py", line 83, in _main
    actor, main, host, port, arbiter_addr=arbiter_addr)
  File "/home/tyler/repos/tractor/tractor/_actor.py", line 738, in _start_actor
    actor.cancel_server()
  File "/home/tyler/.local/share/virtualenvs/piker-PaB62peT/lib/python3.7/site-packages/trio/_core/_run.py", line 397, in __aexit__
    raise combined_error_from_nursery
  File "/home/tyler/repos/tractor/tractor/_actor.py", line 500, in _async_main
    log.debug("Waiting on root nursery to complete")
  File "/home/tyler/.local/share/virtualenvs/piker-PaB62peT/lib/python3.7/site-packages/trio/_core/_run.py", line 397, in __aexit__
    raise combined_error_from_nursery
  File "/home/tyler/repos/tractor/tractor/_actor.py", line 140, in _invoke
    tasks.remove((cs, func))
ValueError: list.remove(x): x not in list

Setup was:

  • started pikerd as standalone daemon-actor
  • started piker monitor
  • started options feed test from piker test suite
  • closed the piker monitor
  • pikerd bailed as above...

Impress the celery crowd

After a great in-depth convo with @ryanhiebert, it seems we may need to drum up some examples to compare and contrast how tractor is a lower level system than celery / amqp and can be used to build such load balancer patterns. A more general actor model system like tractor, or a scalability protocols system like zeromq, treats such patterns as special cases. Emphasis on SC in these examples should be paramount.

I'd also like to glean everything from those communities that can be used at the "tractor level". I've already noted a few things:

I'd like to use this issue to track the discourse surrounding how tractor can support the needs of (ex) celery users who are looking for a SC alternative.

An "actor model" eh?

This is a discussion thread to dig into the theory surrounding how this project is (like) an actor model system and how it is or isn't compatible with structured concurrency.

BEFORE YOU THINK OR READ ANYTHING ELSE.

Watch this vid by the original author of the theory if you think you know what an actor model is or does, or looks like:
https://www.youtube.com/watch?v=7erJ1DV_Tlo

Hint: it's a computational model not an API.

A bunch of stuff that should probably be referenced in the docs and alluded to more formally in the implementation (if that's what we're after?):

Reference implementations in other languages and frameworks:

Native MQ transport (aka support for scalability protocols)

As has always been the plan (because it fits the actor model so well), we need native zeromq support. This will likely require a little thinking and re-design to integrate with the current Channel api.

Some prerequisites before this can move forward:

Data processing tutorial

I'd like to hash out a set of examples for how a real-time data processing pipeline could be designed with tractor.

My initial thought was a streamer feeding 2 processors which each do frame oriented numpy array processing (say a linear regression and some statistics err sumthin) and ship the results to a parent, with all actors using async generators for the frame delivery.

msgpack-numpy should take care of the serialization easy peasy.
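
For example, patching msgpack to handle arrays transparently is a one-liner:

import msgpack
import msgpack_numpy
import numpy as np

msgpack_numpy.patch()  # teach msgpack to (de)serialize ndarrays

frame = np.random.randn(1000, 4)
wire = msgpack.packb(frame)       # bytes ready for the channel
restored = msgpack.unpackb(wire)  # back to an ndarray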

Keying task info with the channel instance seems tfh

I can't remember exactly why I originally keyed task info with a Channel in a Tuple[Channel, str], but it seems mostly unnecessary. The task call id (a UUID created for each rpc request) should, for all practical purposes, be unique even across billions of actors.

This may have been me being tfh (tin foil hat) about collisions for no good reason?
If there's another reason I can't remember it, but it should be looked into.

Allow passing function refs to Portal.run() ?

@parity3 had the same thought as I did: why can't Portal.run() accept a direct function reference and then we simply look up the function's module and name much like we do in ActorNursery.run_in_actor()? This would of course carry the implicit assumption that the remote actor has the chosen function's module code loaded.

The only question is really how to expose both ways on Portal?

  • use a different method, say run_func()?
  • allow passing an explicit func or ref kwarg?
  • allow the first arg to Portal.run() to be either a module name or a func ref and then in the named case make name a required kwarg?
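
For instance, the last option might look like this (a sketch only; _submit_and_wait stands in for the existing internal submission machinery):

async def run(self, func_or_ns, fn_name: str = None, **kwargs):
    # accept either a direct func ref or the classic (module, name) pair
    if callable(func_or_ns):
        ns, fn_name = func_or_ns.__module__, func_or_ns.__name__
    else:
        if fn_name is None:
            raise TypeError("fn_name is required when passing a module path")
        ns = func_or_ns
    return await self._submit_and_wait(ns, fn_name, kwargs)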

Async actor spawning

@pquentin made a good point that the trio.sleep(0.4) in the initial example is a little confusing.

Why would you have to sleep to wait for the actor to come up?

The reason is because we don't yet have a wait_for_actor() API (allowing an actor to wait for another to come up) and the original reason for not adding it yet is because we don't yet have support for async actor spawning. Without the latter (I think) you'd get a deadlock with the current await n.start_actor() api? Although, now that I look at it, that call should return before the actor has fully started up, which should cause start_actor() to unblock. So maybe this wait_for_actor() would work already?

^ is now working as per #31.

Either way, I propose adding an ActorNursery.start_actor_soon() which will allow instructing the nursery to spawn a group of actors asynchronously. Question is, what's the behaviour?

  • don't call Process.start() until the nursery's __aexit__() is hit - just like in trio with tasks
  • or, start spawning right away (i.e. Process.start() is called immediately and __aexit__() just waits for each to come up, like it already does) plus the regular behaviour: waiting for actor completion / error.
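
Proposed usage under either behaviour might read (a sketch; the method name is the one proposed above, 'my_mod' is a placeholder):

async with tractor.open_nursery() as n:
    for i in range(4):
        # schedule the spawn; don't block waiting for the actor to come up
        n.start_actor_soon(f'worker_{i}', rpc_module_paths=['my_mod'])
    # by nursery exit all workers are (at latest) fully spawned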

@vodik any thoughts on this?

Supervisor API

I want to start design discussion on how to approach erlang-like supervisors including restart strategies and how this will interplay with a service discovery system and possibly real-time code replacement.

Some questions/comments I have:

  • what's the (semantic) difference between a nursery and a supervisor?

  • an explicit distinction should be made between a MainProcess supervisor and the arbiter actor

    • the Arbiter is per-host and is part of service discovery between hosts
  • how do we best implement a distributed supervisor (one that spawns actors over multiple hosts)?

    • does it simply send the spawn request to the appropriate arbiter who will then create or use a host-local nursery?
    • what happens when all actors on the remote host go down but a remote actor is still using a remotely spawned actor in that process cluster?
  • what does a distributed process supervisor API look like?

    • how does this interact with the arbiter and service discovery system?
  • how does an orchestration layer build on all this?

  • here's erlang's supervisor behaviors

Much more to come...
