auto-pi-lot / autopilot

Distributed behavioral experiments

Home Page: https://docs.auto-pi-lot.com

License: Mozilla Public License 2.0

Python 99.36% Shell 0.48% Roff 0.16%
python neuroscience behavioral-neuroscience

autopilot's Introduction



Maintenance Mode

2024-01-22

Autopilot is in maintenance-only mode - development has been paused as we take the long way around towards building a new kind of p2p networking module to support a reworked autopilot 2.0. We will write a more detailed blogpost about lessons learned from autopilot soon.

Autopilot is not dead, it is merely resting <3

-jonny

Autopilot

Autopilot Banner Logo

Docs · Paper · Forum · Wiki

Autopilot is a Python framework for performing complex, hardware-intensive behavioral experiments with swarms of networked Raspberry Pis. As a tool, it provides researchers with a toolkit of flexible modules to design experiments without rigid programming & API limitations. As a vision, it dreams of bridging the chaotic hacky creativity of scientific programmers with a standardized, communally developed library of reproducible experiment prototypes.

Autopilot was developed with three primary design principles:

  • Flexibility - Autopilot was designed for any hardware and any experiment -- its hardware API is designed to give a structured wrapper around the code you already use, and its task design is entirely non-prescriptive. It attempts to eliminate the need for researchers to use a patchwork of mutually incompatible tools to perform complex experiments. Autopilot is a hacker's plaything -- rather than a uniform, simplified experience, its modular design and complete API-level documentation is meant to encourage users to make and break core Autopilot modules.
  • Efficiency - Autopilot uses Python as a glue around high-performance, low-level libraries, and is fully concurrent across multiple threads, processes, and computers. Its distributed design eliminates the hard limits faced by single-computer systems, letting researchers use arbitrary numbers and combinations of hardware components to perform complex, hardware-intensive experiments at scale.
  • Reproducibility - Autopilot obsessively documents data provenance, logging the entire history of an Animal's training, including any version and local code changes. Any part of an experiment that isn't documented is considered a bug. By integrating experiments and producing data that is clean at the time of acquisition, Autopilot makes it easy to do good science -- its goal is to allow exact experimental replication from a single file.

Distributed Behavior

Autopilot's premise is simple: to scale experiments, just use more computers.

Autopilot systems consist of multiple "Agents" -- computers with specialized roles in the swarm. One user-facing "Terminal" agent allows a researcher to control many "Pilots," or computers that perform experiments (typically the beloved Raspberry Pi). Each Pilot can coordinate one or many "Children" to offload subsets of an experiment's computational or hardware requirements. Users can use and misuse Autopilot's flexible modules to make whatever agent topology they need <3.

Autopilot System Diagram

Module Overview

Autopilot divides the logical structure of experiments into independent[1] modules:

  • Agents - Pilot & Terminal - Runtime classes that encapsulate a computer/Pi's role in the swarm. Terminals provide the user interface and coordinate subjects and tasks; Pilots do the experiments. Formalizing the Agent API to allow additional agents like Compute or Surveillance agents is a major short-term development goal!
  • Hardware - Control your tools! Extensible classes to control whatever hardware you've got.
  • Stimuli - Stimulus management and presentation. Parametric sound generation with a realtime audio server built on Jackd. Stubs are present for future development of visual stimuli using Psychopy.
  • Tasks - Build experiments! Write some basic metadata to describe data, plots, and hardware, and the rest is up to you :)
  • Subject - Data management with hdf5 and pyTables. Abstraction layer for keeping obsessive records of subject history and system configuration.
  • Transforms - Composable data transformations. Need to control the pitch of a sound with a video? Build a transformation pipeline to connect your objects.
  • UI - A user interface for controlling swarms of Pilots using Qt5/PySide2.
  • Visualization - (Mostly prototypes) to do common visualizations.

[1] a continual work in progress!
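To make the Transforms idea concrete, here is a minimal, hypothetical sketch of a composable pipeline. This is illustrative only, not the actual `autopilot.transform` API: the `Transform` class and the `+` composition operator are assumptions for the example.

```python
# Hypothetical sketch of a composable transform pipeline (NOT the real
# autopilot.transform API): each step is a callable, and "+" chains
# them so the output of one feeds the next.
class Transform:
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, value):
        return self.fn(value)

    def __add__(self, other):
        # Compose: apply self first, then other
        return Transform(lambda value: other(self(value)))

# e.g. map a video-derived x position to a sound pitch
normalize = Transform(lambda px: px / 640)        # pixel -> [0, 1]
to_hz = Transform(lambda frac: 220 + frac * 660)  # [0, 1] -> 220-880 Hz
pipeline = normalize + to_hz
pipeline(320)  # 550.0
```

The appeal of this pattern is that each stage stays independently testable while the pipeline itself is just another callable you can hand to whatever consumes it.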

Getting Started

All documentation is hosted at https://docs.auto-pi-lot.com

Installation is simple: install with pip, then use Autopilot's guided setup to configure your environment and preferences. The initial setup routine runs in the terminal, so it is SSH friendly :)

pip3 install auto-pi-lot
python3 -m autopilot.setup.setup

Autopilot Setup Console

All of Autopilot is quite new, so bugs, incomplete documentation, and missing features are very much expected! Don't be shy about raising issues or asking questions in the forum.

Development Status

Jonny is trying to graduate! Autopilot will be slow and maybe a little chaotic until then!

Branch Map

We're working on a formal contribution system, pardon the mess! Until we get that and our CI coverage up, main will lag a bit behind the development branches:

  • dev - main development branch that collects hotfixes, PRs, etc. Unstable but usually has lots of extra goodies
  • hotfix - branches from dev for building and testing hotfixes, PRs back to dev.
  • lab - branches from dev but doesn't necessarily PR back, the local branch used in the maintaining (Wehr) lab
  • parallax - experimental departure from dev to implement a particular experiment and rebuild a lot of components along the way, will eventually return to dev <3

Short-Term

See the short-term development goals in our version milestones:

  • v0.4.0 - Implement registries to separate user code extensions like tasks and local hardware devices in a user directory, preserve source code in produced data so local development isn't lost.
  • v0.5.0 - Make a unitary inheritance structure from a root Autopilot object such that a) common operations like logging and networking are implemented only once, b) the plugin system for v0.4.0 can not only add new objects, but replace core objects while maintaining provenance (ie. no monkey patching needed), c) object behavior that requires coordination across multiple instances gets much easier, making some magical things like self-healing self-discovering networking possible. This will also include a major refactoring of the code structure, finally breaking up some of the truly monstrous thousand-line modules in core into an actually modular system we can build from <3
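The unitary-root idea in the v0.5.0 milestone can be sketched with Python's `__init_subclass__` hook. This is a sketch of the concept only; `AutopilotObject`, `Net_Node`, and `My_Net_Node` as written here are illustrative names, not the planned implementation.

```python
# Hedged sketch of a root object whose subclasses self-register, so a
# plugin system can add new objects or replace core ones by name.
class AutopilotObject:
    _registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # A later registration under the same name replaces the core
        # object -- the "no monkey patching needed" goal.
        AutopilotObject._registry[cls.__name__] = cls

    @classmethod
    def get(cls, name):
        return cls._registry[name]

class Net_Node(AutopilotObject):
    """Stand-in for a core networking class."""

class My_Net_Node(Net_Node):
    """A plugin subclass; it registers itself simply by being defined."""
```

Because registration happens at class definition time, importing a plugin module is enough to make its objects discoverable, which is what lets core modules instantiate replacements without knowing about them in advance.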

Long-Term

Autopilot's extended development goals, in their full extravagance, can be found at the Autopilot Development Todo

What's new?

v0.3.0

After much ado, we're releasing Autopilot's first major upgrade. Cameras, Continuous data, DeepLabCut, and a lot more!

  • Autopilot has moved to Python 3!! (Tested on 3.6-3.8)
  • Capturing video with OpenCV and the Spinnaker SDK is now supported (See autopilot.hardware.cameras)
  • An I2C_9DOF motion sensor and the MLX90640 temperature sensor are now supported.
  • Timestamps from GPIO events are now microsecond-precise thanks to some modifications to the pigpio library
  • GPIO output timing is also microsecond-precise thanks to the use of pigpio scripts, so you can deliver exactly the reward volumes you intend <3
  • Hardware modules have been refactored into their own module, and have been almost wholly rebuilt to have sensible inheritance structure.
  • Networking modules are more efficient and automatically compress arrays (like video frames!) on transmission. Streaming is also easier now, check out Net_Node.get_stream() !
  • We now have a detailed development roadmap , so you can see the magnificent future we have planned.
  • We have created the autopilot-users discussion board for troubleshooting & coordinating community development :)

Supported Systems

OS

  • Ubuntu >=16.04
  • raspiOS >=Buster

Python Versions

  • 3.7
  • 3.8
  • 3.9

Raspberry Pi Versions

  • Raspi>=3b (Raspi 4 strongly recommended!)

autopilot's People

Contributors

arnefmeyer, cxrodgers, mikewehr, sneakers-the-rat



autopilot's Issues

manage agents from gui

Implement a GUI dialog for

  • viewing commit hashes of agents and updating them
  • restarting agents
  • showing and editing prefs, including managing hardware

Networking discovery and addressing overhaul

[stubbing out an idea, forgive me while it remains incomplete]

The way networking objects are created and addressed is very clumsy right now.

  • registry for networking objects
  • uniform 'listen' port
  • every node increments ports looking for open ports and stores in centralized registry
  • networking stations ping around the network looking for autopilot objects, and then when they find them exchange a handshake that lists all the networking objects available beneath them
  • as new networking objects are created, let the rest of the swarm know -- do we want a full-knowledge system or like a DHT?

related to another sorta fundamental revision to the notion of id in the system that needs its own architecture issue

Indicate pilot status more robustly

The start/stop buttons don't reflect Pilot state, e.g. if errors happen on the Pis, and there's no way to manually 'ping' them or refresh their status in case they hiccup but are otherwise still available.

Need to make more robust heartbeating system separate from instantiation in GUI.

Missing directories in install

Submitted in discussion board:
https://groups.google.com/g/autopilot-users/c/HUCdRJJ-Sg4/m/ag3rwzYABQAJ

Minor issues I observed were that (in both the "pure" pip install and the pip install from the repo) I ran into the error of a missing dir under './autopilot/external/pigpio', which could be solved by simply creating the dir. Also, git-cloning the 'wehr-lab' repo missed the contents for both folders in the src, which I added by hand. Afterwards the installation was successful and I ran the setup script without errors (not caring about disabling bluetooth for now).

Report Pilot errors in terminal

Catch all pilot errors and send back to Terminal for display

Currently, if running as a daemon, the user has to SSH into the Pi and use journalctl, which is not very optimal for debugging.

PING_INTERVAL confuses setup_autopilot on initial setup

I haven't fully tracked this through yet to see what's going on, but if I am on the dev branch on a completely fresh rpi, then setup_autopilot fails to start due to PING_INTERVAL. However, on the master branch, everything is fine, so the workaround is to setup_autopilot on master.

python3 -m autopilot.setup.setup_autopilot

No existing prefs found, starting from defaults
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/pi/dev/autopilot/autopilot/setup/setup_autopilot.py", line 605, in <module>
setup.run()
File "/home/pi/.venv/autopilot/lib/python3.7/site-packages/npyscreen/apNPSApplication.py", line 30, in run
return npyssafewrapper.wrapper(self.__remove_argument_call_main)
File "/home/pi/.venv/autopilot/lib/python3.7/site-packages/npyscreen/npyssafewrapper.py", line 41, in wrapper
wrapper_no_fork(call_function)
File "/home/pi/.venv/autopilot/lib/python3.7/site-packages/npyscreen/npyssafewrapper.py", line 97, in wrapper_no_fork
return_code = call_function(_SCREEN)    
File "/home/pi/.venv/autopilot/lib/python3.7/site-packages/npyscreen/apNPSApplication.py", line 25, in __remove_argument_call_main
return self.main()
File "/home/pi/.venv/autopilot/lib/python3.7/site-packages/npyscreen/apNPSApplicationManaged.py", line 148, in main
self.onStart()
File "/home/pi/dev/autopilot/autopilot/setup/setup_autopilot.py", line 411, in onStart
self.forms[form_class.NAME] = self.addForm(form_class.NAME, form_class, name=form_class.DESCRIPTION)
File "/home/pi/.venv/autopilot/lib/python3.7/site-packages/npyscreen/apNPSApplicationManaged.py", line 55, in addForm
fm = FormClass( parentApp=self, *args, **keywords)
File "/home/pi/dev/autopilot/autopilot/setup/setup_autopilot.py", line 80, in __init__
super(Autopilot_Form, self).__init__(*args, **kwargs)
File "/home/pi/.venv/autopilot/lib/python3.7/site-packages/npyscreen/fmForm.py", line 70, in __init__
self.create()
File "/home/pi/dev/autopilot/autopilot/setup/setup_autopilot.py", line 362, in create
self.populate_form(PILOT_PREFS)
File "/home/pi/dev/autopilot/autopilot/setup/setup_autopilot.py", line 143, in populate_form
raise Warning("Not sure what to do with param {} with type {}".format(param_name, param['type']))
Warning: Not sure what to do with param PING_INTERVAL with type float

Probably related: once everything is properly set up, I still get these warnings upon starting autopilot:

/home/pi/dev/autopilot/autopilot/prefs.py:424: UserWarning: Returning default prefs value PING_INTERVAL : 5 (ideally this shouldnt happen and everything should be specified in prefs
  warnings.warn(f'Returning default prefs value {key} : {default_val} (ideally this shouldnt happen and everything should be specified in prefs', UserWarning)

Low-urgency bug report since the workaround works and the warning doesn't cause obvious problems

Migrate to PySide6

Just as we moved to PySide2, the Qt foundation has pushed PySide6 (based on Qt6) which likely has a bunch of breaking changes... A particular motivation to move is the fact that they seem to have stopped hosting the PySide2 documentation...

Implement Registries - New protocols not visible

I've made a dummy protocol, but was unable to see it from the Terminal's 'New Protocol' window. After some rummaging around I managed to achieve it by manually adding my new protocol to __init__.py in the Tasks directory.
It would of course be great if it just appears on its own - and perhaps I've missed something in doing this. Otherwise, hereby a request to make that possible. :-D

v0.5.0 - Make inheritance structure unitary from root object

This is the defining enhancement of v0.5.0

following from the conversation on #39

you know what this is now the defining enhancement of v0.5.0 - everything should have a registry, because everything should inherit from a root object that allows every module to have plugins. A Universal Plugin System From A Unitary Root Object. what if one wanted to implement their own networking plugin module that replaced a network node element. it should be possible to do that yes? plot objects should spawn from a registry because there should be some metaclass that can mutually coordinate their framerate if they're videos, and so the thought goes ever on.

Every object should have certain properties & methods, like some identifier, ability to make logger, have net node, etc.

Should implement a root AutopilotObject inherited by everything else. this would make giving unique names to every object a lot easier, keep a registry, not reimplement extremely common methods multiple times, etc.

loggers don't identify object id correctly

nice big fat one here.

loggers are named by the module name.classname.id, but when id is not present, they will all get the root module name and overwrite one another rather than creating multiple children with different recognizable IDs.

Need to a) unify the root object to standardize on id, and
b) make sure loggers don't collide on creation in any case -- if no logger for the module has been created, make one, but then always get some unique id to differentiate different instances of an object.
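A minimal sketch of point (b): fall back to a unique per-instance suffix when an object has no `id`, so two anonymous instances never share a logger. `init_logger` and the counter are hypothetical names for illustration, not the existing autopilot function.

```python
import itertools
import logging

# Hedged sketch: mint a unique suffix when an object lacks an id, so
# loggers for different instances never collide and overwrite.
_instance_counter = itertools.count()

def init_logger(instance):
    module = type(instance).__module__
    cls = type(instance).__name__
    # Use the object's id if it has one, else a unique fallback
    obj_id = getattr(instance, 'id', None) or f'anon_{next(_instance_counter)}'
    return logging.getLogger(f'{module}.{cls}.{obj_id}')
```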

'autopilot' alias as install script - trying to launch terminal

$ source launch_autopilot.sh
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/mikewehr/Documents/Analysis/autopilot/autopilot/core/terminal.py", line 885, in
ex = Terminal()
File "/Users/mikewehr/Documents/Analysis/autopilot/autopilot/core/terminal.py", line 159, in init
prefs.add('INVOKER', self.invoker)
File "/Users/mikewehr/Documents/Analysis/autopilot/autopilot/prefs.py", line 414, in add
_PREFS[param] = value
File "", line 2, in setitem
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/managers.py", line 818, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: can't pickle Invoker objects

Programmatic Setup

Be able to specify setup options from the cli as flags, or from some portable environment description file that includes prefs as well as any scripts to be run.

  • Fully resolves #23 by allowing setup to be run without the terminal window being a minimum size
  • makes multiple setups much easier
  • finalizes portability of experiments by allowing full environment recreation without need for manual intervention.
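A sketch of what flag-based setup might look like. The flag names (`--agent`, `--pref`, `--env-file`) and the environment-file layout are assumptions for illustration, not a committed interface.

```python
import argparse
import json

# Hedged sketch: accept prefs as CLI flags or from a portable JSON
# environment file, so setup can run with no TUI at all.
def parse_setup_args(argv=None):
    parser = argparse.ArgumentParser(description='autopilot setup (sketch)')
    parser.add_argument('--agent', choices=['TERMINAL', 'PILOT'])
    parser.add_argument('--pref', action='append', default=[],
                        metavar='KEY=VALUE', help='override a single pref')
    parser.add_argument('--env-file',
                        help='JSON file containing prefs (and setup scripts)')
    args = parser.parse_args(argv)

    prefs = {}
    if args.env_file:
        with open(args.env_file) as f:
            prefs.update(json.load(f).get('prefs', {}))
    # individual --pref flags override the environment file
    for pair in args.pref:
        key, _, value = pair.partition('=')
        prefs[key] = value
    return args.agent, prefs
```

Usage would look like `python -m autopilot.setup --agent PILOT --pref PING_INTERVAL=5`, with the env file serving the "full environment recreation" goal.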

system requirements

There should be a document that explains the system requirements - do I need a Raspberry Pi 4? Does it have to run Debbian? Can I install the terminal on my Mac or Windows computer or does it have to be linux? Could I install a pilot on a Mac? Etc.
Maybe this should be a paragraph in Readme.md.
For example I thought this might be described in one of the requirements txt files but those are clearly a different thing.

Net_Node.send crashes when its argument "to" is a list

Background: I am trying to control 8 speakers. Current approach is to use one Pilot and three Child, each controlling two speakers.

Right now, I'm just trying to connect one Child to the Pilot. I'm using tasks.gonogo as an example. However, I think it may (?) have a bug in it.
https://github.com/wehr-lab/autopilot/blob/dc9833ae827301219d50620fa8e1b22266bb222d/autopilot/tasks/gonogo.py#L170

Note that the to argument is a list. The documentation of Net_Node.send says this is acceptable. But it crashes here:
https://github.com/wehr-lab/autopilot/blob/dc9833ae827301219d50620fa8e1b22266bb222d/autopilot/core/networking.py#L1582
when it tries to convert the list using bytes.

File "/home/pi/dev/autopilot/autopilot/core/networking.py", line 1582, in send
    self.sock.send_multipart([self.upstream.encode('utf-8'), bytes(msg.to, encoding="utf-8"), msg_enc])
TypeError: encoding without a string argument

What is the correct syntax for sending messages to a Child? I tried using the name of the child (e.g., "rpi02") as well as "T_rpi02" and the name of the Child task's Net_Node. Then the message sends okay but it doesn't get received. Although this might be a networking problem, I can debug further if I understand better what the to field should be. Thanks!
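To illustrate the crash: `bytes()` raises `TypeError` when given a list, so the `to` field has to be normalized to a string before encoding. The sketch below is one possible normalization for illustration only, not the committed fix (the real fix may instead route through each hop in the list).

```python
# Hedged sketch: bytes(msg.to, encoding='utf-8') fails when msg.to is
# a list, so coerce it to a single recipient string first.
def encode_to(to):
    if isinstance(to, (list, tuple)):
        # e.g. take the final recipient; a real fix might serialize
        # the full route instead
        to = to[-1]
    return bytes(to, encoding='utf-8')
```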

reassign_protocol should check for changes to data description

If you change the TrialData format (e.g. by adding new columns) and then try to rebuild the data table by reassigning to the same protocol with the same name, reassign_protocol sees that the table already exists and silently doesn't do anything. Workaround for now is to rename the protocol to something like 'gap-laser2.json', but we should add logic that checks if additional columns need to be added here: https://github.com/wehr-lab/autopilot/blob/043fb6cf7c8642d4169be0823c773b358b2c4984/autopilot/core/subject.py#L538

slack thread:
so it stashes the protocol description - ie. the contents of that json file - and marks that down, but it doesn’t move and rename the data, typically i was thinking that if you assigned a protocol it would have a different name, didn’t account for changes in data description that would require the table to be remade. I can instead add logic that checks if additional columns need to be added where there is that exception catch i linked above
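The proposed check could look something like this sketch. `missing_columns` and the plain-dict descriptions are illustrative stand-ins; the real code would compare the new TrialData description against the existing pytables table's column descriptors.

```python
# Hedged sketch: report which columns in a new data description are
# absent from the existing table, so reassign_protocol can decide
# whether the table needs to be remade or extended.
def missing_columns(existing_cols, new_description):
    return [name for name in new_description if name not in existing_cols]

existing = ['trial_num', 'response', 'correct']
new_desc = {'trial_num': 'int', 'response': 'str',
            'correct': 'bool', 'laser_on': 'bool'}
missing_columns(existing, new_desc)  # ['laser_on']
```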

Catch stdout in logs

Logging is coming along, but it's still the case that a bare raise Exception doesn't get caught in the logs. Need to catch stdout/stderr so unhandled exceptions are documented, especially when running as a systemd daemon, which needs journalctl -u autopilot -e or something not entirely obvious to access.
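One standard approach for the unhandled-exception part is installing a `sys.excepthook` that routes tracebacks through `logging`. A minimal sketch (the logger name is illustrative); note this covers uncaught exceptions but not everything written to stdout/stderr.

```python
import logging
import sys

# Hedged sketch: log any uncaught exception so it lands in the
# logfiles even when running as a systemd daemon.
logger = logging.getLogger('autopilot.unhandled')

def log_unhandled(exc_type, exc_value, exc_tb):
    logger.critical('Unhandled exception',
                    exc_info=(exc_type, exc_value, exc_tb))
    # preserve the default behavior (print traceback to stderr) too
    sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = log_unhandled
```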

Support for stereo (two-channel) audio output

My understanding from reading the code is that stereo (two-channel) audio output is not really supported in autopilot right now. I have been working on implementing this in my own fork. It isn't quite ready for a PR yet, but I wanted to share my approach and collect feedback from anyone who is interested in this functionality.

Here is how (I think) things currently stand in autopilot:
There are prefs OUTCHANNELS and NCHANNELS that relate to multi-channel output. I believe that they ultimately serve just to hook up the same "mono" audio stream to one or both physical outputs on the hifiberry. The sound generating classes seem to generate only mono sounds.

Here are the scenarios and use-cases that I would like to support:

  • One speaker hooked up to one channel
  • Two speakers hooked up to two channels, but always playing the same thing ("mono")
  • Two speakers hooked up to two channels, playing different things ("stereo")
  • Two speakers hooked up to two channels, playing different things, but reversed ("reversed")

The last scenario would be used in the case where you've hooked everything up but realized you swapped the speakers by accident, and just want to flip a bit in software instead of physically rewiring things.

I can't really imagine a scenario where we have more than two channels of audio output. I am actually implementing 8-channel output now, and the way I'm doing it is with 4 Pis and 4 Hifiberrys. I would suggest that we hold off on supporting >2-channel output in software until someone comes up with a proposed hardware implementation, since right now we don't even know what that would look like.

My proposal
The general idea is that OUTCHANNELS should encapsulate information about how the speakers are physically hooked up, and the arguments to the sound generating classes should encapsulate information about what kind of sound is generated (mono or stereo).

First, I'm proposing to add a channel argument to the sound-generating classes. It would have the following behavior:

  • channel=None: mono (1-dimensional) waveform, the same as it is now
  • channel=0: stereo output, with channel 0 containing a waveform and channel 1 containing zeros
  • channel=1: stereo output, with channel 1 containing a waveform and channel 0 containing zeros

Second, I think we could simplify the logic by implementing each use-case with OUTCHANNELS (and dropping NCHANNELS). I propose the following meaning for OUTCHANNELS:

  • [0] or [1]: single output to the corresponding speaker
  • [0, 1]: stereo output. Whether they play mono or stereo sounds would be determined by the sound generating classes (the channel argument above).
  • [1, 0]: reversed stereo output. No change to the sound generating classes would be needed here, just flip this bit in prefs to swap the speakers.

The default behavior would be:

  • channel=None in the sound generating classes. This means they would generate 1d, mono output, just as they currently do.
  • [0, 1] for OUTCHANNELS. That would send output to both speakers. If the sound generating classes make mono output (default), the same output goes to all speakers. If the user specifies a channel in the sound generating classes, stereo output works right away without changing a pref. Although this is technically a change in the default value of the pref, I don't think the actual default behavior would change. (could be wrong about this)
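The proposed `channel` semantics can be sketched as a small helper. `apply_channel` is a hypothetical name for illustration, not code from the fork:

```python
import numpy as np

# Hedged sketch of the proposed `channel` argument: given a mono
# waveform, produce the array each setting implies.
def apply_channel(mono, channel=None):
    if channel is None:
        return mono                # 1-d mono waveform, as now
    stereo = np.zeros((len(mono), 2), dtype=mono.dtype)
    stereo[:, channel] = mono      # waveform on one channel, zeros on the other
    return stereo
```

The `OUTCHANNELS` pref then only reorders or duplicates these columns onto physical outputs, so `[1, 0]` swaps speakers without touching the sound-generating classes.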

Status
Currently implemented in https://github.com/cxrodgers/autopilot/tree/stereo_sound

A comparison vs the "dev" branch:
dev...cxrodgers:stereo_sound

I believe that it's working on my end. I'm a little unsure about the logic in jackclient.py because there are a lot of conditionals relating to continuous sound (or not), currently playing sound (or not), and the queues. I'll check more carefully.

A few stray observations:

  • I think there is some Matlab-style indexing lurking in the default pref for OUTCHANNELS of [1, 2], this should almost surely be [0, 1].
  • I'm having trouble testing the sound-generating classes because I don't know how to instantiate a jackclient outside of a task, and that jackclient is needed to get the sampling rate and so on. I have noticed a few weirdnesses with sounds, where if I generate the same sound twice it is two different lengths, one the requested length and one a multiple of the chunk size (1024). Still tracking this down

Please let me know if you have any suggestions! I know I'm proposing a big change to an important component of the project (sound playing) so it should be something everyone is happy with.

namedtuple or dataclasses to implement a general format for data 'packets'

Stubbing out an idea:

In many cases data needs to move around the system in 'packets' -- eg. a frame from a camera needs both the image frame as well as the timestamp, etc., a task stage will return a prespecified set of data, and so on.

We want to have some predictable way of

a) specifying what fields we expect to have in a given 'packet'
b) provide a uniform way of accessing fields of data -- eg. for transformations, we don't want to have to write a thousand subclasses to handle whether the data is a pandas dataframe or numpy array...
c) handle common transformations like unit conversions, serialization and compression operations

It seems like dataclasses are the natural builtin way of doing this; we may also want to provide some interface to declare them on the fly, more in the syntax of namedtuple.
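A minimal sketch of both styles; `FramePacket` and `StagePacket` are illustrative names, not a proposed API:

```python
from dataclasses import dataclass, field, make_dataclass
import time

# Hedged sketch: a dataclass declares the fields a 'packet' is
# expected to carry (point a) and gives uniform attribute access
# regardless of the underlying payload type (point b).
@dataclass
class FramePacket:
    frame: bytes
    timestamp: float = field(default_factory=time.time)
    camera_id: str = 'cam0'

# ...and make_dataclass gives the on-the-fly, namedtuple-like syntax:
StagePacket = make_dataclass('StagePacket', ['trial_num', 'response'])
```

Point (c), common transformations like serialization, could then live as methods on a shared packet base class.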

Rotating logs and local data file

Currently there is no limit to the growth of logfiles, so eventually a Pilot will fill up its disk and crash.

Implement a logging.RotatingFileHandler to rotate logfiles, with a maximum size set in prefs.

The same concern applies to the local.h5 file.
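A minimal sketch using the stdlib handler. The pref keys `LOGSIZE` and `LOGNUM` are assumptions for illustration:

```python
import logging
from logging.handlers import RotatingFileHandler

# Hedged sketch: cap logfile growth with a RotatingFileHandler whose
# limits come from prefs (hypothetical keys LOGSIZE / LOGNUM).
def make_rotating_handler(logfile, prefs):
    handler = RotatingFileHandler(
        logfile,
        maxBytes=prefs.get('LOGSIZE', 5 * 2 ** 20),  # 5 MB per file
        backupCount=prefs.get('LOGNUM', 4),          # keep 4 rotated files
    )
    handler.setFormatter(logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s : %(message)s'))
    return handler
```

Rotation caps total log usage at roughly `LOGSIZE * (LOGNUM + 1)` bytes per logger; the local.h5 file would need a separate strategy, since hdf5 files can't simply be truncated.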

No sound on box 4

After taking mice out after running on gap_laser protocol for 1 hour I noticed box 4 had stopped playing white noise at some point while running. I've had this happen with other boxes and with other protocols before too.

Logs:
no_sound_box4.txt
pilot_logs.tar.gz

Setup Error

I'm using a RasPi 4B with Raspbian Buster 10 and Python 3.7. I've managed to successfully install autopilot and all dependencies, but when running python3 -m autopilot.setup.setup_autopilot I keep getting the following error:

/usr/lib/python3.7/runpy.py:125: RuntimeWarning: 'autopilot.setup.setup_autopilot' found in sys.modules after import of package 'autopilot.setup', but prior to execution of 'autopilot.setup.setup_autopilot'; this may result in unpredictable behaviour
warn(RuntimeWarning(msg))
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/pi/.local/lib/python3.7/site-packages/autopilot/setup/setup_autopilot.py", line 691, in <module>
setup.run()
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/apNPSApplication.py", line 30, in run
return npyssafewrapper.wrapper(self.__remove_argument_call_main)
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/npyssafewrapper.py", line 41, in wrapper
wrapper_no_fork(call_function)
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/npyssafewrapper.py", line 97, in wrapper_no_fork
return_code = call_function(_SCREEN)
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/apNPSApplication.py", line 25, in __remove_argument_call_main
return self.main()
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/apNPSApplicationManaged.py", line 148, in main
self.onStart()
File "/home/pi/.local/lib/python3.7/site-packages/autopilot/setup/setup_autopilot.py", line 535, in onStart
self.pilot_1 = self.addForm('CONFIG_PILOT_1', Pilot_Config_Form_1, name="Setup Pilot Agent - 1/2")
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/apNPSApplicationManaged.py", line 55, in addForm
fm = FormClass( parentApp=self, *args, **keywords)
File "/home/pi/.local/lib/python3.7/site-packages/autopilot/setup/setup_autopilot.py", line 228, in __init__
super(Autopilot_Form, self).__init__(*args, **kwargs)
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/fmForm.py", line 70, in __init__
self.create()
File "/home/pi/.local/lib/python3.7/site-packages/autopilot/setup/setup_autopilot.py", line 498, in create
self.populate_form(PILOT_PREFS)
File "/home/pi/.local/lib/python3.7/site-packages/autopilot/setup/setup_autopilot.py", line 270, in populate_form
widget = self.add(nps.TitleText, name=param['text'], value = str(default))
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/fmForm.py", line 384, in add_widget
*args, **keywords)
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/wgtitlefield.py", line 69, in __init__
self.make_contained_widgets()
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/wgtitlefield.py", line 86, in make_contained_widgets
self.label_widget = textbox.Textfield(self.parent, relx=self.relx, rely=self.rely, width=len(self.name)+1, value=self.name, color=self.labelColor)
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/wgtextbox.py", line 44, in __init__
self.update()
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/wgtextbox.py", line 132, in update
self._print()
File "/home/pi/.local/lib/python3.7/site-packages/npyscreen/wgtextbox.py", line 299, in _print color _curses.error: addwstr() returned ERR

Any ideas why this appears?

Let Terminal GUI remember its previous size

Current behavior is that the Terminal GUI covers the entire desktop, which probably makes sense for certain use-cases but can be annoying in others. I see there are some comments about infinitely expanding buttons so I imagine that was the rationale for this choice.

Perhaps a more user-friendly behavior would be to remember the previous size and restore it? Can autopilot/prefs.json be a good place to store information like last window size or is there a better place? I can work on this if people think it makes sense.

On a related note I fixed some Qt DeprecationWarnings on my end with this commit:
cxrodgers@f22002e
I can include that in a PR too.

Run 2afc task error

DEBUG:node._test_agent_2:RECEIVED: ID: test_agent_2_2; TO: _test_agent_2; SENDER: test_agent_2; KEY: START; FLAGS: {}; VALUE: {'bias_mode': 0, 'correction': False, 'graduation': {'type': 'n_trials', 'value': {'current_trial': '100', 'n_trials': '100', 'type': 'n_trials'}}, 'punish_dur': 100, 'punish_stim': True, 'req_reward': True, 'reward': 1000, 'step_name': '2AFC', 'stim': {'sounds': {}, 'tag': 'Sounds', 'type': 'sounds'}, 'task_type': '2AFC', 'subject': '111', 'step': 0, 'current_trial': 0, 'session': 12, 'pilot': 'test_agent_2'}
ERROR:task.<class 'autopilot.tasks.nafc.Nafc'>:Pin could not be instantiated - Type: POKES, Pin: L
Traceback (most recent call last):
  File "/home/pi/github/autopilot/autopilot/tasks/task.py", line 142, in init_hardware
    hw_args = pin_numbers[type][pin]
KeyError: 'POKES'
ERROR:task.<class 'autopilot.tasks.nafc.Nafc'>:Pin could not be instantiated - Type: POKES, Pin: C
Traceback (most recent call last):
  File "/home/pi/github/autopilot/autopilot/tasks/task.py", line 142, in init_hardware
    hw_args = pin_numbers[type][pin]
KeyError: 'POKES'
ERROR:task.<class 'autopilot.tasks.nafc.Nafc'>:Pin could not be instantiated - Type: POKES, Pin: R
Traceback (most recent call last):
  File "/home/pi/github/autopilot/autopilot/tasks/task.py", line 142, in init_hardware
    hw_args = pin_numbers[type][pin]
KeyError: 'POKES'
ERROR:task.<class 'autopilot.tasks.nafc.Nafc'>:Pin could not be instantiated - Type: LEDS, Pin: L
Traceback (most recent call last):
  File "/home/pi/github/autopilot/autopilot/tasks/task.py", line 142, in init_hardware
    hw_args = pin_numbers[type][pin]
KeyError: 'LEDS'
ERROR:task.<class 'autopilot.tasks.nafc.Nafc'>:Pin could not be instantiated - Type: LEDS, Pin: C
Traceback (most recent call last):
  File "/home/pi/github/autopilot/autopilot/tasks/task.py", line 142, in init_hardware
    hw_args = pin_numbers[type][pin]
KeyError: 'LEDS'
ERROR:task.<class 'autopilot.tasks.nafc.Nafc'>:Pin could not be instantiated - Type: LEDS, Pin: R
Traceback (most recent call last):
  File "/home/pi/github/autopilot/autopilot/tasks/task.py", line 142, in init_hardware
    hw_args = pin_numbers[type][pin]
KeyError: 'LEDS'
ERROR:task.<class 'autopilot.tasks.nafc.Nafc'>:Pin could not be instantiated - Type: LEDS, Pin: TOP
Traceback (most recent call last):
  File "/home/pi/github/autopilot/autopilot/tasks/task.py", line 142, in init_hardware
    hw_args = pin_numbers[type][pin]
KeyError: 'LEDS'
ERROR:task.<class 'autopilot.tasks.nafc.Nafc'>:Pin could not be instantiated - Type: PORTS, Pin: L
Traceback (most recent call last):
  File "/home/pi/github/autopilot/autopilot/tasks/task.py", line 142, in init_hardware
    hw_args = pin_numbers[type][pin]
KeyError: 'PORTS'
ERROR:task.<class 'autopilot.tasks.nafc.Nafc'>:Pin could not be instantiated - Type: PORTS, Pin: C
Traceback (most recent call last):
  File "/home/pi/github/autopilot/autopilot/tasks/task.py", line 142, in init_hardware
    hw_args = pin_numbers[type][pin]
KeyError: 'PORTS'
ERROR:task.<class 'autopilot.tasks.nafc.Nafc'>:Pin could not be instantiated - Type: PORTS, Pin: R
Traceback (most recent call last):
  File "/home/pi/github/autopilot/autopilot/tasks/task.py", line 142, in init_hardware
    hw_args = pin_numbers[type][pin]
KeyError: 'PORTS'
DEBUG:task.<class 'autopilot.tasks.nafc.Nafc'>:Hardware initialized
DEBUG:task.<class 'autopilot.tasks.nafc.Nafc'>:Stimulus manager initialized
DEBUG:main:Starting task loop
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.7/threading.py", line 865, in run
    self._target(*self._args, **self._kwargs)
  File "/home/pi/github/autopilot/autopilot/core/pilot.py", line 706, in run_task
    data = next(self.task.stages)() # Double parens because next just gives us the function, we still have to call it
  File "/home/pi/github/autopilot/autopilot/tasks/nafc.py", line 276, in request
    self.target, self.distractor, self.stim = self.stim_manager.next_stim()
  File "/home/pi/github/autopilot/autopilot/stim/managers.py", line 218, in next_stim
    self.last_stim = np.random.choice(self.stimuli[self.target])
KeyError: 'R'

I traced this error to task.py:init_hardware: hw_args = pin_numbers[type][pin]

Printing pin_numbers gives:

{
	'GPIO': {
		'test': {
			'name': 'test',
			'pin': 21,
			'pins': '',
			'polarity': 1,
			'pull': 1,
			'pulse_width': 100,
			'trigger': '',
			'type': 'gpio.LED_SINGLE',
			'value': 1
		}
	}
}

and self.HARDWARE is like:

{
    'POKES': {
        'L': <class 'autopilot.hardware.gpio.Digital_In'>,
        'C': <class 'autopilot.hardware.gpio.Digital_In'>,
        'R': <class 'autopilot.hardware.gpio.Digital_In'>
    },
    'LEDS': {
        'L': <class 'autopilot.hardware.gpio.LED_SINGLE'>,
        'C': <class 'autopilot.hardware.gpio.LED_SINGLE'>,
        'R': <class 'autopilot.hardware.gpio.LED_SINGLE'>,
        'TOP': <class 'autopilot.hardware.gpio.Digital_Out'>
    },
    'PORTS': {
        'L': <class 'autopilot.hardware.gpio.Solenoid'>,
        'C': <class 'autopilot.hardware.gpio.Solenoid'>,
        'R': <class 'autopilot.hardware.gpio.Solenoid'>
    }
}

The keys of the self.HARDWARE dictionary ('POKES', 'LEDS', 'PORTS') are not present in pin_numbers. Did I miss something? Please help, and thanks! @sneakers-the-rat

clear pigpiod scripts on gpio object release

There are 2 issues contained in the attached log files.
Box 4: sound cut out completely at some point; after restarting the box in autopilot, the ports stopped working / the box stopped responding.
Box 7: ports/box stopped responding at some point during running.
Box 0: ports/box stopped responding at some point during running; restarting the box in autopilot did not help.

Log files:
pilot_logs.tar.gz
box_error_11.11.20.txt

callbacks for sounds

so to partially tell everyone about a development move I have been making for awhile but so far had only articulated in my mind:

it's time for lifecycle hooks.

ok but we're getting ahead of ourselves. the problem at hand is that there is an intrinsic asynchrony between the stop_evt call in the JackClient.process method and the actual end of playback. the event is set when the process method determines the sound is over (a None is sent down the queue), but that happens in the process call that is preparing the buffer for the frame in the next buffering cycle for jackd. that means the sound-end callback usually fires early (reminder to Jonny to upload the pic of the oscilloscope), though this can be partially compensated for by accounting for the jackd server's buffering logic (also Jonny upload compensation pic)

the interface around these callbacks is grotesquely opaque and happens through the godforsaken stim managers module (if you thought the hundreds-of-lines-long methods in Subject were bad...). another way of doing this would be to give sound objects ~ lifecycle hooks ~

what this would mean is that sound objects take on greater knowledge of their own presentation, which means a better-thought-out interface with the jack client. that means we need to make networking uniform (cite issue), but in the meantime it means there needs to be a way to a) attach callables to the various epochs of a sound (play started, play ended, among others), and b) implement that as a more general send/receive system between sounds and the client. the queues and the intensely complicated conditional logic in the jack client are truly unmaintainable.

anyway, as this scales to the rest of the project in accordance with v0.5.0, a unitary inheritance structure should include a means of declaring the lifecycle hooks of an object, allowing it to declare the various points of its function and allowing other objects to interact with them. more on this in the full spec of what I think will become v0.6.0
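A minimal sketch of what such lifecycle hooks could look like (all names here are illustrative assumptions, not Autopilot's actual API):

```python
# Hypothetical sketch of sound lifecycle hooks -- names are illustrative,
# not Autopilot's actual API.
class Sound:
    EPOCHS = ('play_started', 'play_ended')

    def __init__(self):
        # one list of callbacks per lifecycle epoch
        self._hooks = {epoch: [] for epoch in self.EPOCHS}

    def on(self, epoch, callback):
        """Register a callable to fire at a given lifecycle epoch."""
        self._hooks[epoch].append(callback)

    def _fire(self, epoch):
        for cb in self._hooks[epoch]:
            cb(self)

    def play(self):
        self._fire('play_started')
        # ... hand buffers to the jack client here ...
        self._fire('play_ended')
```

A general send/receive system between sounds and the client could then dispatch these epochs over one channel instead of the current queue-and-conditional logic.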

segfault on launch

Submitted via discussion board: https://groups.google.com/g/autopilot-users/c/HUCdRJJ-Sg4/m/ag3rwzYABQAJ

Environment Configuration:
[ SUCCESS ] performance
[ SUCCESS ] hifiberry
[ FAILURE ] bluetooth
[ SUCCESS ] jackd
[ SUCCESS ] virtualenv detected, path: /home/pi/autopilot/venv
Additional Messages: Launch file created at /home/pi/autopilot/launch_autopilot.sh

However, when I run the launch_autopilot.sh script, I get the following error:

jackd: no process found
pigpiod: no process found
launch_autopilot.sh: 5: launch_autopilot.sh: source: not found
Segmentation fault

I get the same error when I install all the requirements with apt and pip separately (not sure if this makes sense), run the setup script, and then run the generated launch_autopilot.sh file. I am also not sure why the pilot is not sending a handshake to the terminal.

Document pilot connection & setup

Received in discussion board: https://groups.google.com/g/autopilot-users/c/JvWIPpYY0TI/m/MtKAj0h-BwAJ

Hi Jonny,

First up, this seems like an amazing framework and I can't wait to get started! Fair warning: I'm an experimentalist, so a lot of this is new to me - which might just be a good thing for providing n00b feedback on the docs. ;-)

I now have Autopilot installed on both a Terminal (Mac) and a Pilot. However, my knowledge is limited when it comes to connecting the two, and right now there's not much help to get from the docs. I have a vague idea that there's some ssh involved (about which I know pretty much 0), or is it simpler than that? Do they need to be connected to the same WiFi network (mine cannot currently where I am stationed)?

An aside: a later request for the docs could be a brief mention of how to install and use virtualenv, in case people are not familiar with this way of working. That was the case for me, so I had to figure it out elsewhere and hope for the best (it works now, but guidance might be very helpful for other first-timers). Also, it sort of works outside a virtual environment, but the Pilot doesn't, as the welcome_msg.txt is apparently missing.

Thanks again, and hope you can help me out here!
Cheers,
[name removed in case they don't want it here!!!]

Documentation Requests!

Track minor docs improvements not worth starting their own issue over, comment to make a docs request!

General

  • use of ~/.autopilot file to bootstrap location of autopilot user dir
  • use of pigpio args and pigpio mask in prefs, explain default mask & relationship to sound card...
  • document logging, log levels
  • networking configuration, how to connect pilot to terminal
  • quickstart overview -- 'do something cool in autopilot in 10 minutes'
  • gui wizards/elements, incl. 'reload protocols' and 'batch reassign'
  • Use references to methods themselves rather than calling methods to assign triggers - #119 (comment)

Examples

  • using hardware objects! cameras, etc. outside of context of class
  • simple example task, blinking a light, etc. as both a notebook and in the user guide

Wiki

Make the Autopilot wiki & discussion board

auto-pi-lot.com/hardware should become wiki.auto-pi-lot.com

Wiki should be used for

  • Hardware schematics/meshes/etc. and build guides
  • Pi arcana like configuring hats, sound cards, capture cards, etc.
  • Hints on using the program, design considerations, longer descriptions of tasks

The google groups discussion board is suboptimal in a number of ways, most jarringly the lack of syntax highlighting and inability to edit messages. Either

  • Switch to using new github discussions board, or if that doesn't work
  • Serve some phpbb or similar forum.

Discussion board being separate from the wiki is good because a) most people don't use discussion pages in wikis and b) need low-barrier, simple way for people to ask questions without needing to know "where to put them" in the wiki. Wiki is archive, discussion board is contemporaneous.

Structure of Prefs, defaults, & setup

Better document the parameters in prefs -- finish restructuring of setup routine to further separate prefs, keep a list of default prefs like pvp https://github.com/CohenLabPrinceton/pvp/blob/7d772f093a5706266d372aab2d32535200eeb250/pvp/common/prefs.py#L48 so docs can be generated from docstrings more easily.

Specifically, better document the structure of hardware prefs and how that interacts with the HARDWARE descriptor in a task heading as mentioned here: https://groups.google.com/g/autopilot-users/c/_MqzLDDq3CE/m/AvrIWcwbAAAJ

Currently scripts, their descriptions, and their names are split across multiple files; these should be unified into a single dict for maintainability/elegance (it just sorta sucks this way).

Prefs should also have a better 'getter' method that returns default values when a pref is unset, or at least kindly returns None rather than raising AttributeError.

Add Autopilot info to open-neuroscience.com

Hello Autopilot folks!

We are reaching out because we would love to have your project listed on Open Neuroscience, and also share information about this project:

Open Neuroscience is a community run project, where we are curating and highlighting open source projects related to neurosciences!

Briefly, we have a website where short descriptions of projects are listed, with links to the projects themselves and their authors, together with images and other links.

Once a new entry is made, we make a quick check for spam, and publish it.

Once published, we make people aware of the new entry by Twitter and a Facebook group.

To add information about their project, developers only need to fill out this form

In the form, people can add subfields and tags to their entries, so that projects are filterable and searchable on the website!

The reason why we have the form system is that it makes it open for everyone to contribute to the website and allows developers themselves to describe their projects!

Also, there are so many amazing projects coming out in Neurosciences that it would be impossible for us to keep track and log them all!

Please get in touch if you have any questions or would like to collaborate!

Choose user directory

Autopilot should be able to have its user directory anywhere, and so should have a well-defined fallback system to find it. There are two parts: finding prefs.json, and then finding the user directory it points to.

I think the order should be
(explicit/implicit prefs location, explicit/implicit user directory location)

  • Explicit/Explicit: BASEDIR in prefs.json, location of which is provided with an -f flag to the agent launch
  • Implicit/Explicit: BASEDIR in prefs.json, location of which is found from the ~/.autopilot hidden file which indicated user directory location
  • Implicit/Explicit: BASEDIR in prefs.json, location found by looking in the default ~/autopilot directory.
  • Explicit/Implicit: prefs.json without BASEDIR shouldn't be allowed
  • Implicit/Implicit: ^^

Reportedly, this isn't working:
#33 (comment)

The relevant portion of the setup script is here:
https://github.com/wehr-lab/autopilot/blob/01543a9708d4bf06c74dc6af6fc7e8917331fc98/autopilot/setup/setup_autopilot.py#L486

where

  • if a directory is passed with -d it is used
  • otherwise ~/.autopilot is checked if it exists, and that directory is used
  • finally, the default ~/autopilot directory is used
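The lookup order above could be sketched like this (an illustration assuming the hidden file holds a plain path; not the actual setup code):

```python
# Hypothetical sketch of the user-directory lookup order described above.
import os

def find_user_dir(explicit_dir=None):
    """Resolve the autopilot user dir: -d flag, then ~/.autopilot, then default."""
    if explicit_dir:                              # passed with -d
        return explicit_dir
    pointer = os.path.expanduser('~/.autopilot')
    if os.path.isfile(pointer):                   # hidden file points at user dir
        with open(pointer) as f:
            return f.read().strip()
    return os.path.expanduser('~/autopilot')      # default
```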

This doesn't seem to be propagated to the setup script, and the resulting prefs file takes the BASEDIR value from the wizard: https://github.com/wehr-lab/autopilot/blob/01543a9708d4bf06c74dc6af6fc7e8917331fc98/autopilot/setup/setup_autopilot.py#L534

Should: update the BASE_PREFS dictionary's default value for BASEDIR before opening the prompt, so there is a single point of setting BASEDIR (rather than comparing the temporary autopilot_dir variable with the result from the setup script).

Document prefs['AUDIOSERVER'] datatype and effect

Following up on the information I learned from this discussion:
#55 (comment)

I will try to make a PR to clarify the docs around the behavior of prefs['AUDIOSERVER']. Briefly, it can be either a bool or a string; if a string, it can be 'jack' or 'pyo', though 'pyo' is deprecated.

Probably requires changes here:
https://docs.auto-pi-lot.com/en/latest/_modules/autopilot/prefs.html?highlight=AUDIOSERVER
https://docs.auto-pi-lot.com/en/latest/autopilot.stim.sound.sounds.html

Maybe also in the codebase somewhere.
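To illustrate the value space being documented, here is a hypothetical normalization helper (not actual Autopilot code; the bool-True-means-jack mapping is an assumption):

```python
# Hypothetical helper illustrating the documented AUDIOSERVER values.
import warnings

def normalize_audioserver(value):
    """Map bool/str AUDIOSERVER prefs onto a backend name or None."""
    if value is True:
        return 'jack'        # assumption: plain True selects the jack backend
    if value in (False, None):
        return None          # audio server disabled
    if value == 'pyo':
        warnings.warn("'pyo' is deprecated; use 'jack'", DeprecationWarning)
    return value             # 'jack' or 'pyo'
```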

curses bug in setup script

python3 -m autopilot.setup.setup_autopilot

reproducibly crashes if you run it in an SSH terminal window that is too narrow. This is admittedly a minor issue, but it was an annoyance that took me a while to figure out.

Testing checklist

List of "not necessarily the obvious" things that wouldn't be covered in a coverage check to be sure to include in tests.

  • -e setup in venv vs. system python (#44)

How to connect one parent to multiple children?

This is sort of a continuation of our discussion in #93 (comment)
But that title is not so relevant anymore.

The situation is that I am trying to connect one Parent Pi to multiple Child Pi. Currently, I can connect the Parent to one Child at a time, so I know the Children are each set up correctly. However, I am not sure how to tell the Parent to expect multiple children.

An important pref is likely CHILDID, which currently expects a string. Is the idea that it would expect a list for the case of multiple children? When I provide a list, this happens:

DEBUG:networking.station_02.Pilot_Station.Pilot_Station-3:FORWARDING (router): [b'rpi03', b'{"flags": {}, "timestamp": "2021-06-07T13:51:40.688037", "ttl": 5, "sender": "rpi01", "to": ["rpi02", "rpi03"], "key": "START", "value": {"child": {"parent": "rpi01", "subject": "testmouse_paft"}, "task_type": "PAFT Child", "subject": "testmouse_paft"}, "id": "rpi01_13"}']
ERROR:tornado.general:Uncaught exception in zmqstream callback
Traceback (most recent call last):
  File "/home/pi/.venv/autopilot/lib/python3.7/site-packages/zmq/eventloop/zmqstream.py", line 455, in _handle_events
    self._handle_send()
  File "/home/pi/.venv/autopilot/lib/python3.7/site-packages/zmq/eventloop/zmqstream.py", line 492, in _handle_send
    status = self.socket.send_multipart(msg, **kwargs)
  File "/home/pi/.venv/autopilot/lib/python3.7/site-packages/zmq/sugar/socket.py", line 555, in send_multipart
    return self.send(msg_parts[-1], flags, copy=copy, track=track)
IndexError: list index out of range

Alternatively, should the Children be serially connected, each a Child of the previous one? Can there be a multi-generational setup like this? If so, I'm not sure whether LINEAGE should be PARENT or CHILD.

I suppose alternatively, I could try to manually send the message to start the task to each child separately. This is how we send the message to one Child.

        self.node = Net_Node(id="T_{}".format(prefs.get('NAME')),
                             upstream=prefs.get('NAME'),
                             port=prefs.get('MSGPORT'),
                             listens={},
                             instance=False)

        self.subject = kwargs['subject']
        value = {
            'child': {
                'parent': prefs.get('NAME'), 'subject': kwargs['subject']},
            'task_type': self.CHILDREN['rpi03']['task_type'],
            'subject': kwargs['subject'],
        }

        # send to the station object with a 'CHILD' key
        self.node.send(to=prefs.get('NAME'), key='CHILD', value=value)

However, I'm not sure how to modify this last line to go to the correct child. It seems to just go to whichever one is specified in CHILDID. I tried the obvious thing (changing the to field to the child's name), but no message arrived; I'm not sure I was using the to syntax correctly.

thanks!!

Unify creation of loggers

How did it come to this.

Loggers are instantiated with a bunch of methods that are largely the same, and it's causing problems for obvious reasons -- small changes aren't reliably propagated to the rest.

As far as I can tell there's no reason to have multiple ways of spawning a logger; unify them into a single function like in: https://github.com/CohenLabPrinceton/pvp/blob/322aa15bab4f7c722f0032239147aa210e017277/pvp/common/loggers.py#L38

Logging also needs to be DOCUMENTED!!! maybe in a general DEBUGGING section in the docs?

  • how loggers are spawned
  • format, location of log files
  • how to set log level
  • guidance for logging in new code

https://github.com/wehr-lab/autopilot/blob/6c437a926fbee2ee5c2e69d948b98d6c296d7e9b/autopilot/hardware/__init__.py#L159
https://github.com/wehr-lab/autopilot/blob/6c437a926fbee2ee5c2e69d948b98d6c296d7e9b/autopilot/tasks/children.py#L210
https://github.com/wehr-lab/autopilot/blob/6c437a926fbee2ee5c2e69d948b98d6c296d7e9b/autopilot/tasks/task.py#L115
https://github.com/wehr-lab/autopilot/blob/6c437a926fbee2ee5c2e69d948b98d6c296d7e9b/autopilot/core/plots.py#L86
https://github.com/wehr-lab/autopilot/blob/6c437a926fbee2ee5c2e69d948b98d6c296d7e9b/autopilot/core/plots.py#L192
https://github.com/wehr-lab/autopilot/blob/6c437a926fbee2ee5c2e69d948b98d6c296d7e9b/autopilot/core/pilot.py#L224
https://github.com/wehr-lab/autopilot/blob/6c437a926fbee2ee5c2e69d948b98d6c296d7e9b/autopilot/core/terminal.py#L202
https://github.com/wehr-lab/autopilot/blob/6c437a926fbee2ee5c2e69d948b98d6c296d7e9b/autopilot/stim/sound/sounds.py#L193
https://github.com/wehr-lab/autopilot/blob/6c437a926fbee2ee5c2e69d948b98d6c296d7e9b/autopilot/core/networking.py#L658
https://github.com/wehr-lab/autopilot/blob/6c437a926fbee2ee5c2e69d948b98d6c296d7e9b/autopilot/core/networking.py#L1854

Stream method for hardware objects

From the discussion board (https://groups.google.com/g/autopilot-users/c/6jdWZv05bfI/m/1F438EqxCQAJ)

All hardware objects should be able to easily stream events.

The API that makes sense to me is something like this

from autopilot.hardware.gpio import Digital_In
pin = Digital_In(1)
pin.stream(pin.level, **stream_kwargs)
# or
pin.stream('level', **stream_kwargs)

In the interest of making this general, one strategy might be to make a decorator that wraps attributes of the object such that when they are updated a message is emitted to the configured streaming target.
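One way to sketch that idea is a data descriptor that forwards every attribute update to an emit hook (all names here are hypothetical, and the streaming target is stubbed out):

```python
# Hypothetical descriptor that emits a message whenever an attribute is set.
class Streamed:
    """Data descriptor: forwards every assignment to the owner's emit() hook."""
    def __set_name__(self, owner, name):
        self.name = name
        self.store = '_' + name       # backing slot on the instance

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.store, None)

    def __set__(self, obj, value):
        setattr(obj, self.store, value)
        obj.emit(self.name, value)    # stand-in for the streaming target


class DemoPin:
    """Toy hardware object; emit() would normally publish to the network."""
    level = Streamed()

    def __init__(self):
        self.sent = []

    def emit(self, attr, value):
        self.sent.append((attr, value))
```

With this, `pin.level = 1` both stores the value and records `('level', 1)` on `pin.sent`; in a real hardware object emit() would hand the update to the configured stream.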

ports not responding

After running boxes for around 30 minutes or more, the ports start acting weird and ultimately stop responding. Example: started up the test protocol -> let it run for 45 min -> the first couple of port inputs worked -> the lights got wonky and the 3rd poke and beyond were unresponsive.
Terminal:
portsnotworking.txt
