pulse2percept / pulse2percept
A Python-based simulation framework for bionic vision
Home Page: https://pulse2percept.readthedocs.io
License: BSD 3-Clause "New" or "Revised" License
When I tried to import pulse2percept as p2p, it returned the following error: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject
Provide a Percept object with an easy plot method that automatically flips up/down if necessary (from retinal coordinates to degrees of visual angle, dva) and has labeled axes. Probably easiest to build off of Stimulus (PR #119).
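A minimal sketch of what such a Percept object could look like (hypothetical API; the class name comes from the issue, everything else is an assumption):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend for illustration
import matplotlib.pyplot as plt

class Percept:
    """Sketch: a (Y, X) or (Y, X, T) data cube with axes in degrees of
    visual angle (dva)."""

    def __init__(self, data, xdva, ydva, time=None):
        self.data = np.asarray(data)
        self.xdva = np.asarray(xdva)
        self.ydva = np.asarray(ydva)
        self.time = time

    def plot(self, t=0):
        # Pick a single frame if the percept has a time axis:
        frame = self.data[..., t] if self.data.ndim == 3 else self.data
        fig, ax = plt.subplots()
        # origin='lower' performs the up/down flip from retinal
        # coordinates to dva; extent labels the axes in dva:
        ax.imshow(frame, cmap='gray', origin='lower',
                  extent=[self.xdva[0], self.xdva[-1],
                          self.ydva[0], self.ydva[-1]])
        ax.set_xlabel('x (dva)')
        ax.set_ylabel('y (dva)')
        return ax
```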
Random aside: right now, access to individual electrodes is done either by name or by index. In both cases, access is ultimately mediated by the index (__getitem__ calls get_index, etc.). It occurs to me that instead of a list, the electrodes could be stored in a dict, with either integer keys (when no names are provided) or the names as keys. That might simplify some of the code below (?). Would it have any unwanted consequences? We could still iterate over the dict items (in __iter__ below).
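The dict idea in miniature (a sketch of the proposal, not the current implementation; electrode objects replaced by strings for brevity):

```python
# Names as keys when provided; insertion order is preserved (Python 3.7+),
# so positional access still works:
electrodes = {'A1': 'electrode-A1', 'A2': 'electrode-A2'}

def get_electrode(electrodes, key):
    """Mediate access by name or by integer index, as today."""
    if isinstance(key, int):
        return list(electrodes.values())[key]
    return electrodes[key]

# Iteration over items works just as it would in __iter__:
for name, electrode in electrodes.items():
    print(name, electrode)
```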
Add "Edit on GitHub" button to docs, as seen on ReadTheDocs. Also, make it so that [source] buttons directly link to GitHub.
It would be nice to have a way of indexing into/slicing the data container of a Stimulus object. Make it so that __getitem__ takes a time point and returns the data at that time point, interpolating as needed.
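A minimal sketch of the idea (assuming a (electrodes, time) data layout; a toy stand-in, not the actual Stimulus implementation):

```python
import numpy as np

class Stimulus:
    """Toy stand-in showing time-based __getitem__ with interpolation."""

    def __init__(self, data, time):
        self.data = np.atleast_2d(data)  # shape: (n_electrodes, n_time)
        self.time = np.asarray(time, dtype=float)

    def __getitem__(self, t):
        # Linearly interpolate every electrode's waveform at time t:
        return np.array([np.interp(t, self.time, row) for row in self.data])

stim = Stimulus([[0, 10, 0]], time=[0, 1, 2])
print(stim[0.5])  # halfway up the ramp
```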
Need to convert all the pulse trains into the new Stimulus format: tsample is no longer needed. An example:
import numpy as np
import pulse2percept as p2p

class MonophasicPulse(p2p.stimuli.Stimulus):
    def __init__(self, amp, pulse_dur, delay_dur=0, stim_dur=None, dt=1e-12):
        """Monophasic pulse

        Parameters
        ----------
        amp : float
            Current amplitude (uA). Negative currents: cathodic, positive: anodic.
        pulse_dur : float
            Pulse duration (ms)
        delay_dur : float
            Delay duration (ms). Zeros will be inserted at the beginning of the
            stimulus to deliver the pulse after ``delay_dur`` ms.
        stim_dur : float, optional, default: ``pulse_dur+delay_dur``
            Stimulus duration (ms). Zeros will be inserted at the end of the
            stimulus to make the stimulus last ``stim_dur`` ms.
        dt : float, optional, default: 1e-12 ms
            Sampling time step (ms).

        Examples
        --------
        A single cathodic pulse (1ms pulse duration at 20uA) delivered after
        2ms and embedded in a stimulus that lasts 10ms overall:

        >>> from pulse2percept.stimuli import MonophasicPulse
        >>> pulse = MonophasicPulse(-20, 1, delay_dur=2, stim_dur=10)
        """
        if stim_dur is None:
            stim_dur = pulse_dur + delay_dur
        assert stim_dur >= pulse_dur + delay_dur
        # We only need to store the time points at which the stimulus changes.
        # For example, the pulse amplitude is zero at time=``delay_dur`` and
        # ``amp`` at time=``delay_dur+dt``:
        data = np.array([0, 0, amp, amp, 0, 0]).reshape((1, -1))
        time = np.array([0, delay_dur,
                         delay_dur + dt, delay_dur + pulse_dur,
                         delay_dur + pulse_dur + dt, stim_dur])
        super().__init__(data, time=time, compress=True)
The new classes can replace the old classes in pulse2percept/stimuli/pulse_trains.py. TimeSeries can be removed.
This is also a good time to convert all times to ms (see issue #170).
We need the following:
MonophasicPulse(self, amp, pulse_dur, delay_dur=0, stim_dur=None, dt=1e-12)
BiphasicPulse(self, amp, pulse_dur, polarity='cathodic-first', delay_dur=0, stim_dur=None, dt=1e-12)
PulseTrain
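Following the same pattern as the MonophasicPulse example above, the biphasic data/time arrays could be built like this (a sketch of the array construction only; the helper name and layout are assumptions, not the p2p API):

```python
import numpy as np

def biphasic_pulse_arrays(amp, pulse_dur, polarity='cathodic-first',
                          delay_dur=0, stim_dur=None, dt=1e-12):
    """Build (data, time) arrays for a charge-balanced biphasic pulse."""
    if stim_dur is None:
        stim_dur = 2 * pulse_dur + delay_dur
    assert stim_dur >= 2 * pulse_dur + delay_dur
    # Cathodic = negative current; flip the sign for anodic-first pulses:
    first = -abs(amp) if polarity == 'cathodic-first' else abs(amp)
    second = -first
    t_on = delay_dur                   # first phase onset
    t_mid = delay_dur + pulse_dur      # phase transition
    t_off = delay_dur + 2 * pulse_dur  # second phase offset
    # As above, only store the time points where the stimulus changes:
    data = np.array([0, 0, first, first,
                     second, second, 0, 0]).reshape((1, -1))
    time = np.array([0, t_on, t_on + dt, t_mid,
                     t_mid + dt, t_off, t_off + dt, stim_dur])
    return data, time
```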
A user reports that pip install fails with the error "'pulse2percept/fast_retina.pyx' doesn't match any files". It's time for a new release.
Add two examples to the gallery:
This might take a while to run on your laptop. Try running on the cluster.
To use the video module (based on scikit-video), the user needs to install ffmpeg or libav, which is platform-dependent and annoying for both users and CI. Better to find a pip-installable solution.
imageio might be a good alternative (https://imageio.readthedocs.io/en/latest/index.html), and it's already required by scikit-image. There's an imageio-ffmpeg package that is pip-installable and can do movies (https://imageio.readthedocs.io/en/latest/examples.html#convert-a-movie). Otherwise, percepts in time could also be GIFs.
Other package suggestions welcome.
Ease installation for down-stream users, using conda forge: https://conda-forge.github.io/
Following #88 (comment), we should consider automatically integrating the documentation with binder/colab. Let the users run examples in their browser.
There's a bit of a direction towards that here
ScoreboardModel and AxonMapModel currently do not incorporate electrode-retina distance, because it was not in Beyeler et al. (2019). Need to add another Gaussian to the equation that inversely scales brightness with electrode-retina distance, using a third decay constant, zeta.
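One plausible form of the extra term (an assumption for illustration only; parameter values are placeholders, not published constants):

```python
import numpy as np

def brightness(d_retina, z, rho=100.0, zeta=50.0):
    """Scoreboard-style Gaussian in the retinal plane (decay constant rho)
    multiplied by a hypothetical Gaussian decay with electrode-retina
    distance z (third decay constant zeta). All distances in microns."""
    return (np.exp(-d_retina ** 2 / (2 * rho ** 2)) *
            np.exp(-z ** 2 / (2 * zeta ** 2)))
```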
In my experience, Joblib and Cython haven't worked well together. Maybe I'm doing something wrong, or it's the Python overhead that bites me, but I easily get the desired speedup by using OpenMP to parallelize Cython code via prange.
utils.parfor parallelizes an outer loop either via Joblib or Dask. I think the sensible thing to do would be to add a third option, OpenMP (via prange).
We should also add some benchmark scripts to compare Joblib, Dask, and OpenMP.
Integrating OpenMP is not without issues, though: scikit-learn/scikit-learn#7650. We might have to subclass the Extensions to get OpenMP installed correctly across platforms. Also not sure how to package that on PyPI. So it's a future milestone for now. Maybe someone else can help.
12-core speedup over single-core NumPy for the Horsager model:
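The prange idea, sketched as a hypothetical Cython kernel (illustrative names and signature, not the p2p API; requires compiling with OpenMP flags such as -fopenmp, so it is a compile-time sketch rather than a runnable snippet):

```cython
# cython: boundscheck=False, wraparound=False
from cython.parallel import prange
from libc.math cimport exp

def scoreboard_kernel(double[:, ::1] out, double[:, ::1] dist2,
                      double[::1] amps, double rho):
    """Fill ``out`` (pixels x electrodes) with Gaussian-weighted amplitudes.
    The outer pixel loop runs OpenMP-parallel with the GIL released."""
    cdef Py_ssize_t px, el
    cdef Py_ssize_t n_px = out.shape[0]
    cdef Py_ssize_t n_el = amps.shape[0]
    for px in prange(n_px, nogil=True):
        for el in range(n_el):
            out[px, el] = amps[el] * exp(-dist2[px, el] / (2 * rho * rho))
```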
Seems like a good place to keep documentation for various releases (pip versions, stable, dev): https://docs.readthedocs.io/en/stable/intro/import-guide.html
What do you think, @arokem?
Add a "datasets" subpackage modeled after scikit-learn that allows us to:
- download data to the data_path directory
- load data from the data_path directory as a Pandas DataFrame
Mimic the behavior of scikit-learn: by default, data_path is ~/pulse2percept_data. Users can overwrite this location with an environment variable PULSE2PERCEPT_DATA.
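The data_path resolution could mirror scikit-learn's get_data_home (a sketch; the function name is an assumption):

```python
import os

def get_data_path(data_path=None):
    """Resolve the dataset directory: an explicit argument wins, then the
    PULSE2PERCEPT_DATA environment variable, then ~/pulse2percept_data."""
    if data_path is None:
        data_path = os.environ.get('PULSE2PERCEPT_DATA',
                                   os.path.join('~', 'pulse2percept_data'))
    return os.path.expanduser(data_path)
```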
We'll need a new subpackage, pulse2percept/datasets:
- data/: a folder containing csv files with the data (you can use a dummy file when writing the code)
- __init__.py: analogous to the other subpackages
- setup.py: analogous to other subpackages, but make sure to config.add_data_dir('data')
- base.py:
  - fetch_url(url, file_path): basically this
  - load_data(data_path): basically that, for CSV and HDF5
  - fetch_beyeler2019(): a function to load the data described in Beyeler et al. (2019):
    - check if data_path/beyeler2019 exists
    - download argus_shapes.h5 from https://osf.io/6v2tb into data_path/beyeler2019 using fetch_url
    - load_data. You can use function hdf2df from here
- tests/: a folder with tests for fetch_url and load_data
Also to come: how to update the docs
Most electrode-like objects (ElectrodeGrid, PointSource, DiskElectrode, etc.) do not explain the coordinate system. Documentation should point out that:
Edit: Consider showing a schematic of the coordinate system in doc/topics/implants.rst
Add the 24-channel suprachoroidal prototype by Bionic Vision Australia (BVA), which now belongs to Bionic Vision Technologies (BVT).
Create implants/bvt.py and add a class BVA24(ProsthesisSystem).
Note that x_center, y_center of the array is actually the center of the stimulating electrodes 1-20: the center lies in the middle of electrodes 7, 8, 9, and 13 (see Fig. 2).
Currently has some mysterious things going on in it.
The plan:
Add auto-generated documentation using sphinx. Possibly dovetails with the BIDS-organized docathon: https://bids.github.io/docathon/
To auto-upload tags as they get pushed to Github.
Good for stable DOI creation.
Reproduce the bug:
model = p2p.models.AxonMapModel()
model.build()
model.predict_percept(implant=ArgusII)
When I pass Argus II into the predict method, I get an error about left/right eye.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-17-e310b8938853> in <module>()
----> 1 model.predict_percept(implant=ArgusII)

/content/p2psource/pulse2percept/models/axon_map.py in predict_percept(self, implant, t)
    340         if implant.eye != self.eye:
    341             raise ValueError(("The implant is in %s but the model was built "
--> 342                               "for %s.") % (implant.eye, self.eye))
    343         return super(AxonMapModel, self).predict_percept(implant, t=t)

ValueError: The implant is in <property object at 0x7f96acf818b8> but the model was built for RE.
Inspiration:
With instructions telling users how they can run the test-suite themselves.
Possibly also: how they might extend testing when extending functionality with our base classes.
Revise the deprecated image2stim function. Provide the following functionality:
Would be great if it were extensible, so users can pick and choose which steps they want to apply.
ElectrodeGrid currently assumes all electrodes are DiskElectrode objects.
To be more flexible, the constructor needs to accept an electrode type. Thing is, DiskElectrode needs a radius, but PointSource does not. Therefore, we also need a way to specify optional keyword arguments that can be passed to the electrode constructor.
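A sketch of such a flexible constructor (toy classes throughout; the real ElectrodeGrid/DiskElectrode signatures may differ):

```python
class PointSource:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

class DiskElectrode(PointSource):
    def __init__(self, x, y, z, r):
        super().__init__(x, y, z)
        self.r = r  # DiskElectrode needs a radius; PointSource does not

class ElectrodeGrid:
    """Accept an electrode type plus optional kwargs forwarded to it."""

    def __init__(self, shape, etype=PointSource, **electrode_kwargs):
        rows, cols = shape
        # Forward the optional kwargs so DiskElectrode gets its radius
        # while PointSource needs none:
        self.electrodes = [etype(x, y, 0, **electrode_kwargs)
                           for y in range(rows) for x in range(cols)]

# Disk electrodes need r; point sources take no extra kwargs:
grid = ElectrodeGrid((2, 3), etype=DiskElectrode, r=100)
```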
Add Alpha-AMS to pulse2percept/implants/alpha.py. Note we already have Alpha-IMS, but not Alpha-AMS.
Hi guys,
First of all, thank you very much for your framework!
When simulating longer videos, I have noticed that the percepts become black over time. After further investigation, I think it is because of charge accumulation making the system less sensitive the more charge is accumulated. However, sensitivity is never restored as far as I can see (and was of no interest in generating the model, I guess).
So there are two questions that I have right now:
1.) Are my assumptions correct?
2.) Do you know of any "plausible" way, e.g. a known model etc. for "resetting" the sensitivity?
Thanks in advance
EDIT: To be more precise: I am simulating a 20 second clip (25 fps) of a still image for testing (amplitude encoded between [0,1]) and an implant with a working frequency between 1-20 Hz (1 ms cathodic pulses).
Attached you can see two responses (working frequency 5Hz and 20Hz) of a fixed position within the percept. It is clear to me that the sensitivity drops faster for pulses with 20Hz, since the accumulated charge will increase faster than with 5Hz, however, in both cases the accumulated charge will linearly increase over time making the system insensitive in the long run.
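On question 2: one plausible (but assumed, not published) fix is to make the accumulator leaky, so sensitivity recovers between pulses; accumulated charge then saturates at tau * |stim| instead of growing linearly:

```python
import numpy as np

def leaky_charge(stim, dt, tau=1000.0):
    """Integrate dq/dt = |stim| - q / tau with forward Euler (tau in ms).
    As tau -> infinity this reduces to the current linear accumulation."""
    q = np.zeros(len(stim))
    for i in range(1, len(stim)):
        q[i] = q[i - 1] + dt * (abs(stim[i]) - q[i - 1] / tau)
    return q
```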
We should expand on the two setups - depending on whether you want to be a user or developer.
A user should have the easiest setup possible, and be directed to the latest stable release. We should not expect proficiency with any of these tools.
Steps include:
A developer needs to take a lot more steps, and we can assume some level of experience with these tools:
Provide an easy way to uninstall p2p, so that the annoying pip error "Cannot uninstall p2p. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall" can be avoided. It's especially annoying for developers when switching between multiple versions of p2p.
IMO, manually deleting from site-packages is not a user-friendly option. Too cumbersome.
Is it as easy as running pip install -e . during make instead of python setup.py install? What about building the Cython extension?
See also:
Get rid of old crufty stuff
Enumerating the papers that use the software.
The ElectrodeArray class currently has an add_electrode method, but no remove_electrode method. We should also add a remove_electrodes method.
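A sketch of the missing methods (toy class, not the shipped ElectrodeArray):

```python
class ElectrodeArray:
    """Toy version with symmetric add/remove, electrodes keyed by name."""

    def __init__(self):
        self.electrodes = {}

    def add_electrode(self, name, electrode):
        if name in self.electrodes:
            raise ValueError("Electrode '%s' already exists." % name)
        self.electrodes[name] = electrode

    def remove_electrode(self, name):
        if name not in self.electrodes:
            raise ValueError("Electrode '%s' does not exist." % name)
        del self.electrodes[name]

    def remove_electrodes(self, names):
        # Plural variant mentioned in the issue: remove several at once
        for name in names:
            self.remove_electrode(name)
```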
Use xarray to store time series, build the Stimulus object, and pass it to ProsthesisSystem. The model must be smart about handling different time steps for simulation, stimulus, and percept.
Explain how the code is organized, how to contribute, how to add new subpackages, how to write tests (see issue #78), how to build everything.
__slots__ can reduce memory usage by preventing the creation of __dict__ and __weakref__, and can speed up attribute lookup.
A few things to consider: https://stackoverflow.com/a/28059785
TLDR on multiple inheritance:
- every class must inherit from object (new-style classes)
- if a parent declares __slots__ = ['foo', 'bar'], then a child should only add ['baz'], not ['foo', 'bar', 'baz']
- if any class in the hierarchy omits __slots__, its instances get a __dict__ anyway, negating the savings
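The inheritance rule in code (class names are illustrative):

```python
class Electrode:
    # Parent lists its own attributes exactly once:
    __slots__ = ('x', 'y', 'z')

    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

class DiskElectrode(Electrode):
    # Child adds ONLY the new attribute; repeating ('x', 'y', 'z')
    # here would waste memory by duplicating the slots:
    __slots__ = ('r',)

    def __init__(self, x, y, z, r):
        super().__init__(x, y, z)
        self.r = r

d = DiskElectrode(0, 0, 0, 100)
assert not hasattr(d, '__dict__')  # no per-instance dict was created
```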
We need a roadmap, see e.g.: https://scikit-learn.org/stable/roadmap.html
Need to go through the API reference and make sure all of the parts are documented. Currently there are a couple of descriptions missing at the submodule level.
Implants:
- p2p.implants.ElectrodeArray (PR #146)
- p2p.implants.ElectrodeGrid (PR #162)
Stimuli (@ezgirmak):
- p2p.stimuli.Stimulus and the pulse trains in p2p.stimuli.pulse_trains (PR #133)
Models:
- ScoreboardModel (PR #96)
- AxonMapModel (PR #96)
- Nanduri2012Model (PR #168)
- Horsager2009Model (PR #180)
How to:
- Set up the upstream remote (git remote add upstream https://github.com/pulse2percept/pulse2percept.git)
- Update your local master branch (git checkout master && git pull upstream master)
- Create a new branch (git checkout -b my-branch)
- Create a new plot_*.py file in doc/examples/{implants|stimuli}, and give it a meaningful name (but it has to start with plot_ for Sphinx Gallery to work)
- Start the .py file with a """ block that contains your verbal introduction to the example. You can use Sphinx syntax in subsequent # comment blocks. Have a look at doc/examples/models/plot_axonmap.py or https://sphinx-gallery.github.io/stable/syntax.html for the structure/syntax
- Build the docs with make doc from the root directory or with make html from within doc/
- Open a pull request against master
Automate wheel building for different platforms via CI (including Windows), collect artifacts, upload them to PyPI. Look at scikit-learn for inspiration.
The difference between available versions of p2p is still confusing to people. #139 and #140 improved wording in the docs to distinguish between "stable" and "latest", but it is still way too easy to read the wrong docs and not notice.
Suggested changes:
Is your feature request related to a problem? Please describe.
Percepts are not animated in time. Need to save to video (which requires FFMPEG), then open the video with VLC or similar.
Describe the solution you'd like
Animate them directly in IPython / Jupyter Notebook!
Additional context
Try this:
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
import numpy as np

t = np.linspace(0, 2 * np.pi)
x = np.sin(t)
fig, ax = plt.subplots()
plt.close()  # prevent extra empty axes
ax.axis([0, 2 * np.pi, -1, 1])
l, = ax.plot([], [])

def animate(i):
    l.set_data(t[:i], x[:i])

rc('animation', html='jshtml')
# Use the imported name: 'matplotlib.animation' is not in scope here
ani = animation.FuncAnimation(fig, animate, frames=len(t))
HTML(ani.to_jshtml())
but loop over frames of a percept.
Alternatively: ani.to_jshtml()
(not interactive).
ElectrodeGrid should not inherit the add_electrode or remove_electrode method from ElectrodeArray, because then it's no longer a grid and shape would have the wrong value.
One solution is to move add_electrode (and remove_electrode, #147) to a mixin.
On the other hand, it would be really nice to be able to build PRIMA from a hex grid, then remove some of the electrodes around the edges.
Time is currently in SI units (seconds), but space (microns) and currents (uA) are not. Sure, we could go all SI. But it's more convenient to use the dimension that is most relevant to the application. Dealing with 1e-6 floats is just painful...
Thus time should be in milliseconds (especially in TimeSeries etc.).
This simple example:
import pulse2percept as p2p
sim = p2p.Simulation(p2p.implants.ArgusII())
pulse = {'F2': p2p.stimuli.PulseTrain(0.01 / 1000)}
percept = sim.pulse2percept(pulse)
breaks during the last line with "For now, all pulse trains must have the same sampling time step as the ganglion cell layer. In the future, this requirement might be relaxed."
(The problem is that the ganglion cell layer uses tsample=0.005/1000 by default, and PulseTrain doesn't have such a default value.)
First of all, this error message is very confusing. As a user I've never even heard of a ganglion cell layer. Let alone touched it.
Second, if every method must use the same tsample, specify it once at the global level (maybe in the p2p.Simulation constructor?). Otherwise, at least have all methods use the same default value for tsample...