
pulse2percept / pulse2percept


A Python-based simulation framework for bionic vision

Home Page: https://pulse2percept.readthedocs.io

License: BSD 3-Clause "New" or "Revised" License

Languages: Python 95.52%, Makefile 0.10%, Cython 4.38%
Topics: python, neuroscience, vision, neural-engineering, bionic-vision, retinal-implants

pulse2percept's Introduction


pulse2percept: A Python-based simulation framework for bionic vision

Retinal degenerative diseases such as retinitis pigmentosa and macular degeneration result in profound visual impairment in more than 10 million people worldwide, and a variety of sight restoration technologies are being developed to target these diseases.

Retinal prostheses, now implanted in over 500 patients worldwide, electrically stimulate surviving cells in order to evoke neuronal responses that are interpreted by the brain as visual percepts ('phosphenes'). However, interactions between the device electronics and the retinal neurophysiology result in perceptual distortions that may severely limit the quality of the generated visual experience:

Input stimulus and predicted percept

(left: input stimulus, right: predicted percept)

Built on the NumPy and SciPy stacks, pulse2percept provides an open-source implementation of a number of computational models for state-of-the-art visual prostheses (also known as the 'bionic eye'), such as Argus II, BVA-24, and PRIMA, to provide insight into the visual experience generated by these devices.

Simulations such as the one above are likely to be critical for providing realistic estimates of prosthetic vision, for guiding regulatory bodies toward appropriate visual tests for evaluating prosthetic performance, and for improving current and future technology.

If you use pulse2percept in a scholarly publication, please cite as:

M Beyeler, GM Boynton, I Fine, A Rokem (2017). pulse2percept: A Python-based simulation framework for bionic vision. Proceedings of the 16th Python in Science Conference (SciPy), p. 81-88, doi: 10.25080/shinma-7f4c6e7-00c (https://doi.org/10.25080/shinma-7f4c6e7-00c).

Installation

Once you have Python 3 and pip, the stable release of pulse2percept can be installed with pip:

pip install pulse2percept

The bleeding-edge version of pulse2percept can be installed via:

pip install git+https://github.com/pulse2percept/pulse2percept

When installing the bleeding-edge version on Windows, note that you will have to install your own C compiler first. Detailed instructions for different platforms can be found in our Installation Guide.

pulse2percept supports these Python versions:

Python 3.11 3.10 3.9 3.8 3.7 3.6 3.5 3.4 2.7
p2p 0.9 Yes Yes Yes Yes
p2p 0.8 Yes Yes Yes Yes
p2p 0.7 Yes Yes Yes Yes
p2p 0.6 Yes Yes Yes Yes
p2p 0.5 Yes Yes Yes
p2p 0.4 Yes Yes Yes

Where to go from here

pulse2percept's People

Contributors

aiwenxu, apurvvarshney, arokem, ascientist, conrad-crowley, dylanlin29, emmagan, ezgirmak, francienagiki, garethgeorge, gboynton, ionefine, jgranley, jonluntzel, lukeyoffe, mbeyeler, narenberg, neurosciencescripts, oliver-contier, omizrahi99, subawocit, tallyhawley, wadevaresio


pulse2percept's Issues

Update user vs developer installation instructions

We should expand on the two setups, depending on whether you want to be a user or a developer.

A user should have the easiest setup possible, and be directed to the latest stable release. We should not expect proficiency with any of these tools.
Steps include:

  • How to install Python
  • macOS: How to get pip without XCode
  • How to verify everything worked

A developer needs to take a lot more steps, and we can assume some level of experience with these tools:

  • Get Python/XCode/git
  • Get Cython
  • Fork and clone
  • pip install -r requirements.txt
  • pip install -r requirements-dev.txt
  • pip install -r requirements-doc.txt
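
For example, the developer path could boil down to something like the following (a sketch of the steps above; the fork URL is a placeholder, and the final editable install is one option also discussed in the uninstall issue further down):

git clone https://github.com/<your-username>/pulse2percept.git
cd pulse2percept
pip install -r requirements.txt
pip install -r requirements-dev.txt
pip install -r requirements-doc.txt
pip install -e .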

Time should be in milliseconds

Time is currently in SI units (seconds), but space (microns) and currents (uA) are not. Sure, we could go all SI. But it's more convenient to use the dimension that is most relevant to the application. Dealing with 1e-6 floats is just painful...

Thus time should be in milliseconds (especially in TimeSeries etc.)

Electrode-retina distance for ScoreboardModel, AxonMapModel

ScoreboardModel and AxonMapModel currently do not incorporate electrode-retina distance, because it was not in Beyeler et al. (2019). Need to add another Gaussian to the equation that inversely scales brightness with electrode-retina distance, using a third decay constant, zeta.
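
As a sketch (this exact form is an assumption, not taken from Beyeler et al., 2019): with d the retinal distance from the electrode center and z the electrode-retina distance, the brightness contribution of an electrode could become

b(d, z) \propto \exp\left(-\frac{d^2}{2\rho^2}\right) \cdot \exp\left(-\frac{z^2}{2\zeta^2}\right)

where rho is the existing spatial decay constant and zeta is the proposed third decay constant.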

Add BVA-24

Add the 24-channel suprachoroidal prototype by Bionic Vision Australia (BVA), which now belongs to Bionic Vision Technologies (BVT).

  • Create file implants/bvt.py, and add class BVA24(ProsthesisSystem)
  • Get all relevant implant parameters from Ayton et al. (2014)
  • Use the same electrode naming scheme as in the paper (see Fig. 2): Electrodes 1-20, then 21a-m, then 22 and 23 for the large return electrodes, and ignore 24 for now (external/remote return electrode)
  • Note that electrodes 9, 17, and 19 are smaller than the others
  • Make it so that x_center, y_center of the array is actually the center of the stimulating electrodes 1-20: so the center is in the middle of electrodes 7, 8, 9, and 13 (see Fig. 2)

DOC: add user guide

Explain high-level concepts, how to use the package, where to find what. Point to specific examples from the gallery.

  • describe high-level organization (PR #124)
  • how to create an implant (PR #124)
  • how to create a stimulus (PR #119)
  • how to run a model (PR #124)

Add a version warning on RTD

The difference between available versions of p2p is still confusing to people. #139 and #140 improved wording in the docs to distinguish between "stable" and "latest", but it is still way too easy to read the wrong docs and not notice.

Suggested changes:

  • Show the software version prominently on RTD
  • Change the latest version to X.Y.devN (e.g., 0.6.dev0). The "dev" should make it apparent that this is not stable.
  • Show a warning on RTD for versions other than stable/latest

[BUG] Error in Predict Percept

Reproduce the bug:

import pulse2percept as p2p
from pulse2percept.implants import ArgusII

model = p2p.models.AxonMapModel()
model.build()
model.predict_percept(implant=ArgusII)  # note: ArgusII here is the class, not an instance

When I pass Argus II into the predict method, I get an error about left/right eye.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-17-e310b8938853> in <module>()
----> 1 model.predict_percept(implant=ArgusII)

/content/p2psource/pulse2percept/models/axon_map.py in predict_percept(self, implant, t)
    340         if implant.eye != self.eye:
    341             raise ValueError(("The implant is in %s but the model was built "
--> 342                               "for %s.") % (implant.eye, self.eye))
    343         return super(AxonMapModel, self).predict_percept(implant, t=t)

ValueError: The implant is in <property object at 0x7f96acf818b8> but the model was built for RE.
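
The <property object ...> in the message suggests that the ArgusII class itself was passed rather than an instance, so implant.eye resolves to the unbound property instead of 'RE' or 'LE'. A call along these lines should avoid the error (a sketch; the default ArgusII() is placed in the right eye, matching the model default):

import pulse2percept as p2p

model = p2p.models.AxonMapModel()
model.build()
# Instantiate the implant before passing it to the model:
percept = model.predict_percept(implant=p2p.implants.ArgusII())

That said, the error message could also be made friendlier when a class (or anything without a valid eye attribute) is passed.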

Revise image2stim

Revise the deprecated image2stim function. Provide the following functionality:

Would be great if it were extensible, so users can pick and choose which steps they want to apply.

Add __getitem__ to Stimulus for indexing

It would be nice to have a way for indexing into/slicing the data container of a Stimulus object. Make it so that __getitem__ takes a time point and returns the data at that time point, including interpolating as needed.
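
A minimal sketch of what such indexing could look like (assuming the data is stored as a 2D array with a matching time vector; class and attribute names are illustrative, not the final API):

import numpy as np

class Stimulus:
    # Minimal stand-in: data has shape (n_electrodes, n_time_points)
    def __init__(self, data, time):
        self.data = np.asarray(data, dtype=float)
        self.time = np.asarray(time, dtype=float)

    def __getitem__(self, t):
        """Return the data at time ``t``, linearly interpolating between samples."""
        t = np.atleast_1d(np.asarray(t, dtype=float))
        return np.stack([np.interp(t, self.time, row) for row in self.data])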

Complete API reference

Need to go through the API reference and make sure all of the parts are documented. Currently there are a couple of descriptions missing at the submodule level.

Reduce memory footprint with __slots__

__slots__ can reduce memory usage by preventing the creation of __dict__ and __weakref__, and it can speed up attribute lookup.

A few things to consider: https://stackoverflow.com/a/28059785

TLDR on multiple inheritance:

  • top needs to inherit from object
  • slots are inherited, so if parent has ['foo', 'bar'], then child should only add ['baz'], not ['foo', 'bar', 'baz']
  • if you did it right, child should not have __dict__
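
A small sketch of the points above (class names are illustrative only):

class Base:                      # top of the hierarchy inherits from object
    __slots__ = ('foo', 'bar')   # parent declares its own attributes

class Child(Base):
    __slots__ = ('baz',)         # child only adds the new attribute

c = Child()
c.foo, c.bar, c.baz = 1, 2, 3
assert not hasattr(c, '__dict__')   # if done right, no __dict__ is created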

Electrode objects do not explain x,y,z coordinate system

Most electrode-like objects (ElectrodeGrid, PointSource, DiskElectrode, etc.) do not explain the coordinate system. Documentation should point out that:

  • unit is microns
  • origin is the fovea
  • y is superior/inferior retina
  • x is temporal/nasal retina
  • z is distance from surface

Edit: Consider showing a schematic of the coordinate system in doc/topics/implants.rst
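
For instance, a docstring example along these lines could make the convention explicit (a sketch; DiskElectrode takes x, y, z, and a radius, all in microns):

from pulse2percept.implants import DiskElectrode

# A disk electrode with 100-micron radius, centered 500 microns from the fovea
# along x and 300 microns along y, sitting 50 microns away from the retinal surface:
electrode = DiskElectrode(500, 300, 50, 100)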

Cannot import the current version of pulse2percept


I tried the pip3 install -U pulse2percept but it skipped the upgrade.

Screen Shot 2020-01-28 at 10 12 15 AM

So I tried the master fork sync.

Screen Shot 2020-01-28 at 10 14 02 AM

However, it's still only updated to pulse2percept v0.5.0, not v0.6.0.

Refactor retina.make_axon_map

Currently has some mysterious things going on in it.

The plan:

  • Write some tests for a minimal case.
  • Store some larger examples.
  • Refactor, making sure that both of the above still work.

Animate percepts in Jupyter Notebook

Is your feature request related to a problem? Please describe.
Percepts are not animated in time. Need to save to video (which requires FFMPEG), then open the video with VLC or similar.

Describe the solution you'd like
Animate them directly in IPython / Jupyter Notebook!

Additional context
Try this:

%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
import numpy as np

t = np.linspace(0, 2 * np.pi)
x = np.sin(t)

fig, ax = plt.subplots()
plt.close(fig)  # prevent the static figure from being displayed as well
ax.axis([0, 2 * np.pi, -1, 1])
l, = ax.plot([], [])

def animate(i):
    # Draw the curve up to frame i:
    l.set_data(t[:i], x[:i])

rc('animation', html='jshtml')
ani = animation.FuncAnimation(fig, animate, frames=len(t))

HTML(ani.to_jshtml())

but loop over frames of a percept.

Alternatively: ani.to_html5_video() (not interactive).

[BUG] CI/CD broken with Coverage 5.0

ERROR: coveralls 1.9.2 has requirement coverage<5.0,>=3.6, but you'll have coverage 5.0.1 which is incompatible.

Coveralls has a restriction on <5.0 (link), but pip does not respect it.

Coverage 5.0 was released on 12/14/19 (link), broke coveralls, and thus broke our CI/CD builds.
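
One stopgap (an assumption, not a tested fix) is to pin coverage below 5.0 explicitly when installing, so pip honors the constraint that coveralls declares:

pip install "coverage<5.0" coveralls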

Simulating videos - Charge accumulation in Nanduri / Horsager

Hi guys,
First of all, thank you very much for your framework 👍
When simulating longer videos I have noticed that the percepts become black after time. After further investigation I think it is because of the charge accumulation making the system less sensitive the more charge is accumulated. However, sensitivity is never restored as far as I can see (and was of no interest in generating the model I guess).

So there are two questions that I have right now:
1.) Are my assumptions correct?
2.) Do you know of any "plausible" way, e.g. a known model etc. for "resetting" the sensitivity?

Thanks in advance

EDIT: To be more precise: I am simulating a 20-second clip (25 fps) of a still image for testing (amplitude encoded between [0, 1]) and an implant with a working frequency between 1-20 Hz (1 ms cathodic pulses).

Attached you can see two responses (working frequency 5Hz and 20Hz) of a fixed position within the percept. It is clear to me that the sensitivity drops faster for pulses with 20Hz, since the accumulated charge will increase faster than with 5Hz, however, in both cases the accumulated charge will linearly increase over time making the system insensitive in the long run.
(attached plots: wf5, wf20)

ElectrodeArray should store electrodes in a dictionary

Random aside: right now, access to individual electrodes is done either by a name, or by an index. In both cases, ultimately the access is mediated by the index (getitem calls to get_index etc.). It occurs to me that instead of a list, the electrodes could be stored in a dict, with either integer keys (the case where no names are provided) or the names as keys. Might simplify some of the code below (?). Would it have any unwanted consequences? We could still iterate over the dict items (in iter below).
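
A rough sketch of the idea (illustrative only, not the actual ElectrodeArray code): store electrodes in an OrderedDict keyed by name, fall back to a running integer key when no name is given, and let __getitem__ accept either a key or a positional index.

from collections import OrderedDict

class ElectrodeArray:
    def __init__(self):
        self.electrodes = OrderedDict()

    def add_electrode(self, electrode, name=None):
        # Fall back to a running integer key when no name is provided:
        key = name if name is not None else len(self.electrodes)
        self.electrodes[key] = electrode

    def __getitem__(self, item):
        # Direct lookup by key (name or integer key) first...
        if item in self.electrodes:
            return self.electrodes[item]
        # ...otherwise treat it as a positional index:
        return list(self.electrodes.values())[item]

    def __iter__(self):
        # Iterating over the dict items still works:
        return iter(self.electrodes.items())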

Remove ffmpeg/libav dependency

To use the video module (based on scikit-video), the user needs to install ffmpeg or libav, which is platform-dependent and annoying for both users and CI. Better to find a pip-installable solution.

imageio might be a good alternative (https://imageio.readthedocs.io/en/latest/index.html), and it's already required by scikit-image. There's an imageio-ffmpeg that is pip-installable and can do movies (https://imageio.readthedocs.io/en/latest/examples.html#convert-a-movie). Otherwise percepts in time could also be gifs.
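
For example (a sketch; the percept frames here are a random placeholder array, and the mp4 path assumes the imageio-ffmpeg plugin is installed):

import imageio
import numpy as np

frames = (np.random.rand(30, 64, 64) * 255).astype(np.uint8)  # placeholder percept frames
imageio.mimwrite('percept.mp4', frames, fps=25)  # uses the pip-installable imageio-ffmpeg backend
imageio.mimwrite('percept.gif', frames, fps=25)  # gif fallback needs no ffmpeg at all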

Other package suggestions welcome.

Add datasets subpackage

Add a "datasets" subpackage modeled after scikit-learn that allows us to:

  1. Download a CSV/HDF5 file from a URL and place it in data_path directory.
  2. Load the data from a local file in the data_path directory as a Pandas DataFrame.

Mimic the behavior of scikit-learn: By default, data_path is ~/pulse2percept_data. Users can overwrite this location with an environment variable PULSE2PERCEPT_DATA.

We'll need a new subpackage:

pulse2percept/datasets:

  • data/: a folder containing csv files with the data (you can use a dummy file when writing the code)
  • __init__.py: analogous to the other subpackages
  • setup.py: analogous to other subpackages, but make sure to config.add_data_dir('data')
  • base.py:
    • fetch_url(url, file_path): basically this
    • load_data(data_path): basically that, for CSV and HDF5
    • fetch_beyeler2019(): a function to load the data described in Beyeler et al. (2019)
      • Check if data_path/beyeler2019 exists
      • If it does not exist, download argus_shapes.h5 from https://osf.io/6v2tb into data_path/beyeler2019 using fetch_url
      • Load the data as Pandas DataFrame using load_data. You can use function hdf2df from here
  • tests/: a folder with tests for fetch_url and load_data

Also to come: how to update the docs
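
A bare-bones sketch of base.py along those lines (a simplification: pandas.read_hdf stands in for the hdf2df helper mentioned above, error handling is omitted, and the OSF URL is used exactly as given):

import os
import urllib.request
import pandas as pd

def get_data_dir():
    # Default to ~/pulse2percept_data unless PULSE2PERCEPT_DATA is set:
    return os.environ.get('PULSE2PERCEPT_DATA',
                          os.path.join(os.path.expanduser('~'), 'pulse2percept_data'))

def fetch_url(url, file_path):
    """Download ``url`` and save it to ``file_path``."""
    os.makedirs(os.path.dirname(file_path), exist_ok=True)
    urllib.request.urlretrieve(url, file_path)

def load_data(file_path):
    """Load a CSV or HDF5 file into a Pandas DataFrame."""
    if file_path.endswith('.csv'):
        return pd.read_csv(file_path)
    return pd.read_hdf(file_path)

def fetch_beyeler2019():
    """Download (if necessary) and load the data from Beyeler et al. (2019)."""
    file_path = os.path.join(get_data_dir(), 'beyeler2019', 'argus_shapes.h5')
    if not os.path.exists(file_path):
        fetch_url('https://osf.io/6v2tb', file_path)  # URL as given above
    return load_data(file_path)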

Avoid 'pip cannot uninstall a distutils installed project'

Provide an easy way to uninstall p2p, so that the annoying pip error ("Cannot uninstall p2p. It is a distutils installed project and thus we cannot accurately determine which files belong to it, which would lead to only a partial uninstall") can be avoided. It's especially annoying for developers when switching between multiple versions of p2p.

IMO manually deleting from site-packages is not a user-friendly option. Too cumbersome.

Is it as easy as running pip install -e . during make instead of python setup.py install? What about building the Cython extension?

See also:

Specify electrode type for ElectrodeGrid

ElectrodeGrid currently assumes all electrodes are DiskElectrode objects.

To be more flexible, constructor needs to accept an electrode type. Thing is, DiskElectrode needs a radius, but PointSource does not. Therefore, we also need a way to specify optional keyword arguments that can be passed to the electrode constructor.
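
One possible shape for this (a sketch with stand-in classes, not the real pulse2percept signatures): take an etype class plus **kwargs that are forwarded to every electrode constructor, so DiskElectrode's radius can be passed as r while PointSource needs nothing extra.

class PointSource:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

class DiskElectrode(PointSource):
    def __init__(self, x, y, z, r):
        super().__init__(x, y, z)
        self.r = r

class ElectrodeGrid:
    def __init__(self, shape, spacing, etype=PointSource, **electrode_kwargs):
        rows, cols = shape
        self.electrodes = {}
        for i in range(rows):
            for j in range(cols):
                # Extra keyword arguments (e.g., r=... for DiskElectrode) are
                # simply forwarded to the electrode constructor:
                self.electrodes[(i, j)] = etype(j * spacing, i * spacing, 0,
                                                **electrode_kwargs)

grid = ElectrodeGrid((2, 3), 100, etype=DiskElectrode, r=30)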

Automate wheelhouse build & PyPI upload

Automate wheel building for different platforms via CI (including Windows), collect artifacts, upload them to PyPI. Look at scikit-learn for inspiration.

Convert pulse trains to new Stimulus format

Need to convert all the pulse trains into the new Stimulus format:

  • The list of function arguments should be simplified and used consistently across pulse types.
  • They can be created in compressed format (specifying only the signal edges)
  • tsample is no longer needed

An example:

import numpy as np
import pulse2percept as p2p


class MonophasicPulse(p2p.stimuli.Stimulus):
    def __init__(self, amp, pulse_dur, delay_dur=0, stim_dur=None, dt=1e-12):
        """Monophasic pulse
        
        Parameters
        ----------
        amp : float
            Current amplitude (uA). Negative currents: cathodic, positive: anodic.
        pulse_dur : float
            Pulse duration (ms)
        delay_dur : float
            Delay duration (ms). Zeros will be inserted at the beginning of the
            stimulus to deliver the pulse after ``delay_dur`` ms.
        stim_dur : float, optional, default: ``pulse_dur+delay_dur``
            Stimulus duration (ms). Zeros will be inserted at the end of the
            stimulus to make the stimulus last ``stim_dur`` ms.
        dt : float, optional, default: 1e-12 ms
            Sampling time step (ms).
            
        Examples
        --------
        A single cathodic pulse (1ms pulse duration at 20uA) delivered after
        2ms and embedded in a stimulus that lasts 10ms overall:
        >>> from pulse2percept.stimuli import MonophasicPulse
        >>> pulse = MonophasicPulse(-20, 1, delay_dur=2, stim_dur=10)
        """
        if stim_dur is None:
            stim_dur = pulse_dur + delay_dur
        assert stim_dur >= pulse_dur + delay_dur
        # We only need to store the time points at which the stimulus changes.
        # For example, the pulse amplitude is zero at time=``delay_dur`` and
        # ``amp`` at time=``delay_dur+dt``:
        data = np.array([0, 0, amp, amp, 0, 0]).reshape((1, -1))
        time = np.array([0, delay_dur,
                         delay_dur + dt, delay_dur + pulse_dur,
                         delay_dur + pulse_dur + dt, stim_dur])
        super().__init__(data, time=time, compress=True)

The new classes can replace the old classes in pulse2percept/stimuli/pulse_trains.py.
TimeSeries can be removed.

This is also a good time to convert all times to ms (see issue #170).

We need the following:

  • MonophasicPulse(self, amp, pulse_dur, delay_dur=0, stim_dur=None, dt=1e-12)
  • BiphasicPulse(self, amp, pulse_dur, polarity='cathodic-first', delay_dur=0, stim_dur=None, dt=1e-12)
  • PulseTrain
  • bursts? triplets? asymmetric?
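
Following the same pattern, a sketch of the BiphasicPulse from the list above (reusing the imports from the MonophasicPulse example; illustrative rather than the final implementation):

class BiphasicPulse(p2p.stimuli.Stimulus):
    def __init__(self, amp, pulse_dur, polarity='cathodic-first', delay_dur=0,
                 stim_dur=None, dt=1e-12):
        # First phase is negative (cathodic) unless polarity says otherwise:
        amp = -abs(amp) if polarity == 'cathodic-first' else abs(amp)
        if stim_dur is None:
            stim_dur = 2 * pulse_dur + delay_dur
        assert stim_dur >= 2 * pulse_dur + delay_dur
        # Store only the signal edges: 0 -> amp -> -amp -> 0
        on = delay_dur
        mid = delay_dur + pulse_dur
        off = delay_dur + 2 * pulse_dur
        data = np.array([0, 0, amp, amp, -amp, -amp, 0, 0]).reshape((1, -1))
        time = np.array([0, on, on + dt, mid, mid + dt, off, off + dt, stim_dur])
        super().__init__(data, time=time, compress=True)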

Add Percept object

Provide a Percept object that provides an easy plot method, automatically flips up/down if necessary (from retinal coordinates to dva), and has labeled axes. Probably easiest to build off of Stimulus (PR #119).

Fix the size of ElectrodeGrid

ElectrodeGrid should not inherit the add_electrode or remove_electrode method from ElectrodeArray, because then it's no longer a grid and shape would have the wrong value.

One solution is to move add_electrode (and remove_electrode, #147) to a mixin.

On the other hand, it would be really nice to be able to build PRIMA from a hex grid, then remove some of the electrodes around the edges.

Inconsistent tsample values break simulation

This simple example:

import pulse2percept as p2p
sim = p2p.Simulation(p2p.implants.ArgusII())
pulse = {'F2': p2p.stimuli.PulseTrain(0.01 / 1000)}
percept = sim.pulse2percept(pulse)

breaks during the last line with "For now, all pulse trains must have the same sampling time step as the ganglion cell layer. In the future, this requirement might be relaxed."
(The problem is that the ganglion cell layer uses tsample=0.005/1000 per default. And PulseTrain doesn't have such a default value.)

First of all, this error message is very confusing. As a user I've never even heard of a ganglion cell layer. Let alone touched it.

Second, if every method must use the same tsample, specify it once at the global level (maybe in p2p.Simulation constructor?). Otherwise at least have all methods use the same default value for tsample...

Add scoreboard/axon map example to gallery

Add two examples to the gallery:

  • how to build and run the scoreboard model
    • set rho=20, xystep=0.05, xrange/yrange from -8 to 8
    • use Alpha-IMS, centered over the fovea
    • use stimulus with current 1 sent to every electrode
      • imshow the result
    • use a grayscale image of the letter A as image
      • imshow the result
  • how to build and run the axon map model
    • set rho=100, axlambda=100, xystep=0.25
    • use Argus II, centered over the fovea, angled at -45deg
    • use stimulus with current 1 sent to every electrode
    • imshow the result

This might take a while to run on your laptop. Try running on the cluster.
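
A condensed sketch of the scoreboard half (parameters taken from the list above; the stimulus assignment and percept handling are assumptions and may differ between p2p versions):

import numpy as np
import matplotlib.pyplot as plt
import pulse2percept as p2p

model = p2p.models.ScoreboardModel(rho=20, xystep=0.05,
                                   xrange=(-8, 8), yrange=(-8, 8))
model.build()

implant = p2p.implants.AlphaIMS()  # centered over the fovea by default
# Drive every electrode with a current of 1 (the exact stim API may differ):
implant.stim = np.ones(len(implant.earray.electrodes))

percept = model.predict_percept(implant)
# Depending on the p2p version, this is a NumPy array (imshow it directly)
# or a Percept object with its own plot method:
plt.imshow(np.squeeze(percept), cmap='gray')
plt.show()

The axon map half would follow the same pattern with AxonMapModel(rho=100, axlambda=100, xystep=0.25) and an Argus II rotated by -45 degrees.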

DOC: add developer's guide

Explain how the code is organized, how to contribute, how to add new subpackages, how to write tests (see issue #78), how to build everything.

Add basic usage examples to the gallery

Implants:

  • How to build a custom p2p.implants.ElectrodeArray (PR #146)
  • How to build a rectangular/hexagonal p2p.implants.ElectrodeGrid (PR #162)

Stimuli (@ezgirmak):

  • How to build and visualize monophasic/biphasic pulses and some standard pulse trains using p2p.stimuli.Stimulus and the pulse trains in p2p.stimuli.pulse_trains (PR #133).
  • How to build and visualize your own pulse train (sinusoidal: PR #178, others: #180)

Models:

  • ScoreboardModel (PR #96)
  • AxonMapModel (PR #96)
  • Nanduri2012Model (PR #168)
  • Horsager2009Model (PR #180)

How to:

  • On your own fork, make sure you have the upstream repo (git remote add upstream https://github.com/pulse2percept/pulse2percept.git)
  • Check out the latest version of the master branch (git checkout master && git pull upstream master)
  • Create a new branch off of it (git checkout -b my-branch)
  • Create a plot_*.py file in doc/examples/{implants|stimuli}, give it a meaningful name (but it has to start with plot_ for Sphinx Gallery to work)
  • Start the .py file with a """ block that contains your verbal introduction to the example. You can use Sphinx syntax
  • Intersperse your Python code. Add follow-up narrative in # comment blocks. Have a look at doc/examples/models/plot_axonmap.py or https://sphinx-gallery.github.io/stable/syntax.html for the structure/syntax
  • You can build the docs with make doc from the root directory or with make html from within doc/
  • git add/commit, make a PR to the upstream master

Cython parallelism with OpenMP

In my experience, Joblib and Cython haven't worked well together. Maybe I'm doing something wrong, or it's the Python overhead that bites me, but I easily get the desired speedup by using OpenMP to parallelize Cython code via prange.

utils.parfor parallelizes an outer loop either via Joblib or Dask. I think the sensible thing to do would be to add a third option, OpenMP (via prange).
We should also add some benchmark scripts to compare between Joblib, Dask, and OpenMP.
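
As a sketch of the OpenMP option (Cython; the function and array names are made up for illustration, and the extension would still need to be compiled with -fopenmp or the platform equivalent):

# cython: boundscheck=False, wraparound=False
from cython.parallel import prange

def row_sums(double[:, ::1] stim, double[::1] out):
    """Toy kernel: per-pixel sum over time, parallelized across pixels."""
    cdef Py_ssize_t i, t
    cdef Py_ssize_t n_px = stim.shape[0]
    cdef Py_ssize_t n_t = stim.shape[1]
    # The outer loop is distributed across OpenMP threads; the GIL is released:
    for i in prange(n_px, nogil=True, schedule='static'):
        out[i] = 0.0
        for t in range(n_t):
            out[i] += stim[i, t]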

Integrating OpenMP is not without issues though: scikit-learn/scikit-learn#7650. We might have to subclass the Extensions to get OpenMP installed correctly across platforms. Also not sure how to package that on PyPI. So it's a future milestone for now. Maybe someone else can help.

12-core speedup over single-core NumPy for the Horsager model:

ENH: Add new API for temporal models

Use xarray to store time series, build Stimulus object, pass to ProsthesisSystem. Model must be smart about handling different time steps for simulation, stimulus, and percept.
