Parallax

Parallax is a graphical user interface designed to streamline the process of setting up and performing acute in vivo electrophysiology experiments.

Documentation: parallax.readthedocs.io.

Prerequisites

  • Python 3.8 (recommended: install via Anaconda or Miniconda)
    • Python 3.8 is required by the Spinnaker library.
  • PySpin (for Linux and macOS users)

Installation

  1. Create a virtual environment with Python 3.8 and activate it:
  • On Windows:
conda create -n parallax python=3.8
conda activate parallax
  2. To install Parallax into a fresh environment, run:
pip install parallax-app

To upgrade to the latest version, run:

pip install parallax-app --upgrade
  3. To install the camera interface:
pip install parallax-app[camera]

Additional Setup for Linux and macOS

  • Download the Spinnaker SDK package for your system from the Teledyne FLIR website
  • Follow the installation instructions in the SDK's README
  • Install the Python bindings (PySpin) provided alongside the SDK package

Running Parallax

python -m parallax

For developers:

  1. Clone the repository:
git clone https://github.com/AllenNeuralDynamics/parallax.git
  2. Install the package along with the dev dependencies:
pip install -e .[dev]

Documentation

  1. To install the documentation dependencies, run:
pip install -e .[docs]
  2. Then, to build the HTML documentation, run:
sphinx-build -b html docs/ docs/_build

Support and Contribution

If you encounter any problems or would like to contribute to the project, please submit an Issue on GitHub.

License

Parallax is licensed under the MIT License. For more details, see the LICENSE file.

Contributors

campagnola, chronopoulos, hannalee2, jsiegle, yonibrowning

Issues

Expose more controls to the user

Currently, a number of important functions (such as camera focusing and stage translation) require modifier keys and/or middle mouse button scrolls/clicks. It's fine to keep these shortcuts, but there should also be buttons or drop-down menus that can be used instead.
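
As a rough sketch of the idea, assuming the UI is Qt-based (PyQt5 here, with a placeholder handler), a shortcut-only action could also be exposed as an ordinary button:

import sys
from PyQt5.QtWidgets import QApplication, QPushButton, QVBoxLayout, QWidget

def focus_camera():
    print("focus camera")  # placeholder for the real focus handler

app = QApplication(sys.argv)
window = QWidget()
layout = QVBoxLayout(window)

# mirror the keyboard/mouse shortcut as a visible control
button = QPushButton("Focus camera")
button.clicked.connect(focus_camera)
layout.addWidget(button)

window.show()
sys.exit(app.exec_())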

Approaches to automated probe detection

Using a simple benchmarking dataset (single probe on a plain background), I tested two strategies for automated probe detection:

Template matching

This one was pretty straightforward: I just used the manual annotation values to pull out a 100 x 100 pixel template from the difference image (created by subtracting a second image from the original, after applying a slight Gaussian blur to each). I tried making the template from both the raw difference image and a thresholded version of it, but the former was much more reliable.

This approach worked with almost every combination of images I tested (it did make a few mistakes, but this could be mitigated by using two images with less probe movement between them). But of course this relies on manual identification of the tip location. That's why I also tested the line fitting method, which doesn't require any prior info.
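
A minimal sketch of this pipeline using OpenCV (the file names and the annotated tip position are placeholders):

import cv2

# blur both frames slightly, then take their difference (as described above)
img = cv2.GaussianBlur(cv2.imread("target.png", cv2.IMREAD_GRAYSCALE), (5, 5), 0)
base = cv2.GaussianBlur(cv2.imread("baseline.png", cv2.IMREAD_GRAYSCALE), (5, 5), 0)
diff = cv2.absdiff(img, base)

# cut a 100 x 100 template around the manually annotated tip
tip_x, tip_y = 512, 384  # placeholder annotation
template = diff[tip_y - 50:tip_y + 50, tip_x - 50:tip_x + 50]

# locate the best match (run against the difference image of a new frame pair)
result = cv2.matchTemplate(diff, template, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(result)
tip_estimate = (max_loc[0] + 50, max_loc[1] + 50)  # center of the matched patch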

Line fitting

This approach uses the Hough transform from scikit-image to find the probe axis in a difference image. First, I subtracted any baseline image from the target image, then selected a threshold that pulls out a mask of the probe. Choosing the right threshold is the most important part -- if you get that right, it's very reliable. I'm currently using an iterative method to find the lowest threshold that makes a large enough mask, but this could definitely be optimized.
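
A sketch of one such iteration, reusing the difference image from the sketch above (min_pixels is a made-up parameter):

from skimage.transform import hough_line, hough_line_peaks

def probe_mask(diff, min_pixels=2000):
    # step the threshold down from the maximum until the mask is large enough
    for threshold in range(int(diff.max()), 0, -1):
        mask = diff > threshold
        if mask.sum() >= min_pixels:
            return mask
    raise ValueError("no threshold produced a large enough mask")

mask = probe_mask(diff)

# take the highest-scoring candidate line as the probe axis
hspace, angles, dists = hough_line(mask)
_, best_angles, best_dists = hough_line_peaks(hspace, angles, dists, num_peaks=1)
angle, dist = best_angles[0], best_dists[0]  # x*cos(angle) + y*sin(angle) = dist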

Once the threshold is set, I take the highest scoring candidate line from the Hough transform and select a block of pixels spanning that line. Looking at the standard deviation of pixels along an axis orthogonal to the probe makes it obvious where the probe tip begins. There are sometimes issues if the probe tip is at the edge of the field of view -- in this case, it could be helpful to use template matching as backup.
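
scikit-image's profile_line can compute that statistic directly: sample a band of pixels along the fitted line and reduce across its width with np.std. A sketch, where the line endpoints and the noise factor are placeholders:

import numpy as np
from skimage.measure import profile_line

# (r0, c0) and (r1, c1): endpoints of the fitted line, clipped to the image
r0, c0, r1, c1 = 0, 100, 767, 900  # placeholder endpoints
profile = profile_line(diff, (r0, c0), (r1, c1), linewidth=20, reduce_func=np.std)

# the deviation is high where the probe crosses the band and falls to
# background noise past the tip; the factor of 3 is a guess
on_probe = profile > 3 * np.median(profile)
tip_sample = np.nonzero(on_probe)[0].max()  # last sample still on the probe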

Conclusion

Both of these methods are promising, but will require more refinement. It would be great if both of these could be integrated into Parallax with some of the important parameters exposed so we can play around with real-time tracking.

Multi-camera viewer

Create an app for simultaneously viewing streams from an arbitrary number of FLIR cameras.

The following global settings should be accessible:

  • Start/stop all cameras
  • Start/stop synchronized recording
  • Capture one image from all cameras
  • Data directory
  • Layout (e.g. number of columns)

For each camera, the following settings should be accessible via popup menus:

  • Custom name text box
  • Serial number (select from available USB cameras)
  • Exposure (max = 1000 ms)
  • Gain
  • White balance
  • Contrast

Here's an example of what the UI could look like:

[figure: software mockup]

A JSON file should be automatically updated whenever any settings are changed, and should be saved along with any snapshots or recordings. This can be used to re-instantiate the exact same settings the next time the app is launched.
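
As a sketch of how that persistence might work (the schema and values here are made up):

import json

# hypothetical schema: global settings plus one entry per camera
settings = {
    "layout": {"columns": 2},
    "data_directory": "/data/recordings",
    "cameras": [
        {
            "name": "probe-cam-1",        # custom name
            "serial_number": "21049720",  # placeholder serial
            "exposure_ms": 125,           # capped at 1000 ms per the spec
            "gain": 10.0,
            "white_balance": "auto",
            "contrast": 1.0,
        },
    ],
}

def save_settings(path="camera_settings.json"):
    # called whenever any setting changes; a copy is saved next to
    # snapshots and recordings so a session can be re-instantiated
    with open(path, "w") as f:
        json.dump(settings, f, indent=2)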

Real-time motor coordinates overlay

The coordinates of each motor should be visible on the live image of the insertion system cameras.

This requires the following:

  • Identifying which probe is associated with which set of motors. This can be done manually or automatically.
  • Finding the probe tip and tracking it while it's in motion
  • Overlaying local X/Y/Z coordinates for the associated motor
  • Overlaying global X/Y/Z coordinates for the associated probe (once the coordinate transformation has been found)
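
For the overlay itself, a minimal sketch using OpenCV's putText, assuming the coordinates are supplied by the stage controller and the tip tracker:

import cv2

def draw_coordinates(frame, local_xyz, global_xyz=None):
    # overlay local motor coordinates in the top-left corner
    x, y, z = local_xyz
    cv2.putText(frame, f"local  X={x:.1f} Y={y:.1f} Z={z:.1f} um",
                (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    # global coordinates only once the coordinate transform is known
    if global_xyz is not None:
        gx, gy, gz = global_xyz
        cv2.putText(frame, f"global X={gx:.1f} Y={gy:.1f} Z={gz:.1f} um",
                    (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 255), 2)
    return frame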

Integrate Zaber motor control into the GUI

At a minimum, we need the following controls:

  • Real-time position display (can just be a number, but ideally this would show a diagram with the full range, set points, and current location)
  • Ability to store and load set points. Most importantly, we'll need the "experiment" position and the "insert/remove mouse" position, but there may be other points we want to save as well (e.g. for removing probes).
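
As a starting point, position readout and set-point moves might look like this with the zaber_motion library (the serial port and set-point values are placeholders):

from zaber_motion import Units
from zaber_motion.ascii import Connection

SET_POINTS_MM = {"experiment": 25.0, "insert/remove mouse": 0.0}  # placeholders

with Connection.open_serial_port("/dev/ttyUSB0") as connection:  # port is a guess
    device = connection.detect_devices()[0]
    axis = device.get_axis(1)

    # real-time position display (printed once here; the GUI would poll)
    print("position:", axis.get_position(Units.LENGTH_MILLIMETRES), "mm")

    # move to a stored set point
    axis.move_absolute(SET_POINTS_MM["experiment"], Units.LENGTH_MILLIMETRES)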
