
futuremmcore's Introduction

Micro-Manager

Micro-Manager is an application to control microscope hardware, such as cameras, xy-stages, filter wheels, etc. It includes a hardware abstraction layer written in C++ and a user interface written in Java (Swing).

Go to micro-manager.org for documentation and binary downloads.

For support, see Micro-Manager Community.

The Micro-Manager community welcomes you! For our governance structures, see the governance documentation.

Source code

This repository contains the Java projects that make up the Micro-Manager "MMStudio" GUI application. The device control layer is written in C++ and found in a separate repository, mmCoreAndDevices, which is currently a git submodule of this repository.

To checkout both repositories together:

git clone --recurse-submodules https://github.com/micro-manager/micro-manager.git

If you will be making changes to the code, make sure to enable pre-commit hooks as described in doc/pre-commit.md.

Branches

  • main - the main branch of development (Micro-Manager 2.x)
  • svn-mirror - git-svn mirror of the Micro-Manager 1.4 Subversion repository

Other branches are not official.

Developer information

For license information, please see doc/copyright.txt.

For build instructions, please see doc/how-to-build.md.

Additional information is available on the Micro-Manager website at https://micro-manager.org

Contributing

Start here: https://micro-manager.org/Building_and_debugging_Micro-Manager_source_code

futuremmcore's People

Contributors

henrypinkard, nanthony21

futuremmcore's Issues

Memory management + performance

Here is how the system currently works:

  • Images are copied from the camera's internal buffer into the MMCore circular buffer
  • Higher-level language wrappers (i.e. MMCoreJ and MMCorePy) then get access to this data by copying it into buffers in their own languages, using one of several functions (core.getTaggedImage, core.getImage, core.popNextImage, etc.)
  • In the case of Pycro-Manager, data are copied once more when passing through the ZeroMQ bridge from Java to Python

The cost of the copies for MMCoreJ and MMCorePy is modest, but it might still be limiting for the highest-performance applications. This is evidenced by the fact that (with the right hardware) we can readily achieve write speeds over 1 GB/s from the Java layer using AcqEngJ. The cost of the ZeroMQ transfer to Pycro-Manager is substantial, as it is limited to something on the order of 100 MB/s. It is unclear whether this is an inherent consequence of transferring across processes (unlike MMCoreJ and MMCorePy, which stay in the same process), or whether it is a result of ZeroMQ itself. There may be other implementations of ZeroMQ that could substantially improve speed.

MMKernel should be designed so that its implementation is as fast as possible for use cases like streaming data from a camera to a file, or streaming to RAM for real-time display. What is the best way to do this?

  1. Do everything at the level of C++. This may make sense for streaming to a file, but less so for streaming to RAM for visualization/analysis.
  2. Pass only pointers to addresses in memory for MMCoreJ and MMCorePy. This would require calling code to explicitly handle memory management (i.e. calling the destructor), so either the getImage() function could be called only once, or some kind of reference counting would be needed. This might allow fast performance with the existing Pycro-Manager setup: pointers to memory addresses could pass quickly through the Java layer, across ZMQ to Python, and be used to instantiate the memory of numpy arrays. However, this kind of shared inter-process memory is complicated, appears to be OS-specific, and may introduce its own overhead (e.g. from locking mechanisms) that could make it not worth it.
  3. Create a separate instance of the kernel for each process. To circumvent having to deal with inter-process communication, another possibility is having multiple instances of the Kernel, so that only the process that will ultimately use the memory copies it out of the kernel. For example, if you start an acquisition through MM or PM, a kernel wrapping the camera(s) gets created on the Java or Python side respectively, so the image is never copied between processes. This would require new data-writing code on the Python side, since our current fastest file writer, NDTiffStorage, is a Java library, but writing one would be fairly easy.

Remaining questions:

  • How much overhead is induced by the copy from the C++ layer to MMCoreJ and MMCorePy? Is it possible to get around this by wrapping native memory directly in Java/Python?
  • Is writing data-output code in pure C++ really necessary? Or could wrapped native memory plus Python/Java code achieve the same thing?

What do people use LabView for?

I know that LabView is often the go-to software for building customized systems from scratch that require complex synchronization. Having never used it myself, my big-picture understanding is that: 1) it provides some essential functionality; 2) in spite of this, people seem to perpetually complain about having to use it.

It would be really helpful if someone could explain it. What do you use it for? Why is it so essential? What are its most important features? What (if any) alternatives are there? Why is it so frustrating to use?

reduce use of properties for functionality covered by methods

I think the relationship between method calls and properties is confusing and can be improved in a future version of MM. Here are two examples that illustrate my point:
- Using a minimal (no properties defined) XY stage adapter, there is no way to define position limits. To set limits, one has to create corresponding properties ("PositionX" and "PositionY"), but these names are not standardized. As a result, any script that reads position limits may not be fully portable.
- A minimal adapter for a 1D stage device does not allow setting the position of the device from the GUI. To get access to the stage position, I have to create a corresponding property, which again can have any name, making scripts non-portable.

Perhaps a stricter set of guidelines for implementing a new adapter would be useful, something along the lines of:

  1. Implement all methods (including GetLimitsUm, Move, etc.), even if they simply return DEVICE_UNSUPPORTED_COMMAND. [All of these methods should be exposed at the wrapper level.]
  2. When implementing a device adapter, avoid creating properties whose functionality overlaps with existing method calls (such as PositionX for an XYStage).
  3. If properties are created, use names from the list of existing property names whenever possible. Such a list already exists in MMDeviceConstants.h, but perhaps it could be formally maintained and made visible to the wrapper layer as well.
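To make guideline 1 concrete, here is a hedged Python sketch; the base class and the return-code values are illustrative (the method names loosely echo the C++ device API), not the actual implementation:

```python
DEVICE_OK = 0
DEVICE_UNSUPPORTED_COMMAND = 11  # placeholder value, not the real constant

class StageAdapterBase:
    """Every adapter exposes the full method set, so calling code can
    probe capabilities instead of guessing at ad-hoc property names."""
    def get_limits_um(self):
        return DEVICE_UNSUPPORTED_COMMAND, None

    def move(self, pos_um):
        return DEVICE_UNSUPPORTED_COMMAND

class LimitedStage(StageAdapterBase):
    """An adapter that actually supports limits overrides the methods."""
    def __init__(self, lo, hi):
        self._limits = (lo, hi)
        self._pos = lo

    def get_limits_um(self):
        return DEVICE_OK, self._limits

    def move(self, pos_um):
        self._pos = pos_um
        return DEVICE_OK

# Portable calling code: no device-specific property names required
status, limits = LimitedStage(0.0, 1000.0).get_limits_um()
assert status == DEVICE_OK and limits == (0.0, 1000.0)
```

The point is that a script can check the return code of `get_limits_um()` and behave the same against any adapter, rather than depending on whether a particular vendor chose "PositionX" or some other property name.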

Discuss file structure

Do we want to keep this as one large document, or should we split each section out into its own file? They can then all be linked to from a table of contents.

Logging

@nicost:
"Calling applications may want to maintain their own logging system, so would like to get a stream, rather than have the logs written to the filesystem automatically. I don't think this is very complicated."

dealing with property overload

Since complex devices can have tens or even hundreds of properties, it would be nice to have a way to mark properties as "advanced," so that those properties can be easily hidden in the property browser, when creating configuration groups, etc. That way functionality can be exposed for advanced users but easily hidden from others. Implementing this requires changes to both the GUI and core layers.
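A minimal sketch of the idea (the `advanced` flag and class names are hypothetical, not an existing API):

```python
class Property:
    def __init__(self, name, value, advanced=False):
        self.name = name
        self.value = value
        self.advanced = advanced  # hidden by default in the property browser

props = [
    Property("Exposure", 10.0),
    Property("Binning", 1),
    Property("SensorClockPhase", 3, advanced=True),  # expert-only knob
]

# The default browser view hides advanced properties; a toggle reveals them
visible = [p.name for p in props if not p.advanced]
assert visible == ["Exposure", "Binning"]
```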

Device Hierarchy

Should the inheritance structure of device types be more hierarchical?

Something like this?
DataAcquisitionDevice:

  • Cameras (Acquisition2dDevice)
  • Photodiodes (Acquisition1dDevice)
  • Any other sensor that produces more data than is practical to transmit through properties.

PositionedDevice (indicates a requirement for an affine transform):

  • Camera
  • 2dStage
  • 1dStage
    ...
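The proposed taxonomy could be sketched like this in Python; note that a camera would sit under both branches. The class names come from the lists above or are illustrative, and the real hierarchy would live in the C++ device layer:

```python
class Device:
    pass

class DataAcquisitionDevice(Device):
    """Any sensor producing more data than properties can carry."""

class Acquisition2dDevice(DataAcquisitionDevice):
    """Cameras."""

class Acquisition1dDevice(DataAcquisitionDevice):
    """Photodiodes and similar point sensors."""

class PositionedDevice(Device):
    """Indicates a requirement for an affine transform."""
    def affine_transform(self):
        raise NotImplementedError

class Camera(Acquisition2dDevice, PositionedDevice):
    """A camera both acquires 2D data and has a position in space."""

class Stage1d(PositionedDevice):
    """A 1D stage is positioned but acquires no data."""

assert issubclass(Camera, DataAcquisitionDevice)
assert issubclass(Camera, PositionedDevice)
```

One design question this exposes is whether the hierarchy should use multiple inheritance (as above) or composition, since devices like cameras naturally belong to more than one category.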

New and crazy camera types

There are many new types of cameras coming online, like event-based cameras, lensless cameras, multi-color cameras (i.e. more than just RGB), point scanning systems that don't have physical sensors, and compound cameras that combine multiple physical sensors. It might be hard to design an API that accounts for all this in a rapidly changing field, but it is at least worth considering all the stuff that currently exists.

I'm hoping here that people with knowledge/experience with different camera types can share some thoughts:

  • What nontraditional cameras have you worked with?
  • How do they differ from a 2D monochrome or RGB sensor?
  • How do they break the assumptions of, or fail to conform to, the current camera API?
  • What unique challenges would be posed in creating a generic API for their use (e.g. how they are controlled, data format)?
  • Anything else worth considering?

individual device timeouts

It would be useful for every device to have its own timeout that can be set in the adapter depending on the method called.
For example, I'd like to be able to set the XY stage timeout to 1 s while setting motion parameters, then to 'enough time to move' for SetPosition commands. At the same time, the timeout for a different device, say a shutter, would remain unaffected.
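A minimal sketch of per-device, per-method timeouts (the class and method names here are illustrative, not an existing MMCore API):

```python
class DeviceTimeouts:
    """Per-device timeout table with a per-method override."""
    def __init__(self, default_s):
        self._default = default_s
        self._per_method = {}

    def set(self, method, seconds):
        self._per_method[method] = seconds

    def get(self, method):
        return self._per_method.get(method, self._default)

xy_stage = DeviceTimeouts(default_s=1.0)  # quick parameter-setting calls
xy_stage.set("SetPosition", 30.0)         # long moves get a generous timeout

shutter = DeviceTimeouts(default_s=1.0)   # other devices are unaffected

assert xy_stage.get("SetPosition") == 30.0
assert xy_stage.get("SetSpeed") == 1.0
assert shutter.get("SetOpen") == 1.0
```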

Enable (simple?) integration of external devices by means of widgets/GUI elements?

Hey, I don't know if this is the right place to ask for this "feature," but I was wondering if somebody has already asked for an easy way to integrate external hardware control using widgets/GUI snippets. In this YouTube video by Nico, he mentions that the laser engine by the Ries group has a "simple" way to do that.

I was wondering whether it's worth having external devices, e.g. encapsulated using LabThings (@rwb27), exposed through a small widget that enables simple device control. This could, for example, be a pump, a small pipetting robot, or whatever is interfaced through an external device adapter. The GUI definition could be part of LabThings, if I'm not mistaken (e.g. HTML)?

Cleaning up the Camera API

@henrypinkard:
There are a few overlapping things in the core that would need to get sorted out here: multi-channel cameras, multi-component cameras (RGB falls here, I think), and the multi-camera device adapter. The current calls for these are pretty confusing.

@nicost:
Agreed! But it does make sense that the same camera can operate in monochrome and RGB modes.

MMCore getInstance idea

Hey :)

I was reading the manuscript and you mention that it should be possible to have other implementations where pointers to the devices would be possible and I just thought of this idea:

  1. You could have a function in MMCore called getInstance or getPointer, for example, that takes the device name as a parameter. I think that would be very beneficial for languages like Python, where we could do something like:
  • MMCore add device Camera
  • MMCore initialize device Camera
  • MMCore getInstance Camera

And then one could directly associate that pointer with an object in an object-oriented language like Python. The only thing is that:

  2. It would be nice if it were then possible to call functions on that device directly through the pointer. In that case, a class could directly call, for example, camera.snapImage() instead of core.snapImage(). That would be nice for using multiple cameras, in my opinion.

I don't know if it makes sense. I hope I understood the scope correctly. But in this case, you could keep the same architecture as now, everything going through the MMCore but at the same time enable modularity from other languages.
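The proposal above can be sketched in Python; `Core`, `load_device`, and `get_instance` are the issue author's suggested shape, not an existing MMCore API, and `DemoCamera` is a stand-in for a real device:

```python
class DemoCamera:
    """Stand-in device; a real one would wrap a native adapter."""
    def snap_image(self):
        return b"\x00" * 4  # placeholder pixel data

class Core:
    def __init__(self):
        self._devices = {}

    def load_device(self, label, device):
        self._devices[label] = device

    def get_instance(self, label):
        # return a handle to the device itself, enabling device-level calls
        return self._devices[label]

core = Core()
core.load_device("Camera", DemoCamera())
cam = core.get_instance("Camera")
img = cam.snap_image()  # camera.snapImage() instead of core.snapImage()
```

With two cameras loaded under different labels, each handle would address its own device directly, avoiding the current pattern of switching the core's active camera between calls.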

Saving calibration / array properties

On the issue of providing a built-in way to store calibration coefficients.

@nanthony21:
I think this should be in the system config file. Aside from calibration data it would also be nice to have a way to specify default property values. This is currently done via the "System-Startup" config group which works well but seems kind of hacky.
If properties were extended to support arrays then calibration data could easily just be a standard property of a device adapter.

@henrypinkard:
Extending properties to support arrays would be really useful in many ways. What would be the challenges to implementing this?
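As a rough sketch of array-valued properties, assuming they would serialize into the system config file alongside scalar properties (the class and the exact config-line format here are illustrative):

```python
class DeviceProperties:
    """Property store extended to accept arrays as well as scalars."""
    def __init__(self):
        self._props = {}

    def set_property(self, name, value):
        self._props[name] = value

    def get_property(self, name):
        return self._props[name]

    def to_config_lines(self, device_label):
        # one possible way array values could flatten into a config file
        lines = []
        for name, value in self._props.items():
            if isinstance(value, (list, tuple)):
                value = ",".join(str(v) for v in value)
            lines.append(f"Property,{device_label},{name},{value}")
        return lines

dev = DeviceProperties()
dev.set_property("CalibrationCoefficients", [1.02, -0.003, 4.7e-6])
lines = dev.to_config_lines("Camera")
```

With this in place, calibration data would round-trip through the config file like any other property, and the hacky "System-Startup" group would no longer be needed for default values.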

Multi-axis stage handling

Apology: I've not yet written a device adapter so this is just based on my reading of the docs. I have started implementing the relevant API functions though, as we're trying to follow the MMCore API in another project, with the eventual goal of creating a device adapter.

I can see single-axis stages and two-axis stages are nicely supported, but I am confused how a 3-axis stage (or XY stage with integrated Z drive, such as the Prior and ASI systems I've used in the past) is supported. Implementing separate XY and Z move commands means that making a move in all 3 axes ends up being split into two sequential moves, one in X/Y and one in Z. Most of the stages I use could accept a command along the lines of move <x> <y> <z> but I don't see an MMCore command that would correspond to this.

The only way I can see to do this in a device adapter would be to implement some sort of delayed-execution scheme, so that two function calls (e.g. to setXYPosition and setPosition) get combined, probably involving two separate objects (one for the XY stage and one for Z) plus a third object to coordinate between them. That significantly complicates the task of implementing a device adapter, and probably results in sub-optimal performance (e.g. because I'll just implement sequential moves as it's easier). Looking at the existing device adapter for Prior stages, it seems that's exactly what happens; I think the code could be simplified if it was implemented as a 3-axis stage (which MMCore can then break down into XY and Z with little to no additional work from the device adapter).

Would it make sense to have a more general class that handles n-axis stages, and then implement nice ways to wrap that to extract single-axis stages? Doing things that way round feels like it should be simpler to me - but no doubt there are reasons for the way it's currently implemented...
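The n-axis idea could be sketched as follows: one stage object with an atomic multi-axis move, plus thin wrappers that expose subsets of its axes as conventional single- or dual-axis stages. All names here are hypothetical:

```python
class NAxisStage:
    """Hypothetical n-axis stage with a single atomic move command."""
    def __init__(self, n_axes):
        self.position = [0.0] * n_axes

    def move_to(self, *coords):
        # one hardware command, e.g. "move <x> <y> <z>"; None leaves an
        # axis where it is
        for axis, c in enumerate(coords):
            if c is not None:
                self.position[axis] = c

class AxisView:
    """Thin wrapper exposing a subset of axes as its own stage."""
    def __init__(self, stage, axes):
        self._stage = stage
        self._axes = axes

    def set_position(self, *coords):
        full = [None] * len(self._stage.position)
        for axis, c in zip(self._axes, coords):
            full[axis] = c
        self._stage.move_to(*full)

stage = NAxisStage(3)
xy = AxisView(stage, axes=(0, 1))  # looks like a standard XY stage
z = AxisView(stage, axes=(2,))     # looks like a standard Z drive
xy.set_position(10.0, 20.0)
z.set_position(5.0)
```

The wrapping direction matters: deriving XY and Z views from an n-axis device is straightforward, whereas coordinating independent XY and Z adapters into one atomic move requires the delayed-execution scheme described above.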

Figures on dark mode GitHub

The figures have transparent backgrounds and black text, which doesn't look good when the page has a dark background. It would probably be best not to use transparent backgrounds for the images.
