computational-cell-analytics / micro-sam

Segment Anything for Microscopy

Home Page: https://computational-cell-analytics.github.io/micro-sam/

License: MIT License

Python 17.69% Shell 0.12% Jupyter Notebook 82.16% Batchfile 0.03%
microscopy-images segment-anything segmentation cell-segmentation napari nuclei-segmentation mitochondria-segmentation

micro-sam's Introduction


Segment Anything for Microscopy

Tools for segmentation and tracking in microscopy built on top of Segment Anything. Segment and track objects in microscopy images interactively with a few clicks!

We implement napari applications for:

  • interactive 2d segmentation (Left: interactive cell segmentation)
  • interactive 3d segmentation (Middle: interactive mitochondria segmentation in EM)
  • interactive tracking of 2d image data (Right: interactive cell tracking)

If you run into any problems or have questions regarding our tool, please open an issue on GitHub or reach out via image.sc using the tag micro-sam and tagging @constantinpape.

Installation and Usage

Please check the documentation for details on how to install and use micro_sam. You can also watch the quickstart video or all video tutorials.

Contributing

We welcome new contributions!

If you are interested in contributing to micro-sam, please see the contributing guide. The first step is to discuss your idea in a new issue with the current developers.

Citation

If you are using this repository in your research, please cite

Related Projects

There are a few other napari plugins built around Segment Anything:

Compared to these, we support more applications (2d, 3d and tracking), and provide finetuning methods and finetuned models for microscopy data. WebKnossos also offers an integration of Segment Anything for interactive segmentation.

Release Overview

New in version 1.0.1

Use stable URL for model downloads and fix issues in state precomputation for automatic segmentation.

New in version 1.0.0

This release mainly fixes issues with the previous release and marks the napari user interface as stable.

New in version 0.5.0

This version includes a lot of new functionality and improvements. The most important changes are:

  • Re-implementation of the annotation tools. The tools are now implemented as a napari plugin.
  • Using our improved functionality for automatic instance segmentation in the annotation tools, including automatic segmentation for 3D data.
  • New widgets to use the finetuning and image series annotation functionality from napari.
  • Improved finetuned models for light microscopy and electron microscopy data that are available via bioimage.io.

New in version 0.4.1

  • Bugfix for the image series annotator. Previously the automatic segmentation did not work correctly.

New in version 0.4.0

  • Significantly improved model finetuning
  • Update the finetuned models for microscopy, see details in the doc
  • Training decoder for direct instance segmentation (not available via the GUI yet)
  • Refactored model download functionality using pooch

New in version 0.3.0

  • Support for ellipse and polygon prompts
  • Support for automatic segmentation in 3d
  • Training refactoring and speed-up of fine-tuning

New in version 0.2.1 and 0.2.2

  • Several bugfixes for the newly introduced functionality in 0.2.0.

New in version 0.2.0

  • Functionality for training / finetuning and evaluation of Segment Anything Models
  • Full support for our finetuned segment anything models
  • Improvements of the automated instance segmentation functionality in the 2d annotator
  • And several other small improvements

New in version 0.1.1

  • Fine-tuned segment anything models for microscopy (experimental)
  • Simplified instance segmentation menu
  • Menu for clearing annotations

New in version 0.1.0

  • We support tiling in all annotators to enable processing large images.
  • Implement new automatic instance segmentation functionality:
    • That is faster.
    • Enables interactive update of parameters.
    • And also works for large images by making use of tiled embeddings.
  • Implement the image_series_annotator for processing many images in a row.
  • Use the data hash in pre-computed embeddings to warn if the input data changes.
  • Create a simple GUI to select which annotator to start.
  • And made many other small improvements and fixed bugs.

New in version 0.0.2

  • We have added support for bounding box prompts, which provide better segmentation results than points in many cases.
  • Interactive tracking now uses a better heuristic to propagate masks across time, leading to better automatic tracking results.
  • And we have fixed several small bugs.

micro-sam's People

Contributors

ajinkya-kulkarni, anwai98, caroteu, claydugo, constantinpape, genevievebuckley, hmaarrfk, jookuma, lufre1, mobinapournemat, paulhfu, paulsp7, tc2fh


micro-sam's Issues

Errors when running micro_sam.annotator_2d

I have downloaded Anaconda and followed the given steps: cloned the micro-sam directory, created a CPU-based environment using mamba and, once activated, installed napari using 'pip install -e .'. When I run the command 'micro_sam.annotator_2d -i C:\Users\scmc\Desktop\test.png', the napari app opens up with the correct functionality. Still, I get 2 errors:

  1. napari.manifest -> 'skan' could not be imported: Could not find file 'napari.yaml' in module 'skan'
  2. RuntimeWarning: invalid value encountered in cast!

I just read a previous issue ticket, so I am not concerned about the first issue. However, every time I click on the image/software, I get the 2nd error. Is that a problem? If so, how would I fix it?

More info about my settings:

 active environment : sam
    active env location : C:\Users\scmc\anaconda3\envs\sam
            shell level : 2
       user config file : C:\Users\scmc\.condarc
 populated config files : C:\Users\scmc\.condarc
          conda version : 23.3.1
    conda-build version : 3.24.0
         python version : 3.10.9.final.0
       virtual packages : __archspec=1=x86_64
                          __cuda=12.1=0
                          __win=0=0
       base environment : C:\Users\scmc\anaconda3  (writable)
      conda av data dir : C:\Users\scmc\anaconda3\etc\conda
  conda av metadata url : None
           channel URLs : https://repo.anaconda.com/pkgs/main/win-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/win-64
                          https://repo.anaconda.com/pkgs/r/noarch
                          https://repo.anaconda.com/pkgs/msys2/win-64
                          https://repo.anaconda.com/pkgs/msys2/noarch
          package cache : C:\Users\scmc\anaconda3\pkgs
                          C:\Users\scmc\.conda\pkgs
                          C:\Users\scmc\AppData\Local\conda\conda\pkgs
       envs directories : C:\Users\scmc\anaconda3\envs
                          C:\Users\scmc\.conda\envs
                          C:\Users\scmc\AppData\Local\conda\conda\envs
               platform : win-64
             user-agent : conda/23.3.1 requests/2.28.1 CPython/3.10.9 Windows/10 Windows/10.0.19045
          administrator : False
             netrc file : None
           offline mode : False

Embedding creation or selection prompt in Windows 10 not functioning properly

When the embeddings do not match the dataset, the prompt to select or generate new embeddings comes up, but the functionality for Select different file and Create new file is not working as expected. This testing was done on a Windows 10 machine and the two videos attached below display the issue.

  1. After clicking on Select different file, the file explorer opens for selection but closes immediately; this happens four times, after which napari opens anyway, probably ignoring the changes to the embeddings.
  2. After clicking on Create new file, the file explorer opens, but entering a new name for the embeddings and clicking on Open does not work and the window appears again; this happens four times, after which napari opens anyway, probably ignoring the changes to the embeddings.
select.diff.file.mp4
create.new.file.mp4

Clear annotations doesn't reset tracking state

Clear annotations doesn't reset track_state and track_id. This leads to errors when there was a division segmentation before clearing the annotations, and it creates multiple track_id values in the dropdown which are meaningless after clearing the annotations.

Originally posted by @Sagnik700 in #64 (comment)

Annotating a series of images using your script does not work for RGB

Hi,
After the discussion we had on image sc forum, I tried the image_series_annotator and I have a few questions.
I made a folder composed of several RGB images. The first problem that I have is that they do not have all the same size which seems pretty natural. Hence np.stack cannot work. Moreover, when building the stack if images are RGB, the stack is 4D and the embeddings cannot be computed. I think I understand that for efficiency reasons, you try to compute the embeddings on the whole stack at the beginning, the problem is that it does not work in this type of case. Hence, in my case, wouldn't it be better to compute the embedding image per image, each time I load the next one, without using the stack ?
Thanks !
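For reference, a minimal sketch of computing the embeddings per image rather than from a stack could look like the following; the folder path, file pattern and embedding save paths are placeholders, not part of micro_sam:

import os
from glob import glob

from skimage.io import imread
from micro_sam import util

# Sketch: compute the embeddings image by image instead of stacking the series.
# "rgb_images" and the "*.png" pattern are placeholders for the user's folder.
predictor = util.get_sam_model()
for path in sorted(glob(os.path.join("rgb_images", "*.png"))):
    image = imread(path)  # each image keeps its own shape; a single RGB image is supported
    util.precompute_image_embeddings(
        predictor, image, save_path=path.replace(".png", "_embeddings.zarr"), ndim=2
    )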

Fine-tuning SAM

Hello team,

great work.

I have a question.
Did you happen to try and fine-tune the SAM model for your dataset?
If yes, what did you use as ground truth for the calculation of the loss for back-propagation during fine-tuning?

@constantinpape

Consistency in version numbering

I notice that although there is a 0.0.2 release, the version number in the setup.py file is still listed as 0.0.1.
This makes things a little confusing when checking whether you're using the latest version (this is how I noticed the discrepancy).

version="0.0.1",

Should the setup.py file be updated (or possibly even better, use a tool like setuptools-scm to handle this automatically)?
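For completeness, a minimal sketch of deriving the version from git tags with setuptools-scm could look like this (this is not the current micro-sam configuration, just an illustration of the suggestion):

# setup.py -- sketch of using setuptools-scm instead of a hard-coded version string
from setuptools import setup

setup(
    name="micro_sam",
    use_scm_version=True,               # derive the version from the latest git tag
    setup_requires=["setuptools_scm"],  # needed at build time
)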

Changing prompt label affects the last point because it is the selected one

This behavior is a bit counter-intuitive; as a user it seems more natural to change the label for the next point to be placed.
This can be particularly confusing for tracking or 3d segmentation, where the user often doesn't see that the label of the last selected point changes because they have already navigated to a different frame or slice.

I am, however, not sure how to change this, since the functionality is based on the napari point annotation example and changing it might require changes that make the functionality more brittle.

Training refactoring

We should refactor the training logic, so that the SamTrainer takes a prompt generator, which then takes care of all the iterative prompting logic (or not for a simpler training scheme, in which case also n_sub_iterations=0).

Now that we have the IterativePromptingGenerator that should be feasible. In order to make this extensible we need to define clean signatures for the __call__ method of the prompt generators, and for their return values.

And we should then also clean up and simplify the training and iterative prompt generator code. (There are still some inefficiencies in there.)
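As a starting point for the discussion, a hypothetical sketch of such a __call__ signature could look like this; the names and return types are assumptions, not the final micro_sam API:

from typing import Optional, Protocol, Tuple

import torch


class PromptGenerator(Protocol):
    # Hypothetical interface sketch for the prompt generators discussed above.
    def __call__(
        self,
        segmentation: torch.Tensor,                 # ground-truth object masks
        prediction: Optional[torch.Tensor] = None,  # previous prediction, for iterative prompting
    ) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]:
        """Return point coordinates, point labels and boxes derived from the masks."""
        ...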

Publish micro-sam via conda-forge?

Having a conda-forge package would be helpful to provide micro_sam as a library and enable using conda constructor (see #68). However, I am not quite sure how to deal with the segment-anything dependency, which is not available via conda yet.

Implement auto-segmentation with background

The behavior of the SAM auto-segmentation for objects with background is a bit unpredictable. Sometimes the background gets the id 0, sometimes it gets a segmentation id. We should add an (optional) flag to the function to ensure that the background is mapped to zero. (Internally we will check if there is a zero segment, and if not map the largest id to zero.)
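A minimal sketch of the proposed check (not the actual micro_sam implementation) could be:

import numpy as np


def map_background_to_zero(segmentation):
    # If no segment has id 0, relabel the largest segment (assumed to be background) to 0.
    seg = segmentation.copy()
    ids, counts = np.unique(seg, return_counts=True)
    if 0 not in ids:
        largest_id = ids[np.argmax(counts)]
        seg[seg == largest_id] = 0
    return seg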

Advanced box_extension

If 0 < box_extension < 1, interpret it as a fraction of the box shape and extend by it, to be more adaptive to the actual size of the object.
Use it in the annotators for 3d, tracking and auto segmentation (instead of hard-coding pixel values).
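A small sketch of how the fractional interpretation could work; the box order and helper name are assumptions:

def extend_box(box, box_extension):
    # Sketch: box given as (y_min, x_min, y_max, x_max).
    # Values in (0, 1) are interpreted as a fraction of the box shape,
    # values >= 1 as an absolute extension in pixels.
    y_min, x_min, y_max, x_max = box
    if 0 < box_extension < 1:
        ext_y = box_extension * (y_max - y_min)
        ext_x = box_extension * (x_max - x_min)
    else:
        ext_y = ext_x = box_extension
    return (y_min - ext_y, x_min - ext_x, y_max + ext_y, x_max + ext_x)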

Errors when segmenting 3D slices

I tried to use the 3D annotator to segment my 3D data, but the error "You either haven't provided any prompts or invalid prompts. The segmentation will be skipped." showed up when I clicked "Segment Slice". I only have these three points on this slice and there are no other points in other slices. How can I solve this?
(screenshot attached)

Create documentation

Create a proper documentation via some python documentation library.
The documentation should:

  • contain a library documentation that is computed from docstrings and type annotations
  • extra pages that explain the usage of the tool

I will take care of the content, but some help with setting up the doc infrastructure would be nice.

Division object is projected to next frame

Reported by @Sagnik700 in #64

Testing micro-sam (version 0.1.2.post1) with the MiaPaCa200x200 dataset, the daughter cell segmentation of a frame was visible in the previous frame, where the parent cell was annotated as division in track_state. Screenshot 1 (Frame 22) shows the parent cell segmentation in brown and the daughter cell segmentation in blue. The blue segmentation should not be visible in Screenshot 1, since it only appears from Screenshot 2 (Frame 23) onwards.

Steps to replicate the issue:

  1. Segment the exact cell which is highlighted as division in track_state in frame 22
  2. Segment the daughter cell, which is highlighted as track 2 in track_state in frame 23, and click on Track Object
  3. Go back to frame 22 to notice the issue

Screenshot 1
Screenshot 2

Originally posted by @Sagnik700 in #64 (comment)

Improve support for custom models

#98 will introduce fine-tuned models. (They are still preliminary and will be updated + specific models for electron microscopy will be added).
We should improve the support for it by:

  • Enable selecting them from the annotator GUI
  • Keeping track of which model was used for producing embedding file (extending what we currently have for the data signature)
    • We should also check for the tiling there
  • Possibly making the LM models the defaults

skan module error when running napari from command line

I have created the sam environment and pip-installed the sam module. I can run all the scripts named in the setup file, e.g. "micro_sam.precompute_embeddings -h", but I get an error when I try to start the scripts with an image file or with napari.

The command " napari -i .\test.jpg" gives me the error:

"12:34:57 ERROR napari.manifest -> 'skan' could not be imported: Could not find file 'napari.yaml' in module 'skan'"

What data format is used for micro_sam.annotator_tracking?

I love this tool very much after successfully installing it. I would like to try the tracking function, but I wonder what the input should be.

I tried to put in an ImageJ-created tiff stack, but I could only see one image. I also tried to put in a folder, but it says "ImageIO does not generally support reading folders. Limited support may be available via specific plugins. Specify the plugin explicitly using the plugin kwarg, e.g. `plugin='DICOM'`".

Thanks.

sam_annotator_2d.py automatic example data download

There is still one more example script, sam_annotator_2d.py, but it is silly to try and download a giant images.zip bundle just to unzip one image file. I think that should be discussed, and any work should happen in a separate PR.

Yes, I agree. I think I can host the example images separately, but I will need to double check with the data licenses whether this is ok.

Originally posted by @constantinpape in #35 (comment)

Relates to #35

The example data for the sam_annotator_2d.py demo script is a single image file from the LiveCell dataset.

It has a Creative Commons Attribution licence, see the license here

New object segmentations will overwrite older ones

I noticed that if a new object segmentation overlaps all or part of an older object, committing the new segmentation will overwrite the old one.

This is probably not a bug and is the desired behaviour. Making this more clearly understood in the user documentation is probably a useful idea.

Context: I shared some first impressions using micro-sam with Constantin. He asked me to write them down here, so we don't forget what was said.

Feedback installers

Hi @paulhfu ,
this issue is for collecting feedback on the installers.

For now I only have positive things to report:

@sushmita-nair and @Sagnik700: could you please test the windows installer?

@Marei33: could you please test the mac installer?

To download the installers please follow the link above and scroll to the bottom; there you can download the respective installer:

(screenshot attached)

Installation micro-sam for macOS M1

The troubleshooting didn't work for me, but I have some adaptations for the installation on the M1 that worked:

git clone https://github.com/computational-cell-analytics/micro-sam
mamba create -c conda-forge python=3.10 pip -n sam
conda activate sam
mamba install icu=70.1
mamba install qt-main=5.15.4
mamba install pyqt=5.15.4
mamba install pytorch::pytorch torchvision torchaudio -c pytorch
mamba install -c conda-forge python-elf tqdm
mamba install -c conda-forge napari
cd micro-sam 
pip install git+https://github.com/facebookresearch/segment-anything.git
pip install -e .

I needed to install icu, qt-main and pyqt before installing everything else.
SAM-yml.zip

napari gets slow for large images

Another possible improvement: if the label layers created by micro-sam ("current" and "committed") are entirely in-memory numpy arrays, we might get a performance benefit from using something like zarr instead.

Apologies if this suggestion is already something that happens, or incompatible with the code. I have not looked at this part of the code in detail yet.

I've noticed that editing (drawing/erasing) the labels takes a very long time for very large 2D images. I have not tried in 3D, but I guess something similar happens there.

Originally posted by @GenevieveBuckley in #34 (comment)

Renaming 'raw' layer breaks segmentation

Hi again,

I am playing with micro-sam a bit more today and found this minor issue: renaming the 'raw' layer to anything else breaks the "Segment all Objects" function, because it will still look for the "raw" layer. The (abbreviated) error is:

KeyError: "'raw' is not in list".

I am also attaching the full traceback below. Note that I think this is a low-priority issue, because most users will likely not rename their layers, and I am not sure how easy it is to remedy it.

Traceback
Run automatic segmentation with SAM. This can take a few minutes ...
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File .../lib/python3.10/site-packages/napari/utils/events/containers/_typed.py:140, in TypedMutableSequence.__getitem__(self=[<Image layer 'DAPI' at 0x1696a8eb0>, <Labels la...9d4610>, <Points layer 'prompts' at 0x33da3bd90>], key='raw')
    139 try:
--> 140     return self.__getitem__(self.index(key))
        key = 'raw'
        self = [<Image layer 'DAPI' at 0x1696a8eb0>, <Labels layer 'auto_segmentation' at 0x330020100>, <Labels layer 'committed_objects' at 0x33bb89570>, <Labels layer 'current_object' at 0x33d9d4610>, <Points layer 'prompts' at 0x33da3bd90>]
    141 except ValueError as e:

File .../lib/python3.10/site-packages/napari/utils/events/containers/_typed.py:228, in TypedMutableSequence.index(self=[<Image layer 'DAPI' at 0x1696a8eb0>, <Labels la...9d4610>, <Points layer 'prompts' at 0x33da3bd90>], value='raw', start=0, stop=None)
    226         return i
--> 228 raise ValueError(
        trans = <napari.utils.translations.TranslationBundle object at 0x13124af50>
        value = 'raw'
    229     trans._(
    230         "{value!r} is not in list",
    231         deferred=True,
    232         value=value,
    233     )
    234 )

ValueError: 'raw' is not in list

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
File .../lib/python3.10/site-packages/psygnal/_signal.py:972, in SignalInstance._run_emit_loop(self=<SignalInstance 'changed' on PushButton(value=False, annotation=None, name='call_button')>, args=(False,))
    971 try:
--> 972     caller.cb(args)
        caller = <psygnal._weak_callback._StrongFunction object at 0x169631c00>
        args = (False,)
    973 except Exception as e:

File .../lib/python3.10/site-packages/psygnal/_weak_callback.py:268, in _StrongFunction.cb(self=<psygnal._weak_callback._StrongFunction object>, args=())
    267     args = args[: self._max_args]
--> 268 self._f(*self._args, *args, **self._kwargs)
        args = ()
        self._f = <function FunctionGui.__init__.<locals>._disable_button_and_call at 0x16962c550>
        self = <psygnal._weak_callback._StrongFunction object at 0x169631c00>
        self._args = ()
        self._kwargs = {}

File .../lib/python3.10/site-packages/magicgui/widgets/_function_gui.py:218, in FunctionGui.__init__.<locals>._disable_button_and_call()
    217 try:
--> 218     self.__call__()
        self = <FunctionGui autosegment_widget(v: napari.viewer.Viewer = Viewer(axes=Axes(visible=False, labels=True, colored=True, dashed=False, arrows=True), camera=Camera(center=(0.0, 1070.4865548651428, 525.4334832457411), zoom=1.2841762427452341, angles=(0.0, 0.0, 90.0), perspective=0.0, interactive=True), cursor=Cursor(position=(1527.1995960807803, 1304.1428168946877), scaled=True, size=1, style=<CursorStyle.CROSSHAIR: 'crosshair'>), dims=Dims(ndim=2, ndisplay=2, last_used=0, range=((0.0, 2160.0, 1.0), (0.0, 2160.0, 1.0)), current_step=(1080, 1080), order=(0, 1), axis_labels=('0', '1')), grid=GridCanvas(stride=1, shape=(-1, -1), enabled=False), layers=[<Image layer 'DAPI' at 0x1696a8eb0>, <Labels layer 'auto_segmentation' at 0x330020100>, <Labels layer 'committed_objects' at 0x33bb89570>, <Labels layer 'current_object' at 0x33d9d4610>, <Points layer 'prompts' at 0x33da3bd90>], scale_bar=ScaleBar(visible=False, colored=False, color=<class 'numpy.ndarray'> (4,) float32, ticks=True, position=<Position.BOTTOM_RIGHT: 'bottom_right'>, font_size=10.0, box=False, box_color=<class 'numpy.ndarray'> (4,) float32, unit=None), text_overlay=TextOverlay(visible=False, color=<class 'numpy.ndarray'> (4,) float32, font_size=10.0, position=<TextOverlayPosition.TOP_LEFT: 'top_left'>, text=''), overlays=Overlays(interaction_box=InteractionBox(points=None, show=False, show_handle=False, show_vertices=False, selection_box_drag=None, selection_box_final=None, transform_start=<napari.utils.transforms.transforms.Affine object at 0x16cb0af80>, transform_drag=<napari.utils.transforms.transforms.Affine object at 0x16cb0aec0>, transform_final=<napari.utils.transforms.transforms.Affine object at 0x16cb0ae60>, transform=<napari.utils.transforms.transforms.Affine object at 0x3307c9f90>, allow_new_selection=True, selected_vertex=None)), help='use <3> for select points, use <4> for pan/zoom', status='', tooltip=Tooltip(visible=False, text=''), theme='dark', title='napari', mouse_over_canvas=False, mouse_move_callbacks=[<function InteractionBoxMouseBindings.initialize_mouse_events.<locals>.mouse_move at 0x3307c1630>], mouse_drag_callbacks=[<function InteractionBoxMouseBindings.initialize_mouse_events.<locals>.mouse_drag at 0x3307c1900>], mouse_double_click_callbacks=[], mouse_wheel_callbacks=[<function dims_scroll at 0x14fbd1900>], _persisted_mouse_event={}, _mouse_drag_gen={}, _mouse_wheel_gen={}, keymap={'Shift': <function InteractionBoxMouseBindings.initialize_key_events.<locals>.hold_to_lock_aspect_ratio at 0x3307c2440>, 'Control-Shift-R': <function InteractionBoxMouseBindings._reset_active_layer_affine at 0x33185c9d0>, 'Control-Shift-A': <function InteractionBoxMouseBindings._transform_active_layer at 0x33185c8b0>, 'S': <function annotator_2d.<locals>._segmet at 0x33dd7b400>, 'C': <function annotator_2d.<locals>._commit at 0x33dd7b520>, 'T': <function annotator_2d.<locals>._toggle_label at 0x33dd7b370>, 'Shift-C': <function annotator_2d.<locals>.clear_prompts at 0x33dd7b0a0>}), method: str = 'default')>
    219 finally:

File .../lib/python3.10/site-packages/magicgui/widgets/_function_gui.py:331, in FunctionGui.__call__(self=<FunctionGui autosegment_widget(v: napari.viewer...mpts at 0x33dd7b0a0>}), method: str = 'default')>, update_widget=False, *args=(), **kwargs={})
    330 with _function_name_pointing_to_widget(self):
--> 331     value = self._function(*bound.args, **bound.kwargs)
        self = <FunctionGui autosegment_widget(v: napari.viewer.Viewer = Viewer(axes=Axes(visible=False, labels=True, colored=True, dashed=False, arrows=True), camera=Camera(center=(0.0, 1070.4865548651428, 525.4334832457411), zoom=1.2841762427452341, angles=(0.0, 0.0, 90.0), perspective=0.0, interactive=True), cursor=Cursor(position=(1527.1995960807803, 1304.1428168946877), scaled=True, size=1, style=<CursorStyle.CROSSHAIR: 'crosshair'>), dims=Dims(ndim=2, ndisplay=2, last_used=0, range=((0.0, 2160.0, 1.0), (0.0, 2160.0, 1.0)), current_step=(1080, 1080), order=(0, 1), axis_labels=('0', '1')), grid=GridCanvas(stride=1, shape=(-1, -1), enabled=False), layers=[<Image layer 'DAPI' at 0x1696a8eb0>, <Labels layer 'auto_segmentation' at 0x330020100>, <Labels layer 'committed_objects' at 0x33bb89570>, <Labels layer 'current_object' at 0x33d9d4610>, <Points layer 'prompts' at 0x33da3bd90>], scale_bar=ScaleBar(visible=False, colored=False, color=<class 'numpy.ndarray'> (4,) float32, ticks=True, position=<Position.BOTTOM_RIGHT: 'bottom_right'>, font_size=10.0, box=False, box_color=<class 'numpy.ndarray'> (4,) float32, unit=None), text_overlay=TextOverlay(visible=False, color=<class 'numpy.ndarray'> (4,) float32, font_size=10.0, position=<TextOverlayPosition.TOP_LEFT: 'top_left'>, text=''), overlays=Overlays(interaction_box=InteractionBox(points=None, show=False, show_handle=False, show_vertices=False, selection_box_drag=None, selection_box_final=None, transform_start=<napari.utils.transforms.transforms.Affine object at 0x16cb0af80>, transform_drag=<napari.utils.transforms.transforms.Affine object at 0x16cb0aec0>, transform_final=<napari.utils.transforms.transforms.Affine object at 0x16cb0ae60>, transform=<napari.utils.transforms.transforms.Affine object at 0x3307c9f90>, allow_new_selection=True, selected_vertex=None)), help='use <3> for select points, use <4> for pan/zoom', status='', tooltip=Tooltip(visible=False, text=''), theme='dark', title='napari', mouse_over_canvas=False, mouse_move_callbacks=[<function InteractionBoxMouseBindings.initialize_mouse_events.<locals>.mouse_move at 0x3307c1630>], mouse_drag_callbacks=[<function InteractionBoxMouseBindings.initialize_mouse_events.<locals>.mouse_drag at 0x3307c1900>], mouse_double_click_callbacks=[], mouse_wheel_callbacks=[<function dims_scroll at 0x14fbd1900>], _persisted_mouse_event={}, _mouse_drag_gen={}, _mouse_wheel_gen={}, keymap={'Shift': <function InteractionBoxMouseBindings.initialize_key_events.<locals>.hold_to_lock_aspect_ratio at 0x3307c2440>, 'Control-Shift-R': <function InteractionBoxMouseBindings._reset_active_layer_affine at 0x33185c9d0>, 'Control-Shift-A': <function InteractionBoxMouseBindings._transform_active_layer at 0x33185c8b0>, 'S': <function annotator_2d.<locals>._segmet at 0x33dd7b400>, 'C': <function annotator_2d.<locals>._commit at 0x33dd7b520>, 'T': <function annotator_2d.<locals>._toggle_label at 0x33dd7b370>, 'Shift-C': <function annotator_2d.<locals>.clear_prompts at 0x33dd7b0a0>}), method: str = 'default')>
        bound = <BoundArguments (v=Viewer(axes=Axes(visible=False, labels=True, colored=True, dashed=False, arrows=True), camera=Camera(center=(0.0, 1070.4865548651428, 525.4334832457411), zoom=1.2841762427452341, angles=(0.0, 0.0, 90.0), perspective=0.0, interactive=True), cursor=Cursor(position=(1527.1995960807803, 1304.1428168946877), scaled=True, size=1, style=<CursorStyle.CROSSHAIR: 'crosshair'>), dims=Dims(ndim=2, ndisplay=2, last_used=0, range=((0.0, 2160.0, 1.0), (0.0, 2160.0, 1.0)), current_step=(1080, 1080), order=(0, 1), axis_labels=('0', '1')), grid=GridCanvas(stride=1, shape=(-1, -1), enabled=False), layers=[<Image layer 'DAPI' at 0x1696a8eb0>, <Labels layer 'auto_segmentation' at 0x330020100>, <Labels layer 'committed_objects' at 0x33bb89570>, <Labels layer 'current_object' at 0x33d9d4610>, <Points layer 'prompts' at 0x33da3bd90>], scale_bar=ScaleBar(visible=False, colored=False, color=<class 'numpy.ndarray'> (4,) float32, ticks=True, position=<Position.BOTTOM_RIGHT: 'bottom_right'>, font_size=10.0, box=False, box_color=<class 'numpy.ndarray'> (4,) float32, unit=None), text_overlay=TextOverlay(visible=False, color=<class 'numpy.ndarray'> (4,) float32, font_size=10.0, position=<TextOverlayPosition.TOP_LEFT: 'top_left'>, text=''), overlays=Overlays(interaction_box=InteractionBox(points=None, show=False, show_handle=False, show_vertices=False, selection_box_drag=None, selection_box_final=None, transform_start=<napari.utils.transforms.transforms.Affine object at 0x16cb0af80>, transform_drag=<napari.utils.transforms.transforms.Affine object at 0x16cb0aec0>, transform_final=<napari.utils.transforms.transforms.Affine object at 0x16cb0ae60>, transform=<napari.utils.transforms.transforms.Affine object at 0x3307c9f90>, allow_new_selection=True, selected_vertex=None)), help='use <3> for select points, use <4> for pan/zoom', status='', tooltip=Tooltip(visible=False, text=''), theme='dark', title='napari', mouse_over_canvas=False, mouse_move_callbacks=[<function InteractionBoxMouseBindings.initialize_mouse_events.<locals>.mouse_move at 0x3307c1630>], mouse_drag_callbacks=[<function InteractionBoxMouseBindings.initialize_mouse_events.<locals>.mouse_drag at 0x3307c1900>], mouse_double_click_callbacks=[], mouse_wheel_callbacks=[<function dims_scroll at 0x14fbd1900>], _persisted_mouse_event={}, _mouse_drag_gen={}, _mouse_wheel_gen={}, keymap={'Shift': <function InteractionBoxMouseBindings.initialize_key_events.<locals>.hold_to_lock_aspect_ratio at 0x3307c2440>, 'Control-Shift-R': <function InteractionBoxMouseBindings._reset_active_layer_affine at 0x33185c9d0>, 'Control-Shift-A': <function InteractionBoxMouseBindings._transform_active_layer at 0x33185c8b0>, 'S': <function annotator_2d.<locals>._segmet at 0x33dd7b400>, 'C': <function annotator_2d.<locals>._commit at 0x33dd7b520>, 'T': <function annotator_2d.<locals>._toggle_label at 0x33dd7b370>, 'Shift-C': <function annotator_2d.<locals>.clear_prompts at 0x33dd7b0a0>}), method='default')>
        self._function = <function autosegment_widget at 0x169612e60>
    333 self._call_count += 1

File ~/projects/micro-sam/micro_sam/sam_annotator/annotator_2d.py:29, in autosegment_widget(v=Viewer(axes=Axes(visible=False, labels=True, col...tator_2d.<locals>.clear_prompts at 0x33dd7b0a0>}), method='default')
     28 print("Run automatic segmentation with SAM. This can take a few minutes ...")
---> 29 image = v.layers["raw"].data
        v = Viewer(axes=Axes(visible=False, labels=True, colored=True, dashed=False, arrows=True), camera=Camera(center=(0.0, 1070.4865548651428, 525.4334832457411), zoom=1.2841762427452341, angles=(0.0, 0.0, 90.0), perspective=0.0, interactive=True), cursor=Cursor(position=(1527.1995960807803, 1304.1428168946877), scaled=True, size=1, style=<CursorStyle.CROSSHAIR: 'crosshair'>), dims=Dims(ndim=2, ndisplay=2, last_used=0, range=((0.0, 2160.0, 1.0), (0.0, 2160.0, 1.0)), current_step=(1080, 1080), order=(0, 1), axis_labels=('0', '1')), grid=GridCanvas(stride=1, shape=(-1, -1), enabled=False), layers=[<Image layer 'DAPI' at 0x1696a8eb0>, <Labels layer 'auto_segmentation' at 0x330020100>, <Labels layer 'committed_objects' at 0x33bb89570>, <Labels layer 'current_object' at 0x33d9d4610>, <Points layer 'prompts' at 0x33da3bd90>], scale_bar=ScaleBar(visible=False, colored=False, color=<class 'numpy.ndarray'> (4,) float32, ticks=True, position=<Position.BOTTOM_RIGHT: 'bottom_right'>, font_size=10.0, box=False, box_color=<class 'numpy.ndarray'> (4,) float32, unit=None), text_overlay=TextOverlay(visible=False, color=<class 'numpy.ndarray'> (4,) float32, font_size=10.0, position=<TextOverlayPosition.TOP_LEFT: 'top_left'>, text=''), overlays=Overlays(interaction_box=InteractionBox(points=None, show=False, show_handle=False, show_vertices=False, selection_box_drag=None, selection_box_final=None, transform_start=<napari.utils.transforms.transforms.Affine object at 0x16cb0af80>, transform_drag=<napari.utils.transforms.transforms.Affine object at 0x16cb0aec0>, transform_final=<napari.utils.transforms.transforms.Affine object at 0x16cb0ae60>, transform=<napari.utils.transforms.transforms.Affine object at 0x3307c9f90>, allow_new_selection=True, selected_vertex=None)), help='use <3> for select points, use <4> for pan/zoom', status='', tooltip=Tooltip(visible=False, text=''), theme='dark', title='napari', mouse_over_canvas=False, mouse_move_callbacks=[<function InteractionBoxMouseBindings.initialize_mouse_events.<locals>.mouse_move at 0x3307c1630>], mouse_drag_callbacks=[<function InteractionBoxMouseBindings.initialize_mouse_events.<locals>.mouse_drag at 0x3307c1900>], mouse_double_click_callbacks=[], mouse_wheel_callbacks=[<function dims_scroll at 0x14fbd1900>], _persisted_mouse_event={}, _mouse_drag_gen={}, _mouse_wheel_gen={}, keymap={'Shift': <function InteractionBoxMouseBindings.initialize_key_events.<locals>.hold_to_lock_aspect_ratio at 0x3307c2440>, 'Control-Shift-R': <function InteractionBoxMouseBindings._reset_active_layer_affine at 0x33185c9d0>, 'Control-Shift-A': <function InteractionBoxMouseBindings._transform_active_layer at 0x33185c8b0>, 'S': <function annotator_2d.<locals>._segmet at 0x33dd7b400>, 'C': <function annotator_2d.<locals>._commit at 0x33dd7b520>, 'T': <function annotator_2d.<locals>._toggle_label at 0x33dd7b370>, 'Shift-C': <function annotator_2d.<locals>.clear_prompts at 0x33dd7b0a0>})
     30 seg = segment_instances.segment_instances_sam(SAM, image)

File .../lib/python3.10/site-packages/napari/utils/_proxies.py:77, in PublicOnlyProxy.__getitem__(self=[<Image layer 'DAPI' at 0x1696a8eb0>, <Labels la...9d4610>, <Points layer 'prompts' at 0x33da3bd90>], key='raw')
     76 def __getitem__(self, key):
---> 77     return self.create(super().__getitem__(key))
        self = [<Image layer 'DAPI' at 0x1696a8eb0>, <Labels layer 'auto_segmentation' at 0x330020100>, <Labels layer 'committed_objects' at 0x33bb89570>, <Labels layer 'current_object' at 0x33d9d4610>, <Points layer 'prompts' at 0x33da3bd90>]
        key = 'raw'

File .../lib/python3.10/site-packages/napari/utils/events/containers/_typed.py:142, in TypedMutableSequence.__getitem__(self=[<Image layer 'DAPI' at 0x1696a8eb0>, <Labels la...9d4610>, <Points layer 'prompts' at 0x33da3bd90>], key='raw')
    141     except ValueError as e:
--> 142         raise KeyError(str(e)) from e
    144 result = self._list[key]

KeyError: "'raw' is not in list"

The above exception was the direct cause of the following exception:

EmitLoopError                             Traceback (most recent call last)
File .../lib/python3.10/site-packages/magicgui/widgets/bases/_value_widget.py:65, in ValueWidget._on_value_change(self=PushButton(value=False, annotation=None, name='call_button'), value=False)
     63 if value is self.null_value and not self._nullable:
     64     return
---> 65 self.changed.emit(value)
        value = False
        self.changed = <SignalInstance 'changed' on PushButton(value=False, annotation=None, name='call_button')>
        self = PushButton(value=False, annotation=None, name='call_button')

File .../lib/python3.10/site-packages/psygnal/_signal.py:927, in SignalInstance.emit(self=<SignalInstance 'changed' on PushButton(value=False, annotation=None, name='call_button')>, check_nargs=False, check_types=False, asynchronous=False, *args=(False,))
    924     sd.start()
    925     return sd
--> 927 self._run_emit_loop(args)
        self = <SignalInstance 'changed' on PushButton(value=False, annotation=None, name='call_button')>
        args = (False,)
    928 return None

File .../lib/python3.10/site-packages/psygnal/_signal.py:974, in SignalInstance._run_emit_loop(self=<SignalInstance 'changed' on PushButton(value=False, annotation=None, name='call_button')>, args=(False,))
    972                 caller.cb(args)
    973             except Exception as e:
--> 974                 raise EmitLoopError(
        caller = <psygnal._weak_callback._StrongFunction object at 0x169631c00>
        args = (False,)
    975                     slot_repr=repr(caller), args=args, exc=e
    976                 ) from e
    978 return None

EmitLoopError: calling <psygnal._weak_callback._StrongFunction object at 0x169631c00> with args=(False,) caused KeyError: "'raw' is not in list".

Usability issues with the tracking annotator

The tracking annotator is fairly complex to use and it would be good to simplify it a bit and prevent easy-to-make user errors.

  • Don't allow setting a track_id for a layer if that track has already ended (due to division). Currently this lets the user set a point, but nothing happens.
  • Change track state via box annotation (shape layer) #23

(will add more points here)

cc @Sagnik700

Error module magicgui not found

Dear micro-sam team,

I created a new env and tried to install micro_sam

>mamba create -n microsam python=3.9 micro_sam -y 
>...
>mamba activate microsam 
>micro_sam.annotator
Traceback (most recent call last):
  File "D:\conda\conda-envs\microsam\Scripts\micro_sam.annotator-script.py", line 5, in <module>
    from micro_sam.sam_annotator.annotator import main
  File "D:\conda\conda-envs\microsam\lib\site-packages\micro_sam\sam_annotator\__init__.py", line 1, in <module>
    from .annotator import annotator
  File "D:\conda\conda-envs\microsam\lib\site-packages\micro_sam\sam_annotator\annotator.py", line 2, in <module>
    import magicgui
ModuleNotFoundError: No module named 'magicgui'

I then tried to install napari (because I knew magicgui is a dependency)

>mamba install napari / or 'napari[all]'
...
>micro_sam.annotator
Traceback (most recent call last):
  File "D:\conda\conda-envs\microsam\Scripts\micro_sam.annotator-script.py", line 5, in <module>
    from micro_sam.sam_annotator.annotator import main
  File "D:\conda\conda-envs\microsam\lib\site-packages\micro_sam\sam_annotator\__init__.py", line 1, in <module>
    from .annotator import annotator
  File "D:\conda\conda-envs\microsam\lib\site-packages\micro_sam\sam_annotator\annotator.py", line 6, in <module>
    from PyQt5.QtWidgets import QFileDialog, QMessageBox
ModuleNotFoundError: No module named 'PyQt5'

finally running

>pip install PyQt5
...
>micro_sam.annotator

I got the GUI 🎉:

(screenshot attached)

Best regards,

Romain

Selecting point prompts of different sizes does not update slider

Selecting point prompts of different sizes does not update the point size slider in the top left corner. In the screenshot below, point prompts of different sizes are registered, but after selecting the bigger prompt using the Point Selector, the point size slider still shows the size of the previously selected smaller point.

(screenshot attached)

Originally posted by @Sagnik700 in #64 (comment)

Difficult to see whether first object is committed properly

It is difficult to see whether the first object is committed or not, because the colour does not change. It might be helpful for users if there was an obvious change when the commit happens.

Possible solution: use the "shuffle colormap" on one of the labels layers, so they won't match. I think that could be done in the setup, using something like viewer.layers['committed_objects'].new_colormap()

Context: I shared some first impressions using micro-sam with Constantin. He asked me to write them down here, so we don't forget what was said.

Reopening: What data format is used for micro_sam.annotator_tracking?

Previous discussion: #45

Hi @ludalin-ucla, in this case you need to specify the pattern to load the files in the folder via the -k parameter. If you want to load all files with the ending .tif you can do it like this:

micro_sam.annotator_tracking -i path/to/folder/with/images  -k *.tif

(and if the files end with .tiff you would use *.tiff etc.)

An example of tiff stacks is attached.

Unfortunately attachments via mail don't work, so I couldn't test this on your data. But I am pretty sure using the -k argument should fix your problem. Feel free to reopen if you're still having an issue with this.

Hi, I tried to add the parameter as you mentioned, and another error comes:
micro_sam.annotator_tracking: error: unrecognized arguments: 0002.tif 0003.tif 0004.tif 0005.tif 0006.tif
(the first image in the folder is 0001.tif).

Reset track_state to track after a commit

If the track_state was kept in division before clicking on Commit, the new annotation starts with the track_state being division rather than track. If this is not noticed it leads to multiple errors and wrong segmentations, so after committing each track the track_state should be reset to track.

Implement point based strategy to propagate prompts to adjacent slice

For the 3d annotator and cases where we have an object with a complex inner shape (like vessels, vasculature etc.).
Currently the segmentation propagation doesn't work well for these cases, because SAM just predicts the outline of the vessels as the complete mask.

Instead of the current approach we could (in addition) sample positive and negative points from the object mask and use these as point prompts. This should be done with a distance threshold, to avoid sampling points close to the object boundary.

The easiest way to implement this is probably to add use_points to segment_from_mask and then sample positive and negative points from the mask (plus parameters for how many points to sample and for the distance threshold).
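A rough sketch of such a sampling step, using a distance transform to keep points away from the boundary; the helper name and defaults are assumptions:

import numpy as np
from scipy.ndimage import distance_transform_edt


def sample_points_from_mask(mask, n_positive=4, n_negative=4, min_distance=3, rng=None):
    # Sketch: sample positive points inside the mask and negative points outside it,
    # both at least `min_distance` pixels away from the object boundary.
    rng = np.random.default_rng() if rng is None else rng
    mask = mask.astype(bool)
    dist_inside = distance_transform_edt(mask)    # distance to the background
    dist_outside = distance_transform_edt(~mask)  # distance to the object
    pos_candidates = np.argwhere(dist_inside >= min_distance)
    neg_candidates = np.argwhere(dist_outside >= min_distance)
    pos = pos_candidates[rng.choice(len(pos_candidates), size=n_positive, replace=False)]
    neg = neg_candidates[rng.choice(len(neg_candidates), size=n_negative, replace=False)]
    points = np.concatenate([pos, neg], axis=0)
    labels = np.concatenate([np.ones(len(pos), dtype=int), np.zeros(len(neg), dtype=int)])
    return points, labels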

Running the annotator raises warning

For me, using the GUI entrypoint from the annotator raises a warning:

/home/pape/Work/my_projects/micro-sam/micro_sam/sam_annotator/annotator_2d.py:243: UserWarning: A QApplication is already running with 1 event loop. To enter *another* event loop, use `run(max_loop_level=2)`
  napari.run()

This seems to be the case for any of the 4 annotators (so far I have checked annotator_2d and the image_series_annotator).

Input structure for the micro_sam.sam_annotator.annotator_3d

Hi,

I used the annotator_3d a couple of weeks ago with a 3D tiff file as input:

from skimage.io import imread, imsave
from micro_sam.sam_annotator import annotator_3d
imsave("3D.tif", data.astype("uint8"))  # data is my 3D grayscale stack
data = imread("3D.tif")
embed_path = None
annotator_3d(data, embedding_path=embed_path)

However, following the same steps now, I get the error message: "ValueError(f"Invalid input image of shape {input_.shape}. Expect either 2D grayscale or 3D RGB image.") ValueError: Invalid input image of shape (32, 256, 256). Expect either 2D grayscale or 3D RGB image." (image of whole error message is linked)

In 2D the annotator_2d works for me. I am now unsure how to use the annotator_3d for 3D tiff stacks and would be grateful for any hints. My data is a 3D grayscale image stack (uint16) that I then resaved to uint8. I also tried out float32 or saving the stack to png, but this results in the loss of the 3D structure.

Thanks a lot,
Gesine

(screenshot attached)

annotator_2d dtype support

Hi,

Thanks for making this available, I'm very keen to try it out! My images tend to be uint16, which currently does not seem to play nice with annotator_2d.py. Probably some transformation in _compute_2d would be needed to fix that?
It's the same with float32, float64, int64, uint64, uint32, and int32. So I am not sure which dtype would be the correct one and would appreciate some pointers. Thanks!

Traceback
Traceback (most recent call last):
  File ".../bin/micro_sam.annotator_2d", line 33, in <module>
    sys.exit(load_entry_point('micro-sam', 'console_scripts', 'micro_sam.annotator_2d')())
  File ".../micro-sam/micro_sam/sam_annotator/annotator_2d.py", line 184, in main
    annotator_2d(
  File ".../micro-sam/micro_sam/sam_annotator/annotator_2d.py", line 44, in annotator_2d
    IMAGE_EMBEDDINGS = util.precompute_image_embeddings(PREDICTOR, raw, save_path=embedding_path, ndim=2)
  File ".../micro-sam/micro_sam/util.py", line 236, in precompute_image_embeddings
    image_embeddings = _compute_2d(input_, predictor) if save_path is None else\
  File ".../micro-sam/micro_sam/util.py", line 119, in _compute_2d
    predictor.set_image(image)
  File ".../lib/python3.10/site-packages/segment_anything/predictor.py", line 56, in set_image
    input_image = self.transform.apply_image(image)
  File ".../lib/python3.10/site-packages/segment_anything/utils/transforms.py", line 31, in apply_image
    return np.array(resize(to_pil_image(image), target_size))
  File ".../lib/python3.10/site-packages/torchvision/transforms/functional.py", line 335, in to_pil_image
    raise TypeError(f"Input type {npimg.dtype} is not supported")
TypeError: Input type uint16 is not supported
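One possible workaround (not an official micro_sam recommendation) is to rescale such an image to uint8 before opening it in the annotator; the file path below is a placeholder:

import numpy as np
from skimage.io import imread

# Rescale a uint16 (or float) image to uint8 before passing it to the annotator.
image = imread("my_image.tif").astype("float32")
image -= image.min()
image /= (image.max() + 1e-8)
image_uint8 = (image * 255).astype("uint8")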

Better caching of embeddings

Currently the caching of embeddings is not aware of changing inputs. So if the image or any relevant parameters (e.g. tiling) change, the napari annotator is not aware and things just break because the shapes etc. don't match anymore.

The best solution would probably be to hash all relevant things (the input image and the tiling params) and then warn or throw an error if the hashes don't agree.

cc @GenevieveBuckley
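A minimal sketch of such a signature check (function name and hash choice are assumptions, not the micro_sam implementation):

import hashlib

import numpy as np


def compute_data_signature(image, tile_shape=None, halo=None):
    # Hash the raw data together with the tiling parameters,
    # so that changed inputs can be detected before reusing cached embeddings.
    sha = hashlib.sha1()
    sha.update(np.ascontiguousarray(image).tobytes())
    sha.update(str(tile_shape).encode())
    sha.update(str(halo).encode())
    return sha.hexdigest()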

Implement automatic 3d segmentation

  • Begin in start slice (default: middle)
  • Auto segment it (From embeddings)
  • Project to upper slices
    • Auto segment from embeddings, use projections to this slice as "seed masks" (make sure ids stay consistent)
    • Go to next upper, repeat until volume is done
  • Same procedure for the lower slices (a rough sketch of this loop follows below)
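A rough sketch of this loop, where segment_slice and project_to_slice are hypothetical callables standing in for the embedding-based auto-segmentation and the mask projection step described above:

def automatic_3d_segmentation(volume, segment_slice, project_to_slice, start_slice=None):
    # Sketch of the outlined procedure: segment the start slice, then propagate
    # upwards and downwards, using the previous slice's result as seed masks.
    start = volume.shape[0] // 2 if start_slice is None else start_slice
    segmentation = {start: segment_slice(volume[start], seeds=None)}
    for z in range(start + 1, volume.shape[0]):  # project to the upper slices
        seeds = project_to_slice(segmentation[z - 1])
        segmentation[z] = segment_slice(volume[z], seeds=seeds)
    for z in range(start - 1, -1, -1):           # same procedure for the lower slices
        seeds = project_to_slice(segmentation[z + 1])
        segmentation[z] = segment_slice(volume[z], seeds=seeds)
    return segmentation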

Automatic tiling for large images

Right now the user has to activate tiling by hand. It would be better to automatically tile when images are bigger than some shape (2k x 2k?). The tile size should be around 1024 pixels (the input size of SAM).
(It should of course be possible to override this by passing explicit arguments for tiling / not tiling, but by default we should choose reasonable tiling options.)
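A small sketch of such a default heuristic; the exact threshold, tile size and halo values are assumptions still to be decided:

def get_default_tiling(image_shape, max_untiled=(2048, 2048), tile_size=1024, halo=64):
    # Only tile if the image is larger than `max_untiled` in y or x;
    # otherwise return None to signal that no tiling is needed.
    if all(s <= m for s, m in zip(image_shape[-2:], max_untiled)):
        return None, None
    return (tile_size, tile_size), (halo, halo)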

Cannot load custom vit-b

Loading the finetuned vit-b models does not work.

from micro_sam.util import get_sam_model

predictor = get_sam_model(model_type="vit_b_lm")

fails with a very long error message about parameter shape mismatches.

It works for the vit_l models.

cc @anwai98

What arguments should be input for --tile and --halo in 2d_annotator?

I am glad that the tiling function is incorporated into micro-sam now. However, when I try to use tiling, I find the help document (-h) is not very clear. For example, if I want to segment a large image into multiple smaller tiles of 512 x 512 pixels, with a halo of 64 pixels outward from a single tile (the total area is 576 x 576 pixels), then what should I input?

It is also interesting that with the tiling function it is not possible to pass -e, and I have to set up a directory for the embeddings. I wonder why.

More segment anything tricks

The segment anything model is trained to predict with the "single mask head" if a single prompt (= one box or one positive point) is passed, and with the multi-mask head for more than one prompt.

So it would make sense to use it the same way in inference, by adopting the following in all the segment_from_prompt functions:

  • default: multimask_output=None
  • if None and n_prompts == 1: use multimask_output=True and select mask with highest predicted IOU
  • if None and n_prompts > 1: use multimask_output=False

(Also while working on this the segment_from_prompts functions should be refactored to re-use more common code for the multimask and tiling logic.)
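A sketch of that default behaviour on top of a segment_anything SamPredictor (with the image already set via predictor.set_image); the helper name is an assumption, not the micro_sam API:

import numpy as np


def predict_with_multimask_heuristic(predictor, point_coords=None, point_labels=None, box=None):
    # Count the prompts: one box or one point counts as a single prompt.
    n_prompts = 0 if point_coords is None else len(point_coords)
    if box is not None:
        n_prompts += 1
    multimask_output = n_prompts == 1
    masks, scores, _ = predictor.predict(
        point_coords=point_coords,
        point_labels=point_labels,
        box=box,
        multimask_output=multimask_output,
    )
    # For a single prompt select the mask with the highest predicted IOU.
    return masks[np.argmax(scores)] if multimask_output else masks[0]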
