
QuPath extension SAM

This is a QuPath extension for the Segment Anything Model (SAM). It is forked from ksugar/qupath-extension-sam and licensed under the GNU General Public License v3.0.

This extension is part of the paper cited below; please cite that paper when you use this project, along with the original SAM paper and the MobileSAM paper.

Install

Drag and drop the extension file onto QuPath and restart the application.

Please note that you need to set up the server following the instructions in the link below.

https://github.com/ksugar/samapi
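
Once the server is running, you can sanity-check that it is reachable before pointing QuPath's Server field at it. The sketch below is hypothetical: the address assumes uvicorn's default of http://localhost:8000, and the weights-listing endpoint is an assumption inferred from the dialog's "SAM weights" dropdown, so verify both against the samapi documentation.

```python
import requests

# Hypothetical smoke test: the address and endpoint path are assumptions;
# the extension's "SAM weights" dropdown suggests a weights-listing API.
resp = requests.get("http://localhost:8000/sam/weights/", params={"type": "vit_b"})
print(resp.status_code, resp.text)
```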

Update

To update qupath-extension-sam, follow these steps.

  1. Open the extensions directory via Extensions > Installed extensions > Open extensions directory.
  2. Replace qupath-extension-sam-x.y.z.jar with the latest version of the extension file.
  3. Restart QuPath.

Please note that you also need to update the samapi server. To keep it up to date, follow the instructions in the samapi repository linked above.

Usage

SAM prompt command

Rectangle (BBox) prompt

  1. Select Extensions > SAM from the menu bar.
  2. Select the Prompt tab in the Segment Anything Model dialog.
  3. Select a rectangle tool by clicking the icon.
  4. Add rectangles.
  5. Select the rectangles to process (Alt+Ctrl+A: select all annotation objects; Ctrl or ⌘ + left click: select multiple objects).
  6. Press the Run for selected button.
  7. If you activate Live mode, SAM predicts a mask every time you add a rectangle.
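
Prediction itself happens on the samapi server rather than inside QuPath. For orientation, a rectangle prompt corresponds to a box prompt in the official segment-anything Python API, roughly as in this sketch; the checkpoint path is the official vit_b weights file, while the image and coordinates are placeholders, and the server's actual request handling may differ.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a model; "vit_b" corresponds to the dialog's "SAM type" option and
# sam_vit_b_01ec64.pth is the official vit_b checkpoint.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Stand-in for the image region QuPath sends to the server.
image = np.zeros((256, 256, 3), dtype=np.uint8)
predictor.set_image(image)

# A rectangle annotation becomes a box prompt in XYXY pixel coordinates.
box = np.array([32, 32, 192, 192])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
```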

Point prompt

  1. Select Extensions > SAM from the menu bar.
  2. Select the Prompt tab in the Segment Anything Model dialog.
  3. Select a point tool by clicking the icon.
  4. Add foreground points.
  5. (Optional) Add background points.
  6. Press the Run for selected button.
  7. If you activate Live mode, SAM predicts a mask every time you add a foreground point.
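
In segment-anything terms, foreground and background points are passed together as coordinates with 1/0 labels. Continuing the sketch from the rectangle section (same `predictor`; the coordinates here are placeholders):

```python
import numpy as np

# Label 1 marks a foreground point, label 0 a background point.
point_coords = np.array([[120, 85], [140, 90], [40, 200]])
point_labels = np.array([1, 1, 0])  # two foreground points, one background

masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=False,
)
```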

Parameters

| Key | Value |
| --- | --- |
| Server | URL of the server. |
| SAM type | One of vit_h (huge), vit_l (large), vit_b (base), or vit_t (mobile). |
| SAM weights | The SAM weights to use. The options are automatically fetched from the server. |
| Output type | If Single Mask is selected, the model returns a single mask per prompt. If Multi-mask is selected, the model returns three masks per prompt: Multi-mask (all) keeps all three, while Multi-mask (largest), Multi-mask (smallest), and Multi-mask (best quality) each keep one of the three. |
| Display names | Display the annotation names in the viewer (this is a global preference). |
| Assign random colors | If checked and no path class is set in Auto set, assign random colors to new (unclassified) objects created by SAM. |
| Assign names | If checked, assign names that identify new objects as created by SAM, including quality scores. |
| Keep prompts | If checked, keep the foreground prompts after detection; if not checked, they are deleted. |
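
For the Output type options, the underlying predictor can return three candidate masks per prompt together with quality scores. The selection is sketched below, continuing the predictor example above, under the assumption that "largest" and "smallest" refer to mask pixel area; treat this as an illustration rather than the exact implementation.

```python
import numpy as np

# multimask_output=True yields three candidate masks of shape (3, H, W)
# plus a quality score for each.
masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)

areas = masks.sum(axis=(1, 2))           # pixel area of each candidate
best_quality = masks[np.argmax(scores)]  # Multi-mask (best quality)
largest = masks[np.argmax(areas)]        # Multi-mask (largest)
smallest = masks[np.argmin(areas)]       # Multi-mask (smallest)
```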

SamAutomaticMaskGenerator

  1. Select Extensions > SAM from the menu bar.
  2. Select the Auto mask tab in the Segment Anything Model dialog.
  3. Set parameters.
  4. Press the Run button.

Parameters

| Key | Value |
| --- | --- |
| Server | URL of the server. |
| SAM type | One of vit_h (huge), vit_l (large), vit_b (base), or vit_t (mobile). |
| SAM weights | The SAM weights to use. The options are automatically fetched from the server. |
| Assign random colors | If checked and no path class is set in Auto set, assign random colors to new (unclassified) objects created by SAM. |
| Assign names | If checked, assign names that identify new objects as created by SAM, including quality scores. |
| Keep prompts | If checked, keep the foreground prompts after detection; if not checked, they are deleted. |
| Display names | Display the annotation names in the viewer (this is a global preference). |
| points_per_side | The number of points to be sampled along one side of the image. The total number of points is points_per_side**2. |
| points_per_batch | Sets the number of points run simultaneously by the model. Higher numbers may be faster but use more GPU memory. |
| pred_iou_thresh | A filtering threshold in [0, 1], using the model's predicted mask quality. |
| stability_score_thresh | A filtering threshold in [0, 1], using the stability of the mask under changes to the cutoff used to binarize the model's mask predictions. |
| stability_score_offset | The amount to shift the cutoff when calculating the stability score. |
| box_nms_thresh | The box IoU cutoff used by non-maximal suppression to filter duplicate masks. |
| crop_n_layers | If >0, mask prediction will be run again on crops of the image. Sets the number of layers to run, where layer i has 2**i image crops. |
| crop_nms_thresh | The box IoU cutoff used by non-maximal suppression to filter duplicate masks between different crops. |
| crop_overlap_ratio | Sets the degree to which crops overlap. In the first crop layer, crops overlap by this fraction of the image length; later layers with more crops scale down this overlap. |
| crop_n_points_downscale_factor | The number of points-per-side sampled in layer n is scaled down by crop_n_points_downscale_factor**n. |
| min_mask_region_area | If >0, postprocessing removes disconnected regions and holes in masks with area smaller than min_mask_region_area. Requires opencv. |
| output_type | If Single Mask is selected, the model returns a single mask per prompt. If Multi-mask is selected, the model returns three masks per prompt: Multi-mask (all) keeps all three, while Multi-mask (largest), Multi-mask (smallest), and Multi-mask (best quality) each keep one of the three. |
| include_image_edge | If True, include a crop area at the edge of the original image. |
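
Most of the lower-case keys map directly onto the constructor arguments of segment-anything's SamAutomaticMaskGenerator (output_type and include_image_edge are handled on the samapi side). A minimal server-side sketch, with every argument set to the library's documented default and a placeholder image:

```python
import numpy as np
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")

# The values below are the library defaults that the dialog's fields mirror.
generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=32,
    points_per_batch=64,
    pred_iou_thresh=0.88,
    stability_score_thresh=0.95,
    stability_score_offset=1.0,
    box_nms_thresh=0.7,
    crop_n_layers=0,
    crop_nms_thresh=0.7,
    crop_overlap_ratio=512 / 1500,
    crop_n_points_downscale_factor=1,
    min_mask_region_area=0,  # values > 0 require opencv
)

image = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for the QuPath region
masks = generator.generate(image)  # list of dicts: "segmentation", "predicted_iou", ...
```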

Register SAM weights from URL

  1. Select Extensions > SAM from the menu bar.
  2. Press the Register button in the Segment Anything Model dialog.

The weights file is downloaded from the URL and registered on the server. After the registration, you can select the weights from the SAM weights dropdown menu.

Parameters

| Key | Value |
| --- | --- |
| SAM type | One of vit_h (huge), vit_l (large), vit_b (base), or vit_t (mobile). |
| Name | The name to register the weights under. It must be unique within the same SAM type. |
| URL | The URL to the SAM weights file. |
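
Because registration is performed by the samapi server, it can also be scripted against the server's HTTP API. In the sketch below, the weights URL is a real entry from the catalog in the next section, but the endpoint path and JSON field names are assumptions inferred from the dialog's parameters; verify them against the samapi documentation.

```python
import requests

# Hypothetical endpoint and field names; the weights URL is taken from the
# catalog below.
payload = {
    "type": "vit_h",    # SAM type
    "name": "vit_h_lm",  # must be unique within the same SAM type
    "url": "https://zenodo.org/record/8250299/files/vit_h_lm.pth?download=1",
}
resp = requests.post("http://localhost:8000/sam/weights/", json=payload)
resp.raise_for_status()
```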

SAM weights catalog

Here is a list of SAM weights that you can register by URL.

| Type | Name (customizable) | URL | Citation |
| --- | --- | --- | --- |
| vit_h | vit_h_lm | https://zenodo.org/record/8250299/files/vit_h_lm.pth?download=1 | Archit, A. et al. Segment Anything for Microscopy. bioRxiv 2023. doi:10.1101/2023.08.21.554208. https://github.com/computational-cell-analytics/micro-sam |
| vit_b | vit_b_lm | https://zenodo.org/record/8250281/files/vit_b_lm.pth?download=1 | (same as above) |
| vit_h | vit_h_em | https://zenodo.org/record/8250291/files/vit_h_em.pth?download=1 | (same as above) |
| vit_b | vit_b_em | https://zenodo.org/record/8250260/files/vit_b_em.pth?download=1 | (same as above) |

Tips

If you select a class in Auto set in the Annotations tab, it is applied to new annotations generated by SAM.

Updates

v0.4.1

  • Properly send the checkpoint URL parameter
    • The checkpoint URL was not sent to the server.
  • Add a catalog of SAM weights to README
  • Add example scripts under src/main/resources/scripts

v0.4.0

v0.3.0

  • Support for both point and rectangle foreground prompts by @petebankhead

    • Ensure each new point is a distinct object while SAM is running (i.e. turn off 'Multipoint' mode)
    • Support line ROIs as a way of adding multiple points in a single object
  • Support point background prompts by @petebankhead

    • Points with 'ignored*' classifications are passed to the model as background prompts (Sidenote: it seems a large number of background points harm the prediction... or I've done something wrong)
  • Implement 'Live mode' and 'Run for selected' by @petebankhead

    • 'Live mode' toggle button to turn live detection on or off
    • Alternative 'Run for selected' button to use only the selected foreground and background objects
      • This makes it possible to annotate first, then run SAM across multiple objects - as requested on the forum
  • Support SamAutomaticMaskGenerator

  • Menu items simplified to a single command to launch a dialog to control annotation with SAM by @petebankhead

    • Provide persistent preferences for key choices (e.g. server, model)
    • Run prediction in a background thread with (indeterminate) progress indicator
    • Help the user with tooltips (and prompts shown at the bottom of the dialog)
  • Handle changing the current image while the command is running by @petebankhead

    • Send entire field of view for point prediction. This is useful for one-click annotation of visible structures
  • Include the 'quality' metric as a measurement for objects that are created by @petebankhead

  • Support z-stacks/time series (by using the image plane; there's no support for 3D objects) by @petebankhead and @rharkes

  • Optionally assign names & random colors to identify the generated objects by @petebankhead

  • Optionally return multiple (3) detections instead of 1 by @petebankhead

  • Select which detection to retain based upon size or quality, or keep all of them by @petebankhead

  • Optionally keep the prompt objects, instead of immediately deleting them by @petebankhead

v0.2.0

  • Support any number of channels

Citation

Please cite my paper on bioRxiv.

@article {Sugawara2023.06.13.544786,
	author = {Ko Sugawara},
	title = {Training deep learning models for cell image segmentation with sparse annotations},
	elocation-id = {2023.06.13.544786},
	year = {2023},
	doi = {10.1101/2023.06.13.544786},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Deep learning is becoming more prominent in cell image analysis. However, collecting the annotated data required to train efficient deep-learning models remains a major obstacle. I demonstrate that functional performance can be achieved even with sparsely annotated data. Furthermore, I show that the selection of sparse cell annotations significantly impacts performance. I modified Cellpose and StarDist to enable training with sparsely annotated data and evaluated them in conjunction with ELEPHANT, a cell tracking algorithm that internally uses U-Net based cell segmentation. These results illustrate that sparse annotation is a generally effective strategy in deep learning-based cell image segmentation. Finally, I demonstrate that with the help of the Segment Anything Model (SAM), it is feasible to build an effective deep learning model of cell image segmentation from scratch just in a few minutes. Competing Interest Statement: KS is employed part-time by LPIXEL Inc.},
	URL = {https://www.biorxiv.org/content/early/2023/06/13/2023.06.13.544786},
	eprint = {https://www.biorxiv.org/content/early/2023/06/13/2023.06.13.544786.full.pdf},
	journal = {bioRxiv}
}


Contributors

ksugar, rharkes, petebankhead
