A web API for SAM (Segment Anything Model), implemented with FastAPI.
This project is part of the following paper. Please cite it when you use this project, along with the original SAM paper.
- Sugawara, K. Training deep learning models for cell image segmentation with sparse annotations. bioRxiv 2023. doi:10.1101/2023.06.13.544786
Create a conda environment and activate it.

```bash
conda create -n samapi -y python=3.11
conda activate samapi
```
If you are using a computer with a CUDA-compatible GPU, install cudatoolkit.

```bash
conda install -y cudatoolkit=11.8
```
If you are using a computer with a CUDA-compatible GPU on Windows, install torch with GPU support using the following command.

```bash
# Windows with CUDA-compatible GPU only
python -m pip install torch --index-url https://download.pytorch.org/whl/cu118
```
Install samapi and its dependencies.

```bash
python -m pip install git+https://github.com/ksugar/samapi.git
```
If you are using WSL2, `LD_LIBRARY_PATH` will need to be updated as follows.

```bash
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
```
Launch the server.

```bash
uvicorn samapi.main:app
```

The command above will launch a server at http://localhost:8000.

```
INFO: Started server process [21258]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
```
For more information (e.g. the `--host` and `--port` options), see the uvicorn documentation.
The request body is defined as follows.

```python
from typing import Optional, Tuple

from pydantic import BaseModel, Field

# ModelType is an Enum defined in samapi (vit_h / vit_l / vit_b)


class SAMBody(BaseModel):
    type: Optional[ModelType] = ModelType.vit_h
    bbox: Tuple[int, int, int, int] = Field(example=(0, 0, 0, 0))
    b64img: str
```
| key | value |
|---|---|
| type | One of `vit_h`, `vit_l`, or `vit_b` |
| bbox | Coordinates of a bbox `(x1, y1, x2, y2)` |
| b64img | Base64-encoded image data |
The response body contains a list of GeoJSON Feature objects.
Support for other formats is planned as future work.
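As a sketch, a client can assemble a request body matching the schema above and POST it to the running server. The helper name and the endpoint path below are assumptions, not part of samapi's documented API; check the auto-generated OpenAPI docs that FastAPI serves at http://localhost:8000/docs for the actual route.

```python
# Hypothetical client sketch for the request body described above.
import base64
import json


def build_sam_request(image_bytes, bbox, model_type="vit_h"):
    """Assemble a JSON-serializable body matching the SAMBody schema."""
    return {
        "type": model_type,  # one of vit_h / vit_l / vit_b
        "bbox": list(bbox),  # (x1, y1, x2, y2)
        "b64img": base64.b64encode(image_bytes).decode("ascii"),
    }


body = build_sam_request(b"<png bytes here>", (10, 20, 110, 120))
print(json.dumps(body)[:60])

# With a running server (the "/sam/" path is an assumption; see /docs):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/sam/",
#     data=json.dumps(body).encode(),
#     headers={"Content-Type": "application/json"},
# )
# features = json.loads(urllib.request.urlopen(req).read())  # GeoJSON Features
```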
- v0.2.0: Support for MPS backend (macOS) by @petebankhead
Please cite my paper on bioRxiv.
```bibtex
@article{Sugawara2023.06.13.544786,
    author = {Ko Sugawara},
    title = {Training deep learning models for cell image segmentation with sparse annotations},
    elocation-id = {2023.06.13.544786},
    year = {2023},
    doi = {10.1101/2023.06.13.544786},
    publisher = {Cold Spring Harbor Laboratory},
    abstract = {Deep learning is becoming more prominent in cell image analysis. However, collecting the annotated data required to train efficient deep-learning models remains a major obstacle. I demonstrate that functional performance can be achieved even with sparsely annotated data. Furthermore, I show that the selection of sparse cell annotations significantly impacts performance. I modified Cellpose and StarDist to enable training with sparsely annotated data and evaluated them in conjunction with ELEPHANT, a cell tracking algorithm that internally uses U-Net based cell segmentation. These results illustrate that sparse annotation is a generally effective strategy in deep learning-based cell image segmentation. Finally, I demonstrate that with the help of the Segment Anything Model (SAM), it is feasible to build an effective deep learning model of cell image segmentation from scratch just in a few minutes.Competing Interest StatementKS is employed part-time by LPIXEL Inc.},
    URL = {https://www.biorxiv.org/content/early/2023/06/13/2023.06.13.544786},
    eprint = {https://www.biorxiv.org/content/early/2023/06/13/2023.06.13.544786.full.pdf},
    journal = {bioRxiv}
}
```