Comments (6)
I suspect your network configuration isn't compatible with patches that small. You're creating your UNet with:

```python
model = UNet(
    dimensions=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
    norm=Norm.BATCH,
).to(device)
```
With this configuration your network has five layers, and with a stride of 2 at each downsampling you halve the spatial dimensions of the input four times. The image sizes inside the network are therefore 20**3 -> 10**3 -> 5**3 -> 2**3 -> 1**3. Attempting to upsample from 1**3 in the decode path will not work because of the default padding in the upsample convolutions.
I would suggest using patch sizes that are multiples of powers of 2: a dimension of M*2**N lets you downsample N times. In your case you could use a patch size of 32 and stack your volumes once so the depth dimension is 40, or double each slice in your volume to the same effect.
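To see why 32 survives four downsamplings where 20 doesn't, here is a quick sketch (an assumption for illustration: each stage roughly floor-divides the spatial size by its stride):

```python
# Sketch: track how a patch's spatial size shrinks through the encoder,
# approximating each strided convolution as floor division by the stride.
def downsample_sizes(patch_size, strides):
    sizes = [patch_size]
    for s in strides:
        sizes.append(sizes[-1] // s)
    return sizes

print(downsample_sizes(20, (2, 2, 2, 2)))  # [20, 10, 5, 2, 1] - collapses to a single voxel
print(downsample_sizes(32, (2, 2, 2, 2)))  # [32, 16, 8, 4, 2] - still room to upsample
```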
Alternatively, you can stick with 20**3 as your patch size and use (64, 128, 256) as your channels argument and (2, 2) as your strides argument to make a shallower network, and see how that works.
from tutorials.
For Spacing you provide a pixdim whose ratio to the current spacing rescales the size, so if you want to go from 1200x340x20 to 1200x340x40 you want something like this:
```python
import torch
from monai.transforms import Spacing

s = Spacing(pixdim=(1, 1, 0.49))
t = torch.rand(1, 1200, 340, 20)
print(s(t)[0].shape)  # (1, 1200, 340, 40)
```
For using Spacingd you would use the same pixdim values as in this example.
I'm honestly not sure what to comment on your particular problem, I shall ask my colleagues to see if anyone has something specific to this particular segmentation problem to contribute.
Hello!
I work with sparse data (cerebral microbleed segmentation), and I would recommend you look into the following:
- sampling techniques: oversampling the minority class. This is an easy way of balancing the dataset, but may increase false positive predictions or overfitting.
- data augmentation: in particular in combination with a good sampling technique, this can be a very powerful tool. Data augmentation (flipping, rotating, etc) can have the same effect as oversampling the minority class, but it should also reduce the risk of overfitting.
- loss function: because of the imbalance, I would recommend a loss function designed for imbalanced data, such as generalised Dice, Tversky loss, or focal loss.
Generally with very imbalanced data it takes a lot of patience to fine-tune the network.
@Irme thanks. Please consider joining the CTF NF Hackathon which is still open and has plenty of medical data. Here are some details:
https://nfhack-platform.bemyapp.com/#/event
We have more than 500 participants registered from around the world as well as more than 40 mentors and 21 projects in progress. Registration remains open throughout the Hackathon and projects can be submitted up until November 13th.
Thanks very much Eric. There are 20 planes in the MRI. Should I be trying to "thicken" the plane volumes so that they correspond more to the physical dimensions of the X-Y plane, or is that irrelevant? If so, how do I do that? My MRI images are 20 NumPy bitmaps. I think something like this is done in the Spleen tutorial notebook here:
```python
Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 2.0), mode=("bilinear", "nearest")),
```
The Spleen example from NVIDIA uses the NIfTI format, which comes with metadata including an affine transform that the Spacingd transform accesses. What would the corresponding arguments for the Spacing transform be?
Also, one last question. The tumors (in this case, from Neurofibromatosis 1) are very sparse in the whole MRI volume: 98% of the volume is non-tumor and 2% is tumor. Do you have any thoughts on how to increase accuracy for this very sparse class? I have only 50 fully labelled examples to work with. Should I explore Waterloo's "Less than one shot" learning technique?
Thanks Eric! Intuitively I don't think it should matter. The Spleen tutorial has pixdim=(1.5, 1.5, 2.0), and I'm curious what motivated that. It seems like they were doing it to reduce the size of the inputs:
```python
import torch
from monai.transforms import Spacing

s = Spacing(pixdim=(1.5, 1.5, 2.0))
t = torch.rand(1, 226, 257, 113)
print(s(t)[0].shape)  # (1, 151, 172, 57)
```
Regarding the overall number of samples (50 in my case), and sparsity in the image set of positive labels (tiny tumors), any thoughts will be very welcome.
This is in the context of the Children's Tumor Foundation Hack for NF which is still open for participation.