
angeloucn / min_max_similarity

A contrastive learning based semi-supervised segmentation network for medical image segmentation

Python 100.00%
contrastive-learning medical-image-segmentation medical-video-analysis semi-supervised-learning surgical-tools-segmentation video-segmentation

min_max_similarity's People

Contributors

angeloucn

min_max_similarity's Issues

Is each unlabeled image fed to the two models with different data augmentations?

I am confused about the contrastive learning step. My current understanding is that for models A and B, the same unlabeled image is transformed with different data augmentations A1 and B1, and the corresponding views are fed into models A and B respectively for the subsequent similarity comparison. Is that right? If so, how is the similarity loss computed between the differently augmented views? If one view is rotated or flipped, wouldn't the loss be high?

If my understanding is wrong, could you tell me the correct procedure?
Thanks a lot.
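For reference, the concern above can be made concrete with a minimal NumPy sketch of a pixel-wise cosine-similarity term between the two branches' feature maps. The function names are illustrative, not the repository's actual code; the key point is that if the two augmented views are geometrically misaligned (e.g. one is flipped), the features must be mapped back to a common frame before comparison, otherwise the similarity is computed between mismatched pixels:

```python
import numpy as np

def pixelwise_cosine_similarity(f1, f2, eps=1e-8):
    """Cosine similarity at each spatial location between two
    feature maps of shape (C, H, W) from the two model branches."""
    num = (f1 * f2).sum(axis=0)
    den = np.linalg.norm(f1, axis=0) * np.linalg.norm(f2, axis=0) + eps
    return num / den  # shape (H, W), values in [-1, 1]

def undo_hflip(feat):
    """Map features of a horizontally flipped view back to the
    original frame so the comparison is spatially aligned."""
    return feat[:, :, ::-1]
```

Under this reading, a flipped view would pass through `undo_hflip` (or the inverse of whatever geometric augmentation was applied) before the similarity is computed, so geometric augmentations do not inflate the loss.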

Trying to reproduce, but extremely low performance

I'm trying to reproduce the reported results using the default hyper-parameters in train_mms.py. Does anything need to be changed? I got extremely low results.

Train log
2022-10-03 20:00:16,393 2022-10-03 20:00:16.393250 Epoch [099/100], total_loss : 2.3719
2022-10-03 20:00:16,394 Train loss: 2.371871218389394
2022-10-03 20:00:28,635 Validation dice coeff model 1: 0.38830975438087945
2022-10-03 20:00:28,636 Validation dice coeff model 2: 0.3108561814208858
2022-10-03 20:00:28,637 current best dice coef model 1 0.47562526039001296, model 2 0.3603631227240354
2022-10-03 20:00:28,637 current patience :101

Test log
2022-10-03 20:11:16,730 logs/kvasir/test/saved_images_1/
2022-10-03 20:11:39,722 Model 1 F1 score : 0.18228444612672604
2022-10-03 20:11:39,838 Model 1 MAE : 0.18325799916872645
2022-10-03 20:11:39,839 logs/kvasir/test/saved_images_2/
2022-10-03 20:12:06,201 Model 2 F1 score : 0.15014741534272816
2022-10-03 20:12:06,324 Model 2 MAE : 0.1558560876074035

Encountering suboptimal results during the reproduction process, seeking your insights and guidance.

Hello, thank you for your contributions in the field of semi-supervised contrastive learning. I have gained a lot from your work!

However, during the reproduction process I encountered some issues. When I applied the original model to the polyp dataset (kvasir-seg), performance was not ideal, with a Dice score of only around 23. The log files also indicated that the dataloader filtered out many images whose sizes did not meet the default trainsize.

Initially, I suspected the filters in data.py, so I commented out the filtering operations. This improved the training Dice to around 33, but that clearly does not reflect the model's true capability.

Next, I tried another polyp dataset, colon, and trained the original model. The best dice during training was over 85, which is quite impressive.

I then switched to the kvasir-instrument dataset mentioned in the original paper, using the original model parameters for training. However, the results were not satisfactory, similar to kvasir-seg, with a dice score of only around 20 during training.

What's strange is that the image sizes in all three datasets don't match the trainsize. Is it necessary to modify the default value of trainsize to improve the experimental results?

This process has left me quite perplexed. I look forward to your response and greatly appreciate your assistance.

dataset

Thank you for your work! Could you share the URLs of all these datasets, or provide them in some other way? For the first dataset I cannot open the URL; for the second, I cannot download the data from the website; and for the third, I also cannot open the URL.

Many thanks!

Something about AF-SfMLearner

Hi, sorry to interrupt. I couldn't find your contact information, so I'm asking here. I was recently reproducing AF-SfMLearner and ran into the same problem you did: the predicted trajectory is very poor. The author mentioned that you seem to have solved it by adding align_corners=True to F.interpolate and F.grid_sample. Did that work for you? Thank you very much!

Best regards
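For context on what align_corners changes: it controls how output pixel coordinates are mapped back into the input grid by F.interpolate and F.grid_sample. A NumPy sketch of the two standard coordinate-mapping conventions (my own illustration, not code from either repository):

```python
import numpy as np

def source_coords(out_size, in_size, align_corners):
    """Source x-coordinate sampled for each output pixel in 1-D
    bilinear resampling, under the two align_corners conventions."""
    x = np.arange(out_size, dtype=float)
    if align_corners:
        # corner pixels of input and output grids coincide exactly
        return x * (in_size - 1) / (out_size - 1)
    # pixel centers sit at half-integer positions; border
    # coordinates can fall outside the input grid
    return (x + 0.5) * in_size / out_size - 0.5
```

With align_corners=True the endpoints of the two grids line up, so upsampled depth maps and sampling grids stay consistent at image borders; that border behavior is a plausible reason the setting matters when warped predictions feed a photometric loss.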
