angeloucn / min_max_similarity
A contrastive learning based semi-supervised segmentation network for medical image segmentation
I am confused about the contrastive learning step. My current understanding: for models A and B, the same unlabeled image is transformed with two different data augmentations, A1 and B1, and the two augmented views are fed into models A and B respectively for the subsequent similarity comparison. Am I right? If so, how is the similarity loss computed between the differently augmented views? If one view is rotated or flipped, the pixel-wise loss could be high even when both predictions are correct.
If my understanding is wrong, could you describe the correct procedure?
Thanks a lot.
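To make the question concrete, here is a minimal sketch of the kind of similarity comparison being asked about. This is my reading of the setup, not the repository's actual loss code; the function names are hypothetical, and the last helper illustrates the flip concern: a geometric augmentation must be inverted so the two feature maps are spatially aligned before comparing them.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similarity_loss(feat_a, feat_b):
    """1 - cosine similarity: small when the two views' features agree."""
    return 1.0 - cosine_similarity(feat_a, feat_b)

def undo_horizontal_flip(feature_map):
    """Invert a horizontal-flip augmentation on a 2-D feature map so the
    two views are spatially aligned before a pixel-wise comparison."""
    return [row[::-1] for row in feature_map]
```

If the augmentations are purely photometric (color jitter, blur), no alignment step is needed; only geometric transforms (flip, rotation, crop) require inverting before a dense similarity loss makes sense.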
I'm trying to reproduce the reported results. I used the default hyper-parameters in train_mms.py. I wonder if anything needs to be changed, since I got extremely low results.
Train log
2022-10-03 20:00:16,393 2022-10-03 20:00:16.393250 Epoch [099/100], total_loss : 2.3719
2022-10-03 20:00:16,394 Train loss: 2.371871218389394
2022-10-03 20:00:28,635 Validation dice coeff model 1: 0.38830975438087945
2022-10-03 20:00:28,636 Validation dice coeff model 1: 0.3108561814208858
2022-10-03 20:00:28,637 current best dice coef model 1 0.47562526039001296, model 2 0.3603631227240354
2022-10-03 20:00:28,637 current patience :101
Test log
2022-10-03 20:11:16,730 logs/kvasir/test/saved_images_1/
2022-10-03 20:11:39,722 Model 1 F1 score : 0.18228444612672604
2022-10-03 20:11:39,838 Model 1 MAE : 0.18325799916872645
2022-10-03 20:11:39,839 logs/kvasir/test/saved_images_2/
2022-10-03 20:12:06,201 Model 2 F1 score : 0.15014741534272816
2022-10-03 20:12:06,324 Model 2 MAE : 0.1558560876074035
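For reference, the metrics appearing in these logs follow standard definitions; below is a minimal sketch on flattened binary masks (not necessarily the exact code the repository uses). For binary masks, the Dice coefficient equals the F1 score.

```python
def dice_coefficient(pred, target, eps=1e-8):
    """Dice = 2*|P intersect T| / (|P| + |T|) on flattened binary masks.
    For binary masks this coincides with the F1 score."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def mean_absolute_error(pred, target):
    """MAE between a (possibly soft) prediction map and the ground truth."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
```

So a validation Dice around 0.38, as logged above, means predicted and ground-truth masks overlap far less than their combined area would require for a good segmentation.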
Hello, thank you for your contributions in the field of semi-supervised contrastive learning. I have gained a lot from your work!
However, during the reproduction process, I encountered some issues. When I attempted to apply the original model to the polyp dataset (kvasir-seg), I found that the performance was not ideal, with a dice score of only around 23. The log files also indicated that the dataloader filtered out many images with sizes not meeting the default trainsize.
Initially, I suspected the issue might be with the filters in data.py, so I commented out the filtering operations. This improved the training dice value to around 33, but it's clearly not reflective of the true capabilities of the model.
Next, I tried another polyp dataset, colon, and trained the original model. The best dice during training was over 85, which is quite impressive.
I then switched to the kvasir-instrument dataset mentioned in the original paper, using the original model parameters for training. However, the results were not satisfactory, similar to kvasir-seg, with a dice score of only around 20 during training.
What's strange is that the image sizes in all three datasets differ from the trainsize. I'm wondering whether it's necessary to modify the default value of trainsize to improve the experimental results.
This process has left me quite perplexed. I look forward to your response and greatly appreciate your assistance.
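To illustrate the issue being described, here is a hedged sketch of the difference between filtering and resizing. I have not inspected data.py; the functions below are a hypothetical reconstruction (images represented as (width, height) pairs, and 352 as an assumed default trainsize), contrasting dropping mismatched images with resizing them so no samples are lost.

```python
def keep_only_trainsize(images, trainsize=352):
    """Hypothetical reconstruction of the data.py filter: silently drop
    any image whose width or height differs from trainsize, which can
    shrink the training set dramatically on datasets like kvasir-seg."""
    return [(w, h) for (w, h) in images if w == trainsize and h == trainsize]

def resize_all(images, trainsize=352):
    """Alternative: resize every image to (trainsize, trainsize) instead
    of discarding it, so every sample is kept."""
    return [(trainsize, trainsize) for _ in images]
```

On a dataset where almost no raw image matches trainsize exactly, the first strategy leaves very little training data, which would be consistent with the low Dice scores observed.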
Thank you for your attention! May I ask for the URLs of all these datasets, or could you share the datasets some other way? I cannot open the URL for the first dataset, I cannot download the second dataset from its website, and I also cannot open the URL for the third.
Truly, thanks!
Hi, sorry to interrupt. I can't find your contact information, so I'm asking a question here. I was recently reproducing AF-SfMLearner and ran into the same problem as you: the predicted trajectory was very poor. The author mentioned that you seem to have solved it, apparently by passing align_corners=True to F.interpolate and F.grid_sample. Did that work? Thank you very much!
Best regards
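For anyone puzzled by why align_corners matters here, the sketch below reproduces the 1-D coordinate mapping that the two settings imply, following the semantics of PyTorch's F.interpolate (written in plain Python so it can be inspected without torch). The two modes sample visibly different source positions, which can shift warped images and corrupt downstream pose/trajectory estimates.

```python
def source_coords(out_size, in_size, align_corners):
    """Source coordinates sampled by 1-D linear interpolation.
    align_corners=True maps the output endpoints exactly onto the input
    endpoints; align_corners=False treats samples as pixel centres and
    clamps out-of-range positions."""
    if align_corners:
        scale = (in_size - 1) / (out_size - 1)
        return [i * scale for i in range(out_size)]
    scale = in_size / out_size
    return [min(in_size - 1.0, max(0.0, (i + 0.5) * scale - 0.5))
            for i in range(out_size)]
```

For example, upsampling 2 samples to 4 gives source positions [0, 1/3, 2/3, 1] with align_corners=True but [0, 0.25, 0.75, 1] without, so the two settings produce different interpolated values everywhere except the borders.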
I think the roles of the Classifier and the Projector are quite similar, so why did you decide to differentiate them? Did you run any experiments with the Classifier and Projector sharing the same weights?