Official PyTorch implementation of the MICCAI 2024 paper (early accept, top 11%): Mammo-CLIP: A Vision Language Foundation Model to Enhance Data Efficiency and Robustness in Mammography
License: Creative Commons Attribution 4.0 International
Hello! Thank you for publishing the code!
I'm wondering if you could clarify which YAML file was used to train the CLIP model (EfficientNet-B5 + Bio_ClinicalBERT)?
Hi,
I wanted to pre-train the same model with the RSNA dataset. However, since RSNA doesn't have text reports, can we generate templated text reports from the RSNA dataset attributes using the preprocessing you used for the VinDr dataset? If so, what modifications would you recommend to the RSNA CSV file?
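To illustrate what such templated report generation might look like, here is a minimal sketch that maps per-image CSV attributes to a report sentence. The attribute names (`density`, `birads`, `cancer`) are assumptions about the RSNA CSV schema, not the repo's actual preprocessing:

```python
def report_from_attrs(attrs: dict) -> str:
    """Compose a templated 'report' sentence from RSNA-style CSV attributes.

    NOTE: the keys 'density', 'birads', and 'cancer' are hypothetical
    column names, not the repo's actual schema.
    """
    parts = []
    if attrs.get("density"):
        parts.append(f"breast density category {attrs['density']}")
    if attrs.get("birads") is not None:
        parts.append(f"assessed as BI-RADS {attrs['birads']}")
    parts.append(
        "malignancy present"
        if attrs.get("cancer") == 1
        else "no suspicious malignant finding"
    )
    return "Mammogram shows " + ", ".join(parts) + "."

# Example: one row of a hypothetical RSNA CSV rendered as text
print(report_from_attrs({"density": "C", "birads": 0, "cancer": 1}))
```

Each row of the CSV would then be paired with its generated sentence as the image-text pair for pre-training.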
Hello, I have a few questions I would like to ask you.
What is the purpose of the Breastclip folder? Is it part of the project? If so, at what stage is it used?
I saw a .py file in Breastclip that generates reports from the findings. Are these reports derived from the VinDr dataset? Are the generated reports used in the detection and classification tasks, and if so, through which files?
Which method should I follow if I want to perform detection and BI-RADS classification with the image-text data structure?
Do you have a processed version of the VinDr dataset? Since I am working on Colab, I do not have enough disk space to download and extract it, so I cannot use this dataset. I also looked at the PNG version of the VinDr dataset on Kaggle, but the images do not match the information in the CSV file. If you have a processed copy and sharing it won't cause any problems, could you share it via a Drive link?