This repository contains the implementation of the method described in our paper, "Leveraging Coarse-to-Fine Grained Representations in Contrastive Learning for Differential Medical Visual Question Answering".
- Create the conda environment:

```shell
conda create -n your_env_name python=3.8
conda activate your_env_name
```

- Install the required packages:

```shell
pip install -r requirements.txt
```
The Medical-Diff-VQA dataset, a derivative of the MIMIC-CXR dataset, consists of questions categorized into seven categories: abnormality (145,421), location (84,193), type (27,478), level (67,296), view (56,265), presence (155,726), and difference (164,324). The 'difference' questions are specifically for comparing two images. In total, the Medical-Diff-VQA dataset contains 700,703 question-answer pairs derived from 164,324 pairs of main and reference images.
More details can be found in this paper, and the dataset generation code can be found in this project.
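As a quick sanity check on the statistics above, the per-category question counts can be verified to sum to the reported total (a minimal sketch; the dictionary below simply restates the numbers from the paper):

```python
# Per-category question counts reported for Medical-Diff-VQA
counts = {
    "abnormality": 145_421,
    "location": 84_193,
    "type": 27_478,
    "level": 67_296,
    "view": 56_265,
    "presence": 155_726,
    "difference": 164_324,
}

total = sum(counts.values())
print(total)  # 700703, matching the reported 700,703 QA pairs

# The 'difference' questions correspond one-to-one with
# the 164,324 main/reference image pairs.
assert counts["difference"] == 164_324
```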
MIMIC-CXR:
- Download from PhysioNet.
- Note: This dataset requires authorization.
This repository contains two main folders, feature extraction and model. The feature extraction folder contains the code for extracting image features and generating the dynamic graphs. The model folder contains the code for training and testing our model. Please refer to the README.md in each folder for more details.
If you find our work useful, please consider citing our paper:
Coming soon
This project is built upon EKAID, MCCFormer, GLoRIA, and BLIP. Thanks to the contributors of these great codebases.