Hi @fharman ,
It's not possible to compute image quality metrics for the test dataset since it doesn't have the ground truth images.
The way to do this would be to reconstruct all the images of the test dataset, put them in a single folder, zip it and upload to, for example, Dropbox, and submit this zipped folder containing your reconstructions to https://fastmri.org/.
This way your reconstructions will appear in the leaderboards.
Hope this helps
from fastmri-reproducible-benchmark.
Thank you for your reply. I am trying to reproduce your results, so I wrote the following.
Thank you again Zaccharie for everything.
Best regards,
Which result are you referring to?
I mean I hope to reach a PSNR in the 28.4-33.2 dB range with a high SSIM value. But when I tried, the zero-filled result was higher than the U-net's (I thought it might be because I used less than the full validation dataset).
Also, in the metrics the std is multiplied by 2. Why?
So you are talking about the Proton Density with Fat Suppression (PDFS) U-net results from the journal paper?
I am very surprised that your U-net results are less than the Zero-Filled, since the U-net is starting from the Zero-Filled image so it should at least perform the same.
What are your results exactly?
Re the *2 in the std representation: I have no clue why it's there. I took this code from the original fastMRI repo but never paid attention to this part!
Thank you very much for signaling this to me, I will submit a correction ASAP.
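For what it's worth, multiplying the std by 2 only makes sense if the doubled value is intended as a rough ~95% interval under a normality assumption; the plain `mean (std)` is the more usual convention. A small sketch of the two conventions (the helper name is mine, not from the repo):

```python
import math

def format_metric(values, double_std=False):
    """Format per-volume metric values as 'mean (std)'.
    With double_std=True, the std is multiplied by 2, as in the
    original fastMRI reporting code discussed above."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    factor = 2 if double_std else 1
    return f"{mean:.4g} ({factor * std:.4g})"

print(format_metric([28.0, 30.0, 32.0]))                   # mean 30, std ~1.633
print(format_metric([28.0, 30.0, 32.0], double_std=True))  # same mean, doubled std
```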
Hi Zaccharie,
My results are low in SSIM, as you can see below. I was hoping to get close to the challenge leaderboard (0.65-0.75 in SSIM).
            PSNR mean (std) (dB)   SSIM mean (std)    # params   Runtime (s)
zfilled     25.91 (3.012)          0.5049 (0.1147)    0          0.4042
unet        28.15 (4.37)           0.4313 (0.1223)    481801     0.4002
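For reference, the PSNR reported in such tables is 10*log10(data_range^2 / MSE). A minimal NumPy sketch (my own helper, not the repo's implementation; the repo's metrics may use a different data_range convention) can help sanity-check values like the ones above:

```python
import numpy as np

def psnr(gt, pred, data_range=None):
    """Peak signal-to-noise ratio in dB; higher is better.
    `data_range` defaults to the dynamic range of the ground truth."""
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    if data_range is None:
        data_range = gt.max() - gt.min()
    mse = np.mean((gt - pred) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)
```

Note that the choice of `data_range` changes the absolute PSNR value, so comparisons are only meaningful when it is computed the same way on both methods.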
Also, I want to ask whether there is any step for the test dataset. All the reconstruction code is for the validation set, but as you know the test dataset is zero-filled by default. I pruned the code, but I think there should be a way to see the U-net reconstructions of the test dataset, not only of the validation dataset.
Thank you for all your support,
Best Regards,
Hi @fharman ,
I think your results are low both in PSNR and SSIM compared to my results, even for the zero-filled version.
Are you showing me the results for both contrasts or for one contrast only?
For how many epochs did you train the U-net?
Also you shouldn't compare against the leaderboard for the knee dataset, since the results are slightly higher on the leaderboard than on the validation set.
Again, for the test set we don't have access to the ground truth, so it's impossible to compute the image quality metrics. As for simple inference for later submission, unfortunately I don't think I have the scripts for the old functional models, but I would gladly accept a contribution on that.
I gave you the instructions on how to submit the test results here.
Aaaaah ok, I didn't know you used my checkpoints.
I think at some point there was a breaking change in the U-net implementation... but I don't remember when it happened. Sorry about that.
I will note this change somewhere in the README.
What you can do to still test with these checkpoints is to check out the commit where they were added (git checkout bcd3fdd).
Re the test set: there is a mode argument for ZeroFilled2DSequence, which you can set to 'testing' (the same goes for MaskedSequence).
Actually, I just realized that I had planned a release for that, you can see this here.
You can do git checkout old_unet to be in a state where the saved U-net checkpoints correspond to the U-net architecture.
Glad I could help (this is very old work that I did when starting my PhD, so it's not really well organised, sorry about that)!
Please let me know if this works.
Ok, I just checked and I also had this issue at the old_unet release. It must mean there is a bug somewhere, which I will fix eventually.
What you can do is check out the commit I mentioned earlier (the one where I added the weights) with git checkout bcd3fdd, and everything should work just fine (this one doesn't have a separation per contrast).
You will need to install PyTorch and ModOpt for everything to run smoothly (pip install torch modopt).
Sorry about that.
Hi @fharman ,
I didn't get your image. But if your results are not like in the paper, you probably didn't check out the commit I sent you (git checkout bcd3fdd).
Could you share your colab with me so I can check that everything is as expected?
Ah, I think it's because you didn't do the checkout correctly.
In colab, after cloning the repo, you can run !git checkout bcd3fdd.
Checking out an older commit gives you the code as it was when that commit was made.
Don't hesitate to read a bit about git and versioning to know more about this, I think it will help you greatly in the future.
Also don't forget to grant me the access to the colab for me to review it.
No, you also need the code corresponding to this commit, not only the checkpoints.
And you still didn't grant me access permissions to your colab.
I just checked, I still don't have access to it.
Still not it.
In the top right-hand corner of a colab notebook, there is a share button ("Partager" here, in French).
Click on it and add my email address to the collaborators: [email protected].
Before I check, please make sure you are on the right git commit (bcd3fdd) to compute the metrics.
Now I have access (the last one was the right move).
In your notebook you didn't check out the commit I mentioned. How did you do it?
Yes, but it's not only about the checkpoints; it's about the version of the source code used to run them.
In light of your answer in the other issue, can we close this @fharman ?