Comments (14)
Hi @fharman ,
I did not provide a script to perform inference using the functional models (the old ones).
Can you show me the code you used so that I can help you debug it?
I just tested whether the sequences in 'testing' mode indeed contain the expected number of test files, and they do:
```python
from fastmri_recon.data.sequences.fastmri_sequences import ZeroFilled2DSequence

test_path = ...  # your test path here

seq_4 = ZeroFilled2DSequence(
    test_path,
    mode='testing',
    af=4,
    contrast=None,
    norm=True,
)
seq_8 = ZeroFilled2DSequence(
    test_path,
    mode='testing',
    af=8,
    contrast=None,
    norm=True,
)
print(len(seq_4.filenames), len(seq_8.filenames))
```
which prints `50 58`: 50 + 58 = 108, which matches the 108 files in the test set.
from fastmri-reproducible-benchmark.
In the Colab you linked, I didn't see where you ran inference to predict the test images.
Also, please note that since #113 you no longer need to switch to an older commit to get correct results with the U-net; I corrected the error.
Ok I see now, let me see.
So your function save_figure
(consider renaming it for readability) is, I guess, the one you wanted to use to save your results. I am rewriting it here for clarity:
```python
def save_figure(im_recos, name):
    global img_index
    with h5py.File(test_gen_scaled.filenames[img_index], 'r') as f:
        for slice_index in range(f.get('kspace').shape[0]):
            im_reco = im_recos[slice_index]
            image_name = test_gen_scaled.filenames[img_index][78:-6]
            # im_gt = img_batch[slice_index]
            # im_res = np.abs(im_gt - im_reco)
            fig, ax = plt.subplots(1, frameon=False)
            ax.imshow(np.abs(np.squeeze(im_reco)), aspect='auto')
            ax.axis('off')
            fig.savefig(f'/content/drive/My Drive/fastmri-reproducible-benchmark-bcd3fddb48ad324566a44b23c4439ff7a50a316e/TestFigures2/AF8/{image_name}_{slice_index}_{name}_recon_af{AF}.png')
            # fig, ax = plt.subplots(1, frameon=False)
            # ax.imshow(np.abs(np.squeeze(im_res)), aspect='auto')
            # ax.axis('off')
            # fig.savefig(f'/content/drive/My Drive/UNET_Colab/fastmri_reproducible_benchmark_master/TestFigures/trivial_{name}_residu_af{AF}.png')
    img_index += 1
```
However, you didn't open the correct file: you opened the original test file rather than a newly created output file with the correct naming (see the guidelines here).
In addition, this function never writes to that file; it only saves a matplotlib figure. You should write the reconstructions to the newly created h5 file instead.
Finally, in Python it's generally recommended to avoid global variables, so try rewriting this function without one.
@fharman I did not make any correction in the function; I just rewrote it in the issue for logging's sake.
The folders you are showing me are full of png images. For this challenge you need to submit h5 files: please read the guidelines on how to submit.
Alternatively, you can take inspiration from this function or the one in the official repo.
Hi,
Sorry for my mistakes and misunderstanding; I am trying to learn and correct myself based on your replies. I have a question about fastmri-reproducible-benchmark/fastmri_recon/evaluate/utils/write_results.py:
in write_result(exp_id, result, filename, coiltype='multicoil', scale_factor=1e6, brain=False, challenge=False), what is exp_id?
Thank you for your patience and replies,
Best,
I added a Pull Request (PR) where I specify the docs of this function. Please take a look.
@fharman it's because exp_id
should be a string. Here you passed Unet
as a bare name instead of the string 'Unet'.
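A minimal sketch of the difference, using a hypothetical stand-in for write_result (the real function writes h5 result files; this stub only checks the argument type):

```python
def write_result_stub(exp_id, result, filename):
    # Hypothetical stand-in for write_result, used only to illustrate
    # that exp_id must be a string, not a bare Python name.
    if not isinstance(exp_id, str):
        raise TypeError("exp_id must be a string, e.g. 'Unet'")
    return f'{exp_id}_{filename}'


# write_result_stub(Unet, None, 'file1.h5')  # NameError: Unet is undefined
out = write_result_stub('Unet', None, 'file1.h5')  # quoted: works
```

Quoting the identifier is all that was missing from the original call.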
Hi @fharman ,
Can we close this?