
Comments (21)

zaccharieramzi commented on September 27, 2024

Hi,

For which cell did you get this error message?

from fastmri-reproducible-benchmark.

ali-yilmaz commented on September 27, 2024

Hi again,

In validation mode, I didn't get any errors in reconstruction. But in testing mode, there is a line im_recos = reco_function(*test_gen[image_index], model); as you know, the reconstruction function is reco_net_from_test_file, and it takes 2 inputs as far as I can see. Nevertheless, I got the error TypeError: reco_net_from_test_file() takes 2 positional arguments but 4 were given and began to debug to see which function returns 2 values. Unfortunately, my Spyder always hangs trying to connect to the kernel at that line.

Have you got any idea for this problem?

P.S.: Your YouTube presentation about Git is really good. Thanks.

zaccharieramzi commented on September 27, 2024

Oh ok, you mean that you used the qualitative validation notebook for the test dataset?

If you show me what you tried, I can try to help, but you should know that this notebook was not meant to be used for the test dataset, so I cannot guarantee that there will be an easy fix.

Re the git presentation, you should also check out the related repo if you want to practice the concepts I talked about in the presentation a bit: https://github.com/zaccharieramzi/git-tuto.

ali-yilmaz commented on September 27, 2024

Yes, I mean the qualitative validation notebook for the test dataset. As you say, "this notebook was not meant to be used for the test dataset". Yes, I tried to modify the code in fastmri_sequences, and I have run into another problem.

Thank you for your recommendation. I am very new to GitHub, so I checked the presentation out.

zaccharieramzi commented on September 27, 2024

But can you show me what you tried?

ali-yilmaz commented on September 27, 2024

In the qualitative validation notebook, I changed the mode to testing and set contrast=None:

test_gen_zero = ZeroFilled2DSequence(test_path, af=AF, norm=True, mode='testing', mask_seed=0)
test_gen_scaled = Masked2DSequence(test_path, mode='testing', af=AF, scale_factor=1e6, mask_seed=0)


all_net_params = [
    {   
        'name': 'cascadenet',
        'init_function': cascade_net,
        'run_params': {
            'n_cascade': 5,
            'n_convs': 5,
            'n_filters': 48,
            'noiseless': True,
        },
        'test_gen':test_gen_zero,
        'run_id': 'cascadenet_af4_1568926824',
        'reco_function': reco_net_from_test_file,
    }
]

def save_figure_for_params(reco_function=None, test_gen=None, name=None, **net_params):
    model = unpack_model(**net_params)
    im_recos = reco_function(*test_gen[image_index], model)

But I cannot visualize the result of CascadeNet in testing mode (it works in validation mode). The error line is im_recos = reco_function(*test_gen[image_index], model), and according to the error, this function receives 4 inputs. On the other hand, in cross_domain_reconstruction.py, the function is defined like this:

import numpy as np


def reco_net_from_test_file(kspace_and_mask_batch, model):
    im_recos = model.predict_on_batch(kspace_and_mask_batch)
    im_recos = np.squeeze(im_recos)
    return im_recos

You are right, these are just for the validation mode. But in testing mode, I cannot see where the problem is.

Best,

zaccharieramzi commented on September 27, 2024

The main reason is that the way the current qualitative validation notebook is written is expressly for validation data.

I see you made several adjustments for this (changing the reconstruction function, removing the img_batch), but the one you didn't make was to remove the unpacking of the arguments (because the output of the testing sequence can now be fed directly to the network).

So instead of im_recos = reco_function(*test_gen[image_index], model), you should have im_recos = reco_function(test_gen[image_index], model). I didn't test this myself, but let me know if it doesn't work.
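To see why removing the star matters, here is a toy sketch (stand-in names, not the repo's actual code): unpacking a sequence item spreads its elements into separate positional arguments.

```python
def reco(batch, model):
    """Stand-in with the same 2-argument signature as reco_net_from_test_file."""
    return len(batch)

item = ("kspace", "mask", "extra")  # a 3-element sequence item, as in this thread

# Unpacking spreads the 3 elements into separate positional arguments, so with
# the model the call receives 4 arguments and raises a TypeError.
try:
    reco(*item, "model")
except TypeError as err:
    print(err)  # reco() takes 2 positional arguments but 4 were given

# Passing the item directly keeps the expected 2-argument call.
print(reco(item, "model"))  # 3
```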

Also, just a warning: the unet architecture changed, so you should import the old unet in order to obtain valid results from the saved checkpoints (see this PR).

ali-yilmaz commented on September 27, 2024

Hi,

I changed it to im_recos = reco_function(test_gen[image_index], model), but then I got IndexError: tuple index out of range. Because of this error, I added a line before the reconstruction line: test_gen = test_gen.get_item_test(test_gen.filenames[image_index])[0]. Nevertheless, I encountered another error:
ValueError: Layer model_1 expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(320, 320, 1) dtype=float32>]

I am trying to overcome this problem and hope to solve it. You are really good at pinpointing and solving specific problems.

Best,

zaccharieramzi commented on September 27, 2024

I don't think I understand your second try. You are changing the structure a bit too much.

But for the first try, can you paste the entire stack trace?

ali-yilmaz commented on September 27, 2024

Hi,

I added test_gen = test_gen.get_item_test(test_gen.filenames[image_index])[0] because when I checked len(test_gen[image_index]), the result was 3. Consequently, 3 + 1 (the model) = 4, which is why I am facing this error. I also thought that [0] is the k-space and the others are the mean and std. If I misunderstood, sorry.

I will post the stack trace tomorrow; I could not reach the PC in my office via TeamViewer.

Thanks,

ali-yilmaz commented on September 27, 2024
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-17-0fe99a9645e2> in <module>()
----> 1 get_ipython().run_cell_magic('time', '', '\nfor net_params in all_net_params:\n    save_figure_for_params(**net_params)\n    ')

3 frames
/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py in run_cell_magic(self, magic_name, line, cell)
   2115             magic_arg_s = self.var_expand(line, stack_depth)
   2116             with self.builtin_trap:
-> 2117                 result = fn(magic_arg_s, cell)
   2118             return result
   2119 

<decorator-gen-60> in time(self, line, cell, local_ns)

/usr/local/lib/python3.6/dist-packages/IPython/core/magic.py in <lambda>(f, *a, **k)
    186     # but it's overkill for just that one bit of state.
    187     def magic_deco(arg):
--> 188         call = lambda f, *a, **k: f(*a, **k)
    189 
    190         if callable(arg):

/usr/local/lib/python3.6/dist-packages/IPython/core/magics/execution.py in time(self, line, cell, local_ns)
   1191         else:
   1192             st = clock2()
-> 1193             exec(code, glob, local_ns)
   1194             end = clock2()
   1195             out = None

<timed exec> in <module>()

<ipython-input-15-f4c660eaec36> in save_figure_for_params(reco_function, test_gen, name, **net_params)
     33     for image_index in range((len(test_gen_scaled))):
     34         #test_gen=test_gen.get_item_test(test_gen.filenames[image_index])[0]
---> 35         im_recos= reco_function(*test_gen[image_index], model)
     36         filename=test_gen_scaled.filenames[image_index][70:len(test_gen_scaled.filenames[image_index])-3]
     37         write_result(im_recos ,filename, coiltype='singlecoil', scale_factor=1e6, brain=False)

TypeError: reco_net_from_test_file() takes 2 positional arguments but 4 were given

I could not reach my PC via TeamViewer, so I ran it in Colab. Thanks.

zaccharieramzi commented on September 27, 2024

Ah OK, I see.
The problem is that the signatures of the reconstruction functions are different for the unet and the cross-domain networks.

Therefore you should have 2 cases in the setup of the test reconstruction, one for the unet and one for the cross-domain reconstruction, based on the 2 reconstruction functions' signatures.
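A minimal sketch of such a two-case setup; only reco_net_from_test_file comes from the repo, while the unet counterpart, the fake model, and the dispatch helper are hypothetical illustrations:

```python
def reco_net_from_test_file(kspace_and_mask_batch, model):
    # cross-domain networks: the packed sequence item is fed to the model directly
    return model.predict_on_batch(kspace_and_mask_batch)

def reco_unet_from_test_file(image_batch, model):
    # hypothetical unet-style counterpart: expects the item's elements unpacked,
    # so its sequence items must be 1-tuples
    return model.predict_on_batch(image_batch)

CROSS_DOMAIN_RECOS = {reco_net_from_test_file}

def run_reco(reco_function, test_gen, image_index, model):
    item = test_gen[image_index]
    if reco_function in CROSS_DOMAIN_RECOS:
        return reco_function(item, model)   # no unpacking for cross-domain nets
    return reco_function(*item, model)      # unet-style: unpack the item
```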


Please note that I have edited your comments and the initial issue to use GitHub markdown code formatting. This makes the code easier to read (you even get syntax highlighting for Python code).
You can find some examples for code here.

zaccharieramzi commented on September 27, 2024

Hi @ali-yilmaz , do you have any updates on this?

ali-yilmaz commented on September 27, 2024

Unfortunately, no. I am still facing the same error. I thought there could be a problem in fastmri_sequences, but it works for the old unet, so I still haven't overcome this. How did you run this for your test set public leaderboard submission?

Best,

zaccharieramzi commented on September 27, 2024

For the functional models, I used something more or less like the following (model, the helper functions, and the path variables are defined elsewhere):

import os.path as op

import h5py
from tqdm.notebook import tqdm_notebook

scale_factor = 1e6
for setup, (net_id, epoch) in tqdm_notebook(zip(setups, nets_for_setups), desc='Setups'):
    im_ds_test = test_masked_kspace_dataset_from_indexable(test_path, scale_factor=scale_factor, **setup)
    test_files = list(list_files_w_contrast_and_af(test_path, **setup))
    chkpt_path = f'checkpoints/{net_id}-{epoch}.hdf5'
    model.load_weights(chkpt_path)
    for input_data, filename in tqdm_notebook(zip(im_ds_test, test_files), desc='recos'):
        im_recos = model.predict(input_data, batch_size=4)
        im_recos = im_recos[..., 0] / scale_factor
        with h5py.File(op.join(submission_path, submission_name, filename_submission(filename)), 'w') as f:
            f.create_dataset('reconstruction', data=im_recos)

ali-yilmaz commented on September 27, 2024
scale_factor = 1e6

for setup, (net_id, epoch) in tqdm_notebook(zip(setups, nets_for_setups), desc='Setups'):

Does zip(setups, nets_for_setups) mean the unpacking of CascadeNet? What are setups and nets_for_setups?

    im_ds_test = test_masked_kspace_dataset_from_indexable(test_path, scale_factor=scale_factor, **setup)

In test_masked_kspace_dataset_from_indexable (def test_masked_kspace_dataset_from_indexable(path, AF=4, scale_factor=1, contrast=None):), setup seems to be just the contrast. I think I am wrong about setup.

    test_files = list(list_files_w_contrast_and_af(test_path, **setup))

I have seen that list_files_w_contrast_and_af is used for addressing the files (def list_files_w_contrast_and_af(path, AF=4, contrast=None)). I don't understand this line (list_files_w_contrast_and_af(test_path, **setup)).

    chkpt_path = f'checkpoints/{net_id}-{epoch}.hdf5'
    model.load_weights(chkpt_path)

    for input_data, filename in tqdm_notebook(zip(im_ds_test, test_files), desc='recos'):
        im_recos = model.predict(input_data, batch_size=4)

The lines below are just for writing the result as an h5 file; I have used them before. (I don't use the scale factor because the ground truth and the reconstruction are approximately similar to each other in the validation dataset.)

    im_recos = im_recos[..., 0] / scale_factor
    with h5py.File(op.join(submission_path, submission_name, filename_submission(filename)), 'w') as f:
        f.create_dataset('reconstruction', data=im_recos)

Also, congratulations on the multicoil brain challenge leaderboard! In the fastMRI article, a window size of 7x7 is recommended for the SSIM evaluation. Did you take that into account?

Best,

zaccharieramzi commented on September 27, 2024

Yes, sorry, that was very unclear; I went a bit too fast copying this snippet. This should answer the questions about the setups and nets_for_setups variables:

from itertools import product

possible_afs = [4, 8]
possible_contrasts = ['CORPDFS_FBK', 'CORPD_FBK']
setups = product(possible_afs, possible_contrasts)
setups = [
    {'AF': AF, 'contrast': contrast}
    for AF, contrast in setups
]

nets_for_setups = [  # the run id of the neural networks corresponding to each setup
    ('pdnet_af4_CORPDFS_FBK_1582824315', 50),
    ('pdnet_af4_CORPD_FBK_1582833677', 50),
    ('pdnet_af8_CORPDFS_FBK_1582659795', 50),
    ('pdnet_af8_CORPD_FBK_1582708199', 50),
]

In words, setups is a list of dictionaries specifying the acceleration factor and contrast for the inference. nets_for_setups is a list of tuples, where each tuple has a corresponding setup in setups and is composed of the run id of the network and its final epoch.

I don't use the scale factor because the ground truth and the reconstruction are approximately similar to each other in the validation dataset

I am surprised by this. You didn't use a scaling factor when training?

In the fastMRI article, a window size of 7x7 is recommended for the SSIM evaluation. Did you take that into account?

I didn't change the parameters of the SSIM, no. They might differ from those of the official fastMRI, but they are a very good indication, so I didn't bother changing them. Please feel free to check the exact agreement and contribute here.

Also, congratulations on the multicoil brain challenge leaderboard!

Thanks! There is a paper on arXiv describing the overall challenge; don't hesitate to give it a read.

ali-yilmaz commented on September 27, 2024

Hi,

Thank you for your support.

possible_afs = [4, 8]
possible_contrasts = ['CORPDFS_FBK', 'CORPD_FBK']
setups = product(possible_afs, possible_contrasts)
setups = [
    {'AF': AF, 'contrast': contrast}
    for AF, contrast in setups
]

nets_for_setups = [  # the run id of the neural networks corresponding to each setup
    ('pdnet_af4_CORPDFS_FBK_1582824315', 50),
    ('pdnet_af4_CORPD_FBK_1582833677', 50),
    ('pdnet_af8_CORPDFS_FBK_1582659795', 50),
    ('pdnet_af8_CORPD_FBK_1582708199', 50),
]  

According to this snippet, we have to have 4 checkpoints, one per contrast and acceleration factor. I thought you did this with just the one checkpoint that exists in the checkpoints folder (300 epochs). Has it been trained for both AF=4 and AF=8?

I am surprised by this. You didn't use a scaling factor when training?

I used it in the training of the UNET (multiplying the tensors by 1e6). But I don't use it when writing the reconstructions of the test dataset to an h5 file. I thought you could test it in your UNET writing as follows:

    im_recos = im_recos  # don't scale it, and check it one by one against the validation dataset
    with h5py.File(op.join(submission_path, submission_name, filename_submission(filename)), 'w') as f:
        f.create_dataset('reconstruction', data=im_recos)

I didn't change the parameters of the SSIM, no. They might differ from those of the official fastMRI, but they are a very good indication, so I didn't bother changing them. Please feel free to check the exact agreement and contribute here.

I said it because with a 7x7 window size, it gives a higher SSIM value. Given this, you could revise and check the validation results in your articles.

Thanks! There is a paper on arXiv describing the overall challenge; don't hesitate to give it a read.

I will read it, and I am also thinking of citing all these articles. But first, I have to improve my networks with these and with complex networks, and I have to adapt the scaling of the ground truth and the k-space. As you know, all ground truths are real images while all the h5 k-space values are complex. I have to make progress.

Thank you,

Best,

zaccharieramzi commented on September 27, 2024

According to this snippet, we have to have 4 checkpoints, one per contrast and acceleration factor. I thought you did this with just the one checkpoint that exists in the checkpoints folder (300 epochs). Has it been trained for both AF=4 and AF=8?

You don't have to. For example, if you don't fine-tune on contrast, you could have something like:

possible_afs = [4, 8]
setups = [
    {'AF': AF}
    for AF in possible_afs
]

nets_for_setups = [  # the run id of the neural networks corresponding to each setup
    ('pdnet_af4_<id>', 50),
    ('pdnet_af8_<id>', 50),
]  

I used it in the training of the UNET (multiplying the tensors by 1e6). But I don't use it when writing the reconstructions of the test dataset to an h5 file. I thought you could test it in your UNET writing as follows:

Hmmm, if you use the scaling factor for training, you should definitely divide by it when writing a submission file. Otherwise the results are going to be completely off. You can try and see for yourself when submitting to fastMRI.
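A tiny numeric illustration of this round trip (the magnitudes and the identity "network" are purely illustrative):

```python
import numpy as np

scale_factor = 1e6
image = np.random.rand(320, 320) * 1e-5   # fastMRI-like tiny magnitudes
model_input = image * scale_factor        # scaling applied before the network
model_output = model_input                # identity network as a stand-in
submission = model_output / scale_factor  # undo the scaling before writing the h5

assert np.allclose(submission, image)        # back on the ground-truth scale
assert not np.allclose(model_output, image)  # without dividing, off by a factor of 1e6
```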

I said it because with a 7x7 window size, it gives a higher SSIM value. Given this, you could revise and check the validation results in your articles.

Ah OK, interesting, I will check it out.

As you know, all ground truths are real images while all the h5 k-space values are complex.

I am not sure I understand what you are saying here.

Anyway, did you manage to run the inference, i.e. reconstruct the test files?

ali-yilmaz commented on September 27, 2024

You don't have to. For example, if you don't fine-tune on contrast, you could have something like:

OK, I understand. It works with the 300-epoch checkpoint (just changing it according to the AF, two times). Thank you!

Hmmm, if you use the scaling factor for training, you should definitely divide by it when writing a submission file. Otherwise the results are going to be completely off. You can try and see for yourself when submitting to fastMRI.

Yeah, I did that: I rescaled it before writing. But when comparing with your result (old UNET), I did not scale by 1e6, and they were similar.

Ah OK, interesting, I will check it out.

In fastMRI: An Open Dataset and Benchmarks for Accelerated MRI, it says:

For SSIM values reported in this paper, we choose a window size of 7 × 7.

But in their GitHub repository, they used the standard SSIM, which has an 11x11 window. The 7x7 window is reported because it gives a better result.

As you know, all ground truths are real images while all the h5 k-space values are complex.

I am not sure I understand what you are saying here.

I am also interested in complex networks. But as you know, all ground truths are in the image domain even though the MRI slices live in complex space. Upon taking the fft2 of the ground truths, they are really different from the fully sampled k-space, despite scaling the k-spaces by 1e6; they are not close to each other. I am looking into whether anyone has dealt with this before.

Anyway, did you manage to run the inference, i.e. reconstruct the test files?

Yeah, I did. I revised the code according to the AF.

Thank you for all support,

Best,

zaccharieramzi commented on September 27, 2024

But upon taking the fft2 of the ground truths, they are really different from the fully sampled k-space, despite scaling the k-spaces by 1e6

If you are talking solely about the data from fastMRI (not the data modified after preprocessing in this repo), then no, they should be exactly the same. Maybe you didn't consider using the normalized fft2 (norm='ortho' in the fft2 function).
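As a quick check of how much the normalization convention matters, a sketch with plain numpy (not the repo's exact fft helpers):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))

k_default = np.fft.fft2(image)              # default norm: no forward scaling
k_ortho = np.fft.fft2(image, norm='ortho')  # unitary: divided by sqrt(n_pixels)

# The two conventions differ by a factor of sqrt(N), so comparing an
# un-normalized fft2 of the ground truth against ortho-normalized k-space
# looks "really different" even though the content is the same.
assert np.allclose(k_default, k_ortho * np.sqrt(image.size))
assert np.allclose(np.fft.ifft2(k_ortho, norm='ortho'), image)
```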

I am also interested in complex networks.

You should take a look at this paper.

Yeah, I did. I revised the code according to the AF.

Cool, let me close this then. Feel free to open another issue for any other question.
