
wdika / mridc

37 stars · 1 watcher · 11 forks · 12.54 MB

Data Consistency Toolbox for Magnetic Resonance Imaging

Home Page: https://mridc.readthedocs.io

License: Apache License 2.0

Python 37.67% Shell 0.01% Dockerfile 0.02% Jupyter Notebook 62.30%
deep-learning pytorch machine-learning mri mri-reconstruction recurrent-inference-machines variational-network unet convolutional-neural-networks data-consistency

mridc's People

Contributors

deepsource-autofix[bot], deepsourcebot, dependabot[bot], lysanderdejong, pre-commit-ci[bot], sourcery-ai[bot], wdika

mridc's Issues

Add image logging for wandb

Is your feature request related to a problem? Please describe.
See the logging function in the base reconstruction class.

Describe the solution you'd like
It would be nice to log images to wandb, as is currently done for TensorBoard.
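
A minimal sketch of such an image-logging step, assuming the reconstructions are numpy-compatible arrays and a wandb run is active (`to_loggable` is a hypothetical helper name, not an existing mridc function):

```python
import numpy as np

def to_loggable(image):
    """Convert a (possibly complex) reconstruction into a [0, 1] magnitude
    image, which is the range wandb.Image renders correctly."""
    image = np.abs(np.asarray(image))
    return (image - image.min()) / (image.max() - image.min() + 1e-8)

# In the base class's logging function one could then do (assuming a
# WandbLogger is attached to the module):
#   self.logger.experiment.log({"reconstruction": wandb.Image(to_loggable(x))})
```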

[FR] Use PyTorch Lightning

Is your feature request related to a problem? Please describe.
Training and logging should be more concise and unified for every model.

Describe the solution you'd like
Use PyTorch Lightning.

[BUG] Batch size > 1 is not allowed

Describe the bug
--batch_size > 1 is not allowed; the batch dimension also gets mixed up with the input's slice dimension.

To Reproduce
Train a cirim, e2evn, or unet model with --batch_size > 1.

Expected behavior
--batch_size > 1 should be allowed. When training on a dataset with varying matrix sizes, however, it should be limited to 1, since volumes of different sizes cannot be stacked into one batch.
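
A hedged sketch of how that limit could be enforced, assuming the dataset's matrix sizes are known up front (`effective_batch_size` is a hypothetical helper, not part of mridc):

```python
def effective_batch_size(requested, matrix_sizes):
    """Fall back to a batch size of 1 when the dataset contains volumes
    with different matrix sizes, since they cannot be stacked into one
    batch without padding."""
    if requested > 1 and len(set(matrix_sizes)) > 1:
        return 1
    return requested
```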

Environment
Operating System: ubuntu-latest
Python Version: >= 3.9
PyTorch Version: >= 1.9

Coil dim in the log-likelihood gradient of the RIM is fixed

Describe the bug
The coil dimension in the log-likelihood gradient computation is fixed to 1.

To Reproduce
Steps to reproduce the behavior:
Go to rim_utils

Expected behavior
The coil dimension should instead be set dynamically via a function argument.
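
A sketch of what a coil-dimension-aware version could look like, assuming a SENSE-style forward model (the function name and signature are illustrative, not the actual rim_utils code):

```python
import torch

def log_likelihood_gradient(eta, masked_kspace, sense, mask,
                            fft_dim=(-2, -1), coil_dim=1):
    """Data-consistency gradient where the coil dimension is an argument
    instead of being hard-coded to 1."""
    # Expand the current estimate to coil images and move to k-space.
    coil_imgs = eta.unsqueeze(coil_dim) * sense
    pred_kspace = torch.fft.fftn(coil_imgs, dim=fft_dim, norm="ortho")
    # Compare against the measured k-space on the sampled locations only.
    residual = mask * (pred_kspace - masked_kspace)
    # Back-project and combine over the (now dynamic) coil dimension.
    backproj = torch.fft.ifftn(residual, dim=fft_dim, norm="ortho")
    return torch.sum(backproj * sense.conj(), dim=coil_dim)
```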

[FR] Use Weights & Biases

Is your feature request related to a problem? Please describe.
Hyperparameter search options are needed, and logging needs to be refined.

Describe the solution you'd like
Use Weights & Biases (wandb).
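
For the hyperparameter-search part, a wandb sweep is described by a plain configuration dict (the parameter names below are illustrative, not actual mridc arguments):

```python
# Hypothetical wandb sweep configuration for searching over the learning
# rate and the number of cascades.
sweep_config = {
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "lr": {"min": 1e-5, "max": 1e-3},
        "num_cascades": {"values": [4, 8, 12]},
    },
}
# One would then register it with:
#   sweep_id = wandb.sweep(sweep_config, project="mridc")
```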

onnxruntime 1.11.1 doesn't support Python 3.10

Is your feature request related to a problem? Please describe.
As the title says, onnxruntime 1.11.1 does not support Python 3.10.

Describe the solution you'd like
Upgrade once onnxruntime releases a version that supports Python 3.10.
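
Until then, the dependency could be restricted with a PEP 508 environment marker in the requirements file (an illustrative pin, not necessarily the one the repo uses):

```text
onnxruntime==1.11.1; python_version < "3.10"
```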

Generalize the FFT type

Is your feature request related to a problem? Please describe.
The FFT type is defined in multiple places across the codebase, and it is not always clear how normalization works.

Describe the solution you'd like
fft_normalization and fft_dim should be leveraged better to distinguish between the different options.
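
For reference, the torch.fft normalization modes differ only by a scale factor; a quick demonstration using torch.fft directly, independent of mridc's own wrappers:

```python
import torch

x = torch.randn(8, 8, dtype=torch.complex64)

# "backward": no scaling on the forward FFT, 1/n on the inverse (torch default).
# "ortho": 1/sqrt(n) in both directions, so the transform is unitary and
# preserves the signal's norm -- usually what MRI reconstruction expects.
k_backward = torch.fft.fftn(x, dim=(-2, -1), norm="backward")
k_ortho = torch.fft.fftn(x, dim=(-2, -1), norm="ortho")

# The two conventions differ exactly by sqrt(n) = sqrt(64) = 8 here.
assert torch.allclose(k_backward, k_ortho * 8.0)
```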

Fix/allow batch_size > 1

Describe the bug
Only a batch size of 1 is supported at the moment. This behavior is not intended and should be changed.

To Reproduce

Expected behavior
batch_size>1 should be allowed.

Desktop (please complete the following information):

  • OS: Linux


[FR] Create global arguments file and inherit them

Is your feature request related to a problem? Please describe.
Every train and run script duplicates global arguments.

Describe the solution you'd like
Global arguments should live in a separate file and be inherited where required.
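
One way to do this with argparse is a shared parent parser (the flag names below are illustrative, not mridc's actual arguments):

```python
import argparse

def global_parser():
    """Shared parser holding the arguments every train/run script needs;
    scripts inherit it via the `parents` mechanism instead of redefining
    the same flags."""
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("--batch_size", type=int, default=1)
    parser.add_argument("--lr", type=float, default=1e-4)
    parser.add_argument("--checkpoint", type=str, default=None)
    return parser

# A model-specific script then only adds its own options:
train_parser = argparse.ArgumentParser(parents=[global_parser()])
train_parser.add_argument("--num_cascades", type=int, default=8)
args = train_parser.parse_args(["--batch_size", "2", "--num_cascades", "4"])
```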

Describe alternatives you've considered
This might also be addressed through PyTorch Lightning and wandb.

[FR] Unify Fast Fourier Transform across the repo

Is your feature request related to a problem? Please describe.
Fast Fourier Transform type, dim, and normalization should be consistent across the repo.

Describe the solution you'd like
Define fft_type, fft_dim, and fft_normalization across the repo.

  • fft_type options should be "orthogonal" and "backward".
  • fft_normalization options should be "orthogonal", "backward", and None.
  • fft_dim should always be the last two dimensions, or the last two before the complex dimension [..., 2] when it exists.
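
A minimal sketch of such unified helpers, assuming complex-valued torch tensors (function and argument names are illustrative):

```python
import torch

def fft2(x, fft_dim=(-2, -1), fft_normalization="ortho"):
    """Single repo-wide 2D FFT definition. fft_normalization is "ortho",
    "backward", or None (None falls back to the torch default)."""
    norm = fft_normalization if fft_normalization is not None else "backward"
    return torch.fft.fftn(x, dim=fft_dim, norm=norm)

def ifft2(x, fft_dim=(-2, -1), fft_normalization="ortho"):
    """Inverse counterpart, using the same dim/normalization conventions."""
    norm = fft_normalization if fft_normalization is not None else "backward"
    return torch.fft.ifftn(x, dim=fft_dim, norm=norm)
```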

hydra-core==1.2.0 is not supported

Describe the bug
hydra-core==1.2.0 is not supported.

To Reproduce
Steps to reproduce the behavior:
Try to run train/inference with mridc.launch for any model.

Expected behavior
mridc.launch should not raise an error from _run_hydra.
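
Until support lands, one illustrative workaround is to constrain the dependency in the requirements file:

```text
hydra-core<1.2.0
```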
