wdika / mridc
Data Consistency Toolbox for Magnetic Resonance Imaging
Home Page: https://mridc.readthedocs.io
License: Apache License 2.0
Is your feature request related to a problem? Please describe.
The fastMRI dataset stores its metadata in the ISMRMRD header format. This holds only for some datasets, not in general.
Describe the solution you'd like
The _retrieve_metadata function should either be generalized or explicitly constrained to datasets that provide an ISMRMRD header.
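A minimal sketch of what such a guard could look like. The key name "ismrmrd_header" follows fastMRI's HDF5 layout; the default values and the function name are hypothetical, not the repo's actual code:

```python
import xml.etree.ElementTree as etree

# Hypothetical fallback values for datasets that ship no ISMRMRD header.
DEFAULTS = {"enc_size": (1, 1, 1), "recon_size": (1, 1, 1)}

def retrieve_metadata(hf):
    """Read matrix-size metadata from a dict-like HDF5 file, with a fallback.

    fastMRI files store the ISMRMRD header as XML under "ismrmrd_header";
    datasets without that key get DEFAULTS instead of a crash.
    """
    if "ismrmrd_header" not in hf:
        return dict(DEFAULTS)
    raw = hf["ismrmrd_header"]
    if isinstance(raw, bytes):
        raw = raw.decode("utf-8")
    root = etree.fromstring(raw)
    sizes = {}
    for space, key in (("encodedSpace", "enc_size"), ("reconSpace", "recon_size")):
        # Namespace-agnostic lookup: match on the tag's local name.
        node = next(el for el in root.iter() if el.tag.endswith(space))
        matrix = next(el for el in node.iter() if el.tag.endswith("matrixSize"))
        sizes[key] = tuple(int(el.text) for el in matrix)
    return sizes
```

The fallback branch is what "constrained" would mean in practice: a well-defined behavior instead of an exception when the header is absent.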
Is your feature request related to a problem? Please describe.
Every train and run script duplicates global arguments.
Describe the solution you'd like
Global arguments should live in a separate file and be inherited where required.
Describe alternatives you've considered
This might also be addressed through PL (PyTorch Lightning) and WANDB (Weights & Biases).
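One standard way to inherit shared arguments is argparse's `parents` mechanism. A sketch with illustrative flag names (not the repo's actual arguments):

```python
import argparse

def global_parser():
    """Parent parser holding arguments shared by every train/run script.

    add_help=False is required for a parser used via `parents`, otherwise
    the child parser would define -h/--help twice.
    """
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("--seed", type=int, default=42, help="Random seed.")
    parser.add_argument("--num-workers", type=int, default=4, help="Dataloader workers.")
    parser.add_argument("--device", default="cuda", help="Compute device.")
    return parser

# A script-specific parser inherits the globals instead of re-declaring them.
train_parser = argparse.ArgumentParser(description="Train a model.", parents=[global_parser()])
train_parser.add_argument("--epochs", type=int, default=10)

args = train_parser.parse_args(["--seed", "7", "--epochs", "3"])
```

Each train/run script would then only declare its own flags on top of the shared parent.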
Describe the bug
Not precisely a bug, but this argument should be generalized across the different masking functions.
Expected behavior
For example, in gaussian masking this argument acts as the FWHM (full width at half maximum). This should be clearly formalized.
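Formalizing it would mean stating the FWHM-to-sigma relationship explicitly. A minimal sketch of that conversion and a 1D Gaussian density profile (function names are illustrative, not the repo's API):

```python
import math

def fwhm_to_sigma(fwhm):
    """Convert a full width at half maximum to a Gaussian standard deviation.

    For a Gaussian, FWHM = 2 * sqrt(2 * ln 2) * sigma ≈ 2.3548 * sigma.
    """
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def gaussian_profile(n, fwhm):
    """1D sampling-density profile with the given FWHM, peaking at 1 in the centre."""
    sigma = fwhm_to_sigma(fwhm)
    centre = (n - 1) / 2.0
    return [math.exp(-0.5 * ((i - centre) / sigma) ** 2) for i in range(n)]
```

With this definition, the profile drops to exactly 0.5 at half the FWHM away from the centre, which is the property a formal spec would pin down.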
Describe the bug
Only a batch size of 1 is supported at the moment. This behavior is not intended and should be changed.
Expected behavior
batch_size > 1 should be allowed.
Is your feature request related to a problem? Please describe.
For example, the option for the inputs can be defined here; we need to make sure it is applied consistently here and across the repo.
Describe the solution you'd like
Use the coil combination function everywhere.
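For concreteness, a sketch of one standard coil combination, root-sum-of-squares over magnitude images; the function name is hypothetical and the repo may prefer a sensitivity-weighted combination instead:

```python
import math

def rss_combine(coil_images):
    """Root-sum-of-squares coil combination for magnitude images.

    `coil_images` is a list of equally sized 2D lists, one per coil.
    Every call site would use this single helper instead of re-deriving
    the combination locally.
    """
    ny = len(coil_images[0])
    nx = len(coil_images[0][0])
    return [
        [math.sqrt(sum(coil[y][x] ** 2 for coil in coil_images)) for x in range(nx)]
        for y in range(ny)
    ]
```

Centralizing the combination in one function is what makes "use it everywhere" checkable.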
Describe the solution you'd like
Use yaml files to parse arguments for training and running methods.
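A sketch of what such an argument file could look like; every key here is hypothetical, not the repo's actual schema:

```yaml
# train.yaml — hypothetical argument file for a training run
model: cirim
batch_size: 1
epochs: 50
optimizer:
  name: adam
  lr: 1.0e-4
mask:
  type: gaussian
  accelerations: [4, 8]
```

A train or run script would then load this file once instead of re-declaring every flag on the command line.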
Is your feature request related to a problem? Please describe.
Fast Fourier Transform type, dim, and normalization should be consistent across the repo.
Describe the solution you'd like
Define fft_type, fft_dim, and fft_normalization across the repo.
fft_type options should be "orthogonal" and "backward".
fft_normalization options should be "orthogonal", "backward", and None.
fft_dim should always be the last two dimensions, or the two last before the complex dimension [..., 2] when this exists.
Is your feature request related to a problem? Please describe.
As the title says, onnxruntime 1.11.1 does not support Python 3.10.
Describe the solution you'd like
Upgrade when onnxruntime adds support.
Is your feature request related to a problem? Please describe.
See the base reconstruction class and the logging function.
Describe the solution you'd like
It would be nice to log images in wandb, as is currently done in tensorboard.
Describe the bug
hydra-core==1.2.0 is not supported.
To Reproduce
Steps to reproduce the behavior:
Try to run train/inference with mridc.launch for any model.
Expected behavior
mridc.launch should not raise any error in _run_hydra.
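Until compatibility with 1.2.0 lands, the dependency could be pinned below the unsupported release, e.g. in the requirements file (a sketch, assuming the repo pins dependencies there):

```
# requirements.txt — pin below the release that breaks _run_hydra
hydra-core<1.2.0
```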
Is your feature request related to a problem? Please describe.
In the config files, the dataset_type argument should be removed (for example here).
Describe the solution you'd like
Instead, we should define the coil and spatial dimensions.
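A sketch of what the replacement config keys could look like; the key names are hypothetical, not an agreed schema:

```yaml
# Hypothetical replacement for dataset_type: describe the data layout directly
dimensionality:
  coil_dim: 1             # axis holding the coils
  spatial_dims: [-2, -1]  # in-plane axes
```

Describing the layout directly removes the need for per-dataset special-casing behind a dataset_type string.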
Is your feature request related to a problem? Please describe.
Hyperparameter search options are needed, and logging needs to be refined.
Describe the solution you'd like
Use Weights & Biases.
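For hyperparameter search, Weights & Biases supports sweep configuration files. A minimal sketch; the searched parameters and values here are illustrative only:

```yaml
# sweep.yaml — a minimal wandb sweep over the learning rate
method: bayes
metric:
  name: val_loss
  goal: minimize
parameters:
  lr:
    values: [1.0e-4, 3.0e-4, 1.0e-3]
```

The same wandb run then collects the refined logging, so search and logging land in one tool.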
Describe the bug
half_scan_percentage works only with 2D masks.
To Reproduce
Duplicate a test function for 1D masking like this and add half_scan_percentage > 0.
Expected behavior
It should work on both 1D and 2D masks. Add this option for 1D masking as well.
Environment
Operating System: ubuntu-latest
Python Version: >= 3.9
PyTorch Version: >= 1.9
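A sketch of how the option could extend to a 1D mask, assuming half_scan_percentage means "zero out the leading fraction of phase-encode lines" as in the 2D case; the function name is hypothetical:

```python
def apply_half_scan(mask, half_scan_percentage):
    """Zero out the first fraction of a 1D sampling mask.

    Assumption: half_scan_percentage gives the fraction of leading
    k-space lines to skip (partial-Fourier style), matching the 2D
    behavior described in the issue.
    """
    if half_scan_percentage <= 0:
        return list(mask)
    cutoff = int(round(len(mask) * half_scan_percentage))
    return [0] * cutoff + list(mask[cutoff:])
```

A duplicated 1D test would then assert exactly this: the leading `cutoff` entries are zero and the remainder is untouched.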
Describe the bug
--batch_size > 1 is not allowed. It also mixes with the input's slice dimension.
To Reproduce
Train a cirim, e2evn, or unet with --batch_size > 1.
Expected behavior
--batch_size > 1 should be allowed. When training on a dataset with varying matrix sizes, though, it should be limited to 1.
Environment
Operating System: ubuntu-latest
Python Version: >= 3.9
PyTorch Version: >= 1.9
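The expected behavior above can be sketched as a small guard the dataloader setup could call; this is an illustration of the intended rule, not the repo's actual code:

```python
def effective_batch_size(shapes, batch_size):
    """Decide the usable batch size for a set of samples.

    `shapes` is a list of per-slice matrix sizes. Samples with differing
    sizes cannot be stacked into one batch tensor, so in that case we
    fall back to a batch size of 1, as the issue requests.
    """
    if batch_size <= 1:
        return 1
    return batch_size if len(set(shapes)) == 1 else 1
```

With a uniform dataset the requested batch size passes through; with varying matrix sizes it degrades gracefully instead of failing.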
Is your feature request related to a problem? Please describe.
Training and logging should be more concise and unified for every model.
Describe the solution you'd like
Use PyTorch Lightning.
Describe the bug
The coil dimension in the log-likelihood gradient computation is fixed to 1.
To Reproduce
Steps to reproduce the behavior:
Go to rim_utils
Expected behavior
This should be dynamically set by the argument of the function.
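The fix amounts to threading the coil axis through as an argument instead of hard-coding 1. A pure-Python sketch using nested lists as a stand-in for tensors (all names here are illustrative, not rim_utils code):

```python
def elementwise_add(a, b):
    """Add two equally shaped nested lists."""
    if isinstance(a, list):
        return [elementwise_add(x, y) for x, y in zip(a, b)]
    return a + b

def sum_over_axis(x, axis):
    """Sum a nested list along `axis` — a stand-in for a tensor reduction."""
    if axis == 0:
        out = x[0]
        for item in x[1:]:
            out = elementwise_add(out, item)
        return out
    return [sum_over_axis(sub, axis - 1) for sub in x]

def combine_gradient(per_coil_grad, coil_dim=1):
    """Reduce a per-coil gradient over the coil axis.

    The point of the fix: `coil_dim` is a parameter, not a hard-coded 1.
    """
    return sum_over_axis(per_coil_grad, coil_dim)
```

The same call then works whatever axis the coils live on, matching the layout the function's argument declares.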
Is your feature request related to a problem? Please describe.
The FFT type (for example here, here, and here) can be defined in multiple places, and it is not always clear how normalization works.
Describe the solution you'd like
fft_normalization and fft_dim should be leveraged better to distinguish between the different options.
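One way to make the distinction explicit is a single resolver that maps the config-level names onto the `norm` strings torch.fft / numpy.fft accept. The option names follow the FFT issue above; the mapping and function names are a sketch:

```python
# Config-level name -> backend `norm` value ("orthogonal" maps to the
# backends' "ortho"; "backward" and None pass through).
_NORM_MAP = {"orthogonal": "ortho", "backward": "backward", None: None}

def resolve_fft_norm(fft_normalization):
    """Translate a config-level normalization name to a backend `norm` value."""
    if fft_normalization not in _NORM_MAP:
        raise ValueError(f"Unknown fft_normalization: {fft_normalization!r}")
    return _NORM_MAP[fft_normalization]

def resolve_fft_dim(has_complex_dim):
    """Pick the two FFT axes: the last two, or the two before [..., 2]."""
    return (-3, -2) if has_complex_dim else (-2, -1)
```

If every FFT call site goes through these two helpers, the type, axes, and normalization can only be defined once.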