mcmasterai / radiology-and-ai

Training and applying AI models for segmenting and characterizing brain tumours given 3D neuroimaging data

License: MIT License

Languages: Jupyter Notebook 99.89%, Python 0.11%

radiology-and-ai's Introduction

Radiology-and-AI

Repository for the McMaster AI Society Radiology-and-AI projects team. This repository holds the Tumour Segmentation training, evaluation, and visualization tools we have created. It also holds the documentation and tutorials we have created to assist in using the tools and to further our understanding of radiology and AI research.

Installation

To use our tools, clone this repository and the MedicalZooPytorch repository (https://github.com/black0017/MedicalZooPytorch) into the same folder. Install the required packages for each by navigating into the folder and running pip install -r requirements.txt. Next, navigate into the Radiology-and-AI directory. Following the examples in the Examples folder and in the usage.ipynb notebook found in the Notebooks folder, you can use our training, evaluation, and visualization tools. When running our code, be sure to add the Radiology_and_AI and MedicalZooPytorch folders to your path using the sys.path.append('../Radiology_and_AI') and sys.path.append('../../MedicalZooPytorch') commands, as shown below (the exact paths depend on which directory you are running the code from and where you installed the libraries).
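For example, a notebook or script run from inside the Radiology-and-AI directory might start with the following path setup (adjust the relative paths to match where you cloned the repositories):

```python
import sys

# Make the project code and the MedicalZooPytorch dependency importable.
# The relative paths depend on where this script or notebook runs from.
sys.path.append('../Radiology_and_AI')
sys.path.append('../../MedicalZooPytorch')
```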

Features

Training and Evaluation Tools

Using our Training Tool and Evaluation Tool, you can easily train and evaluate a tumour segmentation model and apply many types of augmentations and normalizations. For usage information, refer to the usage.ipynb tutorial in the Notebooks folder or to the eval_example.py and training_example.py files in the Examples folder.

Visualization Tool

Using our Visualization Tool it is simple to generate informative graphics and to output the segmentations your model generates. The tool can produce 2D slices and 3D GIFs of your true and predicted segmentations, as well as output NIfTI files of the transformed input data and segmentations. Below are some examples of graphics generated by our Visualization Tool using models we trained with the Training Tool.

[Example visualization: BraTS20_Training_043]
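As a rough idea of what producing a 2D slice involves, a NIfTI volume and its segmentation can be overlaid with nibabel and matplotlib. This is a generic sketch, not the Visualization Tool's actual API, and the file names are placeholders:

```python
import nibabel as nib
import numpy as np
import matplotlib.pyplot as plt

# Illustrative only: load a BraTS-style scan and a segmentation, then show
# one axial slice with the segmentation labels overlaid on the image.
image = nib.load("BraTS20_Training_043_flair.nii.gz").get_fdata()
seg = nib.load("BraTS20_Training_043_seg.nii.gz").get_fdata()

z = image.shape[2] // 2  # middle axial slice
masked_seg = np.ma.masked_where(seg[:, :, z] == 0, seg[:, :, z])  # hide background

plt.imshow(image[:, :, z].T, cmap="gray", origin="lower")
plt.imshow(masked_seg.T, cmap="jet", alpha=0.5, origin="lower")  # overlay labels
plt.axis("off")
plt.show()
```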

Research Motivation

The following is information relating to the original motivation for taking on this project.

Proposed Models

Summary of ML models

The intended tasks include:

  • Automatic segmentation of brain metastasis/tumors and edema (swelling)
  • Prediction of tumor type (multiclass classification)
  • Prediction of immunotherapy response (binary classification)
  • Prediction of tumor pseudoprogression vs true tumor progression

The intended models for these tasks include deep convolutional neural network-based architectures, including deep residual networks. Transfer learning will be used. It is expected that this large dataset will result in superior performance compared to previous similar work.

Project Background

Between 9% and 50% of patients with malignant tumours develop brain metastases, and 30% of these patients do so before the primary cancer is known. For these patients, quickly and accurately identifying the primary tumour is essential for staging and treatment planning. Current methods for doing so are lacking, and previous work has demonstrated that artificial intelligence models show strong ability in differentiating between tumour types. Artificial intelligence can take into account features of tumours that are not perceptible to the human eye and that current diagnostic techniques do not consider, showing the technology's potential to give radiologists and clinicians the ability to more accurately predict the primary tumour in patients with brain metastasis.

Recent studies have also shown that artificial intelligence has the potential not only to determine primary tumour types but also to predict tumour response to radiotherapy. Identifying the patients most likely to benefit from costly and difficult treatments like immunotherapy would also be of great benefit to patients and radiologists alike.

Additionally, patients diagnosed with glioblastomas often face challenges in treatment because changes in tumour imaging can give an ambiguous picture of cancer progression versus improvement. A tool that can differentiate which patients are undergoing true tumour progression versus so-called "pseudoprogression" could be highly beneficial in the treatment of glioblastoma.

Research and Design Methods

The research team consists of investigators with backgrounds in machine learning and prior experience with similar projects. Additionally, the principal investigator (Dr. Fateme Salehi) has experience with radiomics and has recently presented her work at the American Society of Neuroradiology as well as the Radiological Society of North America (RSNA). Leveraging her radiomics expertise and working with a team of experts, she plans to complete the project and publish the results in the AJNR (American Journal of Neuroradiology). The data will be presented at upcoming related conferences as well. Once the machine learning models are developed, they will be applied to patients in the future as further images are acquired, to facilitate the evaluation of medical images. These models have the potential to impact patient treatment decisions early on, with regard to response to treatment or the lack thereof. The machine learning models will also be applied to patients presenting with new brain tumours of unknown primary origin, with the aim of determining the primary site of malignancy, thereby reducing the need for brain biopsy and/or further imaging and radiation exposure. We will submit an application for the prospective arm of the study before starting any future work; the current arm of the study will be purely retrospective.

radiology-and-ai's People

Contributors

dufaultc, matthewcso, shammo12345


radiology-and-ai's Issues

Research tumour segmentation papers

Report the:
- Model architecture used
- Data augmentation method
- The performance metrics of the model
- Data used, if available

Focus on papers from people trying to complete the BraTS challenge or something similar.

Implement Z score normalization

  • Make sure the images are normalized such that the backgrounds of the images are zero, and do the intensity normalization
    - Everything below a certain value after z-score normalization is treated as the "zero" value
  • Run this before training and calculate the z-score normalization statistics for each channel
  • Pass them in as a parameter to the collator (see the sketch below)
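A minimal sketch of what this issue describes, assuming per-channel z-score normalization with everything below a cutoff mapped to the background "zero" value (illustrative, not the repository's exact implementation; the cutoff of -2.0 is an arbitrary example):

```python
import numpy as np

def channel_stats(channel):
    """Compute the per-channel mean and std before training starts."""
    return channel.mean(), channel.std()

def zscore_normalize(channel, mean, std, background_cutoff=-2.0):
    """Z-score normalize with precomputed stats; values below the cutoff are
    treated as background and set to the 'zero' value."""
    normalized = (channel - mean) / (std + 1e-8)
    normalized[normalized < background_cutoff] = 0.0
    return normalized

# The (mean, std) pair computed for each channel would then be passed to the collator.
```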

Small improvements all over

  • Separate the augmentation methods from the collator, with a 50% chance of applying each augmentation (look at what others have done); see the sketch after this list
  • Add a parameter to the lightning module for the number of channels, which should set the input size accordingly
  • Add visualization of segmentations / improve the existing one
  • Try to alter the lightning module to allow for TPU training, which should allow for faster training
  • Review how outputs come from the MedicalZoo UNet implementation
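A minimal sketch of the first item, assuming each augmentation is a standalone function applied independently with 50% probability outside the collator (the function names and the specific augmentations are illustrative):

```python
import random
import numpy as np

def random_flip(image, label):
    """Flip the volume and its segmentation along a random spatial axis."""
    axis = random.choice([0, 1, 2])
    return np.flip(image, axis).copy(), np.flip(label, axis).copy()

def random_noise(image, label, sigma=0.05):
    """Add Gaussian noise to the image only; the segmentation is unchanged."""
    return image + np.random.normal(0.0, sigma, image.shape), label

AUGMENTATIONS = [random_flip, random_noise]

def augment(image, label, p=0.5):
    # Apply each augmentation independently with probability p, keeping the
    # augmentation logic separate from the collator.
    for aug in AUGMENTATIONS:
        if random.random() < p:
            image, label = aug(image, label)
    return image, label
```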

Create training function

- Use the existing training files to see what needs to be contained in the function
- Convert command-line arguments to method parameters (see the sketch below)
- Place it in the root of the project
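A hypothetical sketch of that conversion; the parameter names are illustrative, and the real signature is whatever the existing training scripts need:

```python
# Hypothetical outline: the values that used to be command-line arguments
# become ordinary keyword arguments of a reusable training function.
def run_training(data_dir, batch_size=2, max_epochs=10, learning_rate=1e-3):
    ...  # build datasets, model, and trainer from these parameters

# Callers (scripts or notebooks) then invoke it directly instead of via argparse:
run_training(data_dir="path/to/brats_data", max_epochs=2)
```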

Train model with new stuff

- Compare results with both z-score and Nyul normalization
- Use a power-law transformation (see the sketch below)
- Report the results
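A power-law (gamma) intensity transformation, as mentioned above, can be sketched as follows (illustrative, with an arbitrary example exponent):

```python
import numpy as np

def power_law_transform(image, gamma=0.8):
    """Rescale intensities to [0, 1], raise them to the power gamma, then
    restore the original range. gamma=0.8 is only an example value."""
    lo, hi = image.min(), image.max()
    scaled = (image - lo) / (hi - lo + 1e-8)
    return scaled ** gamma * (hi - lo) + lo
```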

Multiple upgrades to training scripts

  • Add a learning rate parameter to the lightning module (see the sketch after this list)
  • Add a wandb logger
  • Try to determine how to add hyperparameter search with PyTorch Lightning
  • Save the pre-created eval and train datasets
  • Update metric logging
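A minimal sketch of the first two items, assuming a PyTorch Lightning module and the standard WandbLogger (the class body is abbreviated and the project name is a placeholder):

```python
import torch
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

class SegmentationModule(pl.LightningModule):
    """Abbreviated sketch: only the learning-rate plumbing is shown."""
    def __init__(self, learning_rate=1e-3):
        super().__init__()
        self.save_hyperparameters()  # exposes self.hparams.learning_rate

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.learning_rate)

# Attach a Weights & Biases logger so metrics are tracked per run.
trainer = pl.Trainer(logger=WandbLogger(project="radiology-and-ai"), max_epochs=2)
```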

Use learning rate finder to determine optimal learning rate

- Use the learning_rate_finding notebook to see which learning rate gives the lowest loss after 2 epochs (a sketch of the underlying pattern is shown after this list)
- The notebook is under the training-improvements-47 branch
- The only changes you should have to make are in sections 3 and 4 of the notebook, and in the first cell change
  "!git clone https://github.com/McMasterAI/Radiology-and-AI.git" to "!git clone --branch training-improvements-47 https://github.com/McMasterAI/Radiology-and-AI.git"
- Find which accumulate_grad_batches amount finds the best learning rate (maybe try 10, 15, and 20)
- Check 50 different learning rates for each
- Each should give a different optimal learning rate
- Of these three optimal learning rates, run model training with each and see which gives the lowest loss value after 2 epochs of training
- You will need to create a wandb account to track the model training, and also make a shortcut to the macai datasets folder I shared with you in your base drive folder
- You don't need to push any changes to the repo; just report the best learning rate you found and anything else you think is important
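For reference, the Lightning learning rate finder follows roughly this pattern (Lightning 1.x API; the notebook on the training-improvements-47 branch wires it into this project's model, so details there may differ):

```python
import pytorch_lightning as pl

# `model` stands for the project's LightningModule, constructed as in the notebook.
# accumulate_grad_batches is one of the values the issue asks to compare (10, 15, 20).
trainer = pl.Trainer(accumulate_grad_batches=15, max_epochs=2)
lr_finder = trainer.tuner.lr_find(model, num_training=50)  # test 50 learning rates
print(lr_finder.suggestion())  # suggested learning rate for full training runs
```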
