neuronets / nobrainer
A framework for developing neural network models for 3D image processing.
License: Other
This is a straightforward bug.
In nobrainer/volume.py#L50, the code checks the shape of features again instead of labels.
This is preventing me from being able to use the augment
argument in the function nobrainer.dataset.get_dataset().
I would like to document two aspects of nobrainer: working with nobrainer on the Satori cluster, and the Kullback-Leibler divergence computation in losses.py.
conda config --prepend channels https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/
conda create --name wmlce-ea python=3.6
conda activate wmlce-ea
conda install tensorflow=2.1.0=gpu_py36_914.g4f6e601
conda install numpy # do the same for all libraries required: click, scikit-image
python ~/nibabel/setup.py install # install nibabel using its setup.py file
python ~/nobrainer/setup.py install # install nobrainer using its setup.py file
This environment can be used to run nobrainer commands on SLURM nodes.
sum(model.losses)
in nobrainer's losses.py script. As reported in this TF 2.1.0 issue (https://github.com/tensorflow/probability/issues/894), this way of computing the KL divergence with variational layers gives rise to symbolic tensors, because the _build_kl_divergence attribute of layers is set to False after the first forward pass. An alternative that makes things work with the current TF version in conda is to manually reset the _build_kl_divergence attribute of each layer to True after each forward pass. However, this is incompatible with multi-GPU training and distributed strategies.
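As a rough illustration of that workaround (the attribute name is taken from the linked issue and is not verified against the TFP source), something like the following could reset the flag after each forward pass:

import tensorflow as tf

def forward_with_kl(model, x):
    y = model(x, training=True)   # forward pass populates model.losses with per-layer KL terms
    kl = tf.add_n(model.losses)   # equivalent to sum(model.losses)
    # Reset the flag on each variational layer so the next pass rebuilds the KL term.
    for layer in model.layers:
        if hasattr(layer, "_build_kl_divergence"):
            layer._build_kl_divergence = True
    return y, kl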
Not sure why this is happening, because it passes on a Debian Buster machine with a fresh miniconda environment; it complains about numpy core multiarray umath not being found.
Hi,
I am trying to use the nobrainer pre-trained model for a binary classification problem instead of 3D segmentation. Essentially I want my model to learn brain information (structure, boundaries, etc.) from the pre-trained nobrainer model and then use that to further train a model for classification. I am using the ABIDE dataset and currently have binary labels for each volume. From what I understood after taking a look at the convert()
method in io.py, it takes an input of filepaths in pairs of two, and both of them need to be brain volumes. Is there a way for me to bypass giving the labels as 3D volumes and still use the brain volume conformation just on the features (the ABIDE volumes)?
As a little background, I am clipping the model at the end of the encoder path and attaching my own layers. I want to conform the variable-shaped dataset to fixed shapes of (128, 128, 128) blocks, pass the blocks through the nobrainer model (with the weights set from your pre-trained model), and then visualize the activation layers to understand what it is learning and whether the model is a viable option for my goal.
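A rough sketch of what that could look like (the clip point, the head layers, and the pooling choice are assumptions, not the actual nobrainer architecture):

import tensorflow as tf

base = tf.keras.models.load_model("brain-extraction-unet-128iso-model.h5", compile=False)

# Hypothetical: choose the layer that ends the encoder path after inspecting base.summary().
encoder_output = base.layers[-1].output  # placeholder; replace with the encoder-ending layer

x = tf.keras.layers.GlobalAveragePooling3D()(encoder_output)
x = tf.keras.layers.Dense(64, activation="relu")(x)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # one binary label per (128, 128, 128) block

classifier = tf.keras.Model(inputs=base.input, outputs=out)
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])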
According to the tensorflow style guide, tf.keras.layers should be used instead of tf.layers. We could also go with tf.nn, but we might as well use tf.keras.layers.
In the current implementation, the data loader (nobrainer.volume.get_dataset) is only meant for semantic segmentation tasks. It loads TFRecords files that contain a 3D volume of features and a 3D volume of labels. We should add data loaders for TFRecords files that might contain other types of data, like for classification / regression tasks and generative tasks.
For classification / regression, the features will be a 3D volume, and the label will be some scalar per volume. For generative tasks like with an autoencoder, the features and labels are the same (i.e., a 3D volume).
The plan is to allow loading of TFRecords that contain volumes of features and volumes of labels; volumes of features and scalar labels; volumes of features (and no labels). This will have to be broken up into at least three functions, but perhaps there can be one entrypoint to these.
In an effort to minimize code duplication, we should come up with some code that can be shared among the three readers and three writers. For the readers, the main difference will be the parse function (the function that reads the binary TFRecords format into a tensorflow tensor) and the function that preprocesses the data (i.e., standardizes the features and maybe binarizes the labels). The three readers will have different parsers and different preprocessing functions, but otherwise they should be the same.
Another idea is to prune the dataset loader function by removing the preprocessing part of it. We can explain to users that they will have to preprocess their data how they see fit (e.g., by unit norming, binarizing labels, etc.). This might be a good way to go actually, because it will make the dataset loader more generic and applicable to models that might expect features in a certain range, like [0, 1]. All of the other important dataset loading bits like shuffling, repeating, batching, and optionally prefetching can happen outside of the base loading function.
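Something like this sketch could capture the shared-reader idea (the function name and signature are illustrative, not an existing nobrainer function):

import tensorflow as tf

def base_tfrecord_dataset(file_pattern, parse_fn, preprocess_fn=None,
                          num_parallel_calls=tf.data.experimental.AUTOTUNE):
    """Shared reader: only parse_fn and preprocess_fn differ per task type."""
    files = tf.data.Dataset.list_files(file_pattern)
    dataset = tf.data.TFRecordDataset(files, compression_type="GZIP")
    dataset = dataset.map(parse_fn, num_parallel_calls=num_parallel_calls)
    if preprocess_fn is not None:
        dataset = dataset.map(preprocess_fn, num_parallel_calls=num_parallel_calls)
    # Shuffling, repeating, batching, and prefetching happen outside this function.
    return dataset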
Ideally, writers for these TFRecords files should also be created. The current writer (nobrainer.io.convert) converts volumes of features and volumes of labels to TFRecords. It is parallel-aware, so we can use that as a template for writing TFRecords files. For now, let's focus on the readers, and we can write a more generic writer along the way.
cc: @wazeerzulfikar
Use tf.contrib.distribute.MirroredStrategy, as TowerOptimizer is deprecated.
Use the experimental_distribute keyword argument in tf.estimator.RunConfig, and pass that RunConfig instance to the config keyword argument of the Estimator. The experimental_distribute keyword argument will set the distribution strategy for training and evaluation. It takes an instance of tf.contrib.distribute.DistributionStrategy and probably accepts an instance of tf.contrib.distribute.MirroredStrategy (which inherits from tf.contrib.distribute.DistributionStrategy).
The experimental_distribute argument is only available from tensorflow 1.11. This is the latest release at this point, but it's probably OK to pin such a recent version because we will distribute in a container.
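A minimal sketch of wiring a MirroredStrategy into an Estimator; this uses the train_distribute argument of RunConfig (the TF 1.x route I am sure of) rather than experimental_distribute, whose exact signature is not verified here, and my_model_fn is a hypothetical model function:

import tensorflow as tf

strategy = tf.contrib.distribute.MirroredStrategy()
config = tf.estimator.RunConfig(train_distribute=strategy)  # or experimental_distribute, per the discussion above
estimator = tf.estimator.Estimator(model_fn=my_model_fn, config=config)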
Interpolating while allowing multiple channels will allow us to interpolate the results of multi-class segmentation.
Not sure if this is already a known issue (I couldn't find it in the open issues).
I am trying to run the Colab notebook guide/train_binary_segmentation.ipynb, and when I run the block after Create Datasets
dataset_train = nobrainer.volume.get_dataset(
file_pattern='data/data-train_shard-*.tfrecords',
n_classes=n_classes,
batch_size=batch_size,
volume_shape=volume_shape,
block_shape=block_shape,
n_epochs=n_epochs,
augment=augment,
....
I get the following error:
NameError Traceback (most recent call last)
<ipython-input-8-09a4f66f01de> in <module>()
8 augment=augment,
9 shuffle_buffer_size=shuffle_buffer_size,
---> 10 num_parallel_calls=num_parallel_calls,
11 )
12
11 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
235 except Exception as e: # pylint:disable=broad-except
236 if hasattr(e, 'ag_error_metadata'):
--> 237 raise e.ag_error_metadata.to_exception(e)
238 else:
239 raise
NameError: in converted code:
relative to /usr/local/lib/python3.6/dist-packages/nobrainer:
volume.py:334 preprocess *
features, labels = _preprocess_binary(
volume.py:382 _preprocess_binary *
x = to_blocks(x, volume_shape=x.shape, block_shape=block_shape)
volume.py:257 to_blocks *
inter_shape = tuple(e for tup in zip(blocks, block_shape) for e in tup)
NameError: name 'tup' is not defined
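A possible workaround would be to rewrite the offending line in to_blocks with an explicit loop, since AutoGraph appears to lose track of the inner variable of the nested generator expression (a sketch, not a tested patch):

# In nobrainer/volume.py, to_blocks: replaces
# inter_shape = tuple(e for tup in zip(blocks, block_shape) for e in tup)
inter_shape = []
for n_blocks, block_size in zip(blocks, block_shape):
    inter_shape.extend((n_blocks, block_size))
inter_shape = tuple(inter_shape)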
Hi everyone,
I want to include a nobrainer
based brain extraction in BIDSonym to enable quality control after defacing.
To do so I include nobrainer
as follows in my neurodocker command
:
--run-bash "source activate bidsonym && git clone https://github.com/neuronets/nobrainer.git && cd nobrainer && python setup.py install && cd -" \
--run-bash "mkdir -p /opt/nobrainer/models && cd /opt/nobrainer/models && curl -LJO https://github.com/neuronets/nobrainer-models/releases/download/0.1/brain-extraction-unet-128iso-model.h5 && cd ~ " \
The build, etc. works fine, but when I try to call nobrainer as outlined in the README, I run into the following error:
File "/opt/miniconda-latest/envs/bidsonym/lib/python3.6/site-packages/nobrainer-0.0.1a3+16.g8b14cd7-py3.6.egg/nobrainer/cli/main.py", line 157, in predict
AttributeError: module 'skimage' has no attribute 'transform'
The respective line reads:
x = skimage.transform.resize(.....
Might this be related to an import problem? main.py imports skimage as import skimage, whereas it should perhaps be from skimage import transform, with the respective line changed to x = transform.resize.
I'm truly sorry if I missed something obvious here/the problem is at my end.
Cheers, Peer
nobrainer uses the Estimator API. the tensorflow team recommends using keras instead of estimators with the upcoming 2.0 release:
That said, if you are working on custom architectures, we suggest using tf.keras to build your models instead of Estimator
it seems like estimators will be left in the 2.0 code because many pre-built estimators exist. though it also looks like some functionality is being added to the estimator api, according to the 2.0 api documentation.
what should we do? stick with estimators, or move to keras per tensorflow's recommendations?
also, tf.layers will be removed, which we make heavy use of.
refer to https://github.com/tensorflow/tensorflow/blob/r2.1/tensorflow/python/keras/losses.py#L348-L406 for an example
from tensorflow.python.keras.losses import LossFunctionWrapper
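A minimal sketch of that pattern applied to a jaccard-style loss (the loss function below is illustrative, not nobrainer.losses.jaccard):

import tensorflow as tf
from tensorflow.python.keras.losses import LossFunctionWrapper

def jaccard_loss(y_true, y_pred, axis=(1, 2, 3, 4), epsilon=1e-7):
    intersection = tf.reduce_sum(y_true * y_pred, axis=axis)
    union = tf.reduce_sum(y_true + y_pred, axis=axis) - intersection
    return 1.0 - (intersection + epsilon) / (union + epsilon)

class Jaccard(LossFunctionWrapper):
    """Class-based loss; extra keyword arguments are forwarded to jaccard_loss."""
    def __init__(self, axis=(1, 2, 3, 4), name="jaccard"):
        super().__init__(jaccard_loss, name=name, axis=axis)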
consider implementing parts of https://arxiv.org/abs/1902.09383
the notebooks should be tested in CI. but we probably do not want to train models in CI, so the code below removes everything from a notebook at and after a model.fit call.
cd guide
jupyter-nbconvert --to script *.ipynb
# Remove anything from `model.fit` and below in scripts.
# Model fitting may crash travis.
sed -i -e '/model.fit/,$d; /get_ipython/d' *.py
for f in *.py; do echo print\(\"++ FINISHED $f ++\"\) >> $f; done
for script in *.py; do python $script; done
this script should be added to the travis config
we could release the trained model weights on osf, and add a guide for doing transfer learning.
Progressive training has shown promising results in GANs (https://arxiv.org/pdf/1710.10196.pdf). Let's try this out with a U-Net. The general idea would be to begin training on low-resolution scans and progressively add mirrored layers to the model. We would probably have to start close to the latent space of the U-Net, because that is where images are at their lowest resolution. Layers would be added on either side of the latent space, until we reach our target resolution.
Add optional regularization to networks. dropout is one form of regularization, but it would be useful to be able to choose dropout, l2, dropout+l2, none.
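A minimal sketch of how such an option could be threaded through model constructors (the helper and argument names are illustrative, not an existing nobrainer API):

import tensorflow as tf

def make_regularization(kind=None, l2=0.01, dropout_rate=0.25):
    """Map kind in {None, 'dropout', 'l2', 'dropout+l2'} to (kernel_regularizer, dropout_rate)."""
    kernel_regularizer = tf.keras.regularizers.l2(l2) if kind in ("l2", "dropout+l2") else None
    rate = dropout_rate if kind in ("dropout", "dropout+l2") else 0.0
    return kernel_regularizer, rate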
Hi,
Attempting transfer learning with the existing nobrainer model to extrapolate to our dataset, using nobrainer version 0.0.1a3.
Currently stuck on data conversion to TFRecord.
import nobrainer
csv_of_filepaths = './code/nobrainer_fs-SkullStripped_trainingdata.csv'
filepaths = nobrainer.io.read_csv(csv_of_filepaths)
train_paths = filepaths[:324]
evaluate_paths = filepaths[324:]
nobrainer.io.convert(
train_paths,
tfrecords_template='./data/data-train_shard-{shard:03d}.tfrecords',
volumes_per_shard=3,
num_parallel_calls=24)
The exception that follows is
Converting 324 pairs of files to 108 TFRecords.
0/108 [..............................] - ETA: 0s
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/admin/Anvil/opt/miniconda3/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/Users/admin/Anvil/opt/miniconda3/lib/python3.7/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/Users/admin/Anvil/opt/miniconda3/lib/python3.7/site-packages/nobrainer/io.py", line 162, in _convert
options = tf.io.TFRecordOptions(compression_type=tf.io.TFRecordCompressionType.GZIP)
AttributeError: module 'tensorflow_core._api.v2.io' has no attribute 'TFRecordCompressionType'
"""The above exception was the direct cause of the following exception:
AttributeError Traceback (most recent call last)
in
3 tfrecords_template='./data/data-train_shard-{shard:03d}.tfrecords',
4 volumes_per_shard=3,
----> 5 num_parallel_calls=24)
~/Anvil/opt/miniconda3/lib/python3.7/site-packages/nobrainer/io.py in convert(volume_filepaths, tfrecords_template, volumes_per_shard, to_ras, gzip_compressed, num_parallel_calls, verbose)
112 progbar.update(0)
113 with multiprocessing.Pool(num_parallel_calls) as p:
--> 114 for _ in p.imap(map_fn, volume_filepaths_shards, chunksize=2):
115 progbar.add(1)
116
~/Anvil/opt/miniconda3/lib/python3.7/multiprocessing/pool.py in <genexpr>(.0)
323 result._set_length
324 ))
--> 325 return (item for chunk in result for item in chunk)
326
327 def imap_unordered(self, func, iterable, chunksize=1):
~/Anvil/opt/miniconda3/lib/python3.7/multiprocessing/pool.py in next(self, timeout)
746 if success:
747 return value
--> 748 raise value
749
750 __next__ = next # XXX
AttributeError: module 'tensorflow_core._api.v2.io' has no attribute 'TFRecordCompressionType'
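For reference, a minimal sketch of the TF 2.x form of the failing call, assuming the goal is GZIP compression (in TF 2, the compression type is passed as a string rather than via TFRecordCompressionType):

import tensorflow as tf

options = tf.io.TFRecordOptions(compression_type="GZIP")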
When training highres3dnet, the Dice coefficient for brainmask in a 2-class model is sometimes NaN. This seems to happen when the Dice coefficient for background is 1.0 (meaning nothing was classified as brainmask).
We have to account for there being no labels in some blocks of volumes (or at least the model thinking there are no labels). So instead of NaN, should probably return 0.
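A rough sketch of a NaN-safe Dice along those lines (illustrative, not the current nobrainer metric):

import tensorflow as tf

def dice_safe(y_true, y_pred, axis=(1, 2, 3, 4), epsilon=1e-7):
    intersection = tf.reduce_sum(y_true * y_pred, axis=axis)
    denom = tf.reduce_sum(y_true, axis=axis) + tf.reduce_sum(y_pred, axis=axis)
    # If a block contains no brainmask in either y_true or y_pred, denom is 0;
    # the epsilon makes the result 0 instead of NaN.
    return (2.0 * intersection) / (denom + epsilon)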
https://www.tensorflow.org/hub:
TensorFlow Hub is a library for the publication, discovery, and consumption of reusable parts of machine learning models. A module is a self-contained piece of a TensorFlow graph, along with its weights and assets, that can be reused across different tasks in a process known as transfer learning.
add support for hub.Module, which will allow users to relatively easily conduct transfer learning with trained nobrainer models. for example, a model can be trained on a problem with a lot of data (e.g., brain extraction in MRI), then used as a module and trained on a similar problem with less data (e.g., brain extraction in contrast-enhanced MRI, tumor labeling, etc.).
this would be especially useful because there will probably not be too many pre-trained 3D models on tfhub, relative to 2D models.
I propose that we make all of our models subclasses of tf.estimator.Estimator. See 081d3d2 for an example. This will make training/predicting/evaluating arbitrary models easier.
hdf5 was supported, but support should be re-implemented. tfrecords are also very useful. many volume files + labels can be saved to one (compressed) tfrecords file. this can improve data read speeds, and this would be very useful when training on TPU, where data loading/preprocessing could be a bottleneck.
see this example.
the current implementation with tf.data.Dataset.from_generator could create issues with distributed training, as mentioned in the documentation of this method:
NOTE: The current implementation of Dataset.from_generator() uses tf.py_func and inherits the same constraints. In particular, it requires the Dataset- and Iterator-related operations to be placed on a device in the same process as the Python program that called Dataset.from_generator(). The body of generator will not be serialized in a GraphDef, and you should not use this method if you need to serialize your model and restore it in a different environment.
pip install --no-cache-dir nobrainer[gpu]
$ pip install --no-cache-dir nobrainer[gpu]
Collecting nobrainer[gpu]
Downloading https://files.pythonhosted.org/packages/ad/c0/91cd0c0e73d1ce8b50f3e9841a39d6f93417a72c73cd41b0f1b533a5f75e/nobrainer-0.0.1a3-py3-none-any.whl (51kB)
|████████████████████████████████| 61kB 4.1MB/s
Collecting numpy
Downloading https://files.pythonhosted.org/packages/41/38/b278d96baebc6a4818cfd9c0fb6f0e62013d5b87374bcf0f14a0e9b83ed5/numpy-1.18.1-cp38-cp38-manylinux1_x86_64.whl (20.6MB)
|████████████████████████████████| 20.6MB 10.0MB/s
Collecting nibabel
Downloading https://files.pythonhosted.org/packages/0d/fc/2efb2a7c5d117c761b72f6bc00bf057294e9ccdf0a737957c65fa89ecc6f/nibabel-3.0.0-py3-none-any.whl (3.3MB)
|████████████████████████████████| 3.3MB 42.0MB/s
Collecting click
Downloading https://files.pythonhosted.org/packages/fa/37/45185cb5abbc30d7257104c434fe0b07e5a195a6847506c074527aa599ec/Click-7.0-py2.py3-none-any.whl (81kB)
|████████████████████████████████| 81kB 44.5MB/s
ERROR: Could not find a version that satisfies the requirement tf-nightly-gpu; extra == "gpu" (from nobrainer[gpu]) (from versions: none)
ERROR: No matching distribution found for tf-nightly-gpu; extra == "gpu" (from nobrainer[gpu])
I tried to run nobrainer within the singularity container (nobrainer_latest-gpu.sif), but I am getting this error when trying to run the first nobrainer command:
This system supports the C.UTF-8 locale which is recommended.
You might be able to resolve your issue by exporting the
following environment variables:
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
Click discovered that you exported a UTF-8 locale
but the locale system could not pick up from it because
it does not exist. The exported locale is "en_US.UTF-8" but it
is not supported.
After exporting these two variables nobrainer is ok and working.
commands I run:
singularity run/shell/exec --nv -B path/to/bind nobrainer_latest-gpu.sif nobrainer nobrainer --help
The plan is to check off all these boxes by the end of july...
This roadmap also gives us a chance to consider the scope of nobrainer. in my opinion, nobrainer should provide models, losses, metrics, layers, and TFRecord I/O for magnetic resonance imaging in 3D. also guides for how to use nobrainer. i think the training wrapper in nobrainer should be removed, and instead, examples should explain how to train different models (e.g., for semantic segmentation, GANs, classifiers, etc.).
related to ohbm/hackathon2019#79
Methods like nobrainer.to_blocks can be converted from numpy to tensorflow and be used in a tf.data.Dataset pipeline.
Data loading will likely be the bottleneck when we eventually train on TPUs, and adhering to the tf.data.Dataset API as much as possible will reduce the cost of data transformations.
In addition, this pipeline should be able to handle a list of (features, labels) of HDF5.
See these notes on improving efficiency of dataset pipelines https://www.tensorflow.org/guide/performance/datasets.
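A sketch of what a TensorFlow-native to_blocks could look like (it assumes the volume shape is evenly divisible by block_shape; this is not the existing numpy implementation):

import tensorflow as tf

def to_blocks_tf(volume, block_shape):
    """Split a 3D volume into non-overlapping blocks of block_shape."""
    vs = tf.shape(volume)
    bx, by, bz = block_shape
    x = tf.reshape(volume, (vs[0] // bx, bx, vs[1] // by, by, vs[2] // bz, bz))
    x = tf.transpose(x, perm=(0, 2, 4, 1, 3, 5))
    return tf.reshape(x, (-1, bx, by, bz))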
To approximate something like "stylized imagenet", let's try to permute the grayscale data in the features volumes. Hopefully this will cause the models to rely more on the shape of the features and less on the texture.
For example, values of 0 can become 0.5 and 0.5 can become 0.2, etc. Permutation ensures there are no clashes.
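A rough sketch of one way to implement such a permutation on a features volume (illustrative only):

import tensorflow as tf

def permute_intensities(volume):
    """Remap each distinct intensity to another distinct intensity (bijective, so no clashes)."""
    flat = tf.reshape(volume, [-1])
    values, indices = tf.unique(flat)
    permuted = tf.random.shuffle(values)
    return tf.reshape(tf.gather(permuted, indices), tf.shape(volume))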
Create visualizations of weights at various layers given a sample.
"boundary loss for highly unbalanced segmentation" https://arxiv.org/abs/1812.07032
the current readme and docker builds are a bit outdated since the organization move.
the models directory is not available in the docker image and the readme does not require users to mount a models directory.
the docker builders are not automated currently and should be run on dockerhub and on releases of both nobrainer and nobrainer-models.
Add mri_convert --conform capability to the nobrainer convert command. Users should be able to conform volumes easily.
Add options to highres3dnet to:
The arguments to __init__ functions should be standardized across model estimators. There are common, required arguments, which should be positional arguments. There are also model-specific arguments, which should have a default of some recommended value or None. An example of an optional argument with a default, recommended value is the number of filters in meshnet. The default could be 21, and it should be documented that the authors of the meshnet paper used 21 filters for their brainmask model and 71 filters for the aparc+aseg model.
Example __init__ signature:
class Model(tf.estimator.Estimator):
    def __init__(self, required_arg1, required_arg2, optional_arg1="default_value",
                 optional_arg2=None):
        ...
We can also use **kwds for arbitrary keyword arguments. In this case, each __init__ should check that only valid keyword arguments were passed in.
When training on sub-volumes of a 3D volume (e.g., 64**3), there will usually be some volumes with empty labels (i.e., all zero). Is it valid to train on the empty labels to teach the network that that block should be labeled as background?
I wonder whether you could add singularity commands for quick demonstration in the Usage section. Currently it only has information for docker and command line.
Add nobrainer.dataset.get_dataset to create a tf.data.Dataset from a file pattern. This is in a continuing effort to make nobrainer more modular.
make nobrainer models compatible with tpus (https://www.tensorflow.org/guide/using_tpu)
a large part of this will involve optimizing data loading / preprocessing. if data cannot be loaded quickly enough, we will not make good use of the tpu. see https://www.tensorflow.org/performance/datasets_performance
ideas:
Loss functions to include:
Currently, meshnet cannot be trained on a full 256x256x256 volume on 11GB or 12GB of gpu memory. The current solution is to block the data into 128x128x128 cubes. These inputs are float32. Using float16 for inputs (and weights) will reduce the memory footprint and could allow training on full volumes.
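One route to this (an alternative to manually casting inputs and weights) would be Keras mixed precision, sketched below with the TF 2.1-era experimental API; compute runs in float16 while variables stay float32:

import tensorflow as tf

policy = tf.keras.mixed_precision.experimental.Policy("mixed_float16")
tf.keras.mixed_precision.experimental.set_policy(policy)
# Any model built after this point uses float16 activations, roughly halving activation memory.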
let's make sure we guide the user as to which runtime to attach, how to change runtime hardware, control memory usage, etc.
Command-line output of nobrainer info.
What were you trying to do?
run the dataset.get_dataset function with augment=True
What actually happened?
Returns the following error:
ValueError: in converted code:
<ipython-input-41-d9e16d8c12b3>:110 None *
lambda x, y: tf.cond(
/home1/06850/sbansal6/.local/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py:1389 cond_for_tf_v2
return cond(pred, true_fn=true_fn, false_fn=false_fn, strict=True, name=name)
/work/06850/sbansal6/maverick2/software-packages/nobrainer/nobrainer/volume.py:51 apply_random_transform_scalar_labels *
raise ValueError("labels must be rank 1")
ValueError: labels must be rank 1
Can you replicate the behavior? If yes, how?
This is roughly the function call that I am providing:
dataset_train = get_dataset("data_dev/data-train_*",
n_classes=self.n_classes,
batch_size=global_batch_size,
volume_shape=volume_shape,
scalar_label=True, augment=True,
block_shape=block_shape)
Bug:
The following code-block used for creating blocks returns an unbatched block of shape <MapDataset shapes: ((128, 128, 128, ), ()), types: (tf.float32, tf.float32)>
if scalar_label:
def _f(x, y):
x = to_blocks(x, block_shape)
n_blocks = x.shape[0]
y = tf.repeat(y, n_blocks)
return (x, y)
dataset = dataset.map(_f, num_parallel_calls=num_parallel_calls)
# This step is necessary because separating into blocks adds a dimension.
dataset = dataset.unbatch()
The tf.data.Dataset.unbatch function then also unbatches the second tf.float32 element (the scalar label), which is not what you would expect.
Possible Fix (the easiest/quickest one I could think of):
Put the augment block before separating into blocks
....
# Augment examples if requested.
if augment:
if not scalar_label:
dataset = dataset.map(
lambda x, y: tf.cond(
tf.random.uniform((1,)) > 0.5,
true_fn=lambda: apply_random_transform(x, y),
false_fn=lambda: (x, y),
),
num_parallel_calls=num_parallel_calls,
)
else:
dataset = dataset.map(
lambda x, y: tf.cond(
tf.random.uniform((1,)) > 0.5,
true_fn=lambda: apply_random_transform_scalar_labels(x, y),
false_fn=lambda: (x, y),
),
num_parallel_calls=num_parallel_calls,
)
# Separate into blocks, if requested.
if block_shape is not None:
if not scalar_label:
dataset = dataset.map(
lambda x, y: (to_blocks(x, block_shape), to_blocks(y, block_shape)),
num_parallel_calls=num_parallel_calls,
)
# This step is necessary because separating into blocks adds a dimension.
dataset = dataset.unbatch()
if scalar_label:
def _f(x, y):
....
This seems like a trivial fix, so I am not submitting a PR unless you have something better in mind that I could work on for this fix.
I am attempting to perform transfer learning on the existing nobrainer model weights by synthesizing a training dataset compiled with manual edits to the brain mask.
My first attempt made it through to epoch 4/5 before the kernel crashed. I have tried rerunning the code multiple times with smaller datasets and different learning rates, but I keep getting the same error message:
Train for 1296 steps, validate for 80 steps
2019-12-06 10:47:11.520055: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:150] Filling up shuffle buffer (this may take a while): 1 of 10
2019-12-06 10:47:12.065779: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:199] Shuffle buffer filled.
Killed: 9
My code is below, any help/suggestions would be appreciated.
# Transfer Learning to ADS FreeSurfer Brain Masks
import nobrainer
# initialize
csv_of_filepaths = './nobrainer/code/nobrainer_fs-SkullStripped_trainingdata.csv'
filepaths = nobrainer.io.read_csv(csv_of_filepaths)
# split into train and test
train_paths = filepaths[:324]
evaluate_paths = filepaths[324:]
# convert images to tensorflow records
nobrainer.io.convert(
train_paths,
tfrecords_template='./nobrainer/processed/data-train_shard-{shard:03d}.tfrecords',
volumes_per_shard=3,
num_parallel_calls=24)
nobrainer.io.convert(
evaluate_paths,
tfrecords_template='./nobrainer/processed/data-evaluate_shard-{shard:03d}.tfrecords',
volumes_per_shard=3,
num_parallel_calls=24)
#### preallocation for train/evaluate
n_classes = 1
batch_size = 2
volume_shape = (256, 256, 256)
block_shape = (128, 128, 128)
n_epochs = None
augment = False
shuffle_buffer_size = 10
num_parallel_calls = 24
# train object
dataset_train = nobrainer.volume.get_dataset(
file_pattern='./nobrainer/processed/data-train_shard-*.tfrecords',
n_classes=n_classes,
batch_size=batch_size,
volume_shape=volume_shape,
block_shape=block_shape,
n_epochs=n_epochs,
augment=augment,
shuffle_buffer_size=shuffle_buffer_size,
num_parallel_calls=num_parallel_calls,
)
# evaluate object
dataset_evaluate = nobrainer.volume.get_dataset(
file_pattern='./nobrainer/processed/data-evaluate_shard-*.tfrecords',
n_classes=n_classes,
batch_size=batch_size,
volume_shape=volume_shape,
block_shape=block_shape,
n_epochs=1,
augment=False,
shuffle_buffer_size=None,
num_parallel_calls=1,
)
##################################################
# TRANSFER LEARNING
### get existing model for transfer learning
##################################################
import tensorflow as tf
model_path = tf.keras.utils.get_file(
fname='brain-extraction-unet-128iso-model.h5',
origin='https://github.com/neuronets/nobrainer-models/releases/download/0.1/brain-extraction-unet-128iso-model.h5')
model = tf.keras.models.load_model(model_path, compile=False)
model.summary()
# set L2 regularization for layers
for layer in model.layers:
layer.kernel_regularizer = tf.keras.regularizers.l2(0.01)
# set learning rate
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-05)
# compile model
model.compile(
optimizer=optimizer,
loss=nobrainer.losses.jaccard,
metrics=[nobrainer.metrics.dice],
)
# compute steps given sizes
steps_per_epoch = nobrainer.volume.get_steps_per_epoch(
n_volumes=len(train_paths),
volume_shape=volume_shape,
block_shape=block_shape,
batch_size=batch_size)
validation_steps = nobrainer.volume.get_steps_per_epoch(
n_volumes=len(evaluate_paths),
volume_shape=volume_shape,
block_shape=block_shape,
batch_size=batch_size)
## TRAIN MODEL!!!
model.fit(
dataset_train,
epochs=1,
verbose=1,
steps_per_epoch=steps_per_epoch,
validation_data=dataset_evaluate,
validation_steps=validation_steps,
use_multiprocessing=True,
workers=24)
model.save('./nobrainer/nobrainer-models/ads-transfer-learning_manual-edits_brain-extraction-unet-128iso-model.h5',
save_format='h5')
model.save_weights('./nobrainer/nobrainer-models/ads-transfer-learning_manual-edits_brain-extraction-unet-128iso-weights.h5',
save_format='h5')
Create tf.keras layer classes for the dropout methods used in the "knowing what you know" manuscript.