
ann4brains's Introduction

ann4brains

ann4brains implements filters for adjacency matrices (representing networks) that can be used within a deep neural network. These filters are designed specifically for brain networks (i.e., connectomes), but they can be used with adjacency matrices representing networks of any kind.

If your dataset is raw connectivity data (e.g., diffusion or functional MRI volumes), you will first need to extract brain networks (i.e., 2D adjacency matrices) from this data using other software (e.g., the Connectome Computation System, https://github.com/zuoxinian/CCS, or the HCP Connectome Toolbox, http://www.humanconnectome.org/software/).

ann4brains is a Python wrapper for Caffe that implements the Edge-to-Edge and Edge-to-Node filters as described in:

Kawahara, J., Brown, C. J., Miller, S. P., Booth, B. G., Chau, V., Grunau, R. E., Zwicker, J. G., and Hamarneh, G. (2017). BrainNetCNN: Convolutional neural networks for brain networks; towards predicting neurodevelopment. NeuroImage, 146(July), 1038–1049. [DOI] [URL] [PDF]


Other Implementations

If you're looking for an implementation using a different library, BrainNetCNN has been implemented by other groups (thank you!) in PyTorch and Keras.


Hello World

Here's a fully working, minimal "hello world" example:

import os, sys
import numpy as np
from scipy.stats.stats import pearsonr
import caffe
sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), '..'))) # To import ann4brains.
from ann4brains.synthetic.injury import ConnectomeInjury
from ann4brains.nets import BrainNetCNN
np.random.seed(seed=333) # To reproduce results.

injury = ConnectomeInjury() # Generate train/test synthetic data.
x_train, y_train = injury.generate_injury()
x_test, y_test = injury.generate_injury()
x_valid, y_valid = injury.generate_injury()

hello_arch = [ # We specify the architecture like this.
    ['e2n', {'n_filters': 16,  # e2n layer with 16 filters.
             'kernel_h': x_train.shape[2], 
             'kernel_w': x_train.shape[3]}], # Same dimensions as spatial inputs.
    ['dropout', {'dropout_ratio': 0.5}], # Dropout at 0.5
    ['relu',    {'negative_slope': 0.33}], # For leaky-ReLU
    ['fc',      {'n_filters': 30}],  # Fully connected (n2g) layer with 30 filters.
    ['relu',    {'negative_slope': 0.33}],
    ['out',     {'n_filters': 1}]]  # Output layer with 1 node as output.

hello_net = BrainNetCNN('hello_world', hello_arch) # Create BrainNetCNN model
hello_net.fit(x_train, y_train[:,0], x_valid, y_valid[:,0]) # Train (regress only on class 0)
preds = hello_net.predict(x_test) # Predict labels of test data
print("Correlation:", pearsonr(preds, y_test[:,0])[0]) # ('Correlation:', 0.61187756)

More examples can be found in this extended notebook.


Installation

ann4brains uses the following dependencies:

  • numpy
  • scipy
  • h5py
  • matplotlib
  • cPickle
  • Caffe

You must already have Caffe and pycaffe working on your system,

i.e., in Python, you should be able to run,

import caffe

without errors.

To use ann4brains, download it, and try to run the helloworld example:

git clone https://github.com/jeremykawahara/ann4brains.git
cd ann4brains/examples
python helloworld.py

This example will create synthetic data, train a small neural network, and should output a correlation of:

('Correlation:', 0.61187756)

More examples are in this extended notebook.


Working directly with Caffe

If you prefer to work directly with Caffe and not use this wrapper, you can modify the example prototxt files that implement the E2E and E2N filters. Or view the Python files that generate the E2E and E2N layers.
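
For example, here is a minimal sketch of generating a prototxt with one E2E and one E2N layer using caffe.NetSpec and the layer-generating functions. The import path ann4brains.layers and the toy sizes are assumptions based on the code quoted in the issues below, not a documented API.

import caffe
from caffe import layers as L
from ann4brains.layers import e2e_conv, e2n_conv  # Assumed module path.

d = 90  # Number of nodes in the connectome (assumption).
n = caffe.NetSpec()
n.data = L.DummyData(shape=[dict(dim=[1, 1, d, d])])  # 1 sample, 1 channel, d x d matrix.
n.e2e = e2e_conv(n.data, 32, d, d)  # Edge-to-Edge layer with 32 filters.
n.e2n = e2n_conv(n.e2e, 64, d, d)   # Edge-to-Node layer with 64 filters.
print(n.to_proto())                 # Print the generated prototxt.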


What are these filters?

I wrote a short blog post informally describing these filters and this work, which you may find helpful. But here are the key ideas (in the form of a gif):

Edge-to-Edge

edge to edge filter

(Left) The input. (Yellow cross) The filter. (Right) The output response.

The Edge-to-Edge filter computes a weighted response over neighbouring edges for a given edge.
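
To make this concrete, here is a toy numpy sketch of a single E2E filter applied to one edge. The sizes and weights are made up for illustration; this is not the Caffe implementation.

import numpy as np

d = 5                     # Number of nodes (toy size).
A = np.random.rand(d, d)  # Adjacency matrix (the input).
r = np.random.rand(d)     # Learned row weights (one per node).
c = np.random.rand(d)     # Learned column weights (one per node).

i, j = 1, 3               # The edge we compute a response for.
# The E2E response at edge (i, j) combines all edges in row i and column j,
# i.e., the edges that share a node with edge (i, j).
response = np.dot(r, A[i, :]) + np.dot(c, A[:, j])
print(response)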

Edge-to-Node

edge to node filter

(Left) The input. (Yellow cross) The filter. (Right) The output response.

The Edge-to-Node filter computes a weighted response over neighbouring edges for a given node.
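
A corresponding toy numpy sketch of a single E2N filter applied to one node (again illustrative, not the Caffe code):

import numpy as np

d = 5                     # Number of nodes (toy size).
A = np.random.rand(d, d)  # Adjacency matrix (the input).
r = np.random.rand(d)     # Learned weights, one per edge incident to the node.

i = 2                     # The node we compute a response for.
# The E2N response at node i is a weighted sum over all edges connected to
# node i (for a symmetric matrix, row A[i, :] already contains them all).
response = np.dot(r, A[i, :])
print(response)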

More information

Poster (click to view high resolution)

brainnet poster

Slides (click to view)

brainnet slides

ann4brains's People

Contributors

c9brown, hamarneh, jeremykawahara, oldmerkum


ann4brains's Issues

Asymmetric Adjacency Matrices

Note that if we assume the input to the E2N filter is a symmetric matrix, we can drop either the term containing the row weights, r, or the term containing the column weights, c, since the incoming and outgoing weights on each edge will be equal. In all experiments in this paper, we used E2N filters with only the |Ω| row weights in r because we did not empirically find any clear advantage in learning separate weights for both incoming and outgoing edges when training over symmetric connectome data.

Does this apply to the ann4brains library and if so could the authors lay out the necessary changes to the E2N filters to allow for asymmetric adjacency matrices?
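
Following the quoted passage, one way to sketch an E2N response that keeps both weight vectors for an asymmetric matrix (a toy numpy illustration, not a patch to the library's E2N filter) would be:

import numpy as np

d = 5
A = np.random.rand(d, d)  # Asymmetric adjacency matrix.
r = np.random.rand(d)     # Weights for outgoing edges (row of node i).
c = np.random.rand(d)     # Weights for incoming edges (column of node i).

i = 2
# With an asymmetric A, row i (outgoing) and column i (incoming) differ,
# so both terms from the paper are kept instead of dropping one.
response = np.dot(r, A[i, :]) + np.dot(c, A[:, i])
print(response)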

Max pooling layer

Hello, I am trying to implement a max pooling layer and a softmax layer into the code. I'm looking at nets.py:

def create_architecture(self, mode, hdf5_data):
        """Returns the architecture (i.e., caffe prototxt) of the model.

        Jer: One day this should probably be written to be more general.
        """

    arch = self.arch
    pars = self.pars
    n = caffe.NetSpec()
    
    if mode == 'deploy':
        n.data = L.DummyData(shape=[dict(dim=pars['deploy_dims'])])
    elif mode == 'train':
        n.data, n.label = L.HDF5Data(batch_size=pars['train_batch_size'], source=hdf5_data, ntop=pars['ntop'])
    else:  # Test.
        n.data, n.label = L.HDF5Data(batch_size=pars['test_batch_size'], source=hdf5_data, ntop=pars['ntop'])
    
    # print(n.to_proto())
    in_layer = n.data
    
    for layer in arch:
        layer_type, vals = layer
    
        if layer_type == 'e2e':
            in_layer = n.e2e = e2e_conv(in_layer, vals['n_filters'], vals['kernel_h'], vals['kernel_w'])
        elif layer_type == 'e2n':
            in_layer = n.e2n = e2n_conv(in_layer, vals['n_filters'], vals['kernel_h'], vals['kernel_w'])
        elif layer_type == 'fc':
            in_layer = n.fc = full_connect(in_layer, vals['n_filters'])
        elif layer_type == 'out':
            n.out = full_connect(in_layer, vals['n_filters'])
            # Rename to user specified unique layer name.
            # n.__setattr__('out', n.new_layer)
    
        elif layer_type == 'dropout':
            in_layer = n.dropout = L.Dropout(in_layer, in_place=True,
                                             dropout_param=dict(dropout_ratio=vals['dropout_ratio']))
        elif layer_type == 'relu':
            in_layer = n.relu = L.ReLU(in_layer, in_place=True,
                                       relu_param=dict(negative_slope=vals['negative_slope']))
        elif layer_type == 'pool':
            in_layer  = L.Pooling(in_layer, kernel_size=3, stride=2, in_place=True, pool=P.Pooling.MAX)
        elif layer_type == 'softmax':
            in_layer = L.SoftmaxWithLoss(in_layer, 1)
        else:
            raise ValueError('Unknown layer type: ' + str(layer_type))

        # ~ end for.

    if mode != 'deploy':
        if self.pars['loss'] == 'EuclideanLoss':
            n.loss = L.EuclideanLoss(n.out, n.label)
        else:
            ValueError("Only 'EuclideanLoss' currently implemented for pars['loss']!")
    return n

In particular:

        elif layer_type == 'pool':
            in_layer  = L.Pooling(in_layer, kernel_size=3, stride=2, in_place=True, pool=P.Pooling.MAX)
        elif layer_type == 'softmax':
            in_layer = L.SoftmaxWithLoss(in_layer, 1)

I know this is wrong, but how would I properly implement these layers here? The documentation of PyCaffe is quite poor.
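
One possible direction (a sketch under assumptions, not the authors' answer): give the pooling layer its own top blob instead of using in_place, and move the softmax loss next to the EuclideanLoss code, since SoftmaxWithLoss pairs the final output with the label blob. The 'SoftmaxWithLoss' option name below is hypothetical.

        elif layer_type == 'pool':
            # A non in-place pooling layer with its own top blob.
            in_layer = n.pool = L.Pooling(in_layer, kernel_size=3, stride=2,
                                          pool=P.Pooling.MAX)

and, in the block that builds the loss:

    if mode != 'deploy':
        if self.pars['loss'] == 'EuclideanLoss':
            n.loss = L.EuclideanLoss(n.out, n.label)
        elif self.pars['loss'] == 'SoftmaxWithLoss':  # Hypothetical option name.
            n.loss = L.SoftmaxWithLoss(n.out, n.label)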

N2G Layer is missing

Hello guys,

I am replicating your paper using connectivity matrices (connectomes) to see how this CNN performs on a similar problem.
I have realised that your code does not include an implementation of the Node-to-Graph (N2G) layer type. I wonder why there is no implementation of it in layers.py. In your paper this layer is used before the fully connected layers.

Could you tell me how you performed this operation without an actual implementation of the layer, or is there some code missing?

Thanks a lot and kind regards,

Andrés

Extend this to 3D...?

I want to extend this network architecture to the case of molecule bond information.

For example, H2O consists of two H's (H1, H2) and one O (O1).

Making an adjacency matrix out of this, we have a 2x2x2 matrix where H and O are the X/Y axes and Z is the quantity of each element.

I think the edge-to-edge filter in this case would be
Conv2D(1xdxd) + Conv2D(dx1xd) + Conv2D(dxdx1)

and the edge-to-node filter would be

a concatenation of Conv1D over neighboring edges (that diagonal).
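
For what it's worth, here is a toy numpy sketch of that cross-shaped 3D filter idea (all names, shapes, and weights are assumptions for illustration, not part of ann4brains):

import numpy as np

d = 8                        # Nodes per axis (toy size).
X = np.random.rand(d, d, d)  # A toy 3D adjacency tensor.

# One cross-shaped 3D filter, decomposed into three 1-D weight vectors
# (analogous to combining the row and column of an edge in the 2D E2E filter).
w_x = np.random.rand(d)
w_y = np.random.rand(d)
w_z = np.random.rand(d)

# Response at (i, j, k) = sum_a w_x[a]*X[a,j,k]
#                       + sum_b w_y[b]*X[i,b,k]
#                       + sum_c w_z[c]*X[i,j,c]
resp = (np.einsum('a,ajk->jk', w_x, X)[None, :, :]    # Broadcast over i.
        + np.einsum('b,ibk->ik', w_y, X)[:, None, :]  # Broadcast over j.
        + np.einsum('c,ijc->ij', w_z, X)[:, :, None]) # Broadcast over k.
print(resp.shape)  # (d, d, d)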

Check failed: error == cudaSuccess (33 vs. 0) invalid resource handle

(this issue was reported via email)

When running the helloworld.py example, you get an error that says:

I0530 14:50:14.842402 7533 solver.cpp:56] Solver scaffolding done.
F0530 14:50:14.842481 7533 benchmark.cpp:30] Check failed: error == cudaSuccess (33 vs. 0) invalid resource handle
*** Check failure stack trace: ***
Abort

This error occurs using the latest version of caffe:

import caffe
caffe.__version__
'1.0.0'

when it executes the "solver.step(1)" line in the optimizers.py script.

However, you can train the model from the command line directly (run from the "examples" directory):

caffe train -solver proto/hello_world_solver.prototxt

So it seems there's an issue with pycaffe on this latest version...

However, when you use an older version of caffe,
https://github.com/BVLC/caffe/releases/tag/rc3

import caffe
caffe.__version__
'1.0.0-rc3'

the helloworld.py example works fine...


So I see two workarounds right now: 1) use an older version of caffe; or 2) ignore the optimizer in the ann4brains wrapper and train from the command line (the rest of the code can still be used to build the prototxt files).

If neither of these options works well for you (or if you do find a solution to this pycaffe issue), please let me know.

Question consultation

Dear authors:

Recently I have been looking at your project and find it very interesting! I also have some questions. First, regarding the HCP Connectome Toolbox: I had a look at the site and I am confused about how to use this software. I found a tutorial at https://www.humanconnectome.org/tutorials; is that the correct introduction to the usage of this software, and how can I obtain the adjacency matrices using it?

After reading the introduction PPT for your project, I have another question: how does the designed network implement the computation of the Edge-to-Edge and Edge-to-Node operations? Thanks a lot! I am looking forward to your reply. Sincerely!

Best regards.
