
ann4brains's People

Contributors

c9brown, hamarneh, jeremykawahara, oldmerkum


ann4brains's Issues

N2G Layer is missing

Hello guys,

I am replicating your paper, using connectivity matrices (connectomes) to see how this CNN performs on a similar problem.
I have realised that your code does not implement the Node-to-Graph (N2G) layer type, and I wonder why there is no implementation in layers.py. In your paper this layer is used before the fully connected layers.

Could you tell me how you performed this operation without an actual implementation of the layer, or is there some code missing?

Thanks a lot and kind regards,

Andrés
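
(For reference: the paper describes the N2G filter as reducing the d node responses to a single graph-level response with one learned weight per node, i.e. essentially a fully connected layer over the node dimension, so the existing 'fc' layer may be playing that role. That is a guess, not the authors' answer. A minimal NumPy sketch of the operation, not the library's code:)

    import numpy as np

    def n2g_response(a, w):
        # Node-to-Graph: collapse a vector of d node responses 'a' into a
        # single graph-level response using one learned weight per node.
        return np.dot(w, a)   # scalar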

Asymmetric Adjacency Matrices

Note that if we assume the input to the E2N filter is a symmetric matrix, we can drop either the term containing the row weights, r, or the term containing the column weights, c, since the incoming and outgoing weights on each edge will be equal. In all experiments in this paper, we used E2N filters with only the |Ω| row weights in r because we did not empirically find any clear advantage in learning separate weights for both incoming and outgoing edges when training over symmetric connectome data.

Does this apply to the ann4brains library, and if so, could the authors lay out the necessary changes to the E2N filters to allow for asymmetric adjacency matrices?
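
(For what it's worth, here is a minimal pycaffe NetSpec sketch, not part of the library and with a made-up name e2n_conv_asym, of an E2N filter that keeps both the row term r and the column term c, which is what an asymmetric adjacency matrix would need:)

    from caffe import layers as L, params as P

    def e2n_conv_asym(bottom, num_output, kernel_h, kernel_w):
        # 1 x d filter: for node i, the weighted sum over its row, sum_j r_j * A[i, j].
        # Output blob shape: (N, num_output, d, 1).
        row = L.Convolution(bottom, num_output=num_output,
                            kernel_h=1, kernel_w=kernel_w,
                            weight_filler=dict(type='xavier'))
        # d x 1 filter: for node j, the weighted sum over its column, sum_i c_i * A[i, j].
        # Output blob shape: (N, num_output, 1, d).
        col = L.Convolution(bottom, num_output=num_output,
                            kernel_h=kernel_h, kernel_w=1,
                            weight_filler=dict(type='xavier'))
        # Flatten both to (N, num_output * d) so node i of the row term lines up
        # with node i of the column term, then add them element-wise.
        row_flat = L.Flatten(row)
        col_flat = L.Flatten(col)
        return L.Eltwise(row_flat, col_flat, operation=P.Eltwise.SUM)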

Question consultation

Dear authors,

Recently I have been looking at your project and find it very interesting, and I have a few questions. First, regarding the HCP Connectome Toolbox: I have looked at the site, but I am confused about how to use this software. I found a tutorial at https://www.humanconnectome.org/tutorials; is that the correct introduction to using it, and how do I obtain the adjacency matrices with it? Second, after reading the introductory slides for your project, I have another question: how does the designed network implement the Edge-to-Edge and Edge-to-Node operations? Thanks a lot! I am looking forward to your reply.

Best regards.
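
(For reference, a minimal NumPy sketch of what a single Edge-to-Edge and Edge-to-Node filter compute on a d x d connectome A, with learned weight vectors r and c. This is the conceptual operation only, not the library's Caffe implementation, which builds these from 1 x d and d x 1 convolutions:)

    import numpy as np

    def e2e_response(A, r, c):
        # Edge-to-Edge: for each edge (i, j), combine the weighted sum of row i
        # and the weighted sum of column j (the "cross" through that edge).
        row_term = A @ r          # entry i: sum_k r[k] * A[i, k]
        col_term = c @ A          # entry j: sum_k c[k] * A[k, j]
        return row_term[:, None] + col_term[None, :]   # shape (d, d)

    def e2n_response(A, r):
        # Edge-to-Node: for each node i, a weighted sum over its row of edges.
        return A @ r              # shape (d,)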

Check failed: error == cudaSuccess (33 vs. 0) invalid resource handle

(this issue was reported via email)

When running the helloworld.py example, you get an error that says:

I0530 14:50:14.842402 7533 solver.cpp:56] Solver scaffolding done.
F0530 14:50:14.842481 7533 benchmark.cpp:30] Check failed: error == cudaSuccess (33 vs. 0) invalid resource handle
*** Check failure stack trace: ***
Abort

This error occurs using the latest version of caffe:

import caffe
caffe.__version__
'1.0.0'

when it executes the "solver.step(1)" line in the optimizers.py script.

However, you can train the model directly from the command line (run from the "examples" directory):

caffe train -solver proto/hello_world_solver.prototxt

So it seems there's an issue with pycaffe on this latest version...

However, when you use an older version of caffe,
https://github.com/BVLC/caffe/releases/tag/rc3

import caffe
caffe.__version__
'1.0.0-rc3'

the helloworld.py example works fine...


So I see two workarounds right now: 1) use an older version of caffe; or 2) ignore the optimizer in the ann4brains wrapper and train from the command line (the rest of the code can still be used to build the prototxt files).
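
(A rough sketch of workaround 2, assuming a BrainNetCNN-style object net as in helloworld.py; only create_architecture is taken from the library, and the file names are placeholders:)

    # Build the train-phase NetSpec with ann4brains, write out the prototxt,
    # then hand training over to the caffe binary instead of pycaffe's solver.step().
    net_spec = net.create_architecture('train', 'proto/hello_world_train_data.txt')
    with open('proto/hello_world_train.prototxt', 'w') as f:
        f.write(str(net_spec.to_proto()))
    # Then, from the "examples" directory:
    #   caffe train -solver proto/hello_world_solver.prototxt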

If neither of these options works well for you (or if you find a solution to this pycaffe issue), please let me know.

Max pooling layer

Hello, I am trying to add a max pooling layer and a softmax layer to the code. I'm looking at nets.py:

def create_architecture(self, mode, hdf5_data):
    """Returns the architecture (i.e., caffe prototxt) of the model.

    Jer: One day this should probably be written to be more general.
    """
    arch = self.arch
    pars = self.pars
    n = caffe.NetSpec()

    if mode == 'deploy':
        n.data = L.DummyData(shape=[dict(dim=pars['deploy_dims'])])
    elif mode == 'train':
        n.data, n.label = L.HDF5Data(batch_size=pars['train_batch_size'], source=hdf5_data, ntop=pars['ntop'])
    else:  # Test.
        n.data, n.label = L.HDF5Data(batch_size=pars['test_batch_size'], source=hdf5_data, ntop=pars['ntop'])

    # print(n.to_proto())
    in_layer = n.data

    for layer in arch:
        layer_type, vals = layer

        if layer_type == 'e2e':
            in_layer = n.e2e = e2e_conv(in_layer, vals['n_filters'], vals['kernel_h'], vals['kernel_w'])
        elif layer_type == 'e2n':
            in_layer = n.e2n = e2n_conv(in_layer, vals['n_filters'], vals['kernel_h'], vals['kernel_w'])
        elif layer_type == 'fc':
            in_layer = n.fc = full_connect(in_layer, vals['n_filters'])
        elif layer_type == 'out':
            n.out = full_connect(in_layer, vals['n_filters'])
            # Rename to user specified unique layer name.
            # n.__setattr__('out', n.new_layer)
        elif layer_type == 'dropout':
            in_layer = n.dropout = L.Dropout(in_layer, in_place=True,
                                             dropout_param=dict(dropout_ratio=vals['dropout_ratio']))
        elif layer_type == 'relu':
            in_layer = n.relu = L.ReLU(in_layer, in_place=True,
                                       relu_param=dict(negative_slope=vals['negative_slope']))
        elif layer_type == 'pool':
            in_layer = L.Pooling(in_layer, kernel_size=3, stride=2, in_place=True, pool=P.Pooling.MAX)
        elif layer_type == 'softmax':
            in_layer = L.SoftmaxWithLoss(in_layer, 1)
        else:
            raise ValueError('Unknown layer type: ' + str(layer_type))

        # ~ end for.

    if mode != 'deploy':
        if self.pars['loss'] == 'EuclideanLoss':
            n.loss = L.EuclideanLoss(n.out, n.label)
        else:
            ValueError("Only 'EuclideanLoss' currently implemented for pars['loss']!")
    return n

In particular:

        elif layer_type == 'pool':
            in_layer  = L.Pooling(in_layer, kernel_size=3, stride=2, in_place=True, pool=P.Pooling.MAX)
        elif layer_type == 'softmax':
            in_layer = L.SoftmaxWithLoss(in_layer, 1)

I know this is wrong, but how would I properly implement these layers here? The documentation of PyCaffe is quite poor.
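
(Not an authoritative fix, but here is one way these two branches could be written, given how the rest of create_architecture is structured; the kernel_size/stride values and the new 'SoftmaxWithLoss' option for pars['loss'] are assumptions:)

    # Inside the for-loop: give the pooled blob its own name and drop
    # in_place, since pooling changes the blob shape and cannot run in place.
    elif layer_type == 'pool':
        in_layer = n.pool = L.Pooling(in_layer, kernel_size=3, stride=2,
                                      pool=P.Pooling.MAX)

    # SoftmaxWithLoss is a loss layer, so it needs the label blob and belongs
    # with EuclideanLoss after the loop, not inside it:
    if mode != 'deploy':
        if self.pars['loss'] == 'EuclideanLoss':
            n.loss = L.EuclideanLoss(n.out, n.label)
        elif self.pars['loss'] == 'SoftmaxWithLoss':  # assumed new option
            n.loss = L.SoftmaxWithLoss(n.out, n.label)
        else:
            raise ValueError("Unknown pars['loss']: " + str(self.pars['loss']))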

Extend this to 3D...?

I want to extend this network architecture to the case of molecular bond information.

For example, H2O consists of two hydrogens (H1, H2) and one oxygen (O1).

Making an adjacency matrix out of this gives a 2x2x2 tensor, where the element types (H, O) are the X/Y axes and Z is the quantity of each element.

I think the edge-to-edge filter in this case would be
Conv2D(1xdxd) + Conv2D(dx1xd) + Conv2D(dxdx1),

and the edge-to-node filter would be a concatenation of Conv1D over the neighbouring edges (i.e., along the diagonal).
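
(A minimal NumPy sketch of the edge-to-edge part of that idea, assuming a d x d x d tensor T and three learned weight vectors, one per axis; the cross-shaped filter becomes a sum over the three fibres through each entry:)

    import numpy as np

    def e2e_3d_response(T, r, s, t):
        # 3D "cross" filter: the response at (i, j, k) is the weighted sum of
        # the three fibres T[:, j, k], T[i, :, k] and T[i, j, :].
        a = np.tensordot(r, T, axes=([0], [0]))   # shape (d, d), indexed by (j, k)
        b = np.tensordot(s, T, axes=([0], [1]))   # shape (d, d), indexed by (i, k)
        c = np.tensordot(t, T, axes=([0], [2]))   # shape (d, d), indexed by (i, j)
        return a[None, :, :] + b[:, None, :] + c[:, :, None]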
