
skokec / dau-convnet

Displaced Aggregation Units for Convolutional Networks, from the paper "Spatially-Adaptive Filter Units for Deep Neural Networks".

Home Page: http://www.vicos.si/Research/DeepCompositionalNet

CMake 5.38% C++ 70.45% Cuda 11.73% Shell 0.87% Python 11.31% Dockerfile 0.27%

dau-convnet's People

Contributors: skokec, vitjanz

dau-convnet's Issues

Using "padding = same " in a dau_conv2d_tf layer ??

Hi,
Can I use "padding = same" in a dau_conv2d_tf layer the way one can usually use it in a regular Keras/tensorflow layer(Conv2D(filters,(num_row, num_col), strides=strides, padding="same")) ?

cheers,
H
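
One way to check this empirically (a minimal sketch, not a confirmed part of the API: the DAUConv2dTF arguments below are copied from the longer example in the last issue on this page, and whether the layer exposes any padding option, or already preserves the spatial size on its own, is exactly what this would verify) is to build a reference Conv2D with padding="same" and a DAU layer on the same input and compare their static output shapes:

```python
import tensorflow.compat.v1 as tf
from tensorflow.compat.v1.keras.layers import Input, Conv2D
from dau_conv_tf import DAUConv2dTF

x = Input((16, 64, 64))  # channels_first: (C, H, W)

# Reference: a standard Keras convolution with 'same' padding keeps H and W.
y_ref = Conv2D(32, (3, 3), padding='same', data_format='channels_first')(x)

# DAU layer built with the same constructor arguments used in the U-Net example below;
# minval/maxval correspond to +/- floor(max_kernel_size / 2) for max_kernel_size = 9.
y_dau = DAUConv2dTF(filters=32, dau_units=(2, 2), max_kernel_size=9,
                    strides=1, data_format='channels_first', use_bias=False,
                    weight_initializer=tf.random_normal_initializer(stddev=0.1),
                    mu1_initializer=tf.random_uniform_initializer(minval=-4, maxval=4),
                    mu2_initializer=tf.random_uniform_initializer(minval=-4, maxval=4))(x)

print(y_ref.shape)  # (?, 32, 64, 64) with 'same' padding
print(y_dau.shape)  # if the spatial dims match, no extra padding handling is needed
```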

DAU-ConvNet in the U-Net architecture

Do you have any experience with applying DAU-ConvNets to the U-Net architecture or one of its derivatives? By that I mean replacing the standard convolutional blocks in that specific architecture with DAUs.

cheers,
H

version `GLIBCXX_3.4.30' not found (required by cmake)

While building GauNet I ran into the following errors:
"cmake: /home/yashc/anaconda3/lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by cmake)"
and "cmake: /home/yashc/anaconda3/lib/libcurl.so.4: no version information available (required by cmake)".
Please help me understand how I can resolve this.

Using DAU on Res50

Hello @skokec,

May I check with you whether it is possible to use this implementation on Windows? The README only lists builds tested on Ubuntu. Could you please share your suggestions, as I am trying to use this with a ResNet-50 but running it on Windows.

Error encountered when applying DAU_Conv

I am trying to modify ASPP to use a DAU_Conv layer with 6 units. However, when I run the code, it gives me the following error:

terminate called after throwing an instance of 'DAUConvNet::DAUException' what(): ASSERT ERROR: misaligned address

I installed the package from your pre-compiled dau_conv-1.0_TF1.13.1-cp35-cp35m-manylinux1_x86_64.whl file, built for TensorFlow 1.13.1, and my CUDA version is 10.0. Could you please give me some hints about what is happening in my case?

About the DAU details

Thanks for your work. I have a question about the experiments: when testing the DAU, did you replace all of the convolution modules with DAUs?

DAU-ConvNet not working with tf.keras or keras

Hi there,

I am trying to use DAUs to replace the standard convolutional blocks in a modified U-Net architecture, but I get the following warning and error:

WARNING: Entity <bound method DAUConv2dTF.call of <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a42a0f160>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output. Cause: converting <bound method DAUConv2dTF.call of <dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a42a0f160>>: AssertionError: If not matching a CFG node, must be a block statement: <gast.gast.ImportFrom object at 0x1a42b4f748>

My code:

```python
import tensorflow.compat.v1 as tf
tf.logging.set_verbosity(tf.logging.ERROR)

import numpy as np
from tensorflow.compat.v1.keras.layers import Input, Conv2D, MaxPooling2D, Conv2DTranspose, concatenate, BatchNormalization, Activation, add
from tensorflow.compat.v1.keras.models import Model, model_from_json
from tensorflow.keras.activations import relu
from dau_conv_tf import DAUConv2dTF
```

```python
def dau_bn(x, filters, dau_units, max_kernel_size):
    x = DAUConv2dTF(filters=filters,
                    dau_units=dau_units,
                    max_kernel_size=max_kernel_size,
                    strides=1,
                    data_format='channels_first',
                    use_bias=False,
                    weight_initializer=tf.random_normal_initializer(stddev=0.1),
                    mu1_initializer=tf.random_uniform_initializer(minval=-tf.floor(max_kernel_size/2.0),
                                                                  maxval=tf.floor(max_kernel_size/2.0), dtype=tf.float32),
                    mu2_initializer=tf.random_uniform_initializer(minval=-tf.floor(max_kernel_size/2.0),
                                                                  maxval=tf.floor(max_kernel_size/2.0), dtype=tf.float32),
                    sigma_initializer=None,
                    bias_initializer=None,
                    weight_regularizer=None,
                    mu1_regularizer=None,
                    mu2_regularizer=None,
                    sigma_regularizer=None,
                    bias_regularizer=None,
                    activity_regularizer=None,
                    weight_constraint=None,
                    mu1_constraint=None,
                    mu2_constraint=None,
                    sigma_constraint=None,
                    bias_constraint=None,
                    trainable=True,
                    mu_learning_rate_factor=500,  # additional factor for gradients of mu1 and mu2
                    name=None)(x)
    x = BatchNormalization(axis=1, scale=False)(x)
    x = Activation('relu')(x)
    return x
```

```python
def conv2d_bn(x, filters, num_row, num_col, padding="same", strides=(1, 1), activation='relu'):
    x = Conv2D(filters, (num_row, num_col), strides=strides, padding=padding,
               data_format='channels_first', use_bias=False)(x)
    x = BatchNormalization(axis=1, scale=False)(x)
    if activation is None:
        return x
    x = Activation(activation)(x)
    return x
```

```python
def trans_conv2d(x, filters, num_row, num_col, padding='same', strides=(2, 2), name=None):
    x = Conv2DTranspose(filters, (num_row, num_col), strides=strides, padding=padding,
                        data_format='channels_first')(x)
    x = BatchNormalization(axis=1, scale=False)(x)
    return x
```

```python
def MultiResBlock(dau_units, max_kernel_size, U, inp, alpha=1.67):
    W = alpha * U

    shortcut = inp
    shortcut = conv2d_bn(shortcut, int(W*0.167) + int(W*0.333) + int(W*0.5),
                         1, 1, activation=None, padding='same')

    conv3x3 = dau_bn(inp, int(W*0.167), dau_units, max_kernel_size)
    conv5x5 = dau_bn(conv3x3, int(W*0.333), dau_units, max_kernel_size)
    conv7x7 = dau_bn(conv5x5, int(W*0.5), dau_units, max_kernel_size)

    out = concatenate([conv3x3, conv5x5, conv7x7], axis=1)
    out = BatchNormalization(axis=1)(out)
    out = add([shortcut, out])
    out = Activation('relu')(out)
    out = BatchNormalization(axis=1)(out)

    return out
```

```python
def ResPath(dau_units, max_kernel_size, filters, length, inp):
    shortcut = inp
    shortcut = conv2d_bn(shortcut, filters, 1, 1, activation=None, padding='same')

    out = conv2d_bn(inp, filters, 3, 3, activation='relu', padding='same')

    out = add([shortcut, out])
    out = Activation('relu')(out)
    out = BatchNormalization(axis=1)(out)

    for i in range(length - 1):
        shortcut = out
        shortcut = conv2d_bn(shortcut, filters, 1, 1, activation=None, padding='same')

        out = dau_bn(out, filters, dau_units, max_kernel_size)

        out = add([shortcut, out])
        out = Activation('relu')(out)
        out = BatchNormalization(axis=1)(out)

    return out


def MultiResUnet(n_channels, height, width):
    inputs = Input((n_channels, height, width))

    mresblock1 = MultiResBlock((2, 2), 9, 32, inputs)
    pool1 = MaxPooling2D(pool_size=(2, 2), data_format='channels_first')(mresblock1)
    mresblock1 = ResPath((2, 2), 9, 32, 4, mresblock1)

    mresblock2 = MultiResBlock((2, 2), 9, 32*2, pool1)
    pool2 = MaxPooling2D(pool_size=(2, 2), data_format='channels_first')(mresblock2)
    mresblock2 = ResPath((2, 2), 9, 32*2, 3, mresblock2)

    mresblock3 = MultiResBlock((2, 2), 9, 32*4, pool2)
    pool3 = MaxPooling2D(pool_size=(2, 2), data_format='channels_first')(mresblock3)
    mresblock3 = ResPath((2, 2), 9, 32*4, 2, mresblock3)

    mresblock4 = MultiResBlock((2, 2), 9, 32*8, pool3)
    pool4 = MaxPooling2D(pool_size=(2, 2), data_format='channels_first')(mresblock4)
    mresblock4 = ResPath((2, 2), 9, 32*8, 1, mresblock4)

    mresblock5 = MultiResBlock((2, 2), 9, 32*16, pool4)

    up6 = concatenate([trans_conv2d(mresblock5, 32*8, 2, 2, strides=(2, 2), padding='same'),
                       mresblock4], axis=1)
    mresblock6 = MultiResBlock((2, 2), 9, 32*8, up6)

    up7 = concatenate([trans_conv2d(mresblock6, 32*4, 2, 2, strides=(2, 2), padding='same'),
                       mresblock3], axis=1)
    mresblock7 = MultiResBlock((2, 2), 9, 32*4, up7)

    up8 = concatenate([trans_conv2d(mresblock7, 32*2, 2, 2, strides=(2, 2), padding='same'),
                       mresblock2], axis=1)
    mresblock8 = MultiResBlock((2, 2), 9, 32*2, up8)

    up9 = concatenate([trans_conv2d(mresblock8, 32, 2, 2, strides=(2, 2), padding='same'),
                       mresblock1], axis=1)
    mresblock9 = MultiResBlock((2, 2), 9, 32, up9)

    conv10 = conv2d_bn(mresblock9, 1, 1, 1, activation='sigmoid')

    model = Model(inputs=[inputs], outputs=[conv10])

    return model


model = MultiResUnet(3, 128, 128)
display(model.summary())
```

Now, the error I get:

TypeError Traceback (most recent call last)

in
----> 1 model = MultiResUnet(3,128, 128)
2 display(model.summary())

in MultiResUnet(n_channels, height, width)
54 conv10 = conv2d_bn(mresblock9, 1, 1, 1, activation='sigmoid')
55
---> 56 model = Model(inputs=[inputs], outputs=[conv10])
57
58 return model

~/miniconda3/envs/MastersThenv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in __init__(self, *args, **kwargs)
127
128 def __init__(self, *args, **kwargs):
--> 129 super(Model, self).__init__(*args, **kwargs)
130 # initializing _distribution_strategy here since it is possible to call
131 # predict on a model without compiling it.

~/miniconda3/envs/MastersThenv/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py in __init__(self, *args, **kwargs)
165 self._init_subclassed_network(**kwargs)
166
--> 167 tf_utils.assert_no_legacy_layers(self.layers)
168
169 # Several Network methods have "no_automatic_dependency_tracking"

~/miniconda3/envs/MastersThenv/lib/python3.6/site-packages/tensorflow/python/keras/utils/tf_utils.py in assert_no_legacy_layers(layers)
397 'classes), please use the tf.keras.layers implementation instead. '
398 '(Or, if writing custom layers, subclass from tf.keras.layers rather '
--> 399 'than tf.layers)'.format(layer_str))
400
401

TypeError: The following are legacy tf.layers.Layers:
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a42a0f160>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a42ac44e0>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a42d1f9e8>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a43aaf048>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a43b6d358>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a43ce27f0>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a448cb0b8>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a44a9ccc0>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a44b00860>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a453f3400>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a454a37f0>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4554fd30>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a45ce3e48>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a45ebf518>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a444bc0b8>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4665a588>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4ad5b518>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4adffac8>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4511c668>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4b3f5710>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4b4a5ac8>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a44324048>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a45e96b70>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a445fe3c8>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4323aa58>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4bb59898>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4bd356d8>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a435126d8>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4bd976a0>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a437f65c0>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4c2bc2e8>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4c375668>
<dau_conv_tf.dau_conv.DAUConv2dTF object at 0x1a4c425c88>
To use keras as a framework (for instance using the Network, Model, or Sequential classes), please use the tf.keras.layers implementation instead. (Or, if writing custom layers, subclass from tf.keras.layers rather than tf.layers)

My tf.keras version is 2.2.4-tf, running on macOS High Sierra.

Any help would be highly appreciated,

cheers, H
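
For context on the TypeError itself: in TF 1.13/1.14, building a functional tf.keras.Model runs the assert_no_legacy_layers check visible in the traceback above, which rejects any layer derived from the legacy tf.layers.Layer base class, and DAUConv2dTF appears to be such a layer. The following is only an illustrative sketch of that check using dummy layers under TF 1.x, not code from the repo:

```python
import tensorflow.compat.v1 as tf

class LegacyIdentity(tf.layers.Layer):       # legacy base class (tf.layers / tf.compat.v1.layers)
    def call(self, inputs):
        return inputs

class KerasIdentity(tf.keras.layers.Layer):  # Keras base class (tf.keras.layers)
    def call(self, inputs):
        return inputs

inp = tf.keras.Input(shape=(3, 128, 128))

tf.keras.Model(inputs=inp, outputs=KerasIdentity()(inp))   # builds fine
tf.keras.Model(inputs=inp, outputs=LegacyIdentity()(inp))  # raises the same TypeError as above
```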
