cnn_finetune's People

Contributors: flyyufelix

cnn_finetune's Issues

"negative dimension" constructing VGG16

When running the VGG16 code I get this error:

Traceback (most recent call last):
  File "/Users/noah/Documents/TensorFlow/10_2_week/cnn_finetune-master/vgg16.py", line 105, in <module>
    model = vgg16_model(img_rows, img_cols, channel, num_classes)
  File "/Users/noah/Documents/TensorFlow/10_2_week/cnn_finetune-master/vgg16.py", line 34, in vgg16_model
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
  File "/Users/noah/anaconda/lib/python3.6/site-packages/keras/models.py", line 475, in add
    output_tensor = layer(self.outputs[0])
  File "/Users/noah/anaconda/lib/python3.6/site-packages/keras/engine/topology.py", line 602, in __call__
    output = self.call(inputs, **kwargs)
  File "/Users/noah/anaconda/lib/python3.6/site-packages/keras/layers/pooling.py", line 154, in call
    data_format=self.data_format)
  File "/Users/noah/anaconda/lib/python3.6/site-packages/keras/layers/pooling.py", line 217, in _pooling_function
    pool_mode='max')
  File "/Users/noah/anaconda/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 3386, in pool2d
    x = tf.nn.max_pool(x, pool_size, strides, padding=padding)
  File "/Users/noah/Documents/TensorFlow/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 1772, in max_pool
    name=name)
  File "/Users/noah/Documents/TensorFlow/lib/python3.6/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1605, in _max_pool
    data_format=data_format, name=name)
  File "/Users/noah/Documents/TensorFlow/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/Users/noah/Documents/TensorFlow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2632, in create_op
    set_shapes_for_outputs(ret)
  File "/Users/noah/Documents/TensorFlow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1911, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/Users/noah/Documents/TensorFlow/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1861, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/Users/noah/Documents/TensorFlow/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 595, in call_cpp_shape_fn
    require_shape_fn)
  File "/Users/noah/Documents/TensorFlow/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 659, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_2/MaxPool' (op: 'MaxPool') with input shapes: [?,1,112,128].
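This error typically points to a channel-ordering mismatch: the models in this repo build Theano-style (channels, rows, cols) inputs, but under the TensorFlow backend MaxPooling2D interprets the tensor as (rows, cols, channels), so the spatial dimension collapses to 1. A minimal sketch, assuming the Keras 1.x / early-2.x backend helpers this repo targets, of checking and setting the ordering before the model is built:

    # Hypothetical check: the scripts here expect Theano-style dim ordering.
    from keras import backend as K

    print(K.image_dim_ordering())   # 'tf' -> expects (rows, cols, channels)
    K.set_image_dim_ordering('th')  # 'th' -> expects (channels, rows, cols)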

ValueError: GpuCorrMM images and kernel must have the same stack size

I'm trying to retrain resnet_50 using my own dataset, which has the following shapes:
('X_train: ', (11562L, 200L, 200L, 1L))
('X_valid:', (2891L, 200L, 200L, 1L))
('Y_train:', (11562L, 124L))
('Y_valid:', (2891L, 124L))

Once the model starts training, it throws "GpuCorrMM images and kernel must have the same stack size". More detail on the error:

Train on 11562 samples, validate on 2891 samples
Epoch 1/50
Traceback (most recent call last):
File "resnet_50.py", line 195, in
validation_data=(X_valid, Y_valid))
File "C:\Users\saugat_kc\AppData\Local\Continuum\Anaconda2\envs\resnet_env\lib\site-packages\keras\engine\training.py", line 1196, in fit
initial_epoch=initial_epoch)
File "C:\Users\saugat_kc\AppData\Local\Continuum\Anaconda2\envs\resnet_env\lib\site-packages\keras\engine\training.py", line 891, in _fit_loop
outs = f(ins_batch)
File "C:\Users\saugat_kc\AppData\Local\Continuum\Anaconda2\envs\resnet_env\lib\site-packages\keras\backend\theano_backend.py", line 959, in call
return self.function(*inputs)
File "C:\Users\saugat_kc\AppData\Local\Continuum\Anaconda2\envs\resnet_env\lib\site-packages\theano\compile\function_module.py", line 898, in call
storage_map=getattr(self.fn, 'storage_map', None))
File "C:\Users\saugat_kc\AppData\Local\Continuum\Anaconda2\envs\resnet_env\lib\site-packages\theano\gof\link.py", line 325, in raise_with_op
reraise(exc_type, exc_value, exc_trace)
File "C:\Users\saugat_kc\AppData\Local\Continuum\Anaconda2\envs\resnet_env\lib\site-packages\theano\compile\function_module.py", line 884, in call
self.fn() if output_subset is None else
ValueError: GpuCorrMM images and kernel must have the same stack size

I'm using Python 2.7, Theano 0.9.0 with keras 1.2.2 on a Windows 10 platform.
Is there anything that I'm missing here?
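For context, GpuCorrMM complains when the number of input channels seen by a convolution does not match the kernel's channel count; with the Theano backend the data is expected as (samples, channels, rows, cols), whereas the shapes above are channels-last (and the pretrained ResNet-50 weights additionally expect 3 channels, not 1). A minimal sketch, assuming X_train/X_valid are the NumPy arrays shown above, of moving the channel axis to the front:

    import numpy as np

    # Hypothetical reshaping: Theano's 'th' ordering wants (samples, channels, rows, cols).
    # Note the ImageNet weights also expect 3 input channels, so a 1-channel input
    # would still mismatch the first convolution's kernel.
    X_train = np.transpose(X_train, (0, 3, 1, 2))   # (11562, 1, 200, 200)
    X_valid = np.transpose(X_valid, (0, 3, 1, 2))   # (2891, 1, 200, 200)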

Fine tuning of two separate layers in a model

Hello, I read your blog post about fine-tuning. Very good explanation. I have an issue with a model which could be interesting to investigate. Please have a look:

https://github.com/nicolov/segmentation_keras/blob/master/model.py

I want to fine-tune this with a different number of classes (for example 2).

To do that, I should modify these two layers and change 21 to 2:

model.add(Convolution2D(21, 1, 1, activation='linear', name='fc-final'))

and

model.add(Convolution2D(21, 1, 1, name='ct_final'))

Also, I should freeze the other layers by setting trainable = False.

Now my questions:

  1. I don't know how to determine how many layers of the model I have to freeze up to one of these layers.

  2. I should freeze everything except these two separate layers. If the correct way to freeze in such situations is to do it separately, I need to know the indices of these layers in the model. How can I find that out?
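One way to answer both questions is to enumerate model.layers and toggle trainable by name. A minimal sketch, assuming the linked segmentation_keras model exposes the layer names 'fc-final' and 'ct_final' as shown above:

    # Print layer indices, then freeze everything except the two 1x1 convolutions
    # that are being re-sized to 2 classes. Remember to recompile afterwards.
    for i, layer in enumerate(model.layers):
        print(i, layer.name)

    trainable_names = {'fc-final', 'ct_final'}
    for layer in model.layers:
        layer.trainable = layer.name in trainable_names

    model.compile(optimizer='sgd', loss='categorical_crossentropy')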

Input_shape mismatch using denseNet121.py

I use Theano as the backend.
When I run densenet121.py,
it shows me: ValueError: Error when checking input: expected data to have shape (224, 224, 3) but got array with shape (3, 224, 224)
It's weird.

Dataset for each model

Hello Sir,

Thank you for the repo.

How can I prepare the dataset for each model?
It is not quite clear what format the models follow, or whether they are meant for classification or semantic segmentation (for example VOC or similar), so the way of creating and feeding the dataset is unknown.

ZeroPadding2D((1,1), name = '...') vs. Conv2D(...., padding = 'same', ... )

Hi,
I'm rewriting the code resnet152.py in the Keras 2 API, and I had this question:
why did you build a ZeroPadding2D layer, whose name does not appear in the Caffe .prototxt,
x = ZeroPadding2D((1, 1), name=conv_name_base + '2b_zeropadding')(x)
x = Convolution2D(nb_filter2, kernel_size, kernel_size, name=conv_name_base + '2b', bias=False)(x)
in the function conv_block()?
(Since I don't know the mechanism used to convert a caffemodel to a Keras model,
I'm wondering whether this operation is necessary for that mechanism, right?)

Because this can be done easily with just one Conv2D call:
x = Conv2D(nb_filter2, (kernel_size, kernel_size), padding='same', name=conv_name_base + '2b', use_bias=False)(x)

And when I finish my code, if you like, maybe I can contribute an API update to your repo.
Looking forward to your reply :)

fine tune

Hi,
if I use your ResNet50 with pretrained ImageNet weights to extract features, can all the layer weights before the final pooling change?
And once I have finished training, how do I save the weights and load them?

An example like your CIFAR-10 fine-tune would help.
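For the save/load part of the question, the standard Keras calls are model.save_weights / model.load_weights (or model.save / load_model for the whole model). A minimal sketch; the file names are just placeholders:

    from keras.models import load_model

    # Save only the weights (HDF5) and restore them into the same architecture later:
    model.save_weights('resnet50_finetuned_weights.h5')
    model.load_weights('resnet50_finetuned_weights.h5')

    # Or save architecture + weights + optimizer state in one file.
    # If the model uses custom layers (e.g. the Scale layer in some scripts here),
    # they must be passed to load_model via custom_objects.
    model.save('resnet50_finetuned.h5')
    model = load_model('resnet50_finetuned.h5')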

fine tune my own dataset?

I really want to know how to load my own dataset for fine-tuning.
Could someone share the code?
Thanks!

layers of resnet_101.py

In the paper, the conv3_x block count for the 101-layer network is 4,
but in the code it is 3:

x = conv_block(x, 3, [128, 128, 512], stage=3, block='a')
for i in range(1,3):
     x = identity_block(x, 3, [128, 128, 512], stage=3, block='b'+str(i))
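For reference, ResNet-101 in the paper uses 4 blocks in conv3_x (one conv_block plus three identity_blocks), so the loop would presumably need to run one step further. A sketch of what that change might look like (whether the shipped weight file matches this block count is not confirmed here):

    # Hypothetical correction: conv3_x has 4 blocks total, i.e. 1 conv_block + 3 identity_blocks.
    x = conv_block(x, 3, [128, 128, 512], stage=3, block='a')
    for i in range(1, 4):
        x = identity_block(x, 3, [128, 128, 512], stage=3, block='b' + str(i))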

Resnet101 weights using channels other than 3 results in weight loading error

Currently the resnet101 model defaults to using 3 channels (even though its color_type is defaulting to 1).

cnn_finetune/resnet_101.py

Lines 116 to 122 in cba3439

global bn_axis
if K.image_dim_ordering() == 'tf':
    bn_axis = 3
    img_input = Input(shape=(img_rows, img_cols, color_type), name='data')
else:
    bn_axis = 1
    img_input = Input(shape=(color_type, img_rows, img_cols), name='data')

I found that if you change this channel to just 1 and try loading the weights that are supplied you get this error:

ValueError: Dimension 0 in both shapes must be equal, but are 7 and 64. Shapes are [7,7,1,64] and [64,3,7,7]. for 'Assign' (op: 'Assign') with input shapes: [7,7,1,64], [64,3,7,7].

But if you keep the channels at 3, weight loading works normally.

The error message is quite strange, as it seems to imply that the channel ordering is wrong. But if you just keep the channels at 3 for the Input layer, there's no problem at all.

This also occurs if you change the channels to be greater than 3. Why would weights be affected by how many input channels there are anyway?
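The shapes in the error itself show why: the first convolution's kernel in the pretrained file was trained on 3-channel input (e.g. [7,7,3,64] in TF ordering), so any other channel count changes that kernel's shape and load_weights fails. A small sketch of one common workaround, assuming grayscale NumPy data, is to keep color_type=3 in the model and adapt the data instead:

    import numpy as np

    # Hypothetical workaround: X_gray is assumed to be a (samples, rows, cols, 1) array.
    X_rgb = np.repeat(X_gray, 3, axis=-1)   # (samples, rows, cols, 3)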

Is this step necessary (does your example for vgg16/dataset_cifar50 do this step)?

# For Tensorflow 
# Switch RGB to BGR order 
x = x[:, :, :, ::-1]  

# Subtract ImageNet mean pixel 
x[:, :, :, 0] -= 103.939
x[:, :, :, 1] -= 116.779
x[:, :, :, 2] -= 123.68

# For Theano
# Switch RGB to BGR order 
x = x[:, ::-1, :, :]

# Subtract ImageNet mean pixel 
x[:, 0, :, :] -= 103.939
x[:, 1, :, :] -= 116.779
x[:, 2, :, :] -= 123.68

I didn't do this step because I did not see your code for vgg16 do it.
The question is why we need to do it: will the pixel values turn negative if we subtract the ImageNet mean pixel?
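As a small illustration of the last question: after casting to float32, the mean subtraction does produce negative values, and that is expected, since the pretrained networks were trained on zero-centered BGR inputs. A minimal sketch, assuming a NumPy batch in RGB order with TensorFlow dim ordering:

    import numpy as np

    # Hypothetical example: a black image, float32, RGB -> BGR + mean subtraction.
    x = np.zeros((1, 224, 224, 3), dtype=np.float32)
    x = x[:, :, :, ::-1]            # RGB -> BGR
    x[:, :, :, 0] -= 103.939        # B mean
    x[:, :, :, 1] -= 116.779        # G mean
    x[:, :, :, 2] -= 123.68         # R mean
    print(x.min(), x.max())         # roughly -123.68 .. -103.94: negative values are expected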

Can I use this code for regression problem?

I was trying to use this code for a regression problem.
And I am getting this error.
ValueError: Multioutput target data is not supported with label binarization
Any help?
Thanks.
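The error comes from binarizing the targets as class labels. For regression, the usual change (a sketch under the assumption of a Keras Sequential model like the ones in this repo) is to skip to_categorical, end with a linear Dense output, and use a regression loss; num_targets below is a placeholder for the number of regression outputs:

    from keras.layers import Dense
    from keras.optimizers import SGD

    # Hypothetical regression head, assuming `model` ends just before the final softmax layer.
    model.add(Dense(num_targets, activation='linear'))
    model.compile(optimizer=SGD(lr=1e-3, momentum=0.9), loss='mean_squared_error')
    # Targets are passed as raw float arrays; no to_categorical / label binarization.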

resNeXt pre-trained model

Thank you for your converted models for Keras. They are really helpful for me.
Would you also upload a ResNeXt model?
I read your blog and tried to convert it with the unmaintained Keras converter, but I am not clever enough to make it through.
Best regards.

Failed to use Theano as backend for vgg16.py

Hi,
I am trying to run vgg16.py using the Theano backend, as the network weights are in Theano format.
I have set
from keras import backend as K
K.set_image_dim_ordering('th')
but it is still using TensorFlow and giving me the error.

self._session = tf_session.TF_NewSession(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: Failed to create session.

Process finished with exit code 1
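Note that set_image_dim_ordering only changes the tensor layout, not the backend itself; the backend is chosen via the "backend" field in ~/.keras/keras.json or the KERAS_BACKEND environment variable, and must be set before keras is imported. A minimal sketch of forcing Theano from within a script:

    # Select the backend *before* importing keras:
    import os
    os.environ['KERAS_BACKEND'] = 'theano'

    import keras                      # should now print "Using Theano backend."
    from keras import backend as K
    K.set_image_dim_ordering('th')    # layout only; this does not switch the backend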

Problem with finetuning DenseNet with my own data

Hi @flyyufelix,
I am trying to fine-tune DenseNet161.py with custom PNG images. I am getting an error. I compared cifar10.load_data() with my own load_data(); the output shapes and content types are the same. I'd be happy if you could take a look at the error and my code below. Thanks in advance for your time and help.

ERROR:
ValueError: You are passing a target array of shape (2, 1) while using as loss categorical_crossentropy. categorical_crossentropy expects targets to be binary matrices (1s and 0s) of shape (samples, classes). If your targets are integer classes, you can convert them to the expected format via:

from keras.utils.np_utils import to_categorical
y_binary = to_categorical(y_int)

MY CODE:

def load_data(img_rows, img_cols):
    num_classes = 1
    img1=cv2.resize(cv2.imread('images/vehicle/image0451.png'), (img_rows, img_cols)).astype(np.float32)
    img2=cv2.resize(cv2.imread('images/vehicle/image0452.png'), (img_rows, img_cols)).astype(np.float32)
    img3=cv2.resize(cv2.imread('images/vehicle/image0453.png'), (img_rows, img_cols)).astype(np.float32)

    for x in (img1,img2,img3):
        x[:, :, 0] -= 103.939
        x[:, :, 1] -= 116.779
        x[:, :, 2] -= 123.68

    X_train = np.array([img1,img2])
    X_valid = np.array([img3])

    Y_train = np.array([[0],[0]])
    Y_valid = np.array([[0]])

    # Transform targets to keras compatible format
    Y_train = np_utils.to_categorical(Y_train, num_classes)
    Y_valid = np_utils.to_categorical(Y_valid, num_classes)

    return X_train, Y_train, X_valid, Y_valid
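A likely source of the (2, 1) target shape is num_classes = 1 above: to_categorical(y, 1) yields a single column, while categorical_crossentropy needs at least one column per class. A small illustration of the difference, assuming the setup above really has two classes (vehicle / non-vehicle):

    import numpy as np
    from keras.utils import np_utils

    y = np.array([[0], [0]])
    print(np_utils.to_categorical(y, 1).shape)   # (2, 1) -> triggers the error above
    print(np_utils.to_categorical(y, 2).shape)   # (2, 2) -> valid one-hot targets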

Negative dimension size ValueError

Hi Felix, thanks for the code you have provided here!

I just cloned everything and am attempting to run things out of the box before doing my own fine-tuning. I'm running into this issue though, and am wondering if you have any insights.

I simply run 'python vgg16.py' and get this output:

Kristas-MacBook-Pro-2:cnn_finetune Krista$ python vgg19.py
Using TensorFlow backend.
2017-11-10 12:30:18.227736: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
vgg19.py:29: UserWarning: Update your Conv2D call to the Keras 2 API: Conv2D(64, (3, 3), activation="relu")
model.add(Convolution2D(64, 3, 3, activation='relu'))
vgg19.py:31: UserWarning: Update your Conv2D call to the Keras 2 API: Conv2D(64, (3, 3), activation="relu")
model.add(Convolution2D(64, 3, 3, activation='relu'))
vgg19.py:35: UserWarning: Update your Conv2D call to the Keras 2 API: Conv2D(128, (3, 3), activation="relu")
model.add(Convolution2D(128, 3, 3, activation='relu'))
vgg19.py:37: UserWarning: Update your Conv2D call to the Keras 2 API: Conv2D(128, (3, 3), activation="relu")
model.add(Convolution2D(128, 3, 3, activation='relu'))
Traceback (most recent call last):
File "vgg19.py", line 108, in
model = vgg19_model(img_rows, img_cols, channel, num_classes)
File "vgg19.py", line 38, in vgg19_model
model.add(MaxPooling2D((2,2), strides=(2,2)))
File "/usr/local/lib/python2.7/site-packages/keras/models.py", line 475, in add
output_tensor = layer(self.outputs[0])
File "/usr/local/lib/python2.7/site-packages/keras/engine/topology.py", line 603, in call
output = self.call(inputs, **kwargs)
File "/usr/local/lib/python2.7/site-packages/keras/layers/pooling.py", line 154, in call
data_format=self.data_format)
File "/usr/local/lib/python2.7/site-packages/keras/layers/pooling.py", line 217, in _pooling_function
pool_mode='max')
File "/usr/local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 3456, in pool2d
data_format=tf_data_format)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1958, in max_pool
name=name)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 2806, in _max_pool
data_format=data_format, name=name)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2958, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2209, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2159, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 627, in call_cpp_shape_fn
require_shape_fn)
File "/usr/local/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 691, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_2/MaxPool' (op: 'MaxPool') with input shapes: [?,1,112,128].

I'm using Python 2.7.14, Keras 2.0.9, and TensorFlow 1.4.0.

Thanks in advance!

How to use resnet101 for batch training?

I am using ResNet-101 for batch training with 1900 training images of dimensions 224x224x3. The labels are pixelwise probability maps over 2 classes, so each label has shape 224x224x2.
I build the model with img_input = Input(shape=(1900, 224, 224, 3)), but when this is passed to the next layer, ZeroPadding2D, img_input somehow ends up with 5 dimensions.

This being the error:

ValueError: Input 0 is incompatible with layer conv1_zeropadding: expected ndim=4, found ndim=5

Would be great if you could help
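One thing worth noting (this is the standard Keras convention, not something specific to this repo): Input(shape=...) excludes the batch dimension, so including 1900 there adds a fifth axis once Keras prepends the batch axis. The sample count is supplied to fit instead. A minimal sketch:

    from keras.layers import Input

    # Shape is per-sample; the 1900 samples go to fit().
    img_input = Input(shape=(224, 224, 3), name='data')   # not (1900, 224, 224, 3)
    # ... build the rest of the model on img_input ...
    # model.fit(X_train, Y_train, batch_size=16, epochs=10)  # X_train: (1900, 224, 224, 3)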

load_data() module

Hey, thanks for gathering all the CNN models + ImageNet pretrained models in one repo. I am curious about the load_data() function. It would be nice if you could explain more or add some code showing how to write a load_data() function. For example, if there are two folders, one for training images and one for test images, how would I write load_data()?

Thanks.
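For what it's worth, here is a minimal sketch of a load_data() returning the shape the fine-tuning scripts here expect (X_train, Y_train, X_valid, Y_valid). The folder layout (one sub-folder per class under hypothetical train/ and test/ directories) and the cv2-based loading are assumptions, not something the repo defines:

    import os
    import cv2
    import numpy as np
    from keras.utils import np_utils

    def load_data(img_rows, img_cols, train_dir='train', valid_dir='test'):
        """Load images from <dir>/<class_name>/* into the arrays the fine-tuning scripts expect."""
        def load_split(root):
            images, labels = [], []
            class_names = sorted(os.listdir(root))
            for label, cls in enumerate(class_names):
                for fname in os.listdir(os.path.join(root, cls)):
                    img = cv2.imread(os.path.join(root, cls, fname))
                    img = cv2.resize(img, (img_cols, img_rows)).astype(np.float32)
                    images.append(img)
                    labels.append(label)
            return np.array(images), np.array(labels), len(class_names)

        X_train, y_train, num_classes = load_split(train_dir)
        X_valid, y_valid, _ = load_split(valid_dir)
        Y_train = np_utils.to_categorical(y_train, num_classes)
        Y_valid = np_utils.to_categorical(y_valid, num_classes)
        return X_train, Y_train, X_valid, Y_valid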

Enhancements: Example for loading own datasets

Very nice work! I found it really easy to test all these networks and build some understanding.

It would be very nice to have examples of:

  1. How to load your own dataset (a function)
  2. How to save the fine-tuned model

ValueError: Error when checking target

Hi, I got the following error when trying to fine-tune the code for ResNet50.

Warning (from warnings module):
File "C:\CT_SCAN_IMAGE_SET\resnet_50\dbs2017\fyufelix_cnn_finetune_4.py", line 210
validation_data=(X_valid, Y_valid),
UserWarning: The nb_epoch argument in fit has been renamed epochs.
Traceback (most recent call last):
File "C:\CT_SCAN_IMAGE_SET\resnet_50\dbs2017\fyufelix_cnn_finetune_4.py", line 210, in
validation_data=(X_valid, Y_valid),
File "C:\Research\Python_installation\lib\site-packages\keras\engine\training.py", line 1581, in fit
batch_size=batch_size)
File "C:\Research\Python_installation\lib\site-packages\keras\engine\training.py", line 1418, in _standardize_user_data
exception_prefix='target')
File "C:\Research\Python_installation\lib\site-packages\keras\engine\training.py", line 141, in _standardize_input_data
str(array.shape))
ValueError: Error when checking target: expected fc10 to have 2 dimensions, but got array with shape (277, 224, 224, 3)

Can someone please let me know how I can fix this?

X_train.shape - (1107, 224, 224, 3)
Y_train.shape - (277, 224, 224 , 3)
X_valid.shape- (1107,)
Y_valid.shape - (277,)

Resnet50 Issue loading weights

I am trying to run the code for the Resnet50 model but I keep getting the same error.

"ValueError: You are trying to load a weight file containing 107 layers into a model with 99 layers."

I'm using Keras 2.2.4 and I tried downgrading, but I am still getting the same issue. Has anyone encountered this error?

modification of Yolo v2

Hello Felix,

in the article "Detect and Classify Species of Fish from Fishing Vessels ..." you mentioned:

I used an improved version of YOLO called YOLO v2. It made several small but important changes inspired by Faster R-CNN, such as assigning bounding box coordinate “priors” to each partitioned region and replacing the fully connected layers with convolutional layers, hence making the network fully convolutional. Essentially, YOLO v2 works very similarly to RPN.

May I ask where you made these modifications? Inside the yolo-voc.cfg file, isn't it?

I want to follow you and make these modifications. Would you please help me?

Image input shape issue

I am using the ResNet-152 architecture, training on 331 input images of size 224x224 with the TensorFlow backend, but I keep getting this error:
ValueError: Error when checking input: expected data to have shape (None, 224, 224, 3) but got array with shape (331, 3, 224, 224)
I tried setting the image data format to channels_first and channels_last, but it's not working either way.
Kindly help me out.
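If the arrays were saved channels-first, another option besides switching the Keras data format is to transpose them into the (rows, cols, channels) layout the TensorFlow-backend model expects. A minimal NumPy sketch, assuming X has the shape (331, 3, 224, 224) shown in the error:

    import numpy as np

    # Hypothetical fix: move the channel axis to the end to match (None, 224, 224, 3).
    X = np.transpose(X, (0, 2, 3, 1))   # (331, 3, 224, 224) -> (331, 224, 224, 3)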

Sorry, I followed your tutorial, but I got an error.

Traceback (most recent call last):
  File "C:/Users/hadxu/PycharmProjects/Quant_implement/deep-learning/fine-tuning/VGG16_tuning.py", line 106, in <module>
    model = vgg16_model(img_rows, img_cols, channel, num_classes)
  File "C:/Users/hadxu/PycharmProjects/Quant_implement/deep-learning/fine-tuning/VGG16_tuning.py", line 39, in vgg16_model
    model.add(MaxPooling2D((2, 2), strides=(2, 2)))
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\keras\models.py", line 466, in add
    output_tensor = layer(self.outputs[0])
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\keras\engine\topology.py", line 585, in __call__
    output = self.call(inputs, **kwargs)
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\keras\layers\pooling.py", line 154, in call
    data_format=self.data_format)
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\keras\layers\pooling.py", line 217, in _pooling_function
    pool_mode='max')
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\keras\backend\tensorflow_backend.py", line 3245, in pool2d
    x = tf.nn.max_pool(x, pool_size, strides, padding=padding)
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1821, in max_pool
    name=name)
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1638, in _max_pool
    data_format=data_format, name=name)
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\tensorflow\python\framework\ops.py", line 2338, in create_op
    set_shapes_for_outputs(ret)
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\tensorflow\python\framework\ops.py", line 1719, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\tensorflow\python\framework\ops.py", line 1669, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 610, in call_cpp_shape_fn
    debug_python_shape_fn, require_shape_fn)
  File "C:\Program Files\Anaconda3\envs\deep-learning\lib\site-packages\tensorflow\python\framework\common_shapes.py", line 676, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 2 from 1 for 'max_pooling2d_2/MaxPool' (op: 'MaxPool') with input shapes: [?,1,112,128].

How can I use the inceptionv3 theano model in tensorflow?

Many thanks for your sharing!
However, I have a problem with fine-tuning Inception-v3 in TensorFlow.
Can I load the Theano-version model weights in TensorFlow? If so, can you give me more detail?
I would appreciate anyone's reply!
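In general, Theano-ordered weights can be used under TensorFlow if the dim ordering matches what the weights expect and the convolution kernels are flipped for the other backend; Keras ships a helper for the kernel conversion. A sketch of the general idea (whether this alone is enough for this repo's particular Inception-v3 weight file is not confirmed here; the constructor name and arguments are taken from the repo's script style):

    from keras import backend as K
    from keras.utils.layer_utils import convert_all_kernels_in_model

    K.set_image_dim_ordering('th')   # the Theano weights assume channels-first layout
    model = inception_v3_model(img_rows, img_cols, channel, num_classes)  # loads the weights internally
    # Flip the conv kernels so they compute the same thing under TensorFlow:
    convert_all_kernels_in_model(model)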

AttributeError: 'Tensor' object has no attribute 'assign'

Hi Felix, thanks for providing the resnet152 model in keras + weights.
However I have the following error when trying to run it :

File "resnet_152.py", line 252, in
model = resnet152_model(img_rows, img_cols, channel, num_class)
File "resnet_152.py", line 222, in resnet152_model
model.load_weights(weights_path)
File "/usr/local/lib/python3.4/dist-packages/keras/engine/topology.py", line 2500, in load_weights
load_weights_from_hdf5_group(f, self.layers)
File "/usr/local/lib/python3.4/dist-packages/keras/engine/topology.py", line 2913, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "/usr/local/lib/python3.4/dist-packages/keras/backend/tensorflow_backend.py", line 2022, in batch_set_value
assign_op = x.assign(assign_placeholder)
AttributeError: 'Tensor' object has no attribute 'assign'

tensorflow: 1.0.1
keras: 2.0.3

thanks

Add classes to existing classes in transfer learning

Hello, your transfer learning sample is very useful.

I have a question: do you think it is possible to use transfer learning to add 2 new classes to the 1000 classes already covered by the starting model?
For example, I would like to create a model that covers the 1000 ImageNet classes of InceptionV3 plus 2 animal classes of my choice for which I have a dataset.

Thanks a lot.
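One common way to do this (a sketch of the general technique, not something the repo provides) is to replace the 1000-way softmax with a 1002-way one and copy the old weights into the first 1000 output units, so the original classes keep their learned weights while the 2 new ones start fresh and are trained on your data. `base` below is a hypothetical ImageNet classifier whose last layer is a 1000-way Dense softmax:

    import numpy as np
    from keras.layers import Dense
    from keras.models import Model

    old_dense = base.layers[-1]
    features = base.layers[-2].output

    new_dense = Dense(1002, activation='softmax', name='predictions_1002')
    new_output = new_dense(features)
    model = Model(base.input, new_output)

    # Copy the original 1000-class weights into the first 1000 output units.
    W_old, b_old = old_dense.get_weights()
    W_new, b_new = new_dense.get_weights()
    W_new[:, :1000] = W_old
    b_new[:1000] = b_old
    new_dense.set_weights([W_new, b_new])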

Negative dimension size when reproducing the result using Inception_v3

I have run some models such as densenet169 and they work well. Thank you for this helpful work!!!

However, when I try to run the inception_v3 model, this error occurs; there seem to be some bugs in the inception_v3 code. The error log is as follows:

Traceback (most recent call last):
  File "inception_v3_tmp.py", line 238, in <module>
    model = inception_v3_model(img_rows, img_cols, channel, num_classes)
  File "inception_v3_tmp.py", line 56, in inception_v3_model
    x = conv2d_bn(x, 32, 3, 3, border_mode='valid')
  File "inception_v3_tmp.py", line 31, in conv2d_bn
    name=conv_name)(x)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/keras/engine/topology.py", line 602, in __call__
    output = self.call(inputs, **kwargs)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/keras/layers/convolutional.py", line 164, in call
    dilation_rate=self.dilation_rate)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 3161, in conv2d
    data_format='NHWC')
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 670, in convolution
    op=op)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 338, in with_space_to_batch
    return op(input, num_spatial_dims, padding)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 662, in op
    name=name)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 131, in _non_atrous_convolution
    name=name)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 399, in conv2d
    data_format=data_format, name=name)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2508, in create_op
    set_shapes_for_outputs(ret)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1873, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1823, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
    debug_python_shape_fn, require_shape_fn)
  File "/home/wangjk/.virtualenvs/tensorflow/local/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 676, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution' (op: 'Conv2D') with input shapes: [?,1,149,32], [3,3,32,32].

Performance and source of the models?

Can you add how well each of these models performs and how they were trained? For example it looks like the resnet 50 model comes from Keras' models but the densenet models are in a google drive folder. Were they trained with these scripts or imported from elsewhere?

Also, could you consider adding an MIT license? Here is a quick MIT summary link. Without a license nobody can legally make use of the code so please do!

The MIT License (MIT)

Copyright (c) <year> <copyright holders>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Finally, have you considered contributing this to keras-contrib? It is the official Keras community contributions library.

May I change the channel number when I load the weight?

Thanks a lot for your sharing!
I have a problem loading densenet121_weights (the TensorFlow version, in which the channel is the last axis of the data).
I want to try the model on 3D data; the image size is 64x64, and I set the channel number to 64 (it was 3 before). When the Keras backend is Theano, loading the weights works, but model.fit returns an error, shown in the traceback screenshot:
[screenshot: selection_035]

But when the backend is TensorFlow, the weights can't be loaded correctly:
[screenshot: selection_036]

I want to know how to load the weights when the channel number is changed.
Many thanks for the reply!

load_data returns None?

Hello,

This is actually a great tutorial repo; however, I cannot get my head around the load_data functions. They always just return None?

Regarding Performance on Imagenet

I happened to follow the tutorial given at

https://www.pyimagesearch.com/2016/08/10/imagenet-classification-with-python-and-keras/

But when I used the pretrained models you provided, I got very different results. Please find the script I used here.

When I use your weights, the prediction keeps changing every time I run it, whereas for the Keras default ResNet-50 model it shows "Magpie" as the label for the test image ILSVRC2012_test_00000002.JPEG from the ImageNet 2012 test data. Is there any problem with the weights, or with the preprocessing? I think the Keras default preprocessing step does the same as you mentioned.

Thanks
Mahesh

Image Preprocessing for ImageNet models.

Hi,

As all of these models are ImageNet models, I was wondering if you used a similar kind of image preprocessing to what these models originally used; otherwise, the model weights would be different in either case.

Thanks

Extracting features from ResNet101

Hi,

I'm trying to extract features from the last convolutional layer of ResNet101, from the image at its original dimension.
I obtain this error:
ValueError: The shape of the input to "Flatten" is not fully defined (got (None, None, 2048). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.
Here is my call:
base_model = resnet101_model(None, None, 3, 0)
Can you help me?
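Since the spatial dimensions are None, Flatten cannot infer its output size; a common alternative for feature extraction at arbitrary input sizes is a global pooling layer on the last convolutional output. A sketch of the idea, assuming a base model built without its fully connected top (`base_model` and `last_conv_output` are placeholders, not names from this repo):

    from keras.layers import GlobalAveragePooling2D
    from keras.models import Model

    # Pool the (None, None, 2048) feature map down to a fixed 2048-d vector
    # instead of flattening, so the input height/width can stay undefined.
    features = GlobalAveragePooling2D()(last_conv_output)
    feature_extractor = Model(base_model.input, features)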

error on flatten step

Hi Felix, thanks for putting the repo together. I am trying to get one of the models working -- vgg16 -- but I'm getting an error on the flatten step, as seen below. In my code, when compiling the model, I passed in the number of channels; are you using the Theano or TensorFlow backend? I wonder if this could be my issue.

ValueErrorTraceback (most recent call last)
<ipython-input-6-1af69e06c865> in <module>()
     11 
     12 # Load our model
---> 13 model = vgg_std16_model(img_rows, img_cols, channel, num_class)
     14 
     15 # Start Fine-tuning

<ipython-input-2-25fab191ca91> in vgg_std16_model(img_rows, img_cols, channel, num_class)
     49 
     50     # Add Fully Connected Layer
---> 51     model.add(Flatten())
     52     model.add(Dense(4096, activation='relu'))
     53     model.add(Dropout(0.5))

/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/models.pyc in add(self, layer)
    453                           output_shapes=[self.outputs[0]._keras_shape])
    454         else:
--> 455             output_tensor = layer(self.outputs[0])
    456             if isinstance(output_tensor, list):
    457                 raise TypeError('All layers in a Sequential model '

/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/engine/topology.pyc in __call__(self, inputs, **kwargs)
    557             # Infering the output shape is only relevant for Theano.
    558             if all([s is not None for s in _to_list(input_shape)]):
--> 559                 output_shape = self.compute_output_shape(input_shape)
    560             else:
    561                 if isinstance(input_shape, list):

/home/ubuntu/anaconda2/lib/python2.7/site-packages/keras/layers/core.pyc in compute_output_shape(self, input_shape)
    486             raise ValueError('The shape of the input to "Flatten" '
    487                              'is not fully defined '
--> 488                              '(got ' + str(input_shape[1:]) + '. '
    489                              'Make sure to pass a complete "input_shape" '
    490                              'or "batch_input_shape" argument to the first '

ValueError: The shape of the input to "Flatten" is not fully defined (got (0, 3, 512). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.

TypeError: 'module' object is not callable

Does someone else have this issue? The code seems to stop just before this line x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1)) and this is the error:

TypeError                                 Traceback (most recent call last)
<ipython-input-16-4ef36ddf89c4> in <module>()
    340 
    341     # Load our model
--> 342     model_ = resnet101_model(img_rows, img_cols, channel, num_classes)
    343 
    344     # Start Fine-tuning

<ipython-input-16-4ef36ddf89c4> in resnet101_model(img_rows, img_cols, color_type, num_classes)
    138     print('ok7')
    139 
--> 140     x = conv_block(x, 3, [64, 64, 256], stage=2, block='a', strides=(1, 1))
    141     print('ok8')
    142     x = identity_block(x, 3, [64, 64, 256], stage=2, block='b')

<ipython-input-16-4ef36ddf89c4> in conv_block(input_tensor, kernel_size, filters, stage, block, strides)
     90     shortcut = Scale(axis=bn_axis, name=scale_name_base + '1')(shortcut)
     91 
---> 92     x = merge([x, shortcut], mode='sum', name='res' + str(stage) + block)
     93     x = Activation('relu', name='res' + str(stage) + block + '_relu')(x)
     94     return x

TypeError: 'module' object is not callable

The same happens for resnet152_model
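This usually means the old functional merge is being called under a Keras 2 installation, where it was removed and keras.layers.merge is now a module (hence "'module' object is not callable"). The Keras 2 equivalent of the element-wise sum is the add function (or the Add layer). A sketch of what that line might look like under Keras 2:

    from keras.layers import add

    # Keras 2 replacement for merge([x, shortcut], mode='sum', name=...)
    x = add([x, shortcut], name='res' + str(stage) + block)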
