
keras-deeplab-v3-plus's Introduction

Keras implementation of Deeplabv3+

This repo is no longer maintained. I won't respond to issues but will merge PRs.

DeepLab is a state-of-the-art deep learning model for semantic image segmentation.

The model is based on the original TF frozen graph. It is possible to load pretrained weights into this model; the weights are imported directly from the original TF checkpoint.

Segmentation results of original TF model. Output Stride = 8




Segmentation results of this repo model with loaded weights and OS = 8
Results are identical to the TF model




Segmentation results of this repo model with loaded weights and OS = 16
Results are still good




How to get labels

The model returns a tensor of shape (batch_size, height, width, num_classes). To obtain labels, apply argmax to the logits at the exit layer. Example of predicting on image1.jpg:

import numpy as np
from PIL import Image
from matplotlib import pyplot as plt

from model import Deeplabv3

# Generates labels using most basic setup.  Supports various image sizes.  Returns image labels in same format
# as original image.  Normalization matches MobileNetV2

trained_image_width = 512
mean_subtraction_value = 127.5
image = np.array(Image.open('imgs/image1.jpg'))

# resize so the longer side matches the max dimension of the training images
h, w, _ = image.shape
ratio = float(trained_image_width) / np.max([h, w])
resized_image = np.array(Image.fromarray(image.astype('uint8')).resize((int(ratio * w), int(ratio * h))))

# apply normalization for trained dataset images
resized_image = (resized_image / mean_subtraction_value) - 1.

# pad array to square image to match training images
pad_x = int(trained_image_width - resized_image.shape[0])
pad_y = int(trained_image_width - resized_image.shape[1])
resized_image = np.pad(resized_image, ((0, pad_x), (0, pad_y), (0, 0)), mode='constant')

# make prediction
deeplab_model = Deeplabv3()
res = deeplab_model.predict(np.expand_dims(resized_image, 0))
labels = np.argmax(res.squeeze(), -1)

# remove padding and resize back to original image
if pad_x > 0:
    labels = labels[:-pad_x]
if pad_y > 0:
    labels = labels[:, :-pad_y]
labels = np.array(Image.fromarray(labels.astype('uint8')).resize((w, h)))

plt.imshow(labels)
plt.waitforbuttonpress()

How to use this model with custom input shape and custom number of classes

from model import Deeplabv3
deeplab_model = Deeplabv3(input_shape=(384, 384, 3), classes=4)
# or you can use None as the spatial shape:
deeplab_model = Deeplabv3(input_shape=(None, None, 3), classes=4)

After that you will get a regular Keras model which you can train using the .fit and .fit_generator methods.

How to train this model

Useful parameters can be found in the original repository.

Important notes:

  1. This model doesn't provide default weight decay; you need to add it yourself.
  2. Due to huge memory use with OS=8, the Xception backbone should be trained with OS=16 and only used for inference with OS=8.
  3. You can freeze the feature extractor of the Xception backbone (the first 356 layers) and fine-tune only the decoder. Right now (March 2019), there is a problem with fine-tuning Keras models that contain BatchNorm layers; you can read more about it here. A minimal training sketch is shown below.
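
As a rough illustration of the notes above, here is a minimal, hypothetical training sketch (the input shape, number of classes, optimizer, and the x_train / y_train arrays are placeholders to adapt to your own data):

import tensorflow as tf
from model import Deeplabv3

# Build the model with the Xception backbone at OS=16 for training (see note 2)
model = Deeplabv3(input_shape=(512, 512, 3), classes=4, backbone='xception', OS=16)

# Optionally freeze the feature extractor and fine-tune only the decoder (see note 3)
for layer in model.layers[:356]:
    layer.trainable = False

# The network outputs raw logits (see "How to get labels"), so use from_logits=True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# x_train: (N, 512, 512, 3) float32 images normalized to [-1, 1]
# y_train: (N, 512, 512) integer class labels
model.fit(x_train, y_train, batch_size=2, epochs=10)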

Known issues

This model can be retrained; check this notebook. Fine-tuning is tricky and difficult because of the confusion between training and trainable in Keras. See this issue for a discussion and possible alternatives.

How to load model

In order to load the model after using model.save(), use this code:

from model import relu6
from tensorflow.keras.models import load_model
deeplab_model = load_model('example.h5', custom_objects={'relu6': relu6})

Xception vs MobileNetv2

There are 2 available backbones. Xception backbone is more accurate, but has 25 times more parameters than MobileNetv2.

For MobileNetv2 there are pretrained weights only for alpha=1. However, you can initialise the model with different values of alpha.
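
For example (a minimal sketch; only alpha=1 has pretrained weights, other values start from random initialization):

from model import Deeplabv3

# MobileNetV2 backbone with pretrained PASCAL VOC weights (alpha=1 only)
deeplab_model = Deeplabv3(backbone='mobilenetv2', weights='pascal_voc', alpha=1.)

# Slimmer MobileNetV2 backbone, randomly initialised, with a custom number of classes
small_model = Deeplabv3(backbone='mobilenetv2', weights=None, alpha=0.5, classes=4)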

Requirements

The latest version of this repo uses tf.keras, so you only need TF 2.0+ installed:
tensorflow-gpu==2.0.0a0
CUDA==9.0


If you want to use an older version, use the following commands:

git clone https://github.com/bonlime/keras-deeplab-v3-plus/
cd keras-deeplab-v3-plus/
git checkout 714a6b7d1a069a07547c5c08282f1a706db92e20

tensorflow-gpu==1.13
Keras==2.2.4

keras-deeplab-v3-plus's People

Contributors

bonlime, brianmanderson, bytebagels, dependabot[bot], joshmyersdean, kelvin2468, lotte1990, meight, parssifal, penguinmenac3, sachsbl, udayakumar97, vinhill, yuriykortev


keras-deeplab-v3-plus's Issues

Loading model

Hi

I would like to load the model after training.
I use the code:

model = load_model('deeplabmode.h5',
                   custom_objects={'BilinearUpsampling': BilinearUpsampling})

Unfortunately I receive the following error:

File "/XYZ/deeplab_model_2D.py", line 46, in init
super(BilinearUpsampling, self).init(**kwargs)
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/engine/topology.py", line 293, in init
raise TypeError('Keyword argument not understood:', kwarg)
TypeError: ('Keyword argument not understood:', 'size')

I think this has to do with the definition of the BilinearUpsampling layer. Any ideas?

The full log is here:

Using TensorFlow backend.
Traceback (most recent call last):
File "predict_full_img_deeplab.py", line 85, in
'dice_coef':dice_coef, 'tf':tf,'BilinearUpsampling':BilinearUpsampling},
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/models.py", line 243, in load_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/models.py", line 317, in model_from_config
return layer_module.deserialize(config, custom_objects=custom_objects)
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/layers/init.py", line 55, in deserialize
printable_module_name='layer')
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 144, in deserialize_keras_object
list(custom_objects.items())))
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/engine/topology.py", line 2514, in from_config
process_layer(layer_data)
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/engine/topology.py", line 2500, in process_layer
custom_objects=custom_objects)
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/layers/init.py", line 55, in deserialize
printable_module_name='layer')
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 144, in deserialize_keras_object
list(custom_objects.items())))
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/engine/topology.py", line 2514, in from_config
process_layer(layer_data)
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/engine/topology.py", line 2500, in process_layer
custom_objects=custom_objects)
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/layers/init.py", line 55, in deserialize
printable_module_name='layer')
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 146, in deserialize_keras_object
return cls.from_config(config['config'])
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/engine/topology.py", line 1271, in from_config
return cls(**config)
File "XYZ/deeplab_model_2D.py", line 46, in init
super(BilinearUpsampling, self).init(**kwargs)
File "/home/christian/anaconda3/envs/myjob1/lib/python3.5/site-packages/keras/engine/topology.py", line 293, in init
raise TypeError('Keyword argument not understood:', kwarg)
TypeError: ('Keyword argument not understood:', 'size')

One-hot to label correspondence

Hello!
First of all, thanks for your repository, it's very helpful!

Dumb question, but I can't seem to find the correspondence between the label numbers and the label meanings. I'm reading the PASCAL-VOC devkit and there's a table in alphabetical order; is it the same numbering as the labels?

I'd like to produce some images similar to the ones in the README, where we have both the segmentation result and the label caption. If you could provide the code or the info, I'd be very glad.

Does the model have errors?

When I use fit, it says
Error when checking target: expected bilinear_upsampling_2 to have shape (512, 512, 2) but got array with shape (512, 512, 3)
PS: my input image.shape is (512, 512, 3)

keras and tensorflow version for installation

Hi Bonlime,

Thanks for sharing the code. Could you clarify which versions of tensorflow and keras you are supporting? It seems to me that the original DeepLab v3 code only works with tensorflow==1.5.

Training

How do I train on a custom dataset?

question for batch size

Thank you for your awesome code !!

I'm going to retrain deeplabv3+ with the Xception backbone, OS=16, on my own dataset (tfrecord). The input size is (512, 1024, 3).

My question is: how large a batch can be trained per GPU with the above setting?
(I'm using a Titan Xp, 12 GB.)

In my experiment only batch size 1 runs; 2 or more causes OOM. T.T

Is this normal, or did I do something wrong?

Not getting binary mask using model.predict

I trained the model using model.fit_generator after preprocessing my input data using the model.preprocess_input function.

I save the weights to a checkpoint file and load them back using model.load_weights(path_to_weights). Now the model should be ready for testing. I run the analysis on a single test image that has also been preprocessed using y_pred = model.predict(test_image). However, y_pred is not a binary mask.

Do we have to add our own output activation layer to the model? Or any other thoughts on why this may be occurring? Thanks!

Custom Dataset: Number of Classes

Should num_classes take into account the background class? For example, if I have 3 foreground classes that I have masks for and a background class that I don't care about and don't have a mask for, should my num_classes be 3?

Score map

Hi,
I am quite new; could you give advice on how to obtain the score map (the input before the softmax function) and visualize it using a colormap?
One like https://i.imgur.com/rfUyXez.png

And also, how much GPU memory does the pretrained model require to run? I got an "out of memory" error using a 2 GB GPU.

Thank you in advance.

how to reproduce the sample segmentation result?

Here is my code:
from matplotlib.pyplot import imshow
from PIL import Image, ImageOps
import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
%matplotlib inline

from model import Deeplabv3
deeplab_model = Deeplabv3(input_shape=(512,512,3), classes = 4, weights='pascal_voc', OS=8)
img_path = '.\imgs\image1.jpg'
img = image.load_img(img_path, target_size= (512,512))
x = np.asarray(img)
x = np.expand_dims(x, axis=0)
x.flags.writeable = True
x = x/255-1
label = deeplab_model.predict(x)

How could I generate the segmentation map after predict? Just imshow(label[0])?

The use of regularization

Thank you for the awesome code!

I observe serious overfitting on my own data using this code.
Does deeplab-v3-plus use any regularization at all?

image-level pooling

Hello,

I checked your code, and I think the image-level pooling is not correctly implemented, because it is not computing a global average pooling.

https://github.com/bonlime/keras-deeplab-v3-plus/blob/master/model.py#L437

I checked this code:
https://github.com/tensorflow/models/blob/4a0ee4a29dd7e4b6e0135ccf6857f4dc58d71a90/research/deeplab/model.py#L397
and the reference is [52] from the original paper (https://arxiv.org/pdf/1506.04579.pdf):

Exploiting the FCN architecture, ParseNet can directly use global average pooling from the final (or any) feature map, resulting in the feature of the whole image, and use it as context.

In implementation, this is accomplished by unpooling the context vector and appending the resulting feature map with the standard feature map.

Specifically, we use global average pooling and pool the context features from the last layer or any layer if that is desired.
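
For illustration, a ParseNet-style image-level feature as described in the quote above could be sketched in Keras like this (a simplified standalone example, not the repo's exact implementation; the layer sizes are made up):

from tensorflow.keras import layers

def add_image_level_feature(x, filters=256):
    # Global average pooling over the spatial dimensions -> (batch, channels)
    pooled = layers.GlobalAveragePooling2D()(x)
    # Restore 1x1 spatial dims so the context vector can be projected and upsampled
    pooled = layers.Reshape((1, 1, x.shape[-1]))(pooled)
    pooled = layers.Conv2D(filters, 1, use_bias=False, activation='relu')(pooled)
    # "Unpool": broadcast the context vector back to the feature map resolution
    unpooled = layers.UpSampling2D(size=(x.shape[1], x.shape[2]), interpolation='bilinear')(pooled)
    # Append the context features to the standard feature map
    return layers.Concatenate()([x, unpooled])

inputs = layers.Input(shape=(18, 18, 2048))
outputs = add_image_level_feature(inputs)   # shape (None, 18, 18, 2048 + 256)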

Load model

"from model import relu6, BilinearUpsampling deeplab_model = load_model('example.h5',custom_objects={'relu6':relu6,'BilinearUpsampling':BilinearUpsampling })"

As I try to load using the above instructions, 'load_model' is not a defined function.

output stride explained

In creating the deeplab model, the output stride (OS) is used to determine the pooling size for the AveragePooling2D layer as seen below

b4 = AveragePooling2D(pool_size=(int(np.ceil(input_shape[0] / OS)), int(np.ceil(input_shape[1] / OS))))(x)

In this case, the input shape is being divided by the output_stride. However, the input x into this layer is already downsampled by a factor of 16 if the xception backbone is used. For example, if the input_shape was (288,288,1), the tensor x would have a shape of (None, 18, 18, 1). If OS=16, then the pool_size=(18,18), which is just a global average pooling layer. If OS=8, then pool_size=(36,36), which on an input of size (18,18) and padding=valid is also just the same global average pooling layer.

Based on this implementation, an OS of either 8 or 16 would just yield a global average pooling layer.

Typically, output stride is defined as the ratio of the input shape to the output shape. In this case, shouldn't the pool_size be (OS, OS)?
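
To make the arithmetic above concrete, a quick check with the numbers from the example (this just reproduces the pool_size computation quoted above, nothing more):

import numpy as np

input_shape = (288, 288, 1)
for OS in (8, 16):
    pool_size = (int(np.ceil(input_shape[0] / OS)), int(np.ceil(input_shape[1] / OS)))
    print(OS, pool_size)
# 8 (36, 36)
# 16 (18, 18)
# With x already at 18x18, both settings act as a global average pooling.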

Dropout(0.9)

I believe it should be:
Dropout(0.1)

Orig TF repo:

concat_logits = slim.dropout(
    concat_logits,
    keep_prob=0.9,
    is_training=is_training,
    scope=_CONCAT_PROJECTION_SCOPE + '_dropout')

keep_prob=0.9 means that we keep units with probability 0.9, i.e. drop 10% of them.
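
For clarity, the mapping between TF-slim's keep_prob and the Keras Dropout rate (the point of this issue) is:

from tensorflow.keras.layers import Dropout

keep_prob = 0.9               # TF-slim: probability of keeping a unit
rate = 1.0 - keep_prob        # Keras: probability of dropping a unit
dropout_layer = Dropout(rate) # Dropout(0.1), not Dropout(0.9)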

Load weights with another number of classes

I used extract_weights to get weights from the network pretrained on PASCAL VOC, and I'm loading these weights after initializing the model with random weights and classes=3:

      WEIGHTS_DIR = 'weights/' + backbone 
      print('Loading weights from', WEIGHTS_DIR) 
      for layer in tqdm(model.layers): 
          if layer.weights: 
              weights = [] 
              for w in layer.weights: 
                  weight_name = os.path.basename(w.name).replace(':0', '') 
                  weight_file = layer.name + '_' + weight_name + '.npy' 
                  weight_arr = np.load(os.path.join(WEIGHTS_DIR, weight_file)) 
                  weights.append(weight_arr) 
              layer.set_weights(weights) 

I get an error: Layer weight shape (1, 1, 256, 3) not compatible with provided weight shape (1, 1, 256, 21). Should I merge the weights over the 21 channels of your last layer to get weights for only 3 channels (because logits_semantic_bias.npy has shape (21,) and logits_semantic_kernel.npy has shape (1, 1, 256, 21))? Thanks in advance.
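
One possible workaround, not from this thread but a common pattern (a sketch assuming the same model and WEIGHTS_DIR layout as in the loop above): load every layer whose stored shapes match and skip the rest, leaving the new logits layer randomly initialised:

import os
import numpy as np
from tqdm import tqdm

for layer in tqdm(model.layers):
    if not layer.weights:
        continue
    weights = []
    for w in layer.weights:
        weight_name = os.path.basename(w.name).replace(':0', '')
        weight_file = layer.name + '_' + weight_name + '.npy'
        weights.append(np.load(os.path.join(WEIGHTS_DIR, weight_file)))
    if all(tuple(w.shape) == arr.shape for w, arr in zip(layer.weights, weights)):
        layer.set_weights(weights)
    else:
        # e.g. the final logits layer when classes != 21
        print('Skipping layer with mismatched shape:', layer.name)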

Deeplabv3() got an unexpected keyword argument

Using TensorFlow backend.
Traceback (most recent call last):
File "C:/Users/pdrt/Desktop/keras-deeplab-v3-plus/load_weights.py", line 16, in
model = Deeplabv3(input_shape=(512, 512, 3),num_classes = 21)
TypeError: Deeplabv3() got an unexpected keyword argument 'num_classes'
Instantiating an empty Deeplabv3+ model...

Why? How can I fix this?

Tensorflow eager execution: AttributeError: 'DeferredTensor' object has no attribute '_keras_shape'

I really like the clean structure of this model and repo. I am trying to fit the model to my own dataset. To debug, I enable TensorFlow eager mode.

tf.enable_eager_execution()
model = Deeplabv3(weights='pascal_voc', input_shape=(200,200,3), backbone='mobilenetv2', classes=64)
...
f = model(batch)

This, however results in an error:

model.py", line 236, in _inverted_res_block
in_channels = inputs._keras_shape[-1]
AttributeError: 'DeferredTensor' object has no attribute '_keras_shape'

It would be nice to solve this and have the model compatible with eager execution.

How to create masks?

I have 2 classes in my target segmentation image, so that means I have a greyscale mask of height x width consisting of 0s and 1s. Using expand_dims I can get a mask of shape (height, width, 1), but when specifying classes=2 the model expects masks of shape (height, width, 2). How do I create such a mask?
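
A minimal way to build such a mask (a sketch; mask is assumed to be an integer array of shape (height, width) with values 0 and 1):

import numpy as np

num_classes = 2
one_hot_mask = np.eye(num_classes, dtype='float32')[mask]   # shape (height, width, num_classes)

Alternatively, you can keep the integer mask of shape (height, width) and train with a sparse categorical cross-entropy loss, which expects integer labels instead of one-hot masks.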

Training on new dataset

Could you please give an example or some ideas to train on new dataset? Thanks in advance.

How I can run this project?

I run extract_weights.py,
then load_weights.py,
and then I run model.py,
but I did not receive any result.
Please help me! How can I run this and receive a result?

Performance degradation

Hi,

I'm using the converted pretrained weights to measure mIoU and observed a 10% drop. Could you share your measurement results? Did you still get 84% mIoU on PASCAL using the transferred model and weights?

Thanks

error of keras.utils.multi_gpu_model

If I don't use multiple GPUs, everything is OK.
When I use Keras multi-GPU support (keras.utils.multi_gpu_model(model, gpus=None)), it shows:
TypeError: Expected int32 passed to parameter 'size' of op 'ResizeBilinear', got (Dimension(None), Dimension(None)) of type 'tuple' instead.

It seems that line 77 in model.py doesn't work with Keras multi-GPU.

code

Hi, bonlime. Can you release your complete code? Thanks. I wrote my own loss and metrics, but I do not know if they are right; the result is bad.

Last layer's weights when changing the number of classes + expected labels

Thank you for sharing your code

In my case, I want to train on my own dataset. Should the labels (masks) be a 2-D image (possibly with one channel) where each pixel's value is the class label (for example, a pixel belonging to a person has value 15, etc., which later requires a colormap)? Or what format should the labels (masks) have? May I know what software you use to create your labels?

again thank you for sharing your effort

Error while using mobilenet weights

This code works fine for the Xception weights, but when I tried using the MobileNet weights I get this error while trying to instantiate deeplab_model = Deeplabv3():
"ValueError: Layer #148 (named "image_pooling"), weight <tf.Variable 'image_pooling_1/kernel:0' shape=(1, 1, 2048, 256) dtype=float32_ref> has shape (1, 1, 2048, 256), but the saved weight has shape (256, 320, 1, 1).
"

new starter - confused with the resizing part in the example

So I'm very new to Python, and I was trying to use the example code given in the README but with my own images:
from matplotlib import pyplot as plt
import cv2 # used for resize. if you dont have it, use anything else
import numpy as np
from model import Deeplabv3
deeplab_model = Deeplabv3()
img = plt.imread("imgs/image1.jpg")
w, h, _ = img.shape
ratio = 512. / np.max([w,h])
resized = cv2.resize(img, (int(ratio*h), int(ratio*w)))
resized = resized / 127.5 - 1.
pad_x = int(512 - resized.shape[0])
resized2 = np.pad(resized,((0,pad_x),(0,0),(0,0)),mode='constant')
res = deeplab_model.predict(np.expand_dims(resized2,0))
labels = np.argmax(res.squeeze(),-1)
plt.imshow(labels[:-pad_x])

I get the following error message:
ValueError: Error when checking input: expected input_3 to have shape (512, 512, 3) but got array with shape (512, 389, 3)

I've also tried to add the line
deeplab_model = Deeplabv3(input_shape=(384,384,3), classes=4)
but still receive the same error message but this time expecting input to have shape (384, 384, 3)

When I try to pre-process the images by resizing them to the required shape, I get this warning instead:
3443: UserWarning: Attempting to set identical bottom==top results
in singular transformations; automatically expanding.
bottom=-0.5, top=-0.5
'bottom=%s, top=%s') % (bottom, top))
And the image output becomes a flat line.

Any help would be much appreciated.

training with large image resolution using the contrib/lms

Hi

I am about to attempt to train with a size of 2048 x 2048

using something roughly similar to
https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly

to create a fit generator

but in addition I will be using

https://github.com/tungld/tensorflow/blob/3b39b5ecd69bebb1e9222c2292a57c5532931ade/tensorflow/contrib/lms/README.md

with emphasis on tf.keras-based training:

# Step 1: define an LMSKerasCallback.
from tensorflow.contrib.lms import LMSKerasCallback
# LMSKerasCallback and LMS share a set of keyword arguments.
# Here we just use the default options.
lms_callback = LMSKerasCallback()

# Step 2: pass the callback to the Keras fit or fit_generator function.
model.fit_generator(generator=training_gen, callbacks=[lms_callback])

So hopefully I will be able to swap the model between CPU and GPU memory using Large Model Support.

Does anybody already have code with the fit_generator already working for the PASCAL VOC 2012 or COCO 2014/2017 data sets?

How can I run the video module to achieve real-time detection?

Hello, I want to know the speed of deeplabv3+, and I tried to run this:
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from matplotlib import pyplot as plt
import cv2 # used for resize. if you dont have it, use anything else
import numpy as np
from model import Deeplabv3
deeplab_model = Deeplabv3()
def detect_video(deeplab_model):
    import cv2
    vid = cv2.VideoCapture(0)
    if not vid.isOpened():
        raise IOError("Couldn't open webcam or video")
    accum_time = 10
    curr_fps = 10
    fps = "20"
    prev_time = timer()
    while True:
        return_value, frame = vid.read()
        res = deeplab_model.predict(frame)

        result = array_to_img(res)
        curr_time = timer()
        exec_time = curr_time - prev_time
        prev_time = curr_time
        accum_time = accum_time + exec_time
        curr_fps = curr_fps + 1
        if accum_time > 1:
            accum_time = accum_time - 1
            fps = "FPS: " + str(curr_fps)
            curr_fps = 0
        cv2.putText(result, text=fps, org=(3, 15), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                    fontScale=0.50, color=(255, 0, 0), thickness=2)
        cv2.namedWindow("result", cv2.WINDOW_NORMAL)
        cv2.imshow("result", result)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    deeplab_model.close_session()

detect_video(deeplab_model)

but it doesn't work!
I think I need your help, thanks.

Training on uint8 [0...255] images instead of [-1...1]?

What will be the effect of training on images with values in the range 0-255, that is, not using the pre-processing function provided in models.py?
Due to memory limitations, I can't allocate a float64 array and had to use ubyte only, but this gave me very poor, random, fluctuating results across epochs.

Is the CRF included?

I'm pretty sure the CRF from the paper is not included in the model and has to be trained separately anyway, but I still wanted to be sure and ask you. Am I right?

btw. many thanks for your work!

Compiling Model

Any recommendations on compiling the model? Optimizers, loss function, etc.?

Weird looking output

When I run the following code:

import tensorflow as tf
from matplotlib import pyplot as plt
import cv2 # used for resize. if you dont have it, use anything else
import numpy as np
from model import Deeplabv3
import time
import pdb

model = Deeplabv3(input_shape=(427,640,3), backbone='xception', classes=21)

img = cv2.imread('imgs/image1.jpg')

input = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) / 127.5 - 1
res = model.predict(np.expand_dims(input, 0))

labels = np.argmax(res.squeeze(),-1)

plt.imshow(labels)
plt.show()

the result looks like this:
[attached figure_1]
This is not what I expect. What is going wrong here?

The right way of pre-processing for input?

[input image]
We use the first image as input, resize it from (427, 640, 3) to (512, 512, 3), and normalize it from [0, 255] to [0, 1].
[OS=16 output image]
As shown, the output with OS=16 is normal,
[OS=8 output image]

but the output with OS=8 is worse.

I cannot figure out why, so I think my pre-processing (such as the normalization) is wrong.

Memory leak while using predict_generator

Hello,

I am trying to run inference on 10 thousand images using predict_generator. However, during inference, the amount of RAM used keeps increasing. For example, for three thousand pictures it took more than 70 gigabytes of RAM. I use Deeplabv3(backbone='mobilenetv2').

Any ideas how to avoid this?
Thanks in advance.

ValueError in bilinear_upsampling due to mismatch in shape

Hi,
I'm getting error "
ValueError: Error when checking target: expected bilinear_upsampling_5 to have shape (101, 101, 2) but got array with shape (101, 101, 1)". This happens during xception as well as mobilenetv2.
my train img_size = (101,101)

can someone please tell me what's wrong?

Training with tf.data

I would like to train this model with some data coming from a tf.data.Dataset.

model.fit(dataset.make_one_shot_iterator(),
          steps_per_epoch=10,
          epochs=5)

This should work; however, it looks like model.fit still expects a numpy array as input. The error message is:

Traceback (most recent call last):
File "train.py", line 91, in
epochs=5)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1630, in fit
batch_size=batch_size)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1476, in _standardize_user_data
exception_prefix='input')
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 76, in _standardize_input_data
data = [np.expand_dims(x, 1) if x is not None and x.ndim == 1 else x for x in data]
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 76, in
data = [np.expand_dims(x, 1) if x is not None and x.ndim == 1 else x for x in data]
AttributeError: 'Iterator' object has no attribute 'ndim'

Is this possible in an easy way?
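
With the current tf.keras-based version of this repo (TF 2.x), passing a tf.data.Dataset directly to model.fit is supported, so a sketch like the following should work (the dummy tensors are placeholders for a real input pipeline):

import tensorflow as tf
from model import Deeplabv3

model = Deeplabv3(input_shape=(512, 512, 3), classes=4)
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Dummy (image, label) pairs standing in for a real tf.data pipeline
images = tf.zeros((8, 512, 512, 3))
labels = tf.zeros((8, 512, 512), dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(2).repeat()

model.fit(dataset, steps_per_epoch=4, epochs=5)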
