
Face Detection using Deep Learning

  • 153079004 Ashish Sukhwani
  • 153079011 Akshay Khadse
  • 153079005 Raghav Gupta
  • 15307R001 Soumya Dutta

Downloads

Introduction

With recent technological advances, photographs have become the most convenient and preferred medium of expression. Millions of photos make their way to various cloud storage services and social networks every day. Retrieving and organising these photos strongly impacts user experience and is very challenging.

Some solutions, like geo-tagging, allow photos to be organised by location, but organising them on the basis of contextual queries like 'photos with a particular friend' is much more difficult. This is because such queries require detection of human faces, and there is no explicit signal about the identities of the people in a photo, which makes the task much harder.

Photographs are 2D projections of 3D objects like faces. The human face is not a unique, rigid object. There are billions of different faces, and each of them can assume a variety of deformations. Inter-personal variations can be due to race, identity, or genetics, while intra-personal variations can be due to deformations, expression, aging, facial hair, etc. For this course project we have taken up the problem of multiview face detection in a given image. However, identification was not considered, as it would require more time and computational resources.

Face detection has been an active research area for the past two decades. Well-established techniques are available which enable detection of upright faces in real time with very low computational complexity. These involve a cascade of simple-to-complex face classifiers. Many variants of these have been implemented in smartphones and digital cameras. However, these techniques fail to detect faces at different angles and partial faces.

Existing Methodologies

From our survey we noted that one of the most successful methods for face detection was devised in [1]. The method uses Haar-feature-based cascade classifiers. The first step is feature extraction: Haar features are extracted from the images in order to construct the feature vector of each image. However, the number of features obtained from images in this way is fairly large.

For feature selection, the AdaBoost algorithm is used. Finally, we get a classifier which is a weighted sum of the weak classifiers selected at each step. This classifier is able to correctly detect faces, as reported in [1].
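
In the standard AdaBoost formulation (a fact about the algorithm, not a detail specific to [1]), the strong classifier obtained this way is a thresholded weighted vote of the selected weak classifiers h_t, with a weight alpha_t learned at each boosting round:

    H(x) = \operatorname{sign}\left( \sum_{t=1}^{T} \alpha_t \, h_t(x) \right)

Weak classifiers with lower weighted training error receive larger alpha_t.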

Although this method works very well for images with frontal faces, the results are not satisfactory for images with other views. Thus, using this method for multiview detection poses several challenges, which we try to overcome in our project.

AFLW Dataset Link

https://lrs.icg.tugraz.at/research/aflw/

Method Described in Paper [2]

Paper [2] describes fine-tuning of AlexNet for face detection. For this, the AFLW dataset, which consists of 21k images with 24k face annotations, was used. The number of positive examples was increased by randomly sampling sub-windows of the images. The Intersection over Union (IoU) of each such window with the ground truth was computed, and windows with more than 50% IoU were taken as positive examples. This resulted in 200k positive examples and 20 million negative examples. All images were resized to 227 X 227. For fine-tuning of AlexNet, a batch size of 128 images containing 32 positive and 96 negative examples was used for 50k iterations.
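
As a concrete illustration, the IoU criterion used to label the sampled windows can be written as a small Python function (the (x, y, w, h) box format is our assumption; the dataset's actual annotation format may differ):

def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x, y, w, h)."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    # Coordinates of the intersection rectangle
    x1, y1 = max(xa, xb), max(ya, yb)
    x2, y2 = min(xa + wa, xb + wb), min(ya + ha, yb + hb)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = wa * ha + wb * hb - inter
    return inter / union if union > 0 else 0.0

# A sampled window counts as a positive example when IoU > 0.5
is_positive = iou((10, 10, 100, 100), (40, 40, 100, 100)) > 0.5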

To realise the final face detector from this fine-tuned AlexNet, either a region-based or a sliding-window approach can be followed. The paper describes selecting the sliding-window approach because of its lower complexity.

This face detector consists of five convolutional layers and three fully connected layers. The original authors used the Caffe framework to implement this strategy.

The paper also presents a comparison of this method with R-CNN. For this purpose, AlexNet was trained as described earlier, but the output of the fully connected layer was reshaped and used to train an SVM classifier. A bounding-box regression unit was also trained to further improve results. This was repeated for an AlexNet fine-tuned on PASCAL VOC 2012, which involves recognition of generic objects, not particularly faces. It was observed that the face-trained AlexNet performs better than the PASCAL VOC AlexNet. The paper also mentions significant improvement of R-CNN due to bounding-box regression. However, this performance was not as good as that of the method developed in the paper. The inferior performance of R-CNN can be due to:

  • The R-CNN method uses selective search, which can miss some of the face regions.
  • Loss in localisation due to imperfections in bounding-box regression.

Methodology

The method that we have followed is described under two heads:

  • Data Collection and Pre-processing: We used the same dataset as [2]. The AFLW dataset is not openly available and can be accessed only for academic purposes. After obtaining the dataset, we preprocess each image according to our needs. Handling images of dimension 227 X 227 X 3 was computationally difficult for us, so we reduced each image to 128 X 128 X 1. Thus our face detector can handle only grayscale images.
  • We then augmented our dataset in the same way as described in [2] and in Section 3. However, we took around 24k positive and around 120k negative examples.
  • Our network: Although [2] uses a pre-trained AlexNet model for the face detector, we create our own deep convolutional neural network using TensorFlow. We have used a batch size of 60; each batch has 45 negative examples and 15 positive examples. The architecture of our CNN cannot be too adventurous, keeping in mind the computational complexity. We started out with 2 convolutional layers and 2 fully connected layers but could not get suitable results even on our training dataset. Thereafter, we fixed our network to have 3 convolutional layers and 3 fully connected layers along with max-pooling. They are described in the table below:
Layer            Details
CONV LAYER 1     Activation: ReLU, Filter: 18 X 18 X 1 X 60, Bias: 60, Stride: 1 X 1
MAXPOOL LAYER 1  Filter: 2 X 2, Stride: 2 X 2
CONV LAYER 2     Activation: ReLU, Filter: 12 X 12 X 60 X 30, Bias: 30, Stride: 1 X 1
MAXPOOL LAYER 2  Filter: 2 X 2, Stride: 2 X 2
CONV LAYER 3     Activation: ReLU, Filter: 6 X 6 X 30 X 15, Bias: 15, Stride: 1 X 1
MAXPOOL LAYER 3  Filter: 2 X 2, Stride: 2 X 2
FC LAYER 1       Activation: ReLU, Weights: 3840 X 4096, Bias: 4096
FC LAYER 2       Activation: ReLU, Weights: 4096 X 256, Bias: 256
FC LAYER 3       Activation: Sigmoid, Weights: 256 X 1, Bias: 1
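
A sketch of this architecture in TensorFlow 1.x, assuming 128 X 128 X 1 grayscale inputs and 'SAME' padding throughout (the helper names are ours, not from the repository):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 128, 128, 1])        # grayscale input

def conv_relu(inp, filter_shape):
    # filter_shape = [height, width, in_channels, out_channels]
    W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.01))
    b = tf.Variable(tf.zeros([filter_shape[-1]]))
    return tf.nn.relu(tf.nn.conv2d(inp, W, [1, 1, 1, 1], padding='SAME') + b)

def max_pool(inp):
    return tf.nn.max_pool(inp, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')

def fc(inp, shape, act):
    W = tf.Variable(tf.truncated_normal(shape, stddev=0.01))
    b = tf.Variable(tf.zeros([shape[-1]]))
    return act(tf.matmul(inp, W) + b)

h = max_pool(conv_relu(x, [18, 18, 1, 60]))    # -> 64 x 64 x 60
h = max_pool(conv_relu(h, [12, 12, 60, 30]))   # -> 32 x 32 x 30
h = max_pool(conv_relu(h, [6, 6, 30, 15]))     # -> 16 x 16 x 15
h = tf.reshape(h, [-1, 16 * 16 * 15])          # -> 3840 features
h = fc(h, [3840, 4096], tf.nn.relu)
h = fc(h, [4096, 256], tf.nn.relu)
y = fc(h, [256, 1], tf.sigmoid)                # face probability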

We also experimented with sigmoid activation functions at every stage; however, this did not give satisfactory training performance.

One of the major challenges we faced while training the network was the problem of extremely small weight values. We used the cross-entropy loss with an L2 regularization parameter of 0.01 as our loss function, and trained with TensorFlow's Adam optimizer. We noticed that although the cost was decreasing, the training accuracy stayed fixed at 75%. This number is uninformative: since each batch is 75% negative examples, predicting every example as negative yields exactly this accuracy. As expected, this gave a similar accuracy of 75% on our test set as well. Therefore, we modified the cost function slightly, multiplying the error on positive examples by a constant (> 3.5). This modification actually improves the training accuracy of our system, and we get accuracies above 85%. We saved each of those models and ran them on our test examples.
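
A minimal sketch of such a class-weighted loss in TensorFlow 1.x, assuming logits is the pre-sigmoid output of the last layer, y_ holds the 0/1 labels, and weight_list collects the regularized weight matrices (pos_weight plays the role of the constant above):

loss = tf.reduce_mean(
    tf.nn.weighted_cross_entropy_with_logits(
        targets=y_, logits=logits, pos_weight=3.5))
loss += 0.01 * sum(tf.nn.l2_loss(W) for W in weight_list)  # L2 regularization
train_step = tf.train.AdamOptimizer(1e-4).minimize(loss)   # learning rate illustrative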

We noted that changing the regularization parameter also did not have much effect on the performance of our model on the test data.

Dependencies

  • python3.5
  • numpy
  • tensorflow
  • opencv
  • matplotlib

Folder structure

project_folder
 |- face_detection_dataset/
 |   |- positive_bw/
 |   |- negative_bw/
 |- test_image/
 |- saved_model/
 |- train.py
 |- test.py
 |- preprocess.py
 |- draw_rect.py
 |- output.txt
 |- aflw_example.py
 |- haar.py
 |- haarcascade_frontalface_default.xml
 |- haarcascade_profileface.xml

aflw_example.py

  • This script has to be run first in order to generate the face coordinates. It uses aflw.sqlite, available from the dataset website.
  • Coordinates are saved in output.txt
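
A hedged sketch of this extraction step is given below; the table and column names (Faces, FaceImages, FaceRect) follow the AFLW sqlite schema as we recall it and should be checked against the shipped aflw.sqlite:

import sqlite3

conn = sqlite3.connect('aflw.sqlite')
query = ("SELECT FaceImages.filepath, FaceRect.x, FaceRect.y, FaceRect.w, FaceRect.h "
         "FROM Faces "
         "JOIN FaceImages ON Faces.file_id = FaceImages.file_id "
         "JOIN FaceRect ON Faces.face_id = FaceRect.face_id")
with open('output.txt', 'w') as f:
    for filepath, x, y, w, h in conn.execute(query):
        # One annotation per line: image path followed by the face rectangle
        f.write('%s %d %d %d %d\n' % (filepath, x, y, w, h))
conn.close()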

preprocess.py

  • This script uses the coordinates stored in output.txt.
  • It generates negative and positive examples and stores them in the negative_bw and positive_bw folders respectively.
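
A minimal sketch of the crop-and-resize step with OpenCV (the file name and example rectangle are hypothetical; the actual script loops over all coordinates in output.txt):

import cv2

img = cv2.imread('some_image.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical input image
x, y, w, h = 40, 60, 90, 90                                # a face rect from output.txt
crop = cv2.resize(img[y:y + h, x:x + w], (128, 128))       # 128 X 128 grayscale positive
cv2.imwrite('face_detection_dataset/positive_bw/0001.png', crop)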

train.py

  • This script is used to train the neural network using preprocessed examples from the AFLW dataset.
  • Saves the trained model in the saved_model folder.
  • To use this script, run python3 train.py
  • The dataset needs to follow the folder structure above.
  • Validation is also performed in this script itself.

output.py

  • This script is used to test an image and save the diagonal co-ordinates of the bounding boxes.
  • To use this script, the image to be tested needs to be in the test_image folder.
  • To run the script, use python3 output.py
  • This generates a text file, faces.txt, which contains all the bounding boxes for that particular test image.

draw_rect.py

  • This script draws bounding boxes on the test image according to the generated faces.txt.
  • To use this script, run python3 draw_rect.py
  • Displays the test image with the bounding boxes over it.

haar.py

  • Uses the Haar cascade classifier XML files from OpenCV to generate bounding boxes around faces in test images.
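
For reference, the typical OpenCV usage pattern behind this script looks as follows (the test file name is illustrative):

import cv2

img = cv2.imread('test_image/sample.jpg')                  # hypothetical test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
for (x, y, w, h) in faces:                                 # draw a box per detection
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)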

Results

We test the trained model on some test images. We have also implemented the Haar Cascade Classifier module from OpenCV, and thus present a comparative study between our method and this well-known method.

For testing, our final task is to draw a rectangular box around the detected faces in the test image. For this, we first resize each image to 128 X 128 X 1. We then crop parts of each test image, sampling windows of 32 X 32 with a horizontal and vertical stride of 16. We resize each such window to 128 X 128 X 1 and apply our trained model to it. Depending on the value of the output neuron, we draw a box on the original image. In this way our model is able to detect the faces in the original image; a sketch of this windowing procedure is given after the figure below. The performance of our model and the Haar Cascade Classifier is shown below for some test images. Here we present the performance of three of our saved models:

  • Model 1: Training accuracy of 88.3333 %
  • Model 2: Training accuracy of 91.6667 %
  • Model 3: Training accuracy of 94.2857 %
[Figure: Our face detector performance, models 1 to 3 (left to right)]
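
The windowing procedure described above can be sketched as follows; model_prob stands for a forward pass through our trained network, and the 0.5 decision threshold is an assumption for illustration:

import cv2
import numpy as np

def candidate_boxes(img, model_prob, win=32, stride=16, threshold=0.5):
    """Slide a win x win window over the resized 128 x 128 grayscale image
    and keep the boxes whose resized crop the model classifies as a face."""
    img = cv2.resize(img, (128, 128))
    boxes = []
    for y in range(0, 128 - win + 1, stride):
        for x in range(0, 128 - win + 1, stride):
            crop = cv2.resize(img[y:y + win, x:x + win], (128, 128))
            crop = crop.reshape(1, 128, 128, 1).astype(np.float32)
            if model_prob(crop) > threshold:            # output neuron value
                boxes.append((x, y, x + win, y + win))  # diagonal coordinates
    return boxes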

We note that the results of one of our models are comparable to those of the Haar Cascade Classifier in the case of a single face in the test image. Our model, however, outperforms the Haar Cascade Classifier in the case of Figure 2. Further, we present the performance of our face detector on other, more complex test images.

[Figure: Our face detector performance, models 1 to 3 (left to right)]

We see that one of our models is able to detect the face of the person on the left. This is a very encouraging result. However, the number of non-faces being marked as faces is a concern. This may be because of the way we sample windows from the images: taking a small window and resizing it to 128 pixels blurs the image, thereby making the job of face detection even more difficult. We finally present a comparison on a test image with many (> 6) faces.

[Figure: Our face detector performance, models 1 to 3 (left to right)]

We note that while the Haar Cascade Classifier correctly recognizes only 1 of the many faces in the picture, one of our detectors recognizes almost all of the faces. However, we again experience the problem of false positives in the detector.

Conclusion

In this project we have designed a deep convolutional neural network using TensorFlow to detect faces in images. We have also done a comparative study of our face detector against the other most famous detector available, namely the Haar Cascade Classifier. Although we could not train the complex model used by the authors of [2], we got results significantly superior to the Haar Cascade Classifier implemented in OpenCV.

However, the training time of our network is much higher, and we get many false positives. Thus, although our model did not achieve the standards of the model described in [2], we got quite good performance from it.

Acknowledgement

We would like to thank Professor Ganesh Ramakrishnan for giving us the opportunity to select a Project of our choice as our Course Project for CS725. We would also like to thank our TA mentors Suhit and Ajay for their guidance.

References

  1. Jones, Michael, and Paul Viola. "Fast multi-view face detection." Mitsubishi Electric Research Laboratories, TR2003-96 (2003).
  2. Farfade, Sachin Sudhakar, Mohammad J. Saberian, and Li-Jia Li. "Multiview face detection using deep convolutional neural networks." Proceedings of the 5th ACM on International Conference on Multimedia Retrieval. ACM, 2015.
  3. Zhang, Zhanpeng, et al. "Facial landmark detection by deep multi-task learning." European Conference on Computer Vision. Springer International Publishing, 2014.


dl-face-detection's Issues

Out of range error during training

Whenever I try to train the network it crashes at 446.

Here's my code:

# Face Detection using Deep Learning
# ==================================
# Authors: Akshay Khadse, Ashish Sukhwani, Raghav Gupta, Soumya Dutta
# Date: 29/04/2017

from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
import random
import tensorflow as tf
from glob import glob
from time import time
from math import ceil
import os

# Reading Data
# ------------

# Load Label Data

dataset_path = 'face_detection_dataset/'
positive_eg = 'positive_bw/'
negative_eg = 'negative_bw/'

def encode_label(path):
    if 'positive' in path:
        label = [1]
    else:
        label = [0]
    return label

def read_label_dir(path):
    filepaths = []
    labels = []
    for filepath in glob(path + '*.png'):
        filepaths.append(filepath)
        labels.append(encode_label(filepath))
    return filepaths, labels

# Start Building Pipeline

pos_filepaths, pos_labels = read_label_dir(dataset_path + positive_eg)
print('Positive Examples: %d' % len(pos_labels))
neg_filepaths, neg_labels = read_label_dir(dataset_path + negative_eg)
print('Negative Examples: %d' % len(neg_labels))

# Convert strings into tensors

pos_images = ops.convert_to_tensor(pos_filepaths, dtype=dtypes.string)
pos_labels = ops.convert_to_tensor(pos_labels, dtype=dtypes.int32)

neg_images = ops.convert_to_tensor(neg_filepaths, dtype=dtypes.string)
neg_labels = ops.convert_to_tensor(neg_labels, dtype=dtypes.int32)

# Partitioning Data

test_set_size = 1200
pos_test_size = ceil(test_set_size / 4)
neg_test_size = test_set_size - pos_test_size

# Positive Examples

# Create a partition vector

pos_partitions = [0] * len(pos_filepaths)
pos_partitions[:int(pos_test_size)] = [1] * int(pos_test_size)
random.shuffle(pos_partitions)

# Partition data into a test and train set according to partition vector

pos_train_images, pos_test_images = tf.dynamic_partition(pos_images, pos_partitions, 2)
pos_train_labels, pos_test_labels = tf.dynamic_partition(pos_labels, pos_partitions, 2)

# Negative Examples

# Create a partition vector

neg_partitions = [0] * len(neg_filepaths)
neg_partitions[:int(neg_test_size)] = [1] * int(neg_test_size)
random.shuffle(neg_partitions)

# Partition data into a test and train set according to partition vector

neg_train_images, neg_test_images = tf.dynamic_partition(neg_images, neg_partitions, 2)
neg_train_labels, neg_test_labels = tf.dynamic_partition(neg_labels, neg_partitions, 2)

# Build the Input Queues and Define How to Load Images

NUM_CHANNELS = 1

# Create input queues
pos_train_queue = tf.train.slice_input_producer(
    [pos_train_images, pos_train_labels],
    shuffle=False)
pos_test_queue = tf.train.slice_input_producer(
    [pos_test_images, pos_test_labels],
    shuffle=False)

# Process path and string tensor into an image and a label

pos_file_content = tf.read_file(pos_train_queue[0])
pos_train_image = tf.image.decode_png(pos_file_content, channels=NUM_CHANNELS)
pos_train_label = pos_train_queue[1]

pos_file_content = tf.read_file(pos_test_queue[0])
pos_test_image = tf.image.decode_png(pos_file_content, channels=NUM_CHANNELS)
pos_test_label = pos_test_queue[1]

# Create negative input queues
neg_train_queue = tf.train.slice_input_producer(
    [neg_train_images, neg_train_labels],
    shuffle=False)
neg_test_queue = tf.train.slice_input_producer(
    [neg_test_images, neg_test_labels],
    shuffle=False)

# Process path and string tensor into an image and a label

neg_file_content = tf.read_file(neg_train_queue[0])
neg_train_image = tf.image.decode_png(neg_file_content, channels=NUM_CHANNELS)
neg_train_label = neg_train_queue[1]

neg_file_content = tf.read_file(neg_test_queue[0])
neg_test_image = tf.image.decode_png(neg_file_content, channels=NUM_CHANNELS)
neg_test_label = neg_test_queue[1]

# Group Samples into Batches

IMAGE_HEIGHT = 32
IMAGE_WIDTH = 32
BATCH_SIZE = 16
POS_BATCH_SIZE = int(ceil(BATCH_SIZE / 4))
NEG_BATCH_SIZE = BATCH_SIZE - POS_BATCH_SIZE

# Define tensor shape

pos_train_image.set_shape([IMAGE_HEIGHT, IMAGE_WIDTH, NUM_CHANNELS])
pos_test_image.set_shape([IMAGE_HEIGHT, IMAGE_WIDTH, NUM_CHANNELS])

neg_train_image.set_shape([IMAGE_HEIGHT, IMAGE_WIDTH, NUM_CHANNELS])
neg_test_image.set_shape([IMAGE_HEIGHT, IMAGE_WIDTH, NUM_CHANNELS])

# Collect batches of images before processing
pos_train_image_batch, pos_train_label_batch = tf.train.batch(
    [pos_train_image, pos_train_label],
    batch_size=POS_BATCH_SIZE
    # ,num_threads=1
)
pos_test_image_batch, pos_test_label_batch = tf.train.batch(
    [pos_test_image, pos_test_label],
    batch_size=POS_BATCH_SIZE
    # ,num_threads=1
)

neg_train_image_batch, neg_train_label_batch = tf.train.batch(
    [neg_train_image, neg_train_label],
    batch_size=NEG_BATCH_SIZE
    # ,num_threads=1
)
neg_test_image_batch, neg_test_label_batch = tf.train.batch(
    [neg_test_image, neg_test_label],
    batch_size=NEG_BATCH_SIZE
    # ,num_threads=1
)

# Join the positive and negative batches

train_image_batch = tf.concat([pos_train_image_batch, neg_train_image_batch], 0)
train_label_batch = tf.concat([pos_train_label_batch, neg_train_label_batch], 0)
test_image_batch = tf.concat([pos_test_image_batch, neg_test_image_batch], 0)
test_label_batch = tf.concat([pos_test_label_batch, neg_test_label_batch], 0)

# Neural Network Model
# --------------------

# Define Placeholders and Variables

x = tf.placeholder(tf.float32, shape=[None, 32, 32, 1])
y_ = tf.placeholder(tf.float32, shape=[None, 1])
keep_prob = tf.placeholder(tf.float32)

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.01)
    return tf.Variable(initial)


def bias_variable(shape):
    initial = tf.constant(0.0, shape=shape)
    return tf.Variable(initial)

# Define Model

def conv2d(x, W, strides=[1, 1, 1, 1]):
    return tf.nn.conv2d(x, W, strides, padding='SAME')


def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                          padding='SAME')

# Input

x_image = tf.reshape(x, [-1, 32, 32, 1])

# Weights of CNN & Layers

W_conv1 = weight_variable([18, 18, 1, 60])
b_conv1 = bias_variable([60])

h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool_a = max_pool_2x2(h_conv1)

W_conv2 = weight_variable([12, 12, 60, 30])
b_conv2 = bias_variable([30])

h_conv2 = tf.nn.relu(conv2d(h_pool_a, W_conv2) + b_conv2)
h_pool_b = max_pool_2x2(h_conv2)

W_conv3 = weight_variable([6, 6, 30, 15])
b_conv3 = bias_variable([15])

h_conv3 = tf.nn.relu(conv2d(h_pool_b, W_conv3) + b_conv3)
h_pool_c = max_pool_2x2(h_conv3)

W_fc1 = weight_variable([16 * 16 * 15, 4096])
b_fc1 = bias_variable([4096])

h_pool_c_flat = tf.reshape(h_pool_c, [-1, 16 * 16 * 15])
h_fc1 = tf.nn.relu(tf.matmul(h_pool_c_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

W_fc2 = weight_variable([4096, 256])
b_fc2 = bias_variable([256])

h_fc2 = tf.nn.relu(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
h_fc2_drop = tf.nn.dropout(h_fc2, keep_prob)

W_fc3 = weight_variable([256, 1])
b_fc3 = bias_variable([1])

y_conv = tf.sigmoid(tf.matmul(h_fc2_drop, W_fc3) + b_fc3)

# Train and Evaluate
# ------------------

cross_entropy = tf.reduce_mean(tf.nn.weighted_cross_entropy_with_logits(y_, y_conv, pos_weight = 3.5)) + 0.01*(tf.nn.l2_loss(W_conv1)+tf.nn.l2_loss(W_conv2)+tf.nn.l2_loss(W_conv3)+tf.nn.l2_loss(W_fc1)+tf.nn.l2_loss(W_fc2)+tf.nn.l2_loss(W_fc3))

train_step = tf.train.AdamOptimizer(learning_rate=3e-6, beta1=0.7, beta2=0.75, epsilon=1e-8).minimize(cross_entropy)

y_thres = tf.round(y_conv)
correct_prediction = tf.equal(y_thres, y_)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

train_iterations = 10000
test_iterations = 100

# Add ops to save and restore all the variables.

saver = tf.train.Saver()

def save_model(x, y):
    # Save the variables to disk.
    save_path = saver.save(sess, "./saved_model/model.ckpt")
    print("Model saved in file: %s" % save_path)

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    # Initialize the variables
    sess.run(tf.global_variables_initializer())

    # Initialize the queue threads to start to shovel data
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    print("Training")

    for i in range(train_iterations):
        start_time = time()
        feed_dict = {x: train_image_batch.eval(),
                     y_: train_label_batch.eval(),
                     keep_prob: 0.5}
        if i % 1 == 0:
            train_accuracy = accuracy.eval(feed_dict)
            error = cross_entropy.eval(feed_dict)
            print("Step %d, Training accuracy %g  %g" % (i, train_accuracy, error))
        feed_dict = {x: train_image_batch.eval(),
                     y_: train_label_batch.eval(),
                     keep_prob: 1.0}
        train_step.run(feed_dict)
        end_time = time()
        print("Training time %f" % (end_time - start_time))

    for i in range(test_iterations):
        print("validation accuracy %g" % accuracy.eval(feed_dict={
              x: test_image_batch.eval(), y_: test_label_batch.eval(),
              keep_prob: 1.0}))

    # Save trained model
    save_model(i, train_accuracy)

    # Stop our queue threads and properly close the session
    coord.request_stop()
    coord.join(threads)
    sess.close()

Here's my error:

Traceback (most recent call last):
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\client\session.py", line 1278, in _do_call
    return fn(*args)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\client\session.py", line 1263, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.OutOfRangeError: FIFOQueue '_4_batch_2/fifo_queue' is closed and has insufficient elements (requested 12, current size 2)
  [[Node: batch_2 = QueueDequeueManyV2[component_types=[DT_UINT8, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](batch_2/fifo_queue, batch_2/n)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\PC Media Files\Desktop\dl-face-detection-master\train.py", line 271, in <module>
    feed_dict = {x: train_image_batch.eval(),
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\framework\ops.py", line 680, in eval
    return _eval_using_default_session(self, feed_dict, self.graph, session)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\framework\ops.py", line 4951, in _eval_using_default_session
    return session.run(tensors, feed_dict)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\client\session.py", line 877, in run
    run_metadata_ptr)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\client\session.py", line 1100, in _run
    feed_dict_tensor, options, run_metadata)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\client\session.py", line 1272, in _do_run
    run_metadata)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\client\session.py", line 1291, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: FIFOQueue '_4_batch_2/fifo_queue' is closed and has insufficient elements (requested 12, current size 2)
  [[Node: batch_2 = QueueDequeueManyV2[component_types=[DT_UINT8, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](batch_2/fifo_queue, batch_2/n)]]

Caused by op 'batch_2', defined at:
  File "", line 1, in <module>
  File "D:\PC Media Files\Desktop\Python\lib\idlelib\run.py", line 144, in main
    ret = method(*args, **kwargs)
  File "D:\PC Media Files\Desktop\Python\lib\idlelib\run.py", line 474, in runcode
    exec(code, self.locals)
  File "D:\PC Media Files\Desktop\dl-face-detection-master\train.py", line 150, in <module>
    batch_size=NEG_BATCH_SIZE
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\training\input.py", line 988, in batch
    name=name)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\training\input.py", line 762, in _batch
    dequeued = queue.dequeue_many(batch_size, name=name)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 476, in dequeue_many
    self._queue_ref, n=n, component_types=self._dtypes, name=name)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 3799, in queue_dequeue_many_v2
    component_types=component_types, timeout_ms=timeout_ms, name=name)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\util\deprecation.py", line 454, in new_func
    return func(*args, **kwargs)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\framework\ops.py", line 3155, in create_op
    op_def=op_def)
  File "D:\PC Media Files\Desktop\Python\lib\site-packages\tensorflow\python\framework\ops.py", line 1717, in __init__
    self._traceback = tf_stack.extract_stack()

OutOfRangeError (see above for traceback): FIFOQueue '_4_batch_2/fifo_queue' is closed and has insufficient elements (requested 12, current size 2)
  [[Node: batch_2 = QueueDequeueManyV2[component_types=[DT_UINT8, DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](batch_2/fifo_queue, batch_2/n)]]
