krasserm / face-recognition


Deep face recognition with Keras, Dlib and OpenCV

License: Apache License 2.0

Python 5.43% Jupyter Notebook 94.57%
deep-learning face-recognition keras machine-learning

face-recognition's People

Contributors: 008karan, krasserm

face-recognition's Issues

Model performance is not good on new data

I have an issue while following the given steps to train on my own dataset. The model trains well and gives good results on identities that are present in the test data, but when I build a pipeline to predict on new data, performance drops sharply. How can I improve this and get a single prediction for a given new image? Do I need to train the model from the provided weights for more epochs, or is there something I am missing?

image = "path to image"
example_image1 = load_image(imge)
img2 = cv2.resize(example_image1, (96,96))
img2 = (img2 / 255.).astype(np.float32)
embedded_2 = nn4_small2_pretrained.predict(np.expand_dims(img2, axis=0))
embedded_3 = embedded_2[0]
example_prediction1 = knn.predict([embedded_3])
example_identity1 = encoder.inverse_transform(example_prediction1)[0]
print(example_identity1)
font = cv2.FONT_HERSHEY_SIMPLEX
org = (50, 50)
fontScale = 1
color = (255, 0, 0)
thickness = 2
image = cv2.putText(example_image1, example_identity1,org, font,
                   fontScale, color, thickness, cv2.LINE_AA)

How to load a model with only the keras.models.load_model() method?

I am able to save the Keras model (built from nn4.small2.v1.h5) using model.save('new_model.h5'), but when I try to load the model back I get an error. The complete details are given below.

nn4_small2_pretrained = create_model()
nn4_small2_pretrained.load_weights('weights/nn4.small2.v1.h5')

nn4_small2_pretrained.save("new_model.h5")

del nn4_small2_pretrained

from keras.models import load_model as lm

model = lm("new_model.h5")
Traceback (most recent call last):

  File "<ipython-input-8-74cf4e3a50de>", line 1, in <module>
    model = lm("waste.h5")

  File "/data/repos/tfenv/lib/python3.6/site-packages/keras/models.py", line 270, in load_model
    model = model_from_config(model_config, custom_objects=custom_objects)

  File "/data/repos/tfenv/lib/python3.6/site-packages/keras/models.py", line 347, in model_from_config
    return layer_module.deserialize(config, custom_objects=custom_objects)

  File "/data/repos/tfenv/lib/python3.6/site-packages/keras/layers/__init__.py", line 55, in deserialize
    printable_module_name='layer')

  File "/data/repos/tfenv/lib/python3.6/site-packages/keras/utils/generic_utils.py", line 144, in deserialize_keras_object
    list(custom_objects.items())))

  File "/data/repos/tfenv/lib/python3.6/site-packages/keras/engine/topology.py", line 2535, in from_config
    process_node(layer, node_data)

  File "/data/repos/tfenv/lib/python3.6/site-packages/keras/engine/topology.py", line 2492, in process_node
    layer(input_tensors[0], **kwargs)

  File "/data/repos/tfenv/lib/python3.6/site-packages/keras/engine/topology.py", line 619, in __call__
    output = self.call(inputs, **kwargs)

  File "/data/repos/tfenv/lib/python3.6/site-packages/keras/layers/core.py", line 685, in call
    return self.function(inputs, **arguments)

  File "/data/Work/workspace/person_recognition/python/face-recognition/utils.py", line 36, in LRN2D
    return tf.nn.lrn(x, alpha=1e-4, beta=0.75)

NameError: name 'tf' is not defined

How can I load the model without this error?
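The traceback points at the Lambda layer that wraps LRN2D in utils.py: when Keras deserializes the saved model, it re-evaluates that function in a scope where tf is no longer defined. A common workaround, sketched below as an assumption rather than a confirmed fix for this repo, is to supply the missing names through custom_objects when loading:

import tensorflow as tf
from keras.models import load_model
from utils import LRN2D  # helper referenced by the Lambda layer

# Handing the deserializer a reference to `tf` (and to LRN2D) lets the
# Lambda layer's function resolve the globals it lost when it was saved.
model = load_model("new_model.h5", custom_objects={"tf": tf, "LRN2D": LRN2D})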

Training the model with a custom triplet loss function causes the weights to explode and predict nan values

I created a triplet generator based on the FaceNet implementation of the triplet loss function, then combined it with the model from the Python notebook in this repo. I am training on the VGGFace2 dataset.

But training fails and stops after just one epoch. The weights seem to be exploding, and I am unable to figure out why. Attached is the Python notebook showing the code I used for training.

project-cleaned_2.0.zip
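One frequent cause of exploding weights and NaNs with a custom triplet loss is computing the distances on unnormalized embeddings, so the loss can be reduced simply by shrinking or blowing up the embedding norms. A minimal sketch of a triplet loss layer that L2-normalizes first is below; layer and variable names are illustrative and not taken from the attached notebook.

from keras import backend as K
from keras.layers import Layer

class SafeTripletLossLayer(Layer):
    """Triplet loss computed on L2-normalized embeddings with margin alpha."""
    def __init__(self, alpha=0.2, **kwargs):
        self.alpha = alpha
        super(SafeTripletLossLayer, self).__init__(**kwargs)

    def call(self, inputs):
        # normalize each embedding so its norm cannot grow without bound
        a, p, n = [K.l2_normalize(x, axis=-1) for x in inputs]
        pos_dist = K.sum(K.square(a - p), axis=-1)
        neg_dist = K.sum(K.square(a - n), axis=-1)
        loss = K.mean(K.maximum(pos_dist - neg_dist + self.alpha, 0.0))
        self.add_loss(loss)
        return loss

Lowering the learning rate and checking that the generator never yields corrupt or constant images are also worth trying before changing the loss.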

Saving models

Newbie question about saving the trained model.
I am training the model from scratch, and yes, I know it's expensive.
I am NOT loading your pre-trained weights.

So here we go...
nn4_small2.summary() shows the full, very long list of create_model() layers.
nn4_small2_train.summary() shows only one layer (the TripletLossLayer).

How did you manage to fit nn4_small2_train for training, yet still load the full create_model() layers? I compared this with your nn4_small2_pretrained, which has the full, very long list of layers.

nn4_small2_pretrained = create_model()
nn4_small2_pretrained.load_weights('weights/nn4.small2.v1.h5')
nn4_small2_pretrained.summary()
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_2 (InputLayer)            (None, 96, 96, 3)    0                                            
__________________________________________________________________________________________________
zero_padding2d_24 (ZeroPadding2 (None, 102, 102, 3)  0           input_2[0][0]                    

"   "    "   Many Many Many more layers here...................
"   "    "   Many Many Many more layers here...................

_________________________________________________________________________________________________     
average_pooling2d_8 (AveragePoo (None, 1, 1, 736)    0           concatenate_14[0][0]             
__________________________________________________________________________________________________
flatten_2 (Flatten)             (None, 736)          0           average_pooling2d_8[0][0]        
__________________________________________________________________________________________________
dense_layer (Dense)             (None, 128)          94336       flatten_2[0][0]                  
__________________________________________________________________________________________________
norm_layer (Lambda)             (None, 128)          0           dense_layer[0][0]                
==================================================================================================
Total params: 3,743,280
Trainable params: 3,733,968
Non-trainable params: 9,312
__________________________________________________________________________________________________

But for my case, I get this:

nn4_small2_train.load_weights("weights/nn4_small2_train02.hdf5")
nn4_small2_train.summary()
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_2 (InputLayer)            (None, 96, 96, 3)    0                                            
__________________________________________________________________________________________________
input_3 (InputLayer)            (None, 96, 96, 3)    0                                            
__________________________________________________________________________________________________
input_4 (InputLayer)            (None, 96, 96, 3)    0                                            
__________________________________________________________________________________________________
model_1 (Model)                 (None, 512)          4026288     input_2[0][0]                    
                                                                 input_3[0][0]                    
                                                                 input_4[0][0]                    
__________________________________________________________________________________________________
triplet_loss_layer (TripletLoss [(None, 512), (None, 0           model_1[1][0]                    
                                                                 model_1[2][0]                    
                                                                 model_1[3][0]                    
==================================================================================================
Total params: 4,026,288
Trainable params: 4,016,976
Non-trainable params: 9,312
__________________________________________________________________________________________________
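For what it's worth, the summary above already hints at the answer: the whole embedding network is nested inside the training model as a single Model layer (model_1), so its weights can be pulled out after training. A sketch of that extraction, with the layer name taken from the summary above and therefore possibly different in other runs:

# the embedding network is nested inside nn4_small2_train as one Model layer
embedding_model = nn4_small2_train.get_layer('model_1')   # name may differ per run
embedding_model.save_weights('weights/nn4_small2_custom.h5')

# later: rebuild the architecture and load the extracted weights
nn4_small2_restored = create_model()
nn4_small2_restored.load_weights('weights/nn4_small2_custom.h5')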


TypeError: unsupported operand type(s) for /: 'NoneType' and 'float'

TypeError Traceback (most recent call last)
in ()
5 img = align_image(img)
6 # scale RGB values to interval [0,1]
----> 7 img = (img / 255.).astype(np.float32)
8 # obtain embedding vector for image
9 #embedded[i] = nn4_small2_pretrained.predict(np.expand_dims(img, axis=0))[0]
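The TypeError means align_image returned None for that image, i.e. dlib found no face to align, so None / 255. fails. A minimal guard, assuming the embedding loop from the notebook, is to skip images that fail alignment:

for i, m in enumerate(metadata):
    img = load_image(m.image_path())
    img = align_image(img)
    if img is None:
        # no face detected / alignment failed; skip this image
        print('Skipping', m.image_path())
        continue
    # scale RGB values to interval [0,1]
    img = (img / 255.).astype(np.float32)
    embedded[i] = nn4_small2_pretrained.predict(np.expand_dims(img, axis=0))[0]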

Just need help on how you saved weights of nn4.small2.v1 in CSV

On the OpenFace website the pre-trained model is provided with a 't7' extension, which is the Torch (Lua) format, so I am wondering how you produced a model usable in Keras. Please share the steps or code snippets for how you exported the weights of the pre-trained nn4.small2.v1 to CSV and later loaded them into Keras. Thanks.

High false-positive rate in inference

I've adapted the code to work in real time, capturing frames from a webcam.
Instead of testing the accuracy on 10 different LFW identities, I've tested against a subset of approximately 600 identities.

The results seem exaggeratedly good: the accuracy at threshold 0.43 is 0.99!! (accuracy/threshold plot attached as a screenshot)

Could it be a problem of overfitting? That would be strange, though, because the model is trained on FaceScrub and CASIA-WebFace while the test is done on LFW.

In addition, I have another problem in real time: positive cases are almost always predicted correctly, but I also get a very high rate of false positives.

As suggested, I use an SVC to predict the class, and then, if the L2 distance between the embedding of the real-time frame and at least one of the embeddings of the predicted class is below the threshold, I treat it as the same face.

P.S. This obviously isn't an issue with the code itself; I hope you can help me anyway!
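One way to cut down the false positives with this setup is to treat the SVC output only as a candidate and return 'unknown' unless the distance check against that candidate's stored embeddings passes. A rough sketch, assuming the embedded array, integer labels y and fitted svc/encoder from the notebook, and a threshold tuned on your own data:

import numpy as np

def identify(frame_embedding, svc, encoder, embedded, y, threshold=0.56):
    """Return the predicted identity, or 'unknown' if no stored embedding of
    the candidate class is within the (dataset-specific) distance threshold."""
    candidate = svc.predict([frame_embedding])[0]
    candidate_embeddings = embedded[y == candidate]
    dists = np.sum(np.square(candidate_embeddings - frame_embedding), axis=1)
    if dists.min() > threshold:
        return 'unknown'
    return encoder.inverse_transform([candidate])[0]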

Load Dataset for Deep Learning

I am trying to learn deep learning and am coding a CNN model. I'm new and need your help. I have trained with jpg+jpg pairs and with jpg+txt pairs (as annotation) before, but I have not tried jpg files with ".features" files as annotation. I have a training folder of jpg files for 8 classes, and likewise a folder of ".features" files corresponding to the 8 classes as annotation. The content of one ".features" file corresponding to an image file is included below.

Can you help me with how to load this dataset?

JCD:1.0,2.5,4.0,0.0,0.0,0.0,2.5,5.0,4.0,0.5,0.0,0.0,1.0,0.0,0.0,1.0,2.0,1.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.5,0.0,0.0,0.0,2.0,2.0,2.5,0.0,0.0,0.0,0.5,0.0,0.0,0.5,0.5,0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,1.0,2.0,2.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.5,1.5,0.5,0.0,0.0,0.0,1.0,2.5,3.0,0.0,0.0,0.0,0.5,0.0,0.0,0.5,0.5,0.5,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,1.0,2.0,2.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,1.0,2.0,2.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0 Tamura:3.6372549019607843,4.147101187511669,496.0,115.0,160.0,212.0,164.0,179.0,156.0,262.0,167.0,157.0,151.0,202.0,147.0,155.0,153.0,162.0 ColorLayout:15.0,18.0,6.0,10.0,14.0,5.0,15.0,19.0,15.0,18.0,14.0,16.0,13.0,15.0,11.0,18.0,16.0,18.0,14.0,16.0,14.0,20.0,19.0,11.0,18.0,13.0,18.0,43.0,7.0,26.0,10.0,24.0,13.0 EdgeHistogram:1.0,1.0,6.0,0.0,1.0,2.0,4.0,7.0,0.0,5.0,0.0,6.0,4.0,3.0,3.0,0.0,1.0,0.0,7.0,3.0,3.0,3.0,5.0,3.0,1.0,3.0,4.0,2.0,4.0,2.0,0.0,3.0,4.0,5.0,3.0,4.0,1.0,2.0,4.0,1.0,5.0,0.0,5.0,3.0,2.0,3.0,2.0,4.0,6.0,5.0,2.0,1.0,2.0,7.0,6.0,4.0,0.0,2.0,0.0,2.0,0.0,2.0,0.0,0.0,2.0,3.0,4.0,1.0,2.0,3.0,4.0,5.0,4.0,2.0,3.0,0.0,0.0,3.0,0.0,5.0 AutoColorCorrelogram:14.0,14.0,13.0,13.0,8.0,7.0,6.0,5.0,11.0,10.0,9.0,9.0,14.0,13.0,12.0,12.0,10.0,9.0,8.0,8.0,14.0,13.0,13.0,12.0,10.0,9.0,8.0,8.0,11.0,10.0,9.0,9.0,5.0,4.0,2.0,2.0,12.0,12.0,11.0,11.0,2.0,0.0,0.0,0.0,9.0,9.0,8.0,7.0,7.0,5.0,3.0,2.0,0.0,0.0,0.0,0.0,7.0,4.0,3.0,2.0,0.0,0.0,0.0,0.0,5.0,4.0,3.0,3.0,10.0,10.0,8.0,8.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,3.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,8.0,8.0,6.0,6.0,12.0,12.0,11.0,10.0,5.0,3.0,2.0,1.0,7.0,5.0,4.0,3.0,3.0,1.0,1.0,0.0,8.0,3.0,2.0,2.0,4.0,2.0,1.0,0.0,15.0,15.0,15.0,15.0,5.0,5.0,3.0,3.0,5.0,5.0,3.0,3.0,5.0,4.0,2.0,2.0,11.0,11.0,10.0,9.0,1.0,0.0,0.0,0.0,7.0,5.0,3.0,2.0,1.0,0.0,0.0,0.0,9.0,5.0,4.0,4.0,1.0,0.0,0.0,0.0,3.0,2.0,1.0,1.0,2.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,0.0,0.0,1.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,3.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0 
PHOG:10.0,11.0,6.0,5.0,6.0,7.0,8.0,8.0,8.0,9.0,10.0,10.0,9.0,9.0,15.0,13.0,13.0,6.0,5.0,6.0,7.0,5.0,5.0,6.0,6.0,5.0,4.0,4.0,4.0,10.0,6.0,4.0,3.0,3.0,2.0,3.0,5.0,4.0,2.0,2.0,5.0,4.0,4.0,5.0,14.0,11.0,15.0,6.0,5.0,8.0,9.0,2.0,3.0,4.0,4.0,3.0,2.0,4.0,3.0,4.0,14.0,10.0,5.0,3.0,4.0,5.0,6.0,5.0,4.0,4.0,3.0,4.0,4.0,5.0,11.0,12.0,9.0,5.0,4.0,3.0,4.0,4.0,4.0,4.0,4.0,3.0,3.0,4.0,4.0,9.0,9.0,12.0,7.0,7.0,7.0,8.0,8.0,9.0,9.0,10.0,11.0,10.0,10.0,10.0,15.0,13.0,11.0,4.0,3.0,3.0,3.0,2.0,2.0,2.0,2.0,2.0,1.0,1.0,2.0,10.0,13.0,11.0,7.0,5.0,5.0,7.0,9.0,11.0,11.0,13.0,14.0,14.0,13.0,12.0,14.0,15.0,13.0,8.0,8.0,9.0,11.0,13.0,13.0,14.0,14.0,12.0,10.0,9.0,8.0,14.0,0.0,1.0,1.0,1.0,0.0,1.0,3.0,3.0,1.0,1.0,2.0,2.0,3.0,3.0,15.0,5.0,13.0,2.0,2.0,3.0,2.0,0.0,1.0,3.0,2.0,2.0,1.0,1.0,1.0,0.0,0.0,0.0,0.0,1.0,1.0,3.0,4.0,3.0,0.0,0.0,4.0,2.0,1.0,3.0,13.0,11.0,15.0,3.0,3.0,7.0,9.0,0.0,0.0,3.0,3.0,2.0,1.0,2.0,0.0,0.0,5.0,4.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0,2.0,3.0,3.0,5.0,6.0,15.0,11.0,11.0,2.0,1.0,2.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,5.0,11.0,13.0,2.0,2.0,1.0,0.0,1.0,1.0,2.0,3.0,3.0,3.0,4.0,5.0,15.0,14.0,12.0,2.0,3.0,4.0,5.0,4.0,4.0,6.0,6.0,3.0,3.0,2.0,2.0,11.0,14.0,10.0,7.0,4.0,3.0,3.0,3.0,5.0,4.0,3.0,3.0,5.0,6.0,6.0,5.0,6.0,6.0,6.0,6.0,8.0,6.0,5.0,6.0,5.0,4.0,3.0,3.0,5.0,6.0,8.0,13.0,9.0,7.0,5.0,3.0,6.0,10.0,7.0,7.0,9.0,8.0,10.0,11.0,11.0,12.0,10.0,9.0,15.0,8.0,5.0,10.0,6.0,5.0,9.0,4.0,5.0,3.0,7.0,9.0,8.0,4.0,5.0,5.0,5.0,6.0,8.0,10.0,11.0,12.0,14.0,15.0,13.0,12.0,10.0,8.0,7.0,4.0,3.0,2.0,2.0,1.0,2.0,1.0,1.0,1.0,2.0,1.0,1.0,2.0,3.0,8.0,15.0,10.0,10.0,10.0,8.0,6.0,8.0,6.0,5.0,4.0,4.0,3.0,3.0,4.0,5.0,6.0,3.0,2.0,2.0,2.0,1.0,1.0,0.0,0.0,0.0,0.0,0.0,1.0,11.0,15.0,10.0,4.0,2.0,2.0,3.0,3.0,3.0,3.0,3.0,2.0,3.0,4.0,2.0,3.0,3.0,3.0,3.0,3.0,3.0,4.0,2.0,3.0,2.0,2.0,1.0,2.0,3.0,3.0,9.0,8.0,7.0,8.0,10.0,8.0,6.0,9.0,6.0,5.0,7.0,7.0,8.0,6.0,12.0,12.0,13.0,11.0,11.0,7.0,5.0,5.0,9.0,9.0,8.0,15.0,13.0,12.0,11.0,6.0,9.0,3.0,3.0,4.0,4.0,5.0,7.0,8.0,10.0,12.0,13.0,15.0,13.0,12.0,10.0,9.0,7.0,6.0,4.0,4.0,3.0,3.0,3.0,2.0,2.0,2.0,2.0,2.0,2.0,3.0,2.0,12.0,10.0,4.0,3.0,2.0,3.0,3.0,4.0,3.0,4.0,4.0,5.0,4.0,5.0,4.0,5.0,3.0,2.0,2.0,2.0,1.0,1.0,1.0,2.0,2.0,3.0,4.0,5.0,7.0,15.0,0.0,0.0,0.0,0.0,4.0,14.0,14.0,13.0,5.0,4.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,12.0,15.0,7.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,2.0,1.0,1.0,2.0,3.0,11.0,14.0,15.0,6.0,7.0,7.0,8.0,7.0,6.0,4.0,3.0,3.0,2.0,2.0,1.0,0.0,3.0,3.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,1.0,1.0,1.0,0.0,2.0,2.0,3.0,1.0,2.0,4.0,8.0,12.0,13.0,14.0,15.0,11.0,7.0,5.0,2.0,4.0
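The '.features' file above looks like plain text made of named descriptor blocks (JCD, Tamura, ColorLayout, EdgeHistogram, AutoColorCorrelogram, PHOG), each followed by a colon and a comma-separated list of numbers. A minimal parsing sketch under that assumption:

import numpy as np

def load_features(path):
    """Parse a '.features' file into a dict of descriptor name -> numpy array."""
    features = {}
    with open(path) as f:
        for token in f.read().split():
            if ':' in token:
                name, values = token.split(':', 1)
                features[name] = np.array([float(v) for v in values.split(',')])
    return features

feats = load_features('example.features')          # hypothetical file name
print({name: vec.shape for name, vec in feats.items()})

From there the per-descriptor arrays can be concatenated into one feature vector per image and paired with the jpg files by filename.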

IndexError: arrays used as indices must be of integer (or boolean) type


IndexError Traceback (most recent call last)
in ()
33 bb = alignment.getLargestFaceBoundingBox(rgbImg)
34
---> 35 jc_aligned = alignment.align(96, rgbImg, bb, landmarkIndices=AlignDlib.OUTER_EYES_AND_NOSE)
36
37

/content/drive/My Drive/app/Face_recognition_Openface/align.py in align(self, imgDim, rgbImg, bb, landmarks, landmarkIndices, skipMulti)
182 npLandmarkIndices = np.ndarray(landmarkIndices, dtype='f')
183
--> 184 H = cv2.getAffineTransform(npLandmarks[npLandmarkIndices], imgDim*MINMAX_TEMPLATE[npLandmarkIndices])
185 thumbnail = cv2.warpAffine(rgbImg, H, (imgDim, imgDim))
186

IndexError: arrays used as indices must be of integer (or boolean) type
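The failing line builds npLandmarkIndices with np.ndarray(landmarkIndices, dtype='f'), which both misuses np.ndarray (its first argument is a shape, not data) and yields floats, and float arrays cannot be used as indices. The original OpenFace align.py builds a plain integer index array instead; a fix along those lines would be:

# in align.py, inside align():
npLandmarks = np.float32(landmarks)
npLandmarkIndices = np.array(landmarkIndices)   # integer indices, not a float ndarray

H = cv2.getAffineTransform(npLandmarks[npLandmarkIndices],
                           imgDim * MINMAX_TEMPLATE[npLandmarkIndices])
thumbnail = cv2.warpAffine(rgbImg, H, (imgDim, imgDim))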

Traceback (most recent call last)

Has anyone had the same problem as me?

File "Training.py", in
    plt.gca().add_patch(patches.Rectangle((bb.left(), bb.top()), bb.width(), bb.height(), fill=False, color='red'))
AttributeError: 'NoneType' object has no attribute 'left'
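'NoneType' object has no attribute 'left' means getLargestFaceBoundingBox returned None, i.e. dlib detected no face in that image. A minimal guard around the plotting code (variable names illustrative):

bb = alignment.getLargestFaceBoundingBox(img)
if bb is None:
    print('No face detected in this image')
else:
    plt.gca().add_patch(patches.Rectangle((bb.left(), bb.top()),
                                          bb.width(), bb.height(),
                                          fill=False, color='red'))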

I am getting the error below while running the notebook

Face alignment section

RuntimeError Traceback (most recent call last)
in
14
15 # Initialize the OpenFace face alignment utility
---> 16 alignment = AlignDlib('models/landmarks.dat')
17
18 # Load an image of Jacques Chirac

face-recognition/align.py in init(self, facePredictor)
87
88 self.detector = dlib.get_frontal_face_detector()
---> 89 self.predictor = dlib.shape_predictor(facePredictor)
90
91 def getAllFaceBoundingBoxes(self, rgbImg):

RuntimeError: Error deserializing a floating point number.
while deserializing a dlib::matrix
while deserializing object of type std::vector
while deserializing object of type std::vector
while deserializing object of type std::vector

I did install all requirements in a virtual env.
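This dlib deserialization error usually means models/landmarks.dat is missing, truncated (e.g. fetched as a Git LFS pointer file), or still bz2-compressed, rather than a code problem. A sketch for checking the file and re-fetching dlib's standard 68-point predictor, which is what this landmarks file corresponds to (treat the exact URL and size as assumptions):

import os, bz2, urllib.request

dst = 'models/landmarks.dat'
# a healthy predictor file is roughly 100 MB; a few KB suggests a pointer file or a failed download
print(os.path.getsize(dst) if os.path.exists(dst) else 'missing')

url = 'http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2'
urllib.request.urlretrieve(url, dst + '.bz2')
with open(dst, 'wb') as f:
    f.write(bz2.BZ2File(dst + '.bz2').read())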

OSError: SavedModel file does not exist at: ./model/traffic_sign.model/{saved_model.pbtxt|saved_model.pb}

System information

TensorFlow version: 2.2.0
Python version: 3.7.3
Installed using: pip

Describe the problem
I got this error when I run it. Can anyone assist me with this problem?

Traceback (most recent call last):
File "/home/pi/Traffic-sign-detection/src/utils.py", line 9, in
model = load_model("./model/traffic_sign.model")
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/save.py", line 189, in load_model
loader_impl.parse_saved_model(filepath)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/loader_impl.py", line 113, in parse_saved_model
constants.SAVED_MODEL_FILENAME_PB))
OSError: SavedModel file does not exist at: ./model/traffic_sign.model/{saved_model.pbtxt|saved_model.pb}

import cv2
import numpy as np
from math import sqrt, pow
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
import time

norm_size = 32
model = load_model("./model/traffic_sign.model")
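This one is unrelated to the face-recognition code, but the error itself only says that load_model was pointed at a directory containing no saved_model.pb (or .pbtxt). Since './model/traffic_sign.model' is a relative path, it is resolved against the current working directory rather than against utils.py; a hedged sketch that makes the path absolute and verifies the contents first (adjust the relative components to your layout):

import os
from tensorflow.keras.models import load_model

# resolve the model path relative to this file instead of the working directory
base_dir = os.path.dirname(os.path.abspath(__file__))
model_path = os.path.join(base_dir, 'model', 'traffic_sign.model')

print(os.listdir(model_path))   # a SavedModel directory must contain saved_model.pb
model = load_model(model_path)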

how to use

Hi,
please supply a step-by-step procedure for how to train the model on new images and how to use the trained model in a real situation.
Thanks a lot!

Input tensors

@krasserm Can I vary the input tensor size as per my requirements? What changes are required to do so?

Getting an error while running the notebook

Getting an error while calculating embeddings.
for i, m in enumerate(metadata):
----> 4 img = load_image(m.image_path())
5 img = align_image(img)
6 # scale RGB values to interval [0,1]

<ipython-input-14-23aefaa58a32> in load_image(path)
12 # in BGR order. So we need to reverse them
---> 13 return img[...,::-1]

TypeError: 'NoneType' object is not subscriptable
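The underlying cause is that cv2.imread returned None, which happens silently for a missing path, a non-image file (e.g. a stray .DS_Store or Thumbs.db in the dataset folder), or an unsupported format. One way to make that failure explicit in the notebook's load_image helper:

import cv2

def load_image(path):
    img = cv2.imread(path, 1)
    if img is None:
        # cv2.imread returns None instead of raising for unreadable files
        raise IOError('Could not read image: {}'.format(path))
    # OpenCV loads images in BGR order, so reverse the channels to get RGB
    return img[..., ::-1]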

Embedding Vector

@krasserm Can you please explain the embedding vectors?
What do they contain?
What is the significance of an embedding vector?
If I want to vary the embedding vector size, what should I do?
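For context, an embedding here is the 128-dimensional, L2-normalized output of the network's final dense_layer/norm_layer (visible in the model summaries elsewhere on this page); faces of the same person are trained to lie close together in that space. A small sketch of how it is used and where its size is fixed, assuming the notebook's variables:

import numpy as np

# an embedding is the 128-d output of nn4_small2 for one aligned 96x96 face
emb1 = nn4_small2_pretrained.predict(np.expand_dims(img1, axis=0))[0]
emb2 = nn4_small2_pretrained.predict(np.expand_dims(img2, axis=0))[0]

# a smaller squared distance means the two faces are more likely the same person
print(np.sum(np.square(emb1 - emb2)))

# The embedding size is the unit count of the final Dense layer in create_model()
# (dense_layer, 128 units). Changing it means editing that layer and retraining,
# because the pre-trained weights assume 128 dimensions.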

model saving and prediction on saved model

@krasserm I followed #12 and tried the same, but I got a ValueError.

ValueError: Error when checking model : the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 3 array(s), but instead got the following list of 1 arrays: [array([[[[0.4509804 , 0.5411765 , 0.39215687],
[0.42745098, 0.5137255 , 0.35686275],
[0.4117647 , 0.48235294, 0.3254902 ],
...,
[0.01568628, 0. , 0.00784314...

Can you please suggest a solution?
Thank you.
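The ValueError says the loaded model expects three input arrays, which suggests the triplet-training model (anchor, positive, negative inputs) was saved rather than the single-input embedding network. A hedged sketch of extracting and using the inner embedding model instead (the nested layer name is illustrative):

import numpy as np

# pull the single-input embedding network out of the saved triplet model
embedding_model = loaded_model.get_layer('model_1')   # name may differ
emb = embedding_model.predict(np.expand_dims(img, axis=0))[0]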

GPU usage (data generator)

Attachments: data.py.txt and a GPU utilization screenshot (gpu3)

I created my own triplet_generator() (see the attached txt). However, as the attached screenshot shows, I am unable to fully utilize my RTX 2070 (8 GB) GPU; it does not seem busy enough. After investigation, I found that the bottleneck is loading images. I tried to do all pre-processing work (resize, /255., etc.) outside of training by saving the results as *.npz files, hoping to speed things up, but this did not yield much improvement.

Can you give me some advice on how to keep the GPU constantly fed with data?
Or can you share your data.py code used for training "weights/nn4.small2.v1.h5"?
Thanks.
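One thing worth trying, assuming training uses fit_generator as in the notebook, is to run the generator in parallel worker processes so image loading and preprocessing overlap with the GPU's compute; the parameter values below are illustrative, not the settings used for the published weights.

nn4_small2_train.fit_generator(
    triplet_generator(),
    epochs=10,
    steps_per_epoch=100,
    workers=4,                 # number of CPU workers preparing batches
    use_multiprocessing=True,
    max_queue_size=20)         # batches buffered ahead of the GPU

If you see duplicated batches with multiprocessing and a bare generator, wrapping the data pipeline in a keras.utils.Sequence is the safer route.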

How to write a triplet_generator?

I want to use my own data to fine-tune the network so that it better extracts Asian face features, but I don't know how your triplet_generator should generate data. Can you help me and give me some guidance? Thank you very much.
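For reference, a triplet generator only has to yield ([anchor_batch, positive_batch, negative_batch], None) forever, where anchor and positive come from the same identity and negative from a different one. A naive random-sampling sketch (the identity-to-images mapping and the preprocessing are assumptions, and every identity needs at least two images):

import random
import numpy as np

def triplet_generator(images_by_identity, batch_size=32):
    """images_by_identity: dict mapping identity -> list of preprocessed
    96x96x3 float arrays (each identity must have >= 2 images)."""
    identities = list(images_by_identity.keys())
    while True:
        anchors, positives, negatives = [], [], []
        for _ in range(batch_size):
            pos_id, neg_id = random.sample(identities, 2)
            a, p = random.sample(images_by_identity[pos_id], 2)
            n = random.choice(images_by_identity[neg_id])
            anchors.append(a)
            positives.append(p)
            negatives.append(n)
        # the triplet loss layer produces the loss, so no labels are needed
        yield [np.array(anchors), np.array(positives), np.array(negatives)], None

FaceNet-style training usually improves on this with (semi-)hard negative mining, but random sampling is enough to get fine-tuning running.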

Double counting pairs in distance threshold calculation

In the file face_recognition.ipynb, in the Distance threshold section, the creation of pairs for the F1 and accuracy calculations double counts all embedding pairs and also includes pairs of an embedding with itself.

Instead of

for i in range(num - 1):
    for j in range(1, num):

it should be

for i in range(num - 1):
    for j in range(i + 1, num):

I would have liked to make a pull request myself, but cannot run the ipynb on my remote server to generate the new images required. Thank you!

Handle case where no faces are detected on an image

I was trying to run the repo on my own training dataset and was getting an error from the align_image function, as some of the images are apparently not being aligned properly. So I wanted to know whether there is any specific required format for the dataset (e.g. image size, face angle, etc.).
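align_image returns None when dlib cannot find a face (very small, heavily rotated, or occluded faces are typical causes); there is no strict format requirement beyond a detectable face. One hedged workaround is to fall back to a plain resize when alignment fails so the pipeline keeps running:

import cv2

def align_or_resize(img):
    """Try dlib alignment first; fall back to a plain 96x96 resize if no face
    is found. The fallback embeds an unaligned crop, so expect lower accuracy."""
    aligned = align_image(img)
    if aligned is not None:
        return aligned
    return cv2.resize(img, (96, 96))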

Getting confidence level of test image

Hi,

First of all thank you for creating this project, it helped me a lot with understanding how the tech works.

I was wondering how I could get a confidence level when doing face recognition. I tried using the predict_proba() function on the KNN model but got an array where all values were zero except for one. I was looking for something like: [ 0.1 , 0.4 , 0.2 , 1.0 ] (considering there are only 4 labels).

Thanks!
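With a small number of neighbors, KNN probabilities are just neighbor vote fractions, so they collapse to 0s and 1s whenever the neighbors agree. Two hedged alternatives are sketched below (X_train, y_train and test_embedding are placeholders for the notebook's training embeddings, encoded labels and a query embedding):

from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# more neighbors -> smoother vote fractions from predict_proba
knn = KNeighborsClassifier(n_neighbors=10, metric='euclidean')
knn.fit(X_train, y_train)
print(knn.predict_proba([test_embedding]))

# an SVC fitted with probability=True exposes Platt-scaled class probabilities
svc = SVC(probability=True)
svc.fit(X_train, y_train)
print(svc.predict_proba([test_embedding]))

The embedding distance to the nearest stored sample of the predicted class also works well as a confidence proxy.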

NameError: name 'tf' is not defined

Hi, I have run your Jupyter notebook successfully and have saved nn4_small2 into a .h5 file. I'm now trying to convert this file to tflite, for which I use the tflite_convert tool following the documentation provided here.

When I try to run this, I get the error below (note the last line; it is related to the utils.py file):

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.6/bin/tflite_convert", line 11, in <module>
    sys.exit(main())
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 412, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 408, in run_main
    _convert_model(tflite_flags)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 100, in _convert_model
    converter = _get_toco_converter(flags)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/contrib/lite/python/tflite_convert.py", line 87, in _get_toco_converter
    return converter_fn(**converter_kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/contrib/lite/python/lite.py", line 368, in from_keras_model_file
    keras_model = _keras.models.load_model(model_file)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/saving.py", line 230, in load_model
    model = model_from_config(model_config, custom_objects=custom_objects)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/saving.py", line 310, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/layers/serialization.py", line 64, in deserialize
    printable_module_name='layer')
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/utils/generic_utils.py", line 173, in deserialize_keras_object
    list(custom_objects.items())))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1302, in from_config
    process_node(layer, node_data)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/network.py", line 1260, in process_node
    layer(input_tensors[0], **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 757, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/layers/core.py", line 739, in call
    return self.function(inputs, **arguments)
  File "/Users/sparker0i/face-recognition/utils.py", line 35, in LRN2D
    return tf.nn.lrn(x, alpha=1e-4, beta=0.75)
NameError: name 'tf' is not defined

In utils.py I can see import tensorflow as tf, but I'm still not able to deduce why this error comes up. Could you please look into it?
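The cause is the same Lambda/LRN2D layer as in the load_model issue above: tflite_convert reloads the .h5 file in a process where the Lambda's serialized function can no longer see tf, regardless of the import inside utils.py. One hedged workaround is to load the model in your own script, supplying the missing names, and convert the in-memory model instead of the file (sketched for TF 2.x; you may also need to pass the LRN2D helper from utils.py in custom_objects):

import tensorflow as tf

# load with the missing globals supplied, then convert the in-memory model
model = tf.keras.models.load_model('nn4_small2.h5', custom_objects={'tf': tf})
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('nn4_small2.tflite', 'wb') as f:
    f.write(tflite_model)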

Euclidean distance function

Euclidean distance function:

def distance(emb1, emb2):
    return np.sum(np.square(emb1 - emb2))

should be

def distance(emb1, emb2):
    return np.sqrt(np.sum((emb1 - emb2)**2))
