antonmu / TrainYourOwnYOLO

638 stars · 8 watchers · 411 forks · 70.47 MB

Train a state-of-the-art yolov3 object detector from scratch!

License: Other

Python 1.14% Jupyter Notebook 98.86% Shell 0.01%
yolov3 yolo object-detection python custom-yolo deep-learning deep-learning-tutorial detector inference annotating-images

TrainYourOwnYOLO's People

Contributors

antonmu, dependabot[bot], parikshit14, superxwolf


TrainYourOwnYOLO's Issues

Trying to run Train_YOLO.py throws an error

np_resource = np.dtype([("resource", np.ubyte, 1)])
Using TensorFlow backend.
Traceback (most recent call last):
File "C:/Users/Shreeni/Downloads/ripo/TrainYourOwnYOLO-master/2_Training/Train_YOLO.py", line 32, in
from keras_yolo3.yolo3.model import preprocess_true_boxes, yolo_body, tiny_yolo_body, yolo_loss
ModuleNotFoundError: No module named 'keras_yolo3'
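A possible workaround, sketched below on the assumption that the keras_yolo3 package sits under 2_Training/src (the layout visible in the Google Colab traceback further down this page): make sure that folder exists in your download and is importable when Train_YOLO.py runs.

```python
# Hypothetical fix sketch: put the repo's 2_Training/src folder on sys.path so
# `from keras_yolo3.yolo3.model import ...` can resolve. The repo root below is
# taken from the traceback - adjust it to your checkout.
import os
import sys

repo_root = "C:/Users/Shreeni/Downloads/ripo/TrainYourOwnYOLO-master"
src_dir = os.path.join(repo_root, "2_Training", "src")

if not os.path.isdir(os.path.join(src_dir, "keras_yolo3")):
    raise FileNotFoundError(
        "keras_yolo3 not found under 2_Training/src - check that the repository was downloaded completely"
    )
sys.path.append(src_dir)
```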

Not able to detect training images

I'm running Train_YOLO.py on my CPU and followed the steps, but I'm getting an error:

[Errno 2] No such file or directory: 'D:/TrainYourOwnYOLO/-master/Data/Source_Images/Training_Images/vott-csv-export/images%20(42).jpg'

The image is present in my directory but it is not getting detected. Any reason why?
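The %20 in the failing path suggests the annotation still carries a URL-encoded file name ('images%20(42).jpg') while the file on disk has a literal space in its name. Since the training annotation parser splits each line on whitespace (see the Google Colab issue further down), a path with a real space would not work either. One option, sketched below under the assumption that the images and the VoTT CSV live in the vott-csv-export folder and that the CSV column is named "image", is to rename the images so they contain no spaces, point the CSV at the renamed files, and then re-run Convert_to_YOLO_format.py:

```python
# Hypothetical workaround sketch - folder, CSV name and column name are assumptions.
import os
import pandas as pd
from urllib.parse import unquote

export_dir = r"D:\TrainYourOwnYOLO-master\Data\Source_Images\Training_Images\vott-csv-export"
csv_path = os.path.join(export_dir, "Annotations-export.csv")

# 1) Rename image files so they contain no spaces.
for name in os.listdir(export_dir):
    if " " in name and name.lower().endswith((".jpg", ".jpeg", ".png")):
        os.rename(os.path.join(export_dir, name),
                  os.path.join(export_dir, name.replace(" ", "_")))

# 2) Make the CSV point at the renamed files (handles both ' ' and '%20').
df = pd.read_csv(csv_path)
df["image"] = df["image"].map(lambda s: unquote(s).replace(" ", "_"))
df.to_csv(csv_path, index=False)
```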

Loss function doesn't decrease

Hi @AntonMu

I've tried your model last night on an AWS EC2 machine.
My dataset contains 6 classes, with 1000 images per class. I ran 100 epochs, and this morning I saw that the loss function is stuck around a value of ~35 after 50 epochs.
Any suggestions on how to improve that?

Thanks again

TypeError: buffer is too small for requested array

My system is Ubuntu 18.04, GTX 1080, 32 GB RAM, Python 3.7.3.

~/TrainYourOwnYOLO/2_Training$ python Download_and_Convert_YOLO_weights.py

Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W1218 10:59:15.021920 140670055905088 deprecation.py:323] From /media/suryadi/DATA/anaconda3/lib/python3.7/site-packages/tensorflow/python/compat/v2_compat.py:61: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Loading weights.
Weights Header:  0 2 0 [32013312]
Parsing Darknet config.
Creating Keras model.
Parsing section net_0
Parsing section convolutional_0
conv2d bn leaky (3, 3, 3, 32)
2019-12-18 10:59:15.880605: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-18 10:59:16.005186: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 4008000000 Hz
2019-12-18 10:59:16.013213: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608a2426fb0 executing computations on platform Host. Devices:
2019-12-18 10:59:16.013306: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2019-12-18 10:59:16.039288: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.1
2019-12-18 10:59:16.181191: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-18 10:59:16.181547: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608a4d98fc0 executing computations on platform CUDA. Devices:
2019-12-18 10:59:16.181560: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): GeForce GTX 1080, Compute Capability 6.1
2019-12-18 10:59:16.182158: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-18 10:59:16.182423: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
2019-12-18 10:59:16.247569: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-12-18 10:59:16.299354: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.0
2019-12-18 10:59:16.324271: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.0
2019-12-18 10:59:16.334432: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.0
2019-12-18 10:59:16.394529: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.0
2019-12-18 10:59:16.441332: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.0
2019-12-18 10:59:16.556384: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
2019-12-18 10:59:16.556777: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-18 10:59:16.558875: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-18 10:59:16.560796: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0
2019-12-18 10:59:16.569143: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.0
2019-12-18 10:59:16.572617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-18 10:59:16.572671: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187]      0 
2019-12-18 10:59:16.572704: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0:   N 
2019-12-18 10:59:16.573052: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-18 10:59:16.574301: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1005] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-18 10:59:16.575389: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6474 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
Parsing section convolutional_1
conv2d bn leaky (3, 3, 32, 64)
Parsing section convolutional_2
conv2d bn leaky (1, 1, 64, 32)
Parsing section convolutional_3
conv2d bn leaky (3, 3, 32, 64)
Parsing section shortcut_0
Parsing section convolutional_4
conv2d bn leaky (3, 3, 64, 128)
Parsing section convolutional_5
conv2d bn leaky (1, 1, 128, 64)
Parsing section convolutional_6
conv2d bn leaky (3, 3, 64, 128)
Parsing section shortcut_1
Parsing section convolutional_7
conv2d bn leaky (1, 1, 128, 64)
Parsing section convolutional_8
conv2d bn leaky (3, 3, 64, 128)
Parsing section shortcut_2
Parsing section convolutional_9
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_10
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_11
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_3
Parsing section convolutional_12
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_13
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_4
Parsing section convolutional_14
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_15
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_5
Parsing section convolutional_16
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_17
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_6
Parsing section convolutional_18
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_19
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_7
Parsing section convolutional_20
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_21
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_8
Parsing section convolutional_22
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_23
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_9
Parsing section convolutional_24
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_25
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_10
Parsing section convolutional_26
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_27
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_28
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_11
Parsing section convolutional_29
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_30
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_12
Parsing section convolutional_31
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_32
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_13
Parsing section convolutional_33
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_34
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_14
Parsing section convolutional_35
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_36
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_15
Parsing section convolutional_37
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_38
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_16
Parsing section convolutional_39
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_40
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_17
Parsing section convolutional_41
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_42
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_18
Parsing section convolutional_43
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_44
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_45
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_19
Parsing section convolutional_46
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_47
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_20
Parsing section convolutional_48
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_49
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_21
Parsing section convolutional_50
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_51
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_22
Parsing section convolutional_52
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_53
conv2d bn leaky (3, 3, 512, 1024)
Traceback (most recent call last):
  File "convert.py", line 262, in <module>
    _main(parser.parse_args())
  File "convert.py", line 143, in _main
    buffer=weights_file.read(weights_size * 4))
TypeError: buffer is too small for requested array

Please help. Thank you very much in advance.
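A common cause of this particular TypeError (not confirmed in this thread, just the usual suspect) is a truncated yolov3.weights download: convert.py then tries to read more float32 values than the file contains. The expected size can be derived from the log itself, namely 62,001,757 parameters × 4 bytes plus the 20-byte Darknet header (the "Weights Header: 0 2 0 ..." line above), so a quick check looks like this:

```python
# Sanity-check sketch for a possibly truncated yolov3.weights file.
# The path is an assumption - point it at the file the download script saved.
import os

weights_path = "yolov3.weights"
expected = 62_001_757 * 4 + 20   # float32 weights + Darknet header

actual = os.path.getsize(weights_path)
print(f"{actual} of {expected} bytes present")
if actual < expected:
    print("The file looks truncated - delete it and download the weights again.")
```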

Google Colab : [Errno 2] No such file or directory: '/content/gdrive/My'

Hi Anton!
Thank you for your amazing work man, it really helps me.
I managed to run the training process on my local machine, but since I only use a CPU, it took very long. Therefore, I tried to run the training process on Google Colab so it can benefit from the GPU.
I redid all the procedures in a Colab notebook, but on running Train_YOLO.py I got this error message:

Epoch 1/51
Traceback (most recent call last):
  File "Train_YOLO.py", line 157, in <module>
    callbacks=[logging, checkpoint])
  File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1418, in fit_generator
    initial_epoch=initial_epoch)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py", line 181, in fit_generator
    generator_output = next(output_generator)
  File "/usr/local/lib/python3.6/dist-packages/keras/utils/data_utils.py", line 709, in get
    six.reraise(*sys.exc_info())
  File "/usr/local/lib/python3.6/dist-packages/six.py", line 693, in reraise
    raise value
  File "/usr/local/lib/python3.6/dist-packages/keras/utils/data_utils.py", line 685, in get
    inputs = self.queue.get(block=True).get()
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 644, in get
    raise self._value
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/usr/local/lib/python3.6/dist-packages/keras/utils/data_utils.py", line 626, in next_sample
    return six.next(_SHARED_SEQUENCES[uid])
  File "/content/gdrive/My Drive/Evos AI Stuffs/TrainYourOwnYOLO/Utils/Train_Utils.py", line 116, in data_generator
    image, box = get_random_data(annotation_lines[i], input_shape, random=True)
  File "/content/gdrive/My Drive/Evos AI Stuffs/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo3/utils.py", line 40, in get_random_data
    image = Image.open(line[0])
  File "/usr/local/lib/python3.6/dist-packages/PIL/Image.py", line 2766, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '/content/gdrive/My'

I am pretty sure this is caused by the name of my directory, since it contains a space ("My Drive" is the default from Google Drive), so the call fp = builtins.open(filename, "rb") fails to get the correct path. Any suggestion or idea on how to solve this?
Thank you
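One workaround sketch for Colab (an assumption about what will work here, not a confirmed fix): because the annotation parser splits each line on whitespace, any path containing "My Drive" or "Evos AI Stuffs" gets cut at the first space. A symlink without spaces sidesteps this without touching the Drive layout:

```python
# Hypothetical Colab workaround: expose the project under a space-free path.
# The source path is taken from the traceback above; the alias name is arbitrary.
import os

src = "/content/gdrive/My Drive/Evos AI Stuffs/TrainYourOwnYOLO"
dst = "/content/TrainYourOwnYOLO"

if not os.path.islink(dst):
    os.symlink(src, dst)
```

After that, regenerate the training annotation file (for example by re-running Convert_to_YOLO_format.py) so every image path starts with /content/TrainYourOwnYOLO/, and launch Train_YOLO.py from the symlinked location.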

names deprecated (tensorflow_backend): Download_and_Convert_YOLO_weights.py

System information

  • What is the top-level directory of the model you are using: /Users/cpatterson/TrainYourOwnYOLO/
  • Have I written custom code (as opposed to using a stock example script provided in the repo): changed tensorflow-estimator version to 1.15.1
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac OS 10.14.6
  • TensorFlow version (use command below): v1.15.0-92-g5d80e1e8e6 1.15.2
  • Exact command to reproduce: python Download_and_Convert_YOLO_weights.py

Describe the problem

When I run Download_and_Convert_YOLO_weights.py, I'm getting several warnings about names being deprecated. For example, the first one is:

WARNING: Logging before flag parsing goes to stderr.
W0215 19:36:26.215769 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

Later, when I move on to the inference step, I don't get any results. So, I am curious if this could be the cause.
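These module_wrapper warnings come from TensorFlow 1.15's compatibility layer and only flag renamed symbols; they do not change what the converter computes, so they are unlikely to explain empty inference results (that is my reading, not a confirmed diagnosis). If you want to quiet them, a sketch assuming TensorFlow 1.x:

```python
# Silencing TF 1.x deprecation chatter - cosmetic only, behavior is unchanged.
import logging
import os

os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"   # hide C++-side INFO/WARNING lines (set before importing TF)
import tensorflow as tf

tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
logging.getLogger("tensorflow").setLevel(logging.ERROR)
```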

Source code / logs

Last login: Sat Feb 15 17:56:01 on ttys000
idc113-01:~ cpatterson$ cd TrainYourOwnYOLO/
idc113-01:TrainYourOwnYOLO cpatterson$ python3 -m venv env
idc113-01:TrainYourOwnYOLO cpatterson$ source env/bin/activate
(env) idc113-01:TrainYourOwnYOLO cpatterson$ pip install -r requirements.txt
Requirement already satisfied: setuptools>=41.0.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (45.1.0)
Requirement already satisfied: pip>=19.0.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 2)) (20.0.2)
Requirement already satisfied: absl-py==0.7.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 3)) (0.7.1)
Requirement already satisfied: astor==0.8.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 4)) (0.8.0)
Requirement already satisfied: attrs==19.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 5)) (19.1.0)
Requirement already satisfied: backcall==0.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 6)) (0.1.0)
Requirement already satisfied: bleach==3.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 7)) (3.1.0)
Requirement already satisfied: certifi==2019.6.16 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 8)) (2019.6.16)
Requirement already satisfied: chardet==3.0.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 9)) (3.0.4)
Requirement already satisfied: cycler==0.10.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 10)) (0.10.0)
Requirement already satisfied: decorator==4.4.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 11)) (4.4.0)
Requirement already satisfied: defusedxml==0.6.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 12)) (0.6.0)
Requirement already satisfied: progressbar2==3.46.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 13)) (3.46.1)
Requirement already satisfied: entrypoints==0.3 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 14)) (0.3)
Requirement already satisfied: gast==0.2.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 15)) (0.2.2)
Requirement already satisfied: google-pasta==0.1.7 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 16)) (0.1.7)
Requirement already satisfied: grpcio==1.22.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 17)) (1.22.0)
Requirement already satisfied: h5py==2.9.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 18)) (2.9.0)
Requirement already satisfied: idna==2.8 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 19)) (2.8)
Requirement already satisfied: ipykernel==5.1.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 20)) (5.1.1)
Requirement already satisfied: ipython==7.6.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 21)) (7.6.1)
Requirement already satisfied: ipython-genutils==0.2.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 22)) (0.2.0)
Requirement already satisfied: ipywidgets==7.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 23)) (7.5.0)
Requirement already satisfied: jedi==0.14.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 24)) (0.14.0)
Requirement already satisfied: Jinja2==2.10.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 25)) (2.10.1)
Requirement already satisfied: joblib==0.13.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 26)) (0.13.2)
Requirement already satisfied: jsonschema==3.0.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 27)) (3.0.1)
Requirement already satisfied: jupyter==1.0.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 28)) (1.0.0)
Requirement already satisfied: jupyter-client==5.3.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 29)) (5.3.0)
Requirement already satisfied: jupyter-console==6.0.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 30)) (6.0.0)
Requirement already satisfied: jupyter-core==4.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 31)) (4.5.0)
Requirement already satisfied: Keras==2.2.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 32)) (2.2.4)
Requirement already satisfied: Keras-Applications==1.0.8 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 33)) (1.0.8)
Requirement already satisfied: Keras-Preprocessing==1.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 34)) (1.1.0)
Requirement already satisfied: kiwisolver==1.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 35)) (1.1.0)
Requirement already satisfied: Markdown==3.1.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 36)) (3.1.1)
Requirement already satisfied: MarkupSafe==1.1.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 37)) (1.1.1)
Requirement already satisfied: matplotlib==3.0.3 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 38)) (3.0.3)
Requirement already satisfied: mistune==0.8.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 39)) (0.8.4)
Requirement already satisfied: mpmath==1.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 40)) (1.1.0)
Requirement already satisfied: nbconvert==5.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 41)) (5.5.0)
Requirement already satisfied: nbformat==4.4.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 42)) (4.4.0)
Requirement already satisfied: notebook==5.7.8 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 43)) (5.7.8)
Requirement already satisfied: numpy==1.16.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 44)) (1.16.4)
Requirement already satisfied: opencv-python==4.1.0.25 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 45)) (4.1.0.25)
Requirement already satisfied: pandas==0.24.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 46)) (0.24.2)
Requirement already satisfied: pandocfilters==1.4.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 47)) (1.4.2)
Requirement already satisfied: parso==0.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 48)) (0.5.0)
Requirement already satisfied: pexpect==4.7.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 49)) (4.7.0)
Requirement already satisfied: pickleshare==0.7.5 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 50)) (0.7.5)
Requirement already satisfied: Pillow==6.2.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 51)) (6.2.1)
Requirement already satisfied: prometheus-client==0.7.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 52)) (0.7.1)
Requirement already satisfied: prompt-toolkit==2.0.9 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 53)) (2.0.9)
Requirement already satisfied: protobuf==3.8.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 54)) (3.8.0)
Requirement already satisfied: ptyprocess==0.6.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 55)) (0.6.0)
Requirement already satisfied: Pygments==2.4.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 56)) (2.4.2)
Requirement already satisfied: pyparsing==2.4.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 57)) (2.4.0)
Requirement already satisfied: pyrsistent==0.15.3 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 58)) (0.15.3)
Requirement already satisfied: python-dateutil==2.8.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 59)) (2.8.0)
Requirement already satisfied: pytz==2019.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 60)) (2019.1)
Requirement already satisfied: PyYAML==5.1.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 61)) (5.1.1)
Requirement already satisfied: pyzmq==18.0.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 62)) (18.0.2)
Requirement already satisfied: qtconsole==4.5.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 63)) (4.5.1)
Requirement already satisfied: requests==2.22.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 64)) (2.22.0)
Requirement already satisfied: scikit-learn==0.21.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 65)) (0.21.2)
Requirement already satisfied: scipy==1.3.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 66)) (1.3.0)
Requirement already satisfied: Send2Trash==1.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 67)) (1.5.0)
Requirement already satisfied: six==1.12.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 68)) (1.12.0)
Requirement already satisfied: sklearn==0.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 69)) (0.0)
Requirement already satisfied: sympy==1.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 70)) (1.4)
Requirement already satisfied: tensorboard==1.15.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 71)) (1.15.0)
Requirement already satisfied: tensorflow==1.15.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 72)) (1.15.2)
Collecting tensorflow-estimator==1.15.1
Using cached tensorflow_estimator-1.15.1-py2.py3-none-any.whl (503 kB)
Requirement already satisfied: termcolor==1.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 74)) (1.1.0)
Requirement already satisfied: terminado==0.8.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 75)) (0.8.2)
Requirement already satisfied: testpath==0.4.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 76)) (0.4.2)
Requirement already satisfied: tornado==6.0.3 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 77)) (6.0.3)
Requirement already satisfied: traitlets==4.3.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 78)) (4.3.2)
Requirement already satisfied: urllib3==1.25.3 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 79)) (1.25.3)
Requirement already satisfied: wcwidth==0.1.7 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 80)) (0.1.7)
Requirement already satisfied: webencodings==0.5.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 81)) (0.5.1)
Requirement already satisfied: Werkzeug==0.15.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 82)) (0.15.4)
Requirement already satisfied: widgetsnbextension==3.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 83)) (3.5.0)
Requirement already satisfied: wrapt==1.11.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 84)) (1.11.2)
Requirement already satisfied: python-utils>=2.3.0 in ./env/lib/python3.6/site-packages (from progressbar2==3.46.1->-r requirements.txt (line 13)) (2.3.0)
Requirement already satisfied: appnope; sys_platform == "darwin" in ./env/lib/python3.6/site-packages (from ipython==7.6.1->-r requirements.txt (line 21)) (0.1.0)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in ./env/lib/python3.6/site-packages (from tensorboard==1.15.0->-r requirements.txt (line 71)) (0.34.2)
Requirement already satisfied: opt-einsum>=2.3.2 in ./env/lib/python3.6/site-packages (from tensorflow==1.15.2->-r requirements.txt (line 72)) (3.1.0)
Installing collected packages: tensorflow-estimator
Attempting uninstall: tensorflow-estimator
Found existing installation: tensorflow-estimator 1.15.0
Uninstalling tensorflow-estimator-1.15.0:
Successfully uninstalled tensorflow-estimator-1.15.0
Successfully installed tensorflow-estimator-1.15.1
(env) idc113-01:TrainYourOwnYOLO cpatterson$ cd 1_Image_Annotation
(env) idc113-01:1_Image_Annotation cpatterson$ python Convert_to_YOLO_format.py
(env) idc113-01:1_Image_Annotation cpatterson$ cd ../2_Training
(env) idc113-01:2_Training cpatterson$ python Download_and_Convert_YOLO_weights.py
99% (2479194 of 2480070) |################################################################################################################## | Elapsed Time: 0:00:39 ETA: 0:00:00Using TensorFlow backend.
Loading weights.
Weights Header: 0 2 0 [32013312]
Parsing Darknet config.
Creating Keras model.
WARNING: Logging before flag parsing goes to stderr.
W0215 19:36:26.215769 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

W0215 19:36:26.223051 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

Parsing section net_0
Parsing section convolutional_0
conv2d bn leaky (3, 3, 3, 32)
W0215 19:36:26.230928 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

W0215 19:36:26.249578 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.

W0215 19:36:26.249835 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

W0215 19:36:26.250049 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:186: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2020-02-15 19:36:26.250795: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-02-15 19:36:26.283600: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fa56c5b9370 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-02-15 19:36:26.283625: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
W0215 19:36:26.286991 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:190: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

W0215 19:36:26.287442 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.

W0215 19:36:26.327128 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.

W0215 19:36:26.413047 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.

W0215 19:36:26.468871 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:133: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.

Parsing section convolutional_1
conv2d bn leaky (3, 3, 32, 64)
Parsing section convolutional_2
conv2d bn leaky (1, 1, 64, 32)
Parsing section convolutional_3
conv2d bn leaky (3, 3, 32, 64)
Parsing section shortcut_0
Parsing section convolutional_4
conv2d bn leaky (3, 3, 64, 128)
Parsing section convolutional_5
conv2d bn leaky (1, 1, 128, 64)
Parsing section convolutional_6
conv2d bn leaky (3, 3, 64, 128)
Parsing section shortcut_1
Parsing section convolutional_7
conv2d bn leaky (1, 1, 128, 64)
Parsing section convolutional_8
conv2d bn leaky (3, 3, 64, 128)
Parsing section shortcut_2
Parsing section convolutional_9
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_10
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_11
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_3
Parsing section convolutional_12
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_13
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_4
Parsing section convolutional_14
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_15
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_5
Parsing section convolutional_16
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_17
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_6
Parsing section convolutional_18
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_19
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_7
Parsing section convolutional_20
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_21
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_8
Parsing section convolutional_22
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_23
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_9
Parsing section convolutional_24
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_25
conv2d bn leaky (3, 3, 128, 256)
Parsing section shortcut_10
Parsing section convolutional_26
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_27
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_28
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_11
Parsing section convolutional_29
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_30
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_12
Parsing section convolutional_31
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_32
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_13
Parsing section convolutional_33
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_34
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_14
Parsing section convolutional_35
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_36
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_15
Parsing section convolutional_37
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_38
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_16
Parsing section convolutional_39
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_40
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_17
Parsing section convolutional_41
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_42
conv2d bn leaky (3, 3, 256, 512)
Parsing section shortcut_18
Parsing section convolutional_43
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_44
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_45
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_19
Parsing section convolutional_46
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_47
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_20
Parsing section convolutional_48
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_49
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_21
Parsing section convolutional_50
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_51
conv2d bn leaky (3, 3, 512, 1024)
Parsing section shortcut_22
Parsing section convolutional_52
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_53
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_54
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_55
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_56
conv2d bn leaky (1, 1, 1024, 512)
Parsing section convolutional_57
conv2d bn leaky (3, 3, 512, 1024)
Parsing section convolutional_58
conv2d linear (1, 1, 1024, 255)
Parsing section yolo_0
Parsing section route_0
Parsing section convolutional_59
conv2d bn leaky (1, 1, 512, 256)
Parsing section upsample_0
W0215 19:37:46.677906 4600120768 module_wrapper.py:139] From /Users/cpatterson/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:2018: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.

Parsing section route_1
Concatenating route layers: [<tf.Tensor 'up_sampling2d_1/ResizeNearestNeighbor:0' shape=(?, ?, ?, 256) dtype=float32>, <tf.Tensor 'add_19/add:0' shape=(?, ?, ?, 512) dtype=float32>]
Parsing section convolutional_60
conv2d bn leaky (1, 1, 768, 256)
Parsing section convolutional_61
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_62
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_63
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_64
conv2d bn leaky (1, 1, 512, 256)
Parsing section convolutional_65
conv2d bn leaky (3, 3, 256, 512)
Parsing section convolutional_66
conv2d linear (1, 1, 512, 255)
Parsing section yolo_1
Parsing section route_2
Parsing section convolutional_67
conv2d bn leaky (1, 1, 256, 128)
Parsing section upsample_1
Parsing section route_3
Concatenating route layers: [<tf.Tensor 'up_sampling2d_2/ResizeNearestNeighbor:0' shape=(?, ?, ?, 128) dtype=float32>, <tf.Tensor 'add_11/add:0' shape=(?, ?, ?, 256) dtype=float32>]
Parsing section convolutional_68
conv2d bn leaky (1, 1, 384, 128)
Parsing section convolutional_69
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_70
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_71
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_72
conv2d bn leaky (1, 1, 256, 128)
Parsing section convolutional_73
conv2d bn leaky (3, 3, 128, 256)
Parsing section convolutional_74
conv2d linear (1, 1, 256, 255)
Parsing section yolo_2


Layer (type)                 Output Shape         Param #     Connected to
[Full model.summary() table elided: it lists the YOLOv3 graph built above, an input layer followed by 75 Conv2D, 72 BatchNormalization, 72 LeakyReLU, 23 Add (shortcut), 5 ZeroPadding2D, 2 UpSampling2D, and 2 Concatenate layers; the Output Shape column is truncated in the original log.]
Total params: 62,001,757
Trainable params: 61,949,149
Non-trainable params: 52,608


None
Saved Keras model to yolo.h5
Read 62001757 of 62001757.0 from Darknet weights.

AttributeError: 'Model' object has no attribute '_get_distribution_strategy'

When I am running python Train_YOLO.py, I am getting the following error:
File "C:\Users\ANJANA~1\Envs\custom\lib\site-packages\tensorflow_core\python\keras\callbacks.py", line 1532, in set_model
self.log_dir, self.model._get_distribution_strategy()) # pylint: disable=protected-access
AttributeError: 'Model' object has no attribute '_get_distribution_strategy'

What has to be done?
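
A workaround sketch (my assumption, not from the original report): this error typically shows up when a model built with standalone Keras is handed to the TensorBoard callback from tf.keras, which expects a tf.keras model and calls _get_distribution_strategy on it. Importing the callback from the same Keras package that built the model avoids that call; the cleaner fix is installing the TensorFlow/Keras versions pinned in requirements.txt.

# Minimal sketch: use the TensorBoard callback from standalone Keras,
# matching the package that created the model (assumes keras==2.2.x is installed).
from keras.callbacks import TensorBoard  # not tensorflow.keras.callbacks

logging = TensorBoard(log_dir="logs")  # "logs" is a placeholder directory
# ...then pass it along in model.fit(..., callbacks=[logging, checkpoint]) as Train_YOLO.py already does.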

Training images with zero annotations

What would be the best way to modify the code to support images with zero annotations?
When you export the annotations as csv from Vott it makes a new folder with All the images you checked but it leaves images without annotations absent from the csv. Which means the training script will simply ignore them.

I was thinking the "Convert_to_YOLO_format.py" would be the best place to add in the images without annotations by making it check the folder for all the images that exist and than checking to see if they are referenced in the csv and if not add a row without bounding boxes.

Do you think this would be the best way? Or will the training script fail if an image with no bounding boxes is provided in the dataset?
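
One possible sketch along those lines (under the assumption that each line of the YOLO annotation file starts with the image path, optionally followed by space-separated boxes, and that paths contain no spaces). Whether the training generator actually tolerates box-less lines is worth verifying first.

import os

# Hypothetical locations -- adjust to your own export folder and YOLO annotation file.
image_dir = "Data/Source_Images/Training_Images/vott-csv-export"
yolo_file = os.path.join(image_dir, "data_train.txt")

# Image paths already referenced in the YOLO annotation file.
with open(yolo_file) as f:
    annotated = {line.split()[0] for line in f if line.strip()}

# Append every unreferenced image as a line with no bounding boxes (a negative sample).
with open(yolo_file, "a") as f:
    for name in os.listdir(image_dir):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        full_path = os.path.abspath(os.path.join(image_dir, name))
        if full_path not in annotated:
            f.write(full_path + "\n")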

.weight file

Hey,
I've got my .h5 file which works perfectly!
But one question ... is it possible to convert the .h5 to a .weights file?
Thank you ;)

Convert to YOLO format file error

Hey, when I run the Convert_to_YOLO_format.py I get this:

Traceback (most recent call last):
File "C:\Users\seba5\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\indexes\base.py", line 2656, in get_loc
return self._engine.get_loc(key)
File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1601, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1608, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'label'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "Convert_to_YOLO_format.py", line 51, in <module>
labels = multi_df['label'].unique()
File "C:\Users\seba5\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\frame.py", line 2927, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\Users\seba5\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\indexes\base.py", line 2658, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1601, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1608, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'label'
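
A quick diagnostic (not from the thread itself): the KeyError means the exported CSV simply has no 'label' column, which usually points at a stale or differently formatted VoTT export. Printing the header makes that obvious; the path below is a placeholder.

import pandas as pd

# Placeholder path -- adjust to your own vott-csv-export folder.
csv_path = "Data/Source_Images/Training_Images/vott-csv-export/Annotations-export.csv"

multi_df = pd.read_csv(csv_path)
# Expected columns: image, xmin, ymin, xmax, ymax, label
print(multi_df.columns.tolist())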

E-mail alert & Optimal time of person activity support in the script?

Hi Anton, I am stuck on these issues; can you please help me to overcome them? Please point me to any link that addresses solutions to these problems.

  1. A script that generates an e-mail alert when any unusual activity has occurred.
  2. Find the optimal time of a person doing an activity, i.e. filling an empty box.

Please help.
Thanks, Anton, for this helpful code.
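
For the e-mail alert part, a minimal sketch using only the standard library (the SMTP host, the credentials, and the definition of "unusual activity" are placeholders; the detection loop that decides when to call it is not shown):

import smtplib
from email.message import EmailMessage

def send_alert(subject, body, sender, recipient, smtp_host, smtp_port=587, password=None):
    # Build and send a plain-text alert e-mail over SMTP with STARTTLS.
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    with smtplib.SMTP(smtp_host, smtp_port) as server:
        server.starttls()
        if password is not None:
            server.login(sender, password)
        server.send_message(msg)

# Example (placeholder credentials): call this whenever your detection logic
# decides that unusual activity has occurred.
# send_alert("YOLO alert", "Unusual activity detected", "me@example.com",
#            "you@example.com", "smtp.example.com", password="***")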

Tensorflow Issue

I am writing this to help other people who may run into the same issues. After spending a few hours, I resolved them. To begin with, thank you, Anton, for your incredible repo.

Problem 1: I had Python 3.8.0 installed on my system. When I was attempting "pip install tensorflow==1.15.2", it said "no module named tensorflow".

Fix:
To resolve this, I had to switch to Python 3.6.

Problem 2: After fixing the above, I had a very strange error when running Minimal_Example.py. The error was: "from google.protobuf.pyext import _message
ImportError: DLL load failed: The specified procedure could not be found."

Fix:
pip uninstall protobuf
pip install protobuf==3.6.0

Hope this is helpful.
Mostafa

Errors when computing

Hi Anton,

I am still a beginner when it comes to programming. I have read through what you have written and am confident that I have followed your instructions word for word. I am having trouble when running the Train_YOLO file. The training starts with a bunch of errors, and it stops before epoch 1/51 finishes computing.
I need your help to figure out where it is going wrong.

These are the major errors/warnings that I suspect might be causing the failure:

  1. There are a bunch of warnings that show deprecated tf functions (I don't think this is the problem)

  2. Some weights are not loaded since their dimensions do not match
    C:\Users\Dr Diban\TrainYourOwnYOLO\env\lib\site-packages\keras\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_59 due to mismatch in shape ((1, 1, 1024, 18) vs (255, 1024, 1, 1)).
    weight_values[i].shape))
    C:\Users\Dr Diban\TrainYourOwnYOLO\env\lib\site-packages\keras\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_59 due to mismatch in shape ((18,) vs (255,)).
    weight_values[i].shape))
    C:\Users\Dr Diban\TrainYourOwnYOLO\env\lib\site-packages\keras\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_67 due to mismatch in shape ((1, 1, 512, 18) vs (255, 512, 1, 1)).
    weight_values[i].shape))
    C:\Users\Dr Diban\TrainYourOwnYOLO\env\lib\site-packages\keras\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_67 due to mismatch in shape ((18,) vs (255,)).
    weight_values[i].shape))
    C:\Users\Dr Diban\TrainYourOwnYOLO\env\lib\site-packages\keras\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_75 due to mismatch in shape ((1, 1, 256, 18) vs (255, 256, 1, 1)).
    weight_values[i].shape))
    C:\Users\Dr Diban\TrainYourOwnYOLO\env\lib\site-packages\keras\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2d_75 due to mismatch in shape ((18,) vs (255,)).
    weight_values[i].shape))

  3. Pillow error - I also used your cat pictures for training but still get the errors below:
    C:\Users\Dr Diban\TrainYourOwnYOLO\env\lib\site-packages\PIL\Image.py:989: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images

  4. Remapper error - E tensorflow/core/grappler/optimizers/meta_optimizer.cc:533] remapper failed: Invalid argument: Subshape must have computed start >= end since stride is negative, but is 0 and 2 (computed from start 0 and end 9223372036854775807 over shape with rank 2 and stride-1)

  5. Finally, the memory error - W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 177209344 exceeds 10% of system memory.

Hope that you can shed some light.

Having Problem Training

Dear Anton,
I followed your well written tutorial and I seem to have an issue with training. When I want to start training, the following error messages appear (I'm using Python 3.7.6):

C:\yolo\2_Training>python Train_YOLO.py
Using TensorFlow backend.
2020-01-17 14:46:20.007594: W tensorflow/stream_executor/platform/default/dso_lo
ader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64
_100.dll not found
2020-01-17 14:46:20.014193: I tensorflow/stream_executor/cuda/cudart_stub.cc:29]
Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING: Logging before flag parsing goes to stderr.
W0117 14:46:22.321202 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:95: The name tf.reset_default_graph is deprecated. Please use tf.compat.v
1.reset_default_graph instead.

W0117 14:46:22.321202 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:98: The name tf.placeholder_with_default is deprecated. Please use tf.com
pat.v1.placeholder_with_default instead.

W0117 14:46:22.321202 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:102: The name tf.get_default_graph is deprecated. Please use tf.compat.v1
.get_default_graph instead.

W0117 14:46:22.336827 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.place
holder instead.

W0117 14:46:22.336827 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.unif
orm instead.

W0117 14:46:22.352451 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.
v1.get_default_session instead.

W0117 14:46:22.352451 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.Confi
gProto instead.

W0117 14:46:22.352451 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:186: The name tf.Session is deprecated. Please use tf.compat.v1.Session i
nstead.

2020-01-17 14:46:22.363361: I tensorflow/core/platform/cpu_feature_guard.cc:142]
Your CPU supports instructions that this TensorFlow binary was not compiled to
use: AVX2
2020-01-17 14:46:22.372660: I tensorflow/stream_executor/platform/default/dso_lo
ader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-01-17 14:46:22.703434: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1
618] Found device 0 with properties:
name: GeForce GTX 850M major: 5 minor: 0 memoryClockRate(GHz): 0.9015
pciBusID: 0000:0a:00.0
2020-01-17 14:46:22.712369: W tensorflow/stream_executor/platform/default/dso_lo
ader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64
_100.dll not found
2020-01-17 14:46:22.720114: W tensorflow/stream_executor/platform/default/dso_lo
ader.cc:55] Could not load dynamic library 'cublas64_100.dll'; dlerror: cublas64
_100.dll not found
2020-01-17 14:46:22.730215: W tensorflow/stream_executor/platform/default/dso_lo
ader.cc:55] Could not load dynamic library 'cufft64_100.dll'; dlerror: cufft64_1
00.dll not found
2020-01-17 14:46:22.740197: W tensorflow/stream_executor/platform/default/dso_lo
ader.cc:55] Could not load dynamic library 'curand64_100.dll'; dlerror: curand64
_100.dll not found
2020-01-17 14:46:22.748358: W tensorflow/stream_executor/platform/default/dso_lo
ader.cc:55] Could not load dynamic library 'cusolver64_100.dll'; dlerror: cusolv
er64_100.dll not found
2020-01-17 14:46:22.757595: W tensorflow/stream_executor/platform/default/dso_lo
ader.cc:55] Could not load dynamic library 'cusparse64_100.dll'; dlerror: cuspar
se64_100.dll not found
2020-01-17 14:46:22.766720: W tensorflow/stream_executor/platform/default/dso_lo
ader.cc:55] Could not load dynamic library 'cudnn64_7.dll'; dlerror: cudnn64_7.d
ll not found
2020-01-17 14:46:22.777468: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1
641] Cannot dlopen some GPU libraries. Please make sure the missing libraries me
ntioned above are installed properly if you would like to use GPU. Follow the gu
ide at https://www.tensorflow.org/install/gpu for how to download and setup the
required libraries for your platform.
Skipping registering GPU devices...
2020-01-17 14:46:22.959342: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1
159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-17 14:46:22.965080: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1
165] 0
2020-01-17 14:46:22.969206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1
178] 0: N
W0117 14:46:22.972037 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:190: The name tf.global_variables is deprecated. Please use tf.compat.v1.
global_variables instead.

W0117 14:46:22.972037 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.com
pat.v1.is_variable_initialized instead.

W0117 14:46:22.972037 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:206: The name tf.variables_initializer is deprecated. Please use tf.compa
t.v1.variables_initializer instead.

W0117 14:46:22.987680 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat
.v1.nn.fused_batch_norm instead.

W0117 14:46:26.895240 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:2018: The name tf.image.resize_nearest_neighbor is deprecated. Please use
tf.compat.v1.image.resize_nearest_neighbor instead.

Create YOLOv3 model with 9 anchors and 1 classes.
C:\Users\Matthias\AppData\Local\Programs\Python\Python37\lib\site-packages\keras
\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2
d_59 due to mismatch in shape ((1, 1, 1024, 18) vs (255, 1024, 1, 1)).
weight_values[i].shape))
C:\Users\Matthias\AppData\Local\Programs\Python\Python37\lib\site-packages\keras
\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2
d_59 due to mismatch in shape ((18,) vs (255,)).
weight_values[i].shape))
C:\Users\Matthias\AppData\Local\Programs\Python\Python37\lib\site-packages\keras
\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2
d_67 due to mismatch in shape ((1, 1, 512, 18) vs (255, 512, 1, 1)).
weight_values[i].shape))
C:\Users\Matthias\AppData\Local\Programs\Python\Python37\lib\site-packages\keras
\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2
d_67 due to mismatch in shape ((18,) vs (255,)).
weight_values[i].shape))
C:\Users\Matthias\AppData\Local\Programs\Python\Python37\lib\site-packages\keras
\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2
d_75 due to mismatch in shape ((1, 1, 256, 18) vs (255, 256, 1, 1)).
weight_values[i].shape))
C:\Users\Matthias\AppData\Local\Programs\Python\Python37\lib\site-packages\keras
\engine\saving.py:1140: UserWarning: Skipping loading of weights for layer conv2
d_75 due to mismatch in shape ((18,) vs (255,)).
weight_values[i].shape))
Load weights C:\yolo\2_Training\src\keras_yolo3\yolo.h5.
Freeze the first 249 layers of total 252 layers.
W0117 14:46:31.845515 6352 module_wrapper.py:139] From C:\Users\Matthias\AppDat
a\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_back
end.py:1521: The name tf.log is deprecated. Please use tf.math.log instead.

W0117 14:46:31.851520 6352 deprecation.py:323] From C:\Users\Matthias\AppData\L
ocal\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_backend
.py:3080: where (from tensorflow.python.ops.array_ops) is deprecated and will be
removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Traceback (most recent call last):
File "Train_YOLO.py", line 184, in
lines = ChangeToOtherMachine(lines, remote_machine="")
File "C:\yolo\Utils\Train_Utils.py", line 238, in ChangeToOtherMachine
suffix = (file.split(repo))[1]
IndexError: list index out of range
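
For context, the IndexError means file.split(repo) returned a single element, i.e. the repository folder name was not found in the annotation paths (which can happen when the repo folder is renamed, here apparently to "yolo"). A defensive sketch of the idea behind that helper, under my own assumptions about its intent rather than the maintainer's actual fix:

# Minimal sketch: only rewrite paths that actually contain the repo folder name.
def change_to_other_machine(lines, repo="TrainYourOwnYOLO", remote_machine=""):
    changed = []
    for line in lines:
        parts = line.split(repo, 1)
        if len(parts) < 2:
            # Repo name not in the path (e.g. renamed folder): keep the line
            # unchanged instead of raising IndexError.
            changed.append(line)
            continue
        prefix = remote_machine if remote_machine else parts[0]
        changed.append(prefix + repo + parts[1])
    return changed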

TypeError

(screenshot attached)
Getting this error while training after following all the instructions in the Medium article.
I am training with six classes instead of one; how do I bypass this?

jpg/png and no output for some test images

Hey, thanks for sharing your work. It is very cool! I enjoyed the output.
A couple of questions though:
1- It seems like the test images have to be in jpg format, don't they?
2- When for some test images there is no corresponding output in the Test_Image_Detection_Results directory, what does that mean? That YOLO didn't find a human face in them?
3- Do you think I can use it to distinguish real vs. manipulated faces? (two-class classification)
Best,

FileNotFoundError

Hi, I am having an issue in the training part.
I had issues with the download-and-convert pre-trained weights file, so I tried to run it in Spyder and it worked, i.e. the .h5 file is created now. But now there is the following error when I run Train_YOLO.py, although the file 2Q__.jpg exists in the vott-csv-export folder:

FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/me/Desktop/TrainYourOwnYOLO/-master/Data/Source_Images/Training_Images/vott-csv-export/2Q__.jpg'

And when I tried to run the same Train_YOLO file in Python 3.7, it gives the following error:
AttributeError: module 'keras.backend' has no attribute 'control_flow_ops'

EC2 AWS: Keras: ValueError: Invalid backend.

Before filing a report consider this question:

Have you followed the instructions exactly (word by word)?

Once you are familiar with the code, you're welcome to modify it. Please only continue to file a bug report if you encounter an issue with the provided code and after having followed the instructions.

If you have followed the instructions exactly and would still like to file a bug or make a feature requests please follow the steps below.

  1. It must be a bug, a feature request, or a significant problem with the documentation (for small docs fixes please send a PR instead).
  2. The form below must be filled out.

System information

  • What is the top-level directory of the model you are using:
    /home/
  • Have I written custom code (as opposed to using a stock example script provided in the repo): Yes, 2 lines were modified, but those are in Train_YOLO.py, and the error is happening during the weights download process. Code modified: from PIL import Image, ImageFile; ImageFile.LOAD_TRUNCATED_IMAGES = True
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    Deep Learning AMI (Ubuntu 18.04) Version 26.0
  • TensorFlow version (use command below):
    tensorflow==1.15.0
  • CUDA/cuDNN version:
    Built on Sat_Aug_25_21:08:01_CDT_2018
    Cuda compilation tools, release 10.0, V10.0.130
  • GPU model and memory:
    High-performance NVIDIA K80 GPUs, each with 2,496 parallel processing cores and 12GiB of GPU memory
  • Exact command to reproduce:
    TrainYourOwnYOLO/2_Training$ python Download_and_Convert_YOLO_weights.py
    You can obtain the TensorFlow version with

python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
v1.15.0-rc3-22-g590d6ee 1.15.0

Describe the problem

Describe the problem clearly here. Be sure to convey here why it's a bug or a feature request.

I first tried to run the pre-trained model, and the training locally in Windows with Linux subsystem, and both worked fine! Awesome job, thank you so much for sharing!
The problem happened when I tried to implement the YOLO in AWS inside of an EC2 instance.
I followed the instructions step by step, but when I got to the point when I have to download the pre-trained model, Keras failed to load the backend.

Source code / logs

user:~/YOLOV3/TrainYourOwnYOLO/2_Training$ python Download_and_Convert_YOLO_weights.py

99% (2477235 of 2480070) |################################ | Elapsed Time: 0:00:30 ETA: 0:00:00Traceback (most recent call last):
File "convert.py", line 14, in
from keras import backend as K
File "/home/ubuntu/YOLOV3/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/init.py", line 3, in
from . import utils
File "/home/ubuntu/YOLOV3/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/utils/init.py", line 6, in
from . import conv_utils
File "/home/ubuntu/YOLOV3/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/utils/conv_utils.py", line 9, in
from .. import backend as K
File "/home/ubuntu/YOLOV3/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/init.py", line 1, in
from .load_backend import epsilon
File "/home/ubuntu/YOLOV3/TrainYourOwnYOLO/env/lib/python3.6/site-packages/keras/backend/load_backend.py", line 101, in
raise ValueError('Invalid backend. Missing required entry : ' + e)
ValueError: Invalid backend. Missing required entry : placeholder

Training Stopped Prematurely

Before filing a report consider the following two questions:

Have you followed the instructions exactly (word by word)? Yes.

Have you checked the troubleshooting section (https://github.com/AntonMu/TrainYourOwnYOLO#troubleshooting)? Yes.

Once you are familiar with the code, you're welcome to modify it. Please only continue to file a bug report if you encounter an issue with the provided code and after having followed the instructions.

If you have followed the instructions exactly, couldn't solve your problem with the provided troubleshooting tips and would still like to file a bug or make a feature requests please follow the steps below.

  1. It must be a bug, a feature request, or a significant problem with the documentation (for small docs fixes please send a PR instead).
  2. The form below must be filled out.

System information

  • What is the top-level directory of the model you are using: I didn't understand this one. C:\Users\anjana ouseph\Desktop\Train\custom\TrainYourOwnYOLO\2_Training - is this what you mean? I have created a virtual env.
  • Have I written custom code (as opposed to using a stock example script provided in the repo): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow version (use command below): The one in requirements.txt
  • CUDA/cuDNN version: I don't have CUDA installed
  • GPU model and memory: NVIDIA GEforce 940MX,4GB graphics card RAM 12GB
  • Exact command to reproduce: python detector.py

You can obtain the TensorFlow version with

python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"

Describe the problem

My training stopped at 78/102 epochs; it stopped prematurely. When I tried running the detector using the saved model, I got okayish results, but not that great. What is the issue here? I am running it on CPU as I don't have CUDA installed. It has been training for one whole day.
(screenshot attached: yolo)

Source code / logs

90/90 [==============================] - 1881s 21s/step - loss: 13.1176 - val_loss: 16.9962
Epoch 00078: early stopping

Subshape warning message

Hi @AntonMu
Thank you for responding to my previous issue about the Google Colab problem. Now I managed to run the train.py on Colab, but it is showing the messages below:
Epoch 1/51
2020-01-14 16:53:26.300343: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] shape_optimizer failed: Invalid argument: Subshape must have computed start >= end since stride is negative, but is 0 and 2 (computed from start 0 and end 9223372036854775807 over shape with rank 2 and stride-1)
2020-01-14 16:53:26.329475: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] remapper failed: Invalid argument: Subshape must have computed start >= end since stride is negative, but is 0 and 2 (computed from start 0 and end 9223372036854775807 over shape with rank 2 and stride-1)
2020-01-14 16:53:26.491815: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] layout failed: Invalid argument: Subshape must have computed start >= end since stride is negative, but is 0 and 2 (computed from start 0 and end 9223372036854775807 over shape with rank 2 and stride-1)
2020-01-14 16:53:26.671226: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] shape_optimizer failed: Invalid argument: Subshape must have computed start >= end since stride is negative, but is 0 and 2 (computed from start 0 and end 9223372036854775807 over shape with rank 2 and stride-1)
2020-01-14 16:53:26.696333: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] remapper failed: Invalid argument: Subshape must have computed start >= end since stride is negative, but is 0 and 2 (computed from start 0 and end 9223372036854775807 over shape with rank 2 and stride-1)
What does this mean? Is it related to my image shape?

Not running on GPU

Hello,

I am not able to run this code on GPU. These are my specs:

  1. Windows 10
  2. Tensorflow 1.15
  3. Nvidia GeForce GTX1070
  4. CUDA 9 ( this is the only thing that allowed me to run the code)
  5. cudnn-9.0-windows10-x64-v7.6.3.30
  6. Python 3.7

print(tf.test.is_gpu_available())
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
from keras import backend as K
print(K.tensorflow_backend._get_available_gpus())
print(tf.__version__)

Gives

False
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 10565893046328515488
]
[]
1.15.0

Any suggestions?

Thank you

AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'

Before filing a report consider the following two questions:

Have you followed the instructions exactly (word by word)?

Yes. I have installed the required libraries from requirements_cpu.txt.
I am not able to start training.

Have you checked the troubleshooting section?

Yes


System information

  • What is the top-level directory of the model you are using:

  • Have I written custom code (as opposed to using a stock example script provided in the repo):
    No, just extracted the code from the files into a Jupyter Notebook.

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    MacBook, macOS Catalina - version 10.15.3

  • TensorFlow version (use command below):
    v1.14.0-rc1-22-gaf24dc91b5 1.14.0

  • CUDA/cuDNN version:
    Not using CUDA

  • GPU model and memory:
    NA

  • Exact command to reproduce:

print(weights_path)
"/Documents/PythonWorkSpace/ObjectDetection/Yolov3/TrainYourOwnYOLO-TF2/Modified_code/src/keras_yolo3/yolo.h5"

model = create_model(input_shape, anchors, num_classes,
        freeze_body=2, weights_path = weights_path) # make sure you know what you freeze

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-18-2555863472d6> in <module>
      4 else:
      5     model = create_model(input_shape, anchors, num_classes,
----> 6         freeze_body=2, weights_path = weights_path) # make sure you know what you freeze
      7 
      8 log_dir_time = os.path.join(log_dir,'{}'.format(int(time())))

<ipython-input-14-dad9bb0414b7> in create_model(input_shape, anchors, num_classes, load_pretrained, freeze_body, weights_path)
     28             weights_path='keras_yolo3/model_data/yolo_weights.h5'):
     29     '''create the training model'''
---> 30     K.clear_session() # get a new session
     31     image_input = Input(shape=(None, None, 3))
     32     h, w = input_shape

/opt/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py in clear_session()
     86     global _SESSION
     87     global _GRAPH_LEARNING_PHASES
---> 88     tf.reset_default_graph()
     89     reset_uids()
     90     _SESSION = None

AttributeError: module 'tensorflow' has no attribute 'reset_default_graph'

Describe the problem

I keep getting an error while creating the model. I created a new environment to run the code and solve the mentioned issue, uninstalled the existing libraries, and installed the versions mentioned in the repo.
The error still exists.
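
For what it's worth, tf.reset_default_graph only exists in the TensorFlow 1.x API, so this usually means TensorFlow 2.x is installed while the repo (and standalone Keras 2.2.4) expects 1.x. The clean fix is installing the version pinned in the repo; a quick diagnostic monkey-patch sketch (my assumption, and other 1.x names may still be missing afterwards) would be:

import tensorflow as tf

# tf.reset_default_graph was removed from the 2.x API; alias it from compat.v1
# before importing Keras so keras.backend can find it again.
if not hasattr(tf, "reset_default_graph"):
    tf.reset_default_graph = tf.compat.v1.reset_default_graph

Even with that alias, standalone Keras 2.2.4 relies on other 1.x names, so downgrading to the pinned tensorflow version remains the reliable route.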

TF2.0

Hi,

Any plan to update to TF2.0 in early future?

Question. Using Hyperdash or Weights & Biases

Hey there, I'm wondering how to hook the Hyperdash or Weights & Biases apps into the code; specifically, I'm having trouble finding out how to log the loss and accuracy to these apps while training is running. Currently I'm only able to instantiate Hyperdash inside the Train_YOLO.py file, but it just records the prompt output.
Any comment would be of great help.
Thanks a lot
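
A sketch for the Weights & Biases side (assuming the wandb package and its standard Keras callback; Hyperdash would need its own callback): once a Keras callback is appended to the callbacks list that Train_YOLO.py already passes to fit, the loss and validation loss are logged automatically.

import wandb
from wandb.keras import WandbCallback

# Start a run once at the top of the training script (project name is a placeholder).
wandb.init(project="TrainYourOwnYOLO")

# Then extend the existing callback list, e.g.:
# callbacks = [logging, checkpoint, reduce_lr, early_stopping, WandbCallback()]
# model.fit_generator(..., callbacks=callbacks)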

Speed!

I've got a question. I launch the script Detector.py from PHP, and it takes very long for only one file (jpeg). It's not because of the detection itself but because of loading the model, classes, etc.: "model, anchors, and classes loaded in 5.94sec". Is there a way to launch the detection for one file faster?
I have a GPU ;)

Thanks so much
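
One common pattern (a sketch, not something shipped with this repo): keep the model inside a long-running Python process and let PHP call it over HTTP, so the ~6 s model/anchors/classes load happens once instead of once per request. Assuming Flask is installed and yolo is an already-constructed YOLO object, exactly as Detector.py builds it:

from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
yolo = None  # build once at startup, constructed the same way Detector.py does

@app.route("/detect", methods=["POST"])
def detect():
    # PHP POSTs {"path": "/path/to/file.jpg"}; the already-loaded model answers immediately.
    image = Image.open(request.json["path"])
    prediction, _ = yolo.detect_image(image)
    # Cast to plain floats so the boxes are JSON-serialisable.
    return jsonify([[float(v) for v in box] for box in prediction])

# app.run(port=5000)  # start once; every later request skips the model load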

Error while running the TrainYOLO.py file

When I am running the Train_YOLO.py file, it throws an error at line 217, i.e. callbacks=[logging, checkpoint]. Here is the attached image regarding the error.

Screenshot 2020-01-26 03 53 44

I don't know why it is showing this error. I have tried to feed in the path myself for all the parameters, but the problem still persists.

Folder Name Issue

@AntonMu
File "C:\Users\Sunil Lourembam\env\lib\site-packages\PIL\Image.py", line 2766, in open
fp = builtins.open(filename, "rb")
PermissionError: [Errno 13] Permission denied: 'C:/Users/Sunil'
How can I solve this issue / override the permission?

Originally posted by @lrscse in #50 (comment)

how to calculate mAP

@AntonMu Hi, I was able to successfully train and run predictions on some test images. I used custom data for tumor detection in brain MRI. I want to know how to generate mAP; I would be very thankful for your help.
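
There is no mAP script in the thread, but a minimal sketch of the usual computation (greedy IoU matching at a fixed threshold, then average precision per class) looks roughly like this; box format is assumed to be [xmin, ymin, xmax, ymax]:

import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union of two [xmin, ymin, xmax, ymax] boxes.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def average_precision(detections, ground_truths, iou_thresh=0.5):
    # detections: list of (image_id, score, box) for ONE class.
    # ground_truths: dict image_id -> list of boxes for the same class.
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    matched = {img: [False] * len(boxes) for img, boxes in ground_truths.items()}
    tp, fp = [], []
    for img, _, box in detections:
        gt_boxes = ground_truths.get(img, [])
        ious = [iou(box, gt) for gt in gt_boxes]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thresh and not matched[img][best]:
            matched[img][best] = True
            tp.append(1)
            fp.append(0)
        else:
            tp.append(0)
            fp.append(1)
    if not tp:
        return 0.0
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    total_gt = sum(len(b) for b in ground_truths.values())
    recall = tp / max(total_gt, 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    # Trapezoidal approximation of the area under the precision-recall curve.
    return float(np.trapz(precision, recall))

# mAP is then the mean of average_precision() over all classes.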

Subprocess call in Download_and_Convert_YOLO_weights.py does not work when using python3

The hard-coded variable call_string on line 43 of Download_and_Convert_YOLO_weights.py is set to 'python convert.py yolov3.cfg yolov3.weights yolo.h5' which throws a series of module not found errors if you have python 2 and 3 on your system and the modules needed are installed in the python 3 environment.

I solved this by changing the call string to 'python3 convert.py yolov3.cfg yolov3.weights yolo.h5'. Perhaps this should be mentioned in the readme file?
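
An alternative that avoids hard-coding the interpreter name at all (a sketch of the idea, assuming the script builds a single command string for subprocess): sys.executable always points at the Python that is running the script, virtual environment included.

import subprocess
import sys

# Build the command from the running interpreter instead of a hard-coded 'python'/'python3'.
call_string = " ".join([sys.executable, "convert.py", "yolov3.cfg", "yolov3.weights", "yolo.h5"])
subprocess.call(call_string, shell=True)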

Question re: pretrained model

Hi, thanks for the demo! For the pre-trained cat face detector model included in this repository, did you train it with only the 100 annotated examples included? (because the performance is pretty good already with so little data - any stats on the accuracy you achieved?).

Keras Error

@AntonMu

_, ignore_mask = K.control_flow_ops.while_loop(lambda b,*args: b<m, loop_body, [0, ignore_mask])
AttributeError: module 'keras.backend' has no attribute 'control_flow_ops'

How do I fix the error?
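
This usually means a newer Keras than the pinned 2.2.4 is being picked up; its backend no longer exposes control_flow_ops. Besides downgrading Keras, a commonly used edit is to call TensorFlow's while_loop directly in yolo3/model.py; below is a tiny self-contained demo of the call plus the replacement line as a comment (my sketch, not the maintainer's patch):

import tensorflow as tf

# tf.while_loop takes (cond, body, loop_vars), just like the removed
# K.control_flow_ops.while_loop wrapper did.
i = tf.constant(0)
result = tf.while_loop(lambda b: tf.less(b, 10), lambda b: tf.add(b, 1), [i])

# In yolo3/model.py the failing line would then read:
# _, ignore_mask = tf.while_loop(lambda b, *args: b < m, loop_body, [0, ignore_mask])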

requirements.txt: tensorflow-estimator

System information

  • What is the top-level directory of the model you are using: /Users/cpatterson/TrainYourOwnYOLO/
  • Have I written custom code (as opposed to using a stock example script provided in the repo): no
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Mac OS 10.14.6
  • TensorFlow version (use command below): v1.15.0-92-g5d80e1e8e6 1.15.2
  • Exact command to reproduce: pip install -r requirements.txt

Describe the problem

When I install the requirements, I get the following error:

ERROR: tensorflow 1.15.2 has requirement tensorflow-estimator==1.15.1, but you'll have tensorflow-estimator 1.15.0 which is incompatible.

Source code / logs

(env) idc113-01:TrainYourOwnYOLO cpatterson$ pip install -r requirements.txt
Requirement already satisfied: setuptools>=41.0.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (45.1.0)
Requirement already satisfied: pip>=19.0.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 2)) (20.0.2)
Requirement already satisfied: absl-py==0.7.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 3)) (0.7.1)
Requirement already satisfied: astor==0.8.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 4)) (0.8.0)
Requirement already satisfied: attrs==19.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 5)) (19.1.0)
Requirement already satisfied: backcall==0.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 6)) (0.1.0)
Requirement already satisfied: bleach==3.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 7)) (3.1.0)
Requirement already satisfied: certifi==2019.6.16 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 8)) (2019.6.16)
Requirement already satisfied: chardet==3.0.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 9)) (3.0.4)
Requirement already satisfied: cycler==0.10.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 10)) (0.10.0)
Requirement already satisfied: decorator==4.4.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 11)) (4.4.0)
Requirement already satisfied: defusedxml==0.6.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 12)) (0.6.0)
Requirement already satisfied: progressbar2==3.46.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 13)) (3.46.1)
Requirement already satisfied: entrypoints==0.3 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 14)) (0.3)
Requirement already satisfied: gast==0.2.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 15)) (0.2.2)
Requirement already satisfied: google-pasta==0.1.7 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 16)) (0.1.7)
Requirement already satisfied: grpcio==1.22.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 17)) (1.22.0)
Requirement already satisfied: h5py==2.9.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 18)) (2.9.0)
Requirement already satisfied: idna==2.8 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 19)) (2.8)
Requirement already satisfied: ipykernel==5.1.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 20)) (5.1.1)
Requirement already satisfied: ipython==7.6.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 21)) (7.6.1)
Requirement already satisfied: ipython-genutils==0.2.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 22)) (0.2.0)
Requirement already satisfied: ipywidgets==7.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 23)) (7.5.0)
Requirement already satisfied: jedi==0.14.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 24)) (0.14.0)
Requirement already satisfied: Jinja2==2.10.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 25)) (2.10.1)
Requirement already satisfied: joblib==0.13.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 26)) (0.13.2)
Requirement already satisfied: jsonschema==3.0.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 27)) (3.0.1)
Requirement already satisfied: jupyter==1.0.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 28)) (1.0.0)
Requirement already satisfied: jupyter-client==5.3.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 29)) (5.3.0)
Requirement already satisfied: jupyter-console==6.0.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 30)) (6.0.0)
Requirement already satisfied: jupyter-core==4.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 31)) (4.5.0)
Requirement already satisfied: Keras==2.2.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 32)) (2.2.4)
Requirement already satisfied: Keras-Applications==1.0.8 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 33)) (1.0.8)
Requirement already satisfied: Keras-Preprocessing==1.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 34)) (1.1.0)
Requirement already satisfied: kiwisolver==1.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 35)) (1.1.0)
Requirement already satisfied: Markdown==3.1.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 36)) (3.1.1)
Requirement already satisfied: MarkupSafe==1.1.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 37)) (1.1.1)
Requirement already satisfied: matplotlib==3.0.3 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 38)) (3.0.3)
Requirement already satisfied: mistune==0.8.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 39)) (0.8.4)
Requirement already satisfied: mpmath==1.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 40)) (1.1.0)
Requirement already satisfied: nbconvert==5.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 41)) (5.5.0)
Requirement already satisfied: nbformat==4.4.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 42)) (4.4.0)
Requirement already satisfied: notebook==5.7.8 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 43)) (5.7.8)
Requirement already satisfied: numpy==1.16.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 44)) (1.16.4)
Requirement already satisfied: opencv-python==4.1.0.25 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 45)) (4.1.0.25)
Requirement already satisfied: pandas==0.24.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 46)) (0.24.2)
Requirement already satisfied: pandocfilters==1.4.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 47)) (1.4.2)
Requirement already satisfied: parso==0.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 48)) (0.5.0)
Requirement already satisfied: pexpect==4.7.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 49)) (4.7.0)
Requirement already satisfied: pickleshare==0.7.5 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 50)) (0.7.5)
Requirement already satisfied: Pillow==6.2.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 51)) (6.2.1)
Requirement already satisfied: prometheus-client==0.7.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 52)) (0.7.1)
Requirement already satisfied: prompt-toolkit==2.0.9 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 53)) (2.0.9)
Requirement already satisfied: protobuf==3.8.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 54)) (3.8.0)
Requirement already satisfied: ptyprocess==0.6.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 55)) (0.6.0)
Requirement already satisfied: Pygments==2.4.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 56)) (2.4.2)
Requirement already satisfied: pyparsing==2.4.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 57)) (2.4.0)
Requirement already satisfied: pyrsistent==0.15.3 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 58)) (0.15.3)
Requirement already satisfied: python-dateutil==2.8.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 59)) (2.8.0)
Requirement already satisfied: pytz==2019.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 60)) (2019.1)
Requirement already satisfied: PyYAML==5.1.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 61)) (5.1.1)
Requirement already satisfied: pyzmq==18.0.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 62)) (18.0.2)
Requirement already satisfied: qtconsole==4.5.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 63)) (4.5.1)
Requirement already satisfied: requests==2.22.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 64)) (2.22.0)
Requirement already satisfied: scikit-learn==0.21.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 65)) (0.21.2)
Requirement already satisfied: scipy==1.3.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 66)) (1.3.0)
Requirement already satisfied: Send2Trash==1.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 67)) (1.5.0)
Requirement already satisfied: six==1.12.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 68)) (1.12.0)
Requirement already satisfied: sklearn==0.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 69)) (0.0)
Requirement already satisfied: sympy==1.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 70)) (1.4)
Requirement already satisfied: tensorboard==1.15.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 71)) (1.15.0)
Requirement already satisfied: tensorflow==1.15.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 72)) (1.15.2)
Collecting tensorflow-estimator==1.15.0
Using cached tensorflow_estimator-1.15.0-py2.py3-none-any.whl (503 kB)
Requirement already satisfied: termcolor==1.1.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 74)) (1.1.0)
Requirement already satisfied: terminado==0.8.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 75)) (0.8.2)
Requirement already satisfied: testpath==0.4.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 76)) (0.4.2)
Requirement already satisfied: tornado==6.0.3 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 77)) (6.0.3)
Requirement already satisfied: traitlets==4.3.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 78)) (4.3.2)
Requirement already satisfied: urllib3==1.25.3 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 79)) (1.25.3)
Requirement already satisfied: wcwidth==0.1.7 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 80)) (0.1.7)
Requirement already satisfied: webencodings==0.5.1 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 81)) (0.5.1)
Requirement already satisfied: Werkzeug==0.15.4 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 82)) (0.15.4)
Requirement already satisfied: widgetsnbextension==3.5.0 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 83)) (3.5.0)
Requirement already satisfied: wrapt==1.11.2 in ./env/lib/python3.6/site-packages (from -r requirements.txt (line 84)) (1.11.2)
Requirement already satisfied: python-utils>=2.3.0 in ./env/lib/python3.6/site-packages (from progressbar2==3.46.1->-r requirements.txt (line 13)) (2.3.0)
Requirement already satisfied: appnope; sys_platform == "darwin" in ./env/lib/python3.6/site-packages (from ipython==7.6.1->-r requirements.txt (line 21)) (0.1.0)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in ./env/lib/python3.6/site-packages (from tensorboard==1.15.0->-r requirements.txt (line 71)) (0.34.2)
Requirement already satisfied: opt-einsum>=2.3.2 in ./env/lib/python3.6/site-packages (from tensorflow==1.15.2->-r requirements.txt (line 72)) (3.1.0)
ERROR: tensorflow 1.15.2 has requirement tensorflow-estimator==1.15.1, but you'll have tensorflow-estimator 1.15.0 which is incompatible.

Convert output network to tensorflow

Hi.
I have a question. The output is in .h5 format, which is a Keras format. How can I convert it to a TensorFlow format like .pb? I found a few tutorials, but they also want a *.json file, which I don't have. Do I need to train the network with a parameter that enables JSON output? What do I need for that? Thanks for the answer.
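
No .json should be needed. A sketch of the usual TF 1.x route with the versions pinned in this repo, assuming the .h5 contains a full model (if it only holds weights, rebuild the model with yolo_body first and call load_weights instead):

import tensorflow as tf
from keras import backend as K
from keras.models import load_model

# Load the trained Keras model (path is a placeholder).
model = load_model("trained_weights_final.h5", compile=False)

# Freeze the current session's variables into constants and write a .pb GraphDef.
sess = K.get_session()
output_names = [out.op.name for out in model.outputs]
frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names
)
tf.io.write_graph(frozen_graph, ".", "yolo_frozen.pb", as_text=False)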

add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.

My system is ubuntu 18.04, python 3.7.4, GPU, GTX 1080

$ python Minimal_Example.py

$ python Minimal_Example.py 
Detecting Cat Faces by calling: 

 python /media/suryadi/DATA/learn/TrainYourOwnYOLO/3_Inference/Detector.py --input_path /media/suryadi/DATA/learn/TrainYourOwnYOLO/Data/Source_Images/Test_Images --classes /media/suryadi/DATA/learn/TrainYourOwnYOLO/Data/Model_Weights/data_classes.txt --output /media/suryadi/DATA/learn/TrainYourOwnYOLO/Data/Source_Images/Test_Image_Detection_Results --yolo_model /media/suryadi/DATA/learn/TrainYourOwnYOLO/Data/Model_Weights/trained_weights_final.h5 --box_file /media/suryadi/DATA/learn/TrainYourOwnYOLO/Data/Source_Images/Test_Image_Detection_Results/Detection_Results.csv --anchors /media/suryadi/DATA/learn/TrainYourOwnYOLO/2_Training/src/keras_yolo3/model_data/yolo_anchors.txt --file_types .jpg .jpeg .png  

Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W1210 11:12:24.056672 140496479233856 __init__.py:308] Limited tf.compat.v2.summary API due to missing TensorBoard installation.
W1210 11:12:24.483830 140496479233856 deprecation_wrapper.py:119] From /media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

W1210 11:12:24.483992 140496479233856 deprecation_wrapper.py:119] From /media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

W1210 11:12:24.485436 140496479233856 deprecation_wrapper.py:119] From /media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

W1210 11:12:24.500835 140496479233856 deprecation_wrapper.py:119] From /media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.

W1210 11:12:24.500959 140496479233856 deprecation_wrapper.py:119] From /media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

W1210 11:12:24.872385 140496479233856 deprecation_wrapper.py:119] From /media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.

W1210 11:12:28.339353 140496479233856 deprecation_wrapper.py:119] From /media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:2018: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.

/media/suryadi/DATA/learn/TrainYourOwnYOLO/Data/Model_Weights/trained_weights_final.h5 model, anchors, and classes loaded in 7.63sec.
W1210 11:12:32.294080 140496479233856 deprecation.py:323] From /media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:1354: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Found 1 input labels: ['Cat_Face'] ...
Found 64 input images: ['04ebfcfd_fa70_4e49_84ff_66cf3c5c0407_best_cat_toys_for_older_cats_3.jpg', '0809_Cat_CNN.jpg', '0_Cat_research.jpg', '0_PAY_GRUMPY_CAT.jpg', '100.jpg'] ...
img_path =  /media/suryadi/DATA/learn/TrainYourOwnYOLO/Data/Source_Images/Test_Images/04ebfcfd_fa70_4e49_84ff_66cf3c5c0407_best_cat_toys_for_older_cats_3.jpg
image_data.shape =  (416, 416, 3)
Traceback (most recent call last):
  File "/media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
    return fn(*args)
  File "/media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
  (0) Failed precondition: Attempting to use uninitialized value batch_normalization_68/moving_variance
	 [[{{node batch_normalization_68/moving_variance/read}}]]
	 [[boolean_mask_1/GatherV2/_57]]
  (1) Failed precondition: Attempting to use uninitialized value batch_normalization_68/moving_variance
	 [[{{node batch_normalization_68/moving_variance/read}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/media/suryadi/DATA/learn/TrainYourOwnYOLO/3_Inference/Detector.py", line 176, in <module>
    prediction, image = detect_object(yolo, img_path, save_img = save_img, save_img_path = FLAGS.output, postfix=FLAGS.postfix)
  File "/media/suryadi/DATA/learn/TrainYourOwnYOLO/Utils/utils.py", line 40, in detect_object
    prediction, new_image = yolo.detect_image(image)
  File "/media/suryadi/DATA/learn/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo.py", line 127, in detect_image
    out_boxes, out_scores, out_classes = self.sess.run( [self.boxes, self.scores, self.classes], feed_dict={ self.yolo_model.input: image_data, self.input_image_shape: [image.size[1], image.size[0]], K.learning_phase(): 0 })
  File "/media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File "/media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File "/media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
    run_metadata)
  File "/media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
  (0) Failed precondition: Attempting to use uninitialized value batch_normalization_68/moving_variance
	 [[node batch_normalization_68/moving_variance/read (defined at /anaconda3/envs/yolo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:402) ]]
	 [[boolean_mask_1/GatherV2/_57]]
  (1) Failed precondition: Attempting to use uninitialized value batch_normalization_68/moving_variance
	 [[node batch_normalization_68/moving_variance/read (defined at /anaconda3/envs/yolo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:402) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'batch_normalization_68/moving_variance/read':
  File "/learn/TrainYourOwnYOLO/3_Inference/Detector.py", line 148, in <module>
    "model_image_size" : (416, 416),
  File "/learn/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo.py", line 47, in __init__
    self.boxes, self.scores, self.classes = self.generate()
  File "/learn/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo.py", line 76, in generate
    if is_tiny_version else yolo_body(Input(shape=(None,None,3)), num_anchors//3, num_classes)
  File "/learn/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo3/model.py", line 85, in yolo_body
    x, y3 = make_last_layers(x, 128, num_anchors*(num_classes+5))
  File "/learn/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo3/model.py", line 63, in make_last_layers
    DarknetConv2D_BN_Leaky(num_filters, (1,1)))(x)
  File "/learn/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo3/utils.py", line 17, in <lambda>
    return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
  File "/learn/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo3/utils.py", line 17, in <lambda>
    return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
  File "/learn/TrainYourOwnYOLO/2_Training/src/keras_yolo3/yolo3/utils.py", line 17, in <lambda>
    return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
  [Previous line repeated 3 more times]
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/engine/base_layer.py", line 431, in __call__
    self.build(unpack_singleton(input_shapes))
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/layers/normalization.py", line 129, in build
    trainable=False)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/engine/base_layer.py", line 252, in add_weight
    constraint=constraint)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py", line 402, in variable
    v = tf.Variable(value, dtype=tf.as_dtype(dtype), name=name)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 259, in __call__
    return cls._variable_v1_call(*args, **kwargs)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 220, in _variable_v1_call
    shape=shape)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 198, in <lambda>
    previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 2511, in default_variable_creator
    shape=shape)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 263, in __call__
    return super(VariableMetaclass, cls).__call__(*args, **kwargs)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 1568, in __init__
    shape=shape)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 1755, in _init_from_args
    self._snapshot = array_ops.identity(self._variable, name="read")
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py", line 86, in identity
    ret = gen_array_ops.identity(input, name=name)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 4253, in identity
    "Identity", input=input, name=name)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
    op_def=op_def)
  File "/anaconda3/envs/yolo/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
    self._traceback = tf_stack.extract_stack()

Detected Cat Faces in 11.2 seconds

Please help and thank you very much in advance.

Suryadi

Training Fails

I tried to train with the images and annotations provided without changing anything.

Python: v3.7
Tensorflow: v2
System: PI4 2GB RAM (Linux raspberrypi 4.19.75-v7l+ [Buster])

And it fails when running Train_YOLO.py.
It fails after starting Epoch 1.

File "/home/pi/.virtualenvs/cv/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "", line 3, in raise_from
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[32,416,416,32] and type float on /job:localhost/replica:0/task:0/device:CPU:0 by allocator cpu
[[node conv2d_1/convolution (defined at /home/pi/.virtualenvs/cv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[Op:__inference_keras_scratch_graph_20380]
Function call stack:
keras_scratch_graph

I can provide more info if needed. Thanks in advance for any insight.
I saw elsewhere that it might be due to memory. I will try to figure that out and update this issue if that turns out to be the case.
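A rough back-of-the-envelope estimate (a minimal sketch, not part of the repo) suggests memory is indeed the limit: the single activation tensor named in the OOM message already needs roughly 0.7 GB at float32, which a 2 GB Pi cannot spare once the rest of the model is loaded. Lowering the batch size (the leading 32 in the shape) shrinks this proportionally.

# Size of the tensor from the OOM message, shape [32, 416, 416, 32], float32 = 4 bytes
batch, h, w, channels = 32, 416, 416, 32
bytes_needed = batch * h * w * channels * 4
print(f"{bytes_needed / 2**30:.2f} GiB")  # ~0.66 GiB for this one tensor alone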

[Solved] h5py problem

My system is ubuntu 18.04, python 3.7.4.

$ python3 Minimal_Example.py 
Detecting Cat Faces by calling: 
...
  File "/media/suryadi/DATA/anaconda3/envs/yolo/lib/python3.7/site-packages/h5py/_hl/files.py", line 173, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (file signature not found)
Detected Cat Faces in 47.1 seconds

I found that ~/TrainYourOwnYOLO/Data/Model_Weights/trained_weights_final.h5 is zero bytes.
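A quick way to confirm an empty weights file before h5py tries to open it (a minimal sketch, using the path from above):

import os
weights = os.path.expanduser("~/TrainYourOwnYOLO/Data/Model_Weights/trained_weights_final.h5")
print(os.path.getsize(weights))  # 0 bytes explains the "file signature not found" error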

My solution is:

$ pip uninstall progressbar
$ pip install progressbar2

Hope this information is useful.

suryadi

TF2.0 load_weights() error

Hi,

While using TF 2.0 with the TF2 branch of this repo, I get "TypeError: load_weights() got an unexpected keyword argument 'skip_mismatch'". If I remove the 'skip_mismatch' argument, I get a shape mismatch error instead. Any thoughts?

(screenshots attached: load_weights_2, load_weights)
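For reference, a minimal sketch of the API difference on a toy model ("toy_weights.h5" is just a placeholder file created here): standalone Keras accepts skip_mismatch together with by_name, while some tf.keras releases around TF 2.0 only accept by_name, which produces exactly this TypeError.

import tensorflow as tf

# toy model, only to demonstrate the two load_weights signatures
model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,), name="fc1")])
model.save_weights("toy_weights.h5")
try:
    model.load_weights("toy_weights.h5", by_name=True, skip_mismatch=True)
except TypeError:
    # fallback for tf.keras versions that do not know skip_mismatch
    model.load_weights("toy_weights.h5", by_name=True)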

Error: No such file or directory "example.jpg"

So after I start training it loads everything appropriately, but when the first epoch starts I get the error message "Error: No such file or directory "example.jpg"". Here is the error traceback.
yolotrain problem.txt

So basically I'm not entirely sure whether that is just the module's way of printing the path in the error output, but if it isn't, I don't understand why the path uses forward slashes ("C:/Users/cooki/Desktop/desktop/...") instead of backslashes ("path\to\jpg"), since I am running on Windows and your train script builds the path with os.path.
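As a side note (a minimal sketch with a hypothetical path): forward slashes are valid path separators on Windows in Python, so the slash direction in the message is unlikely to be the cause; a genuinely missing file or a wrong working directory is more likely.

import os
p = "C:/Users/example/Desktop/images/example.jpg"  # hypothetical path
print(os.path.normpath(p))  # C:\Users\example\Desktop\images\example.jpg
print(os.path.exists(p))    # False if the file really is missing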

No module named 'keras_yolo3'

Before filing a report consider the following two questions:

Have you followed the instructions exactly (word by word)?

Yes

Have you checked the troubleshooting section?

Yes

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    Windows 8.1

  • TensorFlow version (use command below):
    tensorflow==1.15.2

  • CUDA/cuDNN version:
    Cuda 9.0

  • GPU model and memory:

  • Exact command to reproduce:

When I try to run:
from keras_yolo3.yolo import YOLO, detect_video
the error No module named 'keras_yolo3' comes up.
Up to this point I have only run the code below; maybe somewhere in the files it is explained how to install this module.
Here is my code, running Python 3.7 in Spyder.

# -*- coding: utf-8 -*-

"""
Created on Wed Feb 12 21:00:36 2020

@author: Dinu
"""
from PIL import Image
from os import path, makedirs
import os
import re
import pandas as pd
import sys
import argparse

def get_parent_dir(n=1):
    """Returns the n-th parent directory of the current working directory."""
    current_path = os.path.dirname(
        os.path.abspath(r"C:\Users\Dinu\Desktop\AI postari\1 proiect")
    )
    for k in range(n):
        current_path = os.path.dirname(current_path)
    return current_path

sys.path.append(os.path.join(r"C:\Users\Dinu\Desktop\AI postari\1 proiect", "Utils"))
from Convert_Format import convert_vott_csv_to_yolo

Data_Folder = os.path.join(r"C:\Users\Dinu\Desktop\AI postari\1 proiect", "Data")
VoTT_Folder = os.path.join(
    Data_Folder, "Source_Images", "Training_Images", "vott-csv-export"
)
VoTT_csv = os.path.join(VoTT_Folder, "Annotations-export.csv")
YOLO_filename = os.path.join(VoTT_Folder, "data_train.txt")

model_folder = os.path.join(Data_Folder, "Model_Weights")
classes_filename = os.path.join(model_folder, "data_classes.txt")

if __name__ == "__main__":
    # Suppress any inherited default values
    parser = argparse.ArgumentParser(argument_default=argparse.SUPPRESS)
    """
    Command line options
    """
    parser.add_argument(
        "--VoTT_Folder",
        type=str,
        default=VoTT_Folder,
        help="Absolute path to the exported files from the image tagging step with VoTT. Default is "
        + VoTT_Folder,
    )

    parser.add_argument(
        "--VoTT_csv",
        type=str,
        default=VoTT_csv,
        help="Absolute path to the *.csv file exported from VoTT. Default is "
        + VoTT_csv,
    )
    parser.add_argument(
        "--YOLO_filename",
        type=str,
        default=YOLO_filename,
        help="Absolute path to the file where the annotations in YOLO format should be saved. Default is "
        + YOLO_filename,
    )

    FLAGS = parser.parse_args()

    # Prepare the dataset for YOLO
    multi_df = pd.read_csv(FLAGS.VoTT_csv)
    labels = multi_df["label"].unique()
    labeldict = dict(zip(labels, range(len(labels))))
    multi_df.drop_duplicates(subset=None, keep="first", inplace=True)
    train_path = FLAGS.VoTT_Folder
    convert_vott_csv_to_yolo(
        multi_df, labeldict, path=train_path, target_name=FLAGS.YOLO_filename
    )

    # Make classes file
    file = open(classes_filename, "w")

    # Sort dict by values
    SortedLabelDict = sorted(labeldict.items(), key=lambda x: x[1])
    for elem in SortedLabelDict:
        file.write(elem[0] + "\n")
    file.close()

"download pre trained dark-net weights and gonna convert them to YOLO format"
import os
import subprocess
import time
import sys
import argparse
import requests
import progressbar

FLAGS = None

root_folder = os.path.dirname(
    os.path.abspath(r"C:\Users\Dinu\Desktop\AI postari\1 proiect\2_Training\src")
)
download_folder = os.path.join(root_folder, "src", "keras_yolo3")

if __name__ == "__main__":
    # Delete all default flags
    parser = argparse.ArgumentParser(argument_default=argparse.SUPPRESS)
    """
    Command line options
    """
    parser.add_argument(
        "--download_folder",
        type=str,
        default=download_folder,
        help="Folder to download weights to. Default is " + download_folder,
    )

    FLAGS = parser.parse_args()

    url = "https://pjreddie.com/media/files/yolov3.weights"
    r = requests.get(url, stream=True)

    f = open(os.path.join(download_folder, "yolov3.weights"), "wb")
    file_size = int(r.headers.get("content-length"))
    chunk = 100
    num_bars = file_size // chunk
    bar = progressbar.ProgressBar(maxval=num_bars).start()
    i = 0
    for chunk in r.iter_content(chunk):
        f.write(chunk)
        bar.update(i)
        i += 1
    f.close()

    call_string = "python convert.py yolov3.cfg yolov3.weights yolo.h5"

    subprocess.call(call_string, shell=True, cwd=download_folder)

"Detector pasul 3"
import os
import sys

def get_parent_dir(n=1):
    """Returns the n-th parent directory of the current working directory."""
    current_path = os.path.dirname(
        os.path.abspath(r"C:\Users\Dinu\Desktop\AI postari\1 proiect")
    )
    for k in range(n):
        current_path = os.path.dirname(current_path)
    return current_path

src_path = os.path.join(get_parent_dir(1), "2_Training", "src")
utils_path = os.path.join(get_parent_dir(1), "Utils")

sys.path.append(src_path)
sys.path.append(utils_path)

################
Then, when I try to run inference, the issue comes up.
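One note that may help, as a minimal sketch (the repo root path below is hypothetical): in the standard repository layout the keras_yolo3 package lives at 2_Training/src/keras_yolo3, so the import only resolves if 2_Training/src itself is on sys.path before the import is attempted.

import os
import sys

repo_root = r"C:\Users\Dinu\Desktop\TrainYourOwnYOLO"  # hypothetical location of the cloned repo
sys.path.append(os.path.join(repo_root, "2_Training", "src"))

from keras_yolo3.yolo import YOLO, detect_video  # should now resolve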

Some issue with running on GPU.

Before filing a report consider the following two questions:

Have you followed the instructions exactly (word by word)? Yes

Have you checked the troubleshooting section? Yes

Once you are familiar with the code, you're welcome to modify it. Please only continue to file a bug report if you encounter an issue with the provided code and after having followed the instructions.

If you have followed the instructions exactly, couldn't solve your problem with the provided troubleshooting tips and would still like to file a bug or make a feature requests please follow the steps below.

  1. It must be a bug, a feature request, or a significant problem with the documentation (for small docs fixes please send a PR instead).
  2. The form below must be filled out.

System information

  • What is the top-level directory of the model you are using: 2_Training
  • Have I written custom code (as opposed to using a stock example script provided in the repo):
    No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): windows 10
  • TensorFlow version (use command below): v1.15.0-rc3-22-g590d6eef7e 1.15.0
  • CUDA/cuDNN version: Cuda 10 and the latest cudnn for cuda 10
  • GPU model and memory: Nvidia GeForce 940MX with 4 GB memory
  • Exact command to reproduce: python Train_YOLO.py

You can obtain the TensorFlow version with

python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"

Describe the problem

Describe the problem clearly here. Be sure to convey here why it's a bug or a feature request.
I installed CUDA 10 and the corresponding latest cuDNN version for CUDA 10. However, I am getting some weird messages.
(screenshots attached: anton, anton 2, anton 3, anton 4, anton 5)
At epoch 1/51, after the 10th/11th iteration, it displays all of the above messages, which I am not able to understand. What exactly is the problem here?
Also, I think the weights are not getting updated. When I tried testing some images, it used the weights I had obtained before (when I trained with the CPU).
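A quick sanity check worth running here (a minimal sketch for TF 1.15): if TensorFlow cannot see the GPU, training silently falls back to the CPU, which would also explain why the old CPU-trained weights appear unchanged.

from tensorflow.python.client import device_lib

print([d.name for d in device_lib.list_local_devices()])
# e.g. ['/device:CPU:0', '/device:GPU:0'] when the GPU is usable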

Source code / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.
Attached as images above

KeyError: 'val_loss'

Hi, while running the Train_YOLO.py command i got the following error after achieving Epoch 51/51

Traceback (most recent call last):
File "Train_YOLO.py", line 229, in
step1_val_loss = np.array(history.history["val_loss"])
KeyError: 'val_loss'

I've tried the tutorial one month ago and it was perfectly working.

-TF version: 2.0
-OS: windows 10
-Using CPU
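For what it's worth, Keras only records 'val_loss' when validation data is actually used during fit, so an empty validation split leaves history.history without that key. A minimal sketch of a guard (history_dict here is a hypothetical stand-in for history.history):

import numpy as np

history_dict = {"loss": [35.2, 33.8]}  # hypothetical history without a validation split
val_loss = history_dict.get("val_loss", [])
step1_val_loss = np.array(val_loss) if val_loss else None
print(step1_val_loss)  # None instead of a KeyError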

ValueError: Unable to import backend : mxnet

Hi guys.

So today I wanted to train on some other data that I had.
After installing the requirements, I tried to run Download_and_Convert_YOLO_weights.py but got the following error:
Traceback (most recent call last):
File "/home/ubuntu/TrainYourOwnYOLO/eli/lib/python3.6/site-packages/keras/backend/__init__.py", line 93, in
backend_module = importlib.import_module(_BACKEND)
File "/home/ubuntu/anaconda3/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 994, in _gcd_import
File "", line 971, in _find_and_load
File "", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'mxnet'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "convert.py", line 14, in
from keras import backend as K
File "/home/ubuntu/TrainYourOwnYOLO/eli/lib/python3.6/site-packages/keras/__init__.py", line 3, in
from . import utils
File "/home/ubuntu/TrainYourOwnYOLO/eli/lib/python3.6/site-packages/keras/utils/__init__.py", line 6, in
from . import conv_utils
File "/home/ubuntu/TrainYourOwnYOLO/eli/lib/python3.6/site-packages/keras/utils/conv_utils.py", line 9, in
from .. import backend as K
File "/home/ubuntu/TrainYourOwnYOLO/eli/lib/python3.6/site-packages/keras/backend/__init__.py", line 108, in
raise ValueError('Unable to import backend : ' + str(_BACKEND))
ValueError: Unable to import backend : mxnet

which didn't happen yesterday. Also, if I install mxnet manually, it gives different errors as well.

Any thoughts?
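For context, standalone Keras chooses its backend from the KERAS_BACKEND environment variable or the "backend" key in ~/.keras/keras.json, so something has apparently switched that setting to mxnet. A minimal sketch of forcing it back to TensorFlow before the first keras import:

import os
os.environ["KERAS_BACKEND"] = "tensorflow"

import keras  # should now report "Using TensorFlow backend."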
