
edjeelectronics / tensorflow-lite-object-detection-on-android-and-raspberry-pi

A tutorial showing how to train, convert, and run TensorFlow Lite object detection models on Android devices, the Raspberry Pi, and more!

License: Apache License 2.0

Python 28.19% Shell 0.90% Jupyter Notebook 70.91%

tensorflow-lite-object-detection-on-android-and-raspberry-pi's People

Contributors

edjeelectronics, elektronika-ba, hafred, jantielens, maxhancock1, maxhancock16, nidhxba


tensorflow-lite-object-detection-on-android-and-raspberry-pi's Issues

Using the Coral USB Accelerator needs additional code

I get the following error when attempting to use the Coral USB Accelerator

python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model/ --labels=labelmap.txt --graph=detect_edgetpu.tflite
INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "TFLite_detection_webcam.py", line 120, in <module>
    interpreter.allocate_tensors()
  File "/home/pi/Downloads/tflite1/tflite1-env/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 244, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/pi/Downloads/tflite1/tflite1-env/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Encountered unresolved custom op: edgetpu-custom-op.Node number 0 (edgetpu-custom-op) failed to prepare.

I found a fix via this Stack Overflow answer: https://stackoverflow.com/questions/58458967/tf-converter-all-mapped-while-still-encounter-unresolved-custom-op

Solution:

  • [1/2] Import load_delegate

/.../
if pkg is None:
    from tflite_runtime.interpreter import Interpreter
    from tflite_runtime.interpreter import load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter
    from tensorflow.lite.python.interpreter import load_delegate
/.../

  • [2/2] Calling the delegate

/.../
# Load the TensorFlow Lite model and get details
# interpreter = Interpreter(model_path=PATH_TO_CKPT)
interpreter = Interpreter(model_path=PATH_TO_CKPT, experimental_delegates=[load_delegate("libedgetpu.so.1.0")])
/.../

Other info:
I am using a fresh install of Raspbian Buster on a Raspberry Pi 4B with 4 GB RAM.

Problems while training ssd_mobilenet_v2_quantized_coco on GPU but same works well for faster_rcnn_inception_v2_coco

I am trying to train a custom model that I will later use on a Raspberry Pi for object detection. The model I want to train with GPU TensorFlow is ssd_mobilenet_v2_quantized_coco, but when I run the training, it loads all the GPU files successfully and then fails with an out-of-memory error. Surprisingly, training faster_rcnn_inception_v2_coco works perfectly on the same setup, and the CPU version of TensorFlow works fine with both models.

My system specifications are:
Operating System: Windows 10
Graphics Card : Nvidia MX250
Ram : 16GB
Processor: Intel core i7-10th Gen.
Tensorflow Version: Tensorflow GPU 1.15.0
Cuda : 10.0
CuDNN : 7.4.6 (as the TensorFlow models repo was compiled with this version)
Any recommendations that would help me speed up my SSD network training would be highly appreciated.

ERROR conda.core.link:_execute(700): An error occurred while installing package 'defaults::m2-base-1.0.0-3'.

I am trying to follow this tutorial and I am stuck at Step 2d (Download Bazel and Python package dependencies).
I get the same error while installing different Bazel versions.
My TensorFlow version is 1.15.
I am running the Anaconda prompt with administrator privileges.
The following is my error:
(tensorflow-build) C:\Windows\system32>conda install -c conda-forge bazel=0.24.1
Collecting package metadata (current_repodata.json): done
Solving environment: done

Package Plan

environment location: C:\Users\Saqib\Anaconda3\envs\tensorflow-build

added / updated specs:
- bazel=0.24.1

The following packages will be downloaded:

package                    |            build
---------------------------|-----------------
bazel-0.24.1               |       he3c9ec2_0        58.0 MB
ca-certificates-2019.11.28 |       hecc5488_0         182 KB  conda-forge
certifi-2019.11.28         |           py37_0         148 KB  conda-forge
openssl-1.1.1d             |       hfa6e2cd_0         4.7 MB  conda-forge
------------------------------------------------------------
                                       Total:        63.0 MB

The following NEW packages will be INSTALLED:

bazel pkgs/main/win-64::bazel-0.24.1-he3c9ec2_0
m2-base pkgs/msys2/win-64::m2-base-1.0.0-3
m2-bash pkgs/msys2/win-64::m2-bash-4.3.042-5
m2-bash-completion pkgs/msys2/win-64::m2-bash-completion-2.3-2
m2-catgets pkgs/msys2/win-64::m2-catgets-1.1-3
m2-coreutils pkgs/msys2/win-64::m2-coreutils-8.25-102
m2-dash pkgs/msys2/win-64::m2-dash-0.5.8-2
m2-diffutils pkgs/msys2/win-64::m2-diffutils-3.3-4
m2-file pkgs/msys2/win-64::m2-file-5.25-2
m2-filesystem pkgs/msys2/win-64::m2-filesystem-2016.04-4
m2-findutils pkgs/msys2/win-64::m2-findutils-4.6.0-2
m2-gawk pkgs/msys2/win-64::m2-gawk-4.1.3-2
m2-gcc-libs pkgs/msys2/win-64::m2-gcc-libs-5.3.0-4
m2-gettext pkgs/msys2/win-64::m2-gettext-0.19.7-4
m2-gmp pkgs/msys2/win-64::m2-gmp-6.1.0-3
m2-grep pkgs/msys2/win-64::m2-grep-2.22-4
m2-gzip pkgs/msys2/win-64::m2-gzip-1.7-2
m2-inetutils pkgs/msys2/win-64::m2-inetutils-1.9.2-2
m2-info pkgs/msys2/win-64::m2-info-6.0-2
m2-less pkgs/msys2/win-64::m2-less-481-2
m2-libasprintf pkgs/msys2/win-64::m2-libasprintf-0.19.7-4
m2-libbz2 pkgs/msys2/win-64::m2-libbz2-1.0.6-3
m2-libcatgets pkgs/msys2/win-64::m2-libcatgets-1.1-3
m2-libcrypt pkgs/msys2/win-64::m2-libcrypt-1.3-2
m2-libgettextpo pkgs/msys2/win-64::m2-libgettextpo-0.19.7-4
m2-libiconv pkgs/msys2/win-64::m2-libiconv-1.14-3
m2-libintl pkgs/msys2/win-64::m2-libintl-0.19.7-4
m2-liblzma pkgs/msys2/win-64::m2-liblzma-5.2.2-2
m2-libpcre pkgs/msys2/win-64::m2-libpcre-8.38-2
m2-libreadline pkgs/msys2/win-64::m2-libreadline-6.3.008-8
m2-libutil-linux pkgs/msys2/win-64::m2-libutil-linux-2.26.2-2
m2-libxml2 pkgs/msys2/win-64::m2-libxml2-2.9.2-3
m2-make pkgs/msys2/win-64::m2-make-4.1-5
m2-mintty pkgs/msys2/win-64::m2-mintty-1!2.2.3-2
m2-mpfr pkgs/msys2/win-64::m2-mpfr-3.1.4-2
m2-msys2-launcher~ pkgs/msys2/win-64::m2-msys2-launcher-git-0.3.28.860c495-2
m2-msys2-runtime pkgs/msys2/win-64::m2-msys2-runtime-2.5.0.17080.65c939c-3
m2-ncurses pkgs/msys2/win-64::m2-ncurses-6.0.20160220-2
m2-sed pkgs/msys2/win-64::m2-sed-4.2.2-3
m2-tar pkgs/msys2/win-64::m2-tar-1.28-4
m2-tftp-hpa pkgs/msys2/win-64::m2-tftp-hpa-5.2-2
m2-time pkgs/msys2/win-64::m2-time-1.7-2
m2-ttyrec pkgs/msys2/win-64::m2-ttyrec-1.0.8-2
m2-tzcode pkgs/msys2/win-64::m2-tzcode-2015.e-2
m2-unzip pkgs/msys2/win-64::m2-unzip-6.0-3
m2-util-linux pkgs/msys2/win-64::m2-util-linux-2.26.2-2
m2-which pkgs/msys2/win-64::m2-which-2.21-3
m2-zip pkgs/msys2/win-64::m2-zip-3.0-2
m2-zlib pkgs/msys2/win-64::m2-zlib-1.2.8-4
msys2-conda-epoch pkgs/msys2/win-64::msys2-conda-epoch-20160418-1
openjdk conda-forge/win-64::openjdk-11.0.1-1017
posix pkgs/msys2/win-64::posix-1.0.0-2
vs2013_runtime pkgs/main/win-64::vs2013_runtime-12.0.21005-1

The following packages will be UPDATED:

ca-certificates anaconda::ca-certificates-2019.11.27-0 --> conda-forge::ca-certificates-2019.11.28-hecc5488_0

The following packages will be SUPERSEDED by a higher-priority channel:

certifi anaconda --> conda-forge
openssl anaconda::openssl-1.1.1-he774522_0 --> conda-forge::openssl-1.1.1d-hfa6e2cd_0

Proceed ([y]/n)? y

Downloading and Extracting Packages
bazel-0.24.1 | 58.0 MB | ############################################################################################################################ | 100%
certifi-2019.11.28 | 148 KB | ############################################################################################################################ | 100%
ca-certificates-2019 | 182 KB | ############################################################################################################################ | 100%
openssl-1.1.1d | 4.7 MB | ############################################################################################################################ | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
ERROR conda.core.link:_execute(700): An error occurred while installing package 'defaults::m2-base-1.0.0-3'.
Rolling back transaction: done

LinkError: post-link script failed for package defaults::m2-base-1.0.0-3
location of failed script: C:\Users\Saqib\Anaconda3\envs\tensorflow-build\Scripts.m2-base-post-link.bat
==> script messages <==

==> script output <==
stdout:
stderr: 'chcp' is not recognized as an internal or external command,
operable program or batch file.
'chcp' is not recognized as an internal or external command,
operable program or batch file.
'chcp' is not recognized as an internal or external command,
operable program or batch file.

return code: 1


A question about debugging with the anaconda envs

Hi Evan. I followed the whole flow and got to Step 3b with your script TFLite_detection_image.py. But I cannot run the script properly, and no errors come back.
I wonder how I can set a breakpoint to figure out how the code is being bypassed. When I pass in both image_dir and image, the error shows up, which indicates that line 53 does run. Somehow I need to figure out why the for loop at line 103 is skipped. Any suggestions?

The snapshot is attached below. Thanks in advance.

[screenshot attached]

Trained results

Could you release the frozen graph and TFLite model of this tutorial's bird, squirrel, and raccoon detector?

bazel run command re-runs the build every time on Windows 10, CPU version

Every time I run the command "bazel run --config=opt tensorflow/lite/toco:toco -- --input_file=$OUTPUT_DIR/tflite_graph.pb --output_file=$OUTPUT_DIR/detect.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --inference_type=FLOAT --allow_custom_ops" to convert the .pb file to .tflite, it first re-runs the Bazel build, printing e.g. "INFO: Analysed target //tensorflow/lite/toco:toco (79 packages loaded, 3345 targets configured)." Can you provide a solution? How can I permanently mark that the target is already built?

Same FPS using TFLite on Raspberry PI 3

Hi!
I was able to convert an ssd_mobilenet_v1_coco model that I trained myself, following this:

python export_tflite_ssd_graph.py --input_type image_tensor --pipeline_config_path training/ssd_mobilenet_v1_coco.config --trained_checkpoint_prefix training/model.ckpt-224593 --output_directory output/frozen_tflite/frozen_inference_graph.pb --add_postprocessing_op True --max_detections 1

and then

tflite_convert --output_file=tflite/detect.tflite \ --graph_def_file=output/frozen_tflite/frozen_inference_graph.pb/tflite_graph.pb \ --input_arrays=normalized_input_image_tensor \ --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \ --input_shape=1,300,300,3 \ --allow_custom_ops

Then I get a folder "tflite" with these files in it: detect.tflite labelmap.txt tflite_graph.pb tflite_graph.pbtxt

After that I copied all the files to the Raspberry Pi, and using your TFLite_detection_webcam.py script I ran the model. It works, but won't go over 1.3 FPS.

If I run the original version of this model, which I was running with the Object_detection_picamera.py script, it gives me the same FPS. No change at all.

What could I be doing wrong?

Great tutorial by the way!!
Regards,

hash update...

I executed this line: bash get_pi_requirements.sh
but got an error like this -> "THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them."

ImportError: /home/pi/tflite1/tflite1-env/lib/python3.5/site-packages/tensorflow/lite/python/interpreter_wrapper/_tensorflow_wrap_interpreter_wrapper.so: undefined symbol: _ZN6tflite12tensor_utils24NeonVectorScalarMultiplyEPKaifPf

I have observed one more thing today: TensorFlow v2.0 is not available for the RPi 3B+ on Raspbian Stretch, since tensorflow-2.0.0-cp35-none-linux_armv7l.whl is not available. :(
When I try to install TensorFlow 2.0.0 using the following command:
sudo pip install tensorflow-2.0.0-cp35-none-linux_armv71.whl
It gives me the following error:
Requirement tensorflow-2.0.0-cp35-none-linux_armv71.whl looks like a filename, but the file does not exist

How to use multiple cores to speed up the inference time?

Hello. Let me first thank you for your tutorial.

I could successfully run the model on a video and see the detections. But when I run htop, I can see that only one core is used! How can I force TensorFlow to use the other cores too?
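For reference, newer releases of tflite_runtime and TensorFlow expose a num_threads argument on the Interpreter constructor. The sketch below is a hedged example (the helper name and the fallback for older runtimes are my own, not from the tutorial's scripts):

```python
import os


def make_interpreter(model_path, num_threads=None):
    """Build a TFLite Interpreter that can use several CPU cores.

    num_threads is accepted by newer tflite_runtime / TensorFlow
    builds; older builds fall back to the single-threaded default.
    """
    if num_threads is None:
        num_threads = os.cpu_count() or 1
    try:
        from tflite_runtime.interpreter import Interpreter
    except ImportError:
        from tensorflow.lite.python.interpreter import Interpreter
    try:
        return Interpreter(model_path=model_path, num_threads=num_threads)
    except TypeError:
        # Runtime too old to accept num_threads; run single-threaded.
        return Interpreter(model_path=model_path)
```

You would then call interpreter = make_interpreter(PATH_TO_CKPT) in place of the script's existing Interpreter(...) call.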

Windows 10 IoT

Can the Windows version [Part 1] be applied to Windows 10 IoT? I want to install Windows 10 IoT on a Pi 4.

I need help

Respected Sir,

I was working on a project to detect objects using the sample TensorFlow Lite model provided by Google. I have read and watched your tutorial for it, whose GitHub repo is this one. Everything is done and tested as you instructed. But now, for my prototype, I need the following help:

  1. I want to know whether I can run it on an RPi Zero W. If yes, please tell me the reduction in
    accuracy, quality, and frame rate compared to the RPi 4B and 3B+.

  2. How many objects can be detected by that model, and how do I choose which objects (i.e. only
    the required objects) to detect?

  3. As I am new to AI/ML, DL, and programming, I request a full modification of
    TFLite_detection_video.py for the following requirements: whenever it detects the required
    objects, send the frame with a warning message via email (I'll use Gmail), plus an option for a
    live feed that can be accessed from anywhere in the world via a link.
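The email-on-detection part can be sketched with the standard library alone. This is a minimal example, not a full modification of TFLite_detection_video.py; the function names are mine, and sending via Gmail typically requires an app password:

```python
import smtplib
from email.message import EmailMessage


def build_alert_email(sender, recipient, frame_jpeg_bytes):
    """Compose a warning email with the detection frame attached."""
    msg = EmailMessage()
    msg["Subject"] = "Warning: object detected"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("The required object was detected. Frame attached.")
    msg.add_attachment(frame_jpeg_bytes, maintype="image",
                       subtype="jpeg", filename="detection.jpg")
    return msg


def send_alert(msg, password):
    """Send the alert via Gmail's SMTP server (app password assumed)."""
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(msg["From"], password)
        server.send_message(msg)
```

Inside the detection loop, you would encode the current frame with cv2.imencode(".jpg", frame) and pass the resulting bytes to build_alert_email.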

By the way, you are great and make very good projects!

Edge TPU program not working

Thanks for the great tutorial. I followed instructions in "Section 1 - How to Set Up and Run TensorFlow Lite Object Detection Models on the Raspberry Pi" and got it working without any problems.

However, in "Section 2 - Run Edge TPU Object Detection Models on the Raspberry Pi Using the Coral USB Accelerator", got the following error when running the demo script:

(tflite1-env) pi@raspberrypi:~/tflite1 $ python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model --edgetpu
INFO: Initialized TensorFlow Lite runtime.
/home/pi/tflite1/Sample_TFLite_model/edgetpu.tflite
Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 140, in
interpreter.allocate_tensors()
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 244, in allocate_tensors
return self._interpreter.AllocateTensors()
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.

Is it possible that this is related to the recent update noted here?
https://github.com/google-coral/edgetpu/issues/44#issuecomment-579787546

There appears to be a fix:
https://github.com/google-coral/edgetpu/issues/44#issuecomment-579905056
I followed the fix but this did not solve the problem.
Thank you so much again.

My system info:
(tflite1-env) pi@raspberrypi:~/tflite1 $ sudo dpkg -l | grep edge
ii libedgetpu1-std:armhf 13.0 armhf Support library for Edge TPU

(tflite1-env) pi@raspberrypi:~/tflite1 $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

Step 3a. Create optimized TensorFlow Lite model

Hi all,

I hope someone can help me...
I get this error after running my model through the TOCO tool:

(tensorflow-build) C:\tensorflow-build>bazel run --config=opt tensorflow/lite/toco:toco -- --input_file=%OUTPUT_DIR%/tflite_graph.pb --output_file=%OUTPUT_DIR%/detect.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_values=128 --change_concat_input_ranges=false --allow_custom_ops

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.protobuf.UnsafeUtil (file:/C:/Users/rudik/_bazel_rudik/install/49689753f6e99985995e6295d4436977/_embedded_binaries/A-server.jar) to field java.nio.Buffer.address
WARNING: Please consider reporting this to the maintainers of com.google.protobuf.UnsafeUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
ERROR: The 'run' command is only supported from within a workspace.
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
INFO: Invocation ID: 5bfd1709-3dfa-4fcb-9bb5-c10458885e6e

Greetings, Rudi

Unable to create model file 'detect.tflite' to use with TensorFlow Lite!

I am executing the following command:

bazel run --config=opt tensorflow/lite/toco:toco -- --input_file=../../models/research/object_detection/TFLite_model/tflite_graph.pb --output_file=../../models/research/object_detection/TFLite_model/detect.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_values=128 --change_concat_input_ranges=false --allow_custom_ops

Command response:

INFO: Analysed target //tensorflow/lite/toco:toco (0 packages loaded).

INFO: Found 1 target...
Target //tensorflow/lite/toco:toco up-to-date:
bazel-bin/tensorflow/lite/toco/toco
INFO: Elapsed time: 0.423s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/tensorflow/lite/toco/toco '--input_file=../../models/research/object_detection/TFLite_model/tflite_graph.pb' '--output_file=../../models/research/object_detection/TFLite_model/detect.tflite' '--input_shapes=1,300,300,3' '--input_arrays=normalized_input_image_tensor' '--output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3' '--inference_type=QUANTIZED_UINT8' '--mean_values=128' '--std_values=128' '--change_concat_input_ranges=false' --alINFO: Build completed successfully, 1 total action
2020-01-10 19:27:36.435310: F tensorflow/lite/toco/toco_convert.cc:45] Check failed: port::file::Exists(input_file.value(), port::file::Defaults()).ok() Specified input_file does not exist: ../../models/research/object_detection/TFLite_model/tflite_graph.pb.

The file does exist, though; I have checked it carefully. I also tried putting the file in the same directory, but that did not work either.

Error when running final step

FileNotFoundError: [Errno 2] No such file or directory: '/home/pi/tflite1/Sample_TFLite_mode/labelmap.txt'
What's wrong? Please help.

ERROR with USB webcam

When I run TFLite_detection_webcam.py,
I get this error:

INFO: Initialized TensorFlow Lite runtime.
VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
VIDEOIO ERROR: V4L: can't open camera by index 0
Traceback (most recent call last):
File "tflite/TFLite_detection_webcam.py", line 171, in
frame = frame1.copy()
AttributeError: 'NoneType' object has no attribute 'copy'

I tried two cameras, but got the same error.
Maybe I should buy a new one?

Adding new class while preserving existing ones (SSD MobileNet v2 Quantized)

Thanks for your great tutorial. I trained my own custom class and managed to make it work on my PC. I used Google's COLAB to convert to TFLite format.

Question:
As we are using the SSD MobileNet v2 model as a base for our custom training, is it possible to keep all the classes that already exist within it (all 90+1)? I would like to detect everything it already supports, including the new class I am adding during training. In the tutorial, if I run the training it only detects my custom class(es), but without training (when downloaded) it detects all the COCO classes.

Thanks

ImportError: No module named 'object_detection.core'

When I try to train a new model, I get this error. Any help, please?

 File "train.py", line 50, in <module>
    from object_detection.builders import dataset_builder
  File "C:\tensorflow\models\research\object_detection\builders\dataset_builder.py", line 27, in <module>
    from object_detection.data_decoders import tf_example_decoder
  File "C:\tensorflow\models\research\object_detection\data_decoders\tf_example_decoder.py", line 27, in <module>
    from object_detection.core import data_decoder
ImportError: No module named 'object_detection.core'
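This error usually means the Object Detection API folders are not on the Python path. A minimal sketch follows; the C:\tensorflow\models\research path is an assumption — adjust it to wherever you cloned tensorflow/models:

```python
import os
import sys

# Hypothetical location of the tensorflow/models clone -- adjust as needed.
TF_MODELS_RESEARCH = r"C:\tensorflow\models\research"

# Both "research" and "research/slim" must be importable for
# object_detection.core (and slim's nets) to resolve.
for path in (TF_MODELS_RESEARCH, os.path.join(TF_MODELS_RESEARCH, "slim")):
    if path not in sys.path:
        sys.path.insert(0, path)
```

Equivalently, you can set the PYTHONPATH environment variable to include both folders before running train.py.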

How to control FPS ?

Hello.
I built TensorFlow Lite on a Raspberry Pi 3 B+.
FPS is below 0.9.

Is there anything that can be improved?

Thank you :)

Cumulative Counting Mode integration?

Hi,

Just like to say great work!

Question:

How would I go about integrating the Cumulative Counting Mode API into TFLite_detection_webcam.py?

https://github.com/ahmetozlu/tensorflow_object_counting_api

1.2) For detecting, tracking and counting the vehicles with enabled color prediction

Usage of "Cumulative Counting Mode" for the "vehicle counting" case:

fps = 24 # change it with your input video fps
width = 640 # change it with your input video width
height = 352 # change it with your input video height
is_color_recognition_enabled = 0 # set it to 1 for enabling the color prediction for the detected objects
roi = 200 # roi line position
deviation = 3 # the constant that represents the object counting area

object_counting_api.cumulative_object_counting_y_axis(input_video, detection_graph, category_index, is_color_recognition_enabled, fps, width, height, roi, deviation) # counting all the objects
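The core of cumulative counting is detecting when a tracked box's centroid crosses the ROI line between two frames. A hypothetical helper you could drop into the webcam script's frame loop (the names are mine, not from the counting API):

```python
def box_centroid_y(ymin, ymax):
    """Vertical centroid of a bounding box, in pixels."""
    return (ymin + ymax) / 2.0


def crossed_roi(prev_cy, cy, roi_y):
    """True when a centroid moved from one side of the ROI line to
    the other between two consecutive frames (either direction)."""
    return (prev_cy - roi_y) * (cy - roi_y) < 0
```

Inside TFLite_detection_webcam.py you would remember each tracked object's centroid from the previous frame and increment a cumulative counter whenever crossed_roi fires.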

Errors with step 2e

I am having errors after executing this step:
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package

Here is the error after entering the build command:

(tensorflow-build) D:\tensorflow-build\tensorflow>bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
WARNING: The following rc files are no longer being read, please transfer their contents or import their path into one of the standard rc files:
d:\tensorflow-build\tensorflow/.bazelrc
Starting local Bazel server and connecting to it...
WARNING: Option 'experimental_shortened_obj_file_path' is deprecated
INFO: Invocation ID: 4508ddc2-a932-49db-aa03-e21190a5425f
ERROR: error loading package '': Encountered error while reading extension file 'closure/defs.bzl': no such package '@io_bazel_rules_closure//closure': The native http_archive rule is deprecated. load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") for a drop-in replacement.
Use --incompatible_remove_native_http_archive=false to temporarily continue using the native rule.
ERROR: error loading package '': Encountered error while reading extension file 'closure/defs.bzl': no such package '@io_bazel_rules_closure//closure': The native http_archive rule is deprecated. load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") for a drop-in replacement.
Use --incompatible_remove_native_http_archive=false to temporarily continue using the native rule.
INFO: Elapsed time: 2.345s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded)
    Fetching @io_bazel_rules_closure; fetching

My configure session also seems to be slightly different from yours. I am using TensorFlow v1.12.
The following is my configure session:

You have bazel 0.21.0- (@non-git) installed.
Please specify the location of python. [Default is D:\Anaconda3\envs\tensorflow-build\python.exe]:


Found possible Python library paths:
  D:\Anaconda3\envs\tensorflow-build\lib\site-packages
Please input the desired Python library path to use.  Default is [D:\Anaconda3\envs\tensorflow-build\lib\site-packages]

Do you wish to build TensorFlow with Apache Ignite support? [Y/n]: n
No Apache Ignite support will be enabled for TensorFlow.

Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
No XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is /arch:AVX]:


Would you like to override eigen strong inline for some C++ compilation to reduce the compilation time? [Y/n]: n
Not overriding eigen strong inline, some compilations could take more than 20 mins.

About cv2

Hi, I executed the line
python3 TFLite_detection_webcam.py --modeldir = Sample_TFLite_model
but the result is:
Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 19, in
import cv2
ImportError: No module named 'cv2'

Please help me..

Error from cv2

Hi.

When I run:
python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model

I got the following error:
Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 19, in
import cv2
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/cv2/__init__.py", line 3, in
from .cv2 import *
ImportError: /home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/cv2/cv2.cpython-37m-arm-linux-gnueabihf.so: undefined symbol: __atomic_fetch_add_8

From your other interesting guide:
https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi

I got the error:
Traceback (most recent call last):
File "Object_detection_picamera.py", line 23, in
import cv2
File "/usr/local/lib/python3.7/dist-packages/cv2/__init__.py", line 3, in
from .cv2 import *
ImportError: libQtTest.so.4: cannot open shared object file: No such file or directory

Is my problem something to do with my Python, OS, or cv2 version? I'm on a Pi 4 and fully updated. Both of the guides result in an error for me.

Henrik.

IndexError: list index out of range

Hello. Thanks for this great tutorial. I was able to deploy TensorFlow Lite on the RasPi 3+, and this works well:

python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model

But when I try to build my own model and use it the same way (with TFLite_detection_webcam.py), I get the IndexError: list index out of range error.

I created my own model on Debian Linux 9 with TF2.

$ pip3 list | grep tensorflow
tensorflow                   2.0.0     
tensorflow-estimator         2.0.1     
tensorflow-hub               0.7.0     

I created the model this way:

$ make_image_classifier \
--image_dir ~/tensorflow/images_train/ \
--tfhub_module https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4 \
--image_size 224 \
--saved_model_dir /tmp/mynewmodel \
--labels_output_file /tmp/mynewmodel/labelmap.txt \
--tflite_output_file /tmp/mynewmodel/detect.tflite

This is the output:

$ ls -lh /tmp/mynewmodel/
total 11M
drwxr-xr-x 2 bnc bnc 4,0K Jan 10 22:55 assets
-rw-r--r-- 1 bnc bnc 8,5M Jan 11 00:02 detect.tflite
-rw-r--r-- 1 bnc bnc   20 Jan 11 00:02 labelmap.txt
-rw-r--r-- 1 bnc bnc 2,0M Jan 11 00:02 saved_model.pb
drwxr-xr-x 2 bnc bnc 4,0K Jan 11 00:02 variables

When I try to use the self-generated tensorflow lite model on the RasPi 3+, I get this error message:

$ python3 TFLite_detection_webcam.py --modeldir=/home/pi/mynewmodel
INFO: Initialized TensorFlow Lite runtime.
Traceback (most recent call last):
  File "TFLite_detection_webcam.py", line 186, in <module>
    classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
IndexError: list index out of range

Every 1-2 hours, I get one of these lines as additional output:

Corrupt JPEG data: premature end of data segment

What can I do to fix this?
Has anyone had the same issue?
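A likely cause: make_image_classifier produces a classification model with a single output tensor, while the detection scripts expect an SSD detection model with four output tensors (boxes, classes, scores, count), so output_details[1] raises IndexError. A small diagnostic sketch (the function name is mine; it accepts any object with a get_output_details method, such as a TFLite Interpreter):

```python
def check_detection_outputs(interpreter):
    """Raise a clear error when a model is not an SSD detection model.

    SSD detection .tflite models expose 4 output tensors; a model from
    make_image_classifier exposes only 1, which is why indexing
    output_details[1] fails with IndexError in the detection scripts.
    """
    outputs = interpreter.get_output_details()
    if len(outputs) < 4:
        raise ValueError(
            "Model has %d output tensor(s); the detection scripts need an "
            "SSD detection model with 4 outputs (boxes, classes, scores, "
            "count)." % len(outputs))
    return outputs
```

Running this right after allocate_tensors() would turn the confusing IndexError into an explicit message about the model type.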

cannot connect to X server

I was able to resolve the ImportError by installing Debian Buster on the RPi 3B+, but now I am encountering another problem, which I guess is not that tricky, but I am still stuck :(
I get the following error when I try to run:
(tflite1-env) pi@raspberrypi:~/tflite1 $ python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model
INFO: Initialized TensorFlow Lite runtime.
: cannot connect to X server
Segmentation fault
@EdjeElectronics any suggestion?

Quantization challenges

- TensorFlow 1.14

I reached some challenges when training and exporting different quantized models with your tutorial. I initially took a MobilNet-V2-Quantized model and trained it on multiple classes of moths. Upon training the outputted information reads as below. That is was skipping some quantization.

TesnorflowGithubIsue

I ignored this warning and went on through the training process. Gained a loss consistently under 2 and saved a packet with a 1.58 loss. After going through the process of building the build_tenorflow conda environment I attempted to transform the model to a TF_Lite model with the output below

GITHUBASDF

with an unsupported data type for the quantization holder, the model would transform to an empty model with 0kb as below.

(screenshot of the empty 0 KB model file)

I also found that the model did not have an identifier when attempting to build it, as below.

(screenshot of the missing-identifier error)

I attempted the same process with the facessd_mobilenet_v2_quantized model and got the same output. Where do you think the error in this process is?

THANKS EVAN!!!!!!!

cannot connect to X server

While trying to run Google's sample, I get this:

(tflite1-env) pi@raspberrypi:~/tflite1 $ python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model
INFO: Initialized TensorFlow Lite runtime.
: cannot connect to X server
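A hedged guess at the cause: the script is being launched from an SSH session, so OpenCV's display window has no X server to attach to. Pointing the session at the Pi's own screen (or reconnecting with `ssh -X` for forwarding) usually helps:

```shell
# Assumption: running over SSH with no X forwarding. Use the Pi's local display:
export DISPLAY=:0
# then re-run:
#   python3 TFLite_detection_webcam.py --modeldir=Sample_TFLite_model
```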

Overlapping Bounding Boxes

Hi man, thank you so much for this guide.

I'm able to run everything successfully. FYI, I'm using the TFLite_detection_video.py file. But I noticed that bounding boxes of the same class were overlapping, which is a problem for my next step, because I have to crop the image inside each bounding box. How can I resolve this, please?

I'm looking forward to your help.

Thanks
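Overlapping boxes of the same class can be thinned out with non-max suppression (NMS) before cropping. Below is a minimal pure-Python sketch, assuming boxes in the [ymin, xmin, ymax, xmax] layout the TFLite SSD detection scripts use; the function names and threshold are illustrative:

```python
def iou(a, b):
    """Intersection-over-union of two [ymin, xmin, ymax, xmax] boxes."""
    y1, x1 = max(a[0], b[0]), max(a[1], b[1])
    y2, x2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, y2 - y1) * max(0.0, x2 - x1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Return indices of boxes to keep, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # Keep a box only if it doesn't overlap too much with any kept box.
        if all(iou(boxes[i], boxes[k]) < iou_threshold for k in keep):
            keep.append(i)
    return keep
```

Note that the TFLite_Detection_PostProcess op already runs NMS internally, so another option is lowering the IoU threshold under batch_non_max_suppression in the pipeline config before re-exporting the model.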

About the Bazel configuration session for building GPU-enabled TensorFlow

Hey, I want to build the GPU-enabled version of TensorFlow, but I did not see the configuration session for building the GPU-enabled version in the FAQ section; I guess you may have forgotten to add it.
I also want to ask about the Python library paths: my default is tensorflow1\models\research\slim, and there is also a lib\site-packages path in the "Found possible Python library paths" list. Should I change my default path?

import cv2 ModuleNotFoundError: No module named 'cv2'

(tensorflow-build) C:\tensorflow-build>python TFLite_detection_webcam.py --modeldir=C:\tensorflow\models\research\object_detection\coco_ssd_mobilenet_v1_1.0_quant_2018_06_29
Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 19, in
import cv2
ModuleNotFoundError: No module named 'cv2'

I have installed TensorFlow v1.13.2, but I always get this error!
How can I fix it?
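The usual fix is to install OpenCV into the same environment the script runs from (pip install opencv-python inside the activated tensorflow-build env). A small hypothetical helper that turns the bare ImportError into an actionable message:

```python
import importlib

def require(module_name, hint):
    """Import a module, or exit with an install hint (hypothetical helper)."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        raise SystemExit("Missing module '%s'. %s" % (module_name, hint))

# e.g. near the top of TFLite_detection_webcam.py:
# cv2 = require('cv2', "Run 'pip install opencv-python' in this environment.")
```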

Labels in Spanish

Hello, this is such great work! I can contribute the labels translated to Spanish; you just have to add "--labels labelmap_sp.txt" (or whatever you name it):

???
Persona
Bicicleta
Auto
Motocicleta
Avion
Bus
Tren
Camion
Barco
Semaforo
Hidrante
???
Stop
Parqueo
Asiento
Pajaro
Gato
Perro
Caballo
Oveja
Vaca
Elefante
Oso
Zebra
Jirafa
???
Mochila
Paraguas
???
???
Cartera
Corbata
Traje
frisbee
Esqui
Tabla de Snowboard
Pelota
Cometa
Bate
Guante
Patineta
Tabla de Surf
Raqueta
Botella
???
Vaso de Vino
Taza
Tenedor
Cuchillo
Cuchara
Tazon
Guineo
Manzana
Sandwich
Naranja
Broccoli
Zanahoria
Panchito
Pizza
Dona
Torta
Silla
Sofa
Maceta
Cama
???
Mesita
???
???
Baño
???
Monitor
Computadora
Raton
Control Remoto
Teclado
Celular
Microondas
Horno
Tostadora
Lavabo
Refrigerador
???
Libro
Reloj
Florero
Tijeras
Peluche
Secadora
Cepillo

ERROR: Config value opt is not defined in any .rc file

(tensorflow-build) C:\tensorflow-build>bazel run -config=opt tensorflow/lite/toco:toco - --input_file=%OUTPUT_DIR%/tflite_graph.pb --output_file=%OUTPUT_DIR%/detect.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3 --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_values=128 --change_concat_input_ranges=false --allow_custom_ops

I got this error, please help ASAP!

INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=120
INFO: Options provided by the client:
Inherited 'build' options: --python_path=C:/Users/hamza/Anaconda3/envs/tensorflow-build/python.exe
ERROR: Config value opt is not defined in any .rc file
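Judging from the command pasted above, the likely culprit is the single-dash `-config=opt`: Bazel only recognizes the double-dash form `--config=opt`, and with one dash it goes looking for a config named "opt" in an .rc file and fails with exactly this error. A corrected command prefix (the remaining arguments stay the same) would be:

```shell
# Note the double dash on --config (single-dash -config triggers the error above).
# Printed here rather than executed; substitute your own %OUTPUT_DIR%.
CMD='bazel run --config=opt tensorflow/lite/toco:toco -- --input_file=%OUTPUT_DIR%/tflite_graph.pb --output_file=%OUTPUT_DIR%/detect.tflite'
echo "$CMD"
```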

Error while building tensorflow with bazel

Hello, I got this error:

ERROR: C:/tensorflow-build/tensorflow/tensorflow/core/BUILD:2515:1: Executing genrule //tensorflow/core:version_info_gen failed (Exit 2): bash.exe failed: error executing command
  cd C:/users/*user*/_bazel_*user*/j7bi4x5j/execroot/org_tensorflow
  SET PATH=C:\msys64\usr\bin;C:\msys64\bin
    SET PYTHON_BIN_PATH=C:/Users/*user*/Anaconda3/envs/tensorflow-build/python.exe
    SET PYTHON_LIB_PATH=C:/tensorflow1/models/research/slim
    SET TF_DOWNLOAD_CLANG=0
    SET TF_NEED_CUDA=0
    SET TF_NEED_OPENCL_SYCL=0
    SET TF_NEED_ROCM=0
  C:/msys64/usr/bin/bash.exe -c source external/bazel_tools/tools/genrule/genrule-setup.sh; bazel-out/x64_windows-opt/bin/tensorflow/tools/git/gen_git_source.exe --generate external/local_config_git/gen/spec.json external/local_config_git/gen/head external/local_config_git/gen/branch_ref "bazel-out/x64_windows-opt/genfiles/tensorflow/core/util/version_info.cc" --git_tag_override=${GIT_TAG_OVERRIDE:-}
Execution platform: @bazel_tools//platforms:host_platform
C:/Users/*user*/Anaconda3/envs/tensorflow-build/python.exe: can't open file 'C:\users\*user*': [Errno 2] No such file or directory
Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 45,888s, Critical Path: 3,68s
INFO: 7 processes: 7 local.
FAILED: Build did NOT complete successfully

Please help.

Tensorflow-Lite Installation bug + Suggestion

Hi EdjeElectronics,

Great work!

Bug

I just started your TFLite tutorial and encountered a strange issue with the pip installation for Python 3.5 on an RPi 3B+ (Stretch). To clarify, I am running in a venv. Although no error was raised by your script, I accidentally discovered that the tflite_runtime library isn't actually installed; it's as if only the registry entry for tflite_runtime is shown when I perform a "pip3 list" in the 'tflite1' venv. The solution is to put a "sudo" in front of your existing pip3 installation command, and then it works.

Suggestion

Also include "sudo modprobe bcm2835-v4l2" after the opencv-python installation step within your get_pi_requirements script.

Some things wrong when building the CPU-only version.

Hey, I ran into some problems when trying to build the CPU-only version.
The configuration session looks like this:

You have bazel 0.21.0- (@non-git) installed.
Please specify the location of python. [Default is C:\Users\YT\Anaconda3\envs\tensorflow-build\python.exe]:
Found possible Python library paths:
C:\Users\YT\Anaconda3\envs\tensorflow-build\lib\site-packages
Please input the desired Python library path to use. Default is [C:\Users\YT\Anaconda3\envs\tensorflow-build\lib\site-packages]
Do you wish to build TensorFlow with XLA JIT support? [y/N]: n
No XLA JIT support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is /arch:AVX]:
Would you like to override eigen strong inline for some C++ compilation to reduce the compilation time? [Y/n]: y
Eigen strong inline overridden.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=gdr # Build with GDR support.
--config=verbs # Build with libverbs support.
--config=ngraph # Build with Intel nGraph support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=noignite # Disable Apache Ignite support.
--config=nokafka # Disable Apache Kafka support.
--config=nonccl # Disable NVIDIA NCCL support.

It runs very well at first, but then it encounters some errors in the middle, like this:

ERROR: C:/tensorflow-build/tensorflow/tensorflow/core/kernels/BUILD:3221:1: C++ compilation of rule '//tensorflow/core/kernels:scan_ops' failed (Exit 2): cl.exe failed: error executing command

and this:

ERROR: C:/tensorflow-build/tensorflow/tensorflow/tools/pip_package/BUILD:241:1 C++ compilation of rule '//tensorflow/core/kernels:batch_matmul_op' failed (Exit 2): cl.exe failed: error executing command

I don't know if this error message is sufficient; if you need more details, just ask me.

interpreter.allocate_tensors() error when trying to test detection_image

When I tried to run "python TFLite_detection_image.py --modeldir=TFLite_model --image=P8120180.jpeg",

it threw this error:
Traceback (most recent call last):
File "TFLite_detection_image.py", line 90, in
interpreter.allocate_tensors()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter.py", line 244, in allocate_tensors
return self._interpreter.AllocateTensors()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Encountered unresolved custom op: AddV2.Node number 0 (AddV2) failed to prepare.

my TFLite_model directory:

  • detect.tflite
  • labelmap.txt
  • tflite_graph.pb
  • tflite_graph.pbtxt

TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

Hello,

I installed the project, followed all the steps, and it worked fine; then I changed the webcam and it stopped working. I wiped out everything, installed it again, and still got this error.

select timeout
Traceback (most recent call last):
File "Object_detection_picamera.py", line 192, in
feed_dict={image_tensor: frame_expanded})
File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 941, in run
run_metadata_ptr)
File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1133, in _run
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
File "/usr/lib/python3/dist-packages/numpy/core/numeric.py", line 538, in asarray
return array(a, dtype, copy=False, order=order)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
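This TypeError usually means cv2's VideoCapture returned None for a frame (the "select timeout" line points the same way), and that None was fed straight into sess.run. A small hedged guard, written against a generic capture object rather than any specific camera:

```python
def next_valid_frame(capture, retries=5):
    """Return the next real frame, skipping reads the camera failed to deliver."""
    for _ in range(retries):
        ok, frame = capture.read()
        if ok and frame is not None:
            return frame
    raise RuntimeError('camera returned no frame; check the device index/cable')
```

With a cv2.VideoCapture, this would replace the bare ret, frame = cap.read() before the frame is expanded and passed to the session.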

Error running TFLite_detection_webcam.py -- RPI4 (BUSTER)

Hello,

The following error appears when running the file TFLite_detection_webcam.py:

File "TFLite_detection_video.py", line 115, in
if labels[0] == '???':
IndexError: list index out of range

I commented out these lines, and then a new error appeared:

Traceback (most recent call last):
File "TFLite_detection_webcam.py", line 119, in
interpreter = Interpreter(model_path=PATH_TO_CKPT)
File "/home/pi/tflite1/tflite1-env/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 206, in init
model_path))
ValueError: Mmap of '/home/pi/tflite1/Sample_TFLite_model/detect.tflite' failed.

Can anybody help? Could it be due to the TensorFlow version? I just followed the tutorial step by step.
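The first IndexError suggests labelmap.txt was read as empty, and the later Mmap failure suggests detect.tflite itself is truncated; re-downloading or re-extracting the model folder is the usual cure. For the labels side, a hedged loader that fails loudly instead of indexing into an empty list:

```python
def load_labels(path):
    """Load label names, dropping the COCO '???' placeholder and blank lines."""
    with open(path) as f:
        labels = [line.strip() for line in f if line.strip()]
    if not labels:
        raise ValueError("labelmap '%s' is empty; re-download the model files" % path)
    if labels[0] == '???':  # first COCO entry is an unused placeholder
        labels = labels[1:]
    return labels
```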

EdgeTpuDelegateForCustomOp

Hello, I have a new Coral USB Accelerator that I'm testing, but I got this bug.

Maybe you can help me figure out what's wrong?

2020-03-02 15:46:39.970128: E tensorflow/core/platform/hadoop/hadoop_file_system.cc:132] HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory
/home/pi/tflite1/Sample_TFLite_model/edgetpu.tflite
Traceback (most recent call last):
  File "TFLite_detection_image.py", line 119, in <module>
    interpreter.allocate_tensors()
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 245, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 110, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 134350849Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.

Sometimes I also get:

2020-03-02 16:06:25.274480: E tensorflow/core/platform/hadoop/hadoop_file_system.cc:132] HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "parkio-lite.py", line 100, in <module>
    interpreter.allocate_tensors()
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter.py", line 245, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 110, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.

@Namburger
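These "Unsupported data type in custom op handler" failures typically trace back to a mismatch between the installed Edge TPU runtime (libedgetpu) and the tflite_runtime wheel, or to the delegate not being attached at all. A hedged sketch of how the delegate is normally attached, with the import kept inside the function so it only runs on a machine that actually has tflite_runtime installed:

```python
def make_edgetpu_interpreter(model_path):
    """Build a TFLite interpreter with the Edge TPU delegate attached."""
    from tflite_runtime.interpreter import Interpreter, load_delegate
    return Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate('libedgetpu.so.1.0')])

# e.g. on the Pi:
# interpreter = make_edgetpu_interpreter('Sample_TFLite_model/edgetpu.tflite')
# interpreter.allocate_tensors()
```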
