
google-coral / pycoral


Python API for ML inferencing and transfer-learning on Coral devices

Home Page: https://coral.ai

License: Apache License 2.0

Languages: Makefile 5.05%, Starlark 2.81%, Python 71.14%, Shell 1.35%, C++ 11.92%, Batchfile 6.42%, PowerShell 0.74%, Smarty 0.57%
Topics: pycoral, edge-tpu, coral-dev-board

pycoral's Introduction

PyCoral API

This repository contains an easy-to-use Python API that helps you run inferences and perform on-device transfer learning with TensorFlow Lite models on Coral devices.

To install the prebuilt PyCoral library, see the instructions at coral.ai/software/.

Note: If you're on a Debian system, be sure to install this library from apt-get and not from pip. Installing with pip is not guaranteed to be compatible with the other Coral libraries, which you must install from apt-get. For details, see coral.ai/software/.

Documentation and examples

To learn more about how to use the PyCoral API, see our guide to Run inference on the Edge TPU with Python and check out the PyCoral API reference.

Several Python examples are available in the examples/ directory. For instructions, see the examples README.
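For example, a minimal classification sketch with the PyCoral API looks like this (the model, label, and image paths are placeholders; any Edge TPU-compiled classification model should work the same way):

    from PIL import Image

    from pycoral.adapters import classify, common
    from pycoral.utils.dataset import read_label_file
    from pycoral.utils.edgetpu import make_interpreter

    # Placeholder paths -- substitute your own compiled model, labels, and image.
    interpreter = make_interpreter('model_edgetpu.tflite')
    interpreter.allocate_tensors()

    image = Image.open('image.jpg').convert('RGB').resize(
        common.input_size(interpreter), Image.LANCZOS)
    common.set_input(interpreter, image)
    interpreter.invoke()

    labels = read_label_file('labels.txt')
    for c in classify.get_classes(interpreter, top_k=3):
        print(labels.get(c.id, c.id), c.score)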

Compilation

When building this library yourself, it's critical that you use version-matching builds of libcoral and libedgetpu. Both are submodules of the pycoral repo, and all three projects share the same TENSORFLOW_COMMIT value, so if you change it in one, you must change it in all of them.

For complete details about how to build all these libraries, read Build Coral for your platform. Or to build just this library, follow these steps:

  1. Clone this repo and include submodules:

    git clone --recurse-submodules https://github.com/google-coral/pycoral
    

If you already cloned without the submodules, you can add them with this:

    cd pycoral
    
    git submodule init && git submodule update
    
  2. Run scripts/build.sh to build the pybind11-based native layer for different Linux architectures. The build is Docker-based, so you must have Docker installed.

  3. Run make wheel to generate the Python library wheel, then install it with pip3 install $(ls dist/*.whl).
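As a quick smoke test after installing the wheel (a sketch, assuming an Edge TPU is attached and the libedgetpu runtime is installed), you can list the detected devices:

    from pycoral.utils.edgetpu import list_edge_tpus

    # Prints one entry per detected Edge TPU; an empty list means none was found.
    print(list_edge_tpus())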

pycoral's People

Contributors

davidfv2296190, dmitriykovalev, hjonnala, manoj7410, scott306lr, usmank13


pycoral's Issues

slow inference on ARM64 device

Description

Hi there,
I am using a Rockchip RK3399 (64-bit CPUs: dual Cortex-A72 + quad Cortex-A53, USB 3.0) with Ubuntu 18.04.1 LTS and Python 3.6.8.
When running examples/detect_image.py with the model efficientdet_lite2_448_ptq_edgetpu.tflite,
the inference takes ~330 ms, while I expected ~100 ms (based on the published benchmark).
Is my expectation realistic, or are there issues in my setup that I'm not aware of?

Here's what I've done so far:

  • I installed libedgetpu1-std & python3-pycoral, as described in the official instructions.
  • I stored the input images locally (on the SD card).
  • I set the 'performance' governor for the CPU frequency.
  • I made sure that the device connection is recognized as USB 3 (lsusb -> bcdUSB=3.1).

Finally, I tried the benchmarks/inference_benchmarks.py, and got:

******************** Check results *********************
 * Unexpected high latency! [inception_v1_224_quant_edgetpu.tflite]
   Inference time: 6.283602199999905 ms  Reference time: 4.0 ms
 * Unexpected high latency! [mobilenet_v1_1.0_224_quant_edgetpu.tflite]
   Inference time: 4.973522705000164 ms  Reference time: 2.22 ms
 * Unexpected high latency! [mobilenet_v2_1.0_224_quant_edgetpu.tflite]
   Inference time: 5.482743665000385 ms  Reference time: 2.56 ms
 * Unexpected high latency! [ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite]
   Inference time: 10.197461029999886 ms  Reference time: 7.78 ms
******************** Check finished! *******************

I would appreciate any help,
Thanks!
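A minimal way to reproduce such latency numbers (a sketch, not the official benchmark harness; the model path is a placeholder) is to time repeated invoke() calls, discarding the first one because it includes loading the model into Edge TPU memory:

import time

import numpy as np
from pycoral.adapters import common
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('inception_v1_224_quant_edgetpu.tflite')
interpreter.allocate_tensors()

# Random uint8 input of the model's shape; the content doesn't affect timing.
shape = interpreter.get_input_details()[0]['shape']  # e.g. [1, 224, 224, 3]
common.set_input(interpreter,
                 np.random.randint(0, 256, tuple(shape[1:]), dtype=np.uint8))

interpreter.invoke()  # Discard the first run (model load).
runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.invoke()
print('%.2f ms per inference' % ((time.perf_counter() - start) / runs * 1e3))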


Issue Type

Performance

Operating System

Ubuntu

Coral Device

USB Accelerator

Other Devices

No response

Programming Language

Python 3.6

Relevant Log Output

No response

Unable to cast Python instance to C++ type

Hi, I'm trying to check dma-buf support and use it to run my model (on a Google Coral Dev Board), but when I call a few functions from pycoral.utils.edgetpu, pybind returns an error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)

My code:
import pycoral.utils.edgetpu as edgetpu

interpreter = edgetpu.make_interpreter('model_edgetpu.tflite')
edgetpu.supports_dmabuf(interpreter)

I've tried all pycoral versions (1.0.0, 1.0.1, and 2.0.0), but each of them has the same issue.
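For what it's worth, the pycoral/utils/edgetpu.py source quoted later on this page passes interpreter._native_handle() (not the Interpreter object itself) to the pybind functions such as supports_dmabuf. A hedged sketch along those lines, based on that code rather than on any confirmed fix:

import pycoral.utils.edgetpu as edgetpu

interpreter = edgetpu.make_interpreter('model_edgetpu.tflite')
interpreter.allocate_tensors()

# run_inference() internally hands the pybind layer the native handle,
# not the Python Interpreter object (see the edgetpu.py source below).
handle = interpreter._native_handle()  # pylint:disable=protected-access
print(edgetpu.supports_dmabuf(handle))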

error while cloning the repo/submodules

Hi,

When following the compilation instructions on https://github.com/google-coral/pycoral, the first step already fails.

LANG=C git clone --recurse-submodules https://github.com/google-coral/pycoral
Cloning into 'pycoral'...
remote: Enumerating objects: 206, done.
remote: Counting objects: 100% (206/206), done.
remote: Compressing objects: 100% (151/151), done.
remote: Total 206 (delta 46), reused 198 (delta 41), pack-reused 0
Receiving objects: 100% (206/206), 2.88 MiB | 4.38 MiB/s, done.
Resolving deltas: 100% (46/46), done.
Submodule 'libcoral' (https://github.com/google-coral/libcoral) registered for path 'libcoral'
Submodule 'libedgetpu' (https://github.com/google-coral/libedgetpu) registered for path 'libedgetpu'
Submodule 'test_data' (https://github.com/google-coral/test_data) registered for path 'test_data'
Cloning into '/mnt/tmpfs/pycoral/libcoral'...
remote: Enumerating objects: 215, done.        
remote: Counting objects: 100% (215/215), done.        
remote: Compressing objects: 100% (183/183), done.        
remote: Total 215 (delta 26), reused 211 (delta 25), pack-reused 0        
Receiving objects: 100% (215/215), 8.73 MiB | 8.03 MiB/s, done.
Resolving deltas: 100% (26/26), done.
Cloning into '/mnt/tmpfs/pycoral/libedgetpu'...
remote: Enumerating objects: 567, done.        
remote: Counting objects: 100% (567/567), done.        
remote: Compressing objects: 100% (302/302), done.        
remote: Total 567 (delta 293), reused 531 (delta 260), pack-reused 0        
Receiving objects: 100% (567/567), 515.33 KiB | 1.58 MiB/s, done.
Resolving deltas: 100% (293/293), done.
Cloning into '/mnt/tmpfs/pycoral/test_data'...
remote: Enumerating objects: 203, done.        
remote: Total 203 (delta 0), reused 0 (delta 0), pack-reused 203        
Receiving objects: 100% (203/203), 480.93 MiB | 11.71 MiB/s, done.
Resolving deltas: 100% (51/51), done.
fatal: remote error: upload-pack: not our ref 8b21123c74d1f19c94b9d37aa16b26b80ef5e83b
Fetched in submodule path 'libcoral', but it did not contain 8b21123c74d1f19c94b9d37aa16b26b80ef5e83b. Direct fetching of that commit failed.

error: external/org_tensorflow/tensorflow/lite/core/subgraph.cc:1044 required_bytes != bytes (602112 != 150528)

Description

Since I use Windows 10, I unfortunately cannot use the current runtime and have to use the older version
edgetpu_runtime_20210119.zip.

For this to work, I have to compile my model for runtime version 13 with the Edge TPU compiler:
edgetpu_compiler -s -m 13 model_unquant_8020.tflite
New model: example_edetpu13.tflite

$ pip freeze
edgetpu @ https://dl.google.com/coral/edgetpu_api/edgetpu-2.14.0-cp37-cp37m-win_amd64.whl
install==1.3.4
numpy==1.21.2
opencv-contrib-python==4.5.3.56
opencv-python==4.5.3.56
Pillow==8.3.2
pycoral @ https://github.com/google-coral/pycoral/releases/download/v2.0.0/pycoral-2.0.0-cp37-cp37m-win_amd64.whl
tflite-runtime @ https://github.com/google-coral/pycoral/releases/download/v2.0.0/tflite_runtime-2.5.0.post1-cp37-cp37m-win_amd64.whl

My model still does not run.

Test Code:

from edgetpu.classification.engine import ClassificationEngine
from PIL import Image
import cv2
import re
import os

#from edgetpu.classification.engine import ClassificationEngine

# the TFLite converted to be used with edgetpu
modelPath = './model/example_edetpu13.tflite'

# The path to labels.txt that was downloaded with your model
labelPath = './model/labels.txt'

# This function parses the labels.txt and puts it in a python dictionary
def loadLabels(labelPath):
    p = re.compile(r'\s*(\d+)(.+)')
    with open(labelPath, 'r', encoding='utf-8') as labelFile:
        lines = (p.match(line).groups() for line in labelFile.readlines())
        return {int(num): text.strip() for num, text in lines}

# This function takes in a PIL Image and the ClassificationEngine
def classifyImage(image, engine):
    # Classify and output inference
    #classifications = engine.ClassifyWithImage(image)
    classifications = engine.classify_with_image(image)
   
    return classifications

def main():
    # Load your model onto your Coral Edgetpu
    engine = ClassificationEngine(modelPath)
    labels = loadLabels(labelPath)

    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # Format the image into a PIL Image so it's compatible with the Edge TPU
        cv2_im = frame
        #cv2_im = cv2.resize(cv2_im, (224, 224))
        pil_im = Image.fromarray(cv2_im)

        # Resize and flip image so it's a square and matches training.
        # Note: PIL's resize() and transpose() return new images rather than
        # modifying in place, so the results must be assigned (the original
        # code discarded them, leaving the frame at its camera resolution).
        pil_im = pil_im.resize((224, 224))
        pil_im = pil_im.transpose(Image.FLIP_LEFT_RIGHT)

        # Classify and display image
        results = classifyImage(pil_im, engine)
        #cv2.imshow('frame', cv2_im)
        print(results)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()

Failure:

"__main__" geladen
"runpy" geladen
external/org_tensorflow/tensorflow/lite/core/subgraph.cc:1044 required_bytes != bytes (602112 != 150528)
Stapelüberwachung:
 >  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 26, in classifyImage
 >    classifications = engine.classify_with_image(image)
 >  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 50, in main
 >    results = classifyImage(pil_im, engine)
 >  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 60, in <module> (Current frame)
 >    main()
"edgetpu.swig.edgetpu_cpp_wrapper" geladen
"edgetpu.basic.basic_engine" geladen
"edgetpu.classification.engine" geladen
Traceback (most recent call last):
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\program files (x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\debugpy\__main__.py", line 45, in <module>
    cli.main()
  File "c:\program files (x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\debugpy/..\debugpy\server\cli.py", line 430, in main
    run()
  File "c:\program files (x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\debugpy/..\debugpy\server\cli.py", line 267, in run_file
    runpy.run_path(options.target, run_name=compat.force_str("__main__"))
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 60, in <module>
The thread 'MainThread' (0x1) has exited with code 0 (0x0).
    main()
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 50, in main
    results = classifyImage(pil_im, engine)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\AnjaCoral.py", line 26, in classifyImage
    classifications = engine.classify_with_image(image)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\edgetpu\classification\engine.py", line 99, in classify_with_image
    return self.classify_with_input_tensor(input_tensor, threshold, top_k)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\edgetpu\classification\engine.py", line 123, in classify_with_input_tensor
    input_tensor)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\edgetpu\basic\basic_engine.py", line 136, in run_inference
    result = self._engine.RunInference(input)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\edgetpu\swig\edgetpu_cpp_wrapper.py", line 111, in RunInference
    return _edgetpu_cpp_wrapper.BasicEnginePythonWrapper_RunInference(self, input)
RuntimeError: external/org_tensorflow/tensorflow/lite/core/subgraph.cc:1044 required_bytes != bytes (602112 != 150528)
[ WARN:0] global C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-_xlv4eex\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback
The program "python.exe" has exited with code 1 (0x1).

What is the problem now?


Issue Type

Build/Install

Operating System

Windows 10

Coral Device

USB Accelerator

Other Devices

No response

Programming Language

Python 3.7

Relevant Log Output

No response

Coral Ejects at Runtime

I installed edgetpu_runtime_20210726 for Windows, running the latest Visual Studio C++ redistributable.

I am running through the demo on the getting-started page, currently at the Python execution of the parrot classification.

Every time I try to execute the Python script, the Google Coral "ejects" from my PC and the terminal brings up a fresh line as though it finished properly. No output from Python is displayed.

Any ideas for how to debug this? I uninstalled and reinstalled, with restarts in between, and still have the same problem.

Unsupported data type in custom op handler

I'm running a Python 3.7 Docker image with libedgetpu1-max (tried std as well), and as of yesterday I am no longer able to find the python3-pycoral package (which seemed to work while it was there).

I have since installed pycoral-1.0.0-cp37-cp37m-linux_aarch64.whl and tflite_runtime-2.5.0-cp37-cp37m-linux_aarch64.whl and am now getting the following error:

Unsupported data type in custom op handler: -1579713216Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.

Support for OpenCL backend TFLite runtime package

Hello
How are you?
Thanks for contributing to this project.
I would like to install an OpenCL-backend TensorFlow Lite runtime package for Python 3.7 on an RK3399 device.

OS Platform: Ubuntu 18.04 aarch64
Mobile device: RK3399
GPU: Mali T860
OpenCL: 1.2
gcc(g++): 7.5
CMake: 3.18

Could you provide this ASAP?
Thanks

Can't run custom tflite model using PyCoral

  • Attempting to run a custom model (based on MobileDet) using PyCoral.
  • The code below is how I ran it, based on the example provided for object detection (which does work when using the sample COCO MobileNet model).
  • Not entirely sure what is going wrong here - any help would be greatly appreciated!
  • Below I am using the parrot image (provided with the classification example) for object detection.
  • I will include the relevant files in a zip.

(base) patrickcombe@MacBook-Pro pycoral % python3 examples/detect_image.py \
  --model test_data/1127_MobileDet_output_model_1127ssdlite_mobiledet_edgetpu.tflite \
  --labels test_data/classes.txt \
  --input test_data/parrot.jpg \
  --output ${HOME}/grace_hopper_processed.bmp
----INFERENCE TIME----
Note: The first inference is slow because it includes loading the model into Edge TPU memory.
Traceback (most recent call last):
  File "examples/detect_image.py", line 108, in <module>
    main()
  File "examples/detect_image.py", line 85, in main
    interpreter.invoke()
  File "/Users/patrickcombe/opt/anaconda3/lib/python3.8/site-packages/tflite_runtime/interpreter.py", line 540, in invoke
    self._interpreter.Invoke()
RuntimeError: external/org_tensorflow/tensorflow/lite/kernels/detection_postprocess.cc:426 ValidateBoxes(decoded_boxes, num_boxes) was not true.Node number 1 (TFLite_Detection_PostProcess) failed to invoke.

test_data.zip

ModuleNotFoundError: No module named 'pycoral.pybind' (amd64, Windows 10, Python 3.9)

Description

What I am trying to do and why

I am currently trying to integrate a dual Edge TPU into our system.
The problem I face is that, due to other packages we are using, I am bound to specific versions of Python and numpy (Python 3.9 and numpy 1.19.3).
I was very delighted to find out that pycoral is already available prebuilt for 3.9, but the prebuilt wheel does not work with the numpy version I need. So I tried building the wheel myself from the source provided in the repo's releases.

To set up my system I followed the official coral.ai setup for Windows, as described in the getting-started documentation.
The runtime installer reports success on completion.

To build the wheel I downloaded the source and ran python -m build in the folder. The output does not indicate any errors.

The error

Building the wheel finishes successfully and I can install the wheel on my system. However, I face the following issue when running this code:

from pycoral.utils.edgetpu import make_interpreter

Error:

....
  File ".....\lib\site-packages\pycoral\utils\edgetpu.py", line 24, in <module>
    from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
ModuleNotFoundError: No module named 'pycoral.pybind'

I am aware that this issue in some form was already brought up in #48, #32, as well as #24, but no suggestion in those threads solved the issue for me.

Some basic analysis of my system

The path explorer does indeed not show a module named pybind as a child of pycoral, and the source pycoral folder does not contain one either (a screenshot was attached to the original issue). Is this correct, and am I missing something here?

Please let me know if any more information is required, or if you have any hints as to how I could further analyze this issue.

Thanks in advance,


Issue Type

Build/Install

Operating System

Windows 10

Coral Device

M.2 Accelerator with dual Edge TPU

Other Devices

No response

Programming Language

Python 3.9

Relevant Log Output

No response

Jetson Nano + Coral USB - RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.

:~/coral/pycoral$ dpkg -l | grep libedgetpu                                                                                            
ii  libedgetpu1-max:arm64                      15.0                                             arm64        Support library for Edge TPU
 python3 examples/classify_image.py --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels test_data/inat_bird_labels.txt --input test_data/parrot.jpg
Traceback (most recent call last):
  File "examples/classify_image.py", line 84, in <module>
    main()
  File "examples/classify_image.py", line 62, in main
    interpreter.allocate_tensors()
  File "/home/vanillax/.local/lib/python3.6/site-packages/tflite_runtime/interpreter.py", line 242, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/vanillax/.local/lib/python3.6/site-packages/tflite_runtime/interpreter_wrapper.py", line 115, in AllocateTensors
    return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 0Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.

Bazel version required

Hi, I'm trying to compile pycoral with bazel 0.21 but it fails with:

ERROR: Unrecognized option: --experimental_repo_remote_exec
INFO: Invocation ID: 39fc9363-9dd7-4600-8a20-a9995f7691ee
Makefile:164: recipe for target 'tflite' failed
make: *** [tflite] Error 2
make: *** Waiting for unfinished jobs....
ERROR: Unrecognized option: --experimental_repo_remote_exec
INFO: Invocation ID: 77a603b4-dbdb-4b82-abd3-4aacc175c6bf
Makefile:155: recipe for target 'pybind' failed

Is a newer/older version required? Thanks.

marek

PyCoral Vs TfLite_runtime

Hello,

By reading the Python quickstart:
https://www.tensorflow.org/lite/guide/python
I understand that the tflite_runtime is installed using Python wheel packages regenerated via pycoral.
Is my understanding right?

In that case, how does it relate to the tflite_runtime generated from the TensorFlow Lite GitHub repository?
The versions also do not match: the official TensorFlow Lite version (and so the runtime) is v2.3.1, while the Python quickstart guide shows a tflite_runtime version of 2.5.0.

Further, running a TPU model using the TensorFlow Lite runtime v2.3.1 fails:
RuntimeError: Internal: Unsupported data type in custom op handler: 5105160Node number 2 (EdgeTpuDelegateForCustomOp) failed to prepare.

Can you please provide more information regarding PyCoral vs. tflite_runtime?

Thanks
Vincent
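A small version-check sketch (assuming a working install) that prints the pip-installed tflite_runtime version alongside the Edge TPU runtime version reported by PyCoral, which is a cheap first step when chasing this kind of mismatch:

import tflite_runtime
from pycoral.utils.edgetpu import get_runtime_version

print('tflite_runtime wheel:', tflite_runtime.__version__)
print('Edge TPU runtime:', get_runtime_version())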

Siamese network support

This topic is not about an issue with pycoral. I would just like to ask if there is any way to support a Siamese network on the Edge TPU. Any suggestions will be appreciated.

How to record a video on the Coral Dev Board using the hardware codec?

I have a Coral dev kit and a Coral camera with me.

I want my inference pipeline to keep running as it is, while recording the input camera frames (raw format is also acceptable) to a file in parallel. How do I achieve that? A GStreamer pipeline?

If I use cv2.VideoWriter, it is a severe bottleneck and the FPS is low.

Error ValueError: Failed to load delegate from edgetpu.dll And No module named 'pycoral.pybind'

Description

I have been trying for days to get the Google Coral USB Accelerator running under Windows 10.

Under Visual Studio 2019 I have:

  • Python 3.7.8 64-bit and

  • the C++ app workload for VS 2019 installed

  • edgetpu_runtime_20210726.zip extracted

  • install.bat executed

Install.bat Result:

Installing UsbDk
Installing Windows drivers
Microsoft PnP Utility

Adding driver package:  coral.inf
The driver package was added successfully. (Already present in the system)
Published name:         oem75.inf

Adding driver package:  Coral_USB_Accelerator.inf
The driver package was added successfully. (Already present in the system)
Published name:         oem76.inf

Adding driver package:  Coral_USB_Accelerator_(DFU).inf
The driver package was added successfully. (Already present in the system)
Published name:         oem77.inf
Driver package installed on device: USB\VID_1A6E&PID_089A\5&32865703&0&17
Driver package installed on device: USB\VID_1A6E&PID_089A\5&32865703&0&18
The driver package on the device is up to date: USB\VID_1A6E&PID_089A\5&32865703&0&19

Total driver packages:  3
Added driver packages:  3
Installing performance counters

Info: Provider {aaa5bf9e-c44b-4177-af65-d3a06ba45fe7}, defined in C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\edgetpu_runtime\third_party\coral_accelerator_windows\coral.man, is already installed in the system repository.
Info: The performance counters were successfully installed in C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\edgetpu_runtime\third_party\coral_accelerator_windows\coral.man. Copying edgetpu and libusb to System32
        1 file(s) copied.
        1 file(s) copied.
Install complete
Press any key to continue . . .

  • Restart Windows
  • Google coral usb accelerator connect to PC

Then I installed the following packages:
Edge-TPU-Python-API:
pip install https://dl.google.com/coral/edgetpu_api/edgetpu-2.14.0-cp37-cp37m-win_amd64.whl

tflite runtime
pip install https://github.com/google-coral/pycoral/releases/download/v2.0.0/tflite_runtime-2.5.0.post1-cp37-cp37m-win_amd64.whl

Install pycoral:
pip install https://github.com/google-coral/pycoral/releases/download/v2.0.0/pycoral-2.0.0-cp37-cp37m-win_amd64.whl

  • Test with classify_image.py:
python examples/classify_image.py --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels test_data/inat_bird_labels.txt --input test_data/parrot.jpg

Result:
$ python examples/classify_image.py --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels test_data/inat_bird_labels.txt --input test_data/parrot.jpg
Traceback (most recent call last):
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\tflite_runtime\interpreter.py", line 160, in load_delegate
    delegate = Delegate(library, options)
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\tflite_runtime\interpreter.py", line 119, in __init__
    raise ValueError(capture.message)
ValueError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "examples/classify_image.py", line 121, in <module>
    main()
  File "examples/classify_image.py", line 71, in main
    interpreter = make_interpreter(*args.model.split('@'))
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\pycoral\utils\edgetpu.py", line 87, in make_interpreter
    delegates = [load_edgetpu_delegate({'device': device} if device else {})]
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\pycoral\utils\edgetpu.py", line 52, in load_edgetpu_delegate
    return tflite.load_delegate(_EDGETPU_SHARED_LIB, options or {})
  File "C:\Users\anja-\Anja_Programme\AnjaCoral\envCoralPy8\lib\site-packages\tflite_runtime\interpreter.py", line 163, in load_delegate
    library, str(e)))
ValueError: Failed to load delegate from edgetpu.dll

(envCoralPy8)

edgetpu.dll is in the directory C:\Windows\System32.

What's going on here?
Do you have a tip?

I also tried to run the test script in Visual Studio 2019.

  • Test script:
# Lint as: python3
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utilities for using the TensorFlow Lite Interpreter with Edge TPU."""

import contextlib
import ctypes
import ctypes.util

import numpy as np

# pylint:disable=unused-import
from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
from pycoral.pybind._pywrap_coral import InvokeWithBytes as invoke_with_bytes
from pycoral.pybind._pywrap_coral import InvokeWithDmaBuffer as invoke_with_dmabuffer
from pycoral.pybind._pywrap_coral import InvokeWithMemBuffer as invoke_with_membuffer
from pycoral.pybind._pywrap_coral import ListEdgeTpus as list_edge_tpus
from pycoral.pybind._pywrap_coral import SetVerbosity as set_verbosity
from pycoral.pybind._pywrap_coral import SupportsDmabuf as supports_dmabuf
import platform
import tflite_runtime.interpreter as tflite


_EDGETPU_SHARED_LIB = {
  'Linux': 'libedgetpu.so.1',
  'Darwin': 'libedgetpu.1.dylib',
  'Windows': 'edgetpu.dll'
}[platform.system()]


def load_edgetpu_delegate(options=None):
  """Loads the Edge TPU delegate with the given options.

  Args:
    options (dict): Options that are passed to the Edge TPU delegate, via
      ``tf.lite.load_delegate``. The only option you should use is
      "device", which defines the Edge TPU to use. Supported values are the same
      as `device` in :func:`make_interpreter`.
  Returns:
    The Edge TPU delegate object.
  """
  return tflite.load_delegate(_EDGETPU_SHARED_LIB, options or {})


def make_interpreter(model_path_or_content, device=None, delegate=None):
  """Creates a new ``tf.lite.Interpreter`` instance using the given model.

  **Note:** If you have multiple Edge TPUs, you should always specify the
  ``device`` argument.

  Args:
     model_path_or_content (str or bytes): `str` object is interpreted as
       model path, `bytes` object is interpreted as model content.
     device (str): The Edge TPU device you want:

       + None      -- use any Edge TPU (this is the default)
       + ":<N>"    -- use N-th Edge TPU (this corresponds to the enumerated
         index position from :func:`list_edge_tpus`)
       + "usb"     -- use any USB Edge TPU
       + "usb:<N>" -- use N-th USB Edge TPU
       + "pci"     -- use any PCIe Edge TPU
       + "pci:<N>" -- use N-th PCIe Edge TPU

       If left as None, you cannot reliably predict which device you'll get.
       So if you have multiple Edge TPUs and want to run a specific model on
       each one, then you must specify the device.
     delegate: A pre-loaded Edge TPU delegate object, as provided by
       :func:`load_edgetpu_delegate`. If provided, the `device` argument
       is ignored.

  Returns:
     New ``tf.lite.Interpreter`` instance.
  """
  if delegate:
    delegates = [delegate]
  else:
    delegates = [load_edgetpu_delegate({'device': device} if device else {})]
  if isinstance(model_path_or_content, bytes):
    return tflite.Interpreter(
        model_content=model_path_or_content, experimental_delegates=delegates)
  else:
    return tflite.Interpreter(
        model_path=model_path_or_content, experimental_delegates=delegates)


# ctypes definition of GstMapInfo. This is a stable API, guaranteed to be
# ABI compatible for any past and future GStreamer 1.0 releases.
# Used to get the underlying memory pointer without any copies, and without
# native library linking against libgstreamer.
class _GstMapInfo(ctypes.Structure):
  _fields_ = [
      ('memory', ctypes.c_void_p),  # GstMemory *memory
      ('flags', ctypes.c_int),  # GstMapFlags flags
      ('data', ctypes.c_void_p),  # guint8 *data
      ('size', ctypes.c_size_t),  # gsize size
      ('maxsize', ctypes.c_size_t),  # gsize maxsize
      ('user_data', ctypes.c_void_p * 4),  # gpointer user_data[4]
      ('_gst_reserved', ctypes.c_void_p * 4)
  ]  # GST_PADDING


# Try to import GStreamer but don't fail if it's not available. If not available
# we're probably not getting GStreamer buffers as input anyway.
_libgst = None
try:
  # pylint:disable=g-import-not-at-top
  import gi
  gi.require_version('Gst', '1.0')
  gi.require_version('GstAllocators', '1.0')
  # pylint:disable=g-multiple-import
  from gi.repository import Gst, GstAllocators
  _libgst = ctypes.CDLL(ctypes.util.find_library('gstreamer-1.0'))
  _libgst.gst_buffer_map.argtypes = [
      ctypes.c_void_p,
      ctypes.POINTER(_GstMapInfo), ctypes.c_int
  ]
  _libgst.gst_buffer_map.restype = ctypes.c_int
  _libgst.gst_buffer_unmap.argtypes = [
      ctypes.c_void_p, ctypes.POINTER(_GstMapInfo)
  ]
  _libgst.gst_buffer_unmap.restype = None
except (ImportError, ValueError, OSError):
  pass


def _is_valid_ctypes_input(input_data):
  if not isinstance(input_data, tuple):
    return False
  pointer, size = input_data
  if not isinstance(pointer, ctypes.c_void_p):
    return False
  return isinstance(size, int)


@contextlib.contextmanager
def _gst_buffer_map(buffer):
  """Yields gst buffer map."""
  mapping = _GstMapInfo()
  ptr = hash(buffer)
  success = _libgst.gst_buffer_map(ptr, mapping, Gst.MapFlags.READ)
  if not success:
    raise RuntimeError('gst_buffer_map failed')
  try:
    yield ctypes.c_void_p(mapping.data), mapping.size
  finally:
    _libgst.gst_buffer_unmap(ptr, mapping)


def _check_input_size(input_size, expected_input_size):
  if input_size < expected_input_size:
    raise ValueError('input size={}, expected={}.'.format(
        input_size, expected_input_size))


def run_inference(interpreter, input_data):
  """Performs interpreter ``invoke()`` with a raw input tensor.

  Args:
    interpreter: The ``tf.lite.Interpreter`` to invoke.
    input_data: A 1-D array as the input tensor. Input data must be uint8
      format. Data may be ``Gst.Buffer`` or :obj:`numpy.ndarray`.
  """
  input_shape = interpreter.get_input_details()[0]['shape']
  expected_input_size = np.prod(input_shape)

  interpreter_handle = interpreter._native_handle()  # pylint:disable=protected-access
  if isinstance(input_data, bytes):
    _check_input_size(len(input_data), expected_input_size)
    invoke_with_bytes(interpreter_handle, input_data)
  elif _is_valid_ctypes_input(input_data):
    pointer, actual_size = input_data
    _check_input_size(actual_size, expected_input_size)
    invoke_with_membuffer(interpreter_handle, pointer.value,
                          expected_input_size)
  elif _libgst and isinstance(input_data, Gst.Buffer):
    memory = input_data.peek_memory(0)
    map_buffer = not GstAllocators.is_dmabuf_memory(
        memory) or not supports_dmabuf(interpreter_handle)
    if not map_buffer:
      _check_input_size(memory.size, expected_input_size)
      fd = GstAllocators.dmabuf_memory_get_fd(memory)
      try:
        invoke_with_dmabuffer(interpreter_handle, fd, expected_input_size)
      except RuntimeError:
        # dma-buf input didn't work, likely due to old kernel driver. This
        # situation can't be detected until one inference has been tried.
        map_buffer = True
    if map_buffer:
      with _gst_buffer_map(input_data) as (pointer, actual_size):
        assert actual_size >= expected_input_size
        invoke_with_membuffer(interpreter_handle, pointer.value,
                              expected_input_size)
  elif isinstance(input_data, np.ndarray):
    _check_input_size(len(input_data), expected_input_size)
    invoke_with_membuffer(interpreter_handle, input_data.ctypes.data,
                          expected_input_size)
  else:
    raise TypeError('input data type is not supported.')

Then I get the error at this line of code:
from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version (No module named 'pycoral.pybind')

Issue Type

Build/Install

Operating System

Windows 10

Coral Device

USB Accelerator

Other Devices

No response

Programming Language

Python 3.7

Relevant Log Output

No response

converting TF SavedModel to edgetpu.tflite

Hi, I'm using a TensorFlow SavedModel which works fine.
I'm converting it to .tflite using the following code:

import cv2
from glob import glob
import numpy as np
import tensorflow as tf

def rep_data_gen():
    a = []
    counter = 0
    for img in glob('images/rep_data/*'):
        if counter < 100:
            counter += 1
            image = cv2.imread(img)
            image = cv2.resize(image, (640,640))
            image = image / 255.0
            image = image.astype(np.float32)
            a.append(image)
    a = np.array(a)
    img = tf.data.Dataset.from_tensor_slices(a).batch(1)
    for i in img.take(8):
        print(i)
        yield [i]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data_gen

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_quant_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_quant_model)

After that I use the Edge TPU compiler to compile it to model_edgetpu.tflite:
edgetpu_compiler model.tflite

Then running detect_image.py results in:

python3 detect_image.py -m model_edgetpu.tflite -i A00147.png
----INFERENCE TIME----
Note: The first inference is slow because it includes loading the model into Edge TPU memory.
Traceback (most recent call last):
  File "detect_image.py", line 105, in <module>
    main()
  File "detect_image.py", line 84, in main
    objs = detect.get_objects(interpreter, args.threshold, scale)
  File "/usr/lib/python3.6/site-packages/pycoral/adapters/detect.py", line 208, in get_objects
    return [make(i) for i in range(count) if scores[i] >= score_threshold]
  File "/usr/lib/python3.6/site-packages/pycoral/adapters/detect.py", line 208, in <listcomp>
    return [make(i) for i in range(count) if scores[i] >= score_threshold]
IndexError: index 10 is out of bounds for axis 0 with size 10

All models and related files are at
https://drive.google.com/drive/folders/10Htmw0JWZ31Z47hn6mdZYHY7AkjXMus7?usp=sharing
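One hedged way to narrow this down is to inspect the model's detection output tensors: a TFLite_Detection_PostProcess model normally exposes boxes, class ids, scores, and a detection count, and the IndexError above suggests the count disagrees with the size of the score array. The model path is a placeholder:

from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('model_edgetpu.tflite')
interpreter.allocate_tensors()

# Print every output tensor so the count and array sizes can be compared.
for detail in interpreter.get_output_details():
    print(detail['index'], detail['name'], detail['shape'], detail['dtype'])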

Camera: select timeout VIDIOC_DQBUF: Resource temporarily unavailable

I am trying to access the Coral Dev Board camera as in the tutorial.
I have checked whether the camera is accessible or not:

v4l2-ctl --list-formats-ext
 ioctl: VIDIOC_ENUM_FMT
         Type: Video Capture
 
         [0]: 'YUYV' (YUYV 4:2:2)
                 Size: Discrete 640x480
 
                         Interval: Discrete 0.033s (30.000 fps)
                 Size: Discrete 720x480
                         Interval: Discrete 0.033s (30.000 fps)
                 Size: Discrete 1280x720
                         Interval: Discrete 0.033s (30.000 fps)
                 Size: Discrete 1920x1080
                         Interval: Discrete 0.067s (15.000 fps)
                         Interval: Discrete 0.033s (30.000 fps)
                 Size: Discrete 2592x1944
                         Interval: Discrete 0.067s (15.000 fps)
                 Size: Discrete 0x0

But when I take a snapshot on the Coral, it displays nothing. And when trying to access the camera through OpenCV, cap.read() yields the error: select timeout VIDIOC_DQBUF: Resource temporarily unavailable

Any ideas?
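As a first debugging step (a sketch only, assuming an OpenCV build with V4L2 support), it may help to request one of the modes the driver actually advertises, rather than relying on defaults:

import cv2

cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
# Explicitly request YUYV 640x480 @ 30 fps, a mode listed by v4l2-ctl above.
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'YUYV'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()
print('read succeeded:', ok)
cap.release()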

Missing binding for tflite interpreter causes edgetpu.run_inference() to fail

According to the documentation, PyCoral improves upon the Edge TPU Python API because it treats the Edge TPU operations as I/O-bound:

Python does not support real multi-threading for CPU-bounded operations (read about the Python global interpreter lock (GIL)). However, we have optimized the Edge TPU Python API (but not TensorFlow Lite Python API) to work within Python’s multi-threading environment for all Edge TPU operations—they are IO-bounded, which can provide performance improvements.

However, upon inspection of the code, calling interpreter.invoke() simply uses the existing tflite_runtime invoke() function. The only function that appears to be consistent with the documentation is pycoral.utils.edgetpu.run_inference(), which calls i/o-bounded functions (I presume) such as invoke_with_membuffer rather than the typical interpreter.invoke(). These are C++ functions from Libcoral which are exposed to the PyCoral API using pybind (pycoral.pybind._pywrap_coral).

There appears to be an error where the object types in python are misinterpreted by the C++ module. When I call run_inference() with the interpreter and a flattened numpy array as the input, I get the following error:

Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
File "/usr/lib/python3.7/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "/home/pi/dev/deep-sort-live/pipeline.py", line 45, in invoke_detector
self.detector.run_inference() # i/o bound
File "/home/pi/dev/deep-sort-live/detect_pycoral.py", line 162, in run_inference
edgetpu.run_inference(self.interpreter, input_data)
File "/usr/local/lib/python3.7/dist-packages/pycoral/utils/edgetpu.py", line 192, in run_inference
expected_input_size)
TypeError: InvokeWithMemBuffer(): incompatible function arguments. The following argument types are supported:
1. (arg0: object, arg1: int, arg2: int) -> None
Invoked with: 14635576, 2667068808, 270000

Note that the traceback shows that I'm calling run_inference in a separate thread. The reason for this is because I want to run the TPU inference in a separate thread, and do other processing while waiting for the output from the TPU.

As you can see from the output, InvokeWithMemBuffer() expects arg0 to be of type "object", although it is supposed to be the memory address of the interpreter (see the LibCoral C++ source).

absl::Status InvokeWithMemBuffer(tflite::Interpreter *interpreter, const void *buffer, size_t in_size, tflite::StatefulErrorReporter *reporter = nullptr)

I believe the int for arg0 (14635576) is the memory address of the interpreter because of the following line of code in run_inference():

interpreter_handle = interpreter._native_handle() # pylint:disable=protected-access

So, it appears that there is a missing declaration in the wrapper that is causing InvokeWithMemBuffer() to think that the interpreter address memory int value is not the correct parameter type, even though it is.

I also attempted to see whether this problem persisted with the unit tests in edgetpu_utils_test.py, which also test invoke_with_membuffer. The only error I received when running the tests was the following:

ERROR: test_run_inference_with_different_types (main.TestEdgeTpuUtils)
Traceback (most recent call last):
File "edgetpu_utils_test.py", line 145, in test_run_inference_with_different_types
self._run_inference_with_different_input_types(interpreter, input_data)
File "edgetpu_utils_test.py", line 118, in _run_inference_with_different_input_types
edgetpu.run_inference(interpreter, np_input)
File "/usr/local/lib/python3.7/dist-packages/pycoral/utils/edgetpu.py", line 192, in run_inference
expected_input_size)
RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)

I suspect that this error is caused by the same issue (a type-mismatch for arg0 of InvokeWithMemBuffer())

I'm running the code on a Raspberry Pi 4B (Buster, 32-bit, armv7a) and Coral USB accelerator. Any help would be appreciated!
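Based on the run_inference() code paths described above (bytes input goes through invoke_with_bytes rather than InvokeWithMemBuffer), one hedged workaround to try while this is investigated is to hand the flattened array over as bytes; this is an assumption drawn from the quoted source, not a confirmed fix:

import numpy as np
from pycoral.utils import edgetpu

interpreter = edgetpu.make_interpreter('model_edgetpu.tflite')
interpreter.allocate_tensors()

shape = interpreter.get_input_details()[0]['shape']
input_data = np.zeros(tuple(shape[1:]), dtype=np.uint8).flatten()

# Passing bytes routes run_inference() through the invoke_with_bytes path
# instead of the InvokeWithMemBuffer path where the TypeError occurs.
edgetpu.run_inference(interpreter, input_data.tobytes())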

No module named 'pycoral.pybind._pywrap_coral' (amd64, Windows 10, Python 3.7-3.9)

Hello,

Similar to a previously reported issue (#24), I am having similar failures under:
Windows 10
Python 3.7 (have tried 3.8 and 3.9 to cover all bases)
architecture amd64

  • Preliminary setup is done - drivers, support files, Coral, and runtimes all installed without issue.
    The Windows PATH environment-variable length limit is disabled, and environment variables are present for Python (C:\Python37; C:\Python37\Scripts; C:\Python37\Lib;)

When I try to run the first example classify_image.py, I get the error:
ImportError: DLL load failed: The specified module could not be found.

I have tried:

  1. cloning the git repo and installing from there
  2. manually installing version 13 of runtime edgetpu_runtime_20210119.zip
  3. manually installing PyCoral API using whl's for versions 3.7 (and 3.8 and 3.9 with associated python amd 64 package)
  4. manually installing Edge TPU Python API (just in case) - understood its depreciated

All required prerequisite files/packages are installed including numpy, Pillow, etc.

The thing to be imported is present on the system at C:\Python37\Lib\site-packages\pycoral\pybind.

However, what gets installed is named _pywrap_coral.cp37-win_amd64.pyd.
Depending on the Python version, it's cp38 or cp39, respectively.
I've tried making a copy of the file named _pywrap_coral.pyd, but no luck.

The trace error message is:
...File "C:\Python37\lib\site-packages\pycoral\utils\edgetpu.py", line 24, in
from pycoral.pybind_pyrwap_coral import GetRuntimeVersion as get_runtime_version
ImportError: DLL load failed: The specified module could not be found.

The same happens if you try python versions like 3.8 and 3.9.

Everything appears to be correct, but for some strange reason Python can't see it.

I've looked at what is visible to Python by dumping sys.path:
C:\Python37\python37.zip
C:\Python37\DLLs
C:\Python37\lib
C:\Python37
C:\Python37\site-packages
C:\Python37\site-packages\win32
C:\Python37\site-packages\win32\lib
C:\Python37\site-packages\Pythonwin

Can't wrap my head about this one.

Any takers?

Installation issue on x64

Hi there, I've followed your building guide, and while installing the wheels built from source I got the following error.

The directory '/home/bpinaya/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/bpinaya/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Processing ./dist/pycoral-1.0.0-cp36-cp36m-linux_x86_64.whl
Collecting tflite-runtime==2.5.0 (from pycoral==1.0.0)
Exception:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 342, in run
    requirement_set.prepare_files(finder)
  File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 380, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 554, in _prepare_file
    require_hashes
  File "/usr/lib/python3/dist-packages/pip/req/req_install.py", line 278, in populate_link
    self.link = finder.find_requirement(self, upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 465, in find_requirement
    all_candidates = self.find_all_candidates(req.name)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 423, in find_all_candidates
    for page in self._get_pages(url_locations, project_name):
  File "/usr/lib/python3/dist-packages/pip/index.py", line 568, in _get_pages
    page = self._get_page(location)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 683, in _get_page
    return HTMLPage.get_page(link, session=self.session)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 795, in get_page
    resp.raise_for_status()
  File "/usr/share/python-wheels/requests-2.18.4-py2.py3-none-any.whl/requests/models.py", line 935, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://pypi.org/simple/tflite-runtime/

The same thing happens if I try to install from any of the release wheels for x64. I wonder if you have any ideas.

ImportError: _pywrap_coral.cpython-38-darwin.so not found

Platform:

{'platform': 'Darwin',
 'platform-release': '20.5.0',
 'platform-version': 'Darwin Kernel Version 20.5.0: Sat May  8 05:10:33 PDT 2021; root:xnu-7195.121.3~9/RELEASE_X86_64',
 'architecture': 'x86_64',
 'processor': 'i386',
 'ram': '8 GB'}

Python 3.8.5, with a venv created using python3 -m venv venv, and installation using python3 -m pip install --index-url https://google-coral.github.io/py-repo/ --extra-index-url=https://pypi.python.org/simple pycoral

On running from pycoral.utils import edgetpu with the USB accelerator plugged in, I receive:

ImportError: dlopen(/Users/robin/Github/pycoral-experiments/venv/lib/python3.8/site-packages/pycoral/pybind/_pywrap_coral.cpython-38-darwin.so, 2): Library not loaded: @rpath/libedgetpu.1.dylib
  Referenced from: /Users/robin/Github/pycoral-experiments/venv/lib/python3.8/site-packages/pycoral/pybind/_pywrap_coral.cpython-38-darwin.so
  Reason: image not found

Cannot run 'Getting started examples' for USB Accelerator on Windows 10: "ValueError: Failed to load delegate from edgetpu.dll"

On two different Windows 10 machines I run into the same problem when trying to run the pycoral example for the USB Accelerator. In both cases I followed the steps from https://coral.ai/docs/accelerator/get-started/ . When trying to run classify_image.py, the script crashes when calling make_interpreter:

  File "C:\Users\Gebruiker\Documents\Thesis\CoralSetup\venv\lib\site-packages\tflite_runtime\interpreter.py", line 160, in load_delegate
    delegate = Delegate(library, options)
  File "C:\Users\Gebruiker\Documents\Thesis\CoralSetup\venv\lib\site-packages\tflite_runtime\interpreter.py", line 119, in __init__
    raise ValueError(capture.message)
ValueError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Gebruiker\Documents\Thesis\CoralSetup\coral\pycoral\examples\classify_image.py", line 121, in <module>
    main()
  File "C:\Users\Gebruiker\Documents\Thesis\CoralSetup\coral\pycoral\examples\classify_image.py", line 71, in main
    interpreter = make_interpreter(*args.model.split('@'))
  File "C:\Users\Gebruiker\Documents\Thesis\CoralSetup\venv\lib\site-packages\pycoral\utils\edgetpu.py", line 87, in make_interpreter
    delegates = [load_edgetpu_delegate({'device': device} if device else {})]
  File "C:\Users\Gebruiker\Documents\Thesis\CoralSetup\venv\lib\site-packages\pycoral\utils\edgetpu.py", line 52, in load_edgetpu_delegate
    return tflite.load_delegate(_EDGETPU_SHARED_LIB, options or {})
  File "C:\Users\Gebruiker\Documents\Thesis\CoralSetup\venv\lib\site-packages\tflite_runtime\interpreter.py", line 163, in load_delegate
    library, str(e)))
ValueError: Failed to load delegate from edgetpu.dll

On both machines it was the first time I did the installation, so I am pretty sure it does not have to do with versions (as was suggested in earlier issues: #281 and #46).

I have so far not been able to find out what causes this.
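One hedged way to surface the underlying message (the ValueError above swallows the original delegate error) is to load the delegate directly and print the exception; this only probes the failure, it does not fix it:

import tflite_runtime.interpreter as tflite

try:
    tflite.load_delegate('edgetpu.dll')
except ValueError as e:
    # The wrapped message often names the missing DLL or device.
    print('load_delegate failed:', e)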

Coral TPU can't run a model from GCP AutoML Object Detection after being compiled with the latest version of the compiler

Hi, I have trained several models in the past using GCP AutoML Object Detection for Edge (now called Vertex AI). I usually compiled those models with edgetpu_compiler version 14.1, since version 15 won't work on them.
Since the beginning of August I can't compile the models that I train on Vertex AI as I normally do. I have tried the latest version of the compiler, version 16, which allows me to compile the model, but with not-so-great results.

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 12
Number of operations that will run on CPU: 287

As you can imagine, this is far from optimal. But even when I try to run this model on the USB Coral TPU, I get the following errors.

PyCoral 1.0.1 Python 3.8 Windows
ValueError: Op builtin_code out of range: 130. Are you using old TFLite binary with newer model?Registration failed.

PyCoral 2.0.0 with tflite_runtime 2.5.0.post1 Python 3.8 Windows
ValueError: Didn't find op for builtin opcode 'BROADCAST_TO' version '1'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model? Registration failed.

The biggest difference I see in the models that Vertex AI outputs is that in the past the runtime version was 1.14.0 and now it's 2.5.0.
I am assuming this is the problem.

Is there any fix for this issue?

Thanks

High inference time after full integer post-training quantization compared to normal unquantized tflite model

Hello everyone,

I am trying to perform full-integer quantization on a pretrained model (ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/saved_model). I followed this issue and was able to generate a Coral-supported tflite model. Below is the Python script I used to perform the quantization:

def representative_dataset_gen():
    folder = "images_test"
    image_size = 320
    raw_test_data = []

    files = glob.glob(folder+'/*.jpg')
    for file in files:
        image = Image.open(file)
        image = image.convert("RGB")
        image = image.resize((image_size, image_size))
        #Quantizing the image between -1,1;
        image = (2.0 / 255.0) * np.float32(image) - 1.0
        #image = np.asarray(image).astype(np.float32)
        image = image[np.newaxis,:,:,:]
        raw_test_data.append(image)

    for data in raw_test_data:
        yield [data]

converter = tf.lite.TFLiteConverter.from_saved_model('/home/tensorflow/models/research/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
#converter.experimental_new_converter = True
converter.inference_input_type = tf.uint8
#converter.inference_output_type = tf.uint8
#converter.allow_custom_ops = True

converter.experimental_new_converter = True
converter.experimental_new_quantizer = True

converter.representative_dataset = representative_dataset_gen
tflite_model = converter.convert()

with open('/home/tensorflow/models/research/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/edge/model_v4.tflite', "wb") as w:
    w.write(tflite_model)

But I am getting inference times in the range of 3500-3600 ms, which seems like a lot. For verification, I tried converting "saved_model" into tflite using the tflite_convert CLI without quantization, as mentioned below:
tflite_convert --saved_model_dir=ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/saved_model --output_file=ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/model.tflite
The unquantized tflite model above gave me an inference time of around 300 ms.

Can anyone please help me figure out how to improve the inference time of the quantized model? It would also be very helpful if anyone could suggest which model to use for post-training quantization and which for quantization-aware training.

I am using tensorflow==2.5.0
and pycoral for inferencing - https://github.com/google-coral/pycoral
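One hedged sanity check before digging into the conversion itself: time the Edge TPU-compiled model against the uncompiled quantized model on CPU. If the two times are close, most of the compiled graph is probably falling back to the CPU (for example, because many ops were not mapped by the compiler). The file names and the uint8 input type are assumptions taken from the script above:

import time

import numpy as np
import tflite_runtime.interpreter as tflite
from pycoral.utils.edgetpu import make_interpreter

def mean_invoke_ms(interpreter, runs=20):
    interpreter.allocate_tensors()
    detail = interpreter.get_input_details()[0]
    # Zero input of the model's shape; assumes inference_input_type=tf.uint8.
    interpreter.set_tensor(detail['index'],
                           np.zeros(detail['shape'], dtype=np.uint8))
    interpreter.invoke()  # Warm-up; the first call loads the model.
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.invoke()
    return (time.perf_counter() - start) / runs * 1e3

print('Edge TPU (compiled model): %.1f ms'
      % mean_invoke_ms(make_interpreter('model_v4_edgetpu.tflite')))
print('CPU (uncompiled model):    %.1f ms'
      % mean_invoke_ms(tflite.Interpreter('model_v4.tflite')))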

Error running face detection demo

Input:

edgetpu_detect_server \
  --source /dev/video1:YUY2:800x600:24/1 \
  --model ${DEMO_FILES}/ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite

Error:
INFO:OpenGL.acceleratesupport:No OpenGL_accelerate module loaded: No module named 'OpenGL_accelerate'
Error: gst-stream-error-quark: Internal data stream error. (1): gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
^CINFO:edgetpuvision.streaming.server:Server is shutting down
INFO:edgetpuvision.streaming.server:Camera stop recording

Using a webcam for inference

ImportError: DLL load failed while importing _pywrap_coral: The specified module could not be found.

Hi!
I'm trying to run a test script using the Coral USB Accelerator and I get this error:

from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
ImportError: DLL load failed while importing _pywrap_coral: The specified module could not be found.

OS: Win 10
TF: 2.4.1
Pycoral: 1.0.1

Script:

from os.path import join
import cv2

from pycoral.adapters.common import input_size
from pycoral.adapters.detect import get_objects
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter
from pycoral.utils.edgetpu import run_inference


def main():
    model_dir = '../all_models'
    model = join(model_dir, 'mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite')
    labels = join(model_dir, 'coco_labels.txt')
    threshold = 0.1
    top_k = 3

    test_vid_path = ''

    interpreter = make_interpreter(model)
    interpreter.allocate_tensors()
    labels = read_label_file(labels)
    inference_size = input_size(interpreter)

    cap = cv2.VideoCapture(test_vid_path)

    while True:
        ret, frame = cap.read()
        if not ret:
            break
        cv2_im = frame

        cv2_im_rgb = cv2.cvtColor(cv2_im, cv2.COLOR_BGR2RGB)
        cv2_im_rgb = cv2.resize(cv2_im_rgb, inference_size)
        run_inference(interpreter, cv2_im_rgb.tobytes())
        objs = get_objects(interpreter, threshold)[:top_k]
        cv2_im = append_objs_to_img(cv2_im, inference_size, objs, labels)

        cv2.imshow('frame', cv2_im)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()


def append_objs_to_img(cv2_im, inference_size, objs, labels):
    height, width, channels = cv2_im.shape
    scale_x, scale_y = width / inference_size[0], height / inference_size[1]
    for obj in objs:
        bbox = obj.bbox.scale(scale_x, scale_y)
        x0, y0 = int(bbox.xmin), int(bbox.ymin)
        x1, y1 = int(bbox.xmax), int(bbox.ymax)

        percent = int(100 * obj.score)
        label = '{}% {}'.format(percent, labels.get(obj.id, obj.id))

        cv2_im = cv2.rectangle(cv2_im, (x0, y0), (x1, y1), (0, 255, 0), 2)
        cv2_im = cv2.putText(cv2_im, label, (x0, y0 + 30),
                             cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 0, 0), 2)
    return cv2_im
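
As posted, the script defines main() but never calls it; presumably the original ended with the usual entry-point guard:

if __name__ == '__main__':
    main()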

Image with 640x480 resolution breaks the Example in Detect_Image.py

 python3 examples/detect_image.py   --model test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite   --labels test_data/coco_labels.txt   --input test_data/bear480.png   --output ${HOME}/bear480_processed.png
Traceback (most recent call last):
  File "examples/detect_image.py", line 112, in <module>
    main()
  File "examples/detect_image.py", line 79, in main
    _, scale = common.set_resized_input(
  File "/usr/lib/python3/dist-packages/pycoral/adapters/common.py", line 99, in set_resized_input
    tensor[:h, :w] = np.reshape(result, (h, w, channel))
  File "<__array_function__ internals>", line 5, in reshape
  File "/usr/lib/python3/dist-packages/numpy/core/fromnumeric.py", line 301, in reshape
    return _wrapfunc(a, 'reshape', newshape, order=order)
  File "/usr/lib/python3/dist-packages/numpy/core/fromnumeric.py", line 58, in _wrapfunc
    return _wrapit(obj, method, *args, **kwds)
  File "/usr/lib/python3/dist-packages/numpy/core/fromnumeric.py", line 47, in _wrapit
    result = getattr(asarray(obj), method)(*args, **kwds)
ValueError: cannot reshape array of size 270000 into shape (225,300,3)

Test Data

[attached test image: bear480.png]
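
A possible workaround until set_resized_input handles this image size (a sketch; it resizes to the exact input size itself, trading the preserved aspect ratio for a plain per-axis scale when mapping boxes back):

from PIL import Image
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite')
interpreter.allocate_tensors()

image = Image.open('test_data/bear480.png').convert('RGB')
w, h = common.input_size(interpreter)
common.set_input(interpreter, image.resize((w, h), Image.LANCZOS))
interpreter.invoke()
objs = detect.get_objects(interpreter, score_threshold=0.4)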

Make Python Wheels for Centos 7

This issue reports that the Python 3.8 wheel below, for 64-bit Linux, fails on CentOS 7 when importing the TensorFlow Lite runtime interpreter.

https://github.com/google-coral/pycoral/releases/download/v1.0.1/tflite_runtime-2.5.0-cp38-cp38-linux_x86_64.whl

>>> import tflite_runtime.interpreter
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/site-packages/tflite_runtime/interpreter.py", line 36, in <module>
    from tflite_runtime import _pywrap_tensorflow_interpreter_wrapper as _interpreter_wrapper
ImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /usr/local/lib/python3.8/site-packages/tflite_runtime/_pywrap_tensorflow_interpreter_wrapper.cpython-38-x86_64-linux-gnu.so)

I have cloned the project as described in the README and run the command below; an error related to the Docker image is raised:
$ ./scripts/build.sh

Can anyone explain how to run build.sh to create Python wheels for CentOS 7?

Coral Mini Pycoral example error

Hardware : Google Coral Board Mini
Host Computer : RPI 4

I'm just testing the Coral board.
An error occurred right at the beginning, as shown in the attached screenshot.

I got the following error:

mendel@undefined-orange:~/coral/pycoral$ python3 examples/classify_image.py \
  --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
  --labels test_data/inat_bird_labels.txt \
  --input test_data/parrot.jpg
Traceback (most recent call last):
  File "examples/classify_image.py", line 84, in <module>
    main()
  File "examples/classify_image.py", line 62, in main
    interpreter.allocate_tensors()
  File "/home/mendel/.local/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 242, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/mendel/.local/lib/python3.7/site-packages/tflite_runtime/interpreter_wrapper.py", line 115, in AllocateTensors
    return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type in custom op handler: 0
Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.

The error occurs at the same interpreter step every time.

I need help.

Regards,
SunBeenMoon
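
The interpreter_wrapper.py frames in this traceback come from a fairly old tflite_runtime, so one thing worth ruling out is a version mismatch between the installed wheels and the Edge TPU runtime. A small diagnostic sketch:

import tflite_runtime
from pycoral.utils.edgetpu import get_runtime_version

print('tflite_runtime wheel:', tflite_runtime.__version__)
print('Edge TPU runtime:', get_runtime_version())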

ImportError: DLL load failed while importing _pywrap_coral: The specified module could not be found.

Hello,

I am trying M.2 card on Mini PCIe bridge with Windows 10 OS.

While following the get-started guide (https://coral.ai/docs/m2/get-started/#requirements) to test the Google Coral card with the example program, I am facing the following error:

C:\Users\227G\pycoral>python examples/classify_image.py --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels test_data/inat_bird_labels.txt --input test_data/parrot.jpg
Traceback (most recent call last):
  File "examples/classify_image.py", line 39, in <module>
    from pycoral.utils.edgetpu import make_interpreter
  File "C:\Users\227G\AppData\Local\Programs\Python\Python38\lib\site-packages\pycoral\utils\edgetpu.py", line 24, in <module>
    from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
ImportError: DLL load failed while importing _pywrap_coral: The specified module could not be found.

The device manager is able to detect the device as Google Coral device correctly.
My system configuration:
Intel Elkhart Lake
RAM 16GB

I couldn't find a similar or related issue; could you help me resolve it?

Kind Regards,
Arun

support for armv6l?

Is there support for armv6l (i.e. the Raspberry Pi Zero W)?
If no support exists currently, is there a way I can build this natively to support this ARM32 device?

More about the device I'm attempting to use the tflite_runtime on:
$ python3 --version
Python 3.7.3

$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"

$ cat /proc/cpuinfo
processor : 0
model name : ARMv6-compatible processor rev 7 (v6l)
BogoMIPS : 697.95
Features : half thumb fastmult vfp edsp java tls
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xb76
CPU revision : 7

Hardware        : BCM2835
Revision        : 9000c1
Serial          : 00000000fdca881b
Model           : Raspberry Pi Zero W Rev 1.1

$ uname -a
Linux experimentalpiZero 5.4.79+ #1373 Mon Nov 23 13:18:15 GMT 2020 armv6l GNU/Linux

$ uname -m
armv6l

$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/arm-linux-gnueabihf/8/lto-wrapper
Target: arm-linux-gnueabihf
Configured with: ../src/configure -v --with-pkgversion='Raspbian 8.3.0-6+rpi1' --with-bugurl=file:///usr/share/doc/gcc-8/README.Bugs --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++ --prefix=/usr --with-gcc-major-version-only --program-suffix=-8 --program-prefix=arm-linux-gnueabihf- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-bootstrap --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-libitm --disable-libquadmath --disable-libquadmath-support --enable-plugin --with-system-zlib --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --disable-sjlj-exceptions --with-arch=armv6 --with-fpu=vfp --with-float=hard --disable-werror --enable-checking=release --build=arm-linux-gnueabihf --host=arm-linux-gnueabihf --target=arm-linux-gnueabihf
Thread model: posix
gcc version 8.3.0 (Raspbian 8.3.0-6+rpi1)

Runtime error at pipeline runner

I followed the pipeline example model_pipelining_classify_image.py, but it raises a runtime error.

My environment is pycoral-2.0.0-cp38-cp38-linux_x86_64 and tflite_runtime-2.5.0.post1-cp38-cp38-linux_x86_64

python3 examples/model_pipelining_classify_image.py --models test_data/pipeline/inception_v3_299_quant_segment_%d_of_2_edgetpu.tflite --labels test_data/imagenet_labels.txt --input test_data/parrot.jpg

The runtime error is as follows:

root@5c8d01ebd0fd:~# python3 examples/model_pipelining_classify_image.py   --models     test_data/pipeline/inception_v3_299_quant_segment_%d_of_2_edgetpu.tflite   --labels test_data/imagenet_labels.txt   --input test_data/parrot.jpg
Using devices:  ['pci:0', 'pci:1']
Using models:  ['test_data/pipeline/inception_v3_299_quant_segment_0_of_2_edgetpu.tflite', 'test_data/pipeline/inception_v3_299_quant_segment_1_of_2_edgetpu.tflite']
WARNING: Logging before InitGoogleLogging() is written to STDERR
I20210812 02:35:21.009770   102 pipelined_model_runner.cc:172] Thread: 139954348168960 receives empty request
I20210812 02:35:21.009820   102 pipelined_model_runner.cc:245] Thread: 139954348168960 is shutting down the pipeline...
I20210812 02:35:21.158869   102 pipelined_model_runner.cc:255] Thread: 139954348168960 Pipeline is off.
I20210812 02:35:21.159277   103 pipelined_model_runner.cc:207] Queue is empty and `StopWaiters()` is called.
-------RESULTS--------
macaw: 0.99609
Average inference time (over 5 iterations): 30.4ms
I20210812 02:35:21.160143    63 pipelined_model_runner.cc:172] Thread: 139954824456000 receives empty request
E20210812 02:35:21.160192    63 pipelined_model_runner.cc:240] Thread: 139954824456000 Pipeline was turned off before.
Exception ignored in: <function PipelinedModelRunner.__del__ at 0x7f49b8a0eee0>
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/pycoral/pipeline/pipelined_model_runner.py", line 83, in __del__
    self.push({})
  File "/usr/local/lib/python3.8/dist-packages/pycoral/pipeline/pipelined_model_runner.py", line 152, in push
    self._runner.Push(input_tensors)
RuntimeError: Pipeline was turned off before.
E20210812 02:35:21.161602    63 pipelined_model_runner.cc:240] Thread: 139954824456000 Pipeline was turned off before.
E20210812 02:35:21.161650    63 pipelined_model_runner.cc:147] Failed to shutdown status: INTERNAL: Pipeline was turned off before.

I guess the producer pushes an empty dict {} to make the runner shut down, but after the threads are joined, the runner is destructed and its __del__ pushes another empty request, when the pipeline is already off.

# model_pipelining_classify_image.py
  def producer():
    for _ in range(args.count):
      runner.push({name: image})
    runner.push({})

  def consumer():
    output_details = runner.interpreters()[-1].get_output_details()[0]
    scale, zero_point = output_details['quantization']
    while True:
      result = runner.pop()
      if not result:
        break
      values, = result.values()
      scores = scale * (values[0].astype(np.int64) - zero_point)
      classes = classify.get_classes_from_scores(scores, args.top_k,
                                                 args.threshold)
    print('-------RESULTS--------')
    for klass in classes:
      print('%s: %.5f' % (labels.get(klass.id, klass.id), klass.score))

  start = time.perf_counter()
  producer_thread = threading.Thread(target=producer)
  consumer_thread = threading.Thread(target=consumer)
  producer_thread.start()
  consumer_thread.start()
  producer_thread.join()
  consumer_thread.join()
  ...

How can I avoid this behavior? Any hint?

FAILED: Build did NOT complete successfully Makefile:155: recipe for target 'pybind' failed

I'm trying to build pycoral on a Debian 10 machine. The Mini PCIe Accelerator is installed in the laptop's Mini PCIe (WiFi card) slot, on an AMD Turion(tm) II Dual-Core Mobile M520, with Docker installed:

docker version
Client: Docker Engine - Community
 Version:           20.10.1
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        831ebea
 Built:             Tue Dec 15 04:34:48 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
uname -r
4.19.0-13-amd64
lsmod | grep apex
apex                   20480  0
gasket                118784  1 apex
lspci -x | grep 089a
08:00.0 System peripheral: Device 1ac1:089a
ls /dev/apex_0
/dev/apex_0
scripts/build.sh
scripts/build_deb.sh

Both scripts fail in the same way; any advice would be appreciated:

(cd /root/.cache/bazel/_bazel_root/eab0d61a99b6696edb3d2aff87b585e8/execroot/pycoral && \
  exec env - \
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \
    PWD=/proc/self/cwd \
    PYTHON_BIN_PATH=/usr/bin/python3 \
  /usr/bin/ar @bazel-out/k8-opt/bin/external/ruy/ruy/libhave_built_path_for_avx2_fma.a-2.params)
ERROR: /root/.cache/bazel/_bazel_root/eab0d61a99b6696edb3d2aff87b585e8/external/org_tensorflow/tensorflow/lite/schema/BUILD:79:22: Generating flatbuffer files for schema_fbs_srcs: @org_tensorflow//tensorflow/lite/schema:schema_fbs_srcs failed (Illegal instruction): process-wrapper failed: error executing command
  (cd /root/.cache/bazel/_bazel_root/eab0d61a99b6696edb3d2aff87b585e8/sandbox/processwrapper-sandbox/225/execroot/pycoral && \
  exec env - \
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \
    PYTHON_BIN_PATH=/usr/bin/python3 \
    TMPDIR=/tmp \
  /root/.cache/bazel/_bazel_root/install/ba7765e6f39a679257358196b530585b/process-wrapper '--timeout=0' '--kill_delay=15' /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; for f in external/org_tensorflow/tensorflow/lite/schema/schema.fbs; do bazel-out/host/bin/external/flatbuffers/flatc --no-union-value-namespacing --gen-object-api  -c -o bazel-out/k8-opt/bin/external/org_tensorflow/tensorflow/lite/schema $f; done') process-wrapper failed: error executing command
  (cd /root/.cache/bazel/_bazel_root/eab0d61a99b6696edb3d2aff87b585e8/sandbox/processwrapper-sandbox/225/execroot/pycoral && \
  exec env - \
    PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \
    PYTHON_BIN_PATH=/usr/bin/python3 \
    TMPDIR=/tmp \
  /root/.cache/bazel/_bazel_root/install/ba7765e6f39a679257358196b530585b/process-wrapper '--timeout=0' '--kill_delay=15' /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; for f in external/org_tensorflow/tensorflow/lite/schema/schema.fbs; do bazel-out/host/bin/external/flatbuffers/flatc --no-union-value-namespacing --gen-object-api  -c -o bazel-out/k8-opt/bin/external/org_tensorflow/tensorflow/lite/schema $f; done')
/bin/bash: line 1:  1336 Illegal instruction     (core dumped) bazel-out/host/bin/external/flatbuffers/flatc --no-union-value-namespacing --gen-object-api -c -o bazel-out/k8-opt/bin/external/org_tensorflow/tensorflow/lite/schema $f
Target //src:_pywrap_coral failed to build
INFO: Elapsed time: 345.240s, Critical Path: 30.73s
INFO: 225 processes: 225 processwrapper-sandbox.
FAILED: Build did NOT complete successfully
dmesg

[ 8216.925858] traps: python3[2959] trap invalid opcode ip:7f4b587d4923 sp:7fffd78925a0 error:0 in _edgetpu_cpp_wrapper.cpython-37m-x86_64-linux-gnu.so[7f4b585ae000+288000]
[ 8357.235313] traps: python3[2973] trap invalid opcode ip:7fc249581923 sp:7ffdf01e4b70 error:0 in _edgetpu_cpp_wrapper.cpython-37m-x86_64-linux-gnu.so[7fc24935b000+288000]
[ 8418.397008] traps: python3[3037] trap invalid opcode ip:7f55bdbae923 sp:7ffefe36bd30 error:0 in _edgetpu_cpp_wrapper.cpython-37m-x86_64-linux-gnu.so[7f55bd988000+288000]
[ 8528.418258] traps: python3[3041] trap invalid opcode ip:7f51b6691923 sp:7ffec4b59810 error:0 in _edgetpu_cpp_wrapper.cpython-37m-x86_64-linux-gnu.so[7f51b646b000+288000]
[ 8629.816238] traps: python3[3141] trap invalid opcode ip:7f0fbad8d923 sp:7ffe04982640 error:0 in _edgetpu_cpp_wrapper.cpython-37m-x86_64-linux-gnu.so[7f0fbab67000+288000]
[ 8744.007822] traps: python3[3151] trap invalid opcode ip:7f014fd8c923 sp:7fff0f5e57e0 error:0 in _edgetpu_cpp_wrapper.cpython-37m-x86_64-linux-gnu.so[7f014fb66000+288000]
[ 8924.998261] traps: python3[3345] trap invalid opcode ip:7f032ff72923 sp:7ffe76fa5850 error:0 in _edgetpu_cpp_wrapper.cpython-37m-x86_64-linux-gnu.so[7f032fd4c000+288000]
[ 9416.759505] audit: type=1400 audit(1609512897.912:21): apparmor="STATUS" operation="profile_load" profile="unconfined" name="docker-default" pid=5571 comm="apparmor_parser"
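
The trap invalid opcode entries in dmesg suggest the prebuilt binaries use instructions this CPU lacks (the Turion II predates AVX) - an assumption, but easy to check:

# Sketch: report whether the CPU advertises the extensions that prebuilt
# x86-64 tooling such as flatc may have been compiled to require.
with open('/proc/cpuinfo') as f:
    flags = next(line for line in f if line.startswith('flags')).split()
for ext in ('sse4_1', 'sse4_2', 'avx', 'avx2'):
    print(ext, ext in flags)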

No module named 'pycoral.pybind._pywrap_coral' (aarch64, debian bullseye, python 3.9)

Hello folks!
So, I have the following system

debian bullseye
python 3.9
architecture aarch64 (arm64 or armv8)

After struggling for some time, I manually compiled and generated both the tflite_runtime and pycoral wheels for my board, installed them, and am now trying to follow https://coral.ai/docs/m2/get-started/#4-run-a-model-on-the-edge-tpu
But I'm facing the following problem:
ModuleNotFoundError: No module named 'pycoral.pybind._pywrap_coral'

I have no idea what I did wrong. Do you know?

Dual Edge TPU + two_models_inference.py Example (Two TPUs not detected)

Description

A Dual Edge TPU is installed, with pycoral set up. There is no problem inferencing with a single model.

python3 examples/two_models_inference.py --classification_model test_data/mobilenet_v2_1.0_224_quant_edgetpu.tflite --detection_model test_data/ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite --image test_data/parrot.jpg

Traceback (most recent call last):
  File "examples/two_models_inference.py", line 204, in <module>
    main()
  File "examples/two_models_inference.py", line 185, in main
    raise RuntimeError('This demo requires at least two Edge TPU available.')
RuntimeError: This demo requires at least two Edge TPU available.

lspci -vvv

00:00.0 PCI bridge: Fuzhou Rockchip Electronics Co., Ltd Device 3566 (rev 01) (prog-if 00 [Normal decode])
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 106
	Bus: primary=00, secondary=01, subordinate=ff, sec-latency=0
	I/O behind bridge: 0000f000-00000fff
	Memory behind bridge: fff00000-000fffff
	Prefetchable memory behind bridge: 0000000000900000-0000000000afffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	[virtual] Expansion ROM at 300b00000 [disabled] [size=64K]
	BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: <access denied>
	Kernel driver in use: pcieport

01:00.0 System peripheral: Device 1ac1:089a (prog-if ff)
	Subsystem: Device 1ac1:089a
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 105
	Region 0: Memory at 300a00000 (64-bit, prefetchable) [size=16K]
	Region 2: Memory at 300900000 (64-bit, prefetchable) [size=1M]
	Capabilities: <access denied>
	Kernel driver in use: apex
	Kernel modules: apex

Please advise on the correct command, and what the output should look like when both TPUs are detected.
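
A quick way to see how many Edge TPUs the runtime actually enumerates (a sketch; note the lspci output above shows only a single 1ac1:089a function, which would mean the host adapter exposes just one of the Dual Edge TPU's two PCIe lanes):

from pycoral.utils.edgetpu import list_edge_tpus

# With both TPUs visible, this would print something like:
# [{'type': 'pci', 'path': '/dev/apex_0'}, {'type': 'pci', 'path': '/dev/apex_1'}]
print(list_edge_tpus())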

Python 3.9 support

I'd like to try to build tflite_runtime myself for Python 3.9.

Any guidelines on how I would go about doing that?

Get Started Directions for USB Accelerator on RPi4 - No module named pycoral.adapters

Description

Reference: https://coral.ai/docs/accelerator/get-started

Following the instructions at the reference URL above, I was able to build and install the Edge TPU Runtime (libedgetpu1-std) and PyCoral via the specified "sudo apt-get install python3-pycoral", and I see it's at the current version (a re-run of the command reports "python3-pycoral is already at the newest version (2.0.0)"). I have run the install_requirements.sh script, which completed successfully. Running any of the example scripts (e.g. examples/detect_image.py) gives an ImportError. Please advise.

Issue Type

Bug

Operating System

Linux

Coral Device

USB Accelerator

Other Devices

Raspberry Pi 4

Programming Language

Python 3.7

Relevant Log Output

Traceback (most recent call last):
  File "examples/classify_image.py", Line 37, in <module>
    from pycoral.adapters import classify
ImportError: No module named pycoral.adapters
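
The unquoted module name in that ImportError is the Python 2 message format, which suggests the example may be running under a different interpreter than the one apt installed python3-pycoral for. A quick check sketch:

import sys

print(sys.executable, sys.version)  # should be the distro python3 that apt targets
import pycoral.adapters             # fails if this interpreter cannot see the apt package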

Can't build pycoral, includes not found

Hello,

I'm trying to build pycoral on openSUSE. I cloned the repo, but "make CPU=k8 build" fails:

src/coral_wrapper.cc:15:10: fatal error: numpy/arrayobject.h: No such file or directory
 #include <numpy/arrayobject.h>
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
src/main/tools/linux-sandbox-pid1.cc:437: waitpid returned 2
src/main/tools/linux-sandbox-pid1.cc:457: child exited with code 1
src/main/tools/linux-sandbox.cc:204: child exited normally with exitcode 1
Target //src:_pywrap_coral failed to build

The numpy-devel package is installed; the headers are located at /usr/lib64/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h.

How can I tell the build process where to find the numpy includes?

Regards

Daniel
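
One way to locate the header directory the compiler is missing (a sketch; wiring the path into the Bazel build is a separate step):

import numpy as np

# Prints the directory that contains numpy/arrayobject.h - the include
# path the build needs.
print(np.get_include())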

examples fail to run on coral Mini PCIe version - Could not set performance expectation

my system:

  • HP t620 Quad Core
  • AMD GX-415GA SOC with Radeon(tm) HD Graphics
  • 8GB RAM
  • Coral Device: Mini PCIe
> uname -m -r
5.12.13-051213-generic x86_64

> lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.2 LTS
Release:        20.04
Codename:       focal

I followed the guide at coral.ai/docs/m2/get-started, including blacklisting the prebuilt/preinstalled apex and gasket modules so that the provided driver and Edge TPU runtime could be installed:

gasket-dkms: newest version (1.0-16).
libedgetpu1-std: newest version (15.0).

According to the driver and module verification, everything is OK:

> lspci -nn | grep 089a
01:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]

> ls /dev/apex_0
/dev/apex_0

I moved on to installing the PyCoral library, which succeeded.
I then ran the examples, which failed with the report below:

> python3 examples/classify_image.py \
>> --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
>> --labels test_data/inat_bird_labels.txt \
>> --input test_data/parrot.jpg

W :131] Could not set performance expectation : 4 (Inappropriate ioctl for device)
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 152, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 111, in __init__
    raise ValueError(capture.message)
ValueError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "examples/classify_image.py", line 84, in <module>
    main()
  File "examples/classify_image.py", line 61, in main
    interpreter = make_interpreter(*args.model.split('@'))
  File "/usr/lib/python3/dist-packages/pycoral/utils/edgetpu.py", line 66, in make_interpreter
    delegates = [load_edgetpu_delegate({'device': device} if device else {})]
  File "/usr/lib/python3/dist-packages/pycoral/utils/edgetpu.py", line 42, in load_edgetpu_delegate
    return tflite.load_delegate(_EDGETPU_SHARED_LIB, options or {})
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 154, in load_delegate
    raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1

I'm new to the Coral Edge TPU and somewhere between beginner and intermediate in Linux [far, far away from an advanced user ;)]. I wanted to use the Coral TPU as an accelerator for object detection in my smart home system [Home Assistant with Frigate NVR installed]. It looks like it is working - I see no issues/error reports within Frigate, and its log looks fine, with one small exception: at startup I see an error similar to the one from the example scripts (check the 3rd line below):

detector.coral_pci             INFO    : Starting detection process: 39
frigate.edgetpu                INFO    : Attempting to load TPU as pci
W :131] Could not set performance expectation : 52 (Inappropriate ioctl for device)
frigate.edgetpu                INFO    : TPU found

Is it something I:

  • can fix easily?
  • shouldn't be bothered with, because it's fine?

Error at PipelinedModelRunner with detection models.

I divided detection models (EfficientDet, SSD MobileNet, ...) into 4 segments with edgetpu_compiler v16 and ran them in the pipeline.
However, the following error occurred when popping results:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_13970/2216799512.py in <module>
----> 1 res = runner.pop()

/usr/lib/python3/dist-packages/pycoral/pipeline/pipelined_model_runner.py in pop(self)
    168     result = self._runner.Pop()
    169     if result:
--> 170       result = {k: v.reshape(self._output_shapes[k]) for k, v in result.items()}
    171     return result
    172 

/usr/lib/python3/dist-packages/pycoral/pipeline/pipelined_model_runner.py in <dictcomp>(.0)
    168     result = self._runner.Pop()
    169     if result:
--> 170       result = {k: v.reshape(self._output_shapes[k]) for k, v in result.items()}
    171     return result
    172 

ValueError: cannot reshape array of size 400 into shape (1,25,4)

The model above is efficientdet_lite3x, and similar errors occur in ssd_mobilenet_v2(both tf1 and tf2) models.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_13970/2216799512.py in <module>
----> 1 res = runner.pop()

/usr/lib/python3/dist-packages/pycoral/pipeline/pipelined_model_runner.py in pop(self)
    168     result = self._runner.Pop()
    169     if result:
--> 170       result = {k: v.reshape(self._output_shapes[k]) for k, v in result.items()}
    171     return result
    172 

/usr/lib/python3/dist-packages/pycoral/pipeline/pipelined_model_runner.py in <dictcomp>(.0)
    168     result = self._runner.Pop()
    169     if result:
--> 170       result = {k: v.reshape(self._output_shapes[k]) for k, v in result.items()}
    171     return result
    172 

ValueError: cannot reshape array of size 320 into shape (1,20,4)

These errors did not appear with classification models.

Is pipelining not possible for detection models?

ModuleNotFoundError: No module named 'pycoral.pybind'

I am new to the Coral TPU. I am using the USB Accelerator and trying to get the initial demo running and to monitor the inference time.
But I am getting this error. Please let me know if you have any leads.
Thanks in advance! :)


Library not loaded: /opt/local/lib/libusb-1.0.0.dylib

Error

python3 examples/classify_image.py --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels test_data/inat_bird_labels.txt --input test_data/parrot.jpg
Traceback (most recent call last):
  File "~/Documents/GitHub/pycoral/examples/classify_image.py", line 40, in <module>
    from pycoral.utils.edgetpu import make_interpreter
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pycoral/utils/edgetpu.py", line 24, in <module>
    from pycoral.pybind._pywrap_coral import GetRuntimeVersion as get_runtime_version
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pycoral/pybind/_pywrap_coral.cpython-39-darwin.so, 2): Library not loaded: /opt/local/lib/libusb-1.0.0.dylib
  Referenced from: /usr/local/lib/libedgetpu.1.dylib
  Reason: image not found

Install

Following the instructions: https://coral.ai/docs/accelerator/get-started

python3 -m pip install --extra-index-url https://google-coral.github.io/py-repo/ pycoral~=2.0
Looking in indexes: https://pypi.org/simple, https://google-coral.github.io/py-repo/
Requirement already satisfied: pycoral~=2.0 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (2.0.0)
Requirement already satisfied: Pillow>=4.0.0 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pycoral~=2.0) (8.2.0)
Requirement already satisfied: tflite-runtime==2.5.0.post1 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pycoral~=2.0) (2.5.0.post1)
Requirement already satisfied: numpy>=1.16.0 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pycoral~=2.0) (1.21.0)

libusb 1.0.24 is already installed:

brew install libusb
Error:
  homebrew-core is a shallow clone.
To `brew update`, first run:
  git -C /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core fetch --unshallow
This command may take a few minutes to run due to the large size of the repository.
This restriction has been made on GitHub's request because updating shallow
clones is an extremely expensive operation due to the tree layout and traffic of
Homebrew/homebrew-core and Homebrew/homebrew-cask. We don't do this for you
automatically to avoid repeatedly performing an expensive unshallow operation in
CI systems (which should instead be fixed to not use shallow clones). Sorry for
the inconvenience!
Warning: libusb 1.0.24 is already installed and up-to-date.
To reinstall 1.0.24, run:
  brew reinstall libusb

Pycoral with GRPC error

While using the gRPC Python library on a Raspberry Pi, the load_delegate function fails to connect to the USB Coral Accelerator once the import grpc line has run. Can someone please advise what steps I can take?

Thank you.

python3-pycoral version: 1.0.1

Using load_delegate:
[screenshot]
Using a detection engine:
[screenshot]
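
Given that the failure appears only once import grpc has run, one workaround worth trying (an assumption drawn from the symptom, not a confirmed fix) is to load the Edge TPU delegate before importing grpc:

from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('model_edgetpu.tflite')  # hypothetical model path
interpreter.allocate_tensors()

import grpc  # imported only after the delegate has been loaded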

poor classification results with edgetpu model

Description

I create my models with Teachable Machine.
For this test, I downloaded all three model variants.

So far I have been using the unquant.tflite model for classification, which achieves excellent results.
Test with the script:
images_predict_unquant_Teach_Lite.py - All images in a directory are checked
Model: model_unquant.tflite

To speed things up, I bought the Coral USB Accelerator to classify with an Edge TPU model.

Unfortunately, I get bad results with the edgetpu model, and only two classes are ever returned as a result (none and cat), regardless of which class I test.
Test with script:
classify_image_edgetpu.py - only individual images are checked
Model:
example_edgetpu_v13.tflite

Reference issues:
google-coral/libedgetpu#29 (comment)
#49
#48

The zip file (edgetpu_issue_Windows10 (3).zip) contains the two scripts, all models, and a test dataset of 292 images per class.

I have not yet used the model_quantized.tflite model for these tests. However, it is included in the zip file.

Install edgetpu env

$ pip freeze
edgetpu @ https://dl.google.com/coral/edgetpu_api/edgetpu-2.14.0-cp37-cp37m-win_amd64.whl
install==1.3.4
numpy==1.21.2
opencv-contrib-python==4.5.3.56
opencv-python==4.5.3.56
Pillow==8.3.2
pycoral @ https://github.com/google-coral/pycoral/releases/download/v2.0.0/pycoral-2.0.0-cp37-cp37m-win_amd64.whl
tflite-runtime @ https://github.com/google-coral/pycoral/releases/download/v2.0.0/tflite_runtime-2.5.0.post1-cp37-cp37m-win_amd64.whl

Install tflite env

$ pip freeze
absl-py==0.11.0
astunparse==1.6.3
cachetools==4.2.0
certifi==2020.12.5
chardet==4.0.0
flatbuffers==1.12
gast==0.3.3
google-auth==1.24.0
google-auth-oauthlib==0.4.2
google-pasta==0.2.0
grpcio==1.32.0
h5py==2.10.0
idna==2.10
importlib-metadata==3.3.0
imutils==0.5.3
Keras-Preprocessing==1.1.2
Markdown==3.3.3
numpy==1.19.5
oauthlib==3.1.0
opencv-python==4.5.1.48
opt-einsum==3.3.0
Pillow==8.1.0
Pillow-PIL==0.1.dev0
protobuf==3.14.0
pyaes==1.6.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
requests==2.25.1
requests-oauthlib==1.3.0
rsa==4.6
scipy==1.6.0
six==1.15.0
Telethon==1.19.0
tensorboard==2.4.0
tensorboard-plugin-wit==1.7.0
tensorflow==2.4.0
tensorflow-estimator==2.4.0
termcolor==1.1.0
typing-extensions==3.7.4.3
urllib3==1.26.2
Werkzeug==1.0.1
wrapt==1.12.1
zipp==3.4.0

Issue Type

Performance

Operating System

Windows 10

Coral Device

USB Accelerator

Other Devices

No response

Programming Language

Python 3.7

Relevant Log Output

No response

Dual edge TPU hangs when running detect_image.py

classify_image.py runs well and the temperature seems stable. But when I run detect_image.py, the temperature reading goes negative and the device is throttled with HIB errors. Please see the logs below.

(base) aditya@aditya-desktop:~/workspace/coral/pycoral$ python3.6 examples/classify_image.py --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels test_data/inat_bird_labels.txt --input test_data/parrot.jpg
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
12.1ms
2.7ms
2.7ms
2.7ms
2.7ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.76953
(base) aditya@aditya-desktop:~$ for (( ; ; )); do  sleep 1; cat /sys/class/apex/apex_0/temp; done
43050
43300
43550
43300
43550
43300
43550
43550
43300
43550
43550
43550
43300
43300
43300
43300
43550
43050
43300
43300
43300
43050
43550
43050
43300
43550
43550
43300
43550
43300
43550
43300
43050
43300
43300
43300
43300
43050
43550
43050
43550
43300
43300
43300
43050
43300
43300
43050
43050
43300
43300
43300
43050
43300
43050
43050
43300
43050
43300
43050
43300
43550
43300
43050
43300
43300
43050
43300
43050
43300
43300
43300
43300
43300
43300
43300
(base) aditya@aditya-desktop:~/workspace/coral/pycoral$ python3.6 examples/detect_image.py   --model test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite   --labels test_data/coco_labels.txt   --input test_data/grace_hopper.bmp   --output ${HOME}/grace_hopper_processed.bmp
----INFERENCE TIME----
Note: The first inference is slow because it includes loading the model into Edge TPU memory.
E driver/mmio_driver.cc:254] HIB Error. hib_error_status = ffffffffffffffff, hib_first_error_status = ffffffffffffffff
(base) aditya@aditya-desktop:~$ for (( ; ; )); do  sleep 1; cat /sys/class/apex/apex_0/temp; done
43300
43550
43800
43800
43800
43550
43550
43800
43550
43800
43800
43050
43300
43300
43300
43550
43550
43300
43550
43550
43550
43550
-89700
-89700
-89700
-89700
-89700
-89700
-89700
-89700
-89700
-89700
-89700
-89700
-89700
-89700
-89700
-89700


(base) aditya@aditya-desktop:~$ uname -r
4.15.0-153-generic
