
larrylart / codrive

43 stars · 8 forks · 9 open issues · 67.52 MB

Advanced driver-assistance system with Google Coral Edge TPU Dev Board / USB Accelerator, Intel Movidius NCS (neural compute stick), Myriad 2/X VPU, Gyrfalcon 2801 Neural Accelerator, NVIDIA Jetson Nano and Khadas VIM3

License: GNU General Public License v3.0

Makefile 0.20% C++ 48.39% C 51.16% Objective-C 0.15% Python 0.10%
google-coral edge-tpu intel-movidius ncs myriad gyrfalcon 2801 neural-accelerator vim3 amlogic

codrive's People

Contributors: larrylart

codrive's Issues

Cross compile error

Hey @larrylart, During compilation of the code on VIM3, did you do it using a cross compiler? If so, does this demo work using a cross compiler?
(screenshot attached: COdrive-error)

8x fps speedup & servo output

Hey! Cool project. I've extended it to do MJPEG streaming with target drawing etc. I've also added sub-image sampling so that inference is run repeatedly over slightly overlapping 300x300 images within the original image. This is so that targets are detected much farther away. It is actually faster too because scaling down takes so long, relatively.

Anyway, with 720p as an example, I discovered that you can get an 8x+ fps speedup just by using the V4L capture backend.
Replace:
cap = cv::VideoCapture(camera_device);
with:
cap = cv::VideoCapture(camera_device, cv::CAP_V4L); 👍

Do you know how to get servo/PWM output working? There are some badass things that can be done if we can get the coral dev board to do servo/PWM output.
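On mainline Linux boards, PWM pins are typically driven through the kernel's sysfs PWM class rather than a userspace library. A hedged sketch of that interface — the chip and channel numbers, and whether the Coral Dev Board header routes any pwmchip to the 40-pin connector, are assumptions to verify against the board's pinout:

```shell
# Hypothetical sysfs PWM setup for a hobby servo (50 Hz, 1.5 ms pulse).
# Times are in nanoseconds; pwmchip0/pwm0 are placeholders.
echo 0        > /sys/class/pwm/pwmchip0/export
echo 20000000 > /sys/class/pwm/pwmchip0/pwm0/period      # 50 Hz frame
echo 1500000  > /sys/class/pwm/pwmchip0/pwm0/duty_cycle  # center position
echo 1        > /sys/class/pwm/pwmchip0/pwm0/enable
```

Writing a different duty_cycle between roughly 1000000 and 2000000 ns then sweeps the servo; this requires root and a kernel with the PWM controller enabled.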

No objects detected after going through GetInference function

I followed the instructions as in the README. Once the inference ran

pAMLWorker->GetInference( vectMatchResult );

no COCO objects are detected, since

vectMatchResult.size() is always zero.

pAMLWorker->m_inferenceTime is around 0.03. So I don't understand why no objects are detected.

However, I am using Ubuntu 20.04 with the latest image, and the NPU SDK has had some updates over the past few months (and I adapted the code to use OpenCV 4.5). Am I missing something?

cannot find -ledgetpu

I'm trying to build this project on the coral dev board. I was able to build OpenCV 4.1, but not TensorFlow or this project.

Issues:

  1. I was not able to build tensorflow lite (on the coral dev board). I followed the instructions in the required section and did a git reset to the same commit. Running sh ./build_coral_lib.sh failed on line 19 (before the line that was modified in your instructions): ./build_coral_lib.sh: 19: ./build_coral_lib.sh: Bad substitution. Full error
  2. I thought that I should be able to ignore the TF build issues and instead use the static library for TF, which I thought would come with the edgetpu library. Maybe it doesn't, or maybe the makefile needs to be updated. I get the following error when running make (on the coral dev board): /usr/bin/ld: cannot find -ledgetpu Full error

Do you know how to solve either issue?
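The "Bad substitution" in the first item is a classic sh-versus-bash symptom: ${var//pattern/replacement} is a bash extension that dash (what sh resolves to on Debian-based images) rejects. A hedged sketch of the diagnosis — the script name is from the issue, the variable is illustrative:

```shell
# dash rejects bash's pattern substitution with "Bad substitution";
# running the build script under bash instead usually clears the error:
#   bash ./build_coral_lib.sh
# The construct itself is fine under bash:
v=libtensorflow-lite_aarch64
echo "${v//aarch64/armv7l}"    # -> libtensorflow-lite_armv7l
```

If the script's shebang is #!/bin/sh, changing it to #!/bin/bash has the same effect as invoking it with bash explicitly.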

Remove dependency on locally checked out tensorflow/lite

I'm using CLion with remote development set up to automatically push code to the coral dev board (CDB). CLion also references the libraries on the CDB (instead of whatever libraries may or may not be installed on my desktop). It looks like you had tensorflow checked out in the project directory during development. I would like to remove that dependency and instead set it up so that CLion resolves tensorflow-lite and edgetpu objects in the provided static and shared libraries.

What would all instances of #include "tensorflow/lite/..." need to be replaced with to get CLion, or any other IDE, to reference and resolve tensorflow objects to lib/libtensorflow-lite_aarch64.a instead of a locally checked out tensorflow source directory?

(I realize that the makefile is already utilizing the static and shared libraries, but if the IDE can't resolve tensorflow objects without (a possibly different version of) tensorflow checked out locally, then that adds friction to development.)
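One way to answer the question above, under the assumption that the headers only need to be visible to the preprocessor and the IDE: leave every #include "tensorflow/lite/..." line unchanged and add an include path pointing at a headers-only copy of the sources matching the version the static library was built from. A hypothetical compile line (all paths are placeholders, not from the repo's makefile):

```shell
# The includes stay as-is; -I supplies the header root so the compiler
# and the IDE resolve "tensorflow/lite/..." without a full in-tree
# tensorflow checkout. flatbuffers headers are needed alongside them.
g++ -std=c++11 \
    -I/opt/tflite-headers \
    -I/opt/flatbuffers/include \
    main.cpp -Llib -ltensorflow-lite_aarch64 -ledgetpu -o codrive
```

In CLion this corresponds to adding the same directories as include paths in the CMake or compiler settings; version skew between the headers and the prebuilt .a is the main thing to watch for.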

Build error on Odroid XU-4

I'm trying to build this sample code on my XU-4 running Mate 16.04 and Mate 18.04, to help evaluate a potential performance regression in Mate18 vs. Mate16 that I uncovered running an OpenVINO C++ example for the Movidius NCS2.

I get the same error on both systems.
doing: ./tools/make/build_rpi_lib.sh

I get this error:
In file included from ./tensorflow/lite/core/api/op_resolver.h:20:0,
from ./tensorflow/lite/core/api/flatbuffer_conversions.h:24,
from tensorflow/lite/core/api/flatbuffer_conversions.cc:16:
./tensorflow/lite/schema/schema_generated.h:21:37: fatal error: flatbuffers/flatbuffers.h: No such file or directory

I've no idea what package flatbuffers.h belongs to :(
but I doubt it's the only missing dependency.
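flatbuffers.h is not a system package in this build: the TF Lite makefile build vendors flatbuffers under its downloads/ directory, populated by a helper script. A hedged sketch of the usual fix (paths as in the TF 1.x/2.0 makefile build; the directory guard is only there to make the snippet safe to run anywhere):

```shell
# Fetch TF Lite's bundled third-party sources (flatbuffers among them)
# before building; the missing-header error usually means this step
# was skipped. Run from the root of the tensorflow checkout.
if [ -d tensorflow/lite/tools/make ]; then
    ./tensorflow/lite/tools/make/download_dependencies.sh
    ./tensorflow/lite/tools/make/build_rpi_lib.sh
else
    echo "run from the tensorflow checkout root"
fi
```

After the download step, the headers land in tensorflow/lite/tools/make/downloads/flatbuffers/include, which the makefile already adds to the include path.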

The idea is that if the Coral code doesn't have the performance decrement, the problem is likely in OpenVINO, as it's not "officially" supported on 18.04 at present.

Python samples using Coral and Movidius show differences within the run-to-run variance of the code; Mate16 is on average ~0.5 fps higher, though I won't claim statistical significance.

Gist of the performance decrement:
There appears to be a performance regression where Mate18 is significantly worse than the Mate16 system (remember, this is C++ code, not Python). I get the following results from the sample code:
Odroid XU-4 Mate16
NCS: 8.22 fps
NCS2: 11.5 fps

Looks to be a performance regression on Mate18 vs. Mate16!
Odroid XU-4 Mate18
NCS: 6.56 fps
NCS2: 8.36 fps

Raspberry Pi3B:
NCS: 6.93 fps
NCS2: 8.58 fps

Basically Mate18 is a bit worse than a Pi3B here!

Memory leak

Running aml_object_detect produces a memory leak in vsi_nn_RunGraph.
