
A custom micropython firmware integrating tensorflow lite for microcontrollers and ulab to implement the tensorflow micro examples.

License: MIT License


tensorflow-micropython-examples's Introduction

Tensorflow Micropython Examples

The purpose of this project is to provide a custom micropython firmware that integrates tensorflow lite for microcontrollers and allows for experimentation.

Architecture

This project is a micropython module built using the USER_C_MODULES extension mechanism. There are several modules:

  • microlite
  • ulab
  • modcamera (for the person_detection example)

There are 4 top level git submodules:

  • tensorflow lite micro
  • micropython
  • ulab
  • tflm_esp_kernels

tflite-micro sources are generated within the microlite module at build time using the tensorflow lite example generator.

The microlite module has several types:

  • tensor
  • interpreter
  • audio_frontend (used for the micro_speech example)

Port Status

Build Type   Status
ESP32        build status badge (link to CI)
ESP32 S3     build status badge (link to CI)
RP2          build status badge (link to CI)
STM32        Doesn't work
UNIX         build status badge (link to CI)

Prebuilt Firmware

The latest firmware can be downloaded from the applicable build above (in the Status section).

  1. Click on build status link.
  2. Click on the latest green build
  3. Review the available artifacts for download

You do need to be careful to get the proper firmware for your board. If your board is not currently being built, please file an issue and it can be added.

Also be sure that you are getting the most recent build for your board from the main branch. There are builds from other feature branches, but many, especially those related to optional ops, are broken even if the build appears to have succeeded.

Recent Changes

STM32 Port Fixed 2022-01-02

The STM32 port works for hello_world now.

At the moment the build is specific to my Nucleo H743ZI2 board, but I think it can be generalized to many other STM32 boards.

Please file an issue if you would like to have a build added for your board.

Build Process Changed 2021-12-15

#36 moved the audio_frontend from a separate module into a type within the microlite module.

Building from Scratch

The steps to build are self-documented within the github actions used to build the firmware for the various supported boards. Look in the .github/workflows/ directory to see the pipeline scripts for each board.

Issues are welcome to request adding CI support for new boards.

Follow the Upgrade Instructions on how to upgrade. The main task is to update the git submodules to the latest versions.

Follow the Linux Build Instructions on how to build the latest firmware from a fresh clone of this repository.

Examples

The goal of this project is to experiment with TinyML, and the plan is to have micropython implementations of the examples from the tensorflow micro project.

In all cases the upstream model.tflite can be used as-is. However, it's common for the implementation code to be written in micropython instead of C++.

Pull requests are welcome for additional examples with custom models.

TF Micro Example     Training Reference
hello_world          Train Hello World
magic_wand           Train Magic Wand
micro_speech         Train Micro Speech
person_detection     Train Person Detection

Hello World

Give the model an x value and it returns a y value; a chart of such points approximates a sine wave.
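A minimal MicroPython sketch of that flow is below. The interpreter and tensor method names (getInputTensor, setValue, quantizeFloatToInt8 and friends) are assumptions based on the examples and issues in this repository rather than a verbatim API reference, and the tensor arena size is a guess.

import math
import microlite

with open('model.tflite', 'rb') as f:           # model copied to the board's filesystem
    model = bytearray(f.read())

x = 1.57                                        # input x value in radians

def input_callback(interpreter):
    input_tensor = interpreter.getInputTensor(0)
    input_tensor.setValue(0, input_tensor.quantizeFloatToInt8(x))

def output_callback(interpreter):
    output_tensor = interpreter.getOutputTensor(0)
    y = output_tensor.quantizeInt8ToFloat(output_tensor.getValue(0))
    print('x =', x, 'y =', y, '(sin(x) =', math.sin(x), ')')

interp = microlite.interpreter(model, 2 * 1024, input_callback, output_callback)  # arena size is a guess
interp.invoke()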

Status:

  • Works on unix port and esp32 port.

Hello-World Documentation

Micro Speech

Process:

  1. Sample Audio
  2. Convert to spectrogram
  3. Set 1960 bytes on the input tensor, corresponding to 1 second of spectrogram data.
  4. Run inference on the model 3-4 times per second and average the scores.
  5. Scores over 200 indicate a match (a sketch of this loop follows below).
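A hedged MicroPython sketch of steps 3-5, assuming the interpreter and tensor method names used elsewhere in this repository (getInputTensor, setValue, getValue); the audio capture and audio_frontend feature generation are left as comments:

import microlite

with open('model.tflite', 'rb') as f:          # model copied to the board, e.g. with ampy
    micro_speech_model = bytearray(f.read())

spectrogram = bytearray(1960)                  # 1 second of features: 49 frames x 40 channels
scores = []

def input_callback(interpreter):
    input_tensor = interpreter.getInputTensor(0)
    for i in range(len(spectrogram)):
        input_tensor.setValue(i, spectrogram[i])

def output_callback(interpreter):
    output_tensor = interpreter.getOutputTensor(0)
    # the upstream micro_speech model scores four categories: silence, unknown, "yes", "no"
    scores.append([output_tensor.getValue(i) for i in range(4)])

interp = microlite.interpreter(micro_speech_model, 8 * 1024, input_callback, output_callback)

while True:
    # steps 1-2: sample audio (machine.I2S) and run it through microlite.audio_frontend
    # to refresh the spectrogram buffer -- omitted here for brevity
    interp.invoke()                            # run 3-4 times per second
    if len(scores) >= 3:
        averaged = [sum(column) // len(scores) for column in zip(*scores)]
        if max(averaged[2:]) > 200:            # scores over 200 indicate a match
            print('heard:', 'yes' if averaged[2] > averaged[3] else 'no')
        scores = []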

Status:

  • Works on unix port with files.
  • Works on an esp32 board without SPI RAM, using machine.I2S.

Micro Speech Documentation

ESP32 Example

ESP32 Example with INMP441 Microphone

ESP32 Demo

Watch the micro speech video

Person Detection

Process:

  1. Capture Images
  2. Convert to 96x96 pixel int8 greyscale images
  3. Set the image on the input layer of the model
  4. Run inference on the image
  5. If person > no person, the image is classified as containing a person
  6. If no person > person, the image is classified as containing no person (see the sketch below)
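A hedged MicroPython sketch of steps 3-6, again assuming the method names used by this repository's other examples; the model file name, the image source (modcamera or a file), the output ordering and the arena size are all assumptions:

import microlite

with open('person_detect.tflite', 'rb') as f:   # file name is an assumption
    model = bytearray(f.read())

image = bytearray(96 * 96)                      # 96x96 int8 greyscale image from modcamera or a file

def input_callback(interpreter):
    input_tensor = interpreter.getInputTensor(0)
    for i in range(len(image)):
        input_tensor.setValue(i, image[i])

def output_callback(interpreter):
    output_tensor = interpreter.getOutputTensor(0)
    person = output_tensor.getValue(0)          # output index order is an assumption
    no_person = output_tensor.getValue(1)
    print('person' if person > no_person else 'no person')

interp = microlite.interpreter(model, 136 * 1024, input_callback, output_callback)  # arena size is a guess
interp.invoke()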

Status:

  • Works on unix port and esp32 port using files.

Person Detection Documentation

Magic Wand

TODO #5

About Tensorflow

At the moment we are using the main branch in the tensorflow lite micro repository.

This is the C++ API version of tensorflow lite designed to run on microcontrollers.

About Micropython

We are building from the micropython master branch.

Flash image

ESP32D0WDQ6 4MB Flash

Download the firmware from the latest ci build.

The zip file contains:

  1. The bootloader
  2. The partition table
  3. The firmware

Flash from Windows

 esptool.py -p COM5 -b 460800 --before default_reset --after hard_reset --chip esp32 write_flash --flash_mode dio --flash_size detect --flash_freq 40m 0x1000 bootloader/bootloader.bin 0x8000 partition_table/partition-table.bin 0x10000 micropython.bin

Flash for Linux

TODO

Credits

Mike Teachman for I2S micropython implementation for ESP32, STM32 and RP2

The Micropython I2S implementation was written by Mike Teachman, and it's because of his hard work that the micro-speech example works so well.

Open MV

As far as I am aware OpenMV (https://openmv.io/) was the first micropython firmware to support tensorflow.

I copied extensively from their approach to get inference working in the hello world example and also for the micro-speech example.

I started from their libtf code for how to interact with the Tensorflow C++ API from micropython:

https://github.com/openmv/tensorflow-lib/blob/343fe84c97f73d2fe17a0ed23540d06c782fafe7/libtf.cc and https://github.com/openmv/tensorflow-lib/blob/343fe84c97f73d2fe17a0ed23540d06c782fafe7/libtf.h

The audio_frontend module originated from looking at how openmv connects to the tensorflow microfrontend here: https://github.com/openmv/openmv/blob/3d9929eeae563c5b370ac86afa9216df50f0c079/src/omv/ports/stm32/modules/py_micro_speech.c

tensorflow-micropython-examples's People

Contributors

cgreening, mattusi, mocleiri, tgiles1998, uraich, vikramdattu


tensorflow-micropython-examples's Issues

Add support for esp32s3 and custom version with different SPIRAM CS Pin

Operating system
macOS Mojave

Python version
3.10

What Chip
ESP32 S2 16MB

Uploaded the latest 16MB SPIRAM Micropython.

esptool.py --port /dev/cu.SLAB_USBtoUART -b 460800 --before default_reset --after hard_reset --chip esp32 write_flash --flash_mode dio --flash_size detect --flash_freq 40m 0x1000 bootloader/bootloader.bin 0x8000 partition_table/partition-table.bin 0x10000 micropython.bin

ampy -p /dev/cu.SLAB_USBtoUART put main.py
ampy -p /dev/cu.SLAB_USBtoUART put i2s_dump.py
ampy -p /dev/cu.SLAB_USBtoUART put micro_speech.py
ampy -p /dev/cu.SLAB_USBtoUART put model.tflite

Result returned:

ets Jul 29 2019 12:21:46

rst:0x1 (POWERON_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:2
load:0x3fff0030,len:4344
load:0x40078000,len:13816
load:0x40080400,len:3340
entry 0x40080618
Traceback (most recent call last):
File "main.py", line 14, in
ImportError: no module named 'audio_frontend'
MicroPython 36f7ca0 on 2021-11-09; ESP32 module (microlite-spiram-16m) with ESP32
Type "help()" for more information.

Is the audio_frontend part of this firmware, or do I need to add it separately? If so, where might I find the file for audio_frontend?

Many thanks,

Thomas Giles

Implement micro-speech on esp32

On esp32 we need to use the i2s peripheral to sample audio and then convert it into spectrograms to feed into the example's input tensor.

The unix port works on the fixed 1 second yes and no samples, so we need to adapt it to run on a continuously sampled stream.

At the moment my plan is to redo the sliding-window sampling approach in micropython, and also to redo the inference-averaging output processor in micropython.

Setup CI job to automatically update submodules

If something changes in micropython, ulab or tensorflow and it breaks our build, I want to know. At the moment the submodules stay at a specific version until updated.

Let's create a new github actions job that will run daily and make a commit if the submodule branches have new commits on them.

See if we can use esp-dsp to accelerate esp32 math ops

I found out that esp32-s3 will have improved hardware support for some dsp functions.

Espressif also has an esp-dsp module that provides improved/optimized code for regular esp32 math ops.

Let's investigate how to use these methods and whether it's possible for tensorflow lite to use them.

https://docs.espressif.com/projects/esp-dsp/en/latest/esp-dsp-apis.html
https://docs.espressif.com/projects/esp-dsp/en/latest/esp-dsp-benchmarks.html

Expose tensor type to validate model is as expected

I got caught out using the wrong model because I didn't validate the type of the input and output tensors.

We can either expose this on the tensor or add an option when building the interpreter to validate the types there.

In both cases we need to make some constants that can be used to represent the different tensor types.

Setup CI for unix port

I want to set up a github actions workflow for the unix port.

It should be built in debug mode.

Add microlite_op_resolver to allow for using the all or mutable op resolver

I found out that there is a memory overhead for using the all-ops resolver. I'm not sure of the exact overhead but I heard it could be 4k.

Let's create a new type to represent the op_resolver. It should support either the all-ops resolver or the mutable op resolver, where you can specify which ops to load.

stm32: Debug cause of board reset when trying to run the hello world example

MicroPython v1.16-222-g44818d1a3-dirty on 2021-09-08; NUCLEO_H743ZI2 MICROLITE with STM32H743

Type "help()" for more information.
>>> import hello_world
interpreter_make_new: model size = 2488, tensor area = 20048
Failed to allocate tail memory. Requested: 61942520, available 19888, missing: 61922632
Failed starting model allocation.

AllocateTensors() failed!
time step,y
MicroPython v1.16-222-g44818d1a3-dirty on 2021-09-08; NUCLEO_H743ZI2 MICROLITE with STM32H743
Type "help()" for more information.

In testing we are getting a very large allocation number. The allocation is supposed to all fit within the allocated tensor area.

I wonder if this is caused by alignment issues. I removed some of the alignment functions that were in the original files from openmv.

Remove unused code related to models and openmv

I started from trying to use the openmv C -> C++ bridge code but I've ended up with a similar but different approach.

I want to clean up the code base to remove the unused openmv functions.

The same goes for the model. I ended up getting inference to work using the interpreter object, so I should remove the non-working microlite.model object skeleton.

Fix tensorflow build in esp32 ci build

When I manually updated the tensorflow submodule reference the build broke with a GCC error:

root@907bbbd0af42:/opt/tflite-micro-micropython/tensorflow# xtensa-esp32-elf-g++ -DNDEBUG -std=c++11  -fstrict-vol
    atile-bitfields -mlongcalls -nostdlib -fno-rtti -fno-exceptions -fno-threadsafe-statics -Werror -fno-unwind-tables
     -ffunction-sections -fdata-sections -fmessage-length=0 -DTF_LITE_STATIC_MEMORY -DTF_LITE_DISABLE_X86_NEON -Wsign-
    compare -Wdouble-promotion -Wshadow -Wunused-variable -Wmissing-field-initializers -Wunused-function -Wswitch -Wvl
    a -Wall -Wextra -Wstrict-aliasing -Wno-unused-parameter -DESP -Wno-return-type -Wno-strict-aliasing -Wno-ignored-q
    ualifiers -Wno-return-type -Wno-strict-aliasing -O2 -I. -Itensorflow/lite/micro/tools/make/downloads/gemmlowp -Ite
    nsorflow/lite/micro/tools/make/downloads/flatbuffers/include -Itensorflow/lite/micro/tools/make/downloads/ruy -Ite
    nsorflow/lite/micro/tools/make/gen/esp_xtensa-esp32_default/genfiles/ -Itensorflow/lite/micro/tools/make/downloads
    /kissfft -c tensorflow/lite/micro/kernels/l2_pool_2d.cc -o tensorflow/lite/micro/tools/make/gen/esp_xtensa-esp32_d
    efault/obj/kernels/tensorflow/lite/micro/kernels/l2_pool_2d.o
    tensorflow/lite/micro/kernels/l2_pool_2d.cc: In function 'TfLiteStatus tflite::{anonymous}::L2Eval(TfLiteContext*,
     TfLiteNode*)':
    tensorflow/lite/micro/kernels/l2_pool_2d.cc:128:1: error: insn does not satisfy its constraints:
     }
     ^
    (insn 274 18 183 28 (set (reg/v:SF 20 f1 [orig:77 sum_squares ] [77])
            (mem/u/c:SF (symbol_ref/u:SI ("*.LC28") [flags 0x2]) [0  S4 A32])) "./tensorflow/lite/kernels/internal/ref
    erence/pooling.h":169 47 {movsf_internal}
         (expr_list:REG_EQUAL (const_double:SF 0.0 [0x0.0p+0])
            (nil)))
    during RTL pass: postreload
    tensorflow/lite/micro/kernels/l2_pool_2d.cc:128:1: internal compiler error: in extract_constrain_insn, at recog.c:
    2210
    Please submit a full bug report,
    with preprocessed source if appropriate.
    See <https://gcc.gnu.org/bugs/> for instructions.

Migrate unix specific build logic into variant config

ESP32 uses cmake but pyboard uses GNU Make, so I need to move some of the configuration out of the micropython.mk files in the microlite and audio_frontend modules into a custom variant configuration.

That way things like running in -g mode won't happen when making an actual firmware.

Variant seems to be the unix port equivalent of the out of tree boards directory.

Add tf.lite.Interpreter class to match upstream

Tensorflow lite already has a python Interpreter module. This allows the lite model to be run for example on a Linux box.

I don't want to remove the first part yet but do want to see if we can reshape the microlite module so that the interpreter is API compatible with the existing tensorflow lite python Interpreter class.

This would allow developing the Python code on a regular computer and then running it in micropython when ready.

We could run it in the micropython unix port and then also on the device ports using the same syntax.

It's hard to debug micropython scripts on the device, since the debugger only lets you debug the C and C++ underneath the interpreter.

So an advantage of this would be a script that is runnable on Windows as a regular Python script, where the Python variables can be debugged.

https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter
tf lite Interpreter-methods
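For reference, the upstream desktop API this issue proposes to mirror looks like this (standard tf.lite.Interpreter usage; the model path is a placeholder):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# fill the input tensor with data matching its declared shape and dtype
x = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], x)

interpreter.invoke()
y = interpreter.get_tensor(output_details[0]["index"])
print(y)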

Setup CI to build the project

To start with we need to build for esp32:

  1. use espressif/idf public docker container for building
  2. build tensorflow
  3. build micropython esp32 port with microlite, audio-frontend and ulab modules.

For testing we also need to build for the unix port:

  1. Develop custom docker image for building for the unix port.
  2. build tensorflow
  3. build micropython unix port with microlite, audio-frontend and ulab modules.

We need to figure out how to run unit tests as part of the builds and also against the boards locally. We should be able to copy the micropython approach here.

Merge the audio_frontend module into the microlite module

Originally I had the audio_frontend as a separate module to allow for not including it, but there are other ways to do so. For circuitpython compatibility I think I need to have everything in the microlite module.

Let's restructure things so that audio_frontend is a class within the microlite module. I think ulab has an example we can use for conditionally including the audio_frontend class.

What are the arguments for interpreter

Hi, I was wondering what each parameter means?

interp = microlite.interpreter(micro_speech_model, 8 * 1024, input_callback, output_callback)

Thanks, I'm pretty new to micropython and uncertain what each parameter means.
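Based on how the examples elsewhere in this document use the call (see the hello world and micro speech sketches above), the arguments break down roughly as follows; this is an inference from those examples, not the module's documented contract:

interp = microlite.interpreter(
    micro_speech_model,   # bytearray containing the .tflite model, e.g. read from the filesystem
    8 * 1024,             # size in bytes of the tensor arena the interpreter allocates tensors from
    input_callback,       # called before each invoke() so your code can fill the input tensor
    output_callback,      # called after each invoke() so your code can read the output tensor
)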

Add Circuitpython compatibility

We should be able to copy the code from ulab, which also has circuitpython support. It looks like you need to clone circuitpython and then add two links like this:

micropython-modules/microlite -> extmod/microlite
micropython-modules/audio_frontend -> extmod/audio_frontend

It would be similar to this:
https://github.com/v923z/micropython-ulab/blob/1d9670096f093eae0a27437943dca088a27cc378/build-cp.sh#L41

We can try to use the version of ulab in circuit python but if that is a problem we can also make a symbolic link from:

micropython-ulab -> extmod/ulab

Add Dockerfile for building arm

This is partly for the circuitpython integration work, but also for arm builds in general I need a dockerized build environment. I don't want to grind through sending every incremental change through CI builds.

I need to find out how to build tensorflow lite micro for arm and then build the firmware for arm.

i2c not working for esp32

Using the prebuilt firmware (https://github.com/mocleiri/tensorflow-micropython-examples/suites/4296460177/artifacts/112068306),
the firmware core panics when reading from I2C.

The simplest example to reproduce (assuming there is a device at address 0x53, scl pin 18, sda pin 19):

import machine
from machine import Pin, I2C

i2c = I2C(0, scl=Pin(18), sda=Pin(19))  # bus construction implied by the description above
a = i2c.readfrom_mem(0x53, 50, 6)

Guru Meditation Error: Core 0 panic'ed (StoreProhibited). Exception was unhandled.

Core 0 register dump:
PC : 0x4008c868 PS : 0x00060031 A0 : 0x8008c9fd A1 : 0x3ffbe980
A2 : 0x3ffb6fd8 A3 : 0x00000000 A4 : 0x00000000 A5 : 0x00000800
A6 : 0x3ffbf008 A7 : 0x00000000 A8 : 0x3ffb71e4 A9 : 0x00000901
A10 : 0x000000a5 A11 : 0x00000001 A12 : 0x00000001 A13 : 0x3ffb71e8
A14 : 0x00000000 A15 : 0x3ff53000 SAR : 0x0000001d EXCCAUSE: 0x0000001d
EXCVADDR: 0x00000008 LBEG : 0x00000000 LEND : 0x00000000 LCOUNT : 0x00000000

Backtrace:0x4008c865:0x3ffbe9800x4008c9fa:0x3ffbe9c0 0x40082d51:0x3ffbe9f0 0x40241a9f:0x3ffbcfe0 0x40122d57:0x3ffbd000 0x40096344:0x3ffbd020

ELF file SHA256: 84e9af058336fb6e

Fix esp32s3 build

The error we are getting is:

/home/runner/.espressif/tools/xtensa-esp32s3-elf/esp-2021r1-8.4.0/xtensa-esp32s3-elf/bin/../lib/gcc/xtensa-esp32s3-elf/8.4.0/../../../../xtensa-esp32s3-elf/bin/ld: micro_error_reporter.cpp:(.text+0x4f): undefined reference to `DebugLog'

I'm pretty sure this is due to board-specific logic within microlite/micropython.cmake and it's missing the esp32s3 case when adding the debug log class.

Custom ESP32-S3 External RAM Configuration

Following on from the issue: #40

The error produced when trying to run the Micro-Speech example on the ESP32-S3 was:

Connecting to /dev/tty.SLAB_USBtoUART...
ESP-ROM:esp32s3-20210327
Build:Mar 27 2021
rst:0x10 (RTCWDT_RTC_RST),boot:0x8 (SPI_FAST_FLASH_BOOT)
SPIWP:0xee
mode:DIO, clock div:1
load:0x3fcd0108,len:0xf60
load:0x403b6000,len:0x978
load:0x403ba000,len:0x2c80
entry 0x403b616c
W (25) bootloader_random: RNG for ESP32-S3 not currently supported
W (313) bootloader_random: RNG for ESP32-S3 not currently supported
Traceback (most recent call last):
File "main.py", line 14, in
ImportError: no module named 'audio_frontend'
MicroPython 59e6194 on 2021-12-18; ESP32S3 module (microlite-spiram) with ESP32S3
Type "help()" for more information.

I'll also look into the RNG warnings that were produced.

Create mechanism to build tensorflow ops as native modules to reduce the firmware size

Tensorflow lite can run in very small flash sizes if only the ops needed for the problem at hand are included in the firmware.

Currently we are bundling all of the tensorflow ops in the firmware; however, this has a price in terms of the flash memory required.

For esp32 4MB flash we changed the partition size to be 3 MB firmware data and 1 MB filesystem.

Native modules are limited in the APIs that they can call. However, I want to see if the reverse is true: can we add tensorflow ops in a way that they register with the firmware-based microlite module? Because the microlite module is contained within the firmware, that might allow the modules loaded via the reduced function table to be called from code that has full access to everything in the firmware.

If we can implement this feature it will dramatically reduce the out of the box storage space needed.

Try to move tensorflow lite micro build inside of the microlite module

I'm getting linker errors when trying to combine the libtensorflow-microlite.a as built by ci: https://github.com/mocleiri/tensorflow-micropython-examples/actions/workflows/build_tensorflow_arm.yml

I think there is a difference between the toolchain used to build the static library and the firmware.

Let's try to use the create_tflm_tree.py script to generate the tensorflow lite micro files within the microlite module.

micropython needs the files to be named .cpp instead of .cc, which is what tensorflow uses, so we will have to rename the generated files to .cpp. If we can find a way to do it in the python generation script we should upstream it for others, or maybe use a bit of scripting to do the transform (see the sketch below).
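As an illustration of the scripting route mentioned above, a small Python sketch that walks a generated tree and renames .cc files to .cpp; the target directory is a hypothetical example path:

import os

def rename_cc_to_cpp(root):
    # walk the generated tflite-micro tree and rename every .cc file to .cpp
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".cc"):
                source = os.path.join(dirpath, name)
                os.rename(source, source[:-3] + ".cpp")

rename_cc_to_cpp("micropython-modules/microlite/tflm")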

arm-none-eabi-ld: /usr/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard/libstdc++.a(new_op.o): in function `operator new(un
signed int)':
new_op.cc:(.text._Znwj+0xc): undefined reference to `malloc'
arm-none-eabi-ld: /usr/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard/libstdc++.a(vterminate.o): in function `__gnu_cxx::
__verbose_terminate_handler()':
vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x42): undefined reference to `fwrite'
arm-none-eabi-ld: vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x50): undefined reference to `fputs'
arm-none-eabi-ld: vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x5e): undefined reference to `fwrite'
arm-none-eabi-ld: vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x68): undefined reference to `free'
arm-none-eabi-ld: vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x76): undefined reference to `fputs'
arm-none-eabi-ld: vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x88): undefined reference to `fwrite'
arm-none-eabi-ld: vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x9c): undefined reference to `fwrite'
arm-none-eabi-ld: vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0xc0): undefined reference to `fwrite'
arm-none-eabi-ld: vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0xca): undefined reference to `fputs'
arm-none-eabi-ld: vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0xd4): undefined reference to `fputc'
arm-none-eabi-ld: vterminate.cc:(.text._ZN9__gnu_cxx27__verbose_terminate_handlerEv+0xf4): undefined reference to `_impure_ptr'
arm-none-eabi-ld: /usr/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard/libstdc++.a(cp-demangle.o): in function `d_growable
_string_callback_adapter':
cp-demangle.c:(.text+0x314): undefined reference to `realloc'
arm-none-eabi-ld: cp-demangle.c:(.text+0x32e): undefined reference to `free'
arm-none-eabi-ld: /usr/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard/libstdc++.a(cp-demangle.o): in function `d_append_n
um':
cp-demangle.c:(.text+0x552): undefined reference to `sprintf'
arm-none-eabi-ld: /usr/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard/libstdc++.a(cp-demangle.o): in function `d_print_co
mp_inner':
cp-demangle.c:(.text+0x2b20): undefined reference to `sprintf'
arm-none-eabi-ld: cp-demangle.c:(.text+0x3c5e): undefined reference to `sprintf'
arm-none-eabi-ld: cp-demangle.c:(.text+0x3d72): undefined reference to `sprintf'
arm-none-eabi-ld: cp-demangle.c:(.text+0x3e72): undefined reference to `sprintf'
arm-none-eabi-ld: /usr/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard/libstdc++.a(cp-demangle.o):cp-demangle.c:(.text+0x4
088): more undefined references to `sprintf' follow
arm-none-eabi-ld: /usr/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard/libstdc++.a(cp-demangle.o): in function `__cxa_dema
ngle':
cp-demangle.c:(.text+0x5e50): undefined reference to `free'
arm-none-eabi-ld: cp-demangle.c:(.text+0x5e7a): undefined reference to `free'
arm-none-eabi-ld: cp-demangle.c:(.text+0x5e9e): undefined reference to `free'
arm-none-eabi-ld: /usr/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard/libstdc++.a(del_op.o): in function `operator delete
(void*)':
del_op.cc:(.text._ZdlPv+0x0): undefined reference to `free'
arm-none-eabi-ld: /usr/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard/libstdc++.a(eh_alloc.o): in function `__cxa_allocat
e_exception':
eh_alloc.cc:(.text.__cxa_allocate_exception+0x8): undefined reference to `malloc'
arm-none-eabi-ld: /usr/bin/../lib/gcc/arm-none-eabi/10.2.1/../../../../arm-none-eabi/lib/thumb/v7e-m+dp/hard/libstdc++.a(eh_alloc.o): in function `__cxa_free_ex
ception':
eh_alloc.cc:(.text.__cxa_free_exception+0x16): undefined reference to `free'
make[1]: *** [Makefile:713: /opt/tflite-micro-micropython/boards/stm32/NUCLEO_H743ZI2_MICROLITE/build/firmware.elf] Error 1
make[1]: Leaving directory '/opt/tflite-micro-micropython/micropython/ports/

I verified that both the normal build and the normal build with USER_C_MODULES worked fine.

Implement person detection example

I don't have a camera, but I think I can fake it by taking the tensorflow lite model plus the person and no-person images from the upstream repo, and then running the model to classify those pictures.

Add support for micropython stm32 boards

Within the stm32 boards there is a wide variety of Cortex M0, M3, M4 and M7 cores. Each needs to be built separately on the tensorflow side.

Make separate CI to build libtensorflow-microlite.a for the different permutations. Then find out how to encode the required flavor in the board configurations.

The board specific build can then download the right tensorflow artifact to use when building.

Add support for customizing which tflite micro operators are included in the firmware

Both tensorflow and the micropython firmware are built where unused code can be dropped at link time. This allows for a smaller firmware because not all operators need to be included.

We can copy the ulab approach of having many user controllable #defines that can be used to reduce the size of the firmware.

As the unused code detection is done at linking time we may need to add some extra objects to what is linked as libtensorflow-microlite.a so that it knows which op resolver we want to use and how many ops to build.

I think we should be able to use the __COUNTER__ preprocessor extension to know how many ops are being built at compile time and then set a matching value for the MutableOpResolver.

First we should try to use the tensorflow makefile: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/tools/make/Makefile

If that doesn't work we can pick up the Zephyr cmake work and change our build so that we build tensorflow within the micropython build instead of just linking to the result of the tensorflow build.

tensorflow/tensorflow#47241

Adjust CI to build stm32 H743

In order to facilitate external testing I want to add ci to build stm32 for the H743 board.

This involves disabling the static library build for tensorflow microlite and replacing it with a single unified build.

Migrate to Micropython 1.15

Micropython 1.15 changes the build system for esp32 from GNU Make to CMake but also opens up the ability to build with the latest espressif idf versions.

I need to update the code so that the microlite and audio_frontend modules build using the new way.

ulab upstream also supports the new way, so I will need to upgrade that as well.

Analyze available tflite micro models for which operators are most commonly used

#28 will add the ability to control which operators are included in the firmware. The purpose of this issue is to analyze available tensorflow lite micro models to find out which operators are used the most.

Then we should try to put the most widely used ops into the standard firmware. It can also help to identify which ops are specific to a particular use-case.

We are interested in int8 quantized tflite models < 4MB in size.

Find out where the tensorflow micro version comes from in the new repository

I was depending on tensorflow/lite/version.h and then using the TFLITE_VERSION_STRING define but that did not make it into the tflite-micro repository.

There are constants in tensorflow/lite/micro/micro_interpreter.h for the schema version and I suspect that may include the tflite version in the future.

Cannot build anymore

I wanted to rebuild the MicroPython interpreter for the ESP32 from scratch, but I now fail with:
undefined reference to `tflite::MicroInterpreter::MicroInterpreter(tflite::Model const*, tflite::MicroOpResolver const&, unsigned char*, unsigned int, tflite::ErrorReporter*, tflite::MicroResourceVariables*, tflite::MicroProfiler*)'
I think tflite has changed in the tensorflow repository.

unix: Fix build when using tflm in microlite

Find a way to detect if we are in the stm32 build or the unix port build:

./../../micropython-modules/microlite/openmv-libtf.h:18:10: fatal error: tensorflow/lite/c/common.h: No such file or directory
 #include "tensorflow/lite/c/common.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
../../../micropython-modules/microlite/bare-metal-gc-heap.c:5:10: fatal error: sys/reent.h: No such file or directory
 #include <sys/reent.h>
          ^~~~~~~~~~~~~
compilation terminated.

The unix port is failing because it's using the GNU Make build, which was modified for stm32.

Investigate automatic quantization when getting/setting values on tensors.

The rules for quantization seem to always exist on the model's input and output tensors. For the hello world example we added methods you can call to quantize float32 to int8 and int8 to float32, but can we do this transform automatically?

In the microlite C code, when we are getting and setting the value of a tensor we know both the tensor type and the micropython object type, so we could try to automatically quantize these values.

We might want to add a switch for this feature that can be set when the interpreter is created.
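For reference, the transform such a switch would apply is the standard TFLite affine quantization, real_value = scale * (quantized_value - zero_point); a minimal sketch with placeholder scale and zero-point values:

# scale and zero_point are placeholders; real values come from the tensor's
# quantization parameters stored in the model
scale = 0.024
zero_point = -128

def quantize_float_to_int8(value):
    quantized = int(round(value / scale)) + zero_point
    return max(-128, min(127, quantized))       # clamp to the int8 range

def dequantize_int8_to_float(quantized):
    return scale * (quantized - zero_point)

print(dequantize_int8_to_float(quantize_float_to_int8(1.57)))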

Use cmsis_nn optimized kernels for rp2 port

Currently the rp2 port is building the reference kernels. Let's finish the setup so that the build can use the cmsis_nn optimized kernels.

I think it's only one change to the prepare-tflm-rp2.sh file and then some minor adjustments to which files are built.

Upgrade Micropython to latest to support version 4.3 esp-idf's

There is work on the micropython master branch for the esp32 port to allow it to be used with espressif idf versions more modern than 4.0.1.

See if we can move the miketeachman i2s commit to the tip of master and then adjust the build scripts accordingly to build the esp32 firmware using the new code.

I hope that by upgrading we will get more insight into the watchdog timer issue that prevents the microspeech model from loading when running on esp32.

Update hello world example to load model file from the file system

The hello world example was implemented first. It copies the C++ approach of storing the model in an array.

But that didn't work right for the micro speech example.

Due to how micropython compilation worked, instead of taking 20k of RAM for a 20k model we ran out of memory as it tried to allocate 70k.

Instead, let's put the model on the filesystem and load it from there.

Let's also see if the comments can be improved and whether the readme can serve as a proper introduction.
