
glistening / one


This project is forked from samsung/one.


On-device Neural Engine

License: Other

Languages: C++ 82.85%, C 7.20%, Python 3.32%, CMake 3.26%, Shell 2.98%, Roff 0.10%, Assembly 0.09%, C# 0.06%, Java 0.06%, Dockerfile 0.05%, Makefile 0.04%

one's Introduction


ONE (On-device Neural Engine)

ONE Logo

A high-performance, on-device neural network inference framework.

Goal

The ONE project aims to provide a high-performance, on-device neural network (NN) inference framework that performs inference of a given NN model on processors such as CPU, GPU, DSP, or NPU.

We develop a runtime that runs on Linux kernel-based OS platforms such as Ubuntu, Tizen, and Android, and a compiler toolchain that lets NN models created with various training frameworks, such as TensorFlow and PyTorch, be used in a unified form at runtime.

Overview

Getting started

  • To contribute, please refer to our contribution guide.
  • You can also find various how-to documents here.

Feature Request

You can suggest ONE features that are not yet available.

The features requested so far can be found in the popular feature request list.

  • If the feature you want is on the list, add a 👍 reaction to the body of the issue. The feature with the most 👍 reactions is placed at the top of the list, and we will use this ranking when prioritizing new features. Of course, it is also good to add a comment describing your request in detail.

  • For features not on the list, create a new issue. Sooner or later, a maintainer will add the FEATURE_REQUEST label and the issue will appear on the list.

We expect that one of the most frequent feature requests will be operator kernel implementations. It is good to make a request, but it is even better to contribute one yourself. See the guide How to add a new operation for help.

We are looking forward to your participation. Thank you in advance!

How to Contact

  • Please post questions, issues, or suggestions in Issues. This is the best way to communicate with the developers.
  • You can also have an open discussion with community members through the gitter.im channel.

one's People

Contributors

seanshpark, hseok-oh, jinevening, mhs4670go, llfreetimell, wateret, ragmani, glistening, balyshevartem, hyunsik-yoon, jyoungyun, struss, d-krylov, stamalakhov, chunseoklee, kvochko, slavikmipt, toomuchsalt, yongseopkim, periannath, ai-moiseev, binarman, dayo09, m-bronnikov, lemmaa, underflow101, dr-venkman, ashedko, yihyunjin, s-barannikov

one's Issues

[onert, CFE/circle-quantizer] Weight Quantization Value Test (MobileNet v2)

Since the unit test is done, I checked that weight quantization works for MobileNet v2.

Download mobilenet_v2_100_224.tflite

$ wget 'https://tfhub.dev/iree/lite-model/mobilenet_v2_100_224/fp32/1?lite-format=tflite' -O mv2.f32.tflite

Prepare hybrid model using toco

$ toco --input_file=mv2.f32.tflite \
--output_file=mv2.toco.tflite  \
--input_format=TFLITE --quantize_weights=true

I used a toco binary that I built myself (it is not provided by default).
I also changed toco's default parameter below so that small weight arrays (fewer than 1024 elements) are quantized as well.

diff --git a/tensorflow/lite/tools/optimize/quantize_weights.cc b/tensorflow/lite/tools/optimize/quantize_weights.cc
index a42662284c8..319a959ff04 100644
--- a/tensorflow/lite/tools/optimize/quantize_weights.cc
+++ b/tensorflow/lite/tools/optimize/quantize_weights.cc
@@ -54,7 +54,7 @@ struct TensorPerChannel {
 
 // The default minimum number of elements a weights array must have to be
 // quantized by this transformation.
-const int kWeightsMinNumElementsDefault = 1024;
+const int kWeightsMinNumElementsDefault = 1;
 
 // Convert the MLIR CustomOpMap from the TFlite CustomOpMap as their member
 // variables differ.

Prepare hybrid model using our tools

  • Run tflite2circle and circle-quantizer
$ build/compiler/tflite2circle/tflite2circle mv2.f32.{tflite,circle}
$ build/compiler/circle-quantizer/circle-quantizer --quantize_weights float32 int8 channel mv2.{f32,our}.circle

Make sure you have four mv2 models.

$ ls -lh mv2*
-rw-rw-r-- 1 gyu gyu  14M Sep  6 11:53 mv2.f32.circle
-rw-rw-r-- 1 gyu gyu  14M Sep  6 09:58 mv2.f32.tflite
-rw-rw-r-- 1 gyu gyu 3.8M Sep  6 11:58 mv2.our.circle
-rw-rw-r-- 1 gyu gyu 3.8M Sep  6 14:20 mv2.toco.tflite
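
As an extra sanity check that the toco output is really a hybrid model, the tensor dtypes can be inspected with the TensorFlow Lite interpreter. This is only a sketch of my own check (it assumes the TensorFlow Python package is installed); a hybrid model should report int8 weight tensors alongside float32 activations.

import numpy as np
import tensorflow as tf

# Count tensors per dtype in the toco-quantized model.
interpreter = tf.lite.Interpreter(model_path="mv2.toco.tflite")
interpreter.allocate_tensors()

counts = {}
for detail in interpreter.get_tensor_details():
    dtype = np.dtype(detail["dtype"]).name
    counts[dtype] = counts.get(dtype, 0) + 1
print(counts)  # expect a mix of float32 and int8 entries for a hybrid model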

Prepare golden output based on the f32 model

$ tools/nnpackage_tool/gen_golden/gen_golden.py mv2.f32.tflite
$ ls -lh *.h5
-rw-rw-r-- 1 gyu gyu  13K Sep  6 15:11 expected.h5
-rw-rw-r-- 1 gyu gyu 598K Sep  6 15:11 input.h5
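
To double-check what gen_golden.py produced, the HDF5 files can be inspected with h5py. A minimal sketch, assuming h5py is installed; the "/value/0" dataset layout is inferred from the h5diff output further below.

import h5py

# List every dataset (name and shape) in the generated golden files.
for path in ("input.h5", "expected.h5"):
    with h5py.File(path, "r") as f:
        print(path)
        f.visititems(lambda name, obj: print("  ", name, getattr(obj, "shape", "")))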

Run using onert_run

$ Product/x86_64-linux.debug/out/bin/onert_run -r 1 -d f32.h5 mv2.f32.circle
$ Product/x86_64-linux.debug/out/bin/onert_run -r 1 -d toco.h5 mv2.toco.tflite
$ Product/x86_64-linux.debug/out/bin/onert_run -r 1 -d our.h5 mv2.our.circle

All three runs should use the same random input (generated with the same seed).
The -d option dumps output values to an HDF5 file.
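
Since mv2 is a classifier with a [1x1001] output (see the h5diff results below), a quick way to eyeball the dumps is to compare the top-1 class across the three runs. This is only a sketch, assuming h5py is installed and that onert_run writes the output under the "/value/0" dataset as in the h5diff output.

import h5py
import numpy as np

# Compare the top-1 class predicted by each run (f32, toco-hybrid, our-hybrid).
for name in ("f32", "toco", "our"):
    with h5py.File(f"{name}.h5", "r") as f:
        out = f["value/0"][...]
    print(name, "top-1 class:", int(np.argmax(out)), "score:", float(np.max(out)))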

Diff using h5diff (a plot or a representative metric like PEIR would be better; see the sketch after the results below)

  • f32 (onert) vs golden (= tflite-interpreter)
$ h5diff -r -d 0.0001 {f32,expected}.h5
dataset: </value/0> and </value/0>
0 differences found
  • toco (= toco + onert) vs golden
$ h5diff -r -d 0.0001 {toco,expected}.h5
dataset: </value/0> and </value/0>
size:           [1x1001]           [1x1001]
position        0               0               difference          
------------------------------------------------------------
[ 0 6 ]          0.000199265     6.35138e-05     0.000135751    
[ 0 58 ]          0.000229912     0.000128965     0.000100947    
...
[ 0 972 ]          0.000763436     0.000147142     0.000616294    
[ 0 988 ]          0.000215246     5.71553e-05     0.000158091    
55 differences found
  • our (= circle-quantizer + onert) vs golden
$ h5diff -r -d 0.0001 {our,expected}.h5
dataset: </value/0> and </value/0>
size:           [1x1001]           [1x1001]
position        0               0               difference          
------------------------------------------------------------
[ 0 55 ]          0.000770634     0.000920759     0.000150125    
[ 0 80 ]          0.000158697     4.2059e-05      0.000116638    
...
[ 0 972 ]          0.00067822      0.000147142     0.000531078    
[ 0 988 ]          0.000161613     5.71553e-05     0.000104458    
50 differences found

Though there are several differences, the results look okay.
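
To condense these diffs into the representative metric mentioned above, here is a minimal PEIR sketch. The exact definition used here (peak absolute error divided by the golden output's value range) is my assumption, and the "value/0" dataset path comes from the h5diff output; it assumes h5py and numpy are installed.

import h5py
import numpy as np

def peir(dump_path, golden_path, dataset="value/0"):
    # Peak Error-to-Interval Ratio: peak |error| over the golden value range.
    # (This definition is an assumption; adjust if the official metric differs.)
    with h5py.File(dump_path, "r") as d, h5py.File(golden_path, "r") as g:
        out = d[dataset][...]
        ref = g[dataset][...]
    return float(np.max(np.abs(out - ref)) / (np.max(ref) - np.min(ref)))

for name in ("f32", "toco", "our"):
    print(name, "PEIR:", peir(f"{name}.h5", "expected.h5"))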
