ai-techsystems / deepc

Vendor-independent TinyML deep learning library, compiler and inference framework for microcomputers and microcontrollers

Home Page: https://cainvas.ai-tech.systems/

License: Apache License 2.0

Topics: onnx, microcontrollers, odroid, arduino, sparkfun-products, raspberrypi, machine-learning, deep-learning, arm64, edge-devices, inference-framework, stm32, esp32, stm32f4, nxp-cortex, esp8266, arduino-nano-33-ble-sense, raspberry-pi, tinyml

deepc's Introduction

deepC


deepC is a vendor-independent deep learning library, compiler and inference framework designed for small form-factor devices, including microcontrollers, IoT and edge devices.

🏃‍♂️ Using deepC

Here are a few of the many ways to use it.

  1. Try deepC with a Colab Notebook
  2. Install it on Ubuntu, Raspbian (or any other Debian derivative) using pip install deepC
  3. Compile an ONNX model: read this article or watch this video
  4. Use deepC with a Dockerfile

See more examples in tutorial dir.

📛 What is deepC?

The deepC library, compiler and inference framework is designed to enable and run deep neural networks on small form-factor devices such as microcontrollers, eFPGAs and CPUs, and on other embedded hardware including the Raspberry Pi, ODROID, Arduino, SparkFun Edge, RISC-V boards, mobile phones, and x86 and ARM laptops, among others.

Edge devices

deepC also offers an ahead-of-time compiler that produces optimized executables. It is based on the LLVM compiler toolchain, specialized for deep neural networks, with ONNX as the front end.

📝 Design

The main components of deepC are designed to represent and optimize common deep learning networks in a high-level graph IR, and to transform the computation graph to minimize memory utilization, optimize data layout and fuse computation patterns for different hardware backends.
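As a rough illustration (not deepC's actual IR or class names, which differ), the kind of graph-level pattern fusion described above can be sketched in a few lines of Python:

```python
# Hypothetical sketch of a high-level graph IR with a simple fusion pass.
# Node, fuse_matmul_add and the op names are illustrative only.

class Node:
    def __init__(self, op, inputs):
        self.op = op          # operator name, e.g. "MatMul"
        self.inputs = inputs  # list of producer Nodes

def fuse_matmul_add(nodes):
    """Rewrite a MatMul followed by an Add into one fused Gemm-like node."""
    fused = []
    for n in nodes:
        if n.op == "Add" and n.inputs and n.inputs[0].op == "MatMul":
            mm = n.inputs[0]
            # Fused node consumes the MatMul operands plus the Add bias.
            fused.append(Node("FusedGemm", mm.inputs + n.inputs[1:]))
        else:
            fused.append(n)
    return fused

x = Node("Input", [])
w = Node("Weight", [])
b = Node("Bias", [])
mm = Node("MatMul", [x, w])
out = fuse_matmul_add([Node("Add", [mm, b])])
print([n.op for n in out])  # ['FusedGemm']
```

A real compiler applies many such rewrites over the whole graph before lowering to a hardware backend.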

Architecture

Read more in the high-level design document.

💧 Prerequisites

💻 Development

Build deepC locally from source and start modifying it with the following steps.

⭕ Ubuntu 18.04

Follow these steps to install the prerequisites:

sudo apt-get update
sudo apt-get install build-essential python3.6-dev python3-pip swig doxygen clang-format clang clang-8 llvm-8 llvm-8-dev protobuf-compiler libprotoc-dev
sudo pip3 install numpy==1.15.0 onnx==1.5.0

Once you are done, build deepC

git clone https://github.com/ai-techsystems/deepC.git
cd deepC
make

⭕ Mac OS / Windows 10

Make sure you have the prerequisites below.

Mac OS:

Windows 10:

Once you are done, build deepC inside a Docker container

git clone https://github.com/ai-techsystems/deepC.git
cd deepC
python buildDocker.py

📜 Output

find include src swig -name \*.h -print0 -o -name \*.cpp -print0 | xargs -0 -P8 -n1 clang-format -i
make -C src
make[1]: Entering directory 'deepC/src'
make -C core
make[2]: Entering directory 'deepC/src/core'
compiling broadcast.cpp
/usr/bin/g++ -O3 -Wall -std=c++14 -fPIC -march=native -msse2 \
    -isystem ./packages/eigen-eigen-323c052e1731 -I./include \
    -c broadcast.cpp -o obj/broadcast.o
compiling tensor.cpp
...
...
/usr/bin/g++ -shared  ./obj/dnnc_swig.o ./obj/dnnc_pyutils.o ./obj/dnnc_api.o -o lib/libdnnc.so
ln -s -f lib/libdnnc.so _dnnc.so
/usr/bin/python3 ../test/swig/basic.py

Current Support

Supported Architectures Status
Arm ✔️
Armv7 ✔️
Arm64 ✔️
AMD64 ✔️
ppc64le ✔️
Supported OS Distributions Status
Linux Ubuntu 18.04 ✔️
Linux CentOS 6 ✔️
Linux Arch Linux ✔️
Linux Manjaro ✔️
Windows 1803 and above ✔️
Mac OS Sierra and above ✔️

➕ Contribute

deepC adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.

🙏 Acknowledgement

We acknowledge the efforts of predecessor projects like LLVM and ONNX in making this project a reality.


🕵️‍♂️ Why compiler❔

deepC targets devices with small form factors, like microcontrollers, which power all sorts of household devices: think appliances, cars, and toys. In fact, around 30 billion microcontroller-powered devices are produced each year. They're cheap, require very little energy, and are very reliable.

By bringing deep learning models to tiny microcontrollers, we can boost the intelligence of billions of devices that we use in our lives, without relying on expensive hardware or reliable internet connections. Imagine smart appliances that can adapt to your daily routine, intelligent industrial sensors that understand the difference between problems and normal operation, and magical toys that can help kids learn in fun and delightful ways.

Organizations

Support this project with your organization. Your logo will show up here with a link to your website. [Contribute]


Built on/with deepC

Products

  1. No code TinyML platform, built with deepC technology.
  2. No code TinyML Book, with a chapter on deepC.

Papers

Paper Citations

Book Chapter

  1. deepC Chapter in book Introduction to TinyML, available on Amazon and other retailers

deepc's People

Contributors

aalawani686, gunjannandy, k4rth33k, monkeywithacupcake, nikhilt1998, nixonz, risharma-csus, robinvanemden, simplegeometry, sravit1, srohit0, subhamio, tarushikapoor


deepc's Issues

operator Conv is crashing likely because of invalid memory read.

The test may not crash on internal machines, but it consistently crashes on Google Colab, a platform we support.

How to reproduce on Colab

!sudo apt-get update
!sudo apt-get install build-essential python3.6-dev python3-pip swig doxygen clang-format clang clang-8 llvm-8 llvm-8-dev
!sudo pip3 install numpy onnx
!git clone https://github.com/ai-techsystems/dnnCompiler
import os
os.chdir('/content/dnnCompiler/')
!make clean; make
os.chdir('/content/dnnCompiler/test')
!python3 run_one.py Conv.py

How to profile using valgrind

make DEBUG=y
cd test
valgrind --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes python run_one.py Conv.py |& tee val.log

Inspect val.log to find lines like:

==26696== Conditional jump or move depends on uninitialised value(s)
...
==26696== Invalid read of size 4

Fixing these leaks locally will likely fix the crash on Colab too.

Error with mnist example

When I run onnx-cpp mnist.onnx I get the following error:

reading onnx model from file  mnist.onnx
Model info:
  ir_vesion :  7 
  doc       : 
WARN (ONNX): allowzero is not a valid graph-node attribute.
             operator Reshape will be added without this attribute.
INFO (ONNX): writing model parameter fc.weight to dir .
INFO (ONNX): writing model parameter fc.bias to dir .
INFO (ONNX): writing model parameter fc2.weight to dir .
INFO (ONNX): writing model parameter fc2.bias to dir .
running DNNC graph sanity check.
ERROR (GRAPH): some of graph torch_jit's node /fc/Gemm's
               outputs are not connected to other nodes in the graph.
ERROR (GRAPH): some of graph torch_jit's node /fc2/Gemm's
               outputs are not connected to other nodes in the graph.
        FAILED. Please check your model.
Writing C++ file  /mnist.cpp
ERROR (CODEGEN): cound not find all nodes for /fc/Gemm,
                 an instance of Gemm.
                 Please check model's sanity and try again.
ERROR (CODEGEN): cound not find all nodes for /fc2/Gemm,
                 an instance of Gemm.
                 Please check model's sanity and try again.
ERROR (CODEGEN): could not open file mnist.cppto write.
INFO (ONNX): model files are ready in dir 

SyntaxError: invalid syntax: "def compilerWrapper;"

I am trying to get "compile-onnx" running, but I get an error:

root@15082926ce00:/# compile-onnx
Traceback (most recent call last):
  File "/usr/local/bin/compile-onnx", line 5, in <module>
    from deepC.scripts.onnx2exe import main
  File "/usr/local/lib/python3.10/dist-packages/deepC/scripts/onnx2exe.py", line 38
    def compilerWrapper;
                       ^
SyntaxError: invalid syntax

It can be reproduced with a simple empty Ubuntu Docker container:

sudo docker run -i -t ubuntu /bin/bash
---------------------------
apt update && apt upgrade -y
apt install python3 python3-pip
pip install deepC
compile-onnx

(also with a file argument: compile-onnx xy.onnx)

Tensor Assignment operators fail in <double> Tensor

test / swig / tensorAssignmentOperators.py (Line No. 220, 380, 554)

def test_Assignment_Add_double_tensor_double_tensor (self):
    temp_np = self.np_double_0_4.copy()
    temp_np += self.np_double_5_9
    temp_dc = self.dc_double_0_4.copy()
    temp_dc += self.dc_double_5_9
    np.testing.assert_array_equal(temp_np, np.array(temp_dc.data()))

The output error looks like this:

gunjan@gunjan-VirtualBox:/Desktop/dnnCompiler/test/swig$ python3 tensorDetailedOperators.py -v
test_Assignment_Add_bool_tensor_bool_scalar (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Add_bool_tensor_bool_tensor (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Add_double_tensor_double_tensor (__main__.tensorDetailedOperatorsTest) ... FAIL
swig/python detected a memory leak of type 'std::vector< double,std::allocator< double > > *', no destructor found.
test_Assignment_Add_float_tensor_bool_scalar (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Add_float_tensor_float_scalar (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Add_float_tensor_float_tensor (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Add_float_tensor_int_scalar (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Add_int_tensor_int_scalar (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Add_int_tensor_int_tensor (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Sub_float_tensor_bool_scalar (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Sub_float_tensor_float_scalar (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Sub_float_tensor_float_tensor (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Sub_float_tensor_int_scalar (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Sub_int_tensor_int_scalar (__main__.tensorDetailedOperatorsTest) ... ok
test_Assignment_Sub_int_tensor_int_tensor (__main__.tensorDetailedOperatorsTest) ... ok

======================================================================
FAIL: test_Assignment_Add_double_tensor_double_tensor (__main__.tensorDetailedOperatorsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tensorDetailedOperators.py", line 225, in test_Assignment_Add_double_tensor_double_tensor
    np.testing.assert_array_equal(temp_np, np.array(temp_dc.data()))
  File "/home/gunjan/.local/lib/python3.6/site-packages/numpy/testing/_private/utils.py", line 913, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/home/gunjan/.local/lib/python3.6/site-packages/numpy/testing/_private/utils.py", line 836, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Arrays are not equal

Mismatch: 100%
 x: array([ 5.,  7.,  9., 11., 13.])
 y: array(<Swig Object of type 'std::vector< double,std::allocator< double > > *' at 0x7f14578c1650>,
      dtype=object)

----------------------------------------------------------------------
Ran 15 tests in 0.010s

FAILED (failures=1)

Comparison with TensorFlow Lite

Hi there,

Sorry in advance if these types of questions are inappropriate here. (Please point me to another forum if one exists.)

I have to decide on a framework for running an ONNX model on an x86 microcontroller. What are the pros and cons of using deepC versus TensorFlow Lite or other alternative frameworks?

Make TEST fails swig tests

Probably a known issue, but I thought I'd report it here anyway. make TEST produces the following SWIG-related errors:

Running tests in ===|swig|===
----------------------------------------------------------------------
Ran 726 tests in 0.194s

OK

Running tests in ===|parser|===
ERROR (GRAPH): some of graph RNN_graph's node dnnc___1's
               inputs are not connected to other nodes in the graph.
ERROR (TYPE INFER): cound not find all nodes for dnnc___1,
ERROR (CODEGEN): cound not find all nodes for dnnc___1,
                 an instance of RNN.
                 Please check model's sanity and try again.
ERROR (GRAPH): some of graph Loop_graph's node dnnc___1's
               outputs are not connected to other nodes in the graph.
ERROR (CODEGEN): cound not find all nodes for dnnc___1,
                 an instance of Loop.
                 Please check model's sanity and try again.
ERROR (GRAPH): some of graph If_graph's node 2's
               inputs are not connected to other nodes in the graph.
ERROR (GRAPH): some of graph BatchNormalization_graph's node dnnc___1's
               inputs are not connected to other nodes in the graph.
ERROR (GRAPH): some of graph BatchNormalization_graph's node 1's
               outputs are not connected to other nodes in the graph.
ERROR (TYPE INFER): cound not find all nodes for dnnc___1,
ERROR (CODEGEN): cound not find all nodes for dnnc___1,
                 an instance of BatchNormalization.
                 Please check model's sanity and try again.
ERROR (GRAPH): some of graph GRU_graph's node dnnc___1's
               inputs are not connected to other nodes in the graph.
ERROR (TYPE INFER): cound not find all nodes for dnnc___1,
ERROR (CODEGEN): cound not find all nodes for dnnc___1,
                 an instance of GRU.
                 Please check model's sanity and try again.
read 130 files.
----------------------------------------------------------------------
Ran 1 test in 0.042s

NN operators don't work on scalars

NN operators like dc.sqrt(5.0) don't work because SWIG does not implicitly convert Python scalars (like float) to tensors. It works fine in C++.

A symptom is that dc.sqrt() works on a tensor but not on a scalar, as shown in the script below:

import deepC.dnnc as dc
a=dc.arange(5)
dc.sqrt(a); #<<<<<<<< works on tensor 
[0.000000 1.000000 1.414214 1.732051 2.000000]
dc.sqrt(a.sum()) ; # <<<<<<<<<< throws on scalar

Traceback (most recent call last):
File "", line 1, in
File "/home/amd/dnnc/master/dnnCompiler/deepC/dnnc.py", line 16108, in sqrt
return _dnnc.sqrt(*args)
NotImplementedError: Wrong number or type of arguments for overloaded function 'sqrt'.
Possible C/C++ prototypes are:
dnnc::sqrt(dnnc::tensor< float > &)
dnnc::sqrt(dnnc::tensor< double > &)

Solution Hint

Make SWIG apply %implicitconv dnnc::tensor; for the tensor class.
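For reference, the hinted fix would look roughly like this in the SWIG interface file (the exact interface file and template instantiations in deepC's sources may differ):

```swig
// In the .i interface file, before the tensor class is wrapped:
%implicitconv dnnc::tensor;
// ...so SWIG tries tensor's converting constructors when a Python
// scalar is passed where a tensor argument is expected.
```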

Tensor True Div operator output not equal with Numpy (though they are)

test / swig / tensorAssignmentOperators.py (Line No. 619, 627, 651)

Here is the snippet of code

# float_tensor /= bool_scalar
  def test_Assignment_True_Div_float_tensor_bool_scalar (self):
    temp_np = self.np_float_0_4.copy()
    temp_np /= True
    temp_dc = self.dc_float_0_4.copy()
    temp_dc /= True
    np.testing.assert_array_equal(temp_np, np.array(temp_dc.data()))

  # float_tensor /= int_scalar (NumPy and dnnc produce the same output, but the test says not equal)
  def test_Assignment_True_Div_float_tensor_int_scalar (self):
    temp_np = self.np_float_0_4.copy()
    temp_np /= 5
    temp_dc = self.dc_float_0_4.copy()
    temp_dc /= 5
    np.testing.assert_array_equal(temp_np, np.array(temp_dc.data()))

  # float_tensor /= float_scalar (NumPy and dnnc produce the same output, but the test says not equal)
  def test_Assignment_True_Div_float_tensor_float_scalar (self):
    temp_np = self.np_float_0_4.copy()
    temp_np /= 5.0
    temp_dc = self.dc_float_0_4.copy()
    temp_dc /= 5.0
    np.testing.assert_array_equal(temp_np, np.array(temp_dc.data()))

  # float_tensor /= float_tensor (NumPy and dnnc produce the same output, but the test says not equal)
  def test_Assignment_True_Div_float_tensor_float_tensor (self):
    temp_np = self.np_float_0_4.copy()
    temp_np /= self.np_float_5_9
    temp_dc = self.dc_float_0_4.copy()
    temp_dc /= self.dc_float_5_9
    np.testing.assert_array_equal(temp_np, np.array(temp_dc.data()))

tensor<float> /= tensor<bool> works, but tensor<float> /= tensor<int/float/double> doesn't. The output shows:

gunjan@gunjan-VirtualBox:/Desktop/dnnCompiler/test/swig$ python3 tensorAssignmentOperators.py -v
test_Assignment_True_Div_float_tensor_bool_scalar (__main__.tensorAssignmentOperatorsTest) ... ok
test_Assignment_True_Div_float_tensor_float_scalar (__main__.tensorAssignmentOperatorsTest) ... FAIL
test_Assignment_True_Div_float_tensor_float_tensor (__main__.tensorAssignmentOperatorsTest) ... FAIL
test_Assignment_True_Div_float_tensor_int_scalar (__main__.tensorAssignmentOperatorsTest) ... FAIL

======================================================================
FAIL: test_Assignment_True_Div_float_tensor_float_scalar (__main__.tensorAssignmentOperatorsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tensorAssignmentOperators.py", line 632, in test_Assignment_True_Div_float_tensor_float_scalar
    np.testing.assert_array_equal(temp_np, np.array(temp_dc.data()))
  File "/home/gunjan/.local/lib/python3.6/site-packages/numpy/testing/_private/utils.py", line 913, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/home/gunjan/.local/lib/python3.6/site-packages/numpy/testing/_private/utils.py", line 836, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Arrays are not equal

Mismatch: 80%
Max absolute difference: 2.38418579e-08
Max relative difference: nan
 x: array([0. , 0.2, 0.4, 0.6, 0.8])
 y: array([0. , 0.2, 0.4, 0.6, 0.8])

======================================================================
FAIL: test_Assignment_True_Div_float_tensor_float_tensor (__main__.tensorAssignmentOperatorsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tensorAssignmentOperators.py", line 656, in test_Assignment_True_Div_float_tensor_float_tensor
    np.testing.assert_array_equal(temp_np, np.array(temp_dc.data()))
  File "/home/gunjan/.local/lib/python3.6/site-packages/numpy/testing/_private/utils.py", line 913, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/home/gunjan/.local/lib/python3.6/site-packages/numpy/testing/_private/utils.py", line 836, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Arrays are not equal

Mismatch: 60%
Max absolute difference: 1.27724239e-08
Max relative difference: nan
 x: array([0.      , 0.166667, 0.285714, 0.375   , 0.444444])
 y: array([0.      , 0.166667, 0.285714, 0.375   , 0.444444])

======================================================================
FAIL: test_Assignment_True_Div_float_tensor_int_scalar (__main__.tensorAssignmentOperatorsTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tensorAssignmentOperators.py", line 624, in test_Assignment_True_Div_float_tensor_int_scalar
    np.testing.assert_array_equal(temp_np, np.array(temp_dc.data()))
  File "/home/gunjan/.local/lib/python3.6/site-packages/numpy/testing/_private/utils.py", line 913, in assert_array_equal
    verbose=verbose, header='Arrays are not equal')
  File "/home/gunjan/.local/lib/python3.6/site-packages/numpy/testing/_private/utils.py", line 836, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Arrays are not equal

Mismatch: 80%
Max absolute difference: 2.38418579e-08
Max relative difference: nan
 x: array([0. , 0.2, 0.4, 0.6, 0.8])
 y: array([0. , 0.2, 0.4, 0.6, 0.8])

----------------------------------------------------------------------
Ran 4 tests in 0.052s

FAILED (failures=3)
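The reported max absolute difference (about 2.38e-08) is exactly the gap between 0.6 and its nearest float32 value, so the mismatch is consistent with float32-vs-float64 rounding rather than a wrong result; a tolerance-based compare such as np.testing.assert_allclose would pass where assert_array_equal fails. A stdlib-only sketch of the effect:

```python
import math
import struct

def as_float32(x):
    """Round a Python float (a double) to the nearest IEEE-754 float32."""
    return struct.unpack('f', struct.pack('f', x))[0]

f64 = 3.0 / 5.0        # double-precision division: 0.6
f32 = as_float32(f64)  # the value a float32 tensor would store

print(abs(f32 - f64))  # ~2.384e-08, the same magnitude as in the report
assert f32 != f64                            # exact equality fails...
assert math.isclose(f32, f64, rel_tol=1e-6)  # ...a tolerance compare passes
```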

Error: building deepC inside docker container

Hi. I am glad to have found your product and am eager to use it.
But when I try to build it on my local computer, the terminal prints the error message below.

=> ERROR [3/4] RUN pip3 install numpy==1.15.0 onnx==1.5.0                                                        11.4s
------
 > [3/4] RUN pip3 install numpy==1.15.0 onnx==1.5.0:
#6 1.171 Collecting numpy==1.15.0
#6 2.429   Downloading https://files.pythonhosted.org/packages/88/29/f4c845648ed23264e986cdc5fbab5f8eace1be5e62144ef69ccc7189461d/numpy-1.15.0-cp36-cp36m-manylinux1_x86_64.whl (13.9MB)
#6 7.812 Collecting onnx==1.5.0
#6 8.174   Downloading https://files.pythonhosted.org/packages/88/50/e4a5a869093f35884d1fd95b46b24705ab27adb7e562a2a307523c043be3/onnx-1.5.0-cp36-cp36m-manylinux1_x86_64.whl (7.0MB)
#6 10.50 Collecting protobuf (from onnx==1.5.0)
#6 11.10   Downloading https://files.pythonhosted.org/packages/6c/be/4e32d02bf08b8f76bf6e59f2a531690c1e4264530404501f3489ca975d9a/protobuf-4.21.0-py2.py3-none-any.whl (164kB)
#6 11.27 protobuf requires Python '>=3.7' but the running Python is 3.6.9
------
executor failed running [/bin/sh -c pip3 install numpy==1.15.0 onnx==1.5.0]: exit code: 1
Unable to find image 'dnnc:latest' locally
[2022-10-26T23:16:11.819479900Z][docker-credential-desktop][W] Windows version might not be up-to-date: The system cannot find the file specified.
docker: Error response from daemon: pull access denied for dnnc, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.

I am using

  • python 3.10
  • Windows 10 x64

I have tried various solutions.

  • logging in to Docker in the terminal
  • changing the Dockerfile (python3.6-dev -> python3.7-dev)

But I still cannot fix it. Could you help me?

AttributeError: 'floatTensor' object has no attribute 'sum'


AttributeError Traceback (most recent call last)
in
1 knn = iris_knn()
----> 2 knn.fit()
3 knn.announce()

in fit(self, k)
22 for i in range(len(self.features)):
23 feature = dc.array(self.features[i])
---> 24 opinion_difference = dc.sqrt(dc.power(dc.sub(feature, self.query),2)).sum()
25 survey.append([opinion_difference, self.labels[i]])
26

/usr/local/lib/python3.6/dist-packages/deepC/dnnc.py in (self, name)
20866 for _s in [floatplaceHolder]:
20867 swig_getmethods.update(getattr(_s, 'swig_getmethods', {}))

20868 getattr = lambda self, name: _swig_getattr(self, floatTensor, name)
20869
20870 def init(self, *args):

/usr/local/lib/python3.6/dist-packages/deepC/dnnc.py in _swig_getattr(self, class_type, name)
78 if method:
79 return method(self)
---> 80 raise AttributeError("'%s' object has no attribute '%s'" % (class_type.name, name))
81
82

AttributeError: 'floatTensor' object has no attribute 'sum'

dc.slice operator is not compatible with numpy

numpy slice and dc.slice are giving different results.

 import deepC.dnnc as dc, numpy as np
 
 t1 = np.array ( [[0,1,2], [3,4,5], [6,7,8], [9,10,11]] )
 t2 = dc.array ( [[0,1,2], [3,4,5], [6,7,8], [9,10,11]] )
 
 start = dc.array ([1, 1]).asTypeULong()
 stop = dc.array ([2, 2]).asTypeULong()
 axis = dc.array ([0, 1]).asTypeInt()
 step = dc.array ([1, 1]).asTypeULong()
 
 print(t1 [1 : 2 : 1, 1 : 2 :1]); # returns [[4]]
 print(dc.slice (t2, start, stop, axis, step)); # returns [[3,4,5]] <<<<<< BUG
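For reference, the NumPy-compatible semantics that the dc.slice call above is expected to match (start=[1,1], stop=[2,2], step=[1,1] over axes 0 and 1) can be written in plain Python; slice2d is an illustrative helper, not a deepC API:

```python
# Reference slice semantics the dc.slice call should reproduce.
t1 = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]

def slice2d(t, start, stop, step):
    """Apply per-axis (start, stop, step) to a nested list, NumPy-style."""
    rows = t[start[0]:stop[0]:step[0]]          # slice axis 0
    return [row[start[1]:stop[1]:step[1]] for row in rows]  # slice axis 1

print(slice2d(t1, (1, 1), (2, 2), (1, 1)))  # [[4]] -- the expected result
```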

Python operator add (tensor<int> + <float>) fails

tensor<int> + <float> fails

>>> import dnnc as dc
>>> a=dc.arange(5).asTypeInt()
>>> a
[0 1 2 3 4]
>>> b=a+1
>>> b
[1 2 3 4 5]
>>> b=a+1.0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'dnnc.iTensor' and 'float'

whereas tensor<float> + <int> and tensor<float> + <float> WORK

>>> import dnnc as dc
>>> a=dc.arange(5)
>>> a
[0.000000 1.000000 2.000000 3.000000 4.000000]
>>> b=a+1
>>> b
[1.000000 2.000000 3.000000 4.000000 5.000000]
>>> b=a+1.0
>>> b
[1.000000 2.000000 3.000000 4.000000 5.000000]
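For context, the promotion behavior this report expects can be sketched with a toy class (MiniTensor is illustrative, not deepC code): adding a Python float to an int container should promote the result to float data instead of raising TypeError.

```python
# Minimal sketch of scalar type promotion in __add__ (not deepC code).

class MiniTensor:
    def __init__(self, data):
        self.data = list(data)

    def __add__(self, other):
        if isinstance(other, (int, float)):   # accept either scalar type;
            # Python's arithmetic promotes int + float to float naturally.
            return MiniTensor(x + other for x in self.data)
        return NotImplemented

a = MiniTensor(range(5))
print((a + 1).data)    # [1, 2, 3, 4, 5]
print((a + 1.0).data)  # [1.0, 2.0, 3.0, 4.0, 5.0] -- int data promoted
```

In the SWIG-wrapped tensor, the equivalent fix is for the iTensor wrapper to accept a float scalar and return a float tensor rather than rejecting the operand type.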

Python reverse operators (like __radd__) are not working with 'swig -builtin'

Description

Python allows operators to be overloaded in both normal and reverse fashion.

Given, x=dc.arange(5), y=x+1 calls dnnc tensor operator+ (aka __add__ operator), whereas y=1+x calls dnnc tensor reverse operator + (aka __radd__ operator).

If you compile with swig -builtin (the default build; disable it with make FAST_SWIG=n) for performance reasons, reverse operators stop working.

How to Reproduce

Script: bug.py

import dnnc as dc

x=dc.arange(4)
y=x+1;              # works
y=1+x;              # fails with swig -builtin
print("PASS :-)")

% make clean; make
% python bug.py

Traceback (most recent call last):
  File "bug.py", line 5, in <module>
    y=1+x; #fails with swig -builtin
TypeError: unsupported operand type(s) for +: 'int' and 'dnnc.fTensor'

% make clean ; make FAST_SWIG=n
% python bug.py

PASS :-)
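The normal/reverse dispatch can be illustrated in plain Python with a toy class (MiniTensor is illustrative, not dnnc code): 1 + x succeeds only because __radd__ is defined, which is exactly the hook lost under swig -builtin.

```python
# Why 1 + x needs __radd__: int.__add__ cannot handle the tensor, so
# Python falls back to the right operand's __radd__.

class MiniTensor:
    def __init__(self, data):
        self.data = list(data)

    def __add__(self, other):        # handles x + 1
        return MiniTensor(v + other for v in self.data)

    def __radd__(self, other):       # handles 1 + x; the method that
        return self.__add__(other)   # stops working with 'swig -builtin'

x = MiniTensor(range(4))
print((x + 1).data)  # [1, 2, 3, 4]
print((1 + x).data)  # [1, 2, 3, 4] -- works only because __radd__ exists
```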

ONNX model exported from PyTorch is incorrect for onnx-cpp

Hello,
I am currently trying to generate some C code from a trained PyTorch model. I tried to follow the first example provided in the tutorials ("Intermediate codegen and generate binary/bundle for your model"), but the process fails at the first step. Here is how I proceed:

  • Train a model in PyTorch (not shown here)
  • Export it to ONNX as shown in the PyTorch doc examples:
>>> torch.onnx.export(net, X[0], "testnet.onnx", verbose=True, export_params=True, opset_version=10)

graph(%input.1 : Float(4),
      %layers.0.bias : Float(8),
      %layers.1.bias : Float(1),
      %12 : Float(4, 8),
      %13 : Float(8, 1)):
  %6 : Float(8) = onnx::MatMul(%input.1, %12) # /home/tnoel/.local/lib/python3.6/site-packages/torch/nn/functional.py:1612:0
  %7 : Float(8) = onnx::Add(%6, %layers.0.bias)
  %8 : Float(8) = onnx::Tanh(%7) # solo_shoulder_approx_torch_nn.py:49:0
  %10 : Float(1) = onnx::MatMul(%8, %13) # /home/tnoel/.local/lib/python3.6/site-packages/torch/nn/functional.py:1612:0
  %11 : Float(1) = onnx::Add(%10, %layers.1.bias)
  return (%11)

To me, the ONNX model looks well-formed at this point, and the ONNX checker from the Python onnx lib does not throw any error when checking it.

  • I finally try to run onnx-cpp testnet.onnx as shown in the example, but this is where I get the following errors :
reading onnx model from file  testnet.onnx
Model info:
  ir_vesion :  6 
  doc       : 
INFO (ONNX): writing model parameter 12 to dir .
INFO (ONNX): writing model parameter 13 to dir .
INFO (ONNX): writing model parameter layers.0.bias to dir .
INFO (ONNX): writing model parameter layers.1.bias to dir .
running DNNC graph sanity check.
ERROR (GRAPH): some of graph torch-jit-export's node MatMul_0's
               outputs are not connected to other nodes in the graph.
ERROR (GRAPH): some of graph torch-jit-export's node Add_1's
               outputs are not connected to other nodes in the graph.
ERROR (GRAPH): some of graph torch-jit-export's node MatMul_3's
               outputs are not connected to other nodes in the graph.
ERROR (GRAPH): some of graph torch-jit-export's node Add_4's
               outputs are not connected to other nodes in the graph.
        FAILED. Please check your model.
Writing C++ file  /testnet.cpp
ERROR (CODEGEN): cound not find all nodes for MatMul_0,
                 an instance of MatMul.
                 Please check model's sanity and try again.
ERROR (CODEGEN): cound not find all nodes for Add_1,
                 an instance of Add.
                 Please check model's sanity and try again.
ERROR (CODEGEN): cound not find all nodes for MatMul_3,
                 an instance of MatMul.
                 Please check model's sanity and try again.
ERROR (CODEGEN): cound not find all nodes for Add_4,
                 an instance of Add.
                 Please check model's sanity and try again.
ERROR (CODEGEN): could not open file testnet.cppto write.
INFO (ONNX): model files are ready in dir 

The command still creates and populates the following files: 12, 13, layers.0.bias, layers.1.bias (with numerical values). It also creates the following testnet.cpp file (which does not look correct):

#include "operators/MatMul.h"
#include "operators/Add.h"
#include "operators/Tanh.h"
#include "operators/MatMul.h"
#include "operators/Add.h"

using namespace dnnc;

int maint(){
  tensor<float> dnnc_input_dot_1(4);
  
  Tanh<float,float> Tanh_2("Tanh_2");
  tensor<float> dnnc_Tanh_2_8 = Tanh_2.compute ( dnnc_Add_1_7);

  return 0;
}

Would you have any idea how I should modify my model so that it is correctly handled by onnx-cpp? I don't really know where to look, and I don't see why the nodes are not correctly parsed from the ONNX model. Thanks in advance!

broadcast not working for 4D

The new test Sub.py is failing on 4D inputs.

python run_one.py Sub.py
/home/amd/dnnc/operators.dnnCompiler/test/swig
running test Sub.py
Unsupported!
======================================================================
FAIL: test_Broadcast (swig.Sub.SubTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/amd/dnnc/operators.dnnCompiler/test/swig/Sub.py", line 86, in test_Broadcast
    rtol=1e-3, atol=1e-3)
  File "/usr/local/lib/python3.6/dist-packages/numpy/testing/_private/utils.py", line 1493, in assert_allclose
    verbose=verbose, header=header, equal_nan=equal_nan)
  File "/usr/local/lib/python3.6/dist-packages/numpy/testing/_private/utils.py", line 819, in assert_array_compare
    raise AssertionError(msg)
AssertionError: 
Not equal to tolerance rtol=0.001, atol=0.001
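For reference, the NumPy broadcasting rule that a 4D-capable implementation has to follow (align shapes from the right; each dimension pair must be equal or contain a 1) can be sketched in plain Python:

```python
def broadcast_shape(a, b):
    """Return the NumPy-style broadcast shape of shapes a and b, or raise."""
    n = max(len(a), len(b))
    a = (1,) * (n - len(a)) + tuple(a)   # left-pad the shorter shape with 1s
    b = (1,) * (n - len(b)) + tuple(b)
    out = []
    for x, y in zip(a, b):
        if x == y or x == 1 or y == 1:   # dims compatible: equal or a 1
            out.append(max(x, y))
        else:
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
    return tuple(out)

print(broadcast_shape((2, 3, 4, 5), (5,)))       # (2, 3, 4, 5)
print(broadcast_shape((8, 1, 6, 1), (7, 1, 5)))  # (8, 7, 6, 5)
```

A Sub over 4D operands should first compute this shape and then expand both tensors to it; "Unsupported!" above suggests the 4D case never reaches that step.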

The tflite model converted by TensorflowLite converter is .

Hello Rohit,

I tried to use the TensorFlow Lite converter to compress a CNN model, following the methodology here: https://towardsdatascience.com/a-basic-introduction-to-tensorflow-lite-59e480c57292.

I did create a tflite model, and its size is about 1/3 smaller than the original model. Unfortunately, the inference time is 3 times longer, and the inference result is incorrect when I use the converted tflite file.

I would appreciate it very much if you could give me some clues about where I went wrong.

Thank you for your support,
Gordon Zhou
[email protected]

ModuleNotFoundError: No module named '_dnnc'

Hi,
After installing deepC, I tried to test whether it works and saw the following error. Any suggestions?
Environment: Mac OS

(base) MU00158281X:~ mmoh0027$ python -c "import deepC.dnnc as dc; print(dc.arange(5));"
Traceback (most recent call last):
  File "/Users/mmoh0027/opt/anaconda3/lib/python3.7/site-packages/deepC/dnnc.py", line 14, in swig_import_helper
    return importlib.import_module(mname)
  File "/Users/mmoh0027/opt/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 670, in _load_unlocked
  File "<frozen importlib._bootstrap>", line 583, in module_from_spec
  File "<frozen importlib._bootstrap_external>", line 1043, in create_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
ImportError: dlopen(/Users/mmoh0027/opt/anaconda3/lib/python3.7/site-packages/deepC/_dnnc.so, 2): no suitable image found.  Did find:
	/Users/mmoh0027/opt/anaconda3/lib/python3.7/site-packages/deepC/_dnnc.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00
	/Users/mmoh0027/opt/anaconda3/lib/python3.7/site-packages/deepC/_dnnc.so: unknown file type, first eight bytes: 0x7F 0x45 0x4C 0x46 0x02 0x01 0x01 0x00

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/mmoh0027/opt/anaconda3/lib/python3.7/site-packages/deepC/dnnc.py", line 17, in <module>
    _dnnc = swig_import_helper()
  File "/Users/mmoh0027/opt/anaconda3/lib/python3.7/site-packages/deepC/dnnc.py", line 16, in swig_import_helper
    return importlib.import_module('_dnnc')
  File "/Users/mmoh0027/opt/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ModuleNotFoundError: No module named '_dnnc'
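For context on the bytes quoted in that dlopen error: 0x7F 0x45 0x4C 0x46 is the ELF magic number, i.e. a Linux build of `_dnnc.so` appears to have been installed on macOS, which expects Mach-O images. A minimal sketch to check for that (independent of the deepC install itself):

```python
import os
import tempfile

def is_elf(path):
    """True if the file starts with the ELF magic 0x7F 'E' 'L' 'F'."""
    with open(path, "rb") as fp:
        return fp.read(4) == b"\x7fELF"

# Stand-in file carrying the exact first eight bytes from the error message:
with tempfile.NamedTemporaryFile(delete=False, suffix=".so") as tmp:
    tmp.write(bytes([0x7F, 0x45, 0x4C, 0x46, 0x02, 0x01, 0x01, 0x00]))

print(is_elf(tmp.name))   # True -> a Linux (ELF) binary, not a macOS one
os.remove(tmp.name)
```

If the real `_dnnc.so` passes this check on a Mac, the fix would be obtaining a macOS build rather than the Linux wheel.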

Node ids containing slashes cause a "no such file or directory" error

When converting the included model with onnx-cpp it throws the following error:

reading onnx model from file  optimized.onnx
Model info:
  ir_vesion :  4
  doc       :
INFO (ONNX): writing model parameter const_fold_opt__17 to dir savedmodel/.
INFO (ONNX): writing model parameter StatefulPartitionedCall/sequential_1/dense_2/MatMul/ReadVariableOp:0 to dir savedmodel/.
Traceback (most recent call last):
  File "/home/robin/.local/bin/onnx-cpp", line 11, in <module>
    sys.exit(main())
  File "/home/robin/.local/lib/python3.6/site-packages/deepC/compiler/onnx2cpp.py", line 65, in main
    dcGraph = parser.main(onnx_file, bundle_dir, optimize=False, checker=False)
  File "/home/robin/.local/lib/python3.6/site-packages/deepC/compiler/read_onnx.py", line 486, in main
    self.addParams(param);
  File "/home/robin/.local/lib/python3.6/site-packages/deepC/compiler/read_onnx.py", line 194, in addParams
    self.writeParamsToFile(param.name, param_vals);
  File "/home/robin/.local/lib/python3.6/site-packages/deepC/compiler/read_onnx.py", line 56, in writeParamsToFile
    with open(paramFile, "w") as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'savedmodel/StatefulPartitionedCall/sequential_1/dense_2/MatMul/ReadVariableOp:0'

I presume this is caused by the slashes in the model's node ids?

slashes.zip
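That looks likely: the traceback shows `writeParamsToFile` opening `savedmodel/<param name>` directly, so slashes in a TF-exported node id become directory components. A hedged sketch of one possible workaround, sanitizing the name before using it as a file path (the helper name here is hypothetical, not deepC API):

```python
import os

def param_file_path(bundle_dir, param_name):
    """Map an ONNX parameter name to a safe file path.

    Hypothetical helper: replaces the path-hostile characters seen in
    TF-exported node ids ('/' and ':') so a name like
    'StatefulPartitionedCall/sequential_1/dense_2/MatMul/ReadVariableOp:0'
    no longer points into non-existent subdirectories.
    """
    safe = param_name.replace("/", "_").replace(":", "_")
    return os.path.join(bundle_dir, safe)

print(param_file_path("savedmodel", "a/b/MatMul:0"))  # savedmodel/a_b_MatMul_0
```

The codegen side would of course need the same mapping when it reads the parameter files back.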

clang++ version restriction

Problem

If we specify clang++ version 8 in Makefile.common like this:

/usr/bin/clang++-8

it won't work with clang++-9, even though version 9 is supported.
And clang versions below 8 don't support aggregate expressions, producing the error below:

dnnCompiler/include/operators/GlobalAveragePool.h:61:54: error: cannot compile this
      aggregate expression yet
    eResult = eigenTensor.mean(Eigen::array<int, 1>({2}));

Workarounds

  • Use cMake.
    • By doing this we will be supporting Windows natively (without docker).
  • Change the aggregate expression so that our compilation is backward compatible.
    • By doing this we will be supporting Mac OS natively (without docker).

Solution

Implement both of the above-mentioned workarounds to fully close this issue!

Uint8 quantized model throws "struct.error"

Using WinMLTools to optimize the model from 32-bit floating point to 8-bit integers results in the following error:

Traceback (most recent call last):
  File "/usr/local/bin/onnx-cpp", line 11, in <module>
    load_entry_point('deepC==0.13', 'console_scripts', 'onnx-cpp')()
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/onnx2cpp.py", line 65, in main
    dcGraph = parser.main(onnx_file, bundle_dir, optimize=False, checker=False)
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/read_onnx.py", line 489, in main
    self.addParams(param);
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/read_onnx.py", line 129, in addParams
    param_vals = struct.unpack(pack_format*param_len, param.raw_data) ;
struct.error: unpack requires a buffer of 432 bytes

The traceback seems to indicate that deepC ought to be able to convert the model but encounters a minor issue - would you agree? Attached is the uint8-optimized ResNet CIFAR model we used to test the 8-bit integer quantized model.

model.zip
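For reference, `struct.unpack` requires the buffer length to match the format string exactly; the 432-byte complaint is consistent with the parser assuming 4-byte float32 elements for a 108-byte uint8 initializer (4 × 108 = 432, so this is likely a missing uint8 case in the format-picking logic rather than a broken model). A small sketch of the mismatch, independent of deepC:

```python
import struct

raw = bytes(range(108))   # toy uint8 initializer: 108 one-byte elements
n_elems = 108

# Assuming float32 ('f', 4 bytes each) fails: it needs 4 * 108 = 432 bytes.
try:
    struct.unpack("f" * n_elems, raw)
except struct.error as err:
    print(err)            # unpack requires a buffer of 432 bytes

# The uint8 format ('B', 1 byte each) matches the buffer exactly.
vals = struct.unpack("B" * n_elems, raw)
print(len(vals))          # 108
```

So teaching `addParams` to pick `'B'` (or the tensor's actual element type) for uint8 initializers would be the natural fix.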

Failed to compile a simple 2-layer NN model


reading onnx model from file /mnt/d/model.onnx
Model info:
ir_vesion : 6
doc :
INFO (ONNX): writing model parameter linear1.bias to dir /mnt/d.
INFO (ONNX): writing model parameter linear1.weight to dir /mnt/d.
INFO (ONNX): writing model parameter linear2.bias to dir /mnt/d.
INFO (ONNX): writing model parameter linear2.weight to dir /mnt/d.
running DNNC graph sanity check.
ERROR (GRAPH): some of graph torch-jit-export's node Gemm_0's
outputs are not connected to other nodes in the graph.
ERROR (GRAPH): some of graph torch-jit-export's node Gemm_2's
outputs are not connected to other nodes in the graph.
FAILED. Please check your model.
Writing C++ file /mnt/d/model.cpp
ERROR (CODEGEN): cound not find all nodes for Gemm_0,
an instance of Gemm.
Please check model's sanity and try again.
ERROR (CODEGEN): cound not find all nodes for Gemm_2,
an instance of Gemm.
Please check model's sanity and try again.
INFO (ONNX): model files are ready in dir /mnt/d
g++ -O3 -I /home/user/.local/lib/python3.6/site-packages/deepC/include -isystem /home/user/.local/lib/python3.6/site-packages/deepC/packages/eigen-eigen-323c052e1731 /mnt/d/model.cpp -o /mnt/d/model.exe
/mnt/d/model.cpp: In function ‘int main(int, char**)’:
/mnt/d/model.cpp:34:50: error: ‘dnnc_Gemm_0_5’ was not declared in this scope
tensor dnnc_Relu_1_6 = Relu_1.compute ( dnnc_Gemm_0_5);
^~~~~~~~~~~~~
/mnt/d/model.cpp:34:50: note: suggested alternative: ‘dnnc_Relu_1_6’
tensor dnnc_Relu_1_6 = Relu_1.compute ( dnnc_Gemm_0_5);
^~~~~~~~~~~~~
dnnc_Relu_1_6

dnnc compilation failed. please file this bug with model/script file at
https://github.com/ai-techsystems/dnnCompiler/issues

numpy compatibility for basic python operators (+/- etc)

DNNC is compatible with numpy for operator functions such as numpy.add(a,b) and dnnc.add(a,b), both in input argument types and in return types.

However, it is not compatible with the symbolic operators, e.g. y = a + b. See the script below.

bug.py

 import numpy as np, dnnc as dc

 np_bool = np.arange(2).astype(np.bool)
 np_int  = np.arange(2).astype(np.int)
 np_res  = np_bool + np_int

 print(np_res, np_res.dtype)
 # [0 2] int64

 dc_bool = dc.arange(2).asTypeBool()
 dc_int  = dc.arange(2).asTypeInt()
 dc_res  = dc_bool + dc_int

 print(dc_res, dc_res.dtype())
 # [0 1] bool

This is similar to #33
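For comparison, numpy's result follows its standard type-promotion rules (bool combined with int64 promotes to int64), whereas dnnc appears to keep the first operand's dtype. The rule can be checked without any arithmetic; a small sketch:

```python
import numpy as np

np_bool = np.arange(2).astype(bool)   # [False, True]
np_int = np.arange(2)                 # [0, 1]

res = np_bool + np_int
print(res, res.dtype)                 # [0 2] int64 (on 64-bit Linux)

# The promotion rule itself, independent of the operand values:
print(np.result_type(np.bool_, np.int64))   # int64
```

Matching `np.result_type` semantics in the dnnc `__add__`/`__sub__` overloads would close the gap for all dtype pairs at once.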

make failing inside /deepC

To reproduce

  • Go to the /deepC directory
  • Run make

Error code

clang: error: no such file or directory: '/home/Desktop/dnnCompiler/src/core/obj/datatypes.o'
clang: error: no such file or directory: '/home/Desktop/dnnCompiler/src/operators/obj/opTypes.o'
clang: error: no such file or directory: '/home/Desktop/dnnCompiler/src/graph/obj/node.o'
clang: error: no such file or directory: '/home/Desktop/dnnCompiler/src/graph/obj/graph.o'
clang: error: no such file or directory: '/home/Desktop/dnnCompiler/src/codegen/obj/cppCodeGen.o'
Makefile:64: recipe for target 'lib/libdnnc.so' failed
make: *** [lib/libdnnc.so] Error 1

Slice.h can't take negative step value

  • Need to change DIMENSION to long; this will break many things!
  • Step value can be negative; that should iterate elements from the end and stop at the start.
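For reference, these are the Python/numpy slicing semantics a negative step in Slice.h would need to match; a small sketch:

```python
import numpy as np

a = np.arange(5)            # [0 1 2 3 4]

# A negative step walks from the end toward the start:
print(a[::-1])              # [4 3 2 1 0]

# With explicit start/stop, the stop index is exclusive,
# mirroring ONNX Slice starts/ends/steps semantics:
print(a[4:0:-2])            # [4 2]
```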

"Operator not supported" for implemented operations

Some operators that are in fact implemented sometimes produce "operator xxx is not supported yet" messages in the generated C++ code. For example, the attached model results in several "Relu not supported" comments within the generated code:

  // operator Relu is not supported yet.
  // Please file a enhancement request at 
  //        https://github.com/ai-techsystems/dnnCompiler/issues 

model.zip

bool operator for add (sub) doesn't look right!

>>> import dnnc as dc
>>> tmp = dc.arange(5,10).asTypeBool()
>>> tmp
[1 1 1 1 1]
>>> tmp1 = tmp - dc.array([5]).asTypeBool()
>>> tmp1
[1 1 1 1 1]

Expected Behavior
tmp1 should be [0 0 0 0 0]
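For reference, numpy sidesteps this ambiguity by refusing boolean subtraction altogether (since numpy 1.13) and pointing users at xor; mirroring that behavior may be a safer fix than defining bool subtraction. A sketch of what numpy does:

```python
import numpy as np

tmp = np.ones(5, dtype=bool)
other = np.array([True])

# numpy raises rather than silently returning all-True:
try:
    tmp - other
except TypeError as err:
    print(err)

# The element-wise "difference" numpy suggests instead:
print(np.logical_xor(tmp, other))   # [False False False False False]
```

With xor semantics, `tmp1` in the report above would indeed come out as [0 0 0 0 0].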

make datatypes available in python interface

Similar to numpy or onnx, based on feedback.
File: tensor.i
%include "core/datatypes.h"

Change:

  1. lowercase core/datatypes.h
    enum DNNC_DataType

  2. src/core/datatypes.cpp,
    corresponding case changes

  3. include/core/tensor.h
    add DNNC_DataType argument in

tensor<newT> asType(DNNC_DataType newType)

Remove

tensor<double> asTypeDouble()
tensor<float> asTypeFloat()
....
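A hedged Python-side sketch of the single `asType(DNNC_DataType)` entry point this proposes. The enum member names and the numpy stand-in for dnnc.tensor are illustrative assumptions; the real enum lives in core/datatypes.h:

```python
from enum import Enum

import numpy as np

class DNNC_DataType(Enum):
    # Hypothetical members mirroring the C++ DNNC_DataType enum.
    BOOL = "bool"
    INT32 = "int32"
    FLOAT = "float32"
    DOUBLE = "float64"

def as_type(arr, dtype: DNNC_DataType):
    """One dispatch point replacing asTypeBool()/asTypeFloat()/...;
    numpy's astype stands in for the templated tensor<newT> asType()."""
    return arr.astype(np.dtype(dtype.value))

t = np.arange(5)
print(as_type(t, DNNC_DataType.DOUBLE).dtype)   # float64
```

On the C++ side the same idea would be a switch over `DNNC_DataType` that forwards to the existing templated conversion.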

operator GEMM does not compile with template type int

How to Reproduce

Step 1
Change the gemm operator in the swig/dnnc.api file to add int as shown below:

tensor<output> gemm(tensor<input> &a, tensor<input> &b, 
                                       tensor<input> &c, float alpha = 1.0, float beta = 1.0, 
                                       int transA = 0, int transB = 0) {
     Gemm<output, input, input> op("localOpName", alpha, beta, transA, transB);
     return op.compute(a, b, c);
     dtype = {
         "double" : "double",
         "float" : "float",
         "int" : "int"
     }
}

Step 2

cd swig
make clean
make

Observe the compiler errors:

/home/dnnc/master/dnnCompiler/include/operators/Gemm.h:139:25: error: invalid operands to binary expression ('float' and 'const Product<Eigen::Map<Eigen::Matrix<int, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> >, Eigen:: Map<Eigen::Matrix<int, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> > >')
eResult = alpha * (eigenMatrixA * eigenMatrixB) + beta * eigenMatrixC;
~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
dnnc_api.cpp:10885:13: note: in instantiation of member function 'dnnc::Gemm<int, int, int>::compute' requested here
return op.compute(a, b, c);
^
/home/dnnc/master/dnnCompiler/packages/eigen-eigen-323c052e1731/Eigen/src/Core/../plugins/CommonCwiseBinaryOps.h: 50:29: note: candidate function template not viable: no known conversion from 'const Product<Eigen::Map<Eigen:: Matrix<int, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> >, Eigen::Map<Eigen::Matrix<int, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> > >' to 'const Eigen::MatrixBase<Eigen::Matrix<int, -1, -1, 1, -1, -1> >::StorageBaseType' (aka 'const Eigen::MatrixBase<Eigen::Matrix<int, -1, -1, 1, -1, -1> >') for 2nd argument
EIGEN_MAKE_SCALAR_BINARY_OP(operator*,product)
^
/home/dnnc/master/dnnCompiler/packages/eigen-eigen-323c052e1731/Eigen/src/Core/util/Macros.h:960:41: note: expanded from macro 'EIGEN_MAKE_SCALAR_BINARY_OP'
EIGEN_MAKE_SCALAR_BINARY_OP_ONTHELEFT(METHOD,OPNAME)
^
/home/dnnc/master/dnnCompiler/packages/eigen-eigen-323c052e1731/Eigen/src/Core/util/Macros.h:953:4: note: expanded from macro 'EIGEN_MAKE_SCALAR_BINARY_OP_ONTHELEFT'
(METHOD)(const T& scalar, const StorageBaseType& matrix) { \

/home//dnnc/master/dnnCompiler/packages/eigen-eigen-323c052e1731/Eigen/src/Core/../plugins/CommonCwiseBinaryOps.h: 50:29: note: candidate function template not viable: no known conversion from 'const Product<Eigen::Map<Eigen:: Matrix<int, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> >, Eigen::Map<Eigen::Matrix<int, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> > >' to 'const Eigen::MatrixBase<Eigen::Map<Eigen::Matrix<int, -1, -1, 1, -1, -1>, 0, Eigen:: Stride<0, 0> > >::StorageBaseType' (aka 'const Eigen::MatrixBase<Eigen::Map<Eigen::Matrix<int, -1, -1, 1, -1, -1>, 0, Eigen::Stride<0, 0> > >') for 2nd argument
/home//dnnc/master/dnnCompiler/packages/eigen-eigen-323c052e1731/Eigen/src/Core/util/Macros.h:960:41: note: expanded from macro 'EIGEN_MAKE_SCALAR_BINARY_OP'
EIGEN_MAKE_SCALAR_BINARY_OP_ONTHELEFT(METHOD,OPNAME)
^
/home//dnnc/master/dnnCompiler/packages/eigen-eigen-323c052e1731/Eigen/src/Core/util/Macros.h:953:4: note: expanded from macro 'EIGEN_MAKE_SCALAR_BINARY_OP_ONTHELEFT'
(METHOD)(const T& scalar, const StorageBaseType& matrix) {
^
/home//dnnc/master/dnnCompiler/packages/eigen-eigen-323c052e1731/Eigen/src/Core/PermutationMatrix.h:543:1: note: candidate template ignored: could not match 'MatrixBase' against 'float'
operator*(const MatrixBase &matrix,

...
...

/home/dnnc/master/dnnCompiler/packages/eigen-eigen-323c052e1731/unsupported/Eigen/CXX11/src/Tensor/TensorUInt128. h:135:35: note: candidate template ignored: could not match 'TensorUInt128<type-parameter-0-0, type-parameter-0-1>' against 'float'
TensorUInt128<uint64_t, uint64_t> operator * (const TensorUInt128<HL, LL>& lhs, const TensorUInt128<HR, LR>& rhs)
^
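The root cause is visible in the error: Eigen has no implicit promotion between a `float` scalar and an `int` matrix product, unlike numpy, where the same GEMM formula type-promotes silently. A small numpy sketch of the contrast (the suggested C++ fix in the final comment is an assumption, not tested against the deepC tree):

```python
import numpy as np

A = np.arange(4, dtype=np.int32).reshape(2, 2)
B = np.eye(2, dtype=np.int32)
C = np.zeros((2, 2), dtype=np.int32)
alpha, beta = 1.0, 1.0

# numpy silently promotes int * float, so the GEMM formula just works:
Y = alpha * (A @ B) + beta * C
print(Y.dtype)   # float64

# Eigen performs no such promotion, hence the "invalid operands
# ('float' and Product<...int...>)" error; a likely C++ fix is to cast
# the scalars to the matrix element type, e.g. T(alpha) * (A * B).
```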
