
A TensorFlow inference library for React Native

License: Apache License 2.0


react-native-tensorflow's Introduction

react-native-tensorflow

Note: This project is not maintained anymore

A TensorFlow inference library for React Native. It follows the Android inference API from TensorFlow: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/android

Getting started

$ npm install react-native-tensorflow --save

Linking

$ react-native link react-native-tensorflow

Additional steps for iOS

For the iOS setup you will need CocoaPods.

Create a Podfile in the iOS directory with the following content:

target '<ProjectName>' do
  pod 'TensorFlow-experimental'
end

Then run pod install.

Usage

This library provides an API to interact with TensorFlow directly, as well as a simple image recognition API. For most image recognition use cases the image recognition API should suffice.

Image recognition

First you need to add the TensorFlow model as well as the label file to the project. There are a few ways to do that, as described in the Fetching files section below.

Next, initialize the TfImageRecognition class with the model and label files, then call its recognize function with the image to classify:

import { TfImageRecognition } from 'react-native-tensorflow';

const tfImageRecognition = new TfImageRecognition({
  model: require('./assets/tensorflow_inception_graph.pb'),
  labels: require('./assets/tensorflow_labels.txt'),
  imageMean: 117, // Optional, defaults to 117
  imageStd: 1 // Optional, defaults to 1
})

const results = await tfImageRecognition.recognize({
  image: require('./assets/apple.jpg'),
  inputName: "input", //Optional, defaults to "input"
  inputSize: 224, //Optional, defaults to 224
  outputName: "output", //Optional, defaults to "output"
  maxResults: 3, //Optional, defaults to 3
  threshold: 0.1, //Optional, defaults to 0.1
})

results.forEach(result =>
  console.log(
    result.id, // Id of the result
    result.name, // Name of the result
    result.confidence // Confidence value between 0 - 1
  )
)

await tfImageRecognition.close() // Necessary in order to release objects on native side

Direct API

Note: It is not recommended to use this API, as it has a major problem (described in the second point of the known issues) and is quite difficult to use in its current state.

First you need to add the TensorFlow model to the project. There are a few ways to do that, as described in the Fetching files section below.

After adding the model and creating a TensorFlow instance from it, feed your data as an array, providing the input name, shape and data type. Then run the inference and finally fetch the result.

import { TensorFlow } from 'react-native-tensorflow';

const tf = new TensorFlow('tensorflow_inception_graph.pb')
await tf.feed({name: "inputName", data: [1,2,3], shape: [1,3], dtype: "int64"}) // feed the input tensor (shape must match the data length)
await tf.run(['outputName']) // run inference for the listed output nodes
const output = await tf.fetch('outputName') // fetch the resulting output data
console.log(output)

Check the android TensorFlow example for more information on the API: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowImageClassifier.java

Fetching files

  • Add as react native asset

Create the file rn-cli.config.js in the root of the project and add the following code, where the returned array contains all the file extensions you want to bundle (in this case we bundle pb and txt files in addition to the defaults).

module.exports = {
  getAssetExts() {
    return ['pb', 'txt']
  }
}

Then you can require the asset in the code, for example: require('assets/tensorflow_inception_graph.pb')

  • Add as iOS / Android asset

Put the file in the android/src/main/assets folder for Android; for iOS, add the file to the root of the project using Xcode. In the code you can then reference the asset by its file name (see the combined sketch after this list).

  • Load from file system

Put the file on the device's file system and reference it using its file path.

  • Fetch via url

Pass a URL to fetch the file remotely. The file is not stored locally, so it will be fetched again the next time the code is executed.
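
When the model and label files are not bundled through require, the file reference is passed as a plain string. A minimal sketch of the three non-require options, assuming TfImageRecognition accepts such strings for model and labels as the text above implies (all file names, paths and URLs below are placeholders):

import { TfImageRecognition } from 'react-native-tensorflow';

// Bundled as a native asset (android/src/main/assets or the Xcode project root):
// reference it by file name.
const fromAsset = new TfImageRecognition({
  model: 'tensorflow_inception_graph.pb',
  labels: 'tensorflow_labels.txt'
})

// Stored on the file system: reference it by its full path (placeholder path).
const fromFile = new TfImageRecognition({
  model: '/data/user/0/com.example.app/files/tensorflow_inception_graph.pb',
  labels: '/data/user/0/com.example.app/files/tensorflow_labels.txt'
})

// Hosted remotely: pass a URL (fetched again on every run, not cached locally).
const fromUrl = new TfImageRecognition({
  model: 'https://example.com/models/tensorflow_inception_graph.pb',
  labels: 'https://example.com/models/tensorflow_labels.txt'
})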

Supported data types

  • DOUBLE
  • FLOAT
  • INT32
  • INT64
  • UINT8
  • BOOL - On Android this will be converted into a byte array
  • STRING - On Android this will be converted into a byte array
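
For illustration, a brief sketch of feeding tensors of different types through the direct API. The input names, shapes and data are placeholders, and the dtype strings are assumed to be the lowercase forms of the types above, as in the int64 example earlier:

import { TensorFlow } from 'react-native-tensorflow';

const tf = new TensorFlow('tensorflow_inception_graph.pb')

// A float input, e.g. normalized image data (placeholder name, shape and values).
await tf.feed({name: "image_input", data: [0.1, 0.2, 0.3, 0.4], shape: [1, 4], dtype: "float"})

// An int32 input (placeholder name and value).
await tf.feed({name: "length_input", data: [4], shape: [1], dtype: "int32"})

await tf.run(['output'])
const output = await tf.fetch('output')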

Known issues

  • When using the image recognition API the results don't match exactly between Android and iOS. Most of the time they seem reasonably close though.
  • When using the direct API, the data to feed to TensorFlow needs to be provided on the JS side and is then passed to the native side. Transferring large payloads this way is very inefficient and will likely have a negative performance impact. The same problem exists when loading large data, like images, from the native side into the JS side for processing.
  • The TensorFlow library itself as well as the TensorFlow models are quite large, resulting in large builds.

react-native-tensorflow's People

Contributors

ajmssc, jose2007kj, reneweb


react-native-tensorflow's Issues

Memory leak on iOS

Seems that there is a memory leak in the iOS version of the library.

On subsequent uses of the library we can see memory usage steadily increase until the app eventually crashes.

My setup is something like below:

function categorizeImage(imagePath) {
  const processor = new TfImageRecognition({
    model: require('my-model.pb'),
    labels: require('my-labels.txt'),
  });
  return processor.recognize({ image: imagePath, ..... });
}

Each time categorizeImage is called we see memory usage increase. Images provided below.

(Memory profiler screenshots attached: snip20180206_27, snip20180206_28.)
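
Not part of the original report, but a hedged workaround sketch based on the close() call documented above: create the recognizer once, reuse it across calls, and release it explicitly instead of constructing a new instance per call (model/label paths are placeholders). Whether this fully resolves the iOS leak is not confirmed in this thread.

import { TfImageRecognition } from 'react-native-tensorflow';

// Create a single recognizer and reuse it for every call.
const recognizer = new TfImageRecognition({
  model: require('./assets/my-model.pb'),
  labels: require('./assets/my-labels.txt'),
});

function categorizeImage(imagePath) {
  return recognizer.recognize({ image: imagePath });
}

// When recognition is no longer needed, release the objects on the native side.
async function shutdown() {
  await recognizer.close();
}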

Update tensorflow version, please

I installed/linked the package but cannot run models that were built with tensorflow version > 1.3

Possible Unhandled Promise Rejection (id: 0):
05-26 21:42:48.749 6305 7014 W ReactNativeJS: Error: NodeDef mentions attr 'dilations' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_FLOAT]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: conv1/Conv2D = Conv2D[T=DT_FLOAT, _output_shapes=[[64,48,48,96]], data_format="NHWC", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true](random_shuffle_queue_DequeueMany:1, conv1/kernel/read). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

ResourceManager: OkHttpClientProvider.createClient() has been made private

Hey! Thanks for all the great work.

I'm new to this so please let me know if I've missed something, but when I tried to build this via ./gradlew build I got the following error:

/home/casey/gitrepo/react-native-tensorflow/android/src/main/java/com/rntensorflow/ResourceManager.java:65: 
error: createClient() has private access in OkHttpClientProvider
            Response response = OkHttpClientProvider.createClient().newCall(request).execute();

I assume an update was made that turned the .createClient() method private, so I've submitted my resolution in PR #35, but please let me know if there's a better way to address it. Thanks again!

TransformError App.js: Cannot find module metro-react-native-babel-transformer

[Error: TransformError App.js: Cannot find module '/Users/akumar/Documents/TFRNdemo/node_modules/@react-native-community/cli/node_modules/metro-react-native-babel-transformer/src/index.js'

{
  "name": "TFRNdemo",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "android": "react-native run-android",
    "ios": "react-native run-ios",
    "start": "react-native start",
    "test": "jest",
    "lint": "eslint ."
  },
  "dependencies": {
    "react": "16.13.1",
    "react-native": "0.63.1",
    "react-native-tensorflow": "^0.1.8"
  },
  "devDependencies": {
    "@babel/core": "^7.10.5",
    "@babel/runtime": "^7.10.5",
    "@react-native-community/eslint-config": "^2.0.0",
    "babel-jest": "^26.1.0",
    "eslint": "^7.4.0",
    "jest": "^26.1.0",
    "metro-react-native-babel-preset": "^0.60.0",
    "react-test-renderer": "16.13.1"
  },
  "jest": {
    "preset": "react-native"
  }
}

Model details

Can you please describe in more detail how you created the model for this library, and what kind of model it is?

What is the model and graph of your project?

Hi there,

I had successfully tested your project using react-native init.
But when I want to use another graph.pb, it gives me this error:

NodeDef mentions attr 'identical_element_shapes' not in Op<name=TensorArrayV3; signature=size:int32 -> handle:resource, flow:float; attr=dtype:type; attr=element_shape:shape,default=; attr=dynamic_size:bool,default=false; attr=clear_after_read:bool,default=true; attr=tensor_array_name:string,default=""; is_stateful=true>; NodeDef: Preprocessor/map/TensorArray_2 = TensorArrayV3clear_after_read=true, dtype=DT_INT32, dynamic_size=false, element_shape=, identical_element_shapes=true, tensor_array_name="". (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.)

How can I know whether a model is suitable for your library?

Any help will be appreciated!
Thanks in advance!

Error trying to run on Android

Hi, I'm trying to run on Android, but I'm getting the following errors:

/home/gabriel/repos/ColorGrain/node_modules/react-native-tensorflow/android/src/main/java/com/rntensorflow/RNTensorFlowInferenceModule.java:70: error: cannot find symbol
          float[] srcData = readableArrayToFloatArray(data.getArray("data"));
                            ^
  symbol:   method readableArrayToFloatArray(ReadableArray)
  location: class RNTensorFlowInferenceModule
/home/gabriel/repos/ColorGrain/node_modules/react-native-tensorflow/android/src/main/java/com/rntensorflow/RNTensorFlowInferenceModule.java:73: error: cannot find symbol
          int[] srcData = readableArrayToIntArray(data.getArray("data"));
                          ^
  symbol:   method readableArrayToIntArray(ReadableArray)
  location: class RNTensorFlowInferenceModule
/home/gabriel/repos/ColorGrain/node_modules/react-native-tensorflow/android/src/main/java/com/rntensorflow/RNTensorFlowInferenceModule.java:79: error: cannot find symbol
          int[] srcData = readableArrayToIntArray(data.getArray("data"));
                          ^
  symbol:   method readableArrayToIntArray(ReadableArray)
  location: class RNTensorFlowInferenceModule
/home/gabriel/repos/ColorGrain/node_modules/react-native-tensorflow/android/src/main/java/com/rntensorflow/RNTensorFlowInferenceModule.java:82: error: cannot find symbol
          byte[] srcData = readableArrayToByteBoolArray(data.getArray("data"));
                           ^
  symbol:   method readableArrayToByteBoolArray(ReadableArray)
  location: class RNTensorFlowInferenceModule
/home/gabriel/repos/ColorGrain/node_modules/react-native-tensorflow/android/src/main/java/com/rntensorflow/RNTensorFlowInferenceModule.java:85: error: cannot find symbol
          byte[] srcData = readableArrayToByteStringArray(data.getArray("data"));
                           ^
  symbol:   method readableArrayToByteStringArray(ReadableArray)
  location: class RNTensorFlowInferenceModule
5 errors
:react-native-tensorflow:compileReleaseJavaWithJavac FAILED

FAILURE: Build failed with an exception.

I'm trying to run this on Ubuntu.

Cannot load resource in release APK

I followed the code in your example and it works fine when I run react-native run-android.

But when I build the release version, I get an error: it can't load the resource.

react-native bundle --entry-file App.js --platform android --dev false --bundle-output ./android/app/src/main/assets/index.android.bundle --assets-dest ./android/app/src/main/res/

I use App.js as my entry file, not index.js.

I also generated a signing key and set it in the config file.

adb install release.apk

The install succeeds, but it can't load the resource when I open the app.

Undefined is not an object (tensorflow ImageRecognition)

When trying to integrate a pretrained TensorFlow model with Expo (React Native), the following error occurs within these lines:

   _graph = async () => {

        var preder2 = null;
        var items = "";
        this.setState({result: "", value: null});

        let result = await ImagePicker.launchImageLibraryAsync({
            allowsEditing: true,
            aspect: [4, 3],
            base64: true,
        });

        if (!result.cancelled) {
            this.setState({ image: result});
        };

        try {
            const tfImageRecognition = new TfImageRecognition({
                model: require('./assets/tensorflow_inception_graph.pb'),
                labels: require('./assets/tensorflow_labels.txt')
            });
    
            const results = await tfImageRecognition.recognize({
              image: this.state.image
            }); 
            results.forEach(
              result => ((preder2 = result.confidence), (items = result.name))
            );
            await tfImageRecognition.close();  
            this.setState({
              result: items,
              value: preder2 * 100 + "%"
            });
            console.log(this.state.result);
          } catch (err) {
            this.setState({
              result: "No Internet",
              value: "Please connect to the internet"
            });
            console.log(err);
          }
    }

Which generates the following error

10:10:40 AM: undefined is not an object (evaluating 'RNImageRecognition.initImageRecognizer')

I have been trying to find the reason why this is not working, but I cannot find a definite solution. The relative paths linking to the assets are correct and the extensions are present in the app.json. Furthermore, the model is trained using the TensorFlow API, which should make it compatible with the React Native implementation.

I observed that after running

const tfImageRecognition = new TfImageRecognition({

                model: require('./assets/tensorflow_inception_graph.pb'),
                labels: require('./assets/tensorflow_labels.txt')

            });

The code immediately falls into the catch(err) branch, which suggests it could not load the model and labels.

I am using Expo SDK version 28.0.0, Expo XDE and react-native-tensorflow version ^0.1.8.

Cannot build iOS

fatal error: 'tensorflow/core/framework/op_kernel.h' file not found
#include "tensorflow/core/framework/op_kernel.h"

TensorFlow throws an error after building the .apk and running on a real device

I'm using TensorFlow to classify data as true or false. It works perfectly on an emulator, but after I generate the .apk and run it on a real device, it always raises an error.
Below is my code:

const tfImageRecognition = new TfImageRecognition({
      model: require('path/to/model.pb'),
      labels: require('path/to/label.txt'),
      imageMean: 0, // Optional, defaults to 117
      imageStd: 255 // Optional, defaults to 1
    })
    tfImageRecognition.recognize({
      image: this.state.imagePath,
      inputName: "input_1", //Optional, defaults to "input"
      inputSize: 224, //Optional, defaults to 224
      outputName: "k2tfout_0", //Optional, defaults to "output"
      maxResults: 3, //Optional, defaults to 3
      threshold: 0.1 //Optional, defaults to 0.1
    })
      .then(results => {
        console.warn('done');
        ToastAndroid.show('tensowflow done', ToastAndroid.SHORT);
      })
      .catch(err => {
        ToastAndroid.show(err, ToastAndroid.SHORT);
        console.error('errorrrrrrrrrrrrrrrrrrrrr');
        console.error(err);
      })

When the tfImageRecognition.recognize function is called,
it always throws an error into .catch.

Could not invoke RNImageRecognition.initImageRecognizer. Failed to allocate byte allocation.

Just cloned the repo and ran the example as-is on a Genymotion Android device with 4 GB RAM.
Everything seems to build fine, but I got this error on the device:

Could not invoke RNImageRecognition.initImageRecognizer. 

null

Failed to allocate a 53884608 byte allocation with 25165824 free bytes and 31MB until OOM, max allowed footprint 93269104, growth limit 100663296.

I can imagine it is a "hardware" problem, but I find it a bit strange that an image classifier doesn't work on a device with 4096 MB of RAM.

Maybe use TensorFlow Lite instead?
Any suggestions?

Model file not found on iOS device

I'm unable to get the *.pb file working in release mode. Everything seems to work while running in debug mode on a simulator, but when deploying in release mode to a simulator or device I get the following error:

Error: Couldn't find 'file://var/.. ../assets/rounded_graph.pb' in bundle

I am using the example project from the repo with little changes outside of pointing to my model/label files and passing in appropriate params.

const ir = new TfImageRecognition({
  model:require('./assets/rounded_graph.pb'),
  labels: require('./assets/retrained_labels.txt'),
  imageMean: 128, // Optional, defaults to 117
  imageStd: 128 // Optional, defaults to 1
});

const results = await ir.recognize({
  image: _image,
  outputName: "final_result"
});

if (results) {
  this.setState({results: results, selectedImage: _image}); 
}

react-native: 0.52.1

RNTensorFlowInferenceModule not resettable (Android)

RNTensorFlowInferenceModule is missing a method to reset the context.
I see a method in the file RNTensorflowInference.java to reset the context, and another to get the context, but it's not callable from RNTensorFlowInferenceModule.

index.js is also missing reset(). I didn't check iOS, but it is probably missing there as well, since it's missing from index.js.

Support for datatypes

I would like to spec out support for datatypes other than double.

For instance, the following would be a data type choice.

tensorflow::Tensor input_data(tensorflow::DT_INT32, shape);

Through tf.feed or tf.feedWithDims there could be a dtype option

tf.feed(...args, 'int32');

How to use base64 data with tensorflow?

Is there any way to use a base64 string as the input for the TensorFlow image API, or any workaround such as converting the data type?

I saw #9 and #2 but couldn't figure out how to do such a conversion.
Any concrete example can be provided?
Thanks in advance.
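
Not an answer from the thread, but one possible workaround sketch: write the base64 data to a temporary file and pass the resulting file path to recognize, which the README's "Load from file system" option suggests is supported. This assumes react-native-fs is installed; the file name is a placeholder.

import RNFS from 'react-native-fs';

async function recognizeBase64(recognizer, base64Jpeg) {
  // Write the base64 payload to a temporary file on the device.
  const path = `${RNFS.CachesDirectoryPath}/tf-input.jpg`; // placeholder file name
  await RNFS.writeFile(path, base64Jpeg, 'base64');

  try {
    // Hand the file path to the image recognition API.
    return await recognizer.recognize({ image: path });
  } finally {
    await RNFS.unlink(path); // clean up the temporary file
  }
}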

documentation error

wrong:

results.forEach(result =>
  console.log(
    results.id, // Id of the result
    results.name, // Name of the result
    results.confidence // Confidence value between 0 - 1
  )
)

right:

results.forEach(result =>
  console.log(
    result.id, // Id of the result
    result.name, // Name of the result
    result.confidence // Confidence value between 0 - 1
  )
)

Can't close TensorFlow session on iOS

As raised in issue #20, it is impossible to feed the same model twice when using the regular RNTensorFlowInference API from the package. Therefore I had to close the model and open it again, but this only works on Android.

On iOS the close() function for this session raises an error:
"RNTensorFlowInference unrecognized selector sent to instance",
which creates a memory leak if I keep creating and feeding new instances.

Cannot run example on iOS

I cannot run the example on iOS; I get this error: Failed to install the requested application.
The bundle identifier of the application could not be determined.
Ensure that the application's Info.plist contains a value for CFBundleIdentifier. I don't know how to solve it.

Predictions are not consistent

When running the same model against the same picture(s), the predictions produced on the mobile device are vastly different from those produced by the Linux VM. Anyone else seeing this?

Possible TensorFlow Lite integration?

Is it possible to replace TensorFlow with TensorFlow Lite without completely breaking the code? I'm just concerned about the larger .apk file sizes when using the entire TensorFlow library.

Getting error Running model failed: Not found

Hi, I'm receiving the error 'Running model failed: Not found: FetchOutputs node output: not found'. Not sure what I could be doing wrong; I followed the docs and everything installed uneventfully.
export default class Album {
  async _predict() {
    const tfImageRecognition = new TfImageRecognition({
      model: require('../../assets/retrained_graph.pb'),
      labels: require('../../assets/retrained_labels.txt')
    })

    const results = await tfImageRecognition.recognize({
      image: require('../../assets/1.jpg'),
    })
      .then(r => console.log(r))
      .catch(err => console.log(err))
  }
}

UnableToResolveError: Unable to resolve module

I have installed the library and am trying to use the available example. When I load the model from ../asset/tensorflow_inception_graph.pb it gives me an error that it is unable to resolve the module. I have checked the path of the model, which is correct. Can anyone help resolve this? Below is the error log:

error:
bundling failed: UnableToResolveError: Unable to resolve module ../asset/tensorflow_inception_graph.pb from /Users/fazeel/CGapp/code/LoginDemo/src/screens/ImageRecognitionAI.js: could not resolve `/Users/fazeel/CGapp/code/LoginDemo/src/asset/tensorflow_inception_graph.pb' as a file nor as a folder
at ModuleResolver._loadAsFileOrDirOrThrow (/Users/fazeel/CGapp/code/LoginDemo/node_modules/metro-bundler/src/node-haste/DependencyGraph/ModuleResolution.js:337:11)

`tfImageRecognition.recognize(data)` not returning a result

      const results = await tfImageRecognition.recognize({
        image: require('../../assets/earth-alighted.jpeg'), // Also tried using "file://image_ocation.jpg" of cached image location from React-Native-Camera `takePictureAsync()`
        inputName: "input",
        inputSize: 224,
        outputName: "output",
        maxResults: 3,
        threshold: 0.1,
      });

Could it be because the property values inputSize, outputName... are not correct? How would I determine what those values are?

I saw in issue #11 that the outputName needs to be the same output name as the model used - I'm using the YOLO2 model. Could this maybe be the issue, that it's not mapping the return result correctly?

Unsigned APK is not working as expected

What I did:
npx react-native bundle --platform android --dev false --entry-file index.js --bundle-output android/app/src/main/assets/index.android.bundle --assets-dest android/app/src/main/res

cd android && ./gradlew assembleDebug

and as expected I found the apk in this path
mobile-app/android/app/build/outputs/apk/debug/app-debug.apk

However, this .apk file is not working the way it does when I run the commands:
npx react-native start
npx react-native run-android

Package.json:

{
  "name": "Myapp",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "android": "react-native run-android",
    "ios": "react-native run-ios",
    "start": "react-native start",
    "test": "jest",
    "lint": "eslint ."
  },
  "dependencies": {
    "@react-native-community/async-storage": "^1.12.1",
    "@react-native-community/masked-view": "^0.1.11",
    "@react-native-picker/picker": "^1.16.1",
    "@react-navigation/bottom-tabs": "^5.11.11",
    "@react-navigation/native": "^5.9.4",
    "@react-navigation/stack": "^5.14.5",
    "@teachablemachine/image": "^0.8.4",
    "@tensorflow-models/mobilenet": "^2.1.0",
    "@tensorflow/tfjs": "^3.7.0",
    "@tensorflow/tfjs-core": "^3.7.0",
    "@tensorflow/tfjs-react-native": "^0.5.0",
    "axios": "^0.21.1",
    "expo-av": "^9.1.2",
    "expo-camera": "^11.0.3",
    "expo-gl": "^10.3.0",
    "expo-gl-cpp": "^10.3.0",
    "expo-image-manipulator": "^9.1.0",
    "lottie-react-native": "^4.0.2",
    "react": "17.0.1",
    "react-native": "0.64.2",
    "react-native-fs": "^2.18.0",
    "react-native-gesture-handler": "^1.10.3",
    "react-native-picker-select": "^8.0.4",
    "react-native-reanimated": "^2.2.0",
    "react-native-responsive-screen": "^1.4.2",
    "react-native-safe-area-context": "^3.2.0",
    "react-native-screens": "^3.4.0",
    "react-native-shapes": "^0.1.0",
    "react-native-unimodules": "^0.13.3"
  },
  "devDependencies": {
    "@babel/core": "^7.12.9",
    "@babel/runtime": "^7.12.5",
    "@react-native-community/eslint-config": "^2.0.0",
    "babel-jest": "^26.6.3",
    "eslint": "7.14.0",
    "jest": "^26.6.3",
    "metro-react-native-babel-preset": "^0.64.0",
    "react-test-renderer": "17.0.1"
  },
  "jest": {
    "preset": "react-native"
  }
}

TensorFlow dense pose estimation support

Are there any plans to support importing and using TensorFlow dense pose estimation?
I tried to replicate what's in the tfjs docs.

I installed the package @tensorflow-models/posenet as well as the @tensorflow/[email protected].
Then I added a simple method to load the posenet model, as follows:

componentDidMount() {
    this.loadNet();
  }
  
  async getDensePose()
  {
    const imageScaleFactor = 0.50;
    const flipHorizontal = false;
    const outputStride = 16;
    const net   = await posenet.load();
    const pose = net.estimateSinglePose(
      require('./images/sample.jpg'), 
      imageScaleFactor,
      flipHorizontal,
      outputStride);
    console.log('pose estimation result'+pose);

  }

but unfortunately I get this error:
Unhandled JS Exception: No backend found in registry.

Any ideas?
Thanks

image input array

Hi,
Please go easy on me, I'm relatively new to TensorFlow/ML; I'm more of a web guy.
I have a frozen model trained with the TensorFlow Object Detection API and hope to use React Native to deploy to Android/iOS.

When you say "Next you will need to feed an image in the form of a number array", what tools do you use in React Native to convert the image to a number array?

Thanks
Matt
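
Not part of the original thread, but a rough sketch of one way to build such a number array in JavaScript, assuming the react-native-fs, buffer and jpeg-js packages are installed (all package choices are assumptions, and the mean/std defaults simply mirror the imageMean/imageStd options documented above). Note that the known issue about passing large arrays from JS to the native side still applies.

import RNFS from 'react-native-fs';
import { Buffer } from 'buffer';
import jpeg from 'jpeg-js';

// Decode a JPEG from disk into a normalized float array plus an NHWC shape.
async function imageToNumberArray(path, mean = 117, std = 1) {
  const base64 = await RNFS.readFile(path, 'base64');
  const { width, height, data } = jpeg.decode(Buffer.from(base64, 'base64'));

  // Drop the alpha channel and normalize each channel value.
  const floats = new Array(width * height * 3);
  for (let i = 0, j = 0; i < data.length; i += 4) {
    floats[j++] = (data[i] - mean) / std;     // R
    floats[j++] = (data[i + 1] - mean) / std; // G
    floats[j++] = (data[i + 2] - mean) / std; // B
  }
  return { data: floats, shape: [1, height, width, 3] };
}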
