
ml5js / ml5-library


Friendly machine learning for the web! πŸ€–

Home Page: https://ml5js.org

License: Other

Languages: JavaScript 99.96%, Shell 0.04%
Topics: lstm, deep-learning, imagenet, machine-learning, javascript, neural-network, p5xjs, p5js

ml5-library's Introduction

Read our ml5.js Code of Conduct and software licence here!

ml5


This project is currently in development.

Friendly machine learning for the web!

ml5.js aims to make machine learning approachable for a broad audience of artists, creative coders, and students. The library provides access to machine learning algorithms and models in the browser, building on top of TensorFlow.js.

The library is supported by code examples, tutorials, and sample data sets with an emphasis on ethical computing. Bias in data, stereotypical harms, and responsible crowdsourcing are part of the documentation around data collection and usage.

ml5.js is heavily inspired by Processing and p5.js.

Please read our Code of Conduct, which establishes our commitment to make ml5.js a friendly and welcoming environment.

Usage

Before getting started with ml5.js, review our Code of Conduct. There are several ways you can use the ml5.js library:

  • You can use the latest version (0.12.2) by adding it to the head section of your HTML document:

v0.12.2

<script src="https://unpkg.com/[email protected]/dist/ml5.min.js" type="text/javascript"></script>

  • If you need to use an earlier version for any reason, you can change the version number. The previous versions of ml5 can be found here. You can use those previous versions by replacing <version> with the ml5 version of interest:
<script src="https://unpkg.com/ml5@<version>/dist/ml5.min.js" type="text/javascript"></script>

For example, to load version 0.12.2:

<script src="https://unpkg.com/ml5@0.12.2/dist/ml5.min.js" type="text/javascript"></script>
  • You can also reference "latest", but we do not recommend this as your code may break as we update ml5.
<script src="https://unpkg.com/ml5@latest/dist/ml5.min.js" type="text/javascript"></script>

Resources

Standalone Examples

You can find a collection of standalone examples in this repository within the examples/ directory. You can also try hosted versions of the examples online on the ml5.js examples index website.

These examples are meant to serve as an introduction to the library and machine learning concepts.

Code of Conduct

We believe in a friendly internet and community as much as we do in building friendly machine learning for the web. Please refer to our Code of Conduct for our rules for interacting with ml5 as a developer, contributor, or as a person using the library.

Contributing

Want to be a contributor πŸ— to the ml5.js library? If you're interested in submitting new features, fixing bugs, or helping develop the ml5.js ecosystem, please see our CONTRIBUTING documentation to get started.

See CONTRIBUTING πŸ› 

Acknowledgements

ml5.js is supported by the time and dedication of open source developers from all over the world. Funding and support are generously provided by a Google Education grant at NYU's ITP/IMA program.

Many thanks to BrowserStack for providing testing support.

Contributors

Thanks goes to these wonderful people (emoji key):


Daniel Shiffman

πŸ’» πŸ’‘ πŸ“† πŸ‘€ ⚠️ πŸ“Ή

CristΓ³bal Valenzuela

πŸ’» πŸ’‘ πŸ‘€ πŸ”§ ⚠️

Yining Shi

πŸ’» πŸ’‘ πŸ‘€ πŸ”§ ⚠️ πŸ›

Hannah Davis

πŸ’» πŸ’‘

Joey Lee

πŸ’» πŸ’‘ πŸ‘€ πŸ–‹ ⚠️

AshleyJaneLewis

πŸ“ 🎨 πŸ“‹ πŸ–‹

Ellen Nickles

πŸ“ πŸ–‹ πŸ€” βœ…

Itay Niv

πŸ’» πŸ’‘

Nikita Huggins

πŸ“ πŸ–‹ πŸ€”

Arnab Chakravarty

πŸ–‹ πŸ““

Aidan Nelson

πŸ’» πŸ’‘

WenheLI

πŸ’» πŸ’‘ 🚧 πŸ€”

Darius Kazemi

πŸ€” πŸ’¬

Dingsu Wang

πŸ’» πŸ’‘

garym140

πŸ–‹ πŸ“ πŸ€” πŸ““

Gene Kogan

πŸ’» πŸ’‘ πŸ€”

Hayley Hwang

πŸ’» πŸ’‘ πŸ€”

Lisa Jamhoury

πŸ’‘ πŸ€”

Alejandro Matamala Ortiz

🎨 πŸ–‹ πŸ“

Maya Man

πŸ’» πŸ’‘

Mimi Onuoha

πŸ€” πŸ–‹ πŸ‘€

Yuuno, Hibiki

πŸ’» πŸ’‘ 🚧

Dan Oved

πŸ’» πŸ’‘ πŸ’¬ πŸ€”

Stephanie Koltun

πŸ’» πŸ’‘ πŸ–‹ πŸ“ 🎨

YG Zhang

πŸ’» πŸ’‘ πŸ€”

Wenqi Li

πŸ’» πŸ’‘ πŸš‡

Brent Bailey

⚠️ πŸ’» πŸ’‘

Jonarod

πŸ’»

Jasmine Otto

πŸ’» ⚠️ πŸ’‘

Zaid Alyafeai

πŸ’» πŸ’‘ πŸ€” πŸ’¬

Jacob Foster

πŸ’» πŸ’‘ ⚠️

Memo Akten

πŸ’» πŸ’‘

Mohamed Amine

πŸ’» πŸ’‘ πŸ€” ⚠️

Oliver Wright

πŸ’» ⚠️

Marshal Hayes

πŸ“–

Reiichiro Nakano

πŸ’» ⚠️ πŸ’‘

Nikhil Thorat

πŸ’» πŸ’‘ πŸ€” πŸš‡

Irene Alvarado

πŸ’» πŸ’‘ 🚧 πŸ€”

Andrew Lee

πŸ’» πŸ’‘ πŸ€”

Jerhone

πŸ“–

achimkoh

πŸ’» πŸ’‘ ⚠️

Jim

πŸ’‘ πŸ“– πŸ–‹

Junya Ishihara

🚧 πŸ’»

Naoto HIÉDA

🚧

aarΓ³n montoya-moraga

🚧 πŸ’‘

b2renger

πŸ’» πŸš‡

Aditya Sharma

🚧

okuna291

πŸ€”

Jenna

πŸ€”

nicoleflloyd

πŸ–‹ 🎨 πŸ““

jepster-dk

πŸ’» πŸ€”

Xander Jake de los Santos

πŸ€”

Cassie Tarakajian

πŸ› πŸš‡ πŸ€”

Dave Briccetti

πŸ›

Sblob1

πŸ›

Jared Wilber

πŸ’‘ πŸ€” πŸ’»

danilo

πŸ’» πŸ€”

Emma Goodliffe

πŸ€” πŸ’¬ 🚧

Yang

πŸ’» πŸ›

Lydia Jessup

πŸ’» πŸ€” πŸ’‘

CJ R.

πŸ“– πŸ–‹

Fabio Corona

πŸ›

Tobias Nickel

πŸ› πŸ’»

Michael Salaverry

πŸ–‹ πŸ›

Rob

πŸ› πŸ’¬

Pujaa Rajan

πŸ’‘ πŸ›

Nick McIntyre

⚠️ πŸ›

Andy Baio

πŸ–‹ 🚧

Wenqi Li

πŸ–‹ πŸ’» πŸš‡ 🚧 πŸ€”

garym140

🎨

Jim

πŸ–‹ 🚧 πŸ€”

Yeswanth

🚧 πŸ›

Pettrus Sherlock

πŸ› 🚧

danilo

πŸ–‹

Andreas Refsgaard

πŸ–‹

Brian Jordan

πŸ–‹

bradley inniss

πŸ–‹

Kaushlendra Pratap

πŸ–‹

maxdevjs

πŸ–‹

josher19

πŸ–‹ πŸ›

Frederik De Bleser

πŸ–‹

Violet

πŸ›

Tirta Wening Rachman

πŸ’» πŸ–‹ πŸ€”

MikΓ‘ Kruschel

πŸ’» πŸ›

Takanobu Asanuma

πŸ–‹

Martin L. Jensen

πŸ–‹

Hugo Romano

πŸ–‹

Darshan Sen

πŸ–‹

Ludwig Stumpp

πŸ‘€ πŸ› πŸ’‘

Bomani Oseni McClendon

πŸ–‹ πŸ’»

Jang Haemin

πŸ–‹

Anton Filatov

πŸ–‹

Elijah Lucian

πŸ“–

Tam

πŸ–‹

RGV

πŸ’¬

hansvana

πŸ–‹

Ali Karpuzoglu

πŸ› 🚧

Jacob Wysko

πŸ–‹

Dilwoar Hussain

πŸ“–

Manaswini Das

πŸ“–

Benjamin Botwin

πŸ‘€ πŸ’¬ πŸ€”

Henrique Mota

πŸ‘€ πŸ’¬ πŸ€”

CaseyPan

πŸ’» πŸ€”

Sam Tarakajian

πŸ’» πŸ› 🚧 πŸ€”

Michael Bell

πŸ“– πŸ› 🚧

machenmusik

πŸ’‘

Pranav Dudhane

πŸ“–

TΓΌndi SzΓ‘sz

🎨 πŸ€”

hellonun

πŸ€” πŸ““

Pierre Grimaud

πŸ“– πŸ–‹

Greg French

πŸ›

Dale Markowitz

πŸ’» πŸ€”

Ragland Asir

πŸ“–

Tom-Lucas SΓ€ger

πŸ“–

altruios

πŸ“–

mennosc

πŸ’»

neta

πŸ’» πŸ“–

Koji

πŸ’» πŸ“–

josephrocca

πŸ’» πŸ“–

Lauren Lee McCarthy

πŸ’» πŸ“–

Sorin Curescu

πŸ’»

mofanke

πŸ› πŸ’»

Ikko Ashimine

πŸ“–

Mudasar-Makandar

πŸ› πŸ’»

Amir Feqhi

πŸ’»

DasK

πŸ’» πŸ€”

Amir

πŸ’» πŸ’‘

lindapaiste

πŸ’» πŸ€” πŸ›

Evan Weinberg

πŸ“–

Coder Gautam

πŸ“–

Yong-Yuan Chen

πŸ“–

adrianfiedler

πŸ“– πŸ›

This project follows the all-contributors specification. Contributions of any kind welcome!


ml5-library's Issues

LSTM defaults

How should we handle it when a property is not passed into generate()? (A sketch of possible fallbacks follows the list below.)

  • seed: pick a random sequence of characters from source text as seed?
  • temperature: 0.5 ?
  • length: 140 ?
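If we go with those fallbacks, a minimal sketch of how generate() could resolve missing options (the names here are illustrative, not the actual API):

// Illustrative only: one way generate() could fill in missing options.
// sourceText stands in for the model's training text.
function resolveGenerateOptions(options, sourceText) {
  const opts = options || {};
  const length = opts.length !== undefined ? opts.length : 140;                // default length
  const temperature = opts.temperature !== undefined ? opts.temperature : 0.5; // default temperature
  let seed = opts.seed;
  if (seed === undefined) {
    // Pick a random slice of the source text as the seed.
    const start = Math.floor(Math.random() * Math.max(1, sourceText.length - 40));
    seed = sourceText.substring(start, start + 40);
  }
  return { seed, temperature, length };
}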

Thanks for generating both ES6 and plainJS (ES5)

Not an issue, I just wanted to thank you for generating your examples in both ES6 and ES5 (plain JS) formats. Not being used to TypeScript, I really appreciate it.

P.S. How do you use tsc to generate the plain JS folder? I would really like deeplearn.js to do the same, I'm just not sure what to suggest.

no data found for NDArray error

I get this error while predicting the class:

ml5.js:11350 Uncaught (in promise) Error: No data found for NDArray with data id 2441. Use dl.ENV.math instead of constructing your own NDArrayMath. If you need to construct your own math, make sure this array is allocated after the math construction
    at MathBackendWebGL.throwIfNoData (ml5.js:11350)
    at MathBackendWebGL.readSync (ml5.js:10824)
    at MathBackendWebGL.<anonymous> (ml5.js:10866)
    at step (ml5.js:10698)
    at Object.next (ml5.js:10679)
    at fulfilled (ml5.js:10670)
    at <anonymous>
MathBackendWebGL.throwIfNoData @ ml5.js:11350
MathBackendWebGL.readSync @ ml5.js:10824
(anonymous) @ ml5.js:10866
step @ ml5.js:10698
(anonymous) @ ml5.js:10679
fulfilled @ ml5.js:10670
Promise.then (async)
BackendEngine.scope @ ml5.js:18075
NDArrayMath.scope @ ml5.js:3387
_callee4$ @ ml5.js:26126
tryCatch @ ml5.js:17140
invoke @ ml5.js:17378
prototype.(anonymous function) @ ml5.js:17192
step @ ml5.js:26037
(anonymous) @ ml5.js:26037
(anonymous) @ ml5.js:26037
predict @ ml5.js:26159
predict @ knn_saturday.js:48

the script is:

let video;
let knn;
let buttonA, buttonB, buttonStart;

function preload () {
  // callback, num classes, k-value (just find one neighbor)
  knn = new ml5.KNNImageClassifier(modelLoaded, 2, 1);
}

function setup () {
  createCanvas(320, 240);

  video = createCapture(VIDEO);
  video.attribute('width', 227);
  video.attribute('height', 227);
  video.hide();

  buttonA = createButton('train A');
  buttonB = createButton('train B');
  buttonStart = createButton('start guessing');
  // buttonA.mousePressed(() => {
  //   train(1, 'A');
  // });
  // buttonB.mousePressed(() => {
  //   train(2, 'B');
  // });
  buttonA.mousePressed(train.bind(null, 1, 'A'));
  buttonB.mousePressed(train.bind(null, 2, 'B'));
  buttonStart.mousePressed(predict);
}

function draw () {
  background(0);

  tint(0, 223, 254);
  image(video, 0, 0, width, height);
}

function modelLoaded () {
  console.log('model loaded');
}

function train (category, type) {
  knn.addImage(video.elt, category);
}

function predict () {
  knn.predict(video.elt, gotResult);
}

function gotResult (res) {
  console.log('result: ', res);
}

error messages?

Continuing off of #63, we might consider providing some friendly error messages if users attempt to call train() before a model is loaded. Not sure how many situations this would apply to.
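As a starting point, a guard along these lines could produce a clearer message; the modelReady flag and class shape here are hypothetical, not ml5 internals:

// Hypothetical sketch of a friendlier error when train() is called too early.
class GuardedClassifier {
  constructor() {
    this.modelReady = false; // set to true inside the model-loaded callback
  }
  train(input, category) {
    if (!this.modelReady) {
      throw new Error(
        'ml5: the model is still loading. Wait for the callback passed to the ' +
        'constructor before calling train().'
      );
    }
    // ...actual training logic would go here...
  }
}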

Clustering / Unsupervised Learning

I am working on some examples to cluster data sets. Here are some algorithms I imagine eventually having in this library:

  • kmeans
  • principal component analysis (PCA)
  • TSNe

I committed a kmeans example (uses random vectors) as a start.

https://github.com/ITPNYU/p5-deeplearn-js/tree/master/examples/clustering/kmeans
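For reference, a self-contained k-means sketch on random 2D vectors, in plain JavaScript and independent of both the committed example and deeplearn.js:

// Minimal k-means on 2D points; illustrative only.
function kmeans(points, k, iterations = 10) {
  // Initialize centroids from the first k points (fine for random data).
  let centroids = points.slice(0, k).map(p => p.slice());
  let assignments = new Array(points.length).fill(0);

  for (let iter = 0; iter < iterations; iter++) {
    // Assignment step: each point joins its nearest centroid.
    assignments = points.map(p => {
      let best = 0;
      let bestDist = Infinity;
      centroids.forEach((c, i) => {
        const d = (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2;
        if (d < bestDist) { bestDist = d; best = i; }
      });
      return best;
    });

    // Update step: move each centroid to the mean of its assigned points.
    centroids = centroids.map((c, i) => {
      const members = points.filter((_, j) => assignments[j] === i);
      if (members.length === 0) return c;
      return [
        members.reduce((s, p) => s + p[0], 0) / members.length,
        members.reduce((s, p) => s + p[1], 0) / members.length,
      ];
    });
  }
  return { centroids, assignments };
}

// Example: 100 random points, 3 clusters.
const pts = Array.from({ length: 100 }, () => [Math.random(), Math.random()]);
console.log(kmeans(pts, 3).centroids);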

Some next steps are:

  • Is the algorithm correct? (I whipped it up quickly and it needs double-checking)
  • Use an actual sample dataset
  • Visualize clusters
  • Think through the clustering API and refactor it into the library, using deeplearn.js for the vector math

There are some interesting possibilities with combining clustering algorithms with word vectors.

ImageNet Classifier Example

(Screenshots: errorscreen, imagenetcode)

I am getting the error shown in the screenshots. I need to create a prototype similar to Teachable Machine, like the one in your KNN image classifier example. Before that I tried to build the following ImageNet model, but ml5.js seems to have some issues. Could you please provide some help? Thanks in advance :)

LSTM source text

How best to handle source text? For this quick example it's a JS file with all of Hamlet stored in a variable. To be more flexible and beginner-friendly, we should probably switch to using loadStrings()? Or there may be a more appropriate workflow once we have an example where the training uses the same source text in JS?
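For comparison, the loadStrings() approach would look roughly like this; the file name hamlet.txt is an assumption:

// Sketch of loading the source text with p5's loadStrings().
let sourceText;

function preload() {
  // loadStrings() resolves to an array of lines from the file.
  loadStrings('hamlet.txt', lines => {
    sourceText = lines.join('\n');
  });
}

function setup() {
  noCanvas();
  console.log('Loaded', sourceText.length, 'characters of source text');
}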

publish in npm

Currently the npm package for this library is empty.

Should we start publishing it? Maybe as a pre-release.
It would be nice to be able to do:

npm install ml5 

Training LSTM causes IndexError

Hi there,

Training a custom LSTM model using the supplied script causes an error when assigning a model name if the folder is not more than two directories deep.

python3 train.py --data_dir data/my_data_folder

Will cause the following error:

Traceback (most recent call last):
  File "train.py", line 171, in <module>
    main()
  File "train.py", line 69, in main
    train(args)
  File "train.py", line 78, in train
    model_name = args.data_dir.split("/")[2]
IndexError: list index out of range

It's a very easy fix, args.data_dir.split("/")[2] should be changed to something like args.data_dir.split("/")[-1]. I'm happy to submit a pull request if that is easier - just let me know.

All the best!

Image Utilities

Related to #26 #24 #21 #15, there are going to be many examples working with images. There are some awkward things now that perhaps we can handle internally and/or with an "image utility" class for the library that handles the images <--> NDArray stuff.

  1. Should all functions that receive an image work with DOM elements (<img>, <video>, <canvas>), p5.Image objects, and pixel arrays? (A sketch of one possible normalization helper follows below.)
  2. I've noticed our examples only work with certain square dimensions (127x127, 227x227, etc.). There are likely performance considerations, but ideally the library could handle any necessary image resizing internally.

What else?
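A sketch of the kind of normalization helper this could involve: accept any of those input types and hand back a fixed-size ImageData. The size default and the way p5 objects are unwrapped here are assumptions, not the library's actual internals:

// Illustrative helper: normalize <img>, <video>, <canvas>, or p5.Image input
// into a fixed-size ImageData that can then be converted to an NDArray.
function toImageData(input, size = 227) {
  const canvas = document.createElement('canvas');
  canvas.width = size;
  canvas.height = size;
  const ctx = canvas.getContext('2d');
  // p5.Image exposes its backing canvas on .canvas; p5 DOM wrappers expose
  // the underlying element on .elt; plain DOM elements can be drawn directly.
  const source = input.canvas ? input.canvas : (input.elt ? input.elt : input);
  ctx.drawImage(source, 0, 0, size, size);
  return ctx.getImageData(0, 0, size, size);
}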

model ready callback?

I noticed in my new LSTM example that I am able to call generate() before the model is loaded if I'm quick to press the button (or certainly if I call it directly in setup()). Perhaps this is something for later, once we're further along in making a "library", but we should offer some sort of optional callback to know when the model is loaded?
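One possible shape for that, sketched with the LSTMGenerator name from the API proposal below; the constructor arguments and ready flag are illustrative:

// Illustrative: guard generate() behind a loaded flag set by a callback.
let lstm;
let modelIsReady = false;

function setup() {
  noCanvas();
  // Hypothetical constructor signature with a "model ready" callback.
  lstm = new LSTMGenerator('models/hamlet/', modelLoaded);
  createButton('generate').mousePressed(() => {
    if (!modelIsReady) {
      console.log('Model still loading, try again in a moment.');
      return;
    }
    console.log(lstm.generate(140, 'to be or not', 0.5));
  });
}

function modelLoaded() {
  modelIsReady = true;
  console.log('model ready');
}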

What if deeplearnjs changes?

I think deeplearnjs Sessions and Graphs are on their way out, although no one seems to be saying whether this change is coming soon or will take a while.

Is p5-deeplearn-js ready for the changes? Or is this going to cause a major break? I often make my web pages point to a specific build:

<script src="https://unpkg.com/[email protected]/dist/deeplearn.js"> </script>

instead of

<script src="https://unpkg.com/deeplearn"></script>

Any opinions?

Training LSTM model

@nsthorat and @cvalenzuela have an e-mail thread about this, but adding here to track going forward.

At the moment the LSTM example uses a model trained with this script from deeplearn.js. Eventually we want to train the LSTM in the browser, but before that I thought it might be simpler to demonstrate the training with a Keras model using this example from my class last year.

@cvalenzuela attempted to use this script to convert the model from my example but the output isn't working just yet.

Shall we do more work to get the Keras-trained example to be compatible, or point students towards the train.py that's in deeplearn.js if they want to train their own model?

Did I get this right?

lstm() function optional arguments

The lstm() method should allow for optional arguments:

  • If no seed text is provided, use random seed text from the source text (how many chars?)
  • default temperature
  • default return length (or should this be required?)

Example distribution

As of now, the examples depend on models, js files, and other elements via relative paths in this full repo. We should come up with a way to "distribute" stand-alone examples where everything is contained within a single directory.

(Related to this, testing and releasing examples in the p5 web editor could also be useful.)

404 on assets in fast_style_transfer_mirror

In examples/fast_style_transfer_mirror/sketch.js, I get 404 on 'assets/models/udnie' and 'assets/img/udnie.jpg'. On my fork I've corrected this to the local files. I can submit a PR if this is correct. But I also noticed you have these files in docs/assets/, so I thought maybe you meant the global ones or that I made a mistake in setup. Let me know if you'd like a PR.

frame drop while running imagenet.predict

I noticed during the workshop last week that p5.js canvas frame rate drops significantly while running video classification. Here's an example output from this code with console.log(frameRate()) in draw().

44.2477876172749
62.11180126747634
58.139534876795196
60.240963845789416
4.321521175375056
14.684287814305357
59.52380948074774
61.72839506855966
58.479532172979155
59.171597624902525
54.34782608448077
65.78947370859808
60.240963845789416
59.8802395465879
60.97560972744388
7.41839762612301
64.93506491876796
60.975609781548265
56.17977527634279
63.29113926550177
62.499999987267074
56.17977527634279
60.606060613757485
60.97560972744388
57.803468233998
65.78947364561368
56.497175151796085
63.29113926550177
60.606060613757485
58.47953212321371
58.479532172979155
57.14285713715517
63.69426756050796
62.8930817669005
59.171597624902525

imagenet.predict(...) happens asynchronously, but I think deeplearn.js is using the GPU, which is blocking canvas rendering. Is this a technical limitation, or can we somehow continue updating the canvas while the classification job is running?
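One common mitigation on the sketch side is to avoid queuing a new prediction while the previous one is still in flight, so the GPU is not hit on every frame. A sketch (the predict callback signature is assumed); this does not address the underlying WebGL blocking:

// Only request a new classification when the previous one has finished.
let busy = false;

function draw() {
  background(0);
  image(video, 0, 0, width, height);

  if (!busy) {
    busy = true;
    imagenet.predict(video.elt, results => {
      console.log(results);
      busy = false; // ready to request again on a later frame
    });
  }
}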

A name?

What should we call it? I'm using HighML just to start developing the library and refactoring the code.

char_indices, indices_char for custom training text?

In class this week, to compare with my markov chain example, I ran through training an LSTM model using train.py with itp.txt. I've updated the code in this repo to reflect this. The steps I took are:

  1. Run train.py
  2. Run json_checkpoints_var.py
  3. Switch out hamlet.js in example with a new itp.js.

I realize this is tiny tiny data with little training so getting good results isn't really possible. However, I got nonsense results with characters not in the original data set. I'm imagining this has something to do with char_indices and indices_char. Is there a way to auto-generate these during the training process? (They would be different depending on the characters used in the training data, yes?)
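Generating those two maps directly from the training text is straightforward; a plain JS version of what the training script would need to emit might look like this:

// Build char -> index and index -> char maps from the training text.
function buildCharMaps(text) {
  const chars = Array.from(new Set(text)).sort();
  const charIndices = {};
  const indicesChar = {};
  chars.forEach((ch, i) => {
    charIndices[ch] = i;
    indicesChar[i] = ch;
  });
  return { charIndices, indicesChar };
}

// Example:
const { charIndices } = buildCharMaps('hello world');
console.log(charIndices); // { ' ': 0, 'd': 1, 'e': 2, 'h': 3, ... }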

LSTM "library"

Right now the lstm.js file is included in both of my examples 1 and 2. Long-term this is a bigger discussion: in theory all of this code would likely live in either deeplearn.js or a new p5-deeplearn add-on/bridge.

However, should I refactor it out now into a library?

KNN: export the model

The KNNImageClassifier has a function to import a model (loadModel(model)). Is there any way to export the trained model so I can import it later?

LSTM API

This is a proposal for the LSTM library from #2.

I'm imagining something like:

const lstm = new LSTMGenerator('/path/to/model/', 'path/to/variables.json');
let txt = lstm.generate(len, seed, temperature);

I am not at all sure about the naming. I am also wondering if variables.json should be integrated into the model files directory and not treated separately?

Image Captioning Example

There is interest in my A2Z class re: image captioning. I thought I would try to create a simple example modeled after the deeplearn.js imagenet that provides captions for a p5.Image as well as real-time captions of a recorded or live video. @cvalenzuela, just wanted to check whether I'd be duplicating anything you have already done? Will leave this thread open to discuss / plan.

GSOC Feature Requests

Hi there. Hopefully, GitHub issues are an acceptable means of communication. I would like to take part in the Google Summer of Code 2018 and it would be great to make some meaningful contributions to this library. I'll need to write a proposal and was wondering if there are features/examples that are being sought?

I can see that Shiffman would like hardmaru's sketch-rnn to potentially be integrated in #11, and I can see that pix2pix is being ported in #43. It may be that pix2pix will be difficult due to the depth of the model, but potentially a shallower one could be trained?

In summary, it would be great for the devs and any users browsing to let me know what would be helpful and I'll do my best to add that to the proposal.

Getting Started Developer Guide

Since this project is making use of modules and other ES6 features, not to mention yarn and/or webpack-dev, we should make a getting started developer guide.

LSTM pretrained models

I think it would be useful for demonstrating LSTMs to have the two basic examples running with pre-trained models. Maybe we start by doing a large Shakespeare corpus from Project Gutenberg?

examples and p5

This is related to #27 as well as pull requests #69 and #66. I'm reposting a comment I made on @dariusk's amazing recommendations document! Perhaps we can discuss at our meeting on Thursday.


Regarding the p5 examples, I would love to have a discussion about how to manage this. One of the reasons things skewed towards p5 is that I hope to integrate it into my beginner programming classes with p5, but this is probably not helpful for the larger web audience and adds unnecessary extra stuff where simple vanilla JS makes the most sense. I see a few options here:

  1. Maintain a separate repo with p5 + ml5 examples.
  2. Maintain two sets of examples here with ml5 itself.
  3. Use p5 only where p5 shines (i.e. if we're making heavy use of canvas drawing, webcam capture, loading a CSV, etc.) but otherwise stick with vanilla JS.

Thoughts?
