
node-word2vec

Node.js interface to the Google word2vec tool

What is it?

This is a Node.js interface to the word2vec tool developed at Google Research, which provides an "efficient implementation of the continuous bag-of-words and skip-gram architectures for computing vector representations of words". These representations can be used in a variety of NLP tasks. For further information about the word2vec project, consult https://code.google.com/p/word2vec/.

Installation

Currently, node-word2vec is supported ONLY on Unix-like operating systems (the install step compiles the bundled C sources with make and gcc, which are not available on a stock Windows setup).

Install it via npm:

npm install word2vec

To use it inside Node.js, require the module as follows:

var w2v = require( 'word2vec' );

Usage

API

.word2phrase( input, output, params, callback )

For applications in which certain pairs of words should be treated as a single term (e.g. "Barack Obama" or "New York"), the text corpora used for training should be pre-processed via the word2phrase function. Words which frequently occur next to each other are concatenated with an underscore, e.g. adjacent occurrences of "New" and "York" may be transformed into the single token "New_York".

Internally, this function calls the C command line application of the Google word2vec project. This allows it to make use of multi-threading and preserves the efficiency of the original C code. It processes the texts given by the input text document, writing the output to a file with the name given by output.

The params argument expects a JS object which may contain some of the following keys. For keys that are not supplied, the default values are used.

Key         Description                                                    Default
minCount    discard words appearing less than minCount times               5
threshold   determines the number of phrases; higher values yield fewer    100
debug       sets the debug mode                                            2
silent      suppress any output to the console                             false

After successful execution, the supplied callback function is invoked, receiving the exit code as its first argument.
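A minimal invocation might look as follows. This is an illustrative sketch: the file names corpus.txt and phrases.txt are placeholders, and running it requires the compiled word2vec binaries shipped with the package plus an existing corpus file.

```javascript
var w2v = require( 'word2vec' );

// Concatenate frequently co-occurring words ("New York" -> "New_York").
// 'corpus.txt' and 'phrases.txt' are placeholder file names.
w2v.word2phrase( 'corpus.txt', 'phrases.txt', {
    minCount: 5,    // ignore words seen fewer than 5 times
    threshold: 100, // higher value => fewer phrases
    debug: 2,
    silent: false
}, function( code ) {
    if ( code !== 0 ) {
        console.error( 'word2phrase exited with code ' + code );
        return;
    }
    console.log( 'Phrase file written to phrases.txt' );
});
```

The resulting phrases.txt can then be used as the input corpus for the word2vec function below.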

.word2vec( input, output, params, callback )

This function calls Google's word2vec command line application and finds vector representations for the words in the input training corpus, writing the results to the output file. The output can then be loaded into node via the loadModel function, which exposes several methods to interact with the learned vector representations of the words.

The params parameter expects a JS object optionally containing some of the following keys and associated values. For those missing, the default values are used:

Key         Description                                                            Default
size        dimensionality of the word vectors                                     100
window      maximal skip length between words                                      5
sample      threshold for down-sampling: words appearing with higher frequency     1e-3
            in the training data are randomly down-sampled; useful range is
            (0, 1e-5)
hs          1 = use hierarchical softmax                                           0
negative    number of negative examples; common values are 3 to 10 (0 = not used)  5
threads     number of threads to use                                               12
iter        number of training iterations                                          5
minCount    discard words appearing less than minCount times                       5
alpha       starting learning rate                                                 0.025 (skip-gram), 0.05 (CBOW)
classes     output word classes rather than word vectors                           0 (vectors are written)
debug       sets the debug mode                                                    2
binary      save the resulting vectors in binary mode                              0 (off)
saveVocab   file to which the vocabulary is saved                                  (not saved)
readVocab   file from which the vocabulary is read instead of being constructed    (not read)
            from the training data
cbow        use the continuous bag-of-words model                                  1 (use 0 for skip-gram)
silent      suppress any output to the console                                     false

After successful execution, the supplied callback function is invoked, receiving the exit code as its first argument.
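A sketch of a training call follows. File names and parameter values are illustrative; running it requires a tokenized corpus file and the compiled word2vec binary.

```javascript
var w2v = require( 'word2vec' );

// Train 200-dimensional skip-gram vectors; 'corpus.txt' is a placeholder name.
w2v.word2vec( 'corpus.txt', 'vectors.txt', {
    cbow: 0,       // 0 = skip-gram, 1 = CBOW
    size: 200,
    window: 5,
    negative: 5,
    threads: 4,
    iter: 5,
    minCount: 5
}, function( code ) {
    if ( code !== 0 ) {
        console.error( 'word2vec exited with code ' + code );
        return;
    }
    // vectors.txt can now be passed to w2v.loadModel.
    console.log( 'Training finished.' );
});
```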

.loadModel( file, callback )

This is the main function of the package: it loads a saved model file containing vector representations of words into memory. Such a file can be created with the word2vec function. After the file is successfully loaded, the supplied callback function is fired; following Node.js convention, it has two parameters, err and model. If everything runs smoothly and no error occurred, the first argument is null. The model parameter is a model object holding all data and exposing the properties and methods explained in the Model Object section.

Example:

w2v.loadModel( './vectors.txt', function( error, model ) {
    console.log( model );
});

Sample Output:

{
    getVectors: [Function],
    distance: [Function: distance],
    analogy: [Function: analogy],
    words: '98331',
    size: '200'
}

Model Object

Properties

.words

Number of unique words in the training corpus.

.size

Length of the learned word vectors.

Methods

.similarity( word1, word2 )

Calculates the word similarity between word1 and word2.

Example:

model.similarity( 'ham', 'cheese' );

Sample Output:

0.4907762118841032

.mostSimilar( phrase[, number] )

Calculates the cosine similarity between the supplied phrase (a string which is internally converted to an array of words that together form a phrase vector) and the other word vectors of the vocabulary. The number words with the highest similarity to the supplied phrase are returned. If number is not supplied, the 40 highest-scoring words are returned by default. If none of the words in the phrase appears in the vocabulary, the function returns null. In all other cases, unknown words are dropped from the computation.

Example:

model.mostSimilar( 'switzerland', 20 );

Sample Output:

[
    { word: 'chur', dist: 0.6070252929307018 },
    { word: 'ticino', dist: 0.6049085549621765 },
    { word: 'bern', dist: 0.6001648890419077 },
    { word: 'cantons', dist: 0.5822226582323267 },
    { word: 'z_rich', dist: 0.5671853621346818 },
    { word: 'iceland_norway', dist: 0.5651901750812693 },
    { word: 'aargau', dist: 0.5590524831511438 },
    { word: 'aarau', dist: 0.555220055372284 },
    { word: 'zurich', dist: 0.5401119092258485 },
    { word: 'berne', dist: 0.5391358099043649 },
    { word: 'zug', dist: 0.5375590160292268 },
    { word: 'swiss_confederation', dist: 0.5365824598661265 },
    { word: 'germany', dist: 0.5337325187293028 },
    { word: 'italy', dist: 0.5309218588704736 },
    { word: 'alsace_lorraine', dist: 0.5270204106304165 },
    { word: 'belgium_denmark', dist: 0.5247942780963807 },
    { word: 'sweden_finland', dist: 0.5241634037188426 },
    { word: 'canton', dist: 0.5212495170066538 },
    { word: 'anterselva', dist: 0.5186651140386938 },
    { word: 'belgium', dist: 0.5150383129735169 }
]
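The dist values above are a cosine measure. As an illustrative re-implementation (not the package's internal code), the score between two vectors can be computed like this:

```javascript
// Cosine similarity between two equal-length numeric vectors:
// dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosine( a, b ) {
    var dot = 0.0;
    var na = 0.0;
    var nb = 0.0;
    for ( var i = 0; i < a.length; i++ ) {
        dot += a[ i ] * b[ i ];
        na += a[ i ] * a[ i ];
        nb += b[ i ] * b[ i ];
    }
    return dot / ( Math.sqrt( na ) * Math.sqrt( nb ) );
}

console.log( cosine( [ 1, 0 ], [ 1, 0 ] ) ); // 1 (identical direction)
console.log( cosine( [ 1, 0 ], [ 0, 1 ] ) ); // 0 (orthogonal)
```

This explains why a word's similarity to itself is reported as 1 (up to floating-point error), as in the getNearestWord sample output below.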

.analogy( word, pair[, number] )

For a pair of words in a relationship such as man and king, this function tries to find the term which stands in an analogous relationship to the supplied word. If number is not supplied, by default the 40 highest-scoring results are returned.

Example:

model.analogy( 'woman', [ 'man', 'king' ], 10 );

Sample Output:

[
    { word: 'queen', dist: 0.5607083309028658 },
    { word: 'queen_consort', dist: 0.510974781496456 },
    { word: 'crowned_king', dist: 0.5060923120115347 },
    { word: 'isabella', dist: 0.49319425034513376 },
    { word: 'matilda', dist: 0.4931204901924969 },
    { word: 'dagmar', dist: 0.4910608716969606 },
    { word: 'sibylla', dist: 0.4832698899279795 },
    { word: 'died_childless', dist: 0.47957251302898396 },
    { word: 'charles_viii', dist: 0.4775804990655765 },
    { word: 'melisende', dist: 0.47663194967001704 }
]

.getVector( word )

Returns the learned vector representation for the input word. If word does not exist in the vocabulary, the function returns null.

Example:

model.getVector( 'king' );

Sample Output:

{
    word: 'king',
    values: [
        0.006371254151248689,
        -0.04533821363410406,
        0.1589142808632736,
        ...
        0.042080221123209825,
        -0.038347102017109225
    ]
}

.getVectors( [words] )

Returns the learned vector representations for the supplied words. If words is undefined, i.e. the function is invoked without any arguments, it returns the vectors for all learned words. The returned value is an array of objects which are instances of the WordVector class.

Example:

model.getVectors( [ 'king', 'queen', 'boy', 'girl' ] );

Sample Output:

[
    {
        word: 'king',
        values: [
            0.006371254151248689,
            -0.04533821363410406,
            0.1589142808632736,
            ...
            0.042080221123209825,
            -0.038347102017109225
        ]
    },
    {
        word: 'queen',
        values: [
            0.014399041122817985,
            -0.000026896638109750347,
            0.20398248693190596,
            ...
            -0.05329081648586445,
            -0.012556868376422963
        ]
    },
    {
        word: 'girl',
        values: [
            -0.1247347144692245,
            0.03834108759049417,
            -0.022911846734360187,
            ...
            -0.0798994867922872,
            -0.11387393949666696
        ]
    },
    {
        word: 'boy',
        values: [
            -0.05436531234037158,
            0.008874993957578164,
            -0.06711992414442335,
            ...
            0.05673998568026764,
            -0.04885347925837509
        ]
    }
]

.getNearestWord( vec )

Returns the word which has the closest vector representation to the input vec. The function expects a word vector, either an instance of the WordVector constructor or an array of Number values of length size. It returns the word in the vocabulary whose vector has the highest cosine similarity (reported as dist) to the supplied input vector.

Example:

model.getNearestWord( model.getVector('empire') );

Sample Output:

{ word: 'empire', dist: 1.0000000000000002 }

.getNearestWords( vec[, number] )

Returns the words whose vector representations are closest to the input vec. The first parameter of the function expects a word vector, either an instance of the WordVector constructor or an array of Number values of length size. The second parameter, number, is optional and specifies the number of returned words. If not supplied, a default value of 10 is used.

Example:

model.getNearestWords( model.getVector( 'man' ), 3 )

Sample Output:

[
    { word: 'man', dist: 1.0000000000000002 },
    { word: 'woman', dist: 0.5731114915085445 },
    { word: 'boy', dist: 0.49110060323870924 }
]

WordVector

Properties

.word

The word in the vocabulary.

.values

The learned vector representation for the word, an array of length size.

Methods

.add( wordVector )

Adds the vector of the input wordVector element-wise to the vector in .values.

.subtract( wordVector )

Subtracts the vector of the input wordVector element-wise from the vector in .values.
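The element-wise semantics of add and subtract can be sketched as follows. SimpleWordVector is an illustrative stand-in, not the package's actual WordVector implementation:

```javascript
// Illustrative stand-in showing element-wise add/subtract semantics.
function SimpleWordVector( word, values ) {
    this.word = word;
    this.values = values;
}

SimpleWordVector.prototype.add = function( other ) {
    for ( var i = 0; i < this.values.length; i++ ) {
        this.values[ i ] += other.values[ i ];
    }
    return this;
};

SimpleWordVector.prototype.subtract = function( other ) {
    for ( var i = 0; i < this.values.length; i++ ) {
        this.values[ i ] -= other.values[ i ];
    }
    return this;
};

// The classic analogy arithmetic: king - man + woman.
var v = new SimpleWordVector( 'king', [ 1.0, 2.0 ] );
v.subtract( new SimpleWordVector( 'man', [ 0.5, 0.5 ] ) )
 .add( new SimpleWordVector( 'woman', [ 0.25, 0.25 ] ) );
console.log( v.values ); // [ 0.75, 1.75 ]
```

Combined with getNearestWord, this kind of vector arithmetic reproduces the analogy example shown earlier.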

Unit Tests

Run tests via the command npm test

Build from Source

Clone the git repository with the command

$ git clone https://github.com/Planeshifter/node-word2vec.git

Change into the project directory and compile the C source files via

$ cd node-word2vec
$ make --directory=src

License

Apache v2.

Contributors

aubergene, fremycompany, oskarflordal, pizzacat83, planeshifter


node-word2vec's Issues

Error: spawn ./word2vec ENOENT

Hello, When I start node, I get this error

Error: spawn ./word2vec ENOENT
at _errnoException (util.js:1024:11)
at Process.ChildProcess._handle.onexit (internal/child_process.js:190:19)
at onErrorNT (internal/child_process.js:372:16)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
at Function.Module.runMain (module.js:678:11)
at startup (bootstrap_node.js:187:16)
at bootstrap_node.js:608:3

I really do not know how to fix it

clang: error: the clang compiler does not support '-march=native' (Apple M1)

Thanks for your effort, but when I try to do a "npm install word2vec", I get an error, as mentioned in the title of the issue:
"clang: error: the clang compiler does not support '-march=native'". I also have a computer with an Apple M1 chip, a MacBook Air (M1, 2020). I was trying to figure it out, but it ended up not being so straightforward for me. Could you please give me a hint to help me solve this? Thanks!

Make error postinstall

Hello dev guys! :)

Would appreciate your help with install failure. I got GnuWin32 installed and path is specified in system and user path settings. Still I got this error.

Output (from npm):

`C:\Users\kolyk\WebstormProjects\whislabackend>npm install word2vec

> [email protected] postinstall C:\Users\kolyk\WebstormProjects\whislabackend\node_modules\word2vec
> make --directory=src

make: Entering directory `C:/Users/kolyk/WebstormProjects/whislabackend/node_modules/word2vec/src'
gcc word2vec.c -o word2vec -lm -pthread -O3 -march=native -Wall -funroll-loops -Wno-unused-result -fno-stack-protector
process_begin: CreateProcess(NULL, gcc word2vec.c -o word2vec -lm -pthread -O3 -march=native -Wall -funroll-loops -Wno-unused-result -fno-stack-protector, ...) failed.
make (e=2): The system cannot find the file specified.
make: *** [word2vec] Error 2
make: Leaving directory `C:/Users/kolyk/WebstormProjects/whislabackend/node_modules/word2vec/src'
npm ERR! code ELIFECYCLE
npm ERR! errno 2
npm ERR! [email protected] postinstall: `make --directory=src`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the [email protected] postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\kolyk\AppData\Roaming\npm-cache\_logs\2017-08-20T14_11_30_352Z-debug.log

C:\Users\kolyk\WebstormProjects\whislabackend>

Any ideas?

Thanks

LoadModel memory overflow

Hi there, I trained a model on the google news corpus and was able to successfully create an output to load, however when I use the loadModel function with the output dataset I'm getting a memory overflow from node.

I'm running Node v18 on Linux.

I also tried including the flag for memory allocation in the npm run script, and while it seems to run for a longer period before overflowing, it still doesn't complete with up to 12Gb allocated.

I'm running on a 16Gb RAM, but I was wondering if I'm missing an optimization step. The word embeddings file is only 3Gb.

If there's anything that can be done to better utilize memory I feel like it should work, as I was able to train this model on the same machine.

Any help would be much appreciated! Thanks.

post installation script fails on windows.

I tried to install this package on windows and it failed with following error:

npm ERR! [email protected] postinstall: `make --directory=src`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the [email protected] postinstall script 'make --directory=src'.

npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the word2vec package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     make --directory=src
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs word2vec
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!     npm owner ls word2vec
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     C:\rbws\learning\nlp\projects\nlpnode\npm-debug.log

'make' is not recognized as an internal or external command, in windows 64 bit

make --directory=src

'make' is not recognized as an internal or external command,
operable program or batch file.
npm ERR! Windows_NT 10.0.14393
npm ERR! argv "F:\studies\node\node.exe" "F:\studies\node\node_modules\npm\bin\npm-cli.js" "install" "word2vec"
npm ERR! node v6.11.0
npm ERR! npm v3.10.10
npm ERR! code ELIFECYCLE

npm ERR! [email protected] postinstall: make --directory=src
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] postinstall script 'make --directory=src'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the word2vec package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! make --directory=src
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs word2vec
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls word2vec
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR! F:\git repository\npm-debug.log

"Child process exited with code 126" error

When I ran:
const w2v = require( 'word2vec' );
const corpusFilePath = 'cleared_words.txt';

w2v.word2vec(corpusFilePath, "vectors.txt", { size: 300 }, () => {
console.log("DONE");
});

It returns:
Child process exited with code 126
DONE

leaving vectors.txt unchanged. Is this because I'm running this on MacOS?

Had to remove arr.pop() on line 373 in model.js to get this library working

This line was removing the last item from each vector, which made their lengths different and caused a whole bunch of chaos down the line (multiplying by undefined).

I have no idea why it's there, but I did notice that it works for the example vector.txt in this project. Maybe something to do with \r and \n?

Also I added this:

if(isNaN(words) || isNaN(size)) {
  throw new Error("First line of input text file should be <number of words> <length of vector>. See example data 'vectors.txt' in repo");
}

After this line since that caused me a lot of trouble (I don't think it's mentioned anywhere in the readme).

Thanks for the awesome library!

Why is 1GB of ram required to load a 160mb word vector file?

Are there some pre-computations done which cause this lib to eat up RAM? Because I'm using a 160mb text file of word vectors and the node process is taking up 900mb+ of RAM.

Just wondering whether there is a good reason for this, or whether I should dig about looking for some inefficiencies somewhere.

Thanks

How to successfully load the GoogleNews-vectors-negative300 model?

Hi Philipp,
I downloaded from https://code.google.com/p/word2vec/ the file GoogleNews-vectors-negative300.bin.gz

w2v = require('word2vec');
{ word2vec: [Function: word2vec],
word2phrase: [Function: word2phrase],
loadModel: [Function: loadModel],
WordVector: [Function: WordVector] }

w2v.loadModel("/home/marco/crawlscrape/bashUtilitiesDir/GoogleNews-vectors-negative300.bin", function(err, model) {
... console.log(model);
... });
undefined
TypeError: Cannot read property 'length' of undefined
at /home/marco/node_modules/word2vec/lib/model.js:408:30
at FSReqWrap.wrapper [as oncomplete]

w2v.loadModel("/home/marco/crawlscrape/bashUtilitiesDir/GoogleNews-vectors-negative300.bin", function(err, model) {
... console.log(model);
... });
undefined
TypeError: undefined is not a function
at readOne (/home/marco/node_modules/word2vec/lib/model.js:433:55)
at FSReqWrap.wrapper [as oncomplete]

What do I have to do in order to successfully load the GoogleNews-vectors-negative300 model?

Looking forward to your kind help.
Marco

Child process exited with code null

var w2v = require( 'word2vec' );

w2v.word2vec( __dirname + '/input.txt', __dirname + '/output.txt', {
	cbow: 1,
	size: 200,
	window: 8,
	negative: 25,
	hs: 0,
	sample: 1e-4,
	threads: 20,
	iter: 15,
	minCount: 2
});

The example doesn't seem to work: it only returns Child process exited with code null, and output.txt stays empty.

'make' is not recognized as an internal or external command,

warning "@tensorflow/tfjs > @tensorflow/[email protected]" has unmet peer dependency "seedrandom@^3.0.5".
[4/4] Building fresh packages...
error C:\Users\lenov\Desktop\apps\test\s\a\node_modules\word2vec: Command failed.
Exit code: 1
Command: make --directory=src
Arguments:
Directory: C:\Users\lenov\Desktop\apps\test\s\a\node_modules\word2vec
Output:
'make' is not recognized as an internal or external command,
operable program or batch file.
info Visit https://yarnpkg.com/en/docs/cli/add for documentation about this command.

.mostSimilar() returns array of undefined words for an out-of-dictionary word

As the title says, .mostSimilar() returns array of objects with word = undefined and dist = -1 for an out-of-dictionary word. The array is as long as the number of entries requested in the second argument of mostSimilar. I would expect it to return an empty array or null if the word is not found in the dictionary.

Cheers

Code 126

Hi there - I am running a very basic example, but something seems not to work. I get a code 126 all the time, but not sure what is happening. Here is my code


w2v = require('word2vec');

w2v.word2phrase( 'in.txt', 'out.txt', {
    threshold:100,
    debug:2,
    minCount: 5
}, done);

function done(data)
{
  console.log(data);
}

`mostSimilar` outputs numbers when using Fasttext word vectors

Hi,

First of all, thanks for the awesome work!

I am trying to import the pre-trained files from the fasttext repo: https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md

The model loads without a problem; however, when I try mostSimilar, the most similar words appear to be numbers:

loadedModel.mostSimilar('hi')

> [ { word: '73301', dist: 0.4461598818767161 },
  { word: '266', dist: 0.44462500361860946 },
  { word: '399', dist: 0.44260747560473973 },
  { word: '-0.13061', dist: 0.4250619904094889 },
  { word: '745', dist: 0.4089746546859616 },
  { word: '7', dist: 0.39388342200258686 },
  { word: '233', dist: 0.38675386429631425 },
  { word: '.33347', dist: 0.38672456155896373 },
  { word: '999', dist: 0.3798941950492955 },
  { word: '.5158', dist: 0.3761412428047805 },
  { word: '4785', dist: 0.3756878374324986 },
  { word: '', dist: 0.3753017613199615 },
  { word: '4091', dist: 0.3728785618174816 },
  { word: '0.18393', dist: 0.3702285209309231 },
  { word: '5', dist: 0.3694416515730196 },
  { word: '', dist: 0.3682340927295216 },
  { word: '2', dist: 0.3682152969462404 },
  { word: '68', dist: 0.36721353813091373 },
  { word: '10285', dist: 0.36564681449501635 },
  { word: '', dist: 0.36526450978156066 },
  { word: '014575', dist: 0.36389461240841203 },
  { word: '468', dist: 0.36371019302454455 },
  { word: '-0.00046764', dist: 0.3637013226972051 },
  { word: '.012665', dist: 0.36367885124101007 },
  { word: '142', dist: 0.3636392745394945 },
  { word: '574', dist: 0.36060934864973193 },
  { word: '0.6865', dist: 0.3602319353978014 },
  { word: '91', dist: 0.357913584485305 },
  { word: '53', dist: 0.35790250493633724 },
  { word: '925', dist: 0.3576282053138198 },
  { word: '1942', dist: 0.35588944804722655 },
  { word: '', dist: 0.3558833583782604 },
  { word: '3', dist: 0.3546257354328858 },
  { word: '-0.059739', dist: 0.3546232535404894 },
  { word: '', dist: 0.35400407472165496 },
  { word: '08', dist: 0.3536348589615367 },
  { word: '093', dist: 0.35353088901048624 },
  { word: '0.11736', dist: 0.3529077373455495 },
  { word: '.12359', dist: 0.3511316591255266 },
  { word: '10224', dist: 0.35079793819829935 } ]

I also tried hello; it says it is out of the dictionary. How can I import the Fasttext files so that this won't happen?

npm install error

Tried installing and got this error

distance.c:18:10: fatal error: 'malloc.h' file not found
#include <malloc.h>
         ^
1 error generated.
make: *** [distance] Error 1
npm ERR! Darwin 14.3.0
npm ERR! argv "node" "/usr/local/bin/npm" "install" "word2vec"
npm ERR! node v0.12.2
npm ERR! npm  v2.7.4
npm ERR! code ELIFECYCLE

npm ERR! [email protected] postinstall: `make --directory=src`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the [email protected] postinstall script 'make --directory=src'.
npm ERR! This is most likely a problem with the word2vec package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     make --directory=src
npm ERR! You can get their info via:
npm ERR!     npm owner ls word2vec
npm ERR! There is likely additional logging output above.

npm Test problem

Hello,
It seems there are some files missing for the "npm test":

  1. word2phrase can be called successfully:
    Uncaught AssertionError: expected false to be true

Do you continue this project?

Thanks,
Thomas

Pass text and get vectors as output without the use of files?

Is it possible to call the word2vec method, pass the training data in directly, and get the vectors back directly, without the use of files?

The documentation seems to suggest you can only pass paths to the input and output data files.

Any suggestions to overcome this ?

In Readme

function is written as .word2phrases() but it is .word2phrase()

Vec2Word

Given the example provided, vector('king') - vector('man') + vector('woman'), it would be nice to be able to pass in the raw vector and get the word back.
