
Synaptic

Important: Synaptic 2.x is now under discussion! Feel free to participate.

Synaptic is a JavaScript neural network library for node.js and the browser. Its generalized algorithm is architecture-free, so you can build and train basically any type of first-order or even second-order neural network architecture.

This library includes a few built-in architectures like multilayer perceptrons, multilayer long short-term memory networks (LSTM), liquid state machines, and Hopfield networks, plus a trainer capable of training any given network. The trainer includes built-in training tasks/tests like solving an XOR, completing a Distracted Sequence Recall task, or an Embedded Reber Grammar test, so you can easily test and compare the performance of different architectures.

The algorithm implemented by this library has been taken from Derek D. Monner's paper:

A generalized LSTM-like training algorithm for second-order recurrent neural networks

The equations from that paper are referenced in comments throughout the source code.

Introduction

If you have no prior knowledge about Neural Networks, you should start by reading this guide.

If you want a practical example on how to feed data to a neural network, then take a look at this article.

You may also want to take a look at this article.

Demos

The source code of these demos can be found in this branch.

Getting started

To try out the examples, check out the gh-pages branch:

git checkout gh-pages

Other languages

This README is also available in other languages.

Overview

Installation

In node

You can install synaptic with npm:

npm install synaptic --save
In the browser

You can install synaptic with bower:

bower install synaptic

Or you can simply use the CDN link, kindly provided by CDNjs:

<script src="https://cdnjs.cloudflare.com/ajax/libs/synaptic/1.1.4/synaptic.js"></script>

Usage

var synaptic = require('synaptic'); // this line is not needed in the browser
var Neuron = synaptic.Neuron,
	Layer = synaptic.Layer,
	Network = synaptic.Network,
	Trainer = synaptic.Trainer,
	Architect = synaptic.Architect;

Now you can start to create networks, train them, or use built-in networks from the Architect.

Examples

Perceptron

This is how you can create a simple perceptron:


function Perceptron(input, hidden, output)
{
	// create the layers
	var inputLayer = new Layer(input);
	var hiddenLayer = new Layer(hidden);
	var outputLayer = new Layer(output);

	// connect the layers
	inputLayer.project(hiddenLayer);
	hiddenLayer.project(outputLayer);

	// set the layers
	this.set({
		input: inputLayer,
		hidden: [hiddenLayer],
		output: outputLayer
	});
}

// extend the prototype chain
Perceptron.prototype = new Network();
Perceptron.prototype.constructor = Perceptron;

Now you can test your new network by creating a trainer and teaching the perceptron to learn XOR:

var myPerceptron = new Perceptron(2,3,1);
var myTrainer = new Trainer(myPerceptron);

myTrainer.XOR(); // { error: 0.004998819355993572, iterations: 21871, time: 356 }

myPerceptron.activate([0,0]); // 0.0268581547421616
myPerceptron.activate([1,0]); // 0.9829673642853368
myPerceptron.activate([0,1]); // 0.9831714267395621
myPerceptron.activate([1,1]); // 0.02128894618097928
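Since the sigmoid outputs never reach exactly 0 or 1, a common way to read activations like the ones above as binary XOR answers is to threshold at 0.5 (a trivial sketch, not part of the library):

```javascript
// Interpret a sigmoid activation as a binary answer by thresholding.
function toBit(activation) {
  return activation > 0.5 ? 1 : 0;
}

console.log(toBit(0.0268581547421616)); // 0  (input [0,0])
console.log(toBit(0.9829673642853368)); // 1  (input [1,0])
```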
Long Short-Term Memory

This is how you can create a simple long short-term memory network with input gate, forget gate, output gate, and peephole connections:


function LSTM(input, blocks, output)
{
	// create the layers
	var inputLayer = new Layer(input);
	var inputGate = new Layer(blocks);
	var forgetGate = new Layer(blocks);
	var memoryCell = new Layer(blocks);
	var outputGate = new Layer(blocks);
	var outputLayer = new Layer(output);

	// connections from input layer
	var inputConn = inputLayer.project(memoryCell);
	inputLayer.project(inputGate);
	inputLayer.project(forgetGate);
	inputLayer.project(outputGate);

	// connections from memory cell
	var outputConn = memoryCell.project(outputLayer);

	// self-connection
	var selfConn = memoryCell.project(memoryCell);

	// peepholes
	memoryCell.project(inputGate);
	memoryCell.project(forgetGate);
	memoryCell.project(outputGate);

	// gates (the connection variables are named so they don't shadow
	// the function's input/output parameters)
	inputGate.gate(inputConn, Layer.gateType.INPUT);
	forgetGate.gate(selfConn, Layer.gateType.ONE_TO_ONE);
	outputGate.gate(outputConn, Layer.gateType.OUTPUT);

	// input to output direct connection
	inputLayer.project(outputLayer);

	// set the layers of the neural network
	this.set({
		input: inputLayer,
		hidden: [inputGate, forgetGate, memoryCell, outputGate],
		output: outputLayer
	});
}

// extend the prototype chain
LSTM.prototype = new Network();
LSTM.prototype.constructor = LSTM;

These examples are for explanatory purposes; the Architect already includes multilayer perceptron and multilayer LSTM network architectures.

Contribute

Synaptic is an Open Source project that started in Buenos Aires, Argentina. Anybody in the world is welcome to contribute to the development of the project.

If you want to contribute, feel free to send PRs; just make sure to run npm run test and npm run build before submitting, so that you run all the test specs and build the web distribution files.

Support

If you like this project and you want to show your support, you can buy me a beer with magic internet money:

BTC: 16ePagGBbHfm2d6esjMXcUBTNgqpnLWNeK
ETH: 0xa423bfe9db2dc125dd3b56f215e09658491cc556
LTC: LeeemeZj6YL6pkTTtEGHFD6idDxHBF2HXa
XMR: 46WNbmwXpYxiBpkbHjAgjC65cyzAxtaaBQjcGpAZquhBKw2r8NtPQniEgMJcwFMCZzSBrEJtmPsTR54MoGBDbjTi2W1XmgM

<3


Issues

Creating own squashing function

I want to create my own squashing function. I have tried assigning my function to the hidden layer neurons this way:

    var inputLayer = new Layer(1),
        hiddenLayer = new Layer(3),
        outputLayer = new Layer(1);

    inputLayer.project(hiddenLayer);
    hiddenLayer.project(outputLayer);

    var customFunction = function(x, derivate) {
        // some code
    };

    for (var k in hiddenLayer.list) {
        var neuron = hiddenLayer.list[k];
        neuron.squash = customFunction;
    }

    return new Network({
        input: inputLayer,
        hidden: [hiddenLayer],
        output: outputLayer
    });

but it seems that this function isn't called during the activation of the network at all. The network always returns the same result for every input. When I insert an alert() or console.log() into the function, nothing happens on activation.

I also tried it on an existing function from Synaptic.

    var customFunction = Neuron.squash.TANH;

It works

    // function copied from Neuron.squash.TANH;
    var customFunction = function(x, derivate) {
      if (derivate)
        return 1 - Math.pow(Neuron.squash.TANH(x), 2);
      var eP = Math.exp(x);
      var eN = 1 / eP;
      return (eP - eN) / (eP + eN);
    };

It doesn't work, same as with my custom function.

NaN

I am doing something really wrong :S
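One likely cause (an assumption, not confirmed from the source quoted here): synaptic optimizes networks by compiling them into a hard-coded function, so a squash function assigned afterwards is never called; disabling optimization first, e.g. with something like network.setOptimize(false), may help. The capture problem itself looks like this in plain JavaScript:

```javascript
// Illustration of why reassigning neuron.squash can have no effect:
// if the network compiles itself and captures the squash function once,
// a later reassignment never reaches the compiled path.
function compile(neuron) {
  var squash = neuron.squash; // captured at compile time
  return function (x) { return squash(x); };
}

var neuron = { squash: function (x) { return x; } };
var activate = compile(neuron);

neuron.squash = function () { return 42; }; // reassigned too late
console.log(activate(7)); // 7, not 42
```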

Won't converge for simple data

keyLength = 30;
inputLayer = new Layer(keyLength);

hiddenLayers = [new Layer(keyLength), new Layer(parseInt(keyLength / 2))];

outputLayer = new Layer(1);

inputLayer.project(hiddenLayers[0]);

hiddenLayers[0].project(hiddenLayers[1]);

hiddenLayers[1].project(outputLayer);

trainedNetwork = new Network({
  input: inputLayer,
  hidden: hiddenLayers,
  output: outputLayer
});

iterations = 0;

MSE = 1;

learningRate = 0.3;

while (!(iterations > 20000 || MSE < 0.000005)) {
  loopErrorTotal = 0;
  _.each(trainingData, function(trainingSet) {
    var currentOutput;
    currentOutput = void 0;
    currentOutput = trainedNetwork.activate(trainingSet.input);
    trainedNetwork.propagate(learningRate, [trainingSet.output]);
    return loopErrorTotal += Math.pow(currentOutput[0] - trainingSet.output, 2);
  });
  iterations++;
  MSE = loopErrorTotal / trainingData.length;
  console.log('Iteration: ' + iterations + '\tMSE: ' + MSE);
}

That's my code. My trainingData has only 2 samples in it; I just want some basic convergence so I can get my error management correct. However, my MSE steadily increases instead of decreasing. Not sure what I'm doing wrong. Any thoughts?
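For reference, the MSE bookkeeping in a loop like the one above can be isolated into a helper (a sketch; `net` here is a stub, not a synaptic network):

```javascript
// One training epoch: activate, propagate, and accumulate squared error.
function epoch(net, trainingData, rate) {
  var totalSquaredError = 0;
  trainingData.forEach(function (sample) {
    var out = net.activate(sample.input);
    net.propagate(rate, [sample.output]);
    totalSquaredError += Math.pow(out[0] - sample.output, 2);
  });
  return totalSquaredError / trainingData.length; // MSE for this epoch
}

// Stub "network" that always outputs 0.5 and never learns.
var stub = { activate: function () { return [0.5]; }, propagate: function () {} };
console.log(epoch(stub, [{ input: [], output: 1 }, { input: [], output: 0 }], 0.3)); // 0.25
```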

Performance Function

Hello,

I'm trying to use cross entropy instead of MSE but I am not sure how to change the perform func. Is it built in?

How do I set the output value range?

Let's say I want to train a network like so:
var sinNetwork = new Perceptron(1, 12, 1);

And it is trained to find the sin function - which outputs values between -1 and 1.
How can I get the network to output in that range? Is that possible? Currently it only outputs between zero and one.

I know that I can transform the number, but it feels messy (i.e. train on (Math.sin(x)+1)/2 and convert back with out * 2 - 1).

In addition, what if I want to change the squash function to a linear output unit (i.e. a real number)? I found that I couldn't access those properties directly from the sinNetwork object.
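The transform mentioned above can at least be kept tidy with two small helpers (a sketch; the names are made up):

```javascript
// Map a target in [-1, 1] (e.g. Math.sin output) into the network's
// [0, 1] sigmoid range, and decode the activation back afterwards.
function toUnit(y)   { return (y + 1) / 2; }  // use when building the training set
function fromUnit(o) { return o * 2 - 1; }    // use on the network's output

console.log(toUnit(-1), toUnit(1)); // 0 1
console.log(fromUnit(0.5));         // 0
```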

Liquid State Machine Set Up

Hi,

This is a fantastic project! Good work! I was just wondering whether anyone has thoughts on how to set up liquid state machines and specifically how they work in this module. I understand they are not the focus. Specifically, I was wondering whether there are any ways to determine pool size, connections, and gates. I am finding it hard to dig up (easily digestible) info on these networks.

Any thoughts?

Thanks!

Output/Hidden to Hidden gatings in LSTM-RNN

According to Felix Gers' dissertation [1], gates on memory cells receive connections not only from the input layer but also from the memory cells themselves. However, the Architect currently only projects input-to-input/forget/output gate connections for LSTMs (peepholes aside),

      inputLayer.project(inputGate);
      inputLayer.project(forgetGate);
      inputLayer.project(outputGate);

which means that the neural network remembers/forgets information based only on its current inputs; this could be disastrous for tasks that require long-term memory.
In some other applications, memory cells are even gated by outputs. Besides gating, first-order connections from the output to hidden layers also appear in the literature.

This observation suggests why the Wikipedia language modeling task doesn't give promising results even after hours of training. In an informal test with hidden-layer-to-gate connections enabled, the network was able to reproduce text such as "the of the of the of the of the of ..." on its own. I also trained an LSTM with 70 memory cells on some short paragraphs, and the network can exactly reproduce two or three sentences on its own.

I'm going to run further tests to compare the hidden-to-gates connected LSTMs with input-to-gates connected ones.

[1] Gers, Felix. Long short-term memory in recurrent neural networks. PhD dissertation, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 2001.

I can't test in my browser...

I'm trying to use synaptic in a web page but I have some errors like:
ReferenceError: require is not defined synaptic.js:53:4
ReferenceError: module is not defined layer.js:2:0
TypeError: Neuron is not a constructor layer.js:19:17

so I think there is a problem in my configuration. I looked in the project's "test" directory, where there should be a file test_browser.html... but I can't find it! Was it deleted?

Where can I find an example of web page where the synaptic.js is used?

Thanks!

Perceptron.activate undefined

Uncaught TypeError: Cannot read property '0' of undefined
at Perceptron.activate (eval at (http://localhost:3000/lib/ext/devsynaptic.js?3afe0daa1afdee8ef340e23bc09b4fb8729e94bc:814:23), :548:13)
at Object.Instruction.Train (http://localhost:3000/lib/Network.coffee.js?035a6338280de0bee6bb5004828fdea08605c1a6:46:22)
at :2:5
at Object.InjectedScript._evaluateOn (:895:140)
at Object.InjectedScript._evaluateAndWrap (:828:34)
at Object.InjectedScript.evaluate (:694:21)activate @ VM5256:548Instruction.Train @ Network.coffee:26(anonymous function) @ VM5255:2InjectedScript._evaluateOn @ VM5251:895InjectedScript._evaluateAndWrap @ VM5251:828InjectedScript.evaluate @ VM5251:694

var activate = function(input){
F[1] = input[0];
F[2] = input[1];
F[3] = input[2];
F[4] = input[3];
..
The error occurs at the line that starts with var activate.

[0, 0, 1, 1, 0, 0, 1, 1]
[0, 0, 0, 0, 0, 1, 0, 0]
[0, 1, 0, 1, 0, 0, 0, 1]
[0, 1, 0, 1, 1, 1, 1, 0]
[0, 0, 0, 1, 1, 0, 1, 0]
[0, 0, 1, 1, 1, 0, 0, 1]
[0, 1, 1, 0, 0, 0, 1, 0]
[0, 1, 0, 0, 0, 0, 1, 0]
[0, 1, 0, 0, 1, 0, 0, 0]
[0, 0, 0, 1, 1, 1, 1, 1]
[0, 0, 1, 1, 1, 0, 1, 1]
[0, 1, 0, 1, 1, 0, 0, 1]
[0, 0, 0, 0, 0, 1, 0, 0]
[0, 1, 0, 0, 1, 1, 1, 1]
[0, 1, 0, 1, 1, 0, 1, 1]
[0, 0, 1, 1, 0, 0, 0, 0]
[0, 0, 1, 0, 1, 0, 0, 1]
[0, 0, 0, 1, 0, 1, 1, 1]
[0, 0, 1, 0, 0, 1, 0, 1]
[0, 1, 0, 0, 1, 1, 0, 1]
[0, 0, 1, 1, 0, 1, 0, 0]
[0, 0, 1, 0, 1, 1, 1, 1]
[0, 1, 0, 1, 0, 0, 0, 1]
[0, 0, 1, 1, 0, 0, 0, 1]
[0, 0, 0, 0, 1, 1, 1, 0]
[0, 0, 1, 1, 1, 0, 1, 0]
[0, 0, 1, 0, 0, 1, 0, 0]
[0, 0, 0, 0, 1, 0, 0, 0]
[0, 1, 0, 0, 1, 0, 0, 1]
[0, 0, 1, 1, 1, 0, 0, 1]
[0, 1, 0, 0, 0, 0, 0, 1]
[0, 0, 0, 0, 0, 0, 0, 0]
[0, 1, 0, 1, 0, 0, 0, 0]
[0, 0, 0, 1, 0, 0, 1, 1]
[0, 1, 0, 1, 1, 0, 1, 1]
[0, 1, 0, 0, 1, 1, 1, 0]
[0, 0, 1, 1, 0, 0, 0, 0]
[0, 0, 0, 1, 1, 1, 1, 0]
[0, 1, 0, 0, 1, 1, 1, 0]
[0, 0, 0, 0, 1, 1, 1, 1]
[0, 0, 0, 1, 1, 0, 1, 1]
[0, 1, 1, 0, 0, 0, 1, 1]
[0, 0, 1, 0, 1, 0, 1, 1]
[0, 0, 1, 0, 0, 0, 0, 0]
[0, 0, 0, 1, 1, 1, 1, 0]
[0, 1, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 1, 0, 0, 0]
[0, 1, 0, 0, 1, 0, 0, 0]
[0, 0, 0, 1, 0, 1, 1, 1]
[0, 1, 0, 1, 1, 1, 1, 1]
[0, 1, 0, 0, 0, 1, 1, 1]

My inputset for training.

By the way, awesome project! I had started writing my own in JS, and while doing some research I found this and deleted the code I had written :) The only problem is this error.

Cross validation

Is there a built in way or simple way of cross validating the net while training?
Ie:
Split the dataset to train-dataset and test-dataset - let's say 70% 30% respectively.
Train the network on the train-dataset until it reaches the desired error when tested against the test-dataset.

Thanks in advance.
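There doesn't appear to be a built-in cross-validation option in the Trainer, but the split itself is easy to do by hand (a sketch; shuffling first avoids ordering bias):

```javascript
// Shuffle a dataset and split it into train/test subsets.
function split(dataset, trainFraction) {
  var shuffled = dataset.slice().sort(function () { return Math.random() - 0.5; });
  var cut = Math.floor(shuffled.length * trainFraction);
  return { train: shuffled.slice(0, cut), test: shuffled.slice(cut) };
}

var sets = split([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 0.7);
console.log(sets.train.length, sets.test.length); // 7 3
```

You could then call trainer.train on sets.train and measure the error yourself on sets.test after each round.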

trace.extended should be keyed by connection.ID

It looks like trace.extended should be keyed by the gated connection, not its target neuron. The latter seems to prevent the possibility of gating two distinct connections to the same target neuron - I assume it should be possible...?

Training is incredibly slow compared to BrainJS

Hey folks,

We have a relatively small dataset (<1000) that when we use BrainJS – trains in about 2-4 minutes.

However, with synaptic – the same task takes about 45 minutes or so. We're using a 1 layer Architect.Perceptron on Node 4.2.2. Changing the learning rate / error has given us some minimal speed improvements but definitely not anywhere close to the BrainJS version.

Any thoughts around this? What else should we try?

test network for multiple values

Hello, I'm trying to use an ANN for some traffic-related forecasting (I'm a total newbie at this). To test the functions out, I downloaded USD->EUR currency exchange data so I could have some easy input (in the form of [year, month, day]) and output (in the form of [exchange value]). But even after reading the source code, I'm not sure what the .activate([input]) function does, whether it is the proper function to "ask" the ANN for its forecast on the output, or whether it affects the network (since the next .propagate([output]) call keeps the last .activate as reference).

Since I would like to study how well the ANN predicts 7-10 days into the future (calling .activate for each of those days and studying how the prediction diverges from the real data), I wouldn't want it to modify the network's behavior. I was therefore wondering whether I'm using the wrong function, or whether I should just end the cycle by calling .activate([current day]) before propagating the real data of the current day.

Sorry for the messy question.

Thank you for all your work. I found it a great place to start; I hope to learn a lot through your tools.

Failing Test

Perceptron - XOR input: [1,1] output: 1:
AssertionError: [1,1] did not output 0

Server side implementation very slow

Hi

I implemented the Paint an Image example to run server side, using node canvas for the image data. Everything works like the browser-based example, except that each iteration takes a very long time, up to a couple of seconds per iteration.

Any ideas of why it's so slow?

Amazing project!

Great job @cazala! I like the way you provide the examples; very interesting scenarios, especially Paint an image.

LSTM supports fromJSON

LSTM supports toJSON, but the fromJSON method hasn't been implemented.
Is it possible to implement fromJSON for LSTM?

Stopping trainer on custom conditions

Is there a way to manually stop training? For example, the error gets down to 0.3, but then starts going up again. I would like to stop and save the network to JSON now, while error is still 0.3.

Another example, using LSM some pools are just better than others for some solution. I would like to find these 'good fit' pools. If the error increases during training, then I would like to stop training and try another LSM.

There might be a way to do this by adding a boolean return value to customLog. The two places that use customLog, Trainer.train() and Trainer.workerTrain(), could then check a new flag at lines 50 and 188, in addition to the 'iterations' and 'error' conditions. I could use this in my code, where I derive some external stats from the error value and then make decisions based on them:

trainer.train(tset,{
  iterations: 100000,
  rate: 0.00005,
  error: 0.1,
  customLog: {
      every: 1000,
      do: function(data) {
          //add new data point
          stats.Push( data.error )
          //custom log message
          console.log(data.iterations, 'e',data.error, 's',stats.Skewness(), 'k',stats.Kurtosis())
          //this checks a custom condition, returns true to end training
          if( stats.SOME_STAT() > SOME_LIMIT ){
            return true
          }
    }
  }
})

Thank you for a great project and making it available to others:)
-Max

Saving network to file

I exported my network to JSON using .toJSON() and saved it to a file after running JSON.stringify on it. When I read it back, I used JSON.parse to convert it and loaded it with Network.fromJSON, and I got this error: "Connection Error: Invalid neurons".

has anyone else encountered this?

Any thoughts / docs / examples about reinforcement learning

I'd like to train the network with reinforcement learning. There have been some talk about that:

Maybe you can provide some examples / wiki about that area? I am clearly lacking some knowledge here. In any case, any pointers, examples or snippets regarding implementation of reinforcement learning ANN using synaptic would be great!

Activation order

Sorry, simple question that's not addressed in the docs: does the activation order returned by neurons list them in chronological activation sequence, or in order of most active neurons? Thanks!

Uncaught ReferenceError: i is not defined

Hey guys, nice project you got here. But I'm getting some errors when I try to run just this simple test:

$(document).ready(function() {
var Neuron = synaptic.Neuron,
Layer = synaptic.Layer,
Network = synaptic.Network,
Trainer = synaptic.Trainer,
Architect = synaptic.Architect;

var percep = new Architect.Perceptron(2,3,1);
var trainer = new Trainer(percep);

trainer.XOR();

var trainingSet = [
    {input: [0,0], output: [0]},
    {input: [0,1], output: [1]},
    {input: [1,0], output: [0]},
    {input: [1,1], output: [0]}
];

var callback = function (result) {
    console.log('error:', result.error, 'iterations:', result.iterations, 'time:', result.time);
}

trainer.workerTrain(trainingSet, callback);

});

I get the following error/msgs in the browser:
Resource interpreted as Script but transferred with MIME type text/plain: "blob:http%3A//localhost%3A3000/a985b008-bf1a-4bc1-91c2-8c5fefe667c8".
Uncaught ReferenceError: i is not defined

This is a simple nodejs + express app with a public folder, in which you can find all the js files and index.html.

Can anybody tell me what's wrong? Is it a bug? Or am I using the lib incorrectly?

Help getting going

Newb to ML and NNets. What are some ways I could use synaptic to find hidden relationships in standard json objects?

I made a Stack Overflow question, but it doesn't seem to have been very well received.
I have a long array of objects that are created to track daily actions.

Example:

[
    {name:'workout', duration:'120', enjoy: true, time:1455063275, tags:['gym', 'weights']},
    {name:'lunch', duration:'45', enjoy: false, time:1455063275, tags:['salad', 'wine']},
    {name:'sleep', duration:'420', enjoy: true, time:1455063275, tags:['bed', 'romance']}
]

I'm having a hard time understanding how to use this data in a neural network to predict if future actions would be enjoyable. Additionally, I want to find hidden relationships between various activities.

Not sure how to get the rubber on the road. How do I feed the network my array of objects and read the results?

If anyone can answer this within the context of https://github.com/cazala/synaptic that would be great. It's also super if the answer is a straight machine learning lesson.

Thanks all!
https://stackoverflow.com/questions/35304800/hidden-relationships-with-javascript-machine-learning
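One way to start (an illustration only; the feature choices and scaling are arbitrary, not anything the library prescribes) is to flatten each object into a fixed-length numeric vector:

```javascript
// Turn one activity object into a numeric input vector in [0, 1].
function toInput(activity) {
  return [
    Math.min(Number(activity.duration), 600) / 600,    // duration, capped at 10h
    new Date(activity.time * 1000).getUTCHours() / 23  // hour of day
  ];
}

// `enjoy` would then be the training target rather than an input.
console.log(toInput({ name: 'workout', duration: '120', enjoy: true, time: 1455063275 }));
```

Tags could be added as one-hot columns (one 0/1 input per known tag).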

Min mean square error back propagation for output neurons

On https://github.com/cazala/synaptic/blob/master/src/neuron.js#L119, the error responsibility for an output layer neuron is calculated as the cross-entropy derivative, which may not be optimal for regression tasks where squared error is expected to be minimized.

 // output neurons get their error from the environment
if (isOutput)
  this.error.responsibility = this.error.projected = target - this.activation; // Eq. 10

Under the MMSE criterion, we should change the code to

 // output neurons get their error from the environment
if (isOutput)
  this.error.responsibility = this.error.projected = (target - this.activation) * this.derivative; // Eq. 10

However, I ran some tests and found that the difference between MMSE and MCE (min cross entropy) training is barely observable.
I opened this issue to see if anyone finds MMSE better than MCE, or vice versa.

Non-helpful question.

If I had trained a single network to add, subtract, multiply and/or divide any 32 bit integer - would that be at all interesting or surprising?

My apologies for the ignorant arm-chair question.

Strange JavaScript Syntax

Juan, could you please tell me what that + sign is doing before localStorage in the snippet below? I guess I'm a newbie on this one.

Wiki.time = localStorage.getItem("time") ? +localStorage.getItem("time") : 0;
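That + is the unary plus operator: localStorage stores strings, and unary plus coerces the stored string back to a number (much like Number(x)). A quick demonstration:

```javascript
// Unary plus converts strings (and other values) to numbers.
console.log(+"42");          // 42
console.log(typeof +"42");   // "number"
console.log(+"");            // 0
console.log(+"abc");         // NaN
```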

out of memory

I use synaptic and I'm very satisfied. In one application I have to read data from a .js file of about 400 KB... and I get an "out of memory" error. Is there a way to avoid this?

Did you mean Echo State Networks?

I just briefly looked at your Architect code for Liquid State Machines... I don't see any mention of spiking neuron models anywhere in your code, which is a core basis of an LSM. Assuming you're using sigmoidal activations (or similar) with a "reservoir", I think what you've implemented is closer to an "Echo State Network". Both fall into the class of "Reservoir Computing", with the biggest distinction being that the LSM is a more biologically plausible version using spiking neurons. The other major thing to note is that there are no spatial conditions on the random connection initialization. In ESN/LSM, the neurons sit in a 3D space and the probability of a connection depends on the distance between the neurons, which creates spatially local connections on average. This is another key component.

You should consider naming this something else entirely to reduce confusion. Maybe GatedReservoir?

Random dataset length classification problem

Hi. First of all let me thank you for awesome node project!

I'm not familiar with neural networks and am looking for advice on this question: is it possible to run the learning process on arrays of data with random length, like this:

/**
* Variables in dataset can be strings/ints/floats
* hour_of_day 0-23
* var1 - string 
* var2 - string 
* ...
* varN - string 
*/
var raw1 = {hour_of_day: 11, var1:"random_string_value"};
var raw2 = {hour_of_day: 14, var2:"random_string_value"};
var raw3 = {var1:"random_string_value"};
var raw4 = {hour_of_day: 11, var1:"rndm1", var3:"rndm23"};
var raw5 = {var1:"rndm6", var2:"rndm4", var3:"rndm456", ... , varN:"rndm234"};

//In each raw we have a limited set of possible variables (sometimes we know all the variables, but more often only some of them)

On such datasets we have to predict positive or negative outcome (0 - 1) of event.

Is this possible with neural networks? If not, maybe you can advise something?

How to use standalone() to reduce runtime

Loving synaptic.js so far after just a day playing with it!

I'm wondering how to properly use the standalone function to avoid retraining a network when run through node.js. For example, let's say I set up a network named ganglia in a file called "network.js" and run the following code:

var trainedGanglia = ganglia.standalone()
module.exports = trainedGanglia

If I then create a new file called, "trained.js" and require network.js like so:

var trained = require('./network.js')
trained([0.12, 0.42])

I get the correct output, but, as expected, only after the network is trained in network.js, which doesn't reduce the time it takes to run this file in node.js at all. So I'm not sure what the documentation means when it says that standalone() returns a function with no dependencies on Synaptic. I see how the returned function itself has no dependencies, but I don't understand how that is useful. Could you please explain this further or give an example?

Thank you!

Change rate from schedule function

It would be nice to have the possibility of changing the rate value from inside the schedule function.
Example:

var lastError = 0;
var trainer = new synaptic.Trainer(network);
trainer.train(trainingSet, {..., schedule: {
  every: 1,
  do: function(data){
    if(lastError - data.error > 0){
      trainer.rate *= 1.2;
    }else{
      trainer.rate *= 0.9;
    }
    lastError = data.error;
  }
}});

The value of data.rate seems to be read-only, and setting trainer.rate does not affect the rate used in the training process (e.g. data.rate doesn't change).

Homepage Demo?

Where can I find the code for the demo on the homepage? I inspected the page but couldn't find the script controlling the canvas, let alone the creature.js and world.js scripts. I would really like to see all the code behind the demo; could somebody direct me to the source?

The creation of an optimized or an unoptimized network is not equivalent.

Hi,

The creation of an optimized or an unoptimized network is not equivalent.

For example, if you run the test "Optimized and Unoptimized Networks Equivalency" with var iterations = 10000;, after a while you can see that the results are absolutely not equivalent.
I think this is a serious problem. I have tried to solve it but had no success yet, because I don't understand the optimize system well enough.
Maybe someone changed the unoptimized system without changing the optimized system.
Do you have any idea how to fix it?

Thx

Normalization, De-Normalization?

This project looks amazing, and I haven't played with it yet, but I was curious whether the inputs need to be pre-normalized.

If not, would this make for a good helper or utils library to complement synaptic, or do you have any tips on that?

code has no effect?

In synaptic.js, in the "project" function at line 230, it looks like the connection is created and assigned to a locally scoped variable, so this code has no effect outside the scope of the else block. Maybe the var should be removed?

else {
  // create a new connection
  var connection = new Neuron.connection(this, neuron, weight);
}
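For what it's worth, var is function-scoped rather than block-scoped, so the connection variable is actually visible outside the else block; whether the surrounding code relies on that is another question. A quick demonstration:

```javascript
// `var` declarations are hoisted to function scope, so a variable
// declared inside an else block is still in scope afterwards.
function demo(flag) {
  if (flag) {
    // no assignment on this path
  } else {
    var connection = 'created in else';
  }
  return connection; // legal: `connection` is scoped to demo()
}

console.log(demo(false)); // "created in else"
console.log(demo(true));  // undefined (hoisted declaration, never assigned)
```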

import/export of networks

I hope I'm not missing anything obvious, but I seem to get back networks with no connections after a clone or after export/import to JSON. A network is created, along with the neurons, but the connection array is unpopulated.

I appreciate the work you've already put into this project.

Text - Transformations

Let me start out by saying thank your very much for all the hard work and dedication that you invested for this amazing NN js implementation.

Now to my question:
I'm a bit unclear about hooking up a NN to solve a text-transformation problem.
For example:

Input:

Jesse James
John Kennedy
Martin King

Output: (desired Username)

Jesse_J
John_K
Martin_K

What type of NN would be best suited to solve text transformations (like this & in general), and how would I hook up the neurons to the text input?
Do I have to encode every single letter into an ASCII code and then feed the ASCII code of Letter_1 to Layer_1 in the NN, the ASCII code of Letter_2 to Layer_2, etc.?

Or how would one go about this problem?

Thanks a lot for helping!
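One common approach (an illustration, not an API of the library) is to pad names to a fixed length and scale each character code into [0, 1], one value per input neuron:

```javascript
// Encode a name as a fixed-length vector of normalized char codes.
function encodeName(name, maxLen) {
  var codes = [];
  for (var i = 0; i < maxLen; i++) {
    var c = i < name.length ? name.charCodeAt(i) : 0; // 0-pad short names
    codes.push(c / 255);                              // scale into [0, 1]
  }
  return codes;
}

var v = encodeName('Jo', 4);
console.log(v.length);   // 4
console.log(v[2], v[3]); // 0 0  (padding)
```

For sequence-to-sequence transformations like the username example, a recurrent architecture (e.g. the built-in LSTM) fed one encoded character per timestep is the usual fit.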

Possible memory leak

Hi, I've tried LSTM module via Architect with following configuration:

var LSTM = new synaptic.Architect.LSTM(100, 100, 20);

My algorithm then runs about 450-500 iterations, and at each iteration it activates the network and then propagates the correct output values.

var output = LSTM.activate(HistoricalFrame);
//some code
LSTM.propagate(0.5, PredictionFrame);

After 400-500 iterations it suddenly stops and starts taking about 14 GB of memory. Before this single step it uses about 1 GB.

I don't know if it is caused by the traces or by the optimization, but the network is unusable after that.

I've tested the rest of the code for memory leaks, and I found that the problem is caused by the activate function.

Here is sample code which demonstrates the issue. On my MacBook Pro with an Intel Core i5 and 8 GB of RAM, it stops at step 512 and starts eating memory, up to 14 GB.

var synaptic = require("synaptic");

//Define frames
var HistoricalFrame = [];
var PredictionFrame = [];

var HistoricalFrameSize = 100;
var PredictionFrameSize = 20;

var FrameCount = 25000;

//Create LSTM
console.log("Initializing LSTM...");
var LSTM = new synaptic.Architect.LSTM(HistoricalFrameSize, HistoricalFrameSize, PredictionFrameSize);

console.log("Optimizing LSTM...");
LSTM.optimize();

console.log("Starting prediction...");

//Make predictions
for(var FrameIndex = 0; FrameIndex < FrameCount; FrameIndex++){

    console.log(FrameIndex);

    //Add value to frame(s)
    PredictionFrame.push(Math.random());

    //Move first value from prediction frame to historical frame
    if(PredictionFrame.length > PredictionFrameSize){
        HistoricalFrame.push( PredictionFrame.shift() );
    }

    //Throw away first value from historical frame to keep the max size
    if(HistoricalFrame.length > HistoricalFrameSize)
        HistoricalFrame.shift();

    //Activate LSTM when frames are filled
    if(HistoricalFrame.length == HistoricalFrameSize){

        var output = LSTM.activate(HistoricalFrame);
        LSTM.propagate(0.5, PredictionFrame);

    }

}

Any ideas where the problem could be?

Dynamic Learning Rate

The Trainer is pretty awesome, but I wanted to use a dynamic learning rate. If you agree, @cazala, I can write something up and submit a PR. Still waiting on your feedback for #12, by the way.
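To make the idea concrete, a "dynamic learning rate" could be as simple as a schedule function of the iteration count. A sketch of exponential decay (the function names and the way it would plug into the Trainer are my own assumptions, not the library's actual API):

```javascript
// Exponential-decay learning-rate schedule (illustrative sketch, not
// synaptic's Trainer API): the rate shrinks by a constant factor per
// iteration, which often helps convergence late in training.
function decayingRate(initialRate, decay) {
  return function (iteration) {
    return initialRate * Math.pow(decay, iteration);
  };
}

var rateAt = decayingRate(0.2, 0.99);
var first = rateAt(0);    // 0.2
var later = rateAt(100);  // smaller, roughly 0.073
```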

connected() wrong/misused?

I noticed that line 947 checks:

    if (connected == 'projected')
        connections++;

Should it instead be:

    if (connected.type == 'projected')
        connections++;

?

Also, line 222:

    var connected = this.connected(neuron);
    if (connected) {
    ...

Should this check for type 'projected' as well, or is this intended? The comment on top of connected says:

    // returns true or false whether the neuron is connected to another neuron (parameter)

but that's not what it is doing; something seems wrong. 'selfconnection', 'inputs' and 'gated' are all possible types, but they seem irrelevant anywhere the function is used. Should it just return true in the projected case? selfconnection also seems to be treated specially wherever it's used...

Sound classifying network

I want to use this NN implementation to classify short sounds (one second each at most) into about 30 different classes. I am new to neural networks and was wondering: what's a good network architecture to train on sounds, and how long does it usually take to train a network like this (30 outputs, sound wave as input) on 1000 samples? I want to know if it takes seconds, minutes, hours or days. What's the time complexity of training the network?

For the input I was thinking of sampling the sound wave at different times, or getting the positions of the peaks. For example, if all sounds are under 1 second and I sample every 0.016 seconds (60 fps), then I would have 60 values as input. Is this an approach that could work?
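The sampling idea above can be sketched as a simple downsampling step: average equal-width windows of the raw waveform to get a fixed-size input vector. A sketch (function names are illustrative, not part of synaptic):

```javascript
// Downsample a raw waveform into a fixed-size input vector by averaging
// equal-width windows (illustrative sketch of the sampling idea).
function downsample(samples, targetSize) {
  var windowSize = Math.floor(samples.length / targetSize);
  var result = [];
  for (var i = 0; i < targetSize; i++) {
    var sum = 0;
    for (var j = 0; j < windowSize; j++) {
      sum += samples[i * windowSize + j];
    }
    result.push(sum / windowSize);
  }
  return result;
}

// A 1-second clip at 44.1 kHz reduced to 60 network inputs
var clip = new Array(44100).fill(0).map(function () { return Math.random(); });
var inputs = downsample(clip, 60); // inputs.length === 60
```

Averaging windows, rather than picking single samples, makes the inputs less sensitive to exactly where each window boundary falls.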

Custom Logs in workerTrain

It seems that custom logs don't work properly in workerTrain, while the same inputs (stripping the callback) work just fine with the basic train.

The code used (pastebin, to keep this short):
http://pastebin.com/DJYRe1X5

Executed in the most recent Google Chrome browser, and also tested in Firefox for completeness.
