
stephantul / somber

Recursive Self-Organizing Map/Neural Gas.

License: MIT License

Python 100.00%
Topics: kohonen, som, recurrent-neural-networks, unsupervised, machine-learning, recsom, neural-gas, ng, plsom, cython

somber's Issues

Batching for recurrent SOMs

All recurrent SOMs would benefit from some kind of batching scheme. I currently have batched code, but it doesn't work. My hunch is that batching the recurrent/recursive/merging SOM is non-trivial because there is no way of making sure the concurrently processed batches converge on the same BMU.

Because we take the mean over each batch, updates will generally become noisier as the batch size increases.

Two starting ideas:

  1. One solution could be to initialize the context weights and weights of the SOM to values which allow for convergence (e.g., weights skewed in such a way that the concurrently processed batches all move in the same direction). However, this entails setting the parameters in advance, which rather defeats the purpose of using a SOM.

  2. A second solution would be to start with a small batch size, let the SOM converge to something stable first, and then gradually increase the batch size (see the sketch below).
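
A minimal sketch of the second idea, as a warm-up schedule. The helper and its parameters (`growth_factor`, `grow_every`, and so on) are purely illustrative, not part of SOMBER's API:

    def batch_size_schedule(n_epochs, start_size=1, max_size=128,
                            growth_factor=2, grow_every=5):
        """Yield one batch size per epoch, growing every `grow_every` epochs."""
        size = start_size
        for epoch in range(n_epochs):
            if epoch > 0 and epoch % grow_every == 0:
                size = min(size * growth_factor, max_size)
            yield size

    # Usage: run one epoch per yielded size, e.g. 1, 1, 1, 1, 1, 2, ..., 4, ...
    for epoch, batch_size in enumerate(batch_size_schedule(50)):
        ...  # one epoch of batched training with this batch size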

Add PLSOM2

The PLSOM2 is an interesting candidate for addition to SOMBER, for the following reasons:

  • We already have PLSOM.
  • It is supposed to improve on PLSOM by making it less sensitive to outliers.

It would require changing the update routine and the parameter-update routine in the PLSOM (a sketch of the changed scaling is below).
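
A minimal sketch of what the changed scaling might look like, assuming (my reading of the PLSOM2 paper) that it normalizes the quantization error by the diameter of the input region observed so far, instead of PLSOM's running maximum error. All names here are illustrative, not SOMBER's implementation:

    import numpy as np

    class PLSOM2Scaling:
        """Hypothetical PLSOM2-style error scaling (not SOMBER's code)."""

        def __init__(self, dim):
            # Bounding box of all inputs observed so far.
            self.lo = np.full(dim, np.inf)
            self.hi = np.full(dim, -np.inf)

        def epsilon(self, x, quantization_error):
            # Grow the bounding box with the new input.
            self.lo = np.minimum(self.lo, x)
            self.hi = np.maximum(self.hi, x)
            diameter = np.linalg.norm(self.hi - self.lo)
            if diameter == 0.0:
                return 0.0
            # A single outlier only grows the box once; it does not pin the
            # normalizer the way PLSOM's running maximum error does.
            return min(1.0, quantization_error / diameter)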

Add Gamma SOM

The Merge SOM is working, so we could perhaps implement the Gamma SOM, which uses a generalization of the Merge SOM context function (see here; behind a paywall, unfortunately).

The Merge SOM best matching unit (BMU) calculation takes into account the BMU at the previous time step, together with its context weight, as follows:

    # Mix the previous BMU's weight and context weight into the new context.
    context = (1 - self.beta) * self.weights[prev_bmu] + self.beta * self.context_weights[prev_bmu]

As the context weights reflect the previous activation, you can use a Merge SOM to "step back in time".

Generalizing this, the Gamma SOM defines a BMU function which takes the k previous BMUs into account explicitly (so the Merge SOM can be defined as a Gamma SOM with k = 1). This makes the Gamma SOM computationally more expensive than the Merge SOM, but also more powerful.
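
One possible reading of that generalization is sketched below. It only follows the issue's description (k explicit previous BMUs, each with its own context weights and mixing coefficient); the actual Gamma SOM recursion in the paper may differ, and none of these names are SOMBER's API:

    import numpy as np

    def gamma_contexts(weights, context_weights, betas, prev_bmus):
        """Compute one context vector per previous time step.

        weights         : (n_units, dim) array of unit weights
        context_weights : list of k (n_units, dim) context weight arrays
        betas           : list of k mixing coefficients
        prev_bmus       : indices of the k previous BMUs, most recent first

        With k = 1 this reduces to the Merge SOM context shown above.
        """
        return [
            (1 - beta) * weights[bmu] + beta * ctx[bmu]
            for beta, ctx, bmu in zip(betas, context_weights, prev_bmus)
        ]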

Adding the Gamma SOM will require relatively few changes, but I don't currently have any need for it.
