
udaykamal20 / h-mem


This project is a fork of igitugraz/h-mem.


Code for Limbacher, T. and Legenstein, R. (2020). H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks

License: GNU General Public License v3.0



H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks

This is the code used in the paper "H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks" for training H-Mem on a single-shot image association task and on the bAbI question-answering tasks.

(Figure: H-Mem schema)

Setup

You need TensorFlow to run this code. We tested it on TensorFlow version 2.1. Additional dependencies are listed in environment.yml. If you use Conda, run

conda env create --file=environment.yml

to install the required packages and their dependencies.
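The name of the created environment is defined in the name: field of environment.yml. Assuming it is called h-mem (an assumption; check the file for the actual name), activate it before running any of the scripts:

conda activate h-mem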

Usage

Single-shot associations with H-Mem

To start training on the single-shot image association task, run

python image_association_task.py

Use the command line argument --delay to set the between-image delay (in the paper we used delays ranging from 0 to 40). Run the following command

python image_association_task_lstm.py

to start training the LSTM model on this task (the default value for the between-image delay is 0; you can change it with the command line argument --delay).
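For example, to train both models with the largest delay used in the paper (using the same --flag=value style as the other examples in this README), you could run

python image_association_task.py --delay=40

python image_association_task_lstm.py --delay=40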

Question answering with H-Mem

Run the following command

python babi_task_single.py

to start training on bAbI task 1 in the 10k training examples setting. Set the command line argument --task_id to train on other tasks. You can try different model configurations by changing various command line arguments. For example,

python babi_task_single.py --task_id=4 --memory_size=20 --epochs=50 --logging=1

will train the model with an associative memory of size 20 on task 4 for 50 epochs. The results will be stored in results/.

Memory-dependent memorization

In our extended model we added a 'read-before-write' step. This model is used if the command line argument --read_before_write is set to 1. Run the following command

python babi_task_single.py --task_id=16 --epochs=250 --read_before_write=1

to start training on bAbI task 16 in the 10k training examples setting (note that we trained the extended model for 250 epochs instead of 100). You should get an accuracy of about 100% on this task. Compare this to the original model, which does not solve task 16, by running the following command

python babi_task_single.py --task_id=16 --epochs=250
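To give an intuition for what the read-before-write step adds, the following NumPy sketch shows a generic Hebbian key-value memory with outer-product writes and matrix-vector reads. It is only an illustration under simplifying assumptions, not the model implemented in this repository; in particular, the way the retrieved content enters the write below is a placeholder for the more involved mechanism in the paper.

import numpy as np

# Minimal sketch of a Hebbian key-value memory (illustrative only).
class HebbianMemory:
    def __init__(self, dim, learning_rate=1.0):
        self.M = np.zeros((dim, dim))  # associative weight matrix
        self.eta = learning_rate

    def read(self, key):
        # Retrieve the value currently associated with the key.
        return self.M @ key

    def write(self, key, value):
        # Plain Hebbian write: strengthen the key-value association.
        self.M += self.eta * np.outer(value, key)

    def write_read_before_write(self, key, value):
        # 'Read-before-write': query the memory first, then let the retrieved
        # content enter the stored value (placeholder combination; the
        # extended model in the paper differs in detail).
        retrieved = self.read(key)
        self.M += self.eta * np.outer(value + retrieved, key)

# Usage: store an association and recall it.
rng = np.random.default_rng(0)
key, value = rng.normal(size=8), rng.normal(size=8)
mem = HebbianMemory(dim=8)
mem.write(key, value)
print(mem.read(key))  # approximately value scaled by ||key||^2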

References

Limbacher, T. and Legenstein, R. (2020). H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks.

h-mem's People

Contributors

  • tlimbacher
