Visualization Toolbox for Long Short Term Memory networks (LSTMs)

License: BSD 3-Clause "New" or "Revised" License


Visual Analysis for State Changes in RNNs

More information about LSTMVis, an introduction video, and the link to the live demo can be found at lstm.seas.harvard.edu

Also check out our new work on Sequence-to-Sequence models on github or the live demo at http://seq2seq-vis.io/

Changes in V2.1

  • update to Python 3.7+ (thanks to @nneophyt)

Changes in V2

  • new design and server backend
  • discrete zooming for the hidden-state track
  • added annotation tracks for meta-data and predictions
  • added training and extraction workflow for TensorFlow
  • client is now ES6 and D3 v4
  • some performance enhancements on the client side
  • added Keras tutorial here (thanks to Mohammadreza Ebrahimi)

Install

Please use Python 3.7 or later to install LSTMVis.

Clone the repository:

git clone https://github.com/HendrikStrobelt/LSTMVis.git; cd LSTMVis

Install python (server-side) requirements using pip:

python -m venv venv3
source venv3/bin/activate
pip install -r requirements.txt

Download and unzip the example dataset(s) into <LSTMVis>/data/05childbook:

Children Book - Gutenberg - 2.2 GB

Parens Dataset - 10k small - 0.03 GB

Start the server:

source venv3/bin/activate
python lstm_server.py -dir <datadir>

For the example dataset, use python lstm_server.py -dir data

Open your browser at http://localhost:8888 - et voilà!

Adding Your Own Data

If you want to train a model on your own data first, please read the Training document. If you already have your own data at hand, adding it to LSTMVis is easy. You only need three files:

  • HDF5 file containing the state vectors for each time step (e.g. states.hdf5)
  • HDF5 file containing a word ID for each time step (e.g. train.hdf5)*
  • Dict file containing the mapping from word ID to word (e.g. train.dict)*

A schematic representation of the data:

Data Format

*If you don't have these files yet, but a space-separated .txt file of your training data instead, check out our text conversion tool
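For orientation, the three files can be produced from raw arrays in a few lines of Python. This is a minimal sketch, assuming h5py and NumPy are installed; the array sizes, the dataset paths (word_ids, states1), and the .dict line format (one "word id" pair per line) are illustrative assumptions, so check the config file section and the text conversion tool for the exact conventions.

```python
import h5py
import numpy as np

# Toy data (illustrative): 100 time steps, hidden states of size 200.
word_ids = np.random.randint(0, 1000, size=(100,))     # one word ID per time step
states = np.random.randn(100, 200).astype(np.float32)  # one state vector per time step
id2word = {i: "word_%d" % i for i in range(1000)}

# states.hdf5: state vectors; the dataset path ('states1') must match lstm.yml
with h5py.File("states.hdf5", "w") as f:
    f.create_dataset("states1", data=states)

# train.hdf5: word ID for each time step (path 'word_ids' must match lstm.yml)
with h5py.File("train.hdf5", "w") as f:
    f.create_dataset("word_ids", data=word_ids)

# train.dict: one "word id" pair per line (assumed format)
with open("train.dict", "w") as f:
    for idx in sorted(id2word):
        f.write("%s %d\n" % (id2word[idx], idx))
```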

Data Directory

LSTMVis scans all subdirectories of <datadir> for config files named lstm.yml. A typical <datadir> might look like this:

<datadir>
├── paren              <--- project directory
│   ├── lstm.yml       <--- config file
│   ├── states.hdf5    <--- states for each time step
│   ├── train.hdf5     <--- word ID for each time step
│   └── train.dict     <--- mapping word ID -> word
├── fun ..

Config File

A simple example of an lstm.yml is:

name: children books  # project name
description: children book texts from the Gutenberg project # little description

files: # assign files to reference name
  states: states.hdf5 # HDF5 files have to end with .h5 or .hdf5 !!!
  train: train.hdf5 # word ids of training set
  words: train.dict # dict files have to end with .dict !!

word_sequence: # defines the word sequence
  file: train # HDF5 file
  path: word_ids # path to table in HDF5
  dict_file: words # dictionary to map IDs from HDF5 to words

states: # section to define which states of your model you want to look at
  file: states # HDF5 files containing the state for each position
  types: [
        {type: state, layer: 1, path: states1}, # type={state, output}, layer=[1..x], path = HDF5 path
        {type: state, layer: 2, path: states2},
        {type: output, layer: 2, path: output2}
  ]
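To make the indirection concrete, here is a rough sketch of how the reference names under files: get resolved; this is an illustration, not the actual server code. The CONFIG string repeats part of the example above so the snippet is self-contained, and PyYAML is assumed to be installed.

```python
import yaml  # PyYAML, assumed installed

# Part of the example config above, inlined as a string for a self-contained demo.
CONFIG = """
files:
  states: states.hdf5
  train: train.hdf5
  words: train.dict
word_sequence:
  file: train
  path: word_ids
  dict_file: words
"""

config = yaml.safe_load(CONFIG)
files = config["files"]

# Other sections refer to reference names, which resolve to actual filenames:
word_file = files[config["word_sequence"]["file"]]       # 'train' -> 'train.hdf5'
dict_file = files[config["word_sequence"]["dict_file"]]  # 'words' -> 'train.dict'

assert word_file.endswith((".h5", ".hdf5"))  # HDF5 files must end with .h5/.hdf5
assert dict_file.endswith(".dict")           # dict files must end with .dict
```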

Intrigued? Here is more.

Check out our documents about:

Credits

LSTMVis is a collaborative project of Hendrik Strobelt, Sebastian Gehrmann, Bernd Huber, Hanspeter Pfister, and Alexander M. Rush at Harvard SEAS.


lstmvis's Issues

model/get_states.lua:78: bad argument #1 to 'copy'

Getting the above error when I attempt to execute get_states.lua, on my own data. Here is what I do before executing get_states.lua:
First, I download the tiny-Shakespeare dataset from jcjohnson's GitHub repo, torch-rnn, and split it into two datasets: train_tiny-shakespeare.txt and validation_tiny-shakespeare.txt.

Then, I run preprocess like so:
python model/preprocess.py data/tinyshakespeare/train_tiny-shakespeare.txt \
    data/tinyshakespeare/validation_tiny-shakespeare.txt 50 64 \
    data/tinyshakespeare/convert/tiny-shakespeare

Then, I run main.lua to train on this data, like so:
th model/main.lua -rnn_size 128 -word_vec_size 64 -num_layers 2 \
    -epochs 50 -data_file data/tinyshakespeare/convert/tiny-shakespeare.hdf5 \
    -val_data_file data/tinyshakespeare/convert/tiny-shakespeareval.hdf5 \
    -gpuid 0 -savefile cv/tinyshakespeare \
    | tee train-tinyshakespeare.log

Then, I run get_states.lua like so:
th model/get_states.lua \
    -data_file data/tinyshakespeare/convert/tiny-shakespeare.hdf5 \
    -checkpoint_file cv/tinyshakespeare_epoch20.00_420.16.t7 \
    -output_file data/reads/tinyshakespeare_states.h5

This is where I get the error.

create test script

The script takes a config file (threshold, selected cells) and a binary ground truth, and evaluates precision and recall.

Possible minor bug in Keras instruction

In the Keras instructions:

# Reshape y_train: 
y_train_tiled = numpy.tile(y_train, (num_time_steps,1))
y_train_tiled = y_train_tiled.reshape(len(y_train), num_time_steps , 1)

y_train_tiled should be transposed between tiling and reshaping if we want to assign the same label to the time-distributed outputs of a single example.
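The shape mix-up is easy to demonstrate with a toy array; the labels and num_time_steps below are made up for illustration:

```python
import numpy as np

y_train = np.array([10, 20, 30])  # toy labels, one per example
num_time_steps = 4

# Buggy: reshaping without transposing interleaves labels across examples.
buggy = np.tile(y_train, (num_time_steps, 1)).reshape(len(y_train), num_time_steps, 1)

# Fixed: transpose between tiling and reshaping, so each example keeps its own label.
fixed = np.tile(y_train, (num_time_steps, 1)).T.reshape(len(y_train), num_time_steps, 1)

print(buggy[0].ravel())  # -> [10 20 30 10]  labels from other examples leak in
print(fixed[0].ravel())  # -> [10 10 10 10]  same label at every time step
```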

Add a dataset with multiple different models

From the comments

It is good to see that the hidden state dynamics can reflect patterns corresponding to ranges of the input. But IMO, a more important thing is to understand the advantages of LSTMs over simple RNNs. For example, we could first compare the hidden dynamics of an LSTM and a simple RNN, to test the hypothesis that an LSTM can remember longer dependencies than a simple RNN. Then we could watch the behavior of the input/output/forget gates to help us understand why this happens.

Sure, this is a good suggestion. Maybe we can add one shared dataset with a simple RNN, an LSTM (with gates), and a GRU. Would this let you look for the examples you are interested in?

use LSTMVis with OpenNMT

Not sure if this is addressed in the docs or elsewhere; I'll close this as soon as possible if I find out...

I did some development recently with OpenNMT. Is there a way to use this visualization tool for OpenNMT models, especially since both OpenNMT and LSTMVis are developed by the same lab?

Thanks in advance!

error when starting server

  1. data in /LSTMVis/lstmdata/05childbook
  2. starting server with:
    root@b6d19b96355f:/LSTMVis# python server.py -dir /LSTMVis/lstmdata/05childbook
    File "server.py", line 267
    print args
    ^
    SyntaxError: Missing parentheses in call to 'print'

Any ideas what I'm missing? @HendrikStrobelt Thx!

Python module helper_functions is missing

Dear developers,
I ran the "install" instructions from the main page and downloaded the parens data as a test.
However, I get the following error, indicating a missing module:

Traceback (most recent call last):
  File "lstm_server.py", line 8, in <module>
    from lstmdata.data_handler import LSTMDataHandler
  File "/home/dnieuwenhuijse/Software/LSTMVis/lstmdata/data_handler.py", line 11, in <module>
    import helper_functions as hf
ModuleNotFoundError: No module named 'helper_functions'

I couldn't find the module in your GitHub repo either, so I am guessing that you removed it by accident?

Kind regards,

David

Work without meta

Traceback (most recent call last):
  File "server.py", line 321, in <module>
    create_data_handlers(args.dir)
  File "server.py", line 307, in create_data_handlers
    data_handlers[p_dir] = LSTMDataHandler(directory=p_dir, config=config)
  File "/home/ubuntu/Projects/LSTMVis/lstmdata/data_handler.py", line 79, in __init__
    if self.config['meta']:
KeyError: 'meta'
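A defensive lookup would sidestep the crash when lstm.yml has no meta: section. This is an illustrative workaround, not the project's actual fix; the toy config below stands in for a loaded lstm.yml.

```python
# Toy config without a 'meta' section, standing in for a loaded lstm.yml.
config = {"name": "paren", "files": {"states": "states.hdf5"}}

# self.config['meta'] raises KeyError when the key is absent;
# .get() returns a default instead, so the check degrades gracefully.
meta = config.get("meta", {})
if meta:
    print("meta tracks:", meta)
else:
    print("no meta section; skipping annotation tracks")
```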

Show class labels if available

I have a tagging task where each token has a bunch of annotations associated with it (the most important being the predicted and actual class). It would be great if these annotations could be shown below the main visualization. This way, I could more easily see, e.g., which neurons lead to a wrong decision.

View Visualization for my HDF5 Dataset

Using the Training documentation I was able to generate the required HDF5 files (states, topk, saliency, ...). However, after starting the server (python lstm_server.py), I am not able to visualize this dataset running on my localhost. Is there any documentation for viewing my dataset (HDF5 files) in the visualization tab?

Clarification on states HDF5 input

Hi All,

Came across this project and it looks really helpful. I'm looking to use it to explore LSTM in a current project.

I'm a bit confused about the states.h5 input file. I imagine that the state vector from each timestep is the h_t output of the LSTM, for each t (as in the notation of this LSTM description). Is this correct?

If so, the states are dependent on the input sequence. So I can put any sequence into the LSTM and then get these hidden states, but wouldn't this tool also need the states from all of my other input sequences for some of the analysis? Or does it only look at one input at a time? Is there an easy way to swap between or compare activations across input sequences?

Thanks for the clarification, and for contributing this tool!

Make more generalizable

Hey guys,
Fantastic project!

As part of my research, I am looking into extending your platform outside of the NLP domain. In other words, I would like to be able to explore the activation states:

  • for n-dimensional vectors rather than word ids
  • with respect to the predicted and target value
  • of any RNN

Would you be interested in collaborating on something of the kind?

SwaggerValidationError 'states,words' is not of type 'array'

Upon running python lstm_server.py -dir data I get the error below. I'm running Python 2.7.15 in a virtualenv and I haven't altered any files or directories save for creating /data and downloading the two corpus packs. Any help would be greatly appreciated, thank you :)

  File "lstm_server.py", line 169, in <module>
    app.add_api('lstm_server.yaml')
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/connexion/app.py", line 168, in add_api
    validator_map=self.validator_map)
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/connexion/api.py", line 108, in __init__
    validate_spec(spec)
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/validator20.py", line 97, in validate_spec
    validate_apis(apis, bound_deref)
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/validator20.py", line 310, in validate_apis
    idx=idx,
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/validator20.py", line 243, in validate_parameter
    validate_default_in_parameter(param, deref)
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/validator20.py", line 172, in validate_default_in_parameter
    deref=deref,
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/common.py", line 29, in wrapper
    sys.exc_info()[2])
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/common.py", line 24, in wrapper
    return method(*args, **kwargs)
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/validator20.py", line 155, in validate_value_type
    validate_schema_value(schema=deref(schema), value=value, swagger_resolver=swagger_resolver)
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/swagger_spec_validator/ref_validators.py", line 101, in validate_schema_value
    create_dereffing_validator(swagger_resolver)(schema, resolver=swagger_resolver).validate(value)
  File "/Users/pkarpenko/Documents/Dev/python/LSTMVis/venv/lib/python2.7/site-packages/jsonschema/validators.py", line 130, in validate
    raise error
swagger_spec_validator.common.SwaggerValidationError: ('\'states,words\' is not of type \'array\'\n\nFailed validating \'type\' in schema:\n    {\'collectionFormat\': \'csv\',\n     \'default\': \'states,words\',\n     \'description\': "list of required data dimensions, like state values (\'states\'), token values (\'words\'),\\n  or meta information (\'meta_XYZ\')\\n",\n     \'in\': \'query\',\n     \'items\': {\'type\': \'string\'},\n     \'name\': \'dims\',\n     \'required\': False,\n     \'type\': \'array\'}\n\nOn instance:\n    \'states,words\'', <ValidationError: "'states,words' is not of type 'array'">)

There may be a typo in the simple example of an lstm.yml

In the simple example of an lstm.yml:

files: # assign files to reference name
  states: cbt_epoch10.h5 # HDF5 files have to end with .h5 or .hdf5 !!!
  word_ids: train.h5     # <--- maybe you mean train: train.h5 ?
  words: words.dict # dict files have to end with .dict !!

the following word_sequence section cannot find the train file:

word_sequence: # defines the word sequence
  file: train # HDF5 file

I am not sure about this, please confirm.
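If the reporter is right, the corrected files section would read as follows. This is based only on the issue text, not on a confirmed fix by the maintainers:

```yaml
files:
  states: cbt_epoch10.h5  # HDF5 files have to end with .h5 or .hdf5
  train: train.h5         # renamed from word_ids, so `file: train` resolves
  words: words.dict       # dict files have to end with .dict
```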

Reload

I suggest adding a "reload data" feature. If I start the server and afterwards change the data, I have to restart it to see the changes. It would be great if this could be done without reloading the entire server.

Great tool btw :)

Match View not working!!

The match view is not working (both fast and precise). Even with a small subset of my dataset, it does not seem to work!

ModuleNotFoundError: No module named 'helper_functions'

When I run this execution:
python lstm_server.py -dir data

Getting this error below. Ideas on how to fix?

Traceback (most recent call last):
  File "lstm_server.py", line 8, in <module>
    from lstmdata.data_handler import LSTMDataHandler
  File "/home/user/Documents/GitHub/LSTMVis/lstmdata/data_handler.py", line 11, in <module>
    import helper_functions as hf
ModuleNotFoundError: No module named 'helper_functions'

check for discretization

Wrong:

cs[cs >= activation_threshold_corrected] = 1
cs[cs < activation_threshold_corrected] = -1

Right:

all_below = cs < activation_threshold_corrected
cs[:, :] = 1
cs[all_below] = -1
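The bug bites when the threshold exceeds 1: values clamped to 1 by the first mask are then caught by the second mask and flipped to -1. A small NumPy demonstration, with toy values and the variable name shortened from activation_threshold_corrected to threshold:

```python
import numpy as np

threshold = 1.5
cs = np.array([2.0, 0.3, 1.7])

# Buggy sequential masking: 2.0 and 1.7 become 1, but 1 < 1.5,
# so the second mask immediately flips them to -1.
buggy = cs.copy()
buggy[buggy >= threshold] = 1
buggy[buggy < threshold] = -1

# Correct: compute the mask once, before overwriting any values.
fixed = cs.copy()
all_below = fixed < threshold
fixed[:] = 1
fixed[all_below] = -1

print(buggy)  # -> [-1. -1. -1.]  every entry wrongly discretized to -1
print(fixed)  # -> [ 1. -1.  1.]
```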

package versions

Please include the versions of the various packages for which this has been tested and works, particularly torch.
