
rnn-benchmarks

All benchmarks are reported for a host with the following specifications:

  • NVIDIA GeForce GTX TITAN X GPU
  • Intel(R) Xeon(R) CPU E5-2630L v3 @ 1.80GHz
  • CUDA 7.5, cuDNN v5

These benchmarks compare the running time of various recurrent neural networks across deep-learning libraries. Each network (RNN or LSTM) takes as input a 3D tensor of shape batch_size x seq_length x hidden_size, outputs the last hidden state, computes an MSE loss, backpropagates the errors through the network, and performs a simple parameter update (params = params - lr * gradParams). The sequence length is always set to 30; hidden_size sets the size of both the input and output layers of the network.
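As an illustration, the training step being timed can be sketched in plain NumPy. This is a toy stand-in (a single tanh recurrence instead of the real per-library RNN/LSTM primitives, and without the backward pass), just to show the forward / MSE / update cycle described above:

```python
import numpy as np

# Illustrative dimensions matching the benchmark setup.
batch_size, seq_length, hidden_size, lr = 32, 30, 128, 0.01

rng = np.random.RandomState(0)
W = rng.randn(hidden_size, hidden_size) * 0.01       # toy stand-in network
x = rng.randn(batch_size, seq_length, hidden_size)   # the 3D input tensor
target = rng.randn(batch_size, hidden_size)

# forward: feed the tensor step by step, keep only the last hidden state
h = np.zeros((batch_size, hidden_size))
for t in range(seq_length):
    h = np.tanh(x[:, t, :] + h @ W)

loss = np.mean((h - target) ** 2)   # MSE on the last hidden state
# the real benchmarks then backpropagate and apply
# params = params - lr * gradParams
```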

The scripts we ran are available in this repository. For each model, we used the fastest implementation we were able to find on each library; if you are aware of a faster one, please let me know. I've reported results on Theano, Torch and TensorFlow so far, but we will try to include many more libraries in the future (including cuDNN very soon).

The reported train time is the average time needed to run one training step (forward, backward, and update) on a single training example (not a training batch), so smaller is better. We also report compile time, which includes symbolic graph optimizations (Theano and TensorFlow compilation) as well as one forward and backward pass (to allocate memory). While compilation time isn't really a factor in production, it does lengthen the debugging cycle, which is why we report it here.
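Per-example times like those in the tables below can be derived from batch timings along these lines (a minimal sketch; the function and the toy step are illustrative, not taken from the benchmark scripts):

```python
import time
import numpy as np

def per_example_us(step_fn, batch_size, n_warmup=5, n_iters=50):
    """Average cost of one training example in microseconds, given a
    step_fn that runs forward + backward + update on one batch."""
    for _ in range(n_warmup):   # warm-up absorbs one-off allocation/compilation
        step_fn()
    t0 = time.perf_counter()
    for _ in range(n_iters):
        step_fn()
    elapsed = time.perf_counter() - t0
    return elapsed / (n_iters * batch_size) * 1e6

# toy stand-in for a training step on a batch of 32
step = lambda: np.tanh(np.random.randn(32, 128)).sum()
print(per_example_us(step, batch_size=32))
```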

LSTM

The LSTM implementation used for these benchmarks does not use peephole connections between the cell and the gates.
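For reference, a no-peephole LSTM step looks roughly like this in NumPy (an illustrative sketch, not the benchmarked code): the input, forget, and output gates are computed from the input and previous hidden state only, without reading the cell state.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step WITHOUT peephole connections: the gates see only
    [x, h_prev], never the cell state c_prev."""
    z = np.concatenate([x, h_prev], axis=-1) @ W + b   # all four gates at once
    i, f, o, g = np.split(z, 4, axis=-1)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # new cell state
    h = sigmoid(o) * np.tanh(c)                        # new hidden state
    return h, c

batch, hidden = 32, 128
rng = np.random.RandomState(0)
W = rng.randn(2 * hidden, 4 * hidden) * 0.01
b = np.zeros(4 * hidden)
h = c = np.zeros((batch, hidden))
h, c = lstm_step(rng.randn(batch, hidden), h, c, W, b)
```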

Batch Size 32

Hidden Size 128

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 7.46        | 289.6      | 99.1              |
| Torch      | 0.03        | 434.4      | 99.9              |
| TensorFlow | 3.91        | 820.0      | 266.7             |

Hidden Size 512

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 7.59        | 619.4      | 200.9             |
| Torch      | 0.19        | 610.7      | 201.7             |
| TensorFlow | 3.97        | 886.9      | 324.9             |

Hidden Size 1024

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 9.62        | 1013.5     | 324.1             |
| Torch      | 0.69        | 1139.8     | 346.3             |
| TensorFlow | 3.81        | 1329.2     | 562.7             |

Batch Size 128

Hidden Size 128

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 7.38        | 102.9      | 25.6              |
| Torch      | 0.03        | 109.8      | 25.2              |
| TensorFlow | 3.68        | 188.6      | 65.0              |

Hidden Size 512

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 7.50        | 256.0      | 62.8              |
| Torch      | 0.20        | 214.3      | 51.4              |
| TensorFlow | 3.73        | 255.2      | 114.2             |

Hidden Size 1024

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 7.45        | 583.4      | 160.2             |
| Torch      | 0.75        | 558.1      | 112.4             |
| TensorFlow | 3.84        | 592.2      | 238.1             |

RNN

This section benchmarks a simple RNN implementation.
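The recurrence here is (roughly) the classic Elman step h_t = tanh(W_x x_t + W_h h_{t-1} + b). In NumPy, as an illustrative sketch rather than the benchmarked code:

```python
import numpy as np

def rnn_step(x, h_prev, W_x, W_h, b):
    """One step of a plain (Elman) RNN: h_t = tanh(W_x x_t + W_h h_{t-1} + b)."""
    return np.tanh(x @ W_x + h_prev @ W_h + b)

batch, hidden = 32, 128
rng = np.random.RandomState(0)
W_x = rng.randn(hidden, hidden) * 0.01
W_h = rng.randn(hidden, hidden) * 0.01
b = np.zeros(hidden)
h = np.zeros((batch, hidden))
for t in range(30):   # seq_length = 30, as in the benchmarks
    h = rnn_step(rng.randn(batch, hidden), h, W_x, W_h, b)
```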

Batch Size 32

Hidden Size 128

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 4.31        | 104.6      | 30.9              |
| Torch      | 0.05        | 259.53     | 103.06            |
| TensorFlow | 1.64        | 278.4      | 111.5             |

Hidden Size 512

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 4.36        | 275.2      | 102.2             |
| Torch      | 0.05        | 288.2      | 114.6             |
| TensorFlow | 1.62        | 349.7      | 218.4             |

Hidden Size 1024

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 4.44        | 443.8      | 179.5             |
| Torch      | 0.09        | 381.4      | 118.8             |
| TensorFlow | 1.72        | 530.0      | 241.7             |

Batch Size 128

Hidden Size 128

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 4.48        | 45.4       | 13.7              |
| Torch      | 0.08        | 67.7       | 32.7              |
| TensorFlow | 1.70        | 75.5       | 33.6              |

Hidden Size 512

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 4.40        | 79.0       | 23.8              |
| Torch      | 0.09        | 73.5       | 34.2              |
| TensorFlow | 1.63        | 125.6      | 86.8              |

Hidden Size 1024

| Library    | Compile (s) | Train (µs) | Forward only (µs) |
|------------|-------------|------------|-------------------|
| Theano     | 4.38        | 147.8      | 50.3              |
| Torch      | 0.13        | 150.2      | 64.7              |
| TensorFlow | 1.70        | 222.5      | 137.8             |

rnn-benchmarks's People

Contributors

c0nn3r, nicholas-leonard


rnn-benchmarks's Issues

API mismatches with latest TensorFlow

Great benchmark for evaluating RNN units with different frameworks. But the world is changing fast, and TensorFlow has evolved to version 1.4.0rc0. The TensorFlow benchmark depends on some deprecated APIs which have since been removed or replaced.

$ python rnn.py -n basic_lstm -b 32 -l 128 -s 30
basic_lstm
Compiling...
Traceback (most recent call last):
  File "rnn.py", line 59, in <module>
    output, _cell_state = rnn.rnn(cell, x, dtype=tf.float32)
AttributeError: 'module' object has no attribute 'rnn'

Hope you can update the code to follow the latest features of those frameworks. Thanks for your great work.

Benchmark hidden sizes that give better efficiency and bigger batches

Thank you for your RNN benchmarking work! Hidden size is somewhat arbitrary, so there is virtually no reason (other than being really tight on memory) to make it, say, 500 instead of 512. The underlying GEMMs in RNNs/LSTMs usually run more efficiently with power-of-two sizes, so it would be good to have benchmarks for hidden sizes 128, 512, and 1024. Also, people often use mini-batches bigger than 20, so benchmarking batch sizes of 32/64 also makes sense.

Support for Android platform

Is there any RNN or LSTM benchmark available for Android? Any help regarding this would be highly appreciated.

LICENSE

You need to add an open-source LICENSE.

I recommend BSD or MIT.

Add your name and your organization's name to the copyright notice in LICENSE.txt.

I cannot do this for you as it must be done with your commit.

Also, please send me an email so I know your email address.

Add versions of libraries in the comparisons

Since the libraries are very actively maintained and improved upon, can you add the versions of the libraries used for the comparisons? (Unless I'm somehow missing this information on the README page.)

caffe rnn

Can you add a Caffe RNN model to the tests?
