Minerva: a fast and flexible system for deep learning

Latest News

  • We've removed many of Minerva's dependencies and made it easier to build. In most cases, all you need is:

    ./build.sh

    Please see the wiki page for more information.

  • Minerva's tutorial and API documentation have been released!

  • Minerva has migrated to dmlc, where you can find many other awesome machine learning repositories!

  • Minerva now uses cuDNN v2. Please download and use the new library.

Overview

Minerva is a fast and flexible tool for deep learning. It provides an ndarray programming interface, just like NumPy. Python and C++ bindings are both available. The resulting code can be run on CPU or GPU. Multi-GPU support is very easy; please refer to the examples to see how the multi-GPU setting is used.

Quick try

After building and installing Minerva and the owl package (the Python binding) as described in Install Minerva, run ./run_owl_shell.sh in Minerva's root directory and enter:

>>> x = owl.ones([10, 5])
>>> y = owl.ones([10, 5])
>>> z = x + y
>>> z.to_numpy()

The result will be a 10x5 array filled with the value 2. Minerva supports many NumPy-style ndarray operations; please see the API documentation for more information.
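
As a quick sanity check, the same computation can be reproduced in plain NumPy and compared against the owl result. This sketch uses only operations shown above (owl.ones, +, and to_numpy):

>>> import numpy as np
>>> z = (owl.ones([10, 5]) + owl.ones([10, 5])).to_numpy()  # forces evaluation
>>> z.shape
(10, 5)
>>> np.allclose(z, 2.0)  # every entry equals 2
True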

Features

  • N-D array programming interface and easy integration with NumPy

    >>> import numpy as np
    >>> x = np.array([1, 2, 3])
    >>> y = owl.from_numpy(x)
    >>> y += 1
    >>> y.to_numpy()
    array([ 2.,  3.,  4.], dtype=float32)

    More can be found in the API cheatsheet.

  • Automatic parallel execution

    >>> x = owl.zeros([256, 128])
    >>> y = owl.randn([1024, 32], 0.0, 0.01)

    The above x and y will be executed concurrently. How is this achieved?

    See Feature Highlight: Data-flow and lazy evaluation

  • Multi-GPU, multi-CPU support:

    >>> owl.set_device(gpu0)
    >>> x = owl.zeros([256, 128])
    >>> owl.set_device(gpu1)
    >>> y = owl.randn([1024, 32], 0.0, 0.01)

    The above x and y will be executed on two cards simultaneously. How is this achieved?

    See Feature Highlight: Multi GPU Training
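
For concreteness, the last two features can be combined in one short script. This is an illustrative sketch, not code from the owl examples; it assumes device handles are created with owl.create_gpu_device, the helper used in Minerva's tutorials:

    # Sketch: independent work placed on two GPUs.
    # Assumption: owl.create_gpu_device(i) returns a device handle
    # (gpu0/gpu1 in the snippet above are such handles).
    import owl

    gpu0 = owl.create_gpu_device(0)
    gpu1 = owl.create_gpu_device(1)

    owl.set_device(gpu0)
    x = owl.zeros([256, 128])             # enqueued on GPU 0
    owl.set_device(gpu1)
    y = owl.randn([1024, 32], 0.0, 0.01)  # enqueued on GPU 1, runs concurrently

    # Both calls return immediately: Minerva records them in a dataflow
    # graph and evaluates lazily. Converting to NumPy forces evaluation.
    print(x.to_numpy().shape, y.to_numpy().shape)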

Tutorial and Documents

  • Tutorials and high-level concepts can be found in our wiki page
  • A step-by-step walkthrough of the MNIST example can be found here
  • We also built a tool to read Caffe's configuration file directly and train. See the document.
  • API documentation can be found here

Performance

We will keep this section updated with the latest performance we can achieve.

Training speed

Training throughput (images/second):

            AlexNet   VGGNet   GoogLeNet
1 card       189.63    14.37       82.47
2 cards      371.01    29.58      160.53
4 cards      632.09    50.26      309.27
  • The performance is measured on a machine with 4 GTX Titan cards.
  • On each card, we use a minibatch size of 256, 24, and 120 for AlexNet, VGGNet, and GoogLeNet respectively. The total minibatch size therefore grows with the number of cards (for example, training AlexNet on 4 cards uses a total minibatch size of 1024).
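
The per-card scaling can be read directly off the table; as a small worked check, here are the AlexNet speedups computed from the numbers above:

    # Speedup relative to a single card, AlexNet column from the table.
    speed = {1: 189.63, 2: 371.01, 4: 632.09}  # images/second
    for cards, s in sorted(speed.items()):
        print("%d card(s): %.2fx" % (cards, s / speed[1]))
    # 1 card(s): 1.00x, 2 card(s): 1.96x, 4 card(s): 3.33x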

End-to-end training

We also provide some end-to-end training code in the owl package, which can load Caffe's model files and perform training. Note that Minerva is not the same tool as Caffe, and we are not focusing on that part of the logic; in fact, we implemented these mainly to exercise Minerva's powerful and flexible programming interface (a Caffe-like network trainer takes only around 700-800 lines of Python; a minimal sketch of the kind of update step involved follows the figure below). Here is the training error over time compared with Caffe. Note that Minerva can finish GoogLeNet training in less than four days with four GPU cards.

[Figure: training error over time, Minerva vs. Caffe]
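
To give a flavor of how compact such a trainer can be with an ndarray interface, here is a minimal SGD step for a single linear layer, written in plain NumPy. It is an illustrative sketch, not code from the owl package; given Minerva's NumPy-like API, the owl version looks very similar:

    import numpy as np

    # One SGD step for a linear layer with squared loss: the kind of
    # update a Caffe-like trainer repeats layer by layer.
    def sgd_step(W, x, target, lr=0.01):
        pred = np.dot(W, x)                # forward pass
        grad = np.outer(pred - target, x)  # d(0.5*||Wx - t||^2)/dW
        return W - lr * grad               # parameter update

    W = 0.01 * np.random.randn(10, 5).astype(np.float32)
    x = np.random.randn(5).astype(np.float32)
    t = np.zeros(10, dtype=np.float32)
    W = sgd_step(W, x, t)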

Testing error rate

We trained several models from scratch with Minerva to verify correctness. The following table shows the error rates of the different networks under different testing settings.

                    AlexNet   VGGNet   GoogLeNet
single view top-1     41.6%     31.6%       32.7%
multi view top-1      39.7%     30.1%       31.3%
single view top-5     18.8%     11.4%       11.8%
multi view top-5      17.5%     10.8%       11.0%
  • AlexNet is trained with the solver, except that we did not use multi-group convolution.
  • GoogLeNet is trained with the quick_solver.
  • We did not train VGGNet from scratch; we only converted the model into Minerva's format for testing.

The trained models can be found at the following links: AlexNet, GoogLeNet, VGGNet.

You can download the trained models and try them on your own machine using the net_tester script.

Next Plan

  • Remove the Boost library dependency by using Cython. (DONE)
  • Large scale LSTM example using Minerva.
  • Easy support for user-defined new operations.

License and support

Minerva is provided under the Apache v2 open source license.

You can use the "issues" tab on GitHub to report bugs. For non-bug issues, please send an email to [email protected]. You can subscribe to the discussion group: https://groups.google.com/forum/#!forum/minerva-support.

Wiki

For more information on how to install, use or contribute to Minerva, please visit our wiki page: https://github.com/minerva-developers/minerva/wiki
