
sony / nnabla

Neural Network Libraries

Home Page: https://nnabla.org/

License: Apache License 2.0

CMake 0.35% Python 62.23% C++ 28.97% Shell 0.06% Jupyter Notebook 5.81% Batchfile 0.30% Makefile 0.26% Dockerfile 0.08% Cython 1.93%

nnabla's Introduction

Neural Network Libraries

Neural Network Libraries is a deep learning framework that is intended to be used for research, development and production. We aim to have it running everywhere: desktop PCs, HPC clusters, embedded devices and production servers.

Installation

Installing Neural Network Libraries is easy:

pip install nnabla

This installs the CPU version of Neural Network Libraries. GPU acceleration can be added by installing the CUDA extension with the following command.

pip install nnabla-ext-cuda110

The above command is for CUDA Toolkit 11.0.

The other supported CUDA packages are listed here.

CUDA versions 10.x, 9.x, and 8.x are no longer supported.

For more details, see the installation section of the documentation.

Building from Source

See Build Manuals.

Running on Docker

For details on running on Docker, see the installation section of the documentation.

Features

Easy, flexible and expressive

The Python API, built on the Neural Network Libraries C++11 core, gives you flexibility and productivity. For example, a two-layer neural network with a classification loss can be defined in the following five lines of code (hyperparameters are enclosed in <>).

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

x = nn.Variable(<input_shape>)
t = nn.Variable(<target_shape>)
h = F.tanh(PF.affine(x, <hidden_size>, name='affine1'))
y = PF.affine(h, <target_size>, name='affine2')
loss = F.mean(F.softmax_cross_entropy(y, t))

Training can be done by:

import nnabla.solvers as S

# Create a solver (parameter updater)
solver = S.Adam(<solver_params>)
solver.set_parameters(nn.get_parameters())

# Training iteration
for n in range(<num_training_iterations>):
    # Setting data from any data source
    x.d = <set data>
    t.d = <set label>
    # Initialize gradients
    solver.zero_grad()
    # Forward and backward execution
    loss.forward()
    loss.backward()
    # Update parameters by computed gradients
    solver.update()
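
For reference, here is a filled-in, runnable version of the two snippets above. The shapes and hyperparameters (784 inputs, 128 hidden units, 10 classes, batch size 32, learning rate 1e-3) are illustrative assumptions, and random arrays stand in for a real data source:

import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S

batch_size, input_dim, hidden_size, n_classes = 32, 784, 128, 10

# Two-layer network with classification loss
x = nn.Variable((batch_size, input_dim))
t = nn.Variable((batch_size, 1))
h = F.tanh(PF.affine(x, hidden_size, name='affine1'))
y = PF.affine(h, n_classes, name='affine2')
loss = F.mean(F.softmax_cross_entropy(y, t))

# Solver and training loop
solver = S.Adam(alpha=1e-3)
solver.set_parameters(nn.get_parameters())
for n in range(100):
    x.d = np.random.randn(batch_size, input_dim)            # dummy inputs
    t.d = np.random.randint(0, n_classes, (batch_size, 1))  # dummy labels
    solver.zero_grad()
    loss.forward()
    loss.backward()
    solver.update()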

The dynamic computation graph enables flexible runtime network construction. Neural Network Libraries supports both static and dynamic graph paradigms through the same API.

x.d = <set data>
t.d = <set label>
drop_depth = np.random.rand(<num_stochastic_layers>) < <layer_drop_ratio>
with nn.auto_forward():
    h = F.relu(PF.convolution(x, <hidden_size>, (3, 3), pad=(1, 1), name='conv0'))
    for i in range(<num_stochastic_layers>):
        if drop_depth[i]:
            continue  # Stochastically drop a layer
        h2 = F.relu(PF.convolution(h, <hidden_size>, (3, 3), pad=(1, 1),
                                   name='conv%d' % (i + 1)))
        h = F.add2(h, h2)
    y = PF.affine(h, <target_size>, name='classification')
    loss = F.mean(F.softmax_cross_entropy(y, t))
# Backward computation (can also be done in dynamically executed graph)
loss.backward()

You can differentiate to any order with nn.grad.

import nnabla as nn
import nnabla.functions as F
import numpy as np

x = nn.Variable.from_numpy_array(np.random.randn(2, 2)).apply(need_grad=True)
x.grad.zero()
y = F.sin(x)
def grad(y, x, n=1):
    dx = [y]
    for _ in range(n):
        dx = nn.grad([dx[0]], [x])
    return dx[0]
dnx = grad(y, x, n=10)
dnx.forward()
print(np.allclose(-np.sin(x.d), dnx.d))
dnx.backward()
print(np.allclose(-np.cos(x.d), x.g))

# Show the registry status
from nnabla.backward_functions import show_registry
show_registry()

Command line utility

Neural Network Libraries provides the command line utility nnabla_cli for easier use of the libraries.

nnabla_cli provides the following functionality:

  • Training, evaluation, or inference with NNP files.
  • Dataset and parameter manipulation.
  • File format conversion (see the example below):
    • From ONNX to NNP and NNP to ONNX.
    • From TensorFlow to NNP and NNP to TensorFlow.
    • From NNP to TFLite.
    • From ONNX or NNP to NNB or C source code.

For more details, see the Documentation.
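
For example, file format conversion is a single convert invocation, with the target format inferred from the file extensions. The file names below are placeholders; the pattern mirrors the converter commands quoted in the issues further down:

nnabla_cli convert model.onnx model.nnp
nnabla_cli convert model.nnp model.onnx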

Portable and multi-platform

  • Python API can be used on Linux and Windows
  • Most of the library code is written in C++14, deployable to embedded devices

Extensible

  • Easy to add new modules like neural network operators and optimizers
  • The library allows developers to add specialized implementations (e.g., for FPGAs). For example, we provide the CUDA backend as an extension, which speeds up computation on GPUs.

Efficient

  • High speed on a single CUDA GPU
  • Memory optimization engine
  • Multiple GPU support

Documentation

https://nnabla.readthedocs.org

Getting started

  • A number of Jupyter notebook tutorials can be found in the tutorial folder. We recommend starting with by_examples.ipynb for a first working example of Neural Network Libraries and python_api.ipynb for an introduction to the Neural Network Libraries API.

  • We also provide some more sophisticated examples in the nnabla-examples repository.

  • C++ API examples are available in examples/cpp.

Contribution guide

Deep learning technology is progressing rapidly, and researchers and developers often want to add their own custom features to a framework. NNabla is well suited to this: the architecture of Neural Network Libraries is clean and quite simple, and you can add new features very easily with the help of our code template generation system. See the following link for details.

License & Notice

Neural Network Libraries is provided under the Apache License, Version 2.0.

It also depends on some open source software packages. For more information, see LICENSES.

Citation

@misc{hayakawa2021neural,
      title={Neural Network Libraries: A Deep Learning Framework Designed from Engineers' Perspectives}, 
      author={Takuya Narihira and Javier Alonsogarcia and Fabien Cardinaux and Akio Hayakawa
              and Masato Ishii and Kazunori Iwaki and Thomas Kemp and Yoshiyuki Kobayashi
              and Lukas Mauch and Akira Nakamura and Yukio Obuchi and Andrew Shin and Kenji Suzuki
              and Stephen Tiedmann and Stefan Uhlich and Takuya Yashima and Kazuki Yoshiyama},
      year={2021},
      eprint={2102.06725},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

nnabla's People

Contributors

akiohayakawa-sony, chao-xue, enzhu-xu, fabiencardinaux, kazukiyoshiyama-sony, kenji-suzuki-s, krishnaw10, qiiajia, qizhen-xue, stefanuhlich-sony, takuma-sony, takuseno, takuyanarihira, takuyayashima, te-andrewshin, te-basavarajmurali, te-fanxl, te-hidehogomi, te-hiroaki-mikami, te-naokiide, te-poornimabiradar, te-shiyuyang, te-stephentiedemann, te-wangshi, te-woodyli, te-yongweisun, te-yoshiyukikobayashi, tomonobutsujikawa, yasunarizhashimoto, yukiooobuchi

nnabla's Issues

Does nnabla support the Raspberry Pi?

Hi

I tried to install nnabla on a Raspberry Pi 3 B+ (Raspbian OS).

The install log is attached for reference.

Could you tell me whether nnabla supports the Raspberry Pi? If so, how should it be installed?

nnablafail.txt

Reliance on Untrusted Inputs in a Security Decision

File: /src/nbla/logger.cpp

const char *homedir = getenv("HOME");
if (homedir == nullptr) {
  struct passwd *pw = getpwuid(getuid());
  if (pw != nullptr) {
    homedir = pw->pw_dir;
  }
}

Environment variables are untrusted inputs: an attacker who can set them controls their content and length, and the same variable can be set more than once. Relying on them in a security decision can lead to attacks (CWE-807, CWE-20).

PS: This issue was identified during code review; no PoC or exploit was created.

OSX Version

Hi guys, I'm a fan of your work; it looks very exciting.
However, I wonder whether this project plans to support macOS.
If it is not planned, I'd like to know how I can help with a Mac version.

A lot of people in academia use Linux and Mac.

Are there plans for a portable UI?

NnpExporter raises an exception when parameter_type="included"

https://github.com/sony/nnabla/blob/master/python/src/nnabla/utils/converter/nnabla/exporter.py#L112-L119

The function write_nntxt() should be _write_nntxt()

Test:

nnabla_cli convert full_dense.nnp full_dense_test.nnp --nnp-parameter-nntxt

Output:

2021-03-02 09:53:23,709 [nnabla][INFO]: Initializing CPU extension...
NNabla command line interface (Version:1.17.0.dev2, Build:210301214346)
Importing full_dense.nnp
 Expanding full_dense.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/cli/cli.py", line 147, in cli_main
    return_value = args.func(args)
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/cli/convert.py", line 108, in convert_command
    nnabla.utils.converter.convert_files(args, args.files, output)
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/commands.py", line 301, in convert_files
    return _export_from_nnp(args, nnp, output)
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/commands.py", line 137, in _export_from_nnp
    NnpExporter(nnp, args.batch_size, parameter_type).execute(output)
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/nnabla/exporter.py", line 155, in execute
    self._export_nnp(ofile)
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/nnabla/exporter.py", line 141, in _export_nnp
    self._export_files(tmpdir)
  File "/usr/local/lib/python3.8/dist-packages/nnabla/utils/converter/nnabla/exporter.py", line 113, in _export_files
    self.write_nntxt('{}/network.nntxt'.format(outdir), self._nnp)
AttributeError: 'NnpExporter' object has no attribute 'write_nntxt'

examples/cpp/mnist_collection/GNUmakefile missing nnabla_utils

running
cpp/mnist_collection$ make lenet

results:

/usr/bin/ld: /tmp/cccYCqnu.o: in function `lenet_training_with_dynamic_graph(nbla::Context)':
train_lenet_classifier.cpp:(.text+0x3c11): undefined reference to `nbla::utils::load_parameters(nbla::ParameterDirectory&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
/usr/bin/ld: train_lenet_classifier.cpp:(.text+0x5b3a): undefined reference to `nbla::utils::save_parameters(nbla::ParameterDirectory&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
/usr/bin/ld: /tmp/cccYCqnu.o: in function `lenet_training_with_static_graph(nbla::Context)':
train_lenet_classifier.cpp:(.text+0x6876): undefined reference to `nbla::utils::load_parameters(nbla::ParameterDirectory&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
/usr/bin/ld: train_lenet_classifier.cpp:(.text+0x8473): undefined reference to `nbla::utils::save_parameters(nbla::ParameterDirectory&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
collect2: error: ld returned 1 exit status

This is due to the library nnabla_utils.so missing from the GNUmakefile.

How to get or use network from nnp

Hi.

The Windows version of Neural Network Console provides an h5 file and Python code, while the Cloud version provides just an nnp file. I know the nnp file contains the h5 parameters and the network as protocol buffers, but I don't know how to use the network in the nnp file.

Usually we can get the Python code like below.

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

def network(x, y, test=False):
    # Input:x -> 1,28,28
    # MaxPooling -> 1,14,14
    h = F.max_pooling(x, (2,2), (2,2))
    # Affine -> 100
    h = PF.affine(h, (100,), name='Affine')
    # ReLU
    h = F.relu(h, True)
    # Affine_2 -> 1
    h = PF.affine(h, (1,), name='Affine_2')
    # Sigmoid
    h = F.sigmoid(h)
    # BinaryCrossEntropy
    h = F.binary_cross_entropy(h, y)
    return h

And we don't need BinaryCrossEntropy, so we delete it. The nnp file's network, however, still contains it.

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
from nnabla.utils import nnp_graph

nnp = nnp_graph.NnpLoader('./result.nnp')
graph = nnp.get_network('MainValidation')
y = graph.outputs
print(y)

This code outputs the following.

{'BinaryCrossEntropy': <Variable((64, 1), need_grad=True) at 0x114f926d8>}

I would like to use the nnp file in Python like below.

import nnabla as nn

nn.load_parameters('./result.nnp')
graph = nn.get_network('MainValidation')
x = graph.inputs['x']
y = graph.outputs['y']
y.forward(x, test=True)
y.d

How can I use the network in the nnp file?

I know nnabla_cli can do it, but I don't know how.

Thanks
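
For what it's worth, a minimal sketch along the lines of the NnpLoader code above (assuming the network in result.nnp is named 'MainValidation', with random data standing in for real input) would be:

import numpy as np
from nnabla.utils import nnp_graph

nnp = nnp_graph.NnpLoader('./result.nnp')
graph = nnp.get_network('MainValidation', batch_size=1)
# Look up the input/output variables from the loaded graph
x = list(graph.inputs.values())[0]
y = list(graph.outputs.values())[0]
x.d = np.random.rand(*x.shape)  # replace with real input data
y.forward()
print(y.d)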

onnx import problem with pytorch Linear

Hi,
If I try to use nnabla_cli to convert a simple ONNX file exported from PyTorch, it fails with the following error:
ValueError: Unsupported attribute alpha was specified at Gemm

This seems to be caused by torch.nn.Linear() exporting Gemm with alpha (1.0), which seems correct for ONNX but is unsupported by the nnabla_cli converter.

Is there a workaround for this?

ONNX Gemm import fails

Hi, I found suspicious behavior (possibly a bug) in the ONNX importer.

Description

When an ONNX Gemm node (produced by a PyTorch Linear layer) is placed
in parallel branches, the parameters of Gemm are not correctly converted to NNP.

(As a result, the imported NnpNetwork does not produce correct values for its inputs.)

To Reproduce

By executing the script below, we can observe the bug.

import subprocess

import numpy
import torch
import nnabla

from nnabla.utils.nnp_graph import NnpLoader


class TorchNet(torch.nn.Module):
    def __init__(self):
        super(TorchNet, self).__init__()
        self.l0 = torch.nn.Linear(3, 4)
        self.l1 = torch.nn.Linear(3, 4)

    def forward(self, x):
        h0 = self.l0(x)
        h1 = self.l1(x)
        return torch.cat((h0, h1), 1)


x = numpy.random.random((1, 3)).astype(numpy.float32)
tx = torch.from_numpy(x)
tnet = TorchNet()
print('# PyTorch')
print(tnet(tx))

torch.onnx.export(tnet, tx, 'test.onnx', opset_version=7)
subprocess.run(['nnabla_cli', 'convert', 'test.onnx', 'test.nnp'])

nnp = NnpLoader('test.nnp')
nnet = nnp.get_network('torch-jit-export', batch_size=1)
nx = list(nnet.inputs.values())[0]
ny = list(nnet.outputs.values())[0]
nx.d = x
ny.forward(clear_buffer=True)
print('# NNabla')
print(ny.d)

By printing nnet.variables, we can confirm l0.weight was not correctly loaded into NnpNetwork:

{'5': <Variable((1, 4), need_grad=False) at 0x7f5c38dea778>,
 '6': <Variable((1, 4), need_grad=False) at 0x7f5c38dea868>,
 '7': <Variable((1, 8), need_grad=False) at 0x7f5c38dea8b8>,
 'input': <Variable((1, 3), need_grad=False) at 0x7f5c38dea4f8>,
 'input_batchmatmul': <Variable((1, 4), need_grad=False) at 0x7f5c38dea688>,
 'l0.bias': <Variable((4,), need_grad=False) at 0x7f5c38dea598>,
 'l0.bias_reshape': <Variable((1, 4), need_grad=False) at 0x7f5c38dea728>,
 'l0.weight': None,
 'l1.bias': <Variable((4,), need_grad=False) at 0x7f5c38dea5e8>,
 'l1.bias_reshape': <Variable((1, 4), need_grad=False) at 0x7f5c38dea818>,
 'l1.weight': <Variable((4, 3), need_grad=False) at 0x7f5c38dea548>}

Produced results are as below:

# PyTorch
tensor([[-0.1046, -0.1339,  0.4336, -0.9163,  0.3735, -0.8550, -0.1840,  0.8109]],
       grad_fn=<CatBackward>)
# NNabla (differ from original)
[[ 0.36888292 -0.91139686 -0.20640835 -0.08116969  0.37347493 -0.8550156
  -0.1840314   0.8109182 ]]

Investigation

ONNX file is parsed into NNabla Protobuf (nnp.network_dict['torch-jit-export'])
as below (excerpt):

...

function {
  name: "torch-jit-export/Gemm_0/BatchMatmul_0"
  type: "BatchMatmul"
  input: "input"
  input: "l0.weight"
  output: "input_batchmatmul"
  batch_matmul_param {
    transpose_b: true
  }
}

...

function {
  name: "torch-jit-export/Gemm_1/BatchMatmul_1"
  type: "BatchMatmul"
  input: "input"
  input: "l1.weight"
  output: "input_batchmatmul"
  batch_matmul_param {
    transpose_b: true
  }
}

...

Here the output name input_batchmatmul conflicts between BatchMatmul_0 and BatchMatmul_1.
This conflict leads to unexpected behavior in
NnpNetwork._functions_in_forward_order (only BatchMatmul_1 is picked up
for variable loading).

So, I modified the Gemm output name to fix this issue.
I have attached a PR (#667) to show this change.
(Although the same type of bug may occur for other nodes,
I only investigated the Gemm node for this issue.)

With this change, I confirmed the script works with the expected behavior.

# PyTorch
tensor([[ 0.1004, -0.7396,  0.1541, -0.2690, -0.3904,  0.4919, -0.3348,  0.6096]],
       grad_fn=<CatBackward>)
# NNabla (approximately same with original)
[[ 0.10039233 -0.7396248   0.15409058 -0.26902562 -0.39042282  0.49185294
  -0.33481792  0.6096146 ]]

Environment

  • OS: Ubuntu 18.04.4 LTS
  • Version: NNabla 1.8.0 (and above)

Regards.

ONNX/TensorFlow export failure

In nnabla 1.20.0, using nnabla_cli to export nnp to ONNX or TensorFlow encounters the error below:

  File "~/anaconda3/envs/tfenv/lib/python3.6/site-packages/nnabla/utils/cli/cli.py", line 147, in cli_main
    return_value = args.func(args)
  File "~/anaconda3/envs/tfenv/lib/python3.6/site-packages/nnabla/utils/cli/convert.py", line 109, in convert_command
    nnabla.utils.converter.convert_files(args, args.files, output)
  File "~/anaconda3/envs/tfenv/lib/python3.6/site-packages/nnabla/utils/converter/commands.py", line 306, in convert_files
    return _export_from_nnp(args, nnp, output)
  File "~/anaconda3/envs/tfenv/lib/python3.6/site-packages/nnabla/utils/converter/commands.py", line 168, in _export_from_nnp
    OnnxExporter(nnp, args.batch_size).execute(output)
  File "~/anaconda3/envs/tfenv/lib/python3.6/site-packages/nnabla/utils/converter/onnx/exporter.py", line 3605, in execute
    self.create_graph()
  File "~/anaconda3/envs/tfenv/lib/python3.6/site-packages/nnabla/utils/converter/onnx/exporter.py", line 3538, in create_graph
    nl = self.set_nodes(f)
  File "~/anaconda3/envs/tfenv/lib/python3.6/site-packages/nnabla/utils/converter/onnx/exporter.py", line 3417, in set_nodes
    return op_type(func)
  File "~/anaconda3/envs/tfenv/lib/python3.6/site-packages/nnabla/utils/converter/onnx/exporter.py", line 665, in BaseConvolution
    if new_input_shape != input_shape:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

Failed Unit test on macOS

Environment: Python 2.7.13, Cython==0.25.2, numpy==1.13.0

I tried two cases:

  1. Build NNabla with Clang, with yamachu@882bade
  2. Build NNabla with gcc:

export CC=/usr/local/bin/gcc # not clang
export CXX=/usr/local/bin/g++-7 # not clang
cmake ../ ; make -j # ... then the same flow as Linux

Then modify library path
https://gist.github.com/yamachu/c5815d4ca63c712cf916347dbf922f1f#file-nnabla-how-to-run-on-mac

The same test case failed in both environments.
Is Mac not supported yet?

Detail

Test Result
https://gist.github.com/yamachu/c5815d4ca63c712cf916347dbf922f1f

FAILURES...

=================================== FAILURES ===================================
________ test_reduction_forward_backward[sum-ctx0-Sum-False-axis6-313] _________

op = 'sum', seed = 313, axis = (1, 2, 3), keepdims = False
ctx = Context(backend='cpu', array_class='', device_id='0', compute_backend='default')
func_name = 'Sum'

    @pytest.mark.parametrize("seed", [313])
    @pytest.mark.parametrize("axis", [None, 0, 1, 2, 3, (0, 2), (1, 2, 3)])
    @pytest.mark.parametrize("keepdims", [False, True])
    @pytest.mark.parametrize("op, ctx, func_name", list_ctx_and_func_name(['sum', 'mean', 'max', 'min', 'prod']))
    def test_reduction_forward_backward(op, seed, axis, keepdims, ctx, func_name):
        from nbla_test_utils import function_tester
        func = getattr(F, op)
        ref_func = getattr(np, op)
        rng = np.random.RandomState(seed)
        inputs = [rng.randn(2, 3, 4, 5).astype(np.float32)]
        function_tester(rng, func, ref_func, inputs,
                        func_args=[axis],
                        func_kwargs=dict(keepdims=keepdims),
                        ctx=ctx, func_name=func_name,
>                       atol_b=3e-3)

python/test/function/test_reduction.py:40: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

rng = <mtrand.RandomState object at 0x10c0fa140>
func = <function sum at 0x10a0fbde8>, ref_func = <function sum at 0x10496a9b0>
inputs = [array([[[[ 0.18527888, -1.2367872 ,  0.32600349,  0.99755579,  0.08737291],
  ...3105891, -0.11142748, -2.01562786,  1.57058263,  1.40829337]]]], dtype=float32)]
func_args = [(1, 2, 3)], func_kwargs = {'keepdims': False}, atol_f = 1e-06
atol_b = 0.003, dstep = 0.001, backward = [True]
ctx = Context(backend='cpu', array_class='', device_id='0', compute_backend='default')
func_name = 'Sum', ref_grad = None

    def function_tester(rng, func, ref_func, inputs,
                        func_args=[], func_kwargs={},
                        atol_f=1e-6, atol_b=1e-3, dstep=1e-3, backward=None,
                        ctx=None, func_name=None, ref_grad=None):
        """ Automatic testing of forward/backward pass of `func` by comparing it
        to the reference implementation in `ref_func`.
    
        Syntax of `ref_func`: inputs, parametes
        Syntax of `ref_grad`: inputs, output grads, parameters
        """
    
        if ctx is None:
            ctx = nn.Context()
        if backward is None:
            backward = [True for _ in inputs]
    
        # Create Variables
        # print 'create_variable'
    
        def create_variables(inputs, backward):
            vinputs = []
            for i, b in zip(inputs, backward):
                if i is None:
                    vinputs += [None]
                    continue
                vinputs += [nn.Variable(i.shape, need_grad=b)]
                vinputs[-1].data.cast(i.dtype)[...] = i
            return vinputs
        vinputs = create_variables(inputs, backward)
    
        # Checking forward
        # print 'checking forward'
        with nn.context_scope(ctx), nn.auto_forward():
            o = func(*(vinputs + func_args), **func_kwargs)
        rinputs = copy.deepcopy(inputs)  # inputs for ref_func
        refs = ref_func(*(rinputs + func_args), **func_kwargs)
    
        def force_tuple(x):
            if isinstance(x, tuple):
                return x
            return (x,)
        refs = force_tuple(refs)
        o = force_tuple(o)
        assert len(o) == len(refs)
        for i, ref in enumerate(refs):
            res = o[i].d
            assert np.allclose(ref, res, atol=atol_f)
    
        # Checking function name
        # print 'checking function name'
        if func_name is not None:
            assert o[0].parent.name == func_name
    
        # Checking backward
        # print 'checking backward'
        if not True in backward:
            return
    
        # NNabla backward
        for v in vinputs:
            if v is None:
                continue
            if len(v.shape) == 0:
                v.g = rng.randn()
                continue
            v.g = rng.randn(*v.shape).astype(v.data.dtype)
        # Verify grad
        vinputs = create_variables(inputs, backward)
        rinputs = copy.deepcopy(inputs)
        rinputs = [rinput if test else None for rinput,
                   test in zip(rinputs, backward)]
        vgrads = [rng.randn(*o_.shape) for o_ in o]
    
        agrads, ngrads = compute_analytical_and_numerical_grad(
            o[0].parent, vinputs, o, rinputs, vgrads, epsilon=dstep,
            rng=rng)
        if ref_grad is not None:
            rinputs = copy.deepcopy(inputs)
            doutputs = [o_.g for o_ in o]
            ngrads = ref_grad(*(rinputs + doutputs + func_args), **func_kwargs)
>       assert np.allclose(ngrads, agrads, atol=atol_b)
E       AssertionError

python/test/nbla_test_utils.py:249: AssertionError
_________ test_reduction_forward_backward[sum-ctx0-Sum-True-axis6-313] _________

op = 'sum', seed = 313, axis = (1, 2, 3), keepdims = True
ctx = Context(backend='cpu', array_class='', device_id='0', compute_backend='default')
func_name = 'Sum'

    @pytest.mark.parametrize("seed", [313])
    @pytest.mark.parametrize("axis", [None, 0, 1, 2, 3, (0, 2), (1, 2, 3)])
    @pytest.mark.parametrize("keepdims", [False, True])
    @pytest.mark.parametrize("op, ctx, func_name", list_ctx_and_func_name(['sum', 'mean', 'max', 'min', 'prod']))
    def test_reduction_forward_backward(op, seed, axis, keepdims, ctx, func_name):
        from nbla_test_utils import function_tester
        func = getattr(F, op)
        ref_func = getattr(np, op)
        rng = np.random.RandomState(seed)
        inputs = [rng.randn(2, 3, 4, 5).astype(np.float32)]
        function_tester(rng, func, ref_func, inputs,
                        func_args=[axis],
                        func_kwargs=dict(keepdims=keepdims),
                        ctx=ctx, func_name=func_name,
>                       atol_b=3e-3)

python/test/function/test_reduction.py:40: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

rng = <mtrand.RandomState object at 0x10c177050>
func = <function sum at 0x10a0fbde8>, ref_func = <function sum at 0x10496a9b0>
inputs = [array([[[[ 0.18527888, -1.2367872 ,  0.32600349,  0.99755579,  0.08737291],
  ...3105891, -0.11142748, -2.01562786,  1.57058263,  1.40829337]]]], dtype=float32)]
func_args = [(1, 2, 3)], func_kwargs = {'keepdims': True}, atol_f = 1e-06
atol_b = 0.003, dstep = 0.001, backward = [True]
ctx = Context(backend='cpu', array_class='', device_id='0', compute_backend='default')
func_name = 'Sum', ref_grad = None

    def function_tester(rng, func, ref_func, inputs,
                        func_args=[], func_kwargs={},
                        atol_f=1e-6, atol_b=1e-3, dstep=1e-3, backward=None,
                        ctx=None, func_name=None, ref_grad=None):
        """ Automatic testing of forward/backward pass of `func` by comparing it
        to the reference implementation in `ref_func`.
    
        Syntax of `ref_func`: inputs, parametes
        Syntax of `ref_grad`: inputs, output grads, parameters
        """
    
        if ctx is None:
            ctx = nn.Context()
        if backward is None:
            backward = [True for _ in inputs]
    
        # Create Variables
        # print 'create_variable'
    
        def create_variables(inputs, backward):
            vinputs = []
            for i, b in zip(inputs, backward):
                if i is None:
                    vinputs += [None]
                    continue
                vinputs += [nn.Variable(i.shape, need_grad=b)]
                vinputs[-1].data.cast(i.dtype)[...] = i
            return vinputs
        vinputs = create_variables(inputs, backward)
    
        # Checking forward
        # print 'checking forward'
        with nn.context_scope(ctx), nn.auto_forward():
            o = func(*(vinputs + func_args), **func_kwargs)
        rinputs = copy.deepcopy(inputs)  # inputs for ref_func
        refs = ref_func(*(rinputs + func_args), **func_kwargs)
    
        def force_tuple(x):
            if isinstance(x, tuple):
                return x
            return (x,)
        refs = force_tuple(refs)
        o = force_tuple(o)
        assert len(o) == len(refs)
        for i, ref in enumerate(refs):
            res = o[i].d
            assert np.allclose(ref, res, atol=atol_f)
    
        # Checking function name
        # print 'checking function name'
        if func_name is not None:
            assert o[0].parent.name == func_name
    
        # Checking backward
        # print 'checking backward'
        if not True in backward:
            return
    
        # NNabla backward
        for v in vinputs:
            if v is None:
                continue
            if len(v.shape) == 0:
                v.g = rng.randn()
                continue
            v.g = rng.randn(*v.shape).astype(v.data.dtype)
        # Verify grad
        vinputs = create_variables(inputs, backward)
        rinputs = copy.deepcopy(inputs)
        rinputs = [rinput if test else None for rinput,
                   test in zip(rinputs, backward)]
        vgrads = [rng.randn(*o_.shape) for o_ in o]
    
        agrads, ngrads = compute_analytical_and_numerical_grad(
            o[0].parent, vinputs, o, rinputs, vgrads, epsilon=dstep,
            rng=rng)
        if ref_grad is not None:
            rinputs = copy.deepcopy(inputs)
            doutputs = [o_.g for o_ in o]
            ngrads = ref_grad(*(rinputs + doutputs + func_args), **func_kwargs)
>       assert np.allclose(ngrads, agrads, atol=atol_b)
E       AssertionError

python/test/nbla_test_utils.py:249: AssertionError
==================== 2 failed, 871 passed in 19.59 seconds =====================

ONNX support for nnabla

Hi Nnabla team,

How can we get ONNX support for nnabla?
Is ONNX support for nnabla in the works?
When would it be released?
I would like to contribute to nnabla-ONNX support.

Thanks and Regards

Dimensions problem in saving network .nnp with save.py

I am working with the Sony Spresense board, and the problem is: when I try to save my model in .nnp format (training entirely with nnabla), I encounter a problem if I use a convolution layer. Even in your simple example about saving the MLP, the error occurs when a convolutional layer is added.

In my network, the batch size is 64 and the number of kernels in the first convolutional layer is 32, but when I try to save, it throws the error shown at the end; it seems that the code takes the number of kernels instead of the batch size.

I cannot find good documentation on what the "contents" field of the save function must look like, and it seems quite different from the .pbtxt style. Please help.

Net:

c1 = PF.fixed_point_quantized_convolution(x, 32, (3, 3), name='c1')
c1 = PF.batch_normalization(c1, name='bn1')
r1 = F.relu(c1)

c2 = PF.fixed_point_quantized_convolution(r1, 28, (3, 3), name='c2')
c2 = PF.batch_normalization(c2, name='bn2')
r2 = F.relu(c2)
r2 = F.average_pooling(r2, (2, 2))

c3 = PF.fixed_point_quantized_convolution(r2, 24, (3, 3), name='c3')
c3 = PF.batch_normalization(c3, name='bn3')
r3 = F.relu(c3)

c4 = PF.fixed_point_quantized_convolution(c3, 24, (3, 3), name='c4')
c4 = PF.batch_normalization(c4, name='bn4')
r4 = F.relu(c4)
r4 = F.average_pooling(r4, (2, 2))

fc5 = F.relu(PF.fixed_point_quantized_affine(r4, 32, name='fc5'))

fc6 = F.relu(PF.fixed_point_quantized_affine(fc5, 16, name='fc6'))

y = PF.fixed_point_quantized_affine(fc6, 2, name='fc7')

Code that gives error:

contents = {
    'networks': [
        {'name': 'net1',
         'batch_size': batch_size,
         'outputs': {'y': y},
         'names': {'x':x}}],
    'executors': [
        {'name': 'runtime',
         'network': 'net1',
         'data': ['x'],
         'output': ['y']}]}

save('net1.nnp', contents)

Error:

ValueError Traceback (most recent call last)
in
15 'data': ['x'],
16 'output': ['y']}]}
---> 17 save('net1.nnp', contents)

~/anaconda3/envs/py36/lib/python3.6/site-packages/nnabla/utils/save.py in save(filename, contents, include_params, variable_batch_size, extension)
650 nntxt = io.StringIO()
651 save(nntxt, contents, include_params=False,
--> 652 variable_batch_size=variable_batch_size, extension='.nntxt')
653 nntxt.seek(0)
654

~/anaconda3/envs/py36/lib/python3.6/site-packages/nnabla/utils/save.py in save(filename, contents, include_params, variable_batch_size, extension)
637 if ext == '.nntxt' or ext == '.prototxt':
638 logger.info("Saving {} as prototxt".format(filename))
--> 639 proto = create_proto(contents, include_params, variable_batch_size)
640 with get_file_handle_save(filename, ext) as file:
641 text_format.PrintMessage(proto, file)

~/anaconda3/envs/py36/lib/python3.6/site-packages/nnabla/utils/save.py in create_proto(contents, include_params, variable_batch_size)
458 proto_nets = []
459 for net in contents['networks']:
--> 460 networks[net['name']] = _create_network(net, variable_batch_size)
461 proto_nets.append(networks[net['name']])
462 proto.network.extend(proto_nets)

~/anaconda3/envs/py36/lib/python3.6/site-packages/nnabla/utils/save.py in _create_network(net, variable_batch_size)
229 if b != expect_batch_size:
230 raise ValueError('Variable "{}" has different batch size {} (expected {})'.format(
--> 231 v.name, b, expect_batch_size))
232 shape[0] = -1
233

ValueError: Variable "Convolution_Input" has different batch size 64 (expected 32)

How to give w_init param?

For example:

with nn.parameter_scope('Convolution'):
    h = PF.convolution(x, 16, (7, 7), (0, 0))

I want to change w_init to "calc_normal_std_he_forward".
Could you please tell me how to give the w_init parameter?
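
A sketch of what this might look like (assuming an input variable x with 3 channels; calc_normal_std_he_forward and NormalInitializer are in nnabla.initializer):

import nnabla as nn
import nnabla.parametric_functions as PF
import nnabla.initializer as I

x = nn.Variable((1, 3, 64, 64))  # assumed input: 3 channels

# He-forward std for a 3 -> 16 channel 7x7 convolution
s = I.calc_normal_std_he_forward(3, 16, kernel=(7, 7))
with nn.parameter_scope('Convolution'):
    h = PF.convolution(x, 16, (7, 7), (0, 0),
                       w_init=I.NormalInitializer(s))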

Improve DataSource documentation

The documentation for a DataSource could be improved. Ideally, the docs would include a prototype example that explains a minimal custom DataSource, as sketched below.

E.g., without looking into the nnabla code, there is currently no way to notice that item access is performed using the hidden method _get_data.
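
A minimal sketch of such a prototype (a hypothetical in-memory source, modeled on the pattern used in the nnabla examples: subclass DataSource, set _size and _variables, and implement _get_data and reset):

import numpy as np
from nnabla.utils.data_source import DataSource
from nnabla.utils.data_iterator import data_iterator

class TinyDataSource(DataSource):
    '''Serves 100 random (x, y) pairs from memory.'''

    def __init__(self, shuffle=False, rng=None):
        super(TinyDataSource, self).__init__(shuffle=shuffle, rng=rng)
        self._x = np.random.randn(100, 3).astype(np.float32)
        self._y = np.random.randint(0, 2, (100, 1))
        self._size = 100              # number of samples
        self._variables = ('x', 'y')  # names of the arrays returned per sample
        self.reset()

    def reset(self):
        # Reshuffle the sample order at the start of each epoch if requested.
        self._indexes = self._rng.permutation(self._size) \
            if self._shuffle else np.arange(self._size)
        super(TinyDataSource, self).reset()

    def _get_data(self, position):
        # The hidden hook: called by the iterator once per sample index.
        i = self._indexes[position]
        return self._x[i], self._y[i]

x, y = data_iterator(TinyDataSource(shuffle=True), batch_size=10).next()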

Building on macOS fails

Hello, I tried to build nnabla from source on macOS (10.12.6). I installed the required libraries described in the docs for Linux.

I tried both clang and gcc, and modified CMakeLists.txt according to #1, but make -j 16 threw the error below, so the install failed.

<path to nnabla>/nnabla/include/nbla/common.hpp:24:10: fatal error: 'cstdint' file not found
#include <cstdint>

I changed cstdint to stdint.h; then the new error message is this.

<path to nnabla>/nnabla/include/nbla/common.hpp:24:10: fatal error: 'tuple' file not found
#include <tuple>

Does anyone know how to solve this? Thanks in advance.

nnabla python on Windows

After installing the nnabla Python package on Windows, I tried to import the nnabla module, but unfortunately the following error happened:

(nnabla) C:\Users\abc>python
Python 3.6.3 |Anaconda, Inc.| (default, Nov  8 2017, 15:10:56) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import nnabla
2017-11-21 15:49:26,440 [nnabla][INFO]: Initializing CPU extension...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\abc\AppData\Local\conda\conda\envs\nnabla\lib\site-packages\nnabla\__init__.py", line 42, in <module>
    prefer_cached_array(True)
  File "_init.pyx", line 103, in nnabla._init.prefer_cached_array.lambda
  File "_init.pyx", line 103, in nnabla._init.prefer_cached_array.lambda
TypeError: a bytes-like object is required, not 'str'

Could you give some clues on how to fix this problem? Thanks

Problems when saving network with GRU

Dear sir/madam,

I am using the following packages with their versions:
nnabla 1.13.0
nnabla-ext-cuda 1.0.18
nnabla-ext-cuda90 1.13.0

I am trying to save a neural network which contains a GRU layer, but it is crashing.
As an example, I have used the following code:

import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import nnabla.solvers as S
from nnabla.utils.save import save as save_nn

seq_len = 10
batch_size = 64
input_size = 30
hidden_size = 20
num_layers = 2
num_directions = 1
nn.clear_parameters()
x = nn.Variable((seq_len, batch_size, input_size))
h = nn.Variable((num_layers, num_directions, batch_size, hidden_size))
y, hn = PF.gru(x, h)

contents = {
    'networks': [
        {'name': 'gru',
         'batch_size': batch_size,
         'outputs': {'y': y},
         'names': {'x': x}}],
    'executors': [
        {'name': 'runtime',
         'network': 'gru',
         'data': ['x'],
         'output': ['y']}]}
save_nn("gru.nnp",contents)

However, I am receiving the following error: ValueError: Variable "GRU_Input" has different batch size 2 (expected 10)
Maybe, I am doing something wrong in my code.

Thank you very much,
Gonzalo

nnabla-ext-cuda102 with cuDNN 7.6

Is it possible to maintain a version of nnabla-ext-cuda102 for both cuDNN 7.6 and 8.0 (Linux + Windows)?
As of the latest version, it looks like only the Windows version is for 7.6 and on Linux it's 8.0.

Implement StopIteration of `data_iterator`

The default behavior of nnabla's data_iterator is to reset the data_source once the position exceeds its bounds:

if self._data_source.position >= self._size:
    self._reset()

This requires users to draw a fixed number of samples from the data_iterator instead of just letting it yield all unique samples (as set by the definition). Stopping could easily be implemented by raising StopIteration; a workaround sketch is shown below. Is there a specific reason that this behavior is not implemented? It would more closely match PyTorch and Keras; e.g., look here.
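
As a workaround under the current behavior, one can wrap the iterator in a generator that yields one epoch's worth of batches and then stops (one_epoch is a hypothetical helper, not part of nnabla; it relies on the iterator's size and batch_size properties):

def one_epoch(di):
    # Yield each batch of the data_iterator once, then stop,
    # instead of letting it silently reset and repeat.
    for _ in range(di.size // di.batch_size):
        yield di.next()

for x, t in one_epoch(my_iterator):  # my_iterator: any nnabla data_iterator
    pass  # train on the batch here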

Failed `shape_check == shape_b`: Shape of beta(inputs[1]) does not match. beta: (1, 100, 1, 1) != expected: (1, 100).

Hello.

I'm trying to convert a PyTorch model to ONNX and then to NNabla, but when I convert nn.BatchNorm1d(), I get the error in the title from nnabla_cli draw_graph.

PyTorch model:

import os

import torch
from torch import nn


def main():
  model = nn.BatchNorm1d(100)
  input = torch.randn(20, 100)

  file_name = str(model.__class__.__name__)

  torch.onnx.export(model, input, os.path.join('onnx_model', file_name + '.onnx'))

  print(f'saved: {file_name}.onnx')


if __name__ == "__main__":
  main()

and the ONNX-to-NNabla conversion:

nnabla_cli convert onnx_model/BatchNorm1d.onnx BatchNorm1d.nnp

When I try to check inside the model, an error occurs.

nnabla_cli draw_graph BatchNorm1d.nnp
2021-03-01 18:43:26,396 [nnabla][INFO]: Drawing: torch-jit-export
Traceback (most recent call last):
  File "/user/python/.venv/lib/python3.8/site-packages/nnabla/utils/cli/cli.py", line 141, in cli_main
    return_value = args.func(args)
  File "/user/python/.venv/lib/python3.8/site-packages/nnabla/utils/cli/draw_graph.py", line 44, in draw_graph_command
    graph = nnp.get_network(n)
  File "/user/python/.venv/lib/python3.8/site-packages/nnabla/utils/nnp_graph.py", line 498, in get_network
    return NnpNetwork(network_proto, self._params, batch_size, callback=callback)
  File "/user/python/.venv/lib/python3.8/site-packages/nnabla/utils/nnp_graph.py", line 415, in __init__
    self._create_function(f, callback, current_scope)
  File "/user/python/.venv/lib/python3.8/site-packages/nnabla/utils/nnp_graph.py", line 340, in _create_function
    outputs = function_instance(
  File "function.pyx", line 302, in nnabla.function.Function.__call__
  File "function.pyx", line 280, in nnabla.function.Function._cg_call
RuntimeError: value error in setup_impl
/Users/gitlab-runner/builds/9703d983/1/nnabla/builders/all/nnabla/src/nbla/function/./generic/batch_normalization.cpp:58
Failed `shape_check == shape_b`: Shape of beta(inputs[1]) does not match. beta: (1, 100, 1, 1) != expected: (1, 100).
  • versions
    • Python: 3.8.0
    • torch: 1.7.1
    • nnabla: 1.16.0
    • nnabla-converter: 1.16.0
    • graphviz(pip): 0.16
    • graphviz(from brew): 2.46.1
  • environment
    • macOS Catalina
    • CPU: x86

nnabla_utils link errors.

I found nnabla_utils link errors caused by the GNUmakefiles in nnabla/examples/cpp/ and fixed them.
We should add "-lnnabla_utils" to all compile command lines in the GNUmakefiles:

  1. nnabla/examples/cpp/mnist_collection/GNUmakefile
    add " -lnnabla_utils" to all compile command lines.
  2. nnabla/examples/cpp/mnist_runtime/GNUmakefile
    correct " -lnbla_utils" to "-lnnabla_utils"
  3. nnabla/examples/cpp/mnist_training/GNUmakefile
    correct " -lnbla_utils" to "-lnnabla_utils"

The latest PYNVML causes initialization error in nnabla-ext-cuda

The latest PYNVML package, which was released this month, causes an initialization error in the nnabla-ext-cuda* packages.

To fix this temporarily, please uninstall your pynvml and install the previous version as follows:

pip uninstall -y pynvml
pip install pynvml==8.0.4

We'll fix it in the next release. Sorry for the inconvenience.

running on cpu

Hi,
following the tutorial in the following link:
http://nnabla.readthedocs.io/en/latest/python/tutorial/multi_device_training.html

I run: rc = ipp.Client(profile='mpi')
/usr/local/lib/python3.5/dist-packages/ipyparallel/client/client.py:458: RuntimeWarning:
Controller appears to be listening on localhost, but not on this machine.
If this is true, you should specify Client(...,sshserver='you@bashar-ThinkPad-T460')
or instruct your controller to listen on an external IP.
RuntimeWarning)
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
rc = ipp.Client(profile='mpi')
File "/usr/local/lib/python3.5/dist-packages/ipyparallel/client/client.py", line 511, in init
self._connect(sshserver, ssh_kwargs, timeout)
File "/usr/local/lib/python3.5/dist-packages/ipyparallel/client/client.py", line 630, in _connect
raise error.TimeoutError("Hub connection request timed out")
ipyparallel.error.TimeoutError: Hub connection request timed out

Any idea how to solve that?

ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory

I installed nnabla using pip install nnabla-ext-cuda102 and I have CUDA 10.2 installed on my system. I have also added the following to my .bashrc file:

export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-10.2/bin${PATH:+:${PATH}}

I am not able to solve this error. I am able to use nnabla with the CPU extension, though.

Android build is broken

Calling make bwd-nnabla-cpplib-android does not work.

It fails at installing basic dependencies. I tried to fix it by changing the OS from Ubuntu 16.04 to 18.04, which fixes the initial error, but it then fails to build the library because the version of protobuf is probably incompatible:

In file included from <>/nnabla/src/nbla_utils/nnabla.pb.cc:4:
<>/nnabla/src/nbla_utils/nnabla.pb.h:10:10: fatal error: 'google/protobuf/port_def.inc' file not found

I also tried to manually build it without docker but I got the same error. It seems like the version of protobuf that is specified in the Dockerfile (3.1.0) is not the correct version.

I am really interested in using this library in an Android app, do you plan to support Android build?

Feature Request: Add Mish activation function

Mish is a novel activation function proposed in this paper.
It has shown promising results so far and has been adopted in several packages.

All benchmarks, analysis, and links to official package implementations can be found in this repository.

It would be nice to have Mish as an option within the activation function group.

This is a comparison of Mish with other conventional activation functions in a SEResNet-50 on CIFAR-10 (better accuracy and faster than GELU):
[figure: se50_1]

Reinforcement learning examples

Could anybody please commit to posting reinforcement learning models such as DQN and A3C? I need some hints and guidance on how to build reinforcement learning RNN models.

value error in nnabla on running x-umx on Rpi4, Raspberry Pi OS

On running x-umx on a Raspberry Pi 4 (8 GB) with Raspberry Pi OS, I get the error below:

root@raspberrypi:/home/pi/x-umx# python3 test.py --inputs ../Music/test_16k_S16_LE_stereo.wav --context cpu --model /home/pi/x-umx/x-umx.h5 --outdir /home/pi/x-umx/results
2021-01-30 16:37:37,007 [nnabla][INFO]: Initializing CPU extension...
Traceback (most recent call last):
File "test.py", line 198, in
test()
File "test.py", line 170, in test
residual_model=args.residual_model
File "test.py", line 84, in separate
mix_spec, msk, _ = unmix_target(audio_nn, test=True)
File "/home/pi/x-umx/model.py", line 300, in __call__
lstm_out_bass = self.lstm(cross_1, nb_samples, "lstm_bass", test)
File "/home/pi/x-umx/model.py", line 231, in lstm
bidirectional=not self.unidirectional, training=not test, dropout=0.4, name=scope_name)
File "", line 8, in lstm
File "/usr/local/lib/python3.7/dist-packages/nnabla/parametric_functions.py", line 1567, in lstm
return F.lstm(x, h, c, weight_l0=w0, weight=w, bias=b, num_layers=num_layers, dropout=dropout, bidirectional=bidirectional, training=training)
File "", line 3, in lstm
File "/usr/local/lib/python3.7/dist-packages/nnabla/function_bases.py", line 222, in lstm
return F.LSTM(ctx, num_layers, dropout, bidirectional, training)(*inputs, n_outputs=n_outputs, auto_forward=get_auto_forward(), outputs=outputs)
File "function.pyx", line 292, in nnabla.function.Function.__call__
File "function.pyx", line 271, in nnabla.function.Function._cg_call
RuntimeError: value error in setup_impl
/home/pi/x-umx/nnabla/src/nbla/function/./generic/split.cpp:36
Failed num_outputs_ == outputs.size(): inputs[0].shape[axis] must be the same number as the outputs. inputs[0].shape[axis]: 431, outputs: 2.

I have successfully manually built & installed nnabla & llvmlite.
The latter was really very difficult to build & install.
root@raspberrypi:/home/pi/x-umx# pip3 freeze | grep 'nnabla'
nnabla==1.9.0
root@raspberrypi:/home/pi/x-umx# pip3 freeze | grep 'llvmlite'
llvmlite==0.32.1+0.gaa11b12.dirty

I think it is now throwing an nnabla error about the input parameter size not matching the
output parameter size. Can you please suggest where we need to set this in the nnabla files?

Please help me.
Regards,
Rajiv.

TFLite converter needs `flatc` command.

The conversion to TFLite format has been greatly improved in nnabla 1.20.0, but since the flatc command is used internally, the conversion will fail if flatc is not installed in the environment beforehand.

We plan to make further improvements to make it easier to use, but as of now you can either build it yourself using https://google.github.io/flatbuffers/flatbuffers_guide_building.html as a guide, or install the flatc command with `snap install flatbuffers` under a Linux environment.

There is no nnabla.pb file

To compile nbla_utils, nnp_impl.h includes "nnabla.pb.h", but there is no .pb file on GitHub.
The pb.h file would be generated by protocol buffers from a .proto file.
Is that file generated by another tool?

How to disable logger info output.

When you run the sample programs in the documentation, INFO logs from the logger are displayed by default.

2019-08-18 07:28:04,788 [nnabla][INFO]: DataSource with shuffle(True)
2019-08-18 07:28:04,789 [nnabla][INFO]: Using DataSourceWithMemoryCache
2019-08-18 07:28:04,789 [nnabla][INFO]: DataSource with shuffle(True)
2019-08-18 07:28:04,789 [nnabla][INFO]: On-memory
2019-08-18 07:28:04,789 [nnabla][INFO]: Using DataIterator

Is there a way to remove this output from the console?
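
One way to suppress it (a sketch assuming nnabla's logger is a standard Python logging.Logger, as the import traceback in the issue below suggests) is to raise the log level:

import logging
from nnabla import logger

logger.setLevel(logging.ERROR)  # hide INFO messages from nnabla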

Cannot import nnabla

I have installed nnabla in PyCharm. It shows an error when imported.
import nnabla
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
import nnabla
File "/home/bashar/pycharm/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 20, in do_import
module = self._system_import(name, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/nnabla/__init__.py", line 16, in <module>
from .logger import logger
File "/home/bashar/pycharm/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 20, in do_import
module = self._system_import(name, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/nnabla/logger.py", line 98, in
filemode='w+')
File "/usr/lib/python3.5/logging/__init__.py", line 1744, in basicConfig
h = FileHandler(filename, mode)

running on cpu rather than GPU. dataparallel communicator

In the following configuration, I define the communicator for gradient exchange:

%%px
extension_module = "cuda.cudnn"
ctx = extension_context(extension_module)
comm = C.MultiProcessDataParalellCommunicator(ctx)
comm.init()
n_devices = comm.size
mpi_rank = comm.rank
device_id = mpi_rank
ctx = extension_context(extension_module, device_id=device_id)

Question: what do the rank and size of the communicator mean?

I have tried to use the CPU instead, as my graphics card does not support CUDA.

extension_module = "cpu"
ctx = extension_context(extension_module)
comm = C.DataParalellCommunicator(ctx)

Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/IPython/core/interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 3, in
comm = C.DataParalellCommunicator(ctx)
File "nnabla/communicator.pyx", line 177, in nnabla.communicator.DataParalellCommunicator (/home/gitlab-runner/builds/0cc8eecd/2/nnabla/nnabla-builder/nnabla/python/src/nnabla/communicator.cpp:2651)
RuntimeError: unclassified error in query
/home/gitlab-runner/builds/0cc8eecd/5/nnabla/nnabla-builder/nnabla/include/nbla/function_registry.hpp:77
Failed cand.size() > 0: ('cpu', 'default') could not be found in []
