lava-nc / lava-dl

Deep Learning library for Lava

Home Page: https://lava-nc.org

License: BSD 3-Clause "New" or "Revised" License

Languages: Jupyter Notebook 96.41%, Python 3.37%, Cuda 0.22%
Topics: deep-learning, neural-networks, neuromorphic, neuromorphic-computing, python, pytorch

lava-dl's People

Contributors

ahenkes1, alexggener, awintel, bamsumit, ccaccavella, dependabot[bot], elrond91, fangwei123456, joyeshmishra, mathisrichter, mgkwill, michaelbeale-il, michaeljurado42, paologcd, philippplank, r-gaurav, stevenabreu7, timcheck, tobias-fischer, uslumt, valmat07, weidel-p


lava-dl's Issues

The Input dimensions of the CUBA and other blocks

Hello,
I am trying to re-train a simple network which has the following layers:

self.blocks = torch.nn.ModuleList([
    slayer.block.cuba.Input(neuron_params),
    slayer.block.cuba.Dense(neuron_params_drop, 34*34*2, 512, weight_norm=True, delay=True),
    slayer.block.cuba.Dense(neuron_params_drop, 512, 512, weight_norm=True, delay=True),
    slayer.block.cuba.Dense(neuron_params, 512, 10, weight_norm=True),
])

But I am unable to figure out how the input layer should be shaped. Unfortunately, I could not find any explanation in the type declarations. This documentation request is not only for the Input block: when I try to add a convolutional layer (the library does not seem to follow the PyTorch convention straightforwardly), I cannot figure out how the Conv block dimensions are formed either, so I am getting more dimensions than I expected.

Can you explain the input parameter requirements of the blocks a little bit?
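
For what it's worth, a minimal sketch of my current understanding (an assumption drawn from the tutorials, not official documentation): the cuba blocks seem to take PyTorch's usual (batch, feature, ...) layout with one extra trailing time dimension.

import torch
import lava.lib.dl.slayer as slayer

neuron_params = {'threshold': 1.25, 'current_decay': 0.25, 'voltage_decay': 0.03}

# Dense: (batch, flattened input features, time) -> (batch, output features, time)
dense = slayer.block.cuba.Dense(neuron_params, 34 * 34 * 2, 512)
y = dense(torch.rand([8, 34 * 34 * 2, 16]))   # -> torch.Size([8, 512, 16])

# Conv: (batch, channels, height, width, time), i.e. NCHW plus trailing time
conv = slayer.block.cuba.Conv(neuron_params, 2, 16, kernel_size=3, padding=1)
z = conv(torch.rand([8, 2, 34, 34, 16]))      # -> torch.Size([8, 16, 34, 34, 16])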

My re-training attempt is related to issue #53.

All the best,

Operating system: Ubuntu 20.04

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Expected behavior:

Steps to reproduce:

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

Device and dtype bug in `WgtScaleBatchNorm.std`

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.5.0
  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Traceback (most recent call last):
  File "/home/wfang/spikingjelly_dev/spikingjelly/test4.py", line 15, in <module>
    net(x)
  File "/home/wfang/anaconda3/envs/lava-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wfang/anaconda3/envs/lava-env/lib/python3.10/site-packages/lava/lib/dl/slayer/neuron/cuba.py", line 439, in forward
    _, voltage = self.dynamics(input)
  File "/home/wfang/anaconda3/envs/lava-env/lib/python3.10/site-packages/lava/lib/dl/slayer/neuron/cuba.py", line 365, in dynamics
    current = self.norm(current)
  File "/home/wfang/anaconda3/envs/lava-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wfang/anaconda3/envs/lava-env/lib/python3.10/site-packages/lava/lib/dl/slayer/neuron/norm.py", line 209, in forward
    std = self.std(var)
  File "/home/wfang/anaconda3/envs/lava-env/lib/python3.10/site-packages/lava/lib/dl/slayer/neuron/norm.py", line 170, in std
    return torch.ones(1) << torch.ceil(torch.log2(std)).clamp(
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Expected behavior:

  • No error is raised.

Steps to reproduce:

Run the following code:

from lava.lib.dl import slayer
import torch

net = slayer.neuron.cuba.Neuron(
    threshold=1.,
    current_decay=1.,
    voltage_decay=0.,
    scale=1 << 6,
    norm=slayer.neuron.norm.WgtScaleBatchNorm
)
device = 'cuda:0'
net.to(device)
with torch.no_grad():
    x = torch.rand([4, 4, 4], device=device)
    net(x)
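
A hedged sketch of what seems to go wrong (norm.py line 170 per the traceback) and a likely fix: torch.ones(1) is allocated on the CPU, so combining it with the CUDA `std` tensor trips the device check. Allocating the constant on std's device and dtype avoids it. (The real line uses `<<`; the multiplication below is an equivalent illustration, since float shift behavior differs across torch versions.)

import torch

std = torch.rand(4, device='cuda:0') + 1.0
scale = torch.exp2(torch.ceil(torch.log2(std)))           # power-of-two rounding
# ones = torch.ones(1)                                    # CPU tensor: mixing it
# y = ones * scale                                        # with `scale` fails
ones = torch.ones(1, device=std.device, dtype=std.dtype)  # match std's device/dtype
y = ones * scale                                          # works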

Related code:


Other information:

My env:

(lava-env) wfang@mlg-ThinkStation-P920:~$ conda list lava
# packages in environment at /home/wfang/anaconda3/envs/lava-env:
#
# Name                    Version                   Build  Channel
lava                      0.5.0              pyhd8ed1ab_0    conda-forge
lava-dl                   0.3.0              pyhd8ed1ab_0    conda-forge
(lava-env) wfang@mlg-ThinkStation-P920:~$ conda list torch
# packages in environment at /home/wfang/anaconda3/envs/lava-env:
#
# Name                    Version                   Build  Channel
pytorch                   1.12.1          cuda112py310h51fe464_200    conda-forge
torchvision               0.13.0          cuda112py310h453157a_0    conda-forge

Accelerate recurrent SNN training

Objective of issue:

Lava DL version:

  • 0.2.0 (current version)

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • Recurrent SNN training speed scales poorly with network size

Expected behavior:

  • Recurrent SNN training should scale no worse than expected from BPTT

NMNIST tutorial not utilizing full GPU

Objective of issue:

  • Increase GPU utilization.

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • GPU utilization is not high enough.

Expected behavior:

  • Would like GPU utilization to be >50%.

Steps to reproduce:

  • Run the notebook and monitor GPU utilization with nvidia-smi; a generic data-loading sketch follows below.
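
A generic first step (standard PyTorch practice, not something the tutorial prescribes) is to keep the GPU fed by loading batches in worker processes and pinning host memory; `dataset` below is a stand-in for the NMNIST dataset object.

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.rand(1024, 34 * 34 * 2, 16),
                        torch.randint(0, 10, (1024,)))
loader = DataLoader(dataset, batch_size=32, shuffle=True,
                    num_workers=4, pin_memory=True)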

Training deep SNNs with DECOLLE

Objective of issue:
Implementation of the training algorithm DECOLLE (Kaiser et al. 2020)

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

I'm submitting a ...

  • bug report
  • [x] feature request
  • documentation request

Current behavior:
Currently, only training with SLAYER is implemented and documented.

Expected behavior:
Given the developers' desire to provide a general framework for neuromorphic computing, I believe it would be great if Lava implemented training of deep SNNs via a variety of algorithms. In that sense, the name of the training library (lava.lib.dl.slayer) would also benefit from evolving in future iterations, e.g. to lava.lib.dl.training. In the PR linked to this issue, I have implemented training of SNNs via DECOLLE.

Other information:
A PR is linked to this issue.

NetX Dimension Seems Lost

Hello,
I have trained the 3-fully-connected-layer network on the NMNIST dataset (using the tutorial notebook).

When I try to convert this model to the process network, the dimensions come out different.
The network blocks in slayer were:

self.blocks = torch.nn.ModuleList([
    slayer.block.cuba.Dense(neuron_params_drop, 34*34*2, 512, weight_norm=True, delay=True),
    slayer.block.cuba.Dense(neuron_params_drop, 512, 512, weight_norm=True, delay=True),
    slayer.block.cuba.Dense(neuron_params, 512, 10, weight_norm=True),
])

However when I convert it using NetX it becomes:

There are 3 layers in network:
Dense : Process_1 , shape : (512,)
Dense : Process_4 , shape : (512,)
Dense : Process_7 , shape : (10,)

There might be something wrong with what I am doing, but I could not figure it out.
I simply used:

net = netx.hdf5.Network(net_config='Trained/network.net')
print(net)

Just like what has been done in the NetX tutorials.

All the best,
Ahmet Akman

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request
  • I really do not know right now; it might be a bug or a documentation request.

About Multiple GPU's Training Support

Objective of issue:
Hello,
I am curious how one can train a network in SLAYER with the support of multiple GPUs.
I simply tried adding the lines net = torch.nn.DataParallel(net); net.to(device) in the standard NMNIST notebook, but it did not work after the first epoch.

Is there, or will there be, any SLAYER adaptation for multi-GPU training support?

If one can use PyTorch's multi-GPU training utilities, how should they be set up so that training does not crash?

Sincerely,

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Expected behavior:

Steps to reproduce:

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

Bootstrap MNIST tutorial

While running MNIST.py (with a CPU setup), an error arises: "Device index must not be negative", since tensor.get_device() returns the index of the CPU. It can be solved by changing line 64 to return the device variable instead, since to(device) is costless for the default device and the name is returned.

With a GPU setup, there still remains a CUDA nvcc error.
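
A hedged illustration of the failure mode (the names here are mine, not the tutorial's actual code): an integer device index is only valid for CUDA tensors, while tensor.device works uniformly for CPU and GPU.

import torch

t = torch.zeros(1)         # CPU tensor
# t.get_device()           # -1 (or an error on older torch): not a valid index
device = t.device          # device(type='cpu'): safe to pass around instead
print(t.to(device).shape)  # .to() is effectively free when already on the device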

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • return cpu device name

Expected behavior:

  • returns device index

Steps to reproduce:

  • Running MNIST.py

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

Device index must not be negative

SNN model for load forecasting using Lava framework implementation on Loihi 2

@mgkwill
I am working on a project creating an SNN model for load forecasting (time-series data) in the Lava framework. However, I am struggling to implement an SNN with time-series data.
Could you give a simple example of using an SNN on a time series? I apologize if this disturbs you; I am a beginner.

In any case, thanks for your help.
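
A minimal sketch, assuming slayer's (batch, features, time) layout, of feeding a real-valued time series into a cuba Dense block; this is an illustration, not an official load-forecasting recipe.

import torch
import lava.lib.dl.slayer as slayer

neuron_params = {'threshold': 1.0, 'current_decay': 0.5, 'voltage_decay': 0.5}
layer = slayer.block.cuba.Dense(neuron_params, 1, 32)

series = torch.sin(torch.linspace(0, 6.28, 700))  # toy 700-step load curve
x = series.reshape(1, 1, -1)                      # (batch=1, features=1, T=700)
spikes = layer(x)                                 # spike trains, shape (1, 32, 700)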

Streamline PilotNet SNN notebook using RefPorts

Objective of issue:
Streamline PilotNet SNN notebook using RefPorts

Lava version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • The current notebook uses the var.set/get API

Expected behavior:

  • Use the RefPorts API to enable execution of the network all at once.

Steps to reproduce:

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here


Multi-GPU Training

Is training on multiple GPUs possible with the current lava-dl release?

I distribute the batch for the NMNIST tutorial across two GPUs with:
net = Network()
net = nn.DataParallel(net, device_ids=[0, 1])
net.to(device)

The training for the first epoch runs as expected. In the second epoch I get an error during backpropagation:
Traceback (most recent call last):
  File "nmnist.py", line 220, in <module>
    output, count = assistant.train(input, label)
  File "/home/user/lava-dl/src/lava/lib/dl/slayer/utils/assistant.py", line 139, in train
    loss.backward()
  File "/usr/local/lib/python3.8/dist-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 145, in backward
    Variable.execution_engine.run_backward(
RuntimeError: output with shape [1] doesn't match the broadcast shape [512]

Is this a known issue, and can you reproduce it on your GPU clusters? Or is it specific to my GPU setup? Or is extra code necessary to enable multi-GPU training?

Thank you for your support.

Lava lif neuron and slayer lif neuron mismatch

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • The outputs mismatch sometimes

Expected behavior:

  • They should match

Steps to reproduce:

  • code below

Related code:

lava-nc/lava#245

About installation procedure unittests

Objective of issue:
Hello,
I got an error while following the installation procedure on my Ubuntu 20.04 system. I had installed the lava core package beforehand. I could not figure out why I am getting such an error, so I am including the whole terminal output.

Lava version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

I'm submitting a ...

  • [x] bug report
  • feature request
  • documentation request

Current behavior:

user@user-pc1:~/lava-dl$ pyb -E unit
PyBuilder version 0.13.5
Build started at 2022-03-04 15:33:17

[INFO] Installing or updating plugin "pypi:pybuilder_bandit, module name 'pybuilder_bandit'"
[INFO] Processing plugin packages 'pybuilder_bandit' to be installed with {}
[INFO] Activated environments: unit
[INFO] Building lava-dl version 0.1.1
[INFO] Executing build in /home/user/lava-dl
[INFO] Going to execute tasks: analyze, publish
[INFO] Processing plugin packages 'flake8~=4.0' to be installed with {'upgrade': True}
[INFO] Processing plugin packages 'pypandoc~=1.4' to be installed with {'upgrade': True}
[INFO] Processing plugin packages 'setuptools>=38.6.0' to be installed with {'upgrade': True}
[INFO] Processing plugin packages 'sphinx_rtd_theme' to be installed with {}
[INFO] Processing plugin packages 'sphinx_tabs' to be installed with {}
[INFO] Processing plugin packages 'twine>=1.15.0' to be installed with {'upgrade': True}
[INFO] Processing plugin packages 'unittest-xml-reporting~=3.0.4' to be installed with {'upgrade': True}
[INFO] Processing plugin packages 'wheel>=0.34.0' to be installed with {'upgrade': True}
[INFO] Creating target 'build' VEnv in '/home/user/lava-dl/target/venv/build/cpython-3.8.10.final.0'
[INFO] Processing dependency packages 'lava' from git+https://github.com/lava-nc/lava.git to be installed with {'force_reinstall': True}
[INFO] Processing dependency packages 'requirements.txt' to be installed with {}
[INFO] Creating target 'test' VEnv in '/home/user/lava-dl/target/venv/test/cpython-3.8.10.final.0'
[INFO] Processing dependency packages 'requirements.txt' to be installed with {}
[INFO] Executing flake8 on project sources.
[INFO] Running unit tests
[INFO] Executing unit tests from Python modules in /home/user/lava-dl/tests/lava
[INFO] Executed 82 unit tests
[ERROR] Test has error: unittest.loader._FailedTest.test_hdf5
Traceback (most recent call last):
  File "/usr/lib/python3.8/unittest/case.py", line 60, in testPartExecutor
    yield
  File "/usr/lib/python3.8/unittest/case.py", line 676, in run
    self._callTestMethod(testMethod)
  File "/usr/lib/python3.8/unittest/case.py", line 633, in _callTestMethod
    method()
  File "/usr/lib/python3.8/unittest/loader.py", line 34, in testFailure
    raise self._exception
ImportError: Failed to import test module: test_hdf5
Traceback (most recent call last):
  File "/usr/lib/python3.8/unittest/loader.py", line 154, in loadTestsFromName
    module = __import__(module_name)
  File "/home/user/lava-dl/tests/lava/lib/dl/netx/test_hdf5.py", line 18, in <module>
    from lava.lib.dl import netx
  File "/home/user/lava-dl/src/lava/lib/dl/netx/__init__.py", line 4, in <module>
    from . import hdf5
  File "/home/user/lava-dl/src/lava/lib/dl/netx/hdf5.py", line 15, in <module>
    from lava.proc.sdn.process import Sigma, Delta, SigmaDelta
ModuleNotFoundError: No module named 'lava.proc.sdn'

[ERROR] Test has error: unittest.loader._FailedTest.test_blocks
Traceback (most recent call last):
  File "/usr/lib/python3.8/unittest/case.py", line 60, in testPartExecutor
    yield
  File "/usr/lib/python3.8/unittest/case.py", line 676, in run
    self._callTestMethod(testMethod)
  File "/usr/lib/python3.8/unittest/case.py", line 633, in _callTestMethod
    method()
  File "/usr/lib/python3.8/unittest/loader.py", line 34, in testFailure
    raise self._exception
ImportError: Failed to import test module: test_blocks
Traceback (most recent call last):
  File "/usr/lib/python3.8/unittest/loader.py", line 154, in loadTestsFromName
    module = __import__(module_name)
  File "/home/user/lava-dl/tests/lava/lib/dl/netx/test_blocks.py", line 20, in <module>
    from lava.lib.dl.netx.blocks.process import Dense, Conv, Input
  File "/home/user/lava-dl/src/lava/lib/dl/netx/__init__.py", line 4, in <module>
    from . import hdf5
  File "/home/user/lava-dl/src/lava/lib/dl/netx/hdf5.py", line 15, in <module>
    from lava.proc.sdn.process import Sigma, Delta, SigmaDelta
ModuleNotFoundError: No module named 'lava.proc.sdn'


BUILD FAILED - There were 2 error(s) and 0 failure(s) in unit tests (pybuilder/plugins/python/unittest_plugin.py:109)

Build finished at 2022-03-04 15:39:45
Build took 387 seconds (387886 ms)

Expected behavior:

Steps to reproduce:

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

No module named 'dynamics' in OXFORD Training Notebook

When I try to run the SLAYER Oxford training notebook, I face the problem 'ImportError: No module named 'dynamics'' in the training-loop step. This probably originates from PyTorch, but I could not find any other people who have experienced this error. On the other hand, I could not find an appropriate dynamics module.

All the best,
Ahmet

ipython package not included in requirements.txt

When creating and installing a venv for LAVA from scratch, pip does not include the ipython package. This package is needed for the Spike to Spike Regression: Oxford tutorial in lava-dl. Should this package be included in requirements.txt for lava-dl?

Cheers,
Alex.

[PilotNet Tutorial]: Google Drive URL dataset is not valid

Hello,

I tried to run the tutorial about car steering prediction using a sigma-delta neural network. Unfortunately, the Google Drive URL referenced in lava-dl/tutorials/lava/lib/dl/slayer/pilotnet/pilotnet_dataset.py is not working anymore. For information, the referenced URL is the following: https://drive.google.com/file/d/0B-KJCaaF7elleG1RbzVPZWV4Tlk/view?usp=sharing .

I also checked the URL from the original PilotNet repository ( https://github.com/lhzlhz/PilotNet ) and the URL does not work as well, since it is the same.

Can you provide a new link if possible?

Thanks for your help and for your awesome project!

Update notebooks with new hyperparameters

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request
  • update request

Current behavior:

  • PilotNet SDNN hyperparameters need updating

RuntimeError: Number of dimensions of repeat dims can not be smaller than number of dimensions of tensor

Hi,
what is the cause of the issue below during training? I need a solution at the earliest.

My Dataset is like this:

    event_d is EMG data of 1041 sets, each with 42000 samples (sampling rate of 700 Hz for 60 s). event_c is np.ones(42000) and event_t is list(float_range(0, 59.976, '0.001428')), where 0 is the start, 59.976 is the stop and 0.001428 is the step.



    inputSpikes = slayer.io.Event(
        event_d, None, event_c, event_t
    ).to_tensor(sampling_time=self.samplingTime,
                dim=(2, 2, self.nTimeBins))
    # ajaybs: torch.zeros((2, 1, 2, self.nTimeBins))  # (channels, height, width, time)

    desiredClass = torch.zeros((3, 1, 1, 1))
    desiredClass[label, ...] = 1

    return inputSpikes, desiredClass

RuntimeError Traceback (most recent call last)
Input In [17], in <cell line: 3>()
4 for i, (input, label) in enumerate(train_loader): # training loop
5 input=input.float()
----> 6 output = assistant.train(input, label)
7 print(f'\r[Epoch {epoch:2d}/{epochs}] {stats}', end='')
9 for i, (input, label) in enumerate(test_loader): # training loop

File /content/lava-dl/src/lava/lib/dl/slayer/utils/assistant.py:121, in Assistant.train(self, input, target)
119 else:
120 if self.lam is None:
--> 121 output = self.net(input)
122 else:
123 output, net_loss = self.net(input)

File /usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []

Input In [6], in Network.forward(self, spike)
21 def forward(self, spike):
22 for block in self.blocks:
---> 23 spike = block(spike)
24 return spike

File /usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []

File /content/lava-dl/src/lava/lib/dl/slayer/block/base.py:518, in AbstractDense.forward(self, x)
516 x = delay(x, 1)
517 if self.delay is not None:
--> 518 x = self.delay(x)
520 if self.count_log is True:
521 return x, torch.mean(x > 0)

File /usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []

File /content/lava-dl/src/lava/lib/dl/slayer/axon/delay.py:144, in Delay.forward(self, input)
140 broadcast_shape = list(input.shape[1:-1])
141 broadcast_shape[0] = 1
142 return _delayFunction.apply(
143 input,
--> 144 self.delay.reshape(-1, 1, 1).repeat(broadcast_shape),
145 self.grad_scale,
146 self.sampling_time
147 )

RuntimeError: Number of dimensions of repeat dims can not be smaller than number of dimensions of tensor

No CUDA runtime is found, using CUDA_HOME='/usr'

Hello everyone, I have this error:
"No CUDA runtime is found, using CUDA_HOME='/usr'"

each time I import things from slayer, e.g. "import lava.lib.dl.slayer as slayer". Asking torch.cuda.is_available() in a python3 shell inside the env tells me that CUDA is available, and pytest passes all necessary steps.
Does anyone have a suggestion on how to resolve this problem?
Thanks for your attention!
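
That warning is emitted by torch's C++/CUDA extension loader, which slayer appears to use to JIT-compile its kernels; it fires when no CUDA runtime (nvcc) is found under CUDA_HOME. A quick check of what torch resolved (standard torch API, a hedged pointer rather than a confirmed fix):

from torch.utils.cpp_extension import CUDA_HOME
print(CUDA_HOME)  # should point at a toolkit that contains bin/nvcc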

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Expected behavior:

Steps to reproduce:

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

SLAYER ALIF neuron's forward function uses wrong spike function signature

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • The forward call to the neuron fails because of a wrong spike function signature

Expected behavior:

Steps to reproduce:

  • Instantiate alif neuron and call it directly.

Related code:

insert short code snippets here

Other information:

File "/workspace/k8mirror/lava-dl/src/lava/lib/dl/slayer/neuron/alif.py", line 610, in forward
    return self.spike(voltage, threshold + refractory)
TypeError: spike() missing 1 required positional argument: 'refractory'
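
A runnable toy of the mismatch the traceback shows (the reconstruction is an assumption from the error message; the real signature lives in slayer/neuron/alif.py):

def spike(voltage, threshold, refractory):
    return voltage > (threshold + refractory)

voltage, threshold, refractory = 1.0, 0.5, 0.2
# spike(voltage, threshold + refractory)       # TypeError: missing 'refractory'
print(spike(voltage, threshold, refractory))   # presumably the intended call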

[Errno 2] No such file or directory: 'gifs/inp0.png'

Objective of issue: No such file or directory error

Lava version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Current behavior:
When I run this cell:

output = net(input.to(device), mode=scheduler.mode(100, 0, False))
for i in range(5):
    img = (2 * input[i].reshape(28, 28).cpu().data.numpy() - 1) * 255
    Image.fromarray(img).convert('RGB').save(f'gifs/inp{i}.png')
    out_event = slayer.io.tensor_to_event(output[i].cpu().data.numpy().reshape(1, 10, -1))
    out_anim = out_event.anim(plt.figure(figsize=(10, 3.5)), frame_rate=2400)
    out_anim.save(f'gifs/out{i}.gif', animation.PillowWriter(fps=24), dpi=300)

I get this error:

[Errno 2] No such file or directory: 'gifs/inp0.png'
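
The error simply means the 'gifs' output directory does not exist yet; creating it before the loop (a standard-library fix, not part of the tutorial itself) avoids it:

import os
os.makedirs('gifs', exist_ok=True)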

Other information:

I have Python 3.8.5 for my virtual environment.

NetX NMNIST Tutorial

The objective of the issue: I have created a notebook that implements the conversion of the network created in the Slayer NMNIST tutorial. This issue is opened for the pull request.

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • It seems to work very well, but the NMNIST dataset source download may sometimes fail, as it does in the slayer NMNIST tutorial.

Expected behavior:

Steps to reproduce:

Related code:

[BUG] Slayer - Application of normalization function in CUBA neuron

Objective of issue:
The normalization function in the CUBA neuron is applied to the "input" variable, which has no effect on the code.

Lava version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:
The normalization function in the CUBA neuron is applied to the "input" variable, which has no effect on the code.

Expected behavior:
If this normalization function needs to be applied to "input", it should be applied before the current dynamics are calculated.
If it should instead be applied to "current", then the variable name should be changed.
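
A hedged paraphrase of the two readings (illustrative only, not the actual cuba.py source): if the dynamics consume the value before the normalization result is used, the call is dead code; normalizing before use is the presumed intent.

def dynamics_as_reported(x, current, norm, decay):
    current = decay * current + x   # x already consumed here
    x = norm(x)                     # has no effect on `current`
    return current

def dynamics_as_expected(x, current, norm, decay):
    x = norm(x)                     # normalize before the current dynamics
    return decay * current + x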

NETX library fails to convert simple slayer classification network to an equivalent lava network

Objective of issue: I trained an IRIS network using the slayer library to near-100% accuracy and converted it to a lava process using the netx library; however, the accuracy is not the same (about 33%). I need documentation or examples on how to convert a network trained via slayer to a lava process with minimal accuracy loss, if possible.

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • The slayer-trained network does not convert (via netx) to an equivalent network in lava.

Expected behavior:

  • Each network (slayer and lava) should produce the same inference results.

Steps to reproduce:

# Clone and setup my custom repo
git clone https://github.com/rhendz/lava-apps.git
cd lava-apps
sh setup.sh

# Use lava-dl virtual environment
source lava-dl/.venv/bin/activate

# Setup notebook and kernel
pip install notebook
pip install ipykernel

python -m ipykernel install --user --name=lavadl

# Launch notebook
jupyter notebook

# Notes:
# Use lava-dl-netx-iris to test inference of original network and generate/run an inference test on the netx network
# Use lava-dl-slayer-iris to train IRIS network
# Change kernel to lavadl

Other information:
My objective is to eventually bring in other networks via netx library and be able to produce good inference results in lava. However, I need a good baseline via the slayer library first.

Update dependency versions

Objective of issue: Update dependency versions to latest specified by @bamsumit

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Expected behavior:

Steps to reproduce:

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

Hanging code block issue

Hello,

I have been re-attempting to train another network on the NMNIST dataset, but I am getting issue #54.
In addition, when I currently try to run the NMNIST training code directly from the tutorial (it was working properly yesterday), the code hangs in the training notebook block inconsistently. The setup is the same, the tutorial is the same, the variables are the same, but the code hangs; this is another problem I am facing.

When I interrupt the kernel it does not stop and needs restart.

Screenshot from 2022-03-14 12-35-29

The setup is a GTX 1650 GPU on Ubuntu 20.04, and I am getting no completed epoch in 20 minutes, which is weird because I was previously getting one epoch per 8 minutes.

All the best,
Ahmet

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Expected behavior:

Steps to reproduce:

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

Correction for installation from binaries.

Hello,
It seems the installation procedure given for installing from binaries is copy-pasted from the main lava repository and may lead others to reinstall main lava instead of installing lava-dl.
A possible correction might be: pip install lava-dl-0.1.1.tar.gz
Sincerely,

PS. I am constantly having problems with pybuilder in every lava repository on every OS. This might stem both from the repos being new and under construction and from my not being an expert on those builders (though I have not experienced so many different errors in any other mainstream framework). So it might be good to test those procedures on a clean machine to see what might be wrong.

About CUDA version compatibility

Objective of issue: I have been trying to do a clean installation on another newly set up Ubuntu computer. Since it is CUDA-enabled, I tried to install CUDA with 'sudo apt install nvidia-cuda-toolkit'; however, this command installed CUDA version 10.1, which then caused a bunch of errors when installing lava-dl. Version 10.1 can cause incompatibility issues because of the PyTorch requirement, so it would be better to add CUDA version compatibility to the requirements, since the installer does not check automatically.
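
A quick sanity check (plain PyTorch, nothing lava-specific) to confirm the system toolkit matches the CUDA version PyTorch was built against before installing lava-dl:

import torch
print(torch.__version__)          # e.g. 1.10.0+cu113
print(torch.version.cuda)         # CUDA version PyTorch was compiled with
print(torch.cuda.is_available())  # True only if driver and build are compatible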

All the best,

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

About NetX visualization ,and conversion to others

Hello, I would like to suggest a feature. There are a bunch of visualization modules available for ONNX models, PyTorch models, and so on, and there are web visualizers like netron.app. If visualization (or a proper conversion) were available for NetX, it would be quite easy to understand what is going on with a network. For example, I am trying to map the NMNIST model to connected lava processes, and it sometimes becomes hard to figure out which parameter stands for which when I use the .pt file.

Also, I am curious what the slayer CUBA model corresponds to in the lava LIF model. What exactly does voltage decay mean for a LIF layer?

I am a little bit confused about them, so sorry for the imprecise questions.
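
On the voltage decay question, a hedged sketch of the discrete CUBA/LIF update as described in the lava-dl materials (floating-point view; the fixed-point on-chip version scales these quantities): voltage_decay is the per-step leak of the membrane potential, so voltage_decay=0 gives a perfect integrator.

def cuba_step(x, current, voltage, current_decay, voltage_decay, threshold):
    current = (1 - current_decay) * current + x        # synaptic current leak
    voltage = (1 - voltage_decay) * voltage + current  # leaky integration
    spike = voltage >= threshold
    voltage = 0.0 if spike else voltage                # reset on spike (assumption)
    return spike, current, voltage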

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Expected behavior:

Steps to reproduce:

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

`netx.hdf5.Network` raises the error `AttributeError: 'str' object has no attribute 'decode'`

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  File "C:/Users/fw/Desktop/test.py", line 86, in <module>
    net = netx.hdf5.Network(net_config='./lava_net.net')
  File "C:\Users\fw\anaconda3\envs\lava-env\lib\site-packages\lava\magma\core\process\process.py", line 97, in __call__
    obj = type.__call__(cls, *args, **kwargs)
  File "C:\Users\fw\anaconda3\envs\lava-env\lib\site-packages\lava\lib\dl\netx\hdf5.py", line 49, in __init__
    self.layers = self._create()
  File "C:\Users\fw\anaconda3\envs\lava-env\lib\site-packages\lava\lib\dl\netx\hdf5.py", line 405, in _create
    layer, table = self.create_dense(
  File "C:\Users\fw\anaconda3\envs\lava-env\lib\site-packages\lava\lib\dl\netx\hdf5.py", line 242, in create_dense
    neuron_params = Network.get_neuron_params(layer_config['neuron'])
  File "C:\Users\fw\anaconda3\envs\lava-env\lib\site-packages\lava\lib\dl\netx\hdf5.py", line 92, in get_neuron_params
    neuron_type = neuron_config['type']
  File "C:\Users\fw\anaconda3\envs\lava-env\lib\site-packages\lava\lib\dl\netx\utils.py", line 64, in __getitem__
    return value.decode('ascii')
AttributeError: 'str' object has no attribute 'decode'
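
A hedged reading of the error: netx/utils.py decodes HDF5 string attributes unconditionally, but depending on the h5py version these come back as str rather than bytes. A defensive accessor along these lines avoids the crash (an illustration, not the actual utils.py fix):

def as_str(value):
    return value.decode('ascii') if isinstance(value, bytes) else value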

Expected behavior:

  • The network is loaded from file.

Steps to reproduce:

  • You can run the following code to reproduce it.

Related code:

import torch
from lava.lib.dl import netx, slayer
import torch.nn as nn
import h5py
class Network(torch.nn.Module):
    def __init__(self):
        super(Network, self).__init__()

        neuron_params = {
                'threshold'     : 0.1,
                'current_decay' : 1,
                'voltage_decay' : 0.1,
                'requires_grad' : True,
            }

        self.blocks = torch.nn.ModuleList([
                slayer.block.cuba.Dense(neuron_params, 200, 256),
                slayer.block.cuba.Dense(neuron_params, 256, 200),
            ])

    def forward(self, spike):
        for block in self.blocks:
            spike = block(spike)
        return spike

    def export_hdf5(self, filename):
        # network export to hdf5 format
        h = h5py.File(filename, 'w')
        layer = h.create_group('layer')
        for i, b in enumerate(self.blocks):
            b.export_hdf5(layer.create_group(f'{i}'))

net = Network()
with torch.no_grad():
    net(torch.rand([1, 200, 2]))
net.export_hdf5('./lava_net.net')
net = netx.hdf5.Network(net_config='./lava_net.net')
print(net)
print(f'There are {len(net)} layers in network:')

for l in net.layers:
    print(f'{l.block:5s} : {l.name:10s}, shape : {l.shape}')

Other information:

# packages in environment at C:\Users\fw\anaconda3\envs\lava-env: 
#                                                                 
# Name                    Version                   Build  Channel
torch                     1.8.1+cu111              pypi_0    pypi 
torchaudio                0.8.1                    pypi_0    pypi 
torchvision               0.9.1+cu111              pypi_0    pypi 

# packages in environment at C:\Users\fw\anaconda3\envs\lava-env: 
#                                                                 
# Name                    Version                   Build  Channel
lava-dl                   0.2.0                    pypi_0    pypi 
lava-nc                   0.3.0                    pypi_0    pypi 

# packages in environment at C:\Users\fw\anaconda3\envs\lava-env:
#
# Name                    Version                   Build  Channel
hdf5                      1.10.4               h7ebc959_0    defaults

# packages in environment at C:\Users\fw\anaconda3\envs\lava-env:
#
# Name                    Version                   Build  Channel
h5py                      2.10.0           py38h5e291fa_0    defaults


About SLAYER deConv feature

Objective of issue: Introducing SLAYER deConv block

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • Is there any way of introducing convTranspose on SLAYER for deConv layers?

Expected behavior:

  • Something similar to slayer.block.cuba.deConv()

Thanks in advance,
Alex.

Breaking changes with lava 0.4.0

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.5.0 (feature release)
  • 0.4.1 (bug fixes)
  • 0.4.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • The unit tests don't work due to API changes in lava 0.4.0

Expected behavior:

  • The unittests should work

Steps to reproduce:

  • Run the unittest with lava 0.4.0

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

Conda availability

I was going to update to the new version of lava, and I am curious whether lava-dl will be installable from conda.

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Expected behavior:

Steps to reproduce:

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

Variable name bug in `block.AbstractInput`

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Traceback (most recent call last):
  File "/home/wfang/spikingjelly_dev/spikingjelly/test.py", line 6, in <module>
    print(net(torch.rand([1, 2, 3])))
  File "/home/wfang/anaconda3/envs/lava-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wfang/anaconda3/envs/lava-env/lib/python3.10/site-packages/lava/lib/dl/slayer/block/base.py", line 94, in forward
    self.input_shape = input.shape[1:-1]
AttributeError: 'builtin_function_or_method' object has no attribute 'shape'

Expected behavior:

  • No error is raised.

Steps to reproduce:

  • Please run codes in Related code.

Related code:

import torch
from lava.lib.dl import slayer

net = slayer.block.cuba.Input()
print(net(torch.rand([1, 2, 3])))
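
A hedged reading of the traceback: inside AbstractInput's forward, the code references `input`, but the parameter is presumably named something else, so `input` resolves to Python's builtin function, which has no `.shape` (hence 'builtin_function_or_method'). A toy reproduction of the same mistake:

def forward(x):
    return input.shape[1:-1]  # AttributeError: builtin `input` has no 'shape'

# forward(torch.rand([1, 2, 3]))  # would raise the same AttributeError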

Other information:

NMNIST Training Notebook CUDA Error

Objective of issue:

Hello, I've been struggling to get the training notebook for the NMNIST example set up and working. I've been able to set up an environment with all the necessary dependencies installed, but when I run the cell that actually executes the training loop, I get the following error:

[screenshot of the CUDA error]

Bugs in pool when stride is odd

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • Raises an error:

  File "/home/wfang/anaconda3/envs/lava-env/lib/python3.10/site-packages/lava/lib/dl/slayer/synapse/layer.py", line 502, in forward
    return result.reshape((
RuntimeError: shape '[2, 4, -1, 8, 4]' is invalid for input of size 2112

Expected behavior:

  • Output correct results

Steps to reproduce:

import torch
import lava.lib.dl.slayer as slayer

x = torch.rand([2, 4, 16, 16, 4])
pool = slayer.synapse.Pool(kernel_size=3, stride=2)
pool(x)

Related code:

insert short code snippets here

Other information:

(lava-env) wfang@mlg-ThinkStation-P920:~$ conda list la
# packages in environment at /home/wfang/anaconda3/envs/lava-env:
#
# Name                    Version                   Build  Channel
lava                      0.4.0              pyhd8ed1ab_0    conda-forge
lava-dl                   0.2.0              pyhd8ed1ab_0    conda-forge

PilotNet does not run on Loihi

Objective of issue:
NetX does not support running PilotNet on Loihi

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Expected behavior:

Steps to reproduce:

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here

[WSL-2] lava-dl installation: BUILD FAILED - RuntimeError: CUDA error: no kernel image is available for execution on the device

I tried to build lava-dl on my local WSL2 (Ubuntu 20.04) machine with an RTX 3090 (after successfully installing lava and lava-optimization) and therefore followed the given installation instructions. (I already had a working CUDA 11.5 installation on this machine.)
Unfortunately I received this error:

BUILD FAILED - RuntimeError: CUDA error: no kernel image is available for execution on the device (tests/lava/lib/dl/slayer/neuron/test_adrf_iz.py:45)

Is the RTX 3090 currently not supported? How can I resolve this issue? Are there any workarounds otherwise?
Here is the full output:

(base) user@machine:~/miniconda3/lava-dl$ pip install -r build-requirements.txt
Requirement already satisfied: pybuilder in /home/user/miniconda3/lib/python3.8/site-packages (from -r build-requirements.txt (line 1)) (0.13.3)
Requirement already satisfied: flake8 in /home/user/miniconda3/lib/python3.8/site-packages (from -r build-requirements.txt (line 2)) (4.0.1)
Requirement already satisfied: pytest in /home/user/miniconda3/lib/python3.8/site-packages (from -r build-requirements.txt (line 3)) (6.2.5)
Requirement already satisfied: unittest2 in /home/user/miniconda3/lib/python3.8/site-packages (from -r build-requirements.txt (line 4)) (1.1.0)
Requirement already satisfied: bandit in /home/user/miniconda3/lib/python3.8/site-packages (from -r build-requirements.txt (line 5)) (1.7.1)
Requirement already satisfied: coverage in /home/user/miniconda3/lib/python3.8/site-packages (from -r build-requirements.txt (line 6)) (6.2)
Requirement already satisfied: mccabe<0.7.0,>=0.6.0 in /home/user/miniconda3/lib/python3.8/site-packages (from flake8->-r build-requirements.txt (line 2)) (0.6.1)
Requirement already satisfied: pycodestyle<2.9.0,>=2.8.0 in /home/user/miniconda3/lib/python3.8/site-packages (from flake8->-r build-requirements.txt (line 2)) (2.8.0)
Requirement already satisfied: pyflakes<2.5.0,>=2.4.0 in /home/user/miniconda3/lib/python3.8/site-packages (from flake8->-r build-requirements.txt (line 2)) (2.4.0)
Requirement already satisfied: attrs>=19.2.0 in /home/user/miniconda3/lib/python3.8/site-packages (from pytest->-r build-requirements.txt (line 3)) (21.2.0)
Requirement already satisfied: toml in /home/user/miniconda3/lib/python3.8/site-packages (from pytest->-r build-requirements.txt (line 3)) (0.10.2)
Requirement already satisfied: iniconfig in /home/user/miniconda3/lib/python3.8/site-packages (from pytest->-r build-requirements.txt (line 3)) (1.1.1)
Requirement already satisfied: pluggy<2.0,>=0.12 in /home/user/miniconda3/lib/python3.8/site-packages (from pytest->-r build-requirements.txt (line 3)) (1.0.0)
Requirement already satisfied: packaging in /home/user/miniconda3/lib/python3.8/site-packages (from pytest->-r build-requirements.txt (line 3)) (21.3)
Requirement already satisfied: py>=1.8.2 in /home/user/miniconda3/lib/python3.8/site-packages (from pytest->-r build-requirements.txt (line 3)) (1.11.0)
Requirement already satisfied: six>=1.4 in /home/user/miniconda3/lib/python3.8/site-packages (from unittest2->-r build-requirements.txt (line 4)) (1.16.0)
Requirement already satisfied: traceback2 in /home/user/miniconda3/lib/python3.8/site-packages (from unittest2->-r build-requirements.txt (line 4)) (1.4.0)
Collecting argparse
  Using cached argparse-1.4.0-py2.py3-none-any.whl (23 kB)
Requirement already satisfied: stevedore>=1.20.0 in /home/user/miniconda3/lib/python3.8/site-packages (from bandit->-r build-requirements.txt (line 5)) (3.5.0)
Requirement already satisfied: GitPython>=1.0.1 in /home/user/miniconda3/lib/python3.8/site-packages (from bandit->-r build-requirements.txt (line 5)) (3.1.24)
Requirement already satisfied: PyYAML>=5.3.1 in /home/user/miniconda3/lib/python3.8/site-packages (from bandit->-r build-requirements.txt (line 5)) (6.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/user/miniconda3/lib/python3.8/site-packages (from GitPython>=1.0.1->bandit->-r build-requirements.txt (line 5)) (4.0.0)
Requirement already satisfied: gitdb<5,>=4.0.1 in /home/user/miniconda3/lib/python3.8/site-packages (from GitPython>=1.0.1->bandit->-r build-requirements.txt (line 5)) (4.0.9)
Requirement already satisfied: pbr!=2.1.0,>=2.0.0 in /home/user/miniconda3/lib/python3.8/site-packages (from stevedore>=1.20.0->bandit->-r build-requirements.txt (line 5)) (5.8.0)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/user/miniconda3/lib/python3.8/site-packages (from packaging->pytest->-r build-requirements.txt (line 3)) (3.0.6)
Requirement already satisfied: linecache2 in /home/user/miniconda3/lib/python3.8/site-packages (from traceback2->unittest2->-r build-requirements.txt (line 4)) (1.0.0)
Requirement already satisfied: smmap<6,>=3.0.1 in /home/user/miniconda3/lib/python3.8/site-packages (from gitdb<5,>=4.0.1->GitPython>=1.0.1->bandit->-r build-requirements.txt (line 5)) (5.0.0)
Installing collected packages: argparse
Successfully installed argparse-1.4.0
(base) user@machine:~/miniconda3/lava-dl$ pip install -r requirements.txt
Requirement already satisfied: torch==1.8.1 in /home/user/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 1)) (1.8.1)
Requirement already satisfied: torchvision==0.9.1 in /home/user/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 2)) (0.9.1)
Requirement already satisfied: numpy in /home/user/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 3)) (1.21.2)
Requirement already satisfied: scipy in /home/user/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 4)) (1.7.2)
Requirement already satisfied: matplotlib in /home/user/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 5)) (3.5.0)
Requirement already satisfied: ninja in /home/user/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 6)) (1.10.2.3)
Requirement already satisfied: h5py>=3.1.0 in /home/user/miniconda3/lib/python3.8/site-packages (from -r requirements.txt (line 7)) (3.6.0)
Requirement already satisfied: typing-extensions in /home/user/miniconda3/lib/python3.8/site-packages (from torch==1.8.1->-r requirements.txt (line 1)) (4.0.0)
Requirement already satisfied: pillow>=4.1.1 in /home/user/miniconda3/lib/python3.8/site-packages (from torchvision==0.9.1->-r requirements.txt (line 2)) (8.4.0)
Requirement already satisfied: fonttools>=4.22.0 in /home/user/miniconda3/lib/python3.8/site-packages (from matplotlib->-r requirements.txt (line 5)) (4.28.1)
Requirement already satisfied: cycler>=0.10 in /home/user/miniconda3/lib/python3.8/site-packages (from matplotlib->-r requirements.txt (line 5)) (0.11.0)
Requirement already satisfied: python-dateutil>=2.7 in /home/user/miniconda3/lib/python3.8/site-packages (from matplotlib->-r requirements.txt (line 5)) (2.8.2)
Requirement already satisfied: kiwisolver>=1.0.1 in /home/user/miniconda3/lib/python3.8/site-packages (from matplotlib->-r requirements.txt (line 5)) (1.3.2)
Requirement already satisfied: setuptools-scm>=4 in /home/user/miniconda3/lib/python3.8/site-packages (from matplotlib->-r requirements.txt (line 5)) (6.3.2)
Requirement already satisfied: pyparsing>=2.2.1 in /home/user/miniconda3/lib/python3.8/site-packages (from matplotlib->-r requirements.txt (line 5)) (3.0.6)
Requirement already satisfied: packaging>=20.0 in /home/user/miniconda3/lib/python3.8/site-packages (from matplotlib->-r requirements.txt (line 5)) (21.3)
Requirement already satisfied: six>=1.5 in /home/user/miniconda3/lib/python3.8/site-packages (from python-dateutil>=2.7->matplotlib->-r requirements.txt (line 5)) (1.16.0)
Requirement already satisfied: setuptools in /home/user/miniconda3/lib/python3.8/site-packages (from setuptools-scm>=4->matplotlib->-r requirements.txt (line 5)) (52.0.0.post20210125)
Requirement already satisfied: tomli>=1.0.0 in /home/user/miniconda3/lib/python3.8/site-packages (from setuptools-scm>=4->matplotlib->-r requirements.txt (line 5)) (1.2.2)
(base) user@machine:~/miniconda3/lava-dl$ pyb -E unit
PyBuilder version 0.13.3
Build started at 2021-12-01 16:10:58
------------------------------------------------------------
[INFO]  Installing or updating plugin "pypi:pybuilder_bandit, module name 'pybuilder_bandit'"
[INFO]  Processing plugin packages 'pybuilder_bandit' to be installed with {}
[INFO]  Activated environments: unit
[INFO]  Building lava-dl version 0.1.1
[INFO]  Executing build in /home/user/miniconda3/lava-dl
[INFO]  Going to execute tasks: analyze, publish
[INFO]  Processing plugin packages 'flake8~=3.7' to be installed with {'upgrade': True}
[INFO]  Processing plugin packages 'pypandoc~=1.4' to be installed with {'upgrade': True}
[INFO]  Processing plugin packages 'setuptools>=38.6.0' to be installed with {'upgrade': True}
[INFO]  Processing plugin packages 'sphinx_rtd_theme' to be installed with {}
[INFO]  Processing plugin packages 'sphinx_tabs' to be installed with {}
[INFO]  Processing plugin packages 'twine>=1.15.0' to be installed with {'upgrade': True}
[INFO]  Processing plugin packages 'unittest-xml-reporting~=3.0.4' to be installed with {'upgrade': True}
[INFO]  Processing plugin packages 'wheel>=0.34.0' to be installed with {'upgrade': True}
[INFO]  Creating target 'build' VEnv in '/home/user/miniconda3/lava-dl/target/venv/build/cpython-3.8.12.final.0'
[INFO]  Processing dependency packages 'requirements.txt' to be installed with {}
[INFO]  Creating target 'test' VEnv in '/home/user/miniconda3/lava-dl/target/venv/test/cpython-3.8.12.final.0'
[INFO]  Processing dependency packages 'requirements.txt' to be installed with {}
[INFO]  Executing flake8 on project sources.
[INFO]  Running unit tests
[INFO]  Executing unit tests from Python modules in /home/user/miniconda3/lava-dl/tests/lava
/home/user/miniconda3/lava-dl/target/venv/build/cpython-3.8.12.final.0/lib/python3.8/site-packages/torch/cuda/__init__.py:104: UserWarning:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
------------------------------------------------------------
BUILD FAILED - RuntimeError: CUDA error: no kernel image is available for execution on the device (tests/lava/lib/dl/slayer/neuron/test_adrf_iz.py:45)
------------------------------------------------------------
Build finished at 2021-12-01 16:11:06
Build took 8 seconds (8015 ms)

Afterwards, I installed the latest PyTorch version directly from the PyTorch website and re-ran the build process, but ran into the same issue:

(base) user@machine:~/miniconda3/lava-dl$ pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
Looking in links: https://download.pytorch.org/whl/cu113/torch_stable.html
Collecting torch==1.10.0+cu113
  Using cached https://download.pytorch.org/whl/cu113/torch-1.10.0%2Bcu113-cp38-cp38-linux_x86_64.whl (1821.4 MB)
Collecting torchvision==0.11.1+cu113
  Using cached https://download.pytorch.org/whl/cu113/torchvision-0.11.1%2Bcu113-cp38-cp38-linux_x86_64.whl (24.6 MB)
Collecting torchaudio==0.10.0+cu113
  Using cached https://download.pytorch.org/whl/cu113/torchaudio-0.10.0%2Bcu113-cp38-cp38-linux_x86_64.whl (2.9 MB)
Requirement already satisfied: typing-extensions in /home/user/miniconda3/lib/python3.8/site-packages (from torch==1.10.0+cu113) (4.0.0)
Requirement already satisfied: numpy in /home/user/miniconda3/lib/python3.8/site-packages (from torchvision==0.11.1+cu113) (1.21.2)
Requirement already satisfied: pillow!=8.3.0,>=5.3.0 in /home/user/miniconda3/lib/python3.8/site-packages (from torchvision==0.11.1+cu113) (8.4.0)
Installing collected packages: torch, torchvision, torchaudio
  Attempting uninstall: torch
    Found existing installation: torch 1.8.1
    Uninstalling torch-1.8.1:
      Successfully uninstalled torch-1.8.1
  Attempting uninstall: torchvision
    Found existing installation: torchvision 0.9.1
    Uninstalling torchvision-0.9.1:
      Successfully uninstalled torchvision-0.9.1
  Attempting uninstall: torchaudio
    Found existing installation: torchaudio 0.8.1
    Uninstalling torchaudio-0.8.1:
      Successfully uninstalled torchaudio-0.8.1
Successfully installed torch-1.10.0+cu113 torchaudio-0.10.0+cu113 torchvision-0.11.1+cu113
(base) user@machine:~/miniconda3/lava-dl$ pyb -E unit
PyBuilder version 0.13.3
Build started at 2021-12-01 16:22:04
------------------------------------------------------------
[INFO]  Installing or updating plugin "pypi:pybuilder_bandit, module name 'pybuilder_bandit'"
[INFO]  Processing plugin packages 'pybuilder_bandit' to be installed with {}
[INFO]  Activated environments: unit
[INFO]  Building lava-dl version 0.1.1
[INFO]  Executing build in /home/user/miniconda3/lava-dl
[INFO]  Going to execute tasks: analyze, publish
[INFO]  Processing plugin packages 'flake8~=3.7' to be installed with {'upgrade': True}
[INFO]  Processing plugin packages 'pypandoc~=1.4' to be installed with {'upgrade': True}
[INFO]  Processing plugin packages 'setuptools>=38.6.0' to be installed with {'upgrade': True}
[INFO]  Processing plugin packages 'sphinx_rtd_theme' to be installed with {}
[INFO]  Processing plugin packages 'sphinx_tabs' to be installed with {}
[INFO]  Processing plugin packages 'twine>=1.15.0' to be installed with {'upgrade': True}
[INFO]  Processing plugin packages 'unittest-xml-reporting~=3.0.4' to be installed with {'upgrade': True}
[INFO]  Processing plugin packages 'wheel>=0.34.0' to be installed with {'upgrade': True}
[INFO]  Creating target 'build' VEnv in '/home/user/miniconda3/lava-dl/target/venv/build/cpython-3.8.12.final.0'
[INFO]  Processing dependency packages 'requirements.txt' to be installed with {}
[INFO]  Creating target 'test' VEnv in '/home/user/miniconda3/lava-dl/target/venv/test/cpython-3.8.12.final.0'
[INFO]  Processing dependency packages 'requirements.txt' to be installed with {}
[INFO]  Executing flake8 on project sources.
[INFO]  Running unit tests
[INFO]  Executing unit tests from Python modules in /home/user/miniconda3/lava-dl/tests/lava
/home/user/miniconda3/lava-dl/target/venv/build/cpython-3.8.12.final.0/lib/python3.8/site-packages/torch/cuda/__init__.py:104: UserWarning:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
------------------------------------------------------------
BUILD FAILED - RuntimeError: CUDA error: no kernel image is available for execution on the device (tests/lava/lib/dl/slayer/neuron/test_adrf_iz.py:45)
------------------------------------------------------------
Build finished at 2021-12-01 16:22:10
Build took 5 seconds (5793 ms)
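Note that pyb creates its own build and test venvs from requirements.txt, which pins torch==1.8.1, so the unit tests may still be running against the old wheel even after the base environment was upgraded; that would explain why the error is unchanged. A quick way to check which compute capabilities the torch build seen by the tests actually supports (torch.cuda.get_arch_list is standard PyTorch; an RTX 3090 needs sm_86 in the list):

    import torch

    # Print the CUDA compute capabilities compiled into this torch wheel.
    # 'sm_86' must appear for kernels to run on an RTX 3090.
    print(torch.__version__, torch.version.cuda)
    print(torch.cuda.get_arch_list())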

Slayer Recurrent block: wrong synapse when bias is present

Objective of issue:

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

Bug in code: /lava-dl/src/lava/lib/dl/slayer/block/base.py line 1138

    z = self.synapse(x) + self.bias

Expected behavior:

should have been self.input_synapse(x)
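In other words, the corrected line (a one-line sketch of the fix this report proposes) would read:

    # apply the input synapse to the feed-forward input x, not the recurrent synapse
    z = self.input_synapse(x) + self.bias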

Runtime errors related to ninja_build

I tried to run your tutorial from bootstrap/mnist/train.ipynb, but it crashes because of runtime errors related to ninja_build (see details below). When I switch to device = torch.device('cpu') it works fine, but there is a problem when I use device = torch.device('cuda'). I have all the requirements properly installed. While troubleshooting the build.ninja file, I discovered that the x86 VS linker is used instead of the 64-bit one.

Do you have any idea why these errors are occurring?
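For reference, this is how I checked which cl.exe the build picks up while troubleshooting (a minimal sketch; run it inside the same environment the notebook uses):

    import subprocess

    # 'where cl' lists every cl.exe on the PATH; the first hit is the one used.
    # It should come from a Hostx64 (64-bit) MSVC toolchain directory.
    result = subprocess.run(['where', 'cl'], capture_output=True, text=True)
    print(result.returncode)
    print(result.stdout)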

Traceback (most recent call last):
File "C:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\utils\cpp_extension.py", line 1667, in _run_ninja_build
subprocess.run(
File "C:\Users\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:/Users/github/intel-py38-lava011/lava-dl/tutorials/lava/lib/dl/bootstrap/mnist/mnist_train_example.py", line 116, in
output = net.forward(input, mode)
File "C:/Users/github/intel-py38-lava011/lava-dl/tutorials/lava/lib/dl/bootstrap/mnist/mnist_train_example.py", line 53, in forward
x = block(x, mode=m)
File "C:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\nn\modules\module.py", line 889, in call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\github\intel-py38-lava011\lava-dl\src\lava\lib\dl\bootstrap\block\cuba.py", line 36, in forward
return AbstractBlock.forward(self, x, mode)
File "C:\Users\github\intel-py38-lava011\lava-dl\src\lava\lib\dl\bootstrap\block\base.py", line 124, in forward
x = self.forward_snn(x, sample=True)
File "C:\Users\github\intel-py38-lava011\lava-dl\src\lava\lib\dl\bootstrap\block\base.py", line 89, in forward_snn
x = self.neuron(z)
File "C:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\nn\modules\module.py", line 889, in call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\github\intel-py38-lava011\lava-dl\src\lava\lib\dl\slayer\neuron\cuba.py", line 433, in forward
, voltage = self.dynamics(input)
File "C:\Users\github\intel-py38-lava011\lava-dl\src\lava\lib\dl\slayer\neuron\cuba.py", line 351, in dynamics
current = leaky_integrator.dynamics(
File "C:\Users\github\intel-py38-lava011\lava-dl\src\lava\lib\dl\slayer\neuron\dynamics\leaky_integrator.py", line 95, in dynamics
output = Accelerated.leaky_integrator.dynamics(
File "C:\Users\github\intel-py38-lava011\lava-dl\src\lava\lib\dl\slayer\utils\utils.py", line 14, in get
return staticmethod(self.fget).get(None, owner)()
File "C:\Users\github\intel-py38-lava011\lava-dl\src\lava\lib\dl\slayer\neuron\dynamics\leaky_integrator.py", line 40, in leaky_integrator
Accelerated.module = load(
File "C:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\utils\cpp_extension.py", line 1079, in load
return jit_compile(
File "C:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\utils\cpp_extension.py", line 1292, in jit_compile
write_ninja_file_and_build_library(
File "C:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\utils\cpp_extension.py", line 1404, in write_ninja_file_and_build_library
run_ninja_build(
File "C:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\utils\cpp_extension.py", line 1683, in run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error building extension 'dynamics': [1/2] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin\nvcc --generate-dependencies-with-compile --dependency-output leaky_integrator.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=dynamics -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\include -IC:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\include\TH -IC:\Users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\include" -IC:\Users\AppData\Local\Programs\Python\Python38\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_50,code=compute_50 -gencode=arch=compute_50,code=sm_50 -c C:\Users\github\intel-py38-lava011\lava-dl\src\lava\lib\dl\slayer\neuron\dynamics\leaky_integrator.cu -o leaky_integrator.cuda.o
FAILED: leaky_integrator.cuda.o
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin\nvcc ... (ninja repeats the same nvcc command as above)
C:/Users/github/intel-py38-lava011/python38_venv1/lib/site-packages/torch/include\c10/macros/Macros.h(189): warning C4067: unexpected tokens following preprocessor directive - expected a newline
c:\users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
c:\users\appdata\local\programs\python\python38\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:/Users/github/intel-py38-lava011/python38_venv1/lib/site-packages/torch/include\c10/macros/Macros.h(189): warning C4067: unexpected tokens following preprocessor directive - expected a newline
c:\users\github\intel-py38-lava011\python38_venv1\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
c:\users\appdata\local\programs\python\python38\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\vcruntime.h(184): error: invalid redeclaration of type name "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\vcruntime_new.h(66): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\vcruntime_new.h(71): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\vcruntime_new.h(77): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\vcruntime_new.h(82): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\vcruntime_new.h(184): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\vcruntime_new.h(199): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\vcruntime_new_debug.h(23): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\vcruntime_new_debug.h(31): error: first parameter of allocation function must be of type "size_t"

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\type_traits(168): error: class template "std::_Is_function" has already been defined
(the above error is repeated 24 times)

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\type_traits(212): error: class template "std::_Is_memfunptr" has already been defined
(the above error is repeated 36 times)

C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\VC\Tools\MSVC\14.16.27023\include\type_traits(1849): error: class template "std::result_of" has already been defined
(the above error is repeated 2 times)

c:\program files\nvidia gpu computing toolkit\cuda\v10.2\include\crt/common_functions.h(117): error: first parameter of allocation function must be of type "size_t"

c:\program files\nvidia gpu computing toolkit\cuda\v10.2\include\crt/common_functions.h(118): error: first parameter of allocation function must be of type "size_t"

c:\program files\nvidia gpu computing toolkit\cuda\v10.2\include\crt/common_functions.h(240): error: first parameter of allocation function must be of type "size_t"

c:\program files\nvidia gpu computing toolkit\cuda\v10.2\include\crt/common_functions.h(241): error: first parameter of allocation function must be of type "size_t"

c:\program files\nvidia gpu computing toolkit\cuda\v10.2\include\sm_32_intrinsics.hpp(104): error: asm operand type size(8) does not match type/size implied by constraint 'r'
(the same error is reported for sm_32_intrinsics.hpp lines 104-132, 25 occurrences in total)

Error limit reached.
100 errors detected in the compilation of "C:/Users/AppData/Local/Temp/tmpxft_00002444_00000000-10_leaky_integrator.cpp1.ii".
Compilation terminated.
leaky_integrator.cu
ninja: build stopped: subcommand failed.

SLAYER CuBa Conv block with spikes

Hi.

I'm trying to replicate the SNN network SpikeMS (https://github.com/prgumd/SpikeMS), originally developed with SLAYER-PyTorch, using lava-dl. The SpikeMS implementation contains six convolutional layers that process spikes in the following tensor format:
Tensor = [n_channels, height, width, num_time_bins], as in the NMNIST dataset that lava-dl provides.

The SNN implementation I have at the moment is the following:

self.blocks = torch.nn.ModuleList([
        # slayer.block.cuba.Conv(neuron_params, in_features, out_features,
        #                        kernel_size, stride, padding, ...)
        slayer.block.cuba.Conv(neuron_params_drop_conv1,  2, 16, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
        slayer.block.cuba.Conv(neuron_params_drop_conv1, 16, 32, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
        slayer.block.cuba.Conv(neuron_params_drop_conv1, 32, 64, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
        slayer.block.cuba.Conv(neuron_params_drop_conv1, 64, 32, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
        slayer.block.cuba.Conv(neuron_params_drop_conv1, 32, 16, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
        slayer.block.cuba.Conv(neuron_params_drop_conv1, 16,  2, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    ])

My input spike tensor is input_spikes = Tensor[batch, n_channels*height*width, num_time_bins]. Of course this does not work, since the dimensions of input_spikes and the first Conv layer do not match, so I tried to add a dummy Dense layer as follows:

self.blocks = torch.nn.ModuleList([
        # Input dummy layer
        slayer.block.cuba.Dense(neuron_input_params_drop, 2*144*256, 16*2*3*3),  # channels=2, height=144, width=256
        # Autoencoder layers
        slayer.block.cuba.Conv(neuron_params_drop_conv1,  2, 16, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
        slayer.block.cuba.Conv(neuron_params_drop_conv2, 16, 32, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
        slayer.block.cuba.Conv(neuron_params_drop_conv3, 32, 64, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
        slayer.block.cuba.Conv(neuron_params_drop_conv4, 64, 32, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
        slayer.block.cuba.Conv(neuron_params_drop_conv5, 32, 16, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
        slayer.block.cuba.Conv(neuron_params_drop_conv6, 16,  2, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    ])

However, the dimensions of the Dense layer output and the first Conv layer input still do not match.

My questions are the following: how does slayer manage spike tensors with a time dimension in Conv blocks, and what is the proper way of preparing input spike tensors for Conv blocks in slayer?
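From the tutorials, my (unconfirmed) understanding is that Dense blocks consume [batch, features, time] while Conv blocks consume a 5-D tensor [batch, channels, height, width, time], so the flattened spikes should be reshaped rather than passed through a dummy Dense layer. A minimal sketch with the sizes from above (num_time_bins is an assumed example value):

    import torch

    batch, C, H, W, T = 1, 2, 144, 256, 100              # T is an assumed example value
    input_spikes = torch.rand(batch, C * H * W, T)       # flattened spikes, as for Dense blocks
    conv_input = input_spikes.reshape(batch, C, H, W, T) # 5-D layout for Conv blocks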

Proposal: a section in the documentation explaining how tensors are managed for each block type and what the proper input and output shapes are.

Thanks a million in advance,

Alex.

Objective of issue: Improve documentation about the use of CuBa blocks

Lava DL version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

Lava version:

  • 0.4.0 (feature release)
  • 0.3.1 (bug fixes)
  • 0.3.0 (current version)
  • 0.2.0
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Unittest failing with lava tip

Objective of issue:

Lava version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • tests/lava/lib/dl/netx/test_hdf5.py fails

Expected behavior:

  • tests/lava/lib/dl/netx/test_hdf5.py needs to pass

Steps to reproduce:

  • run the unit tests against the lava tip (a reproduction sketch follows at the end of this issue)

Related code:

insert short code snippets here

Other information:

insert the output from lava debug here
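
A hedged sketch of reproducing the failure locally with the standard-library runner (CI uses pyb, so the exact invocation may differ):

    import unittest

    # Discover and run only the netx hdf5 tests reported as failing.
    suite = unittest.defaultTestLoader.discover(
        'tests/lava/lib/dl/netx', pattern='test_hdf5.py')
    unittest.TextTestRunner(verbosity=2).run(suite)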

Network Exchange Implementation

Objective of issue:
Network exchange library implementation for automatic generation of a Lava process from a network trained with Lava-DL SLAYER (a workflow sketch follows at the end of this issue).

Lava version:

  • 0.3.0 (feature release)
  • 0.2.1 (bug fixes)
  • 0.2.0 (current version)
  • 0.1.2

I'm submitting a ...

  • bug report
  • feature request
  • documentation request

Current behavior:

  • Not implemented
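For context, a rough sketch of the intended round trip (the API names are tentative assumptions, modeled on the netx design):

    # Training side: export the trained SLAYER network description to hdf5.
    net.export_hdf5('network.net')    # 'net' is a trained slayer network (assumed)

    # Lava side: generate a runnable Lava process from the saved description.
    from lava.lib.dl import netx
    lava_net = netx.hdf5.Network(net_config='network.net')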

About some problems that occurred in the NMNIST training example on Windows

Hello,
Currently, I am running (or trying to run) the NMNIST training example on both of my computers: one runs Windows and uses CUDA, the other runs Ubuntu without CUDA support.
The notebook runs successfully on the Ubuntu computer, but since it is CPU-only (and an old machine), it is quite slow, as expected.

On Windows there are some problems, probably just because of the platform's unique requirements. Let me describe them.
Firstly, the Windows path-separator convention breaks some of the code. I worked around it in nmnist.py by replacing split('/') with split('\\'), but this is not a proper solution; there may be better ways to handle the incompatibility (see the portable sketch below).
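A portable alternative is to let pathlib split the path instead of hard-coding a separator; a minimal sketch (the label-from-parent-directory layout is my assumption about nmnist.py):

    from pathlib import Path

    # On Windows, pathlib accepts both '/' and '\\' as separators, so this
    # works for paths produced by either convention there.
    filename = 'data/Train/0/00002.bin'    # hypothetical dataset path
    label = int(Path(filename).parts[-2])  # parent directory encodes the class label (assumed)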

On the other hand, I got an error about the Visual Studio environment: CalledProcessError: Command '['where', 'cl']' returned non-zero exit status 1. This means the MSVC compiler cl.exe was not found on the PATH. I will try to solve this myself, since it is probably just a matter of setting up the Windows C++ build environment (e.g. launching from a Developer Command Prompt).

P.S.: I am aware that it would be best to switch the better computer's OS to Ubuntu, but right now that is not an option. (I really do not like working with Windows and will switch soon.)

All the best,
