fangwei123456 / spikingjelly

Home Page: https://spikingjelly.readthedocs.io

License: Other

Languages: Python 91.52%, Cuda 8.48%
Topics: pytorch, spiking-neural-networks, snn, deep-learning, machine-learning, dvs

spikingjelly's Introduction

SpikingJelly


English | 中文(Chinese)


SpikingJelly is an open-source deep learning framework for Spiking Neural Networks (SNNs), based on PyTorch.

The documentation of SpikingJelly is written in both English and Chinese: https://spikingjelly.readthedocs.io.

Installation

Note that SpikingJelly is based on PyTorch. Please make sure that you have installed PyTorch before you install SpikingJelly.

Version notes

Odd version numbers denote development versions, updated in sync with the GitHub/OpenI repository. Even version numbers denote stable versions, which are available on PyPI.

The default documentation is for the latest development version. If you are using a stable version, remember to switch to the documentation for the corresponding version.

Since version 0.0.0.0.14, modules including clock_driven and event_driven have been renamed. Please refer to the tutorial Migrate From Old Versions.

Old versions of SpikingJelly may contain some fatal bugs. Refer to Bugs History with Releases for more details.


Install the latest stable version from PyPI:

pip install spikingjelly

Install the latest developing version from the source code:

From GitHub:

git clone https://github.com/fangwei123456/spikingjelly.git
cd spikingjelly
python setup.py install

From OpenI:

git clone https://openi.pcl.ac.cn/OpenI/spikingjelly.git
cd spikingjelly
python setup.py install

Build SNNs In An Unprecedentedly Simple Way

SpikingJelly is user-friendly. Building an SNN with SpikingJelly is as simple as building an ANN in PyTorch:

import torch.nn as nn
from spikingjelly.activation_based import layer, neuron, surrogate

tau = 2.0  # membrane time constant
net = nn.Sequential(
    layer.Flatten(),
    layer.Linear(28 * 28, 10, bias=False),
    neuron.LIFNode(tau=tau, surrogate_function=surrogate.ATan())
)

This simple network with a Poisson encoder can achieve 92% accuracy on the MNIST test dataset. Refer to the tutorial for more details. You can also run the following command to train it on classifying MNIST:

python -m spikingjelly.activation_based.examples.lif_fc_mnist -tau 2.0 -T 100 -device cuda:0 -b 64 -epochs 100 -data-dir <PATH to MNIST> -amp -opt adam -lr 1e-3 -j 8
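
For illustration, a minimal simulation loop with a Poisson encoder might look like the sketch below (assumptions: net is the Sequential model above, and the input batch is scaled to [0, 1]):

import torch
from spikingjelly.activation_based import encoding, functional

encoder = encoding.PoissonEncoder()
T = 100                          # number of simulation time steps
img = torch.rand(64, 1, 28, 28)  # hypothetical batch of MNIST images in [0, 1]
fr = 0.
for t in range(T):
    fr += net(encoder(img))      # encode pixels to spikes, accumulate output spikes
fr = fr / T                      # output firing rate, used as the class scores
functional.reset_net(net)        # reset neuron states before the next batch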

Fast And Handy ANN-SNN Conversion

SpikingJelly implements a relatively general ANN-SNN conversion interface. Users can perform the conversion entirely within PyTorch, and can also customize the conversion mode.

import torch.nn as nn

class ANN(nn.Module):
    def __init__(self):
        super().__init__()
        self.network = nn.Sequential(
            nn.Conv2d(1, 32, 3, 1),
            nn.BatchNorm2d(32, eps=1e-3),
            nn.ReLU(),
            nn.AvgPool2d(2, 2),

            nn.Conv2d(32, 32, 3, 1),
            nn.BatchNorm2d(32, eps=1e-3),
            nn.ReLU(),
            nn.AvgPool2d(2, 2),

            nn.Conv2d(32, 32, 3, 1),
            nn.BatchNorm2d(32, eps=1e-3),
            nn.ReLU(),
            nn.AvgPool2d(2, 2),

            nn.Flatten(),
            nn.Linear(32, 10)
        )

    def forward(self, x):
        return self.network(x)

This simple network with analog encoding can achieve 98.44% accuracy on the MNIST test dataset after conversion. Read the tutorial for more details. You can also run this code in a Python terminal to train and convert a model for classifying MNIST:

>>> import spikingjelly.activation_based.ann2snn.examples.cnn_mnist as cnn_mnist
>>> cnn_mnist.main()
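
For reference, a rough sketch of the conversion step itself with the activation_based API (argument names may differ across versions, and train_loader is a hypothetical DataLoader over the MNIST training set):

from spikingjelly.activation_based import ann2snn

ann = ANN()
# train `ann` as a normal PyTorch model first, then convert it:
converter = ann2snn.Converter(mode='max', dataloader=train_loader)
snn = converter(ann)  # a spiking version of the trained ANN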

CUDA-Enhanced Neuron

SpikingJelly provides two backends for multi-step neurons. You can use the user-friendly torch backend for easy coding and debugging, and the cupy backend for faster training.

The following figure compares the execution time of the two backends for multi-step LIF neurons (float32):


The cupy backend also provides float16, which can be used in automatic mixed precision training.

To use the cupy backend, please install CuPy. Note that the cupy backend only supports GPU, while the torch backend supports both CPU and GPU.
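
The backend is selected per neuron. A minimal sketch, assuming the activation_based API:

from spikingjelly.activation_based import neuron

# multi-step LIF neuron; 'torch' runs on CPU or GPU, 'cupy' requires CuPy and a GPU
lif = neuron.LIFNode(tau=2.0, step_mode='m', backend='cupy')
# an existing neuron can also be switched back:
lif.backend = 'torch'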

Device Support

  • Nvidia GPU
  • CPU

As simple as using PyTorch.

>>> net = nn.Sequential(layer.Flatten(), layer.Linear(28 * 28, 10, bias=False), neuron.LIFNode(tau=tau))
>>> net = net.to(device) # Can be CPU or CUDA devices

Neuromorphic Dataset Support

SpikingJelly includes the following neuromorphic datasets:

Dataset | Source
ASL-DVS | Graph-based Object Classification for Neuromorphic Vision Sensing
CIFAR10-DVS | CIFAR10-DVS: An Event-Stream Dataset for Object Classification
DVS128 Gesture | A Low Power, Fully Event-Based Gesture Recognition System
ES-ImageNet | ES-ImageNet: A Million Event-Stream Classification Dataset for Spiking Neural Networks
HARDVS | HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors
N-Caltech101 | Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades
N-MNIST | Converting Static Image Datasets to Spiking Neuromorphic Datasets Using Saccades
Nav Gesture | Event-Based Gesture Recognition With Dynamic Background Suppression Using Smartphone Computational Capabilities
Spiking Heidelberg Digits (SHD) | The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks
DVS-Lip | Multi-Grained Spatio-Temporal Features Perceived Network for Event-Based Lip-Reading

Users can use both the original event data and the frame data integrated by SpikingJelly:

import torch
from torch.utils.data import DataLoader
from spikingjelly.datasets import pad_sequence_collate, padded_sequence_mask
from spikingjelly.datasets.dvs128_gesture import DVS128Gesture

# Set the root directory for the dataset
root_dir = 'D:/datasets/DVS128Gesture'
# Load event dataset
event_set = DVS128Gesture(root_dir, train=True, data_type='event')
event, label = event_set[0]
# Print the keys and their corresponding values in the event data
for k in event.keys():
    print(k, event[k])

# t [80048267 80048277 80048278 ... 85092406 85092538 85092700]
# x [49 55 55 ... 60 85 45]
# y [82 92 92 ... 96 86 90]
# p [1 0 0 ... 1 0 0]
# label 0

# Load a dataset with fixed frame numbers
fixed_frames_number_set = DVS128Gesture(root_dir, train=True, data_type='frame', frames_number=20, split_by='number')
# Randomly select two frames and print their shapes
rand_index = torch.randint(low=0, high=len(fixed_frames_number_set), size=[2])
for i in rand_index:
    frame, label = fixed_frames_number_set[i]
    print(f'frame[{i}].shape=[T, C, H, W]={frame.shape}')

# frame[308].shape=[T, C, H, W]=(20, 2, 128, 128)
# frame[453].shape=[T, C, H, W]=(20, 2, 128, 128)

# Load a dataset with a fixed duration and print the shapes of the first 5 samples
fixed_duration_frame_set = DVS128Gesture(root_dir, data_type='frame', duration=1000000, train=True)
for i in range(5):
    x, y = fixed_duration_frame_set[i]
    print(f'x[{i}].shape=[T, C, H, W]={x.shape}')

# x[0].shape=[T, C, H, W]=(6, 2, 128, 128)
# x[1].shape=[T, C, H, W]=(6, 2, 128, 128)
# x[2].shape=[T, C, H, W]=(5, 2, 128, 128)
# x[3].shape=[T, C, H, W]=(5, 2, 128, 128)
# x[4].shape=[T, C, H, W]=(7, 2, 128, 128)

# Create a data loader for the fixed duration frame dataset and print the shapes and sequence lengths
train_data_loader = DataLoader(fixed_duration_frame_set, collate_fn=pad_sequence_collate, batch_size=5)
for x, y, x_len in train_data_loader:
    print(f'x.shape=[N, T, C, H, W]={tuple(x.shape)}')
    print(f'x_len={x_len}')
    mask = padded_sequence_mask(x_len)  # mask.shape = [T, N]
    print(f'mask=\n{mask.t().int()}')
    break

# x.shape=[N, T, C, H, W]=(5, 7, 2, 128, 128)
# x_len=tensor([6, 6, 5, 5, 7])
# mask=
# tensor([[1, 1, 1, 1, 1, 1, 0],
#         [1, 1, 1, 1, 1, 1, 0],
#         [1, 1, 1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 1, 0, 0],
#         [1, 1, 1, 1, 1, 1, 1]], dtype=torch.int32)

More datasets will be included in the future.

If a dataset's download link is not accessible to some users, they can download it from the OpenI mirror:

https://openi.pcl.ac.cn/OpenI/spikingjelly/datasets?type=0

All datasets saved in the OpenI mirror are permitted by their licenses or by the authors' agreement.

Tutorials

SpikingJelly provides elaborate tutorials. Here are some of them:

  • Basic Conception
  • Neuron
  • Single Fully Connected Layer SNN to Classify MNIST
  • Convolutional SNN to Classify FMNIST
  • ANN2SNN
  • Neuromorphic Datasets Processing
  • Classify DVS Gesture
  • Recurrent Connection and Stateful Synapse
  • STDP Learning
  • Reinforcement Learning

Other tutorials not listed here are also available in the documentation: https://spikingjelly.readthedocs.io.

Publications and Citation

Publications using SpikingJelly are recorded in Publications. If you use SpikingJelly in your paper, you can also add it to this table via a pull request.

If you use SpikingJelly in your work, please cite it as follows:

@article{doi:10.1126/sciadv.adi1480,
  author = {Wei Fang and Yanqi Chen and Jianhao Ding and Zhaofei Yu and Timothée Masquelier and Ding Chen and Liwei Huang and Huihui Zhou and Guoqi Li and Yonghong Tian},
  title = {SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence},
  journal = {Science Advances},
  volume = {9},
  number = {40},
  pages = {eadi1480},
  year = {2023},
  doi = {10.1126/sciadv.adi1480},
  url = {https://www.science.org/doi/abs/10.1126/sciadv.adi1480},
  eprint = {https://www.science.org/doi/pdf/10.1126/sciadv.adi1480},
  abstract = {Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties. As the emerging spiking deep learning paradigm attracts increasing interest, traditional programming frameworks cannot meet the demands of automatic differentiation, parallel computation acceleration, and high integration of processing neuromorphic datasets and deployment. In this work, we present the SpikingJelly framework to address the aforementioned dilemma. We contribute a full-stack toolkit for preprocessing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips. Compared to existing methods, the training of deep SNNs can be accelerated 11×, and the superior extensibility and flexibility of SpikingJelly enable users to accelerate custom models at low costs through multilevel inheritance and semiautomatic code generation. SpikingJelly paves the way for synthesizing truly energy-efficient SNN-based machine intelligence systems, which will enrich the ecology of neuromorphic computing. Motivation and introduction of the software framework SpikingJelly for spiking deep learning.}
}

Contribution

You can read the issues to find problems to be solved and the latest development plans. We welcome all users to join the discussion of development plans, solve issues, and send pull requests.

Not all API documents are written in both English and Chinese. We welcome users to help complete the translations (from English to Chinese or from Chinese to English).

About

Multimedia Learning Group, Institute of Digital Media (NELVT), Peking University and Peng Cheng Laboratory are the main developers of SpikingJelly.


The list of developers can be found here.

All Thanks to Our Contributors

Contributors

allenyolk, apu52, chekaiwei, dingjianhao, dkjshk, fangwei123456, grasshlw, hana61, harshhere905, k-h-ismail, lavishsheth, lucifer2859, lyu6poshao, madongyang-stack, maybe2022, met4physics, miaobuao, mohitd404, panyicheng, pkuxmq, ridgerchu, shinoharayuuyoru, shivam250702, thvnvtos, yanqi-chen, yeyumin, yufangmo, yult0821, zhouchenlin2096, zhy99426


spikingjelly's Issues

type object 'CIFAR10DVS' has no attribute 'downloadable'

When I execute

from spikingjelly.datasets.cifar10_dvs import CIFAR10DVS
from spikingjelly.datasets.dvs128_gesture import DVS128Gesture

print('CIFAR10-DVS downloadable', CIFAR10DVS.downloadable())
print('resource, url, md5\n', CIFAR10DVS.resource_url_md5())

print('DVS128Gesture downloadable', DVS128Gesture.downloadable())
print('resource, url, md5\n', DVS128Gesture.resource_url_md5())

it throws the following error:

E:\SoftWare\Anaconda3\lib\site-packages\torchaudio\extension\extension.py:14: UserWarning: torchaudio C++ extension is not available.
  warnings.warn('torchaudio C++ extension is not available.')
E:\SoftWare\Anaconda3\lib\site-packages\torchaudio\backend\utils.py:88: UserWarning: No audio backend is available.
  warnings.warn('No audio backend is available.')
Traceback (most recent call last):
  File "D:/Project/rpg_e2vid/test.py", line 4, in <module>
    print('CIFAR10-DVS downloadable', CIFAR10DVS.downloadable())
AttributeError: type object 'CIFAR10DVS' has no attribute 'downloadable'

Consulting the author

Hello, I am a student. I think your SNN package is really fantastic, and there are many questions I would like to ask you about (I can pay for consulting). Could you tell me your contact information?
My email is [email protected]
Looking forward to your reply.

About Unsupervised Learning

Hi!
You have done a brilliant job building and improving spikingjelly.
My question is: will spikingjelly support unsupervised learning such as STDP? Unsupervised learning is a very important area, and many researchers are working on it.
Thanks for your time!

Converting a feedforward ANN to an SNN

Hello, in the chapter on converting a feedforward ANN to an SNN, I ran into two problems when running the code:

  1. ann2snn.examples.if_cnn_mnist.py does not exist in the examples directory.
  2. Loading and saving the default conversion configuration uses the utils module. The text says that ann2snn.utils implements a Config class, but I could not find a Config implementation under ann2snn.examples.utils.

Looking forward to your reply.

Using a convolutional SNN to classify Fashion-MNIST

Hello, I would like to ask: in the convolutional SNN, what coding scheme converts the image input into spike input? Is it time-to-first-spike coding?
I did not find a detailed explanation in the tutorial; please advise.

Cross-references between the Chinese and English Dropout API docs

.. _Dropout.__init__-cn:

Building the documentation locally produces the following warnings:

C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.Dropout:1: WARNING: undefined label: dropout.__init__-en
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.Dropout:28: WARNING: undefined label: dropout.__init__-cn
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.Dropout2d:1: WARNING: undefined label: dropout2d.__init__-en
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.Dropout2d:11: WARNING: undefined label: dropout.__init__-cn
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.Dropout2d:13: WARNING: undefined label: dropout2d.__init__-cn
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.Dropout2d:24: WARNING: undefined label: dropout.__init__-en
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.MultiStepDropout:1: WARNING: undefined label: dropout.__init__-en
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.MultiStepDropout:28: WARNING: undefined label: dropout.__init__-cn
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.MultiStepDropout2d:1: WARNING: undefined label: dropout2d.__init__-en
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.MultiStepDropout2d:11: WARNING: undefined label: dropout.__init__-cn
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.MultiStepDropout2d:13: WARNING: undefined label: dropout2d.__init__-cn
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.MultiStepDropout2d:24: WARNING: undefined label: dropout.__init__-en
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.DropConnectLinear:16: WARNING: undefined label: dropout.__init__-cn
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\layer.py:docstring of spikingjelly.clock_driven.layer.DropConnectLinear:49: WARNING: undefined label: dropout.__init__-en

Spiking LSTM is slow and does not converge

The training code is as follows. On a K80 it runs at about 1.7 iterations per second, and after several epochs of training the accuracy is still 0.1:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from spikingjelly.clock_driven import rnn
from torch.utils.tensorboard import SummaryWriter
import readline
import torchvision
import tqdm
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = rnn.SpikingLSTM(28, 1024, 10)
        self.fc = nn.Linear(1024, 10)

    def forward(self, x):
        x, _ = self.lstm(x)
        return self.fc(x[-1])
def main():

    device = input('input device, e.g., "cpu" or "cuda:0": ')
    dataset_dir = input('input root directory for saving the MNIST dataset, e.g., "./": ')
    batch_size = int(input('input batch_size, e.g., "64": '))
    learning_rate = float(input('input learning rate, e.g., "1e-3": '))
    train_epoch = int(input('input training epochs, e.g., "100": '))
    log_dir = input('input root directory for saving tensorboard logs, e.g., "./": ')




    writer = SummaryWriter(log_dir)

    # Initialize the data loaders
    train_data_loader = torch.utils.data.DataLoader(
        dataset=torchvision.datasets.MNIST(
            root=dataset_dir,
            train=True,
            transform=torchvision.transforms.ToTensor(),
            download=True),
        batch_size=batch_size,
        shuffle=True,
        drop_last=True)
    test_data_loader = torch.utils.data.DataLoader(
        dataset=torchvision.datasets.MNIST(
            root=dataset_dir,
            train=False,
            transform=torchvision.transforms.ToTensor(),
            download=True),
        batch_size=batch_size,
        shuffle=True,
        drop_last=False)

    # Initialize the network
    net = Net().to(device)
    # Use the Adam optimizer
    optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
    train_times = 0
    max_test_accuracy = 0
    for epoch in range(train_epoch):
        net.train()
        for img, label in tqdm.tqdm(train_data_loader):
            img = img.to(device)  # [N, 1, 28, 28]
            label = label.to(device)
            label_one_hot = F.one_hot(label, 10).float()

            img.squeeze_()  # [N, 28, 28]
            img = img.permute(1, 0, 2)  # [28, N, 28]

            optimizer.zero_grad()

            out_spikes_counter_frequency = net(img)

            loss = F.mse_loss(out_spikes_counter_frequency, label_one_hot)
            loss.backward()
            optimizer.step()


            accuracy = (out_spikes_counter_frequency.max(1)[1] == label).float().mean().item()
            if train_times % 256 == 0:
                writer.add_scalar('train_accuracy', accuracy, train_times)
            train_times += 1
        net.eval()
        with torch.no_grad():
            # Evaluate on the test set after each full pass over the training set
            test_sum = 0
            correct_sum = 0
            for img, label in test_data_loader:
                img = img.to(device)
                label = label.to(device)

                img.squeeze_()  # [N, 28, 28]
                img = img.permute(1, 0, 2)  # [28, N, 28]
                out_spikes_counter_frequency = net(img)

                correct_sum += (out_spikes_counter_frequency.argmax(dim=1) == label).float().sum().item()
                test_sum += label.numel()
            test_accuracy = correct_sum / test_sum
            writer.add_scalar('test_accuracy', test_accuracy, epoch)
            if max_test_accuracy < test_accuracy:
                max_test_accuracy = test_accuracy
                print('saving net...')
                torch.save(net, log_dir + '/net_max_acc.pt')
                print('saved')

        print(
            'device={}, dataset_dir={}, batch_size={}, learning_rate={}, log_dir={}, max_test_accuracy={}, train_times={}'.format(
                device, dataset_dir, batch_size, learning_rate, log_dir, max_test_accuracy, train_times
            ))


if __name__ == '__main__':
    main()

cannot import modules "_C_neuron" and "_C_gemm"

Hi !
First of all thank you very much for the great repo, this is amazing work !
I was trying to run the speed comparison in the first code snippet found at this tutorial page, but I ran into a couple of problems :

Traceback (most recent call last):
  File "/home/ulysse/Desktop/PFE CerCo/code/stereovision/spikingjelly/tuto_cext_acceleration.py", line 2, in <module>
    from spikingjelly.cext import neuron as cext_neuron
  File "/home/ulysse/miniconda3/envs/spiking/lib/python3.8/site-packages/spikingjelly/cext/neuron.py", line 5, in <module>
    import _C_neuron
ModuleNotFoundError: No module named '_C_neuron'

The same happens with module _C_gemm when trying to do from spikingjelly.cext import functional.

I'm far from being an expert in communication between CUDA/C/Python code, so I might be missing something obvious, but it seems to me that these import statements are no longer valid in the current repo. If so, could you please fix this problem; otherwise, could you tell me what I am doing wrong?

Thanks a lot in advance!

The implementation of NeuNorm

The implementation of NeuNorm here

self.x = self.k0 * self.x + (self.k1 * in_spikes.sum(dim=1).unsqueeze(1))
return in_spikes - self.w * self.x

seems different from the NeuNorm described in the original paper.

  1. In my opinion, the trainable weight U (named w in your code) should have the same size as the feature map, which means we should set w as [1, height, width], not [channels, 1, 1].
  2. The code in_spikes.sum(dim=1).unsqueeze(1) generates a tensor with shape [batch_size, 1, width, height], but L71 directly multiplies self.w by self.x, i.e., [channels, 1, 1] x [batch_size, 1, width, height], which may also be incorrect.

Learnable Membrane Time Constant

Hello, I was reading your paper "Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks". Did you also implement the learnable membrane time constant with spikingjelly? Is there a code example? Thanks!

About training spiking networks

Hello, thanks for your open-source work!
Your documentation gives a training example for a spiking neural network: Poisson-encoded classification on the MNIST dataset.
That network simulates 100 time steps and took 72 minutes to train on a GPU.
I have a few questions about this:

  1. Is simulating 100 time steps necessary? In my experiments, simulating 1 time step gives roughly the same results as 100 time steps. So I am puzzled: in direct training, how exactly does increasing the number of time steps improve the network? For conversion-based models, more time steps give better statistics of the number of emitted spikes and hence a more accurate firing rate, but for direct training there seems to be no comparably reasonable explanation.
  2. Also, increasing the number of time steps accumulates gradients T-fold when computing gradients. If more time steps really do make the network converge faster, could that be caused purely by this T-fold gradient accumulation? In other words, would a 1-step network trained with T times as many updates perform about the same?
  3. Could a large part of why spiking networks look effective be that the MNIST dataset itself only contains values near 0/1?

common.multiprocessing_env

from common.multiprocessing_env import SubprocVecEnv

ModuleNotFoundError: No module named 'common.multiprocessing_env'

About tempotron_mnist.py

In the example tempotron_mnist.py, should line 127 call test_data_loader rather than train_data_loader?

About the spiking parameter in the surrogate module

Hello, thank you very much for your open-source work! I would like to ask about the spiking parameter in the surrogate module. The documentation says: param spiking: whether to output spikes; defaults to True, in which case the forward pass uses heaviside and the backward pass uses the surrogate gradient; if False, the surrogate gradient is not used, and the forward pass uses the primitive function of the gradient-replacement function used in backpropagation. Can this be understood as: if spiking is False, the forward pass is based on the primitive of the surrogate gradient function rather than the step function, so the LIF neuron becomes something like a special ANN neuron, with the surrogate acting like a traditional activation function? As for the use of this parameter: could one first train a few epochs with spiking=False, then set spiking to True and continue training from the pretrained model? Also, in the source code I have so far only found the spiking parameter implemented in the PiecewiseLeakyReLU class; are the other surrogate gradient functions written similarly? Looking forward to your answer.

Handling spike gradients

Binary spikes should really be represented with the bool type, which would save a great deal of GPU memory. However, PyTorch's bool type cannot carry gradients, so a way to rewrite the gradient mechanism needs to be devised.

RuntimeError: Ninja is required to load C++ extensions

Hello,
I am struggling to get this module installed:

spikingjelly/cext/neuron.py", line 5, in
import _C_neuron
ModuleNotFoundError: No module named '_C_neuron'

I've tried both installing from pip and building from source. Can you help me?

Are surrogate gradients and backpropagation the same thing?

A question about the underlying principle that I have not been able to figure out.
The spikingjelly tutorial says that during backpropagation, a surrogate gradient function such as the sigmoid is used to replace the gradient of the spike. Meanwhile, I have seen "spike-based BP" in many papers, e.g., the error backpropagation algorithm in https://arxiv.org/pdf/1903.06379v1.pdf.
Is that error backpropagation the same as the one in spikingjelly? Looking forward to your explanation.
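
(For context: the tutorial's mechanism is ordinary backpropagation through time in which only the derivative of the Heaviside spike function is replaced by a smooth surrogate. A minimal sketch of that idea as a custom autograd Function, assuming an ATan-style surrogate:)

import math
import torch

class SurrogateSpike(torch.autograd.Function):
    # Heaviside in the forward pass, smooth surrogate derivative in the backward pass
    @staticmethod
    def forward(ctx, x, alpha=2.0):
        ctx.save_for_backward(x)
        ctx.alpha = alpha
        return (x >= 0.).to(x)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # derivative of (1/pi) * arctan(pi/2 * alpha * x) + 1/2
        sg = ctx.alpha / 2 / (1 + (math.pi / 2 * ctx.alpha * x) ** 2)
        return grad_output * sg, None

spike = SurrogateSpike.apply  # spike(v - v_threshold) fires where v >= v_threshold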

ANN2SNN issue

cnn_mnist reports an error with the ONNX version:

[ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Failed to load model with error: /onnxruntime_src/onnxruntime/core/graph/model_load_utils.h:47 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::basic_string, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only guarantees support for models stamped with official released onnx opset versions. Opset 14 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 13.

Should we consider adding onnx.version_converter? See the onnx documentation.

Using CUDA-enhanced neurons on an RTX 3090

Hello, I successfully used the CUDA-enhanced neurons for acceleration on a 2080 Ti with CUDA 10.1. When I used the same installation method to try further acceleration on a 3090 with CUDA 11, installation succeeded, and everything runs fine without the CUDA-enhanced neurons, but when using them, importing the module fails: import _C_neuron raises ImportError: libcudart.so.10.1: cannot open shared object file: No such file or directory. Is there currently a requirement on the CUDA version? Thanks a lot for your help!

Rewriting the N-MNIST dataset

Rewrite it using the functions in torchvision.datasets.utils, which would be better than the current hand-written download and extraction code.

Is the unit of the LIF neuron's parameter tau seconds or milliseconds?

A question: in the LIF neuron, if the default membrane time constant tau is, say, 100, does that physically mean 100 s or 100 ms? In many papers the LIF membrane time constant is given in milliseconds; is that also the case for the LIF in spikingjelly?
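
(For reference, the discrete-time LIF charge equation in spikingjelly is, up to notation, the one below; tau there is dimensionless, i.e., measured in simulation time steps rather than seconds or milliseconds:)

H[t] = V[t-1] + \frac{1}{\tau}\left( X[t] - \left( V[t-1] - V_{\text{reset}} \right) \right)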

Encoder redesign

Although the encoders are not used much, the current way the encoders are written is not great. The plan is to encode via an encode function and have forward output the single-step encoding result. If this change is made, the tutorials will also need minor updates afterwards.

ANN2SNN Example Bug

@DingJianhao


python spikingjelly/clock_driven/ann2snn/examples/resnet18_cifar10.py
/home/wangxiao/anaconda3/envs/spikingjelly/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/wangxiao/anaconda3/envs/spikingjelly/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/wangxiao/anaconda3/envs/spikingjelly/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/wangxiao/anaconda3/envs/spikingjelly/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/wangxiao/anaconda3/envs/spikingjelly/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/wangxiao/anaconda3/envs/spikingjelly/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Downloading ./SJ-cifar10-resnet18_model-sample.pth from https://ndownloader.figshare.com/files/26676110, please wait...
 input training device, e.g., "cpu" or "cuda:0": cuda:0
 input parsing device, e.g., "cpu" or "cuda:0": cuda:0
 input root directory for saving cifar10 dataset, e.g., "./": ./
 input batch_size, e.g., "128": 128
 input simulating steps, e.g., "400": 400
 input model name, for log_dir generating , e.g., "resnet18_cifar10":
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./cifar-10-python.tar.gz
100%|███████████████████████████████████████████████████████████████████████████████████████████▉| 170483712/170498071 [00:29<00:00, 8585486.03it/s]Extracting ./cifar-10-python.tar.gz to ./
170500096it [00:30, 5665526.36it/s]                                                                                                                 
Files already downloaded and verified
use 128 imgs to parse
parser log_dir: ./resnet18_cifar10/parser

Absorbing BatchNorm Parameters...

42it [00:00, 1065.84it/s]
Finish layer reduction!

Getting intermediate output statistics...

100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:08<00:00,  5.66it/s]
Finished getting intermediate output statistics!

Normalizing model...

  0%|                                                                                                                        | 0/50 [00:00<?, ?it/s]{'shape': (128, 3, 32, 32), 'min': -2.4290657, 'max': 2.7537065, '99.9': 2.753706455230713}
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 880.18it/s]
Making new scale node...
Finished normalizing model!
no user-defined conversion method found, use default
Traceback (most recent call last):
  File "spikingjelly/clock_driven/ann2snn/examples/resnet18_cifar10.py", line 110, in <module>
    main('./resnet18_cifar10')
  File "spikingjelly/clock_driven/ann2snn/examples/resnet18_cifar10.py", line 91, in main
    snn = onnxparser.parse(ann, norm_data.to(parser_device))
  File "/home/wangxiao/Documents/vidar-siamfc-pytorch/spikingjelly/spikingjelly/clock_driven/ann2snn/__init__.py", line 108, in parse
    torch.save(model, self.ann_filename)
  File "/home/wangxiao/anaconda3/envs/spikingjelly/lib/python3.7/site-packages/torch/serialization.py", line 370, in save
    _legacy_save(obj, opened_file, pickle_module, pickle_protocol)
  File "/home/wangxiao/anaconda3/envs/spikingjelly/lib/python3.7/site-packages/torch/serialization.py", line 443, in _legacy_save
    pickler.dump(obj)
_pickle.PickleError: can't pickle repeated message fields, convert to list first
(spikingjelly) wangxiao@wx:~/Documents/vidar-siamfc-pytorch/spikingjelly$

Missing model in the DQN CartPole example

Hello, I would like to ask a question.
When I run Spiking_DQN_state.py, it reports an error saying "policy_net_256_max.pt does not exist".
How can this policy_net_256_max.pt model be obtained? Or do I need to train one myself?

A question about spike encoders:

In the tutorial on using a convolutional SNN to classify Fashion-MNIST, it says: "In fact, we can feed images directly into the SNN. In this case, the first spiking neuron layer and the layers before it can be viewed as a parameter-learnable auto-encoder, e.g., these layers in the convolutional network we just defined." How should this sentence be understood? How does a convolutional layer encode pixel values into spike trains? Which paper does this method come from? And does this kind of encoding require normalizing the input? Looking forward to your answer, many thanks!

Improvements to the Monitor module

The current neuron Monitor has room for improvement:

  1. The monitored data is currently placed on the CPU in numpy format, but post-processing may still need the GPU, and the transfer overhead can be large; the output device should therefore be configurable.
  2. The monitored raw data is often not what users actually want; APIs for higher-level neuron and network statistics could be added and called on demand.
  3. Reduce the code coupling between the neurons and the Monitor.
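
(For point 1, a minimal plain-PyTorch sketch, not the SpikingJelly Monitor API: a forward hook whose recording device is configurable.)

import torch
import torch.nn as nn

records = []

def make_hook(device):
    # record each forward output on a user-chosen device, e.g., keep it on the GPU
    def hook(module, inputs, output):
        records.append(output.detach().to(device))
    return hook

layer_to_watch = nn.Linear(4, 4)  # stand-in for a neuron layer
handle = layer_to_watch.register_forward_hook(make_hook(torch.device('cpu')))
layer_to_watch(torch.rand(2, 4))  # records now holds the output on the chosen device
handle.remove()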

Visualizing Feature Maps

Hi,
Hope you are doing well.

I am trying to visualize MNIST feature maps by using your visualization code. However, I get the error below:

[screenshot of the error]

Can you please help me sort out this error? I have been stuck on visualizing feature maps for a long time.

Your help will be appreciated.

Thanks

Regards

Any plans to integrate code from the SpykeTorch project?

Hello, SpykeTorch is also developed on the PyTorch framework, but that project is now two years old.
Still, some of its functions, models, and design ideas could probably be integrated efficiently.
Do you plan to integrate code from the SpykeTorch project?

Is there a way to estimate the average spike firing rate?

Hello. Since this SNN framework is based on GPUs and pytorch, it cannot demonstrate the low power consumption of SNNs. Some studies indicate that the main power cost of neuromorphic chips lies in spike firing; if the spike firing rate and the corresponding number of synapses are known, the power can be roughly estimated.
Is there a way to estimate the average spike firing rate, for example when running a convolutional SNN?
Please advise.
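
(A minimal sketch of one way to estimate it, assuming a single-step neuron layer driven for T steps: accumulate the output spikes and divide by the number of neuron-steps.)

import torch
from spikingjelly.activation_based import neuron, functional

lif = neuron.LIFNode(tau=2.0)
x = torch.rand(8, 128)           # hypothetical input current, shape [batch, features]
T = 100
spike_count = 0.
for t in range(T):
    spike_count += lif(x).sum()  # outputs are 0/1 spikes, so the sum counts them
functional.reset_net(lif)
mean_firing_rate = spike_count / (T * x.numel())  # spikes per neuron per time step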

[Long-term] Cleaning up documentation build warnings

The warnings produced when building the documentation are a birth-and-death process...

Developers, please use some spare time to clean up the warnings produced when building the docs (not limited to your own code). You can build and test locally:

(pytorch-env) PS C:\Users\fw\Desktop\代码\spikingjelly> cd .\docs\
(pytorch-env) PS C:\Users\fw\Desktop\代码\spikingjelly\docs> make html
Running Sphinx v3.4.2
sys.path ['C:\\Users\\fw\\Desktop\\代码\\spikingjelly', 'C:\\Users\\fw\\anaconda3\\envs\\pytorch-env\\Scripts\\sphinx-build.exe', 'c:\\users\\fw\\anaconda3\\envs\\pytorch-env\\python38.zip', 'c:\\users\\fw\\anaconda3\\envs\\pytorch-env\\DLLs', 'c:\
\users\\fw\\anaconda3\\envs\\pytorch-env\\lib', 'c:\\users\\fw\\anaconda3\\envs\\pytorch-env', 'c:\\users\\fw\\anaconda3\\envs\\pytorch-env\\lib\\site-packages', 'c:\\users\\fw\\anaconda3\\envs\\pytorch-env\\lib\\site-packages\\scipy-1.6.0-py3.8-wi
n-amd64.egg', 'c:\\users\\fw\\anaconda3\\envs\\pytorch-env\\lib\\site-packages\\tqdm-4.55.1-py3.8.egg', 'c:\\users\\fw\\anaconda3\\envs\\pytorch-env\\lib\\site-packages\\matplotlib-3.3.3-py3.8-win-amd64.egg', 'c:\\users\\fw\\anaconda3\\envs\\pytorc
h-env\\lib\\site-packages\\python_dateutil-2.8.1-py3.8.egg', 'c:\\users\\fw\\anaconda3\\envs\\pytorch-env\\lib\\site-packages\\pyparsing-3.0.0b2-py3.8.egg', 'c:\\users\\fw\\anaconda3\\envs\\pytorch-env\\lib\\site-packages\\kiwisolver-1.3.1-py3.8-wi
n-amd64.egg', 'c:\\users\\fw\\anaconda3\\envs\\pytorch-env\\lib\\site-packages\\cycler-0.10.0-py3.8.egg', 'c:\\users\\fw\\desktop\\代码\\spikingjelly']
list dir ['.git', '.gitignore', '.idea', '.pytest_cache', '.readthedocs.yml', '.vscode', 'build', 'dist', 'docs', 'LICENSE', 'LICENSE-CN', 'README.md', 'README_cn.md', 'requirements.txt', 'setup.py', 'spikingjelly', 'spikingjelly.egg-info', 'test.p
y', '_C_neuron.cp38-win_amd64.pyd']
loading translations [zh_CN]... done
loading pickled environment... done
loading intersphinx inventory from https://pytorch.org/docs/stable/objects.inv...
WARNING: failed to reach any of the inventories with the following issues:
intersphinx inventory 'https://pytorch.org/docs/stable/objects.inv' not fetchable due to <class 'requests.exceptions.ConnectionError'>: HTTPSConnectionPool(host='pytorch.org', port=443): Max retries exceeded with url: /docs/stable/objects.inv (Caus
ed by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000024640EDB640>: Failed to establish a new connection: [WinError 10060] 由于连接方在一段时间后没有正确答复或连接的主机没有反应,连接尝试失败。'))
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 0 source files that are out of date
updating environment: 0 added, 6 changed, 0 removed
usage: sphinx-build [-h] [--seed SEED] [--use-cuda USE_CUDA]
sphinx-build: error: unrecognized arguments: -M html source build
Testing...
reading sources... [100%] tutorials_en
C:\Users\fw\Desktop\代码\spikingjelly\docs\source\modules.rst:4: WARNING: toctree contains reference to nonexisting document 'SpikingJelly'
WARNING: autodoc: failed to import module 'A2C' from module 'spikingjelly.clock_driven.examples'; the following exception was raised:
No module named 'common'
WARNING: autodoc: failed to import module 'DQN_state' from module 'spikingjelly.clock_driven.examples'; the module executes module level statement and it might call sys.exit().
WARNING: autodoc: failed to import module 'PPO' from module 'spikingjelly.clock_driven.examples'; the following exception was raised:
No module named 'common'
WARNING: autodoc: failed to import module 'Spiking_A2C' from module 'spikingjelly.clock_driven.examples'; the following exception was raised:
No module named 'common'
WARNING: autodoc: failed to import module 'Spiking_DQN_state' from module 'spikingjelly.clock_driven.examples'; the following exception was raised:
Traceback (most recent call last):
  File "c:\users\fw\anaconda3\envs\pytorch-env\lib\site-packages\sphinx\ext\autodoc\importer.py", line 67, in import_module
    return importlib.import_module(modname)
  File "c:\users\fw\anaconda3\envs\pytorch-env\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\examples\Spiking_DQN_state.py", line 310, in <module>
    play(use_cuda=False, pt_path='./model/CartPole-v0/policy_net_256_max.pt', env_name='CartPole-v0', \
  File "C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\examples\Spiking_DQN_state.py", line 213, in play
    policy_net = DQSN(n_states, hidden_size, n_actions, T).to(device)
  File "C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\examples\Spiking_DQN_state.py", line 55, in __init__
    nn.Linear(input_size, hidden_size),
  File "c:\users\fw\anaconda3\envs\pytorch-env\lib\site-packages\torch\nn\modules\linear.py", line 78, in __init__
    self.weight = Parameter(torch.Tensor(out_features, in_features))
TypeError: new(): argument 'size' must be tuple of ints, but found element of type 0 at pos 2

WARNING: autodoc: failed to import module 'Spiking_PPO' from module 'spikingjelly.clock_driven.examples'; the following exception was raised:
No module named 'common'
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\examples\classify_dvsg.py:docstring of spikingjelly.clock_driven.examples.classify_dvsg.main:15: WARNING: Option list ends without a blank line; unexpected unindent.
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\examples\classify_dvsg.py:docstring of spikingjelly.clock_driven.examples.classify_dvsg.main:29: WARNING: Unexpected indentation.
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\examples\classify_dvsg.py:docstring of spikingjelly.clock_driven.examples.classify_dvsg.main:30: WARNING: Block quote ends without a blank line; unexpected unindent.
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\examples\classify_dvsg.py:docstring of spikingjelly.clock_driven.examples.classify_dvsg.main:49: WARNING: Option list ends without a blank line; unexpected unindent.
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\examples\classify_dvsg.py:docstring of spikingjelly.clock_driven.examples.classify_dvsg.main:63: WARNING: Unexpected indentation.
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\examples\classify_dvsg.py:docstring of spikingjelly.clock_driven.examples.classify_dvsg.main:64: WARNING: Block quote ends without a blank line; unexpected unindent.
WARNING: autodoc: failed to import module 'spiking_lstm_text' from module 'spikingjelly.clock_driven.examples'; the following exception was raised:
Traceback (most recent call last):
  File "c:\users\fw\anaconda3\envs\pytorch-env\lib\site-packages\sphinx\ext\autodoc\importer.py", line 67, in import_module
    return importlib.import_module(modname)
  File "c:\users\fw\anaconda3\envs\pytorch-env\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\clock_driven\examples\spiking_lstm_text.py", line 243, in <module>
    net = torch.load('char_rnn_classification.pth')
  File "c:\users\fw\anaconda3\envs\pytorch-env\lib\site-packages\torch\serialization.py", line 581, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "c:\users\fw\anaconda3\envs\pytorch-env\lib\site-packages\torch\serialization.py", line 230, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "c:\users\fw\anaconda3\envs\pytorch-env\lib\site-packages\torch\serialization.py", line 211, in __init__
    super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'char_rnn_classification.pth'

WARNING: autodoc: failed to import module 'common.multiprocessing_env' from module 'spikingjelly.clock_driven.examples'; the following exception was raised:
No module named 'multiprocessing_env'
WARNING: autodoc: failed to import module 'common' from module 'spikingjelly.clock_driven.examples'; the following exception was raised:
No module named 'multiprocessing_env'
C:\Users\fw\Desktop\代码\spikingjelly\docs\source\spikingjelly.clock_driven.optim.rst:2: WARNING: Title underline too short.

spikingjelly.clock_driven.optim package
======================================
WARNING: autodoc: failed to import module 'optim' from module 'spikingjelly.clock_driven'; the following exception was raised:
No module named 'spikingjelly.clock_driven.optim'
WARNING: autodoc: failed to import module 'nav_gesture' from module 'spikingjelly.datasets'; the following exception was raised:
No module named 'spikingjelly.datasets.nav_gesture'
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\datasets\__init__.py:docstring of spikingjelly.datasets.split_to_train_test_set:4: WARNING: Field list ends without a blank line; unexpected unindent.
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\datasets\__init__.py:docstring of spikingjelly.datasets.split_to_train_test_set:8: WARNING: Unexpected indentation.
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\datasets\__init__.py:docstring of spikingjelly.datasets.split_to_train_test_set:11: WARNING: Block quote ends without a blank line; unexpected unindent.
C:\Users\fw\Desktop\代码\spikingjelly\spikingjelly\datasets\__init__.py:docstring of spikingjelly.datasets.NeuromorphicDatasetFolder:3: WARNING: Unknown interpreted text role "name".
C:\Users\fw\Desktop\代码\spikingjelly\docs\source\tutorials_en.rst:5: WARNING: toctree contains reference to nonexisting document 'tutorial.clock_driven_en'
looking for now-outdated files... none found
pickling environment... done
checking consistency... C:\Users\fw\Desktop\代码\spikingjelly\docs\source\clock_driven\1_surrogate.rst: WARNING: document isn't included in any toctree
C:\Users\fw\Desktop\代码\spikingjelly\docs\source\modules.rst: WARNING: document isn't included in any toctree
C:\Users\fw\Desktop\代码\spikingjelly\docs\source\spikingjelly.rst: WARNING: document isn't included in any toctree
C:\Users\fw\Desktop\代码\spikingjelly\docs\source\spikingjelly.clock_driven.ann2snn.kernels.rst: WARNING: document isn't included in any toctree
C:\Users\fw\Desktop\代码\spikingjelly\docs\source\tutorials.rst: WARNING: document isn't included in any toctree
C:\Users\fw\Desktop\代码\spikingjelly\docs\source\tutorials_en.rst: WARNING: document isn't included in any toctree
done
preparing documents... done
writing output... [100%] tutorials_en
generating indices... genindex py-modindex done
highlighting module code... [100%] spikingjelly.visualizing
writing additional pages... search done
copying images... [100%] _static/tutorials/clock_driven/11_cext_neuron_with_lbl/test.svg
copying downloadable files... [100%] ../clock_driven/examples/classify_dvsg.py
copying static files... done
copying extra files... done
dumping search index in Chinese (code: zh)... done
dumping object inventory... done
build succeeded, 31 warnings.

The HTML pages are in build\html.

cifar10_r11_enabling_spikebased_backpropagation.py

in the "cifar10_r11_enabling_spikebased_backpropagation.py",What is the purpose of the following code?
for m in net.modules():
if isinstance(m, LIFNode) and m.fire:
m.v_acc += (m.v_acc < 1e-3).float()
m.new_grad = (m.v_acc_l > 1e-3).float() + math.log(1 - 1 / m.tau) * torch.div(m.v_acc_l, m.v_acc)

Using a convolutional SNN to classify Fashion-MNIST

Which paper is the example "Clock driven: classify Fashion-MNIST with a convolutional SNN" based on? There are some parts of the code I still do not understand; it feels more like an ANN than an SNN. I also do not quite understand directly using the first convolutional layer as the spike encoder. Thanks.

Determining whether a module is stateful

Using the presence of a reset attribute to decide whether a layer is stateful is not reasonable:

def is_stateful(module: nn.Module):
    for m in module.modules():
        if hasattr(m, 'reset'):
            return True
    return False

If an internal variable does not change while the network is running and is only reset at the start or the end, there is no data dependency during the run, and execution can still be parallelized.

No module named '_C_neuron'

Hello, when running conv_fashion_mnist_cuda_lbl.py from the examples, I get No module named '_C_neuron'. How can this be solved? Thanks.
