Comments (3)
@alexggener transposed convolution synapses are already available in lava-dl SLAYER. The block API for transposed convolution is not available, though.
So there are two options for you:
- Not use the block API (the block API is just for convenience) and initialize the neuron, synapse, and quantization directly in your network definition until transposed convolution support is added to the block API, e.g.:
  neuron = slayer.neuron.cuba.Neuron(...)
  synapse = slayer.synapse.ConvTranspose(...)
  synapse.pre_hook_fx = neuron.quantize_8bit  # if you need quantization
  and call them in sequence in forward():
  spike_output = neuron(synapse(spike_input))
- Implement the block API and contribute to the repo. I can list out what needs to be done.
from lava-dl.
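The no-block recipe above can be sketched end to end. The snippet below is a toy numpy stand-in for the real lava-dl classes (slayer.synapse.ConvTranspose and slayer.neuron.cuba.Neuron): the transposed convolution and the neuron are reduced to minimal implementations so that the neuron(synapse(...)) composition and the pre_hook_fx quantization hook are runnable, not a faithful reproduction of the SLAYER API.

```python
import numpy as np

class ToyConvTransposeSynapse:
    """Toy 1-D transposed convolution; stand-in for slayer.synapse.ConvTranspose."""
    def __init__(self, kernel, stride=1):
        self.kernel = np.asarray(kernel, dtype=float)
        self.stride = stride
        self.pre_hook_fx = None  # optional weight hook, e.g. quantization

    def __call__(self, x):
        w = self.kernel
        if self.pre_hook_fx is not None:
            w = self.pre_hook_fx(w)  # mimics synapse.pre_hook_fx = neuron.quantize_8bit
        out = np.zeros((len(x) - 1) * self.stride + len(w))
        for i, xi in enumerate(x):
            # each input unit spreads its weighted kernel over the upsampled output
            out[i * self.stride:i * self.stride + len(w)] += xi * w
        return out

class ToyCubaNeuron:
    """Thresholding stand-in for slayer.neuron.cuba.Neuron (the real neuron
    integrates leaky current and voltage over time; here we only threshold)."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def quantize_8bit(self, w):
        # toy hook: snap weights to a 1/128 grid
        return np.round(np.asarray(w) * 128.0) / 128.0

    def __call__(self, drive):
        return (drive >= self.threshold).astype(float)

# wire them up exactly as the comment suggests
neuron = ToyCubaNeuron(threshold=1.0)
synapse = ToyConvTransposeSynapse(kernel=[0.5, 1.0, 0.5], stride=2)
synapse.pre_hook_fx = neuron.quantize_8bit  # if you need quantization

spike_input = np.array([1.0, 0.0, 1.0])
spike_output = neuron(synapse(spike_input))  # the forward() composition
```

With stride 2 and a length-3 kernel, a length-3 input upsamples to a length-7 drive, and only the positions driven by a full kernel peak cross the threshold.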
@bamsumit Thanks for pointing us in the right direction. Implementing it in the block API seems doable, so we'll try to contribute it. Any guidance during the process would be appreciated.
@alexggener sounds good. Here are the steps:
- In slayer/base.py, implement AbstractConvT and AbstractUnpool. Follow a flow similar to AbstractConv and AbstractPool respectively, with a different synapse of course.
- Derive ConvT and Unpool in all the other neuron-specific blocks.
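The two steps above can be sketched as an illustrative skeleton. Apart from the class names taken from the steps, everything here (the neuron/synapse attributes, the forward signature, the stub callables) is a hypothetical outline of the flow, not the actual lava-dl block code:

```python
class AbstractConvT:
    """Sketch of a base block: owns a synapse and a neuron and chains them,
    mirroring the AbstractConv flow described in the steps above."""
    def __init__(self, neuron, synapse):
        self.neuron = neuron    # in lava-dl these would be constructed from
        self.synapse = synapse  # neuron and synapse parameters instead

    def forward(self, spike):
        # same composition as the no-block recipe: neuron(synapse(spike))
        return self.neuron(self.synapse(spike))

class CubaConvT(AbstractConvT):
    """Step 2: each neuron module derives its own ConvT from the abstract base."""
    pass

# usage with stub callables standing in for a real neuron and synapse
block = CubaConvT(neuron=lambda z: [float(v >= 1.0) for v in z],
                  synapse=lambda s: [2.0 * v for v in s])
out = block.forward([0.2, 0.6])
```

The point of the derivation step is that the composition logic lives once in the abstract base, while each neuron module only supplies its own neuron type.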