Comments (3)
Thank you @bamsumit for the quick reply.
I cannot run the exact code you proposed: with it I get a device mismatch between the input tensor and the weight tensor. So I tried this instead:
net = Network().to(device)
net.forward(torch.rand(1, 2, 128, 128, 1).to(device))
net = nn.DataParallel(net, device_ids=[0, 1])
This snippet works. If I run it before the training loop, I can then train for several epochs without hitting the backprop error from my previous comment. Without the net.forward line, however, the error is still there. Do you understand why that is?
Thank you for the hint; I can now train on multiple GPUs.
from lava-dl.
Hi @michaelneumeier, can you try this?
net = Network()
net.forward(torch.rand(1, C, H, W, 1).to(device))  # C, H, W = your input dimensions
net = nn.DataParallel(net, device_ids=[0, 1])
net.to(device)
Glad you can run it now. The slayer models support runtime shape identification: you do not need to specify a layer's shape up front; it is inferred at runtime from the first input. With multiple GPUs, the shape update happens on the spawned DataParallel replica but is not reflected back on the main model, so the shapes need to be initialized on the main copy once with a dummy forward pass before the multi-GPU run. This should also work:
net = Network().to(device)
net = nn.DataParallel(net, device_ids=[0, 1])
...
net.module.forward(torch.rand(1, 2, 128, 128, 1).to(device))
# training sequence
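The replica-vs-main-model behavior described above can be illustrated without lava-dl or even PyTorch. The sketch below uses a hypothetical LazyLayer class (not part of slayer) that, like slayer blocks, infers its weight shape on the first forward call; deepcopy stands in for the per-GPU replica that nn.DataParallel spawns.

```python
import copy

class LazyLayer:
    """Toy layer that infers its weight shape on the first forward
    call, mimicking slayer's runtime shape identification."""
    def __init__(self, out_features):
        self.out_features = out_features
        self.weight_shape = None  # unknown until the first forward

    def forward(self, in_features):
        if self.weight_shape is None:
            # shape identified at runtime from the first input seen
            self.weight_shape = (self.out_features, in_features)
        return self.out_features

net = LazyLayer(out_features=8)

# A spawned replica (as DataParallel creates per GPU) only
# initializes ITS OWN copy; the main model is left untouched.
replica = copy.deepcopy(net)
replica.forward(in_features=4)
assert replica.weight_shape == (8, 4)
assert net.weight_shape is None  # main model still uninitialized

# Fix: one dummy forward on the main copy before replication.
net.forward(in_features=4)
assert net.weight_shape == (8, 4)
```

This is why the dummy net.forward (or net.module.forward) on the main model must run before or outside the DataParallel wrapper: replicas are re-created from the main model's state, so shape updates made inside a replica are discarded after each call.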
Related Issues (20)
- Support verification and optimization of YoloKP on Loihi2
- Compiled netx hdf5 models cannot be serialized.
- YOLO SDNN GPU inference notebook is too big to render on github
- Unable to reproduce Slayer NMNIST Test Accuracy
- lava.lib.dl.netx.hdf5 imports Convolutional Layers incorrectly
- YOLO SDNN inference
- SDNNs and SNNs
- error while using Recurrent block in lava-dl
- TypeError when using adrf neurons
- Regression Tutorial using slayer
- RuntimeError when using Recurrent blocks
- When using slayer.block.cuba.Pool, input-output dimensions are not as expected.
- next input block does not connect input port to neuron input.
- Allow slayer norms to use parameters
- Making the decay parameters (du, dv) learnable and separate du, dv for different layers?
- optimize_weight_bits is increasing the weight matrix scale?
- Netx DelaySynapse Bug: Weight_exp is None
- Neuron Parameters remain unchanged after setting them and also after training them.
- Save recurrent network in lava-dl to hdf5 file, and load hdf5 file into lava with NetX
- Accelerate BDD100K dataset