Comments (10)
I think it depends on the type of GPU being used and the code being run. Not all network simulations are well-suited to running on the GPU, especially smaller networks. I have a GTX 1060 6GB that I use to test the code, and I've found that there are substantial time savings when the simulated networks are large. On the other hand, small-scale experiments seem to take more time to run on the GPU.
I've also tested with an NVIDIA Quadro M1200, where I observed similar runtime behavior, but the 1060 card definitely gave better results for larger networks.
Intuitively, this makes sense to me, as GPUs are known to excel at single instruction, multiple data (SIMD) workloads, and large tensors can fit in GPU memory. However, transferring data back and forth between main memory and GPU memory may be the speed bottleneck in the small-scale simulations.
All in all, it depends on the structure and size of the network being simulated. I'm hoping to quantify this more in the future, and have the information available on the repository.
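One way to see this crossover for yourself is a quick timing sketch. This is a minimal benchmark I'd suggest (not part of BindsNET; `time_matmul` is a hypothetical helper), assuming PyTorch is installed; it times matrix multiplications of different sizes on CPU vs. GPU to illustrate why small workloads can be dominated by kernel-launch and transfer overhead:

```python
import time
import torch

def time_matmul(size, device, iters=10):
    """Time `iters` matrix multiplications of (size x size) tensors on `device`."""
    a = torch.rand(size, size, device=device)
    b = torch.rand(size, size, device=device)
    if device == 'cuda':
        torch.cuda.synchronize()  # ensure setup kernels have finished before timing
    start = time.time()
    for _ in range(iters):
        c = a @ b
    if device == 'cuda':
        torch.cuda.synchronize()  # wait for all queued GPU kernels to complete
    return time.time() - start

# Small workloads are often dominated by per-call overhead, so the CPU can win;
# large workloads amortize that overhead and favor the GPU.
for size in (64, 2048):
    cpu_t = time_matmul(size, 'cpu')
    if torch.cuda.is_available():
        gpu_t = time_matmul(size, 'cuda')
        print(f"size={size}: cpu={cpu_t:.4f}s gpu={gpu_t:.4f}s")
    else:
        print(f"size={size}: cpu={cpu_t:.4f}s (no GPU available)")
```

The `torch.cuda.synchronize()` calls matter: CUDA kernel launches are asynchronous, so without them the timings would only measure how fast calls are queued, not how fast they run.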
from bindsnet.
@ssmint Feel free to take a look at the code and make suggestions or even pull requests to speed things up!
I agree that larger networks will benefit from using the GPU. I am using a Titan XP for testing. The network in Peter Diehl's example code is probably not large enough, so I will close the issue.
Looking forward to seeing more quantitative results from BindsNET in the future.
It is interesting that the code is 2x slower on the Titan XP. I'm not seeing such a dramatic difference with the GTX 1060; I'll be looking into this.
I will play with the example code and send you a more detailed comparison if I have time.
The code was also about 2x slower than on the CPU when I ran examples/mnist/conv_mnist.py on a GTX 1070.
@IgnoreSilence From above:
> I think it depends on the type of GPU being used and the code being run. Not all network simulations are well-suited to running on the GPU, especially smaller networks. I have a GTX 1060 6GB that I use to test the code, and I've found that there are substantial time savings when the simulated networks are large. On the other hand, small-scale experiments seem to take more time to run on the GPU.
As an aside, any pull requests addressing this issue will be greatly appreciated!
I was assuming that bindsnet uses the GPU by default, but it doesn't, and I didn't know how to enable the GPU without seriously modifying the code. How are you toggling the GPU on and off? Besides, I think introducing batch computing would dramatically increase GPU utilization in bindsnet. I'll make a pull request if this feature is needed.
I'm using `torch.set_default_tensor_type('torch.cuda.FloatTensor')` at the top of scripts when I intend to use the GPU. I haven't had any problems with this approach yet. It would be nice to have `cpu()` / `gpu()` functions for the `Network` / `Nodes` / `Connection` objects (see #160). Keep in mind (see the above discussion) that enabling GPU computation won't always result in faster simulation.
As for batch computing, this may be complicated. Remember that `Nodes` objects typically maintain state (voltages, refractory periods, etc.), so computing over a batch will require duplicating those state variables (over a batch dimension) and keeping track of them independently during simulation.
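To make the state-duplication point concrete, here is a toy sketch (my own simplified LIF dynamics, not BindsNET's actual `Nodes` API) where every state variable carries a leading batch dimension and evolves independently per batch element:

```python
import torch

class BatchedLIFNodes:
    """Toy sketch (not BindsNET's API): LIF state carried per batch element."""

    def __init__(self, batch_size, n, thresh=-52.0, rest=-65.0, decay=0.99):
        self.thresh, self.rest, self.decay = thresh, rest, decay
        # State variables gain a leading batch dimension and evolve independently.
        self.v = torch.full((batch_size, n), rest)   # membrane voltages
        self.refrac = torch.zeros(batch_size, n)     # refractory countdowns

    def step(self, x):
        """Advance one timestep with input current `x` of shape (batch, n)."""
        self.v = self.decay * (self.v - self.rest) + self.rest  # leak toward rest
        self.v += (self.refrac <= 0).float() * x     # integrate input outside refractory period
        spikes = self.v >= self.thresh               # threshold crossing, per batch element
        self.refrac = torch.clamp(self.refrac - 1, min=0)
        self.refrac[spikes] = 5                      # arbitrary 5-step refractory period
        self.v[spikes] = self.rest                   # reset spiking neurons
        return spikes

nodes = BatchedLIFNodes(batch_size=4, n=100)
s = nodes.step(torch.rand(4, 100) * 20.0)
print(s.shape)  # torch.Size([4, 100])
```

Every update is already elementwise, so batching is mostly a matter of adding that leading dimension consistently; the harder part in a real library is threading the batch dimension through monitors, learning rules, and connections as well.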