
Comments (10)

djsaunde avatar djsaunde commented on May 17, 2024

I think it depends on the type of GPU being used and the code being run. Not all network simulations are well-suited to being run on the GPU, especially smaller networks. I have a GTX 1060 6gb that I use to test the code, and found that there are some great time savings when the simulated networks are large. On the other hand, small-scale experiments seem to take more time to run on the GPU.

I've also tested with an NVIDIA Quadro M1200, where I observed similar runtime behavior, but the 1060 card definitely gave better results for larger networks.

Intuitively, this makes sense to me: GPUs excel at single instruction, multiple data (SIMD) workloads, and large tensors can fit in GPU memory. However, moving data back and forth between main memory and GPU memory may be the speed bottleneck in small-scale simulations.
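The transfer-overhead point can be illustrated with a small timing sketch in PyTorch (a hypothetical illustration, not BindsNET code; the CUDA branch only runs if a GPU is available):

```python
import time
import torch

def time_matmul(device: str, n: int, iters: int = 5) -> float:
    """Average time for n x n matrix multiplies on `device`,
    including the host-to-device copy that dominates for small tensors."""
    total = 0.0
    for _ in range(iters):
        a = torch.rand(n, n)  # created in host (CPU) memory
        b = torch.rand(n, n)
        start = time.perf_counter()
        c = a.to(device) @ b.to(device)  # copy, then compute
        if device != "cpu":
            torch.cuda.synchronize()  # wait for the asynchronous GPU work
        total += time.perf_counter() - start
    return total / iters

# Small tensors: the copy often outweighs the compute, so the CPU can win.
# Large tensors: the parallel compute amortizes the copy cost.
for n in (64, 2048):
    line = f"n={n}: cpu {time_matmul('cpu', n):.4f}s"
    if torch.cuda.is_available():
        line += f", cuda {time_matmul('cuda', n):.4f}s"
    print(line)
```

On a typical setup the CPU column is competitive (or faster) at the small size, while the GPU pulls ahead at the large one, mirroring the behavior described above.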

All in all, it depends on the structure and size of the network being simulated. I'm hoping to quantify this more in the future, and have the information available on the repository.

from bindsnet.

djsaunde avatar djsaunde commented on May 17, 2024

@ssmint Feel free to take a look at the code and make suggestions or even pull requests to speed things up!

from bindsnet.

 avatar commented on May 17, 2024

I agree that larger networks benefit from using the GPU. I am using a Titan XP for testing. The network in Peter Diehl's example code is probably not large enough. I will close the issue, then.

Looking forward to seeing more quantitative results from BindsNET in the future.

from bindsnet.

djsaunde avatar djsaunde commented on May 17, 2024

It is interesting that the code runs 2x slower on the Titan XP. I'm not seeing such a dramatic difference with the GTX 1060. I'll be looking into this.

from bindsnet.

 avatar commented on May 17, 2024

I will play with the example code and send you a more detailed comparison if I have time.

from bindsnet.

haoyz avatar haoyz commented on May 17, 2024

The code also ran about 2x slower on a GTX 1070 than on the CPU when I ran examples/mnist/conv_mnist.py.

from bindsnet.

djsaunde avatar djsaunde commented on May 17, 2024

@IgnoreSilence From above:

I think it depends on the type of GPU being used and the code being run. Not all network simulations are well-suited to being run on the GPU, especially smaller networks. I have a GTX 1060 6gb that I use to test the code, and found that there are some great time savings when the simulated networks are large. On the other hand, small-scale experiments seem to take more time to run on the GPU.

from bindsnet.

djsaunde avatar djsaunde commented on May 17, 2024

As an aside, any pull requests addressing this issue will be greatly appreciated!

from bindsnet.

KyungsuKim42 avatar KyungsuKim42 commented on May 17, 2024

I was assuming that bindsnet uses the GPU by default, but it doesn't, and I couldn't see how to enable the GPU without seriously modifying the code. How are you all toggling the GPU on and off? Besides, I think introducing batch computing would dramatically increase GPU utilization for bindsnet. I'll make a pull request if this feature is needed.

from bindsnet.

djsaunde avatar djsaunde commented on May 17, 2024

I'm using torch.set_default_tensor_type('torch.cuda.FloatTensor') at the top of scripts when I intend to use the GPU. I haven't had any problems with this approach yet. It would be nice to have cpu() / gpu() functions for the Network / Nodes / Connection objects (see #160). Keep in mind (see above discussion) that enabling GPU computation won't always result in faster simulation.
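A minimal sketch of that approach, with a fallback so the same script still runs on CPU-only machines (torch.set_default_tensor_type is the call named above; newer PyTorch releases offer torch.set_default_device as an alternative):

```python
import torch

# Route newly created tensors to the GPU when one is available;
# otherwise keep the default CPU float tensor type.
if torch.cuda.is_available():
    torch.set_default_tensor_type('torch.cuda.FloatTensor')

x = torch.zeros(10)  # lives on the GPU if CUDA was found, on the CPU otherwise
print(x.device)
```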

As for batch computing, this may be complicated. Remember that Nodes objects typically maintain state (voltages, refractory periods, etc.), so computing over a batch will require duplicating those state variables (over a batch dimension) and keeping track of them independently during simulation.
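The state-duplication idea can be sketched roughly as follows (hypothetical names and update rule for illustration only, not BindsNET's actual Nodes implementation): every state variable gains a leading batch dimension, and one step updates all batch elements at once.

```python
import torch

class BatchedLIFNodes:
    """Toy leaky integrate-and-fire layer with a batch dimension.
    Illustrative only; simplified relative to any real Nodes object."""

    def __init__(self, batch_size: int, n: int,
                 thresh: float = 1.0, decay: float = 0.9):
        self.thresh = thresh
        self.decay = decay
        # State is duplicated per batch element: shape (batch, neurons).
        self.v = torch.zeros(batch_size, n)       # membrane voltages
        self.refrac = torch.zeros(batch_size, n)  # refractory counters

    def step(self, inputs: torch.Tensor) -> torch.Tensor:
        # Decay and integrate only neurons that are not refractory.
        active = self.refrac == 0
        self.v = torch.where(active, self.v * self.decay + inputs, self.v)
        self.refrac = torch.clamp(self.refrac - 1, min=0)
        # Spike, reset voltage, and start a refractory period.
        spikes = self.v >= self.thresh
        self.v = torch.where(spikes, torch.zeros_like(self.v), self.v)
        self.refrac = torch.where(
            spikes, torch.full_like(self.refrac, 5), self.refrac
        )
        return spikes

layer = BatchedLIFNodes(batch_size=4, n=100)
out = layer.step(torch.rand(4, 100))
print(out.shape)  # one spike tensor per batch element: (4, 100)
```

The cost is exactly the bookkeeping described above: voltages and refractory counters must be tracked independently per batch element, but the per-step tensor ops then cover the whole batch in one kernel launch, which is where the GPU utilization gain would come from.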

from bindsnet.
