
Comments (4)

KyungsuKim42 commented on May 21, 2024

OK, I'll make a PR when my implementation is stable enough. It may take some time.


KyungsuKim42 commented on May 21, 2024

I understand what you've said, but I still think there is a problem with the update process. The process I expect is as follows:

  1. The network is simulated for a certain number of timesteps.
  2. Based on the computed output, the agent acts in the environment with the chosen action and receives an observation and a reward.
  3. The network updates its parameters based on the reward given by the environment.

However, the current implementation, as I understand it, is:

  1. Iterate a. and b. for each timestep:
    a. The network is simulated for a single timestep.
    b. The network's parameters are updated for that single timestep, based on the reward from the previous interaction with the environment.
  2. Based on the computed output, the agent acts in the environment with the chosen action and receives an observation and a reward.

The problem with the current implementation is that the reward-based update can only be applied after a single timestep of simulation, whereas I expect the update to be applied before any simulation timestep.
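
A minimal sketch of the two orderings; simulate_timestep, apply_reward_update, and select_action are placeholder names standing in for the corresponding BindsNET internals, not its actual API:

```python
# Expected ordering: apply the reward-based update once per interaction,
# before simulating the next window of timesteps.
def expected_loop(network, env, timesteps, reward):
    apply_reward_update(network, reward)      # 3. update from the last reward
    for t in range(timesteps):                # 1. simulate the full window
        simulate_timestep(network)
    action = select_action(network)           # 2. act, observe, get reward
    obs, reward, done, info = env.step(action)
    return reward

# Current ordering (as I understand it): update after every single timestep,
# always using the reward from the previous interaction.
def current_loop(network, env, timesteps, reward):
    for t in range(timesteps):
        simulate_timestep(network)            # 1a. one timestep of simulation
        apply_reward_update(network, reward)  # 1b. per-timestep update
    action = select_action(network)           # 2. act, observe, get reward
    obs, reward, done, info = env.step(action)
    return reward
```

The only difference is when apply_reward_update runs relative to the simulation window.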


djsaunde commented on May 21, 2024

This is intended; typically the environment takes a step, then the agent takes a step, then the environment again, and so on, as in the standard RL setup. The network can "tick" (i.e., run one simulation timestep) multiple times when it's called between interactions with the environment.

It might be interesting to allow reward to be either a float or a torch.Tensor; that is, a sequence of rewards, one for every simulation timestep. This could encompass your use case: set reward = torch.tensor([1, 0, 0, 0, 0]) for a network with simulation time 5 and reward-modulated STDP with reward 1.

This is also conceptually simpler than adding yet another flag variable.
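
A rough sketch of what the proposed tensor-valued reward could look like, assuming a reward-modulated rule would index the tensor by timestep the way it currently reuses a scalar (the loop below is illustrative, not BindsNET code):

```python
import torch

time = 5
# Proposed: one reward entry per simulation timestep instead of a single float.
# Here, reward 1 is applied only on the first of the 5 timesteps.
reward = torch.tensor([1., 0., 0., 0., 0.])

for t in range(time):
    # A tensor-valued reward would be indexed per timestep; a float would be
    # reused on every timestep, matching the current behaviour.
    r_t = reward[t] if torch.is_tensor(reward) else reward
    # ... the reward-modulated STDP update for timestep t would use r_t here ...
```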


djsaunde commented on May 21, 2024

Hm, that's interesting. Could you put together a PR to this effect? I can imagine, like you say, that some learning rules could be applied in a batch-like manner, where the updates only occur after a chunk of simulation time (rather than per simulation timestep).

This is already possible in experimental scripts: you can perform arbitrary updates to connection weights of your Network object between calls to pipeline.step(). However, it would be nice to have some sort of functionality for this built into BindsNET.
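
For example, something along these lines in an experimental script; the connection key ('X', 'Y'), the reward bookkeeping, and the update rule itself are placeholders, and only the pattern of mutating weights between pipeline.step() calls is the point:

```python
import torch

def run_episode_with_manual_update(pipeline, source='X', target='Y',
                                   lr=1e-3, steps=100):
    """Sketch of the workaround described above: step the pipeline as usual,
    then modify connection weights directly in between calls. The connection
    key, reward handling, and update rule are assumptions about a typical
    experimental script, not BindsNET's prescribed API."""
    for _ in range(steps):
        pipeline.step()  # environment interaction + network simulation

        # Arbitrary user-defined update between steps, e.g. nudging weights
        # in proportion to the most recent reward (tracked by the user's own
        # script if the pipeline does not expose it).
        reward = getattr(pipeline, 'reward', 0.0)
        w = pipeline.network.connections[(source, target)].w
        w += lr * reward * torch.randn_like(w)  # placeholder update rule
```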

