
q-optimality-tightening's People

Contributors

shibihe

q-optimality-tightening's Issues

Question about quadratic penalties

@ShibiHe
Hi, thanks for your great paper, and sorry to bother you.

In the paper, the upper and lower bounds are incorporated into the algorithm via quadratic penalties, but I cannot find the implementation corresponding to these two penalties.

It seems that the loss function is defined in the init function of the DeepQLearner class, and no penalties are added there.

The main differences from the original DQN code appear in the _do_training function of the OptimalityTightening class. I am not sure what the targets1 variable means, or how this implementation realizes the two quadratic penalties from the paper.
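
For reference, my rough understanding of the penalized objective in the paper is something like the sketch below (plain NumPy rather than your Theano code; the names lower_bound, upper_bound and lambda_penalty are my own placeholders, and the penalty weight is just an example value):

```python
import numpy as np

def penalized_loss(q_sa, target, lower_bound, upper_bound, lambda_penalty=1.0):
    """Standard Bellman error plus quadratic penalties for bound violations.

    q_sa:        Q(s_t, a_t) predicted by the online network
    target:      the usual one-step target r_t + gamma * max_a Q'(s_{t+1}, a)
    lower_bound: L_t, the tightest (largest) of the k-step return lower bounds
    upper_bound: U_t, the tightest (smallest) of the bounds from earlier steps
    """
    bellman = (q_sa - target) ** 2
    # The penalties are rectified and then squared, so they only contribute
    # when Q(s_t, a_t) actually falls outside [lower_bound, upper_bound].
    lower_violation = np.maximum(0.0, lower_bound - q_sa) ** 2
    upper_violation = np.maximum(0.0, q_sa - upper_bound) ** 2
    return bellman + lambda_penalty * (lower_violation + upper_violation)
```

Is the targets1 computation in _do_training an algebraically folded version of something like this?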

Please correct me if I'm wrong, and thank you very much!!

Fewer steps per second as training progresses

Apologies if there is an obvious answer, but from the README I gathered that, when running properly, the steps per second should remain roughly constant throughout training. Running on a GTX 970, I started out at ~90 steps per second with 25% GPU utilization. After leaving it to run overnight, I found it had completed only 6 epochs and had slowed to about 46 steps per second, with about 15% GPU utilization. Everything else runs perfectly, the issue takes several hours to appear, and restarting brings the rate back up to normal. Is there a known cause or solution for this?

Thank you

Question on upper bound

@ShibiHe ,
First of all, thanks for this inspiring paper and implementation, great work!

In the paper, you use index substitution to derive the upper bound for Q, which makes perfect sense mathematically.

However, in the implementation, the upper bound is used the same way as the lower bound: as a fixed target, with no dependency on (and therefore no gradient with respect to) the network parameters.

This means, for example, that at time step t, in the trajectory (s[t-2], a[t-2], r[t-2], s[t-1], a[t-1], r[t-1], s[t], a[t], r[t], ...), if r[t-2] and r[t-1] are very low, we need to decrease the value of Q[t] according to the upper bounds introduced by r[t-2] and r[t-1].

In other words, what happened before time step t affects the value of Q[t].

Does that conflict with the definition of the discounted future reward and with the Markov assumption of the MDP?
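
To make my concern concrete, this is roughly how I read the upper bound on Q[t] being built from the transitions before time t (my own notation, with placeholder values for gamma and K, not your actual code):

```python
import numpy as np

def upper_bound(q_values, rewards, t, gamma=0.99, K=4):
    """Upper bound U_t on Q(s_t, a_t) from up to K transitions preceding t.

    q_values[j] holds Q(s_j, a_j) along the stored trajectory and rewards[j]
    holds r_j; both are treated here as plain numbers (constants).
    """
    bounds = []
    for k in range(K):
        j = t - k - 1  # index of the earlier state-action pair
        if j < 0:
            break
        # Rearranging Q(s_j, a_j) >= r_j + gamma*r_{j+1} + ... + gamma^k*r_{t-1}
        #                            + gamma^(k+1) * Q(s_t, a_t)
        discounted = sum(gamma ** i * rewards[j + i] for i in range(k + 1))
        bounds.append((q_values[j] - discounted) / gamma ** (k + 1))
    return min(bounds) if bounds else np.inf
```

Since q_values and rewards enter only as constants here, the gradient of the resulting penalty flows only through Q(s_t, a_t), which is what I meant by "no dependency on the parameters".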

Please correct me if anything is wrong,

Thanks!
