Comments (7)
It's a bug! Well spotted - I must have mixed up the indices for s and s' at some point. I've just fixed that and run a quick test, and the DQN actually seems to be learning good policies (empirically, not just on metrics like V). So thanks for pointing that out.
I'm still actively developing this so if you do see anything else that looks wrong then please let me know.
from atari.
OK, no problem. One more question: in Google's source code for DQN, they use the next transition to create the next state. However, your code uses the prime state. Could you explain this? Thank you!
If I understand you correctly, then it seems like in the original source code (I'm using this repo and function for reference), they add s' and terminal to the memory, and then add s, a and r during training. They then use the memory to read out the historical sequence of states. They need a few conditions to deal with termination, and with storage during training vs. testing.
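That split insertion order can be sketched roughly as follows. This is a hypothetical Python sketch, not code from either project; the class and method names are my own:

```python
from collections import deque

class RecentAndReplay:
    """Illustrative sketch: s' and terminal are stored when a frame is
    perceived, while a and r for that step are appended later, during
    the training update."""
    def __init__(self, capacity=1000):
        self.s = deque(maxlen=capacity)
        self.terminal = deque(maxlen=capacity)
        self.a = deque(maxlen=capacity)
        self.r = deque(maxlen=capacity)

    def store_state(self, s_prime, terminal):
        # called on every perceived frame, training or testing
        self.s.append(s_prime)
        self.terminal.append(terminal)

    def store_effect(self, a, r):
        # called only while training, completing the pending transition
        self.a.append(a)
        self.r.append(r)

    def transition(self, i):
        # reassemble a full (s, a, r, s', terminal) tuple from consecutive slots
        return self.s[i], self.a[i], self.r[i], self.s[i + 1], self.terminal[i + 1]
```

The point is just that a complete transition only exists once two separate insertion calls have happened, which is why the real code needs the extra bookkeeping conditions.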
In my code, during training, I store two halves of different experience tuples: namely r, s' and terminal from one transition, and s (the same as s' here) and a from the next transition. In reality a hasn't been used on the environment yet, but it's assumed it will be. As for the historical sequence of states for playing the game, I use a separate structure, which nicely separates it from the memory. This way I don't have to worry about training vs. testing.
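A minimal sketch of this split-tuple storage, assuming 0-based indexing (the real code is 1-based Lua) and with hypothetical names:

```python
class SplitStorageMemory:
    """Illustrative sketch: each store() writes r, s' and terminal from
    the transition that just finished, plus the s and a of the next
    transition (s occupying the same slot as s')."""
    def __init__(self):
        # slot 0 holds dummy initial entries: state 0, terminal 0, no-op action 1;
        # rewards is offset by one slot relative to the other arrays
        self.states, self.actions, self.terminals = [0], [1], [0]
        self.rewards = []

    def store(self, r, s_prime, terminal, a_next):
        self.rewards.append(r)           # r from the transition that just finished
        self.states.append(s_prime)      # s'; doubles as s of the next tuple
        self.terminals.append(terminal)
        self.actions.append(a_next)      # a of the next tuple: chosen, not yet applied

    def transition(self, i):
        # full (s, a, r, s', terminal) reassembled from adjacent slots
        return (self.states[i], self.actions[i], self.rewards[i],
                self.states[i + 1], self.terminals[i + 1])
```

Here taking actions[i] in states[i] yields rewards[i] and states[i + 1], so a valid tuple is recovered even though its halves were written on different steps.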
Thank you. But I found that transitions[i][histLen] is always equal to self.states[index]. For example, suppose the retrieved index is 8. The current history should be transTuples = {s[5], s[6], s[7], s[8]} and the next-state history should be {s[6], s[7], s[8], s[9]}. So transitions[i][1] = transTuples[2] = s[6], transitions[i][2] = transTuples[3] = s[7] and transitions[i][3] = transTuples[4] = s[8]. This is right.
But the retrieval sets transitions[i][4] = self.states[index], whereas I think transitions[i][4] should be s[9].
So in my opinion, we can fix it by:
self.buffers.transitions[i][self.opt.histLen] = self.states[self.circIndex(indices[i] + 1)][self.castType]()
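To make the off-by-one concrete, here is a hypothetical 0-based Python sketch of the corrected next-state retrieval (the real code is 1-based Lua; the function name and layout are illustrative):

```python
def next_state_history(states, index, hist_len):
    """Build the next-state frame stack for a sample at `index` from a
    circular buffer: it reuses frames states[index-hist_len+2 .. index]
    and must end with states[index + 1], wrapped circularly."""
    size = len(states)
    frames = [states[(index - hist_len + 1 + k) % size] for k in range(1, hist_len)]
    # the corrected final frame: states[index + 1] (wrapped), not states[index]
    frames.append(states[(index + 1) % size])
    return frames
```

With index 8 and a history length of 4, this yields {s[6], s[7], s[8], s[9]} as argued above, and the modulo also handles wrapping at the end of the buffer.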
Another bug! OK, so the first one I fixed with a0663c0, and this one with 002ceca. Let me know if that looks correct now.
OK, I think it's correct now. I made a comparison between your sampling process and Google's.
Firstly, in your project you set states[1] = 0, terminals[1] = 0 and actions[1] = 1 (no-op). Then we get the first observation (r1, s1, t1) and action a1 at timestep 1. If training, we store rewards[1] = r1, states[2] = s1, terminals[2] = t1, actions[2] = a1. At the next timestep we observe (r2, s2, t2), get action a2, and store rewards[2] = r2, states[3] = s2, terminals[3] = t2, actions[3] = a2, and so on.
Then in Google's source project, they observe (r1, s1, t1) first, get the next observation (r2, s2, t2) by performing action a1, and then store rewards[1] = r2, states[1] = s1, terminals[1] = t1, actions[1] = a1.
So I think the first sample stored in Google's project, {states[1], rewards[1], terminals[1], actions[1]}, is equal to {states[2], rewards[2], terminals[2], actions[2]} in your project. Is that right?
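The claimed one-slot offset can be checked with a small sketch that mirrors both storage orders (1-based dicts to match the description above; all names hypothetical):

```python
def project_mem(trace):
    """This project's scheme: rewards[t] = r_t, states[t+1] = s_t,
    terminals[t+1] = t_t, actions[t+1] = a_t, with dummy initial slots."""
    states, actions, terminals, rewards = {1: 0}, {1: 1}, {1: 0}, {}
    for t, (r, s, term, a) in enumerate(trace, start=1):
        rewards[t] = r
        states[t + 1] = s
        terminals[t + 1] = term
        actions[t + 1] = a
    return states, actions, terminals, rewards

def google_mem(trace):
    """Google's scheme as described: slot t stores (s_t, r_{t+1}, t_t, a_t),
    since the reward arrives only after performing a_t."""
    mem = {}
    for t in range(1, len(trace)):     # trace is 0-based: trace[t-1] is step t
        r_next = trace[t][0]
        _, s, term, a = trace[t - 1]
        mem[t] = (s, r_next, term, a)
    return mem
```

Running both on the same trace shows Google's slot t matching this project's slot t + 1, exactly as claimed.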
Looks right to me. So my code actually has some incorrect assumptions, which can be seen when compared to their code. I allow sampling from index 1, which is invalid at the beginning. However, once the memory fills up it shouldn't be invalid, and I'm not sure they catch that. In fact, I believe that neither of us accounts for the fact that when the memory fills up, the transitions just after the insertion index become invalid (maybe they do, though; I'd have to examine their code more carefully).
One thing which I will definitely correct soon is that they require s to be the "s'" of a valid transition.
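A sketch of the kind of validity check this implies, purely illustrative and not taken from either codebase: indices whose history straddles the insertion point of a full circular buffer mix frames from different epochs and should be rejected.

```python
def is_valid_sample(i, write_pos, size, hist_len, full):
    """Hypothetical check: is index i safe to sample from a circular
    replay memory of `size` slots with histories of `hist_len` frames?"""
    if not full:
        # not enough history before i yet, and nothing stored at/after write_pos
        return hist_len <= i < write_pos
    # circular distance from the current insertion point
    dist = (i - write_pos) % size
    # the hist_len slots starting at write_pos straddle the overwrite
    # boundary, so their histories mix old and new data
    return dist >= hist_len
```

Under these assumptions, with a full buffer of 10 slots, histories of 4 frames and the write position at slot 3, indices 3 through 6 would be rejected while 7 onwards remain valid.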