algorithmic-hri's Issues
Grayscale correct and consistent?
Just realized, I need to ensure that the frames are converted to grayscale in a manner consistent with spragnur's code. It looks like he uses ale.getScreenGrayscale.
Edit: ah, and I also have to ensure that my images take the max over two consecutive frames. Spragnur's code indeed does that.
So in all I need to check:
- grayscale conversion the same during data generation from human gameplay [TODO]
- taking the max over consecutive frames during data generation from human gameplay [TODO]
- (also) ensuring that I use a frame skip of 4 only [Done, already inserted in code, but data not updated]
to make sure the networks are taking consistent inputs.
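The frame-max step in the checklist above can be sketched with numpy. (The grayscale frames themselves should come from ale.getScreenGrayscale so the conversion matches spragnur's; this toy function only illustrates the pixel-wise max over two consecutive frames.)

```python
import numpy as np

def max_over_frames(frame1, frame2):
    # Pixel-wise max over two consecutive grayscale frames,
    # as in the NATURE DQN preprocessing.
    return np.maximum(frame1, frame2)

# Toy check with fake 2x2 "frames".
a = np.array([[0, 10], [20, 30]], dtype=np.uint8)
b = np.array([[5, 5], [25, 25]], dtype=np.uint8)
print(max_over_frames(a, b))  # [[ 5 10] [25 30]]
```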
Modify code to collect a count of all the actions chosen
Have an array of size (num_epochs, num_actions). Then each time we select an action, we increment the corresponding counter. That will provide some useful statistics and help check that the network is doing what I expect.
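A minimal sketch of the counter described above (the sizes here are illustrative stand-ins, not the real config values):

```python
import numpy as np

NUM_EPOCHS = 2    # illustrative values, not the real config
NUM_ACTIONS = 4

# One row of per-action counts for each epoch.
action_counts = np.zeros((NUM_EPOCHS, NUM_ACTIONS), dtype=np.int64)

def record_action(epoch, action):
    # Increment the counter each time an action is selected.
    action_counts[epoch, action] += 1

# Simulate a few action choices in epoch 0.
for a in [1, 1, 3, 0]:
    record_action(0, a)
print(action_counts[0])  # per-action counts for epoch 0
```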
Figure out a way to use ale.getMinimalActionSet()
Reminder to self: spragnur's code actually uses ale.getMinimalActionSet(), which gives us the minimal set of actions needed to play (e.g., in Breakout it's 0:NOOP, 1:FIRE, 3:RIGHT, and 4:LEFT). This might help generalize some of my code to other games.
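One way this generalizes: the network outputs an index into the minimal set, and that index is mapped back to the full ALE action before acting. A sketch (the ALE call is commented out since it needs a loaded ROM; the Breakout action values are the ones quoted above):

```python
# With a real ALEInterface and ROM loaded, the minimal set would come from:
#   minimal = ale.getMinimalActionSet()
minimal = [0, 1, 3, 4]  # stand-in: Breakout's minimal set (NOOP, FIRE, RIGHT, LEFT)

def network_index_to_ale_action(idx):
    # The net's output layer has len(minimal) units; its argmax is an
    # index into the minimal set, not a raw ALE action.
    return minimal[idx]

print(network_index_to_ale_action(2))  # 3, i.e. RIGHT in Breakout
```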
NATURE version will quickly go through 1 million iterations
Per epoch, there are 250k steps for the NATURE version.
With 1 million steps, this means we will get right past the human net case in just 4 epochs. In fact, it could be less if the testing epochs of 125k steps are included, but I don't think that's the case.
I should consider increasing the number of steps (EPSILON_DECAY in the code) so that I can better investigate the impact of my net. Maybe increasing it by 10x, to 10 million, would work?
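The arithmetic behind the numbers above, using the values quoted in this issue:

```python
STEPS_PER_EPOCH = 250_000   # NATURE setting, per this issue
EPSILON_DECAY = 1_000_000   # steps over which epsilon is annealed

# How many training epochs until the 1M-step decay is exhausted.
epochs_to_decay = EPSILON_DECAY / STEPS_PER_EPOCH
print(epochs_to_decay)       # 4.0

# The proposed 10x increase pushes that out to 40 epochs.
print(10 * EPSILON_DECAY / STEPS_PER_EPOCH)  # 40.0
```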
How to get human experience replay
Doing human experience replay the naive way (i.e., making a separate numpy array, loading it in, and then combining it with the built-in dataset in deep_q_rl) means the code runs possibly several orders of magnitude slower. The built-in replay memory has a size of 1 million, and my data is "only" on the order of 10k, so there's no reason why my version should be that slow. My guess is that it has something to do with memory issues: if I decrease my human experience replay data by a factor of 10, runtime decreases by a factor of 10.
So let's instead figure out how to get the dataset built into the normal experience replay in deep_q_rl.
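A minimal sketch of the "build it into the normal replay memory" approach: feed the human transitions into the existing dataset one sample at a time. This assumes a deep_q_rl-style dataset object with an add_sample(img, action, reward, terminal) method (check ale_data_set.py for the actual signature before relying on this); the FakeDataSet here is just a stand-in to show the call pattern.

```python
def load_human_data(dataset, imgs, actions, rewards, terminals):
    # Push each pre-recorded human transition into the replay
    # dataset, instead of keeping the human data in a separate
    # numpy array on the side.
    for img, a, r, t in zip(imgs, actions, rewards, terminals):
        dataset.add_sample(img, a, r, t)

# Tiny stand-in for the real DataSet, just to demonstrate the pattern.
class FakeDataSet:
    def __init__(self):
        self.samples = []
    def add_sample(self, img, action, reward, terminal):
        self.samples.append((img, action, reward, terminal))

ds = FakeDataSet()
load_human_data(ds, imgs=[0, 1], actions=[2, 3],
                rewards=[0.0, 1.0], terminals=[False, True])
print(len(ds.samples))  # 2
```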
Input scale?
A continuation of this previous issue.
The code in spragnur divides the state by the input scale. So I think I have to do the same thing, i.e., train the human-guided net on input that was divided by 255.
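The scaling itself is a one-liner; the point is just that the human-guided net must see the same [0, 1] range (the constant name here is illustrative):

```python
import numpy as np

INPUT_SCALE = 255.0  # divisor, matching the division in spragnur's code

def scale_state(frames):
    # Map uint8 pixel values into [0, 1] floats before they
    # reach the network.
    return np.asarray(frames, dtype=np.float32) / INPUT_SCALE

s = scale_state(np.array([[0, 128, 255]], dtype=np.uint8))
print(s)  # values now lie in [0, 1]
```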
Change the cropping method for NATURE scripts!
If I'm using NATURE instead of NIPS (as I should be!), then I need to change the cropping method in my code to use the NATURE version, which actually does it "worse". Unfortunately, I didn't do that for my Breakout version. :(
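The difference between the two preprocessing styles can be sketched as follows. The NIPS-style path resizes preserving aspect ratio and then crops; the NATURE-style path squashes the whole frame to 84x84, distorting the aspect ratio (which is why it does it "worse"). The nearest-neighbour resize and the crop offset here are illustrative stand-ins, not spragnur's exact code.

```python
import numpy as np

def resize_nn(img, h, w):
    # Toy nearest-neighbour resize, standing in for a real
    # image-resize call (e.g. cv2.resize).
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows][:, cols]

def nature_preprocess(frame):
    # NATURE-style: squash the full 210x160 frame to 84x84.
    return resize_nn(frame, 84, 84)

def nips_preprocess(frame):
    # NIPS-style: resize preserving aspect ratio (to ~110x84),
    # then crop an 84x84 window.
    resized = resize_nn(frame, 110, 84)
    crop_start = 110 - 84 - 8  # offset value is illustrative
    return resized[crop_start:crop_start + 84, :]

frame = np.zeros((210, 160), dtype=np.uint8)  # Atari frame size
print(nature_preprocess(frame).shape, nips_preprocess(frame).shape)
```

Both paths end at 84x84, so the networks are shape-compatible either way; only the pixel content differs, which is why data generated with the wrong method has to be redone.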