Comments (1)
For PPO, does this mean the discount factor used for GAE is the same as the reward discount factor?
I guess there is some confusion between GAE and what that piece of code does.
The code snippet is only about VecNormalize
and the way it normalizes the reward (there are many issues in the SB2/SB3 repos about why it is done that way).
GAE uses two hyperparameters: gamma,
the discount factor, and gae_lambda,
which controls the trade-off between the TD(0) and Monte Carlo estimates.
from rl-baselines3-zoo.
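The role of the two GAE hyperparameters mentioned in the comment can be illustrated with a minimal sketch. This is not SB3's actual implementation (which lives in its rollout buffer); the function name `compute_gae` and the array layout are assumptions for illustration:

```python
import numpy as np

def compute_gae(rewards, values, last_value, dones, gamma=0.99, gae_lambda=0.95):
    """Illustrative Generalized Advantage Estimation.

    gamma is the usual reward discount factor; gae_lambda trades off
    between the TD(0) estimate (gae_lambda=0) and the Monte Carlo
    estimate (gae_lambda=1).
    """
    n = len(rewards)
    advantages = np.zeros(n)
    last_gae = 0.0
    # Work backwards through the rollout, accumulating discounted TD errors.
    for t in reversed(range(n)):
        next_value = last_value if t == n - 1 else values[t + 1]
        next_non_terminal = 1.0 - dones[t]
        # One-step TD error: r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * next_value * next_non_terminal - values[t]
        last_gae = delta + gamma * gae_lambda * next_non_terminal * last_gae
        advantages[t] = last_gae
    return advantages
```

Note that this gamma is conceptually the same discount factor PPO uses, while VecNormalize takes its own `gamma` argument to maintain the running discounted return it normalizes rewards with, so the two settings are configured independently in SB3.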
Related Issues (20)
- Training DonkeyCar with TQC algorithm with pretrained AE
- [Bug]: Custom Sub-Hyperparameters during train.py -> Optimize
- [Question] You must pass an environment when using `HerReplayBuffer`
- [Question] RuntimeError: Unable to sample before the end of the first episode. We recommend choosing a value for learning_starts that is greater than the maximum number of timesteps in the environment.
- [Question] Custom Eval Callback for train/optimize
- [Bug]: TODO: add test dependencies in the `setup.py`
- [Question] Does hyperparameter tuning support custom vectorized environments?
- [Bug]: Training suddenly stops at 25000 timesteps and Optuna optimization immediately exits in my custom environment
- [Bug]: Custom environment not found in gym registry, you maybe meant... error message
- [Bug]: Optimization log and optimal policy not in `--optimization-log-path` but in `--log-folder`
- [Question] Number of parallel environments with hyperparameters optimization
- [Question] RL training could not reach convergence for a customised environment
- [Question] How many startup trials in distributed optimization
- [Question] The trained agent resets every 1000 episodes.
- Stuck at Local Minimum in PPO with CarRacing-v2 Environment
- [Question] How to render "info" on tensorboard
- [Bug]: Docker tag is still 2.2.0a2, but latest rl-baselines3-zoo is now 2.3.0
- [Question] Results vastly different for an agent created with Stable Baselines3 using hyperparameters optimized in RL Baselines3 Zoo.
- [Question] Should train & eval environment seeds differ?