Teach a Taxi to pick up and drop off passengers at the right locations with Reinforcement Learning
ref: https://www.learndatasci.com/tutorials/reinforcement-q-learning-scratch-python-openai-gym/
Most of you have probably heard of AI learning to play computer games on its own, a very popular example being DeepMind. DeepMind hit the news when their AlphaGo program defeated the South Korean Go world champion in 2016. There have also been many successful attempts to develop agents that play Atari games like Breakout, Pong, and Space Invaders.
Each of these programs follows a paradigm of Machine Learning known as Reinforcement Learning. If you've never been exposed to reinforcement learning before, the following is a very straightforward analogy for how it works.
- !pip install cmake gym[atari] scipy
- If you run on your own PC instead of Colab, you will also need the gym[toy_text] library:
!pip install cmake gym[toy_text] scipy
-
Q_RL_trainig(hyperparameters, env, num_episodes = 100000)
It takes the hyperparameters as a tuple and the environment, with num_episodes = 100000 as the default value. Returns the q_table and (total_epochs + total_penalties) / num_episodes as a metric. -
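The body of Q_RL_trainig is not shown here, so the following is only a minimal sketch of the tabular Q-learning update it presumably performs. To keep the snippet self-contained it replaces Taxi-v3 with a toy 5-state chain (action 1 moves right toward a goal, action 0 moves left); the hyperparameter names alpha, gamma, and epsilon are assumptions based on the standard algorithm.

```python
import random
import numpy as np

def q_rl_training_sketch(alpha=0.1, gamma=0.9, epsilon=0.1,
                         n_states=5, n_actions=2, num_episodes=2000, seed=0):
    """Tabular Q-learning on a toy chain standing in for Taxi-v3.

    Reaching the last state ends the episode with reward +10;
    every other step costs -1.
    """
    rng = random.Random(seed)
    q_table = np.zeros((n_states, n_actions))
    for _ in range(num_episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                action = rng.randrange(n_actions)
            else:
                action = int(np.argmax(q_table[state]))
            # toy environment step (stand-in for env.step(action))
            next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
            done = next_state == n_states - 1
            reward = 10.0 if done else -1.0
            # the Q-learning update rule
            q_table[state, action] += alpha * (
                reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
            )
            state = next_state
    return q_table
```

After training, the greedy policy `np.argmax(q_table[state])` should choose "right" in every non-terminal state of the toy chain.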
Q_RL_evaluation(q_table, env, episodes = 100)
It takes the learned q_table and the environment, with episodes = 100 as the default value.
Then it prints the average timesteps per episode and the average penalties per episode. -
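A hedged sketch of what Q_RL_evaluation presumably does: roll out the greedy policy for a number of episodes, counting timesteps and penalties (in Taxi-v3 an illegal pickup/dropoff gives reward -10). The `env_step` callback and `initial_state` parameter are assumptions introduced here so the snippet runs without gym.

```python
import numpy as np

def q_rl_evaluation_sketch(q_table, env_step, initial_state, episodes=100, max_steps=200):
    """Greedy evaluation of a learned Q-table.

    env_step(state, action) must return (next_state, reward, done);
    a reward of -10 is counted as a penalty, as in Taxi-v3.
    """
    total_epochs, total_penalties = 0, 0
    for _ in range(episodes):
        state, epochs, penalties, done = initial_state, 0, 0, False
        while not done and epochs < max_steps:
            action = int(np.argmax(q_table[state]))  # always exploit
            state, reward, done = env_step(state, action)
            if reward == -10:
                penalties += 1
            epochs += 1
        total_epochs += epochs
        total_penalties += penalties
    print(f"Average timesteps per episode: {total_epochs / episodes}")
    print(f"Average penalties per episode: {total_penalties / episodes}")
    return total_epochs / episodes, total_penalties / episodes

# Usage on a 3-state toy chain where action 1 moves right to the goal:
chain = lambda s, a: ((min(s + 1, 2), 0.0, min(s + 1, 2) == 2) if a == 1
                      else (max(s - 1, 0), -10.0, False))
q = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 0.0]])
q_rl_evaluation_sketch(q, chain, initial_state=0, episodes=5)
```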
tune_alph_gamma_epsilon(alpha_tune, gamma_tune, epsilon_tune, num_of_iterations = 100000)
It takes alpha_tune, gamma_tune, and epsilon_tune, whose values determine how fast the exponential decay should be,
with num_of_iterations = 100000 as the default value.
It then returns alpha_values, gamma_values, and epsilon_values as arrays of the decayed values, each of length num_of_iterations.
It also displays a plot to show the tuning output.
for example:
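A minimal sketch of how such exponential-decay schedules could be generated, assuming the common form value_i = exp(-rate * i), where a larger rate means faster decay (the real function's formula is not shown, and plotting is omitted here for brevity):

```python
import numpy as np

def tune_alph_gamma_epsilon_sketch(alpha_tune=1e-4, gamma_tune=5e-5,
                                   epsilon_tune=1e-4, num_of_iterations=100000):
    """Build exponentially decaying schedules for alpha, gamma, and epsilon.

    Each *_tune argument is the per-iteration decay rate; every schedule
    starts at 1.0 and decays toward 0.
    """
    i = np.arange(num_of_iterations)
    alpha_values = np.exp(-alpha_tune * i)
    gamma_values = np.exp(-gamma_tune * i)
    epsilon_values = np.exp(-epsilon_tune * i)
    return alpha_values, gamma_values, epsilon_values
```

In the actual notebook the three arrays would then be plotted (e.g. with matplotlib) to compare how quickly each hyperparameter decays.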
-
Q_RL_train_exponential_decay(hyperparameters, env, num_episodes = 100000)
It takes the hyperparameters as a matrix and the environment, with num_episodes = 100000 as the default value.
Returns the q_table and (total_epochs + total_penalties) / num_episodes as a metric.
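The difference from plain training is presumably that alpha and epsilon are decayed per episode rather than held fixed, so early episodes explore and learn aggressively while later episodes stabilise. A self-contained sketch on the same kind of toy chain used above (the initial values and decay rate are illustrative assumptions):

```python
import random
import numpy as np

def q_rl_train_decay_sketch(alpha0=0.9, epsilon0=1.0, gamma=0.9,
                            decay_rate=1e-3, n_states=5, num_episodes=2000, seed=0):
    """Q-learning with per-episode exponential decay of alpha and epsilon."""
    rng = random.Random(seed)
    q_table = np.zeros((n_states, 2))
    for ep in range(num_episodes):
        alpha = alpha0 * np.exp(-decay_rate * ep)      # decayed learning rate
        epsilon = epsilon0 * np.exp(-decay_rate * ep)  # decayed exploration rate
        state, done, steps = 0, False, 0
        while not done and steps < 200:
            if rng.random() < epsilon:
                action = rng.randrange(2)
            else:
                action = int(np.argmax(q_table[state]))
            next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
            done = next_state == n_states - 1
            reward = 10.0 if done else -1.0
            q_table[state, action] += alpha * (
                reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
            )
            state = next_state
            steps += 1
    return q_table
```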
Genetic algorithms are more efficient than grid search when dealing with many hyperparameters, since grid search's cost grows exponentially with the number of parameters.
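To illustrate the idea, here is a minimal, self-contained genetic algorithm for searching over (alpha, gamma, epsilon): truncation selection, uniform crossover, and Gaussian mutation. The fitness function below is a toy stand-in for the real training metric, with an assumed optimum at alpha=0.1, gamma=0.9, epsilon=0.1; in practice fitness would be the (negated) metric returned by the training function.

```python
import random

def ga_search(fitness, bounds, pop_size=20, generations=30, mutation_std=0.05, seed=0):
    """Maximise `fitness` over hyperparameter vectors.

    bounds: list of (low, high) per hyperparameter.
    Keeps the fitter half each generation, then refills the population
    with crossed-over, mutated children.
    """
    rng = random.Random(seed)
    clip = lambda x, lo, hi: max(lo, min(hi, x))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [ai if rng.random() < 0.5 else bi
                     for ai, bi in zip(a, b)]                      # uniform crossover
            child = [clip(g + rng.gauss(0, mutation_std), lo, hi)  # Gaussian mutation
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: negative squared distance from an assumed optimum.
toy_fitness = lambda h: -((h[0] - 0.1) ** 2 + (h[1] - 0.9) ** 2 + (h[2] - 0.1) ** 2)
best = ga_search(toy_fitness, bounds=[(0, 1), (0, 1), (0, 1)])
```

The GA evaluates pop_size * generations candidates regardless of how many hyperparameters there are, whereas a grid with n values per axis costs n^k evaluations for k hyperparameters.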