
snake-q-learning's Introduction

snake-Q-Learning

Q-Learning with the classic snake game

python3 & pygame

  • run "python qlearning.py t" for training

  • run "python qlearning.py p" for "playing"

  • qlearning.py runs snake.py and emulates the keypresses

  • in training mode it runs snake_headless.py, which disables FPS ticks and graphical output for faster training

  • Q-Function used (the standard tabular Q-learning update): Q(s, a) ← Q(s, a) + α · [r + γ · max_a' Q(s', a') − Q(s, a)]

(diagram and demo animation)
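To make the training mode concrete, here is a hedged sketch of the kind of tabular training loop the "t" mode presumably runs. The real project steps snake_headless.py and encodes the snake's state itself; the 5-cell corridor environment below is an assumption that keeps the sketch self-contained.

```python
import random

# Assumed hyperparameters, matching the values discussed in the issue below.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.3
ACTIONS = [+1, -1]  # toy action set: move right / move left

def step(pos, action):
    """Toy environment (an assumption): reach cell 4 for reward 1, ending the episode."""
    nxt = max(0, min(4, pos + action))
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

def train(episodes=500):
    random.seed(0)
    q = {}  # maps (state, action) -> estimated return
    for _ in range(episodes):
        pos, done = 0, False
        while not done:
            if random.random() < EPSILON:    # explore
                act = random.choice(ACTIONS)
            else:                            # exploit current estimates
                act = max(ACTIONS, key=lambda a: q.get((pos, a), 0.0))
            nxt, reward, done = step(pos, act)
            # Standard Q-learning update: move the old estimate toward
            # reward + GAMMA * (best next-state value).
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((pos, act), 0.0)
            q[(pos, act)] = old + ALPHA * (reward + GAMMA * best_next - old)
            pos = nxt
    return q
```

After training, the value of moving right from the cell next to the goal approaches the immediate reward of 1, while earlier cells are discounted by GAMMA per remaining step.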

snake-q-learning's People

Contributors

kevinunger


snake-q-learning's Issues

A better documentation would be nice 😄

a: 0.1
e: 9.251777478947598e-05
g: 0.9

I don't know what these mean (it's just an example), but comments explaining them, and notes on how to improve the agent, are definitely missing. I know it's your learning project, but please do tell us what you learned, as I also want to do this Q-learning project as my first one and make the perfect snake 🥇 .

On a closer look, I think it's just:

a (α - Alpha): This is the learning rate, denoted by α. It determines how quickly the Q-values are updated based on new experiences. A higher value means that the agent will adjust its Q-values more rapidly in response to new information. A lower value makes the agent more resistant to changing its Q-values based on new experiences.

e (ε - Epsilon): This is the exploration factor, denoted by ε. It determines the likelihood that the agent will choose a random action instead of following its learned policy. Exploration is important to discover new actions and states, which helps the agent find better policies. A higher ε encourages more exploration, while a lower ε favors exploitation of the current knowledge.
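The tiny e printed above (~9.25e-05) suggests epsilon is decayed toward zero as training progresses. A hedged sketch of epsilon-greedy selection with multiplicative decay; the decay rate and floor here are assumptions, not values from the project:

```python
import random

def choose_action(q, state, actions, epsilon):
    """With probability epsilon pick a random action, else the greedy one."""
    if random.random() < epsilon:                               # explore
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))   # exploit

def decay_epsilon(epsilon, rate=0.995, floor=1e-5):
    """Shrink epsilon each episode, never below a small floor."""
    return max(epsilon * rate, floor)
```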

g (γ - Gamma): This is the discount factor, denoted by γ. It determines the agent's consideration of future rewards in the decision-making process. A higher value of γ makes the agent prioritize long-term rewards, while a lower value makes it focus more on immediate rewards. It is used to calculate the cumulative discounted future rewards when updating Q-values.
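To see how a and g enter a single update step, here is a hedged sketch of the standard tabular Q-learning rule using the issue's names (e governs action selection, not this update; the state/action names in the example are made up):

```python
a, g = 0.1, 0.9  # alpha (learning rate), gamma (discount factor)

def q_update(q, state, action, reward, next_state, actions):
    """Apply Q(s,a) += a * (r + g * max_a' Q(s',a') - Q(s,a)); return the new value."""
    best_next = max(q.get((next_state, x), 0.0) for x in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + a * (reward + g * best_next - old)
    return q[(state, action)]

# Worked example: starting from Q = 0, reward 1, empty next-state estimates:
# new Q = 0 + 0.1 * (1 + 0.9 * 0 - 0) = 0.1
```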

If this were written somewhere in README.md, it would be greatly helpful.
