
Colour versus Shape Goal Misgeneralization in Reinforcement Learning: A Case Study

This repository contains code for Colour versus Shape Goal Misgeneralization in Reinforcement Learning: A Case Study, which appeared at ATTRIB: Workshop on Attributing Model Behavior at Scale at NeurIPS 2023.
We took one of the goal misgeneralization examples (maze colour versus shape) from Di Langosco et al. (2022) and investigated how exactly it happens. We built on top of their code here and here. See the paper for full details of what was added; in summary:

  • Maze environment simplified and new goal objects added.
  • Code to run systematic evaluations of agents and measure capabilities and goal preferences.
  • Code to produce plots, videos, and gifs.

Requirements

To run the training code, you will need to install the requirements from both train-procgen and procgenAISC. To run all notebooks and utils, you will also need to run this command:

pip install -r requirements.txt

Training with the default settings requires 14 GB of GPU memory. See Tips and Tricks at the end for ways to reduce this.

Training

To train multiple agents (5 by default) to reach a yellow line on textured backgrounds, run this command:

. utils/train-many-with-backgrounds.sh 

Same, but with black backgrounds:

. utils/train-many.sh

The trained agents will be saved in train-procgen/logs/train/maze_pure_yellowline. Each agent folder contains a screenshot from a training level for sanity checks. Training in other settings, such as the white line or grey backgrounds, is explained in Tips and Tricks below. One agent takes about 40 minutes to train on consumer hardware.
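For reference, a script like utils/train-many.sh presumably just loops over random seeds and launches one training run per seed. Here is a minimal Python sketch of that pattern; the module path and the --rand_seed flag are illustrative assumptions, not the repository's actual interface:

```python
# Hypothetical sketch of launching several training runs with different
# random seeds, as utils/train-many.sh presumably does.
# The module path and flag names below are assumptions for illustration.
import subprocess


def build_train_commands(n_agents=5, env_name="maze_pure_yellowline"):
    """Return one training command per random seed."""
    return [
        ["python", "-m", "train_procgen.train",
         "--env_name", env_name,
         "--rand_seed", str(seed)]  # flag name is illustrative
        for seed in range(n_agents)
    ]


if __name__ == "__main__":
    for cmd in build_train_commands():
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment to actually launch
```

Varying only the seed while holding everything else fixed is what makes the later seed-dependence result in the paper meaningful.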

Evaluation

To evaluate the agents on the same set of 1,000 levels in all the two-object combinations from the paper, run:

. utils/run-maze-many-all-settings.sh 

The results will be saved in train-procgen/experiments/results-1000. Each agent folder contains a screenshot of the first level; the first agent folder contains screenshots of all 1,000 test levels. Evaluating 100 agents on each two-object combination takes between 1 and 10 hours, depending on difficulty (yellow lines are easy, invisible objects are hard).
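To illustrate how per-level outcomes turn into the two quantities the paper cares about, capability (does the agent reach any goal object?) and goal preference (which of the two objects does it reach?), here is a pure-Python sketch. The outcome encoding is an assumption for illustration, not the repository's actual results format:

```python
# Hypothetical sketch: summarising per-level evaluation outcomes into
# capability and goal preference. The outcome labels are assumptions,
# not the repository's actual results format.

def summarise(outcomes):
    """outcomes: one label per level, each of
    'yellow_gem', 'red_line', or 'timeout'."""
    reached = [o for o in outcomes if o != "timeout"]
    capability = len(reached) / len(outcomes)  # fraction of levels solved
    if reached:
        # among solved levels, fraction where the colour goal was chosen
        colour_pref = sum(o == "yellow_gem" for o in reached) / len(reached)
    else:
        colour_pref = float("nan")
    return capability, colour_pref


# e.g. 7 yellow gems, 2 red lines, 1 timeout out of 10 levels
cap, pref = summarise(["yellow_gem"] * 7 + ["red_line"] * 2 + ["timeout"])
print(cap, pref)
```

Separating capability from preference matters: an incapable agent reaching neither object tells you nothing about which goal it learned.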

Plots, videos, gifs, screenshots

After evaluation, you can produce the plots from the paper by running the notebooks.
To make videos of agents solving the tasks, run:

. utils/run-maze-many-videos.sh

Before running it, replace the --model_file paths in the .sh file above with paths to your own trained models.
The videos will be placed in videos. To turn them into easily shareable screenshots and gifs, run:

cd utils
python videos-to-pngs-and-gifs.py

These will be placed in video-frames-and-gifs. Note that by default the screenshots and gifs are 6 times larger than the original 64x64 video, because many apps introduce blurriness when resizing very small images. Adjust the 6x parameter to your needs.
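The 6x enlargement only stays crisp with nearest-neighbour resampling, which copies pixels instead of interpolating between them (Pillow's Image.NEAREST filter does this for real frames; the repository's script likely uses an image library for the same effect). The idea in pure Python, on a grid of pixel values:

```python
def upscale_nearest(pixels, factor=6):
    """Upscale a 2D grid of pixel values by an integer factor using
    nearest-neighbour sampling: each source pixel becomes a factor-by-factor
    block, so tiny 64x64 frames stay sharp instead of blurring."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in pixels
        for _ in range(factor)  # repeat each stretched row `factor` times
    ]


# A 2x2 "image" becomes 4x4 when upscaled by 2:
print(upscale_nearest([[1, 2], [3, 4]], factor=2))
```

Because every output pixel is an exact copy of a source pixel, no new intermediate colours are introduced, which is what keeps the gifs looking like the original game frames.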

Trained models and evaluation results

You can download all 1,000+ trained models and the results of over 10 million evaluations here.

Tips and Tricks

Below is an assorted list of tips and tricks that you can use to make the code do what you want.

  • Training to reach different objects: here.
  • Changing the background colour to grey: here.
  • Changing the training maze size: here.
  • Adding back randomness in maze size: here.
  • Reverting to the original maze generation algorithm: here.
  • Reducing the minibatch size to fit models on smaller GPUs: here.
  • Making screenshots of the human and agent views: here.

Results

If you train an agent to reach a yellow line, will it prefer a yellow gem or a red line?
[Image: maze train and test levels]
It depends on the random seed used for training! The plot below shows how training 100 agents that differ only in their random seed produces capable agents with different goal preferences.
[Image: maze preferences vs capabilities]
See the paper for other results and more details.

Citation

Please cite the paper using the BibTeX below:

@article{ramanauskas2023colour,
  title={Colour versus Shape Goal Misgeneralization in Reinforcement Learning: A Case Study},
  author={Ramanauskas, Karolis and {\c{S}}im{\c{s}}ek, {\"O}zg{\"u}r},
  journal={arXiv preprint arXiv:2312.03762},
  url={https://arxiv.org/abs/2312.03762},
  year={2023}
}
