
cic's Introduction

Contrastive Intrinsic Control (CIC)

This is the official codebase for CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery. If you find this codebase useful in your research, you can cite it as:

@misc{laskin2022cic,
Author = {Michael Laskin and Hao Liu and Xue Bin Peng and Denis Yarats and Aravind Rajeswaran and Pieter Abbeel},
Title = {CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery},
Year = {2022},
Eprint = {arXiv:2202.00161},
}

This codebase is built on top of the Unsupervised Reinforcement Learning Benchmark (URLB) codebase. We include agents for all baselines in the agents folder. Our method CIC is implemented in agents/cic.py and the config is specified in agents/cic.yaml.

To pre-train CIC, run the following command:

python pretrain.py agent=cic domain=walker experiment=YOUR_EXP_NAME

To finetune CIC, run the following command. Make sure to specify the directory of your saved snapshots with YOUR_EXP_NAME.

python finetune.py pretrained_agent=cic agent=cic experiment=YOUR_EXP_NAME task=walker_stand snapshot_ts=2000000
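
If you want to finetune the same pretrained snapshot on every task in a domain, a small driver script can loop over the command above. The sketch below is only an illustration (the task names follow the Available Domains table further down); it simply shells out to finetune.py.

# Hypothetical helper: finetune one pretrained CIC snapshot on each walker task.
import subprocess

TASKS = ["walker_stand", "walker_walk", "walker_run", "walker_flip"]

for task in TASKS:
    subprocess.run(
        [
            "python", "finetune.py",
            "pretrained_agent=cic", "agent=cic",
            "experiment=YOUR_EXP_NAME",
            f"task={task}",
            "snapshot_ts=2000000",
        ],
        check=True,  # stop early if any finetuning run fails
    )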

Requirements

We assume you have access to a GPU that can run CUDA 10.2 and CUDNN 8. Then, the simplest way to install all required dependencies is to create an anaconda environment by running

conda env create -f conda_env.yml

After the installation ends, you can activate your environment with

conda activate urlb

Available Domains

We support the following domains.

Domain    | Tasks
walker    | stand, walk, run, flip
quadruped | walk, run, stand, jump
jaco      | reach_top_left, reach_top_right, reach_bottom_left, reach_bottom_right

Monitoring

Logs are stored in the exp_local folder. To launch TensorBoard, run:

tensorboard --logdir exp_local

The console output is also available in the following form:

| train | F: 6000 | S: 3000 | E: 6 | L: 1000 | R: 5.5177 | FPS: 96.7586 | T: 0:00:42

A training entry decodes as:

F  : total number of environment frames
S  : total number of agent steps
E  : total number of episodes
L  : episode length
R  : episode return
FPS: training throughput (frames per second)
T  : total training time

cic's People

Contributors

mishalaskin


cic's Issues

Selecting the skill for fine-tuning

Hi,

The paper states that during finetuning the skill is found through a grid search. However, in the code a fixed skill is used; is that normal?
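
For context, the grid search described in the paper could look roughly like the following sketch. Everything here (env, agent, agent.act, skill_dim) is hypothetical and not this repo's API; the idea is just to evaluate a handful of candidate skill vectors with the extrinsic reward and keep the best one fixed for finetuning.

# Hypothetical sketch of grid-searching the skill vector at finetune time.
# `env`, `agent`, and `skill_dim` are stand-ins, not this repo's API.
import numpy as np

def evaluate_skill(env, agent, skill, num_episodes=1):
    """Average extrinsic return when the policy is conditioned on `skill`."""
    total = 0.0
    for _ in range(num_episodes):
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs, skill)           # policy conditioned on the candidate skill
            obs, reward, done, _ = env.step(action)
            total += reward
    return total / num_episodes

def grid_search_skill(env, agent, skill_dim, num_candidates=10, seed=0):
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(0.0, 1.0, size=(num_candidates, skill_dim))
    returns = [evaluate_skill(env, agent, z) for z in candidates]
    return candidates[int(np.argmax(returns))]       # kept fixed for the rest of finetuning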

Are the state and skill encoders learned with contrastive learning never used?

Hi, thank you very much for sharing the code for the paper. Integrating contrastive learning into skill discovery is very appealing.

However, I found that in this implementation, the state encoder and skill encoder in the cic module ($g_{\psi_1}$ and $g_{\psi_2}$ in the paper) are never applied to the observations and skills before they are fed into the policy network. In cic/agent/cic.py line 222, the cic module's parameters are updated once, but the module is not called to encode obs and skill thereafter.

Another question is how the agent can guarantee that the policy is "indeed conditioned on z", since the intrinsic reward has nothing to do with z. In other words, $\tau$ can be arbitrarily diverse, which is good for exploration, but there is no mechanism to ensure the agent knows what the influence of z is.
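
For context, the contrastive objective in the paper scores transition embeddings $g_{\psi_1}(\tau)$ against skill embeddings $g_{\psi_2}(z)$, with the other skills in a batch serving as negatives; this is the mechanism intended to tie $\tau$ to $z$. A minimal, hypothetical sketch of such a loss (not the repo's code; shapes and temperature are assumptions):

# Minimal sketch of a CPC-style contrastive loss between transition and skill embeddings.
# state_emb and skill_emb stand in for g_psi1(tau) and g_psi2(z); not the repo's code.
import torch
import torch.nn.functional as F

def contrastive_loss(state_emb, skill_emb, temperature=0.5):
    """state_emb: (B, D) embeddings of (s, s') pairs; skill_emb: (B, D) embeddings of z."""
    state_emb = F.normalize(state_emb, dim=-1)
    skill_emb = F.normalize(skill_emb, dim=-1)
    logits = state_emb @ skill_emb.T / temperature        # (B, B) similarity matrix
    labels = torch.arange(state_emb.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)                # matched (tau_i, z_i) pairs are positives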

I really like your work, but these issues confuse me a lot. Please correct me if I am wrong or have missed something. Thank you again for your kindness in sharing.

No gradient through the CNN encoder

In the CIC code the observations are encoded inside torch.no_grad, which means there is no gradient calculation for the CNN, so the CNN weights will not be updated during pre-training.

obs = self.aug_and_encode(obs)
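
For reference, a toy example (not the repo's code) of the behaviour described here: when features are computed inside torch.no_grad(), the encoder that produced them receives no gradient from any downstream loss.

# Toy illustration: encoding inside torch.no_grad() blocks gradients to the encoder.
import torch
import torch.nn as nn

encoder = nn.Linear(4, 2)                 # stand-in for the CNN encoder
critic = nn.Linear(2, 1)
obs = torch.randn(8, 4)

with torch.no_grad():                     # mirrors encoding observations under no_grad
    feats = encoder(obs)

loss = critic(feats).mean()
loss.backward()
print(encoder.weight.grad)                # None: the encoder gets no gradient
print(critic.weight.grad is not None)     # True: only the critic is updated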

APT reward

Why do we pass (next_obs, next_obs)? It should be (obs, next_obs), right? Because you are optimizing the entropy of $\tau = (s, s')$.

intr_reward = self.compute_apt_reward(next_obs,next_obs)
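
For context, a particle-based entropy reward in the spirit of APT estimates entropy from the distance to the k-th nearest neighbour within a batch of embeddings. The sketch below is hypothetical (not this repo's compute_apt_reward); it only illustrates why the choice of the two arguments matters, since they define which particles are scored against which.

# Rough sketch of a k-nearest-neighbour entropy reward (APT-style); not the repo's code.
import torch

def knn_entropy_reward(source, target, k=12):
    """source, target: (B, D) embeddings; returns a (B,) reward per source particle."""
    dists = torch.cdist(source, target)                # (B, B) pairwise distances
    knn_dists, _ = dists.topk(k + 1, largest=False)    # +1: self-distance is 0 when source == target
    return torch.log(1.0 + knn_dists[:, -1])           # reward grows with distance to the k-th neighbour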
