ExpeL: LLM Agents are Experiential Learners

⚡ [AAAI 2024 (Oral)] Official implementation of the ExpeL Agent ⚡

~ by Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, Gao Huang ~



๐ŸŒ $\cdot$ Project Page โ€‚ ๐Ÿ“„ $\cdot$ Paper

"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." - Tom Mitchell

👋 Introduction

This repo is the official implementation of ExpeL: LLM Agents are Experiential Learners.

Our agent autonomously gathers experiences and extracts knowledge using natural language from a collection of training tasks. At inference, the agent recalls its extracted insights and past experiences to make informed decisions. Our empirical results highlight the robust learning efficacy of the ExpeL agent, indicating a consistent enhancement in its performance as it accumulates experiences.

๐Ÿ› ๏ธ Installation

Python version: 3.9.17

  1. Create a virtual environment using Anaconda (or your favorite package manager), activate it, clone the repo and install the requirements.
conda create -n expel python=3.9.17
conda activate expel

git clone https://github.com/LeapLabTHU/ExpeL.git expel
cd expel

pip install -r requirements.txt

Next, you need to set up the environments.

🌳 Environments

Baby ExpeL has been playing around with the following environments: HotpotQA, FEVER, ALFWorld, and WebShop.

Among these, ALFWorld and WebShop require manual installation (plus a server, which can run locally, for WebShop). Details below:

๐Ÿ  ALFWorld

The installation instructions are shown below. Use the previously created environment to install ALFWorld. You will also need to download the data at the specified location: data/alfworld.

conda activate expel
pip install alfworld[full]

export ALFWORLD_DATA="data/alfworld"
alfworld-download
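If you later launch runs from a different directory or a new shell, the relative path above may not resolve. Below is a minimal sketch that pins ALFWORLD_DATA to an absolute path and persists it, assuming a bash shell and that the repo was cloned to ~/expel (adjust to your setup):

# Point ALFWORLD_DATA at an absolute path (hypothetical location, adjust to yours)
export ALFWORLD_DATA="$HOME/expel/data/alfworld"
# Optionally persist it for future shells
echo 'export ALFWORLD_DATA="$HOME/expel/data/alfworld"' >> ~/.bashrc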

If you need more details, please refer to the Official repo.

🛒 Webshop

Webshop installation differs from the other environments: you have to install it and run the server manually (it can be local) in parallel with ExpeL to interact with the environment. The succinct installation instructions are shown below.

git clone https://github.com/princeton-nlp/webshop.git webshop
cd webshop

# Create another env for the webshop server to avoid conflicts
conda create -n webshop python=3.8.13 
conda activate webshop

./setup.sh -d all

By default, WebShop only loads 1,000 products. But we need ALL OF THEM (🤯). So change web_agent_site/utils.py as follows (or apply the one-liner after the snippet):

# DEFAULT_ATTR_PATH = join(BASE_DIR, '../data/items_ins_v2_1000.json')
# DEFAULT_FILE_PATH = join(BASE_DIR, '../data/items_shuffle_1000.json')
DEFAULT_ATTR_PATH = join(BASE_DIR, '../data/items_ins_v2.json')
DEFAULT_FILE_PATH = join(BASE_DIR, '../data/items_shuffle.json')
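If you prefer not to edit the file by hand, the same change can be applied with a one-liner. This is just a sketch, assuming GNU sed and that you are inside the cloned webshop/ directory (on macOS, use sed -i ''):

# Swap the 1,000-product files for the full ones in web_agent_site/utils.py
sed -i 's/items_ins_v2_1000.json/items_ins_v2.json/; s/items_shuffle_1000.json/items_shuffle.json/' web_agent_site/utils.py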

To start the server, run the following command:

./run_dev.sh

You will be given a URL (and port) once the website is up:

  • Go back to the cloned ExpeL repo
  • Modify the config file and add the given URL at envs/webshop/webshop.py:
WEBSHOP_URL = "http://127.0.0.1:3000" # Example URL

Note that you will have to keep the WebShop server running in the background to interact with the environment. We gathered some bugs we encountered during the WebShop server setup in the Issues section below.
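One simple way to keep the server alive in the background and confirm it responds is sketched below, assuming the example port above (use whatever URL run_dev.sh actually prints; the log file name is hypothetical):

# Launch the server in the background and capture its output
nohup ./run_dev.sh > webshop_server.log 2>&1 &
# Quick sanity check that the site answers
curl -I http://127.0.0.1:3000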

If you need more details, please refer to the Official repo.

🚀 Quick start

Below are the commands to run the ExpeL Agent.

Either put your OpenAI API key in a .env file (OPENAI_API_KEY=XXX) or you will be prompted for it on the command line.
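A minimal sketch of creating that file from the shell (the key value is a placeholder):

# Write the key to a .env file at the repo root (assumed location for dotenv files)
echo "OPENAI_API_KEY=XXX" > .env
# Make sure the key never gets committed (the repo may already ignore .env)
grep -qx ".env" .gitignore || echo ".env" >> .gitignore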

1. For the Experience Gathering stage:

python train.py benchmark=<benchmark-name> \
  run_name=<train-run-name> \
  testing=false \
  resume=false

# resume = true/false if you want to resume a previous run
# benchmark = {hotpotqa, alfworld, webshop, fever}
# agent.llm = {gpt-3.5-turbo (default), gpt-4}

Below are the commands to run the experience gathering stage as in the paper:

# ๐Ÿ  ALFWorld
python train.py benchmark=alfworld run_name=<train-run-name> testing=false resume=false
# 🛒 Webshop
python train.py benchmark=webshop run_name=<train-run-name> testing=false resume=false
# โ“ HotpotQA
python train.py benchmark=hotpotqa run_name=<train-run-name> testing=false resume=false

By default, the result files (logs, dictionaries) will be saved in logs/<benchmark-name>/expel, referenced by <train-run-name>. You can change the log directory by adding log_dir=<log-dir> to the command line, as in the example below.
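For instance, a sketch that redirects HotpotQA training results to a custom directory (the directory name is illustrative):

# Same training command, but writing results under my_logs/ instead of logs/
python train.py benchmark=hotpotqa run_name=<train-run-name> log_dir=my_logs testing=false resume=false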

2. For the Insights Extraction stage:

Use the collected experiences to extract insights.

python insight_extraction.py \
  benchmark=<benchmark-name> \
  load_run_name=<train-run-name> \
  run_name=<insights-extraction-run-name> \
  agent.llm=<model> \
  agent.max_num_rules=<insights-num> \
  agent.success_critique_num=<exp-num> \
  testing=true \
  resume=false

# agent.success_critique_num = number of experiences to give per iteration
# agent.max_num_rules = target number of insights to extract

To resume a run that stopped at a specific fold, remove load_run_name from the parameters, specify the fold it stopped at with resume_fold, and set resume=true (see the sketch below).
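For example, here is a hypothetical resume of an ALFWorld insights run that was interrupted at fold 1, with all other flags kept as in the original command:

python insight_extraction.py benchmark=alfworld run_name=<insights-extraction-run-name> agent.llm=gpt-4 agent.max_num_rules=10 agent.success_critique_num=8 resume_fold=1 testing=false resume=true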

Below are the commands to run the insights extraction stage as in the paper:

# ๐Ÿ  ALFWorld
python insight_extraction.py benchmark=alfworld load_run_name=<train-run-name> run_name=<insights-extraction-run-name> agent.llm=gpt-4 agent.max_num_rules=10 agent.success_critique_num=8 testing=false resume=false
# 🛒 Webshop
python insight_extraction.py benchmark=webshop load_run_name=<train-run-name> run_name=<insights-extraction-run-name> agent.llm=gpt-4 agent.max_num_rules=8 agent.success_critique_num=4 testing=false resume=false
# โ“ HotpotQA
python insight_extraction.py benchmark=hotpotqa load_run_name=<train-run-name> run_name=<insights-extraction-run-name> agent.llm=gpt-4 agent.max_num_rules=10 agent.success_critique_num=8 testing=false resume=false

The final result files will be saved in logs/<benchmark-name>/expel/extracted_insights referenced by <insights-extraction-run-name>.

3. For Evaluation:

python eval.py benchmark=<benchmark-name> \
  load_run_name=extracted_insights/<insights-extraction-run-name> \
  run_name=<eval-run-name> \
  benchmark.eval_configs.k_folds=<fold-num> \
  agent.fewshot_strategy=task_similarity \
  agent.retrieval_kwargs.max_fewshot_tokens=<max-retrieval-token-size> \
  agent.retrieval_kwargs.buffer_retrieve_ratio=<retrieve_multiplier-coefficient> \
  testing=false \
  resume=false

# agent.fewshot_strategy = {task_similarity, thought_similarity, task_thought_similarity}
# agent.llm = {gpt-3.5-turbo (default), gpt-4}
# agent.retrieval_kwargs.max_fewshot_tokens=auto
# benchmark.eval_configs.k_folds=2 
# agent.retrieval_kwargs.buffer_retrieve_ratio = safety measure to not retrieve 0 examples (bigger is safer)

To resume a run that stopped, remove load_run_name from the parameters and add resume=true to the command line (see the sketch below).
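For example, a hypothetical resume of an interrupted ALFWorld evaluation, with the other flags kept as in the original command:

python eval.py benchmark=alfworld run_name=<eval-run-name> agent.fewshot_strategy=task_similarity agent.retrieval_kwargs.max_fewshot_tokens=auto testing=false resume=true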

Below are the commands to evaluate ExpeL as in the paper:

# ๐Ÿ  ALFWorld
python eval.py benchmark=alfworld load_run_name=extracted_insights/<insights-extraction-run-name> run_name=<eval-run-name> agent.fewshot_strategy=task_similarity agent.retrieval_kwargs.max_fewshot_tokens=auto testing=false resume=false
# 🛒 Webshop
python eval.py benchmark=webshop load_run_name=extracted_insights/<insights-extraction-run-name> run_name=<eval-run-name> agent.fewshot_strategy=task_similarity agent.retrieval_kwargs.max_fewshot_tokens=auto agent.retrieval_kwargs.buffer_retrieve_ratio=20 testing=false resume=false
# โ“ HotpotQA
python eval.py benchmark=hotpotqa load_run_name=extracted_insights/<insights-extraction-run-name> run_name=<eval-run-name> agent.fewshot_strategy=task_similarity testing=false resume=false

The result files will be saved in logs/<benchmark-name>/expel/eval referenced by <eval-run-name>.

🫡 Cite us!

This repository contains code for reproducing results. If you find this work useful in your research (and/or daily life), please cite:

@misc{zhao2023expel,
      title={ExpeL: LLM Agents Are Experiential Learners}, 
      author={Andrew Zhao and Daniel Huang and Quentin Xu and Matthieu Lin and Yong-Jin Liu and Gao Huang},
      year={2023},
      eprint={2308.10144},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

💌 Contact us!

If you have any questions, feel free to contact Andrew Zhao, Daniel Huang or Quentin Xu.

๐Ÿ›๏ธ License

Check LICENSE.md

โš ๏ธ Issues

We encountered some errors during setup and gathered them here (note that by the time you read this, they might have been fixed). If you don't encounter them, lucky you 😒.

🛒 Webshop-server installation:

# Extra installs / fixes needed during the WebShop server setup
python -m spacy download en_core_web_lg
pip install lightgbm nmslib                    # needs a C compiler and build tools
conda install mkl                              # if ImportError: libmkl_intel_lp64.so.1
pip install pyserini
pip install pyserini --no-cache-dir            # use this variant if low on RAM
pip install typing-inspect==0.8.0 typing_extensions==4.5.0   # if you hit issubclass errors
# if you get a libjvm.so error, export JAVA_HOME to point at your JDK
./setup.sh -d all

On macOS, if you have problems with lightgbm or nmslib, you might need to replace their pip installs with:

brew install cmake libomp 
pip install lightgbm

CFLAGS="-mavx -DWARN(a)=(a)" pip install --use-pep517 nmslib
