

KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models

Zhuohao Yu1  Chang Gao1  Wenjin Yao1  Yidong Wang1
Wei Ye†1  Jindong Wang2  Xing Xie2  Yue Zhang3  Shikun Zhang1

1 Peking University, 2 Microsoft Research, 3 Westlake University.

Overview

This is the official repository for KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models, accepted to the main conference of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).

Automatic evaluation methods for large language models (LLMs) are hindered by data contamination, leading to inflated assessments of their effectiveness. Existing strategies, which aim to detect contaminated texts, focus on quantifying contamination status rather than accurately gauging model performance. In this paper, we introduce KIEval, a Knowledge-grounded Interactive Evaluation framework that, for the first time, incorporates an LLM-powered "interactor" role to achieve dynamic, contamination-resilient evaluation. Starting with a question from a conventional LLM benchmark involving domain-specific knowledge, KIEval uses dynamically generated, multi-round, knowledge-focused dialogues to determine whether a model's response is merely a recall of benchmark answers or demonstrates the deep comprehension needed to apply knowledge in more complex conversations. Extensive experiments on seven leading LLMs across five datasets validate KIEval's effectiveness and generalization. We also reveal that data contamination contributes nothing, or is even detrimental, to a model's real-world applicability and understanding, and that existing contamination-detection methods for LLMs can identify contamination only in pre-training, not during supervised fine-tuning.

Quick Start

To get started, first clone the repository and set up the environment:

git clone https://github.com/zhuohaoyu/KIEval.git
cd KIEval
pip install -r requirements.txt

We provide a modular implementation of our method. Currently, we support evaluating models locally with Hugging Face Transformers, as well as remote models served via text-generation-inference or other APIs.
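For the remote backend, the server speaks the standard text-generation-inference REST API. As a rough sketch (the base URL and prompt below are placeholders, and the actual client used by KIEval lives in this repository's inference modules), a request to the `/generate` route can be built and sent like this:

```python
import json
from urllib import request


def build_generate_payload(prompt: str, max_new_tokens: int = 128) -> dict:
    """Assemble the JSON body for text-generation-inference's /generate route."""
    return {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}


def tgi_generate(base_url: str, prompt: str, max_new_tokens: int = 128) -> str:
    """POST a prompt to a running text-generation-inference server."""
    body = json.dumps(build_generate_payload(prompt, max_new_tokens)).encode("utf-8")
    req = request.Request(
        f"{base_url.rstrip('/')}/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["generated_text"]
```

For example, `tgi_generate("http://localhost:8080", "Hello")` would query the instance started below.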

To reproduce the results in our paper, or to evaluate new models with KIEval, we recommend starting a text-generation-inference instance with your model:

model=meta-llama/Llama-2-7b-chat-hf
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.4 --model-id $model
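The container can take several minutes to download and load weights. Before launching an evaluation you can poll the server's `/health` route until it answers; this is an optional convenience sketch (the port matches the docker command above), not part of the KIEval pipeline itself:

```python
import time
from urllib import error, request


def health_url(base_url: str) -> str:
    """Build the text-generation-inference health-check URL."""
    return f"{base_url.rstrip('/')}/health"


def wait_until_ready(base_url: str, timeout: float = 600.0, interval: float = 5.0) -> bool:
    """Poll /health until the server answers 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with request.urlopen(health_url(base_url)) as resp:
                if resp.status == 200:
                    return True
        except (error.URLError, ConnectionError):
            pass  # server not up yet; retry after a short sleep
        time.sleep(interval)
    return False
```

For example, `wait_until_ready("http://localhost:8080")` blocks until the instance above is serving.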

Then, generate an evaluation config file with our script:

# --template:        a template config file we provide
# --dataset:         dataset name; see datasets/ for all supported datasets
# --base_url:        your host URL; if text-generation-inference runs locally, use http://localhost:8080
# --model_name:      any name you like
# --model_path:      Hugging Face model ID or local model path
# --openai_api_base: OpenAI API base URL; replace with a proxy URL if needed
# --openai_key:      your OpenAI API key
# --output_path:     output path for evaluation results
# --generate_path:   output path for the generated config file
python scripts/generate-basic.py \
    --template ./config/template-basic.json \
    --dataset arc_challenge \
    --base_url http://your-host-url:8080 \
    --model_name llama-2-7b-chat-hf \
    --model_path meta-llama/Llama-2-7b-chat-hf \
    --openai_api_base https://api.openai.com/v1/ \
    --openai_key your_openai_key \
    --openai_model gpt-4-1106-preview \
    --output_path ./result \
    --generate_path ./config/generated.json
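To sweep several benchmarks, the generator script can simply be invoked once per dataset. A small driver sketch (dataset names beyond arc_challenge are illustrative; check datasets/ for the supported list, and pass the remaining flags exactly as in the command above):

```python
import subprocess

DATASETS = ["arc_challenge"]  # extend with other names from datasets/


def generate_cmd(dataset: str) -> list:
    """Build the argv for scripts/generate-basic.py for one dataset.

    Only a subset of the flags is shown; append --base_url, --model_name,
    --model_path, the OpenAI flags, and --output_path as in the command above.
    """
    return [
        "python", "scripts/generate-basic.py",
        "--template", "./config/template-basic.json",
        "--dataset", dataset,
        "--generate_path", f"./config/generated-{dataset}.json",
    ]


def generate_all(datasets):
    """Run the config generator for each dataset in turn."""
    for ds in datasets:
        subprocess.run(generate_cmd(ds), check=True)

# usage: generate_all(DATASETS)
```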

Finally, run the evaluation process with the generated config file and wait for the results :)

python run.py -c ./config/generated.json

This repository provides all the settings necessary for researchers to reproduce the results of KIEval. It also facilitates the reproduction of all metrics (from previous works) discussed in our paper; please refer to config/templates for all supported evaluation methods.

Citation

✨ If you find our work helpful, please consider citing with:

@article{yu2024kieval,
  title={KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models}, 
  author={Zhuohao Yu and Chang Gao and Wenjin Yao and Yidong Wang and Wei Ye and Jindong Wang and Xing Xie and Yue Zhang and Shikun Zhang},
  journal={ArXiv},
  year={2024},
  volume={abs/2402.15043},
}

