simple-evals

Overview

This repository contains a lightweight library for evaluating language models. We are open sourcing it so we can be transparent about the accuracy numbers we're publishing alongside our latest models (starting with gpt-4-turbo-2024-04-09 and gpt-4o).

Evals are sensitive to prompting, and there's significant variation in the formulations used in recent publications and libraries. Some use few-shot prompts or role-playing prompts ("You are an expert software programmer..."). These approaches are carryovers from evaluating base models (rather than instruction/chat-tuned models) and from models that were worse at following instructions.

For this library, we are emphasizing the zero-shot, chain-of-thought setting, with simple instructions like "Solve the following multiple choice problem". We believe that this prompting technique is a better reflection of the models' performance in realistic usage.
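As a concrete illustration, the sketch below shows what a zero-shot, chain-of-thought prompt in this spirit might look like in Python. The template wording and the helper function are ours for illustration; the exact templates live in the library's eval implementations.

# Illustrative zero-shot chain-of-thought template (not the library's exact
# wording): one plain instruction, the question, and the answer choices.
QUERY_TEMPLATE = """
Solve the following multiple choice problem. Think step by step, then
write your final answer on its own line in the form "Answer: X".

{question}

A) {A}
B) {B}
C) {C}
D) {D}
""".strip()

def build_prompt(question: str, choices: dict[str, str]) -> str:
    # No few-shot examples and no role-playing system message: the model
    # sees only the bare instruction and the problem itself.
    return QUERY_TEMPLATE.format(question=question, **choices)

print(build_prompt("What is 2 + 2?", {"A": "3", "B": "4", "C": "5", "D": "6"}))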

We will not be actively maintaining this repository or monitoring PRs and Issues. In particular, we're not accepting new evals. Here are the changes we might accept:

  • Bug fixes (hopefully not needed!)
  • Adding adapters for new models
  • Adding new rows to the table below with eval results, given new models and new system prompts.

This repository is NOT intended as a replacement for https://github.com/openai/evals, which is designed to be a comprehensive collection of a large number of evals.

Evals

This repository currently contains the following evals:

  • MMLU: Measuring Massive Multitask Language Understanding
  • GPQA: A Graduate-Level Google-Proof Q&A Benchmark
  • MATH: Measuring Mathematical Problem Solving
  • HumanEval: Evaluating Large Language Models Trained on Code
  • MGSM: Multilingual Grade School Math
  • DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs

Samplers

We have implemented sampling interfaces for the following language model APIs:

  • OpenAI API
  • Anthropic Claude API

Make sure to set the *_API_KEY environment variables before using these APIs.
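For example, here is a minimal fail-fast check; this is a sketch of ours rather than code from this repo. OPENAI_API_KEY and ANTHROPIC_API_KEY are the variables the official openai and anthropic Python clients read by default.

import os

def require_env(name: str) -> str:
    # Raise a clear error if the expected API key is missing.
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set {name} before using the corresponding sampler.")
    return value

openai_key = require_env("OPENAI_API_KEY")        # for the OpenAI sampler
anthropic_key = require_env("ANTHROPIC_API_KEY")  # for the Anthropic sampler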

Setup

Due to the optional dependencies, we're not providing a unified setup mechanism. Instead, we're providing instructions for each eval and sampler.

For HumanEval (python programming):

git clone https://github.com/openai/human-eval
pip install -e human-eval

For the OpenAI API:

pip install openai

For the Anthropic API:

pip install anthropic

Demo

python -m simple-evals.demo

This will launch evaluations through the OpenAI API.
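For orientation, a demo run of this kind boils down to sampling from a chat model and grading the responses. The sketch below shows a minimal chat-completion sampler using the openai package from the setup step; the function name and model choice are illustrative, not the library's actual entry points.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sample(messages: list[dict]) -> str:
    # Minimal chat-completion call of the kind an eval loop would repeat
    # once per question before grading the responses.
    response = client.chat.completions.create(
        model="gpt-4-turbo-2024-04-09",
        messages=messages,
    )
    return response.choices[0].message.content

print(sample([{"role": "user",
               "content": "Solve the following problem, thinking step by step: what is 12 * 13?"}]))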

Benchmark Results

Model                                      Prompt         MMLU   GPQA   MATH   HumanEval   MGSM   DROP (F1, 3-shot)

OPENAI GPT4s
gpt-4o                                     chatgpt [1]    88.7   53.6   76.6   90.2        90.5   83.4
gpt-4o                                     assistant [2]  87.2   49.9   76.6   91.0        89.9   83.7
gpt-4-turbo-2024-04-09                     chatgpt        86.5   49.1   72.2   87.6        88.6   85.4
gpt-4-turbo-2024-04-09                     assistant      86.7   49.3   73.4   88.2        89.6   86.0
gpt-4-1106(-vision)-preview                chatgpt        84.6   42.1   64.1   82.2        86.5   81.3
gpt-4-1106(-vision)-preview                assistant      84.7   42.5   64.3   83.7        87.1   83.2
gpt-4-0125-preview                         chatgpt        84.8   39.7   64.2   88.2        83.7   83.4
gpt-4-0125-preview                         assistant      85.4   41.4   64.5   86.6        85.1   81.5

REFERENCE-RERUN
Claude-3-Opus (rerun w/ api)               empty [3]      84.1   49.7   63.2   84.8        89.7   79.0
Claude-3-Opus (rerun w/ api)               lmsys [4]      84.2   50.7   63.8   82.9        89.2   77.1
Llama3 70b (rerun w/ api)                  empty          80.2   41.3   52.8   70.1        82.6   81.4

REFERENCE-REPORT (5-shot)
Claude-3-Opus (report [5])                 unknown        86.8   50.4   60.1   84.9        90.7   83.1
Gemini-Ultra-1.0 (report [6])              unknown        83.7   n/a    53.2   74.4        79.0   82.4
Gemini-Pro-1.5 (report [6])                unknown        81.9   n/a    58.5   71.9        88.7   78.9
Llama3 8b (report [7])                     unknown        68.4   34.2   30.0   62.2        n/a    58.4
Llama3 70b (report [7])                    unknown        82.0   39.5   50.4   81.7        n/a    79.7
Llama3 400b (still training, report [7])   unknown        86.1   48.0   57.8   84.1        n/a    83.5

Legal Stuff

By contributing to evals, you are agreeing to make your evaluation logic and data available under the same MIT license as this repository. You must have adequate rights to upload any data used in an eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI evals will be subject to our usual Usage Policies: https://platform.openai.com/docs/usage-policies.

Footnotes

  1. chatgpt system message: "You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture.\nKnowledge cutoff: 2023-12\nCurrent date: 2024-04-01"

  2. assistant system message from the OpenAI API docs: "You are a helpful assistant."

  3. claude-3 empty system message: suggested by the Anthropic API docs. We have done only limited experiments due to rate-limit issues, but we welcome PRs with alternative choices.

  4. claude-3 lmsys system message: the system message from the LMSYS FastChat open-source code: "The assistant is Claude, created by Anthropic. The current date is {{currentDateTime}}. Claude's knowledge base was last updated ...". We have done only limited experiments due to rate-limit issues, but we welcome PRs with alternative choices.

  5. claude-3 reports: https://www.anthropic.com/news/claude-3-family.

  6. gemini-1.5 reports: https://goo.gle/GeminiV1-5. We don't have rerun results due to rate-limit issues, and the pay-as-you-go version was still "coming at May 14" at the time of this study (05/11).

  7. Llama 3 tech report: https://ai.meta.com/blog/meta-llama-3/. Note that Llama3 400b is still training; these numbers are based on the best of the reported pretrain/instruct Llama 400b numbers.

Issues

Has anyone run this code and cached the granular data?

I'm a student working on a final project and wanted to use the granular data here (e.g., not "GPT-4o hits 88.7% on MMLU" but rather "what did it answer for this multiple choice question?"). I don't really have API credits and can't afford them, and since this has surely been run before, re-running it would be wasteful anyway. I was looking for someone who has used this repo before for a push in the right direction (or, even better, someone with a CSV of the actual inputs/responses).

Thank you! This will go a long way to help :)

Publishing Prompts for Multimodal Evals?

Hi,

Thanks so much for publishing eval details like these--it's really appreciated!

I was wondering whether there are any plans to publish the eval settings and prompts for the GPT-4o multimodal evals that were reported:

[image: the reported GPT-4o multimodal eval results]

Having these as references would be extremely helpful to the community!

Timeout if no answer

If the model takes more than x seconds without responding, the eval should record that the question could not be answered.
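A minimal sketch of how such a timeout guard could be implemented (illustrative, not code from this repo; the timeout value and the sentinel answer are placeholders):

from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

TIMEOUT_SECONDS = 60  # the "x seconds" from the request above

def sample_with_timeout(sampler, prompt: str) -> str:
    # Run the request in a worker thread and substitute a sentinel answer
    # if it takes too long. Note the hung worker cannot be killed and may
    # keep running in the background.
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(sampler, prompt)
    try:
        return future.result(timeout=TIMEOUT_SECONDS)
    except FuturesTimeout:
        return "ANSWER_NOT_AVAILABLE"  # graded as an unanswered question
    finally:
        pool.shutdown(wait=False)  # don't block on a hung worker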

Demo does not run - azure credentials

Hello! I saw the note about not really monitoring issues or actively maintaining this, but potentially accepting PRs that add other samplers. I'm interested in running the demo to reproduce the results and potentially adding Cohere's models as samplers.

However, I'm running into issues with Blobfile that I think will prevent the demo from running for anyone external to OpenAI as well. I've set up Azure credentials and authenticated to my account, but I still get permissions errors like:

Could not find any credentials that grant access to storage account: 'openaipublic' and container: 'simple-evals'

I also notice some of the referenced blobs (i.e., those used by MathEval) are in az://oaijoschu

Can these be moved somewhere publicly accessible for reproduction? I noticed a similar thread in your other repos here: https://github.com/openai/automated-interpretability/pull/15/files
