
# GPT-4-ENEM

> **Note:** Most of the code in this repository has been adapted from the Language Model Evaluation Harness.

## Introduction

This repository contains the code and data used in the papers listed under Citation below.

The most recent study presents a comprehensive framework for evaluating language models on entrance exams, incorporating both textual and visual elements. We evaluate the two most recent editions of the Exame Nacional do Ensino Médio (ENEM), the main standardized entrance examination adopted by Brazilian universities.

Our study not only reaffirms GPT-4's capabilities as the state of the art for handling complex multidisciplinary questions, but also pioneers a realistic assessment of multimodal language models on Portuguese-language examinations.

One of the highlights is that text captions transcribing visual content outperform the direct use of images, suggesting that the vision model has room for improvement. Yet, despite improvements afforded by images or captions, mathematical questions remain a challenge for these state-of-the-art models.

Significant improvements are noticeable when incorporating either textual or visual representations of images, with the difference nearing 10 points, particularly when utilizing captions.

| Area | ENEM 2022 without images | ENEM 2022 with images | ENEM 2022 with captions | ENEM 2023 without images | ENEM 2023 with images | ENEM 2023 with captions |
|------|--------------------------|-----------------------|-------------------------|--------------------------|-----------------------|-------------------------|
| Languages and Codes | 73.33 | 82.22 | 84.44 | 84.44 | 86.67 | 91.11 |
| Human Sciences | 88.89 | 95.56 | 95.56 | 95.56 | 100.00 | 100.00 |
| Natural Sciences | 73.33 | 77.78 | 82.22 | 86.67 | 91.11 | 93.33 |
| Mathematics | 54.55 | 61.36 | 61.36 | 54.55 | 65.91 | 75.00 |
| **Total** | 72.63 | 79.33 | 81.01 | 80.45 | 86.03 | 89.94 |

Results of GPT-4V on ENEM 2022 and ENEM 2023, using 3-shot with Chain-of-Thought (CoT) prompts.
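The roughly 10-point improvement cited above can be checked directly from the Total row of the table; the snippet below computes the gain of each input variant over the text-only (blind) baseline:

```python
# Overall accuracy (Total row) from the table above, per edition and input type.
totals = {
    "ENEM 2022": {"without images": 72.63, "with images": 79.33, "with captions": 81.01},
    "ENEM 2023": {"without images": 80.45, "with images": 86.03, "with captions": 89.94},
}

# Gain over the text-only (blind) baseline when adding images or captions.
for edition, scores in totals.items():
    baseline = scores["without images"]
    for variant in ("with images", "with captions"):
        gain = scores[variant] - baseline
        print(f"{edition}, {variant}: +{gain:.2f} points")
```

On ENEM 2023, captions yield a gain of about 9.5 points over the blind setting, the "difference nearing 10 points" noted above.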

The best-performing model was GPT-4, which achieved an accuracy of 90.5% on ENEM 2023 using captions, largely surpassing GPT-3.5 by 17 points.

## Data

We make the ENEM 2022 and ENEM 2023 datasets available. These datasets encompass all multiple-choice questions from the last two editions of the exam and were created to allow the evaluation of both text-only and multimodal (text-visual) language models. To support text-only models, we incorporated into the datasets the textual descriptions of the images that appear in the question statements, taken from the orange ENEM exam booklet, a special booklet that offers accessibility to people with visual impairments.

The datasets can also be accessed via the πŸ€— Datasets library: https://huggingface.co/datasets/maritaca-ai/enem

The deprecated ENEM 2022 dataset can be found here.

> **Warning:** We do not recommend using the deprecated dataset, since it does not include the image placeholders, image paths, and textual descriptions. In addition, its tables are not well formatted.
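To illustrate how image placeholders and textual descriptions make a dataset usable by text-only models, the sketch below substitutes each placeholder in a question statement with its caption. The record layout and field names (`statement`, `images`, the `[[IMAGE_0]]` marker) are hypothetical, not the dataset's actual schema:

```python
# Hypothetical question record: an image placeholder in the statement plus
# a caption transcribed from the accessible (orange) exam booklet.
question = {
    "statement": "Observe a figura: [[IMAGE_0]] Com base na figura, responda...",
    "images": {"[[IMAGE_0]]": "Gráfico de barras comparando o consumo de energia..."},
}

def to_text_only(statement: str, images: dict) -> str:
    """Replace each image placeholder with its textual description."""
    for placeholder, caption in images.items():
        statement = statement.replace(placeholder, f"(Descrição da imagem: {caption})")
    return statement

print(to_text_only(question["statement"], question["images"]))
```

A multimodal model would instead receive the image file referenced by the placeholder's path.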

## Tasks

We have implemented a set of 16 tasks, described below:

| Task | ENEM edition | Experiment | CoT | Uses all questions |
|------|--------------|------------|-----|--------------------|
| `enem_2022_blind` | ENEM 2022 | without images | No | ✔️ |
| `enem_cot_2022_blind` | ENEM 2022 | without images | Yes | ✔️ |
| `enem_2022_images` | ENEM 2022 | with images | No | ✔️ |
| `enem_cot_2022_images` | ENEM 2022 | with images | Yes | ✔️ |
| `enem_2022_captions` | ENEM 2022 | with captions | No | ✔️ |
| `enem_cot_2022_captions` | ENEM 2022 | with captions | Yes | ✔️ |
| `enem_2023_blind` | ENEM 2023 | without images | No | ✔️ |
| `enem_cot_2023_blind` | ENEM 2023 | without images | Yes | ✔️ |
| `enem_2023_images` | ENEM 2023 | with images | No | ✔️ |
| `enem_cot_2023_images` | ENEM 2023 | with images | Yes | ✔️ |
| `enem_2023_captions` | ENEM 2023 | with captions | No | ✔️ |
| `enem_cot_2023_captions` | ENEM 2023 | with captions | Yes | ✔️ |
| `enem` | ENEM Challenge (2009-2017) | - | No | ❌ |
| `enem_cot` | ENEM Challenge (2009-2017) | - | Yes | ❌ |
| `enem_2022_deprecated` | ENEM 2022 | - | No | ❌ |
| `enem_cot_2022_deprecated` | ENEM 2022 | - | Yes | ❌ |

## Reproducing the results

To reproduce the experiments described in the paper, please follow the steps below:

1. Clone the repository:

```bash
git clone https://github.com/piresramon/gpt-4-enem.git
```

2. Install the required packages:

```bash
pip install -e .
```

3. Set the OPENAI API key:

Visit OpenAI to retrieve your API key and set it in the environment variable:

```bash
export OPENAI_API_SECRET_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

4. Run the experiments:

To reproduce the results in Table 1, run the following commands:

```bash
# running 3-shot with CoT for GPT-4V on ENEM 2022
python main.py \
    --model chatgpt \
    --model_args engine=gpt-4-vision-preview \
    --tasks enem_cot_2022_blind,enem_cot_2022_images,enem_cot_2022_captions \
    --description_dict_path description.json \
    --num_fewshot 3 \
    --conversation_template chatgpt

# running 3-shot with CoT for GPT-4V on ENEM 2023
python main.py \
    --model chatgpt \
    --model_args engine=gpt-4-vision-preview \
    --tasks enem_cot_2023_blind,enem_cot_2023_images,enem_cot_2023_captions \
    --description_dict_path description.json \
    --num_fewshot 3 \
    --conversation_template chatgpt
```

To experiment with another OpenAI model, simply change the engine. The tasks `enem_cot_2022_images` and `enem_cot_2023_images` are not supported by text-only models.

It is possible to use a different number of few-shot examples (maximum 3).

> **Tip:** You can experiment with any other model available in the 🤗 Transformers library: just change the `model` and `model_args` parameters, and omit the `conversation_template` parameter.

## Citation

If you use this code or data in your research, please cite the following papers:

```bibtex
@misc{pires2023evaluating,
      title={Evaluating GPT-4's Vision Capabilities on Brazilian University Admission Exams},
      author={Ramon Pires and Thales Sales Almeida and Hugo Abonizio and Rodrigo Nogueira},
      year={2023},
      eprint={2311.14169},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@misc{nunes2023evaluating,
      title={Evaluating GPT-3.5 and GPT-4 Models on Brazilian University Admission Exams},
      author={Desnes Nunes and Ricardo Primi and Ramon Pires and Roberto Lotufo and Rodrigo Nogueira},
      year={2023},
      eprint={2303.17003},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
