This is a framework for evaluating autoregressive code generation language models. It is a work in progress, part of the BigCode project, and is inspired by EleutherAI/lm-evaluation-harness for evaluating language models in general. We welcome contributions to fix issues, enhance features and add new benchmarks. You can find a contribution guide in docs/guide.md and CONTRIBUTING.md, and more documentation in docs/README.md.
Below are the features and tasks of this framework:
- Any autoregressive model available on the Hugging Face Hub can be used, but we recommend using code generation models trained specifically on code, such as CodeParrot, InCoder and CodeGen.
- 3 code generation Python tasks (with unit tests): HumanEval, APPS and MBPP.
- CoNaLa for Python code generation (2-shot setting and evaluation with BLEU score)
- Concode for Java code generation (2-shot setting and evaluation with BLEU score)
- Code to text task from CodeXGLUE (zero-shot & fine-tuning) for 6 languages: Python, Go, Ruby, Java, JavaScript and PHP.
- 3 multilingual downstream classification tasks: Java Complexity prediction, Java code equivalence prediction, C code defect prediction.
```bash
git clone https://github.com/bigcode-project/bigcode-evaluation-harness.git
cd bigcode-evaluation-harness
```
Install torch based on your device type, and the other packages using:
```bash
pip install -r requirements.txt
```
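For example, assuming a CUDA 11.8 machine, a matching torch build could be installed from the official PyTorch wheel index before installing the requirements (this is only an illustration; pick the index URL that matches your device):

```bash
# illustrative only: choose the wheel index that matches your device (see pytorch.org)
pip install torch --index-url https://download.pytorch.org/whl/cu118
```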
Also make sure you have git-lfs installed and are logged in to the Hub:
```bash
huggingface-cli login
```
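If git-lfs is not installed yet, one way to set it up, assuming a Debian/Ubuntu system (use your platform's package manager otherwise), is:

```bash
sudo apt-get install git-lfs   # platform-specific; see the git-lfs docs for other systems
git lfs install                # enable git-lfs hooks for your user
```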
We use `accelerate` to generate code/text in parallel when multiple GPUs are present (multi-GPU mode). You can configure it using:
```bash
accelerate config
```
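After configuring, you can print the detected environment and the saved accelerate configuration to double-check the multi-GPU setup:

```bash
accelerate env
```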
This evaluation harness can also be used in an evaluation-only mode, for which you can use a multi-CPU setting. For this mode you can also find an example of setup instructions in evaluation_setup.sh, where we configure the environment and evaluate some MBPP generations downloaded from the Hub.
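As a sketch of that mode (not the actual contents of evaluation_setup.sh), already-generated solutions can be evaluated on CPU by passing accelerate's standard `--cpu` flag to the launcher, with paths and arguments adapted to your setup:

```bash
# sketch: evaluate pre-generated MBPP solutions on CPU
accelerate launch --cpu main.py \
  --tasks mbpp \
  --allow_code_execution=True \
  --generations_path <PATH_TO_GENERATIONS_JSON> \
  --model <MODEL_USED_FOR_GENERATION>
```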
You can use this evaluation harness to generate text solutions to code benchmarks with your model, to evaluate (and execute) the solutions, or to do both. While it is better to use GPUs for generation, evaluation only requires CPUs, so it might be beneficial to separate these two steps. By default, both generation and evaluation are performed.
For more details on how to evaluate on the tasks, please refer to the documentation in docs/README.md.
Below are some examples of how to generate and evaluate on a few tasks.
```bash
accelerate launch main.py \
  --model <MODEL_NAME> \
  --tasks <TASK_NAME> \
  --limit <NUMBER_PROBLEMS> \
  --max_length_generation <MAX_LENGTH> \
  --temperature <TEMPERATURE> \
  --do_sample True \
  --n_samples 100 \
  --batch_size 10 \
  --allow_code_execution=False
```
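For instance, a run on HumanEval with a small CodeParrot checkpoint might look like the following (the model name and sampling settings are illustrative placeholders, not recommendations; `allow_code_execution=True` is needed here because HumanEval evaluation runs the unit tests, so read the warning first):

```bash
accelerate launch main.py \
  --model codeparrot/codeparrot-small \
  --tasks humaneval \
  --max_length_generation 512 \
  --temperature 0.2 \
  --do_sample True \
  --n_samples 100 \
  --batch_size 10 \
  --allow_code_execution=True
```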
- `limit` represents the number of problems to solve; if it is not provided, all problems in the benchmark are selected.
- `allow_code_execution` is for executing the generated code: read the displayed warning before setting it to `True`.
Some tasks don't require code execution, such as `codexglue_code_to_text-<LANGUAGE>`/`codexglue_code_to_text-python-left`/`conala`/`concode`, which use BLEU evaluation. In addition, we generate one candidate solution for each problem in these tasks, so use `n_samples=1` and `batch_size=1`. (Note that `batch_size` should always be less than or equal to `n_samples`.)
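For example, a hypothetical CoNaLa run would keep a single sample per problem (greedy decoding here is an assumption; choose the generation settings you need):

```bash
accelerate launch main.py \
  --model <MODEL_NAME> \
  --tasks conala \
  --max_length_generation <MAX_LENGTH> \
  --do_sample False \
  --n_samples 1 \
  --batch_size 1
```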
- For APPS tasks, you can use `n_samples=1` for strict and average accuracies (from the original APPS paper) and `n_samples>1` for pass@k.
If you want to generate solutions without executing and evaluating the code, set `generation_only` to True, in addition to the instructions above. This will save the solutions in a JSON file in the working directory.
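For instance, a generation-only MBPP run could look like this (passing the option as a bare `--generation_only` flag is an assumption about the CLI; adapt it if your version expects an explicit value):

```bash
accelerate launch main.py \
  --model <MODEL_NAME> \
  --tasks mbpp \
  --max_length_generation <MAX_LENGTH> \
  --temperature <TEMPERATURE> \
  --do_sample True \
  --n_samples <N_SAMPLES> \
  --batch_size <BATCH_SIZE> \
  --generation_only
```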
If you already have the generations in a JSON file from this evaluation harness and want to evaluate them, specify the path of the generations via the `generations_path` argument. You may need to reconfigure `accelerate` to use multiple CPUs. For this mode you can also find an example of setup instructions in evaluation_setup.sh.
Below is an example; be mindful of specifying arguments appropriate to the task you are evaluating on, and note that the `model` value here only serves to document the experiment.
```bash
accelerate launch main.py --tasks mbpp --allow_code_execution=True --generations_path generations.json --model incoder-temperature-08
```
To implement a new task in this evaluation harness, see the guide in docs/guide. There are also contribution guidelines in CONTRIBUTING.md.
We provide documentation for the existing benchmarks and how we run the evaluation in docs/README.md.
- Currently, we use parallel evaluation across multiple GPUs using `accelerate`; this assumes that you can fit the model on one GPU.
- Please note this evaluation harness tries to cover a wide set of models, but there could still be room for improvement for each model; some might require different prompt engineering or post-processing of the code generations.
- For some scores of ongoing experiments, please refer to example_scores/README.md.
We thank EleutherAI for their work on the lm-evaluation-harness, which inspired this repository.