MERA

License: MIT License

MERA (Multimodal Evaluation for Russian-language Architectures) is a new open benchmark for evaluating foundation models in the Russian language.

About MERA

The MERA benchmark brings together industry and academic players in one place to study the capabilities of foundation models, draw attention to AI problems, develop collaboration within the Russian Federation and in the international arena, and create an independent, unified system for measuring all current models. This repository is a customized version of the original Language Model Evaluation Harness (LM-Harness v0.3.0).

Our contributions to this project are:

  • Instruction-based tasks available on the 🤗 HuggingFace dataset card.
  • A customized version of the LM-Harness evaluation code for models (v0.3.0).
  • The benchmark website with the Leaderboard and the scoring submission system.
  • Baselines of the open models and the Human Benchmark.

The MERA benchmark includes 21 text tasks (17 base tasks + 4 diagnostic tasks); see the table below for the complete list.

| Name | Task Name | Task Type | Test Size | N-shots | Metrics |
|---|---|---|---|---|---|
| MathLogicQA | mathlogicqa | Math, Logic | 1143 | 5 | Acc |
| MultiQ | multiq | Reasoning | 900 | 0 | EM / F1 |
| PARus | parus | Common Sense | 500 | 0 | Acc |
| RCB | rcb | NLI | 438 | 0 | Acc / F1_macro |
| ruModAr | rumodar | Math, Logic | 6000 | 0 | Acc |
| ruMultiAr | rumultiar | Math | 1024 | 5 | Acc |
| ruOpenBookQA | ruopenbookqa | World Knowledge | 400 | 5 | Acc / F1_macro |
| ruTiE | rutie | Reasoning, Dialogue Context, Memory | 430 | 0 | Acc |
| ruWorldTree | ruworldtree | World Knowledge | 525 | 5 | Acc / F1_macro |
| RWSD | rwsd | Reasoning | 260 | 0 | Acc |
| SimpleAr | simplear | Math | 1000 | 5 | Acc |
| BPS | bps | Code, Math | 1000 | 2 | Acc |
| CheGeKa | chegeka | World Knowledge | 416 | 4 | EM / F1 |
| LCS | lcs | Code, Math | 500 | 2 | Acc |
| ruHumanEval | ruhumaneval | Code | 164 | 0 | Pass@k |
| ruMMLU | rummlu | Reasoning | 961 | 5 | Acc |
| USE | use | Exam | 900 | 0 | Grade_norm |
| ruDetox | rudetox | Ethics | 800 | 0 | J(STA, SIM, FL) |
| ruEthics | ruethics | Ethics | 1935 | 0 | 5 MCC |
| ruHateSpeech | ruhatespeech | Ethics | 265 | 0 | Acc |
| ruHHH | ruhhh | Ethics | 178 | 0 | Acc |

Our aim is to evaluate all the models:

  • in the same scenarios;
  • using the same metrics;
  • with the same adaptation strategy (e.g., prompting);
  • and thereby to enable controlled and clear comparisons.

MERA is a collaborative project created by industry and academia, with the support of the companies that are creating foundation models, to ensure fair and transparent leaderboards for model evaluation.

We express our gratitude to our team and partners:

SberDevices, Sber AI, Yandex, Skoltech AI, MTS AI, NRU HSE, the Russian Academy of Sciences, and others.

Powered by Alliance AI

Contents

The repository has the following structure:

  • examples: examples of loading and using the data (see the loading sketch after this list).
  • humanbenchmarks: materials and code for the human evaluation.
  • modules: example scoring scripts that are used on the website to score your submission.
  • lm-evaluation-harness: a framework for few-shot evaluation of language models.
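For instance, the instruction-based tasks can be loaded straight from HuggingFace with the `datasets` library. Below is a minimal sketch, assuming the tasks are published under the `ai-forever/MERA` dataset card and that each task (here `parus`) is a named configuration with a `test` split; the `examples` directory contains the authoritative loading code.

```python
# Minimal sketch of loading one MERA task from HuggingFace.
# Assumptions: the dataset card is "ai-forever/MERA" and each task
# (e.g. "parus") is a named configuration with a "test" split whose
# rows carry "instruction" and "inputs" fields.
from datasets import load_dataset

parus = load_dataset("ai-forever/MERA", name="parus")

sample = parus["test"][0]
print(sample["instruction"])  # instruction template (field name assumed)
print(sample["inputs"])       # task inputs (field name assumed)
```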

The submission process is the following:

  • to view the datasets, use the HuggingFace preview or run the prepared instruction;
  • clone the MERA benchmark repository;
  • to get the submission files, use the shell script and the provided customized lm-harness code (the actual model is not required for submission and evaluation);
  • run your model on all the datasets using the lm-harness code: the result is a ZIP archive for submission (see the packaging sketch after this list);
  • register on the website;
  • upload the submission files (ZIP) via the platform interface for automatic assessment.
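For orientation only, here is a hypothetical sketch of the final packaging step, assuming the harness writes one JSON file of predictions per task into an output directory; the authoritative archive layout is whatever the provided shell script and the sample submission define.

```python
# Hypothetical packaging sketch: bundle per-task result files into a
# submission ZIP. The actual layout is defined by the provided shell
# script and the sample submission; paths and names here are assumptions.
import zipfile
from pathlib import Path

output_dir = Path("lm-evaluation-harness/outputs")  # assumed output location

with zipfile.ZipFile("mera_submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for result_file in sorted(output_dir.glob("*.json")):
        zf.write(result_file, arcname=result_file.name)
```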

Note that the evaluation result is then displayed in the user's account and is kept private. Those who want to make their submission results public can use the "Publish" function. Once the submission passes validation, the model's overall score is shown publicly. The generation parameters, prompts, and few-shot/zero-shot settings are fixed; you can vary them for your own purposes, but if you want to submit your results to the public leaderboard, check that these parameters are the same and please attach the logs. We have to be sure that the model evaluation scenarios are the same and reproducible.

We provide a sample submission so you can check the format.

The whole MERA evaluation process is described in the figure:

[Figure: evaluation setup]


📌 This is the first, text-only version of the benchmark. We plan to expand and develop it in the future with new tasks and multimodality.

Feel free to ask any questions about our work: write to us at [email protected]. If you have ideas and new tasks, feel free to suggest them; it is important! If you see any bugs or know how to make the code better, please suggest fixes via pull requests and issues in this official GitHub repository 🤗. We will be glad to get feedback in any form.

Cite as

@misc{fenogenova2024mera,
    title={{MERA}: A Comprehensive {LLM} Evaluation in {Russian}},
    author={Alena Fenogenova and Artem Chervyakov and Nikita Martynov and Anastasia Kozlova and Maria Tikhonova and Albina Akhmetgareeva and Anton Emelyanov and Denis Shevelev and Pavel Lebedev and Leonid Sinev and Ulyana Isaeva and Katerina Kolomeytseva and Daniil Moskovskiy and Elizaveta Goncharova and Nikita Savushkin and Polina Mikhailova and Denis Dimitrov and Alexander Panchenko and Sergei Markov},
    year={2024},
    eprint={2401.04531},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2401.04531}
}
