sorokinvld / thoughtsource

Forked from openbiolink/thoughtsource

A central, open resource for data and tools related to chain-of-thought reasoning in large language models. Developed @ Samwald research group: https://samwald.info/

License: MIT License

Languages: Jupyter Notebook 98.96%, Python 0.94%, TypeScript 0.08%, JavaScript 0.01%, Makefile 0.01%, HTML 0.01%, SCSS 0.01%
ThoughtSource⚡

A framework for the science of machine thinking

Datasets · Tutorial notebook · Installation guide · Dataset Annotator

ThoughtSource is a central, open resource and community centered on data and tools for chain-of-thought reasoning in large language models (Wei 2022). Our long-term goal is to enable trustworthy and robust reasoning in advanced AI systems for driving scientific research and medical practice.

(Figure: ThoughtSource overview)

📄 Pre-print: Ott et al. "ThoughtSource: A central hub for large language model reasoning data", arXiv, 2023

Workflow

(Figures: ThoughtSource workflow overview)

Available datasets

Our dataloaders allow you to access the following datasets in a standardized chain-of-thought format. The dataloaders create objects in the Hugging Face 🤗 Datasets format. We post-processed the source datasets in different ways (sometimes extensively) to create more coherent reasoning chains.


Datasets can be browsed online through the Dataset Viewer 🔎


General question answering

  • commonsense_qa: Multiple-choice commonsense knowledge question answering dataset (Talmor 2018, License: MIT). Reasoning chains from three different sources are included:

    • Human-generated reasoning chains derived from the ECQA dataset (Aggarwal 2021) for the train and validation splits. Used as gold standard. License: Community Data License Agreement Sharing 1.0 (CDLA-Sharing-1.0).
    • AI-generated (few-shot prompting) reasoning chains from Wei 2022. Only available for the validation split. License: Unknown
    • AI-generated (zero-shot prompting) reasoning chains from Kojima 2022. Only available for the validation split. License: Unknown
  • strategy_qa: General-domain question-answering data from the StrategyQA dataset (Geva 2021). License: MIT.

    • Human-generated reasoning chains derived from the original dataset for the train split. Used as gold standard. License: MIT.
    • AI-generated (few-shot) reasoning chains from Wei 2022. Only available for the train split. License: Unknown
    • AI-generated (zero-shot) reasoning chains from Kojima 2022. Only available for the train split. License: Unknown
  • qed: General-domain question-answering data and justifications from the QED dataset (Lamm 2020). License: CC BY-SA 3.0.

Scientific / medical question answering

  • worldtree: Scientific question-answering data from the WorldTree v2 dataset (Xie 2020). Human-generated reasoning chains derived from the original dataset. License: AI2 Mercury.
  • entailment_bank: Science exam questions with expert-authored explanations from the EntailmentBank dataset (Dalvi 2022). Human-generated reasoning chains derived from the original dataset. License: CC BY 4.0. (Note: significant overlap with worldtree v2)
  • open_book_qa: Scientific question-answering modeled after open book exams for assessing human understanding from the OpenBookQA dataset (Mihaylov 2018). Human-generated reasoning chains derived from the original dataset. License: Apache License 2.0.
  • med_qa (USMLE subset): Free-form multiple-choice OpenQA dataset containing questions from US medical board exams (USMLE) (Jin 2020). Note: the original MedQA dataset also provides Chinese-language data, which are currently not included. License: MIT.
    • AI-generated (zero-shot) reasoning chains derived from Liévin 2022. Only available for the test split (US questions only). License: Unknown.
  • medmc_qa: Multiple-choice question-answering dataset containing real-world medical entrance exam questions from the All India Institute of Medical Sciences (AIIMS PG) and the National Eligibility cum Entrance Test (NEET PG) (Pal 2022). License: MIT.
    • Human-generated reasoning chains derived from the original dataset for ~85% of the train and validation splits. Used as gold standard. License: MIT.
    • AI-generated (zero-shot) reasoning chains derived from Liévin 2022. Only available for 1000 samples from the validation split. License: CC-BY.
  • pubmed_qa: QA dataset containing biomedical questions extracted from PubMed abstracts that can be answered with yes/no/maybe (Jin 2019). License: MIT.
    • Human-generated reasoning chains derived from the original dataset. Used as gold standard. License: MIT.
    • AI-generated (zero-shot) reasoning chains derived from Liévin 2022. Only available for the test split. License: CC-BY.

Math word problems

  • aqua: Math word problems from the AQUA-RAT (Algebra Question Answering with Rationales) dataset (Ling 2017). Reasoning chains derived from the original dataset. License: Apache 2.0.
  • asdiv: Math word problems from the Academia Sinica Diverse MWP dataset (Miao 2020). Reasoning chains derived from the original dataset. License: CC BY-NC 4.0.
  • gsm8k: Math word problems from the GSM8K dataset (Cobbe 2021). Reasoning chains derived from the original dataset. License: MIT.
  • mawps: Math word problems from MAWPS, the Math Word Problem Repository dataset (Koncel-Kedziorski 2016). Reasoning chains derived from the original dataset. License: MIT.
  • svamp: Math word problems. Source: SVAMP (Patel 2021). Reasoning chains derived from the original dataset. License: MIT.
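Because every dataloader emits the same standardized format, downstream code can treat all of the datasets above uniformly. Purely as an illustration, a standardized record might look like the sketch below; note that these field names are a guess for demonstration purposes, not the exact ThoughtSource schema (see the dataset viewer or the cot library for the real field names).

```python
# Illustrative only: a plausible sketch of a standardized chain-of-thought
# record. The field names are hypothetical, NOT the exact ThoughtSource schema.
example_item = {
    "id": "worldtree_train_0001",  # hypothetical identifier
    "question": "Which property of a mineral can be determined just by looking at it?",
    "choices": ["hardness", "luster", "mass", "weight"],
    "cot": [
        "Luster describes how light reflects off a surface.",
        "Reflected light can be judged by eye, without instruments.",
    ],
    "answer": ["luster"],
}

# Because every dataset exposes the same keys, downstream code can
# iterate over reasoning steps uniformly, regardless of the source dataset.
for step in example_item["cot"]:
    print(step)
```

The point of the shared schema is exactly this: evaluation and generation code written once works across all datasets.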

We are working on collecting and generating additional datasets, and on further improving the quality of existing datasets (see dataset issues). We welcome suggestions for the inclusion of other datasets.

We welcome dataset contributions! 👉 Have a look at our contribution guide!

Annotator

Demonstration of the annotator tool

The annotator highlights similarities between different generated reasoning chains, making it easier to spot strengths and weaknesses and to select the best results.


Use the web-based annotator 📝
To try out the annotator, simply type in your name and load this example file



Installation and code structure

Installation

Execute in a terminal, line by line:

git clone git@github.com:OpenBioLink/ThoughtSource.git
cd ThoughtSource
# install pip and virtualenv
sudo apt install python3-pip
sudo apt install python3-venv
# create and activate virtual environment
python3 -m venv venv
source ./venv/bin/activate
# install requirements and API packages
pip install -e ./libs/cot[api]

Applications

  • annotator: Web-based tool for annotating chain-of-thought data.

  • dataset-viewer: Streamlit application for browsing ThoughtSource datasets

Libraries

  • cot:
    • dataloader: Creation and processing of ThoughtSource datasets (based on the Hugging Face 🤗 Datasets library).
    • generate: Generation of reasoning chains with a wide variety of language models (currently OpenAI and models on the Hugging Face Hub).
    • evaluate: Evaluation of the performance of answers extracted from generated reasoning chains.
# 0) Import the Collection class from the cot library
from cot import Collection

# 1) Dataset loading and selecting a random sample
collection = Collection(["worldtree"], verbose=False)
collection = collection.select(split="train", number_samples=10)

# 2) Language Model generates chains of thought and then extracts answers
config={
    "instruction_keys": ['qa-01'], # "Answer the following question through step-by-step reasoning."
    "cot_trigger_keys": ['kojima-01'], # "Answer: Let's think step by step."
    "answer_extraction_keys": ['kojima-A-D'], # "Therefore, among A through D, the answer is"
    "api_service": "huggingface_hub",
    "engine": "google/flan-t5-xl",
    "warn": False,
    "verbose": False,
}
collection.generate(config=config)

# 3) Performance evaluation
collection.evaluate()
{'accuracy': {'qa-01_kojima-01_kojima-A-D': 0.6}}
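The accuracy reported above is, at its core, the fraction of extracted answers that match the gold answers. A minimal sketch of such a metric (illustrative, not the cot library's actual implementation):

```python
# Minimal sketch of an accuracy metric like the one collection.evaluate()
# reports; illustrative only, not the cot library's actual implementation.
def accuracy(predictions, gold_answers):
    """Fraction of extracted answers that exactly match the gold answers."""
    assert len(predictions) == len(gold_answers)
    matches = sum(p == g for p, g in zip(predictions, gold_answers))
    return matches / len(gold_answers)

# Hypothetical example: 6 of 10 extracted answers match the gold labels,
# giving 0.6 as in the example output above.
preds = ["B", "C", "A", "D", "B", "A", "C", "C", "D", "A"]
gold  = ["B", "C", "A", "B", "B", "D", "C", "A", "D", "B"]
print(accuracy(preds, gold))  # 0.6
```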

👉 See the tutorial notebook for more code examples.
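The three key families in the config above (instruction, CoT trigger, answer extraction) correspond to the prompt fragments shown in the inline comments. A rough sketch of how such fragments combine in the two-stage zero-shot CoT prompting scheme of Kojima 2022 (illustrative; the cot library's internal prompt assembly may differ):

```python
# Illustrative two-stage zero-shot CoT prompting (Kojima 2022); the exact
# prompt assembly inside the cot library may differ from this sketch.
question = ("Which property of a mineral can be determined just by looking at it? "
            "(A) luster (B) mass (C) weight (D) hardness")
instruction = "Answer the following question through step-by-step reasoning."  # qa-01
cot_trigger = "Answer: Let's think step by step."                              # kojima-01
answer_extraction = "Therefore, among A through D, the answer is"              # kojima-A-D

# Stage 1: elicit a reasoning chain from the model.
stage_1_prompt = f"{instruction}\n\n{question}\n\n{cot_trigger}"

# Stage 2: append the generated chain, then ask for the final answer letter.
generated_cot = "Luster is how a surface reflects light, so it is visible by eye."
stage_2_prompt = f"{stage_1_prompt} {generated_cot}\n{answer_extraction}"

print(stage_2_prompt)
```

The answer-extraction phrase constrains the model's second response to a short, easily parsed answer, which is what makes automated evaluation possible.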


Citation

@misc{https://doi.org/10.48550/arxiv.2301.11596,
  doi = {10.48550/ARXIV.2301.11596},
  url = {https://arxiv.org/abs/2301.11596},
  author = {Ott, Simon and Hebenstreit, Konstantin and Liévin, Valentin and Hother, Christoffer Egeberg and Moradi, Milad and Mayrhauser, Maximilian and Praas, Robert and Winther, Ole and Samwald, Matthias},
  keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {ThoughtSource: A central hub for large language model reasoning data},
  publisher = {arXiv},
  year = {2023}, 
  copyright = {Creative Commons Attribution 4.0 International}
}

Versioning

All updates/changes to datasets are explicitly mentioned in bold.

0.0.5 (2023-03-10) - Function to select which generated CoTs to keep after loading: collection.select_generated_cots(author="thoughtsource")

0.0.4 (2023-03-08) - Evaluation function improved. Function to load ThoughtSource100 collection: Collection.load_thoughtsource_100()

0.0.3 (2023-02-24) - ThoughtSource_100 collection released with reasoning chains from OpenAI's text-davinci-003, Google's flan-t5-xxl, and Cohere's command-xl

0.0.2 (2023-02-15) - Annotator tool updated for the correct data schema (this might result in errors when loading old datasets from JSON files).

        Pubmed_qa: Included "LONG_ANSWER" from the original schema as "cot" in the ThoughtSource schema

0.0.1 (2023-02-01) - Initial release after Twitter announcement of project

Contributors

nomisto, matthias-samwald, konstantinhebenstreit, llewi, elmaestrobert, mmoradi-iut, hwchase17, jas-ho
