

AutoRAG

RAG AutoML tool that automatically finds an optimal RAG pipeline for your data.

Explore our 📖 Documentation!

Plus, join our 📞 Discord Community.


📌 Colab Tutorial


🚨 YouTube Tutorial

AutoRAG Tutorial 1.1 (video)

Muted by default; enable sound for the voice-over.

You can also watch it on YouTube.


Introduction

There are many RAG pipelines and modules out there, but you don’t know which pipeline is best for “your own data” and “your own use case.” Building and evaluating all RAG modules is very time-consuming and hard to do, but without doing it, you will never know which RAG pipeline is best for your use case.

AutoRAG is a tool for finding the optimal RAG pipeline for “your data.” It automatically evaluates various RAG modules with your own evaluation data and finds the best RAG pipeline for your use case.

AutoRAG supports a simple way to evaluate many RAG module combinations. Try it now and find the best RAG pipeline for your use case.

⚡ Quick Install

pip install AutoRAG

💪 Strengths

1. Find your RAG baseline

Benchmark RAG pipelines with a few lines of code. You can quickly get a high-performance RAG pipeline tailored to your data. Don’t waste time wrestling with complex RAG modules and academic papers; focus on your data.

2. Analyze what went wrong

Sometimes it is hard to keep track of where the major problems lie within your RAG pipeline. AutoRAG gives you the evaluation data for each step, so you can analyze where the major problems are and where to focus your effort.

3. Quick Starter Pack for your new RAG product

Get the most effective RAG workflow among many pipelines, and start from there. Don’t start at the toy-project level; start at an advanced level.

4. Share your experiments with others

It's really easy to share your experiments with others. Just share your config YAML file and summary CSV files. Plus, check out other people's results and adapt them to your use case.

⚡ QuickStart

1. Prepare your evaluation data

For evaluation, you need to prepare just three files.

  • QA dataset file (qa.parquet)
  • Corpus dataset file (corpus.parquet)
  • Config yaml file (config.yaml)

There is a template for the evaluation data used by AutoRAG.

  • Check out how to make evaluation data here.
  • Check out the evaluation data rules here.
  • Plus, you can get example datasets for testing AutoRAG here.
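As a rough sketch of what the two dataset files might look like (the column names below are assumptions for illustration; check the evaluation data rules linked above for the authoritative schema), you could build them with pandas:

```python
import pandas as pd

# Hypothetical schemas -- verify against the official evaluation data rules.
# corpus.parquet: one row per passage.
corpus = pd.DataFrame({
    "doc_id": ["doc-1", "doc-2"],
    "contents": [
        "The Fed raised rates by 25 basis points.",
        "Tech stocks rallied on strong earnings.",
    ],
    "metadata": [{"source": "news"}, {"source": "news"}],
})

# qa.parquet: one row per question, with ground-truth doc ids and answers.
qa = pd.DataFrame({
    "qid": ["q-1"],
    "query": ["What did the Fed do to interest rates?"],
    "retrieval_gt": [[["doc-1"]]],  # assumed: list of lists of doc_id
    "generation_gt": [["It raised rates by 25 basis points."]],
})

# Saving requires pyarrow or fastparquet:
# corpus.to_parquet("corpus.parquet")
# qa.to_parquet("qa.parquet")
print(list(qa.columns))
```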

2. Evaluate your data to various RAG modules

You can get various config YAML files here. We highly recommend using the pre-made config YAML files to start.

If you want to make your own config YAML files, check out the Config YAML file section.

You can evaluate your RAG pipeline with just a few lines of code.

from autorag.evaluator import Evaluator

# Point the evaluator at your QA and corpus parquet files
evaluator = Evaluator(qa_data_path='your/path/to/qa.parquet', corpus_data_path='your/path/to/corpus.parquet')
# Run one optimization trial with the given config
evaluator.start_trial('your/path/to/config.yaml')

Or you can use the command line interface:

autorag evaluate --config your/path/to/default_config.yaml --qa_data_path your/path/to/qa.parquet --corpus_data_path your/path/to/corpus.parquet

Once it is done, you will see several files and folders created in your current directory. In the trial folder, named with a number (like 0), you can check the summary.csv file, which summarizes the evaluation results and the best RAG pipeline for your data.

For more details, you can check out what the folder structure looks like here.
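You can also inspect summary.csv programmatically. The column names in this sketch are illustrative assumptions, not the guaranteed schema of the real file; it builds a toy CSV in memory so the snippet is self-contained:

```python
import io
import pandas as pd

# A toy summary.csv with assumed columns -- the real file's schema may differ.
toy_summary = io.StringIO(
    "node_line_name,node_type,best_module_name,best_execution_time\n"
    "retrieve_node_line,retrieval,bm25,1.2\n"
    "generate_node_line,generator,llama_index_llm,3.4\n"
)
summary = pd.read_csv(toy_summary)

# Which module won at each node?
best = summary[["node_type", "best_module_name"]]
print(best.to_dict("records"))
```

In a real trial you would read the file from the trial folder instead, e.g. pd.read_csv('0/summary.csv').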

3. Use a found optimal RAG pipeline

You can use the optimal RAG pipeline you found right away. It takes just a few lines of code, and you are ready to go!

First, you need to build a pipeline YAML file from your evaluated trial folder. You can find the trial folder in your current directory; just look for a folder named '0' or another number.

from autorag.deploy import Runner

runner = Runner.from_trial_folder('your/path/to/trial_folder')
runner.run('your question')

Or you can run this pipeline as an API server, using Python code or a CLI command. Check out the API endpoints here.

from autorag.deploy import Runner

runner = Runner.from_trial_folder('your/path/to/trial_folder')
runner.run_api_server()

You can also run the API server with a CLI command:

autorag run_api --config_path your/path/to/pipeline.yaml --host 0.0.0.0 --port 8000
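Once the server is up, a client can send questions over HTTP. In this sketch the route (/v1/run) and the payload shape are assumptions; confirm them against the API endpoint documentation linked above. The actual network call is left commented out so the snippet runs without a live server:

```python
import json
import urllib.request

# Hypothetical endpoint and payload -- confirm against the API docs.
payload = json.dumps({"query": "your question"}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8000/v1/run",  # assumed route
    data=payload,
    headers={"Content-Type": "application/json"},
)
# With the server running, uncomment to send the request:
# resp = urllib.request.urlopen(req)
# print(json.load(resp))
print(json.loads(payload))
```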

4. Run Dashboard

You can run a dashboard to easily view the results.

autorag dashboard --trial_dir /your/path/to/trial_dir

5. Share your RAG pipeline

You can use your RAG pipeline from the extracted pipeline YAML file. This extracted pipeline is great for sharing your RAG pipeline with others.

You must run this in the project folder, which contains your datasets in the data folder and the ingested corpus for retrieval in the resources folder.

from autorag.deploy import extract_best_config

pipeline_dict = extract_best_config(trial_path='your/path/to/trial_folder', output_path='your/path/to/pipeline.yaml')

➕ Create your own Config yaml file

You can build your own evaluation process with a config YAML file. You can find a detailed explanation of how to configure each module and node here.

Here is a simple YAML file example.

It evaluates two retrieval modules, BM25 and Vector Retriever (plus a hybrid of the two), and three reranking modules. Lastly, it makes a prompt and runs generation with two LLM models and three temperatures.

node_lines:
  - node_line_name: retrieve_node_line
    nodes:
      - node_type: retrieval
        strategy:
          metric: retrieval_f1
        top_k: 50
        modules:
          - module_type: bm25
          - module_type: vector
            embedding_model: [ openai, openai_embed_3_large ]
          - module_type: hybrid_rrf
            target_modules: ('bm25', 'vectordb')
            rrf_k: [ 3, 5, 10 ]
      - node_type: reranker
        strategy:
          metric: retrieval_precision
          speed_threshold: 5
        top_k: 3
        modules:
          - module_type: upr
          - module_type: tart
            prompt: Arrange the following sentences in the correct order.
          - module_type: monoT5
  - node_line_name: generate_node_line
    nodes:
      - node_type: prompt_maker
        modules:
          - module_type: fstring
            prompt: "This is a news dataset, crawled from a finance news site. You need to make a detailed question about the finance news. Do not make questions that are not relevant to the economy or finance domain.\n{retrieved_contents}\n\nQ: {query}\nA:"
      - node_type: generator
        strategy:
          metric:
            - metric_name: meteor
            - metric_name: rouge
            - metric_name: sem_score
              embedding_model: openai
            - metric_name: g_eval
              model: gpt-3.5-turbo
        modules:
          - module_type: llama_index_llm
            llm: openai
            model: [ gpt-3.5-turbo-16k, gpt-3.5-turbo-1106 ]
            temperature: [ 0.5, 1.0, 1.5 ]
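Note how quickly the search space grows: in the generator node above, two models times three temperatures already yields six module configurations to evaluate. A quick sketch of that expansion:

```python
from itertools import product

# Parameter values taken from the generator node in the example config.
models = ["gpt-3.5-turbo-16k", "gpt-3.5-turbo-1106"]
temperatures = [0.5, 1.0, 1.5]

# Cartesian product of the listed parameter values.
generator_configs = [
    {"model": m, "temperature": t} for m, t in product(models, temperatures)
]
print(len(generator_configs))  # 6 combinations for the generator node alone
```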

❗Supporting Nodes & modules

You can check all of our supported nodes & modules here.

❗Supporting Evaluation Metrics

You can check all of our supported evaluation metrics here.

🛣Roadmap

  • Policy module for modular RAG pipelines
  • Visualize evaluation results
  • Visualize config YAML files
  • Support for more RAG modules
  • Token usage strategy
  • Multi-modal support
  • More evaluation metrics
  • Answer filtering module
  • Restart optimization from a previous trial

Contribution

We are developing AutoRAG as an open-source project, so it welcomes contributions and suggestions. Feel free to contribute.

Plus, check out our detailed documentation here.

Contributors

bwook00, eastsidegunn, gauravsaini02, hongsw, vkehfdl1


Issues

Add n-gram metric options

From PR #40

I think these n-gram-based metrics could expose some options:

  • N: int, the minimum unit for the n-gram
  • tokenizer: select the tokenizer (depending on the language, tokenization method, etc., the tokenizer can influence the value of each metric)
  • etc.

Make a brief MVP for RAGround

  • Add some default modules, the evaluation process, and a parser for the config YAML file.
  • Produce evaluation results for each node_line and node.

Add Hybrid Retrieval "Deploy" Function

It is confusing and difficult to write a deploy function at retrieval_node for hybrid retrieval, so I have to make a dedicated deploy function for hybrid retrieval. Plus, I have to make a hybrid retrieval YAML file for deployment.

Add deploy method

First, I will make a function for running the found optimal pipeline on a single query.

Refactor the method that saves the best module and best params

Currently, we save which module and params are best directly in the "best" filename. But this will cause unintended errors when a param or module name contains an underscore (_) or hyphen (-). So I have to find a way to record which module and module params were selected in the node run function.
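To illustrate the problem (a hypothetical sketch, not AutoRAG's actual code): if the best result is encoded as module=>param_key_value in a filename, splitting it back apart is ambiguous once names themselves contain the separator character:

```python
# Hypothetical filename encoding, sketching the issue -- not AutoRAG's real code.
stem = "bm25=>top_k_50"
module, param = stem.split("=>")
key, value = param.rsplit("_", 1)   # happens to work here: ("top_k", "50")
print(key, value)

# But a value that itself contains underscores breaks the parse:
stem2 = "vector=>embedding_model_openai_embed_3_large"
module2, param2 = stem2.split("=>")
key2, value2 = param2.rsplit("_", 1)
print(key2, value2)  # key2 is now "embedding_model_openai_embed_3", not "embedding_model"
```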

Add Passage Compressor Node

  • Add the first passage compressor module.
  • Add the passage compressor node decorator.
  • Add the passage compressor run functions.

[Hotfix] Windows OS error

I ran python -m pytest in the terminal and got this error message:

======================================================================================================= short test summary info =======================================================================================================
FAILED tests/autorag/test_evaluator.py::test_start_trial - OSError: [Errno 22] Invalid argument: 'C:\\Users\\hanpa\\PycharmProjects\\AutoRAG\\0\\retrieve_node_line\\retrieval\\bm25=>top_k_50.parquet'
FAILED tests/autorag/nodes/retrieval/test_run_retrieval_node.py::test_run_retrieval_node - OSError: [Errno 22] Invalid argument: 'C:\\Users\\hanpa\\PycharmProjects\\AutoRAG\\tests\\resources\\test_project\\test_trial\\test_node_line\\retrieval\\bm25=>top_k_4.parquet'
============================================================================================= 2 failed, 26 passed, 23 warnings in 24.79s ==============================================================================================
(venv) PS C:\Users\hanpa\PycharmProjects\AutoRAG>
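The root cause is that '>' (part of the '=>' separator in the result filenames) is not a legal filename character on Windows. A hedged sketch of one possible fix, replacing Windows-reserved characters before writing the file (the sanitize helper is hypothetical, not AutoRAG's actual fix):

```python
import re

# Characters that Windows (NTFS) rejects in filenames.
WINDOWS_RESERVED = r'[<>:"/\\|?*]'

def sanitize(filename: str) -> str:
    """Replace Windows-reserved characters with a safe separator."""
    return re.sub(WINDOWS_RESERVED, "_", filename)

print(sanitize("bm25=>top_k_50.parquet"))  # bm25=_top_k_50.parquet
```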
