instructlab / instructlab-bot

GitHub bot to assist with the taxonomy contribution workflow

License: Apache License 2.0


instructlab-bot's Introduction

InstructLab 🐶 (ilab)


📖 Contents

Welcome to the InstructLab CLI

InstructLab 🐶 uses a novel synthetic data-based alignment tuning method for Large Language Models (LLMs). The "lab" in InstructLab 🐶 stands for Large-Scale Alignment for ChatBots [1].

[1] Shivchander Sudalairaj*, Abhishek Bhandwaldar*, Aldo Pareja*, Kai Xu, David D. Cox, Akash Srivastava*. "LAB: Large-Scale Alignment for ChatBots", arXiv preprint arXiv: 2403.01081, 2024. (* denotes equal contributions)

🎺 What's new

InstructLab release 0.17.0 on June 14, 2024 contains updates to the ilab CLI design. The ilab commands now fall into groups for an easier workflow and understanding of the commands. For more information, see the InstructLab CLI reference. To view all the available flags for each command group, use the --help flag after the command. The original commands are still in effect, but will be deprecated in release 0.19.0 on July 11, 2024.

❓ What is ilab

ilab is a Command-Line Interface (CLI) tool that allows you to perform the following actions:

  1. Download a pre-trained Large Language Model (LLM).
  2. Chat with the LLM.

To add new knowledge and skills to the pre-trained LLM, add information to the companion taxonomy repository.

After you have added knowledge and skills to the taxonomy, you can perform the following actions:

  1. Use ilab to generate new synthetic training data based on the changes in your local taxonomy repository.
  2. Re-train the LLM with the new training data.
  3. Chat with the re-trained LLM to see the results.
graph TD;
  download-->chat
  chat[Chat with the LLM]-->add
  add[Add new knowledge\nor skill to taxonomy]-->generate[generate new\nsynthetic training data]
  generate-->train
  train[Re-train]-->|Chat with\nthe re-trained LLM\nto see the results|chat

For an overview of the full workflow, see the workflow diagram.

Important

We have optimized InstructLab so that community members with commodity hardware can perform these steps. However, running InstructLab on a laptop will provide a low-fidelity approximation of synthetic data generation (using the ilab data generate command) and model instruction tuning (using the ilab model train command, which uses QLoRA). To achieve higher quality, use more sophisticated hardware and configure InstructLab to use a larger teacher model such as Mixtral.

📋 Requirements

  • 🍎 Apple M1/M2/M3 Mac or 🐧 Linux system (tested on Fedora). We anticipate support for more operating systems in the future.
  • C++ compiler
  • Python 3.10+ (<3.12 for PyTorch JIT)
  • Approximately 60GB disk space (entire process)

NOTE: PyTorch 2.2.1 does not support torch.compile with Python 3.12. On Fedora 39+, install python3.11-devel and create the virtual env with python3.11 if you wish to use PyTorch's JIT compiler.
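
For example, on Fedora 39+ this could look like the following (a sketch; the package names assume Fedora's python3.11 packages):

sudo dnf install python3.11 python3.11-devel
python3.11 -m venv --upgrade-deps venv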

NOTE: When installing the ilab CLI on macOS, you may have to run the xcode-select --install command to install the required packages previously listed.

✅ Getting started

🧰 Installing ilab

  1. When installing on Fedora Linux, install C++, Python 3.10+, and other necessary tools by running the following command:

    sudo dnf install g++ gcc make pip python3 python3-devel python3-GitPython

    Optional: If g++ is not found, try gcc-c++ by running the following command:

    sudo dnf install gcc-c++ gcc make pip python3 python3-devel python3-GitPython

    If you are running on macOS, this installation is not necessary and you can begin your process with the following step.

  2. Create a new directory called instructlab to store the files the ilab CLI needs when running and cd into the directory by running the following command:

    mkdir instructlab
    cd instructlab

    NOTE: The following steps in this document use Python venv for virtual environments. However, if you use another tool such as pyenv or Conda Miniforge for managing Python environments on your machine, continue to use that tool instead. Otherwise, you may have issues with packages that are installed but not found in venv.

  3. There are a few ways you can locally install the ilab CLI. Select your preferred installation method from the following instructions. You can then install ilab and activate your venv environment.

    NOTE: ⏳ pip install may take some time, depending on your internet connection. If installation fails with the error unsupported instruction `vpdpbusd', append -C cmake.args="-DLLAMA_NATIVE=off" to the pip install command.
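
    For example, the install command with that workaround appended would look like this (a sketch of the workaround described above):

    pip install instructlab -C cmake.args="-DLLAMA_NATIVE=off"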

    See the GPU acceleration documentation for how to enable hardware acceleration for interaction and training on AMD ROCm, Apple Metal Performance Shaders (MPS), and Nvidia CUDA.

    Install using PyTorch without CUDA bindings and no GPU acceleration

    python3 -m venv --upgrade-deps venv
    source venv/bin/activate
    pip cache remove llama_cpp_python
    pip install instructlab --extra-index-url=https://download.pytorch.org/whl/cpu

    NOTE: Additional Build Argument for Intel Macs

    If you have a Mac with an Intel CPU, you must add the prefix CMAKE_ARGS="-DLLAMA_METAL=off" to the pip install command to ensure that the build is done without Apple M-series GPU support.

    (venv) $ CMAKE_ARGS="-DLLAMA_METAL=off" pip install ...

    Install with AMD ROCm

    python3 -m venv --upgrade-deps venv
    source venv/bin/activate
    pip cache remove llama_cpp_python
    pip install instructlab \
       --extra-index-url https://download.pytorch.org/whl/rocm6.0 \
       -C cmake.args="-DLLAMA_HIPBLAS=on" \
       -C cmake.args="-DAMDGPU_TARGETS=all" \
       -C cmake.args="-DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang" \
       -C cmake.args="-DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++" \
       -C cmake.args="-DCMAKE_PREFIX_PATH=/opt/rocm"

    On Fedora 40+, use -DCMAKE_C_COMPILER=clang-17 and -DCMAKE_CXX_COMPILER=clang++-17.

    Install with Apple Metal on M1/M2/M3 Macs

    NOTE: Make sure your system Python build is Mach-O 64-bit executable arm64 by using file -b $(command -v python).

    python3 -m venv --upgrade-deps venv
    source venv/bin/activate
    pip cache remove llama_cpp_python
    pip install instructlab

    Install with Nvidia CUDA

    python3 -m venv --upgrade-deps venv
    source venv/bin/activate
    pip cache remove llama_cpp_python
    pip install instructlab -C cmake.args="-DLLAMA_CUDA=on"
  4. From your venv environment, verify that ilab is installed correctly by running the ilab command.

    ilab

    Example output of the ilab command

    (venv) $ ilab
    Usage: ilab [OPTIONS] COMMAND [ARGS]...
    
    CLI for interacting with InstructLab.
    
    If this is your first time running InstructLab, it's best to start with `ilab config init` to create the environment.
    
    Options:
    --config PATH  Path to a configuration file.  [default: config.yaml]
    --version      Show the version and exit.
    --help         Show this message and exit.
    
    Commands:
       config      Command group for Interacting with the Config of InstructLab
       data        Command group for Interacting with the Data of generated by...
       model       Command group for Interacting with the Models in InstructLab
       sysinfo     Print system information
       taxonomy    Command group for Interacting with the Taxonomy in InstructLab
    
    Aliases:
       chat: model chat
       convert: model convert
       diff: taxonomy diff
       download: model download
       generate: model generate
       init: config init 
       serve: model serve
       test: model test
       train: model train

    IMPORTANT: every ilab command needs to be run from within your Python virtual environment. To enter the Python environment, run the following command:

    source venv/bin/activate
  5. Optional: You can enable tab completion for the ilab command.

    Bash (version 4.4 or newer)

    Enable tab completion in bash with the following command:

    eval "$(_ILAB_COMPLETE=bash_source ilab)"

    To have this enabled automatically every time you open a new shell, you can save the completion script and source it from ~/.bashrc:

    _ILAB_COMPLETE=bash_source ilab > ~/.ilab-complete.bash
    echo ". ~/.ilab-complete.bash" >> ~/.bashrc

    Zsh

    Enable tab completion in zsh with the following command:

    eval "$(_ILAB_COMPLETE=zsh_source ilab)"

    To have this enabled automatically every time you open a new shell, you can save the completion script and source it from ~/.zshrc:

    _ILAB_COMPLETE=zsh_source ilab > ~/.ilab-complete.zsh
    echo ". ~/.ilab-complete.zsh" >> ~/.zshrc

    Fish

    Enable tab completion in fish with the following command:

    _ILAB_COMPLETE=fish_source ilab | source

    To have this enabled automatically every time you open a new shell, save the completion script to your fish completions directory:

    _ILAB_COMPLETE=fish_source ilab > ~/.config/fish/completions/ilab.fish

🏗️ Initialize ilab

  1. Initialize ilab by running the following command:

    ilab config init

    Example output

    Welcome to InstructLab CLI. This guide will help you set up your environment.
    Please provide the following values to initiate the environment [press Enter for defaults]:
    Path to taxonomy repo [taxonomy]: <ENTER>
  2. When prompted by the interface, press Enter to add a new default config.yaml file.

  3. When prompted, clone the https://github.com/instructlab/taxonomy.git repository into the current directory by typing y.

    Optional: If you want to point to an existing local clone of the taxonomy repository, you can pass the path interactively or alternatively with the --taxonomy-path flag (see the example after these steps).

    Example output after initializing ilab

    (venv) $ ilab config init
    Welcome to InstructLab CLI. This guide will help you set up your environment.
    Please provide the following values to initiate the environment [press Enter for defaults]:
    Path to taxonomy repo [taxonomy]: <ENTER>
    `taxonomy` seems to not exists or is empty. Should I clone https://github.com/instructlab/taxonomy.git for you? [y/N]: y
    Cloning https://github.com/instructlab/taxonomy.git...
    Generating `config.yaml` in the current directory...
    Initialization completed successfully, you're ready to start using `ilab`. Enjoy!

    ilab will use the default configuration file unless otherwise specified. You can override this behavior with the --config parameter for any ilab command.
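
For example, a non-interactive initialization that points at an existing local clone might look like this (the path shown is illustrative):

ilab config init --taxonomy-path ~/instructlab/taxonomy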

📥 Download the model

  • Run the ilab model download command.

    ilab model download

    ilab model download downloads a compact pre-trained version of the model (~4.4G) from Hugging Face and stores it in a models directory:

    (venv) $ ilab model download
    Downloading model from instructlab/merlinite-7b-lab-GGUF@main to models...
    (venv) $ ls models
    merlinite-7b-lab-Q4_K_M.gguf

    NOTE: ⏳ This command can finish quickly or take a few minutes, depending on your internet connection and whether the model is already cached. If you have issues connecting to Hugging Face, refer to the Hugging Face discussion forum for more details.

    Downloading a specific model from a Hugging Face repository

  • Specify repository, model, and a Hugging Face token if necessary. More information about Hugging Face tokens can be found here

    HF_TOKEN=<YOUR HUGGINGFACE TOKEN GOES HERE> ilab model download --repository=TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF --filename=mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf

    Downloading an entire Hugging Face repository

  • Specify repository, and a Hugging Face token if necessary. For example:

    HF_TOKEN=<YOUR HUGGINGFACE TOKEN GOES HERE> ilab model download --repository=mistralai/Mixtral-8x7B-v0.1

🍴 Serving the model

  • Serve the model by running the following command:

    ilab model serve
  • Serve a non-default model (e.g. Mixtral-8x7B-Instruct-v0.1):

    ilab serve --model-path models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf
  • Once the model is served and ready, you'll see the following output:

    (venv) $ ilab model serve
    INFO 2024-03-02 02:21:11,352 lab.py:201 Using model 'models/ggml-merlinite-7b-lab-Q4_K_M.gguf' with -1 gpu-layers and 4096 max context size.
    Starting server process
    After application startup complete see http://127.0.0.1:8000/docs for API.
    Press CTRL+C to shut down the server.

    NOTE: If multiple ilab clients try to connect to the same InstructLab server at the same time, the first will connect to the server while the others will start their own temporary server. This will require additional resources on the host machine.

📣 Chat with the model (Optional)

Because you're serving the model in one terminal window, you will have to create a new window and re-activate your Python virtual environment to run the ilab model chat command:

source venv/bin/activate
ilab model chat

Chat with a non-default model (e.g. Mixtral-8x7B-Instruct-v0.1):

source venv/bin/activate
ilab chat --model models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf

Before you start adding new skills and knowledge to your model, you can check its baseline performance by asking it a question such as what is the capital of Canada?.

NOTE: The model needs to be trained with the generated synthetic data before it can use the new skills or knowledge.

(venv) $ ilab model chat
╭──────────────────────────────────────────── system ─────────────────────────────────────────────╮
│ Welcome to InstructLab Chat w/ GGML-MERLINITE-7B-lab-Q4_K_M (type /h for help)                   │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
>>> what is the capital of Canada                                                     [S][default]
╭───────────────────────────────── ggml-merlinite-7b-lab-Q4_K_M ──────────────────────────────────╮
│ The capital city of Canada is Ottawa. It is located in the province of Ontario, on the          │
│ southern banks of the Ottawa River in the eastern portion of southern Ontario. The city         │
│ serves as the political center for Canada, as it is home to Parliament Hill, which houses       │
│ the House of Commons, Senate, Supreme Court, and Cabinet of Canada. Ottawa has a rich           │
│ history and cultural significance, making it an essential part of Canada's identity.            │
╰──────────────────────────────────────────────────────────────────── elapsed 12.008 seconds ─────╯
>>>                                                                                   [S][default]

💻 Creating new knowledge or skills and training the model

🎁 Contribute knowledge or compositional skills

  1. Contribute new knowledge or compositional skills to your local taxonomy repository.

Detailed contribution instructions can be found in the taxonomy repository.

Important

There is a limit to how much content can exist in the question/answer pairs for the model to process. Due to this, only add a maximum of around 2300 words to your question and answer seed example pairs in the qna.yaml file.
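
As a rough illustration only (the exact schema and file layout are defined in the taxonomy repository and may differ from this sketch), a minimal compositional-skill qna.yaml could be created like this; the path, field names, and content below are illustrative:

mkdir -p taxonomy/compositional_skills/writing/freeform/foo-lang
cat > taxonomy/compositional_skills/writing/freeform/foo-lang/qna.yaml <<'EOF'
created_by: your-github-username
task_description: Teach the model about the fictional foo-lang language.
seed_examples:
  - question: What is foo-lang?
    answer: foo-lang is a small fictional language used here purely as an example.
  - question: Write a one-line greeting in foo-lang.
    answer: 'print "hello, world"'
EOF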

📜 List and validate your new data

  1. List your new data by running the following command:

    ilab taxonomy diff
  2. To ensure ilab is registering your new knowledge or skills, you can run the ilab taxonomy diff command. The following is the expected result after adding the new compositional skill foo-lang:

    (venv) $ ilab taxonomy diff
    compositional_skills/writing/freeform/foo-lang/foo-lang.yaml
    Taxonomy in /taxonomy/ is valid :)

🚀 Generate a synthetic dataset

Before following these instructions, ensure that the model you are adding skills or knowledge to is still being served.

  1. To generate a synthetic dataset based on your newly added knowledge or skill set in taxonomy repository, run the following command:

    ilab data generate

    To use a non-default model (e.g. Mixtral-8x7B-Instruct-v0.1) to generate data, run the following command:

    ilab generate --model models/mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf

    NOTE: ⏳ This can take from 15 minutes to 1+ hours to complete, depending on your computing resources.

    Example output of ilab data generate

    (venv) $ ilab generate
    INFO 2024-02-29 19:09:48,804 lab.py:250 Generating model 'ggml-merlinite-7b-lab-Q4_K_M' using 10 CPUs,
    taxonomy: '/home/username/instructlab/taxonomy' and seed 'seed_tasks.json'
    
    0%|##########| 0/100 Cannot find prompt.txt. Using default prompt.
    98%|##########| 98/100 INFO 2024-02-29 20:49:27,582 generate_data.py:428 Generation took 5978.78s

    The synthetic data set consists of three files in the newly created generated directory, named generated*.json, test*.jsonl, and train*.jsonl.

Note

If you want to pick up from where a failed or canceled ilab data generate run left off, copy the generated*.json file into a file named regen.json. regen.json will be picked up at the start of generation when available. You should remove it when the process is completed.
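
For example, reusing the file name shown in the next step (a sketch; the destination directory is an assumption, so adjust it to wherever your generate run expects regen.json):

cp 'generated/generated_ggml-merlinite-7b-lab-0226-Q4_K_M_2024-02-29T19 09 48.json' generated/regen.json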

  1. Verify the files have been created by running the ls generated command.

    (venv) $ ls generated/
    'generated_ggml-merlinite-7b-lab-0226-Q4_K_M_2024-02-29T19 09 48.json'   'train_ggml-merlinite-7b-lab-0226-Q4_K_M_2024-02-29T19 09 48.jsonl'
    'test_ggml-merlinite-7b-lab-0226-Q4_K_M_2024-02-29T19 09 48.jsonl'

    Optional: It is also possible to run the generate step against a different model via an OpenAI-compatible API. For example, the one spawned by ilab serve or any remote or locally hosted LLM (e.g. via ollama, LM Studio, etc.). Run the following command:

    ilab data generate --endpoint-url http://localhost:8000/v1

👩‍🏫 Training the model

There are many options for training the model with your synthetic data-enhanced dataset.

Note: Every ilab command needs to run from within your Python virtual environment.

Train the model locally on Linux

ilab model train

NOTE: ⏳ This step can potentially take several hours to complete depending on your computing resources. Please stop ilab model chat and ilab model serve first to free resources.

ilab model train outputs a brand-new model, ggml-model-f16.gguf, in the models directory; this model can then be served.

 (venv) $ ls models
 ggml-merlinite-7b-lab-Q4_K_M.gguf  ggml-model-f16.gguf

Train the model locally on an M-series Mac

Training the model locally on your M-series Mac is as simple as running:

ilab model train

Note: ⏳ This process will take a little while to complete (time can vary based on hardware and the output of ilab data generate, but it is on the order of 5 to 15 minutes).

ilab model train outputs a brand-new adapters.npz file (in NumPy compressed array format), saved in the <model_name>-mlx-q directory. For example:

(venv) $ ls instructlab-merlinite-7b-lab-mlx-q
adapters-010.npz        adapters-050.npz        adapters-090.npz        config.json             tokenizer.model
adapters-020.npz        adapters-060.npz        adapters-100.npz        model.safetensors       tokenizer_config.json
adapters-030.npz        adapters-070.npz        adapters.npz            special_tokens_map.json
adapters-040.npz        adapters-080.npz        added_tokens.json       tokenizer.json

Train the model locally with GPU acceleration

Training has experimental support for GPU acceleration with Nvidia CUDA or AMD ROCm. Please see the GPU acceleration documentation for more details. At present, hardware acceleration requires a data center GPU or high-end consumer GPU with at least 18 GB free memory.

ilab model train --device=cuda

Train the model in the cloud

Follow the instructions in Training.

⏳ Approximate amount of time taken on each platform:

  • Google Colab: 5-10 minutes with a T4 GPU.
  • Kaggle: ~30 minutes with a P100 GPU.

After that's done, you can play with your model directly in the Google Colab or Kaggle notebook. Models trained in the cloud are saved in the cloud; they can also be downloaded and served locally.

📜 Test the newly trained model

  • Run the following command to test the model:

    ilab test

    NOTE: 🍎 This step is only implemented for macOS with M-series chips (for now)

    The output from the command will consist of a series of outputs from the model before and after training.

🍴 Serve the newly trained model

  1. Stop the server you have running by pressing Ctrl+C in the terminal running the server.

    IMPORTANT:

    • 🍎 This step is only implemented for macOS with M-series chips (for now).

    • Before serving the newly trained model you must convert it to work with the ilab CLI. The ilab model convert command converts the new model into quantized GGUF format, which is required for the model to be hosted by the ilab model serve command.

  2. Convert the newly trained model by running the following command:

    ilab model convert
  3. Serve the newly trained model locally via ilab model serve command with the --model-path argument to specify your new model:

    ilab model serve --model-path <new model path>

    Which model should you select to serve? After running the ilab model convert command, some files and a directory are generated. The model you will want to serve ends with an extension of .gguf and exists in a directory with the suffix trained. For example: instructlab-merlinite-7b-lab-trained/instructlab-merlinite-7b-lab-Q4_K_M.gguf.
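
    Continuing that example, the serve command would be:

    ilab model serve --model-path instructlab-merlinite-7b-lab-trained/instructlab-merlinite-7b-lab-Q4_K_M.gguf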

📣 Chat with the new model (not optional this time)

  • Try the fine-tuned model out live using the chat interface, and see whether the results are better than the untrained version of the model, by running the following command:

    ilab model chat -m <New model name>

    If you are interested in optimizing the quality of the model's responses, please see TROUBLESHOOTING.md

🚀 Upgrade InstructLab to the latest version

  • To upgrade InstructLab to the latest version, use the following command:

    pip install instructlab --upgrade

🎁 Submit your new knowledge or skills

Of course, the final step, if you have improved the model, is to open a pull request in the taxonomy repository that includes the files (e.g. qna.yaml) with your improved data.

📬 Contributing

Check out our contributing guide to learn how to contribute.

instructlab-bot's People

Contributors

cdoern, danmcp, dave-tucker, dependabot[bot], gregory-pereira, jjasghar, lhawthorn, mergify[bot], mingxzhao, nathan-weinberg, nerdalert, oindrillac, russellb, vishnoianil


instructlab-bot's Issues

Consider moving from Redis Lists to Streams

We currently use a Redis List for jobs and results. See the arch diagram.

It seems that Redis Streams may be a better fit for the use case.

For the Job Queue, we would create a consumer group for our workers so that each job goes to a single worker in the group.

This implementation would be a bit more complicated, but it comes with some benefits:

  • With Redis Streams, a message in the Job Queue is marked as pending after being delivered to a worker. It will remain in this state until the worker acks it (XACK). This allows for cleaner recovery in the case that the worker dies for some reason in the middle of job execution: it can pick up where it left off with whatever is still marked as pending.
  • The same benefit applies to the Results Queue. Once the bot receives a result, it can remain in a pending state until we know it has been successfully posted back to GitHub as a comment.

This would change the semantics a bit from "at most once" delivery to "at least once." Technically we could execute a job or post a result more than once if the failure occurred at just the right time. This seems better than having a window to lose something completely.

Another benefit is that any monitoring we do or dashboard we provide has an easy way to see which job(s) are currently being executed by a worker. This would be represented by pending messages associated with a consumer.

A final detail, these streams will need to be explicitly trimmed occasionally so they don't grow unbounded.

An implementation of this should fix #49.
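
A minimal redis-cli sketch of the flow described above (the stream, group, consumer, and field names are illustrative, not the bot's actual key schema):

# producer (bot) enqueues a job on the stream
redis-cli XADD jobs '*' pr_number 123 action generate

# one-time setup: a consumer group shared by all workers
redis-cli XGROUP CREATE jobs workers 0 MKSTREAM

# a worker claims the next job; it stays pending until acknowledged
redis-cli XREADGROUP GROUP workers worker-1 COUNT 1 BLOCK 0 STREAMS jobs '>'

# after results are posted back to GitHub, acknowledge the message id returned above
redis-cli XACK jobs workers 1712260459938-0

# trim periodically so the stream doesn't grow unbounded
redis-cli XTRIM jobs MAXLEN '~' 10000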

Clean up keys in Redis after a job is completed

Either:

a) Add a jobs:N:state key and have something that periodically checks jobs and does keys jobs:N:* | delete
b) In the Typescript worker, do keys jobs:N:* | delete after we've posted the comment
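
A sketch of option (b) from the worker's shell, using the key pattern above (SCAN-based to avoid blocking Redis the way KEYS would; JOB_ID is a placeholder):

redis-cli --scan --pattern "jobs:${JOB_ID}:*" | xargs -r redis-cli del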

Add worker image with the ilab CLI but not CUDA

We have two worker images right now:

  • A full image with the worker, ilab CLI, and CUDA support
  • A test image with the worker binary only, no ilab CLI, for running in a test mode that doesn't do any real work

I have a need for a third image that includes the worker and the ilab CLI, but not any GPU support. This would be used for running a worker that doesn't use a local model at all. It would be for a worker that only talks to remote models after #116.

Add access controls to bot commands

We need some access controls on bot commands so backend resources can not be abused. A relatively simple solution would be to block on a PR label that is set by triagers to indicate that the PR has been sanity checked and that it's safe to be consumed by automation. The label should be reset when new changes are pushed to the PR, as well.

Create a bot status page

Now that we have a queue for jobs, it will be very helpful to be able to see the status of that queue.

  • Simple API backend that returns status data that it pulls from Redis
  • A web UI that presents status using this API

Run precheck and generate on PRs automatically

At some point in the future, once we are feeling very confident in the tooling and infrastructure, we can start running these jobs against PRs automatically without requiring a comment to kick it off. A comment could still be supported for triggering re-runs, but the initial runs could just always happen.

worker: Add option to use remote endpoint URL

By default, the worker uses a local model. IBM has a hosted version of the full model we can use. That will simplify things for us and give better results. We should retain the local model at least for our own testing, but prod will probably be configured to use this hosted model.

Update README

The README has gone a bit stale ...

  • Separate overview of the eventual vision from describing what is implemented and working now
  • Update docs on how to run a dev environment
  • Update infrastructure comments: drop PoC infra info, describe current prod architecture
  • drop anything that seems better tracked as github issues

Add bot workers to status UI

This depends on #40 and #41.

Once we have a status UI and data about workers in Redis, the API + UI should be updated to make use of the worker details now available.

Installer script and/or ansible for workers

It might be helpful to have an install.sh that we can use to quickly bring workers online.
Otherwise, some ansible that would:

  • Download the latest binary from GitHub releases
  • Download the systemd unit file #36 or plist #35 depending on the OS
  • Use the correct incantations to register a systemd or launchd service

Keep stats on job performance per worker

As a follow-up to #41, it would be interesting to keep some basic stats on job performance for a given worker. For example:

  • Average time to run generate over the last N days
  • Number of tasks processed

This could be helpful when looking at bot status and determining estimates for how long it'll take to process a queue backlog.

Have workers register metadata about themselves in Redis

It would be helpful to have a central way to keep track of which workers are online. This info can go in Redis.

  • hostname
  • OS
  • CPU
  • RAM
  • GPU(s)
  • Contact
  • Last updated timestamp

Most of this can be automatically pulled from the system. We may need to get a contact email address as an argument to the worker.

We'll need a way to know that the info represents a worker that's still online. Redis key expiration with a requirement that the worker periodically update its key may be enough.
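
One way this could look with plain redis-cli (the key name, fields, TTL, and contact address are illustrative; the worker would refresh the key periodically to show it is still online):

redis-cli HSET "workers:$(hostname)" \
  os "$(uname -s)" \
  cpu "$(nproc) cores" \
  ram "$(free -h | awk '/Mem:/ {print $2}')" \
  contact "worker-admin@example.com" \
  last_updated "$(date -u +%FT%TZ)"
redis-cli EXPIRE "workers:$(hostname)" 300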

Add some developer environment automation

Most of our automation is focused on deployment. It would be nice to have quick Makefile targets for spinning up / down / restarting a local dev environment that has the bot + redis + worker + lab serve. For bot development purposes, GPU access isn't needed. We could even configure the worker to generate only 1 instruction via --num-instructions instead of its normal default of 10.

Tasks are lost if a worker fails for any reason

I'm working on my worker configuration and noticed that once it pulls a generate task, if it fails, it will be a silent failure from the user perspective. We should catch this and queue up a "failed" result for the bot to post back to the PR.

Move everything to podman

We should move everything to podman so that docker desktop is not required in a desktop-based development environment.

We might as well just use podman everywhere for consistency.

  • Update the dev environment - #107
  • GitHub workflows
  • deploy/ansible/

Change results publishing to have a much longer expiry time

Right now results links expire in 7 days. That seems quick, but that's the upper limit on S3 pre-signed URLs. We can probably just make the S3 bucket much more open soon once the project is opened up.

We should still have some sort of expiry on results, since they don't need to stick around for forever.

Clean up generated contents in AWS after 5 days

This should be simple enough to do with [Expiry](https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-expire-general-considerations.html).
We can set the Expiry when we upload objects and it will be taken care of for us.
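
For example, a bucket-level lifecycle rule via the AWS CLI might look like this (the bucket name and key prefix are illustrative, not the bot's actual configuration):

aws s3api put-bucket-lifecycle-configuration \
  --bucket instructlab-bot-results \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-generated-output",
      "Filter": {"Prefix": "generate-"},
      "Status": "Enabled",
      "Expiration": {"Days": 5}
    }]
  }'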

Update gobot to embed gosmee

The go version of the bot added in #91 requires running a webhook proxy outside of the bot. Probot had smee included and handled automatically. We can embed gosmee and drop the requirement to run it separately.

Use short commit ref for generate directory

From testing:

generate-pr-2-d90da40bb88ea2a91c4ee7f295c3d263c187c27d

We can use a short ref, equivalent to:

OUTPUT_DIR="generate-pr-${PR_ID}-$(git rev-parse --short HEAD)"

Create new command for testing existing model behavior

Create a new command for the bot that takes all of the added questions in the given taxonomy PR and captures existing answers from the model. The results should be published similar to the generate output we already do.

I'm not sure if lab provides a shortcut to this. Adding one would make this a little easier on us. Otherwise, I assume we can feed input into lab chat to do it a bit more manually.

We could also talk to the OAI endpoint directly, but I think it's better to go through lab to make sure we're using the same default prompt.

Support user request for num-instructions for generate

We default to --num-instructions 10 for lab generate for our own testing purposes, but that also seems like a good default to keep. A small set is good for a first check. Allowing users to say @instruct-lab-bot generate full would be great, as well, to get the full lab default of 100.

Note that this has some impact on #39, as we may have reason to do generate more than once on the same commit from a PR. (10 and 100).

Add "instruct-lab-bot approve"

  • Approves generated seed data.
  • Applies the label training-data-approved
  • Prompts the user to run @instruct-lab-bot train

Investigate go-git concurrency issues

Consistently seeing 3/3 on retries. Either bump them up or dig into the issue. Not a big deal to just add more retries imo since it's just a local thing but worth a little time investigating.

Example:

2024-04-04T18:54:19.938Z	INFO	cmd/generate.go:246	Processing job	{"job": "5"}
2024-04-04T18:54:20.037Z	INFO	cmd/generate.go:310	Retrying fetching updates, attempt 2/3	{"job": "5", "pr_number": "4", "work_dir": "/data", "origin": "origin"}
2024-04-04T18:54:22.097Z	INFO	cmd/generate.go:310	Retrying fetching updates, attempt 3/3	{"job": "5", "pr_number": "4", "work_dir": "/data", "origin": "origin"}

Add command for checking new model behavior

Very similar to the workflow of #85, except this command should use an API endpoint for a new model that included the PR in its training process.

@instruct-lab-bot postcheck

I think it's OK if our first implementation of this one ONLY supports a remote endpoint URL.

bot should trim whitespace around PR comments

I commented @instruct-lab-bot generate with some leading whitespace by accident, and it didn't trigger the bot. We should just trim the outer whitespace before evaluating the comment. I assume that's what happened, though I haven't looked closely to verify.

Ansible playbook to set up machine

Should set up our machine with:

  • NodeJS
  • Python3
  • Docker

Should deploy our app:

Should create an instruct-lab environment at a well known path with checkouts of:

  • instruct-lab/cli
  • instruct-lab/taxonomy

Should install:

  • Merlinite-7B : into instruct-lab/models
  • Mistral-7B-Instruct: into instruct-lab/models

Don't call "lab generate" multiple times on the same SHA

If we've already got results uploaded for the SHA then we should skip calling lab generate and instead:

  • Generate new presigned URLs for the .jsonl files
  • Generate a fresh index.html
  • Upload the new index.html
  • Respond with a comment

This allows for:

  • Results to be viewed (again) after the URL has expired
  • Saving resources

Fix Docker Image Builds in CI

The bot image can just use the node:20-slim image given it only needs NodeJS now.

The worker image should have a stage that creates the Go binary...
There is then a final stage that is used for both the lab serve and the worker images - which has all the CUDA stuff in too.

Perhaps we'll get lucky and the resulting images might be reasonably sized again 😆

Add systemd unit file

Add a systemd unit file to allow the worker binary to be installed as a systemd service so we can run this as a daemon in Linux
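
A rough sketch of what that could look like (the binary path, subcommand, and user are assumptions for illustration, not the actual packaging):

sudo tee /etc/systemd/system/instructlab-bot-worker.service >/dev/null <<'EOF'
[Unit]
Description=InstructLab Bot worker
After=network-online.target
Wants=network-online.target

[Service]
# ExecStart path and flags are placeholders; point at the real worker binary
ExecStart=/usr/local/bin/worker generate
Restart=on-failure
User=ilab

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now instructlab-bot-worker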

Deploy gobot to prod

#91 introduced a go version of the bot. We should deploy it to prod. We'll need to update the ansible playbooks first as well as build an image.

  • create gobot image
  • update deploy/ansible/ -- #112
  • deploy to prod
