lora

Train Large Language Models (LLMs) using Huggingface, PEFT and LoRA

If you use Lambda Labs, I have included a script that sets up most of what is needed.
It is called setup_lambdalabs.py.

To use this script you will need to create a .env file
containing these three entries:

LL_SECRET=my_lambda_labs_secret
ssh_key_filename=my_path_to_my_private_rsa_key
training_data_url=https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt
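
For reference, here is a minimal sketch of how these entries can be read in Python, assuming the python-dotenv package (whether setup_lambdalabs.py uses exactly this package is an assumption):

import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current directory

ll_secret = os.environ["LL_SECRET"]           # Lambda Labs API secret
ssh_key = os.environ["ssh_key_filename"]      # path to the private RSA key
data_url = os.environ["training_data_url"]    # URL of the training text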

The LoRA training script is called train_text.py.

Please look at the header of the train_text.py script to adjust settings like the following (a rough sketch of how they fit together is shown below the list):

model_name = "eachadea/vicuna-13b-1.1"
load_in_8bit=True
lora_file_path = "my_lora"
text_filename='input.txt'
output_dir='.'
cutoff_len = 512
overlap_len = 128
newline_favor_len = 128
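
For orientation, a rough sketch of how such settings typically feed into a Huggingface/PEFT LoRA setup. The LoRA hyperparameters and target_modules below are illustrative assumptions, not necessarily what train_text.py uses:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

model_name = "eachadea/vicuna-13b-1.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, load_in_8bit=True, device_map="auto"
)
model = prepare_model_for_int8_training(model)  # freeze base weights, cast layer norms

lora_config = LoraConfig(
    r=8,                                  # assumed rank, check train_text.py
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# cutoff_len / overlap_len suggest the training text is tokenized and split
# into overlapping windows of cutoff_len tokens, stepping by
# cutoff_len - overlap_len; newline_favor_len nudges the cut points
# towards newline boundaries.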

Very Important: There is currently a problem saving the LoRA model. User angelovAlex found a great solution here: #1

The setup_lambdalabs.py script applies this patch automatically.

If you don't use Lambda Labs, you will have to apply this patch manually.

To use the LoRA model, take a look at inference.py.

It also uses hard-coded values, so if you change model names you will have to adapt this script as well.
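
For reference, a minimal sketch of how a saved LoRA adapter is typically loaded for inference with PEFT (the exact steps in inference.py may differ):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "eachadea/vicuna-13b-1.1", load_in_8bit=True, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("eachadea/vicuna-13b-1.1")

# attach the trained adapter saved under lora_file_path ("my_lora")
model = PeftModel.from_pretrained(base, "my_lora")

prompt = "To be, or not to be:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))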

lora's Issues

save_pretrained issue

Hey Ray.
Did you find a solution for the save_pretrained issue? I am experiencing the same problem. According to the stack trace, it crashes simply on calling model.state_dict(), because bitsandbytes tries to allocate additional memory in undo_layout.

After some experimenting, I came up with a workaround and managed to save the adapter successfully. I don't know if it has any side effects, so I would recommend using it only for saving the result of LoRA training.

First of all, remove the original peft and install this version: pip install git+https://github.com/huggingface/peft.git@70af02a2bca5a63921790036b2c9430edf4037e2 (even with lots of RAM and no CUDA errors, the adapter file was always 433 bytes on the latest peft, but this version seems to work fine).

Depending on your installation (look closely at the stack trace of the CUDA error), you need to find where the bitsandbytes library is stored and change bitsandbytes/nn/modules.py.
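
A quick way to locate that file (a minimal sketch; your environment layout may differ):

import bitsandbytes, os
# print the path of the modules.py file that needs the tweak
print(os.path.join(os.path.dirname(bitsandbytes.__file__), "nn", "modules.py"))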

I made a tweak that moves the tensors to the CPU and removed the cloning and the exception check. I would recommend renaming the original function, putting this one next to it, running the training, and restoring the original function after the training (see the monkey-patch sketch after the function below).

def _save_to_state_dict(self, destination, prefix, keep_vars):
    if not self.state.has_fp16_weights and self.state.CB is None and self.state.CxB is not None:
        # reorder weight layout back from ampere/turing to row
        reorder_layout = True
        # weight_clone = self.weight.data.clone()  # removed: the clone is an extra GPU allocation
    else:
        reorder_layout = False

    # try:  # removed together with the finally block below
    if reorder_layout:
        # move the tensors to the CPU before undoing the layout, so the
        # allocation happens in host memory instead of on the GPU
        self.weight.data = undo_layout(self.state.CxB.cpu(), self.state.tile_indices.cpu())

    super()._save_to_state_dict(destination, prefix, keep_vars)

    # we only need to save SCB as extra data, because CB for quantized weights is already stored in weight.data
    weight_name = "SCB"

    # case 1: .cuda was called, SCB is in self.weight
    param_from_weight = getattr(self.weight, weight_name)
    # case 2: self.init_8bit_state was called, SCB is in self.state
    param_from_state = getattr(self.state, weight_name)

    key_name = prefix + f"{weight_name}"
    if param_from_weight is not None:
        destination[key_name] = param_from_weight if keep_vars else param_from_weight.detach()
    elif not self.state.has_fp16_weights and param_from_state is not None:
        destination[key_name] = param_from_state if keep_vars else param_from_state.detach()
    # finally:  # removed: the original restored the cloned weight here
    #     if reorder_layout:
    #         self.weight.data = weight_clone
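
For reference, a minimal sketch of one way to swap the tweaked function in and out around saving, instead of editing the installed modules.py by hand (the name patched_save_to_state_dict for the function above is an assumption):

import bitsandbytes as bnb

# keep a reference to the original method so it can be restored
original_save = bnb.nn.Linear8bitLt._save_to_state_dict
bnb.nn.Linear8bitLt._save_to_state_dict = patched_save_to_state_dict

model.save_pretrained("my_lora")  # save the LoRA adapter with the tweak active

# restore the original behaviour afterwards
bnb.nn.Linear8bitLt._save_to_state_dict = original_save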
