
llama's Introduction


Credits: Large parts of the code are based on the PR by Jason Phang. Thank you for your hard work!


LLaMA. Simple. Using HuggingFace.

What is this all about?

  • Do you also want a "private GPT-3" at home?
  • Are you also annoyed that people on the internet are excited about the "llama weights", yet there is no interface or guide for how to actually use them?
  • Are you also sick of people on the Internet who play around with tensors and then upload code that no one can really use?

I prepared a single repo for you with EVERYTHING you need to run LLaMA.

Here is everything you need for running (and training!) LLaMA using the Hugging Face interface 👌

TL;DR:

import llama

tokenizer = llama.LLaMATokenizer.from_pretrained('decapoda-research/llama-7b-hf')
model = llama.LLaMAForCausalLM.from_pretrained('decapoda-research/llama-7b-hf')
print(tokenizer.decode(model.generate(tokenizer('Yo mama', return_tensors = "pt")["input_ids"])[0]))

Yeah. No overengineering bullshit.

Also: no need to clone a huge custom transformers repo that you are then stuck maintaining and updating yourself.

What is LLaMA?

TL;DR: A GPT-style language model by Meta that outperforms GPT-3, released to selected researchers but leaked to the public.

LLaMA is a family of large language models trained by Meta AI; LLaMA-13B outperforms GPT-3 (175B) on most benchmarks while being more than 10 times smaller.

Paper Abstract:

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.

How can I use LLaMA?

Installation

git clone https://github.com/ypeleg/llama
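The repo is imported directly as a Python package (import llama below), so run your script from the directory that contains the cloned llama folder. You will also need the usual dependencies installed; a minimal sketch, assuming PyTorch, transformers and sentencepiece are all that is required (the repo does not pin an exact list):

pip install torch transformers sentencepiece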

Usage

1. Import the library and choose model size

import llama
MODEL = 'decapoda-research/llama-7b-hf'

We currently support the following model sizes:

  • Options for MODEL:
    • decapoda-research/llama-7b-hf
    • decapoda-research/llama-13b-hf
    • decapoda-research/llama-30b-hf
    • decapoda-research/llama-65b-hf

Note: The model size is the number of parameters in the model. The larger the model, the more accurate the model is, but the slower, heavier and more expensive it is to run.
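As a rough rule of thumb (a back-of-the-envelope sketch, not an official requirement), the weights alone take about 2 bytes per parameter in float16:

# Rough float16 weight-memory estimate per model size (weights only,
# ignoring activations and the KV cache), assuming 2 bytes per parameter.
for name, params in [('7B', 7e9), ('13B', 13e9), ('30B', 30e9), ('65B', 65e9)]:
    print(f'{name}: ~{params * 2 / 1e9:.0f} GB of weights in float16')
# 7B: ~14 GB, 13B: ~26 GB, 30B: ~60 GB, 65B: ~130 GB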

2. Load the tokenizer and model

tokenizer = llama.LLaMATokenizer.from_pretrained(MODEL)
model = llama.LLaMAForCausalLM.from_pretrained(MODEL)
model.to('cuda')
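If the full-precision weights do not fit on your GPU, it usually helps to load them directly in half precision. A hedged variant using standard Hugging Face from_pretrained arguments (whether the model then fits still depends on your card):

import torch

# Load the weights in float16 to roughly halve the memory footprint.
model = llama.LLaMAForCausalLM.from_pretrained(
    MODEL,
    torch_dtype = torch.float16,
    low_cpu_mem_usage = True,
)
model.to('cuda')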

3. Encode the prompt

For example, we will use the prompt: "Yo mama"

We will use the tokenizer to encode the prompt into a tensor of integers.

PROMPT = 'Yo mama'
encoded = tokenizer(PROMPT, return_tensors = "pt")
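For reference, the tokenizer returns a dictionary of tensors; encoded["input_ids"] is a 1 × sequence-length tensor of integer token IDs (the exact IDs depend on the tokenizer vocabulary):

print(encoded["input_ids"].shape)  # torch.Size([1, <number of prompt tokens>])
print(encoded["input_ids"])        # one row of integer token IDs for the prompt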

4. Generate the output

We will use the model to generate the output.

generated = model.generate(encoded["input_ids"].cuda())[0]
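By default, generate decodes greedily and stops after a short maximum length (unless the model's generation config overrides it). The standard Hugging Face generation arguments can be passed for longer or sampled output; the values below are illustrative only:

generated = model.generate(
    encoded["input_ids"].cuda(),
    max_new_tokens = 50,   # how many tokens to generate beyond the prompt
    do_sample = True,      # sample instead of greedy decoding
    temperature = 0.8,
    top_p = 0.95,
)[0]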

5. Decode the output

decoded = tokenizer.decode(generated)

6. Print the output

print(decoded)

Example output: "Yo mama is so fat, she has to buy two seats on the plane."

llama's Issues

GPU hardware for inference

Can you please specify/estimate the hardware required to run each model?
It will help me choose the correct EC2 instance.
Many thanks!

data format

You didn't specify the contents of the CSV; please provide it or explain how we can create our own. Thanks!

llama 7b 4bit

Is it possible to run this code with the llama 7b 4-bit weights?

torch using all GPU RAM

How much memory is necessary to use the 7B model? I have a 3060 card with 12GB of VRAM, and when I execute inference_example.py I get the following error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 0; 12.00 GiB total capacity; 11.29 GiB already allocated; 0 bytes free; 11.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

OOM with 80GB-A100

Training leads to OOM even with an 80GB GPU card. Could you please give some advice?

***** Running training *****
  Num examples = 1799
  Num Epochs = 1
  Instantaneous batch size per device = 1
  Total train batch size (w. parallel, distributed & accumulation) = 1
  Gradient Accumulation steps = 1
  Total optimization steps = 1799
  Number of trainable parameters = 6738423808
  0%|          | 0/1799 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "training_example.py", line 45, in <module>
    Trainer(model = model,
  File "/data/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/trainer.py", line 1543, in train
    return inner_training_loop(
  File "/data/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/trainer.py", line 1858, in _inner_training_loop
    self.optimizer.step()
  File "/data/anaconda3/envs/llama/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/data/anaconda3/envs/llama/lib/python3.8/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "/data/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/optimization.py", line 362, in step
    denom = exp_avg_sq.sqrt().add_(group["eps"])
RuntimeError: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 0; 79.18 GiB total capacity; 76.21 GiB already allocated; 162.38 MiB free; 77.88 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
  0%|                                                                                                                                                               | 0/1799 [00:01<?, ?it/s]
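For context, the log reports ~6.7B trainable parameters trained with the stock fp32 AdamW: roughly 27 GB of weights, 27 GB of gradients, and 54 GB for the two Adam moment buffers is already over 100 GB before activations, so an 80 GB card cannot hold it. A hedged sketch of common mitigations (illustrative only, not part of the original training_example.py; availability of these options depends on your transformers version):

from transformers import TrainingArguments

args = TrainingArguments(
    output_dir = 'out',
    per_device_train_batch_size = 1,
    gradient_checkpointing = True,   # trade extra compute for activation memory
    fp16 = True,                     # half-precision forward/backward pass
    optim = 'adafactor',             # optimizer with much smaller state than AdamW
)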

Add Quantization Code

Are you able to add quantization code so that the model can be run on a smaller GPU?
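For reference, recent transformers releases can load checkpoints in 8-bit via bitsandbytes, which cuts the weight memory to roughly 1 byte per parameter. A hedged sketch (requires pip install bitsandbytes accelerate; it is an assumption that the custom LLaMAForCausalLM class supports it):

model = llama.LLaMAForCausalLM.from_pretrained(
    MODEL,
    load_in_8bit = True,   # quantize weights to int8 on load (bitsandbytes)
    device_map = 'auto',   # let accelerate place layers on the available devices
)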

How to organize customized text dataset ?

I would like to express my gratitude for your hard work on the project.
I came across a training script where you used the following code:

DATA_FILE_PATH = 'elon_musk_tweets.csv'
texts = pd.read_csv(DATA_FILE_PATH)['text']

However, I was unable to find the 'elon_musk_tweets.csv' file, and as a result, I am facing difficulty in organizing my own text dataset. I would appreciate any guidance on how to structure my own dataset.

Thank you for your time and assistance.
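For reference, the training snippet quoted above only reads the 'text' column of the CSV, so any CSV with a 'text' column should work. A minimal sketch (my_dataset.csv is a hypothetical file name):

import pandas as pd

# Stand-in for the missing elon_musk_tweets.csv: one training example per row
# in a 'text' column, which is all the snippet above reads.
pd.DataFrame({'text': [
    'First training example goes here.',
    'Second training example goes here.',
]}).to_csv('my_dataset.csv', index = False)

texts = pd.read_csv('my_dataset.csv')['text']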

OutOfMemoryError: CUDA out of memory

Hi all,
I am running on an NVIDIA GTX 1080 Ti (11 GB video memory) on Windows 11.
I get the following error when running inference_example on the llama-7b-hf model:

OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 11.00GiB total capacity; 10.29 GiB already allocated; 0 bytes free; 10.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

May I know how much memory is required to run this model locally?
Also, is there any workaround?

Thanks

KeyError: 'decoder.layers.35.attention_norm.weight' when running inference

The code used to work properly, but now the same call raises this KeyError.

Traceback (most recent call last):
  (user script), line 11, in <module>
    model = llama.LLaMAForCausalLM.from_pretrained(MODEL, low_cpu_mem_usage=True, device_map...
  File "/opt/conda/envs/env/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2326, in from_pretrained
    model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._loa...
  File "/opt/conda/envs/env/lib/python3.9/site-packages/transformers/modeling_utils.py", line 2448, in _load_pretrained_model
    param = model_state_dict[key]
KeyError: 'decoder.layers.35.attention_norm.weight'
