The dataset used in this project can be found on Kaggle: Llama-2 Dataset.
This project demonstrates the quantization of the Llama 2 model using the llama.cpp library to optimize its deployment on consumer-grade hardware. The goal is to reduce the model size and improve inference speed without significant loss of performance, making advanced AI models more accessible for a wider range of applications.
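As a rough back-of-the-envelope illustration of the savings (exact sizes depend on the model variant and quantization scheme): a 7B-parameter model stored as 16-bit floats occupies about 7 × 10⁹ × 2 bytes ≈ 14 GB, while a 4-bit quantization of the same weights needs roughly 7 × 10⁹ × 0.5 bytes ≈ 3.5 GB, plus a small overhead for per-block scale factors.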
- Llama 2 Model: Meta's family of open-weight large language models, used here as the model to be quantized.
- llama.cpp Library: A C/C++ inference library whose tooling handles model conversion and quantization, enabling large models to run on modest hardware.
- Quantization Process: The steps that convert the Llama 2 weights into the GGUF format at reduced precision, optimizing the model for efficient use on consumer devices.
- Installation of llama.cpp and its dependencies to prepare the environment for quantization.
- Conversion of the Llama 2 model to the quantized GGUF format, which is better suited to devices with limited computational resources.
- A walkthrough of the quantization process, highlighting the steps that reduce the precision of the model's parameters (see the command-line sketch after this list).
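A minimal command-line sketch of these steps, assuming a local copy of the Llama 2 weights in `models/llama-2-7b/` (the path is illustrative). Script and binary names have changed across llama.cpp releases (e.g. `convert.py` vs. `convert_hf_to_gguf.py`, `quantize` vs. `llama-quantize`), so adapt to the version you build:

```bash
# Clone and build llama.cpp (older releases ship a Makefile; newer ones use CMake)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Python dependencies for the conversion script
pip install -r requirements.txt

# Convert the Llama 2 checkpoint to GGUF at 16-bit precision
python3 convert.py models/llama-2-7b/ --outtype f16

# Quantize the 16-bit GGUF down to 4 bits (q4_0)
./quantize models/llama-2-7b/ggml-model-f16.gguf models/llama-2-7b/ggml-model-q4_0.gguf q4_0
```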
This project is geared towards AI researchers, ML engineers, and developers looking to deploy large language models like Llama 2 on a broad array of hardware, including those with constrained computational capabilities. It offers insights into making state-of-the-art AI technologies more accessible and efficient for real-world applications.
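Once quantized, the model can be sanity-checked with llama.cpp's bundled CLI. The binary is named `main` in older builds and `llama-cli` in newer ones; the model path below matches the sketch above:

```bash
# Generate 64 tokens from the 4-bit model on the local machine
./main -m models/llama-2-7b/ggml-model-q4_0.gguf \
  -p "Explain quantization in one sentence." -n 64
```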
To get started with this project:
- Clone this repository to your local machine.
- Ensure you have Jupyter Notebook installed and running.
- Install the required dependencies.
- Download the "Llama-2 Dataset" from Kaggle and place it in the designated directory.
- Open and run the Jupyter Notebook "Llama2-Quantization.ipynb" to perform the quantization and evaluate the quantized model (a setup sketch follows this list).
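A minimal setup sketch; the repository URL and the presence of a `requirements.txt` at the repo root are assumptions, so adjust them to the actual repository layout:

```bash
# Clone the repository (URL is illustrative)
git clone https://github.com/<your-username>/Llama2-Quantization.git
cd Llama2-Quantization

# Install the project's Python dependencies (requirements.txt is assumed)
pip install -r requirements.txt

# Launch Jupyter and open the quantization notebook
jupyter notebook Llama2-Quantization.ipynb
```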
We welcome contributions to enhance the functionality and efficiency of this project. Feel free to fork, modify, and make pull requests to this repository. To contribute:
- Fork the Project.
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`).
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`).
- Push to the Branch (`git push origin feature/AmazingFeature`).
- Open a Pull Request against the `main` branch.
This project is licensed under the MIT License - see the LICENSE file for details.
Author: Akhil Chhibber