This web application uses Llama 2, an open-source Large Language Model (LLM) developed by Meta, to generate blog content across different fields such as data science, research, and general-interest topics. In this project, a compressed version of the Llama 2 model is employed to accommodate the lower GPU specifications of my laptop. The specific pretrained model file used is llama-2-7b.ggmlv3.q8_0.bin.
This model is a quantized version of Meta's Llama 2 7B generative text model, converted by TheBloke. It uses the GGML format, a compressed format for CPU and GPU inference with llama.cpp and other compatible libraries and UIs. With 8-bit quantization, the model retains nearly the same accuracy as the original float16 weights while being smaller on disk and faster to load; however, it uses more resources and runs inference more slowly than lower-bit quantized variants. The model is suitable for various text-generation tasks, such as writing stories, poems, essays, and code, using prompts or templates.
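As a concrete sketch of how a quantized GGML file like this can be used, the snippet below builds a simple instruction prompt and runs CPU inference through the `ctransformers` package. The package choice, file path, and generation parameters are assumptions for illustration, not necessarily what this repo's code does.

```python
# Sketch: prompt construction plus GGML inference via ctransformers.
# Assumptions: `pip install ctransformers` and the weights saved under Model/.

def build_prompt(topic: str, style: str, n_words: int) -> str:
    """Fill a simple instruction template for Llama 2."""
    return (
        f"Write a blog post for a {style} audience on '{topic}' "
        f"in roughly {n_words} words."
    )

def generate(topic: str, style: str = "general", n_words: int = 300) -> str:
    # Imported lazily so build_prompt works even without ctransformers installed.
    from ctransformers import AutoModelForCausalLM
    llm = AutoModelForCausalLM.from_pretrained(
        "Model/llama-2-7b.ggmlv3.q8_0.bin",  # the file named above
        model_type="llama",                  # GGML architecture hint
        max_new_tokens=256,
        temperature=0.1,
    )
    return llm(build_prompt(topic, style, n_words))

# generate("vector databases", style="data science")  # requires the model file
```

`build_prompt` is pure string formatting; calling `generate(...)` needs the downloaded weights in place.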
Llama 2 is a family of large language models (LLMs) developed and released by Meta, the company formerly known as Facebook. Llama 2 models are trained on a large corpus of public data and can generate natural language for various tasks and applications. Some of the Llama 2 models are also fine-tuned for dialogue using human feedback, making them more engaging and helpful for chatbot users.
Download the pretrained Llama 2 GGML model from here. After downloading, place the model file into a folder named "Model". Clone the project:
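The place-into-folder step can be scripted; a minimal sketch, using the model filename given above:

```shell
# create the Model folder the app expects
mkdir -p Model
# move the downloaded weights into it (skip silently if not downloaded yet)
[ -f llama-2-7b.ggmlv3.q8_0.bin ] && mv llama-2-7b.ggmlv3.q8_0.bin Model/ || true
```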
git clone https://github.com/ShubhamSupekar/Blog-Generation-WebApp.git
Deactivate the default environment:
conda deactivate
Need help with Conda installation? You can find guidance here.
Create new environment:
conda create -p venv python=3.9 -y
Activate new "venv" environment:
conda activate venv/
The "requirements.txt" file lists all the libraries needed for this project. Install them with:
pip install -r requirements.txt
Now run the web app:
streamlit run app.py
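For orientation, a Streamlit front end for this kind of app could look roughly like the sketch below. The widget labels and the `generate_blog` helper are hypothetical; the repo's actual app.py may differ.

```python
# Hypothetical sketch of a Streamlit UI for the blog generator.
# Assumption: a generate_blog(topic, style, n_words) helper wraps the model.

def render_app(generate_blog):
    # Streamlit is imported lazily so the sketch parses without it installed.
    import streamlit as st

    st.set_page_config(page_title="Generate Blogs")
    st.header("Generate Blogs")

    topic = st.text_input("Enter the blog topic")
    n_words = st.text_input("Number of words")
    style = st.selectbox(
        "Writing the blog for",
        ("Researchers", "Data Scientist", "Common People"),
    )

    if st.button("Generate") and topic:
        st.write(generate_blog(topic, style, int(n_words or 300)))
```

Streamlit re-executes the script top to bottom on every interaction, so the button click simply re-runs the app with `st.button` returning True.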