README
This project fine-tunes the Llama-2 model using the Low-Rank Adaptation (LoRA) technique on the "Jeopardy_csv.csv" dataset. The goal is to improve the model's performance on question-answering tasks by leveraging the trivia-style format of the Jeopardy data.
Dependencies: see requirement.txt
The dataset used is "Jeopardy_csv.csv," which contains Jeopardy questions and answers. Ensure you have downloaded and placed this dataset in the appropriate directory before running the notebook.
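As a rough illustration, loading and formatting the data might look like the sketch below. The column names ("Question", "Answer") match the public Jeopardy CSV but are an assumption here, as is the prompt format; the notebook's actual preprocessing may differ.

    # Hypothetical data-loading sketch; column names are assumed from the
    # public Jeopardy CSV and may differ in Jeopardy_csv.csv.
    import pandas as pd

    df = pd.read_csv("Jeopardy_csv.csv")
    df.columns = df.columns.str.strip()  # the public Jeopardy CSV has whitespace-padded headers

    def to_prompt(row):
        # Llama-2 chat format: the question inside [INST] tags, the answer after.
        return f"[INST] {row['Question']} [/INST] {row['Answer']}"

    prompts = df.apply(to_prompt, axis=1).tolist()
    print(prompts[0])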
Main Notebook: Bonus_LLAMA_2_FineTuning_LoRA.ipynb
Inference Notebook: Bonus_Make_Inference_from_saved_ft_model.ipynb
The main notebook covers the following:
Data Loading and Preprocessing: how to load and preprocess the Jeopardy dataset.
Model Setup: setting up the Llama-2 model and the LoRA parameters.
Fine-Tuning Process: detailed steps for fine-tuning the model with the dataset.
Inference: methods to evaluate the fine-tuned model's performance.
Base model: "NousResearch/Llama-2-7b-chat-hf"
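A minimal sketch of the model and LoRA setup with Hugging Face transformers and peft follows; the LoRA hyperparameters (r, alpha, dropout, target modules) are illustrative choices, not values taken from the notebook.

    # Minimal model/LoRA setup sketch with transformers + peft.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "NousResearch/Llama-2-7b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token  # Llama-2's tokenizer has no pad token by default
    model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")

    lora_config = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,     # illustrative hyperparameters
        target_modules=["q_proj", "v_proj"],        # attention projections, a common choice for Llama-2
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the low-rank adapter weights are trainable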
Training details are in the main notebook: Bonus_LLAMA_2_FineTuning_LoRA.ipynb
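For orientation, a hedged sketch of the training step using the plain transformers Trainer (the notebook may use a different trainer, e.g. trl's SFTTrainer); `model`, `tokenizer`, and `prompts` come from the sketches above, and all hyperparameters and paths are placeholders.

    # Hedged training sketch; hyperparameters and output path are assumptions.
    from datasets import Dataset
    from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling

    dataset = Dataset.from_dict({"text": prompts}).map(
        # Append EOS so the model learns to stop after the answer.
        lambda ex: tokenizer(ex["text"] + tokenizer.eos_token, truncation=True, max_length=512),
        remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        train_dataset=dataset,
        args=TrainingArguments(
            output_dir="llama2-jeopardy-lora",  # hypothetical output path
            per_device_train_batch_size=4,
            num_train_epochs=1,
            learning_rate=2e-4,
            logging_steps=25,
        ),
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal-LM labels
    )
    trainer.train()
    trainer.model.save_pretrained("llama2-jeopardy-lora")  # saves only the LoRA adapter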
The trained LoRA weights can be downloaded from:
https://drive.google.com/drive/folders/1J6Gx4TAW7UYQe2BXkFKKHaDEGp6RFMUu?usp=drive_link
Training log: record.log
To load the saved model and run inference, execute the inference notebook (Bonus_Make_Inference_from_saved_ft_model.ipynb) in Colab; the function make_inference() takes test.txt as input and writes the generated answers to test-output.txt.
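For reference, here is a hypothetical reconstruction of what make_inference() plausibly does; the actual function lives in the inference notebook, and the adapter path, prompt format, and generation settings below are assumptions.

    # Hypothetical reconstruction of make_inference(): load the base model plus
    # the saved LoRA adapter, answer each line of test.txt, and write the
    # answers to test-output.txt.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    def make_inference(adapter_dir="llama2-jeopardy-lora",  # assumed adapter path
                       in_path="test.txt", out_path="test-output.txt"):
        base = "NousResearch/Llama-2-7b-chat-hf"
        tokenizer = AutoTokenizer.from_pretrained(base)
        model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
        model = PeftModel.from_pretrained(model, adapter_dir)  # attach the fine-tuned LoRA weights
        model.eval()

        with open(in_path) as f, open(out_path, "w") as out:
            for question in filter(None, (line.strip() for line in f)):
                inputs = tokenizer(f"[INST] {question} [/INST]", return_tensors="pt").to(model.device)
                with torch.no_grad():
                    ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
                # Decode only the newly generated tokens, not the prompt.
                answer = tokenizer.decode(ids[0][inputs["input_ids"].shape[1]:],
                                          skip_special_tokens=True)
                out.write(answer.strip() + "\n")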
test.txt contains the questions: "Who is the president of United States?", "Which city is the capital of PRC?", "1+1=?"
test-output.txt contains the answers generated by the fine-tuned model: "Joe Biden", "Beijing", "2"
For any questions, please contact [email protected]