A language model for investment analysis, built as a course project for Stanford's Deep Generative Models class.
Poster | Report (to be added)
Training: We conducted three fine-tuning experiments with Llama 2 and GPT-3.5.
- Supervised and unsupervised fine-tuning code for Llama 2 is in llama_for_finance_finetuning.ipynb.
- Instruction fine-tuning code for GPT-3.5 is in finetune_GPT.ipynb.
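As a rough sketch of what the parameter-efficient (LoRA) fine-tuning in llama_for_finance_finetuning.ipynb might look like — the base model name, hyperparameters, and adapter targets below are illustrative assumptions, not the project's actual settings:

```python
# Sketch of LoRA fine-tuning setup for Llama 2.
# All hyperparameters and the base model name are illustrative assumptions.
LORA_HYPERPARAMS = {
    "r": 8,               # LoRA rank
    "lora_alpha": 16,     # scaling factor
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj"],  # attention projections to adapt
}

def build_peft_model(base_model_name: str = "meta-llama/Llama-2-7b-hf"):
    """Wrap a causal LM with LoRA adapters (requires transformers + peft)."""
    # Heavy dependencies are imported inside the function so the sketch
    # stays importable without them installed.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained(base_model_name)
    config = LoraConfig(task_type="CAUSAL_LM", **LORA_HYPERPARAMS)
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # only the small adapter weights train
    return model
```

Training would then run through a standard `transformers.Trainer` loop, with the loss computed over the scraped corpus (unsupervised) or over instruction/response pairs (supervised).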
Data:
- Data scraping for Llama 2 unsupervised fine-tuning is in llama_scrape.ipynb.
- Some utility functions are in finetuned_assistant.ipynb.
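A minimal sketch of the kind of scraping step used to build an unsupervised text corpus — the URL, helper names, and HTML structure here are hypothetical, and llama_scrape.ipynb's actual sources may differ:

```python
# Sketch of collecting raw text for unsupervised fine-tuning.
# fetch_article() requires the requests and beautifulsoup4 packages;
# the URL list is a placeholder.

def clean_text(raw: str) -> str:
    """Collapse runs of whitespace so each document becomes one training line."""
    return " ".join(raw.split())

def fetch_article(url: str) -> str:
    """Download a page and keep only its paragraph text."""
    import requests
    from bs4 import BeautifulSoup

    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    paragraphs = [p.get_text() for p in soup.find_all("p")]
    return clean_text(" ".join(paragraphs))

if __name__ == "__main__":
    urls = ["https://example.com/market-report"]  # placeholder source list
    corpus = [fetch_article(u) for u in urls]
```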
Evaluation:
- Evaluation code for the Llama 2 baseline and fine-tuned versions is in llama_evaluate.ipynb.
- Evaluation of the GPT models is in the same notebook as the fine-tuning code: finetune_GPT.ipynb.
Models:
- The unsupervised fine-tuned Llama 2 adapter is published as a PEFT model on the Hugging Face Hub.
- The instruction fine-tuned GPT-3.5 model is hosted on OpenAI.
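Loading the released artifacts for inference might look like the following sketch — the Hub repo id and the OpenAI model id are placeholders that would need to be replaced with the actual identifiers:

```python
# Sketch of loading the two fine-tuned models. Both identifiers are placeholders.
PEFT_REPO_ID = "your-username/llama2-finance-peft"     # placeholder HF Hub repo
OPENAI_MODEL_ID = "ft:gpt-3.5-turbo:your-org::abc123"  # placeholder fine-tuned id

def load_llama_adapter():
    """Attach the PEFT adapter from the Hub to the Llama 2 base model."""
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
    return PeftModel.from_pretrained(base, PEFT_REPO_ID)

def query_finetuned_gpt(prompt: str) -> str:
    """Query the instruction fine-tuned GPT-3.5 model.

    Requires the openai package and an OPENAI_API_KEY in the environment.
    """
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model=OPENAI_MODEL_ID,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```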