This is an implementation of a Conversational Retrieval Chain with Sources (LangChain) that performs semantic search against a Vectara index, served through a Chainlit frontend with chat settings and other fancy bits.
Blame this guy -> [email protected]
- Create a Vectara Index
Visit Vectara's Getting Started page to create a free index. Files can be uploaded and upserted through their web portal, so this project does not handle ingestion.
- Clone the repo and switch directories
git clone https://github.com/lukeslp/chainlit-langchain-vectara.git
cd chainlit-langchain-vectara
- Create a virtual environment
python3 -m venv env
- Activate the virtual environment
On Windows, use:
.\env\Scripts\activate
On Unix, Linux, or macOS, use:
source env/bin/activate
- Install the requirements
pip install -r requirements.txt
- Add API Keys
Add your API keys to .env.example and rename the file to .env
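For reference, a filled-in .env might look something like the fragment below. The exact variable names depend on what app.py and .env.example expect; LangChain's Vectara integration conventionally reads VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID, and VECTARA_API_KEY from the environment, and an LLM provider key (OpenAI here) is typically needed as well. Treat these names as assumptions and defer to .env.example:

```ini
# Vectara credentials (assumed names; check .env.example)
VECTARA_CUSTOMER_ID=your-customer-id
VECTARA_CORPUS_ID=your-corpus-id
VECTARA_API_KEY=your-vectara-api-key

# LLM provider key (assumed; only if app.py uses OpenAI)
OPENAI_API_KEY=sk-...
```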
- Run the application
chainlit run app.py
This base project is easily modified to use other chains, LLMs, and vector stores. Make it your own! For more information about the tools used in this project, please visit their respective documentation.
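Swapping vector stores is straightforward because LangChain chains depend only on a retriever interface (an object exposing `get_relevant_documents`), not on a specific backend. Here is a minimal, self-contained sketch of that pattern using a stand-in keyword retriever and a toy chain; the class and function names are illustrative, not taken from app.py:

```python
# Sketch: a retrieval chain only needs "something with get_relevant_documents",
# so Vectara can be swapped for any other store exposing the same interface.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Document:
    """Mirrors the shape of a LangChain Document: text plus metadata."""
    page_content: str
    metadata: dict = field(default_factory=dict)


class KeywordRetriever:
    """Stand-in retriever: naive keyword match instead of semantic search."""

    def __init__(self, docs: List[Document]):
        self.docs = docs

    def get_relevant_documents(self, query: str) -> List[Document]:
        terms = query.lower().split()
        return [d for d in self.docs
                if any(t in d.page_content.lower() for t in terms)]


def answer_with_sources(retriever, question: str) -> dict:
    """Toy 'chain with sources': fetch documents, report where they came from."""
    hits = retriever.get_relevant_documents(question)
    return {"question": question,
            "sources": [d.metadata.get("source") for d in hits]}


docs = [Document("Vectara handles embedding and indexing.", {"source": "a.md"}),
        Document("Chainlit renders the chat frontend.", {"source": "b.md"})]
result = answer_with_sources(KeywordRetriever(docs), "who handles indexing")
# result["sources"] contains only the matching document's source, "a.md"
```

Replacing `KeywordRetriever` with the retriever returned by any real vector store (Vectara, Chroma, FAISS, ...) leaves the rest of the chain untouched.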
Contributing
I welcome contributions to this project. Please feel free to open an issue or submit a pull request. However, this feeds into a larger project that is worth discussing if you're considering contributing. Contact me by email, Discord (lukelinguist), or visit luke.augcom.tech.
This project is licensed under the terms of the MIT license.