petcare-ai-chatbot's Introduction

PetCare AI Chatbot using Langchain, llama3, and Flask

This project implements an AI chatbot that answers queries based on PDF files in a dataset. It is built with Langchain, Ollama, and Flask, and uses Retrieval-Augmented Generation (RAG) to ground its answers in the documents.

(Demo GIF)

Core Concept

The core concept behind this AI Chatbot lies in leveraging advanced natural language processing (NLP) techniques to provide accurate and context-aware responses to user queries. Here's a brief overview of the key components:

  • Langchain: Langchain is a powerful framework that integrates various NLP tools and libraries, making it easier to build complex conversational systems. In this project, Langchain is used for document loading, text splitting, embeddings, and more.

  • Ollama: Ollama is a tool for running large language models (LLMs) such as Llama 3 locally. By serving the model through Ollama and plugging it into the conversational chain, the chatbot can interpret user queries in context and generate human-like responses.

  • Flask: Flask is a lightweight web framework used for building web applications, including APIs and web-based interfaces. In this project, Flask serves as the backend server for hosting the chatbot and handling user interactions.

  • RAG (Retrieval-Augmented Generation): RAG combines information retrieval with language generation to produce high-quality responses. The chatbot retrieves relevant passages from the PDF documents using Langchain's retrieval components and passes them to the Ollama-served model, which then generates an accurate, informative answer; a minimal sketch of this flow follows this list.
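To make the flow concrete, here is a minimal sketch of the retrieve-then-generate loop. It assumes a FAISS index of the PDF chunks has already been built (see Configuration below) and uses the langchain-community wrappers; the function and variable names are illustrative, not the project's actual code.

    # Minimal RAG sketch (illustrative, not the repository's exact code):
    # fetch the chunks most similar to the question, then ask the local
    # Llama model to answer using only that retrieved context.
    from langchain_community.llms import Ollama

    def answer(question, vector_store):
        # Retrieval: top-k most relevant chunks from the FAISS index.
        docs = vector_store.similarity_search(question, k=3)
        context = "\n\n".join(doc.page_content for doc in docs)

        # Generation: the locally served model answers from that context.
        llm = Ollama(model="llama3")
        prompt = (
            "Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        )
        return llm.invoke(prompt)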

Installation

  1. Clone the repository:

    git clone https://github.com/rajveersinghcse/PetCare-AI-Chatbot.git
    cd PetCare-AI-Chatbot
  2. Install dependencies:

    pip install -r requirements.txt

Usage

  1. Ensure you have the necessary PDF files in the data/ directory. Supported PDF files include:

    • dog_care_encyclopedia.pdf
    • Veterinary_Clinical_Pathology.pdf
    • You can add more PDF files, but indexing additional documents will require more computational resources.
  2. Start the Ollama server so your Llama model is available (see the note after these steps):

    ollama serve
  3. Run the Flask application:

    python app.py
  4. Access the chatbot interface by navigating to http://localhost:5000 in your web browser.
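Note on step 2: if the Llama model has not been downloaded yet, starting the server alone is not enough. Assuming the llama3 model referenced by this project, pull it first and then start the server:

    ollama pull llama3
    ollama serve

On desktop installations the Ollama server may already be running in the background; in that case ollama serve reports that the address is already in use and the command can be skipped.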

Project Structure

  • app.py: Flask application that runs the chatbot server (a structural sketch follows this list).
  • templates/index.html: HTML template for the chatbot interface.
  • data/: Directory containing PDF files used for answering queries.
  • requirements.txt: List of Python dependencies.
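For orientation, the following is a hypothetical sketch of how app.py might expose the chatbot over Flask. The /ask route name and the chain variable are illustrative assumptions, and the actual implementation may differ.

    # Hypothetical structure of app.py; route names and variables are
    # illustrative, not the repository's actual code.
    from flask import Flask, render_template, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        # Serve the chat UI from templates/index.html.
        return render_template("index.html")

    @app.route("/ask", methods=["POST"])  # hypothetical endpoint name
    def ask():
        question = request.form.get("question", "")
        # `chain` is assumed to be the Conversational Retrieval Chain
        # built once at startup (see the Configuration sketch below).
        result = chain.invoke({"question": question})
        return {"answer": result["answer"]}

    if __name__ == "__main__":
        app.run(port=5000, debug=True)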

Configuration

  • loader: Loads PDF documents from the data/ directory using Langchain's DirectoryLoader.
  • text_splitter: Splits PDF documents into chunks for processing.
  • embeddings: Utilizes Hugging Face embeddings for text representation.
  • vector_store: Stores text chunks using FAISS for efficient retrieval.
  • llm: Uses an Ollama-served large language model (e.g., llama3) to generate responses.
  • memory: Manages conversation history for context-aware responses.
  • chain: Constructs the Conversational Retrieval Chain from the components above (a wiring sketch follows this list).
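Below is a minimal sketch of how these components might be wired together with the langchain and langchain-community packages. The chunk size, retriever k, embedding model, and model name are assumptions rather than the repository's exact values.

    # Sketch of the retrieval pipeline described above; parameter values
    # (chunk size, k, model names) are illustrative assumptions.
    from langchain_community.document_loaders import DirectoryLoader, PyPDFLoader
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain_community.embeddings import HuggingFaceEmbeddings
    from langchain_community.vectorstores import FAISS
    from langchain_community.llms import Ollama
    from langchain.memory import ConversationBufferMemory
    from langchain.chains import ConversationalRetrievalChain

    # loader: read every PDF in data/
    loader = DirectoryLoader("data/", glob="*.pdf", loader_cls=PyPDFLoader)
    documents = loader.load()

    # text_splitter: break the PDFs into overlapping chunks
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = text_splitter.split_documents(documents)

    # embeddings + vector_store: embed the chunks and index them with FAISS
    embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
    vector_store = FAISS.from_documents(chunks, embeddings)

    # llm: the locally served Llama model via Ollama
    llm = Ollama(model="llama3")

    # memory: conversation history for context-aware follow-up questions
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    # chain: tie retrieval, memory, and generation together
    chain = ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vector_store.as_retriever(search_kwargs={"k": 3}),
        memory=memory,
    )

    print(chain.invoke({"question": "How often should a dog see a vet?"})["answer"])

Building the chain once at startup and reusing it for every request keeps the expensive PDF indexing out of the request path.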

Contributing

Contributions to enhance the chatbot's functionality, add new features, or improve documentation are welcome. Please fork the repository, make your changes, and submit a pull request.

License

This project is licensed under the MIT License.

petcare-ai-chatbot's Issues

help needed

Since I am new to language models, I am unable to understand how to run this project. I have cloned the repository and installed the requirements, but I do not understand this step:

Start the Ollama server so your Llama model is available:
ollama serve

I have even installed Ollama on my device and am running the commands in the terminal, but I do not know what to do to run the Llama model. Please help.
