paulpierre / rasagpt

💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. Built w/ Rasa, FastAPI, Langchain, LlamaIndex, SQLModel, pgvector, ngrok, telegram

Home Page: https://rasagpt.dev

License: MIT License

Makefile 11.67% Dockerfile 0.88% Python 81.61% Shell 5.84%
ai chatbot fastapi gpt-3 gpt-4 langchain llama-index llm ml openai

rasagpt's Introduction

RasaGPT Logo



🏠 Overview

💬 RasaGPT is the first headless LLM chatbot platform built on top of Rasa and Langchain. It is boilerplate and a reference implementation of Rasa and Telegram utilizing an LLM library like Langchain for indexing, retrieval and context injection.




RasaGPT Youtube Video



💬 What is Rasa?

In their own words:

💬 Rasa is an open source (Python) machine learning framework to automate text- and voice-based conversations: NLU, dialogue management, connect to Slack, Facebook, and more - Create chatbots and voice assistants


In my words:

Rasa is a very popular (dare I say de facto?) and easy-enough-to-use chatbot framework with built-in NLU ML pipelines. Those pipelines are now largely obsolete, but Rasa remains a conceptual starting point for a reimagined chatbot framework in a world of LLMs.



💁‍♀️ Why RasaGPT?

RasaGPT works out of the box. A lot of the implementation headaches were sorted out so you don’t have to, including:

  • Creating your own proprietary bot endpoint using FastAPI, with a document upload and “training” pipeline included
  • How to integrate Langchain/LlamaIndex and Rasa
  • Library conflicts with LLM libraries and passing metadata
  • Dockerized support on MacOS for running Rasa
  • Reverse proxy with chatbots via ngrok
  • Implementing pgvector with your own custom schema instead of using Langchain’s highly opinionated PGVector class
  • Adding multi-tenancy (Rasa doesn't natively support this), sessions and metadata between Rasa and your own backend / application

The backstory is familiar. A friend came to me with a problem. I scoured Google and Github for a decent reference implementation of LLM’s integrated with Rasa but came up empty-handed. I figured this to be a great opportunity to satiate my curiosity and 2 days later I had a proof of concept, and a week later this is what I came up with.


⚠️ Caveat emptor: This is far from production code and rife with prompt injection and general security vulnerabilities. I just hope someone finds this useful 😊



 Quick start

Getting started is easy, just make sure you meet the dependencies below.


⚠️⚠️⚠️ **ATTENTION NON-MACOS USERS:** If you are using Linux or Windows, you will need to change the image name from khalosa/rasa-aarch64:3.5.2 to rasa/rasa:latest in docker-compose.yml on line #64 and in the actions Dockerfile on line #1 here


# Get the code
git clone https://github.com/paulpierre/RasaGPT.git
cd RasaGPT

# Setup the .env file
cp .env-example .env

# Edit your .env file and add all the necessary credentials
make install

# Type "make" to see more options
make



🔥 Features

Full Application and API

  • LLM “learns” on an arbitrary corpus of data using Langchain
  • Upload documents and “train” all via FastAPI
  • Document versioning and automatic “re-training” implemented on upload
  • Customize your own async end-points and database models via FastAPI and SQLModel
  • Bot determines whether human handoff is necessary
  • Bot automatically generates tags based on user questions and responses
  • Full API documentation via Swagger and Redoc included
  • PGAdmin included so you can browse your database
  • Ngrok end-points are automatically generated for you on startup so your bot can always be accessed via https://t.me/yourbotname
  • Embedding similarity search built into Postgres via pgvector and Postgres functions
  • Dummy data included for you to test and experiment
  • Unlimited use cases, from help desk, customer support, quizzes, and e-learning to Dungeons & Dragons, and more

Rasa integration

  • Built on top of Rasa, the open source gold-standard for chat platforms
  • Supports MacOS M1/M2 via Docker (canonical Rasa image lacks MacOS arch. support)
  • Supports Telegram out of the box; easily integrate Slack, WhatsApp, Line, SMS, etc.
  • Set up complex dialog pipelines using NLU models from Hugging Face like BERT, or libraries/frameworks like Keras and TensorFlow, with OpenAI GPT as fallback

Flexibility

  • Extend agentic, memory, etc. capabilities with Langchain
  • Schema supports multi-tenancy, sessions, data storage
  • Customize agent personalities
  • Saves all chat history and creates embeddings from all interactions, future-proofing your retrieval strategy
  • Automatically generate embeddings from knowledge base corpus and client feedback



🧑‍💻 Installing

Requirements


Setup

git clone https://github.com/paulpierre/RasaGPT.git
cd RasaGPT
cp .env-example .env

# Edit your .env file and add all the necessary credentials

At any point feel free to just type in make and it will display the list of options, mostly useful for debugging:


Makefile main


Docker-compose

The easiest way to get started is using the Makefile in the root directory. It will install and run all the services for RasaGPT in the correct order.

make install

# This will automatically install and run RasaGPT
# After installation, to run again you can simply run

make run

Local Python Environment

This is useful if you wish to focus on developing on top of the API; a separate Makefile was made for this. It will create a local virtual environment for you.

# Assuming you are already in the RasaGPT directory
cd app/api
make install

# This will automatically install and run RasaGPT
# After installation, to run again you can simply run

make run

Similarly, enter make to see a full list of commands.

Makefile API


Installation process

Installation is automated and should look like this:

Installation

👉 Full installation log: https://app.warp.dev/block/vflua6Eue29EPk8EVvW8Kd


The installation process for Docker takes the following steps at a high level:

  1. Checks to make sure you have a .env file available
  2. Initializes the database with pgvector
  3. Creates the database schema from the database models
  4. Trains the Rasa model so it is ready to run
  5. Sets up ngrok with Rasa so Telegram has a webhook back to your API server
  6. Sets up the Rasa actions server so Rasa can talk to the RasaGPT API
  7. Populates the database with dummy data via seed.py



☑️ Next steps


💬 Start chatting

You can start chatting with your bot by visiting 👉 https://t.me/yourbotsname

Telegram



👀 View logs

You can view all of the logs by visiting 👉 http://localhost:9999/, which displays real-time logs of all the Docker containers

Dozzle



📖 API documentation

View the API endpoint docs by visiting 👉 http://localhost:8888/docs

On this page you can create and update entities, as well as upload documents to the knowledge base.

Swagger Docs



✏️ Examples

The bot is just a proof-of-concept and has not been optimized for retrieval. It currently uses 1,000-character chunks for indexing and basic Euclidean distance for retrieval, so quality is hit or miss.
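The chunking and distance metric described above can be sketched in a few lines of pure Python (the top_k helper and the toy vectors are illustrative; the real pipeline stores embeddings in Postgres):

```python
import math


def chunk(text: str, size: int = 1000) -> list[str]:
    # Naive fixed-length chunking, as described: 1000-character slices.
    return [text[i:i + size] for i in range(0, len(text), size)]


def euclidean(a: list[float], b: list[float]) -> float:
    # The same metric pgvector's <-> operator computes (L2 distance).
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def top_k(query_vec: list[float], node_vecs: list[list[float]], k: int = 3) -> list[int]:
    # Rank node embeddings by distance to the query embedding; smaller is closer.
    ranked = sorted(range(len(node_vecs)), key=lambda i: euclidean(query_vec, node_vecs[i]))
    return ranked[:k]
```

Fixed-length chunking ignores sentence and section boundaries, which is one reason retrieval quality is hit or miss.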

You can view example hits and misses with the bot in the RESULTS.MD file. Overall I estimate index optimization and LLM configuration changes can increase output quality by more than 70%.


👉 Click to see the Q&A results of the demo data in RESULTS.MD



💻 API Architecture and Usage

The REST API is straightforward; please visit the documentation 👉 http://localhost:8888/docs

The entities below support basic CRUD operations and return JSON.



Organization

This can be thought of as a company that is your client in a SaaS / multi-tenant world. By default, a list of dummy organizations has been provided

Screenshot 2023-05-05 at 8.45.28 AM.png

[
  {
    "id": 1,
    "uuid": "d2a642e6-c81a-4a43-83e2-22cee3562452",
    "display_name": "Pepe Corp.",
    "namespace": "pepe",
    "bot_url": null,
    "created_at": "2023-05-05T10:42:45.933976",
    "updated_at": "2023-05-05T10:42:45.933979"
  },
  {
    "id": 2,
    "uuid": "7d574f88-6c0b-4c1f-9368-367956b0e90f",
    "display_name": "Umbrella Corp",
    "namespace": "acme",
    "bot_url": null,
    "created_at": "2023-05-05T10:43:03.555484",
    "updated_at": "2023-05-05T10:43:03.555488"
  },
  {
    "id": 3,
    "uuid": "65105a15-2ef0-4898-ac7a-8eafee0b283d",
    "display_name": "Cyberdine Systems",
    "namespace": "cyberdine",
    "bot_url": null,
    "created_at": "2023-05-05T10:43:04.175424",
    "updated_at": "2023-05-05T10:43:04.175428"
  },
  {
    "id": 4,
    "uuid": "b7fb966d-7845-4581-a537-818da62645b5",
    "display_name": "Bluth Companies",
    "namespace": "bluth",
    "bot_url": null,
    "created_at": "2023-05-05T10:43:04.697801",
    "updated_at": "2023-05-05T10:43:04.697804"
  },
  {
    "id": 5,
    "uuid": "9283d017-b24b-4ecd-bf35-808b45e258cf",
    "display_name": "Evil Corp",
    "namespace": "evil",
    "bot_url": null,
    "created_at": "2023-05-05T10:43:05.102546",
    "updated_at": "2023-05-05T10:43:05.102549"
  }
]

Project

This can be thought of as a product that belongs to a company. You can view the list of projects that belong to an organization like so:

org-projects.png

[
  {
    "id": 1,
    "documents": [
      {
        "id": 1,
        "uuid": "92604623-e37c-4935-bf08-0e9efa8b62f7",
        "display_name": "project-pepetamine.md",
        "node_count": 3
      }
    ],
    "document_count": 1,
    "uuid": "44a4b60b-9280-4b21-a676-00612be9aa87",
    "display_name": "Pepetamine",
    "created_at": "2023-05-05T10:42:46.060930",
    "updated_at": "2023-05-05T10:42:46.060934"
  },
  {
    "id": 2,
    "documents": [
      {
        "id": 2,
        "uuid": "b408595a-3426-4011-9b9b-8e260b244f74",
        "display_name": "project-frogonil.md",
        "node_count": 3
      }
    ],
    "document_count": 1,
    "uuid": "5ba6b812-de37-451d-83a3-8ccccadabd69",
    "display_name": "Frogonil",
    "created_at": "2023-05-05T10:42:48.043936",
    "updated_at": "2023-05-05T10:42:48.043940"
  },
  {
    "id": 3,
    "documents": [
      {
        "id": 3,
        "uuid": "b99d373a-3317-4699-a89e-90897ba00db6",
        "display_name": "project-kekzal.md",
        "node_count": 3
      }
    ],
    "document_count": 1,
    "uuid": "1be4360c-f06e-4494-bf20-e7c73a56f003",
    "display_name": "Kekzal",
    "created_at": "2023-05-05T10:42:49.092675",
    "updated_at": "2023-05-05T10:42:49.092678"
  },
  {
    "id": 4,
    "documents": [
      {
        "id": 4,
        "uuid": "94da307b-5993-4ddd-a852-3d8c12f95f3f",
        "display_name": "project-memetrex.md",
        "node_count": 3
      }
    ],
    "document_count": 1,
    "uuid": "1fd7e772-365c-451b-a7eb-4d529b0927f0",
    "display_name": "Memetrex",
    "created_at": "2023-05-05T10:42:50.184817",
    "updated_at": "2023-05-05T10:42:50.184821"
  },
  {
    "id": 5,
    "documents": [
      {
        "id": 5,
        "uuid": "6deff180-3e3e-4b09-ae5a-6502d031914a",
        "display_name": "project-pepetrak.md",
        "node_count": 4
      }
    ],
    "document_count": 1,
    "uuid": "a389eb58-b504-48b4-9bc3-d3c93d2fbeaa",
    "display_name": "PepeTrak",
    "created_at": "2023-05-05T10:42:51.293352",
    "updated_at": "2023-05-05T10:42:51.293355"
  },
  {
    "id": 6,
    "documents": [
      {
        "id": 6,
        "uuid": "2e3c2155-cafa-4c6b-b7cc-02bb5156715b",
        "display_name": "project-memegen.md",
        "node_count": 5
      }
    ],
    "document_count": 1,
    "uuid": "cec4154f-5d73-41a5-a764-eaf62fc3db2c",
    "display_name": "MemeGen",
    "created_at": "2023-05-05T10:42:52.562037",
    "updated_at": "2023-05-05T10:42:52.562040"
  },
  {
    "id": 7,
    "documents": [
      {
        "id": 7,
        "uuid": "baabcb6f-e14c-4d59-a019-ce29973b9f5c",
        "display_name": "project-neurokek.md",
        "node_count": 5
      }
    ],
    "document_count": 1,
    "uuid": "4a1a0542-e314-4ae7-9961-720c2d092f04",
    "display_name": "Neuro-kek",
    "created_at": "2023-05-05T10:42:53.689537",
    "updated_at": "2023-05-05T10:42:53.689539"
  },
  {
    "id": 8,
    "documents": [
      {
        "id": 8,
        "uuid": "5be007ec-5c89-4bc4-8bfd-448a3659c03c",
        "display_name": "org-about_the_company.md",
        "node_count": 5
      },
      {
        "id": 9,
        "uuid": "c2b3fb39-18c0-4f3e-9c21-749b86942cba",
        "display_name": "org-board_of_directors.md",
        "node_count": 3
      },
      {
        "id": 10,
        "uuid": "41aa81a9-13a9-4527-a439-c2ac0215593f",
        "display_name": "org-company_story.md",
        "node_count": 4
      },
      {
        "id": 11,
        "uuid": "91c59eb8-8c05-4f1f-b09d-fcd9b44b5a20",
        "display_name": "org-corporate_philosophy.md",
        "node_count": 4
      },
      {
        "id": 12,
        "uuid": "631fc3a9-7f5f-4415-8283-78ff582be483",
        "display_name": "org-customer_support.md",
        "node_count": 3
      },
      {
        "id": 13,
        "uuid": "d4c3d3db-6f24-433e-b2aa-52a70a0af976",
        "display_name": "org-earnings_fy2023.md",
        "node_count": 5
      },
      {
        "id": 14,
        "uuid": "08dd478b-414b-46c4-95c0-4d96e2089e90",
        "display_name": "org-management_team.md",
        "node_count": 3
      }
    ],
    "document_count": 7,
    "uuid": "1d2849b4-2715-4dcf-aa68-090a221942ba",
    "display_name": "Pepe Corp. (company)",
    "created_at": "2023-05-05T10:42:55.258902",
    "updated_at": "2023-05-05T10:42:55.258904"
  }
]

Document

This can be thought of as an artifact related to a product, like an FAQ page or a PDF with financial statement earnings. You can view all the Documents associated with an Organization’s Project like so:

documents.png

{
  "id": 1,
  "uuid": "44a4b60b-9280-4b21-a676-00612be9aa87",
  "organization": {
    "id": 1,
    "uuid": "d2a642e6-c81a-4a43-83e2-22cee3562452",
    "display_name": "Pepe Corp.",
    "bot_url": null,
    "status": 2,
    "created_at": "2023-05-05T10:42:45.933976",
    "updated_at": "2023-05-05T10:42:45.933979",
    "namespace": "pepe"
  },
  "document_count": 1,
  "documents": [
    {
      "id": 1,
      "uuid": "92604623-e37c-4935-bf08-0e9efa8b62f7",
      "organization_id": 1,
      "project_id": 1,
      "display_name": "project-pepetamine.md",
      "url": "",
      "data": "# Pepetamine\n\nProduct Name: Pepetamine\n\nPurpose: Increases cognitive focus just like the Limitless movie\n\n**How to Use**\n\nPepetamine is available in the form of rare Pepe-coated tablets. The recommended dosage is one tablet per day, taken orally with a glass of water, preferably while browsing your favorite meme forum for maximum cognitive enhancement. For optimal results, take Pepetamine 30 minutes before engaging in mentally demanding tasks, such as decoding ancient Pepe hieroglyphics or creating your next viral meme masterpiece.\n\n**Side Effects**\n\nSome potential side effects of Pepetamine may include:\n\n1. Uncontrollable laughter and a sudden appreciation for dank memes\n2. An inexplicable desire to collect rare Pepes\n3. Enhanced meme creation skills, potentially leading to internet fame\n4. Temporary green skin pigmentation, resembling the legendary Pepe himself\n5. Spontaneously speaking in \"feels good man\" language\n\nWhile most side effects are generally harmless, consult your memologist if side effects persist or become bothersome.\n\n**Precautions**\n\nBefore taking Pepetamine, please consider the following precautions:\n\n1. Do not use Pepetamine if you have a known allergy to rare Pepes or dank memes.\n2. Pepetamine may not be suitable for individuals with a history of humor deficiency or meme intolerance.\n3. Exercise caution when driving or operating heavy machinery, as Pepetamine may cause sudden fits of laughter or intense meme ideation.\n\n**Interactions**\n\nPepetamine may interact with other substances, including:\n\n1. Normie supplements: Combining Pepetamine with normie supplements may result in meme conflicts and a decreased sense of humor.\n2. 
Caffeine: The combination of Pepetamine and caffeine may cause an overload of energy, resulting in hyperactive meme creation and potential internet overload.\n\nConsult your memologist if you are taking any other medications or substances to ensure compatibility with Pepetamine.\n\n**Overdose**\n\nIn case of an overdose, symptoms may include:\n\n1. Uncontrollable meme creation\n2. Delusions of grandeur as the ultimate meme lord\n3. Time warps into the world of Pepe\n\nIf you suspect an overdose, contact your local meme emergency service or visit the nearest meme treatment facility. Remember, the key to enjoying Pepetamine is to use it responsibly, and always keep in mind the wise words of our legendary Pepe: \"Feels good man.\"",
      "hash": "fdee6da2b5441080dd78e7850d3d2e1403bae71b9e0526b9dcae4c0782d95a78",
      "version": 1,
      "status": 2,
      "created_at": "2023-05-05T10:42:46.755428",
      "updated_at": "2023-05-05T10:42:46.755431"
    }
  ],
  "display_name": "Pepetamine",
  "created_at": "2023-05-05T10:42:46.060930",
  "updated_at": "2023-05-05T10:42:46.060934"
}

Node

Although this is not exposed in the API, a node is a chunk of a document for which embeddings get generated. Nodes are used for retrieval search as well as context injection. A node belongs to a document.


User

A user represents the person talking to a bot. Users do not necessarily belong to an org or product, but this relationship is captured in ChatSession below.


ChatSession

Not exposed via the API, but this represents a question and answer between the User and a bot. Each of these objects can be flexibly identified by a session_id, which gets automatically generated. ChatSessions contain rich metadata that can be used for training and optimization. ChatSessions via the /chat endpoint ARE in fact associated with an organization (for multi-tenant security purposes)
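A minimal sketch of what such a session record might look like (hypothetical field names; the real SQLModel table has more columns and persists to Postgres):

```python
import uuid
from dataclasses import dataclass, field


@dataclass
class ChatSession:
    """Illustrative shape of a question/answer record with auto-generated session_id."""
    organization_id: int          # ties the session to a tenant
    question: str
    answer: str = ""
    meta: dict = field(default_factory=dict)  # rich metadata for training/optimization
    session_id: str = field(default_factory=lambda: str(uuid.uuid4()))
```

Generating the session_id server-side means the channel (Telegram, Slack, etc.) never has to manage session state itself.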



📚 How it works


Rasa

  1. Rasa handles integration with the communication channel, in this case Telegram.
    • It specifically handles registering the target webhook that user messages should be routed through. In our case it is our FastAPI server via /webhooks/{channel}/webhook
  2. Rasa has two components, the core Rasa app and a Rasa actions server that runs separately
  3. Rasa must be configured (done already) via a few yaml files:
    • config.yml - contains NLU pipeline and policy configuration. What matters is setting the FallbackClassifier threshold
    • credentials.yml - contains the path to our webhook and Telegram credentials. This will get updated by the helper service rasa-credentials via app/rasa-credentials/main.py
    • domain.yml - This contains the chat entrypoint logic configuration like intent and the action to take against the intent. Here we add the action_gpt_fallback action which will trigger our actions server
    • endpoints.yml - This is where we set our custom action end-point for Rasa to trigger our fallback
    • nlu.yml - this is where we set our intent out_of_scope
    • rules.yml - we set a rule for this intent that it should trigger the action action_gpt_fallback
    • actions.py - this is where we define and express our action via the ActionGPTFallback class. The method name returns the action we defined for our intent above
  4. Rasa's NLU models must be trained, which can be done via CLI with rasa train. This is done automatically for you when you run make install
  5. Rasa's core must be run via rasa run after training
  6. Rasa's actions server must be run separately with rasa run actions
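The FallbackClassifier threshold mentioned above lives in config.yml. A minimal illustrative fragment (the threshold value and policy list are examples, not the repo's actual configuration):

```yaml
# config.yml (excerpt) — illustrative values only
pipeline:
  # ... upstream NLU components ...
  - name: FallbackClassifier
    threshold: 0.6   # confidence below this triggers the fallback intent
policies:
  - name: RulePolicy  # lets rules.yml map the fallback to action_gpt_fallback
```

A lower threshold keeps more traffic in Rasa's native intents; a higher one routes more conversations to the GPT fallback.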

Telegram

  1. Rasa automatically updates the Telegram Bot API with your callback webhook from credentials.yml.
  2. By default this is static. Since we are running on our local machine, we leverage ngrok to generate a publicly accessible URL and reverse tunnel into our Docker container
  3. The rasa-credentials service takes care of this process for you. Ngrok runs as a service; once it is ready, rasa-credentials calls the local ngrok API to retrieve the tunnel URL, updates the credentials.yml file, and restarts Rasa for you
  4. The webhook Telegram will send messages to will be our FastAPI server. Why this instead of Rasa? Because we want the flexibility to capture metadata, which Rasa makes a PITA, and centralizing things at the API server is ideal
  5. The FastAPI server forwards this to the Rasa webhook
  6. Rasa will then determine what action to take based on the user intent. Since the intents have been nerfed for this demo, it will go to the fallback action running in actions.py
  7. The custom action will capture the metadata and forward the response from FastAPI to the user

PGVector

pgvector is a plugin for Postgres that is automatically installed, enabling you to store and query vector data types. We have our own implementation because the Langchain PGVector class is not flexible enough to adapt to our schema, and we want flexibility.

  1. By default in Postgres, any files in the container's path /docker-entrypoint-initdb.d get run if the database has not been initialized. In the postgres Dockerfile we copy create_db.sh, which creates the db and user for our database
  2. In the models command in the Makefile, we run the models.py in the API container which creates the tables from the models.
  3. The enable_vector method enables the pgvector extension in the database
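pgvector's <-> operator computes Euclidean (L2) distance, so a similarity search is just an ORDER BY over that operator. An illustrative query (the table and column names are hypothetical, not the repo's actual schema):

```python
# Hypothetical pgvector similarity query; the node table/embedding column
# names are illustrative. Parameters are left as psycopg2-style placeholders.
SIMILARITY_SQL = """
SELECT id, display_name, embedding <-> %(query_vec)s::vector AS distance
FROM node
ORDER BY embedding <-> %(query_vec)s::vector
LIMIT %(k)s;
"""
```

Keeping the query in your own schema (rather than Langchain's PGVector tables) is what allows joining retrieval results against organizations, projects, and documents.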

Langchain

  1. The training data gets loaded into the database
  2. If an index doesn't exist, the data is indexed and stored in a file named index.json
  3. LlamaIndex uses a basic GPTSimpleVectorIndex to find the relevant data and injects it into a prompt.
  4. Guard rails via prompts are used to keep the conversation focused

Bot flow

  1. The user will chat in Telegram and the message will be filtered against existing intents
  2. If Rasa detects no intent match but instead matches the out_of_scope intent based on rules.yml, it will trigger the action_gpt_fallback action
  3. The ActionGPTFallback action will then call the FastAPI API server
  4. The API, using LlamaIndex, will find the relevant indexed content and inject it into a prompt to send to OpenAI for inference
  5. The prompt contains conversational guardrails, including:
    • Requesting that data be returned as JSON
    • Creating categorical tags based on the user's question
    • Returning a boolean for whether the conversation should be escalated to a human (if there is no context match)
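The guardrails amount to a prompt contract plus defensive parsing. A minimal sketch (the prompt wording and helper names are illustrative, not the repo's actual prompt):

```python
import json


def build_prompt(question: str, context: str) -> str:
    # Guardrails as described: JSON output, categorical tags, escalation flag.
    return (
        "Answer using ONLY the context below. Respond in JSON with keys "
        '"answer", "tags" (categories for the question), and '
        '"escalate" (true if the context does not cover the question).\n'
        f"Context:\n{context}\n\nQuestion: {question}"
    )


def parse_response(raw: str) -> dict:
    """Parse the model's JSON reply; escalate to a human on malformed output."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"answer": "", "tags": [], "escalate": True}
    data.setdefault("escalate", False)
    return data
```

Treating unparseable model output as an escalation is a cheap safety net, since an LLM asked for JSON will not always return valid JSON.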



📝 TODO

  • Write tests 😅
  • Implement LlamaIndex optimizations
  • Implement chat history
  • Implement Query Router abstractions to decide which search strategy to use (one-shot vs few-shot)
  • Explore other indexing methods like Tree indexes, Keyword indexes
  • Add chat history for immediate recall and context setting
  • Add a secondary adversarial agent (Dual pattern model) with the following potential functionalities:
    • Determine if the question has been answered and if not, re-optimize search strategy
    • Ensure prompt injection is not occurring
  • Increase baseline similarity search by exploring:
    • Regularly generate “fake” document embeddings based on historical queries and link them to actual documents via the HyDE pattern
    • Regularly generate “fake” user queries based on documents and link them to actual documents so user input searches and “fake” queries can match better



🔍 Troubleshooting

In general, check your docker container logs by simply going to 👉 http://localhost:9999/


Ngrok issues

Always check that your webhooks with ngrok and Telegram match. You can do this by running:

curl -sS "https://api.telegram.org/bot<your-bot-secret-token>/getWebhookInfo" | json_pp

.. which should return something like this:

{
    "ok": true,
    "result": {
        "url": "https://b280-04-115-40-112.ngrok-free.app/webhooks/telegram/webhook",
        "has_custom_certificate": false,
        "pending_update_count": 0,
        "max_connections": 40,
        "ip_address": "1.2.3.4"
    }
}

.. and the URL should match the one in your credentials.yml file. You can also check via the Ngrok admin UI 👉 http://localhost:4040/status

ngrok-admin.png


If it is not a match, restart everything by running:

make restart



💪 Contributing / Issues

  • Pull requests welcome
  • Please submit issues via Github, I will do my best to resolve them
  • If you want to get in touch, feel free to hmu on twitter via @paulpierre



thumbsup
Congratulations, all your base are belong to us! kthxbye



🌟 Star History

Star History Chart

📜 Open source license

Copyright (c) 2023 Paul Pierre. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

rasagpt's People

Contributors

eltociear, nandangrover, paulpierre, pythonicninja, snapre, xunfeng1980


rasagpt's Issues

[BUG] Error: [Errno 13] Permission denied: '/app/.config'

2023-07-15 09:16:15 INFO rasa.cli.train - Started validating domain and training data...
/opt/venv/lib/python3.10/site-packages/tensorflow/python/framework/dtypes.py:246: DeprecationWarning: np.bool8 is a deprecated alias for np.bool_. (Deprecated NumPy 1.24)
np.bool8: (False, True),
2023-07-15 09:16:36 INFO rasa.validator - Validating intents...
2023-07-15 09:16:36 INFO rasa.validator - Validating uniqueness of intents and stories...
2023-07-15 09:16:36 INFO rasa.validator - Validating utterances...
2023-07-15 09:16:36 INFO rasa.validator - Story structure validation...
Processed story blocks: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 830.23it/s, # trackers=1]
2023-07-15 09:16:36 INFO rasa.core.training.story_conflict - Considering all preceding turns for conflict analysis.
2023-07-15 09:16:36 INFO rasa.validator - No story structure conflicts found.
2023-07-15 09:16:40 WARNING rasa.utils.common - Failed to write global config. Error: [Errno 13] Permission denied: '/app/.config'. Skipping.
The configuration for pipeline was chosen automatically. It was written into the config file at 'config.yml'.
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect
return fn()
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 327, in connect
return _ConnectionFairy._checkout(self)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 894, in _checkout
fairy = _ConnectionRecord.checkout(pool)
File "/opt/venv/lib/python3.10/site-packages/sqlalchemy/pool/base.py", line 493, in checkout
rec = pool._do_get()

Flaky responses in telegram bot

The bot is working for some question but for others it won't work. The reply is processed from the container though. ngrok response status is also 200 Ok. I am not sure what might be going wrong here.

Kudos for creating this repository! Great work.

Screenshot 2023-05-14 at 12 28 45 am Screenshot 2023-05-14 at 12 29 19 am

Got this error at time of make install

sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address

password authentication failed for user "api"

2023-07-07 10:41:29.083 UTC [6134] STATEMENT:  SELECT organization.id, organization.uuid, organization.display_name, organization.namespace, organization.bot_url, organization.status, organization.created_at, organization.updated_at FROM organization WHERE organization.namespace = 'openai';
07/07/2023 16:15:15
2023-07-07 10:45:15.963 UTC [26] LOG:  checkpoint starting: time
07/07/2023 16:15:21
2023-07-07 10:45:21.386 UTC [26] LOG:  checkpoint complete: wrote 55 buffers (0.3%); 0 WAL file(s) added, 0 removed, 0 recycled; write=5.411 s, sync=0.006 s, total=5.424 s; sync files=42, longest=0.004 s, average=0.001 s; distance=157 kB, estimate=157 kB
07/07/2023 16:22:42
2023-07-07 10:52:42.014 UTC [7501] ERROR:  operator does not exist: character varying = integer at character 233
07/07/2023 16:22:42
2023-07-07 10:52:42.014 UTC [7501] HINT:  No operator matches the given name and argument types. You might need to add explicit type casts.
07/07/2023 16:22:42
2023-07-07 10:52:42.014 UTC [7501] STATEMENT:  SELECT organization.id, organization.uuid, organization.display_name, organization.namespace, organization.bot_url, organization.status, organization.created_at, organization.updated_at 
07/07/2023 16:22:42
	FROM organization 
07/07/2023 16:22:42
	WHERE organization.status = 2
07/07/2023 17:07:02
2023-07-07 11:37:02.307 UTC [11304] FATAL:  password authentication failed for user "api"
07/07/2023 17:07:02
2023-07-07 11:37:02.307 UTC [11304] DETAIL:  Connection matched pg_hba.conf line 100: "host all all all scram-sha-256"
07/07/2023 17:07:14
2023-07-07 11:37:14.658 UTC [11321] FATAL:  password authentication failed for user "api"
07/07/2023 17:07:14
2023-07-07 11:37:14.658 UTC [11321] DETAIL:  Connection matched pg_hba.conf line 100: "host all all all scram-sha-256"
07/07/2023 17:20:16
2023-07-07 11:50:16.619 UTC [26] LOG:  checkpoint starting: time
07/07/2023 17:20:16
2023-07-07 11:50:16.933 UTC [26] LOG:  checkpoint complete: wrote 4 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.303 s, sync=0.003 s, total=0.314 s; sync files=4, longest=0.003 s, average=0.001 s; distance=0 kB, estimate=141 kB


What is the reason for this error and how do I resolve it?

Startup error

⠿ rasa-core Pulled 4.0s
⠿ dbf6a9befcde Already exists 0.0s
⠿ 3805270d692e Already exists 0.0s
⠿ 6214d5facb77 Already exists 0.0s
⠿ 72e79ac92225 Already exists 0.0s
⠿ bb0cbea8a793 Already exists 0.0s
⠿ 9e952b6124bb Already exists 0.0s
⠿ 4f4fb700ef54 Already exists 0.0s
⠿ 6a9541aaa5f0 Already exists 0.0s
[+] Running 5/5
⠿ Network rasagpt_chat-network Created 0.1s
⠿ Network rasagpt_default Created 0.1s
⠿ Container chat_rasa_credentials Started 0.4s
⠿ Container chat_rasa_actions Started 3.2s
⠿ Container chat_rasa_core Started 3.5s
make[2]: Leaving directory '/home/house365ai/xxm/RasaGPT'
Error response from daemon: Container 801beb0d327cf7f94e0bb013eccae74663bfb872d49575c468ec39250d289c91 is not running
make[1]: *** [Makefile:291: rasa-train] Error 1
make[1]: Leaving directory '/home/house365ai/xxm/RasaGPT'
make: *** [Makefile:57: install] Error 2

[Issue] - Cannot start service rasa-actions

Hi,

I cannot find where to define $PATH; could you please help troubleshoot this issue?

Failed LOG:

ERROR: for chat_rasa_actions  Cannot start service rasa-actions: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "run": executable file not found in $PATH: unknown

ERROR: for rasa-actions  Cannot start service rasa-actions: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "run": executable file not found in $PATH: unknown
ERROR: Encountered errors while bringing up the project.
make[2]: *** [Makefile:115: rasa-start] Error 1
make[2]: Leaving directory '/root/RasaGPT'
make[1]: *** [Makefile:290: rasa-train] Error 2
make[1]: Leaving directory '/root/RasaGPT'
make: *** [Makefile:57: install] Error 2

Thanks & Regards,
Felix Nguyen
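The `exec: "run": executable file not found in $PATH` error usually means the image's entrypoint was lost or overridden, so Compose tries to execute the bare word `run` instead of `rasa run`. A hedged sketch of one way to pin this in `docker-compose.yml` (the `entrypoint`/`command` split here is an assumption, not the repo's actual configuration):

```yaml
rasa-actions:
  build:
    context: ./app/rasa
    dockerfile: ./actions/Dockerfile
  entrypoint: ["rasa"]                    # makes `run ...` resolve to `rasa run ...`
  command: ["run", "actions", "--port", "5055"]
```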

rasa-actions error

The issue is:
error checking context: can't stat '/var/www/html/Hug/RasaGPT/app/rasa/.rasa/cache/tmpq4o74_rt'
ERROR: Service 'rasa-actions' failed to build : Build failed
make[3]: *** [Makefile:115: rasa-start] Error 1

Successful installation: Problem

Hi, I am curious whether any community members have been able to successfully install and run RasaGPT.

Appreciate the help, coffee from my end :).

[+] Running 5/5
✔ Network rasagpt_chat-network Created 0.1s
✔ Network rasagpt_default Created 0.1s
✔ Container chat_rasa_credentials Started 0.0s
✔ Container chat_rasa_actions Started 0.0s
✔ Container chat_rasa_core Started 0.0s
make[2]: Leaving directory '/workspace/helm/RasaGPT'
Error response from daemon: Container c08306a3dd93b7da06b150708b51c907bc2d3ef4dba9538a7b7ced4aaef64423 is not running
make[1]: *** [Makefile:291: rasa-train] Error 1
make[1]: Leaving directory '/workspace/helm/RasaGPT'
make: *** [Makefile:57: install] Error 2

Container chat_rasa_actions is not running...:(

Container chat_rasa_actions Started 6.1s
! rasa-actions The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested 0.0s
✔ Container chat_rasa_core Started 8.2s
! rasa-core The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64) and no specific platform was requested 0.0s

Organization already exists

Getting the error below when running the command `make install`.

Traceback (most recent call last):
File "/app/api/seed.py", line 128, in
org_obj = create_org_by_org_or_uuid(
File "/app/api/helpers.py", line 95, in create_org_by_org_or_uuid
raise HTTPException(status_code=404, detail="Organization already exists")
fastapi.exceptions.HTTPException
make[1]: *** [seed] Error 1
make: *** [install] Error 2
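The seed script treats an already-seeded organization as a hard failure, so re-running `make install` against a database that was seeded once aborts with this HTTPException. One workaround is to wipe `./mnt/db` before reinstalling; the more durable fix is to make seeding idempotent with a get-or-create pattern. A toy, ORM-free sketch of the idea (all names hypothetical; a dict stands in for the organization table):

```python
# Minimal sketch of idempotent seeding (get-or-create).
orgs = {}  # stands in for the organization table


def seed_org(display_name: str) -> dict:
    """Return the existing org instead of raising if it was already seeded."""
    if display_name in orgs:
        return orgs[display_name]          # already seeded: reuse, don't 404
    org = {"display_name": display_name}
    orgs[display_name] = org
    return org


first = seed_org("pepe-corp")
second = seed_org("pepe-corp")             # a re-run is now a no-op
assert first is second
```

With this pattern the seed step can be run any number of times without error.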

Container is not running - Windows-WSL-Ubuntu

Hi,

Problem while running "make install"

Error response from daemon: Container f73041640f9c5db75dcdd5cd62338ae63aa1caf79692fb8262fd359cc8da3ac9 is not running
make[1]: *** [Makefile:291: rasa-train] Error 1
make[1]: Leaving directory '/mnt/d/RasaGPT'
make: *** [Makefile:57: install] Error 2

My Telegram doesn't receive messages from my bot, but my messages show as read

I can't get messages from the bot. I can see in the log that the bot queried GPT and returned a result, but I can't receive it in Telegram.
Similarly, I also tested the official RasaGPT bot, and I don't get a message there either, though I can see that my message has been read.

2023-05-11 10:28:48 chat_api | DEBUG:urllib3.connectionpool:http://rasa-core:5005 "POST /webhooks/telegram/webhook HTTP/1.1" 200 7
2023-05-11 10:28:48 chat_api | http://rasa-core:5005 "POST /webhooks/telegram/webhook HTTP/1.1" 200 7
2023-05-11 10:28:48 chat_api | DEBUG:config:[🤖 RasaGPT API webhook]
2023-05-11 10:28:48 chat_api | Posting data: {"update_id": "398224788", "message": {"message_id": 29, "from": {"id": 5388561203, "is_bot": false, "first_name": "Link", "last_name": "H", "username": "KaisenseH", "language_code": "zh-hans"}, "chat": {"id": 5388561203, "first_name": "Link", "last_name": "H", "username": "KaisenseH", "type": "private"}, "date": 1683771946, "text": "hello", "meta": {"response": "Hello! How may I assist you today?", "tags": ["greeting"], "is_escalate": false, "session_id": "93739abf-ef28-444f-a145-10d66bb38d2a"}}}
2023-05-11 10:28:48 chat_api |
2023-05-11 10:28:48 chat_api | [🤖 RasaGPT API webhook]
2023-05-11 10:28:48 chat_api | Rasa webhook response: success

How to run with rasa shell

I have successfully installed RasaGPT with `make install` and `make run`. Instead of using the Telegram frontend, how can I interact with the system via `make shell-rasa`? Actually, I got the following error:

💻🐢  Opening a bash shell in the rasa-core container ..

Container rasa-core is not running
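`make shell-rasa` just opens bash in the rasa-core container, so it fails if that container has exited. A hedged sequence of commands that should get you an interactive session once the container is healthy (container and service names follow the project's docker-compose.yml):

```shell
docker compose up -d rasa-core          # make sure the container is actually running
docker logs chat_rasa_core --tail 50    # if it exited, the reason is usually here
docker exec -it chat_rasa_core bash     # equivalent of `make shell-rasa`
rasa shell --debug                      # then chat with the bot from the CLI
```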

chat api error

05/23/2023 12:49:02 PM
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.9/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/usr/local/lib/python3.9/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.9/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/usr/local/lib/python3.9/site-packages/uvicorn/config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.9/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/app/api/main.py", line 28, in <module>
    from llm import (
  File "/app/api/llm.py", line 5, in <module>
    from langchain.docstore.document import Document as LangChainDocument
  File "/usr/local/lib/python3.9/site-packages/langchain/__init__.py", line 6, in <module>
    from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
  File "/usr/local/lib/python3.9/site-packages/langchain/agents/__init__.py", line 2, in <module>
    from langchain.agents.agent import (
  File "/usr/local/lib/python3.9/site-packages/langchain/agents/agent.py", line 15, in <module>
    from langchain.agents.tools import InvalidTool
  File "/usr/local/lib/python3.9/site-packages/langchain/agents/tools.py", line 8, in <module>
    from langchain.tools.base import BaseTool, Tool, tool
  File "/usr/local/lib/python3.9/site-packages/langchain/tools/__init__.py", line 31, in <module>
    from langchain.tools.vectorstore.tool import (
  File "/usr/local/lib/python3.9/site-packages/langchain/tools/vectorstore/tool.py", line 13, in <module>
    from langchain.chains import RetrievalQA, RetrievalQAWithSourcesChain
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/__init__.py", line 2, in <module>
    from langchain.chains.api.base import APIChain
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/api/base.py", line 13, in <module>
    from langchain.chains.api.prompt import API_RESPONSE_PROMPT, API_URL_PROMPT
  File "/usr/local/lib/python3.9/site-packages/langchain/chains/api/prompt.py", line 2, in <module>
    from langchain.prompts.prompt import PromptTemplate
  File "/usr/local/lib/python3.9/site-packages/langchain/prompts/__init__.py", line 3, in <module>
    from langchain.prompts.chat import (
  File "/usr/local/lib/python3.9/site-packages/langchain/prompts/chat.py", line 10, in <module>
    from langchain.memory.buffer import get_buffer_string
  File "/usr/local/lib/python3.9/site-packages/langchain/memory/__init__.py", line 23, in <module>
    from langchain.memory.vectorstore import VectorStoreRetrieverMemory
  File "/usr/local/lib/python3.9/site-packages/langchain/memory/vectorstore.py", line 10, in <module>
    from langchain.vectorstores.base import VectorStoreRetriever
  File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/__init__.py", line 2, in <module>
    from langchain.vectorstores.analyticdb import AnalyticDB
  File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/analyticdb.py", line 15, in <module>
    from langchain.embeddings.base import Embeddings
  File "/usr/local/lib/python3.9/site-packages/langchain/embeddings/__init__.py", line 19, in <module>
    from langchain.embeddings.openai import OpenAIEmbeddings
  File "/usr/local/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 66, in <module>
    class OpenAIEmbeddings(BaseModel, Embeddings):
  File "pydantic/main.py", line 197, in pydantic.main.ModelMetaclass.__new__
  File "pydantic/fields.py", line 506, in pydantic.fields.ModelField.infer
  File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
  File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
  File "pydantic/fields.py", line 663, in pydantic.fields.ModelField._type_analysis
  File "pydantic/fields.py", line 808, in pydantic.fields.ModelField._create_sub_type
  File "pydantic/fields.py", line 436, in pydantic.fields.ModelField.__init__
  File "pydantic/fields.py", line 552, in pydantic.fields.ModelField.prepare
  File "pydantic/fields.py", line 668, in pydantic.fields.ModelField._type_analysis
  File "/usr/local/lib/python3.9/typing.py", line 852, in __subclasscheck__
    return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class

response error

Hey, I'm running the containers on macOS. I got all the containers running, but I get no answer from the bot. The logs show a POST response, but then I get `Rasa webhook response: failed`.

Any ideas?

`make install` ERROR: "Error response from daemon: Container is not running"

Hi,
I have a problem when installing RasaGPT with `make install`. I am using Windows 11.
I also changed to rasa/rasa:latest, added USER root to RasaGPT\app\rasa\actions, and added user: "root:root" to rasa-core, rasa-actions and rasa-credentials.
I was almost done but hit this error:

make[2]: Leaving directory 'C:/Users/maidu/RasaGPT'
Error response from daemon: Container aa9703a170e87b8d3e0cfce416c45e4ffda368c951f889674ee5aa0f4dc85c89 is not running
make[1]: *** [Makefile:286: rasa-train] Error 1
make[1]: Leaving directory 'C:/Users/maidu/RasaGPT'
make: *** [Makefile:57: install] Error 2

Please help me!
Thanks.

Installation issue

Facing this issue when installing. Can anyone please help? I am installing on Ubuntu 20.04 with Docker version 23.0.0.

[+] Building 0.0s (0/0)
http: invalid Host header
make[2]: *** [Makefile:115: rasa-start] Error 17
make[2]: Leaving directory '/home/dell/rasagpt/RasaGPT'
make[1]: *** [Makefile:290: rasa-train] Error 2
make[1]: Leaving directory '/home/dell/rasagpt/RasaGPT'
make: *** [Makefile:57: install] Error 2

Session errors

This is not the only error I get from the database model:
Parent instance <Organization at 0xffffa99dcd40> is not bound to a Session; lazy load operation of attribute 'projects' cannot proceed

This is one of almost five errors I hit when trying to create a project for an Org or read a project by Org ID.

The cause of this error is the BaseModel structure (and the absence of any linting tools); the whole class needs to be restructured, starting with this class 👇

class BaseModel(SQLModel):
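In SQLAlchemy/SQLModel terms, the standard fixes are to either touch `org.projects` while the session is still open, or eager-load the relationship (e.g. with a `selectinload` option) before the session closes. A toy stdlib-only sketch of the failure mode and the first fix (all names hypothetical; no real ORM involved):

```python
# Toy illustration of the lazy-load pitfall.

class Session:
    def __init__(self):
        self.open = True

    def close(self):
        self.open = False


class Organization:
    def __init__(self, session):
        self._session = session
        self._projects = None

    @property
    def projects(self):
        # Lazy attribute: resolved on first access, which requires a live session.
        if self._projects is None:
            if not self._session.open:
                raise RuntimeError(
                    "Parent instance is not bound to a Session; "
                    "lazy load of 'projects' cannot proceed"
                )
            self._projects = ["project-pepetamine.md"]
        return self._projects


# Fix: touch (or eager-load) the relationship while the session is still open.
session = Session()
org = Organization(session)
projects = org.projects                         # loaded inside the session
session.close()
assert projects == ["project-pepetamine.md"]    # safe to use after close
```

Accessing `org.projects` only *after* `session.close()` reproduces the reported error.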

Container chat_rasa_core is not running, only "Started"

[+] Running 8/8
✔ Container chat_ngrok Running 0.0s
✔ Container chat_db Running 0.0s
✔ Container chat_dozzle Running 0.0s
✔ Container chat_rasa_credentials Running 0.0s
✔ Container chat_pgadmin Running 0.0s
✔ Container chat_api Running 0.0s
✔ Container chat_rasa_actions Running 0.0s
✔ Container chat_rasa_core Started

Why does this happen? Please explain this issue.

chat_pgadmin is getting an error

[2023-05-15 07:32:17 +0000] [366] [INFO] Booting worker with pid: 366
[2023-05-15 07:32:20 +0000] [366] [INFO] Worker exiting (pid: 366)
ERROR : Failed to create the directory /var/lib/pgadmin/sessions:
        [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
HINT  : Create the directory /var/lib/pgadmin/sessions, ensure it is writeable by
        'pgadmin', and try again, or, create a config_local.py file
        and override the SESSION_DB_PATH setting per
        https://www.pgadmin.org/docs/pgadmin4/7.1/config_py.html
[2023-05-15 07:32:20 +0000] [367] [INFO] Booting worker with pid: 367
ERROR : Failed to create the directory /var/lib/pgadmin/sessions:
        [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
HINT  : Create the directory /var/lib/pgadmin/sessions, ensure it is writeable by
        'pgadmin', and try again, or, create a config_local.py file
        and override the SESSION_DB_PATH setting per
        https://www.pgadmin.org/docs/pgadmin4/7.1/config_py.html
[2023-05-15 07:32:22 +0000] [367] [INFO] Worker exiting (pid: 367)
[2023-05-15 07:32:22 +0000] [368] [INFO] Booting worker with pid: 368
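pgAdmin's container runs as the unprivileged `pgadmin` user (UID/GID 5050), so the bind-mounted `./mnt/pgadmin` directory on the host must be writable by that UID. A hedged host-side fix (the path follows this project's docker-compose.yml):

```shell
mkdir -p ./mnt/pgadmin
sudo chown -R 5050:5050 ./mnt/pgadmin   # pgAdmin's in-container user is UID/GID 5050
docker compose restart pgadmin
```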

Container for Mac on x86

I am using a Mac on x86, and I have changed both files to use rasa/rasa:latest. The core and credentials containers were created fine, but for the actions container I am still getting this error. Any idea why?

rasa-actions The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v2) and no specific platform was requested 0.0s
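The images referenced by the compose file were built for arm64 (Apple Silicon), so on an x86 host Docker warns and the container may later crash with `exec format error`. One hedged workaround is to force the platform per service in `docker-compose.yml` (treat this as a sketch; the exact tag to use is an assumption):

```yaml
rasa-actions:
  build:
    context: ./app/rasa
    dockerfile: ./actions/Dockerfile
  platform: linux/amd64   # match the x86 host instead of the arm64 default
```

For services using `image: khalosa/rasa-aarch64:3.5.2`, switching to an amd64-compatible tag such as `rasa/rasa:3.5.2` has the same effect.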

Installation error

I am getting this error on `make install`:

[+] Building 2.7s (8/8)
=> [rasa-actions internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 32B 0.0s
=> [rasa-actions internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [rasa-actions internal] load metadata for docker.io/khalosa/rasa-aarc 0.3s
=> [rasa-actions internal] load build context 0.0s
=> => transferring context: 555B 0.0s
=> [rasa-actions 1/4] FROM docker.io/khalosa/rasa-aarch64:3.5.2@sha256:4 0.0s
=> CACHED [rasa-actions 2/4] COPY . /app 0.0s
=> CACHED [rasa-actions 3/4] WORKDIR /app 0.0s
=> ERROR [rasa-actions 4/4] RUN pip install python-dotenv rasa-sdk reque 0.4s
^Z

#8 3.476 ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/opt/venv/lib/python3.10/site-packages/dotenv'
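`/opt/venv` inside the base image is owned by root, so `pip install` fails once the build step runs as the unprivileged user. A hedged Dockerfile patch (the base image is taken from the build log above; the numeric user ID to drop back to is an assumption):

```dockerfile
FROM khalosa/rasa-aarch64:3.5.2
COPY . /app
WORKDIR /app
USER root                                                # /opt/venv is root-owned
RUN pip install --no-cache-dir python-dotenv rasa-sdk requests
USER 1001                                                # drop privileges again
```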

Hi, I need some help getting this running; I've been trying for the last couple of days without success.

I'll outline the steps I've gone through and the fixes I've applied.

  1. I'm on Windows 10, so I've made the changes suggested in the updated README, changing the image to rasa/rasa:latest in the docker-compose file and the Dockerfile in app/api. This solves the "service api declares mutually exclusive network_mode and networks: invalid compose project" error that results from running `make install`.
  2. Running `make install` then results in the following error: "Error response from daemon: Container f961d9eb78d99976626287f9cbe674c75ddb4104292be40a2df2cf504cb4e35b is not running". The chat_rasa_core container shuts down after startup while the other containers keep running. The Docker logs for chat_rasa_core show this message: "chmod: changing permissions of '/app/wait-for-it.sh': Operation not permitted". I tried to resolve this by changing this line in the docker-compose file:

entrypoint: ["/bin/bash", "-c", "chmod +x /app/wait-for-it.sh && /app/wait-for-it.sh rasa-credentials:8889 -t 120 -o && rasa run --enable-api --cors '*' --debug --credentials /app/credentials.yml --endpoints /app/endpoints.yml --model /app/models"]
to:
entrypoint: ["/bin/bash", "-c", "/app/wait-for-it.sh rasa-credentials:8889 -t 120 -o && rasa run --enable-api --cors '*' --debug --credentials /app/credentials.yml --endpoints /app/endpoints.yml --model /app/models"]

I also go to the RasaGPT/app/scripts directory via Git Bash and run "chmod +x ./wait-for-it.sh".

Apart from this, I also needed to change the EOL conversion to Unix (LF) for the wait-for-it.sh file in app/scripts.

I also made the changes in docker-compose suggested here: PythonicNinja@f899209

All in all, this is my docker-compose file:
```yaml
# -------------------------------------
# ▒█▀▀█ █▀▀█ █▀▀ █▀▀█ ▒█▀▀█ ▒█▀▀█ ▀▀█▀▀
# ▒█▄▄▀ █▄▄█ ▀▀█ █▄▄█ ▒█░▄▄ ▒█▄▄█ ░▒█░░
# ▒█░▒█ ▀░░▀ ▀▀▀ ▀░░▀ ▒█▄▄█ ▒█░░░ ░▒█░░
# +-----------------------------------+
# | http://RasaGPT.dev by @paulpierre |
# +-----------------------------------+

version: '3.9'

services:

  # -------------------
  # API service for LLM
  # -------------------
  api:
    build:
      context: ./app/api
    restart: always
    container_name: chat_api
    env_file:
      - .env
    ports:
      - 8888:8888
    healthcheck:
      test: ["CMD", "curl", "-f", "http://api:8888/health"]
      interval: 15s
      retries: 5
    depends_on:
      - db
    networks:
      - chat-network
    volumes:
      - ./app/scripts/wait-for-it.sh:/app/api/wait-for-it.sh
      - ./app/api:/app/api

  # -------------------
  # Ngrok agent service
  # -------------------
  ngrok:
    image: ngrok/ngrok:latest
    container_name: chat_ngrok
    ports:
      - 4040:4040
    env_file:
      - .env
    environment:
      NGROK_CONFIG: /etc/ngrok.yml
      NGROK_AUTH_TOKEN: ${NGROK_AUTH_TOKEN:-}
      NGROK_DEBUG: ${NGROK_DEBUG:-true}
      NGROK_API_KEY: ${NGROK_API_KEY:-}
    networks:
      - chat-network
    volumes:
      - ./app/rasa/ngrok.yml:/etc/ngrok.yml
    restart: unless-stopped

  # -----------------
  # Core Rasa service
  # -----------------
  rasa-core:
    image: rasa/rasa:latest
    container_name: chat_rasa_core
    env_file:
      - .env
    volumes:
      - ./app/rasa:/app
      - ./app/scripts/wait-for-it.sh:/app/wait-for-it.sh
    ports:
      - 5005:5005
    entrypoint: ["/bin/bash", "-c", "/app/wait-for-it.sh rasa-credentials:8889 -t 120 -o && rasa run --enable-api --cors '*' --debug --credentials /app/credentials.yml --endpoints /app/endpoints.yml --model /app/models"]
    networks:
      - chat-network
    depends_on:
      - rasa-actions
      - rasa-credentials

  # --------------------
  # Rasa actions service
  # --------------------
  rasa-actions:
    build:
      context: ./app/rasa
      dockerfile: ./actions/Dockerfile
    container_name: chat_rasa_actions
    env_file:
      - .env
    ports:
      - 5055:5055
    depends_on:
      - rasa-credentials
    networks:
      - chat-network

  # -------------------------------
  # Rasa credentials helper service
  # -------------------------------
  rasa-credentials:
    build:
      context: ./app/rasa-credentials
      dockerfile: Dockerfile
    container_name: chat_rasa_credentials
    volumes:
      - ./app/rasa:/app/rasa
      - ./app/rasa-credentials:/app/rasa-credentials
    ports:
      - 8889:8889
    env_file:
      - .env
    networks:
      - chat-network
    healthcheck:
      test: ["CMD", "curl", "-f", "http://rasa-credentials:8889"]
      interval: 15s
      retries: 5

  # -------------------------
  # Postgres database service
  # -------------------------
  db:
    build:
      context: ./app/db
    container_name: chat_db
    env_file:
      - .env
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres}
    volumes:
      - ./mnt/db:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 5s
      retries: 5
    networks:
      - chat-network

  # --------------------------------
  # PgAdmin database browser service
  # --------------------------------
  pgadmin:
    container_name: chat_pgadmin
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:[email protected]}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
      PGADMIN_CONFIG_SERVER_MODE: 'False'
    volumes:
      - ./mnt/pgadmin:/var/lib/pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    restart: unless-stopped
    depends_on:
      - db
    networks:
      - chat-network

  # ----------------------------
  # Container log viewer service
  # ----------------------------
  dozzle:
    container_name: chat_dozzle
    image: amir20/dozzle:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 9999:8080
    depends_on:
      - db

networks:
  chat-network:
    driver: bridge
```

When running `make install` after these changes, the chat_rasa_core container stays up without any errors.

  1. Instead, this error comes up:
    "#5 CANCELED failed to solve: process "/bin/bash -o pipefail -c pip install python-dotenv rasa-sdk requests" did not complete successfully: exit code:1"

#8 3.476 ERROR: Could not install packages due to an OSError: [Errno 13] Permission denied: '/opt/venv/lib/python3.10/site-packages/dotenv'

This is the error I cannot get around. It seems to arise when the requirements.txt in app/api is run. I've tried installing it manually via the terminal in Docker, getting the same response. I've also tried modifying the Dockerfile in app/api to give it root privileges, like this:

USER root
RUN pip install --no-cache-dir -r requirements.txt

Still the same error.
I would very much appreciate some help in getting further!

Running on Windows 11

Hi,

I am running `make install` inside the Python virtual environment which I created. I get this error:

! rasa-actions The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v4) and no specific platform was requested 0.0s
✔ Container chat_rasa_core Started 3.0s
! rasa-core The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v4) and no specific platform was requested 0.0s
make[2]: Leaving directory 'D:/mami/mami/RasaGPT'
Error response from daemon: Container 6a22a3413ba41aa4ee048ff18e17990191de0e5267bc8d9b7574ac58e4e108fc is not running
make[1]: *** [Makefile:291: rasa-train] Error 1

FULL STACKTRACE BELOW:

(mami) PS D:\mami\mami\RasaGPT> make install
make[1]: Entering directory 'D:/mami/mami/RasaGPT'
"\n\n-------------------------------------"
"▒█▀▀█ █▀▀█ █▀▀ █▀▀█ ▒█▀▀█ ▒█▀▀█ ▀▀█▀▀"
"▒█▄▄▀ █▄▄█ ▀▀█ █▄▄█ ▒█░▄▄ ▒█▄▄█ ░▒█░░"
"▒█░▒█ ▀░░▀ ▀▀▀ ▀░░▀ ▒█▄▄█ ▒█░░░ ░▒█░░"
"+-----------------------------------+"
"| http://RasaGPT.dev by @paulpierre |"
"+-----------------------------------+\n\n"
make[1]: Leaving directory 'D:/mami/mami/RasaGPT'
make[1]: Entering directory 'D:/mami/mami/RasaGPT'
"🔍 Stopping any running containers .. \n"
[+] Running 10/10
✔ Container chat_dozzle Removed 0.5s
✔ Container chat_ngrok Removed 0.0s
✔ Container chat_rasa_core Removed 0.0s
✔ Container chat_pgadmin Removed 2.0s
✔ Container chat_api Removed 1.1s
✔ Container chat_rasa_actions Removed 1.2s
✔ Container chat_rasa_credentials Removed 12.6s
✔ Container chat_db Removed 0.7s
✔ Network rasagpt_default Removed 0.6s
✔ Network rasagpt_chat-network Removed 1.1s
make[1]: Leaving directory 'D:/mami/mami/RasaGPT'
make[1]: Entering directory 'D:/mami/mami/RasaGPT'
"🔍 Checking if envvars are set ..\n";
make[1]: Leaving directory 'D:/mami/mami/RasaGPT'
make[1]: Entering directory 'D:/mami/mami/RasaGPT'
"💽 Generating Rasa models ..\n"
make[2]: Entering directory 'D:/mami/mami/RasaGPT'
"🤖 Starting Rasa ..\n"
[+] Running 7/7
✔ Network rasagpt_chat-network Created 0.7s
✔ Network rasagpt_default Created 0.6s
✔ Container chat_rasa_credentials Started 1.0s
✔ Container chat_rasa_actions Started 2.0s
! rasa-actions The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v4) and no specific platform was requested 0.0s
✔ Container chat_rasa_core Started 3.0s
! rasa-core The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v4) and no specific platform was requested 0.0s
make[2]: Leaving directory 'D:/mami/mami/RasaGPT'
Error response from daemon: Container 6a22a3413ba41aa4ee048ff18e17990191de0e5267bc8d9b7574ac58e4e108fc is not running
make[1]: *** [Makefile:291: rasa-train] Error 1
make[1]: Leaving directory 'D:/mami/mami/RasaGPT'
make: *** [Makefile:57: install] Error 2

Permission Error When Changing Permissions of 'wait-for-it.sh' in Docker Container

I encountered a permission error when attempting to change the permissions of the 'wait-for-it.sh' script within a Docker container. Here are the details: When i do make run

version: '3'
services:
  rasa-core:
    image: rasa/rasa:latest
    container_name: chat_rasa_core
    env_file:
      - .env
    volumes:
      - ./app/rasa:/app
      - ./app/scripts/wait-for-it.sh:/app/wait-for-it.sh
    ports:
      - 5005:5005
    entrypoint: ["/bin/bash", "-c", "chmod +x /app/wait-for-it.sh && /app/wait-for-it.sh rasa-credentials:8889 -t 120 -o && rasa run --enable-api --cors '*' --debug --credentials /app/credentials.yml --endpoints /app/endpoints.yml --model /app/models"]
    networks:
      - chat-network
    depends_on:
      - rasa-actions
      - rasa-credentials

Error message:

chmod: changing permissions of '/app/wait-for-it.sh': Operation not permitted

Steps to Reproduce:

1) Open a terminal in the directory containing the Makefile.
2) Run the `make run` command.
3) Monitor the logs of the chat_rasa_core container using `docker logs chat_rasa_core --tail 100 -f`.
4) Observe the permission error message when the chmod command is executed.
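Because `wait-for-it.sh` is bind-mounted from the host, the `chmod` inside the container needs ownership the rasa user doesn't have. The executable bit can instead be set on the host, after which the `chmod` step can be dropped from the entrypoint (paths follow the repo layout; treat this as a sketch):

```shell
chmod +x app/scripts/wait-for-it.sh   # the bit survives the bind mount
docker compose up -d rasa-core
```

On Windows, also make sure the file uses LF line endings, or bash inside the container will fail to execute it.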

Can anyone fix this issue? action_gpt_fallback vs action_default_fallback

I took a deeper look into this and found that it seems to be an issue with the `rasa.core.policies.rule` policy.

A response is sent to the user when the log shows "There is a rule for the next action 'action_gpt_fallback'" and "Predicted next action 'action_gpt_fallback' with confidence 1.00",
and no response is sent when the policy returns "There is no applicable rule." and "Predicted next action 'action_default_fallback' with confidence 0.30".

I'm new to Rasa and will try to look further into it, but just FYI in case you are faster ;-)

Debug

Originally posted by @OmidH in #7 (comment)

I have the same issue: when the action is action_gpt_fallback it works OK, but there is no response when the action is action_default_fallback. This has blocked me for almost a month; please help.
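A hedged sketch of what a working rule for the LLM fallback might look like in `data/rules.yml` (intent and action names are taken from the logs above; treat this as illustrative, not the repo's actual file):

```yaml
rules:
  - rule: Send out-of-scope questions to the LLM fallback
    steps:
      - intent: out_of_scope
      - action: action_gpt_fallback
```

Without a matching rule, Rasa falls through to `action_default_fallback`, which sends nothing to the user unless an `utter_default` response is defined in `domain.yml`.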

Users cannot get a response from Telegram

I found that when the intent is classified as 'greet', users do not get a response from Telegram.
When the intent is classified as 'out_of_scope', users do get a response from Telegram.

rasa_core log when can get response from telegram:
rasa.core.processor - Received user message 'About Pepe Corp' with intent '{'name': 'out_of_scope', 'confidence': 0.9581602811813354}' and entities '[]'

rasa_core log when cannot get response from telegram:
rasa.core.processor - Received user message 'Tell me about Pepe Corp' with intent '{'name': 'greet', 'confidence': 0.9222689867019653}' and entities '[]'

Using `make models` cannot create the 'api' database

FATAL: database "api" does not exist

Hi,
I have a problem installing RasaGPT at the `make models` step. I am using Windows 11.
When I run `make models`, the terminal shows this error:
sqlalchemy.exc.ArgumentError: Textual SQL expression 'CREATE EXTENSION IF NOT E...' should be explicitly declared as text('CREATE EXTENSION IF NOT E...')
I fixed it by changing `query = "CREATE EXTENSION IF NOT EXISTS vector;"` on line 619 of models.py to `query = text("CREATE EXTENSION IF NOT EXISTS vector;")` and adding the sqlalchemy `text` import.
But when I run `make models` again, another bug appears:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "db" (172.30.0.6), port 5432 failed: FATAL: database "api" does not exist

I have no idea how to fix this one. Please help!
Many thanks.
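The second error just means the `api` database was never created, so there is nothing for `make models` to connect to. A hedged manual fix, run inside the chat_db container (database and user names follow the logs; the extension requires a pgvector-enabled image):

```sql
-- e.g. docker exec -it chat_db psql -U postgres
CREATE DATABASE api;
\c api
CREATE EXTENSION IF NOT EXISTS vector;
```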

No operator matches the given name and argument types. You might need to add explicit type casts.

This error occurred when seeding the database. My guess is that a type mismatch is occurring, but an explicit type cast didn't solve the issue. Help would be much appreciated. Thanks @paulpierre
🌱 Seeding database ...
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) operator does not exist: entity_status = integer
LINE 3: ...splay_name = 'project-pepetamine.md' AND document.status = 2
^
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.

[SQL: SELECT document.id, document.uuid, document.organization_id, document.project_id, document.display_name, document.url, document.data, document.hash, document.version, document.status, document.created_at, document.updated_at
FROM document
WHERE %(param_1)s = document.project_id AND document.display_name = %(display_name_1)s AND document.status = %(status_1)s]
[parameters: {'param_1': 1, 'display_name_1': 'project-pepetamine.md', 'status_1': 2}]
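The comparison fails because the query passes the enum's integer value (`2`) where Postgres expects the `entity_status` enum type. With SQLAlchemy enum columns, the usual fix is to filter with the enum member itself, not its integer value. A minimal stdlib sketch of the distinction (this `EntityStatus` definition is an assumption, not the repo's actual enum):

```python
from enum import Enum


class EntityStatus(Enum):
    """Hypothetical stand-in for the app's entity_status enum."""
    DELETED = 1
    ACTIVE = 2


# With an Enum-typed column, filter with the member, e.g.:
#   select(Document).where(Document.status == EntityStatus.ACTIVE)
# Passing the raw integer instead makes Postgres compare
# `entity_status = integer`, which it has no operator for.
assert EntityStatus(2) is EntityStatus.ACTIVE
assert EntityStatus.ACTIVE.value == 2
```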

PDF integration

Thank you for the awesome repo. Is it also possible to add a PDF and use the chatbot to answer questions from that PDF?
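Langchain ships PDF loaders, so one plausible pattern is to extract the PDF's text and then feed it through the same document-upload path RasaGPT already uses for markdown. An untested, pseudocode-level sketch assuming Langchain's `PyPDFLoader` (verify the loader name against your installed version):

```python
# Sketch only; not the repo's actual ingestion code.
from langchain.document_loaders import PyPDFLoader

pages = PyPDFLoader("whitepaper.pdf").load()           # one Document per page
text = "\n\n".join(page.page_content for page in pages)
# ...then submit `text` through RasaGPT's document upload endpoint as usual.
```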

[BUG] ERROR [4/4] RUN pip install python-dotenv rasa-sdk requests

I am getting an error, and the log for it is as follows:

[4/4] RUN pip install python-dotenv rasa-sdk requests:
#0 0.499 exec /bin/sh: exec format error

failed to solve: executor failed running [/bin/sh -c pip install python-dotenv rasa-sdk requests]: exit code: 1
make[2]: *** [Makefile:115: rasa-start] Error 17
make[2]: Leaving directory '/home/MyData/work/Bots/RasaGPT/RasaGPT'
make[1]: *** [Makefile:290: rasa-train] Error 2
make[1]: Leaving directory '/home/MyData/work/Bots/RasaGPT/RasaGPT'
make: *** [Makefile:57: install] Error 2

My PC architecture is x86_64.

The Telegram application sends a text but no reply comes back, please help

Telegram app sends a message: OK

 data: {'intent': {'name': 'greet', 'confidence': 0.9993299245834351}, 'entities': [], 'text': 'test message -ok', 'message_id': '3f94b27b500544cd9561873a2dc36885', 'metadata': {'response': None, 'tags': None, 'is_escalate': False, 'session_id': 'cb686254-f3cf-4150-a83f-2d7d525acd50'}, 'text_tokens': [[0, 4], [5, 12], [14, 16]], 'intent_ranking': [{'name': 'greet', 'confidence': 0.9993299245834351}, {'name': 'out_of_scope', 'confidence': 0.0006701531819999218}], 'response_selector': {'all_retrieval_intents': [], 'default': {'response': {'responses': None, 'confidence': 0.0, 'intent_response_key': None, 'utter_action': 'utter_None'}, 'ranking': []}}}
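The parse payload above can be inspected directly when debugging; a small sketch (the dict is copied, trimmed, from the log above). Note the NLU side is working — `greet` is predicted with ~99.9% confidence — so a missing reply points at the action/response stage, not intent parsing:

```python
# Parse payload copied (trimmed) from the debug log above.
parse_data = {
    "intent": {"name": "greet", "confidence": 0.9993299245834351},
    "entities": [],
    "text": "test message -ok",
    "metadata": {
        "response": None,
        "tags": None,
        "is_escalate": False,
        "session_id": "cb686254-f3cf-4150-a83f-2d7d525acd50",
    },
}

intent = parse_data["intent"]["name"]
confidence = parse_data["intent"]["confidence"]
session_id = parse_data["metadata"]["session_id"]
print(intent, confidence, session_id)
```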

The Telegram application sends a text but no reply comes back:

2023-06-30 14:16:57 DEBUG aiogram - Make request: "getMe" with data: "{}" and files "None"
2023-06-30 14:16:57 DEBUG aiogram - Response for getMe: [200] "'{"ok":true,"result":{"id":6263777998,"is_bot":true,"first_name":"Pepe_788bot","username":"Pepe788_bot","can_join_groups":true,"can_read_all_group_messages":false,"supports_inline_queries":false}}'"
2023-06-30 14:16:57 DEBUG rasa.shared.utils.common - [🤖 ActionGPTFallback]
metadata: {'response': None, 'tags': None, 'is_escalate': False, 'session_id': 'cb686254-f3cf-4150-a83f-2d7d525acd50'}
2023-06-30 14:16:57 DEBUG rasa.core.lock_store - Issuing ticket for conversation '6141325834'.
2023-06-30 14:16:57 DEBUG rasa.core.lock_store - Acquiring lock for conversation '6141325834'.
2023-06-30 14:16:57 DEBUG rasa.core.lock_store - Acquired lock for conversation '6141325834'.
2023-06-30 14:16:57 DEBUG rasa.core.tracker_store - Recreating tracker for id '6141325834'
2023-06-30 14:16:57 DEBUG rasa.engine.runner.dask - Running graph with inputs: {'message': [<rasa.core.channels.channel.UserMessage object at 0x7f8d3b8dc460>], 'tracker': <rasa.shared.core.trackers.DialogueStateTracker object at 0x7f8d14528df0>}, targets: ['run_RegexMessageHandler'] and ExecutionContext(model_id='c9a3bdb3a8354228bf1ece7c8de5f86a', should_add_diagnostic_data=False, is_finetuning=False, node_name=None).
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'nlu_message_converter' running 'NLUMessageConverter.convert_user_message'.
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'run_WhitespaceTokenizer0' running 'WhitespaceTokenizer.process'.
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'run_RegexFeaturizer1' running 'RegexFeaturizer.process'.
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'run_LexicalSyntacticFeaturizer2' running 'LexicalSyntacticFeaturizer.process'.
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'run_CountVectorsFeaturizer3' running 'CountVectorsFeaturizer.process'.
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'run_CountVectorsFeaturizer4' running 'CountVectorsFeaturizer.process'.
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'run_DIETClassifier5' running 'DIETClassifier.process'.
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'run_EntitySynonymMapper6' running 'EntitySynonymMapper.process'.
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'run_ResponseSelector7' running 'ResponseSelector.process'.
2023-06-30 14:16:57 DEBUG rasa.nlu.classifiers.diet_classifier - There is no trained model for 'ResponseSelector': The component is either not trained or didn't receive enough training data.
2023-06-30 14:16:57 DEBUG rasa.nlu.selectors.response_selector - Adding following selector key to message property: default
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'run_FallbackClassifier8' running 'FallbackClassifier.process'.
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'domain_provider' running 'DomainProvider.provide_inference'.
2023-06-30 14:16:57 DEBUG rasa.engine.graph - Node 'run_RegexMessageHandler' running 'RegexMessageHandler.process'.

Installation stuck at Building api

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Building api
[+] Building 1077.8s (7/8)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/python:3.9 15.4s
=> [1/4] FROM docker.io/library/python:3.9@sha256:c0dcc146710fed0a6d62cb55b92f00bfbfc3b931fff6218f4958bab58333c37b 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 1.46kB 0.0s
=> CACHED [2/4] WORKDIR /app/api 0.0s
=> CACHED [3/4] COPY . . 0.0s
=> [4/4] RUN pip install --no-cache-dir -r requirements.txt 1062.4s
=> => # Downloading numpy-1.26.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
=> => # INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. If you want to abort this run, you can press Ctrl + C to do so. To improve how pip performs, tell us what happened here: https://pip.pypa.io/surveys/backtracking
=> => # Downloading numpy-1.25.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.3 MB)
=> => # Downloading numpy-1.25.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.7 MB)
=> => # Downloading numpy-1.25.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.7 MB)

Several hours have passed and it is still stuck in this endless downloading.
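The repeated numpy downloads are pip's dependency resolver backtracking through release after release looking for a compatible set. A hedged workaround is to pin numpy explicitly in `requirements.txt` so the resolver stops searching — the exact version below is only an illustration and must be checked against the other pins in this repo's requirements:

```
# requirements.txt fragment — explicit pin to stop resolver backtracking
# (version shown is illustrative, not verified against this repo's constraints)
numpy==1.24.4
```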
