
rasa-x-helm's Introduction

Rasa Enterprise Helm Chart


Rasa Enterprise is a platform for multidisciplinary teams to create AI assistants that drive business value. This Helm chart provides a quick, production-ready deployment of Rasa Enterprise in your cluster.

NOTE: Please see the Rasa Enterprise documentation for a detailed guide on usage and configuration of this chart.

Prerequisites

  • Kubernetes 1.12+
  • Helm 2.11+ or 3
  • Persistent Volume provisioner support in the underlying infrastructure

Installation

helm repo add rasa-x https://rasahq.github.io/rasa-x-helm
helm install <your release name> rasa-x/rasa-x

Upgrading the deployment

helm upgrade <your release name> rasa-x/rasa-x

Uninstalling

helm delete <your release name>

To 5.0.0

The rasa-x-helm chart in version 5.0.0 supports deployment of Rasa Pro version 3.8.0 and above only. Older versions of Rasa are no longer supported with this version of the rasa-x-helm chart.

As part of this release, the Redis lock store type changed from rasa_plus.components.concurrent_lock_store.ConcurrentRedisLockStore to concurrent_redis.
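As a sketch of what this means in practice (assuming the standard Rasa endpoints.yml layout; the host name below is a placeholder), the lock store section would change roughly like this:

```yaml
# endpoints.yml -- hypothetical before/after sketch, not taken from the chart
# Before (chart < 5.0.0):
# lock_store:
#   type: rasa_plus.components.concurrent_lock_store.ConcurrentRedisLockStore
#   url: redis-host
#   port: 6379
# After (chart >= 5.0.0):
lock_store:
  type: concurrent_redis
  url: redis-host   # placeholder host name
  port: 6379
```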

To 4.0.0

The rasa-x-helm chart in version 4.0.0 introduces the following breaking changes:

Chart dependencies are updated to the latest available versions. Below is a summary of major changes compared to the previous versions used by the rasa-x-helm chart:

  • Redis - the chart for Redis is updated to version 15.

    • Credentials parameters are reorganized under the auth parameter.
    • The cluster.enabled parameter is deprecated in favor of the architecture parameter, which accepts two values: standalone and replication.
    • securityContext.* is deprecated in favor of XXX.podSecurityContext and XXX.containerSecurityContext (XXX can be replaced with master or replica).
    • redis.redisPort is deprecated in favor of master.service.port and replica.service.port.

    A full list of changes between 10.5.14 and 15.7.4 versions for the Bitnami Redis chart can be found in the changelog.

  • RabbitMQ - the chart for RabbitMQ is updated to version 8.

    • securityContext.* is deprecated in favor of podSecurityContext and containerSecurityContext.
    • Authentication parameters were reorganized under the auth.* parameter:
      • rabbitmq.username, rabbitmq.password, and rabbitmq.erlangCookie are now auth.username, auth.password, and auth.erlangCookie respectively.

    A full list of changes between 6.19.2 and 8.26.0 versions for the Bitnami RabbitMQ chart can be found in the changelog.

  • PostgreSQL - the chart for PostgreSQL is updated to version 10.

    • Default PostgreSQL version is updated from 12.8.0 to 12.9.0 (a dump/restore is not required for those running 12.X).
    • The term master has been replaced with primary and slave with readReplicas throughout the chart. Role names have changed from master and slave to primary and read.

    A full list of changes for the Bitnami PostgreSQL chart can be found in the changelog.
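Concretely, a pre-4.0.0 values file would be migrated along these lines. This is a hedged sketch showing only the renamed parameters listed above, with placeholder values:

```yaml
# values.yaml migration sketch for chart 4.0.0 (placeholder values)
redis:
  # was: cluster.enabled, redis.redisPort, securityContext.*
  architecture: standalone   # or "replication"
  auth:
    password: "changeme"     # credentials moved under auth
  master:
    service:
      port: 6379             # was redis.redisPort
    podSecurityContext: {}
    containerSecurityContext: {}
rabbitmq:
  # was: rabbitmq.username / rabbitmq.password / rabbitmq.erlangCookie
  auth:
    username: "user"
    password: "changeme"
    erlangCookie: "changeme"
postgresql:
  # "master"/"slave" terminology replaced by "primary"/"readReplicas"
  primary: {}
  readReplicas: {}
```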

To 3.0.0

The rasa-x-helm chart in version 3.0.0 introduces the following breaking changes:

  • Default version for PostgreSQL is 12.8.0.

    PostgreSQL deployments for chart versions < 3.0.0 used PostgreSQL 11. A guide on how to migrate from PostgreSQL 11 to 12 can be found in this document.

  • Ingress is disabled by default.

    ingress:
      enabled: false
  • Default username for Rasa Enterprise is admin.

  • The Rasa production deployment is disabled by default and will be removed in the future.

    rasa:
      versions:
        rasaProduction:
          # the rasa production deployment is disabled by default.
          enabled: false

    It's recommended to use the rasa helm chart to deploy Rasa OSS. Visit the rasa chart docs to learn more.

    Before you upgrade the helm chart check the migration guide.

To 2.0.0

The rasa-x-helm chart in version 2.0.0 supports using an external Rasa OSS deployment.

Enabling an external Rasa OSS deployment

The rasa-x-helm chart >= 2.0.0 supports using an external Rasa OSS deployment. The following example configuration disables the rasa-production deployment and uses an external deployment instead.

# versions of the Rasa container which are running
versions:
  # rasaProduction is the container which serves the production environment
  rasaProduction:
    # enable the rasa-production deployment
    # You can disable the rasa-production deployment to use external Rasa OSS deployment instead.
    enabled: false

    # Define if external Rasa OSS should be used.
    external:
      # enable external Rasa OSS
      enabled: true

      # URL address of external Rasa OSS deployment
      url: "https://rasa-bot.external.deployment.domain.com"

Now you can apply your changes by using the helm upgrade command.

NOTE: Any Rasa Open Source server can stream events to Rasa Enterprise using an event broker. Both Rasa and Rasa Enterprise will need to refer to the same event broker.
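For illustration, both sides could point at the same RabbitMQ instance via their endpoints.yml. The host, credentials, and queue name below are placeholders:

```yaml
# endpoints.yml -- same event broker on both Rasa and Rasa Enterprise
event_broker:
  type: pika
  url: rabbitmq.example.internal   # placeholder host
  username: user                   # placeholder credentials
  password: changeme
  queues:
    - rasa_production_events       # placeholder queue name
```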

You can use the rasa-bot helm chart to deploy Rasa OSS. Visit the rasa chart docs to learn more.

Configuration

All configurable values are documented in values.yaml. For a quick installation we recommend setting at least these values:

| Parameter | Description | Default |
| --- | --- | --- |
| `rasax.passwordSalt` | Password salt which Rasa Enterprise uses for the user passwords. | `passwordSalt` |
| `rasax.token` | Token which the Rasa Enterprise pod uses to authenticate requests from other pods. | `rasaXToken` |
| `rasax.command` | Override the default command to run in the container. | `[]` |
| `rasax.args` | Override the default arguments to run in the container. | `[]` |
| `rasax.jwtSecret` | Secret which is used to sign JWT tokens of Rasa Enterprise users. | `jwtSecret` |
| `rasax.initialUser.username` | Only for Rasa Enterprise. The name of the user that will be created immediately after the first launch (`rasax.initialUser.password` should be specified). | `admin` |
| `rasax.initialUser.password` | Password for the initial user. If you use Rasa Enterprise and leave it empty, no users will be created. If you use Rasa CE and leave it empty, the password will be generated automatically. | `""` |
| `rasa.token` | Token which the Rasa pods use to authenticate requests from other pods. | `rasaToken` |
| `rasa.command` | Override the default command to run in the container. | `[]` |
| `rasa.args` | Override the default arguments to run in the container. | `[]` |
| `rasa.extraArgs` | Additional rasa arguments. | `[]` |
| `rabbitmq.auth.password` | Password for RabbitMQ. | `test` |
| `global.postgresql.postgresqlPassword` | Password for the PostgreSQL database. | `password` |
| `global.redis.password` | Password for Redis. | `password` |
| `rasax.tag` | Version of Rasa Enterprise which you want to use. | `1.4.0` |
| `rasa.version` | Version of Rasa Open Source which you want to use. | `3.8.0` |
| `rasa.tag` | Image tag which should be used for Rasa Open Source. Uses `rasa.version` if empty. | `` |
| `app.name` | Name of your action server image. | `rasa/rasa-x-demo` |
| `app.tag` | Tag of your action server image. | `0.42.0` |
| `app.command` | Override the default command to run in the container. | `[]` |
| `app.args` | Override the default arguments to run in the container. | `[]` |
| `eventService.command` | Override the default command to run in the container. | `[]` |
| `eventService.args` | Override the default arguments to run in the container. | `[]` |
| `nginx.command` | Override the default command to run in the container. | `[]` |
| `nginx.args` | Override the default arguments to run in the container. | `[]` |
| `duckling.command` | Override the default command to run in the container. | `[]` |
| `duckling.args` | Override the default arguments to run in the container. | `[]` |
| `global.progressDeadlineSeconds` | Number of seconds to wait for the Deployment to progress before the system reports that it has failed progressing. | `600` |
| `networkPolicy.enabled` | If enabled, generates NetworkPolicy configs for all combinations of internal ingress/egress. | `false` |
| `postgresql.image.tag` | The PostgreSQL image tag. | `12.8.0` |
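Putting the recommended overrides together, a minimal values file might look like the sketch below. Every value shown is a placeholder you must replace with your own randomly generated secrets:

```yaml
# values.yaml -- minimal sketch, all secrets are placeholders
rasax:
  passwordSalt: "<random-salt>"
  token: "<random-token>"
  jwtSecret: "<random-jwt-secret>"
  initialUser:
    password: "<admin-password>"
rasa:
  token: "<random-rasa-token>"
rabbitmq:
  auth:
    password: "<rabbitmq-password>"
global:
  postgresql:
    postgresqlPassword: "<postgres-password>"
  redis:
    password: "<redis-password>"
```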

Where to get help

  • If you encounter bugs or have suggestions for this Helm chart, please create an issue in this repository.
  • If you have general questions about usage, please create a thread in the Rasa Forum.

How to contribute

We are very happy to receive and merge your contributions. You can find more information about how to contribute to Rasa (in lots of different ways!) here.

To contribute via pull request, follow these steps:

  1. Create an issue describing the feature you want to work on
  2. Create a pull request describing your changes

Development Internals

Releasing a new version of this chart

This repository automatically releases a new version of the Helm chart once new changes are merged. The only required steps are:

  1. Make the changes to the chart
  2. Run helm lint --strict charts/rasa-x
  3. Increase the chart version in charts/rasa-x/Chart.yaml

Changelog

generate-changelog-action is used to capture changelogs from commit messages. This means there is a special format for commit messages if you want them to appear in release change logs.

The format is as follows:

type: description [flags]

where type is the category of the change, description is a short sentence to describe the change, and flags is an optional comma-separated list of one or more of the following (must be surrounded in square brackets):

breaking: alters type to be a breaking change

type can be

  • feature
  • fix
  • build
  • other
  • perf
  • refactor
  • style
  • test
  • doc
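Commit messages following this format might look like these hypothetical examples:

```
fix: handle nil annotations in the rasa-x service template
feature: make service type configurable [breaking]
```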

For more information, please see here.

License

Licensed under the Apache License, Version 2.0. Copyright 2021 Rasa Technologies GmbH. Copy of the license.

rasa-x-helm's People

Contributors

alexweidauer, alwx, amalsgit, ancalita, archish27, camattin, degiz, desmarchris, erohmensing, erost, hotthoughts, indam23, kronos-cm, loomsen, maxbischoff, mbelang, miraai, mprazz, orrshilon, rasa-aadlv, rasa-jmac, rasabot, rasadsa, rgstephens, smirl, souvikg10, stevelam20, tmbo, virtualroot, wochinge


rasa-x-helm's Issues

Nil pointer {}.annotations error - Latest Helm Release

Hello! I am very new to this, and I am in the process of setting up my CI/CD pipeline. The latest helm chart is giving me a bug. This is happening both in my GH Actions and on my local machine. I had the older version of the helm chart saved as 'rasa-x' and added the latest under 'rasa-x-new' and ran identical commands. The latter gave me an error.

Given the recency of the release and changes to the specific code that is causing an error for me, I thought it worth bringing up. Thanks so much, and I appreciate any help!

Restart Rasa-X and Rasa when configmaps changed

Kubernetes doesn't automatically restart Rasa X and Rasa Open Source pods when the configmaps they use change. E.g. if you change the configmap (rasa-configuration-files) in order to change some channel credentials, the configmap will be updated, but the Rasa X deployment and Rasa Open Source won't see the changes (and hence not get the new channel credentials).

A workaround is to manually restart the deployment for rasa-x and rasa-production.

A better solution would be something like this:
https://sanderknape.com/2019/03/kubernetes-helm-configmaps-changes-deployments/
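The usual Helm pattern for this (a sketch, not necessarily the chart's actual template) is to annotate the pod template with a checksum of the rendered configmap, so the Deployment rolls whenever the configmap content changes:

```yaml
# deployment template sketch -- the configmap path is illustrative
spec:
  template:
    metadata:
      annotations:
        checksum/rasa-config: {{ include (print $.Template.BasePath "/rasa-config-files-configmap.yaml") . | sha256sum }}
```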

Failed to write global config. Error: [Errno 13] Permission denied: '/.config'.

For Rasa Open Source 2.0 in the logs, we can see the following error:

Failed to write global config. Error: [Errno 13] Permission denied: '/.config'. Skipping.
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-zj2d5evg because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing

It looks like we should add proper permissions for writing temporary files. @federicotdn @wochinge Have you seen this error before?
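Assuming the chart exposes an extraEnvs-style override for the Rasa deployments (as discussed in other issues here; this is not confirmed for every service), a workaround sketch could be to point Matplotlib at a writable directory:

```yaml
# values.yaml sketch -- assumes rasa.extraEnvs is supported
rasa:
  extraEnvs:
    - name: MPLCONFIGDIR
      value: /tmp/matplotlib   # any writable directory works
```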

Invalid Semantic Version

If the values.yml for rasax.tag contains an invalid semantic version the deployment fails with the error below. Docker hub contains many versions that don't follow semantic versioning: latest, master, 0.35.0a5, etc.

Error: template: rasa-x/templates/rasa-deployments.yaml:41:10: executing "rasa-x/templates/rasa-deployments.yaml" at <include "initContainer.dbMigration" $>: error calling include: template: rasa-x/templates/_helpers.tpl:266:22: executing "initContainer.dbMigration" at <include "db-migration-service.requiredVersion" .>: error calling include: template: rasa-x/templates/_helpers.tpl:248:7: executing "db-migration-service.requiredVersion" at <semverCompare ">= 0.33.0" (include "db-migration-service.version" .)>: error calling semverCompare: Invalid Semantic Version

extraEnvs for eventService

This will allow use of custom schemas with a postgres database.

See the rasa x changelog for a description of the POSTGRESQL_SCHEMA environment variable, which needs to be set on the rasax, rasa and eventService services.

Also update rasa x documentation page on kubernetes-openshift.

no duckling url configured

In order to connect to duckling via ENABLE_DUCKLING=True, you need to provide the correct URL in the configuration file. This should be done automatically in the setup.

service type is not configurable for rasa and rasa-x services [default type is ClusterIP]

I'm trying to use ingress instead of nginx reverse proxy, since my requirement involves multiple rasa namespaces and use of aws-load-balancer-controller.

But aws-load-balancer-controller failed to set up because the default service type is ClusterIP and it has to be NodePort.
I'm able to change it to NodePort for the rabbitmq service using the existing values.yml file, but a configurable service type is not available for the rasa and rasa-x services.

Attaching aws-load-balancer-controller issue reference:
kubernetes-sigs/aws-load-balancer-controller#1695

Unable to upgrade rasa-x via Server quick install script

When running the server quick install script for updating, i.e. curl -s get-rasa-x.rasa.com | sudo bash, I get an error

Error: UPGRADE FAILED: template: rasa-x/templates/rasa-x-service.yaml:8:16: executing "rasa-x/templates/rasa-x-service.yaml" at <.Values.rasax.service.annotations>: nil pointer evaluating interface {}.annotations

Support full startup command replacement

Add support for a values.yml option to specify the full startup command line with options. In some cases, users need to run the rasa command from within an APM tool such as newrelic or datadog.

newrelic-admin run-program rasa run
ddtrace-run python rasa run
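With the rasa.command and rasa.args overrides listed in the configuration table, a sketch for wrapping the startup in an APM launcher could look like this (the newrelic-admin wrapper is the example from this issue, not a chart default):

```yaml
# values.yaml sketch -- wrap the rasa startup in an APM tool
rasa:
  command: ["newrelic-admin"]
  args: ["run-program", "rasa", "run"]
```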

Rasa-X deployment gets updated, although nothing has changed

Hi.

I'm trying to understand the reasoning behind the Deployment Strategy type of Recreate for rasa-x. As far as I understand, rasa-x is just the user interface. Why do we have to Recreate if it's just the UI? It integrates with git anyway to pull the latest changes, no?

Also what's the reasoning behind the checksum parameters? This will almost always trigger a redeploy. It breaks our workflow, and more importantly, the workflow of our content creator colleagues, who then ask us why "it's broken all the time".

Thank you.

How to disable authentication in rasa-x

Hello, we want to integrate OAuth authentication using the Nginx ingress to authenticate to rasa-x, and so we need to disable the default login form in rasa-x. However, there is no option in the rasa-x helm chart to do so.

Ingress' annotationsRasaX not getting applied

When running helm template, the annotations specified under annotationsRasaX in Ingress: don't get applied.

Ingress Annotation:

ingress:
  # enabled should be `true` if you want to use this ingress.
  # Note that if `nginx.enabled` is `true` the `rasa/nginx` image is used as reverse proxy.
  # In order to use nginx ingress you have to set `nginx.enabled=false`.
  enabled: true
  # annotations for the ingress - annotations are applied for the rasa and rasax ingresses
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  # annotationsRasa is extra annotations for the rasa nginx ingress
  annotationsRasa: {}
  # annotationsRasaX is extra annotations for the rasa x nginx ingress
  annotationsRasaX:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"

helm template output:

# Source: rasa-x/templates/ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-rasa-x
  labels:
    helm.sh/chart: rasa-x-1.7.12
    app.kubernetes.io/name: rasa-x
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.33.0"
    app.kubernetes.io/managed-by: Helm

When these annotations get moved to annotations: in Ingress:, however, they do work:

# ingress settings
ingress:
  # enabled should be `true` if you want to use this ingress.
  # Note that if `nginx.enabled` is `true` the `rasa/nginx` image is used as reverse proxy.
  # In order to use nginx ingress you have to set `nginx.enabled=false`.
  enabled: true
  # annotations for the ingress - annotations are applied for the rasa and rasax ingresses
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  # annotationsRasa is extra annotations for the rasa nginx ingress
  annotationsRasa: {}
  # annotationsRasaX is extra annotations for the rasa x nginx ingress
  annotationsRasaX: {}

helm template output:

# Source: rasa-x/templates/ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-rasa-x
  labels:
    helm.sh/chart: rasa-x-1.7.12
    app.kubernetes.io/name: rasa-x
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "0.33.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"

Idea: disable Ingress by default

From what I see, most of the helm charts (for example, https://github.com/helm/charts/blob/master/stable/wordpress/values.yaml, https://github.com/helm/charts/blob/master/stable/keycloak/values.yaml) include Ingress but it's disabled by default.

I think it makes perfect sense because

  1. it's not necessary
  2. it needs to be configured anyway, so enabling it with the default hostname makes no sense

Another possible solution is to disable ingress if there is no hostname specified.

I also think that if we allow Ingress to be enabled, there should also be an option to use CertManager with it.

Ingress/Egress Requirements

Can we document the default ingress/egress requirements in the readme and the chart? I think these are the defaults (not sure about the event service):

All connections are TCP unless indicated otherwise.

| Source | Destination |
| --- | --- |
| rasa-production | rasa-x:5002 |
| rasa-production | postgresql:5432 |
| rasa-production | rabbit:5672 |
| rasa-production | app:5055 |
| rasa-production | duckling:8000 |
| rasa-production | redis:6379 |
| rasa-x | postgresql:5432 |
| rasa-x | rasa-production:5005 |
| rasa-x | rasa-worker:5005 |
| rasa-x | event:5673 |
| rasa-x | rabbit:5672 |
| event | rabbit:5672 |
| app | |
| nginx | rasa-production:5005 |
| nginx | rasa-x:5002 |
| \<Rasa X Users\> | nginx:80 |
| \<Rasa Channels\> | nginx:80 |
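As an illustration, one of the rows above (rasa-production to postgresql:5432) could be expressed as a NetworkPolicy like the sketch below. The label selectors are placeholders and will differ from the chart's actual labels:

```yaml
# NetworkPolicy sketch -- labels are placeholders, adjust to the chart's labels
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rasa-production-to-postgresql
spec:
  podSelector:
    matchLabels:
      app: postgresql            # placeholder label
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: rasa-production   # placeholder label
      ports:
        - protocol: TCP
          port: 5432
```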

templating fails for non semantic versions

Hi,
as mentioned in this comment, helm templating fails if rasa.tag is a commit hash.

$ helm upgrade $CI_COMMIT_REF_SLUG \
  rasa-x/rasa-x \
  -f deployment/rasa-x/values.yaml \
  -i \
  --namespace $CI_COMMIT_REF_SLUG \
  --create-namespace \
  --set ingress.hosts[0].host=$deployment_url \
  --set ingress.tls[0].hosts[0]=$deployment_url \
  --set ingress.tls[0].secretName=$CI_COMMIT_REF_SLUG-tls \
  --set rasa.tag=$CI_COMMIT_SHA \
  --set app.tag=$CI_COMMIT_SHA \
  --wait \
  --timeout 1200s

Error: template: rasa-x/templates/rasa-x-deployment.yaml:23:33: executing "rasa-x/templates/rasa-x-deployment.yaml" at <include (print $.Template.BasePath "/rasa-config-files-configmap.yaml") .>: error calling include: template: rasa-x/templates/rasa-config-files-configmap.yaml:40:12: executing "rasa-x/templates/rasa-config-files-configmap.yaml" at <semverCompare ">= 1.9.0" .Values.rasa.tag>: error calling semverCompare: Invalid Semantic Version

I think this is a very common pattern...

README updates

Documentation question

  • Should Chart.appVersion in the table be rasax.tag
  • How to add the --debug option to the rasa startup args
  • How do I add channel code to rasa-production

Make imagePullSecrets configurable per service

Currently, the images.imagePullSecrets variable is global and it isn't possible to define pull secrets per service.

A local imagePullSecrets variable should override the global one if it's used.

Rasa X not reachable when using custom port

Summary

  • When setting .Values.rasax.port it will change the rasax service port, but will not change the port rasax is using
  • When adding extraEnvs to set SELF_PORT correctly, the service is hardcoded to point to 5002.
  • It should be a simple PR to always add SELF_PORT, and to update the target port to a named port like in other services

I will raise a PR

Problem starting rasa-x default chart in digital ocean kubernetes

Hi, I've been trying to spin up this helm chart in a digital ocean kubernetes cluster for the last few days but have been unsuccessful. I finally tracked the issue down to a couple things and thought I should share them:

  1. For whatever reason, the default app version, which is based off the .Chart.AppVersion of 0.24.1 is failing. When I specifically set the tags for rasax, eventService, and nginx to 0.24.7, those issues resolved themselves.

  2. The PVC claim that rasa-x was making. It seems that, by default, it tries to claim a storage class of "standard" whereas DO clusters seem to want a storage class of "do-block-storage".

The other three PVCs that are created through this chart, from the helm stable charts, are all created successfully. I think it's because by default those charts use templates that inspect the global state. For instance, below is the template from the postgres helm chart.

I'd like to propose using a similar template for the rasa-x storage class.

{{/*
Return  the proper Storage Class
*/}}
{{- define "postgresql.storageClass" -}}
{{/*
Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
but Helm 2.9 and 2.10 does not support it, so we need to implement this if-else logic.
*/}}
{{- if .Values.global -}}
    {{- if .Values.global.storageClass -}}
        {{- if (eq "-" .Values.global.storageClass) -}}
            {{- printf "storageClassName: \"\"" -}}
        {{- else }}
            {{- printf "storageClassName: %s" .Values.global.storageClass -}}
        {{- end -}}
    {{- else -}}
        {{- if .Values.persistence.storageClass -}}
              {{- if (eq "-" .Values.persistence.storageClass) -}}
                  {{- printf "storageClassName: \"\"" -}}
              {{- else }}
                  {{- printf "storageClassName: %s" .Values.persistence.storageClass -}}
              {{- end -}}
        {{- end -}}
    {{- end -}}
{{- else -}}
    {{- if .Values.persistence.storageClass -}}
        {{- if (eq "-" .Values.persistence.storageClass) -}}
            {{- printf "storageClassName: \"\"" -}}
        {{- else }}
            {{- printf "storageClassName: %s" .Values.persistence.storageClass -}}
        {{- end -}}
    {{- end -}}
{{- end -}}
{{- end -}}

Update API version for Ingress

The API versions currently used for the Ingress templates are deprecated. The templates should be updated so that a deployment uses the correct Ingress API version compatible with the Kubernetes cluster where the rasa-x helm chart is installed.

networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
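For reference, the networking.k8s.io/v1 form differs mainly in the backend shape and the required pathType field. A minimal sketch, with host and service names as placeholders:

```yaml
# networking.k8s.io/v1 Ingress sketch (host/service names are placeholders)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rasa-x
spec:
  rules:
    - host: bot.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rasa-x
                port:
                  number: 5002
```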

Do not bump Rasa X version multiple times

The GitHub-action bot will bump the Rasa X version every time it checks, resulting in duplicated branches and pull requests. We want to check whether such a branch or PR already exists, so that no more duplicate branches and PRs are created.

UPGRADE FAILED: "0.28.3" has no deployed releases

The update is requested in the Rasa X interface,

but the following commands

helm repo update
helm dependency update
helm upgrade 0.28.3 rasa-x/rasa-x

return the following error:

Error: UPGRADE FAILED: "0.28.3" has no deployed releases

make Redis installation optional

If we disable the redis installation, the rendered charts are unusable. In order to make this work, some changes need to be made to the templates.

If the redis installation is disabled, there will be a missing key error:

Rendered rasa-deployments.yaml:

[...]
        - name: "REDIS_PASSWORD"
          valueFrom:
            secretKeyRef:
              name: "rasa-x-redis"
              key: 
[...]

Decouple rasa/nginx and make it optional in favor of Ingress

Great work on the chart, big improvements since my older fork, especially around decoupling rabbitmq and redis into official sub-charts and general normalization. However the rasa/nginx image deployment is still required as the reverse proxy for the Rasa distributed system.

What's the reason of requiring nginx as a reverse-proxy for the Rasa system? I use the official k8s nginx-ingress controller for all my cluster networking, and I want to refrain from adding another container and networking layer.

I've had to dig in rasa/nginx:latest image (Where is the Dockerfile?) and convert /opt/bitnami/nginx/conf/conf.d/rasax.nginx to Kubernetes 1st-class citizen Ingress objects with all the appropriate rewrites and custom snippets and made rasa/nginx optional.

INGRESS                HOSTS               PATHS        SERVICES
+------                +----               +----        +-------
rasa-app               bot.foobar.com      /app/(.*)    app:5055
rasa-core-production   bot.foobar.com      /core/(.*)   rasa-production:5005
rasa-core-socket       bot.foobar.com      /socket.io   rasa-production:5005
rasa-x                 bot.foobar.com      /            rasa-x:5002
rasa-x-chat            bot.foobar.com      /api/chat    rasa-x:5002
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rasa-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: bot.foobar.com
    http:
      paths:
      - backend:
          serviceName: app
          servicePort: 5055
        path: /app/(.*)

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rasa-core-production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: bot.foobar.com
    http:
      paths:
      - backend:
          serviceName: rasa-production
          servicePort: 5005
        path: /core/(.*)

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
  name: rasa-core-socket
spec:
  rules:
  - host: bot.foobar.com
    http:
      paths:
      - backend:
          serviceName: rasa-production
          servicePort: 5005
        path: /socket.io

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rasa-x
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
spec:
  rules:
  - host: bot.foobar.com
    http:
      paths:
      - backend:
          serviceName: rasa-x
          servicePort: 5002
        path: /

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rasa-x-chat
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($arg_environment = "") {
          rewrite ^ /core/webhooks/rasa/webhook last;
      }
      if ($arg_environment = "production") {
          rewrite ^ /core/webhooks/rasa/webhook last;
      }
spec:
  rules:
  - host: bot.foobar.com
    http:
      paths:
      - backend:
          serviceName: rasa-x
          servicePort: 5002
        path: /api/chat

Deployment of production and worker fails due to latest alpine image

short description

Hi,

our deployments start failing due to the latest update of the alpine image, which is used for the init-db container.

Production and worker pods remain in Init, because the init-db container doesn't finish.

long story

The change must have been introduced between alpine@sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436 and alpine:latest which at the time of this writing is alpine@sha256:d9a7354e3845ea8466bb00b22224d9116b183e594527fb5b6c3d30bc01a20378.

The problem is, that nslookup now returns 1 instead of 0 for nslookup rasa-x-db-migration-service-headless.

 nvarz:~$ k8s_run_image alpine@sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436 
/ # nslookup rasa-x-db-migration-service-headless 1>/dev/null; echo $?
0


 nvarz:~$ k8s_run_image alpine:latest
/ # nslookup rasa-x-db-migration-service-headless 1>/dev/null; echo $?
1

This in turn causes the until loop to run forever:

until nslookup rasa-x-db-migration-service-headless 1> /dev/null; do echo Waiting for the database migration service; sleep 2; done 

and thus the init-db doesn't return success and the production and worker pods are stuck in Init:

rasa-x-app-749447984-sxkc6                1/1     Running    0          48m
rasa-x-db-migration-service-0             1/1     Running    2          48m
rasa-x-duckling-85b858bc77-6kvl4          1/1     Running    0          48m
rasa-x-postgresql-0                       1/1     Running    0          48m
rasa-x-rabbit-0                           1/1     Running    0          48m
rasa-x-rasa-production-7c894fb75b-fzdgq   0/1     Init:0/1   0          48m
rasa-x-rasa-worker-759584656-9mml9        0/1     Init:0/1   0          48m
rasa-x-rasa-x-757fd968fc-brmml            1/1     Running    0          48m
rasa-x-redis-master-0                     1/1     Running    0          48m

current workaround

To work around this problem we can use:

--set dbMigrationService.initContainer.image="alpine@sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436"

in our helm command.
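Beyond pinning the image, a more defensive init script could cap the number of attempts so that a behavior change in nslookup fails fast instead of looping forever. A sketch, assuming a POSIX shell in the init container; the wait_for_host helper is hypothetical, not part of the chart:

```shell
#!/bin/sh
# wait_for_host: retry a DNS lookup up to $2 times (default 30), then give
# up with status 1 instead of spinning forever like the unbounded until loop.
wait_for_host() {
  host="$1"
  max="${2:-30}"
  i=0
  until nslookup "$host" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$max" ]; then
      echo "Timed out waiting for $host" >&2
      return 1
    fi
    echo "Waiting for the database migration service"
    sleep 2
  done
}

# Usage in the init container would be along the lines of:
# wait_for_host rasa-x-db-migration-service-headless 60 || exit 1
```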

Quick install script: nginx initialProbeDelay results in nil pointer

Running the quick-install script results in:

Error: UPGRADE FAILED: template: rasa-x/templates/nginx-deployment.yaml:53:41: executing "rasa-x/templates/nginx-deployment.yaml" at <.Values.nginx.livenessProbe.initialProbeDelay>: nil pointer evaluating interface {}.initialProbeDelay

Error: failed to download "rasa-x/rasa-x" (hint: running `helm repo update` may help)

Hi,

Last time the rasa-x chart worked without problems; since today a fresh installation fails with

Martins-MacBook-Pro:/ mf$ helm install --namespace rasa --values values.yml --generate-name rasa-x/rasa-x
Error: failed to download "rasa-x/rasa-x" (hint: running `helm repo update` may help)

repo update:

Martins-MacBook-Pro:/ mf$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
...Successfully got an update from the "rasa-x" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈

repo list

Martins-MacBook-Pro:/ mf$ helm repo list
NAME         	URL                                              
stable       	https://kubernetes-charts.storage.googleapis.com/
bitnami      	https://charts.bitnami.com/bitnami               
ingress-nginx	https://kubernetes.github.io/ingress-nginx       
rasa-x       	https://rasahq.github.io/rasa-x-helm     

helm version
version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"dirty", GoVersion:"go1.15.2"}

Helm itself generally seems to work; installing a Bitnami MySQL chart succeeds without issues.

Any ideas?

Martin

Feature: extraManifests support

It would be useful to support an `extraManifests` value for creating extra resources, such as a `Secret` that can then be referenced via `existingSecret`, from within the chart itself.

For example, see `extraManifests[]` in the Airflow chart: https://github.com/airflow-helm/charts/tree/main/charts/airflow#docs-kubernetes---additional-manifests

Implementation: https://github.com/airflow-helm/charts/blob/main/charts/airflow/templates/extra-manifests.yaml

{{- range .Values.extraManifests }}
---
{{ tpl (toYaml .) $ }}
{{- end }}
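Usage in values.yaml could then look like the sketch below; the secret name and key are made up for illustration, and because the template pipes each entry through `tpl`, Helm template expressions inside the manifests get rendered:

```yaml
extraManifests:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: my-rasa-credentials    # hypothetical name, to be referenced via existingSecret
    type: Opaque
    stringData:
      password: "{{ .Release.Name }}-generated"   # tpl renders template expressions
```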

Please let me know if I can send a PR.

global.storageClass

Redis, PostgreSQL, and RabbitMQ use the key `global.storageClass` to set the storage class for their persistent volume claims. We should stick to this convention as well instead of defining an extra key.
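For illustration, adopting the convention would let a single value drive every PVC in the release (the class name below is just an example):

```yaml
global:
  storageClass: standard   # example class; would apply to Redis, PostgreSQL, RabbitMQ, and the chart's own PVCs
```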

nginx instance will be in a crashloop if there's no custom action server

The following lines in `/opt/bitnami/entrypoint.sh` of the `rasa/nginx` image can take roughly (14 s curl timeout, in my case, + 1 s sleep) × 10 = 150 s to finish:

check_if_app_container_is_connected () {
    number_of_tries=0
    curl_result_code=${CURL_STATUS_CODE_FOR_HOST_NOT_FOUND}

    # Retry this ten times so that we can be sure
    while [[ ${number_of_tries} -lt 10 ]]  && [[ ${curl_result_code} == ${CURL_STATUS_CODE_FOR_HOST_NOT_FOUND} ]]
    do
        curl --output /dev/null app  # Check if custom action server is up
        curl_result_code=$?  # Save the result of the check
        echo ${curl_result_code}
        number_of_tries=$((number_of_tries+1))  # Increment the number of tries

        sleep 1s  # wait 1s between the retries
    done

    return ${curl_result_code}
}

Meanwhile, with the default settings, the liveness probe (https://github.com/RasaHQ/rasa-x-helm/blob/master/charts/rasa-x/values.yaml#L2340) will probably have failed by then. I think the proper fix is to add a timeout to the `curl` call in `entrypoint.sh`, but the `rasa/nginx` Docker image is private, so I am reporting it here.

The workaround for now (or the chart-side fix) is to override `initialProbeDelay` with a bigger number.
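A sketch of the in-image fix suggested above, assuming the entrypoint can be edited: capping each attempt with curl's `--max-time` flag bounds the whole loop at roughly 10 × (5 s + 1 s), well under the default probe delay. The hardcoded 6 is curl's documented exit code for an unresolvable host:

```shell
CURL_STATUS_CODE_FOR_HOST_NOT_FOUND=6  # curl exit code 6: could not resolve host

check_if_app_container_is_connected () {
    number_of_tries=0
    curl_result_code=${CURL_STATUS_CODE_FOR_HOST_NOT_FOUND}

    # Retry up to ten times, but cap each attempt at 5 seconds so
    # the loop finishes before the liveness probe gives up.
    while [ "${number_of_tries}" -lt 10 ] && [ "${curl_result_code}" -eq "${CURL_STATUS_CODE_FOR_HOST_NOT_FOUND}" ]
    do
        curl --max-time 5 --silent --output /dev/null "${1:-app}"  # check if the custom action server is up
        curl_result_code=$?
        number_of_tries=$((number_of_tries+1))
        sleep 1
    done

    return "${curl_result_code}"
}
```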

Show the Rasa CE generated password after the installation

Currently, the following information is shown after the successful rasa-x-helm installation:

NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thanks for installing Rasa X  !

Creating a Rasa X user:

- Rasa X CE: go to the terminal of the `rasa-x` pod and then
  execute `python scripts/manage_users.py create --update me <your password> admin`
  to set your password
- Rasa X EE: go to the terminal of the `rasa-x` pod and then
  execute `python scripts/manage_users.py create <your username> <your password> <role of your user>`.
  You can then log in using these credentials.

Also check out the Rasa X docs here for more help:
https://rasa.com/docs/rasa-x/

We show that the password can be changed, but this information is not sufficient (it doesn't say which pod to access or how to do that), so the idea is to print the generated password after a successful installation.
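As a sketch of the idea, the chart's NOTES.txt could print a retrieval command rather than the password itself; the secret name `{{ .Release.Name }}-rasa` and the key `initialPassword` below are hypothetical and would need to match whatever the chart actually generates:

```
To retrieve the generated admin password, run:

  kubectl get secret --namespace {{ .Release.Namespace }} {{ .Release.Name }}-rasa \
    -o jsonpath="{.data.initialPassword}" | base64 --decode
```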

Add CHANGELOG

Add a workflow that generates a changelog for release.

Things that have to be done:

  • generate a file with changelog entries
  • attach changelog entries to a GH release
  • add information to README file on how to create a changelog entry

Support multiple environments for Rasa Enterprise

Rasa Enterprise supports multiple deployment environments.

The Helm chart currently doesn't support it since it is currently hardcoded to production and worker: https://github.com/RasaHQ/rasa-x-helm/blob/master/charts/rasa-x/templates/rasa-x-config-files-configmap.yaml.

The multiple Rasa deployments / services are already created in a loop when added here. Hence, the missing pieces are to:

  • add the missing environments to the config map for environments.yml
  • resolve the naming mismatch: the deployments are currently named rasaProduction and rasaWorker in values.yml, but simply production and worker in the config map for environments.yml
  • document the procedure in the Kubernetes/OpenShift advanced configuration section
  • while at it, also update the out-of-date documentation page on Deployment Environments; that page describes docker-compose but will be linked from the Kubernetes/OpenShift page
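For illustration, with the hardcoded config map replaced by a loop over the values, an environments.yml with one extra environment could look like this (the `staging` entry and the exact keys are illustrative, not the chart's actual output):

```yaml
rasa:
  production:
    url: http://rasa-production:5005
    token: <rasa token>
  worker:
    url: http://rasa-worker:5005
    token: <rasa token>
  staging:                        # hypothetical extra environment
    url: http://rasa-staging:5005
    token: <rasa token>
```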

Use nginx-ingress for k3s

Currently, the e2e tests don't create an ingress resource; access to Rasa X goes via a Kubernetes service.
We want to test the ingress configuration as well.

To do:

  • set up k3s with ingress-nginx
  • install Rasa X with enabled ingress
  • run tests that use the ingress resource

PV not created

Hi, I have a problem using Helm.

When I deployed Rasa using Kubernetes and Helm, the PV was not created.
I tried this on minikube.
I then tested again on Google Kubernetes Engine, following the tutorial.
There it worked, and the PV was created automatically.
I looked for the part of the Helm chart that declares the PV, but I couldn't find it.
What is the cause of this issue?

Add test deployment to CI

We should do a test deployment as part of the CI. This should be easy once we have the quick-install script.

check if we can get rid of the fsGroup config in the Helm chart

You currently have to set `fsGroup` to 0 when running the Helm chart on K8s, and people tend to forget that. However, we don't need to do this for the subcharts, so there is presumably a way around it. We should check how our subcharts handle this and then handle it the same way.
