incubator-devlake-helm-chart's Issues

Both deployments select the same pods

Hi everyone,
as far as I can see, both Deployments use the same selector labels. See:
https://github.com/apache/incubator-devlake-helm-chart/blob/main/charts/devlake/templates/deployments.yaml#L36
https://github.com/apache/incubator-devlake-helm-chart/blob/main/charts/devlake/templates/deployments.yaml#L127

This leads to both Deployments thinking they are responsible for both the "lake" and the "ui" pods. E.g., in my K9s it looks like this (screenshots attached).

That both Deployments share those pods looks wrong to me. Therefore, I think they should use different selector labels, or at least add, e.g., a devlakeComponent label, as sketched below.
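A minimal sketch of what disambiguated selectors could look like in the deployment template (helper names assumed from standard Helm chart scaffolding; a later issue on this page shows the chart eventually adopted selectors of exactly this shape):

selector:
  matchLabels:
    app.kubernetes.io/name: {{ include "devlake.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    devlakeComponent: lake   # and "ui" in the UI deployment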

Helm Chart installation on AWS EKS: Unable to attach or mount volumes

I installed DevLake using the Helm chart on my AWS EKS cluster, following the previous instructions from ticket #87, but without success; it is still not working.

My Helm values.yaml looks like this:

replicaCount: 1
imageTag: v0.16.0-beta17

mysql:
  useExternal: false
  externalServer: 127.0.0.1
  externalPort: 3306
  username: merico
  password: merico
  database: lake
  rootPassword: admin
  storage:
    class: "gp2"
    size: 50Gi

  image:
    repository: mysql
    tag: 8
    pullPolicy: IfNotPresent

  resources: {}

  nodeSelector: {}
  tolerations: []

  affinity: {}


grafana:
  image:
    repository: apache/devlake-dashboard
    pullPolicy: Always

  useExternal: false

  externalUrl: ''

  resources: {}

  nodeSelector: {}

  tolerations: []

  affinity: {}


lake:
  image:
    repository: apache/devlake
    pullPolicy: Always
  storage:
    class: "gp2"
    size: 100Mi
  dotenv:
    API_TIMEOUT: "120s"
    API_RETRY: "3"
    API_REQUESTS_PER_HOUR: "10000"
    PIPELINE_MAX_PARALLEL: "1"
    IN_SECURE_SKIP_VERIFY: "false"

  hostNetwork: false

  resources: {}

  nodeSelector: {}

  tolerations: []

  affinity: {}

  loggingDir: "/app/logs"
  loggingLevel: "info"

ui:
  image:
    repository: apache/devlake-config-ui
    pullPolicy: Always

  resources: {}

  nodeSelector: {}

  tolerations: []

  affinity: {}

  basicAuth:
    enabled: false
    user: admin
    password: admin
    useSecret: false
    autoCreateSecret: true
    secretName: devlake-auth


alpine:
  image:
    repository: alpine
    tag: 3.16
    pullPolicy: IfNotPresent

service:
  type: NodePort
  uiPort: 32001
  grafanaPort: 32002

ingress:
  enabled: true
  enableHttps: false
  className: "alb"
  prefix: /
  tlsSecretName: ""
  httpPort: 80
  httpsPort: 443
  useDefaultNginx: false
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip

option:
  localtime: /etc/localtime

  database: mysql

  useConnectionDetailsSecret: false

  connectionSecretName: devlake-db-connection

  autoCreateSecret: true

I get multiple errors like:

Unable to attach or mount volumes: unmounted volumes=[devlake-lake-config], unattached volumes=[devlake-lake-localtime kube-api-access-r7dgn devlake-lake-config]: timed out waiting for the condition

Unable to attach or mount volumes: unmounted volumes=[devlake-lake-config], unattached volumes=[kube-api-access-r7dgn devlake-lake-config devlake-lake-localtime]: timed out waiting for the condition

Unable to attach or mount volumes: unmounted volumes=[devlake-lake-config], unattached volumes=[devlake-lake-config devlake-lake-localtime kube-api-access-r7dgn]: timed out waiting for the condition

AttachVolume.Attach failed for volume "pvc-09b75f09-f0e4-4700-b42c-e9f901a13bf2" : volume attachment is being deleted

MountVolume.WaitForAttach failed for volume "pvc-09b75f09-f0e4-4700-b42c-e9f901a13bf2" : volume attachment is being deleted

I also tried setting the storage class to empty (default), with the same behaviour. Any ideas?
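A few standard checks that may help narrow this down (the attach/mount errors above usually point at the storage layer, e.g. the EBS CSI driver on EKS, rather than at the chart itself; the pod name below is a placeholder):

kubectl -n devlake get pvc
kubectl -n devlake describe pod <devlake-lake-pod>
kubectl get volumeattachment | grep pvc-09b75f09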

[Bug][Helm chart] Path based routing doesn't work for common host

There is an issue with the Helm chart's Ingress resources. We use path-based routing for multiple applications on the same host, and the DevLake Helm chart builds the Ingress resource from the Helm values file, taking the path from the chart's prefix value. Based on how our ingresses are set up, we are supposed to use devlake as the prefix. But this option doesn't seem to be working: the application looks for its assets at / and throws 404 errors in the logs, so I basically get a blank page when I try to access https:///devlake.

Helm values file:

---
devlake:
  ingress:
    enabled: true
    enableHttps: false
    useDefaultNginx: true
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: "true"
    hostname: devlake-hostname.com
    prefix: devlake
    httpPort: 80

Ingress logs

infra-ingress-controller-ingress-nginx-controller-6bc664dbs2svg:controller 172.20.3.79 - - [19/Jun/2023:23:47:53 +0000] "GET /assets/index-afa74f6a.css HTTP/2.0" 404 548 "https://dvlake-hostname.com/devlake" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36" 366 0.001 [upstream-default-backend] [] 127.0.0.1:8181 548 0.001 404 1cad31d8fa0f700b151b7624b57aabaa

Ingress resource

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  generation: 3
  name: devlake
  namespace: devlake
spec:
  rules:
  - host: devlake-hostname.com
    http:
      paths:
      - backend:
          service:
            name: devlake-grafana
            port:
              number: 3000
        path: /devlake/grafana(/|$)(.*)
        pathType: Prefix
      - backend:
          service:
            name: devlake-ui
            port:
              number: 4000
        path: /devlake(/?|$)(.*)
        pathType: Prefix

If we use / as the prefix it works just fine, but we are not hosting anything at the root of the URL. I think this is because the base_href path is not set in index.html.

Having an option to override the base path would be ideal here, so that one can set any custom path to match their environment; a hypothetical sketch of such an option follows.
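A hypothetical sketch of such an option (the basePath key and its wiring into index.html are made up for illustration; the chart did not expose this at the time of the issue):

ui:
  # hypothetical: would be templated into <base href="..."> in index.html
  basePath: /devlake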

Grafana Kubernetes Pod stuck in Container Init

Release: 1.0.0-beta5

After upgrading DevLake via the Helm chart, I encountered the following error:

Events:
  Type     Reason              Age                    From                     Message
  ----     ------              ----                   ----                     -------
  Normal   Scheduled           6m47s                  default-scheduler        Successfully assigned devlake/devlake-grafana-6554869b4-czhzw to ip-10-85-44-217.ap-south-1.compute.internal
  Warning  FailedAttachVolume  6m47s                  attachdetach-controller  Multi-Attach error for volume "pvc-fff59316-7f48-406d-abb1-9c02c0895571" Volume is already used by pod(s) devlake-grafana-79775f95f8-4ksq9
  Warning  FailedMount         2m28s (x2 over 4m44s)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[storage], unattached volumes=[storage kube-api-access-sh8m6 config]: timed out waiting for the condition
  Warning  FailedMount         10s                    kubelet                  Unable to attach or mount volumes: unmounted volumes=[storage], unattached volumes=[kube-api-access-sh8m6 config storage]: timed out waiting for the condition

Workaround:

➜  ~ k -n devlake scale deployment/devlake-grafana --replicas=0
deployment.apps/devlake-grafana scaled
➜  ~ k -n devlake scale deployment/devlake-grafana --replicas=1
deployment.apps/devlake-grafana scaled
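The Multi-Attach error above suggests the old and new Grafana pods briefly coexist while both claim the same ReadWriteOnce volume, which a rolling update cannot hand over. Assuming the bundled Grafana chart exposes the upstream chart's deploymentStrategy value, switching it to Recreate should avoid the overlap:

grafana:
  deploymentStrategy:
    type: Recreate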

[improvement][ci] make sure each merge promotes the chart version

Some basic rules need to be added to CI:

  • if anything changes under the charts folder, the version field in charts/devlake/Chart.yaml should be promoted, e.g. v0.14.5 -> v0.14.6 or v0.14.5 -> v0.15.0
  • the chart and DevLake versions should stay aligned: the same major and minor version numbers, possibly with different patch numbers, e.g. chart version v0.14.5 for DevLake version v0.14.2
  • each merge to master should be fast-forwardable, and all commits need to be squashed into one before merging

I'll add some initial CI actions to check this on PRs (a rough sketch of the version-bump check is below), along with some initial guidance.
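A rough sketch of the first rule as a GitHub Actions check (workflow and step names are mine; treat it as a starting point rather than the final CI):

name: chart-version-check
on: pull_request
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Require a Chart.yaml version bump when charts/ changes
        run: |
          git fetch origin "$GITHUB_BASE_REF"
          base="origin/$GITHUB_BASE_REF"
          # only enforce the bump if something under charts/ actually changed
          if ! git diff --quiet "$base"...HEAD -- charts/; then
            if ! git diff "$base"...HEAD -- charts/devlake/Chart.yaml | grep -q '^+version:'; then
              echo "charts/ changed but the version in charts/devlake/Chart.yaml was not bumped" >&2
              exit 1
            fi
          fi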

Encryption Secret error

I am experiencing this error from the devlake pod; the pod keeps crashing in a loop.

time="2024-02-20 11:11:06" level=info msg="\x1b[31;1m/app/impls/dalgorm/dalgorm.go:209 \x1b[35;1minvalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret\n\x1b[0m\x1b[33m[0.934ms] \x1b[34;1m[rows:3]\x1b[0m SELECT * FROM _devlake_blueprints WHERE enable = true AND is_manual = false ORDER BY id DESC"
panic: invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret; invalid encryptionSecret (500)"
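This panic usually means the ENCRYPTION_SECRET the pod starts with no longer matches the one used to encrypt data already stored in the database. The chart's default values (quoted in full in a later issue on this page) show where the secret lives and how to generate one:

# generation command taken from the chart's own values.yaml comments
openssl rand -base64 2000 | tr -dc 'A-Z' | fold -w 128 | head -n 1

lake:
  encryptionSecret:
    # must stay stable across upgrades, or previously encrypted data becomes unreadable
    secret: "<the generated value>"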

Proposal to manage Grafana chart as dependency

As of now, Grafana is integrated as a Deployment inside the DevLake Helm chart. I'd like to propose removing it and leveraging the official chart as a dependency instead. This would help keep up with Grafana's improvements and fixes, and it would standardise its usage.
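A minimal sketch of what the dependency could look like in Chart.yaml (the version is a placeholder to be pinned; the repository URL is the official Grafana charts repo):

dependencies:
  - name: grafana
    version: "x.y.z"   # pin to a released grafana chart version
    repository: https://grafana.github.io/helm-charts
    condition: grafana.enabled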

Helm Chart GHPages broken

Error: looks like "https://apache.github.io/incubator-devlake-helm-chart" is not a valid chart repository or cannot be reached: failed to fetch https://apache.github.io/incubator-devlake-helm-chart/index.yaml : 404 Not Found

Optional deployment of grafana

If there is already a grafana installed in the cluster, it would be interesting to be able to import the datasource and the dashboards and not have the product implemented twice.

Sorry for my English.

I can do a pull request for the changes.

[Question][Installation] Error while finding kubernetes cluster

I am using Helm to install DevLake, and when running the command below:

helm install devlake devlake/devlake --version=0.17.0-beta4

I get an error saying "INSTALLATION FAILED: Kubernetes cluster unreachable: the server could not find the requested resource".

Am I doing something wrong, or is the version wrong?
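"Kubernetes cluster unreachable" generally means Helm cannot reach the cluster at all (a kubeconfig/context problem), not that the chart version is wrong. Some quick standard checks:

kubectl config current-context
kubectl cluster-info
helm version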

[Question][Installation] Why doesn't the URL to access the Grafana dashboard work?

Good afternoon Teams,

I'm doing a PoC and deployed Apache DevLake using the Helm chart [ https://apache.github.io/incubator-devlake-helm-chart | devlake | 0.19.0-beta6 ], deploying the DevLake UI, database, Grafana, and the ingress.

The deployment went well, and I'm able to manage the tool pretty well: adding connections, creating projects, and collecting data without any problems.

Some evidence (screenshots attached).

Unfortunately, when I try to access the Dashboards option (at the top right of the home page), my NGINX gives me a 503 error (screenshot attached).

I'm using the Helm values shown in the attached screenshot to set up the ingress.

My NGINX deployment values are also in an attached screenshot.

The ingress is created correctly in the cluster (screenshot attached).

And if I do a port-forward to the Grafana service, I can access my Apache DevLake Grafana (screenshot attached).

Well, I am 90% sure that it is related to the NGINX annotations, but I can't see or identify what is happening.

Could somebody here give me a hand?

Thank you so much in advance.
Edson Morais.

initiate mounting of certificate using helm approach in kubernetes

The DevLake UI is functional in our EKS cluster, but to access the Jira connection we need to add a self-signed certificate via the Helm chart. As the Docker images are not directly accessible to us, please help with adding the certificates through the Helm chart and then installing DevLake on EKS.
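A hypothetical sketch of what mounting a self-signed CA into the lake pod could look like (the chart did not expose such an option at the time; the secret, volume, and path names are illustrative only):

volumes:
  - name: custom-ca
    secret:
      secretName: jira-selfsigned-ca   # hypothetical secret holding the cert

containers:
  - name: lake
    volumeMounts:
      - name: custom-ca
        mountPath: /etc/ssl/certs/jira-ca.pem
        subPath: ca.pem

Alternatively, the lake environment variable IN_SECURE_SKIP_VERIFY (visible in the values file quoted earlier on this page) can disable TLS verification, at the cost of security.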

No values passed from helm chart for pgsql

I'm trying to add a Helm deployment via the Terraform Helm provider.

With the external Postgres option set, the values do not appear to be passed through from the Helm chart; it stops on the database check. From the statefulset devlake-lake:

initContainers": [
      {
        "name": "waiting-database-ready",
        "image": "alpine:3.16",
        "command": [
          "sh",
          "-c",
          "until nc -z -w 2   ; do\n  echo wait for database ready ...\n  sleep 2\ndone\necho database is ready\n"
        ],
        "resources": {},

Using this values section:

....
    pgsql: {
      useExternal: true,
      externalServer: "postgres.${var.domain_name}",
      username: "devlake",
      password: data.aws_secretsmanager_secret_version.postgres.secret_string,
      database: "devlake"
    },
    option: {
      database: "pgsql"
    },
....

Could it be that the values are not being passed to the init container in the chart?
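For reference, the same settings expressed as plain Helm values. These keys match the pgsql section that ships (commented out) in the chart's default values.yaml quoted in a later issue on this page; the host and password are placeholders:

pgsql:
  useExternal: true
  externalServer: postgres.example.com
  username: devlake
  password: "<from secrets manager>"
  database: devlake

option:
  database: pgsql

If the init container still renders an empty host and port with these values, that would point at the chart template not reading the pgsql block.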

[Helm][EKS][Grafana] Inconsistent ingress -> service port for Grafana

Hi

When using this chart (version 0.18.0-beta1) with the values below, I am facing a problem with the Grafana service and the ingress definition.

Values:

mysql:
  useExternal: true
  externalServer: my-rds-endpoint
  externalPort: my-rds-port

lake:
  encryptionSecret:
    secret: my-secret

option:
  database: "mysql"
  connectionSecretName: "mysecret"
  autoCreateSecret: false

grafana:
  envFromSecrets:
    - name: "mysecret"

ingress:
  enabled: true
  enableHttps: true
  useDefaultNginx: false
  className: alb
  hostname: hostname.com
  annotations:
    some_custom_annotations

The created Grafana service (devlake-grafana) contains the following port definition:

ports:
- name: service
  port: 80
  protocol: TCP
  targetPort: 3000

The created Ingress contains the following Grafana definition:

- host: hostname.com
  http:
    paths:
    - backend:
        service:
          name: devlake-grafana
          port:
            number: 3000
      path: /grafana
      pathType: Prefix

In k8s events I can see:

Kind:            Event
Last Timestamp:  2023-07-18T12:32:17Z
Message:         Failed build model due to ingress: devlake/devlake: unable to find port 3000 on service devlake/devlake-grafana

Note that number: 3000 seems to be wrong: as soon as I edit my Ingress resource to set number: 80, the ingress is able to move forward and gets properly provisioned.
I have been looking into the values but I can't find a way to parametrize this; it seems to be hardcoded: https://github.com/apache/incubator-devlake-helm-chart/blob/81f3833e73787dd78ef0fef3fa44378d14dcf968/charts/devlake/templates/ingresses.yaml#L72C31-L72C31

Could you please guide me on how to set this value from the values.yaml file, so I don't need to update it manually (if that's possible)?

Thanks!
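For what it's worth, later chart versions expose grafana.ingressServiceName and grafana.ingressServicePort in values.yaml (visible in the full values file quoted in a later issue on this page). Assuming your chart version supports them, pointing the ingress at the service port might look like:

grafana:
  ingressServicePort: 80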

Dynamic replica management using replicaCount to enable or disable DevLake

Knowing that the DevLake deployments cannot have more than one replica, we would like the ability to set the replicas of the StatefulSet and Deployments to 0, and thereby completely disable the devlake-helm-chart.

Current configuration:

spec:
  replicas: 1

Changes for StatefulSet and Deployments:

spec:
  replicas: {{ if gt .Values.replicaCount 1 }}1{{ else }}{{ .Values.replicaCount }}{{ end }}

If you find it convenient:

Changes for StatefulSet and Deployment (devlake-ui)

spec:
  replicas: {{ .Values.replicaCount }}

Change for Deployment (dev-lake)

spec:
  replicas: {{ if gt .Values.replicaCount 1 }}1{{ else }}{{ .Values.replicaCount }}{{ end }}

field is immutable when upgrading

When upgrading from 0.20.0-beta8 to 0.20.0-beta9 I get this error:

Error:
cannot patch "devlake-ui" with kind Deployment: Deployment.apps "devlake-ui" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"devlake", "app.kubernetes.io/name":"devlake", "devlakeComponent":"ui"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
cannot patch "devlake-lake" with kind Deployment: Deployment.apps "devlake-lake" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"devlake", "app.kubernetes.io/name":"devlake", "devlakeComponent":"lake"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
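A Deployment's spec.selector is immutable in Kubernetes, so adding the devlakeComponent label to the selector (see the first issue on this page) cannot be patched in place. A common workaround, assuming brief downtime is acceptable (the devlake namespace and release name here are assumptions):

kubectl -n devlake delete deployment devlake-ui devlake-lake
helm upgrade devlake devlake/devlake --version 0.20.0-beta9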

Replica Count Not Configurable in Helm Chart

The current Helm chart for Apache DevLake hard-codes the replicas value in the deployment.yaml file to 1. This setting ignores any replicaCount value set in values.yaml, restricting the ability to scale the deployment through Helm configuration.

Current Configuration:

spec:
  replicas: 1

Suggested Fix

spec:
  replicas: {{ .Values.replicaCount }}

Issue via Traefik in Kubernetes

Helm Chart Version: v0.19.0

@ZhangNing10 I'm able to load the devlake-config-ui by adding a network policy.

But somehow the UI is not able to reach the /api/ping and /api/userinfo endpoints; I get 404s in the browser, as shown in the attached screenshots.


When I exec into the pod and curl http://localhost:4000/api/userinfo, I get a 200 with a JSON response.
Could you please suggest what the issue is here?

Config UI container has neither liveness nor readiness probe

Hello,
is there a special reason why the container defines neither a liveness nor a readiness probe? The lake deployment has the .Values.lake.livenessProbe property, but there is none for config-ui. Isn't it best practice for every container to define both probes?
Cheers
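For reference, this is the shape the UI probes eventually took in the chart's default values (quoted in full in a later issue on this page):

ui:
  livenessProbe:
    httpGet:
      path: /health/
      port: 4000
      scheme: HTTP
  readinessProbe:
    httpGet:
      path: /health/
      port: 4000
      scheme: HTTP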

[Feature Request]: Request for Addition of "ImagePullSecrets" in Helm Chart Templates for Private Repo Support

Hi,

I am currently facing an issue while attempting to upload the DevLake chart to a private repository and subsequently pull the images from that repository. The templates within the chart lack the imagePullSecrets value, which prevents integration with private registries.

I kindly request the addition of the imagePullSecrets value to the Helm chart templates; a sketch of the usual pattern is below. This enhancement is crucial for enabling use of the chart from a private registry within our company.

Thanks in advance.
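A minimal sketch of the usual Helm pattern for this, wired to a top-level imagePullSecrets list like the one that appears in the default values quoted in a later issue on this page (the indentation depth depends on where it lands in the pod template):

{{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
{{- end }}

with values such as:

imagePullSecrets:
  - name: my-registry-secret   # hypothetical secret name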

Adding extra containers to deployments

When using an external database, it's common to need a proxy to connect safely from a service to the database. In Google Cloud, one of the recommended approaches is to run cloud-sql-proxy as a sidecar container.

I am deploying DevLake in a GitOps Flux-based environment, so Kustomize is not an option, as it executes before the Helm chart is rendered and applied. Helm post-renderers are an option, but take real effort to get working.

Additionally, supporting extra containers would enable other use cases as well; a hypothetical sketch follows.
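A hypothetical sketch of what such a values option could look like (extraContainers is not an existing key in this chart; the image tag and connection name are illustrative only):

lake:
  extraContainers:
    - name: cloud-sql-proxy
      image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0   # illustrative tag
      args:
        - "--port=3306"
        - "my-project:my-region:my-instance"   # hypothetical instance connection name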

Installed DevLake on AWS EKS cluster using helm

I installed Apache DevLake on an AWS EKS cluster using Helm, but when I try to access it, it's not working.
Even when I use the node port and node IP, I can't access it.

ingress-nginx is already deployed on the EKS cluster, so I used:
helm install devlake devlake/devlake --set "ingress.enabled=true,ingress.hostname=devlake.dev.site1.dev"

Cannot Connect from devlake-ui to devlake-lake

error log

[error] 24#24: *31991 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.42.2.46, server: localhost, request: "GET /api/plugins HTTP/1.1", upstream: "http://10.43.48.226:8080/plugins", host: "devlake-ui.devlake.svc.cluster.local:4000"

A local k3s cluster (v1.28.6) is OK.

Using NodePort (screenshot attached).

curl test

kubectl exec devlake-ui-7dc97b47cb-zg8p5 -n devlake -- curl devlake-lake.devlake.svc.cluster.local:8080/plugins

[{"plugin":"slack","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"starrocks","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"teambition","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"zentao","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"bitbucket","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"dora","metric":{"requiredDataEntities":[{"model":"cicd_tasks","requiredFields":{"column":"type","execptedValue":"Deployment"}}],"runAfter":[],"isProjectMetric":true}},{"plugin":"github","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"icla","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"jenkins","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"refdiff","metric":{"requiredDataEntities":[],"runAfter":[],"isProjectMetric":false}},{"plugin":"tapd","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"trello","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"webhook","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"circleci","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"jira","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"sonarqube","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"azuredevops","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"dbt","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"org","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"customize","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"gitee","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"gitextractor","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"feishu","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"github_graphql","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"ae","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"bamboo","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"gitlab","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"opsgenie","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"pagerduty","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}}]

But the dev cluster (v1.26.11) is not working.

Using ingress (screenshot attached).

curl test in devlake-ui

kubectl exec devlake-ui-9d5bf46f4-2lw7d -n devlake -- curl devlake-lake.devlake.svc.cluster.local:8080/plugins

curl: (28) Failed to connect to devlake-lake.devlake.svc.cluster.local port 8080 after 127268 ms: Couldn't connect to server
command terminated with exit code 28

curl test in busybox

kubectl exec busybox-6b95744666-7rxg7 -n devlake -- ./curl-amd64 devlake-lake.devlake.svc.cluster.local:8080/plugins

[{"plugin":"ae","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"bamboo","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"github","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"slack","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"zentao","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"bitbucket","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"icla","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"opsgenie","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"starrocks","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"trello","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"customize","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"dbt","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"dora","metric":{"requiredDataEntities":[{"model":"cicd_tasks","requiredFields":{"column":"type","execptedValue":"Deployment"}}],"runAfter":[],"isProjectMetric":true}},{"plugin":"jenkins","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"pagerduty","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"tapd","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"feishu","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"circleci","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"gitextractor","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"org","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"github_graphql","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"sonarqube","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"azuredevops","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"gitee","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"gitlab","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"jira","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"refdiff","metric":{"requiredDataEntities":[],"runAfter":[],"isProjectMetric":false}},{"plugin":"teambition","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}},{"plugin":"webhook","metric":{"requiredDataEntities":null,"runAfter":null,"isProjectMetric":false}}]

So it is stuck in a pending state (screenshot attached).

values.yaml

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# replica count
replicaCount: 1
imageTag: v0.21.0-beta7

# image pull secrets
imagePullSecrets: []

# the common environment variables for all pods except grafana; grafana needs to be set separately in the grafana section
commonEnvs:
  TZ: "Asia/Seoul"

mysql:
  # if use external mysql server, please set true
  # by default using false, chart will create a single mysql instance
  useExternal: false

  # the external mysql server address
  externalServer: 127.0.0.1

  # external mysql port
  externalPort: 3306

  # the username for devlake database
  username: merico

  # the password for devlake database
  password: merico

  # the database for devlake
  database: lake

  # root password for mysql, only used when useExternal=false
  rootPassword: admin

  # storage for mysql
  storage:
    # pvc or hostpath
    type: pvc
    # the storage class for the pv; leave empty to use the default
    class: "local-path"
    size: 50Gi
    hostPath: /devlake/mysql/data

  # image for mysql
  image:
    repository: mysql
    tag: 8
    pullPolicy: IfNotPresent

  # init containers for mysql, if needed
  initContainers: []

  # resources config for mysql, if needed
  resources: {}

  # nodeSelector config for mysql, if needed
  nodeSelector: {node: worker}

  # tolerations config for mysql, if needed
  tolerations: []

  # affinity config for mysql, if needed
  affinity: {}

  extraLabels: {}

  securityContext: {}

  containerSecurityContext: {}

  podAnnotations: {}

  service:
    type: "ClusterIP"
    nodePort: ""

# pgsql:
#   # if use external pgsql server, please set true
#   #   by default using false, chart will create a single pgsql instance
#   useExternal: false

#   # the external pgsql server address
#   externalServer: 127.0.0.1

#   # external pgsql port
#   externalPort: 5432
#   # the username for devlake database
#   username: merico

#   # the password for devlake database
#   password: merico

#   # the database for devlake
#   database: lake

#   # storage for pgsql
#   storage:
#     # the storage class for pv, leave empty will using default
#     class: ""
#     size: 5Gi

#   # image for pgsql
#   image:
#     repository: postgres
#     tag: 14.5
#     pullPolicy: IfNotPresent

#   # resources config for pgsql if have
#   resources: {}

#   # nodeSelector config for pgsql if have
#   nodeSelector: {}

#   # tolerations config for pgsql if have
#   tolerations: []

#   # affinity config for pgsql if have
#   affinity: {}

#   extraLabels: {}

#   securityContext: {}

#   containerSecurityContext: {}

#   annotations: {}

# dependency chart values
grafana:
  enabled: true
  # if grafana.enabled is false, an external url should be provided
  external:
    url: ""
  image:
    repository: devlake.docker.scarf.sh/apache/devlake-dashboard
    tag: v0.21.0-beta7
  adminPassword: "admin"
  grafana.ini:
    server:
      root_url: "%(protocol)s://%(domain)s/grafana"
  # the secret name must match .Values.option.connectionSecretName
  envFromSecrets:
    - name: "devlake-mysql-auth"
  # keep the grafana timezone the same as the other pods, which is set by .Values.commonEnvs.TZ
  env:
    TZ: "Asia/Seoul"
  persistence:
    enabled: true
    size: 4Gi
  ingressServiceName: ""
  ingressServicePort: ""
  nodeSelector: {node: worker}

lake:
  image:
    repository: devlake.docker.scarf.sh/apache/devlake
    pullPolicy: Always
    # defaults to imageTag; if set, lake.image.tag will override imageTag
    # tag:
  # storage for config
  port: 8080
  envs:
    API_TIMEOUT: "120s"
    API_RETRY: "3"
    API_REQUESTS_PER_HOUR: "10000"
    PIPELINE_MAX_PARALLEL: "1"
    IN_SECURE_SKIP_VERIFY: "false"
    LOGGING_DIR: "/app/logs"
    # debug, info, warn, error
    LOGGING_LEVEL: "info"
  #extra envs from an existing secret
  extraEnvsFromSecret: ""
  encryptionSecret:
    # the name of a secret which contains a key named ENCRYPTION_SECRET
    secretName: ""
    # if secretName is empty, secret should be set
    # you can generate the encryption secret via cmd `openssl rand -base64 2000 | tr -dc 'A-Z' | fold -w 128 | head -n 1`
    secret: "YRQRONVNYMRFXDYANUHRTUDDNKRJPKBCKDUSRYCVZDRTRCGVDXDVLWTBJZADCUJUSFKAKOOUAXERAEIXZKQUWYNMYNABMFOMZASGBUBPOIDOOWRJMPDQJLYRAOQAJLNI"
    autoCreateSecret: true

  # If hostNetwork is true, then dnsPolicy is set to ClusterFirstWithHostNet
  hostNetwork: false

  resources: {}

  strategy:
    type: Recreate

  nodeSelector: {node: worker}

  tolerations: []

  affinity: {}

  extraLabels: {}

  securityContext: {}

  containerSecurityContext: {}

  podAnnotations: {}

  livenessProbe:
    httpGet:
      path: /ping
      port: 8080
      scheme: HTTP
    failureThreshold: 5
    initialDelaySeconds: 30
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5

  readinessProbe:
    httpGet:
      path: /ping
      port: 8080
      scheme: HTTP
    failureThreshold: 3
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5

  deployment:
    extraLabels: {}

ui:
  image:
    repository: devlake.docker.scarf.sh/apache/devlake-config-ui
    pullPolicy: Always
    # defaults to imageTag; if set, ui.image.tag will override imageTag
    # tag:
  resources: {}

  strategy: {}

  nodeSelector: {node: worker}

  tolerations: []

  affinity: {}

  livenessProbe:
    httpGet:
      path: /health/
      port: 4000
      scheme: HTTP
    failureThreshold: 5
    initialDelaySeconds: 15
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5

  readinessProbe:
    httpGet:
      path: /health/
      port: 4000
      scheme: HTTP
    failureThreshold: 3
    initialDelaySeconds: 5
    periodSeconds: 5
    successThreshold: 1
    timeoutSeconds: 5

  basicAuth:
    enabled: false
    user: admin
    password: admin
    autoCreateSecret: true
    secretName: ""

  extraLabels: {}

  podAnnotations: {}

  ## SecurityContext holds pod-level security attributes and common container settings.
  ## This defaults to non root user with uid 101 and gid 1000. *v1.PodSecurityContext  false
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
  securityContext:
    {}
    # fsGroup: 101
    # runAsGroup: 1000
    # runAsNonRoot: true
  # runAsUser: 101

  ## K8s containers' Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
  containerSecurityContext:
    {}
    # allowPrivilegeEscalation: false
    # capabilities:
    #   drop:
  #       - all

  deployment:
    extraLabels: {}

# alpine image for some init containers
alpine:
  image:
    repository: alpine
    tag: 3.16
    pullPolicy: IfNotPresent

service:
  # service type: NodePort/ClusterIP
  type: "ClusterIP"
  # node port for devlake-ui if NodePort is enabled
  uiPort: 32001

ingress:
  enabled: true
  enableHttps: false
  # Set to false if you want to use a different ingress controller
  useDefaultNginx: true
  # ingress class name, example: alb for AWS load balancer controller
  className: nginx
  # domain name for hosting devlake, must be set if ingress is enabled
  hostname: dev-devlake.sample.com
  # annotations required for your ingress controller; see the examples below
  # for nginx, use the first two lines of annotations
  # for alb (w/ external-dns), use the last 5 (6) lines of annotations
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  #
  # alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  # alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:xxx:certificate/xxx-xxx-xxx
  # alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
  # alb.ingress.kubernetes.io/scheme: internet-facing
  # alb.ingress.kubernetes.io/target-type: ip
  # external-dns.alpha.kubernetes.io/hostname: www.example.com

  # url prefix; does not work right now, keep "/"
  prefix: /
  # if using https provides the certificates secret name
  tlsSecretName: ""
  # ingress http port
  httpPort: 80
  # ingress https port
  httpsPort: 443

  extraPaths: []
#  extraPaths:
#    - path: /*
#      pathType: ImplementationSpecific
#      backend:
#        service:
#          name: ssl-redirect
#          port:
#            name: use-annotation

option:
  # database type, supported: [mysql]
  database: mysql
  # the existing k8s secret name for db connection auth; must match the name under .Values.grafana.envFromSecrets
  connectionSecretName: "devlake-mysql-auth"
  autoCreateSecret: true

Add ENCODE_KEY helm values inputs

We recently found out the hard way about the encKey used to encrypt things in the database. It would help users deploying on Kubernetes if the default values file referenced this feature, so that it is more obvious that the key can be set explicitly (rather than generated by a container that may not persist).

My suggestion is to add inputs to the values file like:

# This is the string used to encrypt sensitive things like PATs in the database.
# Alternatively, you may supply this value to the `lake` container directly as environment variable `ENCODE_KEY`
# If unset, a key will be created dynamically, in which case you should retrieve it and store it somewhere secure and persistent
encodeKey:
  secretName: "" # the name of the Secret containing this encryption key
  secretKey: "" # the name of the key within that Secret which contains the encryption key as its value

Running with readOnlyRootFilesystem

Hi,

Support for a read-only root filesystem would be great. So far I've encountered these issues:

The MySQL container init wants to use /tmp and /var/run/mysqld; an emptyDir volume (or an option to add one) would fix that easily (tested).

Lake really wants to log to /app/logs/; same thing, an emptyDir fixes it easily.

The UI wants to create /etc/nginx/conf.d/default.conf. If I'm reading the source right, this might be a bit trickier; the solution that comes to mind is a different entrypoint that skips the templating and uses a ConfigMap instead. But maybe there's something simpler.

Edit: Lake also wants /tmp at runtime. A sketch of the emptyDir approach is below.
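A minimal sketch of the emptyDir approach described above, as it could appear on the rendered pods (volume names are illustrative; the chart would need values to wire these in):

volumes:
  - name: mysql-run
    emptyDir: {}
  - name: tmp
    emptyDir: {}

volumeMounts:
  - name: mysql-run
    mountPath: /var/run/mysqld
  - name: tmp
    mountPath: /tmp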

Error when calling chart in v0.20.0-beta3 with ui.basicAuth.enable set to False

Hello,

I hope this message finds you well. I am currently facing an issue when installing the chart at version v0.20.0-beta3. Even after passing the parameter ui.basicAuth.enable as False, I encounter the following error:

Error: INSTALLATION FAILED: YAML parse error on devlake/templates/deployments.yaml: error converting YAML to JSON: yaml: line 58: did not find expected '-' indicator

Thank you for your assistance in resolving this matter.

[Feature Request]: Ability to add an `initContainer` for the MySQL StatefulSet

I am unable to run the MySQL database on the k8s cluster as the mysql user, for which I set the securityContext to runAsUser with ID 999 (the uid used in the mysql:8 image). Running as root is not an option either, due to security concerns.

This is the log:

2023-10-24 19:36:31+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.27-1debian10 started.
2023-10-24 19:36:31+00:00 [Note] [Entrypoint]: Initializing database files
mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (OS errno 13 - Permission denied)
2023-10-24T19:36:31.447303Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.27) initializing of server in progress as process 22
2023-10-24T19:36:31.448251Z 0 [ERROR] [MY-010460] [Server] --initialize specified but the data directory exists and is not writable. Aborting.
2023-10-24T19:36:31.448255Z 0 [ERROR] [MY-013236] [Server] The designated data directory /var/lib/mysql/ is unusable. You can remove all files that the server added to it.
2023-10-24T19:36:31.448284Z 0 [ERROR] [MY-010119] [Server] Aborting
2023-10-24T19:36:31.448376Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete

To circumvent this, I need an initContainer that recursively chowns /var/lib/mysql to 999:999.

This is a request to add the ability to define an initContainer for the MySQL StatefulSet; a sketch is below.
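A minimal sketch of such an initContainer, based on the fix described above (the volume name is illustrative and must match the StatefulSet's data volume). Note that the default values quoted in an earlier issue on this page already include an empty mysql.initContainers list, so newer chart versions appear to support this:

initContainers:
  - name: fix-data-permissions
    image: busybox:1.36
    command: ["sh", "-c", "chown -R 999:999 /var/lib/mysql"]
    volumeMounts:
      - name: mysql-data   # must match the chart's data volume name
        mountPath: /var/lib/mysql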

Is it possible to add HPA?

Hello there, I was wondering if there is any way to add an HPA or something similar, as I wonder how DevLake would perform with a large amount of data.
