getredash / contrib-helm-chart
Community maintained Redash Helm Chart
License: Apache License 2.0
Hello,
Your latest chart version is 2.3.1, which includes a bug fix, but the chart repo only serves 2.3.0.
Please publish 2.3.1 to the chart repo.
Thanks <3
Is there a plan to upgrade the Helm chart for Redash version 10?
Hello folks, let me explain this issue a bit more in-depth.
I was migrating a Redash instance from an AWS EC2 instance to a K8s cluster. We use an external Postgres and Redis cluster for Redash (amongst other things), which is quite typical in a production setup.
When I run the Redash chart, it starts two jobs, one for migration and one for install. However, since our DB is already initialized and on the same version ("version": "8.0.2+b37747"), the jobs don't return a success code and get stuck in the progressing state forever.
A possible fix would be an enabled flag for both jobs, so that they won't be run if enabled = false.
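A sketch of what such a toggle might look like in the chart values (the migrations/installJob keys here are hypothetical, not current chart options):

```yaml
# Hypothetical values sketch: skip both hook jobs when the
# database is already initialized and migrated.
migrations:
  enabled: false   # skip the migration job
installJob:
  enabled: false   # skip the install job
```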
The logs of the job, if you need further insight:
This will retry connections until PostgreSQL/Redis is up, then perform database installation/migrations as needed.
Using external postgresql database
Using external redis database
Starting attempt 0 of 10
Return code: 124
Status: {
"unused_query_results_count": 4,
"workers": [],
"redis_used_memory": 7790072,
"dashboards_count": 14,
"query_results_count": 19431,
"database_metrics": {
"metrics": [
[
"Query Results Size",
143056896
],
[
"Redash DB Size",
8204959
]
]
},
"manager": {
"queues": {
"celery": {
"size": 0
},
"scheduled_queries": {
"size": 0
},
"queries": {
"size": 0
}
},
"outdated_queries_count": "0",
"last_refresh_at": "1626684651.038901",
"query_ids": "[]"
},
"version": "8.0.2+b37747",
"queries_count": 172,
"redis_used_memory_human": "7.43M",
"widgets_count": 75
}
Status command timed out after 10 seconds.
Waiting 10 seconds before retrying.
Starting attempt 1 of 10
Return code: 0
Status: {
"unused_query_results_count": 0,
"workers": [],
"redis_used_memory": 7746952,
"dashboards_count": 14,
"query_results_count": 19427,
"database_metrics": {
"metrics": [
[
"Query Results Size",
143056896
],
[
"Redash DB Size",
8204959
]
]
},
"manager": {
"queues": {
"celery": {
"size": 0
},
"scheduled_queries": {
"size": 0
},
"queries": {
"size": 0
}
},
"outdated_queries_count": "0",
"last_refresh_at": "1626684681.114482",
"query_ids": "[]"
},
"version": "8.0.2+b37747",
"queries_count": 172,
"redis_used_memory_human": "7.39M",
"widgets_count": 75
}
Database appears to already be installed.
EDIT: if any of the collaborators can discuss this and figure out a plan of action, I will be glad to submit PRs to address this :)
Deploying redash on GKE using helm (over istio). All containers reach a Running state. The REDASH_DATABASE_URL=postgresql://postgres:somepass@postgres/postgres is correct. I managed to connect to it both from an external and an internal source. However, I get an Internal Server Error 500 when trying to connect.
Here are the postgres logs:
➢ k logs -f redash-postgres-56d9d75f7d-kczk4 redash-postgres
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
sh: locale: not found
performing post-bootstrap initialization ... No usable system locales were found.
Use the option "--debug" to see details.
ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....LOG: database system was shut down at 2019-11-15 09:58:36 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
done
server started
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
LOG: received fast shutdown request
LOG: aborting any active transactions
LOG: autovacuum launcher shutting down
waiting for server to shut down....LOG: shutting down
LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
LOG: database system was shut down at 2019-11-15 09:58:37 UTC
LOG: MultiXact member wraparound protections are now enabled
LOG: database system is ready to accept connections
LOG: autovacuum launcher started
ERROR: relation "queries" does not exist at character 979
STATEMENT: SELECT queries.query AS queries_query, queries.updated_at AS queries_updated_at, queries.created_at AS queries_created_at, queries.id AS queries_id, queries.version AS queries_version, queries.org_id AS queries_org_id, queries.data_source_id AS queries_data_source_id, queries.latest_query_data_id AS queries_latest_query_data_id, queries.name AS queries_name, queries.description AS queries_description, queries.query_hash AS queries_query_hash, queries.api_key AS queries_api_key, queries.user_id AS queries_user_id, queries.last_modified_by_id AS queries_last_modified_by_id, queries.is_archived AS queries_is_archived, queries.is_draft AS queries_is_draft, queries.schedule AS queries_schedule, queries.schedule_failures AS queries_schedule_failures, queries.options AS queries_options, queries.search_vector AS queries_search_vector, queries.tags AS queries_tags, query_results_1.id AS query_results_1_id, query_results_1.retrieved_at AS query_results_1_retrieved_at
FROM queries LEFT OUTER JOIN query_results AS query_results_1 ON query_results_1.id = queries.latest_query_data_id
Expected behavior: redash running normally, since all environment variables are correct (at least in terms of DB connectivity).
redash/redash:8.0.0.b32245
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
➢ helm version --tls
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
istio 1.1.14
helm charts from this docker compose configuration. What values do I need to fill in to expose a nodePort so I can reach redash on a URL?
A) service.port and service.type?
B) ?
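For reference, the NodePort-style exposure mentioned in (A) might look like this in the values file (a sketch; port 80 and NodePort are just example choices):

```yaml
service:
  type: NodePort   # expose the service on a node port of every cluster node
  port: 80         # service port in front of the redash server
```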
Hello, I am currently exposing the service through istio and it works correctly.
I have sent an invitation email, but when I open the email link, it generates the following url:
http:///invite/IjUi.ETWeOQ.DutSINrsKwE5yIC
losing the subdomain and the associated domain.
But if I access the same url with the subdomain and domain included, as in the following example, it works correctly:
http://subdomain.domain.com/invite/IjUi.ETWeOQ.DutSINrsKwE5yIC
Is there any way to solve this error?
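One likely cause (an assumption, not confirmed in this thread) is that Redash does not know its own public URL, so generated links have an empty host. Redash reads this from the REDASH_HOST environment variable, which can be set through the chart's env map; a sketch:

```yaml
env:
  # Public base URL used when Redash builds absolute links such as invite emails
  REDASH_HOST: http://subdomain.domain.com
```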
Hi,
I am currently trying to execute any query within Redash hosted on AKS and I always receive the same error:
Error running query: Invalid
I already generated the successful connection to my database.
Within the logs of my ingress I get the following:
[2020-02-24T19:51:35.111Z] "POST /api/query_results HTTP/2" 400 - "-" "-" 84 60 4 - "10.240.0.5" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.106 Safari/537.36" "17fa4c3e-5209-9c8e-8ab3-1172cd3a593d" "subdomain.domain.com" "-" - - 10.244.2.3:443 10.240.0.5:64281 subdomain.domain.com -
[2020-02-24T19:51:36.113Z] "POST /api/events HTTP/2" 200 - "-" "-" 103 4 15 14 "10.240.0.5" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.106 Safari/537.36" "0f1b8035-ef6b-98e2-9931-f8a94d2610fc" "subdomain.domain.com" "10.244.3.65:5000" outbound|80||redash.default.svc.cluster.local - 10.244.2.3:443 10.240.0.5:64281 subdomain.domain.com -
Why does this error occur?
If you set up redash with ingress enabled via a load balancer, you get an internal server error; from the logs in the pod:
[2021-06-28 09:23:05,737][PID:12][INFO][metrics] method=GET path=/favicon.ico endpoint=redash_index status=302 content_type=text/html; charset=utf-8 content_length=329 duration=0.46 query_count=0 query_duration=0.00
[2021-06-28 09:23:05,842] ERROR in app: Exception on /login [GET]
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python2.7/site-packages/flask_restful/__init__.py", line 271, in error_router
    return original_handler(e)
  File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1610, in full_dispatch_request
    rv = self.preprocess_request()
  File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1831, in preprocess_request
    rv = func()
  File "/usr/local/lib/python2.7/site-packages/flask_limiter/extension.py", line 400, in __check_request_limit
    six.reraise(*sys.exc_info())
  File "/usr/local/lib/python2.7/site-packages/flask_limiter/extension.py", line 365, in __check_request_limit
    if not self.limiter.hit(lim.limit, lim.key_func(), limit_scope):
  File "/usr/local/lib/python2.7/site-packages/limits/strategies.py", line 132, in hit
    self.storage().incr(item.key_for(*identifiers), item.get_expiry())
  File "/usr/local/lib/python2.7/site-packages/limits/storage.py", line 446, in incr
    return self.lua_incr_expire([key], [expiry])
  File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 3575, in __call__
    return client.evalsha(self.sha, len(keys), *args)
  File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 2761, in evalsha
    return self.execute_command('EVALSHA', sha, numkeys, *keys_and_args)
  File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 772, in execute_command
    connection = pool.get_connection(command_name, **options)
  File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 994, in get_connection
    connection.connect()
  File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 502, in connect
    self.on_connect()
  File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 570, in on_connect
    if nativestr(self.read_response()) != 'OK':
  File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 642, in read_response
    raise response
ResponseError: WRONGPASS invalid username-password pair
Values file:
redash:
  cookieSecret: ##SECRET##
  secretKey: ##SECRET##
postgresql:
  enabled: false
adhocWorker:
  replicaCount: 2
scheduledWorker:
  replicaCount: 2
server:
  replicaCount: 1
externalPostgreSQL: "postgresql://##SECRET##.rds.amazonaws.com:5432/redashdev"
ingress:
  enabled: true
  hosts:
    - host: "*.dev.myapplication.com"
      paths: ["/*"]
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: ##SECRET##
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
service:
  port: 80
  type: NodePort
When using this as a dependency chart, we can't override the .Values.adhocWorker.env values.
{{- range $key, $value := .Values.adhocWorker.env }}
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
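When the chart is pulled in as a Helm dependency, overrides in the parent chart's values must be nested under the dependency's name (or alias); a sketch, assuming the dependency is named redash and WORKERS_COUNT is just an example variable:

```yaml
# Parent chart values.yaml
redash:
  adhocWorker:
    env:
      WORKERS_COUNT: "4"   # example override passed through to the subchart's worker
```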
Hello, I'm trying to deploy Redash 8.0.0 on GKE using this chart. So far everything has worked except the email setup.
I set the environment variables in values.yaml like below:
env:
  PYTHONUNBUFFERED: 0
  REDASH_LOG_LEVEL: "INFO"
  REDASH_HOST: https://redash.myhost.com.br
  REDASH_MAIL_SERVER: smtp.gmail.com
  REDASH_MAIL_PORT: 465
  REDASH_MAIL_USE_TLS: true
  REDASH_MAIL_USE_SSL: true
  REDASH_MAIL_USERNAME: [email protected]
  REDASH_MAIL_PASSWORD: "password"
  REDASH_MAIL_DEFAULT_SENDER: [email protected]
I also tried to use AWS SES; the same configuration worked on another Redash deployment.
Sample helmfile (it's a wrapper around helm, so it should be self-explanatory):
- name: redash
  chart: redash/redash
  version: 2.3.0
  namespace: redash
  values:
    - redash:
        secretKey: supersecretkey
        cookieSecret: cookiesecretkey
      postgresql:
        enabled: false
      externalPostgreSQLSecret:
        name: redash-postgresql
        key: connection-string
      redis:
        enabled: false
      externalRedisSecret:
        name: redash-redis
        key: connection-string
This fails validation with: Error: execution error at (redash/templates/NOTES.txt:26:12): A value for one of the following variables is required: postgresql.postgresqlPassword (secure password), postgresql.existingSecret (secret name), externalPostgreSQL (connection string). The workaround is to add a value like externalPostgreSQL: "notused". The validation logic in https://github.com/getredash/contrib-helm-chart/blob/v2.3.0/templates/NOTES.txt#L26 should probably be updated, or migrated to a Helm v3 JSON Schema.
Hello.
I am currently using the chart, but while testing version 2 I am facing the postgresql-password-required issue when using an external instance, so I am forced to pass a postgresqlPassword even though it is not going to be used.
redash:
  redash:
    cookieSecret: +DkgxNdwbcUZOfDDSDMHMLxSY2Y8YocSfq3bJ602kn8=
    secretKey: XYaeCNrq7RP968foyennRkMYQ3ZieJuecS3D25btGDY=
  postgresql:
    enabled: false
  externalPostgreSQL: "postgresql://admin:pwd@localhost:5432/redash"
The fix should be easy, and I can open an MR for that if needed.
How can I send Redash logs to an external location such as S3 or a PVC? Is it possible to get logs out of the pods?
I have the below error from the redash server pod:
[2020-05-28 04:30:04 +0000] [6] [CRITICAL] WORKER TIMEOUT (pid:1236)
[2020-05-28 04:30:04 +0000] [6] [CRITICAL] WORKER TIMEOUT (pid:1237)
[2020-05-28 04:30:04 +0000] [6] [CRITICAL] WORKER TIMEOUT (pid:1238)
[2020-05-28 04:30:04 +0000] [6] [CRITICAL] WORKER TIMEOUT (pid:1239)
Readiness probe failed: Get http://10.0.10.203:5000/static/images/redash_icon_small.png: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
How do I solve this?
When checking the generated yaml file with dry run:
helm install redash_ori --create-namespace -n redash -f redash_values.yaml --debug --dry-run redash/redash
I got this error when using externalPostgreSQL with postgresql.enabled: false:
install.go:189: [debug] CHART PATH: /home/ec2-user/.cache/helm/repository/redash-2.0.0.tgz
Error: values don't meet the specifications of the schema(s) in the following chart(s):
postgresql:
- postgresqlPassword: Invalid type. Expected: string, given: null
helm.go:81: [debug] values don't meet the specifications of the schema(s) in the following chart(s):
postgresql:
- postgresqlPassword: Invalid type. Expected: string, given: null
I followed the steps described, but it is not working as expected:
helm repo add redash https://getredash.github.io/contrib-helm-chart/
"redash" has been added to your repositories
cat > my-values.yaml <<- EOM
> redash:
> cookieSecret: $(openssl rand -base64 32)
> secretKey: $(openssl rand -base64 32)
> postgresql:
> postgresqlPassword: $(openssl rand -base64 32)
> EOM
helm upgrade --install -f my-values.yaml my-release redash/redash
Release "my-release" does not exist. Installing it now.
**Error: failed post-install: timed out waiting for the condition**
KUBERNETES
kubectl get jobs
NAME COMPLETIONS DURATION AGE
my-release-install 0/1 3m22s 3m22s
kubectl logs my-release-install-v24t2
This will retry connections until PostgreSQL/Redis is up, then perform database installation/migrations as needed.
Using Database: postgresql://redash:******@my-release-postgresql:5432/redash
Using Redis: redis://:******@my-release-redis-master:6379/0
Starting attempt 0 of 10
Return code: 124
Status:
Status command timed out after 10 seconds.
Waiting 10 seconds before retrying.
Starting attempt 1 of 10
Return code: 124
...
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 4d17h
my-release-postgresql ClusterIP 10.43.200.133 <none> 5432/TCP 7m57s
my-release-postgresql-headless ClusterIP None <none> 5432/TCP 7m57s
my-release-redash ClusterIP 10.43.65.40 <none> 80/TCP 7m57s
my-release-redis-headless ClusterIP None <none> 6379/TCP 7m57s
my-release-redis-master ClusterIP 10.43.244.187 <none> 6379/TCP 7m57s
kubectl get pods
NAME READY STATUS RESTARTS AGE
my-release-install-bnhp2 1/1 Running 0 2m5s
my-release-install-f26pn 0/1 Error 0 5m37s
my-release-install-v24t2 0/1 Error 0 9m1s
my-release-postgresql-0 0/1 Pending 0 9m1s
my-release-redash-5f79c858bf-4hqmc 0/1 Running 2 9m1s
my-release-redash-adhocworker-6c76f8c8fb-8jk4r 1/1 Running 0 9m1s
my-release-redash-scheduledworker-5b4c45679d-fvq92 1/1 Running 0 9m1s
my-release-redis-master-0 0/1 Pending 0 9m1s
kubectl logs my-release-redash-scheduledworker-5b4c45679d-fvq92
[2020-08-31 15:38:45,889][PID:16][ERROR][Beat] beat: Connection error: Error 110 connecting to my-release-redis-master:6379. Connection timed out.. Trying again in 26.0 seconds...
[2020-08-31 15:40:48,769][PID:7][ERROR][MainProcess] consumer: Cannot connect to redis://:**@my-release-redis-master:6379/0: Error 110 connecting to my-release-redis-master:6379. Connection timed out..
Trying again in 28.00 seconds...
Postgres and redis have the same problem:
kubectl describe pod redash-redis-master-0
Name: redash-redis-master-0
Namespace: default
Priority: 0
Node: <none>
Labels: app=redis
chart=redis-10.5.7
controller-revision-hash=redash-redis-master-946fb87cf
release=redash
role=master
statefulset.kubernetes.io/pod-name=redash-redis-master-0
Annotations: checksum/configmap: f91f2234624b0d9bf43af21db511272ffbabca5278f1262872d5291e18d18f1f
checksum/health: 561471f7913a0e158c2d388e9115b472ef162db6f680e1a57179255aeb5d3dfa
checksum/secret: 0e96078e6ac5c26fed6d6eab3bcb007c89ea19bf2adc11efe0e0ab82fc07de96
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/redash-redis-master
Containers:
redash-redis:
Image: docker.io/bitnami/redis:5.0.7-debian-10-r32
Port: 6379/TCP
Host Port: 0/TCP
Command:
/bin/bash
-c
if [[ -n $REDIS_PASSWORD_FILE ]]; then
password_aux=`cat ${REDIS_PASSWORD_FILE}`
export REDIS_PASSWORD=$password_aux
fi
if [[ ! -f /opt/bitnami/redis/etc/master.conf ]];then
cp /opt/bitnami/redis/mounted-etc/master.conf /opt/bitnami/redis/etc/master.conf
fi
if [[ ! -f /opt/bitnami/redis/etc/redis.conf ]];then
cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf
fi
ARGS=("--port" "${REDIS_PORT}")
ARGS+=("--requirepass" "${REDIS_PASSWORD}")
ARGS+=("--masterauth" "${REDIS_PASSWORD}")
ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")
ARGS+=("--include" "/opt/bitnami/redis/etc/master.conf")
/run.sh ${ARGS[@]}
Liveness: exec [sh -c /health/ping_liveness_local.sh 5] delay=5s timeout=5s period=5s #success=1 #failure=5
Readiness: exec [sh -c /health/ping_readiness_local.sh 5] delay=5s timeout=1s period=5s #success=1 #failure=5
Environment:
REDIS_REPLICATION_MODE: master
REDIS_PASSWORD: <set to the key 'redis-password' in secret 'redash-redis'> Optional: false
REDIS_PORT: 6379
Mounts:
/data from redis-data (rw)
/health from health (rw)
/opt/bitnami/redis/etc/ from redis-tmp-conf (rw)
/opt/bitnami/redis/mounted-etc from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mskp5 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
redis-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: redis-data-redash-redis-master-0
ReadOnly: false
health:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: redash-redis-health
Optional: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: redash-redis
Optional: false
redis-tmp-conf:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
default-token-mskp5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mskp5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 6m31s default-scheduler error while running "VolumeBinding" filter plugin for pod "redash-redis-master-0": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling 5m1s default-scheduler error while running "VolumeBinding" filter plugin for pod "redash-redis-master-0": pod has unbound immediate PersistentVolumeClaims
Not sure what else I can check, or whether the way the app connects to redis is correct.
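For what it's worth, the FailedScheduling events above point at unbound PersistentVolumeClaims, which typically means the cluster has no default StorageClass. One workaround for test setups (a sketch using the bundled Bitnami subchart keys, which may differ between chart versions) is to disable persistence:

```yaml
redis:
  master:
    persistence:
      enabled: false   # use emptyDir instead of a PVC
postgresql:
  persistence:
    enabled: false     # data is lost on pod restart; test setups only
```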
I want to add an existing secret as an environment variable for an external postgresql server.
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: redash-postgres
key: password
# how it will be used..
externalPostgreSQL: "postgresql://redash:$(POSTGRES_PASSWORD)@postgresql-headless.postgresql:5432/redash"
The problem is in contrib-helm-chart/templates/server-deployment.yaml, lines 394 to 398 in 6efbc7c.
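As background, Kubernetes does expand $(VAR) references inside an env value, as long as the referenced variable is defined earlier in the same container's env list, so an ordering like the following (a sketch) would make the quoted connection string work:

```yaml
env:
  - name: POSTGRES_PASSWORD          # must come first so it can be referenced below
    valueFrom:
      secretKeyRef:
        name: redash-postgres
        key: password
  - name: REDASH_DATABASE_URL
    # $(POSTGRES_PASSWORD) is substituted by the kubelet at container start
    value: "postgresql://redash:$(POSTGRES_PASSWORD)@postgresql-headless.postgresql:5432/redash"
```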
The adhocworker and scheduledworker deployments are missing a proper readiness and/or liveness check. This leads to unresponsive containers in the case where redis connections get closed.
[2021-01-27 14:22:04,485][PID:6][WARNING][MainProcess] consumer: Connection to broker lost. Trying to re-establish the connection...
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/celery/worker/consumer/consumer.py", line 318, in start
blueprint.start(self)
File "/usr/local/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python2.7/site-packages/celery/worker/consumer/consumer.py", line 596, in start
c.loop(*c.loop_args())
File "/usr/local/lib/python2.7/site-packages/celery/worker/loops.py", line 91, in asynloop
next(loop)
File "/usr/local/lib/python2.7/site-packages/kombu/asynchronous/hub.py", line 362, in create_loop
cb(*cbargs)
File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 1052, in on_readable
self.cycle.on_readable(fileno)
File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 348, in on_readable
chan.handlers[type]()
File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 679, in _receive
ret.append(self._receive_one(c))
File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 690, in _receive_one
response = c.parse_response()
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 3036, in parse_response
return self._execute(connection, connection.read_response)
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 3013, in _execute
return command(*args)
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 637, in read_response
response = self._parser.read_response()
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 290, in read_response
response = self._buffer.readline()
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 224, in readline
self._read_from_socket()
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 199, in _read_from_socket
(e.args,))
ConnectionError: Error while reading from socket: (u'Connection closed by server.',)
I think the best option would be to crash the containers whenever those errors occur; otherwise, the liveness check should report an error.
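A sketch of what a worker liveness probe could look like (assumed values; celery inspect ping is a common health check for Celery-based workers like these):

```yaml
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      # fails when the worker can no longer respond over the broker
      - celery inspect ping -A redash.worker -d celery@$HOSTNAME
  initialDelaySeconds: 60
  periodSeconds: 60
  timeoutSeconds: 30
```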
Possibly related to #34 - I've deployed Redash from the chart with settings for external PostgreSQL & Redis.
$ kubectl -n redash logs -f redash-7bb958dc7f-v8tfl
Using external postgresql database
Using external redis database
[2020-11-18 13:43:52 +0000] [6] [INFO] Starting gunicorn 19.7.1
[2020-11-18 13:43:52 +0000] [6] [INFO] Listening at: http://0.0.0.0:5000 (6)
[2020-11-18 13:43:52 +0000] [6] [INFO] Using worker: sync
[2020-11-18 13:43:52 +0000] [10] [INFO] Booting worker with pid: 10
[2020-11-18 13:43:52 +0000] [12] [INFO] Booting worker with pid: 12
...
[2020-11-18 13:44:32 +0000] [6] [CRITICAL] WORKER TIMEOUT (pid:10)
[2020-11-18 13:44:32 +0000] [10] [INFO] Worker exiting (pid: 10)
[2020-11-18 13:44:32 +0000] [32] [INFO] Booting worker with pid: 32
[2020-11-18 13:44:42 +0000] [6] [CRITICAL] WORKER TIMEOUT (pid:12)
[2020-11-18 13:44:42 +0000] [12] [INFO] Worker exiting (pid: 12)
[2020-11-18 13:44:42 +0000] [43] [INFO] Booting worker with pid: 43
As a result the pod never passes its readiness check and my Helm command eventually fails.
I've tried setting webWorkers to 2 for lower resource utilisation, among other changes; none of them has fixed the problem.
In values.yaml, the resources keys have a default value of null, but they should be an empty map; this results in:
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
coalesce.go:199: warning: destination for resources is a table. Ignoring non-table value <nil>
These warnings appear when setting resources in a local values file.
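A minimal sketch of the proposed default in values.yaml (an empty map instead of null, so user-supplied maps coalesce cleanly):

```yaml
resources: {}   # empty map: user values merge without the coalesce warning
```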
While working on LDAP settings from the helm chart, I'm getting an error for the ldap3 library. Checking the Redash images on Docker Hub, a different image name is mentioned for deployments that use LDAP.
https://hub.docker.com/layers/redash/redash/latest/images/sha256-55cedc91a09107e894a0ce872d138ec8c708ec537f6df282207bdf14eec72491?context=explore
Could you please check the image used in the helm chart? Also, is there any fix possible for the mentioned error?
It's a blocker for LDAP integrations :(
After some period of successful deployment, the Redash postgres server fails and gets stuck in a CrashLoopBackOff with this log message:
FATAL: data directory "/bitnami/postgresql/data" has group or world access
DETAIL: Permissions should be u=rwx (0700).
From what I've read, it appears that when the postgres pod is restarted, some change in the permissions prevents the postgres pod from properly coming back online.
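One commonly suggested workaround (an assumption based on the Bitnami PostgreSQL subchart, not verified for this particular report) is to enable the init container that resets data-directory ownership and permissions before the server starts:

```yaml
postgresql:
  volumePermissions:
    enabled: true   # init container fixes the data dir permissions on startup
```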
My sense is that it probably makes sense to keep the chart in the stable/redash directory, rather than moving to the top level. This keeps the door open to adding additional charts (e.g. for subcomponents) down the road and also keeps the CI mechanics separated out.
Given this, I think a top level readme would need an introduction to the project and basic license and contribution guide. The documentation on the chart itself would stay in the stable/redash readme.
I don't want to install the redis dependency; how can I use an external redis?
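Based on the values used elsewhere in these issues, disabling the bundled redis and pointing at an external one might look like this (a sketch; redash-redis and connection-string are example secret names):

```yaml
redis:
  enabled: false            # do not deploy the bundled redis subchart
externalRedisSecret:
  name: redash-redis        # existing secret holding the redis URL
  key: connection-string
```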
These two fields contain secret information, but cannot be configured outside of the helm values.yaml file.
This depends on #2
This repo seems premature and there's no way to use it. I even tried using it locally and I get:
$ helm install charts/redash/ --name redash --set cookieSecret=verysecret
Error: found in requirements.yaml, but missing in charts/ directory: redis, postgresql
There's no way to add it remotely.
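For local checkouts, the missing-subcharts error usually means the declared dependencies were never fetched; with Helm v2 (which this command syntax implies), a sketch:

```shell
# Fetch the redis/postgresql subcharts declared in requirements.yaml
helm dependency update charts/redash/
# With the subcharts present under charts/redash/charts/, retry the install
helm install charts/redash/ --name redash --set cookieSecret=verysecret
```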
Error:
Failed to install app redash. Error: failed post-install: warning: Hook post-install redash/templates/hook-install-job.yaml failed: jobs.batch "redash-install" already exists
And logs...
Using external postgresql database
Using Redis: redis://:******@redash-redis-master:6379/0
[2020-12-24 00:48:26 +0000] [6] [INFO] Starting gunicorn 19.7.1
[2020-12-24 00:48:26 +0000] [6] [INFO] Listening at: http://0.0.0.0:5000 (6)
[2020-12-24 00:48:26 +0000] [6] [INFO] Using worker: sync
[2020-12-24 00:48:26 +0000] [10] [INFO] Booting worker with pid: 10
[2020-12-24 00:48:26 +0000] [12] [INFO] Booting worker with pid: 12
[2020-12-24 00:48:27 +0000] [14] [INFO] Booting worker with pid: 14
[2020-12-24 00:48:27 +0000] [16] [INFO] Booting worker with pid: 16
[2020-12-24 00:48:41,499] ERROR in app: Exception on /ping [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1607, in full_dispatch_request
self.try_trigger_before_first_request_functions()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1654, in try_trigger_before_first_request_functions
func()
File "/app/redash/version_check.py", line 85, in reset_new_version_status
latest_version = get_latest_version()
File "/app/redash/version_check.py", line 91, in get_latest_version
return redis_connection.get(REDIS_KEY)
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 1264, in get
return self.execute_command('GET', name)
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 772, in execute_command
connection = pool.get_connection(command_name, **options)
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 994, in get_connection
connection.connect()
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 497, in connect
raise ConnectionError(self._error_message(e))
ConnectionError: Error 111 connecting to redash-redis-master:6379. Connection refused.
[2020-12-24 00:48:51,515] ERROR in app: Exception on /ping [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1607, in full_dispatch_request
self.try_trigger_before_first_request_functions()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1654, in try_trigger_before_first_request_functions
func()
File "/app/redash/version_check.py", line 85, in reset_new_version_status
latest_version = get_latest_version()
File "/app/redash/version_check.py", line 91, in get_latest_version
return redis_connection.get(REDIS_KEY)
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 1264, in get
return self.execute_command('GET', name)
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 772, in execute_command
connection = pool.get_connection(command_name, **options)
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 994, in get_connection
connection.connect()
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 497, in connect
raise ConnectionError(self._error_message(e))
ConnectionError: Error 111 connecting to redash-redis-master:6379. Connection refused.
[2020-12-24 00:49:00,496][PID:12][INFO][metrics] method=GET path=/ping endpoint=redash_ping status=200 content_type=text/html; charset=utf-8 content_length=5 duration=0.13 query_count=0 query_duration=0.00
[2020-12-24 00:49:10,493][PID:12][INFO][metrics] method=GET path=/ping endpoint=redash_ping status=200 content_type=text/html; charset=utf-8 content_length=5 duration=0.11 query_count=0 query_duration=0.00
[2020-12-24 00:49:20,496][PID:16][INFO][metrics] method=GET path=/ping endpoint=redash_ping status=200 content_type=text/html; charset=utf-8 content_length=5 duration=0.14 query_count=0 query_duration=0.00
Currently the redash-install job fails with:
Error: Couldn't find key googleClientSecret in Secret <ns>/redash
This is a test setup and I'm not going to use Google OAuth. How do I disable it and proceed with the installation?
Why is this a blocking requirement?
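If the chart gates Google OAuth behind a toggle (the values key below is an assumption, inferred from the `--set googleOAuth.enabled` flags seen elsewhere in this thread), a minimal values override to skip the secret lookup might look like:

```yaml
# Hypothetical values.yaml override - assumes the chart only looks up
# googleClientSecret when googleOAuth.enabled is true.
googleOAuth:
  enabled: false
```

With the flag off, the install job should no longer require the googleClientSecret key in the Secret.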
Hello,
thanks for the chart.
I have the following namespace available:
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    istio-injection: enabled
All components work correctly other than the hook-install-job.
The job finishes correctly and exits, but the Istio sidecar keeps running, so the pod moves into NotReady
status, and the release would be uninstalled if installed with the atomic and cleanup_on_fail options.
A similar issue occurred while running tekton-pipelines. The workaround is to add sidecar.istio.io/inject: "false"
to the pod annotations,
specifically in the install job,
and probably in the upgrade job as well;
the chart doesn't have configurable annotations for these jobs.
Alternatively, I'm making it work by installing the chart in a different namespace
with istio-injection disabled and manually configuring podAnnotations
for all the deployment sub-components that are configurable.
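The workaround described above can be sketched as a pod-template annotation on the hook job (a minimal sketch, assuming the chart were to expose such an annotation field for its jobs, which it currently does not; names and image are illustrative):

```yaml
# Sketch: disable Istio sidecar injection for the short-lived hook job
# so the pod can reach Completed instead of sticking in NotReady.
apiVersion: batch/v1
kind: Job
metadata:
  name: redash-install        # illustrative name
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"   # skip the Istio sidecar
    spec:
      restartPolicy: OnFailure
      containers:
        - name: redash-install
          image: redash/redash             # image/args elided
```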
Thank you!
Hi everyone.
I am trying to configure the SendGrid mail settings. I am currently running this command:
helm install my-release redash/redash \
  --set mail.server=smtp.sendgrid.net \
  --set mail.port=587 \
  --set mail.useTls=true \
  --set mail.username="value" \
  --set mail.password="value" \
  --set mail.defaultSender="value" \
  --set googleOAuth.enabled=true \
  --set googleOAuth.redashGoogleClientId="value" \
  --set googleOAuth.redashGoogleClientSecret="value"
But when entering Settings -> Account I am getting the following message:
"It looks like your mail server isn't configured. Make sure to configure it for the invite emails to work."
My question is: am I executing the command correctly, or is it just a credentials issue?
Also, how can I display the values that this command set on the release?
Thanks
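A less error-prone alternative to a long `--set` chain is a values file; the keys below mirror the `--set` flags used above, and the placeholder "value" strings are the asker's, not real credentials:

```yaml
# values-mail.yaml - same settings as the --set flags above.
mail:
  server: smtp.sendgrid.net
  port: 587
  useTls: true
  username: "value"
  password: "value"
  defaultSender: "value"
googleOAuth:
  enabled: true
  redashGoogleClientId: "value"
  redashGoogleClientSecret: "value"
```

Install with `helm install my-release redash/redash -f values-mail.yaml`, and inspect what was actually applied afterwards with `helm get values my-release`.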
The Redis password should be mandatory in the values manifest.
Leaving it unset will not actually prevent the chart from being installed the first time. However, upon upgrading the chart, the main Redash worker will fail:
[2021-01-27 14:48:25,434] ERROR in app: Exception on /login [GET]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python2.7/site-packages/flask_restful/__init__.py", line 271, in error_router
return original_handler(e)
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1610, in full_dispatch_request
rv = self.preprocess_request()
File "/usr/local/lib/python2.7/site-packages/flask/app.py", line 1831, in preprocess_request
rv = func()
File "/usr/local/lib/python2.7/site-packages/flask_limiter/extension.py", line 400, in __check_request_limit
six.reraise(*sys.exc_info())
File "/usr/local/lib/python2.7/site-packages/flask_limiter/extension.py", line 365, in __check_request_limit
if not self.limiter.hit(lim.limit, lim.key_func(), limit_scope):
File "/usr/local/lib/python2.7/site-packages/limits/strategies.py", line 132, in hit
self.storage().incr(item.key_for(*identifiers), item.get_expiry())
File "/usr/local/lib/python2.7/site-packages/limits/storage.py", line 446, in incr
return self.lua_incr_expire([key], [expiry])
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 3575, in __call__
return client.evalsha(self.sha, len(keys), *args)
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 2761, in evalsha
return self.execute_command('EVALSHA', sha, numkeys, *keys_and_args)
File "/usr/local/lib/python2.7/site-packages/redis/client.py", line 772, in execute_command
connection = pool.get_connection(command_name, **options)
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 994, in get_connection
connection.connect()
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 502, in connect
self.on_connect()
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 570, in on_connect
if nativestr(self.read_response()) != 'OK':
File "/usr/local/lib/python2.7/site-packages/redis/connection.py", line 642, in read_response
raise response
ResponseError: WRONGPASS invalid username-password pair
This is because the Redis password is not set explicitly, so Helm regenerates it on the second installation.
Setting it explicitly prevents this behaviour.
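A sketch of pinning the password so upgrades don't rotate it; the exact values key depends on the bundled Redis subchart version, so `redis.password` here is an assumption to be checked against the chart's values.yaml:

```yaml
# Assumed key - verify against the bundled Redis subchart's values.
redis:
  password: "choose-a-stable-password"   # pinned so helm upgrade doesn't regenerate it
```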
We need some basic linting - we can probably reproduce much of what the helm/charts repo does pretty easily
Instead of using annotations, the expected approach is to set ingressClassName
in the ingress spec. Currently the ingress template doesn't support this.
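For reference, the networking.k8s.io/v1 Ingress API moved class selection from the kubernetes.io/ingress.class annotation to a spec field; a template supporting it would render something like the sketch below (names and host are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redash              # illustrative
spec:
  ingressClassName: nginx   # replaces the kubernetes.io/ingress.class annotation
  rules:
    - host: redash.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: redash
                port:
                  number: 80
```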
Hello, any plan to have a released (non-dev) version of the chart?
I am currently using it and would like the newer version (especially because of the db migration execution), but I am only allowed to use "released/stable" versions.
Thank you!
Hello, I am currently exposing a series of services through Istio. However, when it comes to exposing Redash I am having problems.
First, here is the configuration I use to expose the service with Istio:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
  namespace: istio-system
spec:
  hosts:
However, when I access the service through the URL https://subdomain.domain.com/redash, I get redirected to this URL:
https://subdomain.domain.com/login?next=https%3A%2F%2Fsubdomain.domain.com%2F
Any idea why this happens?
Hi there,
we are trying to run this chart on our kubernetes cluster and have some startup errors with Postgresql when persistence is enabled:
Looks like this could be a problem with the version of the PostgreSQL chart or image: https://github.com/bitnami/bitnami-docker-postgresql/issues/91. Maybe this also correlates with #41.
Any ideas on how to fix this?
Where can I update the resource limits for postgresql?
My kubernetes cluster applies a default of only 128Mi when resources are not declared.
create Pod my-release-postgresql-0 in StatefulSet my-release-postgresql failed error: Pod "my-release-postgresql-0" is invalid: spec.containers[0].resources.requests: Invalid value: "256Mi": must be less than or equal to memory limit
So far pod labels are not configurable. We may want to add labels.
I'll open a PR to propose such a configuration.
Various options here - the simplest is to use Github Pages, but we could also publish to a Google Cloud Store or S3 repository
The adhocworker has the same container args as the scheduledworker deployment template. This causes an error on startup and results in no ad-hoc workers being started at all; Redash doesn't work properly in this configuration.
/templates/adhocworker-deployment.yaml:
line 38 is: args: ["-c", ". /config/dynamicenv.sh && /app/bin/docker-entrypoint scheduler"]
line 38 should be: args: ["-c", ". /config/dynamicenv.sh && /app/bin/docker-entrypoint worker"]
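In deployment-template form, the proposed one-line fix looks like the sketch below (surrounding fields elided; container name and image are illustrative):

```yaml
# templates/adhocworker-deployment.yaml (sketch of the fix)
containers:
  - name: redash-adhocworker
    image: redash/redash
    command: ["/bin/sh"]
    # was: ... docker-entrypoint scheduler   (copied from scheduledworker)
    args: ["-c", ". /config/dynamicenv.sh && /app/bin/docker-entrypoint worker"]
```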
Hi.
I changed the image tag to 8.0.1.b33387
and ran helm install.
I added a MySQL data source and I can see its tables in Redash.
But I get the error below when I execute a query.
How can I set this up for tag 8.0.1.b33387?
$ kubectl logs redash-adhocworker-578696f94b-b6nhc
[2019-11-29 07:01:50,071][PID:166][INFO][ForkPoolWorker-138] task_name=redash.tasks.execute_query task_id=09521991-c4f9-4da5-8eb8-be6a58385b59 task=execute_query state=load_ds ds_id=1
[2019-11-29 07:01:50,271][PID:166][ERROR][ForkPoolWorker-138] Task redash.tasks.execute_query[09521991-c4f9-4da5-8eb8-be6a58385b59] raised unexpected: InvalidToken()
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 385, in trace_task
R = retval = fun(*args, **kwargs)
File "/app/redash/worker.py", line 84, in __call__
return TaskBase.__call__(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/celery/app/trace.py", line 648, in __protected_call__
return self.run(*args, **kwargs)
File "/app/redash/tasks/queries.py", line 436, in execute_query
scheduled_query).run()
File "/app/redash/tasks/queries.py", line 339, in __init__
self.data_source = self._load_data_source()
File "/app/redash/tasks/queries.py", line 422, in _load_data_source
return models.DataSource.query.get(self.data_source_id)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 924, in get
ident, loading.load_on_pk_identity)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 1007, in _get_impl
return db_load_fn(self, primary_key_identity)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 250, in load_on_pk_identity
return q.one()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2954, in one
ret = self.one_or_none()
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2924, in one_or_none
ret = list(self)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 98, in instances
util.raise_from_cause(err)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 79, in instances
rows = [proc(row) for row in fetch]
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 511, in _instance
loaded_instance, populate_existing, populators)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 611, in _populate_full
dict_[key] = getter(row)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy/sql/type_api.py", line 1226, in process
return process_value(impl_processor(value), dialect)
File "/app/redash/models/types.py", line 28, in process_result_value
return ConfigurationContainer.from_json(super(EncryptedConfiguration, self).process_result_value(value, dialect))
File "/usr/local/lib/python2.7/site-packages/sqlalchemy_utils/types/encrypted/encrypted_type.py", line 409, in process_result_value
decrypted_value = self.engine.decrypt(value)
File "/usr/local/lib/python2.7/site-packages/sqlalchemy_utils/types/encrypted/encrypted_type.py", line 216, in decrypt
decrypted = self.fernet.decrypt(value)
File "/usr/local/lib/python2.7/site-packages/cryptography/fernet.py", line 75, in decrypt
return self._decrypt_data(data, timestamp, ttl)
File "/usr/local/lib/python2.7/site-packages/cryptography/fernet.py", line 119, in _decrypt_data
self._verify_signature(data)
File "/usr/local/lib/python2.7/site-packages/cryptography/fernet.py", line 108, in _verify_signature
raise InvalidToken
InvalidToken
[2019-11-29 07:01:51,025][PID:6][INFO][MainProcess] Received task: redash.tasks.record_event[46dff3bc-39ac-4ff2-b75d-264a9d902508]
[2019-11-29 07:01:51,032][PID:165][INFO][ForkPoolWorker-137] Task redash.tasks.record_event[46dff3bc-39ac-4ff2-b75d-264a9d902508] succeeded in 0.00644092302537s: None
[2019-11-29 07:01:57,699][PID:6][INFO][MainProcess] Received task: redash.tasks.refresh_queries[8e537b40-aa25-4eb4-a5c9-13c6a3146008]
Thank you.
contrib-helm-chart/values.yaml
Line 350 in f2a41b7
For the future version 10.0.0 we need to tell the adhocworker to run schema queries:
replace: "queries,celery"
with: "queries,schemas"
When deploying the latest Docker images available (redash/redash:9.0.0-beta.b42121 or redash/redash:preview), the scheduled & schema jobs are not executed.
The RQ queues are filled correctly but never processed.
It works fine if I update the configuration with the following:
adhocWorker:
  # adhocWorker.env -- Redash ad-hoc worker specific environment variables.
  env:
    QUEUES: "queries,celery,schemas,periodic"
    WORKERS_COUNT: 2
The README says we can set server.securityContext, but that has been removed from the template!?
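If the template were to honor the documented value again, the values-side configuration would presumably look like the sketch below (the `server.securityContext` key is assumed from the README wording; the fields are standard Kubernetes securityContext fields):

```yaml
server:
  securityContext:      # assumed values key, per the README
    runAsUser: 1000
    runAsNonRoot: true
```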
Issue Summary
CPU usage always keep high
Steps to Reproduce
Deploy Redash on Kops using helm
All containers reach a Running state.
Capture processes by htop
It seems the livenessProbe uses ./manage.py status for probing:
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - . /config/dynamicenv.sh && /app/manage.py status
  failureThreshold: 10
  initialDelaySeconds: 90
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
Technical details:
Redash Version: redash/redash:8.0.2.b37747
Kubernetes version: v1.15.6
Helm version:
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.2", GitCommit:"a8b13cc5ab6a7dbef0a58f5061bcc7c0c61598e7", GitTreeState:"clean"}
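Since the logs in this thread show the server answering GET /ping cheaply, one possible mitigation (an assumption, not an official chart option) is to replace the expensive manage.py status exec probe with an HTTP probe:

```yaml
# Sketch: probe the lightweight /ping endpoint instead of forking a
# Python process every periodSeconds, which keeps CPU usage high.
livenessProbe:
  httpGet:
    path: /ping
    port: 5000        # Redash server's default container port
  initialDelaySeconds: 90
  periodSeconds: 10
  timeoutSeconds: 3
```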
Hi everyone.
I am currently trying to install Redash on AKS but I always get the following error: Error: failed to download "stable/redash" (hint: running helm repo update
may help)
I am executing this command:
helm install redash stable/redash
helm repo list
NAME URL
istio.io https://storage.googleapis.com/istio-release/releases/1.3.2/charts/
loki https://grafana.github.io/loki/charts
stable https://kubernetes-charts.storage.googleapis.com
My helm version is 3.0.0.
Note that I have already run helm repo update and it doesn't help.
Why am I getting this error?
Issue Summary
A summary of the issue and the browser/OS environment in which it occurs.
I set up Redash with a Helm chart based on contrib-helm-chart
and got the issue below:
[2020-09-03 08:00:04,549][PID:30][ERROR][redash.app] Exception on /api/data_sources [GET]
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/cryptography/fernet.py", line 104, in _verify_signature
h.verify(data[-32:])
File "/usr/local/lib/python3.7/site-packages/cryptography/hazmat/primitives/hmac.py", line 66, in verify
ctx.verify(signature)
File "/usr/local/lib/python3.7/site-packages/cryptography/hazmat/backends/openssl/hmac.py", line 74, in verify
raise InvalidSignature("Signature did not match digest.")
cryptography.exceptions.InvalidSignature: Signature did not match digest.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 458, in wrapper
resp = resource(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/flask_login/utils.py", line 261, in decorated_view
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/flask/views.py", line 89, in view
return self.dispatch_request(*args, **kwargs)
File "/app/redash/handlers/base.py", line 33, in dispatch_request
return super(BaseResource, self).dispatch_request(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/flask_restful/__init__.py", line 573, in dispatch_request
resp = meth(*args, **kwargs)
File "/app/redash/permissions.py", line 71, in decorated
return fn(*args, **kwargs)
File "/app/redash/handlers/data_sources.py", line 116, in get
for ds in data_sources:
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 105, in instances
util.raise_from_cause(err)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 153, in reraise
raise value
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 85, in instances
rows = [proc(row) for row in fetch]
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 85, in <listcomp>
rows = [proc(row) for row in fetch]
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 572, in _instance
populators,
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 693, in _populate_full
dict_[key] = getter(row)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/sql/type_api.py", line 1266, in process
return process_value(impl_processor(value), dialect)
File "/app/redash/models/types.py", line 31, in process_result_value
super(EncryptedConfiguration, self).process_result_value(value, dialect)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy_utils/types/encrypted/encrypted_type.py", line 409, in process_result_value
decrypted_value = self.engine.decrypt(value)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy_utils/types/encrypted/encrypted_type.py", line 216, in decrypt
decrypted = self.fernet.decrypt(value)
File "/usr/local/lib/python3.7/site-packages/cryptography/fernet.py", line 75, in decrypt
return self._decrypt_data(data, timestamp, ttl)
File "/usr/local/lib/python3.7/site-packages/cryptography/fernet.py", line 117, in _decrypt_data
self._verify_signature(data)
File "/usr/local/lib/python3.7/site-packages/cryptography/fernet.py", line 106, in _verify_signature
raise InvalidToken
cryptography.fernet.InvalidToken
=>> I have checked REDASH_COOKIE_SECRET and REDASH_SECRET_KEY; both of them exist in the OS environment.