immich-app / immich-charts
Helm chart implementation of Immich
Home Page: https://immich.app
License: GNU Affero General Public License v3.0
I'm trying to install the chart by using:
helm install immichtest -n immich ./charts/immich/ --set immich.persistence.library.existingClaim=nfs-immich --set redis.enabled=true --set postgresql.enabled=true --set global.postgresql.auth.postgresPassword=DBPassword123 --kube-context home-lab-cluster-0
But the logs for pod immich/immichtest-server-7494c4fd5d-c7wm8 show:
Error: getaddrinfo ENOTFOUND immichtest-redis-master
Error: getaddrinfo ENOTFOUND immichtest-postgresql
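Those hostnames suggest the server is looking for Services named after the release while the subcharts created them under different names. A hedged sketch of overriding the hostnames via env in values.yaml (the hostnames here are examples; check the real Service names with `kubectl get svc -n immich`):

```yaml
# Hypothetical values.yaml fragment; replace the hostnames with the
# actual Service names reported by `kubectl get svc -n immich`.
env:
  DB_HOSTNAME: immichtest-postgresql
  REDIS_HOSTNAME: immichtest-redis-master
```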
HTTP probes should be more reliable and can convey more information than TCP probes. I would suggest that if HTTP is available then it should be used. TCP probes are really only useful for protocols which use TCP but not HTTP.
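To make the suggestion concrete, a sketch of an HTTP liveness probe, assuming this chart inherits the common-library style `custom: true` plus raw probe `spec` override (the path is an example, not a confirmed Immich endpoint):

```yaml
# Sketch: replace the TCP probe with an HTTP one via a custom probe spec.
probes:
  liveness:
    custom: true
    spec:
      httpGet:
        path: /server-info/ping   # hypothetical health path; verify against the API
        port: 3001
```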
Support providing multiple volumes, instead of just a single one as is currently the case.
Use cases:
Both of these use cases apply to me. This blocks me from using Immich.
Currently we do a lot of copy pasting to reuse the common-library helm chart. There is a better way of including it multiple times which I don't fully understand, but see https://github.com/gabe565/charts/blob/7547b4632f8686cc6f402708a19215279c1a0ae3/charts/obico/templates/ml-api.yaml#L9-L12 for the magic sauce.
I recently updated to v1.49.0 image version and everything seems to keep working EXCEPT the machine-learning image.
Looking at the git history in the main repo, it seems as if the Dockerfile for the machine-learning image has been changed to have the CMD within the image metadata.
I believe that there is no need to add args
and the shell in the deployment. I hot-patched that (removed those lines from the deployment manifest) and now the deployment seems to start.
I suppose that a new chart release, with a more up-to-date Immich version and this machine-learning change, would work. But I always break things when I try to modify charts, sorry for not proposing the PR myself.
Hi Everyone,
I recently got a helm chart working based on the (now deprecated) k8s-at-home project and would be happy to contribute if there's a need for a starting point.
It's a bit bare-bones and is enough to get Immich deployed to a single node (everything (minus proxy) deployed into a single pod) and uses the Bitnami redis and postgresql charts.
I'd also be happy to help test- I'm deploying on k3s (single node for now) with a Traefik v2 ingress.
Thanks in advance!
persistence:
geodata-cache:
enabled: true
size: 1Gi
# Optional: Set this to pvc to avoid downloading the geodata every start.
type: emptyDir
accessMode: ReadWriteMany
# storageClass: your-class
I use Kubernetes. The comment says "Set this to pvc to avoid downloading the geodata every start.", but I don't know how to do that, and there is no example.
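For what it's worth, switching the type from emptyDir to pvc should be all that is needed; a sketch mirroring the snippet above (the storageClass is an example):

```yaml
persistence:
  geodata-cache:
    enabled: true
    size: 1Gi
    type: pvc                  # switch from emptyDir so the data survives restarts
    accessMode: ReadWriteOnce
    # storageClass: your-class # optional; omit to use the cluster default
```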
Hi team, I'm getting a 404 when trying to add the helm repository with helm repo add immich https://immich-app.github.io/helm-charts. Could you please take a look at the GitHub Pages config? Much appreciated.
I could not install this helm chart. The PVC was created earlier and now it complains about the access mode. I also checked the examples in this repository and still get this message:
Error: execution error at (immich/templates/web.yaml:26:4): accessMode is required for PVC photos-web-library
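For later readers: the template refuses to render a PVC entry that has no accessMode. A hedged sketch of the missing field (names and size are examples taken from the error above):

```yaml
# Hypothetical values fragment: a persistence entry needs accessMode
# (and usually size) for the PVC template to render.
persistence:
  photos-web-library:
    enabled: true
    accessMode: ReadWriteMany   # must match what the storage backend supports
    size: 100Gi
```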
Hello,
I successfully upgraded to immich version 1.87 using this chart without encountering any issues. However, I've noticed a warning in the main immich interface regarding certain disruptive changes in version 1.88. You can find more details about these changes here: immich-app/immich#5086
Is there a plan to update the chart to align with these changes?
Thank you!
Postgres Pod Crash:
chmod: changing permissions of '/var/run/postgresql': Operation not permitted
PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres: could not access the server configuration file "/bitnami/postgresql/data/postgresql.conf": No such file or directory
Chart Values:
env:
DB_PASSWORD:
valueFrom:
secretKeyRef:
name: postgres-secrets
key: password
image:
tag: v1.91.0
immich:
persistence:
library:
existingClaim: va-unraid-photos-rw
postgresql:
enabled: true
auth:
existingSecret: postgres-secrets
redis:
enabled: true
The postgres statefulset is appropriately configured with the following image:
image: docker.io/tensorchord/pgvecto-rs:pg14-v0.1.11
Chart version: immich-0.3.0
Hi,
I would like the integrated postgres to not use the default storage class for its data. How can I configure the release to do that?
I tried this, but the PVC continues to use the default storage class:
postgresql:
enabled: true
persistence:
data:
enabled: true
size: 8Gi
storageClass: my-custom-storage-class
accessMode: ReadWriteMany
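One likely cause: recent Bitnami postgresql subcharts nest persistence under `primary`, so a top-level `postgresql.persistence` block is silently ignored. A hedged sketch, assuming a recent subchart version (check the subchart's own values for the exact keys):

```yaml
# Sketch: Bitnami postgresql reads persistence under `primary`;
# `global.storageClass` is an alternative, coarser knob.
postgresql:
  enabled: true
  primary:
    persistence:
      enabled: true
      size: 8Gi
      storageClass: my-custom-storage-class
```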
I'm trying to deploy the helm chart using the "existingSecret" option to specify postgresql credentials. The problem appears to be that while the postgresql pod reads the secret for the environment variables, the immich-server pod is still using the default "immich" value for DB_PASSWORD. Is there a values option to tell immich to read the secrets files, too?
$ kubectl -n immich get pod immich-server-6cdfd9bd66-4h42z -o yaml
...
- name: DB_PASSWORD
value: immich
...
$ kubectl -n immich get pod immich-postgresql-0 -o yaml
...
- name: POSTGRES_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
key: postgres-password
name: postgres-secrets
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: postgres-secrets
The runtime values were verified to be as expected based on the output above by exec'ing into the pods and running env. Both pods show error messages to the effect of 'password authentication failed for user "immich"'.
Here is the contents of postgres-secrets.yaml:
apiVersion: v1
kind: Secret
metadata:
name: postgres-secrets
namespace: immich
stringData:
DB_USERNAME: immich
POSTGRES_USER: immich
DB_PASSWORD: not-actually-my-password
POSTGRES_PASSWORD: not-actually-my-password
postgres-password: not-actually-my-password
password: not-actually-my-password
DB_DATABASE_NAME: immich
POSTGRES_DB: immich
I probably don't need all of these, but I've been trying different variables to get this to work.
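For later readers: `postgresql.auth.existingSecret` only wires the secret into the postgres pod; the immich-server container needs its own reference. A sketch, assuming the chart passes an `env` block through to the server container (as the pod output above suggests):

```yaml
# Hypothetical values fragment: point the server's DB_PASSWORD at the
# same secret the postgresql subchart consumes.
env:
  DB_PASSWORD:
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: password
```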
Hi, this chart needs some updating for the breaking changes.
k logs -n storage immich-proxy-659f8bb687-ccb7z
exec /docker-entrypoint.sh: exec format error
Hi, I've deployed Immich on my Kubernetes (k3s) cluster. Everything is running except the immich-machine-learning pod; it keeps crashing with no logs, so I really don't know what's happening or how to fix it. Any suggestions for debugging this?
Command:
kubectl logs immich-machine-learning-8885d64cb-l2rvk
Output:
~$
Command:
kubectl describe pod immich-machine-learning-8885d64cb-l2rvk
Output:
Name: immich-machine-learning-8885d64cb-l2rvk
Namespace: tools
Priority: 0
Service Account: default
Node: k3s-master-02/10.0.100.102
Start Time: Mon, 20 Feb 2023 20:50:26 +0100
Labels: app=immich-machine-learning
pod-template-hash=8885d64cb
Annotations: <none>
Status: Running
IP: 10.42.2.161
IPs:
IP: 10.42.2.161
Controlled By: ReplicaSet/immich-machine-learning-8885d64cb
Init Containers:
postgresql-isready:
Container ID: containerd://f3de44384d61f7743b8e7da3398feedfcecd3df0e9a4dc52d7efd1f9d76f4cc5
Image: harbor.k8s.lan/dockerhub-proxy/bitnami/postgresql:14.5.0-debian-11-r6
Image ID: harbor.k8s.lan/dockerhub-proxy/bitnami/postgresql@sha256:4355265e33e9c2a786aa145884d4b36ffd4c41c516b35d60df0b7495141ec738
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
until pg_isready -U "${POSTGRESQL_USERNAME}" -d "dbname=${DB_DATABASE_NAME}" -h immich-postgresql-hl -p 5432 ; do sleep 2 ; done
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 20 Feb 2023 20:50:28 +0100
Finished: Mon, 20 Feb 2023 20:50:34 +0100
Ready: True
Restart Count: 0
Environment:
POSTGRESQL_USERNAME: <set to the key 'DB_USERNAME' in secret 'immich-secret-env'> Optional: false
POSTGRESQL_DATABASE: <set to the key 'DB_DATABASE_NAME' of config map 'immich-env'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d5qj9 (ro)
Containers:
immich-machine-learning:
Container ID: containerd://877582bf1c4c88ab58a94a49a47505d11c10f9e0a2b8cb780c3b12c1a46b06ce
Image: harbor.k8s.lan/dockerhub-proxy/altran1502/immich-machine-learning:v1.43.0
Image ID: harbor.k8s.lan/dockerhub-proxy/altran1502/immich-machine-learning@sha256:3373962c8d64b264b42751614be88590e279cc6442db0d55615a8daa9cead8f9
Port: 3003/TCP
Host Port: 0/TCP
Command:
/bin/sh
Args:
./entrypoint.sh
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 132
Started: Mon, 20 Feb 2023 21:16:41 +0100
Finished: Mon, 20 Feb 2023 21:16:41 +0100
Ready: False
Restart Count: 10
Liveness: tcp-socket :3003 delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: tcp-socket :3003 delay=0s timeout=1s period=10s #success=1 #failure=3
Startup: tcp-socket :3003 delay=0s timeout=1s period=5s #success=1 #failure=30
Environment Variables from:
immich-env ConfigMap Optional: false
Environment:
DB_PASSWORD: <set to the key 'DB_PASSWORD' in secret 'immich-secret-env'> Optional: false
DB_USERNAME: <set to the key 'DB_USERNAME' in secret 'immich-secret-env'> Optional: false
MAPBOX_KEY: <set to the key 'MAPBOX_KEY' in secret 'immich-secret-env'> Optional: false
Mounts:
/usr/src/app/.reverse-geocoding-dump from geocoding-dump (rw)
/usr/src/app/upload from library (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d5qj9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
geocoding-dump:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
library:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-nfs-immich-library
ReadOnly: false
kube-api-access-d5qj9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 29m default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Warning FailedScheduling 29m default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Normal Scheduled 29m default-scheduler Successfully assigned tools/immich-machine-learning-8885d64cb-l2rvk to k3s-master-02
Normal Pulled 29m kubelet Container image "harbor.k8s.lan/dockerhub-proxy/bitnami/postgresql:14.5.0-debian-11-r6" already present on machine
Normal Created 29m kubelet Created container postgresql-isready
Normal Started 29m kubelet Started container postgresql-isready
Warning Unhealthy 28m kubelet Startup probe failed: dial tcp 10.42.2.161:3003: connect: connection refused
Normal Pulled 28m (x4 over 28m) kubelet Container image "harbor.k8s.lan/dockerhub-proxy/altran1502/immich-machine-learning:v1.43.0" already present on machine
Normal Created 28m (x4 over 28m) kubelet Created container immich-machine-learning
Normal Started 28m (x4 over 28m) kubelet Started container immich-machine-learning
Warning BackOff 3m59s (x138 over 28m) kubelet Back-off restarting failed container
According to https://helm.sh/docs/topics/chart_repository/ it is possible to create a Helm repository pretty easily by just creating a corresponding GitHub page.
This would allow implementing an easy release flow for the Helm charts.
Related: #13 - If using the chart releaser, the index.yaml can be updated automatically.
Hi,
I am currently looking into replacing nextcloud with immich as the former won't fix the broken auto-upload for iOS devices. From what I have read so far, the ML component in this chart is pretty high on resource consumption, which is a problem because I run on a single node cluster.
Can Immich be deployed and function without the ML component?
If so, is setting "enabled: false" in the values.yaml file sufficient to achieve this goal?
What features etc. will I be missing out on when ML is not available to immich?
Thanks, this looks like a great app!
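For what it's worth, disabling the component toggle should be enough; as far as I know, the ML service backs smart (CLIP) search and facial recognition, which would stop working. A minimal sketch, assuming the component toggle works the same as for the other subcomponents:

```yaml
machine-learning:
  enabled: false
```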
I tried to deploy the chart, and noticed that a secret appeared called immich-postgresql with two fields called password and postgres-password.
With that in mind, I succeeded in doing a deployment with mostly defaults with the following additional configuration:
common_env:
DB_PASSWORD:
valueFrom:
secretKeyRef:
name: immich-postgresql
key: password
However, today I changed the chart deployment (by updating the tag, but I don't think that's relevant). The immich-server was unable to connect to the database. I suspect that the secret manifest had changed and that broke the connection. My "repair" was to change the DB_PASSWORD environment variable and hardcode the original password. However, that was confusing.
I am a n00b chart user so maybe I missed something obvious. I was somewhat expecting the secret to be generated on first deployment and be immutable from that point onwards. I don't know if that makes sense, or if that behavior may break more stuff. Unfortunately, I don't really know how to do that (otherwise I would try to do the PR myself).
It would be nice to limit the resources of the machine learning pod as it can stress the node on which it is running immensely.
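A hedged sketch of what that could look like, assuming the chart forwards a standard `resources` block to the pod spec (the numbers are examples, not recommendations):

```yaml
machine-learning:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "2"      # cap CPU so model inference can't starve the node
      memory: 4Gi
```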
Is there a plan to provide other base images (like slim-buster...)?
I want to use an existing redis and postgresql, but it looks like there is a DNS resolution problem (alpine):
/usr/src/app # ping postgres-postgresql.app.svc.cluster.local
PING postgres-postgresql.app.svc.cluster.local (3.64.163.50): 56 data bytes
^C
--- postgres-postgresql.app.svc.cluster.local ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
/usr/src/app # ping postgres-postgresql.app.svc.cluster.local
PING postgres-postgresql.app.svc.cluster.local (3.64.163.50): 56 data bytes
^C
--- postgres-postgresql.app.svc.cluster.local ping statistics ---
/usr/src/app # ping redis.app.cluster.svc.local
PING redis.app.cluster.svc.local (3.64.163.50): 56 data bytes
^C
--- redis.app.cluster.svc.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/usr/src/app # ping redis.app.cluster.svc.local
PING redis.app.cluster.svc.local (3.64.163.50): 56 data bytes
^C
--- redis.app.cluster.svc.local ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
/usr/src/app # ping redis.app.cluster.svc.local
PING redis.app.cluster.svc.local (3.64.163.50): 56 data bytes
^C
--- redis.app.cluster.svc.local ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
/usr/src/app # ping redis.app.cluster.svc.local
PING redis.app.cluster.svc.local (3.64.163.50): 56 data bytes
^C
--- redis.app.cluster.svc.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
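Two observations for later readers: the redis name above uses the suffix ".app.cluster.svc.local", but in-cluster DNS is `<service>.<namespace>.svc.cluster.local`, and the fact that it resolved to a public IP (3.64.163.50) suggests the query fell through to an external wildcard resolver. Also, many Services don't answer ICMP, so a failed ping doesn't prove DNS is broken. A sketch of the expected form (service and namespace names taken from the output above):

```yaml
# In-cluster DNS follows <service>.<namespace>.svc.cluster.local.
env:
  DB_HOSTNAME: postgres-postgresql.app.svc.cluster.local
  REDIS_HOSTNAME: redis.app.svc.cluster.local
```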
I've just upgraded to Immich v1.88.1 using Helm chart version v0.2.0.
It looks like somehow, in the removal of the web and proxy deployments, the default Ingress has ended up pointing to the immich-machine-learning service. I assume it should be pointing at the immich-server service? In fact it looks like the original ingress has been removed and replaced with a new one with a different name.
[jonathan@poseidon immich]$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
immich-machine-learning public immich.gazeley.uk 127.0.0.1 80, 443 11m
[jonathan@poseidon immich]$ kubectl describe ingress immich-machine-learning
Name: immich-machine-learning
Labels: app.kubernetes.io/instance=immich
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=machine-learning
app.kubernetes.io/version=v1.88.0
helm.sh/chart=immich-0.2.0
Namespace: immich
Address: 127.0.0.1
Ingress Class: public
Default backend: <default>
TLS:
ingress-tls terminates immich.mydomain.uk
Rules:
Host Path Backends
---- ---- --------
immich.mydomain.uk
/ immich-machine-learning:3003 (10.1.106.39:3003)
Annotations: cert-manager.io/cluster-issuer: letsencrypt-prod
meta.helm.sh/release-name: immich
meta.helm.sh/release-namespace: immich
nginx.ingress.kubernetes.io/proxy-body-size: 0
Events: <none>
It seems that there is no option available to configure GPU resources in the Helm chart. This lack of configurability prevents the leveraging of GPU resources for tasks such as video transcoding, which could greatly benefit from the parallel computing capabilities provided by GPUs.
See https://artifacthub.io/docs/topics/repositories/helm-charts/
Probably depends on #12.
When a PR to the chart is made, a GitHub action should run helm template with the default values (and a placeholder for the library claim), diff the result against the main branch, and post this diff as a comment on the PR.
For bonus points, there should also be an option to request a diff with custom values by leaving a comment on the PR, such as
!diff
immich:
  metrics:
    enabled: true
I used to host Immich with https://cloudnative-pg.io. Perhaps since the vectors extension became a requirement, I am not able to run Immich with this helm chart.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: pg-immich-cluster
namespace: cnpg-system
spec:
imageName: ghcr.io/bo0tzz/cnpgvecto.rs:15
postgresql:
shared_preload_libraries:
- "vectors.so"
affinity:
nodeSelector:
kubernetes.io/hostname: i11806-kube-node-2-03
instances: 1
storage:
pvcTemplate:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: manual-immich
monitoring:
enablePodMonitor: true
bootstrap:
initdb:
database: postgresql-immich-pgsql
owner: postgresql-immich-pgsql
secret:
name: post-init-immich-pgsql-secret
postInitTemplateSQL:
- CREATE EXTENSION IF NOT EXISTS cube;
- CREATE EXTENSION IF NOT EXISTS earthdistance;
immich logs
[Nest] 7 - 01/15/2024, 11:13:03 PM ERROR [TypeOrmModule] Unable to connect to the database. Retrying (3)...
QueryFailedError: permission denied to create extension "vectors"
at PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:211:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async assertVectors (/usr/src/app/dist/infra/database.config.js:53:5)
at async UsePgVectors1700713871511.up (/usr/src/app/dist/infra/migrations/1700713871511-UsePgVectors.js:11:9)
at async MigrationExecutor.executePendingMigrations (/usr/src/app/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
at async DataSource.runMigrations (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:260:35)
at async DataSource.initialize (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:148:17)
[Nest] 7 - 01/15/2024, 11:13:03 PM ERROR [ExceptionHandler] permission denied to create extension "vectors"
QueryFailedError: permission denied to create extension "vectors"
at PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:211:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async assertVectors (/usr/src/app/dist/infra/database.config.js:53:5)
at async UsePgVectors1700713871511.up (/usr/src/app/dist/infra/migrations/1700713871511-UsePgVectors.js:11:9)
at async MigrationExecutor.executePendingMigrations (/usr/src/app/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
at async DataSource.runMigrations (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:260:35)
at async DataSource.initialize (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:148:17)
Has anyone tried this Chart with Cloudnative Pg?
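The error above suggests the application user lacks permission to run CREATE EXTENSION. A hedged sketch: create the extension as superuser during CNPG init, alongside the existing cube/earthdistance entries (assuming the image ships vectors.so, as the shared_preload_libraries entry above implies):

```yaml
# Sketch: extend the manifest's postInitTemplateSQL so "vectors" is
# created by the superuser, not by the unprivileged app user.
bootstrap:
  initdb:
    postInitTemplateSQL:
      - CREATE EXTENSION IF NOT EXISTS cube;
      - CREATE EXTENSION IF NOT EXISTS earthdistance;
      - CREATE EXTENSION IF NOT EXISTS vectors;
```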
Although Immich supports the arm64 architecture, the redis and postgresql images by Bitnami do not.
Is there any reason why the official images for these packages are not used? Those are multi-arch.
I can create a PR to use them, of course.
It would be great to incorporate something like https://github.com/helm/chart-releaser-action into the release workflow.
This would reduce the number of errors and could be used to fully automate a new release in conjunction with: #12
using minikube and manifest for volume
Immich release v1.51.0 added an extra Typesense container to the stack that is currently required. This is another infrastructure component like postgres and redis.
While there is a typesense helm chart at https://github.com/Spittal/typesense-helm, it hasn't been updated for a while. Typesense doesn't need anything specific and so it would also be totally fine to use the common-library to deploy it.
Hi,
In case I'm not mistaken, there is a hard-coded ClusterIP type in the helm templates for services.
Can you add the service type as a value in the templates, so it can be set from values?
Thanks
Currently the chart is built and released whenever a new chartVersion is pushed to main. Depending on the options out there, it would be nice to make this a bit more deliberate to increase the control over when new versions are published.
Currently releases do not include any changelogs. This should be fixed so that both the github releases and the artifacthub page properly display a changelog, and ideally so that these changelogs also show up in PRs generated by tools like Renovate.
I believe the canonical location of images is on GHCR.
Expose ports & env vars for https://immich.app/docs/features/monitoring, and include a serviceMonitor definition.
Hey folks,
I can't seem to get helm to install this directly:
csm10495@csm10495-desk:~/Desktop/immich $ helm install --create-namespace --namespace immich immich immich/immich
Error: INSTALLATION FAILED: execution error at (immich/templates/checks.yaml:1:64): .Values.immich.persistence.library.existingClaim is required.
This matches the step on the readme. Am I missing something?
I always get a CrashLoopBackOff with the machine-learning container:
INFO: Started server process [7]
INFO: Waiting for application startup.
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
Both because it's good practice, and because we need it for the official status badge on artifacthub.
We can use a values.schema.json file to validate values inputs instead of template functions. It will also populate good documentation on artifacthub.io.
https://blog.artifacthub.io/blog/helm-values-schema-reference/
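A minimal sketch of what could replace the checks.yaml template assertion (shown earlier in this thread list as ".Values.immich.persistence.library.existingClaim is required"); keys mirror the chart's values layout:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "immich": {
      "type": "object",
      "properties": {
        "persistence": {
          "type": "object",
          "properties": {
            "library": {
              "type": "object",
              "required": ["existingClaim"],
              "properties": {
                "existingClaim": { "type": "string", "minLength": 1 }
              }
            }
          }
        }
      }
    }
  }
}
```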
Hi, could you update version of chart to match latest Immich release?
It would be nice to add additionalVolumeMounts and additionalVolumes to the helm chart.
For example for custom CA certificates for OAuth authentication (NODE_EXTRA_CA_CERTS=/path/to/your/CA/cert/file) and for the upcoming libraries PR Feat(server,web): libraries (immich-app/immich#3124).
Thanks!
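A sketch of the proposed shape (not currently supported by the chart; all names here are hypothetical):

```yaml
# Proposed values shape for the feature request above.
additionalVolumes:
  - name: oauth-ca
    secret:
      secretName: custom-ca        # hypothetical secret holding the CA cert
additionalVolumeMounts:
  - name: oauth-ca
    mountPath: /certs
    readOnly: true
env:
  NODE_EXTRA_CA_CERTS: /certs/ca.crt
```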
See https://helm.sh/docs/topics/registries/.
This depends on helm/chart-releaser-action#107, unless we use a workaround as in https://github.com/fluxcd-community/helm-charts/pull/94/files#diff-87db21a973eed4fef5f32b267aa60fcee5cbdf03c67fafdc2a9b553bb0b15f34.
While playing around with the chart today I noticed that although the persistence options are wired into the different containers (e.g. the mounts are created), if you (want to) create a new PVC using the values, no PVC is actually created.
Hi,
after the update to 1.72.1 the machine-learning container does not start:
Back-off restarting failed container immich-machine-learning in pod immich-machine-learning-5777ffff49-kdqcr_immich(d635ea9c-2008-4bb6-b434-2be763abdcda)
with version 1.71.0 everything was working fine
here the logs in the container:
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
ERROR: Traceback (most recent call last):
File "/opt/venv/lib/python3.11/site-packages/starlette/routing.py", line 677, in lifespan
async with self.lifespan_context(app) as maybe_state:
File "/opt/venv/lib/python3.11/site-packages/starlette/routing.py", line 566, in __aenter__
await self._router.startup()
File "/opt/venv/lib/python3.11/site-packages/starlette/routing.py", line 654, in startup
await handler()
File "/usr/src/app/main.py", line 46, in startup_event
await load_models()
File "/usr/src/app/main.py", line 40, in load_models
await app.state.model_cache.get(model_name, model_type, eager=settings.eager_startup)
File "/usr/src/app/models/cache.py", line 53, in get
model = InferenceModel.from_model_type(model_type, model_name, **model_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/app/models/base.py", line 78, in from_model_type
return subclasses[model_type](model_name, **model_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/app/models/facial_recognition.py", line 27, in __init__
super().__init__(model_name, cache_dir, **model_kwargs)
File "/usr/src/app/models/base.py", line 25, in __init__
loader(**model_kwargs)
File "/usr/src/app/models/base.py", line 35, in load
self.download(**model_kwargs)
File "/usr/src/app/models/base.py", line 32, in download
self._download(**model_kwargs)
File "/usr/src/app/models/facial_recognition.py", line 32, in _download
with zipfile.ZipFile(zip_file, "r") as zip:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/zipfile.py", line 1302, in __init__
self._RealGetContents()
File "/usr/local/lib/python3.11/zipfile.py", line 1369, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
ERROR: Application startup failed. Exiting.
and the values:
machine-learning:
enabled: true
probes:
liveness:
spec:
initialDelaySeconds: 240
image:
repository: ghcr.io/immich-app/immich-machine-learning
pullPolicy: IfNotPresent
env:
TRANSFORMERS_CACHE: /cache
persistence:
cache:
enabled: true
size: 10Gi
# Optional: Set this to pvc to avoid downloading the ML models every start.
type: emptyDir
accessMode: ReadWriteOnce
storageClass: local-path-immich
the k8s cluster is a single node microk8s (and I use local-path for the storage).
Thank you
When downloading the reverse geocoding data from download.geonames.org, the microservices container runs into a DNS error. I tried resolving this by setting ndots, but that then broke in-cluster connections to the database and such.
The server and microservices templates have quite a few similarities. We should be able to define those separately and then merge them in the respective templates.
Now that Immich supports YAML-formatted config files, support for those could be nicely integrated into the helm values.
Maybe I am doing something wrong, but I was looking at the errors on a clean install (latest version 0.1.1 with image tag v1.60.0). It seems that the liveness probe is triggering a restart before things are able to initialize themselves.
The logs for the machine-learning pods shows me that it is downloading the pytorch-model, and the pod is restarted before being able to finish this download.
I haven't seen any values.yaml options related to the timeout or the livenessProbe. I suppose that after the first time, because I activated persistence, it will be faster. But I cannot reach that point because I am doing a clean install.
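A hedged sketch of stretching the probe windows so the model download can finish, assuming the common-library style probe `spec` overrides this chart appears to use (timings are examples):

```yaml
# Sketch: give the first start enough time to download models before
# the kubelet restarts the container.
machine-learning:
  probes:
    startup:
      spec:
        periodSeconds: 5
        failureThreshold: 60     # 60 * 5s = up to 5 minutes to come up
    liveness:
      spec:
        initialDelaySeconds: 300
```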
Hi, thanks for this official chart. It would be cool to implement Renovate to allow auto-upgrading of the minor tag version of the helm chart.