immich-charts's People

Contributors

alextran1502, bo0tzz, brandonros, btajuddin, choikangjae, hiteshnayak305, hofq, immich-tofu[bot], ndragon798, obito1903, orbatschow, pixeljonas, samholton, yesid-lopez


immich-charts's Issues

Error: getaddrinfo ENOTFOUND

I'm trying to install the chart by using:

helm install immichtest -n immich ./charts/immich/ --set immich.persistence.library.existingClaim=nfs-immich --set redis.enabled=true --set postgresql.enabled=true --set global.postgresql.auth.postgresPassword=DBPassword123 --kube-context home-lab-cluster-0

But the logs for the pod immich/immichtest-server-7494c4fd5d-c7wm8 show:

Error: getaddrinfo ENOTFOUND immichtest-redis-master
Error: getaddrinfo ENOTFOUND immichtest-postgresql


The Redis server did not receive any requests.
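One hedged workaround while this is investigated: Immich reads the database and Redis hosts from the `DB_HOSTNAME` and `REDIS_HOSTNAME` environment variables, so these can be pinned to the fully qualified service names. A sketch — the exact service names depend on the release name and namespace, so verify them with `kubectl get svc -n immich` first:

```yaml
env:
  # Assumed names for release "immichtest" in namespace "immich"
  DB_HOSTNAME: immichtest-postgresql.immich.svc.cluster.local
  REDIS_HOSTNAME: immichtest-redis-master.immich.svc.cluster.local
```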

Support providing multiple volumes

Support providing multiple volumes, instead of just a single one as is currently the case.

Use cases:

  • A user may have multiple source data volumes, which are infeasible to merge into a single one.
  • A user may not trust Immich to not modify their existing photo gallery, and wants to provide a read-only volume of existing data in addition to a read-write volume for immich to add new uploads to.

Both of these use cases apply to me. This blocks me from using Immich.
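For concreteness, a hypothetical values shape for this feature, following the chart's existing `persistence` map convention — none of the extra keys below exist in the chart today:

```yaml
immich:
  persistence:
    library:
      existingClaim: immich-uploads      # read-write, for new uploads
    legacy-gallery:
      existingClaim: legacy-gallery-pvc  # pre-existing photos
      readOnly: true
```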

Changes on machine-learning break the deployment

I recently updated to v1.49.0 image version and everything seems to keep working EXCEPT the machine-learning image.

Looking at the git history of the main repo, it seems the Dockerfile for the machine-learning image has been changed to set the CMD in the image metadata.

I believe that there is no need to add args and the shell in the deployment. I hot-patched that (removed those lines from the deployment manifest) and now the deployment seems to start.

I suppose a new chart release with a more up-to-date immich version and this machine-learning change would work. But I always break things when I try to modify charts, so apologies for not proposing the PR myself.

Contributing and testing

Hi Everyone,

I recently got a helm chart working based on the (now deprecated) k8s-at-home project and would be happy to contribute if there's a need for a starting point.

It's a bit bare-bones and is enough to get Immich deployed to a single node (everything (minus proxy) deployed into a single pod) and uses the Bitnami redis and postgresql charts.

I'd also be happy to help test- I'm deploying on k3s (single node for now) with a Traefik v2 ingress.

Thanks in advance!

how to set cache to pvc

      persistence:
        geodata-cache:
          enabled: true
          size: 1Gi
          # Optional: Set this to pvc to avoid downloading the geodata every start.
          type: emptyDir
          accessMode: ReadWriteMany
          # storageClass: your-class

I'm using Kubernetes. The values comment says "Set this to pvc to avoid downloading the geodata every start", but I don't know how to do that, and there is no example.
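A minimal sketch based on the comment in the snippet above: switching `type` from `emptyDir` to `pvc` makes the chart request a PersistentVolumeClaim for the cache. Key names are taken from the snippet; `ReadWriteOnce` is an assumption that usually suffices for a single pod:

```yaml
persistence:
  geodata-cache:
    enabled: true
    size: 1Gi
    type: pvc                  # was emptyDir; a PVC survives pod restarts
    accessMode: ReadWriteOnce
    # storageClass: your-class # optional: pick a specific storage class
```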

Helm repository 404

Hi team, I'm getting a 404 when trying to add the Helm repository with `helm repo add immich https://immich-app.github.io/helm-charts`. Could you please take a look at the GitHub Pages config? Much appreciated!

compatibility with immich 1.88

Hello,

I successfully upgraded to immich version 1.87 using this chart without encountering any issues. However, I've noticed a warning in the main immich interface regarding certain disruptive changes in version 1.88. You can find more details about these changes here: immich-app/immich#5086

Is there a plan to update the chart to align with these changes?

Thank you!

v1.91.0 DB Crash

Postgres Pod Crash:

chmod: changing permissions of '/var/run/postgresql': Operation not permitted

PostgreSQL Database directory appears to contain a database; Skipping initialization

postgres: could not access the server configuration file "/bitnami/postgresql/data/postgresql.conf": No such file or directory

Chart Values:

    env:
      DB_PASSWORD:
        valueFrom:
          secretKeyRef:
            name: postgres-secrets
            key: password
    image:
      tag: v1.91.0
    immich:
      persistence:
        library:
          existingClaim: va-unraid-photos-rw
    postgresql:
      enabled: true
      auth:
        existingSecret: postgres-secrets
    redis:
      enabled: true

The postgres statefulset is appropriately configured with the following image:

        image: docker.io/tensorchord/pgvecto-rs:pg14-v0.1.11

Chart version: immich-0.3.0

How to configure a persistent volume for postgres?

Hi,

I would like the integrated postgres to not use the default storage class for its data. How can I configure the release to do that?

I tried this, but the PVC continues to use the default storage class:

postgresql:
  enabled: true
  persistence:
    data:
      enabled: true
      size: 8Gi
      storageClass: my-custom-storage-class
      accessMode: ReadWriteMany
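A sketch of where the bundled Bitnami postgresql chart actually expects these keys, assuming it follows the current Bitnami layout (primary-node persistence sits under `primary.persistence`, and `accessModes` is a list); check the subchart's own values.yaml to confirm:

```yaml
postgresql:
  enabled: true
  primary:
    persistence:
      enabled: true
      size: 8Gi
      storageClass: my-custom-storage-class
      accessModes:
        - ReadWriteOnce
```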

Deploying helm chart using existingSecret option for postgresql credentials fails to authenticate

I'm trying to deploy the helm chart using the "existingSecret" option to specify postgresql credentials. The problem appears to be that while the postgresql pod reads the secret for the environment variables, the immich-server pod is still using the default "immich" value for DB_PASSWORD. Is there a values option to tell immich to read the secrets files, too?

$ kubectl -n immich get pod immich-server-6cdfd9bd66-4h42z -o yaml
...
    - name: DB_PASSWORD
      value: immich
...

$ kubectl -n immich get pod immich-postgresql-0 -o yaml
...
    - name: POSTGRES_POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          key: postgres-password
          name: postgres-secrets
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          key: password
          name: postgres-secrets

The runtime values were verified to be as expected, based on the output above, by exec'ing into the pods and running `env`. Both pods show error messages to the effect of 'password authentication failed for user "immich"'.

Here is the contents of postgres-secrets.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets
  namespace: immich
stringData: 
  DB_USERNAME: immich
  POSTGRES_USER: immich
  DB_PASSWORD: not-actually-my-password
  POSTGRES_PASSWORD: not-actually-my-password
  postgres-password: not-actually-my-password
  password: not-actually-my-password
  DB_DATABASE_NAME: immich
  POSTGRES_DB: immich

I probably don't need all of these, but I've been trying different variables to get this to work.
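A sketch of the missing half: `existingSecret` only configures the postgresql subchart, so the server's `DB_PASSWORD` has to be wired to the same secret explicitly via `env`. Key names assume the secret shown above; the exact `env` key path depends on the chart version:

```yaml
env:
  DB_PASSWORD:
    valueFrom:
      secretKeyRef:
        name: postgres-secrets
        key: password
```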

immich-machine-learning pod CrashLoopBackOff without logs

Hi, I've deployed Immich on my kubernetes (k3s) cluster, everything is running except the immich-machine-learning pod it keeps crashing with no logs, so I really don't know what's happening and how to fix it. Any suggestions to debug this?


Command:

kubectl logs immich-machine-learning-8885d64cb-l2rvk

Output:

~$

Command:

kubectl describe pod immich-machine-learning-8885d64cb-l2rvk

Output:

Name:             immich-machine-learning-8885d64cb-l2rvk
Namespace:        tools
Priority:         0
Service Account:  default
Node:             k3s-master-02/10.0.100.102
Start Time:       Mon, 20 Feb 2023 20:50:26 +0100
Labels:           app=immich-machine-learning
                  pod-template-hash=8885d64cb
Annotations:      <none>
Status:           Running
IP:               10.42.2.161
IPs:
  IP:           10.42.2.161
Controlled By:  ReplicaSet/immich-machine-learning-8885d64cb
Init Containers:
  postgresql-isready:
    Container ID:  containerd://f3de44384d61f7743b8e7da3398feedfcecd3df0e9a4dc52d7efd1f9d76f4cc5
    Image:         harbor.k8s.lan/dockerhub-proxy/bitnami/postgresql:14.5.0-debian-11-r6
    Image ID:      harbor.k8s.lan/dockerhub-proxy/bitnami/postgresql@sha256:4355265e33e9c2a786aa145884d4b36ffd4c41c516b35d60df0b7495141ec738
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      until pg_isready -U "${POSTGRESQL_USERNAME}" -d "dbname=${DB_DATABASE_NAME}" -h immich-postgresql-hl -p 5432 ; do sleep 2 ; done
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 20 Feb 2023 20:50:28 +0100
      Finished:     Mon, 20 Feb 2023 20:50:34 +0100
    Ready:          True
    Restart Count:  0
    Environment:
      POSTGRESQL_USERNAME:  <set to the key 'DB_USERNAME' in secret 'immich-secret-env'>    Optional: false
      POSTGRESQL_DATABASE:  <set to the key 'DB_DATABASE_NAME' of config map 'immich-env'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d5qj9 (ro)
Containers:
  immich-machine-learning:
    Container ID:  containerd://877582bf1c4c88ab58a94a49a47505d11c10f9e0a2b8cb780c3b12c1a46b06ce
    Image:         harbor.k8s.lan/dockerhub-proxy/altran1502/immich-machine-learning:v1.43.0
    Image ID:      harbor.k8s.lan/dockerhub-proxy/altran1502/immich-machine-learning@sha256:3373962c8d64b264b42751614be88590e279cc6442db0d55615a8daa9cead8f9
    Port:          3003/TCP
    Host Port:     0/TCP
    Command:
      /bin/sh
    Args:
      ./entrypoint.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    132
      Started:      Mon, 20 Feb 2023 21:16:41 +0100
      Finished:     Mon, 20 Feb 2023 21:16:41 +0100
    Ready:          False
    Restart Count:  10
    Liveness:       tcp-socket :3003 delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      tcp-socket :3003 delay=0s timeout=1s period=10s #success=1 #failure=3
    Startup:        tcp-socket :3003 delay=0s timeout=1s period=5s #success=1 #failure=30
    Environment Variables from:
      immich-env  ConfigMap  Optional: false
    Environment:
      DB_PASSWORD:  <set to the key 'DB_PASSWORD' in secret 'immich-secret-env'>  Optional: false
      DB_USERNAME:  <set to the key 'DB_USERNAME' in secret 'immich-secret-env'>  Optional: false
      MAPBOX_KEY:   <set to the key 'MAPBOX_KEY' in secret 'immich-secret-env'>   Optional: false
    Mounts:
      /usr/src/app/.reverse-geocoding-dump from geocoding-dump (rw)
      /usr/src/app/upload from library (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d5qj9 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  geocoding-dump:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  library:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-nfs-immich-library
    ReadOnly:   false
  kube-api-access-d5qj9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  29m                    default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  29m                    default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  Normal   Scheduled         29m                    default-scheduler  Successfully assigned tools/immich-machine-learning-8885d64cb-l2rvk to k3s-master-02
  Normal   Pulled            29m                    kubelet            Container image "harbor.k8s.lan/dockerhub-proxy/bitnami/postgresql:14.5.0-debian-11-r6" already present on machine
  Normal   Created           29m                    kubelet            Created container postgresql-isready
  Normal   Started           29m                    kubelet            Started container postgresql-isready
  Warning  Unhealthy         28m                    kubelet            Startup probe failed: dial tcp 10.42.2.161:3003: connect: connection refused
  Normal   Pulled            28m (x4 over 28m)      kubelet            Container image "harbor.k8s.lan/dockerhub-proxy/altran1502/immich-machine-learning:v1.43.0" already present on machine
  Normal   Created           28m (x4 over 28m)      kubelet            Created container immich-machine-learning
  Normal   Started           28m (x4 over 28m)      kubelet            Started container immich-machine-learning
  Warning  BackOff           3m59s (x138 over 28m)  kubelet            Back-off restarting failed container

Deploy without ML component

Hi,

I am currently looking into replacing nextcloud with immich as the former won't fix the broken auto-upload for iOS devices. From what I have read so far, the ML component in this chart is pretty high on resource consumption, which is a problem because I run on a single node cluster.

Can immich be deployed and function without the ML component?
If so, is setting "enabled: false" in the values.yaml file sufficient to achieve this goal?
What features etc. will I be missing out on when ML is not available to immich?

Thanks, this looks like a great app!
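For reference, a minimal sketch of disabling the component — the `machine-learning.enabled` key appears in values shown elsewhere in these issues; features such as smart search and facial recognition would be unavailable without it:

```yaml
machine-learning:
  enabled: false
```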

Behaviour of `*-postgresql` secret is undocumented and confusing

I tried to deploy the chart, and noticed that a secret appeared called immich-postgresql with two fields called password and postgres-password.

With that in mind, I succeeded in doing a deployment with mostly defaults with the following additional configuration:

common_env:
  DB_PASSWORD: 
    valueFrom:
      secretKeyRef:
        name: immich-postgresql
        key: password

However, today I changed the chart deployment (by updating the tag, but I don't think that's relevant). The immich-server was unable to connect to the database. I suspect that the secret manifest had changed and broke the connection. My "repair" was to change the DB_PASSWORD environment variable and hardcode the original password. However, that was confusing.

I am a n00b chart user, so maybe I missed something obvious. I was somewhat expecting the secret to be generated on first deployment and be immutable from that point onwards. I don't know if that makes sense, or if that behavior would break more stuff. Unfortunately, I don't really know how to do that (otherwise I would try the PR myself).
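One hedged way to avoid the rotation: pin the credentials in values so an upgrade cannot regenerate them. Bitnami subcharts may otherwise generate a fresh random password on upgrade while the database on disk keeps the old one; key names assume the Bitnami `auth` block:

```yaml
postgresql:
  auth:
    password: "<fixed-password>"
    # or manage the secret yourself:
    # existingSecret: my-postgres-secret
```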

Add other base images

Is there a plan to provide images with other bases (like slim-buster)?

I want to use existing redis and postgresql instances, but there appears to be a DNS resolution problem (alpine):

/usr/src/app # ping postgres-postgresql.app.svc.cluster.local
PING postgres-postgresql.app.svc.cluster.local (3.64.163.50): 56 data bytes
^C
--- postgres-postgresql.app.svc.cluster.local ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
/usr/src/app # ping postgres-postgresql.app.svc.cluster.local
PING postgres-postgresql.app.svc.cluster.local (3.64.163.50): 56 data bytes
^C
--- postgres-postgresql.app.svc.cluster.local ping statistics ---
/usr/src/app # ping redis.app.cluster.svc.local
PING redis.app.cluster.svc.local (3.64.163.50): 56 data bytes
^C
--- redis.app.cluster.svc.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
/usr/src/app # ping redis.app.cluster.svc.local
PING redis.app.cluster.svc.local (3.64.163.50): 56 data bytes
^C
--- redis.app.cluster.svc.local ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
/usr/src/app # ping redis.app.cluster.svc.local
PING redis.app.cluster.svc.local (3.64.163.50): 56 data bytes
^C
--- redis.app.cluster.svc.local ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
/usr/src/app # ping redis.app.cluster.svc.local
PING redis.app.cluster.svc.local (3.64.163.50): 56 data bytes
^C
--- redis.app.cluster.svc.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss

Ingress now points to machine learning service

I've just upgraded to Immich v1.88.1 using Helm chart version v0.2.0.

It looks like somehow in the removal of the web and proxy deployments, the default Ingress has ended up pointing to the immich-machine-learning service. I assume it should be pointing at the immich-server service? In fact it looks like the original ingress has been removed and replaced with a new one with a different name.

[jonathan@poseidon immich]$ kubectl get ingress
NAME                      CLASS    HOSTS               ADDRESS     PORTS     AGE
immich-machine-learning   public   immich.gazeley.uk   127.0.0.1   80, 443   11m
[jonathan@poseidon immich]$ kubectl describe ingress immich-machine-learning
Name:             immich-machine-learning
Labels:           app.kubernetes.io/instance=immich
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=machine-learning
                  app.kubernetes.io/version=v1.88.0
                  helm.sh/chart=immich-0.2.0
Namespace:        immich
Address:          127.0.0.1
Ingress Class:    public
Default backend:  <default>
TLS:
  ingress-tls terminates immich.mydomain.uk
Rules:
  Host               Path  Backends
  ----               ----  --------
  immich.mydomain.uk  
                     /   immich-machine-learning:3003 (10.1.106.39:3003)
Annotations:         cert-manager.io/cluster-issuer: letsencrypt-prod
                     meta.helm.sh/release-name: immich
                     meta.helm.sh/release-namespace: immich
                     nginx.ingress.kubernetes.io/proxy-body-size: 0
Events:              <none>

Preview diffs in PRs

When a PR to the chart is made, a github action should run helm template with the default values (and a placeholder for the library claim), diff the result against the main branch, and post this diff as a comment on the PR.

For bonus points, there should also be an option to request a diff with custom values by leaving a comment on the PR, such as

!diff

immich:
  metrics:
    enabled: true
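A sketch of what such a workflow could look like. Action versions, paths, and the placeholder claim name are assumptions, not an existing file in this repo; posting the diff as a PR comment is left out:

```yaml
name: chart-diff
on: pull_request
jobs:
  diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/setup-helm@v4
      - name: Render PR branch
        run: |
          helm dependency build charts/immich
          helm template immich charts/immich \
            --set immich.persistence.library.existingClaim=placeholder \
            > /tmp/pr.yaml
      - name: Render main
        run: |
          git fetch origin main
          git worktree add /tmp/main origin/main
          helm dependency build /tmp/main/charts/immich
          helm template immich /tmp/main/charts/immich \
            --set immich.persistence.library.existingClaim=placeholder \
            > /tmp/main.yaml
      - name: Show diff
        run: diff -u /tmp/main.yaml /tmp/pr.yaml || true
```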

Image to use with Cloudnative PG

I used to host Immich with https://cloudnative-pg.io. Perhaps because the vectors extension is now required, I am no longer able to run Immich with this helm chart.

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-immich-cluster
  namespace: cnpg-system
spec:
  imageName: ghcr.io/bo0tzz/cnpgvecto.rs:15
  postgresql:
    shared_preload_libraries:
      - "vectors.so"
  affinity:
    nodeSelector:
      kubernetes.io/hostname: i11806-kube-node-2-03
  instances: 1
  storage:
    pvcTemplate:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: manual-immich
  monitoring:
    enablePodMonitor: true
  bootstrap:
    initdb:
      database: postgresql-immich-pgsql
      owner: postgresql-immich-pgsql
      secret:
        name: post-init-immich-pgsql-secret
      postInitTemplateSQL:
        - CREATE EXTENSION IF NOT EXISTS cube;
        - CREATE EXTENSION IF NOT EXISTS earthdistance;

immich logs

[Nest] 7  - 01/15/2024, 11:13:03 PM   ERROR [TypeOrmModule] Unable to connect to the database. Retrying (3)...
QueryFailedError: permission denied to create extension "vectors"
    at PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:211:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async assertVectors (/usr/src/app/dist/infra/database.config.js:53:5)
    at async UsePgVectors1700713871511.up (/usr/src/app/dist/infra/migrations/1700713871511-UsePgVectors.js:11:9)
    at async MigrationExecutor.executePendingMigrations (/usr/src/app/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
    at async DataSource.runMigrations (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:260:35)
    at async DataSource.initialize (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:148:17)
[Nest] 7  - 01/15/2024, 11:13:03 PM   ERROR [ExceptionHandler] permission denied to create extension "vectors"
QueryFailedError: permission denied to create extension "vectors"
    at PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:211:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async assertVectors (/usr/src/app/dist/infra/database.config.js:53:5)
    at async UsePgVectors1700713871511.up (/usr/src/app/dist/infra/migrations/1700713871511-UsePgVectors.js:11:9)
    at async MigrationExecutor.executePendingMigrations (/usr/src/app/node_modules/typeorm/migration/MigrationExecutor.js:225:17)
    at async DataSource.runMigrations (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:260:35)
    at async DataSource.initialize (/usr/src/app/node_modules/typeorm/data-source/DataSource.js:148:17)

Has anyone tried this Chart with Cloudnative Pg?

Arm(64) is not supported

Although Immich supports the arm64 architecture, the redis and postgresql images by Bitnami do not.

Any reason why the official images for these packages are not used? These are multi-arch.

I can of course create a PR to use them.

Add support for Typesense deployment

Immich release v1.51.0 added an extra Typesense container to the stack that is currently required. This is another infrastructure component like postgres and redis.

While there is a typesense helm chart at https://github.com/Spittal/typesense-helm, it hasn't been updated for a while. Typesense doesn't need anything specific and so it would also be totally fine to use the common-library to deploy it.

Request to support service type LoadBalancer

Hi,

In case I'm not mistaken, the service type is hard-coded to ClusterIP in the helm templates.
Can you expose the service type as a value in the templates, so it can be set from values?

Thanks
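If the templates were parameterized, a natural values shape would follow the common-library `service` convention. This is only a sketch — the exact key path depends on how the chart nests its components:

```yaml
server:
  service:
    main:
      type: LoadBalancer
```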

Proper release process

Currently the chart is built and released whenever a new chartVersion is pushed to main. Depending on the options out there, it would be nice to make this a bit more deliberate to increase the control over when new versions are published.

Currently releases do not include any changelogs. This should be fixed so that both the github releases and the artifacthub page properly display a changelog, and ideally so that these changelogs also show up in PRs generated by tools like Renovate.

Install fails with default options

Hey folks,

I can't seem to get helm to install this directly:

csm10495@csm10495-desk:~/Desktop/immich $ helm install --create-namespace --namespace immich immich immich/immich
Error: INSTALLATION FAILED: execution error at (immich/templates/checks.yaml:1:64): .Values.immich.persistence.library.existingClaim is required.

This matches the steps in the readme. Am I missing something?
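The error message suggests the chart is working as designed: it refuses to install unless an existing library claim is supplied. A minimal sketch — the claim name is a placeholder for a PVC you create beforehand:

```yaml
# values.yaml
immich:
  persistence:
    library:
      existingClaim: my-library-pvc
```

Then pass it with `helm install --create-namespace --namespace immich immich immich/immich -f values.yaml` (or via `--set immich.persistence.library.existingClaim=my-library-pvc`).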

Machine Learning CrashLoopBackoff

I always get a crashloopbackoff with the machinelearning container:

INFO:     Started server process [7]
INFO:     Waiting for application startup.
Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.

Persistence Objects are not auto-created

While playing around with the chart today, I noticed that although the persistence options are applied to the different containers (e.g. the mounts are created), no PVC is actually created when you define a new one in the values.

immich-machine-learning restarting

Hi,
after the update to 1.72.1 the machine-learning container does not start:

Back-off restarting failed container immich-machine-learning in pod immich-machine-learning-5777ffff49-kdqcr_immich(d635ea9c-2008-4bb6-b434-2be763abdcda)

with version 1.71.0 everything was working fine

here the logs in the container:

Could not find image processor class in the image processor config or the model config. Loading based on pattern matching with the model's feature extractor configuration.
ERROR:    Traceback (most recent call last):
  File "/opt/venv/lib/python3.11/site-packages/starlette/routing.py", line 677, in lifespan
    async with self.lifespan_context(app) as maybe_state:
  File "/opt/venv/lib/python3.11/site-packages/starlette/routing.py", line 566, in __aenter__
    await self._router.startup()
  File "/opt/venv/lib/python3.11/site-packages/starlette/routing.py", line 654, in startup
    await handler()
  File "/usr/src/app/main.py", line 46, in startup_event
    await load_models()
  File "/usr/src/app/main.py", line 40, in load_models
    await app.state.model_cache.get(model_name, model_type, eager=settings.eager_startup)
  File "/usr/src/app/models/cache.py", line 53, in get
    model = InferenceModel.from_model_type(model_type, model_name, **model_kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/models/base.py", line 78, in from_model_type
    return subclasses[model_type](model_name, **model_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/models/facial_recognition.py", line 27, in __init__
    super().__init__(model_name, cache_dir, **model_kwargs)
  File "/usr/src/app/models/base.py", line 25, in __init__
    loader(**model_kwargs)
  File "/usr/src/app/models/base.py", line 35, in load
    self.download(**model_kwargs)
  File "/usr/src/app/models/base.py", line 32, in download
    self._download(**model_kwargs)
  File "/usr/src/app/models/facial_recognition.py", line 32, in _download
    with zipfile.ZipFile(zip_file, "r") as zip:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/zipfile.py", line 1302, in __init__
    self._RealGetContents()
  File "/usr/local/lib/python3.11/zipfile.py", line 1369, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

ERROR:    Application startup failed. Exiting.

and the values:

machine-learning:
  enabled: true
  probes:
    liveness:
      spec:
        initialDelaySeconds: 240
  image:
    repository: ghcr.io/immich-app/immich-machine-learning
    pullPolicy: IfNotPresent
  env:
    TRANSFORMERS_CACHE: /cache
  persistence:
    cache:
      enabled: true
      size: 10Gi
      # Optional: Set this to pvc to avoid downloading the ML models every start.
      type: emptyDir
      accessMode: ReadWriteOnce
      storageClass: local-path-immich
     

the k8s cluster is a single node microk8s (and I use local-path for the storage).

Thank you

Microservices container cannot resolve geocoder data DNS

When downloading the reverse geocoding data from download.geonames.org, the microservices container runs into a DNS error. I tried resolving this by setting ndots, but that then broke in-cluster connections to the database and such.
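For reference, the pod-level knob involved is `dnsConfig`; below is a sketch of the raw pod spec field. Whether and where the chart exposes it is an assumption to verify, and as noted above, lowering ndots can break short in-cluster names, so fully qualified DB/Redis hostnames may be the safer fix:

```yaml
dnsConfig:
  options:
    - name: ndots
      value: "2"
```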

Deduplicate shared values

The server and microservices templates have quite a few similarities. We should be able to define those separately and then merge them in the respective templates.

Config file support

Now that Immich supports YAML-formatted config files, support for those could be nicely integrated into the helm values

machine-learning needs less stress from the liveness probes in order to start properly

Maybe I am doing something wrong, but I was looking at the errors on a clean install (latest version 0.1.1 with image tag v1.60.0). It seems that the liveness probe is triggering a restart before things are able to initialize themselves.

The logs for the machine-learning pods shows me that it is downloading the pytorch-model, and the pod is restarted before being able to finish this download.

I haven't seen anything in values.yaml related to the timeout or the livenessProbe. I suppose that after the first start, because I activated persistence, startup will be faster. But I cannot reach that point because I am doing a clean install.
