
Comments (26)

zelogik avatar zelogik commented on May 29, 2024 1

I have replaced Longhorn storage with hostPath storage (more or less the same as with Docker).

I made sure that the directory contains no config.toml or users.db (users-dev.db in my case).

Then I applied the manifest:

volumes:
        # - name: lldap-data
        #   persistentVolumeClaim:
        #     claimName: lldap-pvc
        - name: lldap-data
          hostPath:
            path: /tmp/data/
            type: Directory

A simple ls /tmp/data on the k8s node where lldap is running:
The users-dev.db is created... but same error. I'm pulling my hair out...
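
For reference, the same checks can also be run from inside the pod; a minimal sketch, assuming the Deployment is named lldap in the private namespace and the container mounts the volume at /data, as in the other manifests in this thread:

# List the mounted data directory from inside the running container
kubectl -n private exec deploy/lldap -- ls -la /data
# Tail the lldap startup logs
kubectl -n private logs deploy/lldap --tail=100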

me("Administrator"), first_name: None, last_name: None, avatar: None, attributes: [] } | user_id: "admium"
2024-02-23T11:11:59.296988424+00:00  ERROR       ┕━ 🚨 [error]:  | error: Database error: `Execution Error: error returned from database: (code: 2067) UNIQUE constraint failed: users.email`
Error: while creating the admin user

Caused by:
    Error setting up admin login/account: Error creating admin user: Database error: `Execution Error: error returned from database: (code: 2067) UNIQUE constraint failed: users.email`: Execution Error: error returned from database: (code: 2067) UNIQUE constraint failed: users.email: error returned from database: (code: 2067) UNIQUE constraint failed: users.email

from lldap.

zelogik avatar zelogik commented on May 29, 2024 1

Got news!
Tested:

  • lldap/lldap:2023-11-05-alpine : same error
  • lldap/lldap:v0.4.3-alpine : [info]: Starting the API/web server on port 17170 "Working"
  • lldap/lldap:v0.5.0-alpine : Not working ... same error 2067

So it seems like there is a "feature"/bug introduced between 0.4.3 and 0.5.0.

Edit: upgraded from 0.4.3 to 2024-02-08-alpine and got:

Note: If you just migrated from <=v0.4 to >=v0.5, the previous version did not support key_seed, so it was falling back onto a key file. Remove the seed from the configuration.
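
In the environment-based setup used later in this thread, "removing the seed" would roughly amount to dropping the LLDAP_KEY_SEED entry from the Deployment; a sketch (the lldap-credentials secret name comes from the manifests below):

          env:
            # Removed so that, after migrating from <=v0.4, the server keeps using
            # the key file it was already falling back to, instead of deriving a key from the seed:
            # - name: LLDAP_KEY_SEED
            #   valueFrom:
            #     secretKeyRef:
            #       name: lldap-credentials
            #       key: LLDAP_KEY_SEED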

from lldap.

nitnelave avatar nitnelave commented on May 29, 2024

It looks like your db already contains a user with no email address. If you don't have anything important in there, can you delete the DB? And make sure to grab the logs when you restart LLDAP, the first run logs might be able to tell us how we got there.
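
With the Kubernetes setup described in this thread, that could look roughly like the following sketch (assuming the Deployment is named lldap in the private namespace and the default /data/users.db path):

# Remove the database file from the mounted volume and restart the pod
kubectl -n private exec deploy/lldap -- rm -f /data/users.db
kubectl -n private rollout restart deploy/lldap

# Capture the first-run logs of the fresh pod
kubectl -n private logs -f deploy/lldap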

from lldap.

zelogik avatar zelogik commented on May 29, 2024

I have removed the volume, deleted the manifest and reapplied it (I even changed the name of my PVC).
Same problem.

WARNING: A key_seed was given, we will ignore the server_key and generate one from the seed!
2024-02-23T09:33:13.432173919+00:00  INFO     set_up_server [ 3.61ms | 25.21% / 100.00% ]
2024-02-23T09:33:13.432189868+00:00  INFO     ┝━ i [info]: Starting LLDAP version 0.5.1-alpha
2024-02-23T09:33:13.434667152+00:00  DEBUG    ┝━ get_schema_version [ 129µs | 3.57% ]
2024-02-23T09:33:13.437503615+00:00  DEBUG    │  ┕━ 🐛 [debug]:  | return: Some(SchemaVersion(9))
2024-02-23T09:33:13.438719793+00:00  DEBUG    ┝━ list_groups [ 478µs | 13.26% ] filters: Some(DisplayName(GroupName("lldap_admin")))
2024-02-23T09:33:13.441709985+00:00  DEBUG    │  ┕━ 🐛 [debug]:  | return: [Group { id: GroupId(1), display_name: GroupName("lldap_admin"), creation_date: 2024-02-23T09:33:10.090674621, uuid: Uuid("42eb52ba-9235-342a-919f-396fd35c48ba"), users: [], attributes: [] }]
2024-02-23T09:33:13.441726709+00:00  DEBUG    ┝━ list_groups [ 698µs | 19.34% ] filters: Some(DisplayName(GroupName("lldap_password_manager")))
2024-02-23T09:33:13.442830278+00:00  DEBUG    │  ┕━ 🐛 [debug]:  | return: [Group { id: GroupId(2), display_name: GroupName("lldap_password_manager"), creation_date: 2024-02-23T09:33:10.102386335, uuid: Uuid("61c78dee-15f0-323f-bc6e-0c85ef3bfceb"), users: [], attributes: [] }]
2024-02-23T09:33:13.442843433+00:00  DEBUG    ┝━ list_groups [ 581µs | 16.10% ] filters: Some(DisplayName(GroupName("lldap_strict_readonly")))
2024-02-23T09:33:13.443837732+00:00  DEBUG    │  ┕━ 🐛 [debug]:  | return: [Group { id: GroupId(3), display_name: GroupName("lldap_strict_readonly"), creation_date: 2024-02-23T09:33:10.113176064, uuid: Uuid("8c32feee-5efa-3111-b297-3ef4de7075e2"), users: [], attributes: [] }]
2024-02-23T09:33:13.443858025+00:00  DEBUG    ┝━ list_users [ 404µs | 11.20% ] filters: Some(MemberOf(GroupName("lldap_admin"))) | _get_groups: false
2024-02-23T09:33:13.447519759+00:00  DEBUG    │  ┕━ 🐛 [debug]:  | return: []
2024-02-23T09:33:13.447525576+00:00  WARN     ┝━ 🚧 [warn]: Could not find an admin user, trying to create the user "admin" with the config-provided password
2024-02-23T09:33:13.447537419+00:00  DEBUG    ┕━ create_user [ 408µs | 11.32% ] request: CreateUserRequest { user_id: UserId(CaseInsensitiveString("XXXXXX")), email: Email("[email protected]"), display_name: Some("Administrator"), first_name: None, last_name: None, avatar: None, attributes: [] } | user_id: "XXXXXX"
2024-02-23T09:33:13.530890639+00:00  ERROR       ┕━ 🚨 [error]:  | error: Database error: `Execution Error: error returned from database: (code: 2067) UNIQUE constraint failed: users.email`
Error: while creating the admin user

Caused by:
    Error setting up admin login/account: Error creating admin user: Database error: `Execution Error: error returned from database: (code: 2067) UNIQUE constraint failed: users.email`: Execution Error: error returned from database: (code: 2067) UNIQUE constraint failed: users.email: error returned from database: (code: 2067) UNIQUE constraint failed: users.email

from lldap.

nitnelave avatar nitnelave commented on May 29, 2024

Either your database still exists or it's not the very first run of LLDAP (do you have something that auto-restarts it?)

You can see that because it's getting the current db schema version and getting version 9 (instead of no version for an empty db)
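
One way to check whether the volume really is empty before LLDAP starts is to mount the same claim into a throwaway pod; a sketch, using the lldap-data-pvc claim name from the manifest in the next comment (with a ReadWriteOnce claim the lldap Deployment may need to be scaled down first so the volume can attach):

apiVersion: v1
kind: Pod
metadata:
  name: lldap-data-debug
  namespace: private
spec:
  restartPolicy: Never
  containers:
    - name: inspect
      image: busybox
      # List the volume contents, then stay up so it can be exec'd into
      command: ["sh", "-c", "ls -la /data && sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: lldap-data
  volumes:
    - name: lldap-data
      persistentVolumeClaim:
        claimName: lldap-data-pvc

kubectl -n private logs lldap-data-debug then shows whether a users.db is already there.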

from lldap.

nitnelave avatar nitnelave commented on May 29, 2024

Where is the file "/data/users.db" from, and can you delete it?

from lldap.

zelogik avatar zelogik commented on May 29, 2024
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: lldap
  name: lldap-data-pvc
  namespace: private
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi


---
... Deployment ....
          volumeMounts:
            - mountPath: /data
              name: lldap-data
      restartPolicy: Always
      volumes:
        - name: lldap-data
          persistentVolumeClaim:
            claimName: lldap-data-pvc

And I have deleted the PVC ...

I understand what you mean... but I don't know where /data/users.db comes from, as I create a fresh PV ...

ingress.networking.k8s.io "grafana-private-ingress" deleted
secret "lldap-credentials" deleted
persistentvolumeclaim "lldap-data-pvc" deleted
deployment.apps "lldap" deleted
service "lldap-service" deleted

Last edit: k get persistentvolume -A returns no lldap-data-pvc.

from lldap.

zelogik avatar zelogik commented on May 29, 2024

For testing I use lldap/lldap:latest, and I have checked the Dockerfile and entry-points.sh; normally it just checks permissions, if I'm right.

from lldap.

nitnelave avatar nitnelave commented on May 29, 2024

Sorry, I don't know enough about Kubernetes to help you... You can try changing the database path (change users.db to something else)

from lldap.

zelogik avatar zelogik commented on May 29, 2024

I have set LLDAP_DATABASE_URL to "database_url: sqlite:///data/users-dev.db?mode=rwc,".
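
For reference, the Deployment later in this thread passes the same setting as a plain URL, without the database_url: config-file key or a trailing comma:

- name: LLDAP_DATABASE_URL
  value: sqlite:///data/users-dev.db?mode=rwc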

And I have exactly the same problem...

But thanks for your help, and for your software, which "looks" really good and light (compared to FreeIPA/OpenLDAP...).

from lldap.

martadinata666 avatar martadinata666 commented on May 29, 2024

I think this is some incompatibility between the storage type and SQLite. Something like NFS can't be used with an SQLite database. Taking https://github.com/Evantage-WS/lldap-kubernetes/blob/main/lldap-persistentvolumeclaim.yaml as a reference, it uses local-path instead of Longhorn, which is networked storage, I guess?
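
For comparison, a node-local PVC roughly along the lines of that reference would look like this sketch (local-path is the provisioner shipped by default with k3s; the class name may differ on other clusters):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lldap-pvc
spec:
  storageClassName: local-path
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi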

from lldap.

zelogik avatar zelogik commented on May 29, 2024

Yes, I was thinking about that, but Longhorn is not NFS, and I haven't seen "bugs" with SQLite on Longhorn storage.
I need to recheck with hostPath to verify whether it's Longhorn/SQLite or lldap...

from lldap.

nitnelave avatar nitnelave commented on May 29, 2024

Sorry to push back again on this issue, but as long as we don't understand what's going on with your setup, we can't debug the issue in LLDAP. In particular, as I mentioned earlier, these logs cannot be the logs for a first start of LLDAP with an empty database. We should at the very least see DB migration messages, and user/group creations for the built-in users (admin, admin groups and so on).

from lldap.

zelogik avatar zelogik commented on May 29, 2024

Yes, I understand the problem, and I'm trying to get the same output as:

docker compose up

lldap-1  | 2024-02-26T10:01:45.686631809+00:00  INFO     ┝━ i [info]: Starting LLDAP version 0.5.1-alpha
lldap-1  | 2024-02-26T10:01:45.710152111+00:00  INFO     ┝━ i [info]: Upgrading DB schema from version 1
lldap-1  | 2024-02-26T10:01:45.710154055+00:00  INFO     ┝━ i [info]: Upgrading DB schema to version 2
lldap-1  | 2024-02-26T10:01:45.716325170+00:00  INFO     ┝━ i [info]: Upgrading DB schema to version 3
lldap-1  | 2024-02-26T10:01:45.724658684+00:00  INFO     ┝━ i [info]: Upgrading DB schema to version 4
lldap-1  | 2024-02-26T10:01:45.729460821+00:00  INFO     ┝━ i [info]: Upgrading DB schema to version 5
lldap-1  | 2024-02-26T10:01:45.736737778+00:00  INFO     ┝━ i [info]: Upgrading DB schema to version 6
lldap-1  | 2024-02-26T10:01:45.741667446+00:00  INFO     ┝━ i [info]: Upgrading DB schema to version 7
lldap-1  | 2024-02-26T10:01:45.746038259+00:00  INFO     ┝━ i [info]: Upgrading DB schema to version 8
lldap-1  | 2024-02-26T10:01:45.749844935+00:00  INFO     ┝━ i [info]: Upgrading DB schema to version 9
lldap-1  | 2024-02-26T10:01:45.770137424+00:00  WARN     ┝━ 🚧 [warn]: Could not find lldap_admin group, trying to create it
lldap-1  | 2024-02-26T10:01:45.775474173+00:00  WARN     ┝━ 🚧 [warn]: Could not find lldap_password_manager group, trying to create it
lldap-1  | 2024-02-26T10:01:45.779813065+00:00  WARN     ┝━ 🚧 [warn]: Could not find lldap_strict_readonly group, trying to create it

But you can note that the k8s deployment doesn't work from lldap v0.5 onwards.

from lldap.

nitnelave avatar nitnelave commented on May 29, 2024

Alright, until proven otherwise, I'll assume the fault is in the k8s setup rather than in LLDAP itself, so I'm downgrading this from bug to integration + documentation.

from lldap.

onedr0p avatar onedr0p commented on May 29, 2024

@zelogik maybe you need to set up a startup probe, since Kubernetes might be killing the pod before the DB migrations have completed? If that's the case, the pod would restart, which might lead to the issue you are seeing since the migration hasn't completely finished.
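
For reference, a startup probe on the web port (17170, as used in the manifests in this thread) might look roughly like the sketch below; the thresholds are arbitrary and give the pod about a minute before liveness/readiness checks apply:

          startupProbe:
            tcpSocket:
              port: 17170        # lldap only starts listening once the DB is set up
            periodSeconds: 5
            failureThreshold: 12 # roughly 60s of grace for first-run migrations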

from lldap.

zelogik avatar zelogik commented on May 29, 2024

@onedr0p, thanks for the suggestion: I have already done that, and even tested with an initContainer, with the same problem.
One test I haven't done, since the lldap Docker image had a build error 2 weeks ago, was to remove the line
HEALTHCHECK CMD ["/app/lldap", "healthcheck", "--config-file", "/data/lldap_config.toml"] from the Dockerfile.

I don't know whether that is the problem with k8s (i.e. a race condition with the CMD [run....]).
Regards

from lldap.

nitnelave avatar nitnelave commented on May 29, 2024

The healthcheck shouldn't affect anything: it's essentially sending a ping on the HTTP/LDAP(S) interfaces to see if they're up. It wouldn't set up the DB, for instance. The interfaces only start listening after everything else is set up, including the DB.

from lldap.

zelogik avatar zelogik commented on May 29, 2024

...ok :/
I'm looking everywhere to see if I can find a race condition.
And I don't know how the HEALTHCHECK cmd is handled in k8s.

from lldap.

zelogik avatar zelogik commented on May 29, 2024

my bad....

2024-03-18T10:18:45.765034085+00:00  DEBUG    ┝━ list_groups [ 166µs | 0.30% ] filters: Some(DisplayName(GroupName("lldap_admin")))
2024-03-18T10:18:45.765367794+00:00  DEBUG    │  ┕━ 🐛 [debug]:  | return: [Group { id: GroupId(1), display_name: GroupName("lldap_admin"), creation_date: 2024-03-18T10:18:45.656059292, uuid: Uuid("2febe2ab-f390-34b6-a4dc-60e8ada49718"), users: [], attributes: [] }]
2024-03-18T10:18:45.765372387+00:00  DEBUG    ┝━ add_user_to_group [ 75.9µs | 0.14% ] user_id: "admium"
2024-03-18T10:18:45.777226702+00:00  INFO     ┝━ i [info]: Starting the LDAP server on port 3890
2024-03-18T10:18:45.777271814+00:00  DEBUG    ┝━ get_jwt_blacklist [ 27.8µs | 0.05% ]
2024-03-18T10:18:45.777366770+00:00  INFO     ┕━ i [info]: Starting the API/web server on port 17170
2024-03-18T10:18:45.777488572+00:00  INFO     i [info]: starting 1 workers
2024-03-18T10:18:45.777494767+00:00  INFO     i [info]: Actix runtime found; starting in Actix runtime
2024-03-18T10:18:45.778031628+00:00  INFO     i [info]: DB Cleanup Cron started

Seems like it's working now.
I have changed two things: the latest version 2024-03-07-alpine vs the "old" 2024-02-08-alpine,
but I have also increased the CPU resource limit from 100m to 4000m and the memory from 50M to 500M...

I'll check whether the resource limit was the problem and close the issue.
Sorry for that...

from lldap.

zelogik avatar zelogik commented on May 29, 2024

So the problem was the memory limit: with 50M the app "crashes" at init without saying anything "useful". With 100M of RAM it seems to work well.

and when running:

NAME                     CPU(cores)   MEMORY(bytes)   
lldap-78ccb659c5-mg9bc   1m           3Mi

@nitnelave
I think I can close the issue?

from lldap.

nitnelave avatar nitnelave commented on May 29, 2024

Yes, that sounds like the culprit. We need more RAM (by design) when setting/checking a password ("hashing" the password is intentionally resource intensive).

We can close this.

Maybe you want to add to the LLDAP K8s docs a note about the minimum resources required?

from lldap.

zelogik avatar zelogik commented on May 29, 2024

@nitnelave: It's not really docs, but rather a more recent, working and sane k8s manifest than the good base from Evantage-WS/lldap-kubernetes.

I don't know if we want to create new k8s-specific documentation or update Evantage-WS/lldap-kubernetes.

A simple working k8s manifest as an example (requires ingress-nginx + Longhorn):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: lldap-private-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/proxy-body-size: 10m

spec:
  ingressClassName: private
  rules:
  - host: private.example.com
    http:
      paths:
        - pathType: ImplementationSpecific
          path: /ldap(/|$)(.*)
          backend:
            service:
              name: lldap-service
              port:
                name: http

---
# USE AS AN EXAMPLE ONLY, DO NOT STORE SECRETS IN PLAIN TEXT!!
# prefer kustomize | sops | .env | vault | etc. for production use

apiVersion: v1
kind: Secret
metadata:
  name: lldap-credentials
type: Opaque
stringData:
  LLDAP_UID: "1000"
  LLDAP_GID: "1000"
  LLDAP_TZ: Europe/Paris
  LLDAP_JWT_SECRET: # see the lldap documentation: "generate_secrets.sh"
  LLDAP_LDAP_BASE_DN: dc=example,dc=com
  LLDAP_LDAP_USER_PASS: ImaBadPassword
  LLDAP_KEY_SEED: # see the lldap documentation: "generate_secrets.sh"
  LLDAP_LDAP_USER_DN: admin
  LLDAP_LDAP_USER_EMAIL: [email protected]
  LLDAP_DATABASE_URL: sqlite:///data/users.db?mode=rwc
  LLDAP_HTTP_URL: "https://example.com/ldap/"

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: lldap
  name: lldap-conf-pvc
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Mi

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    lldap: https://github.com/nitnelave/lldap
  labels:
    app: lldap
  name: lldap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lldap
  strategy:
    # type: Recreate
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      annotations:
        lldap: https://github.com/nitnelave/lldap
        # k8s: https://github.com/Evantage-WS/lldap-kubernetes
      labels:
        app: lldap
    spec:
      containers:
        - name: lldap
          env:
            - name: UID
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_UID
            - name: GID
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_GID
            - name: TZ
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_TZ
            - name: LLDAP_JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_JWT_SECRET
            - name: LLDAP_HTTP_URL
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_HTTP_URL
            - name: LLDAP_LDAP_BASE_DN
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_LDAP_BASE_DN
            - name: LLDAP_KEY_SEED
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_KEY_SEED
            - name: LLDAP_LDAP_USER_DN
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_LDAP_USER_DN
            - name: LLDAP_LDAP_USER_PASS
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_LDAP_USER_PASS
            - name: LLDAP_LDAP_USER_EMAIL
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_LDAP_USER_EMAIL
            - name: LLDAP_DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: lldap-credentials
                  key: LLDAP_DATABASE_URL
            - name: LLDAP_VERBOSE
              value: "true"

          image: lldap/lldap:2024-03-07-alpine
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 400m
              memory: 100Mi # can't be lower than 50Mi; the lldap init phase takes memory
            requests:
              cpu: 100m
              memory: 10Mi
          ports:
            - containerPort: 3890
            - containerPort: 6360
            - containerPort: 17170
          volumeMounts:
            - mountPath: /data
              name: lldap-conf
      restartPolicy: Always
      terminationGracePeriodSeconds: 120
      volumes:
        - name: lldap-conf
          persistentVolumeClaim:
            claimName: lldap-conf-pvc


---
apiVersion: v1
kind: Service
metadata:
  annotations:
    lldap: https://github.com/nitnelave/lldap
    # k8s: https://github.com/Evantage-WS/lldap-kubernetes
  labels:
    app: lldap-service
  name: lldap-service
  namespace: private
spec:
  ports:
    - name: ldap
      port: 389
      targetPort: 3890
    - name: ldaps
      port: 636
      targetPort: 6360
    - name: http
      port: 1717
      targetPort: 17170
  selector:
    app: lldap
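
Assuming the manifest above is saved as lldap.yaml (the filename is arbitrary), applying it and checking the result would look like:

kubectl apply -n private -f lldap.yaml
kubectl -n private get pods -l app=lldap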
    

from lldap.

zelogik avatar zelogik commented on May 29, 2024

Yes, that sounds like the culprit. We need more RAM (by design) when setting/checking a password ("hashing" the password is intentionally resource intensive).

The problem is not really the "design" but the logging: even verbose mode didn't say anything except the strange "[debug]: | return: Some(SchemaVersion(9))" on the first run.

We can close this.

Done (too early?)

Maybe you want to add to the LLDAP K8s docs a note about the minimum resources required?

It's not really k8s-specific in the end; any production server using Docker/k8s/a distro with limited allocations (CPU/RAM/...) could have that problem, no?

And @nitnelave thanks for the work on lldap!

from lldap.

martadinata666 avatar martadinata666 commented on May 29, 2024

It's just hard to say. As the container is terminated directly, even when there are OOM logs, they won't show up. Any program with limited allocations will act the same; Node.js, for example, is known as a memory hog: as the program reaches the allocation limit, the container is terminated without any hint of OOM, it's just dead.

edit: on Docker this is usually indicated by exit code (137), which could be OOM or some other issue. Still unclear; essentially it's just "container died with a non-zero exit".

from lldap.

nitnelave avatar nitnelave commented on May 29, 2024

You should probably get some logs about the OOM from k8s, no? Maybe it should be more visible.
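
For what it's worth, Kubernetes does record the kill reason on the pod itself; a sketch, using the labels and namespace from the manifests above:

# Shows "Last State: Terminated / Reason: OOMKilled" plus the restart count
kubectl -n private describe pod -l app=lldap

# Or just the termination reason of the previous container instance
kubectl -n private get pod -l app=lldap \
  -o jsonpath='{.items[*].status.containerStatuses[*].lastState.terminated.reason}'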

from lldap.
