Comments (17)

ibadullaev-inc4 commented on June 2, 2024

Hi @brianehlert

Thank you for your previous response.

Is it not possible to add a health check if I don't use NGINX Plus?

Warning  Rejected  28s   nginx-ingress-controller  VirtualServer public/pn-front-prod-arbitrum-nova-rpc was rejected with error: spec.upstreams[0].healthCheck: Forbidden: active health checks are only supported in NGINX Plus
[nariman@notebook nginx-health]$ kubectl -n public get virtualserver pn-front-prod-arbitrum-nova-rpc 
NAME                              STATE     HOST                               IP    PORTS   AGE
pn-front-prod-arbitrum-nova-rpc   Invalid   arbitrum-nova-rpc.example.com                 41d

github-actions commented on June 2, 2024

Hi @ibadullaev-inc4 thanks for reporting!

Be sure to check out the docs and the Contributing Guidelines while you wait for a human to take a look at this 🙂

Cheers!

brianehlert commented on June 2, 2024

After restarting an upstream server (backend), which has a temporary IP address, our NGINX Ingress continues to send traffic to an IP address that is no longer present.

NGINX Ingress Controller configures upstreams using EndpointSlices, and it only includes endpoints that are 'ready'.
The exception to this is ExternalName services, which rely on DNS resolution and the NGINX resolver.

Can you help me understand your scenario a bit deeper?
Are these back-end K8s services? ExternalName services?

If it is a timing issue, we recommend using a health check:
https://docs.nginx.com/nginx-ingress-controller/configuration/virtualserver-and-virtualserverroute-resources/#upstreamhealthcheck
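For reference, a minimal sketch of what such an upstream health check could look like in a VirtualServer (this requires NGINX Plus; the field names follow the upstream.healthCheck docs linked above, and the service name and health path are taken from the backend shared later in this thread):

  upstreams:
  - name: backend
    service: pn-backend
    port: 4000
    healthCheck:
      enable: true
      path: /api/healthcheck   # the backend's own health endpoint
      interval: 10s
      fails: 3
      passes: 1
      statusMatch: "200"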

ibadullaev-inc4 commented on June 2, 2024

Hi, thank you for your response.
The upstream servers are Kubernetes pods.
ExternalName services? No, we do not use ExternalName services.

[nariman@notebook new-only-back]$ kubectl -n public get svc pn-backend -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"pn-backend","namespace":"public"},"spec":{"ports":[{"name":"http","port":4000,"protocol":"TCP","targetPort":4000},{"name":"grpc","port":9090,"protocol":"TCP","targetPort":9090}],"selector":{"app":"pn-backend"},"type":"NodePort"}}
    kubernetes.io/change-cause: kubectl edit svc pn-backend --context=fra --namespace=public
      --record=true
  creationTimestamp: "2023-04-03T13:36:05Z"
  name: pn-backend
  namespace: public
  resourceVersion: "227720878"
  uid: eb76e588-b3a4-4299-bf85-ee1e6e818ada
spec:
  clusterIP: 10.245.106.220
  clusterIPs:
  - 10.245.106.220
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    nodePort: 30569
    port: 4000
    protocol: TCP
    targetPort: 4000
  - name: ws
    nodePort: 30073
    port: 4001
    protocol: TCP
    targetPort: 4001
  - name: grpc
    nodePort: 30022
    port: 9090
    protocol: TCP
    targetPort: 9090
  - name: web
    nodePort: 30754
    port: 9091
    protocol: TCP
    targetPort: 9091
  - name: web-ws
    nodePort: 30693
    port: 9092
    protocol: TCP
    targetPort: 9092
  selector:
    app: pn-backend
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
[nariman@notebook new-only-back]$ kubectl -n public get endpointslices.discovery.k8s.io pn-backend-h9fwc -o yaml
addressType: IPv4
apiVersion: discovery.k8s.io/v1
endpoints:
- addresses:
  - 10.244.48.184
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jjfif
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-5sqcb
    namespace: public
    uid: 363edb7a-aa40-4468-ba30-5cfaf712262a
- addresses:
  - 10.244.0.11
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jggse
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-9szzn
    namespace: public
    uid: 350bbdef-de2d-455d-8841-306337e8ad47
- addresses:
  - 10.244.40.54
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jo28y
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-9hztz
    namespace: public
    uid: c82f3d93-9a50-4f7e-9ce7-cef6d55cf2ef
- addresses:
  - 10.244.2.141
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-j6x9c
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-gfzmd
    namespace: public
    uid: 03245b86-2905-4741-8674-7239efa3175c
- addresses:
  - 10.244.4.89
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-j6x98
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-pj9ph
    namespace: public
    uid: b54f1bcc-af6a-4e46-b7e6-7c923a3989c3
- addresses:
  - 10.244.3.157
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-j6x9u
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-k8vgf
    namespace: public
    uid: 6e83f356-836c-4872-b125-884e7f4c1d76
- addresses:
  - 10.244.51.85
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jb9u4
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-mxtq4
    namespace: public
    uid: 60167a0c-90a7-400a-8797-8f743a99a751
- addresses:
  - 10.244.52.54
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jb9c5
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-zvslm
    namespace: public
    uid: 47e7e6f2-8473-4ab9-86d8-2d87180c83f9
- addresses:
  - 10.244.49.232
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jjy7i
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-6m87p
    namespace: public
    uid: 0462dc9d-bd8c-4048-b3d9-1464c3b617a1
- addresses:
  - 10.244.0.163
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jggsa
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-rvhfc
    namespace: public
    uid: f111baaf-5701-400e-b696-49a4be3dc803
- addresses:
  - 10.244.11.130
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jols6
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-fvk98
    namespace: public
    uid: 09393517-cc36-4315-82b5-7a80c804bbfc
- addresses:
  - 10.244.24.99
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jols2
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-2b5fc
    namespace: public
    uid: e1985bea-331c-4235-9a16-9d9f7d9d01e9
- addresses:
  - 10.244.47.216
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jjfiq
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-5h56d
    namespace: public
    uid: f246aeaf-0408-450a-822c-28e861eea19b
- addresses:
  - 10.244.39.214
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jo2ns
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-t99pc
    namespace: public
    uid: 8508225f-8cf2-48e6-bdff-9304964c82bf
- addresses:
  - 10.244.37.252
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jol98
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-7dcj8
    namespace: public
    uid: 21e8293b-4a51-4701-9c42-d45619919c95
- addresses:
  - 10.244.43.186
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jo28r
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-h45nr
    namespace: public
    uid: 17a9ff72-b098-4f7f-a17e-a154a987113a
- addresses:
  - 10.244.42.195
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jo28j
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-84vqk
    namespace: public
    uid: c3db49b2-a931-4647-ac50-0ead08719f60
- addresses:
  - 10.244.51.146
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jb9ui
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-bfk8d
    namespace: public
    uid: 30f9461c-69f6-43b9-9be2-8489e8bcd13b
- addresses:
  - 10.244.24.217
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: public-node-k8s-pool-fra1-jolii
  targetRef:
    kind: Pod
    name: pn-backend-748b569678-bxvxn
    namespace: public
    uid: 94b6cfdb-dbe5-48e0-ae06-a333be26e9c5
kind: EndpointSlice
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2024-04-24T22:12:00Z"
  creationTimestamp: "2023-04-03T13:36:05Z"
  generateName: pn-backend-
  generation: 40139
  labels:
    endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
    kubernetes.io/service-name: pn-backend
  name: pn-backend-h9fwc
  namespace: public
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: Service
    name: pn-backend
    uid: eb76e588-b3a4-4299-bf85-ee1e6e818ada
  resourceVersion: "249331656"
  uid: cb4a980f-e75f-4dfe-a26f-aa472daa0b93
ports:
- name: grpc
  port: 9090
  protocol: TCP
- name: web
  port: 9091
  protocol: TCP
- name: web-ws
  port: 9092
  protocol: TCP
- name: ws
  port: 4001
  protocol: TCP
- name: http
  port: 4000
  protocol: TCP

brianehlert commented on June 2, 2024

Passive health checks are always present, but active health checks are a capability specific to NGINX Plus.
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-health-check/

By default, NGINX Ingress Controller won't add pods to the service upstream group until the pod reports ready.
So, the alternative to using the enterprise version of this project is to improve the readiness probe on your service pods.
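For the open source build, the remaining knobs are the passive ones on the upstream itself. A rough sketch (max-fails, fail-timeout, and next-upstream are standard VirtualServer upstream fields; the values here are only illustrative):

  upstreams:
  - name: backend
    service: pn-backend
    port: 4000
    max-fails: 3                     # passively mark a peer failed after 3 errors
    fail-timeout: 10s                # keep it out of rotation for 10s before retrying it
    next-upstream: "error timeout"   # retry the request on another peer on connect errors/timeouts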

ibadullaev-inc4 commented on June 2, 2024

Hello,

Yes, my deployment is configured with liveness and readiness probes.
Also, as you mentioned, passive health checks are automatically enabled by the NGINX virtual host template.
But I am facing this problem after a pod is no longer present:
NGINX keeps trying to send traffic to a pod that died 10-20 minutes before.

[nariman@notebook nginx-ingress]$ kubectl -n public get deployments.apps pn-backend -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pn-backend
    tags.datadoghq.com/service: pn-backend
  name: pn-backend
  namespace: public
spec:
  selector:
    matchLabels:
      app: pn-backend
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 10%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        ad.datadoghq.com/pn-backend.logs: '[{"source":"pn-backend","service":"pn-backend","auto_multi_line_detection":true}]'
      creationTimestamp: null
      labels:
        app: pn-backend
        tags.datadoghq.com/env: prod-fra
        tags.datadoghq.com/service: pn-backend
    spec:
      containers:
      - name: pn-backend
        image: public/pn-backend:fe1db1c
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/healthcheck
            port: 4000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 3
        ports:
        - containerPort: 4000
          protocol: TCP
        - containerPort: 9090
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/healthcheck
            port: 4000
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 3
        resources:
          limits:
            cpu: "12"
            memory: 16Gi
          requests:
            cpu: "10"
            memory: 8Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run/datadog
          name: apmsocketpath
        - mountPath: /app/geo_config.json
          name: cluster-configs-volume
          readOnly: true
          subPath: geo_config.json
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: docker-registry
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          sysctl -w net.core.somaxconn=64000
          sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        image: busybox
        imagePullPolicy: Always
        name: init-sysctl
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /var/run/datadog/
          type: ""
        name: apmsocketpath
      - name: cluster-configs-volume
        secret:
          defaultMode: 420
          secretName: cluster-configs-f9c45ad972b6b8559cdc924581631d693f53d5d0

brianehlert commented on June 2, 2024

The deployment doesn't give us much information to assist with.
We would need you to share your configuration resources: the VirtualServer, VirtualServerRoute, TransportServer, or Ingress.

If a pod of a service no longer exists, it should be removed from the ingress controller upstream group for that service, unless there is a configuration error (such as through snippets or customizations) that is preventing NGINX from being updated.
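In the meantime, two quick things worth checking while the stale IPs are being hit (a sketch; the resource and deployment names are taken from elsewhere in this thread):

  # any warnings or rejection events on the VirtualServer itself
  kubectl -n public describe virtualserver pn-front-prod-arbitrum-nova-rpc

  # reload or config errors in the controller around the time of the rolling update
  kubectl -n nginx logs deploy/ingress-nginx-ingress-controller --since=1h | grep -iE "error|reload"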

ibadullaev-inc4 commented on June 2, 2024

Hi, thank you for the fast response.
If I forgot something, please let me know.

  1. We installed our NGINX Ingress Controller via the Helm chart
[nariman@notebook ~]$ helm ls -n nginx
NAME   	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART              	APP VERSION
ingress	nginx    	19      	2024-03-29 12:17:00.734183664 +0400 +04	deployed	nginx-ingress-1.1.0	3.4.0
  2. Nginx ConfigMap
[nariman@notebook ~]$ kubectl -n nginx get cm ingress-nginx-ingress -o yaml
apiVersion: v1
data:
  client-max-body-size: 100m
  http2: "true"
  log-format: date="$time_iso8601" status=$status request_completion=$request_completion
    msec=$msec connections_active=$connections_active connections_reading=$connections_reading
    connections_writing=$connections_writing connections_waiting=$connections_waiting
    connection=$connection connection_requests=$connection_requests connection_time=$connection_time
    client=$remote_addr method=$request_method request="$request" request_length=$request_length
    status=$status bytes_sent=$bytes_sent body_bytes_sent=$body_bytes_sent referer=$http_referer
    user_agent="$http_user_agent" upstream_addr=$upstream_addr upstream_status=$upstream_status
    request_time=$request_time upstream_response_time=$upstream_response_time upstream_connect_time=$upstream_connect_time
    upstream_header_time=$upstream_header_time request_body="$request_body host="$host"
    user_ip="$http_x_forwarded_for"
  log-format-escaping: json
  proxy-buffering: "false"
  proxy-request-buffering: "off"
  redirect-to-https: "true"
  ssl_buffer_size: 4k
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: ingress
    meta.helm.sh/release-namespace: nginx
  creationTimestamp: "2024-03-26T14:37:04Z"
  labels:
    app.kubernetes.io/instance: ingress
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/version: 3.4.0
    helm.sh/chart: nginx-ingress-1.1.0
  name: ingress-nginx-ingress
  namespace: nginx
  resourceVersion: "222418677"
  uid: 958b1e7e-d36e-44cf-bfc1-d5aee88b767d
  3. Service in the nginx namespace; we do not use the LoadBalancer type because a Cloudflare Tunnel sits in front of NGINX
[nariman@notebook ~]$ kubectl -n nginx get svc
NAME                                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-ingress-controller           NodePort    10.245.73.204   <none>        80:32098/TCP,443:32685/TCP   43d
ingress-nginx-ingress-prometheus-service   ClusterIP   None            <none>        9113/TCP                     43d
  4. Nginx deployment
[nariman@notebook ~]$ kubectl -n nginx get deployments.apps ingress-nginx-ingress-controller -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "10"
    meta.helm.sh/release-name: ingress
    meta.helm.sh/release-namespace: nginx
  creationTimestamp: "2024-03-26T14:37:05Z"
  generation: 158
  labels:
    app.kubernetes.io/instance: ingress
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/version: 3.4.0
    helm.sh/chart: nginx-ingress-1.1.0
  name: ingress-nginx-ingress-controller
  namespace: nginx
  resourceVersion: "250051917"
  uid: 6cda1863-0a5a-4b90-b419-74e6f26540b2
spec:
  progressDeadlineSeconds: 600
  replicas: 10
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: ingress
      app.kubernetes.io/name: nginx-ingress
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-03-29T10:34:14+04:00"
        prometheus.io/port: "9113"
        prometheus.io/scheme: http
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: ingress
        app.kubernetes.io/name: nginx-ingress
    spec:
      automountServiceAccountToken: true
      containers:
      - args:
        - -nginx-plus=false
        - -nginx-reload-timeout=60000
        - -enable-app-protect=false
        - -enable-app-protect-dos=false
        - -nginx-configmaps=$(POD_NAMESPACE)/ingress-nginx-ingress
        - -ingress-class=nginx
        - -health-status=false
        - -health-status-uri=/nginx-health
        - -nginx-debug=false
        - -v=1
        - -nginx-status=true
        - -nginx-status-port=8080
        - -nginx-status-allow-cidrs=127.0.0.1
        - -report-ingress-status
        - -enable-leader-election=true
        - -leader-election-lock-name=nginx-ingress-leader
        - -enable-prometheus-metrics=true
        - -prometheus-metrics-listen-port=9113
        - -prometheus-tls-secret=
        - -enable-service-insight=false
        - -service-insight-listen-port=9114
        - -service-insight-tls-secret=
        - -enable-custom-resources=true
        - -enable-snippets=false
        - -include-year=false
        - -disable-ipv6=false
        - -enable-tls-passthrough=false
        - -enable-cert-manager=false
        - -enable-oidc=false
        - -enable-external-dns=false
        - -default-http-listener-port=80
        - -default-https-listener-port=443
        - -ready-status=true
        - -ready-status-port=8081
        - -enable-latency-metrics=true
        - -ssl-dynamic-reload=true
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        image: nginx/nginx-ingress:3.4.0
        imagePullPolicy: IfNotPresent
        name: nginx-ingress
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 9113
          name: prometheus
          protocol: TCP
        - containerPort: 8081
          name: readiness-port
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /nginx-ready
            port: readiness-port
            scheme: HTTP
          periodSeconds: 1
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "2"
            memory: 2Gi
          requests:
            cpu: "1"
            memory: 512Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          readOnlyRootFilesystem: false
          runAsNonRoot: true
          runAsUser: 101
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      serviceAccount: ingress-nginx-ingress
      serviceAccountName: ingress-nginx-ingress
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 10
  conditions:
  - lastTransitionTime: "2024-03-26T14:37:05Z"
    lastUpdateTime: "2024-03-29T06:34:21Z"
    message: ReplicaSet "ingress-nginx-ingress-controller-565c6849d5" has successfully
      progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2024-04-10T16:13:10Z"
    lastUpdateTime: "2024-04-10T16:13:10Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 158
  readyReplicas: 10
  replicas: 10
  updatedReplicas: 10

The following manifests relate to our service:

  1. VirtualServer manifest
[nariman@notebook ~]$ kubectl -n public get virtualservers.k8s.nginx.org pn-front-prod-arbitrum-sepolia-rpc 
NAME                                 STATE   HOST                                  IP    PORTS   AGE
pn-front-prod-arbitrum-sepolia-rpc   Valid   arbitrum-sepolia-rpc.public.com                 43d
[nariman@notebook ~]$ kubectl -n public get virtualservers.k8s.nginx.org pn-front-prod-arbitrum-sepolia-rpc -o yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"k8s.nginx.org/v1","kind":"VirtualServer","metadata":{"annotations":{},"name":"pn-front-prod-arbitrum-sepolia-rpc","namespace":"public"},"spec":{"host":"arbitrum-sepolia-rpc.public.com","routes":[{"action":{"pass":"frontend"},"matches":[{"action":{"redirect":{"code":301,"url":"https://arbitrum-sepolia-rpc.public.com/"}},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/api/metrics"},{"action":{"pass":"frontend"},"matches":[{"action":{"pass":"frontend"},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/api"},{"action":{"pass":"frontend"},"matches":[{"action":{"pass":"frontend"},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/favicon.ico"},{"action":{"pass":"frontend"},"matches":[{"action":{"pass":"frontend"},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/platforms"},{"action":{"pass":"frontend"},"matches":[{"action":{"pass":"frontend"},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/_next"},{"action":{"pass":"backend"},"matches":[{"action":{"pass":"backend"},"conditions":[{"header":"Upgrade","value":"websocket"}]},{"action":{"proxy":{"rewritePath":"/arbitrum-sepolia","upstream":"frontend"}},"conditions":[{"value":"GET","variable":"$request_method"}]}],"path":"/"}],"server-snippets":"proxy_request_buffering off;\nssl_buffer_size 4k;\n","tls":{"secret":"public.com"},"upstreams":[{"name":"backend","port":4000,"service":"pn-backend"},{"name":"frontend","port":3000,"service":"pn-frontend"}]}}
  creationTimestamp: "2024-03-26T14:42:08Z"
  generation: 1
  name: pn-front-prod-arbitrum-sepolia-rpc
  namespace: public
  resourceVersion: "222416877"
  uid: e616e0dc-3433-4be4-807d-00786e8a217d
spec:
  host: arbitrum-sepolia-rpc.public.com
  routes:
  - action:
      pass: frontend
    matches:
    - action:
        redirect:
          code: 301
          url: https://arbitrum-sepolia-rpc.publicn.com/
      conditions:
      - value: GET
        variable: $request_method
    path: /api/metrics
  - action:
      pass: frontend
    matches:
    - action:
        pass: frontend
      conditions:
      - value: GET
        variable: $request_method
    path: /api
  - action:
      pass: frontend
    matches:
    - action:
        pass: frontend
      conditions:
      - value: GET
        variable: $request_method
    path: /favicon.ico
  - action:
      pass: frontend
    matches:
    - action:
        pass: frontend
      conditions:
      - value: GET
        variable: $request_method
    path: /platforms
  - action:
      pass: frontend
    matches:
    - action:
        pass: frontend
      conditions:
      - value: GET
        variable: $request_method
    path: /_next
  - action:
      pass: backend
    matches:
    - action:
        pass: backend
      conditions:
      - header: Upgrade
        value: websocket
    - action:
        proxy:
          rewritePath: /arbitrum-sepolia
          upstream: frontend
      conditions:
      - value: GET
        variable: $request_method
    path: /
  server-snippets: |
    proxy_request_buffering off;
    ssl_buffer_size 4k;
  tls:
    secret: public.com
  upstreams:
  - name: backend
    port: 4000
    service: pn-backend
  - name: frontend
    port: 3000
    service: pn-frontend
status:
  message: 'Configuration for public/pn-front-prod-arbitrum-sepolia-rpc was added
    or updated '
  reason: AddedOrUpdated
  state: Valid
  • virtualserverroutes
[nariman@notebook ~]$ kubectl -n public get virtualserverroutes.k8s.nginx.org 
No resources found in public namespace.
  • transportservers
[nariman@notebook ~]$ kubectl -n public get transportservers.k8s.nginx.org 
No resources found in public namespace.
  • ingress
[nariman@notebook ~]$ kubectl -n public get ingress
No resources found in public namespace.
[nariman@notebook ~]$ kubectl -n public get svc
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                      AGE
pn-backend              NodePort   10.245.106.220   <none>        4000:30569/TCP,4001:30073/TCP,9090:30022/TCP,9091:30754/TCP,9092:30693/TCP   401d
pn-connections-broker   NodePort   10.245.65.60     <none>        8888:30605/TCP,9999:32137/TCP                                                71d
pn-cron                 NodePort   10.245.206.66    <none>        5005:32416/TCP                                                               196d
pn-frontend             NodePort   10.245.253.158   <none>        3000:31404/TCP                                                               401d
pn-internal-stats       NodePort   10.245.116.36    <none>        4000:30191/TCP,4444:32162/TCP                                                174d

ibadullaev-inc4 commented on June 2, 2024

Hi all,
Thanks for your previous responses

Do I need to provide anything else? I added some manifests in the earlier comments. As I mentioned before, NGINX is trying to send requests to the IP addresses of pods that are no longer alive.
For example:
We update our backend component to a new version of the container image. As a result, all our pods are restarted and new pods come up with new IP addresses, but NGINX still tries to send traffic to an IP that no longer exists.

pdabelf5 commented on June 2, 2024

@ibadullaev-inc4 are you able to confirm if this occurs only during a rolling upgrade of the pn-backend service?

Note, you have server-snippets configured, yet -enable-snippets=false is set.

pdabelf5 commented on June 2, 2024

@ibadullaev-inc4 before sending requests to the service, did the backend service deployment complete and the nginx reload finish?

ibadullaev-inc4 commented on June 2, 2024

Hello @pdabelf5

before sending requests to the service, did the backend service deployment complete?

When I say that we are restarting the service, I mean we are deploying a new version of our service. After we update our deployment, our pods sequentially terminate and new ones appear, which have the new image version.
Yes, we observe this behavior after 30-40 minutes. That is, the deployment finishes updating, then 30 minutes pass, and we see that this pod has been gone for 30 minutes, but nginx is still trying to send traffic to it.

and the nginx reload finish?

I didn't understand this part of the question: we don't restart nginx when we deploy a new version of our backend.

are you able to confirm if this occurs only during a rolling upgrade of the pn-backend service?

Yes, we observe this issue only in this case.

Note, you have server-snippets configured, yet -enable-snippets=false is set.

Do you mean that we should switch this parameter to True?

Note: When I manually delete a pod using the command "kubectl delete pods backend-xxxxxxx" which has the IP, for example: X.X.X.X, I see that nginx removes this IP address from its upstream configuration. This means nginx behaves correctly and stops passive monitoring for this IP.

But when updating the backend service, nginx most likely does not remove the old IPs from its configuration and continues to send traffic to them.

pdabelf5 commented on June 2, 2024

Hello @ibadullaev-inc4
Thank you for clarifying the behaviour you are seeing.

In order for

  server-snippets: |
    proxy_request_buffering off;
    ssl_buffer_size 4k;

to take effect, -enable-snippets=true needs to be set in your manifest or controller.enableSnippets: true in the helm values file.
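For example, roughly (in the Helm values file; then apply it with a normal helm upgrade of the release):

  # values.yaml
  controller:
    enableSnippets: true

  # upgrade the release with that values file (the chart reference is whatever you installed from)
  helm -n nginx upgrade ingress <chart> -f values.yaml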

and the nginx reload finish?

NGINX should reload when the upstream pod IPs in the backend service are updated, and the upstream config should contain the current list of pod IPs.
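One way to confirm what NGINX is configured with at a given moment is to dump the running config from a controller pod and compare the upstream servers with the current EndpointSlice, for example (a sketch; the pod label comes from your deployment above, and the exact upstream name pattern may differ):

  # pick a controller pod
  kubectl -n nginx get pods -l app.kubernetes.io/name=nginx-ingress

  # dump the generated config and look at the servers in the backend upstream
  kubectl -n nginx exec <controller-pod> -- nginx -T | grep -A 25 "upstream vs_public_"

  # compare with what Kubernetes currently reports as ready
  kubectl -n public get endpointslices -l kubernetes.io/service-name=pn-backend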

I am trying to replicate the issue you are seeing. Is the configuration you have provided the smallest example configuration that results in the problem you have encountered? Is there anything I might need when replicating the issue, e.g. WebSockets or gRPC?

The example you provided suggests you are using 3.4.0, have you tried 3.4.3 or 3.5.1 and seen the same issue?
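If you do want to try a newer release, a rough example of bumping the existing Helm release in place (the repo URL is the public NGINX stable Helm repo; pick the chart version that corresponds to the controller version you want):

  helm repo add nginx-stable https://helm.nginx.com/stable
  helm repo update
  helm -n nginx upgrade ingress nginx-stable/nginx-ingress --reuse-values --version <chart version>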

ibadullaev-inc4 commented on June 2, 2024

Hello @pdabelf5
I want to thank you for your support and quick response.

-enable-snippets=true

Yes, we forgot to enable this setting. Thank you for pointing that out.

NGINX should reload when the upstream pod IPs in the backend service are updated, and the upstream config should contain the current list of pod IPs.

Oh, that's very strange. Are you sure I need to restart nginx every time I update my backend service (deployment)? Currently, we are using Istio and we don't restart it when deploying a new version.
Should the reload process happen automatically, or do I need to initiate it somehow?

I am trying to replicate the issue you are seeing. Is the configuration you have provided the smallest example configuration that results in the problem you have encountered? Is there anything I might need when replicating the issue, e.g. WebSockets or gRPC?

"We also use gRPC and WebSockets in our configuration, but I'm not sure if they could influence or be the source of the problem. I think you can try without them, as we see that the timeout issue is related to the HTTP protocol."

The example you provided suggests you are using 3.4.0, have you tried 3.4.3 or 3.5.1 and seen the same issue?

We only tried version 3.4.0.

pdabelf5 commented on June 2, 2024

@ibadullaev-inc4

Should the reload process happen automatically, or do I need to initiate it somehow?

Apologies if I implied the NGINX reload was something you needed to perform; it should happen automatically when the NGINX config is updated by the controller.

I will let you know how things go trying to replicate the issue as soon as I have something to tell.

jasonwilliams14 commented on June 2, 2024

@ibadullaev-inc4

Currently, we are using Istio and we don't restart it when deploying a new version.

Can you expand on how you are using Istio in your setup? Is it separate? Is it a sidecar within NGINX Ingress controller?
TY

pdabelf5 commented on June 2, 2024

@ibadullaev-inc4 I put together a test that deploys a test application and then updates it while performing requests against it. I'm afraid I wasn't able to reproduce the problem you have encountered.
