
nginxinc / kubernetes-ingress


NGINX and NGINX Plus Ingress Controllers for Kubernetes

Home Page: https://docs.nginx.com/nginx-ingress-controller

License: Apache License 2.0

Topics: docker, go, golang, ingress, ingress-controller, k8s, kubernetes, nginx

kubernetes-ingress's Introduction


NGINX Ingress Controller

This repo provides an implementation of an Ingress Controller for NGINX and NGINX Plus from the people behind NGINX.


Join Our Next Community Call

We value community input and would love to see you at our next community call. At these calls, we discuss PRs by community members as well as issues, discussions and feature requests.

Zoom Link: KIC - GitHub Issues Triage
Password: 197738
Slack: Join our channel #nginx-ingress-controller on the NGINX Community Slack for updates and discussions.
When: 15:00 GMT (convert to your time zone), every other Monday.

Community Call Dates
2024-05-06
2024-05-20
2024-06-03
2024-06-17

NGINX Ingress Controller works with both NGINX and NGINX Plus and supports the standard Ingress features - content-based routing and TLS/SSL termination.

Additionally, several NGINX and NGINX Plus features are available as extensions to the Ingress resource via annotations and the ConfigMap resource. In addition to HTTP, NGINX Ingress Controller supports load balancing WebSocket, gRPC, TCP and UDP applications. See the ConfigMap and Annotations docs to learn more about the supported features and customization options.

As an alternative to the Ingress, NGINX Ingress Controller supports the VirtualServer and VirtualServerRoute resources. They enable use cases not supported with the Ingress resource, such as traffic splitting and advanced content-based routing. See VirtualServer and VirtualServerRoute resources doc.
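For illustration, a minimal VirtualServer that splits traffic between two versions of a service might look like the sketch below (host, service names and weights are made up; see the VirtualServer docs for the authoritative schema):

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: cafe
spec:
  host: cafe.example.com
  upstreams:
  - name: coffee-v1
    service: coffee-v1-svc
    port: 80
  - name: coffee-v2
    service: coffee-v2-svc
    port: 80
  routes:
  - path: /coffee
    splits:              # traffic splitting, not possible with the Ingress resource
    - weight: 90
      action:
        pass: coffee-v1
    - weight: 10
      action:
        pass: coffee-v2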

TCP, UDP and TLS Passthrough load balancing is also supported. See the TransportServer resource doc.

Read this doc to learn more about NGINX Ingress Controller with NGINX Plus.

Note

This project is different from the NGINX Ingress Controller in kubernetes/ingress-nginx repo. See this doc to find out about the key differences.

Ingress and Ingress Controller

What is the Ingress?

The Ingress is a Kubernetes resource that lets you configure an HTTP load balancer for applications running on Kubernetes, represented by one or more Services. Such a load balancer is necessary to deliver those applications to clients outside of the Kubernetes cluster.

The Ingress resource supports the following features:

  • Content-based routing:
    • Host-based routing. For example, routing requests with the host header foo.example.com to one group of services and the host header bar.example.com to another group.
    • Path-based routing. For example, routing requests with the URI that starts with /serviceA to service A and requests with the URI that starts with /serviceB to service B.
  • TLS/SSL termination for each hostname, such as foo.example.com.
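As a concrete illustration, a minimal Ingress combining host-based routing and TLS termination might look like the following (hostnames, service names and the secret name are placeholders, not part of the examples shipped with this repo):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
  - hosts:                  # TLS/SSL termination for this hostname
    - foo.example.com
    secretName: foo-tls-secret
  rules:
  - host: foo.example.com   # host-based routing
    http:
      paths:
      - path: /serviceA     # path-based routing
        pathType: Prefix
        backend:
          service:
            name: service-a
            port:
              number: 80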

See the Ingress User Guide to learn more about the Ingress resource.

What is the Ingress Controller?

The Ingress Controller is an application that runs in a cluster and configures an HTTP load balancer according to Ingress resources. The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally. Different load balancers require different Ingress Controller implementations.

In the case of NGINX, the Ingress Controller is deployed in a pod along with the load balancer.

Getting Started

Note

All documentation should only be used with the latest stable release, indicated on the releases page of the GitHub repository.

  1. Install NGINX Ingress Controller using the Helm chart or the Kubernetes manifests.
  2. Configure load balancing for a simple web application, using either the Ingress resource or the VirtualServer resource.
  3. See additional configuration examples.
  4. Learn more about all available configuration and customization in the docs.

NGINX Ingress Controller Releases

We publish NGINX Ingress Controller releases on GitHub. See our releases page.

The latest stable release is 3.5.0. For production use, we recommend that you choose the latest stable release.

The edge version is useful for experimenting with new features that are not yet published in a stable release. To use it, choose the edge version built from the latest commit from the main branch.

To use NGINX Ingress Controller, you need to have access to:

  • An NGINX Ingress Controller image.
  • Installation manifests or a Helm chart.
  • Documentation and examples.

It is important that the versions of these components match.

The following summarizes the options for the images, Helm chart, manifests, documentation and examples, and gives you links to the correct versions:

Latest stable release (for production use)
  • Image for NGINX: use the 3.5.0 images from DockerHub, GitHub Container, Amazon ECR Public Gallery or Quay.io, or build your own image.
  • Image for NGINX Plus: use the 3.5.0 images from the F5 Container Registry or the AWS Marketplace, or build your own image.
  • Installation: Manifests. Helm chart.
  • Documentation. Examples.

Edge/Nightly (for testing and experimenting)
  • Image for NGINX: use the edge or nightly images from DockerHub, GitHub Container, Amazon ECR Public Gallery or Quay.io, or build your own image.
  • Image for NGINX Plus: build your own image.
  • Installation: Manifests. Helm chart.
  • Documentation. Examples.

SBOM (Software Bill of Materials)

We generate SBOMs for the binaries and the Docker images.

Binaries

The SBOMs for the binaries are available on the releases page. They are generated using syft and are provided in SPDX format.

Docker Images

The SBOMs for the Docker images are available in the DockerHub, GitHub Container, Amazon ECR Public Gallery or Quay.io repositories. The SBOMs are generated using syft and stored as an attestation in the image manifest.

For example, to retrieve the SBOM for linux/amd64 from Docker Hub and analyze it using grype, you can run the following command:

docker buildx imagetools inspect nginx/nginx-ingress:edge --format '{{ json (index .SBOM "linux/amd64").SPDX }}' | grype

Contacts

We’d like to hear your feedback! If you have any suggestions or experience issues with our Ingress Controller, please create an issue or send a pull request on GitHub. You can contact us directly via [email protected] or on the NGINX Community Slack.

Contributing

If you'd like to contribute to the project, please read our Contributing guide.

Support

For NGINX Plus customers, NGINX Ingress Controller (when used with NGINX Plus) is covered by the support contract.


kubernetes-ingress's Issues

Add something like /status or /health to NGINX config

Hi,

I have a setup where an external LoadBalancer points to the NGINX Ingress Controller. The NGINX Ingress Controller serves multiple Kubernetes Namespaces. For the external LoadBalancer's health checker, I need a location that is available in all NGINX config files.

ngx_http_stub_status_module comes to mind, which is already available in the shipped NGINX:

location /status {
    stub_status;
    allow all;
}

Something like this maybe? Is there a better way? How about a new annotation like nginx.org/status?

Container crashing makes nginx crash

If a container crashes and it is the only one within a service, the next Ingress event the API happens to deliver will cause the nginx reload to fail, because nginx won't be able to resolve that service's IP at daemon start time.

Nginx should bind to tcp6 as well as tcp4

I would be interested in having nginx listen to ipv6. I am using this with hostNetwork: true so I am not limited by lack of IPv6 support in kubernetes.

@thetechnick commented that they have "pull requests incoming for ipv6 binding"; I would like to inquire about the status of that.

I think listening on IPv6 is a good default. For a PoC, I turned on IPv6 by editing the template.

listen 80{{if $server.ProxyProtocol}} proxy_protocol{{end}};
listen [::]:80{{if $server.ProxyProtocol}} proxy_protocol{{end}};

Is that the correct way to make nginx listen on both v4 and v6?

Aha! Link: https://nginx.aha.io/features/IC-112

connect() failed (111: Connection refused) while connecting to upstream

After upgrading and building from master (using a config to increase server_names_hash_bucket_size and client_max_body_size), I'm not able to access my services via ingress routes.

The error I'm getting is:

[error] 26#26: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.36.5.79, server: gogs.default.beast.fabric8.io, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8181/", host: "gogs.default.beast.fabric8.io"

My Ingress looks like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gogs
spec:
  rules:
  - host: gogs.default.beast.fabric8.io
    http:
      paths:
      - backend:
          serviceName: gogs
          servicePort: 80
status:
  loadBalancer: {}

I can successfully access the gogs service from within the nginx ingress controller pod if I install curl and use the kubernetes service: curl http://gogs, so the cluster dns all seems to work fine.

By no means am I ruling out something I've done, but I've checked a number of things and am now out of ideas. I'm wondering whether this upstream section in the logs is correct:

upstream default-gogs-gogs.default.beast.fabric8.io-gogs {

    server 127.0.0.1:8181;
}

The full log with the nginx config used and error at the bottom:

I0726 09:15:30.392509       1 nginx.go:234] Writing NGINX conf to /etc/nginx/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    server_names_hash_max_size 1024;
    server_names_hash_bucket_size 512;

    include /etc/nginx/conf.d/*.conf;
}
I0726 09:15:30.393053       1 nginx.go:252] The main NGINX configuration file had been updated
I0726 09:15:30.393098       1 nginx.go:207] executing nginx
I0726 09:15:30.434586       1 controller.go:99] Adding service: elasticsearch
I0726 09:15:30.434629       1 controller.go:350] ignoring service elasticsearch: No ingress for service elasticsearch
I0726 09:15:30.434676       1 controller.go:99] Adding service: gogs-ssh
I0726 09:15:30.434680       1 controller.go:350] ignoring service gogs-ssh: No ingress for service gogs-ssh
I0726 09:15:30.434686       1 controller.go:99] Adding service: kube-dns
I0726 09:15:30.434689       1 controller.go:350] ignoring service kube-dns: No ingress for service kube-dns
I0726 09:15:30.434694       1 controller.go:99] Adding service: jenkinshift
I0726 09:15:30.434697       1 controller.go:350] ignoring service jenkinshift: No ingress for service jenkinshift
I0726 09:15:30.434709       1 controller.go:99] Adding service: kubernetes
I0726 09:15:30.434715       1 controller.go:350] ignoring service kubernetes: No ingress for service kubernetes
I0726 09:15:30.434720       1 controller.go:99] Adding service: nexus
I0726 09:15:30.434727       1 controller.go:350] ignoring service nexus: No ingress for service nexus
I0726 09:15:30.434732       1 controller.go:99] Adding service: elasticsearch-masters
I0726 09:15:30.434740       1 controller.go:350] ignoring service elasticsearch-masters: No ingress for service elasticsearch-masters
I0726 09:15:30.434745       1 controller.go:99] Adding service: fabric8-forge
I0726 09:15:30.434763       1 controller.go:350] ignoring service fabric8-forge: No ingress for service fabric8-forge
I0726 09:15:30.434770       1 controller.go:99] Adding service: grafana
I0726 09:15:30.434773       1 controller.go:350] ignoring service grafana: No ingress for service grafana
I0726 09:15:30.434780       1 controller.go:99] Adding service: kibana
I0726 09:15:30.434783       1 controller.go:350] ignoring service kibana: No ingress for service kibana
I0726 09:15:30.434792       1 controller.go:99] Adding service: prometheus
I0726 09:15:30.434796       1 controller.go:350] ignoring service prometheus: No ingress for service prometheus
I0726 09:15:30.434804       1 controller.go:99] Adding service: fabric8
I0726 09:15:30.434807       1 controller.go:350] ignoring service fabric8: No ingress for service fabric8
I0726 09:15:30.434816       1 controller.go:99] Adding service: gogs
I0726 09:15:30.434819       1 controller.go:350] ignoring service gogs: No ingress for service gogs
I0726 09:15:30.434829       1 controller.go:99] Adding service: jenkins-jnlp
I0726 09:15:30.434832       1 controller.go:350] ignoring service jenkins-jnlp: No ingress for service jenkins-jnlp
I0726 09:15:30.434837       1 controller.go:99] Adding service: fabric8-docker-registry
I0726 09:15:30.434862       1 controller.go:350] ignoring service fabric8-docker-registry: No ingress for service fabric8-docker-registry
I0726 09:15:30.434867       1 controller.go:99] Adding service: jenkins
I0726 09:15:30.434870       1 controller.go:350] ignoring service jenkins: No ingress for service jenkins
I0726 09:15:30.437675       1 controller.go:125] Adding endpoints: gogs-ssh
I0726 09:15:30.437707       1 controller.go:125] Adding endpoints: nexus
I0726 09:15:30.437713       1 controller.go:125] Adding endpoints: prometheus
I0726 09:15:30.437718       1 controller.go:125] Adding endpoints: elasticsearch-masters
I0726 09:15:30.437725       1 controller.go:125] Adding endpoints: fabric8-docker-registry
I0726 09:15:30.437730       1 controller.go:125] Adding endpoints: fabric8-forge
I0726 09:15:30.437734       1 controller.go:125] Adding endpoints: kube-controller-manager
I0726 09:15:30.437739       1 controller.go:125] Adding endpoints: kube-dns
I0726 09:15:30.437744       1 controller.go:125] Adding endpoints: elasticsearch
I0726 09:15:30.437750       1 controller.go:125] Adding endpoints: gogs
I0726 09:15:30.437755       1 controller.go:125] Adding endpoints: grafana
I0726 09:15:30.437759       1 controller.go:125] Adding endpoints: jenkins
I0726 09:15:30.437764       1 controller.go:125] Adding endpoints: kube-scheduler
I0726 09:15:30.437770       1 controller.go:125] Adding endpoints: fabric8
I0726 09:15:30.437778       1 controller.go:125] Adding endpoints: jenkins-jnlp
I0726 09:15:30.437782       1 controller.go:125] Adding endpoints: jenkinshift
I0726 09:15:30.437787       1 controller.go:125] Adding endpoints: kibana
I0726 09:15:30.437791       1 controller.go:125] Adding endpoints: kubernetes
I0726 09:15:30.437797       1 utils.go:70] Syncing default/gogs-ssh
I0726 09:15:30.437803       1 controller.go:257] Syncing endpoints default/gogs-ssh
I0726 09:15:30.437839       1 controller.go:350] ignoring service gogs-ssh: No ingress for service gogs-ssh
I0726 09:15:30.437845       1 utils.go:70] Syncing default/nexus
I0726 09:15:30.437848       1 controller.go:257] Syncing endpoints default/nexus
I0726 09:15:30.437852       1 controller.go:350] ignoring service nexus: No ingress for service nexus
I0726 09:15:30.437859       1 utils.go:70] Syncing default/prometheus
I0726 09:15:30.437862       1 controller.go:257] Syncing endpoints default/prometheus
I0726 09:15:30.437866       1 controller.go:350] ignoring service prometheus: No ingress for service prometheus
I0726 09:15:30.437870       1 utils.go:70] Syncing default/elasticsearch-masters
I0726 09:15:30.437872       1 controller.go:257] Syncing endpoints default/elasticsearch-masters
I0726 09:15:30.437876       1 controller.go:350] ignoring service elasticsearch-masters: No ingress for service elasticsearch-masters
I0726 09:15:30.437880       1 utils.go:70] Syncing default/fabric8-docker-registry
I0726 09:15:30.437882       1 controller.go:257] Syncing endpoints default/fabric8-docker-registry
I0726 09:15:30.437886       1 controller.go:350] ignoring service fabric8-docker-registry: No ingress for service fabric8-docker-registry
I0726 09:15:30.437890       1 utils.go:70] Syncing default/fabric8-forge
I0726 09:15:30.437892       1 controller.go:257] Syncing endpoints default/fabric8-forge
I0726 09:15:30.437896       1 controller.go:350] ignoring service fabric8-forge: No ingress for service fabric8-forge
I0726 09:15:30.437900       1 utils.go:70] Syncing kube-system/kube-controller-manager
I0726 09:15:30.437902       1 controller.go:257] Syncing endpoints kube-system/kube-controller-manager
I0726 09:15:30.437907       1 utils.go:70] Syncing kube-system/kube-dns
I0726 09:15:30.437909       1 controller.go:257] Syncing endpoints kube-system/kube-dns
I0726 09:15:30.437913       1 controller.go:350] ignoring service kube-dns: No ingress for service kube-dns
I0726 09:15:30.437919       1 utils.go:70] Syncing default/elasticsearch
I0726 09:15:30.437922       1 controller.go:257] Syncing endpoints default/elasticsearch
I0726 09:15:30.437926       1 controller.go:350] ignoring service elasticsearch: No ingress for service elasticsearch
I0726 09:15:30.437929       1 utils.go:70] Syncing default/gogs
I0726 09:15:30.437932       1 controller.go:257] Syncing endpoints default/gogs
I0726 09:15:30.437936       1 controller.go:350] ignoring service gogs: No ingress for service gogs
I0726 09:15:30.437939       1 utils.go:70] Syncing default/grafana
I0726 09:15:30.437942       1 controller.go:257] Syncing endpoints default/grafana
I0726 09:15:30.437946       1 controller.go:350] ignoring service grafana: No ingress for service grafana
I0726 09:15:30.437949       1 utils.go:70] Syncing default/jenkins
I0726 09:15:30.437952       1 controller.go:257] Syncing endpoints default/jenkins
I0726 09:15:30.437955       1 controller.go:350] ignoring service jenkins: No ingress for service jenkins
I0726 09:15:30.437959       1 utils.go:70] Syncing kube-system/kube-scheduler
I0726 09:15:30.437962       1 controller.go:257] Syncing endpoints kube-system/kube-scheduler
I0726 09:15:30.437965       1 utils.go:70] Syncing default/fabric8
I0726 09:15:30.437968       1 controller.go:257] Syncing endpoints default/fabric8
I0726 09:15:30.437972       1 controller.go:350] ignoring service fabric8: No ingress for service fabric8
I0726 09:15:30.437975       1 utils.go:70] Syncing default/jenkins-jnlp
I0726 09:15:30.437978       1 controller.go:257] Syncing endpoints default/jenkins-jnlp
I0726 09:15:30.437981       1 controller.go:350] ignoring service jenkins-jnlp: No ingress for service jenkins-jnlp
I0726 09:15:30.437985       1 utils.go:70] Syncing default/jenkinshift
I0726 09:15:30.437988       1 controller.go:257] Syncing endpoints default/jenkinshift
I0726 09:15:30.437993       1 controller.go:350] ignoring service jenkinshift: No ingress for service jenkinshift
I0726 09:15:30.437997       1 utils.go:70] Syncing default/kibana
I0726 09:15:30.438000       1 controller.go:257] Syncing endpoints default/kibana
I0726 09:15:30.438003       1 controller.go:350] ignoring service kibana: No ingress for service kibana
I0726 09:15:30.438007       1 utils.go:70] Syncing default/kubernetes
I0726 09:15:30.438009       1 controller.go:257] Syncing endpoints default/kubernetes
I0726 09:15:30.438013       1 controller.go:350] ignoring service kubernetes: No ingress for service kubernetes
I0726 09:15:30.438149       1 controller.go:73] Adding Ingress: gogs
I0726 09:15:30.438168       1 utils.go:70] Syncing default/gogs
I0726 09:15:30.438174       1 controller.go:316] Syncing default/gogs
I0726 09:15:30.438179       1 controller.go:331] Adding or Updating Ingress: default/gogs
I0726 09:15:30.438218       1 nginx.go:110] Updating NGINX configuration
I0726 09:15:30.438579       1 nginx.go:156] Writing NGINX conf to /etc/nginx/conf.d/default-gogs.conf

upstream default-gogs-gogs.default.beast.fabric8.io-gogs {

    server 127.0.0.1:8181;
}


server {
    listen 80;



    server_name gogs.default.beast.fabric8.io;





    location / {
        proxy_connect_timeout 60s;
        proxy_read_timeout 60s;
        client_max_body_size 2000m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://default-gogs-gogs.default.beast.fabric8.io-gogs;
    }
}
I0726 09:15:30.439027       1 nginx.go:176] NGINX configuration file had been updated
I0726 09:15:30.439044       1 nginx.go:207] executing nginx -s reload
2016/07/26 09:15:30 [notice] 21#21: signal process started
I0726 09:15:30.456051       1 controller.go:160] Adding ConfigMap: nginx-config
I0726 09:15:30.456121       1 utils.go:70] Syncing default/nginx-config
I0726 09:15:30.456129       1 controller.go:279] Syncing configmap default/nginx-config
I0726 09:15:30.456265       1 nginx.go:234] Writing NGINX conf to /etc/nginx/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

I0726 09:15:30.456419       1 nginx.go:252] The main NGINX configuration file had been updated
    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    server_names_hash_max_size 1024;
    server_names_hash_bucket_size 256;

    include /etc/nginx/conf.d/*.conf;
}
I0726 09:15:30.456468       1 utils.go:70] Syncing default/gogs
I0726 09:15:30.456489       1 controller.go:316] Syncing default/gogs
I0726 09:15:30.456494       1 controller.go:331] Adding or Updating Ingress: default/gogs
I0726 09:15:30.456505       1 nginx.go:110] Updating NGINX configuration
I0726 09:15:30.456715       1 nginx.go:156] Writing NGINX conf to /etc/nginx/conf.d/default-gogs.conf

upstream default-gogs-gogs.default.beast.fabric8.io-gogs {

    server 127.0.0.1:8181;
}


server {
    listen 80;



    server_name gogs.default.beast.fabric8.io;





    location / {
        proxy_connect_timeout 10s;
        proxy_read_timeout 10s;
        client_max_body_size 2000m;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://default-gogs-gogs.default.beast.fabric8.io-gogs;
    }
}
I0726 09:15:30.456979       1 nginx.go:176] NGINX configuration file had been updated
I0726 09:15:30.457230       1 nginx.go:207] executing nginx -s reload
2016/07/26 09:15:30 [notice] 25#25: signal process started
I0726 09:15:30.500502       1 controller.go:136] Endpoints kube-scheduler changed, syncing
I0726 09:15:30.500525       1 utils.go:70] Syncing kube-system/kube-scheduler
I0726 09:15:30.500529       1 controller.go:257] Syncing endpoints kube-system/kube-scheduler
I0726 09:15:32.522029       1 controller.go:136] Endpoints kube-controller-manager changed, syncing
I0726 09:15:32.522068       1 utils.go:70] Syncing kube-system/kube-controller-manager
I0726 09:15:32.522074       1 controller.go:257] Syncing endpoints kube-system/kube-controller-manager
I0726 09:15:32.716975       1 controller.go:136] Endpoints kube-scheduler changed, syncing
I0726 09:15:32.717006       1 utils.go:70] Syncing kube-system/kube-scheduler
I0726 09:15:32.717011       1 controller.go:257] Syncing endpoints kube-system/kube-scheduler
2016/07/26 09:15:34 [error] 26#26: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.36.5.79, server: gogs.default.beast.fabric8.io, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8181/", host: "gogs.default.beast.fabric8.io"
10.36.5.79 - - [26/Jul/2016:09:15:34 +0000] "GET / HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" "-"
2016/07/26 09:15:34 [error] 26#26: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.36.5.79, server: gogs.default.beast.fabric8.io, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8181/favicon.ico", host: "gogs.default.beast.fabric8.io", referrer: "http://gogs.default.beast.fabric8.io/"
10.36.5.79 - - [26/Jul/2016:09:15:34 +0000] "GET /favicon.ico HTTP/1.1" 502 575 "http://gogs.default.beast.fabric8.io/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36" "-"
I0726 09:15:34.618370       1 controller.go:136] Endpoints kube-controller-manager changed, syncing

failed to "StartContainer" for "nginx-ingress" with CrashLoopBackOff: "Back-off 10s restarting failed container=nginx-ingress

I am trying to run https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example, following the steps there.
On running kubectl create -f nginx-ingress-rc.yaml:

Output for kubectl logs
F1020 09:41:15.385209 1 main.go:46] Failed to create client: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory.

Any pointers for the same?

NGINX Controller crashes when the host field is empty in an Ingress rule

For an Ingress resource like this one...

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-no-host
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80

NGINX fails to reload with: nginx: [emerg] invalid number of arguments in \"server_name\" directive in /etc/nginx/conf.d/default-ingress-no-host.conf:12

ContainerCreating

I'm trying the "complete" example on my local machine which has kubernetes set up on top of docker (on OSX)

After this command:

kubectl create -f nginx-ingress-rc.yaml

I keep getting the status ContainerCreating

NAME                        DESIRED      CURRENT             AGE
coffee-rc                   2            2                   1h
nginx-ingress-rc            1            1                   1h
tea-rc                      3            3                   1h
NAME                        CLUSTER-IP   EXTERNAL-IP         PORT(S)     AGE
coffee-svc                  10.0.0.144   <none>              80/TCP      1h
kubernetes                  10.0.0.1     <none>              443/TCP     6d
tea-svc                     10.0.0.55    <none>              80/TCP      1h
NAME                        READY        STATUS              RESTARTS    AGE
coffee-rc-5wk2r             1/1          Running             0           1h
coffee-rc-akkgw             1/1          Running             0           1h
k8s-etcd-127.0.0.1          1/1          Running             0           22h
k8s-master-127.0.0.1        4/4          Running             3           22h
k8s-proxy-127.0.0.1         1/1          Running             0           22h
nginx-ingress-rc-ftf97      0/1          ContainerCreating   0           1h
tea-rc-djl5q                1/1          Running             0           1h
tea-rc-j90if                1/1          Running             0           1h
tea-rc-pxofg                1/1          Running             0           1h

How can I fix this?

proxy_pass hard-coded to HTTP

I have a setup in which I use SSL for inter-container communication. To have nginx-ingress-rc working, I need to be able to set:

proxy_pass https://...

(not http://)

Since Kubernetes does not hold that information, my suggestion is: if a container exposes port 443 (instead of 80 or whatever), we assume that SSL is spoken.

For this suggestion I have already a patch. Let me know if you are interested in having it.

Default backend in Ingress Resource

Currently NGINX Controller doesn't handle default backends:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-default-backend
spec:
  backend:
    serviceName: testsvc
    servicePort: 80

Kubernetes on private network

I installed fabric8 on top of Kubernetes 1.4.7 and I can't reach the URL for fabric8. I checked and found the error below. I have a Kubernetes cluster with 3 servers. Could you help me figure out the error, or guide me on how to increase the log level so I can get more details about it?

I'm running Kubernetes behind a proxy and the whole system is using this proxy. How can I configure the ingress to use this proxy? I'm not sure if this is the issue.

root@kuber-master:~# kubectl -n fabric8-system logs -f ingress-nginx-2015555637-0obbr
I1216 18:48:12.848439 1 nginx.go:240] Writing NGINX conf to /etc/nginx/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    server_names_hash_max_size 512;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    include /etc/nginx/conf.d/*.conf;
}
I1216 18:48:12.849306 1 nginx.go:258] The main NGINX configuration file had been updated
I1216 18:48:12.849578 1 nginx.go:213] executing nginx
F1216 18:48:12.876396 1 nginx.go:196] Failed to start nginx


root@kuber-master:~# kubectl -n fabric8-system describe pod ingress-nginx-2015555637-0obbr
Name: ingress-nginx-2015555637-0obbr
Namespace: fabric8-system
Node: 10.103.12.20/10.103.12.20
Start Time: Fri, 16 Dec 2016 13:46:44 -0500
Labels: group=io.fabric8.devops.apps
pod-template-hash=2015555637
project=ingress-nginx
provider=fabric8
version=2.2.298
Status: Running
IP: 172.16.97.5
Controllers: ReplicaSet/ingress-nginx-2015555637
Containers:
nginx-ingress:
Container ID: docker://99835d6dc4bf725edcaefd2e18adcb6adf8c61e56f038033bf8825a0c5023b23
Image: nginxdemos/nginx-ingress:0.3.1
Image ID: docker://sha256:3773d84614b893eb2577d64286bca8bd21eeaf5967d9124638b8a105909f0166
Ports: 80/TCP, 443/TCP
Args:
-v=3
-nginx-configmaps=fabric8-system/nginx-config
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Fri, 16 Dec 2016 13:52:26 -0500
Finished: Fri, 16 Dec 2016 13:52:26 -0500
Ready: False
Restart Count: 6
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-f7z19 (ro)
Environment Variables:
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-f7z19:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-f7z19
QoS Class: BestEffort
Tolerations:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message


10m 10m 1 {default-scheduler } Normal Scheduled Successfully assigned ingress-nginx-2015555637-0obbr to 10.103.12.20
10m 10m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Created Created container with docker id 47ae0861f0ff; Security:[seccomp=unconfined]
10m 10m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Started Started container with docker id 47ae0861f0ff
10m 10m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Created Created container with docker id 5db192acf5ee; Security:[seccomp=unconfined]
10m 10m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Started Started container with docker id 5db192acf5ee
10m 10m 2 {kubelet 10.103.12.20} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress" with CrashLoopBackOff: "Back-off 10s restarting failed container=nginx-ingress pod=ingress-nginx-2015555637-0obbr_fabric8-system(fd94c9d3-c3bf-11e6-b3d9-0050560116bd)"

10m 10m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Created Created container with docker id e915dcf51f5b; Security:[seccomp=unconfined]
10m 10m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Started Started container with docker id e915dcf51f5b
10m 9m 2 {kubelet 10.103.12.20} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress" with CrashLoopBackOff: "Back-off 20s restarting failed container=nginx-ingress pod=ingress-nginx-2015555637-0obbr_fabric8-system(fd94c9d3-c3bf-11e6-b3d9-0050560116bd)"

9m 9m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Started Started container with docker id 15e4f22a796c
9m 9m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Created Created container with docker id 15e4f22a796c; Security:[seccomp=unconfined]
9m 9m 3 {kubelet 10.103.12.20} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress" with CrashLoopBackOff: "Back-off 40s restarting failed container=nginx-ingress pod=ingress-nginx-2015555637-0obbr_fabric8-system(fd94c9d3-c3bf-11e6-b3d9-0050560116bd)"

8m 8m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Created Created container with docker id 8bb3380e09ab; Security:[seccomp=unconfined]
8m 8m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Started Started container with docker id 8bb3380e09ab
8m 7m 7 {kubelet 10.103.12.20} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=nginx-ingress pod=ingress-nginx-2015555637-0obbr_fabric8-system(fd94c9d3-c3bf-11e6-b3d9-0050560116bd)"

7m 7m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Created Created container with docker id c1ab9788399b; Security:[seccomp=unconfined]
7m 7m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Started Started container with docker id c1ab9788399b
7m 4m 13 {kubelet 10.103.12.20} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress pod=ingress-nginx-2015555637-0obbr_fabric8-system(fd94c9d3-c3bf-11e6-b3d9-0050560116bd)"

10m 4m 7 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Pulled Container image "nginxdemos/nginx-ingress:0.3.1" already present on machine
4m 4m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Created Created container with docker id 99835d6dc4bf; Security:[seccomp=unconfined]
4m 4m 1 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Normal Started Started container with docker id 99835d6dc4bf
10m 11s 48 {kubelet 10.103.12.20} spec.containers{nginx-ingress} Warning BackOff Back-off restarting failed docker container
4m 11s 21 {kubelet 10.103.12.20} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=nginx-ingress pod=ingress-nginx-2015555637-0obbr_fabric8-system(fd94c9d3-c3bf-11e6-b3d9-0050560116bd)"

default route is wrong

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.minikube
    - tea.minikube
    - coffee.minikube
    secretName: cafe-secret
  rules:
  - host: tea.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: tea-svc
          servicePort: 80
  - host: coffee.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: coffee-svc
          servicePort: 80
  - host: cafe.minikube
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80

in this example:

  • https://coffee.minikube/ gives coffee-rc-XYZ (correct)
  • https://cafe.minikube/coffee gives coffee-rc-XYZ (correct)
  • https://cafe.minikube/tea gives tea-rc-XYZ (correct)
  • https://wrong.minikube/ gives coffee-rc-XYZ (WRONG) it should return 404

Watch for secret changes

This is mentioned in #78 but it looks like the fix he is working on will only force a retry if the secret does not exist yet.

Would be nice if it also watched for secret changes so it triggered a reload when a certificate gets updated.

Created ingress without ADDRESS, and connection refused..

Hi,

I followed the complete-example to deploy an nginx ingress controller, but it doesn't seem to be working for me.

Following the steps, I created an Ingress. When I run kubectl get ingress or kubectl describe ingress, I can't see the expected ADDRESS.

And when I try to run curl --resolve ..., the connection is always refused.

I noticed in the logs of the nginx ingress controller pod that every time I create the ingress, it prints a start-process message:
2016/12/29 xx:xx:xx [notice] 19#19: signal process started 2016/12/29 xx:xx:xx [notice] 24#24: signal process started 2016/12/29 xx:xx:xx [notice] 29#29: signal process started

Thanks in advance

Allow ip_hash for backends

The controller should accept some annotation to indicate client affinity; if set, it should add ip_hash to the upstream block, as sketched below.
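A sketch of the upstream block the controller could then generate (upstream name and endpoints are illustrative):

upstream default-myapp-example.com-myapp {
    ip_hash;                  # pin each client IP to the same backend
    server 10.244.1.5:8080;
    server 10.244.2.7:8080;
}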

Lighter Docker image

Alpine Linux supports nginx, has a great package manager, and a full container with nginx weighs 36MB instead of the 255MB of the Ubuntu base image we are using here.
Something like this (not tested, btw):

FROM gliderlabs/alpine:3.3
# Install nginx
RUN apk add --update nginx=1.8.0-r1 && \
    rm -rf /var/cache/apk/* && \
    chown -R nginx:www-data /var/lib/nginx

# Add the files
ADD root /

# Expose the ports for nginx
EXPOSE 80 443

COPY nginx-ingress /
COPY nginx/ingress.tmpl /

RUN rm /etc/nginx/conf.d/*

CMD ["/nginx-ingress"]

New line at end of file required in TLS certificate & key

When using TLS, it is required to have a newline at the end of the certificate and key; otherwise nginx will fall into a crash loop because the certificate is invalid. I believe this is because the certificate and key are concatenated and loaded from the same file, and without the newline the certificate and key merge into one.

This should either be optional or documented behaviour to save others spending ages debugging!

Side note: thanks for the ingress controller!

Improve secret handling

If there are two secrets with the same name but from different namespaces and Ingress resources reference those secrets, one secret will override the other.

One possible solution is to add the namespace of the secret to the pem file name of the secret when the controller stores the secret on the file system.

Problem with curl in examples

Hi

I've followed the complete-example on a k8s cluster.

When I try to curl as per your example, I see the following:

$curl --resolve cafe.example.com:443:IP_OF_NODE1 https://cafe.example.com/tea --insecure
curl: (7) Failed to connect to cafe.example.com port 443: Connection refused

Any ideas why this isn't working?

I have set up all of the ReplicationControllers, Services and Ingresses

Thanks

Errors when using examples

Hello!
I'm trying to use the examples.
When I executed:
kubectl create -f nginx-ingress-rc.yaml
I saw in the Docker logs:

[root@node1 ~]# docker logs a4dd854833f7
E0429 09:33:03.138743       1 helper.go:367] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
F0429 09:33:03.158596       1 main.go:57] Failed to get kube-dns service, err: Get https://10.254.0.1:443/api/v1/namespaces/kube-system/services/kube-dns: x509: certificate signed by unknown authority

k8s version: v1.2.3.

Thanks for your help!

Customization of NGINX configuration

Currently, there is no way to customize the NGINX configuration other than changing the template file and rebuilding the image.

Add support for customization of some NGINX parameters, such as proxy_read_timeout, proxy_connect_timeout, client_max_body_size (#21) and others, via ConfigMaps; a sketch follows below.

Add the ability to redefine those parameters per Ingress Resource. Can be done via annotations --> #21
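A sketch of what such a ConfigMap could look like, assuming kebab-case keys that mirror the NGINX directives (the exact key names here are an assumption; check the ConfigMap docs):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  proxy-connect-timeout: "60s"     # maps to proxy_connect_timeout
  proxy-read-timeout: "60s"        # maps to proxy_read_timeout
  client-max-body-size: "2000m"    # maps to client_max_body_size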

Is it possible to route on the basis of port?

I have two services (S1, S2). When a request comes on port 2775 or 8443 I have to forward the request to S1 and when request comes on port 443 I have to forward it to S2.

I was trying to do this using the example given, but it seems that Ingress doesn't allow routing on the basis of port, only on the basis of path.
https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/complete-example/cafe-ingress.yaml

How do I achieve the above-mentioned goal? It is possible that there is a better way of doing this; please suggest one if it exists.

nginx should auto-tune server_names_hash_bucket_size directive

When the nginx ingress controller tries to expose services which result in "long" hostnames that exceed default server_names_hash_bucket_size, nginx server dies with:

Error when adding or updating ingress "kube-system-default-ingress": "Invalid nginx configuration detected, not reloading: Command nginx -t stdout: \"\"\nstderr: \"nginx: [emerg] could not build server_names_hash, you should increase server_names_hash_bucket_size: 64\\nnginx: configuration file /etc/nginx/nginx.conf test failed\\n\"\nfinished with error: exit status 1"

The ingress controller should probably tune this nginx configuration knob automatically, as it's aware of the length of the services' virtual host names and the required size of server_names_hash_bucket_size. This would avoid manual directives being required in nginx's ConfigMap.

Issue with Kubernetes on bare metal

Hi,

I have a Kubernetes cluster running on bare metal. I wonder if this supports bare-metal deployment. In my case, the node running the ingress controller doesn't have any externalIP. I tried accessing tea/coffee via the node's physical network IP, which is routable outside the K8S cluster, as instructed on the example page, but I can't get through. What should I do to make this work in this case?

Thanks
Ben

could not build server_names_hash, you should increase server_names_hash_bucket_size: 64

I'm deploying nginxdemos/nginx-ingress:0.3 as an RC but the pod goes into ERROR status.

From a bit of googling, a suggested workaround is to edit the http{} nginx configuration:

http {
    server_names_hash_bucket_size 64;
    ...
}

but I can't figure out if I can do that. Even if the config can't be customised yet, I still don't know how I can build my own image adding in the http config. Any pointers?

Pod logs...

kubectl logs ingress-nginx-bq7ym
2016/06/20 11:00:31 [notice] 18#18: signal process started
2016/06/20 11:00:31 [notice] 23#23: signal process started
2016/06/20 11:00:31 [notice] 25#25: signal process started
2016/06/20 11:00:31 [notice] 27#27: signal process started
2016/06/20 11:00:31 [notice] 29#29: signal process started
2016/06/20 11:00:31 [notice] 31#31: signal process started
2016/06/20 11:00:31 [notice] 33#33: signal process started
2016/06/20 11:00:31 [emerg] 35#35: could not build server_names_hash, you should increase server_names_hash_bucket_size: 64
E0620 11:00:31.397974       1 nginx.go:207] Command nginx -s reload stdout: ""
E0620 11:00:31.398051       1 nginx.go:208] Command nginx -s reload stderr: "nginx: [emerg] could not build server_names_hash, you should increase server_names_hash_bucket_size: 64\n"
F0620 11:00:31.398064       1 nginx.go:209] Command nginx -s reload finished with error: exit status 1
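For anyone hitting this with a current release: ConfigMap-based customization has since been added (see the README above), so an entry along these lines should avoid rebuilding the image (key name assumed to mirror the directive; check the ConfigMap docs):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  server-names-hash-bucket-size: "128"   # sets server_names_hash_bucket_size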

Filter "spam" updates to endpoints from HA kube-controller-manager and kube-scheduler

The controller syncs the kube-scheduler and kube-controller-manager endpoints every 2 seconds when they are started with the leader-elect=true option.

I0120 02:06:19.739768       1 controller.go:190] Endpoints kube-controller-manager changed, syncing
I0120 02:06:19.739800       1 utils.go:78] Syncing kube-system/kube-controller-manager
I0120 02:06:19.739807       1 controller.go:323] Syncing endpoints kube-system/kube-controller-manager
I0120 02:06:20.933421       1 controller.go:190] Endpoints kube-scheduler changed, syncing
I0120 02:06:20.933454       1 utils.go:78] Syncing kube-system/kube-scheduler
I0120 02:06:20.933462       1 controller.go:323] Syncing endpoints kube-system/kube-scheduler
I0120 02:06:21.761402       1 controller.go:190] Endpoints kube-controller-manager changed, syncing

There should be a filter to exclude those pods, as they don't have a "real" endpoint; they only use it to determine which instance is the master.

See some related Kube issues:
kubernetes/kubernetes#23812
kubernetes/kubernetes#26637

Watch for secret after ingress creation

The nginx ingress controller will ignore the TLS configuration of ingress objects if the specified secret does not yet exist. If the secret is created afterwards, the nginx ingress controller does not update the rendered ingress configuration, and the ingress will still be served without TLS.

I would expect the nginx ingress controller to wait for the secret to be created, or at least to update the generated configuration.
This feature, aside from #76, would be needed to support https://github.com/jetstack/kube-lego

License question

The nginx plus controller builds a docker image using one license. Is that OK?
I mean, I assumed that one license is for one running instance of nginx, but using a replication controller with 2 replicas will use the same license.

can't create nginx-ingress-rc

docker logs for the container:

E0524 08:48:01.817603       1 nginx.go:200] Command nginx stdout: ""
E0524 08:48:01.817785       1 nginx.go:201] Command nginx stderr: "nginx: [alert] could not open error log file: open() \"/var/log/nginx/error.log\" failed (13: Permission denied)\n2016/05/24 08:48:01 [emerg] 13#13: open() \"/var/log/nginx/error.log\" failed (13: Permission denied)\n"
F0524 08:48:01.817898       1 nginx.go:202] Command nginx finished with error: exit status 1

more customization nginx config

It looks like I have to manually edit ingress.tmpl and build the docker image to get the following options into the nginx configuration. Would it be good to support more customization of nginx through the Ingress object?
ssl_client_certificate /etc/nginx/certs/ca.crt;
ssl_verify_client on;
ssl_crl /etc/nginx/clientcertauth/my_crl.pem;
ssl_password_file /etc/nginx/clientcertauth/password;

how does one do SSL passthrough with source ip preservation

hi guys,
this has been a big question on the k8s slack group, but nobody is a big enough expert in nginx to figure it out.

The requirement is very simple: I don't want to terminate my SSL at the ingress controller; I want to terminate it on the nginx pods that I have running inside. How do I do this?

Also, the nginx ingress controller does not preserve the source IP. We need the original client source IP for audit purposes.

Now, I figured out that one must use "stream" and "proxy-protocol" to somehow configure this, but we are just not able to figure it out on the k8s slack. An example would be truly awesome!
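Not an authoritative answer, but a hand-written nginx sketch of the stream/proxy-protocol approach hinted at above: ssl_preread routes on the SNI hostname without terminating TLS, and the PROXY protocol carries the original client IP to the pods (names and addresses are made up, and the pods must be configured to accept the PROXY protocol):

stream {
    # route by SNI without decrypting the TLS session
    map $ssl_preread_server_name $backend {
        app.example.com  app_pods;
    }

    upstream app_pods {
        server 10.244.1.5:443;
    }

    server {
        listen 443;
        ssl_preread on;        # read the SNI from the ClientHello
        proxy_protocol on;     # pass the client IP to the upstream
        proxy_pass $backend;
    }
}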

Support http to https redirection based on $http_x_forwarded_proto

I am using this ingress controller behind an ELB on AWS and have configured the ELB to do all of the SSL termination. However, since I also want to accept standard HTTP requests, I've opened port 80 on the ELB as well.

This works great except for one detail: I'd like all connections to be over HTTPS and thus need to be able to define (for each ingress resource) a 301 redirect to HTTPS if the request is sent over HTTP.

Looking through the code, I can see that this is the default behavior if SSL termination is configured on the ingress controller itself, but given that my ELB is doing the SSL termination, the connection from the ELB to the ingress controller is always HTTP-only.

Would it be an easy addition to add an annotation/ConfigMap key that would allow this behavior? Something like: nginx.org/redirect-to-https that would enable a rule in the conf file such as:

if ($http_x_forwarded_proto = 'http') {
    return 301 https://$host$request_uri;
}

...or would this conflict with other configuration options? If this type of behavior is already possible using another combination of annotations or ConfigMap settings, please let me know.

413 Request Entity Too Large

We've deployed a docker registry and created an ingress rule to its Kubernetes service whilst using the nginx ingress controller. When pushing larger images we quickly hit the nginx limits, giving us the error below.

Error parsing HTTP response: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.9.14</center>\r\n</body>\r\n</html>\r\n"

I've forked the repo and hacked the nginx config, adding a client_max_body_size attribute so we can push larger images. For a proper solution though, it might be nice to set a value in the Kubernetes ingress rule and have that used when the nginx controller is updated.
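For what it's worth, per-Ingress annotations now cover this; something along these lines should raise the limit (annotation name per the Annotations docs referenced in the README above; double-check there):

metadata:
  annotations:
    nginx.org/client-max-body-size: "2000m"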

Same host rule in multiple ingress objects

In Kubernetes it is possible to create multiple ingress objects with rules referencing the same host. How the nginx ingress controller should handle this case is not written down in the Kubernetes documentation (at least I can't find it).

The official Kubernetes nginx ingress controller (old repo or new repo) and the gce ingress controller merge multiple ingress configurations into one nginx server object. So specifying one of the examples across multiple ingress objects is valid and leads to the expected behavior.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress1
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress2
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80

The nginxinc ingress controller does not merge these rules. If there is more than one ingress object defined, it will write multiple separate server objects into the nginx config folder.

Nginx is not happy about this :P
conflicting server name "example.com" on 0.0.0.0:80, ignored

Some tools like kube-lego rely on the merging behavior to work, so simply ignoring additional ingress objects is not the best solution.

Before implementing this merge feature, it is necessary to formulate exactly how the nginx ingress controller should behave and how the order of merging is determined.

Collisions or conflicts should be dealt with gracefully, so the ingress controller is not "blocked" while an issue exists, and detailed log entries should be generated so users are able to monitor the ingress controller.

Missing ingress.class annotation?

Hi,

I've tried running the examples in a Kubernetes cluster on GKE, and it looks like the nginx controller isn't claiming the ingress object; instead, the built-in GCE Ingress controller is.

The problem might be that there's no "kubernetes.io/ingress.class" annotation provided in the examples? What's the correct value to use for this controller?
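If it helps: the commonly documented value for this controller is "nginx", which also stops the GCE controller from claiming the resource:

metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"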

Nginx controller example

The nginx controller example doesn't seem to work. The actual config file isn't writing out anything to listen on.

kubectl get ing
NAME           RULE               BACKEND   ADDRESS   AGE
cafe-ingress   -                                      4m
               cafe.example.com   
               /tea               tea-svc:80
               /coffee            coffee-svc:80
:/etc/nginx# cat nginx.conf 

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

support HSTS header

To enable HSTS for a resource, we have to send an extra header entry.
In most cases the responsibility for sending extra headers lies with the backend service, but because the ingress controller is already responsible for SSL termination, it would be nice to configure HSTS in the ingress object itself.

# ingress annotations
annotations:
  nginx.org/hsts: 'True' # default 'False'
  nginx.org/hsts-max-age: '31536000'
  nginx.org/hsts-include-subdomains: 'True' # default 'False'
# configmap
data:
  hsts: 'True' # default 'False'
  hsts-max-age: '31536000'
  hsts-include-subdomains: 'True' # default 'False'

Will result in the following config entry for servers with ssl enabled:

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

As far as I was able to see there is no reason to exclude the 'preload' directive, because it is ignored if the hostname is not in the HSTS preload list:
https://hstspreload.appspot.com/

Open for discussion is the default 'max-age' in the header entry.

HSTS header is sent multiple times

The current implementation of HSTS may send the header multiple times if the backend application is already adding it to the HTTP response.
This should not break clients, as stated in #67, which is true for browsers.
But if the site is tested using https://www.ssllabs.com, for example, an error message is shown:
"Server sent invalid HSTS policy. See below for further information."
"Strict Transport Security (HSTS) Invalid Server provided more than one HSTS header "

Maybe we can find a way to add the header only if it does not exist already?
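One possible nginx-level approach (a sketch, not the controller's actual behavior): drop any HSTS header coming from the backend and let the controller add its own, so exactly one copy is ever sent:

location / {
    # discard the backend's copy of the header, if any
    proxy_hide_header Strict-Transport-Security;
    # always send exactly one controller-managed copy
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    proxy_pass http://backend;
}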

How would I implement custom DH parameters for DHE ciphers

I'm asking upfront before creating a pull request. We need to pass in custom DH parameters by supplying ssl_dhparam <file> to the nginx config. I could create a patch that simply supports setting ssl_dhparam; it would then require the user to make sure <file> is mounted into the container at the given location.

On the other hand, you might prefer that the value of ssl-dhparam/nginx.org/ssl-dhparam is not a filename, but rather the name of a namespace/secret, and we look for the key dhparam.pem, very much like certificates are handled right now. So let me explain by example.

Variant 1: filename

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress
  namespace: kube-system
data:
  ssl-dhparam: ciphers/dh4096.pem

And then mount that file to /etc/nginx/ciphers/dh4096.pem in the DaemonSet or Deployment

Variant 2: reference to a secret

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress
  namespace: kube-system
data:
  ssl-dhparam: kube-system/dh-params
---
apiVersion: v1
kind: Secret
metadata:
  name: dh-params
  namespace: kube-system
type: Opaque
data:
  dhparam.pem: ABC==

The second variant would require the controller to automagically create the file from the reference, but it would also allow for a seamless update of these parameters - for whatever reason.

Of course variant 1 is easier to implement, so I would like to know which variant you prefer.

location URI is passed to proxy_pass

So I have an ingress path that looks like this:

      paths:
      - path: /blog/
        backend:
          serviceName: wordpress
          servicePort: 80

The wordpress service connects to a pod running the Docker Hub wordpress image. It serves content out of the root URI (/). However, the config sends requests for /blog/ to the backend pod.

Adding a forward slash to the end of the proxy_pass line in the tmpl file and rebuilding the container works for me, but I'd like to either 1) change that upstream in this repo or 2) find an alternate way to do it with the current image.
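For context, the trailing-slash fix works because of standard nginx semantics: when proxy_pass has a URI part, the matched location prefix is replaced by it. A sketch:

location /blog/ {
    # /blog/post/1 is forwarded upstream as /post/1,
    # because proxy_pass ends with a URI part ("/")
    proxy_pass http://wordpress-upstream/;
}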

rules for generation of cert/key for controller

Is there a specific format that the controller needs in order to be able to parse it? The example key seems to work fine, but when I generate a pair using
openssl req -x509 -nodes -days 365 -sha256 -newkey rsa:4096 -keyout mycert.pem -out mycert.pem
and then feed the key and cert (with tags) through a base64 encoder, the controller can't seem to decode them.

What was the exact process that generated the cafe-secret in the example?
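Not the documented generation steps, but one common way to produce a working pair and Secret without hand-encoding base64 is to let kubectl do the wrapping (file names, secret name and CN are placeholders):

# self-signed cert and key
openssl req -x509 -nodes -days 365 -sha256 -newkey rsa:2048 \
    -keyout cafe.key -out cafe.crt -subj "/CN=cafe.example.com"
# kubectl base64-encodes the files into a TLS secret
kubectl create secret tls cafe-secret --cert=cafe.crt --key=cafe.key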

Support proxy protocol

If the nginx ingress controller is scheduled using mechanisms like nodePort, the client ip address is not preserved, because the kube-proxy is setting up NAT to connect the backend pods to the host network.

The client ip address is needed in several use cases:

  • Enforcing rate limits
  • Detecting the country of the user
  • ...

Until kubernetes/enhancements#27 is completed, there are only a few workarounds to preserve the client ip address.

  1. Directly binding to the host network
    Using hostNetwork: true in combination with a DaemonSet, nginx can bind directly to the host network, so there is no NAT.
    In this deployment scenario it is necessary to deploy an external TCP load balancer with a health-checking mechanism to provide high availability if a host goes down unexpectedly.

  2. Enable the PROXY protocol so an external TCP LB can forward requests without losing the client IP.
    A major advantage of this approach is that the nginx ingress deployment can be scaled independently of the host systems.

To enable the proxy protocol feature, I would suggest adding a new entry to the nginx configmap:

data:
  proxy-protocol: 'True'

This entry will reconfigure the generated nginx configs like described in the official blog post:
https://www.nginx.com/resources/admin-guide/proxy-protocol/

I am not sure nginx is able to mix server entries with and without proxy_protocol in the listen directive. But because this is more of a global deployment choice, I do not see the need to support annotations on the Ingress object.
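Roughly, the generated servers would change along these lines (a sketch based on the linked admin guide, not the controller's actual template):

server {
    listen 80 proxy_protocol;          # accept PROXY protocol from the external TCP LB

    # trust the LB and recover the real client address it supplies
    set_real_ip_from 192.168.0.0/16;   # LB address range (illustrative)
    real_ip_header proxy_protocol;
}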
