azure / application-gateway-kubernetes-ingress

This is an ingress controller that can be run on Azure Kubernetes Service (AKS) to allow an Azure Application Gateway to act as the ingress for an AKS cluster.

Home Page: https://azure.github.io/application-gateway-kubernetes-ingress

License: MIT License

CMake 0.39% Go 97.62% Dockerfile 0.08% Shell 1.24% Mustache 0.35% Makefile 0.32%
application-gateway ingress-controller aks kubernetes azure ingress go agic

application-gateway-kubernetes-ingress's Introduction

Application Gateway Ingress Controller


Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for Azure Kubernetes Service (AKS) customers to leverage Azure's native Application Gateway L7 load-balancer to expose cloud software to the Internet. AGIC monitors the Kubernetes cluster it is hosted on and continuously updates an App Gateway, so that selected services are exposed to the Internet.

The Ingress Controller runs in its own pod on the customer’s AKS. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated to App Gateway specific configuration and applied to the Azure Resource Manager (ARM).

[Diagram: Azure Application Gateway + AKS]

AGIC is configured via the Kubernetes Ingress resource, along with Services and Deployments/Pods. It provides a number of features, leveraging Azure's native App Gateway L7 load balancer. To name a few (a minimal example follows the list):

  • URL routing
  • Cookie-based affinity
  • SSL termination
  • End-to-end SSL
  • Support for public, private, and hybrid web sites
  • Integrated web application firewall
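
As a minimal sketch of how these features are driven from Kubernetes (the resource names and hostname below are placeholder assumptions, not from this README), an Ingress that terminates SSL with a Kubernetes TLS secret could look like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aspnetapp                     # hypothetical name
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
  - hosts:
    - example.contoso.com             # placeholder hostname
    secretName: contoso-tls           # TLS secret used for SSL termination
  rules:
  - host: example.contoso.com
    http:
      paths:
      - path: /
        backend:
          serviceName: aspnetapp-svc  # hypothetical service
          servicePort: 80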

Changelog

Blogs and talks

Setup

Usage

Tutorials: Refer to these to understand how to expose an AKS service to the Internet, over HTTP or HTTPS, using an Azure Application Gateway.

Features: List of all available AGIC features.

Annotations: The Kubernetes Ingress specification does not expose every Application Gateway feature, so we have introduced Application Gateway Ingress Controller specific annotations that turn those features on from an Ingress resource. Please refer to these to understand the annotations supported by the ingress controller and the Application Gateway feature each one enables; a brief sketch follows.
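
As a hedged sketch (annotation names should be verified against the annotations document above; the resource names are placeholders), an Ingress combining a few AGIC-specific annotations might look like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-app                      # hypothetical name
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
    appgw.ingress.kubernetes.io/request-timeout: "30"
    appgw.ingress.kubernetes.io/backend-path-prefix: "/api/"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: demo-svc       # hypothetical service
          servicePort: 80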

Helm Values Configuration Options: This document lists the various configuration options available through helm.

Upgrade/Rollback AGIC using helm: This document explains how to upgrade or roll back an AGIC Helm installation; a command sketch follows.
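
As a hedged sketch of that flow (the release name ingress-azure and the chart reference are assumptions; the exact commands are in the linked document):

# Refresh the chart repository, then upgrade the existing release.
helm repo update
helm upgrade ingress-azure application-gateway-kubernetes-ingress/ingress-azure

# Inspect the release history and roll back to a previous revision if needed.
helm history ingress-azure
helm rollback ingress-azure 1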

How-tos

Troubleshooting

For troubleshooting, please refer to this guide.

Frequently asked questions

For FAQ, please refer to this guide.

Reporting Issues

The best way to report an issue is to create a GitHub issue for the project. Please include the following information when creating the issue:

  • Subscription ID for AKS cluster.
  • Subscription ID for Application Gateway.
  • AKS cluster name/ARM Resource ID.
  • Application Gateway name/ARM Resource ID.
  • Ingress resource definition that might be causing the problem.
  • The Helm configuration used to install the ingress controller.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.


application-gateway-kubernetes-ingress's Issues

Application Gateway with AKS fails with multiple backend pods for an app with Azure AAD authentication

I am using Application Gateway as ingress controller on my AKS cluster https://github.com/Azure/application-gateway-kubernetes-ingress.

  1. Developed an ASP.NET Core application with Azure AAD authentication.
  2. Deployed it to the AKS cluster.
  3. Created an Application Gateway ingress controller to access this application via the App Gateway URL.
  4. All works fine when there is only one pod of the application running in AKS.
  5. Things break when the AKS application is scaled to 2 or more pods, which is a requirement in production (for high availability).
  6. The same app works when accessed from the public IP exposed by AKS (LoadBalancer-type service):
    An unhandled exception occurred while processing the request.
    Exception: Correlation failed.
    Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler+d__12.MoveNext()
    Stack Query Cookies Headers
    Exception: Correlation failed.
    Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler+d__12.MoveNext()
    System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
    System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
    System.Runtime.CompilerServices.TaskAwaiter.GetResult()
    Microsoft.AspNetCore.Authentication.AuthenticationMiddleware+d__6.MoveNext()
    System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
    System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
    Microsoft.AspNetCore.Session.SessionMiddleware+d__9.MoveNext()
    System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
    Microsoft.AspNetCore.Session.SessionMiddleware+d__9.MoveNext()
    System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
    System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
    Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware+d__7.MoveNext()
    Show raw exception details
    System.Exception: Correlation failed.
    at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler1.d__12.MoveNext() --- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task) at System.Runtime.CompilerServices.TaskAwaiter1.GetResult()
    at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.d__6.MoveNext()
    --- End of stack trace from previous location where exception was thrown ---
    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
    at Microsoft.AspNetCore.Session.SessionMiddleware.d__9.MoveNext()
    --- End of stack trace from previous location where exception was thrown ---
    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
    at Microsoft.AspNetCore.Session.SessionMiddleware.d__9.MoveNext()
    --- End of stack trace from previous location where exception was thrown ---
    at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
    at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
    at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.d__7.MoveNext()
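
The OIDC correlation cookie is issued by one pod while the sign-in callback may land on another, which is consistent with the failure above. One possible mitigation (an assumption, not a confirmed fix from this issue) is AGIC's cookie-based affinity, which pins a client to a single backend pod:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aad-webapp                    # hypothetical name
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/cookie-based-affinity: "true"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: aad-webapp-svc # hypothetical service
          servicePort: 80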

Travis-CI failing for PRs from forks of this repository.

Describe the bug
Travis-CI doesn't allow encrypted environment variables to be set for PRs from forks of this repository. We need to refactor the CI pipeline so that encrypted environment variables are used only for PRs originating from this repository, not for PRs from forks.

Removing service from Kubernetes doesn't remove HTTP settings from Application Gateway

Describe the bug
Removing (in this case) a service from a Kubernetes cluster that has an Application Gateway ingress controller doesn't remove the rule on the Gateway. Instead, the ingress controller logs errors:

I1022 10:44:19.643131 1 controller.go:45] controller.processEvent called with type k8scontext.Event
I1022 10:44:19.751977 1 backendhttpsettings.go:79] resolving port name http
I1022 10:44:19.752026 1 backendhttpsettings.go:79] resolving port name http
I1022 10:44:19.752036 1 backendhttpsettings.go:79] resolving port name http
I1022 10:44:19.752044 1 context.go:336] unable to get service from store, no such service default/some-service
I1022 10:44:19.752049 1 backendhttpsettings.go:44] unable to get the service [default/some-service]
I1022 10:44:19.752055 1 backendhttpsettings.go:79] resolving port name http
I1022 10:44:19.752064 1 backendhttpsettings.go:79] resolving port name http
I1022 10:44:19.752087 1 backendhttpsettings.go:79] resolving port name http
I1022 10:44:19.752095 1 backendhttpsettings.go:79] resolving port name http
E1022 10:44:19.752104 1 controller.go:65] unable to generate backend http settings, error [unable to resolve backend port for some services]
I1022 10:44:19.752110 1 eventqueue.go:120] Processing event failed

To Reproduce
Steps to reproduce the behavior:

  • Create a service in the Kubernetes Dashboard or via kubectl.
  • An HTTP setting is created on the Azure Application Gateway.
  • Remove the service from Kubernetes.
  • The HTTP setting isn't removed, and the ingress controller fails to process the event; it also blocks successive events from being processed.

Ingress Controller details

Output of kubectl describe pod for the ingress controller:

Name:               application-gateway-kubernetes-ingress-ingress-azure-5fd4f9zvnd
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-agentpool-39900342-1/172.16.32.4
Start Time:         Mon, 22 Oct 2018 12:53:18 +0200
Labels:             aadpodidbinding=application-gateway-kubernetes-ingress-ingress-azure
                    app=ingress-azure
                    pod-template-hash=1980931949
                    release=application-gateway-kubernetes-ingress
Annotations:        <none>
Status:             Running
IP:                 172.16.32.16
Controlled By:      ReplicaSet/application-gateway-kubernetes-ingress-ingress-azure-5fd4f75f8f
Containers:
  ingress-azure:
    Container ID:   docker://a59d15250a9be101a779d08c7fdb32669f82a127173a2532487946e7f0b2511c
    Image:          mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.2
    Image ID:       docker-pullable://mcr.microsoft.com/azure-application-gateway/kubernetes-ingress@sha256:6409ce64cec973af72349d4fe9684c9ce860a0bdd23374c5dec85367af0d7b76
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 22 Oct 2018 12:53:22 +0200
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      application-gateway-kubernetes-ingress-cm-ingress-azure  ConfigMap  Optional: false
    Environment:                                               <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from application-gateway-kubernetes-ingress-sa-ingress-azure-tob2dsw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  application-gateway-kubernetes-ingress-sa-ingress-azure-tob2dsw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  application-gateway-kubernetes-ingress-sa-ingress-azure-tob2dsw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Add unit-tests for `unorderedsets` package.

Is your feature request related to a problem? Please describe.
Need to add unit-tests to the unorderedsets package.

Describe the solution you'd like
Need to introduce the ginkgo test framework to add unit-tests for unorderedsets.

Use sync.Map instead of `ThreadsafeMultiMap`

Is your feature request related to a problem? Please describe.
Since we are using Go 1.10.3, we should use the built-in sync.Map instead of the ThreadsafeMultiMap package that we introduced. This would help simplify our code.
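
A minimal sketch of the idea (assuming ThreadsafeMultiMap maps a key to a set of values; this is illustrative, not the repository's actual code):

package main

import (
	"fmt"
	"sync"
)

func main() {
	// sync.Map from key to a set of values (map[string]struct{}).
	// Note: the inner set itself still needs synchronization if
	// multiple goroutines mutate the same key concurrently.
	var m sync.Map

	insert := func(key, value string) {
		set, _ := m.LoadOrStore(key, map[string]struct{}{})
		set.(map[string]struct{})[value] = struct{}{}
	}

	insert("default/webapp-svc", "10.0.0.4")
	insert("default/webapp-svc", "10.0.0.5")

	if set, ok := m.Load("default/webapp-svc"); ok {
		for v := range set.(map[string]struct{}) {
			fmt.Println(v)
		}
	}
}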

The link for Installation is broken - showing a 404 page.

When I try to go to this link to install: https://github.com/Azure/application-gateway-kubernetes-ingress/blob/master/docs/tutorial.md

Prerequisites
Installed ingress-azure helm chart. Please refer to installation instructions to install the Azure Application Gateway Ingress Controller on your AKS cluster. If you want to use HTTPS on this application, you will need an x509 certificate and its private key.

The broken link is the "installation instructions" link in the Prerequisites section.
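
For the HTTPS prerequisite quoted above, a hedged sketch of packaging an x509 certificate and key as the Kubernetes TLS secret an Ingress would reference (file and secret names are placeholders):

kubectl create secret tls my-tls-cert --cert=cert.pem --key=key.pem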

Need to support rolling update in the ingress controller.

Describe the bug
When performing a rolling update of any kind of service, you want the site or service to stay online. But during an update, a 502 Bad Gateway error is returned. The problem occurs because Application Gateway uses the internal IP addresses of the nodes in the backend pool instead of the Cluster IP of the specified service.

So what happens is that Kubernetes spins up different nodes with new IP addresses depending on the replica count, and the original IP addresses used by the Application Gateway are removed. A couple of minutes later the backend pool is updated with the new IP addresses of the nodes. But we want the ClusterIP address to be used in the backend pool so Kubernetes can perform correct load balancing.

To Reproduce
Redeploy a service and check if it is online (see the sketch below).
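
A hedged repro sketch (the deployment name and gateway address are placeholders): watch the HTTP status codes returned through the App Gateway while the rollout is in progress, and look for 502s.

kubectl rollout restart deployment/webapp-deployment
while true; do
  code=$(curl -s -o /dev/null -w '%{http_code}' http://<app-gateway-ip>/)
  echo "$(date +%T) HTTP $code"
  sleep 1
done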

Named backend ports not supported

Describe the bug
The ingress controller fails to find the backend port settings when the service references a container port by name.

To Reproduce
Deploy this YAML file

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: webapp-svc
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-svc
spec:
  ports:
  - name: endpoint
    port: 80
    targetPort: web
    protocol: TCP
  selector:
    app: aksdemoweb
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: aksdemoweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aksdemoweb
  template:
    metadata:
      labels:
        app: aksdemoweb
    spec:
      containers:
      - image: nginx
        name: simpleweb
        ports:
        - containerPort: 80
          name: web

You can verify that the service is working via the proxy:

kubectl proxy

Then view

http://localhost:8001/api/v1/namespaces/default/services/http:webapp-svc:endpoint/proxy/
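
A possible workaround while named backend ports are unsupported (an assumption based on the failure mode above; the container listens on port 80 anyway) is to reference the numeric port in the Service:

apiVersion: v1
kind: Service
metadata:
  name: webapp-svc
spec:
  ports:
  - name: endpoint
    port: 80
    targetPort: 80    # numeric port instead of the named port "web"
    protocol: TCP
  selector:
    app: aksdemoweb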

Ingress Controller details

  • Output of kubectl describe pod <ingress controller>. The pod name can be obtained by running helm list.
Name:               jazzy-meerkat-ingress-azure-54cf54f6f7-hcskb
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-frontendpool-75135322-2/10.11.15.35
Start Time:         Mon, 25 Feb 2019 14:30:23 +0000
Labels:             aadpodidbinding=jazzy-meerkat-ingress-azure
                    app=ingress-azure
                    pod-template-hash=54cf54f6f7
                    release=jazzy-meerkat
Annotations:        <none>
Status:             Running
IP:                 10.11.15.42
Controlled By:      ReplicaSet/jazzy-meerkat-ingress-azure-54cf54f6f7
Containers:
  ingress-azure:
    Container ID:   docker://2b59f235623f7ed148371aa067297acf08759ac7e436387d2d70c84127e8b97a
    Image:          mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4
    Image ID:       docker-pullable://mcr.microsoft.com/azure-application-gateway/kubernetes-ingress@sha256:b996e8d9812d4d92cf55d2b02fe8f404352b08e137757c4b8d8dabd11fb1901b
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 25 Feb 2019 14:30:25 +0000
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      jazzy-meerkat-cm-ingress-azure  ConfigMap  Optional: false
    Environment:
      KUBERNETES_PORT_443_TCP_ADDR:  aksdemo-edc85999.hcp.northeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://aksdemo-edc85999.hcp.northeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://aksdemo-edc85999.hcp.northeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       aksdemo-edc85999.hcp.northeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from jazzy-meerkat-sa-ingress-azure-token-thg46 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  jazzy-meerkat-sa-ingress-azure-token-thg46:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  jazzy-meerkat-sa-ingress-azure-token-thg46
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                  Message
  ----    ------     ----  ----                                  -------
  Normal  Scheduled  23m   default-scheduler                     Successfully assigned default/jazzy-meerkat-ingress-azure-54cf54f6f7-hcskb to aks-frontendpool-75135322-2
  Normal  Pulling    23m   kubelet, aks-frontendpool-75135322-2  pulling image "mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4"
  Normal  Pulled     23m   kubelet, aks-frontendpool-75135322-2  Successfully pulled image "mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4"
  Normal  Created    23m   kubelet, aks-frontendpool-75135322-2  Created container
  Normal  Started    23m   kubelet, aks-frontendpool-75135322-2  Started container


  • Output of kubectl logs <ingress controller>:
I0225 14:40:36.049038       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0225 14:40:36.095654       1 backendhttpsettings.go:85] resolving port name web
I0225 14:40:36.095676       1 backendhttpsettings.go:102] unable to resolve any backend port for service [default/webapp-svc]
E0225 14:40:36.095684       1 controller.go:70] unable to generate backend http settings, error [unable to resolve backend port for some services]
I0225 14:40:36.095690       1 eventqueue.go:126] Processing event failed
  • Any Azure support tickets associated with this issue.

Resources (backend pools/http listeners/rules) not created by ingress controller are removed when ingress controller updates the app gateway

Describe the bug
Resources (backend pools/http listeners/rules) not created by ingress controller are removed when ingress controller updates the app gateway.

To Reproduce
Steps to reproduce the behavior:

  • Set up an Azure web site with the default hello world deployment
  • Set up a cluster with App Gateway as the ingress
  • Deploy a service and expose it on the ingress (nginx-helloworld, for example)
  • Set up a backend pool, listener, rule, etc. on the App Gateway to route traffic to the Azure web site

This should all work, both the Azure website and the k8s service should be accessible through the app gateway.

Now scale the hello world service up or down, or deploy a new service.

Now, the setup to route traffic to the web site is gone, but the k8s service still works.

Ingress Controller details

  • Output of kubectl describe pod <ingress controller>. The pod name can be obtained by running helm list.
Name:               ingress-azure-d994f88c5-kdczl
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-agentpool-25149805-1/10.0.1.4
Start Time:         Wed, 30 Jan 2019 16:46:57 -0800
Labels:             aadpodidbinding=ingress-azure
                    app=ingress-azure
                    pod-template-hash=855094471
                    release=ingress-azure
Annotations:        <none>
Status:             Running
IP:                 10.0.1.28
Controlled By:      ReplicaSet/ingress-azure-d994f88c5
Containers:
  ingress-azure:
    Container ID:   docker://3416b31de53f893dd0a3f1c204d5e22cdab76693bef4adf60a3a8cd74a6ead7e
    Image:          mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4
    Image ID:       docker-pullable://mcr.microsoft.com/azure-application-gateway/kubernetes-ingress@sha256:b996e8d9812d4d92cf55d2b02fe8f404352b08e137757c4b8d8dabd11fb1901b
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 30 Jan 2019 16:50:03 -0800
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      ingress-azure  ConfigMap  Optional: false
    Environment:
      KUBERNETES_PORT_443_TCP_ADDR:  dev-deploy-sr-5690c452.hcp.centralus.azmk8s.io
      KUBERNETES_PORT:               tcp://dev-deploy-sr-5690c452.hcp.centralus.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://dev-deploy-sr-5690c452.hcp.centralus.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       dev-deploy-sr-5690c452.hcp.centralus.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-azure-token-8ssxc (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  ingress-azure-token-8ssxc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-azure-token-8ssxc
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

WAF_v2 and Standard_v2 changes not being applied

Describe the bug
I've deployed the ingress controller to a cluster with an Application Gateway using the WAF_v2 tier, and no changes at all get applied to it (the default rule, listener, etc. are all still there). The deployment logs appear, but nothing changes on the App Gateway.

This worked fine for the GA Application Gateway of the WAF tier.

To Reproduce
Steps to reproduce the behavior:
Create a vnet and subnet
Create the AG using the ARM template at the bottom
Create the cluster (it's RBAC-enabled, but that shouldn't matter)
Create an identity and give it permissions on the AG
Install the ingress controller
Assign the identity to the ingress controller

Install the guestbook all-in-one application
Add an ingress for it

Ingress Controller details

  • Output of kubectl describe pod <ingress controller>. The pod name can be obtained by running helm list.
$ kubectl describe -n ag-poc pod/app-gw-ingress-ingress-azure-5d8c7857f5-2pwlv
Name:               app-gw-ingress-ingress-azure-5d8c7857f5-2pwlv
Namespace:          ag-poc
Priority:           0
PriorityClassName:  <none>
Node:               aks-timjeurope-23425106-0/10.160.176.4
Start Time:         Tue, 19 Feb 2019 15:39:18 +0000
Labels:             aadpodidbinding=app-gw-ingress-ingress-azure
                    app=ingress-azure
                    pod-template-hash=5d8c7857f5
                    release=app-gw-ingress
Annotations:        <none>
Status:             Running
IP:                 10.160.176.5
Controlled By:      ReplicaSet/app-gw-ingress-ingress-azure-5d8c7857f5
Containers:
  ingress-azure:
    Container ID:   docker://88290df99358b9881a9f9e61053f4c0c228a774e08b2f0ff75e0f6612e4d55c7
    Image:          mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4
    Image ID:       docker-pullable://mcr.microsoft.com/azure-application-gateway/kubernetes-ingress@sha256:b996e8d9812d4d92cf55d2b02fe8f404352b08e137757c4b8d8dabd11fb1901b
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 19 Feb 2019 15:39:20 +0000
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      app-gw-ingress-cm-ingress-azure  ConfigMap  Optional: false
    Environment:
      KUBERNETES_PORT_443_TCP_ADDR:  timj-europe-21d08374.hcp.northeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://timj-europe-21d08374.hcp.northeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://timj-europe-21d08374.hcp.northeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       timj-europe-21d08374.hcp.northeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from app-gw-ingress-sa-ingress-azure-token-dt8lh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  app-gw-ingress-sa-ingress-azure-token-dt8lh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  app-gw-ingress-sa-ingress-azure-token-dt8lh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
  • Output of kubectl logs <ingress controller>:
$ kubectl logs -n ag-poc pod/app-gw-ingress-ingress-azure-5d8c7857f5-2pwlv
I0219 15:39:20.169703       1 main.go:60] Creating authorizer from MSI
I0219 15:39:20.860867       1 context.go:296] k8s context run started
I0219 15:39:20.860890       1 context.go:383] start waiting for initial cache sync
I0219 15:39:20.861084       1 reflector.go:202] Starting reflector *v1.Endpoints (30s) from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
I0219 15:39:20.861103       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
I0219 15:39:20.861103       1 reflector.go:202] Starting reflector *v1.Secret (30s) from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:381
I0219 15:39:20.861117       1 reflector.go:240] Listing and watching *v1.Secret from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:381
I0219 15:39:20.861185       1 reflector.go:202] Starting reflector *v1.Service (30s) from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:380
I0219 15:39:20.861198       1 reflector.go:240] Listing and watching *v1.Service from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:380
I0219 15:39:20.961202       1 reflector.go:202] Starting reflector *v1beta1.Ingress (30s) from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:390
I0219 15:39:20.961225       1 reflector.go:240] Listing and watching *v1beta1.Ingress from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:390
I0219 15:39:20.973883       1 secretstore.go:119] converted secret [ag-poc/guestbook-cert]
I0219 15:39:21.061176       1 context.go:398] ingress initial sync done
I0219 15:39:21.061200       1 context.go:298] k8s context run finished
I0219 15:39:21.061219       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 15:39:21.061237       1 eventqueue.go:119] Processing event begin, time since event generation: 17.6µs
I0219 15:39:21.061258       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 15:39:21.061338       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 15:39:21.061362       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 15:39:21.061370       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 15:39:21.061375       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 15:39:21.061381       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 15:39:21.061386       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 15:39:21.110650       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 15:41:12.250325       1 controller.go:112] deployment took 1m51.13964609s
I0219 15:41:12.250354       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 15:41:12.250361       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0219 15:41:12.250373       1 eventqueue.go:119] Processing event begin, time since event generation: 1m51.189038353s
I0219 15:41:12.250378       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 15:41:12.290770       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 15:41:12.562373       1 controller.go:112] deployment took 271.571988ms
I0219 15:41:12.562403       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 15:41:12.562410       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0219 15:41:12.562421       1 eventqueue.go:119] Processing event begin, time since event generation: 1m51.501058774s
I0219 15:41:12.562426       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 15:41:12.602905       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 15:41:12.874378       1 controller.go:112] deployment took 271.444888ms
I0219 15:41:12.874406       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 15:41:12.874412       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0219 15:41:12.874425       1 eventqueue.go:119] Processing event begin, time since event generation: 1m51.813053794s
I0219 15:41:12.874430       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 15:41:12.929067       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 15:41:13.188960       1 controller.go:112] deployment took 259.86525ms
I0219 15:41:13.188988       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 15:41:13.188994       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0219 15:41:13.189006       1 eventqueue.go:119] Processing event begin, time since event generation: 1m52.127630023s
I0219 15:41:13.189010       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 15:41:13.239306       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 15:41:13.531056       1 controller.go:112] deployment took 291.721454ms
I0219 15:41:13.531083       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 15:41:13.531089       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0219 15:41:13.531100       1 eventqueue.go:119] Processing event begin, time since event generation: 1m52.469718441s
I0219 15:41:13.531103       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 15:41:13.576993       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 15:41:13.834655       1 controller.go:112] deployment took 257.617942ms
I0219 15:41:13.834701       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 15:41:13.834708       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0219 15:41:13.834720       1 eventqueue.go:119] Processing event begin, time since event generation: 1m52.773333334s
I0219 15:41:13.834725       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 15:41:13.898549       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 15:41:14.167105       1 controller.go:112] deployment took 268.505478ms
I0219 15:41:14.167178       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 15:41:14.167187       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0219 15:43:47.775238       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 15:43:47.775290       1 eventqueue.go:119] Processing event begin, time since event generation: 56.6µs
I0219 15:43:47.775297       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 15:43:47.900842       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 15:45:38.709368       1 controller.go:112] deployment took 1m50.808490032s
I0219 15:45:38.709395       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 15:45:38.709402       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0219 15:46:34.951594       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 15:46:34.951651       1 eventqueue.go:119] Processing event begin, time since event generation: 59.4µs
I0219 15:46:34.951664       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 15:46:34.995989       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 15:46:55.364151       1 controller.go:112] deployment took 20.368133189s
I0219 15:46:55.364177       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 15:46:55.364184       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
W0219 16:02:54.891207       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 8381 (9462)
I0219 16:02:55.891399       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 16:28:28.925440       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 12100 (13482)
I0219 16:28:29.925645       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 16:53:55.958816       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 16109 (17438)
I0219 16:53:56.959006       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 17:15:30.989571       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 20094 (20842)
I0219 17:15:31.989854       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 17:38:02.027362       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 23479 (24372)
I0219 17:38:03.027571       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 18:00:50.062117       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 27010 (27947)
I0219 18:00:51.062433       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 18:23:10.092432       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 30579 (31446)
I0219 18:23:11.092759       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 18:46:38.121344       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 34091 (35122)
I0219 18:46:39.121675       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 19:05:11.168606       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 37765 (38025)
I0219 19:05:12.168942       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 19:27:11.215916       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 40677 (41472)
I0219 19:27:12.216256       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 19:44:39.249070       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 44122 (44215)
I0219 19:44:40.249345       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 20:07:49.278814       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 46863 (47842)
I0219 20:07:50.279036       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 20:32:14.311301       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 50487 (51669)
I0219 20:32:15.311534       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
I0219 20:39:52.860029       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 20:39:52.860056       1 eventqueue.go:119] Processing event begin, time since event generation: 31.001µs
I0219 20:39:52.860075       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 20:39:52.999837       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 20:40:13.410066       1 controller.go:112] deployment took 20.410197295s
I0219 20:40:13.410091       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 20:40:13.410098       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0219 20:48:40.476669       1 secretstore.go:119] converted secret [ag-poc/guestbook-cert]
I0219 20:48:40.476824       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 20:48:40.476847       1 eventqueue.go:119] Processing event begin, time since event generation: 25µs
I0219 20:48:40.476865       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 20:48:40.578199       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
W0219 20:50:11.340305       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 54327 (54488)
I0219 20:50:12.340539       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
I0219 20:50:31.621828       1 controller.go:112] deployment took 1m51.04359856s
I0219 20:50:31.621854       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 20:50:31.621861       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "vnetName": {
      "defaultValue": "timj-europe",
      "type": "string",
      "metadata": {
        "description": "VNET name."
      }
    },
    "applicationGatewaySubnetAddressPrefix": {
      "defaultValue": "10.160.128.0/24",
      "type": "string",
      "metadata": {
        "description": "Application gateway subnet prefix."
      }
    },
    "applicationGatewayName": {
      "defaultValue": "poc-ag-2",
      "type": "string",
      "metadata": {
        "description": "Application gateway name."
      }
    },
    "size": {
      "defaultValue": "WAF_Medium",
      "type": "string",
      "metadata": {
        "description": "Application gateway size."
      }
    },
    "capacity": {
      "defaultValue": "2",
      "type": "string",
      "metadata": {
        "description": "Application gateway capity."
      }
    },
    "tier": {
      "defaultValue": "WAF",
      "type": "string",
      "metadata": {
        "description": "Application gateway tier."
      }
    }
  },
  "variables": {
    "applicationGatewaySubnetName": "app-gateways",
    "vnetId": "[resourceId(concat('core-infra-', parameters('vnetName')), 'Microsoft.Network/virtualNetworks', parameters('vnetName'))]",
    "applicationGatewaySubnetId": "[concat(variables('vnetID'),'/subnets/', variables('applicationGatewaySubnetName'))]",
    "applicationGatewayPublicIpId": "[resourceId('Microsoft.Network/publicIPAddresses',parameters('applicationGatewayName'))]",
    "applicationGatewayId": "[resourceId('Microsoft.Network/applicationGateways', parameters('applicationGatewayName'))]"
  },
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "name": "[parameters('applicationGatewayName')]",
      "apiVersion": "2018-08-01",
      "location": "[resourceGroup().location]",
      "sku": {
        "name": "Standard"
      },
      "properties": {
        "publicIPAllocationMethod": "Static"
      }
    },
    {
      "type": "Microsoft.Network/applicationGateways",
      "name": "[parameters('applicationGatewayName')]",
      "apiVersion": "2018-08-01",
      "location": "[resourceGroup().location]",
      "properties": {
        "sku": {
          "name": "WAF_v2",
          "tier": "WAF_v2",
          "capacity": "[parameters('capacity')]"
        },
        "gatewayIPConfigurations": [
          {
            "name": "appGatewayIpConfig",
            "properties": {
              "subnet": {
                "id": "[variables('applicationGatewaySubnetId')]"
              }
            }
          }
        ],
        "frontendIPConfigurations": [
          {
            "name": "appGatewayFrontendIP",
            "properties": {
              "PublicIPAddress": {
                "id": "[variables('applicationGatewayPublicIpId')]"
              }
            }
          }
        ],
        "frontendPorts": [
          {
            "name": "httpPort",
            "properties": {
              "Port": 80
            }
          },
          {
            "name": "httpsPort",
            "properties": {
              "Port": 443
            }
          }
        ],
        "backendAddressPools": [
          {
            "name": "bepool",
            "properties": {
              "backendAddresses": []
            }
          }
        ],
        "httpListeners": [
          {
            "name": "httpListener",
            "properties": {
              "protocol": "Http",
              "frontendPort": {
                "id": "[concat(variables('applicationGatewayId'), '/frontendPorts/httpPort')]"
              },
              "frontendIPConfiguration": {
                "id": "[concat(variables('applicationGatewayId'), '/frontendIPConfigurations/appGatewayFrontendIP')]"
              }
            }
          }
        ],
        "backendHttpSettingsCollection": [
          {
            "name": "setting",
            "properties": {
              "port": 80,
              "protocol": "Http"
            }
          }
        ],
        "requestRoutingRules": [
          {
            "name": "rule1",
            "properties": {
              "httpListener": {
                "id": "[concat(variables('applicationGatewayId'), '/httpListeners/httpListener')]"
              },
              "backendAddressPool": {
                "id": "[concat(variables('applicationGatewayId'), '/backendAddressPools/bepool')]"
              },
              "backendHttpSettings": {
                "id": "[concat(variables('applicationGatewayId'), '/backendHttpSettingsCollection/setting')]"
              }
            }
          }
        ],
        "webApplicationFirewallConfiguration": {
          "enabled": true,
          "firewallMode": "Prevention",
          "ruleSetType": "OWASP",
          "ruleSetVersion": "3.0",
          "disabledRuleGroups": [
            {
              "ruleGroupName": "REQUEST-931-APPLICATION-ATTACK-RFI",
              "rules": [
                931130
              ]
            }
          ]
        },
        "sslPolicy": {
          "policyType": "Custom",
          "cipherSuites": [
            "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
            "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
            "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384",
            "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
            "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA"
          ],
          "minProtocolVersion": "TLSv1_2"
        }
      },
      "dependsOn": [
        "[concat('Microsoft.Network/publicIPAddresses/', parameters('applicationGatewayName'))]"
      ]
    }
  ]
}
  • Any Azure support tickets associated with this issue.
    None; I can open one if you need it.

Use health checks from kubernetes deployment

Is your feature request related to a problem? Please describe.
We deploy our services to Kubernetes with a custom health check endpoint (e.g. /private/ping). Kubernetes correctly uses this endpoint, but the Application Gateway uses the default health probes described here: https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-probe-overview. Because no custom health probes are configured, the Application Gateway backend health checks differ from the ones Kubernetes performs, which may cause incorrect Application Gateway health results.

Describe the solution you'd like
Use the health check endpoint from the Kubernetes deployment to configure a custom probe in the Application Gateway as part of the ingress configuration; a sketch follows.
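
As a sketch of the source of truth this would read from (the container name, image, and port are placeholder assumptions), the probe already present in the pod spec carries everything a custom App Gateway probe needs: path, port, and interval.

containers:
- name: user-service               # hypothetical container
  image: myregistry/user-service   # hypothetical image
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /private/ping          # endpoint from the example above
      port: 8080
    periodSeconds: 10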

Add Azure templates for deploying new AKS and Application Gateway V2

Is your feature request related to a problem? Please describe.
We need a simpler way for users to bring up AKS and Application Gateway that conforms to the requirements of the ingress controller.

Describe the solution you'd like
Azure templates allow us to pre-define a lot of the properties that we require of an AKS cluster to support the ingress controller.

Pull TLS certificates from KeyVault

Is your feature request related to a problem? Please describe.
You currently need to pull secrets from Azure Key Vault manually, transform them, and then create a Kubernetes secret. Ideally this would be tightly integrated with Key Vault certificates, so that rotating a certificate in Key Vault updates the certificate on the Application Gateway.
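
The manual flow described above looks roughly like this today (a hedged sketch; the vault, certificate, and file names are placeholders, and the CLI flags should be verified against az keyvault secret download --help):

# Key Vault stores the full certificate (PFX) as a secret; download it.
az keyvault secret download --vault-name my-vault --name my-cert \
  --encoding base64 --file cert.pfx

# Split the PFX into a PEM certificate and key.
openssl pkcs12 -in cert.pfx -clcerts -nokeys -out cert.pem -passin pass:
openssl pkcs12 -in cert.pfx -nocerts -nodes -out key.pem -passin pass:

# Package the pair as the Kubernetes TLS secret the Ingress references.
kubectl create secret tls my-tls-cert --cert=cert.pem --key=key.pem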

Describe the solution you'd like
Either:

Remove shell scripts from CMake

Is your feature request related to a problem? Please describe.
This is again an optimization. We shouldn't need to invoke shell scripts to run the go, golint, and test commands. The scripts add another level of abstraction and make target management harder.

Describe the solution you'd like

  • Invoke the go commands directly instead of invoking them from shell scripts.

Temporary scaling down deployments to zero stalls updates on Application Gateway

Describe the bug
When a deployment is (temporarily) scaled down to zero pods (from 1 or more), the ingress controller reports errors:

I0212 09:11:17.427539       1 backendhttpsettings.go:102] unable to resolve any backend port for service [default/user-profile-integration-service]
I0212 09:11:17.427545       1 backendhttpsettings.go:85] resolving port name http
I0212 09:11:17.427556       1 backendhttpsettings.go:85] resolving port name http
E0212 09:11:17.427565       1 controller.go:70] unable to generate backend http settings, error [unable to resolve backend port for some services]
I0212 09:11:17.427570       1 eventqueue.go:126] Processing event failed
I0212 09:12:23.424466       1 eventqueue.go:60] Enqueuing skip(false) item
I0212 09:12:23.424502       1 eventqueue.go:119] Processing event begin, time since event generation: 43.301µs
I0212 09:12:23.424511       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0212 09:12:23.516698       1 backendhttpsettings.go:85] resolving port name http
I0212 09:12:23.516733       1 backendhttpsettings.go:85] resolving port name http
I0212 09:12:23.516740       1 backendhttpsettings.go:102] unable to resolve any backend port for service [default/user-profile-integration-service]

Besides producing these errors, it also stops the event-processing loop, so changes after this event won't be processed until the deployment is scaled back to 1 or more pods.

To Reproduce
Steps to reproduce the behavior:

  1. Start 1 new deployment (with an ingress) with the number of instances set to 1
  2. Scale that deployment down to 0
  3. Do another deployment (with an ingress) of another service
  4. The ingress of this service will not be processed on the Application Gateway until the first deployment is scaled back to 1 (or more) instances

Ingress Controller details

  • Output of kubectl describe pod:
Name:               application-gateway-kubernetes-ingress-ingress-azure-798d9v2hr2
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-agentpool-39900342-2/172.16.32.35
Start Time:         Tue, 12 Feb 2019 09:45:46 +0100
Labels:             aadpodidbinding=application-gateway-kubernetes-ingress-ingress-azure
                    app=ingress-azure
                    pod-template-hash=3548564895
                    release=application-gateway-kubernetes-ingress
Annotations:        <none>
Status:             Running
IP:                 172.16.32.49
Controlled By:      ReplicaSet/application-gateway-kubernetes-ingress-ingress-azure-798d9b8df9
Containers:
  ingress-azure:
    Container ID:   docker://437b4d764cda5d69b0fd85208beb9b13655f64fe416ee1b7f895512d3394bc9d
    Image:          mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4
    Image ID:       docker-pullable://mcr.microsoft.com/azure-application-gateway/kubernetes-ingress@sha256:b996e8d9812d4d92cf55d2b02fe8f404352b08e137757c4b8d8dabd11fb1901b
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 12 Feb 2019 09:45:49 +0100
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      application-gateway-kubernetes-ingress-cm-ingress-azure  ConfigMap  Optional: false
    Environment:
      KUBERNETES_PORT_443_TCP_ADDR:  staging-xxxxxxxx.hcp.westeurope.azmk8s.io
      KUBERNETES_PORT:               tcp://staging-xxxxxxxx.hcp.westeurope.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://staging-xxxxxxxx.hcp.westeurope.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       staging-xxxxxxxx.hcp.westeurope.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from application-gateway-kubernetes-ingress-sa-ingress-azure-toxv5m5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  application-gateway-kubernetes-ingress-sa-ingress-azure-toxv5m5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  application-gateway-kubernetes-ingress-sa-ingress-azure-toxv5m5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                               Message
  ----    ------     ----  ----                               -------
  Normal  Scheduled  38m   default-scheduler                  Successfully assigned default/application-gateway-kubernetes-ingress-ingress-azure-798d9v2hr2 to aks-agentpool-39900342-2
  Normal  Pulling    38m   kubelet, aks-agentpool-39900342-2  pulling image "mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4"
  Normal  Pulled     38m   kubelet, aks-agentpool-39900342-2  Successfully pulled image "mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4"
  Normal  Created    38m   kubelet, aks-agentpool-39900342-2  Created container
  Normal  Started    38m   kubelet, aks-agentpool-39900342-2  Started container

Support re-write URLs capability in Ingress Resources

Is your feature request related to a problem? Please describe.
In order to support multiple services on the same domain, it becomes a requirement to support URL rewrites in ingress, especially when it's not feasible to change the service pods running on AKS.

Describe the solution you'd like
The nginx ingress controller already supports re-writes using annotations. We probably need to support something similar.
https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite
The above would imply that we would need one ingress resource per service for the rewrite to take effect, but that should be an acceptable trade-off. A sketch of the nginx pattern follows.
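
For reference, the nginx pattern linked above uses an annotation plus a path per service (the resource names here are placeholders); an AGIC equivalent would presumably follow the same shape under the appgw.ingress.kubernetes.io prefix:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rewrite-demo                  # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /my-service
        backend:
          serviceName: my-service     # hypothetical service
          servicePort: 80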

Let's Encrypt Integration

Hi, are there any plans to provide Let's Encrypt integration with this ingress controller? I.e., when you set up the ingress, rather than storing the certificate details as a manual step, the ingress integrates with the Let's Encrypt service for certificate creation and renewal. I think there is already a cert-manager pod that does this and works well with the nginx ingress controller.

Routing to different namespace

I have created the ingress controller in the default namespace, and I have a service in another namespace. When I try to route to the service, I cannot access it.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: occm
  namespace: scaleredis
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: scaleredis-occm
          servicePort: 80

Add as Acs-engine plugin

Is your feature request related to a problem? Please describe.
Add as plugin for acs-engine.

Describe the solution you'd like
acs-engine is used by many customers to create clusters on Azure. Add the App Gateway k8s ingress as a plugin to acs-engine so that customers can get it by default with just a config value set.

cannot route when using path based

I have followed all your instructions, and I wanted to test the path-based routing. This is my YAML file:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: guestbook
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /guest
        backend:
          serviceName: frontend
          servicePort: 80
      - backend:
          serviceName: frontend
          servicePort: 80

However, when I navigate to publicip/guest, I get:

[screenshot]

These are the rules in the App Gateway:

[screenshot]

[screenshot]

can't evaluate field apiServerAddress in type interface {}

I've created an AKS cluster and followed the instructions for creating a Managed Identity and deploying the AAD Pod Identity. When I try to run helm install, it shows the following error:

Error: render error in "ingress-azure/templates/deployment.yaml": template: ingress-azure/templates/deployment.yaml:35:93: executing "ingress-azure/templates/deployment.yaml" at <.Values.aksClusterCo...>: can't evaluate field apiServerAddress in type interface {}

This doesn't happen if I use v0.1.2 downloaded and extracted locally.
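Presumably the chart expects a value along these lines; the key names below are inferred from the error message and may differ between chart versions:

# Hypothetical values.yaml fragment inferred from the template error above.
aksClusterConfiguration:
  apiServerAddress: <aks-api-server-fqdn>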

Can't run more than 2 ingress-azure pods

Describe the bug
When replicaCount is set to 3 or greater in the Helm values file, only 2 ingress-azure pods work correctly.

To Reproduce
Set replicaCount to any value greater than 2 in values.yaml during Helm installation, then check the logs of each pod; see the fragment below.
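For example, a values.yaml fragment like the following reproduces it:

# Any replicaCount above 2 triggers the behavior described above.
replicaCount: 3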

Ingress Controller details

  • Output of kubectl describe pod <ingress controller>. The pod name can be obtained by running helm list.
$ helm list
agw                      	3       	Mon Jan 21 08:41:59 2019	DEPLOYED	ingress-azure-0.1.5       	1.0        	default

$ kubectl describe pod agw-ingress-azure-56ccd6f4c8-bwlm9
Name:               agw-ingress-azure-56ccd6f4c8-bwlm9
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-agentpool-30504654-1/10.11.0.35
Start Time:         Wed, 09 Jan 2019 13:09:05 +0100
Labels:             aadpodidbinding=agw-ingress-azure
                    app=ingress-azure
                    pod-template-hash=1277829074
                    release=agw
Annotations:        <none>
Status:             Running
IP:                 10.11.0.40
Controlled By:      ReplicaSet/agw-ingress-azure-56ccd6f4c8
Containers:
  ingress-azure:
    Container ID:   docker://51fc60de38b5661dfa229562b607dd9d044d00e3e44fa9335253deb64aa13335
    Image:          mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4
    Image ID:       docker-pullable://mcr.microsoft.com/azure-application-gateway/kubernetes-ingress@sha256:b996e8d9812d4d92cf55d2b02fe8f404352b08e137757c4b8d8dabd11fb1901b
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 17 Jan 2019 11:48:08 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Wed, 09 Jan 2019 13:09:11 +0100
      Finished:     Thu, 17 Jan 2019 10:49:56 +0100
    Ready:          True
    Restart Count:  1
    Environment Variables from:
      agw-cm-ingress-azure  ConfigMap  Optional: false
    Environment:
      KUBERNETES_PORT_443_TCP_ADDR:  XXXXXXXX
      KUBERNETES_PORT:               tcp://XXXXXXXX:443
      KUBERNETES_PORT_443_TCP:       tcp://XXXXXXXX:443
      KUBERNETES_SERVICE_HOST:       XXXXXXXX
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from agw-sa-ingress-azure-token-j49rn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  agw-sa-ingress-azure-token-j49rn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  agw-sa-ingress-azure-token-j49rn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
  • Output of kubectl logs <ingress controller>.
$ kubectl logs --tail=7 agw-ingress-azure-56ccd6f4c8-bwlm9
I0121 07:52:07.689500       1 main.go:86] Retrying in 10s
E0121 07:52:17.749200       1 main.go:83] unable to get specified ApplicationGateway [xxxxxxxx-agw], error=[azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/XXXXXXXX/resourceGroups/MC_xxxxxxxx_xxxxxxxx_eastus2/providers/Microsoft.Network/applicationGateways/xxxxxxxx-agw?api-version=2018-06-01: StatusCode=403 -- Original Error: adal: Refresh request failed. Status Code = '403'. Response body: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"Identity not found"}
]
I0121 07:52:17.749232       1 main.go:86] Retrying in 10s
E0121 07:52:27.813296       1 main.go:83] unable to get specified ApplicationGateway [xxxxxxxx-agw], error=[azure.BearerAuthorizer#WithAuthorization: Failed to refresh the Token for request to https://management.azure.com/subscriptions/XXXXXXXX/resourceGroups/MC_xxxxxxxx_xxxxxxxx_eastus2/providers/Microsoft.Network/applicationGateways/xxxxxxxx-agw?api-version=2018-06-01: StatusCode=403 -- Original Error: adal: Refresh request failed. Status Code = '403'. Response body: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_request","error_description":"Identity not found"}
]
I0121 07:52:27.813374       1 main.go:86] Retrying in 10s

Commit the `vendor` to the repo.

Is your feature request related to a problem? Please describe.
This is more of a build optimization. Because we have not committed the vendor directory to the repo, we end up running glide every time we perform a new build, which is time consuming. If we commit the vendor directory, builds can speed up.

Describe the solution you'd like

  • Commit the vendor to the repo.
  • Make vendoring a separate target in the CMake build system.
  • Do not invoke this target in Travis CI (see the sketch below).
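A rough sketch of the CI side, assuming a make-style build; the target names and file layout are hypothetical:

# Hypothetical .travis.yml fragment: build without re-running vendoring,
# since vendor/ would now be committed to the repo.
language: go
install: true            # skip dependency fetching; use the committed vendor/
script:
  - make appgw-ingress   # build target only; the vendor target is not invoked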

Add node selector to deployment

Is your feature request related to a problem? Please describe.
When deploying to a hybrid cluster, the deployment often tries to schedule pods on Windows nodes.

Describe the solution you'd like
Add a nodeSelector to the deployment template and values, for example as sketched below.
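A minimal sketch of the requested change, assuming the chart's deployment template gains a values-driven selector; the label key shown is the one Kubernetes used for OS selection at the time:

# Hypothetical fragment of the chart's deployment template.
spec:
  template:
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux   # keep AGIC pods off Windows nodes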

Private IP frontend support

We need to create an endpoint that is accessible only internally.

If we manually move the listeners that the controller creates over to the private IP address, everything momentarily works as expected, until the ingress controller re-applies its configuration to the public IP.

Ingress-azure container won't start - AZURE_AUTH_LOCATION error

Describe the bug
When I create the Helm templates with the service principal option (not aad-pod-identity), the container reports the following error on startup:

I0206 12:30:59.179672 1 main.go:63] Creating authorizer from file referenced by AZURE_AUTH_LOCATION
F0206 12:30:59.179815 1 main.go:68] Error creating Azure client from config: invalid character 'c' looking for beginning of value

One suggestion would be to provide clearer instructions on how to set up the service principal configuration; the docs only seem to cover aad-pod-identity.

To Reproduce
Set up a service principal in Azure and create an associated secret in the AKS cluster using the following YAML:

apiVersion: v1
kind: Secret
metadata:
  name: networking-appgw-k8s-azure-service-principal
type: Opaque
data:
  username: removed
  password: removed

I then use the following helm values configuration for the service principal:
armAuth:
  type: servicePrincipal
  secretName: networking-appgw-k8s-azure-service-principal
  secretKey: password
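The JSON parse error above suggests the file mounted at AZURE_AUTH_LOCATION must contain an Azure SDK auth file rather than a bare password. A hedged sketch of a secret that would satisfy that assumption (the real az ad sp create-for-rbac --sdk-auth output also includes endpoint URLs):

apiVersion: v1
kind: Secret
metadata:
  name: networking-appgw-k8s-azure-service-principal
type: Opaque
stringData:
  # Assumption: store the full --sdk-auth JSON under the key that
  # armAuth.secretKey points at, instead of a bare password.
  password: |
    {
      "clientId": "<appId>",
      "clientSecret": "<password>",
      "subscriptionId": "<subscription-id>",
      "tenantId": "<tenant-id>"
    }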

Ingress Controller details
Pod describe
Name:               *****************
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               **********************
Start Time:         Wed, 06 Feb 2019 12:43:22 +0000
Labels:             app=ingress-azure
                    pod-template-hash=2740828966
                    release=release-name
Annotations:        <none>
Status:             Running
IP:                 10.0.0.11
Controlled By:      ReplicaSet/release-name-ingress-azure-6c84d6dfbb
Containers:
  ingress-azure:
    Container ID:   docker://d78c07da94becbe167e1942cd5c29afb75d2457ea123214a3db4f05e15aa9193
    Image:          mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4
    Image ID:       docker-pullable://mcr.microsoft.com/azure-application-gateway/kubernetes-ingress@sha256:b996e8d9812d4d92cf55d2b02fe8f404352b08e137757c4b8d8dabd11fb1901b
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Wed, 06 Feb 2019 13:50:20 +0000
      Finished:     Wed, 06 Feb 2019 13:50:20 +0000
    Ready:          False
    Restart Count:  18
    Environment Variables from:
      release-name-cm-ingress-azure  ConfigMap  Optional: false
    Environment:
      AZURE_AUTH_LOCATION:           /etc/Azure/Networking-AppGW/auth/password
      KUBERNETES_PORT_443_TCP_ADDR:
      KUBERNETES_PORT:
      KUBERNETES_PORT_443_TCP:
      KUBERNETES_SERVICE_HOST:
    Mounts:
      /etc/Azure/Networking-AppGW/auth from networking-appgw-k8s-azure-service-principal-mount (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from release-name-sa-ingress-azure-token-mmsvd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  networking-appgw-k8s-azure-service-principal-mount:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  networking-appgw-k8s-azure-service-principal
    Optional:    false
  release-name-sa-ingress-azure-token-mmsvd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  release-name-sa-ingress-azure-token-mmsvd
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                From     Message
  ----     ------   ----               ----     -------
  Warning  BackOff  2m (x296 over 1h)  kubelet  Back-off restarting failed container

Pod logs (are posted above in the bug description)

Allow users to define an AG ingress prefix per AKS cluster.

Currently the ingress controller uses a default prefix when adding sub-resources to the App Gateway config; it uses this prefix to keep track of the sub-resources it added, so that it does not modify resources it did not add.

However, a single default prefix means an App Gateway can be associated with only one AKS cluster. If we allow users to set a per-cluster prefix, and automate prefix selection so that it is unique across clusters, the same App Gateway could be shared by multiple AKS clusters; one possible shape is sketched below.
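One possible shape for this, with purely hypothetical value names:

# Hypothetical Helm values fragment for a per-cluster prefix.
appgw:
  resourcePrefix: aks-cluster-blue   # must be unique per AKS cluster sharing the gateway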

Fix default CIDR for Azure templates

Describe the bug
The deployment templates currently choose CIDRs from 15.x.x.x. While this technically works, default values should be chosen from 10.x.x.x in accordance with RFC 1918.

To Reproduce
Launch a green-field deployment template and you should see this issue.

Raise events in addition to logs, and point docs to events

Is your feature request related to a problem? Please describe.
Raise Kubernetes events in addition to logs, and point the docs to those events.

Describe the solution you'd like
Kubernetes events would provide an easy way for customers to debug and track what is going on with the App Gateway. The docs currently talk about looking at logs; once this is added, we should point them to the events for more streamlined monitoring of state.

Multiple namespaces support

We're planning on provisioning an AKS for several users supporting multiple namespaces, one per project. Our idea is to deploy one ingress controller with one Application Gateway.

As I understand it, this deployment only supports watching a single namespace. Is there any plan to watch multiple namespaces for triggering rule creation in the App Gateway? A possible configuration shape is sketched below.
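One possible configuration shape, with hypothetical value names:

# Hypothetical values.yaml fragment: watch a list of namespaces
# instead of a single one.
kubernetes:
  watchNamespace: project-a,project-b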

Add support for configuring backend HTTP settings

Is your feature request related to a problem? Please describe.
There are a few settings that we currently have to manually configure after the backend settings have been created by the ingress. Primarily we need to enable connection draining and increase the request timeout, but I imagine the ability to configure the rest of the settings would come in handy for others.

Describe the solution you'd like
The nginx Ingress handles this with annotations, for example:

---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"

I imagine this could be handled in a similar fashion:

---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: "azure/application-gateway"
    appgw.ingress.kubernetes.io/http-cookie-based-affinity: "false"
    appgw.ingress.kubernetes.io/http-connection-draining: "true"
    appgw.ingress.kubernetes.io/http-request-timeout: "30"

Any plan for when the project will be GA?

I searched and did not find any roadmap or plan for when this project will reach GA. The last release was in Oct 2018.

Could any of the maintainers share any info on this?

Thank you!

Occasionally the Ingress Controller introduces significant delays in processing ingress events

Describe the bug
Occasionally, when we modify the Ingress resource, it takes on the order of tens of minutes for the Application Gateway configuration to be updated through ARM.

To Reproduce
Start a Kubernetes service. Expose it through the Application Gateway by creating an Ingress resource. Scale the service up or down, or remove the Ingress resource completely, and observe the time it takes to apply this configuration to the Application Gateway. Most of the time the update goes through within a matter of seconds, but occasionally it takes on the order of tens of minutes to get applied.

Ingress Controller details

  • Output of kubectl describe pod <ingress controller>. The pod name can be obtained by running helm list.
  • Output of kubectl logs <ingress controller>: will add the logs here shortly.
  • Any Azure support tickets associated with this issue.

Autoscaled WAF_v2 support

Describe the bug
Application Gateway Ingress Controller does not work with Application Gateway WAF_v2 while autoscaling is on.

To Reproduce
Create Application Gateway WAF_v2 with Autoscale enabled. Configure Application Gateway Ingress Controller on Kubernetes.

Ingress Controller details

  • Output of kubectl logs <ingress controller>:
I0108 15:46:43.528469       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
W0108 15:46:43.627344       1 controller.go:105] unable to send CreateOrUpdate request, error [network.ApplicationGatewaysClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=400 Code="ApplicationGatewayV2SkuMustSpecifyEitherCapacityOrAutoscaleConfiguration" Message="Application Gateway /subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Network/applicationGateways/app-gw must specify either Capacity or AutoscaleConfiguration for the selected SKU tier WAF_v2" Details=[]]
I0108 15:46:43.627380       1 controller.go:106] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0108 15:46:43.627386       1 eventqueue.go:126] Processing event failed
  • appgw-ingress help output:
root@app-gw-ingress-azure-5db8876bb5-92ms4:/# ./appgw-ingress -h
Usage of appgw-ingress:
      --apiserver-host string   The address of the Kubernetes apiserver. Optional if running in cluster; if omitted, local discovery is attempted.
      --in-cluster              If running in a Kubernetes cluster, use the pod secrets for creating a Kubernetes client. Optional. (default true)
      --kubeconfig string       Path to kubeconfig file with authorization and master location information.
      --sync-period duration    Interval at which to re-list and confirm cloud resources. (default 30s)
pflag: help requested
  • ingress configuration
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
  name: sample-app
  namespace: app-gw-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: sample-app
          servicePort: 80
  tls:
  - secretName: sample-app

Azure template deployment fails due to unsupported Kubernetes version

Describe the bug
The templates used in green-field deployment default the Kubernetes version to 1.11.5, which is no longer supported by AKS. We should move the default to 1.12.5.

To Reproduce
Try using the template for green-field deployments with the default values; you will see the failure.

Add mock tests for `pkg/appgw`

Is your feature request related to a problem? Please describe.
Given that we have already been able to mock pkg/k8scontext, it should be straightforward to mock the config builder as well.

Describe the solution you'd like
We can use the fake k8s go-client to create mocks for the k8scontext and be able to emulate most of the inputs to the configbuilder and write unit-tests for the appgw package.

AppGW Ingress Demo not working

Hi,
I've set up everything (AKS, AAD Pod Identity).
After that, I'm running the demo you provided (guestbook + modified ingress file).

When I try to open the URL on the App Gateway, I get a 504 after a long while.

The gateway seems to be configured; I can see the route set up in the Azure portal.

Hitting the service directly, or through a node's IP on port 80, works.

Looking at the logs of the App Gateway ingress pod, there are no errors.

Any ideas how to debug this?

Templates should allow choosing the SKU type for Application Gateway

Is your feature request related to a problem? Please describe.
With the current templates, it is not possible to change the SKU type when deploying from the portal. The template should offer options for the SKU type and autoscale configuration.

Describe the solution you'd like
Add default values for the Application Gateway SKU type and capacity in the template, and allow these values to be modified.

Add custom hostname & https support to default listener

The ingress controller, by default, creates a default HTTP listener (see the screenshot below). Is it possible to change this default listener so that we:

  • can add a custom hostname binding to it
  • can also add HTTPS support on this default binding, with a secret available in the platform

[screenshot: default HTTP listener]

Is this functionality already available in the package? If so, where can we set this configuration so it is created automatically? I tried creating/changing this myself in the Application Gateway, but my changes were automatically overwritten by the ingress controller.

Socket.IO Clients disconnect and reconnect repeatedly

Describe the bug
I moved my Socket.IO server from kops on AWS to Azure Kubernetes Service. Everything was working on AWS with Zalando's kube-ingress-aws-controller. I have the same structure on Azure, except that I use a Kubernetes Service and application-gateway-kubernetes-ingress (with the required components, as you outlined). My clients could connect without any problem on my old infrastructure, but now whenever they connect to the Flask Socket.IO server, they disconnect after 30 seconds and reconnect, and this repeats indefinitely. The same client code still works against my old infrastructure. Am I missing something? Everything works except the Socket.IO communication on the Azure side.
For reference, an nginx configuration for Socket.IO:
https://github.com/socketio/socket.io/blob/master/examples/cluster-nginx/nginx/nginx.conf

To Reproduce
Steps to reproduce the behavior:

  • Follow https://azure.github.io/application-gateway-kubernetes-ingress/docs/install-new.html steps without RBAC
  • Create secret for tls certificate
  • Use any nodeJs or flask SocketIO Server docker Image
  • Deploy this test project by doing Azure identity binding, setting Ingress settings with secret and deployment to create Instance of app etc.
  • Assign your Application gateway's public address to your domain that used in Ingress settings.
  • Try to connect from any SocketIO Client

Example Server and Client App:
https://github.com/socketio/socket.io/tree/master/examples

Ingress Controller details

  • Output of kubectl describe pod <ingress controller>. The pod name can be obtained by running helm list.
Name:               ugly-rodent-ingress-azure-6b6cf4f969-2qcxc
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-agentpool-37794972-0/10.240.0.4
Start Time:         Mon, 11 Feb 2019 23:30:06 +0300
Labels:             aadpodidbinding=ugly-rodent-ingress-azure
                    app=ingress-azure
                    pod-template-hash=6b6cf4f969
                    release=ugly-rodent
Annotations:        <none>
Status:             Running
IP:                 10.240.0.13
Controlled By:      ReplicaSet/ugly-rodent-ingress-azure-6b6cf4f969
Containers:
  ingress-azure:
    Container ID:   docker://3f510489472c7c3a3e4422f5e12ecf34f054a42908134007e60c996baa7da178
    Image:          mcr.microsoft.com/azure-application-gateway/kubernetes-ingress:0.1.4
    Image ID:       docker-pullable://mcr.microsoft.com/azure-application-gateway/kubernetes-ingress@sha256:b996e8d9812d4d92cf55d2b02fe8f404352b08e137757c4b8d8dabd11fb1901b
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 11 Feb 2019 23:30:26 +0300
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      ugly-rodent-cm-ingress-azure  ConfigMap  Optional: false
    Environment:
      KUBERNETES_PORT_443_TCP_ADDR:  practicalvr-a931c707.hcp.eastus.azmk8s.io
      KUBERNETES_PORT:               tcp://practicalvr-a931c707.hcp.eastus.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://practicalvr-a931c707.hcp.eastus.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       practicalvr-a931c707.hcp.eastus.azmk8s.io
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ugly-rodent-sa-ingress-azure-token-q98xz (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  ugly-rodent-sa-ingress-azure-token-q98xz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ugly-rodent-sa-ingress-azure-token-q98xz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
  • Output of kubectl logs <ingress controller>:
I0213 15:29:58.491989       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0213 15:29:58.491996       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0213 15:30:51.942679       1 eventqueue.go:60] Enqueuing skip(false) item
I0213 15:30:51.942712       1 eventqueue.go:119] Processing event begin, time since event generation: 39.1µs
I0213 15:30:51.942720       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0213 15:30:52.029216       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0213 15:31:02.498898       1 controller.go:112] deployment took 10.469648601s
I0213 15:31:02.498921       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0213 15:31:02.498927       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
W0213 15:50:08.613718       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 239474 (239686)
I0213 15:50:09.613963       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0213 16:10:28.641033       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 241095 (241391)
W0217 22:27:08.324272       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 753978 (754276)
I0217 22:27:09.324443       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0217 22:49:40.356273       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 755684 (756161)
I0217 22:49:41.356582       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
I0217 22:50:40.158905       1 eventqueue.go:60] Enqueuing skip(false) item
I0217 22:50:40.158970       1 eventqueue.go:119] Processing event begin, time since event generation: 69.2µs
I0217 22:50:40.159013       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0217 22:50:40.486176       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0217 22:50:50.991059       1 controller.go:112] deployment took 10.504847568s
I0217 22:50:50.991089       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0217 22:50:50.991096       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0217 22:51:05.881487       1 eventqueue.go:60] Enqueuing skip(false) item
I0217 22:51:05.881518       1 eventqueue.go:119] Processing event begin, time since event generation: 55.7µs
I0217 22:51:05.881525       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0217 22:51:05.946492       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0217 22:51:16.390322       1 controller.go:112] deployment took 10.443797729s
I0217 22:51:16.390351       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0217 22:51:16.390357       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
W0217 23:08:22.385634       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 757716 (757764)
I0217 23:08:23.385896       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0217 23:25:58.412124       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 759169 (759234)
I0218 20:44:58.919529       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0218 21:07:21.000575       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 867497 (867961)
I0218 21:07:22.000794       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
I0218 21:28:51.344845       1 eventqueue.go:60] Enqueuing skip(false) item
I0218 21:28:51.344879       1 eventqueue.go:119] Processing event begin, time since event generation: 37.899µs
I0218 21:28:51.344899       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0218 21:28:51.723842       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0218 21:29:02.246797       1 controller.go:112] deployment took 10.522918158s
I0218 21:29:02.246825       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0218 21:29:02.246832       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0218 21:29:26.572933       1 eventqueue.go:60] Enqueuing skip(false) item
I0218 21:29:26.572978       1 eventqueue.go:119] Processing event begin, time since event generation: 50.4µs
I0218 21:29:26.572984       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0218 21:29:26.648498       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0218 21:29:37.097722       1 controller.go:112] deployment took 10.449169045s
I0218 21:29:37.097848       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0218 21:29:37.097974       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
W0218 21:47:43.065922       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 871243 (871372)
I0218 21:47:44.066193       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
I0219 04:51:28.465768       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 04:51:28.465862       1 eventqueue.go:119] Processing event begin, time since event generation: 97.399µs
I0219 04:51:28.465883       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 04:51:28.656497       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 04:51:39.312210       1 controller.go:112] deployment took 10.655666767s
I0219 04:51:39.312240       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 04:51:39.312247       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
I0219 04:52:46.806825       1 eventqueue.go:60] Enqueuing skip(false) item
I0219 04:52:46.806857       1 eventqueue.go:119] Processing event begin, time since event generation: 36.2µs
I0219 04:52:46.806863       1 controller.go:50] controller.processEvent called with type k8scontext.Event
I0219 04:52:46.925059       1 controller.go:98] ~~~~~~~~ ↓ ApplicationGateway deployment ↓ ~~~~~~~~
I0219 04:52:57.532696       1 controller.go:112] deployment took 10.607604987s
I0219 04:52:57.532734       1 controller.go:119] ~~~~~~~~ ↑ ApplicationGateway deployment ↑ ~~~~~~~~
I0219 04:52:57.532740       1 eventqueue.go:128] Processing event done, updating lastEventTimestamp
W0219 05:13:26.774435       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 908325 (908652)
I0219 05:13:27.774634       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
W0219 05:34:23.803920       1 reflector.go:341] github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379: watch of *v1.Endpoints ended with: too old resource version: 910061 (910405)
I0219 05:34:24.804180       1 reflector.go:240] Listing and watching *v1.Endpoints from github.com/Azure/application-gateway-kubernetes-ingress/pkg/k8scontext/context.go:379
  • Any Azure support tickets associated with this issue.
    None

Add Release.Namespace to chart templates

I am deploying this chart using helm template and kubectl apply (we do not use Tiller). I'd like to be able to deploy this controller into a namespace other than default, but without {{ .Release.Namespace }} in the chart templates I cannot see a way of doing this cleanly. Would it be possible to add this to the templates, so that running helm template --namespace <namespace_name> gives me what I need (see the sketch below)? Unless there is another way to achieve this; for now I have forked the repo.
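A sketch of the requested change; the name helper below is hypothetical:

# Hypothetical fragment of a chart template: namespace every
# namespaced resource from the release.
metadata:
  name: {{ template "ingress-azure.fullname" . }}   # hypothetical helper
  namespace: {{ .Release.Namespace }}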

Add a Go Linter target to the build system.

Is your feature request related to a problem? Please describe.
We currently don't run any linters, so we may not conform to Go standards.

Describe the solution you'd like
Add a linter target to the build system alongside the appgw-ingress target.
