
yggdrasil's Introduction

Yggdrasil

Yggdrasil is an Envoy control plane that configures listeners and clusters based on Kubernetes Ingresses from multiple Kubernetes clusters. This allows you to run an Envoy cluster acting as a multi-cluster load balancer for Kubernetes. We built it because we wanted our apps to stay highly available in the event of a cluster outage, but did not want the solution to live inside Kubernetes itself.

Note: Currently we support versions 1.20.x to 1.26.x of Envoy.
Note: Yggdrasil now uses Go modules to handle dependencies.

Usage

Yggdrasil watches all Ingresses in each Kubernetes cluster that you give it via the kube-config flag. Any Ingress that matches one of the ingress classes you have specified gets a listener and cluster created that listens on the same Host as the Host defined in the Ingress object. If you have multiple clusters, Yggdrasil creates a cluster endpoint for each Kubernetes cluster your Ingress is in; the endpoint address is the address of the Ingress load balancer.
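
As an illustration, with ingressClasses set to ["multi-cluster"], an Ingress such as the one below would be picked up (a minimal sketch; the names and host are placeholders, and kubernetes.io/ingress.class is the annotation style used in the examples further down this page):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: multi-cluster
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - backend:
          serviceName: my-app
          servicePort: 80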

Joseph Irving has published a blog post which describes our need for and use of Yggdrasil at Uswitch.

Setup

Please see the Getting Started guide for a walkthrough of setting up a simple HTTP service with Yggdrasil and envoy.

The basic setup is to have a cluster of Envoy nodes that connect to Yggdrasil via gRPC and receive dynamic listeners and clusters from it. Yggdrasil is set up to talk to each Kubernetes API, where it watches for any Ingresses that use the ingress classes it is watching for.

Yggdrasil Diagram

Your Envoy nodes only need a minimal config in which they are simply set up to receive dynamic clusters and listeners from Yggdrasil. Example Envoy config:

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

dynamic_resources:
  lds_config:
    resource_api_version: V3
    api_config_source:
      transport_api_version: V3
      api_type: GRPC
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster
  cds_config:
    resource_api_version: V3
    api_config_source:
      transport_api_version: V3
      api_type: GRPC
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: yggdrasil
                port_value: 8080

Your ingress setup then looks like this:

Envoy Diagram

Where the envoy nodes are loadbalancing between each cluster for a given ingress.

Health Check

Yggdrasil always configures a path on your Envoy nodes at /yggdrasil/status, which can be used to health check your Envoy nodes. It returns 200 only once your nodes have started and been configured by Yggdrasil.
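
For example, a load balancer or monitoring probe could check the endpoint like this (a minimal sketch; it assumes Envoy is listening on the default --envoy-port of 10000, and my-envoy-node is a placeholder hostname):

curl -i http://my-envoy-node:10000/yggdrasil/status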

Annotations

Yggdrasil allows for some customisation of the route and cluster config per Ingress through the annotations below.

Name                                     Type
yggdrasil.uswitch.com/healthcheck-path   string
yggdrasil.uswitch.com/timeout            duration
yggdrasil.uswitch.com/retry-on           string

Health Check Path

Specifies a path to configure an HTTP health check against. Envoy will not route to clusters that fail health checks.

Timeout

Allows for adjusting the timeout in Envoy. Currently this value is applied to the Envoy timeouts that Yggdrasil configures, such as the route timeout.

Retries

Allows overwriting the default retry policy's config.route.v3.RetryPolicy.RetryOn set by the --retry-on flag (default 5xx). Accepts a comma-separated list of retry-on policies.

Example

Below is an example of an Ingress with some of the annotations specified:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-com
  namespace: default
  annotations:
    yggdrasil.uswitch.com/healthcheck-path: /healthz
    yggdrasil.uswitch.com/timeout: 30s
    yggdrasil.uswitch.com/retry-on: gateway-error,connect-failure
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: example
          servicePort: 80

Dynamic TLS certificates synchronization from Kubernetes secrets

Downstream TLS certificates can be dynamically fetched and updated from the Kubernetes Secrets referenced under Ingresses' spec.tls by setting syncSecrets to true in the Yggdrasil configuration (false by default).

In this mode, only a single certificate may be specified in the Yggdrasil configuration. It is used for hosts whose secret is misconfigured or invalid.

Note: ECDSA keys larger than 256 bits are not supported by Envoy and will be discarded. See envoyproxy/envoy#10855.
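
Below is a minimal sketch of a configuration using this mode; the single certificates entry acts as the fallback for hosts whose secret is broken (the host pattern, paths, and cluster details are placeholders):

{
  "nodeName": "foo",
  "ingressClasses": ["multi-cluster"],
  "syncSecrets": true,
  "certificates": [
    {
      "hosts": ["*.api.com"],
      "cert": "path/to/fallback/cert",
      "key": "path/to/fallback/key"
    }
  ],
  "clusters": [
    {
      "tokenPath": "/path/to/a/token",
      "apiServer": "https://cluster1.api.com",
      "ca": "path/to/cluster1/ca"
    }
  ]
}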

Configuration

Yggdrasil can be configured using a config file, e.g.:

{
  "nodeName": "foo",
  "ingressClasses": ["multi-cluster", "multi-cluster-staging"],
  "syncSecrets": false,
  "certificates": [
    {
      "hosts": ["*.api.com"],
      "cert": "path/to/cert",
      "key": "path/to/key"
    }
  ],
  "clusters": [
    {
      "token": "xxxxxxxxxxxxxxxx",
      "apiServer": "https://cluster1.api.com",
      "ca": "pathto/cluster1/ca"
    },
    {
      "tokenPath": "/path/to/a/token",
      "apiServer": "https://cluster2.api.com",
      "ca": "pathto/cluster2/ca"
    }
  ]
}

The list of certificates will be loaded by Yggdrasil and served to the Envoy nodes by inlining the key pairs. These are then used to group the Ingresses into different filter chains, split by host.

nodeName is the same node name that you start your Envoy nodes with. ingressClasses is a list of ingress classes that Yggdrasil will watch for. Each entry in clusters represents a different Kubernetes cluster, with token being a service account token for that cluster and ca the path to the CA certificate for that cluster.
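
For reference, one way to give Envoy a matching node name is in its bootstrap config (a minimal sketch with placeholder values; Envoy's --service-node and --service-cluster command-line flags are an equivalent alternative):

node:
  id: foo
  cluster: envoy-cluster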

Metrics

Yggdrasil has a number of Go, gRPC, Prometheus, and Yggdrasil-specific metrics built in which can be reached by cURLing the /metrics path at the health API address/port (default: 8081). See Flags for more information on configuring the health API address/port.

The Yggdrasil-specific metrics which are available from the API are:

Name                         Description                                       Type
yggdrasil_cluster_updates    Number of times the clusters have been updated    counter
yggdrasil_clusters           Total number of clusters generated                gauge
yggdrasil_ingresses          Total number of matching ingress objects          gauge
yggdrasil_listener_updates   Number of times the listener has been updated     counter
yggdrasil_virtual_hosts      Total number of virtual hosts generated           gauge
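
For example, to view just the Yggdrasil-specific metrics listed above (a minimal sketch assuming the default health API address and port):

curl -s http://localhost:8081/metrics | grep yggdrasil_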

Flags

--address string                              yggdrasil envoy control plane listen address (default "0.0.0.0:8080")
--ca string                                   trustedCA
--cert string                                 certfile
--config string                               config file
--config-dump                                 Enable config dump endpoint at /configdump on the health-address HTTP server
--debug                                       Log at debug level
--envoy-listener-ipv4-address string          IPv4 address by the envoy proxy to accept incoming connections (default "0.0.0.0")
--envoy-port uint32                           port by the envoy proxy to accept incoming connections (default 10000)
--health-address string                       yggdrasil health API listen address (default "0.0.0.0:8081")
-h, --help                                        help for yggdrasil
--host-selection-retry-attempts int           Number of host selection retry attempts. Set to value >=0 to enable (default -1)
--http-ext-authz-allow-partial-message        When this field is true, Envoy will buffer the message until max_request_bytes is reached (default true)
--http-ext-authz-cluster string               The name of the upstream gRPC cluster
--http-ext-authz-failure-mode-allow           Changes filters behaviour on errors (default true)
--http-ext-authz-max-request-bytes uint32     Sets the maximum size of a message body that the filter will hold in memory (default 8192)
--http-ext-authz-pack-as-bytes                When this field is true, Envoy will send the body as raw bytes.
--http-ext-authz-timeout duration             The timeout for the gRPC request. This is the timeout for a specific request. (default 200ms)
--http-grpc-logger-cluster string             The name of the upstream gRPC cluster
--http-grpc-logger-name string                Name of the access log
--http-grpc-logger-request-headers strings    access logs request headers
--http-grpc-logger-response-headers strings   access logs response headers
--http-grpc-logger-timeout duration           The timeout for the gRPC request (default 200ms)
--ingress-classes strings                     Ingress classes to watch
--key string                                  keyfile
--kube-config stringArray                     Path to kube config
--max-ejection-percentage int32               maximal percentage of hosts ejected via outlier detection. Set to >=0 to activate outlier detection in envoy. (default -1)
--node-name string                            envoy node name
--retry-on string                             default comma-separated list of retry policies (default "5xx")
--tracing-provider                            name of HTTP Connection Manager tracing provider to include - currently only zipkin config is supported
--upstream-healthcheck-healthy uint32         number of successful healthchecks before the backend is considered healthy (default 3)
--upstream-healthcheck-interval duration      duration of the upstream health check interval (default 10s)
--upstream-healthcheck-timeout duration       timeout of the upstream healthchecks (default 5s)
--upstream-healthcheck-unhealthy uint32       number of failed healthchecks before the backend is considered unhealthy (default 3)
--upstream-port uint32                        port used to connect to the upstream ingresses (default 443)
--use-remote-address                          populates the X-Forwarded-For header with the client address. Set to true when used as edge proxy
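
By way of illustration, a typical invocation might combine a config file with a few flag overrides (a sketch only; the binary name, paths, and values are placeholders):

yggdrasil \
  --config /etc/yggdrasil/config.json \
  --node-name foo \
  --ingress-classes multi-cluster,multi-cluster-staging \
  --upstream-port 443 \
  --debug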

yggdrasil's People

Contributors

aluxima, dependabot[bot], dewaldv, hikhvar, joseph-irving, luke-scott, maria-robobug, meghaniankov, mmcgarr, pingles, rhysemmas, samb1729, surajnarwade, tombooth, waiariki-koia

yggdrasil's Issues

Add listener IP option

I would like to add a flag that configures the IP address that envoy listens on, instead of 0.0.0.0 by default.
As --envoy-port already exists to configure the downstream port, I was thinking --envoy-address?

The default would still be 0.0.0.0 so no change for those who won't specify the flag.

Add annotation to configure ingress weight

Adding a yggdrasil.uswitch.com/weight annotation would be useful to configure the lbEndpoint load_balancing_weight for each Ingress.
Omitting it would not change the current behavior.

In addition to that, giving a weight of 0 could be a special case to remove an ingress from an Envoy cluster.

Upgrade Envoy API v2 to v3

Envoy 1.18+ no longer supports the v2 API.
Yggdrasil needs to use the v3 API in order to work with recent versions of Envoy.

Do you think it's conceivable to make Yggdrasil switch straight to using v3, thereby completely dropping the v2 API?
It would mean everybody would need to upgrade their envoy nodes to 1.13+.

Dynamically get certificates from ingresses' TLS secrets

Declaring all TLS certificates and managing them alongside Yggdrasil can be a challenge when working with many ingresses all using different certificates.

We would like to make Yggdrasil fetch (and watch) TLS secrets declared in ingresses' spec.tls and use them.

To make this functionality transparent to those who don't need it, we can imagine simply adding a syncSecrets Yggdrasil configuration option (false by default) that would ignore certificates if true:

{
  "nodeName": "foo",
  "ingressClasses": ["multi-cluster", "multi-cluster-staging"],
  "certificates": [
    {
      "hosts": ["*.api.com"],
      "cert": "path/to/cert",
      "key": "path/to/key"
    }
  ],
  "clusters": [
    {
      "token": "xxxxxxxxxxxxxxxx",
      "apiServer": "https://cluster1.api.com",
      "ca": "pathto/cluster1/ca"
    }
  ]
}

=>

{
  "nodeName": "foo",
  "ingressClasses": ["multi-cluster", "multi-cluster-staging"],
  "syncSecrets": true,
  "clusters": [
    {
      "token": "xxxxxxxxxxxxxxxx",
      "apiServer": "https://cluster1.api.com",
      "ca": "pathto/cluster1/ca"
    }
  ]
}

Any other approach in mind? Maybe one to be able to use both static certificates and TLS secrets at the same time?

Adopt go modules

What is your opinion on adopting go modules instead of dep?
From my point of view go modules have the advantage of eliminating an additional external tool.

Add Diagram

It'd be great to add a picture and some more explanation around the ingress class annotations to make it easier to see where Yggdrasil sits and how to get it running (maybe example terraform config for the envoy ASG stuff?)

Envoy is not getting k8s ingress cluster config from yggdrasil control-plane

Envoy is not receiving k8s ingress configuration (clusters/listeners) from the yggdrasil control plane. I'm using the reference configuration:

Envoy docker container output:

[2019-07-16 03:39:18.751][8][info][main] [source/server/server.cc:207] statically linked extensions:
[2019-07-16 03:39:18.752][8][info][main] [source/server/server.cc:209]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2019-07-16 03:39:18.752][8][info][main] [source/server/server.cc:212]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.grpc_http1_reverse_bridge,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.filters.http.tap,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2019-07-16 03:39:18.752][8][info][main] [source/server/server.cc:215]   filters.listener: envoy.listener.original_dst,envoy.listener.original_src,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2019-07-16 03:39:18.753][8][info][main] [source/server/server.cc:218]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.mysql_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.filters.network.zookeeper_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2019-07-16 03:39:18.753][8][info][main] [source/server/server.cc:220]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2019-07-16 03:39:18.754][8][info][main] [source/server/server.cc:222]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.zipkin
[2019-07-16 03:39:18.755][8][info][main] [source/server/server.cc:225]   transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2019-07-16 03:39:18.756][8][info][main] [source/server/server.cc:228]   transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2019-07-16 03:39:18.756][8][info][main] [source/server/server.cc:234] buffer implementation: old (libevent)
[2019-07-16 03:39:18.766][8][warning][misc] [source/common/protobuf/utility.cc:173] Using deprecated option 'envoy.api.v2.Cluster.hosts' from file cds.proto. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/master/DEPRECATED.md for details.
[2019-07-16 03:39:18.768][8][info][main] [source/server/server.cc:281] admin address: 0.0.0.0:9901
[2019-07-16 03:39:18.769][8][info][config] [source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2019-07-16 03:39:18.769][8][info][config] [source/server/configuration_impl.cc:56] loading 1 cluster(s)
[2019-07-16 03:39:18.770][8][info][config] [source/server/configuration_impl.cc:60] loading 0 listener(s)
[2019-07-16 03:39:18.770][8][info][config] [source/server/configuration_impl.cc:85] loading tracing configuration
[2019-07-16 03:39:18.770][8][info][config] [source/server/configuration_impl.cc:105] loading stats sink configuration
[2019-07-16 03:39:18.770][8][info][main] [source/server/server.cc:478] starting main dispatch loop
[2019-07-16 03:39:19.064][8][info][upstream] [source/common/upstream/cluster_manager_impl.cc:133] cm init: initializing cds
[2019-07-16 03:39:19.067][8][info][upstream] [source/common/upstream/cluster_manager_impl.cc:137] cm init: all clusters initialized
[2019-07-16 03:39:19.067][8][info][main] [source/server/server.cc:462] all clusters initialized. initializing init manager
[2019-07-16 03:39:19.071][8][info][upstream] [source/server/lds_api.cc:74] lds: add/update listener 'listener_0'
[2019-07-16 03:39:19.071][8][info][config] [source/server/listener_manager_impl.cc:1006] all dependencies initialized. starting workers

yggdrasil docker container output:

time="2019-07-16T03:39:13Z" level=info msg="started snapshotter"
time="2019-07-16T03:39:14Z" level=debug msg="adding &Ingress{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:traefik-web-ui,GenerateName:,Namespace:kube-system-custom,SelfLink:/apis/extensions/v1beta1/namespaces/kube-system-custom/ingresses/traefik-web-ui,UID:37ba4ec6-a6b5-11e9-aa56-12311bc24cf8,ResourceVersion:7291364,Generation:1,CreationTimestamp:2019-07-15 04:01:26 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{\"kubernetes.io/ingress.class\":\"traefik\",\"traefik.ingress.kubernetes.io/frontend-entry-points\":\"http\"},\"name\":\"traefik-web-ui\",\"namespace\":\"kube-system-custom\"},\"spec\":{\"rules\":[{\"host\":\"traefik.cluster1.preprod.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"traefik\",\"servicePort\":\"web\"},\"path\":\"/\"}]}}]}}\n,kubernetes.io/ingress.class: traefik,traefik.ingress.kubernetes.io/frontend-entry-points: http,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:IngressSpec{Backend:nil,TLS:[],Rules:[{traefik.cluster1.preprod.com {HTTPIngressRuleValue{Paths:[{/ {traefik {1 0 web}}}],}}}],},Status:IngressStatus{LoadBalancer:k8s_io_api_core_v1.LoadBalancerStatus{Ingress:[],},},}"
time="2019-07-16T03:39:14Z" level=debug msg="took snapshot: {Endpoints:{Version: Items:map[]} Clusters:{Version:2019-07-16 03:39:14.0499594 +0000 UTC m=+1.109360801 Items:map[]} Routes:{Version: Items:map[]} Listeners:{Version:2019-07-16 03:39:14.0499448 +0000 UTC m=+1.109347101 Items:map[listener_0:name:\"listener_0\" address:<socket_address:<address:\"0.0.0.0\" port_value:10000 > > filter_chains:<filters:<name:\"envoy.http_connection_manager\" config:<fields:<key:\"access_log\" value:<list_value:<values:<struct_value:<fields:<key:\"config\" value:<struct_value:<fields:<key:\"format\" value:<string_value:\"{\\\"bytes_received\\\":\\\"%BYTES_RECEIVED%\\\",\\\"bytes_sent\\\":\\\"%BYTES_SENT%\\\",\\\"downstream_local_address\\\":\\\"%DOWNSTREAM_LOCAL_ADDRESS%\\\",\\\"downstream_remote_address\\\":\\\"%DOWNSTREAM_REMOTE_ADDRESS%\\\",\\\"duration\\\":\\\"%DURATION%\\\",\\\"forwarded_for\\\":\\\"%REQ(X-FORWARDED-FOR)%\\\",\\\"protocol\\\":\\\"%PROTOCOL%\\\",\\\"request_id\\\":\\\"%REQ(X-REQUEST-ID)%\\\",\\\"request_method\\\":\\\"%REQ(:METHOD)%\\\",\\\"request_path\\\":\\\"%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%\\\",\\\"response_code\\\":\\\"%RESPONSE_CODE%\\\",\\\"response_flags\\\":\\\"%RESPONSE_FLAGS%\\\",\\\"start_time\\\":\\\"%START_TIME(%s.%3f)%\\\",\\\"upstream_cluster\\\":\\\"%UPSTREAM_CLUSTER%\\\",\\\"upstream_host\\\":\\\"%UPSTREAM_HOST%\\\",\\\"upstream_local_address\\\":\\\"%UPSTREAM_LOCAL_ADDRESS%\\\",\\\"upstream_service_time\\\":\\\"%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%\\\",\\\"user_agent\\\":\\\"%REQ(USER-AGENT)%\\\"}\\n\" > > fields:<key:\"path\" value:<string_value:\"/var/log/envoy/access.log\" > > > > > fields:<key:\"name\" value:<string_value:\"envoy.file_access_log\" > > > > > > > fields:<key:\"http_filters\" value:<list_value:<values:<struct_value:<fields:<key:\"config\" value:<struct_value:<fields:<key:\"headers\" value:<list_value:<values:<struct_value:<fields:<key:\"exact_match\" value:<string_value:\"/yggdrasil/status\" > > fields:<key:\"name\" value:<string_value:\":path\" > > > > > > > fields:<key:\"pass_through_mode\" value:<bool_value:false > > > > > fields:<key:\"name\" value:<string_value:\"envoy.health_check\" > > > > values:<struct_value:<fields:<key:\"name\" value:<string_value:\"envoy.router\" > > > > > > > fields:<key:\"route_config\" value:<struct_value:<fields:<key:\"name\" value:<string_value:\"local_route\" > > fields:<key:\"virtual_hosts\" value:<list_value:<> > > > > > fields:<key:\"stat_prefix\" value:<string_value:\"ingress_http\" > > fields:<key:\"tracing\" value:<struct_value:<fields:<key:\"operation_name\" value:<string_value:\"EGRESS\" > > > > > fields:<key:\"upgrade_configs\" value:<list_value:<values:<struct_value:<fields:<key:\"upgrade_type\" value:<string_value:\"websocket\" > > > > > > > > > > listener_filters:<name:\"envoy.listener.tls_inspector\" > ]}}"
time="2019-07-16T03:39:14Z" level=debug msg="cache controller synced"
time="2019-07-16T03:39:14Z" level=debug msg="starting cache controller: &{config:{Queue:0xc4202b20b0 ListerWatcher:0xc42010c9a0 Process:0xf6b290 ObjectType:0xc42028c2c0 FullResyncPeriod:60000000000 ShouldResync:<nil> RetryOnError:false} reflector:<nil> reflectorMutex:{w:{state:0 sema:0} writerSem:0 readerSem:0 readerCount:0 readerWait:0} clock:0x20d0af0}"
time="2019-07-16T03:39:15Z" level=debug msg="cache controller synced"

yggdrasil.json config:

  "nodeName": "foo",
  "ingressClasses": ["multi-cluster", "traefik"],
  "clusters": [
    {
      "token": "xxx1",
      "apiServer": "https://api.cluster1.preprod.com",
      "ca": "cluster1_ca.crt"
    },
    {
      "token": "xxx2",
      "apiServer": "https://api.cluster2.preprod.com",
      "ca": "cluster2_ca.crt"
    }
  ]
}

Envoy v1.10.0 config file:

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster
  cds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    hosts: [{ socket_address: { address: yggdrasil, port_value: 8080 }}]

I'm expecting to see the traefik-ui cluster/listener, but Envoy can't get it via discovery; only /yggdrasil/status was added.

Support networking.k8s.io ingresses

Kubernetes extensions/v1beta1 Ingress will be removed in 1.22.

In order to support all Kubernetes versions, Yggdrasil could try working with networking.k8s.io ingresses first before falling back to extensions/v1beta1 if the cluster doesn't have the new API capability.

Any thoughts on other ways to handle both versions?
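
For context, the example Ingress from the README would look roughly like this under the newer networking.k8s.io/v1 API (a sketch of the API shape only; whether Yggdrasil would match on ingressClassName or keep using the annotation is exactly the open question here):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-com
  namespace: default
  annotations:
    yggdrasil.uswitch.com/healthcheck-path: /healthz
spec:
  ingressClassName: multi-cluster
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              number: 80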

Errors on envoy startup with provided configuration

Using the latest envoy docker container:

root@f52a1a331546:/# /usr/local/bin/envoy --version
/usr/local/bin/envoy  version: 3cca9eea6befa5b300230a06516d8f9a46f519df/1.9.0-dev/Clean/RELEASE

Given the example configuration file from the yggdrasil README, I get the following error when starting up Envoy with it:

[2018-11-20 11:32:51.198][000007][critical][main] [source/server/server.cc:84] error initializing configuration '/etc/envoy/envoy.yaml': envoy::api::v2::core::ConfigSource::GRPC must not have a cluster name specified: api_type: GRPC
cluster_names: "xds_cluster"

This is the configuration I am using:

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [xds_cluster]
  cds_config:
    api_config_source:
      api_type: GRPC
      cluster_names: [xds_cluster]

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    hosts: [{ socket_address: { address: 10.132.0.21, port_value: 8080 }}]

Edit:
Tried this configuration change:

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster
  cds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: xds_cluster

And it seems to work. Maybe the example in the README has to be changed?

Ingress controllers under loadbalancer

Hi folks,

I was trying to test Yggdrasil to achieve load balancing across two k8s clusters. Since Yggdrasil uses the ingress controller's IP/hostname, I can't use my ELB here. Do we have any workaround for this scenario?

Environment:

Cloud: AWS
Clusters in East and West
Nginx ingress ASG under internal Classic ELB.
External DNS service will update Route53 from ingress rules.

envoy.yaml

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

dynamic_resources:
  lds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: dev
  cds_config:
    api_config_source:
      api_type: GRPC
      grpc_services:
        envoy_grpc:
          cluster_name: dev

static_resources:
  clusters:
  - name: dev
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    hosts: [{ socket_address: { address: 172.17.0.2, port_value: 8080 }}]

yggdrasil.conf

{ "nodeName": "k8s-envoy-agt-w2-1", "ingressClasses": ["nginx-internal"], "clusters": [ { "token": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "apiServer": "https://west.dev.master.kube.com:6443", "ca": "ca.crt" } ] }

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: envoy.dev.kube.com
    external-dns.alpha.kubernetes.io/target: internal-dev-k8s-ing-int-w2-xxxxxx.us-west-2.elb.amazonaws.com
    kubernetes.io/ingress.class: nginx-internal
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    yggdrasil.uswitch.com/healthcheck-path: /
    yggdrasil.uswitch.com/timeout: 30s
  name: hello-world
  namespace: default
spec:
  rules:
  - host: envoy.dev.kube.com
    http:
      paths:
      - backend:
          serviceName: hello-world
          servicePort: 80
        path: /
