AWS App Mesh

Introduction

App Mesh makes it easy to run microservices by providing consistent visibility and network traffic controls for every microservice in an application. App Mesh separates the logic needed for monitoring and controlling communications into a proxy that runs next to every microservice. App Mesh removes the need to coordinate across teams or update application code to change how monitoring data is collected or traffic is routed. This allows you to quickly pinpoint the exact location of errors and automatically re-route network traffic when there are failures or when code changes need to be deployed.

You can use App Mesh with AWS Fargate, Amazon Elastic Container Service (ECS), Amazon Elastic Container Service for Kubernetes (EKS), and Kubernetes on EC2 to better run containerized microservices at scale. App Mesh uses Envoy, an open source proxy, making it compatible with a wide range of AWS partner and open source tools for monitoring microservices.

Learn more at https://aws.amazon.com/app-mesh

Availability

Today, AWS App Mesh is generally available for production use with the compute services listed above.

Getting started

For help getting started with App Mesh, take a look at the examples in this repo.

ARM64 support

All the walkthrough examples in this repo currently support only amd64 Linux instances. arm64 is supported by aws-appmesh-envoy v1.20.0.1 or later and by the App Mesh controller v1.4.2 or later. We are working on updating these walkthroughs to be arm64 compatible as well. See #473 for more up-to-date information.

China Regions

All the examples and walkthroughs are written for commercial AWS regions. A few changes are needed to make them work in China regions:

  • Change ARNs: In China regions the partition is aws-cn, so ARNs start with 'arn:aws-cn:' instead of 'arn:aws:'. Replace 'arn:aws:' with 'arn:${AWS::Partition}:' to make templates work in all partitions.
  • Change endpoints: The endpoint domain for China regions is amazonaws.com.cn; replace amazonaws.com endpoints with amazonaws.com.cn. Refer to this doc for the list of cn-north-1 endpoints. Do not change Service Principals such as ecs-tasks.amazonaws.com; they are Service Principals, not endpoints.
  • Change TCP ports 80/8080/443: By default, AWS China accounts have TCP ports 80/8080/443 blocked for the EC2 and S3 services. These ports are unlocked once the customer provides an ICP license. As a workaround, use another port, for example 9090. The URL you curl must then include the port explicitly, for example: http://appme-.....us-west-2.elb.amazonaws.com.cn:9090/color
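
The ARN and endpoint substitutions above can be scripted. A minimal sketch, assuming your template is a local file (the file name and its contents here are illustrative, not from this repo):

```shell
# Create a toy template with a hard-coded partition and endpoint (illustrative).
cat > template.yaml <<'EOF'
TaskRoleArn: arn:aws:iam::123456789012:role/envoy-task-role
ManagementEndpoint: appmesh-envoy-management.us-west-2.amazonaws.com
EOF

# 1. Use the AWS::Partition pseudo parameter instead of a hard-coded 'aws',
#    so the template resolves to arn:aws-cn: in China regions.
sed -i.bak 's/arn:aws:/arn:${AWS::Partition}:/g' template.yaml

# 2. Switch endpoints to the China domain. Note a blanket substitution like
#    this could also hit Service Principals such as ecs-tasks.amazonaws.com,
#    which must stay unchanged -- review the diff and revert those by hand.
sed -i.bak 's/amazonaws\.com$/amazonaws.com.cn/' template.yaml

cat template.yaml
```

Always eyeball the resulting template before deploying; the second substitution is deliberately anchored to end-of-line to reduce accidental rewrites, not eliminate them.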

Roadmap

The AWS App Mesh team maintains a public roadmap.

Participate

If you have a suggestion, request, submission, or bug fix for the examples in this repo, please open it as an Issue.

If you have a feature request for AWS App Mesh, please open an Issue on the public roadmap.

Security disclosures

If you think you’ve found a potential security issue, please do not post it in the Issues. Instead, please follow the instructions here or email AWS security directly.

Why use App Mesh?

  1. Streamline operations by offloading communication management logic from application code and libraries into configurable infrastructure.
  2. Reduce troubleshooting time with end-to-end visibility into service-level logs, metrics, and traces across your application.
  3. Easily roll out new code by dynamically configuring routes to new application versions.
  4. Ensure high availability with custom routing rules that keep every service available during deployments, after failures, and as your application scales.
  5. Manage all service-to-service traffic using one set of APIs, regardless of how the services are implemented.

What makes AWS App Mesh unique?

AWS App Mesh is built in direct response to our customers' needs in implementing a service mesh for their applications. Our customers asked us to:

  • Make it easy to manage microservices deployed across accounts, clusters, container orchestration tools, and compute services with simple and consistent abstractions.
  • Minimize the cognitive and operational overhead in running a microservices application and handling its monitoring and traffic control.
  • Remove the need to build or operate a control plane for service mesh.
  • Use open source software to allow extension to new tools and different use cases.

To best meet the needs of our customers, we have invested in building a service with a control plane and API that follow AWS best practices. Specifically, App Mesh:

  • Is an AWS-managed service that works across container services, with a design that lets us add support for other compute services in the future.
  • Works with the open source Envoy proxy.
  • Is designed to be pluggable and will support bringing your own Envoy images and Istio Mixer in the future.
  • Is implemented as a multi-tenant control plane to be scalable, robust, cost-effective, and efficient.
  • Is built to work independently of any particular container orchestration system. Today, App Mesh works with both Kubernetes and Amazon ECS.

aws-app-mesh-examples's Issues

Unable to download the Envoy and proxy route manager images

Describe the bug
A clear and concise description of what the bug is.

Platform
EKS, ECS, EC2, etc.

To Reproduce
Steps to reproduce the behavior:

  1. Go to '...'
  2. Click on '....'
  3. Scroll down to '....'
  4. See error

Expected behavior
A clear and concise description of what you expected to happen.

Config files, and API responses
If applicable, config files and responses from our API.

Additional context
Add any other context about the problem here.
App Mesh Envoy container image: 111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.8.0.2-beta
App Mesh proxy route manager: 111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager:latest

App Mesh API Revision - Developer Preview

What we're changing
On March 7, 2019, the AWS App Mesh team will release a new, backward incompatible version of our APIs in the AWS SDKs. The primary change is the introduction of a new resource type, VirtualService, which replaces the use of ServiceNames in the current APIs. These changes are primarily focused on addressing potential confusion around ServiceNames in VirtualNodes and VirtualRouters (see #77).

A VirtualService in AWS App Mesh is an abstraction of the real service implementation behind it. Dependent services will call your VirtualService by its name (formerly the ServiceName), and you are free to implement how your VirtualService is provided to those dependencies through VirtualNodes and VirtualRouters.

A full description of changes is available in the FAQ below.

What we're asking
We're inviting developers to try out these new APIs and provide feedback before we officially release them as part of the AWS SDK. We would especially love to hear your feedback on whether you feel this is an improvement over the existing APIs from the standpoint of issue #77.

How can you give feedback?
You can leave feedback on this issue. Or, you can email your feedback to [email protected].

How can you test this?
You can try the new APIs by downloading the trial JSON CLI file and adding it to your existing AWS CLI via:

$ aws configure add-model \
    --service-name appmesh-trial \
    --service-model https://s3-us-west-2.amazonaws.com/aws-appmesh-cli-trials/app-mesh-2019-01-25.trial.json

Once you've added this model, you can begin using the new APIs on your existing mesh.

As a brief example, if you've set up the colorapp example from our aws-app-mesh-examples repository, you can use the new VirtualService API to determine where the ServiceName tcpecho.default.svc.cluster.local points in your mesh:

$ aws appmesh-trial describe-virtual-service \
    --mesh-name colorapp \
    --virtual-service-name tcpecho.default.svc.cluster.local
{
    "virtualService": {
        "status": {
            "status": "ACTIVE"
        },
        "metadata": {
            "createdAt": 1543434909.84,
            "version": 1,
            "arn": "arn:aws:appmesh:us-west-2:123456789012:mesh/colorapp/virtualService/tcpecho.default.svc.cluster.local",
            "lastUpdatedAt": 1543434909.84,
            "uid": "3a222219-fd1f-410b-93f8-852338307046"
        },
        "meshName": "colorapp",
        "virtualServiceName": "tcpecho.default.svc.cluster.local",
        "spec": {
            "provider": {
                "virtualNode": {
                    "virtualNodeName": "tcpecho-vn"
                }
            }
        }
    }
}

In the above example we can see that the VirtualService (formerly ServiceName) points to the VirtualNode named tcpecho-vn by listing it as the VirtualService provider. This means that requests sent to tcpecho.default.svc.cluster.local in the mesh will be routed to the tcpecho-vn VirtualNode. The provider can also be a VirtualRouter, as is the case for the colorteller.default.svc.cluster.local ServiceName.
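
If you want to script against that response, one approach is to save it and pull out the provider. A sketch, assuming the JSON above is saved as vs.json (the file name is illustrative, and python3 stands in for a JSON-aware tool such as jq):

```shell
# Recreate an abridged copy of the describe-virtual-service response above.
cat > vs.json <<'EOF'
{"virtualService": {"spec": {"provider": {"virtualNode": {"virtualNodeName": "tcpecho-vn"}}}}}
EOF

# Extract the provider's VirtualNode name from the spec.
python3 -c '
import json
spec = json.load(open("vs.json"))["virtualService"]["spec"]
print(spec["provider"]["virtualNode"]["virtualNodeName"])
'
# prints: tcpecho-vn
```

In practice the AWS CLI's built-in --query option (JMESPath) can do the same extraction without a separate parsing step.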

We would love to hear your feedback on whether you feel these changes clarify the use of ServiceNames in App Mesh. Let us know in the comments below!

Walkthrough

Using the existing colorapp example as a guide, let's walk through what setting up the mesh would look like in the new APIs. To simplify things, we'll ignore the tcpecho service in the existing example and focus just on colorgateway, colorteller, and how we connect them through the use of a VirtualRouter and VirtualService.

To start, we'll create the VirtualNodes for colorgateway and colorteller (white).

$ aws appmesh-trial create-mesh --mesh-name colorapp

$ aws appmesh-trial create-virtual-node \
    --mesh-name colorapp \
    --cli-input-json file://colorgateway-virtualnode.json
    
// colorgateway-virtualnode.json:
{
    "spec": {
        "listeners": [
            {
                "portMapping": {
                    "port": 9080,
                    "protocol": "http"
                }
            }
        ],
        "serviceDiscovery": {
            "dns": {
                "hostname": "colorgateway.default.svc.cluster.local"
            }
        },
        "backends": [
            {
                // Note: Although the VirtualService does not yet exist,
                // you can still specify it as a backend, just as you could
                // in the original APIs. When the VirtualService is created later,
                // App Mesh will make the necessary connections in your mesh.
                // ¯\_(ツ)_/¯
                "virtualService": {
                    "virtualServiceName": "colorteller.default.svc.cluster.local"
                }
            }
        ]
    },
    "virtualNodeName": "colorgateway-vn"
}

$ aws appmesh-trial create-virtual-node \
    --mesh-name colorapp \
    --cli-input-json file://colorteller-virtualnode.json

// colorteller-virtualnode.json:
{
    "spec": {
        "listeners": [
            {
                "portMapping": {
                    "port": 9080,
                    "protocol": "http"
                },
                "healthCheck": {
                    "protocol": "http",
                    "path": "/ping",
                    "healthyThreshold": 2,
                    "unhealthyThreshold": 2,
                    "timeoutMillis": 2000,
                    "intervalMillis": 5000
                }
            }
        ],
        "serviceDiscovery": {
            "dns": {
                "hostname": "colorteller.default.svc.cluster.local"
            }
        }
    },
    "virtualNodeName": "colorteller-vn"
}

Now we'll set up the VirtualRouter and an initial Route for the colorteller service.

$ aws appmesh-trial create-virtual-router \
    --mesh-name colorapp \
    --cli-input-json file://colorteller-virtualrouter.json
    
// colorteller-virtualrouter.json:
{
    "spec": {},
    "virtualRouterName": "colorteller-vr"
}

$ aws appmesh-trial create-route \
    --mesh-name colorapp \
    --virtual-router-name colorteller-vr \
    --cli-input-json file://colorteller-route.json
    
// colorteller-route.json:
{
    "routeName": "colorteller-route",
    "spec": {
        "httpRoute": {
            "action": {
                "weightedTargets": [
                    {
                        "virtualNode": "colorteller-vn",
                        "weight": 1
                    }
                ]
            },
            "match": {
                "prefix": "/"
            }
        }
    },
    "virtualRouterName": "colorteller-vr"
}

With the VirtualNodes, VirtualRouter, and Route set up, the final step is to create the VirtualService for the colorteller service and point it at the VirtualRouter by specifying the router as its provider.

$ aws appmesh-trial create-virtual-service \
    --mesh-name colorapp \
    --cli-input-json file://colorteller-virtualservice.json
    
// colorteller-virtualservice.json:
{
    "spec": {
        "provider": {
            "virtualRouter": {
                "virtualRouterName": "colorteller-vr"
            }
        }
    },
    "virtualServiceName": "colorteller.default.svc.cluster.local"
}

Once the VirtualService has been created, requests made to colorteller.default.svc.cluster.local from the colorgateway VirtualNode will now be routed by the colorteller VirtualRouter to the colorteller VirtualNode.
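
Before each create-* call, it can save a round trip to confirm the input file is valid JSON. A minimal sketch, recreating the colorteller-virtualservice.json from the walkthrough above:

```shell
# Recreate the VirtualService input file from the walkthrough.
cat > colorteller-virtualservice.json <<'EOF'
{
    "spec": {
        "provider": {
            "virtualRouter": {
                "virtualRouterName": "colorteller-vr"
            }
        }
    },
    "virtualServiceName": "colorteller.default.svc.cluster.local"
}
EOF

# json.tool exits non-zero on a parse error, so this catches typos
# before the AWS CLI ever sees the file.
python3 -m json.tool colorteller-virtualservice.json > /dev/null \
    && echo "colorteller-virtualservice.json: valid JSON"
```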

Frequently Asked Questions

Why are we introducing this change?

The existing App Mesh APIs use the concept of a ServiceName to bind specific resources (VirtualNodes, VirtualRouters) together into a traversable service graph. A ServiceName is a fully qualified domain name (FQDN) that a client references in network calls to other services. This is used to route calls to a specific resource within Envoy Proxy, which may be discoverable by a different FQDN.

Today, there are several scenarios in the current APIs where a change to one resource's ServiceName setting may adversely affect that resource, or a different resource, with no indication of a problem to the customer. Additionally, it was discovered during the Public Preview period that the use of ServiceNames in our APIs is difficult to reason about for many customers (see #49, #71).

These changes seek to resolve the potential confusion around ServiceNames by treating the ServiceName as a first-class resource in App Mesh, which we're calling a VirtualService.

What specifically is being changed?

VirtualService

This new resource type is being introduced to formalize the existing concept of a ServiceName and provide clarity on how the entity is used within App Mesh.

This new resource type is used to point a ServiceName (e.g. “service-a.mesh.local”) to a specific networking resource (VirtualRouter, VirtualNode). Today this is done implicitly through fields on VirtualNodes and VirtualRouters. A VirtualService's name (formerly a ServiceName) points to a specific VirtualNode or VirtualRouter by way of the provider field in its spec.

This new resource type also allows customers to discover which ServiceNames exist in the mesh (i.e. aws appmesh-trial list-virtual-services), and where a particular ServiceName points. Finally, it allows a single actor to own a ServiceName and control how it is implemented through resource-based authorization in AWS Identity and Access Management.

VirtualNode

  • The spec.backends field is being changed from a list of Strings to a list of structured objects to allow for additional upcoming features in AWS App Mesh. For example, egress policies like retries, circuit breakers, and timeouts will be available here.
// Current:
{"spec": {
    "backends": [
        "colorteller.default.svc.cluster.local"
    ]
}}

// Future:
{"spec": {
    "backends": [
        {
            "virtualService": {
                "virtualServiceName": "colorteller.default.svc.cluster.local"
            }   
        }
    ]
}}
  • The spec.serviceDiscovery.dns.serviceName field is being renamed to spec.serviceDiscovery.dns.hostname and will no longer be used as a ServiceName. The original field was used as both a ServiceName and for DNS discovery. In the new API, this field will only be used for DNS discovery. Instead of specifying an applicable ServiceName here, you can now create your VirtualService by the same name and set the VirtualNode as its provider.
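
The backends change described above is mechanical, so it can be scripted. A hedged sketch (file names are illustrative, and python3 is used for the JSON rewriting):

```shell
# Old-style spec: backends as a list of plain ServiceName strings.
cat > old-spec.json <<'EOF'
{"spec": {"backends": ["colorteller.default.svc.cluster.local"]}}
EOF

# Wrap each string backend in the new structured virtualService object.
python3 - <<'EOF'
import json

with open("old-spec.json") as f:
    doc = json.load(f)

doc["spec"]["backends"] = [
    {"virtualService": {"virtualServiceName": name}}
    for name in doc["spec"]["backends"]
]

with open("new-spec.json", "w") as f:
    json.dump(doc, f, indent=4)
EOF

cat new-spec.json
```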

VirtualRouter

  • The spec.serviceNames field is being removed. Instead of specifying applicable ServiceNames on the VirtualRouter, you can now create your VirtualService by the same name and set the VirtualRouter as its provider.

Will existing meshes still work?

Yes. Existing meshes will continue to work after the introduction of the new APIs.

Can I use the new APIs to see my existing meshes?

Yes. The new APIs were designed to work with the existing data in AWS App Mesh, and we encourage you to use the new APIs to get better clarity on how your existing mesh is configured.

Can I still use the existing APIs?

Yes. The new APIs are compatible with the existing APIs and data, so you are free to use either API during the trial period. Once the trial period is over, VirtualService will replace ServiceName.

unable to verify curl on color-gateway

We hosted a Kubernetes cluster on a single node and deployed colorapp.yaml to get the app running, but we failed to get the desired results.

root@curler-65d5db4cd9-mz9c2:/# bash
root@curler-65d5db4cd9-mz9c2:/# echo ${SERVICES_DOMAIN}
default.svc.cluster.local
root@curler-65d5db4cd9-mz9c2:/# curl -s http://colorgateway.${SERVICES_DOMAIN}:9080/color
root@curler-65d5db4cd9-mz9c2:/# curl -v -s http://colorgateway.${SERVICES_DOMAIN}:9080/color

  • Hostname was NOT found in DNS cache
  • Trying 10.96.163.227...
  • connect to 10.96.163.227 port 9080 failed: Connection timed out
  • Failed to connect to colorgateway.default.svc.cluster.local port 9080: Connection timed out
  • Closing connection 0

We used a proxy configuration but were unable to resolve the hostname.

Network Error (dns_unresolved_hostname)

Your requested host "colorgateway.default.svc.cluster.local" could not be resolved by DNS.

[BUG] Deploy service to ECS stuck at PRE_INITIALIZING state

Describe the bug
Gateway and ColorTeller services are stuck in the PENDING state. Envoy won't become healthy because it is stuck in the PRE_INITIALIZING state.

Platform
ECS

To Reproduce
Follow the instructions in the colorapp README until step "Deploy services to ECS"

Expected behavior
Gateway and ColorTeller services in RUNNING state.

Config files, and API responses
server_info

 bash-4.2$ curl http://localhost:9901/server_info
{
 "version": "ae8c8aa036e58e39b3d2fba81f5bdc4683a30682/1.9.0/Clean/DEBUG/BoringSSL",
 "state": "PRE_INITIALIZING",
 "command_line_options": {
  "base_id": "0",
  "concurrency": 2,
  "config_path": "/tmp/envoy.yaml",
  "config_yaml": "",
  "allow_unknown_fields": false,
  "admin_address_path": "",
  "local_address_ip_version": "v4",
  "log_level": "debug",
  "component_log_level": "",
  "log_format": "[%Y-%m-%d %T.%e][%t][%l][%n] %v",
  "log_path": "",
  "hot_restart_version": false,
  "service_cluster": "",
  "service_node": "",
  "service_zone": "",
  "mode": "Serve",
  "max_stats": "16384",
  "max_obj_name_len": "500",
  "disable_hot_restart": false,
  "enable_mutex_tracing": false,
  "restart_epoch": 0,
  "file_flush_interval": "10s",
  "drain_time": "600s",
  "parent_shutdown_time": "900s"
 },
 "uptime_current_epoch": "3046s",
 "uptime_all_epochs": "3046s"
}

envoy.yaml

bash-4.2$ cat /tmp/envoy.yaml
admin:
  access_log_path: /tmp/envoy_admin_access.log
  # Provides access to: http://<envoy hostname>:9901/config_dump
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

node:
    id: mesh/appmesh-mesh/virtualNode/colorteller-black-vn
    cluster: mesh/appmesh-mesh/virtualNode/colorteller-black-vn

dynamic_resources:
  # Configure Envoy to get listeners and clusters via GRPC ADS
  ads_config:
    api_type: GRPC
    grpc_services:
      google_grpc:
        target_uri: appmesh-envoy-management.us-west-2.amazonaws.com:443
        stat_prefix: ads
        channel_credentials:
          ssl_credentials:
            root_certs:
              filename: /etc/pki/tls/cert.pem
        credentials_factory_name: envoy.grpc_credentials.aws_iam
        call_credentials:
          from_plugin:
            name: envoy.grpc_credentials.aws_iam
            config:
              region: us-west-2
              service_name: appmesh
  lds_config: {ads: {}}
  cds_config: {ads: {}}


tracing:
  http:
    name: envoy.xray
    config:
      daemon_endpoint: "127.0.0.1:2000"

stats_config:
  stats_tags:
    - tag_name: "appmesh.mesh"
      fixed_value: "appmesh-mesh"
    - tag_name: "appmesh.virtual_node"
      fixed_value: "colorteller-black-vn"

Additional context
Tested in us-west-2 and us-east-1 region.

[BUG] Links in walkthroughs/eks are out of date

Describe the bug
I followed the instructions in https://github.com/aws/aws-app-mesh-examples/blob/master/walkthroughs/eks/base.md, which installed older versions of the controller and sidecar injector. This writeup needs updates, or it can be deleted if all its workflows are covered by https://github.com/aws/aws-app-mesh-examples/tree/master/walkthroughs/howto-k8s-cloudmap

Platform
EKS etc.

Expected behavior
I expect the latest version of the controller, with the ability to use all the latest features.

Config files, and API responses
Creating Cloud Map-based service discovery was failing.

Additional context
Add any other context about the problem here.

Template validation errors when running ecs-colorapp.sh

Describe the bug
When trying to run the ecs-colorapp.sh script, I get the following errors:
Unknown parameter in input: "proxyConfiguration", must be one of: family, taskRoleArn, executionRoleArn, networkMode, containerDefinitions, volumes, placementConstraints, requiresCompatibilities, cpu, memory, tags, pidMode, ipcMode

Unknown parameter in containerDefinitions[0]: "dependsOn", must be one of: name, image, repositoryCredentials, cpu, memory, memoryReservation, links, portMappings, essential, entryPoint, command, environment, mountPoints, volumesFrom, linuxParameters, secrets, hostname, user, workingDirectory, disableNetworking, privileged, readonlyRootFilesystem, dnsServers, dnsSearchDomains, extraHosts, dockerSecurityOptions, interactive, pseudoTerminal, dockerLabels, ulimits, logConfiguration, healthCheck, systemControls

Is there a way to work around this?
Thanks

Platform
MacOS
awscli-1.16.143
botocore-1.12.133

To Reproduce
Steps to reproduce the behavior:

  1. define required environment variables
  2. follow instructions up to running ecs-colorapp.sh

Expected behavior
The colorapp service should be deployed.

Config files, and API responses
Template validation error

Additional context
Add any other context about the problem here.

[BUG] annotation 'appmesh.k8s.aws/mesh' does not work

Describe the bug
I created a service mesh stack based on the samples in the eksworkshop [https://eksworkshop.com/servicemesh_with_appmesh/], but this stack was created in a different App Mesh mesh than the default one used during injector configuration ("dj-app" for "APPMESH_NAME" in the file .../2_create_injector/inject.yaml). All the resources were created successfully, but when I entered the dj container and executed "curl http://jazz.prod.svc.cluster.local:9080/;echo;", I got the error "curl: (7) Failed to connect to jazz.dj-mesh-ns.svc.cluster.local port 9080: Connection refused".

Platform
EKS 1.14

To Reproduce
Steps to reproduce the behavior:

  1. kubectl apply -f dj-app-all.yaml
  2. kubectl get all -n dj-mesh-ns

NAME READY STATUS RESTARTS AGE
pod/dj-79fdb9d967-fzgtv 2/2 Running 0 16h
pod/jazz-v1-5cdbbdd469-trcsk 2/2 Running 0 16h
pod/metal-v1-ffdf7cc54-mkc5n 2/2 Running 0 16h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dj ClusterIP 10.100.106.91 9080/TCP 16h
service/jazz ClusterIP 10.100.4.141 9080/TCP 16h
service/jazz-v1 ClusterIP 10.100.116.162 9080/TCP 16h
service/metal ClusterIP 10.100.19.226 9080/TCP 16h
service/metal-v1 ClusterIP 10.100.150.52 9080/TCP 16h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dj 1/1 1 1 16h
deployment.apps/jazz-v1 1/1 1 1 16h
deployment.apps/metal-v1 1/1 1 1 16h

NAME DESIRED CURRENT READY AGE
replicaset.apps/dj-79fdb9d967 1 1 1 16h
replicaset.apps/jazz-v1-5cdbbdd469 1 1 1 16h
replicaset.apps/metal-v1-ffdf7cc54 1 1 1 16h

NAME AGE
virtualnode.appmesh.k8s.aws/dj 17h
virtualnode.appmesh.k8s.aws/jazz 17h
virtualnode.appmesh.k8s.aws/jazz-v1 17h
virtualnode.appmesh.k8s.aws/metal 17h
virtualnode.appmesh.k8s.aws/metal-v1 17h

NAME AGE
virtualservice.appmesh.k8s.aws/jazz.dj-mesh-ns.svc.cluster.local 16h
virtualservice.appmesh.k8s.aws/metal.dj-mesh-ns.svc.cluster.local 16h

  3. kubectl exec -n dj-mesh-ns -it dj-79fdb9d967-fzgtv -c dj bash
    root@dj-79fdb9d967-fzgtv:/usr/src/app# curl jazz.dj-mesh-ns.svc.cluster.local:9080
    curl: (7) Failed to connect to jazz.dj-mesh-ns.svc.cluster.local port 9080: Connection refused
    root@dj-79fdb9d967-fzgtv:/usr/src/app#

Expected behavior
I should be able to successfully get a result like:
curl http://jazz.prod.svc.cluster.local:9080/;echo;
["Astrud Gilberto","Miles Davis"]

Config files, and API responses
The content of "dj-app-all.yaml" file is:
dj-app-all.yaml.zip

Additional context
I followed all the prerequisites required in the eksworkshop for App Mesh above, and I could successfully run the end-to-end sample using the default mesh "dj-app" from the injector creation process, but it failed with the same sample content in the new mesh scenario.

Remove external Docker repository references from examples

The Cloud Map How-To references external builds of the color apps and Envoy proxy:

I don't think we should be doing this long-term since it references images we or a customer cannot easily update or control.

We should be following the other examples and having the demo push images to a personal ECR repository.

[BUG] can't add 4 virtual nodes in examples/apps/colorapp/servicemesh/appmesh-colorapp.yaml

Tasks

Edit https://github.com/aws/aws-app-mesh-examples/blob/master/examples/apps/colorapp/servicemesh/appmesh-colorapp.yaml#L126-L132 to add an additional black virtual node, so there are four in total.

  ColorTellerRoute:
    Type: AWS::AppMesh::Route
    DependsOn:
      - ColorTellerVirtualRouter
      - ColorTellerWhiteVirtualNode
      - ColorTellerRedVirtualNode
      - ColorTellerBlueVirtualNode
    Properties:
      MeshName: !Ref AppMeshMeshName
      VirtualRouterName: colorteller-vr
      RouteName: colorteller-route
      Spec:
        HttpRoute:
          Action:
            WeightedTargets:
              - VirtualNode: colorteller-white-vn
                Weight: 1
              - VirtualNode: colorteller-blue-vn
                Weight: 1
              - VirtualNode: colorteller-red-vn
                Weight: 1
              - VirtualNode: colorteller-black-vn
                Weight: 1
          Match:
            Prefix: "/"

Describe the bug

Property validation failure: [Number of items for property {/Spec/HttpRoute/Action/WeightedTargets} is greater than maximum allowed items {3}]

Platform
ECS

To Reproduce

Described above

Expected behavior

It works fine when adding new virtual nodes manually in the AWS console, but it can't be deployed via CloudFormation.

Question: DNS name for a virtual-router with single route

If you have a single route in the virtual-router, should its DNS name match the DNS name of the virtual service?

For example,
In https://github.com/aws/aws-app-mesh-examples/blob/master/examples/apps/colorapp/servicemesh/appmesh-colorapp.yaml

For all colours,

ColorTellerBlackVirtualNode:ServiceDiscovery:DNS:Hostname = !Sub "colorteller-black.${ServicesDomain}"
ColorTellerBlueVirtualNode:ServiceDiscovery:DNS:Hostname = !Sub "colorteller-blue.${ServicesDomain}"
ColorTellerRedVirtualNode:ServiceDiscovery:DNS:Hostname = !Sub "colorteller-red.${ServicesDomain}"

but white,
ColorTellerWhiteVirtualNode:ServiceDiscovery:DNS:Hostname = !Sub "colorteller.${ServicesDomain}"
(No -white??)

Can someone explain?

[BUG] Many errors when using mesh provisioning script

Describe the bug
The example script complains about numerous errors, usually NotFoundException. From the script it looks like many resources are created in rapid succession; are these operations asynchronous, and am I running into a race condition here?

Platform
ECS

[peter@nefilim colorapp]$ aws --version
aws-cli/1.16.80 Python/3.7.2 Darwin/18.2.0 botocore/1.12.70

To Reproduce
./servicemesh/deploy.sh

Expected behavior
Expect the script to complete without errors

Config files, and API responses
Here's the script output showing the errors

when inspecting some of the resources immediately after I see the following:

[peter@nefilim colorapp]$ aws appmesh list-meshes
{
    "meshes": [
        {
            "arn": "arn:aws:appmesh:us-west-2:574097476646:mesh/peteram",
            "meshName": "peteram"
        }
    ]
}
[peter@nefilim colorapp]$ aws appmesh list-virtual-nodes --mesh-name peteram
{
    "virtualNodes": [
        {
            "arn": "arn:aws:appmesh:us-west-2:574097476646:mesh/peteram/virtualNode/tcpecho-vn",
            "meshName": "peteram",
            "virtualNodeName": "tcpecho-vn"
        },
        {
            "arn": "arn:aws:appmesh:us-west-2:574097476646:mesh/peteram/virtualNode/colorteller-blue-vn",
            "meshName": "peteram",
            "virtualNodeName": "colorteller-blue-vn"
        },
        {
            "arn": "arn:aws:appmesh:us-west-2:574097476646:mesh/peteram/virtualNode/colorteller-red-vn",
            "meshName": "peteram",
            "virtualNodeName": "colorteller-red-vn"
        },
        {
            "arn": "arn:aws:appmesh:us-west-2:574097476646:mesh/peteram/virtualNode/colorteller-black-vn",
            "meshName": "peteram",
            "virtualNodeName": "colorteller-black-vn"
        },
        {
            "arn": "arn:aws:appmesh:us-west-2:574097476646:mesh/peteram/virtualNode/colorteller-vn",
            "meshName": "peteram",
            "virtualNodeName": "colorteller-vn"
        },
        {
            "arn": "arn:aws:appmesh:us-west-2:574097476646:mesh/peteram/virtualNode/colorgateway-vn",
            "meshName": "peteram",
            "virtualNodeName": "colorgateway-vn"
        }
    ]
}
[peter@nefilim colorapp]$ aws appmesh describe-virtual-node --mesh-name peteram --virtual-node-name colorgateway-vn
{
    "virtualNode": {
        "meshName": "peteram",
        "metadata": {
            "arn": "arn:aws:appmesh:us-west-2:574097476646:mesh/peteram/virtualNode/colorgateway-vn",
            "createdAt": 1546980949.923,
            "lastUpdatedAt": 1546980949.923,
            "uid": "95954b22-5bcd-48cb-aadc-2ebcebaac10b",
            "version": 1
        },
        "spec": {
            "backends": [
                "tcpecho.default.svc.cluster.local",
                "colorteller.default.svc.cluster.local"
            ],
            "listeners": [
                {
                    "portMapping": {
                        "port": 9080,
                        "protocol": "http"
                    }
                }
            ],
            "serviceDiscovery": {
                "dns": {
                    "serviceName": "colorgateway.default.svc.cluster.local"
                }
            }
        },
        "status": {
            "status": "ACTIVE"
        },
        "virtualNodeName": "colorgateway-vn"
    }
}

It looks reasonable enough, but I'm not familiar with this service yet, so I'm not really sure what to look for. Are some of these resources missing updates?
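If the create calls are indeed eventually consistent, the race can be masked by polling until each resource reports ACTIVE before moving on. A minimal sketch of such a helper — `wait_until_active` and the fake fetcher are illustrative, not part of the example scripts:

```python
# Poll a status-fetching callable until the resource is ACTIVE, treating
# "not found" as "not visible yet" (the analogue of the CLI's
# NotFoundException in an eventually consistent control plane).
import time


def wait_until_active(fetch_status, attempts=30, delay=1.0):
    """Retry fetch_status() until it returns 'ACTIVE' or attempts run out."""
    for _ in range(attempts):
        try:
            if fetch_status() == "ACTIVE":
                return True
        except LookupError:  # stand-in for NotFoundException
            pass
        time.sleep(delay)
    return False


# Demo with a fake fetcher that is "not found" twice, then ACTIVE:
calls = {"n": 0}


def fake_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise LookupError("resource not found yet")
    return "ACTIVE"


ready = wait_until_active(fake_fetch, attempts=10, delay=0)
```

In the real script, `fetch_status` would wrap something like `aws appmesh describe-virtual-node` and read `status.status` from the response.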

[BUG] TcpProxyValidationError caused by ClusterWeightValidationError: Weight must be greater than or equal to '\x01'

Describe the bug
I've been following the "App Mesh with EKS" walkthrough but with a few modifications to accommodate my scenario:

gateway service -> http mesh service 1 -> grpc mesh service 2

so the 2 mesh services behave similarly to colorteller in the example. It seems that the gateway service can properly reach the http service in the mesh based on the weighted routes I define, but neither virtual node backend of that service seems able to communicate with the grpc mesh service over tcp.

Platform
EKS

Config files, and API responses
Logs of myapp1 pod envoy container:

[2019-06-19 16:35:20.947][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:132] cm init: initializing cds
[2019-06-19 16:35:20.948][000001][info][config] [source/server/configuration_impl.cc:67] loading 0 listener(s)
[2019-06-19 16:35:20.948][000001][info][config] [source/server/configuration_impl.cc:92] loading tracing configuration
[2019-06-19 16:35:20.948][000001][info][config] [source/server/configuration_impl.cc:101]   loading tracing driver: envoy.xray
[2019-06-19 16:35:20.948][000001][info][tracing] [source/extensions/tracers/xray/xray_tracer_impl.cc:95] send X-Ray generated segments to daemon address on 127.0.0.1:2000
[2019-06-19 16:35:20.948][000001][info][tracing] [source/extensions/tracers/xray/sampling.cc:114] unable to parse empty json file. falling back to default rule set.
[2019-06-19 16:35:20.948][000001][info][config] [source/server/configuration_impl.cc:112] loading stats sink configuration
[2019-06-19 16:35:20.949][000001][info][main] [source/server/server.cc:463] starting main dispatch loop
[2019-06-19 16:35:21.120][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:495] add/update cluster cds_ingress_demo_myapp1-green-default_http_80 during init
[2019-06-19 16:35:21.123][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:495] add/update cluster cds_egress_demo_amazonaws during init
[2019-06-19 16:35:21.123][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:136] cm init: all clusters initialized
[2019-06-19 16:35:21.123][000001][info][main] [source/server/server.cc:435] all clusters initialized. initializing init manager
[2019-06-19 16:35:21.129][000001][info][upstream] [source/server/lds_api.cc:80] lds: add/update listener 'lds_ingress_0.0.0.0_15000'
[2019-06-19 16:35:21.130][000001][info][upstream] [source/server/lds_api.cc:80] lds: add/update listener 'lds_egress_0.0.0.0_15001'
[2019-06-19 16:35:21.132][000001][info][config] [source/server/listener_manager_impl.cc:961] all dependencies initialized. starting workers
[2019-06-19 16:35:21.132][000025][info][tracing] [source/extensions/tracers/xray/sampling.cc:114] unable to parse empty json file. falling back to default rule set.
[2019-06-19 16:35:21.133][000026][info][tracing] [source/extensions/tracers/xray/sampling.cc:114] unable to parse empty json file. falling back to default rule set.
[2019-06-19 16:39:07.717][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:501] add/update cluster cds_egress_demo_myapp2-beta-default_tcp_50051 starting warming
[2019-06-19 16:39:07.721][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:501] add/update cluster cds_egress_demo_myapp2-blue-default_tcp_50051 starting warming
[2019-06-19 16:39:07.725][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:501] add/update cluster cds_egress_demo_myapp2-green-default_tcp_50051 starting warming
[2019-06-19 16:39:07.749][000001][warning][config] [bazel-out/k8-fastbuild/bin/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:70] gRPC config for type.googleapis.com/envoy.api.v2.Listener rejected: Error adding/updating listener lds_egress_0.0.0.0_15001: Proto constraint validation failed (TcpProxyValidationError.WeightedClusters: ["embedded message failed validation"] | caused by WeightedClusterValidationError.Clusters[i]: ["embedded message failed validation"] | caused by ClusterWeightValidationError.Weight: ["value must be greater than or equal to " '\x01']): stat_prefix: "egress"
weighted_clusters {
  clusters {
    name: "cds_egress_demo_myapp2-beta-default_tcp_50051"
  }
  clusters {
    name: "cds_egress_demo_myapp2-blue-default_tcp_50051"
  }
  clusters {
    name: "cds_egress_demo_myapp2-green-default_tcp_50051"
    weight: 1
  }
}

[2019-06-19 16:39:07.750][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:513] warming cluster cds_egress_demo_myapp2-blue-default_tcp_50051 complete
[2019-06-19 16:39:07.751][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:513] warming cluster cds_egress_demo_myapp2-green-default_tcp_50051 complete
[2019-06-19 16:39:07.751][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:513] warming cluster cds_egress_demo_myapp2-beta-default_tcp_50051 complete

It seems weight: 0 is dropped (treated as unset) before the tcp proxy validation? The VirtualService and VirtualNode manifests are configured the same way for both, yet they only seem to work with the http protocol:

myapp1

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
  name: myapp1-beta
spec:
  backends:
  - virtualService:
      virtualServiceName: myapp2-beta
  listeners:
  - portMapping:
      port: 80
      protocol: http
  meshName: demo
  serviceDiscovery:
    dns:
      hostName: myapp1-beta.default.svc.cluster.local
---
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: myapp1-beta
spec:
  meshName: demo
  routes:
  - http:
      action:
        weightedTargets:
        - virtualNodeName: myapp1-beta
          weight: 0
        - virtualNodeName: myapp1-green
          weight: 1
        - virtualNodeName: myapp1-blue
          weight: 0
      match:
        prefix: /
    name: flask-route

myapp2

apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
  name: myapp2-beta
spec:
  listeners:
  - portMapping:
      port: 50051
      protocol: tcp
  meshName: demo
  serviceDiscovery:
    dns:
      hostName: myapp2-beta.default.svc.cluster.local
---
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: myapp2-beta
spec:
  meshName: demo
  routes:
  - name: grpc-route
    tcp:
      action:
        weightedTargets:
        - virtualNodeName: myapp2-beta
          weight: 0
        - virtualNodeName: myapp2-green
          weight: 1
        - virtualNodeName: myapp2-blue
          weight: 0

Any ideas what might be wrong here? Thanks.

UPDATE: When I set weight in the tcp VirtualService to any non-zero value, it passes the validation and the request gets processed successfully.
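The UPDATE above is consistent with proto3 serialization semantics, where a scalar field set to its default value (0) is omitted on the wire, so by the time the TCP proxy config is validated the weight looks unset and the "must be >= 1" constraint fires. A plain-dict sketch of that round trip (dicts stand in for the real protobuf messages; this is an illustration of the hypothesis, not App Mesh source code):

```python
# Mimic proto3's handling of scalar fields: values equal to the default
# (0 for integers) are not emitted, so a downstream validator cannot tell
# "weight: 0" apart from "weight missing".

def proto3_emit(message):
    """Drop integer fields whose value equals the proto3 default (0)."""
    return {k: v for k, v in message.items()
            if not (isinstance(v, int) and v == 0)}


def validate_cluster_weight(emitted):
    """Mimic ClusterWeightValidationError: weight must be >= 1."""
    return emitted.get("weight", 0) >= 1


zero = proto3_emit(
    {"name": "cds_egress_demo_myapp2-beta-default_tcp_50051", "weight": 0})
one = proto3_emit(
    {"name": "cds_egress_demo_myapp2-green-default_tcp_50051", "weight": 1})
```

This matches the Envoy log above, where only the `weight: 1` cluster carries a weight field at all.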

Information Request: Virtual Router serviceName

Hi

I saw issue #49, but it's still not very clear to me (I'm also new to ECS).

What exactly does spec.serviceNames refer to? From this example, it appears to refer to a DNS record created by the ECS Service Discovery

What is the spec.serviceNames used for internally?

Since Routes can reference multiple discrete Virtual Nodes, i.e. multiple discrete ECS Services, does this mean one needs to update the Virtual Router's spec.serviceNames to include all the services referenced in the associated routes?
This doesn't appear to be the case: after updating the routes with the above-linked routes (which refer to both the black & blue services), the Virtual Router still just refers to:

        "spec": {
            "serviceNames": [
                "colorteller.peteram.svc.cluster.local"
            ]
        },

which actually doesn't correspond to either of the Virtual Nodes referenced in the routes? So it seems somewhat meaningless?

Would be great to expand the documentation to specify exactly and verbosely what each element refers to.

Thanks!

Switch to wildcard DNS entry for service names in example

Because applications still require some DNS entry to resolve to an IP before they will make a request that envoy can proxy, the examples end up duplicating names across service discovery and router service names/backends. This leads to confusion, because it implies there is some coordination between the two when there isn't.

We can fix our example apps by creating a private hosted zone for "#{meshName}.appmesh" with a wildcard entry to any IP. VirtualRouters will register on "colorgateway.#{meshName}.appmesh" and "colorteller.#{meshName}.appmesh" and the actual VNode service discovery can use existing kubedns/CloudMap names. I would expect this would be how we recommend customers configure things as well until we have a better solution for the DNS hole.
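The proposed record can be expressed as a Route 53 change batch. A sketch — the zone name suffix ".appmesh", the mesh name "demo", and the 10.0.0.1 target are illustrative; the proposal only requires that the wildcard resolve to *some* IP so applications get past DNS resolution and Envoy can intercept the request:

```python
# Build a Route 53 ChangeBatch that creates a wildcard A record for a mesh's
# private hosted zone. The target IP is arbitrary: Envoy proxies the request
# before the connection to that IP would matter.

def wildcard_change_batch(mesh_name, any_ip="10.0.0.1"):
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": f"*.{mesh_name}.appmesh.",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": any_ip}],
                },
            }
        ]
    }


batch = wildcard_change_batch("demo")
# Apply with (requires boto3, credentials, and a private hosted zone):
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="...", ChangeBatch=batch)
```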

djapp "connection refused" error

I've been going over the djapp walkthrough and have done it 3 times.

Platform
EKS

To Reproduce
Steps to reproduce the behavior:

  1. Deploy app and virtual nodes and services following the guide
  2. Get into the djapp pod and try to access the Jazz or Metal virtual services with the command curl jazz.prod.svc.cluster.local:9080;echo
  3. Got the message curl: (7) Failed to connect to jazz.prod.svc.cluster.local port 9080: Connection refused

Expected behavior
Expect the example to work

[BUG] aws-appmesh-envoy not pulling X-Ray sampling rules from managed service

Describe the bug
It looks like the aws-appmesh-envoy container is having difficulty pulling sampling rules from the managed X-Ray service. I have added my custom sampling rules to X-Ray, but they aren't being applied. I'm seeing these in the log:

[2019-08-19 14:55:03.045][000023][info][tracing] [source/extensions/tracers/xray/sampling.cc:114] unable to parse empty json file. falling back to default rule set.
[2019-08-19 14:55:03.045][000024][info][tracing] [source/extensions/tracers/xray/sampling.cc:114] unable to parse empty json file. falling back to default rule set.

Platform
ECS

To Reproduce
Set ENABLE_ENVOY_XRAY_TRACING environment variable to 1 for the aws-appmesh-envoy container

Expected behavior
The custom sampling rules should be fetched from X-Ray, with different (or none of) the above messages in the log.

Config files, and API responses
task-definition

    {
        "name": "envoy",
        "image": "111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.1.0-prod",
        "cpu": 64,
        "memory": 128,
        "memoryReservation": 64,
        "portMappings": [
            {
                "protocol": "tcp",
                "containerPort": 9901
            },
            {
                "protocol": "tcp",
                "containerPort": 15000
            },
            {
                "protocol": "tcp",
                "containerPort": 15001
            }
        ],
        "essential": true,
        "environment": [
            {
                "name": "APPMESH_VIRTUAL_NODE_NAME",
                "value": "mesh/appmesh/virtualNode/service1-node"
            },
            {
                "name": "ENABLE_ENVOY_XRAY_TRACING",
                "value": "1"
            }
        ],
        "healthCheck": {
            "command": [
                "CMD-SHELL",
                "curl -s http://localhost:9901/server_info | grep state | grep -q LIVE"
            ],
            "interval": 5,
            "timeout": 2,
            "retries": 3
        },
        "user": "1337",
        "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
                "awslogs-group": "/appmesh/service1/envoy",
                "awslogs-region": "us-west-2"
            }
        }
    }

[BUG] EKS ASG desired capacity is set to maximum launching 20 instances

Describe the bug
The autoscaling group set up for the EKS cluster sets the desired capacity to the max parameter, which also makes it easy to hit the region's instance limit.

Platform
EKS

To Reproduce
Steps to reproduce the behavior:

  1. Launch EKS cluster template through 'examples/infrastructure/eks-cluster.yaml'
  2. The 'NodeGroup' auto scaling group will attempt to create 20 c4.large instances

Expected behavior
An initial desired capacity of 2 or 3 would reduce the cost of running the sample and avoid exhausting the account's instance capacity within the region.

Question: Egress traffic from ECS Tasks

Hi

I have noticed that none of my ECS Tasks created with envoy & aws-appmesh-proxy-route-manager containers as per documentation are able to make any outbound TCP connection, not to the internet, nor to other resources within the same VPC:

bash-4.4# telnet www.google.com 80
telnet: can't connect to remote host (172.217.14.228): Connection refused
bash-4.4# telnet nitroclouddevseed.cbvidc7yzeub.us-west-2.rds.amazonaws.com 80
telnet: can't connect to remote host (10.0.109.153): Connection refused

I've tried adding the RDS endpoint as a backend in the virtual node:

{
    "spec": {
        "listeners": [
            {
                "portMapping": {
                    "port": 8080,
                    "protocol": "http"
                },
                "healthCheck": {
                    "protocol": "http",
                    "path": "/health",
                    "healthyThreshold": 2,
                    "unhealthyThreshold": 2,
                    "timeoutMillis": 2000,
                    "intervalMillis": 5000
                }
            }
        ],
        "serviceDiscovery": {
            "dns": {
                "serviceName": "dummy-service.peter.svc.cluster.local"
            }
        },
        "backends": [
            "nitroclouddevseed.cbvidc7yzeub.us-west-2.rds.amazonaws.com"
        ]
    },
    "virtualNodeName": "dummy-service-vn"
}

but that didn't seem to help.

The only way I could get egress traffic was to add the specific IP to this list:

            - Name: "APPMESH_EGRESS_IGNORED_IP"
              Value: { Ref: AppMeshEgressIgnoredIpCsv }

Clearly, that's not a scalable solution for external dependencies, especially not dynamic ones.

What is the best way to allow egress TCP traffic to external dependencies?

Thanks
Peter


Configuring stats sinks

First off, I'm not super well versed in Envoy or its configuration, so apologies if this has an obvious answer. I'd like to configure the App Mesh Envoy deployment to use the envoy.dog_statsd sink to pull out Envoy metrics. However, I am not sure how to do this with App Mesh, specifically using the colorapp example as a sandbox. Is this a configuration lever that I have access to when using App Mesh? Looking at generate_templates.sh I don't see any way of inserting config, and I don't see anything in the App Mesh APIs to add it to the dynamic config either. Thanks!

External Routing (HTTP/TCP with TLS)

I can't find it in the documentation, but is it possible to route to an external service? Meaning: I have App Mesh in place and the container/microservice needs to connect to an externally managed service like GitLab. I added the URL as a backend to the virtual node, but whenever it attempts to connect I get this error:

ENVOY LOG:
original_dst: New connection accepted
[2018-12-27 22:55:11.526][18][debug][filter] source/extensions/filters/listener/tls_inspector/tls_inspector.cc:73] tls inspector: new connection accepted
[2018-12-27 22:55:11.526][18][debug][filter] source/extensions/filters/listener/tls_inspector/tls_inspector.cc:126] tls:onServerName(), requestedServerName:
[2018-12-27 22:55:11.526][18][debug][main] source/server/connection_handler_impl.cc:193] closing connection: no matching filter chain found

POD LOG:
Error: initializing server: Get https://api/v4//version: read tcp 10.100.2.185:37252->:443: read: connection reset by peer

Connecting to this service works if I'm not using appmesh.

[Question] Load balancing ecs service multiple tasks with app mesh

We are trying out app-mesh. Our solution looks like the following:

Mesh Ingress (ALB) -> Service A -> Service B -> Mesh Egress -> External Service

In this setup, Service A and Service B run on Fargate with a desired count of 2 each. These ECS services have Cloud Map configuration enabled. We have verified that the traffic flows through the envoy sidecar for both Service A and Service B.

When we hit the ALB for ingress, the traffic is load-balanced to Service A by the ALB, but the traffic from Service A always hits the same single task in Service B. We are unable to load-balance traffic from Service A's envoy proxy to Service B. The following is our understanding of the problem so far:

  • As each service has multiple tasks running, all of the tasks get registered in the Cloud Map service registry under the same DNS name
  • This creates a couple of DNS A records in Route 53 per service. It's worth mentioning that the ECS service discovery options always create a multivalue DNS record in Route 53 if you don't select an existing service discovery name.
  • We can see in the envoy admin config dump for Service A that the cluster config for Service B has LOGICAL_DNS as its type
  • Now when Service A's envoy proxy receives an egress request for Service B, it resolves the DNS name via Route 53 to the first IP in the list (as documented for LOGICAL_DNS in the envoy documentation). This single IP of Task-1 is always targeted for outgoing requests, which is why Task-2 for Service B never receives a request.

It would be helpful to understand why it is not working for us and also to get some ideas about recommended practices around this pattern.
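The observed behavior matches the difference between Envoy's DNS cluster types: a LOGICAL_DNS cluster connects to the first resolved address, while a STRICT_DNS cluster load-balances across every resolved endpoint. A sketch of the two strategies over a fixed IP list (the IPs are illustrative; real resolution would come from Route 53):

```python
# Contrast LOGICAL_DNS-style vs STRICT_DNS-style endpoint selection over a
# multivalue DNS answer, as described in the bullets above.
from itertools import cycle


def logical_dns_targets(resolved_ips, requests):
    """LOGICAL_DNS: every connection goes to the first resolved address."""
    return [resolved_ips[0] for _ in range(requests)]


def strict_dns_targets(resolved_ips, requests):
    """STRICT_DNS: round-robin across all resolved addresses."""
    rr = cycle(resolved_ips)
    return [next(rr) for _ in range(requests)]


ips = ["10.0.1.11", "10.0.1.12"]  # two Cloud Map-registered tasks
logical = logical_dns_targets(ips, 4)  # one task absorbs all traffic
strict = strict_dns_targets(ips, 4)   # both tasks share traffic
```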

[BUG] Can't Setup EKS Cluster

Describe the bug
I set up an EKS cluster, but the cluster has no worker nodes.

Platform
EKS

To Reproduce
Setup Infrastructure according to examples.

  1. Export environment variables.
  2. Setup Infrastructure. Make sure status of CloudFormation stack is CREATE_COMPLETE.
$ ./infrastructure/vpc.sh create-stack
$ ./infrastructure/mesh.sh 
$ ./infrastructure/eks-cluster.sh create-stack
  3. Prepare to use the EKS Cluster according to Getting Started with Amazon EKS.
$ aws eks update-kubeconfig --name ${ENVIRONMENT_NAME}
$ kubectl apply -f aws-auth-cm.yaml
  4. Try to find a k8s worker node.
$ kubectl get node
No resources found.

Expected behavior
I expect to see some k8s worker nodes.

Config files, and API responses
None

Additional context
I was able to set up the EKS cluster completely by updating the EKS-optimized AMI from ami-0a0b913ef3249b655 to ami-0440e4f6b9713faf6 in us-east-1. I think the AMI referenced in the template is probably out of date.

[FEATURE] Add Listeners to Virtual Routers

What we're changing

To provide a safer and more understandable experience when using Virtual Routers, we're proposing the addition of Virtual Router Listeners to the Virtual Router Spec. Instead of using target Virtual Nodes in a route to generate listeners in Envoy configuration, customers will define listener ports on the Virtual Router itself.

With this new feature, you can now abstract the port clients talk to from the port Envoy sends traffic to on the destinations. This operates similarly to load balancers like ALB and NLB, where you define a distinct listener and then bind targets to it.

In the CLI, this will look like

$ aws appmesh create-virtual-router \
    --mesh-name example-mesh \
    --cli-input-json file://virtualrouter.json
    
// virtualrouter.json:
{
    "virtualRouterName": "example-router",
    "spec": {
        "listeners": [
            {
                "port": 80,
                "protocol": "http"
            }
        ]
    }
}

This would configure a Virtual Router that listens on port 80 and applies all of its HTTP routes for matching. When an HTTP route matches, Envoy forwards traffic to an endpoint in the target Virtual Node on the Virtual Node's listener port.

Let's relate this back to the Color Teller example. If we made this modification

$ aws appmesh update-virtual-router \
    --mesh-name colorapp \
    --cli-input-json file://colorteller-virtualrouter.json
    
// colorteller-virtualrouter.json:
{
    "virtualRouterName": "colorteller-vr",
    "spec": {
        "listeners": [
            {
                "port": 80,
                "protocol": "http"
            }
        ]
    }
}

we can now have clients talk directly on port 80 to the colorteller router, abstracting away the actual destination port (9080) from clients.

As part of our plan to revise our API ahead of GA (#92 ), we would like to make this a required field on all Virtual Routers created via the new API. For backwards compatibility, calls using the preview API will continue to work and generate equivalent Envoy configuration, and all existing Virtual Routers will continue to work as before.

Why This is Important

We believe that the existing experience leads to several sharp edges in the API that can and will cause customer confusion, especially as their routes grow in number and complexity. I will explain the two most important scenarios below.

Blackholed Routes

Let's suppose you define the following Route

aws appmesh describe-route --mesh-name test-mesh --virtual-router-name test-router --route-name test-route
{
    "route": {
        "status": {
            "status": "ACTIVE"
        },
        "meshName": "test-mesh",
        "virtualRouterName": "test-router",
        "routeName": "test-route",
        "spec": {
            "httpRoute": {
                "action": {
                    "weightedTargets": [
                        {
                            "virtualNode": "vn-1", // Listens on port 8080
                            "weight": 1
                        },
                        {
                            "virtualNode": "vn-2", // Listens on port 8081
                            "weight": 1
                        }
                    ]
                },
                "match": {
                    "prefix": "/"
                }
            }
        }
    }
}

Because the two target Virtual Nodes have different listener ports, App Mesh will fail to generate a route in the Envoy configuration (See bug #93 for more details). This is because the listener ports generated in the Envoy configuration come from the target Virtual Nodes, and there is no obvious choice as to which Virtual Node listener port to use.

With listeners on the Virtual Router, Virtual Node listener ports are abstracted away from clients and this route will be safely materialized.

Non-Obvious Routing behavior

Let's suppose you define the following two routes

aws appmesh describe-route --mesh-name test-mesh --virtual-router-name test-router --route-name test-route-1
{
    "route": {
        "status": {
            "status": "ACTIVE"
        },
        "meshName": "test-mesh",
        "virtualRouterName": "test-router",
        "routeName": "test-route-1",
        "spec": {
            "httpRoute": {
                "action": {
                    "weightedTargets": [
                        {
                            "virtualNode": "vn-1", // Listens on port 8080
                            "weight": 1
                        }
                    ]
                },
                "match": {
                    "prefix": "/"
                }
            }
        }
    }
}

aws appmesh describe-route --mesh-name test-mesh --virtual-router-name test-router --route-name test-route-2
{
    "route": {
        "status": {
            "status": "ACTIVE"
        },
        "meshName": "test-mesh",
        "virtualRouterName": "test-router",
        "routeName": "test-route-2",
        "spec": {
            "httpRoute": {
                "action": {
                    "weightedTargets": [
                        {
                            "virtualNode": "vn-2", // Listens on port 8081
                            "weight": 1
                        }
                    ]
                },
                "match": {
                    "prefix": "/internal"
                }
            }
        }
    }
}

At first glance, you would expect that all traffic to "/internal" would go to targets defined by vn-2, and all other traffic would go to vn-1. However, where traffic goes additionally depends on the destination port.

  1. When requests are sent to /internal on port 8080, requests go to vn-1
  2. When requests are sent to /internal on port 8081, requests go to vn-2
  3. When requests are sent to / on port 8080, requests go to vn-1
  4. When requests are sent to / on port 8081, requests are blackholed

With listeners on the Virtual Router, the target Virtual Node listener ports are no longer used for initial route matching in Envoy. Traffic to the router would always route requests matching prefix "/internal" to vn-2, and all other traffic would flow to vn-1.
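The four outcomes above can be sketched as a small matcher. This is an illustration of the described pre-listener behavior (route tables keyed by the target Virtual Node's port), not App Mesh source code:

```python
# Model the current behavior: Envoy's listener port comes from each route's
# target Virtual Node, so a route only matches when BOTH the request port
# and the path prefix line up; anything else is blackholed (returns None).
ROUTES = [
    # (listener port from target node, prefix, target virtual node)
    (8080, "/", "vn-1"),          # test-route-1
    (8081, "/internal", "vn-2"),  # test-route-2
]


def resolve(port, path):
    """Return the target Virtual Node, or None if traffic is blackholed."""
    candidates = [(prefix, vn) for p, prefix, vn in ROUTES
                  if p == port and path.startswith(prefix)]
    if not candidates:
        return None
    # Longest prefix wins among the routes on this port.
    return max(candidates, key=lambda c: len(c[0]))[1]
```

Running it reproduces the enumerated behavior: requests to /internal follow the port, not the prefix, and / on port 8081 matches nothing at all.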

In the future, support for multiple listeners on a Virtual Router and port matching rules on routes could be added to re-enable now disallowed routing logic. Importantly, customers would be explicitly configuring these routes rather than receiving them implicitly.

Question: X-Ray sidecar with Fargate

I've followed the instructions in the colorapp example to adopt App Mesh and X-Ray in our existing service running on Fargate. Everything else seems fine, but we cannot get any data into X-Ray. Looking at the x-ray container logs, we may have found the culprit:

[Error] Get instance id metadata failed: RequestError: send request failed
caused by: Get http://169.254.169.254/latest/meta-data/instance-id: dial tcp 169.254.169.254:80: connect: invalid argument

It seems that the x-ray container tries to fetch EC2 instance metadata but cannot. I wonder if this is because we are running the service on Fargate rather than on ECS on EC2.

[BUG] Liveness/readiness probes not working

Describe the bug
When using App Mesh, if a pod has liveness/readiness probes, they fail, since the probes are configured against the pod IP.

Platform
EKS

To Reproduce
Steps to reproduce the behavior:
Create a pod with liveness and readiness probes enabled.
Deploy appmesh-inject and appmesh-system
Recreate the pod and observe that the health checks fail and the pod doesn't come up

Expected behavior
A way to use liveness/readiness probes

Additional context
A bonus would be if the appmesh controller could configure health checks on virtual nodes automatically by reading the deployment definition.

[BUG] App Mesh setup failing

Describe the bug
I am going through the App Mesh on EKS tutorial at https://eksworkshop.com/servicemesh_with_appmesh/. I am running into an issue at this step: https://eksworkshop.com/servicemesh_with_appmesh/port_to_app_mesh/create_the_mesh/.

Platform
EKS 1.11 or 1.12

To Reproduce
Steps to reproduce the behavior:

  1. Have EKS Cluster up and running.
  2. Follow steps at https://eksworkshop.com/servicemesh_with_appmesh/
  3. Error at this step https://eksworkshop.com/servicemesh_with_appmesh/port_to_app_mesh/create_the_mesh/.

execute this --> kubectl create -f 4_create_initial_mesh_components/mesh.yaml

Error from server (NotFound): error when creating "4_create_initial_mesh_components/mesh.yaml": the server could not find the requested resource (post meshs.appmesh.k8s.aws)

Any help or pointers you can provide will be greatly appreciated.

Expected behavior
Expect the script to complete without errors

[BUG] ColorGateway connection refused

Describe the bug
Following https://github.com/aws/aws-app-mesh-examples/blob/master/walkthroughs/eks/base.md

I am unable to perform the curl command from curler to the gateway due to a connection refused error.

I am using us-west-2
Platform
EKS

To Reproduce
I have executed the steps on the walkthrough.

Expected behavior
I expect the curl command to come back with OK.
The current curl command response is as follows:

curl: (7) Failed to connect to colorgateway port 9080: Connection refused

Config files, and API responses

```
root@curler2-58cdb99d46-j8zsl:/# ping colorgateway
PING colorgateway.appmesh-demo.svc.cluster.local (10.100.251.99) 56(84) bytes of data.
^C
--- colorgateway.appmesh-demo.svc.cluster.local ping statistics ---
34 packets transmitted, 0 received, 100% packet loss, time 33776ms

SINC02W54EDHTD6:aws-app-mesh-controller-for-k8s andalak$ kubectl get pods -n appmesh-demo
NAME                                 READY   STATUS    RESTARTS   AGE
colorgateway-69cd4fc669-cpkcg        3/3     Running   0          129m
colorteller-845959f54-gkjr6          3/3     Running   0          129m
colorteller-black-6cc98458db-mppn6   3/3     Running   0          129m
colorteller-blue-88bcffddb-r5d59     3/3     Running   0          129m
colorteller-red-6f55b447db-9mxgh     3/3     Running   0          129m
curler-5875dfcc64-49mwf              1/1     Running   1          120m
curler2-58cdb99d46-j8zsl             1/1     Running   3          66m
```

API Gateway Envoy Logs:

```
SINC02W54EDHTD6:aws-app-mesh-controller-for-k8s andalak$ kubectl -n appmesh-demo logs colorgateway-69cd4fc669-cpkcg envoy
Did not find Envoy configuration file at /tmp/envoy.yaml, creating one.
added envoy.xray tracing config to /tmp/envoy_tracing_config.yaml
Appending /tmp/envoy_tracing_config.yaml to /tmp/envoy.yaml
Appending /tmp/envoy_stats_config.yaml to /tmp/envoy.yaml
Starting Envoy.
[2019-04-26 12:51:53.319][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws_Init_Cleanup Initiate AWS SDK for C++ with Version:1.6.39
[2019-04-26 12:51:53.319][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] CurlHttpClient Initializing Curl library
[2019-04-26 12:51:53.319][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::AWSConfigFileProfileConfigLoader Initializing config loader against fileName //.aws/config and using profilePrefix = 1
[2019-04-26 12:51:53.319][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::AWSConfigFileProfileConfigLoader Unable to open config file //.aws/config for reading.
[2019-04-26 12:51:53.319][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::AWSProfileConfigLoader Failed to reload configuration.
[2019-04-26 12:51:53.327][000001][info][main] [source/server/server.cc:206] initializing epoch 0 (hot restart version=10.200.16384.567.options=capacity=16384, num_slots=8209 hash=228984379728933363 size=9863272)
[2019-04-26 12:51:53.327][000001][info][main] [source/server/server.cc:208] statically linked extensions:
[2019-04-26 12:51:53.327][000001][info][main] [source/server/server.cc:210]   access_loggers: envoy.file_access_log,envoy.http_grpc_access_log
[2019-04-26 12:51:53.327][000001][info][main] [source/server/server.cc:213]   filters.http: envoy.buffer,envoy.cors,envoy.ext_authz,envoy.fault,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.rbac,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2019-04-26 12:51:53.327][000001][info][main] [source/server/server.cc:216]   filters.listener: envoy.listener.original_dst,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2019-04-26 12:51:53.327][000001][info][main] [source/server/server.cc:219]   filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2019-04-26 12:51:53.327][000001][info][main] [source/server/server.cc:221]   stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2019-04-26 12:51:53.327][000001][info][main] [source/server/server.cc:223]   tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.xray,envoy.zipkin
[2019-04-26 12:51:53.327][000001][info][main] [source/server/server.cc:226]   transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2019-04-26 12:51:53.327][000001][info][main] [source/server/server.cc:229]   transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.capture,raw_buffer,tls
[2019-04-26 12:51:53.335][000001][info][main] [source/server/server.cc:271] admin address: 0.0.0.0:9901
[2019-04-26 12:51:53.338][000001][info][config] [source/server/configuration_impl.cc:50] loading 0 static secret(s)
[2019-04-26 12:51:53.338][000001][info][config] [source/server/configuration_impl.cc:56] loading 0 cluster(s)
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::AWSConfigFileProfileConfigLoader Initializing config loader against fileName //.aws/config and using profilePrefix = 1
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::AWSConfigFileProfileConfigLoader Initializing config loader against fileName //.aws/credentials and using profilePrefix = 0
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] ProfileConfigFileAWSCredentialsProvider Setting provider to read credentials from //.aws/credentials for credentials file and //.aws/config for the config file , for use with profile default
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] EC2MetadataClient Creating AWSHttpResourceClient with max connections2 and scheme http
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] CurlHandleContainer Initializing CurlHandleContainer with size 2
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] InstanceProfileCredentialsProvider Creating Instance with default EC2MetadataClient and refresh rate 300000
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] DefaultAWSCredentialsProviderChain Added EC2 metadata service credentials provider to the provider chain.
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::AWSConfigFileProfileConfigLoader Unable to open config file //.aws/credentials for reading.
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::AWSProfileConfigLoader Failed to reload configuration.
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::AWSConfigFileProfileConfigLoader Unable to open config file //.aws/config for reading.
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::AWSProfileConfigLoader Failed to reload configuration.
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] InstanceProfileCredentialsProvider Credentials have expired attempting to repull from EC2 Metadata Service.
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] CurlHandleContainer Pool grown by 2
[2019-04-26 12:51:53.340][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] CurlHandleContainer Connection has been released. Continuing.
[2019-04-26 12:51:53.341][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] CurlHandleContainer Connection has been released. Continuing.
[2019-04-26 12:51:53.343][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::EC2InstanceProfileConfigLoader Successfully pulled credentials from metadata service with access key ASIAYFZHB6A2OCGTODFH
[2019-04-26 12:51:53.343][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] CurlHandleContainer Connection has been released. Continuing.
[2019-04-26 12:51:53.345][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] EC2MetadataClient Detected current region as us-west-2
[2019-04-26 12:51:53.345][000001][info][misc] [source/common/aws/aws_sdk_config.cc:66] Aws::Config::AWSProfileConfigLoader Successfully reloaded configuration.
[2019-04-26 12:51:53.346][000001][info][upstream] [source/common/upstream/cluster_manager_impl.cc:132] cm init: initializing cds
[2019-04-26 12:51:53.346][000001][info][config] [source/server/configuration_impl.cc:67] loading 0 listener(s)
[2019-04-26 12:51:53.346][000001][info][config] [source/server/configuration_impl.cc:92] loading tracing configuration
[2019-04-26 12:51:53.346][000001][info][config] [source/server/configuration_impl.cc:101]   loading tracing driver: envoy.xray
[2019-04-26 12:51:53.346][000001][info][tracing] [source/extensions/tracers/xray/xray_tracer_impl.cc:95] send X-Ray generated segments to daemon address on 127.0.0.1:2000
[2019-04-26 12:51:53.346][000001][info][tracing] [source/extensions/tracers/xray/sampling.cc:114] unable to parse empty json file. falling back to default rule set.
[2019-04-26 12:51:53.347][000001][info][config] [source/server/configuration_impl.cc:112] loading stats sink configuration
[2019-04-26 12:51:53.347][000001][info][main] [source/server/server.cc:463] starting main dispatch loop
SINC02W54EDHTD6:aws-app-mesh-controller-for-k8s andalak$ kubectl -n appmesh-demo logs colorgateway-69cd4fc669-cpkcg colorgateway
2019/04/26 12:51:46 starting server, listening on port 9080
2019/04/26 12:51:46 using color-teller at colorteller.appmesh-demo:9080
```

**Additional context**
Here is the full `kubectl describe pods -n appmesh-demo` output:
Name:               colorgateway-69cd4fc669-cpkcg
Namespace:          appmesh-demo
Priority:           0
PriorityClassName:  <none>
Node:               ip-192-168-13-9.us-west-2.compute.internal/192.168.13.9
Start Time:         Fri, 26 Apr 2019 20:51:35 +0800
Labels:             app=colorgateway
                    pod-template-hash=69cd4fc669
                    version=v1
Annotations:        <none>
Status:             Running
IP:                 192.168.7.186
Controlled By:      ReplicaSet/colorgateway-69cd4fc669
Init Containers:
  proxyinit:
    Container ID:   docker://d9522182a2d9d2402e9a2748f220da972f926d8817c6a78aa287168929dc5114
    Image:          111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager:latest
    Image ID:       docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager@sha256:a055da31668a5dc6e68da49c4a8217726d8437e2a94ce6bb6a15abfdcbb1e925
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 26 Apr 2019 20:51:43 +0800
      Finished:     Fri, 26 Apr 2019 20:51:43 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      APPMESH_START_ENABLED:       1
      APPMESH_IGNORE_UID:          1337
      APPMESH_ENVOY_INGRESS_PORT:  15000
      APPMESH_ENVOY_EGRESS_PORT:   15001
      APPMESH_APP_PORTS:           9080
      APPMESH_EGRESS_IGNORED_IP:   169.254.169.254
    Mounts:                        <none>
Containers:
  colorgateway:
    Container ID:   docker://62afda07fe5974d17f184f95975af9e8926b3508e3194ca05aa047796175190e
    Image:          970805265562.dkr.ecr.us-west-2.amazonaws.com/gateway:latest
    Image ID:       docker-pullable://970805265562.dkr.ecr.us-west-2.amazonaws.com/gateway@sha256:2a81ac9f74a20a02e32b7f56848eeeb99d0c5d47c73d92e484dd7875ed057780
    Port:           9080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:46 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      SERVER_PORT:            9080
      COLOR_TELLER_ENDPOINT:  colorteller.appmesh-demo:9080
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-22pk6 (ro)
  envoy:
    Container ID:   docker://6ad12dfd8ff63771b713355d35403219df025a0c15587b1c673d128777172394
    Image:          111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.0.0-prod
    Image ID:       docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy@sha256:33e19cf3106b2ccb1ccc3f1d28b7e5b965d640f2a17a5c5564780720f63f258f
    Port:           9901/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:53 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  32Mi
    Environment:
      APPMESH_VIRTUAL_NODE_NAME:  mesh/color-mesh/virtualNode/colorgateway-appmesh-demo
      ENVOY_LOG_LEVEL:            info
      AWS_REGION:                 us-west-2
      ENABLE_ENVOY_XRAY_TRACING:  1
      ENABLE_ENVOY_STATS_TAGS:    1
    Mounts:                       <none>
  xray-daemon:
    Container ID:   docker://ceb3d11fef37a4981752b1f0aff6dbde85da50122922d6376259b8b63b989cff
    Image:          amazon/aws-xray-daemon
    Image ID:       docker-pullable://amazon/aws-xray-daemon@sha256:0f2270a1aa8e02acd735d0ec053b4aa554dbee3fc90614617e85509f8168663e
    Port:           2000/UDP
    Host Port:      0/UDP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:57 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     32Mi
    Environment:  <none>
    Mounts:       <none>
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-22pk6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-22pk6
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>


Name:               colorteller-845959f54-gkjr6
Namespace:          appmesh-demo
Priority:           0
PriorityClassName:  <none>
Node:               ip-192-168-78-215.us-west-2.compute.internal/192.168.78.215
Start Time:         Fri, 26 Apr 2019 20:51:37 +0800
Labels:             app=colorteller
                    pod-template-hash=845959f54
                    version=white
Annotations:        <none>
Status:             Running
IP:                 192.168.73.201
Controlled By:      ReplicaSet/colorteller-845959f54
Init Containers:
  proxyinit:
    Container ID:   docker://ed1d4850078a517ff3e47e4ada7936f0677f5e7b464c10dd031d97376d73af68
    Image:          111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager:latest
    Image ID:       docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager@sha256:a055da31668a5dc6e68da49c4a8217726d8437e2a94ce6bb6a15abfdcbb1e925
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 26 Apr 2019 20:51:45 +0800
      Finished:     Fri, 26 Apr 2019 20:51:45 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      APPMESH_START_ENABLED:       1
      APPMESH_IGNORE_UID:          1337
      APPMESH_ENVOY_INGRESS_PORT:  15000
      APPMESH_ENVOY_EGRESS_PORT:   15001
      APPMESH_APP_PORTS:           9080
      APPMESH_EGRESS_IGNORED_IP:   169.254.169.254
    Mounts:                        <none>
Containers:
  colorteller:
    Container ID:   docker://1ec509fbbe9c1b4ff5916af8e4e4cecb0e8a77f8ddd12785e1aba13e2e8dba6a
    Image:          970805265562.dkr.ecr.us-west-2.amazonaws.com/colorteller:latest
    Image ID:       docker-pullable://970805265562.dkr.ecr.us-west-2.amazonaws.com/colorteller@sha256:2c292abca87af64ad5380b6e1e3f621a982e75a0c848656dd5286531994b119a
    Port:           9080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:47 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      SERVER_PORT:  9080
      COLOR:        white
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-22pk6 (ro)
  envoy:
    Container ID:   docker://e50d408ef38bfd67cf73a7b9bdc0215c2af6452e8f547ce95be78fabfad1f482
    Image:          111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.0.0-prod
    Image ID:       docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy@sha256:33e19cf3106b2ccb1ccc3f1d28b7e5b965d640f2a17a5c5564780720f63f258f
    Port:           9901/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:55 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  32Mi
    Environment:
      APPMESH_VIRTUAL_NODE_NAME:  mesh/color-mesh/virtualNode/colorteller-appmesh-demo
      ENVOY_LOG_LEVEL:            info
      AWS_REGION:                 us-west-2
      ENABLE_ENVOY_XRAY_TRACING:  1
      ENABLE_ENVOY_STATS_TAGS:    1
    Mounts:                       <none>
  xray-daemon:
    Container ID:   docker://6c0bd345d8f75de07111f8f7599563e8d77511eae692a28186f8b9a66a5898d5
    Image:          amazon/aws-xray-daemon
    Image ID:       docker-pullable://amazon/aws-xray-daemon@sha256:0f2270a1aa8e02acd735d0ec053b4aa554dbee3fc90614617e85509f8168663e
    Port:           2000/UDP
    Host Port:      0/UDP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:52:00 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     32Mi
    Environment:  <none>
    Mounts:       <none>
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-22pk6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-22pk6
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>


Name:               colorteller-black-6cc98458db-mppn6
Namespace:          appmesh-demo
Priority:           0
PriorityClassName:  <none>
Node:               ip-192-168-13-9.us-west-2.compute.internal/192.168.13.9
Start Time:         Fri, 26 Apr 2019 20:51:38 +0800
Labels:             app=colorteller
                    pod-template-hash=6cc98458db
                    version=black
Annotations:        <none>
Status:             Running
IP:                 192.168.12.200
Controlled By:      ReplicaSet/colorteller-black-6cc98458db
Init Containers:
  proxyinit:
    Container ID:   docker://d5bf1eeb97829e5118886fb5a8b7c239901cb690d7b58c83a766377f4b8507a2
    Image:          111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager:latest
    Image ID:       docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager@sha256:a055da31668a5dc6e68da49c4a8217726d8437e2a94ce6bb6a15abfdcbb1e925
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 26 Apr 2019 20:51:43 +0800
      Finished:     Fri, 26 Apr 2019 20:51:43 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      APPMESH_START_ENABLED:       1
      APPMESH_IGNORE_UID:          1337
      APPMESH_ENVOY_INGRESS_PORT:  15000
      APPMESH_ENVOY_EGRESS_PORT:   15001
      APPMESH_APP_PORTS:           9080
      APPMESH_EGRESS_IGNORED_IP:   169.254.169.254
    Mounts:                        <none>
Containers:
  colorteller:
    Container ID:   docker://f47a9d55beb649fb7ff0d4753b48d8cd2efac9e838831d9033b46cf868801c4f
    Image:          970805265562.dkr.ecr.us-west-2.amazonaws.com/colorteller:latest
    Image ID:       docker-pullable://970805265562.dkr.ecr.us-west-2.amazonaws.com/colorteller@sha256:2c292abca87af64ad5380b6e1e3f621a982e75a0c848656dd5286531994b119a
    Port:           9080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:45 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      SERVER_PORT:  9080
      COLOR:        black
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-22pk6 (ro)
  envoy:
    Container ID:   docker://bdbff9db3e81d2e8581776382b2bb7b103e27646da55bf7f383cf07a1ef899e7
    Image:          111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.0.0-prod
    Image ID:       docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy@sha256:33e19cf3106b2ccb1ccc3f1d28b7e5b965d640f2a17a5c5564780720f63f258f
    Port:           9901/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:53 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  32Mi
    Environment:
      APPMESH_VIRTUAL_NODE_NAME:  mesh/color-mesh/virtualNode/colorteller-black-appmesh-demo
      ENVOY_LOG_LEVEL:            info
      AWS_REGION:                 us-west-2
      ENABLE_ENVOY_XRAY_TRACING:  1
      ENABLE_ENVOY_STATS_TAGS:    1
    Mounts:                       <none>
  xray-daemon:
    Container ID:   docker://0cbbdb51d5ee4bbc73ed6bc612597a9bd20423c8fae6124ffa0626c8f12d633c
    Image:          amazon/aws-xray-daemon
    Image ID:       docker-pullable://amazon/aws-xray-daemon@sha256:0f2270a1aa8e02acd735d0ec053b4aa554dbee3fc90614617e85509f8168663e
    Port:           2000/UDP
    Host Port:      0/UDP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:56 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     32Mi
    Environment:  <none>
    Mounts:       <none>
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-22pk6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-22pk6
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>


Name:               colorteller-blue-88bcffddb-r5d59
Namespace:          appmesh-demo
Priority:           0
PriorityClassName:  <none>
Node:               ip-192-168-78-215.us-west-2.compute.internal/192.168.78.215
Start Time:         Fri, 26 Apr 2019 20:51:40 +0800
Labels:             app=colorteller
                    pod-template-hash=88bcffddb
                    version=blue
Annotations:        <none>
Status:             Running
IP:                 192.168.75.251
Controlled By:      ReplicaSet/colorteller-blue-88bcffddb
Init Containers:
  proxyinit:
    Container ID:   docker://079f45d867648a99287a5aa7358fbb8a366583141e303d4538c08ea0f80d8a9f
    Image:          111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager:latest
    Image ID:       docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager@sha256:a055da31668a5dc6e68da49c4a8217726d8437e2a94ce6bb6a15abfdcbb1e925
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 26 Apr 2019 20:51:45 +0800
      Finished:     Fri, 26 Apr 2019 20:51:45 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      APPMESH_START_ENABLED:       1
      APPMESH_IGNORE_UID:          1337
      APPMESH_ENVOY_INGRESS_PORT:  15000
      APPMESH_ENVOY_EGRESS_PORT:   15001
      APPMESH_APP_PORTS:           9080
      APPMESH_EGRESS_IGNORED_IP:   169.254.169.254
    Mounts:                        <none>
Containers:
  colorteller:
    Container ID:   docker://7da40e404871970650fcef74f84611e69c173828edd923c6d7a6c0f8226b2083
    Image:          970805265562.dkr.ecr.us-west-2.amazonaws.com/colorteller:latest
    Image ID:       docker-pullable://970805265562.dkr.ecr.us-west-2.amazonaws.com/colorteller@sha256:2c292abca87af64ad5380b6e1e3f621a982e75a0c848656dd5286531994b119a
    Port:           9080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:47 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      SERVER_PORT:  9080
      COLOR:        blue
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-22pk6 (ro)
  envoy:
    Container ID:   docker://a958fd9e85555e7642429c97394a5993156f2c47caa2d7b7eb933041c4cf4c86
    Image:          111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.0.0-prod
    Image ID:       docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy@sha256:33e19cf3106b2ccb1ccc3f1d28b7e5b965d640f2a17a5c5564780720f63f258f
    Port:           9901/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:55 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  32Mi
    Environment:
      APPMESH_VIRTUAL_NODE_NAME:  mesh/color-mesh/virtualNode/colorteller-blue-appmesh-demo
      ENVOY_LOG_LEVEL:            info
      AWS_REGION:                 us-west-2
      ENABLE_ENVOY_XRAY_TRACING:  1
      ENABLE_ENVOY_STATS_TAGS:    1
    Mounts:                       <none>
  xray-daemon:
    Container ID:   docker://48527440706d8a2ff65abce59d8c67aaed3c7602ab88ce7c10587069ef8290b2
    Image:          amazon/aws-xray-daemon
    Image ID:       docker-pullable://amazon/aws-xray-daemon@sha256:0f2270a1aa8e02acd735d0ec053b4aa554dbee3fc90614617e85509f8168663e
    Port:           2000/UDP
    Host Port:      0/UDP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:59 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     32Mi
    Environment:  <none>
    Mounts:       <none>
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-22pk6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-22pk6
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>


Name:               colorteller-red-6f55b447db-9mxgh
Namespace:          appmesh-demo
Priority:           0
PriorityClassName:  <none>
Node:               ip-192-168-78-215.us-west-2.compute.internal/192.168.78.215
Start Time:         Fri, 26 Apr 2019 20:51:42 +0800
Labels:             app=colorteller
                    pod-template-hash=6f55b447db
                    version=red
Annotations:        <none>
Status:             Running
IP:                 192.168.67.134
Controlled By:      ReplicaSet/colorteller-red-6f55b447db
Init Containers:
  proxyinit:
    Container ID:   docker://e73ab4b2de338f3a80dd896e7aeae4e06de6576281cf841c8709f1b8e1c2f925
    Image:          111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager:latest
    Image ID:       docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager@sha256:a055da31668a5dc6e68da49c4a8217726d8437e2a94ce6bb6a15abfdcbb1e925
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 26 Apr 2019 20:51:45 +0800
      Finished:     Fri, 26 Apr 2019 20:51:45 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      APPMESH_START_ENABLED:       1
      APPMESH_IGNORE_UID:          1337
      APPMESH_ENVOY_INGRESS_PORT:  15000
      APPMESH_ENVOY_EGRESS_PORT:   15001
      APPMESH_APP_PORTS:           9080
      APPMESH_EGRESS_IGNORED_IP:   169.254.169.254
    Mounts:                        <none>
Containers:
  colorteller:
    Container ID:   docker://079e59d3572e54a14723ae2c464ae1633185313bd0bf0458ce2c8776926e176f
    Image:          970805265562.dkr.ecr.us-west-2.amazonaws.com/colorteller:latest
    Image ID:       docker-pullable://970805265562.dkr.ecr.us-west-2.amazonaws.com/colorteller@sha256:2c292abca87af64ad5380b6e1e3f621a982e75a0c848656dd5286531994b119a
    Port:           9080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:47 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      SERVER_PORT:  9080
      COLOR:        red
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-22pk6 (ro)
  envoy:
    Container ID:   docker://31113fdad5a925601e9b4e6606082b4fe759aafdba94a005089a2cf275933a21
    Image:          111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.0.0-prod
    Image ID:       docker-pullable://111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy@sha256:33e19cf3106b2ccb1ccc3f1d28b7e5b965d640f2a17a5c5564780720f63f258f
    Port:           9901/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:51:55 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  32Mi
    Environment:
      APPMESH_VIRTUAL_NODE_NAME:  mesh/color-mesh/virtualNode/colorteller-red-appmesh-demo
      ENVOY_LOG_LEVEL:            info
      AWS_REGION:                 us-west-2
      ENABLE_ENVOY_XRAY_TRACING:  1
      ENABLE_ENVOY_STATS_TAGS:    1
    Mounts:                       <none>
  xray-daemon:
    Container ID:   docker://4cf59385be1fc3f6996d9ab85891916a46f83ac0e81b30e5e29f3986d5e7eec3
    Image:          amazon/aws-xray-daemon
    Image ID:       docker-pullable://amazon/aws-xray-daemon@sha256:0f2270a1aa8e02acd735d0ec053b4aa554dbee3fc90614617e85509f8168663e
    Port:           2000/UDP
    Host Port:      0/UDP
    State:          Running
      Started:      Fri, 26 Apr 2019 20:52:01 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     32Mi
    Environment:  <none>
    Mounts:       <none>
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-22pk6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-22pk6
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>


Name:               curler-5875dfcc64-49mwf
Namespace:          appmesh-demo
Priority:           0
PriorityClassName:  <none>
Node:               ip-192-168-13-9.us-west-2.compute.internal/192.168.13.9
Start Time:         Fri, 26 Apr 2019 20:59:58 +0800
Labels:             pod-template-hash=5875dfcc64
                    run=curler
Annotations:        <none>
Status:             Running
IP:                 192.168.5.205
Controlled By:      ReplicaSet/curler-5875dfcc64
Containers:
  curler:
    Container ID:  docker://a0cb1d72fda692d750141de3cfd4bd65a45799e6d5e685393911d35f31b19abe
    Image:         tutum/curl
    Image ID:      docker-pullable://tutum/curl@sha256:b6f16e88387acd4e6326176b212b3dae63f5b2134e69560d0b0673cfb0fb976f
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/bash
    State:          Running
      Started:      Fri, 26 Apr 2019 21:04:44 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    130
      Started:      Fri, 26 Apr 2019 21:00:06 +0800
      Finished:     Fri, 26 Apr 2019 21:04:43 +0800
    Ready:          True
    Restart Count:  1
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-22pk6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-22pk6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-22pk6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>


Name:               curler2-58cdb99d46-j8zsl
Namespace:          appmesh-demo
Priority:           0
PriorityClassName:  <none>
Node:               ip-192-168-13-9.us-west-2.compute.internal/192.168.13.9
Start Time:         Fri, 26 Apr 2019 21:53:54 +0800
Labels:             pod-template-hash=58cdb99d46
                    run=curler2
Annotations:        <none>
Status:             Running
IP:                 192.168.3.88
Controlled By:      ReplicaSet/curler2-58cdb99d46
Containers:
  curler2:
    Container ID:  docker://4f74aa93d3a7b68adc0425ee64ef74c73dccdab42257ae8ea27d81aac3fc8ec7
    Image:         tutum/curl
    Image ID:      docker-pullable://tutum/curl@sha256:b6f16e88387acd4e6326176b212b3dae63f5b2134e69560d0b0673cfb0fb976f
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/bash
    State:          Running
      Started:      Fri, 26 Apr 2019 23:00:35 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 26 Apr 2019 22:35:45 +0800
      Finished:     Fri, 26 Apr 2019 23:00:32 +0800
    Ready:          True
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-22pk6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-22pk6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-22pk6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age                  From                                                 Message
  ----    ------   ----                 ----                                                 -------
  Normal  Pulling  3m52s (x4 over 70m)  kubelet, ip-192-168-13-9.us-west-2.compute.internal  pulling image "tutum/curl"
  Normal  Pulled   3m51s (x4 over 70m)  kubelet, ip-192-168-13-9.us-west-2.compute.internal  Successfully pulled image "tutum/curl"
  Normal  Created  3m50s (x4 over 70m)  kubelet, ip-192-168-13-9.us-west-2.compute.internal  Created container
  Normal  Started  3m50s (x4 over 70m)  kubelet, ip-192-168-13-9.us-west-2.compute.internal  Started container

OAuth 2 integration example

It would be good to have an OAuth 2 integration example (Google OAuth, GitHub, etc.) for exposing the EKS workload publicly.

Self-hosted Kubernetes cluster logging

Describe the bug
Unable to configure pod logs to ship to CloudWatch.

Platform
EC2 running Kubernetes

I am unable to send logs from the pods to CloudWatch the way ECS does. Any help on this?

[BUG] colorgateway doesn't work in EKS

Describe the bug
The colorgateway k8s pod fails repeatedly.

Platform
EKS

To Reproduce

  1. Set up infrastructure according to the examples.
  2. Deploy color-teller and color-gateway to EKS according to colorapp:
$ ./kubernetes/generate-templates.sh && kubectl apply -f ./kubernetes/colorapp.yaml
  3. The colorgateway pod fails repeatedly.

Expected behavior
colorgateway works correctly as described in colorapp.

Config files and API responses
The colorgateway pod log is below.

$ kubectl logs colorgateway-64bdb8d55-rhl4b colorgateway 
2019/01/19 02:01:47 Sleeping for 60s to allow Envoy to bootstrap
2019/01/19 02:02:47 Starting server, listening on port 9080
2019/01/19 02:02:47 TCP_ECHO_ENDPOINT is not set

Additional context
None

Service to service TLS support

I'm not seeing it in current docs/examples, but can we mount certs and set up service-to-service TLS or mTLS?

If not, when would that be available?

EDIT: Not sure why the issue is labeled "bug" but that can be removed.

Clarification of "serviceNames" field across virtualRouter and virtualNodes

From my understanding, App Mesh updates the "serviceNames" in virtualRouters for you so that they point to the virtualRouters you define. That is, if you define serviceNames "a" and "b", it will update the records in AWS service discovery to point to your virtualRouters.

However, the "serviceName" referenced here is purely referential and lookup-based.

I was conflating the two, thinking they were both referential (i.e. the service names the router will point to).

Is my current understanding correct?

Configuring Zipkin Tracing

Apologies if this is already covered elsewhere, but is there a variable that can be configured to pick up a Zipkin tracing config for Envoy, similar to #95 (comment)? In the Envoy logs, there is a line that says loading trace configuration, so I gave it a shot by setting ENVOY_TRACING_CONFIG_FILE, but I couldn't see anything in the logs to indicate that the file was being picked up. What would be the correct setting? Thanks!
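For reference, if ENVOY_TRACING_CONFIG_FILE is honored, the mounted file would presumably need to contain an Envoy tracing stanza along these lines. This is a sketch only: the envoy.zipkin tracer name matches Envoy's docs of that era, but the collector cluster name and the v1 collector endpoint are assumptions, not confirmed App Mesh behavior.

```yaml
# Sketch of a static Envoy tracing config for the Zipkin tracer.
# "zipkin" must also be defined as a cluster in the Envoy bootstrap,
# which is not shown here; names here are illustrative assumptions.
tracing:
  http:
    name: envoy.zipkin
    config:
      collector_cluster: zipkin
      collector_endpoint: "/api/v1/spans"
```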

Fargate support for initializing Envoy

Currently you cannot get elevated permissions on a Fargate instance to change the iptables rules and route all traffic (ingress and egress) through Envoy.
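For context, this is roughly the kind of NAT redirection an init step applies on EC2-backed tasks, which is why CAP_NET_ADMIN is needed and why Fargate blocks it. The port numbers and proxy UID below are illustrative assumptions for the sketch, not the exact App Mesh proxyinit values.

```shell
# Illustrative only: redirect traffic through the Envoy sidecar via NAT rules.
# Requires CAP_NET_ADMIN, which Fargate tasks cannot be granted.
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15000    # ingress -> Envoy
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner 1337 \
         -j REDIRECT --to-port 15001                                # egress -> Envoy
```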

[BUG] Routes to Target Virtual Nodes with Mismatched Ports Blackhole

Describe the bug

When a Route targets Virtual Nodes whose listeners do not agree on the port, App Mesh
will fail to create a route in the Envoy configuration distributed to clients.

For example, the following Route

aws appmesh describe-route --mesh-name test-mesh --virtual-router-name test-router --route-name test-route
{
    "route": {
        "status": {
            "status": "ACTIVE"
        },
        "meshName": "test-mesh",
        "virtualRouterName": "test-router",
        "routeName": "test-route",
        "spec": {
            "httpRoute": {
                "action": {
                    "weightedTargets": [
                        {
                            "virtualNode": "vn-1", // Listens on port 8080
                            "weight": 1
                        },
                        {
                            "virtualNode": "vn-2", // Listens on port 8081
                            "weight": 1
                        }
                    ]
                },
                "match": {
                    "prefix": "/"
                }
            }
        }
    }
}

would not be distributed down to an Envoy subscribed to the Virtual Router. Worse, there is nothing in App Mesh APIs or Envoy configuration that would alert you that the route is not being distributed.

Platform
ALL

Expected behavior

Some possible solutions

  1. Provide synchronous checking in create-route/update-route to enforce all Virtual Nodes have the same listener port. This would not disallow changing listener ports after the fact.
  2. Provide asynchronous events or messages in API calls alerting customers that this route will not be materialized.
  3. Provide a mechanism to abstract Virtual Router listener ports from target Virtual Node listener ports.
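Until option 1 exists server-side, the check can be done client-side before calling create-route/update-route. A minimal sketch, assuming you have already fetched each target's listener port (function names here are hypothetical, not App Mesh API calls):

```python
def listener_ports(targets):
    """Return the set of distinct listener ports across weighted targets.

    targets: list of (virtual_node_name, listener_port) tuples.
    A route whose targets listen on more than one distinct port is
    silently dropped by App Mesh, so more than one port here is an error.
    """
    return {port for _, port in targets}


def validate_route_targets(targets):
    """Raise if the route's targets do not agree on a listener port."""
    ports = listener_ports(targets)
    if len(ports) > 1:
        raise ValueError(
            f"route targets listen on multiple ports {sorted(ports)}; "
            "App Mesh will not distribute this route to Envoy"
        )


# The mismatched route from the bug report above:
targets = [("vn-1", 8080), ("vn-2", 8081)]
try:
    validate_route_targets(targets)
except ValueError as err:
    print("pre-flight check failed:", err)
```

Note this pre-flight check has the same weakness as option 1: it cannot catch a listener port being changed after the route is created.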
