
old_issues_repo's Introduction

Istio


Istio is an open source service mesh that layers transparently onto existing distributed applications. Istio’s powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio is the path to load balancing, service-to-service authentication, and monitoring – with few or no service code changes.

  • For in-depth information about how to use Istio, visit istio.io
  • To ask questions and get assistance from our community, visit GitHub Discussions
  • To learn how to participate in our overall community, visit our community page


You'll find many other useful documents on our Wiki.

Introduction

Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes.

Istio is composed of these components:

  • Envoy - Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.

    Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

  • Istiod - The Istio control plane. It provides service discovery, configuration and certificate management. It consists of the following sub-components:

    • Pilot - Responsible for configuring the proxies at runtime.

    • Citadel - Responsible for certificate issuance and rotation.

    • Galley - Responsible for validating, ingesting, aggregating, transforming and distributing config within Istio.

  • Operator - This component provides user-friendly options to operate the Istio service mesh.

Repositories

The Istio project is divided across a few GitHub repositories:

  • istio/api. This repository defines component-level APIs and common configuration formats for the Istio platform.

  • istio/community. This repository contains information on the Istio community, including the various documents that govern the Istio open source project.

  • istio/istio. This is the main code repository. It hosts Istio's core components, install artifacts, and sample programs. It includes:

    • istioctl. This directory contains code for the istioctl command line utility.

    • operator. This directory contains code for the Istio Operator.

    • pilot. This directory contains platform-specific code to populate the abstract service model, dynamically reconfigure the proxies when the application topology changes, as well as translate routing rules into proxy specific configuration.

    • security. This directory contains security related code, including Citadel (acting as Certificate Authority), citadel agent, etc.

  • istio/proxy. The Istio proxy contains extensions to the Envoy proxy (in the form of Envoy filters) that support authentication, authorization, and telemetry collection.

  • istio/ztunnel. This repository contains the Rust implementation of the ztunnel component of Ambient mesh.

Issue management

We use GitHub to track all of our bugs and feature requests. Each issue we track has a variety of metadata:

  • Epic. An epic represents a feature area for Istio as a whole. Epics are fairly broad in scope and essentially represent product-level features. Each issue is ultimately part of an epic.

  • Milestone. Each issue is assigned a milestone. This is 0.1, 0.2, ..., or 'Nebulous Future'. The milestone indicates when we think the issue should get addressed.

  • Priority. Each issue has a priority, which is represented by the column in the Prioritization project. Priority can be one of P0, P1, P2, or >P2. The priority indicates how important it is to address the issue within the milestone. P0 means that the milestone cannot be considered achieved if the issue isn't resolved.



Istio is a Cloud Native Computing Foundation project.

old_issues_repo's People

Contributors

geeknoid, jwendell, ldemailly, sakshigoel12


old_issues_repo's Issues

Got 503 when accessing bookinfo productpage on IBM Bluemix Container Service

I get a 503 when running curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage:

[root@c582f1-n28-vm1 ~]# curl -v http://169.47.115.162/productpage
* About to connect() to 169.47.115.162 port 80 (#0)
*   Trying 169.47.115.162...
* Connected to 169.47.115.162 (169.47.115.162) port 80 (#0)
> GET /productpage HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 169.47.115.162
> Accept: */*
> 
< HTTP/1.1 503 Service Unavailable
< content-length: 57
< content-type: text/plain
< date: Thu, 22 Jun 2017 10:49:09 GMT
< server: envoy
< Connection: Keep-Alive
< 
* Connection #0 to host 169.47.115.162 left intact
upstream connect error or disconnect/reset before headers

I can see the ingress address, and it matches the external IP of the istio-ingress service:

# kubectl get ingress -o wide
NAME      HOSTS     ADDRESS          PORTS     AGE
gateway   *         169.xx.xx.xxx   80        2d
# kubectl get svc istio-ingress
NAME            CLUSTER-IP   EXTERNAL-IP      PORT(S)                      AGE
istio-ingress   10.10.10.6   169.xx.xx.xxx    80:30835/TCP,443:30012/TCP   2d
# export GATEWAY_URL=169.xx.xx.xxx:80

There is nothing in the log of the productpage container (kubectl logs productpage-v1-3210513262-nr4gm productpage).

The log of the proxy container (kubectl logs productpage-v1-3210513262-nr4gm proxy) shows:

I0622 14:26:47.020826       1 controller.go:152] Event update: key "default/istio-ingress-controller-leader-istio"

The full proxy log is attached as proxy.txt.

I verified that the product page can be shown in the browser when using port-forward and accessing the URL http://localhost:9080/productpage.

What might be the issue? Thanks.
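
A first check that can narrow this down (a diagnostic sketch; the service and label names assume the standard bookinfo manifests) is whether the productpage service actually has endpoints registered, since an empty endpoint list produces exactly this kind of upstream connect error at the ingress:

kubectl get endpoints productpage
kubectl get pods -l app=productpage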

NodeJS container using Istio cannot connect to non-istiofied headless Kafka service

Hello,

I've been discussing this issue in the Users Group here:
https://groups.google.com/forum/#!topic/istio-users/wLyQ_i9WGfQ

After trying out other combinations, I believe there is an issue with an istiofied deployment connecting to a headless service that is not wrapped with istio. I say this because I am able to connect to a non-istiofied deployment with a regular service.

I've included the envoy config file from the pod. There aren't any errors; my service just hangs while waiting to connect to the other service, and restarts after a while. This does not happen if I deploy it without wrapping it with Istio.

https://gist.github.com/FuzzOli87/27519f99dbfeea8e6c327ec87a97143d

Document, test and improve "chained" ingress

In a multi-cluster environment, or if CDNs or other infrastructure are required, the istio ingress may be behind another L7 load balancer, which may handle TLS and route to a region/cluster running istio.

The LB will update X-Forwarded-For, X-Forwarded-Proto, and possibly other headers in the case of TLS client authentication.

In this mode, the istio ingress should either accept traffic only from the frontend (no external IP), or have a whitelist or other means to identify traffic from the front load balancer (so the injected attributes are accepted only from the front load balancer, and filtered out/rejected if they are sent by an attacker).

XFF is the most common, and each load balancer has a number of extensions. For example, if Envoy is used as a global/front load balancer it may use x-envoy-external-address and set other headers (https://lyft.github.io/envoy/docs/configuration/http_conn_man/headers.html). HAProxy defines a number of TLS-related headers (http://www.haproxy.org/download/1.8/doc/proxy-protocol.txt).
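
As a quick way to exercise the filtering requirement above, one can send a spoofed header straight to the ingress and check whether it reaches the backend. This is only a sketch; it assumes an httpbin-style echo service is routed behind the ingress and reuses the GATEWAY_URL convention from the samples:

curl -s -H "X-Forwarded-For: 203.0.113.7" http://$GATEWAY_URL/headers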

destination-policy: LEASTCONN and ROUNDROBIN loadbalancing policies not working.

I'm running istio 0.1.6 and I'm unable to get the LEASTCONN or ROUNDROBIN destination policies to work. However, the RANDOM destination policy works as expected.

Reproducing the Issue

Create the destination-policy using the ROUNDROBIN loadbalancing policy:

type: destination-policy
name: foo-roundrobin
spec:
  destination: foo.default.svc.cluster.local
  policy:
    - loadBalancing:
        name: ROUNDROBIN

Submit the destination policy to the istio-pilot:

istioctl create -f destination-policies/foo-round-robin.yaml

Notice the error from the istioctl create command:

Error: the server rejected our request for an unknown reason

Logs from the istio-pilot service:

cannot parse proto message: unknown value "ROUNDROBIN" for enum istio.proxy.v1.config.LoadBalancing_SimpleLBPolicy
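
Judging by the enum name in the pilot error, the config likely expects underscore-separated values; if so (an assumption worth verifying against istio.proxy.v1.config), the working spelling would be:

type: destination-policy
name: foo-roundrobin
spec:
  destination: foo.default.svc.cluster.local
  policy:
    - loadBalancing:
        name: ROUND_ROBIN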

Unable to forward XFF to istio proxy

I can't find any documentation saying that some HTTP header fields are stripped out by the proxy. X-Forwarded-For is one of the fields that won't go through the istio proxy. Why is that?

Istio route rules not respected for traffic from outside Kubernetes cluster

I created a service that selects among 2 deployments (different versions).
I created two istio route-rules.
The first defaults all traffic to v1.
The second sends requests that have a particular header to v2.
I then deployed the sleep sample so that I could test these rules from inside the cluster. I found that both rules worked as expected.
I then created an ingress so that I could initiate requests from outside the cluster.
When I tested it by sending requests from outside the cluster, I found that the default rule was applied to every request, including those that match the second rule.
Note that I am doing this in a non-default namespace. However, this does not seem to be the issue; see also: https://groups.google.com/forum/#!topic/istio-users/yQEogcx2hJM

Here are the rules:

$ more *rule*
::::::::::::::
default-route-rule.yml
::::::::::::::
type: route-rule
name: default-route
namespace: user-kalantar
spec:
  destination: echo.user-kalantar.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: one
    weight: 100
::::::::::::::
user-route-rule.yml
::::::::::::::
type: route-rule
name: echo-user-rule
namespace: user-kalantar
spec:
  destination: echo.user-kalantar.svc.cluster.local
  precedence: 2
  match:
    httpHeaders:
      x-user:
        exact: test
  route:
  - tags:
      version: two

The ingress is:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  namespace: user-kalantar
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  backend:
    serviceName: echo
    servicePort: 80

Cookie matching for second-tier services

I'm not sure whether this is a bug or a feature. However, I'd like to share it with you guys.

I tried to use the match feature. My service structure is like this:

ingress -> web-front -> web-service

I want to route to web-service (2.0.0) for a limited set of test users using a cookie. However, it didn't work at first.

type: route-rule
name: web-service-test-v2
spec:
  destination: web-service.default.svc.cluster.local
  precedence: 2
  match:
    httpHeaders:
      cookie:
        regex: "^(.*?;)?(NAME=v2tester)(;.*)?$"
  route:
  - tags:
      version: 2.0.0

After reading the sample program, I realized that we need to forward the cookie from the web-front to the web-service. That makes sense. However, we would need to do the same thing in our legacy k8s code bases, and that is not great. I recommend that istio do this. Or do you already have a road map for that?

Ingress ADDRESS not getting set on IBM Bluemix

Users running the Istio Bookinfo sample on an IBM Bluemix paid cluster, which supports external loadbalancers, are noticing that the LB external IP fails to get set in the Ingress resource.

kubectl get ingress -o wide
NAME      HOSTS     ADDRESS   PORTS     AGE
gateway   *                   80        9m

The address field is blank even though the ingress service has an external IP:

kubectl get svc
istio-ingress   10.10.10.204   169.47.245.10

The following error appears in the istio-ingress log:

I0523 19:48:25.146704       1 status.go:302] updating Ingress default/gateway status to [{169.47.245.10 }]
W0523 19:48:25.149243       1 status.go:306] error updating ingress rule: Forbidden: "/apis/extensions/v1beta1/namespaces/default/ingresses/gateway/status" (put ingresses.extensions gateway)

deploy bookinfo app in different namespace

I'm trying to deploy the bookinfo app in a different namespace than the default one where the istio service mesh is deployed. Is there any special access needed? I deployed the non-auth version of istio, and I see the following errors on the bookinfo app deployment:

Error syncing pod, skipping: [failed to "InitContainer" for "init" with RunInitContainerError: "init container \"init\" exited with 1" , failed to "StartContainer" for "init" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=init pod=ratings-v1-1419296450-gglpr_istio-testing(7f4d39ac-506d-11e7-acf3-2cc2600c3cc4)" ]

Payload attribute

I am currently using Istio to log telemetry data.
For my master thesis I am looking into ways to find the cause/reason of failures based on all the data collected. For this use-case I would like to log the payload/body as well.
Apart from telemetry, a payload attribute might also be desirable for scenarios in which very strict rules that check payloads are required.

While the basic implementation is straightforward, some technical repercussions were raised by @mandarjog:

Adding request body to the mix has several technical repercussions.

  1. Since payload sizes are arbitrary, only store payload up to a max size in Report call.
  2. Mixer - Proxy api evolves. Stream request body to Mixer only when requested to do so by Mixer.
  3. Mixer proxy filter should be injected at "decodeBody"
  4. Ensure that privacy and security concerns are handled while handling request body.

The technical repercussions are not important in the context of my master thesis, but certainly are in every other use-case.

The original topic can be found here:
https://groups.google.com/forum/#!topic/istio-users/tBfW8npfOGw

Deployment issue on OpenShift Platform

When we try to deploy istio on OpenShift, several issues are reported.

Scenario followed

minishift start --memory 5000
oc new-project istio

oc apply -f install/kubernetes/istio-auth.yaml
oc apply -f install/kubernetes/addons/prometheus.yaml
oc apply -f install/kubernetes/addons/grafana.yaml
oc apply -f install/kubernetes/addons/servicegraph.yaml
oc apply -f install/kubernetes/addons/zipkin.yaml

oc expose svc/grafana
oc expose svc/servicegraph
oc expose svc/zipkin

Errors

  1. Permission denied

The docker user used by the grafana/prometheus containers doesn't have permission to change ownership or create a folder.

oc logs grafana-851518138-z76l9
chown: changing ownership of '/data/grafana': Operation not permitted
chown: changing ownership of '/var/log/grafana': Operation not permitted

oc logs prometheus-3208567892-08nvd
time="2017-05-16T14:14:44Z" level=info msg="Starting prometheus (version=1.1.1, branch=release-1.0, revision=ab312a075f810e2ed124783c46d68674af071293)" source="main.go:73"
time="2017-05-16T14:14:44Z" level=info msg="Build context (go=go1.6.3, user=root@8ab14ddb4898, date=20160907-09:37:01)" source="main.go:74"
time="2017-05-16T14:14:44Z" level=info msg="Loading configuration file /etc/prometheus/prometheus.yml" source="main.go:221"
time="2017-05-16T14:14:44Z" level=error msg="Error opening memory series storage: mkdir data: permission denied" source="main.go:158"
  2. Containers waiting to start as they are looking for the secrets

The ingress and egress containers can't be started:

oc logs istio-egress-1575870412-s1rs7
Error from server (BadRequest): container "proxy" in pod "istio-egress-1575870412-s1rs7" is waiting to start: ContainerCreating
oc logs istio-ingress-2905358108-j1fl9
Error from server (BadRequest): container "istio-ingress" in pod "istio-ingress-2905358108-j1fl9" is waiting to start: ContainerCreating

Events reported

Unable to mount volumes for pod "istio-egress-1575870412-s1rs7_istio(50e9ec5e-3a38-11e7-9bb6-f23b3a6d93cf)": timeout expired waiting for volumes to attach/mount for pod "istio"/"istio-egress-1575870412-s1rs7". list of unattached/unmounted volumes=[istio-certs]
MountVolume.SetUp failed for volume "kubernetes.io/secret/50c6a34e-3a38-11e7-9bb6-f23b3a6d93cf-istio-certs" (spec.Name: "istio-certs") pod "50c6a34e-3a38-11e7-9bb6-f23b3a6d93cf" (UID: "50c6a34e-3a38-11e7-9bb6-f23b3a6d93cf") with: secrets "istio.default" not found
  3. istio manager

There is a service account permission issue:

I0516 14:19:15.164199       1 client.go:203] TPR "IstioConfig" is not ready (User "system:serviceaccount:istio:istio-manager-service-account" cannot list all istio.io.istioconfigs in the cluster). Waiting...
Error: 2 errors occurred:

* failed to register Third-Party Resources. User "system:serviceaccount:istio:istio-manager-service-account" cannot get extensions.thirdpartyresources at the cluster scope
* failed to register Third-Party Resources. Failed to create all TPRs
Usage:
  manager discovery [flags]

Flags:
      --discovery_cache   Enable caching discovery service responses (default true)
      --port int          Discovery service port (default 8080)
      --profile           Enable profiling via web interface host:port/debug/pprof (default true)

Global Flags:
      --kubeconfig string                Use a Kubernetes configuration file instead of in-cluster configuration
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --meshConfig string                ConfigMap name for Istio mesh configuration, config key should be "mesh" (default "istio")
  -n, --namespace string                 Select a namespace for the controller loop. If not set, uses ${POD_NAMESPACE} environment variable
      --resync duration                  Controller resync interval (default 1s)
  -v, --v Level                          log level for V logs (default 0)
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

E0516 14:19:16.165480       1 main.go:245] 2 errors occurred:

* failed to register Third-Party Resources. User "system:serviceaccount:istio:istio-manager-service-account" cannot get extensions.thirdpartyresources at the cluster scope
* failed to register Third-Party Resources. Failed to create all TPRs
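
For the first error (the chown/mkdir permission failures), a commonly used OpenShift workaround is to relax the security context constraints for the service account running the add-on pods. A sketch, assuming the pods run under the default service account in the istio project:

oc adm policy add-scc-to-user anyuid -z default -n istio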

Pod status stuck at ContainerCreating

I am trying to set up istio with minikube following the guide here.

After running kubectl apply -f install/kubernetes/istio.yaml, the pods are stuck in ContainerCreating state:

➜ istio-0.1.6 kubectl get pods

NAME                             READY     STATUS
grafana-1395297218-36b3z         0/1       ContainerCreating
istio-egress-815883402-n7qm9     0/1       ContainerCreating
istio-ingress-1054723629-x4t9n   0/1       ContainerCreating
istio-mixer-2450814972-nzd0z     1/1       Running
istio-pilot-1836659236-cdcx4     0/2       ContainerCreating
prometheus-3067433533-hv6jk      0/1       ContainerCreating
servicegraph-3127588006-r05m0    0/1       ContainerCreating
zipkin-4057566570-m0tkm          0/1       ContainerCreating

Looking at the events for one of the pods:

kubectl describe pod zipkin-4057566570-m0tkm

13m 13m 1 default-scheduler Normal Scheduled Successfully assigned zipkin-4057566570-m0tkm to minikube
13m 13m 1 kubelet, minikube spec.containers{zipkin} Normal Pulling pulling image "docker.io/openzipkin/zipkin:latest"

All the pods are stuck at this step. Any support would be really helpful.

istio-ingress repeating "no secret needed" message every second

Not sure why I see this log message continuously:

$ kubectl logs -f istio-ingress-.....

I0526 08:27:30.531678       1 ingress.go:120] no secret needed
I0526 08:27:31.532773       1 ingress.go:120] no secret needed
I0526 08:27:32.533969       1 ingress.go:120] no secret needed
I0526 08:27:33.535098       1 ingress.go:120] no secret needed
...

Integration example RBAC service account with mixer

I think it would be a good thing to have an example of service account whitelisting and blacklisting using mixer. Sometimes you want a service to only be able to communicate with certain services.
Is this something that's in the pipeline for the coming releases?

Feature Request: istioctl -f <directory>

kubectl can operate on a directory by automatically discovering all the .yaml files within it. This is convenient when applying configurations organized into separate files. istioctl kube-inject -f can only operate on a single file at the moment; a loop workaround is sketched below.
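
Until such a flag exists, a shell loop over the directory is a workable stopgap (a sketch; manifests/ is a hypothetical directory):

for f in manifests/*.yaml; do
  istioctl kube-inject -f "$f" | kubectl apply -f -
done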

istioctl version command fails with Bluemix Container Service on Kubernetes.

$ istioctl version
istioctl version:
Version: 0.1.6
GitRevision: dab2033
GitBranch: release-0.1
User: jenkins@ubuntu-16-04-build-12ac793f80be71
GolangVersion: go1.8
KubeInjectHub: docker.io/istio
KubeInjectTag: 0.1
apiserver version:
Error: an error on the server ("Error: 'dial tcp 172.30.206.203:8081: getsockopt: connection timed out'\nTrying to reach: 'http://172.30.206.203:8081/v1alpha1/version'") has prevented the request from succeeding

[Ingress] Support for Mutual TLS with external clients

@schmitzhermes commented on Thu May 25 2017

It would be great to support mutual TLS with Ingress clients.
A working example can be found in nginx ingress controller: https://github.com/kubernetes/ingress/blob/master/controllers/nginx/configuration.md#certificate-authentication

This should not be confused with mutual TLS inside your cluster (i.e. service-to-service communication) -> that's what https://github.com/istio/auth is for.


@rshriram commented on Mon May 29 2017

Hi,
Can you please post this issue on istio/issues repo? We are keeping track of all bugs and feature requests in that repo. The issues here are for our internal tracking purposes.

istioctl -o flag does not work

Using this Deployment file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    name: test-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: test-app
    spec:
      containers:
      - image: test/image
        name: test-app
        ports:
         - containerPort: 3000
           name: testapp

Executing command:

istioctl kube-inject -f deployment.yml -o istio-deployment.yml

This creates the istio-deployment.yml file, but with no content.
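
As a workaround, kube-inject writes the modified manifest to stdout when no output file is given (this is how other examples in this repo pipe it into kubectl), so a plain shell redirect produces the intended file:

istioctl kube-inject -f deployment.yml > istio-deployment.yml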

Integration example: consul + nomad

I would like an example integration running with HashiCorp Consul + Nomad instead of Kubernetes.

Please provide some general guidelines; I'll take a stab at doing a PoC.

ACS Kubernetes: Istio RBAC install failed on RBAC cluster

Hey everyone,

I am running a standard istio installation on top of Azure Kubernetes (no auth).
Note that I have followed these steps for configuring an Azure service principal to access the cluster.
Also note I am running on Windows 10.

My installation looks like this:
(It seems good, I am able to deploy my own service and hit the endpoint through the istio-ingress)

>kubectl get all
NAME                                READY     STATUS    RESTARTS   AGE
po/istio-egress-815883402-hw6bx     1/1       Running   0          1h
po/istio-ingress-1054723629-p1zkm   1/1       Running   0          1h
po/istio-mixer-2450814972-rrcm3     1/1       Running   0          1h
po/istio-pilot-1836659236-3hn6t     2/2       Running   0          1h

NAME                CLUSTER-IP     EXTERNAL-IP     PORT(S)                       AGE
svc/istio-egress    10.0.211.242   <none>          80/TCP                        1h
svc/istio-ingress   10.0.30.172    13.90.212.195   80:31012/TCP,443:31949/TCP    1h
svc/istio-mixer     10.0.97.174    <none>          9091/TCP,9094/TCP,42422/TCP   1h
svc/istio-pilot     10.0.30.32     <none>          8080/TCP,8081/TCP             1h
svc/kubernetes      10.0.0.1       <none>          443/TCP                       7h

NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/istio-egress    1         1         1            1           1h
deploy/istio-ingress   1         1         1            1           1h
deploy/istio-mixer     1         1         1            1           1h
deploy/istio-pilot     1         1         1            1           1h

NAME                          DESIRED   CURRENT   READY     AGE
rs/istio-egress-815883402     1         1         1         1h
rs/istio-ingress-1054723629   1         1         1         1h
rs/istio-mixer-2450814972     1         1         1         1h
rs/istio-pilot-1836659236     1         1         1         1h

The cluster is RBAC enabled:

>kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1

When I try to apply the RBAC yaml, it fails:

>kubectl apply -f istio-rbac-beta.yaml
rolebinding "istio-pilot-admin-role-binding" configured
rolebinding "istio-ca-role-binding" configured
rolebinding "istio-ingress-admin-role-binding" configured
rolebinding "istio-sidecar-role-binding" configured

Error from server (Forbidden): error when creating "istio-rbac-beta.yaml": clusterroles.rbac.authorization.k8s.io "istio-pilot" is forbidden: attempt to grant extra privileges: [
{[*] [istio.io] [istioconfigs] [] []} {[*] [istio.io] [istioconfigs.istio.io] [] []}] user=&{kubeconfig  [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]

Error from server (Forbidden): error when creating "istio-rbac-beta.yaml": clusterroles.rbac.authorization.k8s.io "istio-ca" is forbidden: attempt to grant extra privileges: [{[c
reate] [] [secrets] [] []} {[get] [] [secrets] [] []} {[watch] [] [secrets] [] []} {[list] [] [secrets] [] []} {[watch] [] [serviceaccounts] [] []} {[list] [] [serviceaccounts] [] []}] user=&{kubeconfig  [system:authenticated] map[]} ownerrul
es=[] ruleResolutionErrors=[]

Error from server (Forbidden): error when creating "istio-rbac-beta.yaml": clusterroles.rbac.authorization.k8s.io "istio-sidecar" is forbidden: attempt to grant extra privileges:
 [{[get] [istio.io] [istioconfigs] [] []} {[watch] [istio.io] [istioconfigs] [] []} {[list] [istio.io] [istioconfigs] [] []} {[get] [extensions] [thirdpartyresources] [] []} {[watch] [extensions] [thirdpartyresources] [] []} {[list] [extensio
ns] [thirdpartyresources] [] []} {[update] [extensions] [thirdpartyresources] [] []} {[get] [extensions] [ingresses] [] []} {[watch] [extensions] [ingresses] [] []} {[list] [extensions] [ingresses] [] []} {[update] [extensions] [ingresses] []
 []} {[get] [] [configmaps] [] []} {[watch] [] [configmaps] [] []} {[list] [] [configmaps] [] []} {[get] [] [pods] [] []} {[watch] [] [pods] [] []} {[list] [] [pods] [] []} {[get] [] [endpoints] [] []} {[watch] [] [endpoints] [] []} {[list] [
] [endpoints] [] []} {[get] [] [services] [] []} {[watch] [] [services] [] []} {[list] [] [services] [] []}] user=&{kubeconfig  [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]

I've tried all sorts of variants of kubectl create clusterrolebinding to try and give myself permissions; at some point I just started randomly trying things:

kubectl create clusterrolebinding a1 --clusterrole=cluster-admin --user=kubeconfig
kubectl create clusterrolebinding a2 --clusterrole=cluster-admin --serviceaccount=kube-system:default
kubectl create clusterrolebinding a3 --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts

I also tried granting the user listed in my kube config.

What is strange is that after I apply the rules above, I start to see a ruleResolutionError:

Error from server (Forbidden): error when creating "istio-rbac-beta.yaml": clusterroles.rbac.authorization.k8s.io "istio-pilot" is forbidden: attempt to grant extra privileges: [
{[*] [istio.io] [istioconfigs] [] []} {[*] [istio.io] [istioconfigs.istio.io] [] []}] user=&{kubeconfig  [system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

But it still does not work.

Any ideas?
Help very much appreciated :)

Trouble setting up Egress

I followed the instructions in the Egress task and haven't had any luck accessing the external service.

root@sleep-805739287-2jqjz:/# curl -v http://httpbin-external/headers
* Hostname was NOT found in DNS cache
*   Trying 23.23.159.159...
* Connected to httpbin-external (23.23.159.159) port 80 (#0)
> GET /headers HTTP/1.1
> User-Agent: curl/7.35.0
> Host: httpbin-external
> Accept: */*
> 
< HTTP/1.1 404 Not Found
< date: Thu, 18 May 2017 21:47:09 GMT
* Server envoy is not blacklisted
< server: envoy
< content-length: 0
< 
* Connection #0 to host httpbin-external left intact
root@sleep-805739287-2jqjz:/# curl http://securegoogle:443
curl: (56) Recv failure: Connection reset by peer
root@sleep-805739287-2jqjz:/# curl -v http://securegoogle:443
* Rebuilt URL to: http://securegoogle:443/
* Hostname was NOT found in DNS cache
*   Trying 173.194.193.147...
* Connected to securegoogle (173.194.193.147) port 443 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: securegoogle:443
> Accept: */*
> 
* Empty reply from server
* Connection #0 to host securegoogle left intact
curl: (52) Empty reply from server

FWIW, curl'ing directly to the target domain names also didn't work:

root@sleep-805739287-2jqjz:/# curl -v http://httpbin.org/headers     
* Hostname was NOT found in DNS cache
*   Trying 54.243.225.13...
* Connected to httpbin.org (54.243.225.13) port 80 (#0)
> GET /headers HTTP/1.1
> User-Agent: curl/7.35.0
> Host: httpbin.org
> Accept: */*
> 
< HTTP/1.1 404 Not Found
< date: Thu, 18 May 2017 21:48:47 GMT
* Server envoy is not blacklisted
< server: envoy
< content-length: 0
< 
* Connection #0 to host httpbin.org left intact
root@sleep-805739287-2jqjz:/# curl -v https://www.google.com/       
* Hostname was NOT found in DNS cache
*   Trying 74.125.142.106...
* Connected to www.google.com (74.125.142.106) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to www.google.com:443 
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to www.google.com:443

In a non-istio shell, I was able to curl external sites.

Logs in the proxy sidecar showed:

[2017-05-18T21:42:02.879Z] "GET / HTTP/1.1" 404 NR 0 0 0 - "-" "curl/7.35.0" "c3df628a-bfa5-4718-a022-6b49da0cf1bc" "google.com" "-"
...
[2017-05-18T21:42:31.427Z] "GET / HTTP/1.1" 404 NR 0 0 0 - "-" "curl/7.35.0" "d154deff-8473-4d78-9281-a264be8ceb8f" "httpbin.org" "-"
...
[2017-05-18T21:42:35.114Z] "GET / HTTP/1.1" 404 NR 0 0 0 - "-" "curl/7.35.0" "6106cf97-629f-4e05-8918-428e454b70d7" "httpbin.org" "-"
...
[2017-05-18T21:42:45.641Z] "GET / HTTP/1.1" 404 NR 0 0 0 - "-" "curl/7.35.0" "7ef50188-24ea-4f6d-be11-4a3c7aa98d45" "www.google.com" "-"
[2017-05-18T21:42:49.675Z] "GET /header HTTP/1.1" 404 NR 0 0 0 - "-" "curl/7.35.0" "4887222b-663e-4547-95e5-618a6b4209f5" "httpbin.org" "-"
...
[2017-05-18T21:43:56.728Z] "GET /headers HTTP/1.1" 404 NR 0 0 0 - "-" "curl/7.35.0" "12a39936-8113-450b-a167-f0498d5e75d9" "httpbin.org" "-"

Listing the current services in the namespace:

$ kubectl --namespace=istio-demo get svc
NAME               CLUSTER-IP      EXTERNAL-IP       PORT(S)                       AGE
details            10.15.241.26    <none>            9080/TCP                      8d
grafana            10.15.249.155   35.184.243.219    3000:31682/TCP                8d
httpbin            10.15.241.178   <none>            8000/TCP                      8d
httpbin-external                   httpbin.org       80/TCP                        8d
istio-egress       10.15.251.251   <none>            80/TCP                        8d
istio-ingress      10.15.241.101   104.198.140.249   80:30372/TCP                  8d
istio-manager      10.15.248.235   <none>            8080/TCP,8081/TCP             8d
istio-mixer        10.15.241.116   <none>            9091/TCP,9094/TCP,42422/TCP   8d
productpage        10.15.254.178   <none>            9080/TCP                      8d
prometheus         10.15.243.205   104.155.151.253   9090:31683/TCP                8d
ratings            10.15.241.95    <none>            9080/TCP                      8d
reviews            10.15.247.111   <none>            9080/TCP                      8d
securegoogle                       www.google.com    443/TCP                       8d
service-one        10.15.241.167   <none>            80/TCP                        8d
service-two        10.15.246.217   <none>            80/TCP                        8d
servicegraph       10.15.241.42    35.184.185.46     8088:31209/TCP                8d
sleep              10.15.255.140   <none>            80/TCP                        8d

Kubernetes cluster with 100+ service namespaces

I'm investigating whether Istio is the right tool for our Kubernetes cluster, which has 100+ service namespaces. Each of our services runs in its own namespace service-name-ENVIRONMENT (e.g. echo-app-production), and we communicate cross-service using the provided Kubernetes DNS <service-name>.<namespace-name>.

I'm playing around with Istio + the bookinfo example on a local Minikube cluster, and followed the setup instructions:

I decided to deploy Istio in the kube-system namespace, and the bookinfo example in the bookinfo-staging namespace:

kubectl namespace create bookinfo-staging
istioctl kube-inject -n kube-system -f samples/apps/bookinfo/bookinfo.yaml | kubectl apply --namespace=bookinf-staging -f-

However, afterwards, I can see the pods not starting as expected, with the error being:

MountVolume.SetUp failed for volume "kubernetes.io/secret/e9d905c4-41a7-11e7-9f6f-6e7c3c7aea87-istio-certs" (spec.Name: "istio-certs")
  pod "e9d905c4-41a7-11e7-9f6f-6e7c3c7aea87" (UID: "e9d905c4-41a7-11e7-9f6f-6e7c3c7aea87")
  with: secrets "istio.default" not found

Am I supposed to deploy a small part of Istio in each of our 100+ namespaces? Or does Istio only support running apps in a single namespace (making it unusable for us right now)? Or is there something I am missing here?

commands don't work with zsh

Using this command as an example; I think there may be one or two other commands like this one.

→ export GATEWAY_URL=$(kubectl get po -l istio=ingress -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc istio-ingress -o jsonpath={.spec.ports[0].nodePort})

zsh: no matches found: jsonpath={.items[0].status.hostIP}
zsh: no matches found: jsonpath={.spec.ports[0].nodePort}

bash worked

bash-3.2$ export GATEWAY_URL=$(kubectl get po -l istio=ingress -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc istio-ingress -o jsonpath={.spec.ports[0].nodePort})
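
The failure is zsh globbing, not istio: {, }, [ and ] are glob characters in zsh, so the unquoted jsonpath expression is expanded before kubectl ever sees it. Quoting the expression makes the command portable across shells:

export GATEWAY_URL=$(kubectl get po -l istio=ingress -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o 'jsonpath={.spec.ports[0].nodePort}')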

Failed to create route-rule: namespace "XXX" not present

Failed to create route-rule: namespace "workload" not present

OS: CentOS 7
K8S 1.7.1

istioctl version:

Version: 0.1.6
GitRevision: dab2033
GitBranch: release-0.1
User: jenkins@ubuntu-16-04-build-12ac793f80be71
GolangVersion: go1.8.1
KubeInjectHub: docker.io/istio
KubeInjectTag: 0.1

apiserver version:

Version: 0.1.6
GitRevision: dab2033
GitBranch: release-0.1
User: jenkins@ubuntu-16-04-build-12ac793f80be71
GolangVersion: go1.8.1

I deployed Istio and the applications into the same namespace named "workload" and tried to create a route rule as follows:

type: route-rule
name: svc-frontend
spec:
  destination: svc-frontend.workload.svc.cluster.local
  match:
    httpHeaders:
      uri:
        prefix: /front
  rewrite:
    uri: /
  route:
  - tags:
      version: "1"

istioctl create -f route-v1.yaml -n workload failed:

Error: an error on the server ("namespace \"workload\" not present") has prevented the request from succeeding

I then tried to create it in the default namespace, and to create the sample rule in samples/apps/bookinfo/route-rule-delay.yaml, but it failed again and again.

Logs from the pilot apiserver:

I0723 18:15:59.135368       1 handler.go:99] Adding config to Istio registry: key default/route-rule-rf, config &{Type:route-rule Name:rf Spec:map[destination:svc-frontend.default.svc.cluster.local match:map[httpHeaders:map[uri:map[prefix:/front]]] rewrite:map[uri:/] route:[map[tags:map[version:1]]]] ParsedSpec:destination:"svc-frontend.default.svc.cluster.local" match:<http_headers:<key:"uri" value:<prefix:"/front" > > > route:<tags:<key:"version" value:"1" > > rewrite:<uri:"/" > }
W0723 18:15:59.137823       1 handler.go:234] namespace "default" not present

wishlist: add istioctl --context flag

Personally, I consider the implicit kubectl context an anti-pattern. When managing multiple kubernetes clusters it may be easy to execute a dangerous command in production (e.g. when you meant to run it in a test environment). So I always use kubectl --context (via shell aliases or functions) to be very explicit about which cluster I'm working with (and possibly user or namespace). I think it would be beneficial to do the same with istioctl. So please add a --context CLI option to help avoid accidents.
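
For comparison, the kubectl side of this pattern looks like the following (the alias and context names are only illustrative); the request is for istioctl to accept the same flag:

alias kprod='kubectl --context production'
alias ktest='kubectl --context test'
ktest delete pod my-app-123   # explicit: can only ever hit the test cluster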

I'm pretty new to istio - please correct me if I've missed something. Thanks.

gRPC traces are not correlated in Zipkin

I have an application deployed. Logs, metrics, routes, and destination policies are working. However, I'm not able to get traces to correlate in Zipkin. I'm copying the required headers to other services as shown here:

https://github.com/kelseyhightower/ping/blob/master/frontend/server.go#L34

I did notice that x-b3-parentspanid is not set on the incoming context, which means I'm not able to pass it along to other services. I've even tried to set x-b3-parentspanid manually, but it seems to be stripped and not visible to other services.

Environment Info

istioctl version
istioctl version:

Version: 0.1.6
GitRevision: dab2033
GitBranch: release-0.1
User: jenkins@ubuntu-16-04-build-12ac793f80be71
GolangVersion: go1.8
KubeInjectHub: docker.io/istio
KubeInjectTag: 0.1


apiserver version:

Version: 0.1.6
GitRevision: dab2033
GitBranch: release-0.1
User: jenkins@ubuntu-16-04-build-12ac793f80be71
GolangVersion: go1.8.1

Feature Request: Allow cluster-wide or namespace-wide intercepted egress ranges

Currently, Istio intercepts all egress traffic by default. There is a way to specify which ranges to intercept using --includeIPRanges (see the doc), but it's configured on a per-deployment basis.

It would be useful to be able to specify the inclusion or exclusion ranges with a namespace default or a cluster-wide default, rather than per deployment.
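
For reference, this is the per-deployment form that the request wants to lift to a namespace or cluster default (a sketch; the CIDR is a placeholder, and the flag spelling should be checked against the linked doc):

istioctl kube-inject -f deployment.yaml --includeIPRanges=10.0.0.0/16 | kubectl apply -f -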

Error CrashLoopBackOff

I'm getting the error CrashLoopBackOff. Is it mostly Kubernetes related, or could it be more Docker related? Why does it cause crashes across the whole cluster?
I'm using minikube.

Getting the log: $ kubectl logs istio-egress-2038322175-p1h88
..
[2017-06-21 15:51:19.311][46][critical][backtrace] end backtrace thread 0
W0621 15:51:19.324500 1 agent.go:200] Epoch 0 terminated with an error: signal: segmentation fault (core dumped)
W0621 15:51:19.324533 1 agent.go:289] Aborted all epochs
I0621 15:51:19.324574 1 agent.go:220] Updated retry delay to 51.2s, budget to 1

init container fails with iptables: Chain already exists

The init container fails with an "iptables: Chain already exists." message on OpenShift. Any clue?

oc logs podname -c init gives me iptables: Chain already exists.

Version information
OpenShift Master: v3.5.5.5
Kubernetes Master: v1.5.2+43a9be4

External IP on GKE

I'm trying to get the external (client) IP out of the istio proxy. I have modified the istio-auth.yaml file to include the following lines for the ingress:

  annotations:
    service.beta.kubernetes.io/external-traffic: "OnlyLocal"

Digging deeper shows that envoy has a flag called use_remote_address that actually uses the x-forwarded-for header; the default value for this flag is false. Is there some way to change this?

Ingress host header doesn't represent the actual host name / IP

It seems like requests through the ingress proxy are automatically enriched with a host header. However, the header seems to indicate the internal host name, rather than the Ingress IP/hostname that the browser actually used to connect.

In this case, I entered through an Ingress IP, but the header to the service was "helloworld-ui-service.default.svc.cluster.local:80", which caused my Java application (Spring Boot) to think that "helloworld-ui-service.default.svc.cluster.local" is the domain name it should redirect users to.

 [2017-05-22T02:59:50.385Z] "GET / HTTP/1.1" 200 - 0 3126 603 598 "77.60.103.234" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36" "93b734a6-1a38-9403-95d9-f3158a4e3532" "helloworld-ui-service.default.svc.cluster.local:80" "127.0.0.1:8080"

GKE unable to install with rbac beta on 1.6.4

I upgraded my cluster from 1.5 (no RBAC) to 1.6.4, deleted everything, and tried to follow the installation steps:

ldemailly-macbookpro:istio-0.1.5 ldemailly$ gcloud container clusters get-credentials cluster-1 \
>     --zone us-west1-b --project ldemailly-gcp-1
Fetching cluster endpoint and auth data.
kubeconfig entry generated for cluster-1.
ldemailly-macbookpro:istio-0.1.5 ldemailly$ kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1beta1
ldemailly-macbookpro:istio-0.1.5 ldemailly$ kubectl apply -f install/kubernetes/istio-rbac-beta.yaml
rolebinding "istio-manager-admin-role-binding" created
rolebinding "istio-ca-role-binding" created
rolebinding "istio-ingress-admin-role-binding" created
rolebinding "istio-sidecar-role-binding" created
Error from server (Forbidden): error when creating "install/kubernetes/istio-rbac-beta.yaml": clusterroles.rbac.authorization.k8s.io "istio-manager" is forbidden: attempt to grant extra privileges: [{[*] [istio.io] [istioconfigs] [] []} {[*] [istio.io] [istioconfigs.istio.io] [] []} {[*] [extensions] [thirdpartyresources] [] []} {[*] [extensions] [thirdpartyresources.extensions] [] []} {[*] [extensions] [ingresses] [] []} {[*] [] [configmaps] [] []} {[*] [] [endpoints] [] []} {[*] [] [pods] [] []} {[*] [] [services] [] []}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi /swaggerapi/* /version]}] ruleResolutionErrors=[]
Error from server (Forbidden): error when creating "install/kubernetes/istio-rbac-beta.yaml": clusterroles.rbac.authorization.k8s.io "istio-ca" is forbidden: attempt to grant extra privileges: [{[create] [] [secrets] [] []} {[get] [] [secrets] [] []} {[watch] [] [secrets] [] []} {[list] [] [secrets] [] []} {[watch] [] [serviceaccounts] [] []} {[list] [] [serviceaccounts] [] []}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi /swaggerapi/* /version]}] ruleResolutionErrors=[]
Error from server (Forbidden): error when creating "install/kubernetes/istio-rbac-beta.yaml": clusterroles.rbac.authorization.k8s.io "istio-sidecar" is forbidden: attempt to grant extra privileges: [{[get] [istio.io] [istioconfigs] [] []} {[watch] [istio.io] [istioconfigs] [] []} {[list] [istio.io] [istioconfigs] [] []} {[get] [extensions] [thirdpartyresources] [] []} {[watch] [extensions] [thirdpartyresources] [] []} {[list] [extensions] [thirdpartyresources] [] []} {[update] [extensions] [thirdpartyresources] [] []} {[get] [extensions] [ingresses] [] []} {[watch] [extensions] [ingresses] [] []} {[list] [extensions] [ingresses] [] []} {[update] [extensions] [ingresses] [] []} {[get] [] [configmaps] [] []} {[watch] [] [configmaps] [] []} {[list] [] [configmaps] [] []} {[get] [] [pods] [] []} {[watch] [] [pods] [] []} {[list] [] [pods] [] []} {[get] [] [endpoints] [] []} {[watch] [] [endpoints] [] []} {[list] [] [endpoints] [] []} {[get] [] [services] [] []} {[watch] [] [services] [] []} {[list] [] [services] [] []}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi /swaggerapi/* /version]}] ruleResolutionErrors=[]

I also tried adding more permissions to my user in addition to "owner", but that didn't help.

Ingress is constantly warning about some JSON schema violation

[2017-05-30 12:30:59.264][14][warning][router] rds: fetch failure: JSON at lines 21-27 does not conform to schema.
 Invalid schema: #/properties/routes
 Schema violation: type
 Offending document key: #/routes
[2017-05-30 12:31:00.463][14][warning][router] rds: fetch failure: JSON at lines 21-27 does not conform to schema.
 Invalid schema: #/properties/routes
 Schema violation: type
 Offending document key: #/routes
[2017-05-30 12:31:01.762][14][warning][router] rds: fetch failure: JSON at lines 21-27 does not conform to schema.
 Invalid schema: #/properties/routes
 Schema violation: type
 Offending document key: #/routes
[2017-05-30 12:31:03.490][14][warning][router] rds: fetch failure: JSON at lines 21-27 does not conform to schema.
 Invalid schema: #/properties/routes
 Schema violation: type
 Offending document key: #/routes

Output of kubectl get deployment istio-ingress -o=yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"istio-ingress","namespace":"default"},"spec":{"replicas":1,"template":{"metadata":{"annotations":{"alpha.istio.io/sidecar":"ignore"},"labels":{"istio":"ingress"}},"spec":{"containers":[{"args":["proxy","ingress","-v","2"],"env":[{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}}],"image":"docker.io/istio/proxy_debug:0.1.5","imagePullPolicy":"Always","name":"istio-ingress","ports":[{"containerPort":80},{"containerPort":443}]}],"serviceAccountName":"istio-ingress-service-account"}}}}
  creationTimestamp: 2017-05-30T10:50:18Z
  generation: 1
  labels:
    istio: ingress
  name: istio-ingress
  namespace: default
  resourceVersion: "26658"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/istio-ingress
  uid: c53613c2-4525-11e7-8884-0aa00da65f28
spec:
  replicas: 1
  selector:
    matchLabels:
      istio: ingress
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        alpha.istio.io/sidecar: ignore
      creationTimestamp: null
      labels:
        istio: ingress
    spec:
      containers:
      - args:
        - proxy
        - ingress
        - -v
        - "2"
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: docker.io/istio/proxy_debug:0.1.5
        imagePullPolicy: Always
        name: istio-ingress
        ports:
        - containerPort: 80
          protocol: TCP
        - containerPort: 443
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: istio-ingress-service-account
      serviceAccountName: istio-ingress-service-account
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2017-05-30T10:50:18Z
    lastUpdateTime: 2017-05-30T10:50:18Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Servicegraph fails to render if service name contains dash/minus/"-"

I have two services named 'lw-erp' and 'lw-controller'.

When going to the servicegraph at http://localhost:8088/dotviz I get an empty page.

I looked at the source code of the page; the embedded digraph is:

digraph "istio-servicegraph" {
unknown -> lw-erp [label="qps: 0.000000, version: v1"];
unknown -> lw-controller [label="qps: 0.000000, version: v1"];
unknown [label="unknown"];
lw-erp [label="lw-erp"];
lw-controller [label="lw-controller"];
}

The JavaScript console shows:

Uncaught Error: syntax error in line 3 near '-'
    at render (viz-lite.js:1072)
    at Viz (viz-lite.js:1058)
    at dotviz:25

I tested the digraph with http://www.webgraphviz.com/ and it reports:

Error: :2: syntax error near line 2 context: unknown -> >>> lw- <<< erp [label="qps: 0.000000, version: v1"]; 

If I change the digraph to:

digraph "istio-servicegraph" {
unknown -> "lw-erp" [label="qps: 0.000000, version: v1"];
unknown -> "lw-controller" [label="qps: 0.000000, version: v1"];
unknown [label="unknown"];
"lw-erp" [label="lw-erp"];
"lw-controller" [label="lw-controller"];
}

where I put the names in double quotes (https://stackoverflow.com/questions/14958346/dot-dash-in-name), then it works and I get the expected result.


So it seems Istio servicegraph should put quotes around the node names to avoid errors.

404 trying to route kubernetes ingress host:

I'm trying to get basic Kubernetes ingress routing on a host rule:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - host: docs.testna.me
    http:
       paths:
       - backend:
           serviceName: docs
           servicePort: 8889

I'm getting:

[2017-06-15T20:51:27.033Z] "GET / HTTP/1.1" 404 NR 0 0 0 - "172.17.0.1" "curl/7.43.0" "300e8073-b30a-9f3c-9bbe-d922b893516e" "docs.testna.me:30794" "-"

The service works fine with path routing. What am I missing?
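
One detail stands out in the access log: the matched authority is docs.testna.me:30794, i.e. the NodePort is part of the Host header. A probe that strips the port (a diagnostic sketch, not a fix) would show whether the virtual-host match is port-sensitive:

curl -v -H "Host: docs.testna.me" http://docs.testna.me:30794/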

Issues with mixer configuration

Hi!

I have deployed an instance with istio-auth and everything is working fine except for a mixer rule I wrote. I'm using the following configuration files, and I'm getting the following error message back when issuing an HTTP request. What am I missing?

INTERNAL:unable to resolve config: 1 error occurred:

* unresolved attribute source.labels

#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the License for the specific language governing permissions and
#   limitations under the License.

##################################################################################################
# httpbin service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
  selector:
    app: httpbin
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: httpbin-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      imagePullSecrets:
      - name: gcr-json-key
      containers:
      - image: docker.io/citizenstig/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 8000

type: route-rule
name: httpbin-default
namespace: default
spec:
  destination: httpbin.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: v1
    weight: 100

rules:
- selector: source.labels["version"] == "v1"
  aspects:
  - kind: quotas
    params:
      quotas:
      - descriptorName: RequestCount
        maxAmount: 1
        expiration: 1s

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - http:
      paths:
      - path: /.*
        backend:
          serviceName: httpbin
          servicePort: 8000

Istio RBAC install failed on RBAC cluster

Running Kubernetes 1.6.2 on GKE

It looks like RBAC is supported:

$ kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1beta1

error when applying istio-rbac-beta.yaml

$ kubectl apply -f install/kubernetes/istio-rbac-beta.yaml
rolebinding "istio-manager-admin-role-binding" configured
rolebinding "istio-ca-role-binding" configured
rolebinding "istio-ingress-admin-role-binding" configured
rolebinding "istio-sidecar-role-binding" configured
Error from server (Forbidden): error when creating "install/kubernetes/istio-rbac-beta.yaml": clusterroles.rbac.authorization.k8s.io "istio-manager" is forbidden: attempt to grant extra privileges: [{[*] [istio.io] [istioconfigs] [] []} {[*] [istio.io] [istioconfigs.istio.io] [] []} {[*] [extensions] [thirdpartyresources] [] []} {[*] [extensions] [thirdpartyresources.extensions] [] []} {[*] [extensions] [ingresses] [] []} {[*] [] [configmaps] [] []} {[*] [] [endpoints] [] []} {[*] [] [pods] [] []} {[*] [] [services] [] []}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi /swaggerapi/* /version]}] ruleResolutionErrors=[]
Error from server (Forbidden): error when creating "install/kubernetes/istio-rbac-beta.yaml": clusterroles.rbac.authorization.k8s.io "istio-ca" is forbidden: attempt to grant extra privileges: [{[create] [] [secrets] [] []} {[get] [] [secrets] [] []} {[watch] [] [secrets] [] []} {[list] [] [secrets] [] []} {[watch] [] [serviceaccounts] [] []} {[list] [] [serviceaccounts] [] []}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi /swaggerapi/* /version]}] ruleResolutionErrors=[]
Error from server (Forbidden): error when creating "install/kubernetes/istio-rbac-beta.yaml": clusterroles.rbac.authorization.k8s.io "istio-sidecar" is forbidden: attempt to grant extra privileges: [{[get] [istio.io] [istioconfigs] [] []} {[watch] [istio.io] [istioconfigs] [] []} {[list] [istio.io] [istioconfigs] [] []} {[get] [extensions] [thirdpartyresources] [] []} {[watch] [extensions] [thirdpartyresources] [] []} {[list] [extensions] [thirdpartyresources] [] []} {[get] [] [configmaps] [] []} {[watch] [] [configmaps] [] []} {[list] [] [configmaps] [] []} {[get] [] [pods] [] []} {[watch] [] [pods] [] []} {[list] [] [pods] [] []} {[get] [] [endpoints] [] []} {[watch] [] [endpoints] [] []} {[list] [] [endpoints] [] []} {[get] [] [services] [] []} {[watch] [] [services] [] []} {[list] [] [services] [] []}] user=&{[email protected]  [system:authenticated] map[]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /swaggerapi /swaggerapi/* /version]}] ruleResolutionErrors=[]

Minikube with Istio Gateway Connection Refused

I'm trying to get a local kubernetes cluster running with Minikube and Istio. I followed the instructions found in the istio docs here: https://istio.io/docs/tasks/installing-istio.html

Then I followed the steps to install the sample BookInfo sample here: https://istio.io/docs/samples/bookinfo.html

However, when I try to curl the Gateway URL, I get a connection refused error. All my pods and services appear to be running. Here is the result of the kubectl get pods command:

NAME                             READY     STATUS    RESTARTS   AGE
details-v1-1932527472-ggpf1      2/2       Running   0          8m
grafana-1261931457-d7wwx         1/1       Running   0          12m
istio-ca-3887035158-hnmkr        1/1       Running   0          12m
istio-egress-1920226302-vx1ml    1/1       Running   0          12m
istio-ingress-2112208289-kkblh   1/1       Running   0          12m
istio-manager-2910860705-qj8wv   2/2       Running   0          12m
istio-mixer-2335471611-hnnsz     1/1       Running   0          12m
productpage-v1-241699992-kl5mt   2/2       Running   0          8m
prometheus-3067433533-mdmp5      1/1       Running   0          12m
ratings-v1-2565146534-112g5      2/2       Running   0          8m
reviews-v1-2536835021-fp16t      2/2       Running   0          8m
reviews-v2-3299280847-x687f      2/2       Running   0          8m
reviews-v3-4061726673-6f4gb      2/2       Running   0          8m
servicegraph-3127588006-zc1w4    1/1       Running   0          12m

Here is the result of the kubectl get services command:

NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)                       AGE
details         10.0.0.151   <none>        9080/TCP                      10m
grafana         10.0.0.243   <pending>     3000:32076/TCP                14m
istio-egress    10.0.0.22    <none>        80/TCP                        14m
istio-ingress   10.0.0.96    <pending>     80:31126/TCP,443:30916/TCP    14m
istio-manager   10.0.0.90    <none>        8080/TCP,8081/TCP             14m
istio-mixer     10.0.0.68    <none>        9091/TCP,9094/TCP,42422/TCP   14m
kubernetes      10.0.0.1     <none>        443/TCP                       14m
productpage     10.0.0.139   <none>        9080/TCP                      10m
prometheus      10.0.0.95    <pending>     9090:32474/TCP                14m
ratings         10.0.0.110   <none>        9080/TCP                      10m
reviews         10.0.0.197   <none>        9080/TCP                      10m
servicegraph    10.0.0.230   <pending>     8088:32648/TCP                14m

Then I run these commands:

export GATEWAY_URL=$(kubectl get po -l istio=ingress -o 'jsonpath={.items[0].status.hostIP}'):$(kubectl get svc istio-ingress -o 'jsonpath={.spec.ports[0].nodePort}')
curl -o /dev/null -s -w "%{http_code}\n" http://${GATEWAY_URL}/productpage

The response I get is 000. Hitting the endpoint with my browser gives me a connection refused error.

Here is my $GATEWAY_URL: 192.168.99.100:31448

And here are the results of running curl -v http://$GATEWAY_URL/productpage:

*   Trying 192.168.99.100...
* connect to 192.168.99.100 port 31448 failed: Connection refused
* Failed to connect to 192.168.99.100 port 31448: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 192.168.99.100 port 31448: Connection refused

I had this working at some point, and I have no idea where it broke down the line. Any help would be greatly appreciated! Please let me know if you need any more information.

Version Information

Minikube

minikube version: v0.19.1

Kubectl

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T20:41:07Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-05-09T23:22:45Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}

Istio

istioctl version:
Version: 0.1.5
GitRevision: 21f4cb4
GitBranch: master
User: jenkins@ubuntu-16-04-build-de3bbfab70500
GolangVersion: go1.8
KubeInjectHub: docker.io/istio
KubeInjectTag: 0.1


apiserver version:

Version: 0.1.5
GitRevision: 21f4cb4
GitBranch: master
User: jenkins@ubuntu-16-04-build-de3bbfab70500
GolangVersion: go1.8.1

What I've Tried So Far

I've tried deleting my minikube machine and even removing the entire ~/.minikube/ directory and re-installing. I've also tried editing the istio.yaml file, changing the ingress service from LoadBalancer to NodePort and uncommenting the nodePort entry.
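
One more cross-check that can help here (a sketch; minikube service is a stock minikube command) is to ask minikube directly which URL it exposes for the ingress and compare it with the computed GATEWAY_URL:

minikube service istio-ingress --url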

Support for StatefulSets

Hello!

I was hoping to use Istio with my Kafka/Zookeeper clusters but after trying the inject command and reading through the docs, I see that StatefulSet is not a supported type.

Is this in the works? Is StatefulSet something that I should even be using with Istio?

istioctl unable to find configmap when deploying application

Trying to deploy an application, I get the following error:

kubectl -n dev create -f <(istioctl -n dev kube-inject -f foo.yaml)
Error: Istio configuration not found. Verify istio configmap is installed in namespace "dev" with `kubectl get -n dev configmap istio`
error: no objects passed to create

I ran the recommended command and it shows that the config map does exist:

bash-3.2$ kubectl get -n dev configmap istio
NAME      DATA      AGE
istio     1         12m

I've tried removing and reinstalling istio, which had no effect. I have seen this on k8s 1.6.2 and 1.6.6.

istioctl version:

Version: 0.1.6
GitRevision: dab2033
GitBranch: release-0.1
User: jenkins@ubuntu-16-04-build-12ac793f80be71
GolangVersion: go1.8
KubeInjectHub: docker.io/istio
KubeInjectTag: 0.1

apiserver version:

Version: 0.2.0-ac94a44
GitRevision: ac94a44
GitBranch: master
User: jenkins@ubuntu-16-04-build-12744189088e0f
GolangVersion: go1.8.1
