api's Introduction

Istio

Istio is an open source service mesh that layers transparently onto existing distributed applications. Istio’s powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio is the path to load balancing, service-to-service authentication, and monitoring – with few or no service code changes.

  • For in-depth information about how to use Istio, visit istio.io
  • To ask questions and get assistance from our community, visit Github Discussions
  • To learn how to participate in our overall community, visit our community page

You'll find many other useful documents on our Wiki.

Introduction

Istio is an open platform for providing a uniform way to integrate microservices, manage traffic flow across microservices, enforce policies and aggregate telemetry data. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes.

Istio is composed of these components:

  • Envoy - Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.

    Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

  • Istiod - The Istio control plane. It provides service discovery, configuration and certificate management. It consists of the following sub-components:

    • Pilot - Responsible for configuring the proxies at runtime.

    • Citadel - Responsible for certificate issuance and rotation.

    • Galley - Responsible for validating, ingesting, aggregating, transforming and distributing config within Istio.

  • Operator - Provides user-friendly options to operate the Istio service mesh.
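To make the layer-7 routing that the Envoy sidecars carry out concrete, here is a minimal sketch of an Istio VirtualService; the service name, subsets, and header value are hypothetical:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-route        # hypothetical resource name
spec:
  hosts:
  - reviews                  # hypothetical in-mesh service
  http:
  - match:
    - headers:
        end-user:
          exact: jason       # requests from this user go to v2
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1           # everyone else gets v1
```

The sidecars evaluate rules like these on every request, which is what allows routing changes without touching service code.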

Repositories

The Istio project is divided across a few GitHub repositories:

  • istio/api. This repository defines component-level APIs and common configuration formats for the Istio platform.

  • istio/community. This repository contains information on the Istio community, including the various documents that govern the Istio open source project.

  • istio/istio. This is the main code repository. It hosts Istio's core components, install artifacts, and sample programs. It includes:

    • istioctl. This directory contains code for the istioctl command line utility.

    • operator. This directory contains code for the Istio Operator.

    • pilot. This directory contains platform-specific code to populate the abstract service model, dynamically reconfigure the proxies when the application topology changes, as well as translate routing rules into proxy specific configuration.

    • security. This directory contains security related code, including Citadel (acting as Certificate Authority), citadel agent, etc.

  • istio/proxy. The Istio proxy contains extensions to the Envoy proxy (in the form of Envoy filters) that support authentication, authorization, and telemetry collection.

  • istio/ztunnel. The repository contains the Rust implementation of the ztunnel component of Ambient mesh.

Issue management

We use GitHub to track all of our bugs and feature requests. Each issue we track has a variety of metadata:

  • Epic. An epic represents a feature area for Istio as a whole. Epics are fairly broad in scope and are basically product-level things. Each issue is ultimately part of an epic.

  • Milestone. Each issue is assigned a milestone. This is 0.1, 0.2, ..., or 'Nebulous Future'. The milestone indicates when we think the issue should get addressed.

  • Priority. Each issue has a priority which is represented by the column in the Prioritization project. Priority can be one of P0, P1, P2, or >P2. The priority indicates how important it is to address the issue within the milestone. P0 says that the milestone cannot be considered achieved if the issue isn't resolved.


Cloud Native Computing Foundation logo

Istio is a Cloud Native Computing Foundation project.

api's People

Contributors

ayj, bianpengyuan, diemtvu, douglas-reid, ericvn, frankbu, geeknoid, greghanson, guptasu, hklai, howardjohn, hzxuzhonghu, istio-testing, jacob-delgado, jasonwzm, kfaseela, kyessenov, linsun, mandarjog, nmittler, ostromart, ozevren, peterj, qiwzhang, ramaraochavali, rshriram, wora, yangminzhu, zackbutcher, zirain


api's Issues

ForeignServices cardinality issues

@frankbu @wora @rshriram

Currently ForeignServices has the following properties:

  • There can only be one resource of this type in the whole API
  • It defines a plural

The alternatives would be to (a) allow many such resources, or (b) allow one per domain.

Configuration access control

In 0.2, we're adding more flexible selectors to route rules that allow arbitrary matches on the source and destination services. The structure roughly corresponds to:

namespace: wildcard
source:
  name: x
destination:
  name: y

(the idea being that the source and destination inherit the rule namespace wildcard in every namespace)

How do we set up permissions to control who is allowed to edit this configuration? It seems we need some sort of meta-attribute reasoning: a decision is made based on which attributes are referenced in the rule and which predicate values correspond to those attributes. For this example, the permissions are:

  1. edit client-side configuration of service x in any namespace
  2. edit server-side configuration of service y in any namespace
  3. edit routes x to y in any namespace

Note that we are not yet enforcing bijection between namespaces and the organizational structure. A namespace here can be an application, an org, or a replica, and is not tied to an organizational unit.

Thoughts? @mandarjog @frankbu @rshriram @geeknoid

Revise DestinationPolicy

@rshriram @ZackButcher

DestinationPolicy is a decorator for a service declared in the service registry. Its responsibility is to describe common policy that should be used when talking to all endpoints within that service. As such it defines timeouts, retries, hashing of endpoints for affinity/sharding etc.

A DestinationPolicy can be defined to match a DNS wildcard prefix so that a single policy can be applied to many services within a cluster / namespace etc.
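A hypothetical sketch of what such a wildcard-scoped policy could look like; the field names follow the examples in this thread but are illustrative, not a final schema:

```yaml
# Sketch only: one policy covering every service under a namespace's DNS suffix.
kind: DestinationPolicy
destination: "*.default.svc.cluster.local"   # DNS wildcard prefix
policy:
- simple_cb:
    max_connections: 100                     # applied to each matching service
```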

TODO:

  • Align file and schema organization with the above concept & cleanup docs / comments
  • How does policy inherit
  • Address how weighting, hashing and sharding are addressed within a service
  • Evaluate how to create named subsets within a service to simplify routing.

API isn't tagged with versions

For the 0.2.2 release and onward we should use the 0.2.2 api tag instead of different SHAs in the various child repos.

If we can start by having the complete list of places referencing api, that'd be good. I know of pilot/WORKSPACE; what is the complete list?

P.S.: the same applies to mixerclient and proxy.
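One way a child repo could pin the api repo by release tag rather than by SHA, sketched as a Bazel WORKSPACE fragment; the rule arguments and repository name here are illustrative:

```python
# WORKSPACE (sketch): pin istio/api at a shared release tag instead of a commit SHA.
git_repository(
    name = "io_istio_api",
    remote = "https://github.com/istio/api.git",
    tag = "0.2.2",   # the same tag referenced from every child repo
)
```

With every consumer on the same tag, bumping the API version becomes a single, auditable change per repo.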

Establish API style guide for Istio

As we are creating more and more Istio APIs, I think we should establish a formal API style guide for Istio APIs. The same guide could also be used by other companies building APIs on top of the Istio platform.

The reason for having an API style guide is to ensure consistency and minimize friction across APIs, such as HTTPRequest vs HttpRequest. For certain concepts, such as country_code vs region_code, it would be impossible for a client to translate data if two APIs use different standards.
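To make the HTTPRequest vs HttpRequest point concrete, a sketch following Google's guide, which treats acronyms as ordinary words in CamelCase names; the message and field are illustrative:

```protobuf
// Preferred: acronyms keep only a leading capital in CamelCase names,
// and one term (e.g. region_code) is used consistently across all APIs.
message HttpRequest {
  string region_code = 1;
}

// Avoid: HTTPRequest, XMLHTTPBody, or mixing country_code and region_code
// for the same concept across different APIs.
```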

Because Istio APIs use proto3 and gRPC, I recommend we use Google's API Design Guide as the baseline. Because it was developed along with proto3 and gRPC, they are naturally very compatible. It has been widely used by hundreds of products over the past 4 years and has reached great stability at this time.

Recently, the Envoy community established its own style guide based on Google's API Design Guide: https://github.com/envoyproxy/data-plane-api/blob/master/STYLE.md. I think the Istio community can do the same. A consistent developer experience across Envoy, Istio, proto3, gRPC, and Google APIs will provide great long-term value.

Update proto comments to conform to protodoc syntax

With plans to open source Istio's current docgen tooling on ice for the foreseeable future, we need an open source doc generation tool. The best tool at hand is likely protodoc, written for Envoy's proto APIs.

We need to update our existing proto comments to conform to protodoc's syntax, and use protodoc to generate our documentation from here on out.

Change ClusterId to a string, use as opaque config reference.

In the proxy config we currently have a ClusterId which is a complex type, and is expected to match exactly in multiple places in the config. This seems onerous, and instead we should have a simple string that is used for matching between multiple places in the config, and leave the tags part to the UpstreamCluster definition.

For example:

message UpstreamCluster {
  string cluster_id = 1;
  repeated string tags = 2;

  string health_check_endpoint = 3;
  ...
}

message WeightedCluster {
  // Must match a cluster_id from an UpstreamCluster definition.
  string dst_cluster = 1;
  uint32 weight = 2;
}

Proto API definitions should not use gogoproto

Mixer API definitions use language and implementation specific gogoproto options.

Mixer server implementation details are therefore leaked onto the API surface.

When generating code in other supported languages, these options are at best irrelevant and at worst make the code difficult to compile.

We should not use language specific options in the published proto spec.
The language and implementation specific options should reside with the server implementation.
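For concreteness, a before/after sketch of the kind of gogoproto option that leaks Go implementation detail into the spec; the message and field are illustrative, and the two variants are shown side by side for comparison:

```protobuf
// Before: a Go-specific codegen option embedded in the public API surface.
import "gogoproto/gogo.proto";

message Params {
  Value value = 1 [(gogoproto.nullable) = false];
}

// After: the published spec stays language-neutral; Go-specific generation
// choices (nullability, custom types, etc.) live with the server implementation.
message CleanParams {
  Value value = 1;
}
```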

Standardize mixer and proxy protobuf definitions in istio/api

@vadimeisenbergibm commented on Mon Aug 14 2017

Currently, mixer uses "Required", while proxy uses "REQUIRED". It should be "Required" in both.


@vadimeisenbergibm commented on Mon Aug 14 2017

Following @ZackButcher's proposal.


@ldemailly commented on Wed Aug 16 2017

Similar to the target/destination issue? We should get all the renames in before 0.2?


@rshriram commented on Wed Aug 16 2017

please move to istio/api


@sakshigoel12 commented on Wed Aug 16 2017

rshriram@ - if you are using zenhub there is an option to move issues between repos on the right pane (last option). It would be easier to not close and reopen issues.

Alignment of Data Plane config and Control Plane config

Currently our data plane config (proxy/v1/config/cfg.proto) and our control plane config (mixer/v1/config/cfg.proto) are quite different.

The mixer config is based on modeling user intent in a generic way, using aspects, which have a selector based on a generic attribute model. This model is intended to be fully general, capable of modeling anything in the system. It was not meant as a "control plane only" configuration model. If we stray from this on any user-facing configuration models, I'd like to understand why and make sure we have very good reasons for doing so.

The data plane config is currently very specific to the problem it is trying to solve, with lots of hard-coded decisions made directly in the configuration model. This is troubling for a number of reasons, but the most basic is that it is inconsistent with the configuration philosophy of the rest of Istio, where we specify intent in a uniform way. The benefit of this is that the proxy does not need to have a general purpose selection model, and can have potentially higher performance. I'm not sure about the performance aspect (we can "compile" selectors down into equivalent code if needed), but a hand-coded selection based on a hand-coded API is certainly easier to get started with than a general selection model.

So I'm fine saying that we stick to a non-general-purpose model until we get the general purpose model in place, but we should agree on the end goal, that there is a uniform configuration model for the system as a whole, and that we can feed various artifacts (such as annotations on k8s native resources) into that generic model to produce a unified configuration object for any particular service.

So what would this look like in the short and medium term?

In the short term, we should be as close as we can to the end model while keeping the selector hand-coded. So that would look like this:

message ProxyConfig {
  string subject = 1;
  string revision = 2;
  repeated RoutingRule rules = 3;
  repeated UpstreamCluster clusters = 4;
}

message RoutingRule {
  RoutingSelector selector = 1;
  repeated RoutingAspect aspects = 2;
  repeated RoutingRule rules = 3;
}

message RoutingSelector {
  L4Selector l4_selector = 1;
  HttpSelector http_selector = 2;
}

message L4Selector {
  ... // Existing L4MatchAttributes
}

message HttpSelector {
  ... // Existing HttpMatchCondition
}

message RoutingAspect {
  string kind = 1;
  map<string, string> inputs = 3;
  google.protobuf.Struct params = 4;
}

Two of the RoutingAspects that we would build in would be:

message WeightedClusterAspect {
  ClusterId dst_cluster = 1;
  uint32 weight = 2;
}

message FaultInjectionAspect {
  ... // L4FaultInjection contents
}

This also gives us a nice place to add more routing aspects later, and lets partners plug in their own aspects once there is a plugin model.

In the long term, we'd drop the separate selectors and just have a string, and document what attributes are available for routing decisions. Then the proxy would have either its own implementation of the selector execution, or we'd limit the selectors available in the proxy to some simplistic functionality and make it a config validation error to use things that aren't supported.

There is a separate interesting discussion, which is how to model what we're currently putting in UpstreamCluster, but for general config modeling. How do we model reusable chunks of configuration, such as defining a quota group, an upstream cluster, etc? I'll open a separate issue about that too, let's keep this one about aligning the configuration models.

Consolidate proxy and mixer config

We haven't made an effort to ensure that mixer and proxy configurations have a similar look-and-feel. We need to go over both config schemas and ensure they use common language in comments, similar field names for similar concepts, similar style (e.g. the mixer uses descriptor objects throughout, does the proxy?), etc.

I don't think this needs to be done pre-alpha, but it almost certainly is needed pre-beta. The earlier the better since config changes will be breaking for our users.

Add Null Type to Mixer ValueTypes

Most other attribute processing frameworks have a notion of NULL type. We do not have this in Istio.

This will bring Istio attributes in line with other internal efforts.

DestinationRule tls settings ambiguity

  1. Why is the mode field in TLSSettings REQUIRED?

     // REQUIRED: Indicates whether connections to this port should be secured
     // using TLS. The value of this field determines how TLS is enforced.
     TLSmode mode = 1;

     Since the TLSSettings themselves are NOT required, there must be a default mode (DISABLE) when the tls settings are not provided at all.

  2. Why is there even a DISABLE value for mode? It seems the only reason to provide a TLSSettings field is to configure tls (SIMPLE or MUTUAL); the other fields in TLSSettings only apply in those cases. So "no tls settings == disable" seems to be all that's needed.
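For reference, a sketch of the shape under discussion, assuming the current DestinationRule field names; the resource name and host are hypothetical:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-tls                        # hypothetical
spec:
  host: reviews.default.svc.cluster.local  # hypothetical
  trafficPolicy:
    tls:
      mode: SIMPLE   # originate TLS to the upstream
      # Under the proposal above, omitting the entire tls block would
      # imply plain text, making an explicit DISABLE value redundant.
```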

Update mixer descriptor protos to be consistent

We have a mix of descriptors that reflect our current thinking when they were created. As a result, we have a slew of different styles. We've settled on the style used by the MetricDescriptor (the QuotaDescriptor standardized on this), the rest of the descriptors need to be brought up to date.

Should we include .pb.go files in this repository?

I think there is a lot of value in providing generic Go tooling support, in terms of IDE support and a simple go get working. Every Go tool expects the code to be buildable on its own without requiring the Bazel build tool. Why don't we follow the convention and include the generated code here?
We can check consistency with a simple CI test validating code generation.
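Such a CI check could be as simple as regenerating and diffing; a sketch, where the make target name is hypothetical:

```sh
# Regenerate the .pb.go files, then fail the build if the checked-in
# copies are stale relative to the .proto sources.
make generate-protos   # hypothetical target wrapping protoc/bazel
git diff --exit-code -- '*.pb.go' || {
  echo "generated code is out of date; run 'make generate-protos' and commit" >&2
  exit 1
}
```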

Recommend adding back the Duration attribute value type

Given the use cases, Duration is likely to be used often for things like latency, and it is probably awkward to use double instead. If we are going to support Timestamp subtraction, it is better to produce a Duration than a double. And if we don't use double for both timestamps and durations, then we should have explicit types for both.
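With proto3 this is a well-known type, so supporting it is mostly a matter of allowing it in the attribute vocabulary; for example (the message here is illustrative):

```protobuf
syntax = "proto3";

import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";

// Illustrative: subtracting two Timestamps naturally yields a Duration.
message RequestRecord {
  google.protobuf.Timestamp start_time = 1;
  google.protobuf.Duration latency = 2;  // e.g. 0.250s, rather than a bare double
}
```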

External service use for BYON

The current ExternalService seems like a perfect fit for BYON (allowing use of custom names backed by specified services), but may need additional documentation/fields to clarify how internal services or destinations can be added to the set. I'm not sure if there is a better place in the new API to define this.

Design for safely manipulating secrets

@ayj commented on Wed Jun 14 2017

Galley needs a solution to allow producers and consumers of Istio configuration to manipulate references to secrets. The secrets themselves should not be stored in Galley.


@ayj commented on Fri Jun 16 2017

@istio/auth-hackers, @wenchenglu


@ldemailly commented on Fri Jun 16 2017

The secrets themselves should not be stored in Galley.

where then / why not?

shouldn't they just be protected ?


@myidpt commented on Fri Jun 16 2017

Are they K8s secrets? Can you be more specific about manipulating the references to secrets? Does it grant producers/consumers permissions to access the secrets or something else?


@ayj commented on Fri Jun 16 2017

From Proposal: Galley (Istio Configuration):

The Istio configuration model may include secrets which need to be stored in a secured vault and not the configuration storage. The configuration component should support the notion of secrets as a first class thing and provide necessary APIs to enable producers and consumers of Istio configuration to handle references to secrets.

In particular, it's desirable to avoid re-inventing a secret store and instead use an existing system, e.g. https://www.vaultproject.io, k8s secrets. Istio configuration resources would reference secrets in these stores without storing the secrets themselves in Galley. For example,

  • Mixer needs secure credentials to program backend infrastructure.
  • Operator (producer) creates Istio configuration along with reference to secure credentials.
  • Mixer (consumer) uses the secret reference to get the actual secret via the secret store. Mixer can then program the backend infrastructure with corresponding configuration.

Galley would not be responsible for managing permission model of secrets.
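A hypothetical sketch of what such a reference-only resource might look like; every field name here is invented for illustration:

```yaml
# Sketch only: Galley stores the reference, never the secret material itself.
kind: AdapterCredentials        # hypothetical kind
metadata:
  name: backend-creds
spec:
  secretRef:
    store: vault                # e.g. Vault or k8s secrets
    path: secret/mixer/backend-api-key
```

The consumer (Mixer, in the example above) resolves the reference against the external store at use time, so the secret never transits or rests in Galley.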


@ldemailly commented on Mon Jun 19 2017

There is a bootstrap issue there, and I don't think we can avoid managing secrets, as that's core to the functionality of Istio; but happy to be shown we don't.

Also, if we have a separate secret store (is it in auth?), we need an implementation for rawvm and hybrid.


@wenchenglu commented on Mon Jun 19 2017

There are different types of secrets we need to manage:
(1) key/cert for establishing mTLS for service-to-service auth
(2) credentials (api key, jwt, auth token, username/password, etc.) to access backends or remote services

Is this issue mostly about the case (2)? I agree that Vault is a good option considering its popularity as a secret store. For case (1), Vault is optional since key/cert can be stored in local memory of each node. We can still use Vault + PKI backend as CA, which is not mainly for secret storage in this case though.


@myidpt commented on Mon Jun 19 2017

So if the configuration is just a reference, Galley is not able to verify if the producer really has permission to access the secret, right? After the operator writes the secret in Vault, how does the operator grant permission to Mixer/service to access the secret?


@sakshigoel12 commented on Wed Aug 16 2017

Is this for 0.2 or 0.3?


@wenchenglu commented on Wed Aug 16 2017

not 0.2

Change the service config to allow multiple adapters to be specified for a single aspect

Right now the service config ties aspect configuration to a single adapter. This means that to configure e.g. two metrics backends, I have to duplicate substantial configuration:

rules:
- selector: true
  aspects:
  - kind: metrics
    adapter: "statsd"
    params:
      metrics:
      - descriptor_: "request_count"
        value: "1"
        labels:
          source: "source.name"
          target: "target.name"
          service: "api.name"
          response_code: "response.code"
      - descriptor_:  "request_latency"
        value: "response.latency"
        labels:
          source: "source.name"
          target: "target.name"
          service: "api.name"
  - kind: metrics
    adapter: "prometheus"
    params:
      metrics:
      - descriptor_: "request_count"
        value: "1"
        labels:
          source: "source.name"
          target: "target.name"
          service: "api.name"
          response_code: "response.code"
      - descriptor_:  "request_latency"
        value: "response.latency"
        labels:
          source: "source.name"
          target: "target.name"
          service: "api.name"

We should change the adapter field to be a repeated string instead, so that aspect level configuration can be tied to multiple adapters:

rules:
- selector: true
  aspects:
  - kind: metrics
    adapters:
    - statsd
    - prometheus
    params:
      metrics:
      - descriptor_: "request_count"
        value: "1"
        labels:
          source: "source.name"
          target: "target.name"
          service: "api.name"
          response_code: "response.code"
      - descriptor_:  "request_latency"
        value: "response.latency"
        labels:
          source: "source.name"
          target: "target.name"
          service: "api.name"

Longer term, I think we should think about separating the configuration for which adapters are executed for a given aspect under a specific selector from the configuration of that aspect under that selector.

Proposal: disallow unsigned integers for istio.* protobuf namespace

I would like to formally suggest to disallow unsigned integers for istio.* protobuf namespace for the following reasons.

  • If we allow unsigned integers, people will inevitably use mismatched signed and unsigned types for the same concept, such as a network port. This leads to undefined behavior in languages like C/C++/Java.
  • Istio attributes only support signed integers. Adding unsigned integers increases system complexity for no real benefit.
  • The Istio namespace is mostly for configs. The workflow for config is like this: proto schema => documentation => human => yaml => tools => proto config. The critical step is human => yaml. It is very unlikely that humans can reliably remember the data types, and mismatched types will confuse users in practice.
  • Many important systems, like Java and Stackdriver monitoring, don't support unsigned types at all. If we want to build a generic system, it is better to avoid this unnecessary risk.
  • Google has discouraged unsigned integers for over a decade, and we have not seen much downside across a wide range of products.

To avoid unnecessary complexity to the system, I think we should avoid unsigned type for istio namespace.
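A small sketch of the proposed convention; the message and field are illustrative:

```protobuf
// Preferred: a signed 32-bit integer even for values that are never
// negative, so every supported language maps the field the same way.
message ServerConfig {
  int32 port = 1;   // not uint32: Java and others have no unsigned types
}
```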

ca_certificate in destination_rule

The comment on the ca_certificate:

"If omitted, the proxy will not verify the server's certificate."

That's really bad for outbound traffic (and for pretty much everything else).

Should be:

"If omitted, the proxy will verify the server certificate using existing root CAs".

If we want to also allow an 'insecure' mode (no verification), we can add a separate flag, but it can't be the default.

Also not sure about 'subject_alt_names': for external traffic it could also match the common name, which is most commonly used (but I guess that's implicit). It would be worth clarifying the syntax for SANs; I assume for external traffic we will want to allow domain SANs, but internal (and some external) SANs may be URL type.
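For context, a sketch of how these fields fit together, assuming the current DestinationRule field names; the resource name, host, and file path are hypothetical:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: external-api            # hypothetical
spec:
  host: api.example.com         # hypothetical external service
  trafficPolicy:
    tls:
      mode: SIMPLE
      caCertificates: /etc/certs/root-ca.pem   # hypothetical path to root CA bundle
      subjectAltNames:
      - api.example.com         # DNS-type SAN to verify in the server cert
```

Under the proposal above, omitting caCertificates would fall back to verifying against existing root CAs rather than silently disabling verification.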

Use a repeated field instead of a map

It is better to use a repeated field. A map is very slow to build, even after we fix the random seed issue. In a couple of gdb sessions in istio/proxy, I noticed they were in the map destructor.

If we use a repeated field for our attributes, we can have one repeated field for all attribute types and use "oneof" for the value type.

It would look like:

repeated Attribute attributes = 3;

message Attribute {
  int32 name_index = 1;
  oneof value {
    int64 int64_value = 2;
    double double_value = 3;
    int32 string_index = 4;
    ...
  }
}

@geeknoid what do you think?

Unification of config API

Some decisions from a meeting:

  1. We do not need transactional semantics for the config updates, a regular key value store is sufficient for our purpose in Mixer and Manager
  2. We may need to explore PATCH semantics for updates to allow long-lasting rollouts (since parent config might be concurrently updated).
  3. Need to unify the key naming for config artifacts, in particular separate the query for mixer artifacts from the id of the artifact.

@andraxylia @mandarjog @ayj

Accessing Istio API

Is it possible to inject Envoy sidecars using an API (e.g. with curl) instead of using the CLI (kubectl apply -f <(istioctl kube-inject -f test.yaml))?
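One way to approach this with the raw Kubernetes API, sketched below; the API server address, token, and resource path are illustrative, and this assumes the workload is a Deployment in the default namespace:

```sh
# Produce the sidecar-injected manifest locally, then POST it directly
# to the Kubernetes API server instead of going through kubectl.
istioctl kube-inject -f test.yaml -o injected.yaml

curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/yaml" \
  --data-binary @injected.yaml \
  "https://$APISERVER/apis/apps/v1/namespaces/default/deployments"
```

Automatic injection via the mutating webhook removes the istioctl step entirely, since the API server injects the sidecar on admission.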

support for opaque config objects and references

@kyessenov commented on Thu Jul 13 2017

For transcoding, we need to store large configuration objects (protobuf descriptors) to provide transcoding at the proxy level. I'd like to have a way to store them in Galley and a naming scheme to get them back.


@jmuk commented on Mon Jul 17 2017

I think the final direction will be decided after @mandarjog is back, but I talked with @lizan a bit to clarify some details. Here's my understanding:

requirements:

  • users will write their .proto files.
  • Istio proxy would do the transcoding based on the descriptor data of the proto file.
    • note that the descriptors need the imports, which can be large.

So maybe a natural workflow would be

  1. users invoke protoc with --descriptor_set_out (and --include_imports). Upload the binary file into Galley.
  2. Galley should accept these binary files as-is. No transformations, no validations, it just behaves as a simple file storage.
  3. Proxy fetches the binary data from Galley and uses it for transcoding.
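Step 1 of that workflow as a command line, using protoc's real flags; the file names are illustrative:

```sh
# Bundle the service's .proto plus all of its imports into a single
# self-contained descriptor set, ready to upload to Galley.
protoc --include_imports \
       --descriptor_set_out=bookstore.pb \
       -I. bookstore.proto
```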

Actually, this breaks some assumptions Galley currently makes:

  • users write/upload their configurations in yaml format or json
  • Galley works with its "validator" -- the validator can convert the yaml config into certain binary. Galley doesn't care about the contents in the binary chunk.
  • Watcher clients (i.e. Mixer/Pilot. or Proxy?) will receive the converted binary data, handle the data as they like.

If Galley accepts config data that users write and performs transformation and validation server-side, it might be natural for Galley to accept the proto files and provide a "validator" that transforms them into binary descriptor data, although this doesn't fit quite well into the internal data structure exchanged between Galley and validators (i.e. the ConfigFile message).


@jmuk commented on Wed Jul 19 2017

@geeknoid may also be interested in how the descriptor data can fit into the config structure Galley will have.


@geeknoid commented on Thu Jul 20 2017

I think we can generalize the API surface to tolerate binary blobs.

The value in ingesting text is that Galley is in the validation business. We can ensure semantic validity of an individual chunk of state, and also provide referential integrity of that chunk relative to the rest of the config. This validation is simply not possible for blobs.


@ayj commented on Thu Aug 17 2017

No longer relevant with k8s apiserver alignment.


@kyessenov commented on Thu Aug 17 2017

Doesn't really answer my question - how do we store third party extensions in CRDs?


@ayj commented on Thu Aug 17 2017

Oops, looks like I closed the wrong issue.

The maximum resource size in Kubernetes is 1MB (etcd limitation). An opaque Istio CRD type for this purpose could work if the size limitation isn't an issue. Otherwise we'd need to carve up the opaque resource into smaller parts, compress/optimize resource format, or use an alternate storage backend.

Rename Aspect/Adaptor params to settings

Settings is the most popular term in this context, as in "configuration settings"; "configuration parameters" is more confusing. When we say function parameters, we normally refer to the parameter definitions, such as "int index". The actual value is more often referred to as a value or argument.

Clarity of destination policies

As part of revamping Istio configuration in 0.2, we’re looking at expanding and clarifying the destination policy definition. The destination policy decides the load balancing strategy and circuit-breaking per groups of network endpoints, as selected by a service and a label selector.
For example, a policy:

destination: reviews.default.svc.cluster.local
policy:
- labels:
    version: v1
  simple_cb:                                                                                                                                             
    max_connections: 100                                                                                                                                  

selects network endpoints of reviews labeled by version: v1 and applies a circuit breaker of max 100 connections to the group. The intention is that the route can refer to this group of endpoints and utilize the policy rule. However, there are several issues with this approach:

  • Policies apply only when a route rule's destination section uses exactly the label selector chosen in the policy. The route rule must refer to exactly version: v1 and not carry any extra label selectors, such as env: prod.
  • Policies are unique to each group of endpoints. Two separate policies for env: prod and for version: v1 are not combined into an env: prod, version: v1 policy. Moreover, if there is a need to apply a special policy for certain sources (for example, a more aggressive circuit breaker), the current form does not permit specializing the rule by source or having two different policies for the same destination.

My personal proposal to deal with this problem is to make the policy a general rule with match conditions for source and destination, plus a special precedence field to disambiguate conflicts. Here is an example:

kind: DestinationPolicy
name: my-policy
namespace: default
precedence: 100 # highest wins
source:
  name: ratings
destination:
  name: reviews
  labels:
    version: v1
simple_cb:
  max_connections: 100

This policy applies exclusively to requests from ratings instances to any reviews destination pods that match the label selector version: v1. That means that if a route rule has the following destination:

destination:
  name: reviews
  labels:
    env: prod
    version: v1

then the destination of this rule inherits the circuit-breaker policy above for all requests originating from the ratings service.
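The match-and-precedence semantics proposed above can be sketched in Go. This is a hedged illustration of the proposal, not a real Istio API: the struct fields and the `pick` function are names I've made up for this example.

```go
package main

import "fmt"

// DestinationPolicy mirrors the proposed shape: match conditions on
// source and destination plus a precedence to disambiguate conflicts.
type DestinationPolicy struct {
	Name       string
	Precedence int
	Source     string            // matching source service name ("" = any)
	DestName   string            // destination service name
	DestLabels map[string]string // label selector on destination endpoints
}

// labelsMatch reports whether every label the policy selects on is
// carried by the destination pods (a subset check, so extra labels
// like env: prod do not prevent a match).
func labelsMatch(want, have map[string]string) bool {
	for k, v := range want {
		if have[k] != v {
			return false
		}
	}
	return true
}

// pick returns the highest-precedence policy matching a request from
// source to a destination with the given name and labels.
func pick(policies []DestinationPolicy, source, dest string, labels map[string]string) *DestinationPolicy {
	var best *DestinationPolicy
	for i := range policies {
		p := &policies[i]
		if p.Source != "" && p.Source != source {
			continue
		}
		if p.DestName != dest || !labelsMatch(p.DestLabels, labels) {
			continue
		}
		if best == nil || p.Precedence > best.Precedence {
			best = p
		}
	}
	return best
}

func main() {
	policies := []DestinationPolicy{
		{Name: "my-policy", Precedence: 100, Source: "ratings", DestName: "reviews",
			DestLabels: map[string]string{"version": "v1"}},
		{Name: "default", Precedence: 1, DestName: "reviews"},
	}
	// A route destined for reviews with labels env=prod,version=v1 still
	// inherits my-policy, because the selector only requires version=v1.
	p := pick(policies, "ratings", "reviews", map[string]string{"env": "prod", "version": "v1"})
	fmt.Println(p.Name) // my-policy
}
```

The subset-match in `labelsMatch` is what lets the route's extra env: prod label coexist with a policy that selects only version: v1, addressing the first issue listed above.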

Thoughts? Ideas?
cc @rshriram @frankb

Define general purpose and optimized read-only API for Galley

The configuration component has two logical sets of APIs: the general purpose RESTful API and the more specialized read-only sync API for high throughput configuration consumers. Both are gRPC APIs.

The RESTful version of the general purpose API is provided by grpc-gateway, and later possibly through Envoy's gRPC-JSON transcoding. An overview of this method is documented in the reference below.

ref https://docs.google.com/document/d/13UqmYpEjNGT2xucIgWaTQIQUgQuAASeIdm8Ks3zp5Fk/edit#bookmark=id.1ttx3vf6ypj2

OS X Build broken

INFO: From Linking external/com_google_protobuf/libprotobuf_lite.a [for host]:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: bazel-out/host/bin/external/com_google_protobuf/libprotobuf_lite.a(arenastring.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: bazel-out/host/bin/external/com_google_protobuf/libprotobuf_lite.a(atomicops_internals_x86_msvc.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: bazel-out/host/bin/external/com_google_protobuf/libprotobuf_lite.a(io_win32.o) has no symbols
INFO: From Linking external/com_google_protobuf/libprotobuf.a [for host]:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: bazel-out/host/bin/external/com_google_protobuf/libprotobuf.a(gzip_stream.o) has no symbols
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: bazel-out/host/bin/external/com_google_protobuf/libprotobuf.a(error_listener.o) has no symbols
INFO: From ProtoCompile google/rpc/status.pb.go:
../../external/com_google_protobuf/src: warning: directory does not exist.
INFO: From ProtoCompile mixer/v1/attributes.pb.go:
../../external/com_github_gogo_protobuf: warning: directory does not exist.
../../external/com_google_protobuf/src: warning: directory does not exist.
../../external/com_github_googleapis_googleapis: warning: directory does not exist.
INFO: From ProtoCompile mixer/v1/template/extensions.pb.go:
../../external/com_google_protobuf/src: warning: directory does not exist.
ERROR: istio.io/api/mixer/v1/BUILD:3:1: GoCompile mixer/v1/lib/go_default_library.o failed (Exit 1).
2017/10/12 15:24:39 missing strict dependencies:
bazel-out/darwin_x86_64-fastbuild/genfiles/mixer/v1/check.pb.go: import of istio.io/api/google/rpc, which is not a direct dependency
INFO: Elapsed time: 142.443s, Critical Path: 12.10s

istio/api needs a build

Anything can be committed in a proto file, and there is no validation that it builds until istio/istio is built.

istio/api needs a Bazel build because it needs to compile the proto files.

On top of this, the build can run scripts/generate-protos.sh to update the generated files.

Retry Budgeting / Deadline propagation

Our ProxyConfig is currently point-to-point, meaning that we specify things in terms of timeouts and retries. To keep this from leading to cascading failures, we need to support deadline propagation (and possibly retry budgets).

I think the existing user controls make sense:

  • Users can specify a per-attempt timeout for routes.
  • Users can specify the number of attempts to make for each route.

The system will then support propagating deadlines as follows:

Client proxy, no existing deadline:

  for each retry attempt:
    inject the per-attempt timeout as the timeout header

Client proxy, existing deadline:

  read the timeout from the request (headers etc. TBD)
  deadline = current time + timeout
  for each retry attempt:
    next_timeout = min(deadline - current time, per-attempt timeout)
    if next_timeout <= 0, abort the request
    inject next_timeout as the timeout header

We'll also need to support deadline propagation in a standard way as part of a broader context propagation story, which we still need to figure out (for starters, we can just use gRPC context propagation).

Retry budgeting is harder; the best way to do this is with load budgets rather than retry budgets. But doing this generically is a non-trivial task, so I think we're best off punting on it for now.

Evaluate Subsetting of services

@rshriram @ZackButcher @kyessenov

Istio inherits its service registry from Kubernetes, Consul, etc., but we want to be able to apply customized DestinationPolicy or RoutingRules to subsets of services. While in theory we could ask deployers to push more service declarations into a service registry, this is likely to be cumbersome.

In particular we need to be able to model the following concepts:

  • Route traffic based on protocol or source metadata to a subset of endpoints within a service. A common use-case is routing to a specific version.

  • Attach a custom destination policy (timeouts, etc.) to a subset of endpoints within a service, and then choose those endpoints with a routing rule.

  • Create a weighted traffic distribution among endpoints within a service. Common use-cases are A/B testing and canarying.

  • Create named 'shards' of a service for hard/soft routing affinity based on some hash of protocol information.

It would also be desirable for subset names to be routable using SNI.
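The weighted-distribution and named-shard requirements above could be combined by hashing protocol information onto weighted, named subsets. A sketch with illustrative names (this is not a proposed API):

```go
package main

import "fmt"

// Subset names a group of endpoints within a service by label
// selector, with a relative traffic weight.
type Subset struct {
	Name   string
	Labels map[string]string
	Weight int // relative weight; weights are assumed to sum to 100
}

// pickSubset deterministically maps a hash of protocol information
// (e.g. a header or source identity) onto the weighted subsets,
// giving both weighted distribution and sticky shard affinity:
// the same hash always lands in the same subset.
func pickSubset(subsets []Subset, hash uint32) string {
	total := 0
	for _, s := range subsets {
		total += s.Weight
	}
	point := int(hash % uint32(total))
	for _, s := range subsets {
		if point < s.Weight {
			return s.Name
		}
		point -= s.Weight
	}
	return subsets[len(subsets)-1].Name
}

func main() {
	subsets := []Subset{
		{Name: "v1", Labels: map[string]string{"version": "v1"}, Weight: 90},
		{Name: "v2", Labels: map[string]string{"version": "v2"}, Weight: 10},
	}
	fmt.Println(pickSubset(subsets, 42)) // v1 (points 0-89)
	fmt.Println(pickSubset(subsets, 95)) // v2 (points 90-99)
}
```

Because the mapping is a pure function of the hash, canary weights can be shifted by editing the subset declarations alone, without touching the service registry.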

Define the structure of istio/api repo

github.com/googleapis/googleapis has a well-defined structure. We should do the same for the istio/api repo. For example, we could structure it like:

  • istio/mixer/v1
  • istio/config/v1
  • istio/attribute/v1
  • istio/adapter/authentication/v1

When we add more components, it will be easy to put them in the right place. We also want a clear versioning story: no breaking changes within a major version, no dependencies between two major versions of the same component, all new features defaulting to off, and a process for tagging Alpha/Beta/GA on individual features.

When many people contribute to the system, we need more guidelines to scale development. Such rules are simple to define and relatively easy to follow.
