
cel-spec's Introduction

Common Expression Language

The Common Expression Language (CEL) implements common semantics for expression evaluation, enabling different applications to more easily interoperate.

Key Applications

  • Security policy: organizations have complex infrastructure and need common tooling to reason about the system as a whole.
  • Protocols: expressions are a useful data type and require interoperability across programming languages and platforms.

Guiding philosophy:

  1. Keep it small & fast.
    • CEL evaluates in linear time, is mutation-free, and is not Turing-complete. This limitation is a feature of the language design, which allows the implementation to evaluate orders of magnitude faster than equivalently sandboxed JavaScript.
  2. Make it extensible.
    • CEL is designed to be embedded in applications, and is extensible via its context, through which the embedding software can supply functions and data.
  3. Developer-friendly.
    • The language is approachable to developers. The initial spec was based on the experience of developing Firebase Rules and usability testing many prior iterations.
    • The library itself and accompanying tooling should be easy to adopt by teams that seek to integrate CEL into their platforms.

The required components of a system that supports CEL are:

  • The textual representation of an expression as written by a developer. Its syntax is similar to expressions in C/C++/Java/JavaScript.
  • A binary representation of an expression. It is an abstract syntax tree (AST).
  • A compiler library that converts the textual representation to the binary representation. This can be done ahead of time (in the control plane) or just before evaluation (in the data plane).
  • A context containing one or more typed variables, often protobuf messages. Most use cases will use attribute_context.proto.
  • An evaluator library that evaluates the binary representation against the context and produces a result, usually a Boolean.

Example of boolean conditions and object construction:

// Condition
account.balance >= transaction.withdrawal
    || (account.overdraftProtection
    && account.overdraftLimit >= transaction.withdrawal  - account.balance)

// Object construction
common.GeoPoint{ latitude: 10.0, longitude: -5.5 }

For more detail, see the introduction and language definition documents in this repository.

Released under the Apache License.

Disclaimer: This is not an official Google product.


cel-spec's Issues

Heterogeneous equality

Since [1, "foo"] is a valid list and _==_ is defined on lists, [1, "foo"] == ["foo", 1] should evaluate to false. It would be least surprising if list equality was determined by elementwise equality. Equivalently, [x] == [y] should have the same meaning as x == y. Therefore, _==_ should have signature A x B --> bool.

Similarly, _!=_ should work on heterogeneous types.

Restricting equality to homogeneous types was meant to prevent user errors. After all, the heterogeneous comparison 1 == 1u surprisingly evaluates to false. However, the type checker can work with a stricter A x A --> bool signature for equality, catching these errors.
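
A few concrete cases under the proposed A x B --> bool semantics (a sketch, not settled behavior):

[1, "foo"] == ["foo", 1]   // false: elementwise, 1 == "foo" is false rather than an error
[x] == [y]                 // same meaning as x == y
1 == 1u                    // still false at runtime; a stricter checker would flag it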

We might want to make the ordering operators (_<_ and friends) heterogeneous too. We'll want an ordering across all values for deterministic map comprehensions, so why not expose it to users? The surprising consequences (e.g. if int < uint, then 1u < 2) can again be mitigated by having the type checker insist on homogeneous operands.

Use symbol to signal uninitialized message fields

We recently had a customer question on how to conditionally set a field, wanting to do something like: MyProto{ my_field: x ? some_value : null }. If the field is scalar, the conditional alternative can be the default value. If there is only one conditionally-set field, the conditional can be lifted up at the cost of duplicating the initialization of the other fields. But for multiple conditionally-set message fields, or lots of duplication, we have no good solution, other than the standard cop-out of an extension function.

Given that we're using null to signal a read from an unset field for wrapper types, we should consider using null for indicating "don't set this field" for message types. We'd have to do some work on the type checker to make this legal.
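
A sketch of the proposed semantics for multiple conditionally-set message fields (the field and variable names here are hypothetical):

// null would mean "leave the field unset" rather than "set it to null"
MyProto{
  first_msg_field: x ? value1 : null,
  second_msg_field: y ? value2 : null
}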

There is some ambiguity for Value fields: does MyProto { value_field: null } mean unset or set to the null_value option?

Current datetime macro

I like that the language is small, but I think a couple of convenience macros for the current datetime would be of great benefit. A simple use case that comes to mind would be a security expression saying "the user's last login was within the last 30 days", which translated would look like
getSeconds(dateTime.UtcNow() - user.lastLogin) <= 30 * 24 * 60 * 60

Is this out of the question?
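
In the meantime, a workaround is for the embedding application to bind the current time into the context as a variable (the name now below is the embedder's choice, not standard CEL):

// 2592000s = 30 * 24 * 60 * 60 seconds, i.e. 30 days
now - user.lastLogin <= duration("2592000s")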

Add notation and/or constructors for durations in days, hours, or minutes

The current duration constructor takes only a decimal string with a number of seconds, e.g. duration("5184000s"). There is desire for more handy units, e.g. duration("60d"). (Assuming we ignore leap seconds.)

The underlying protobuf well-known type goes down to nanoseconds, so we could potentially have "ms", "µs", and "ns" too.

A week is unambiguous, but month, quarter, year, etc. seem error-prone.

We could go full ISO 8601 and allow durations like "2h30m45s", though they can already be composed with addition, e.g. duration("2h") + duration("30m") + duration("45s"), which is clunkier.

We could alternatively have separate constructors, e.g. hours(7), maybe in an extension library.

Standard function for map lookup with default

A lookup of a non-existent key in a map yields an error, so a safe lookup must be guarded by a check like has(myMap.foo) && myMap['foo'] == 'bar' or 'foo' in myMap && myMap['foo'] == 'bar'. When working with nested maps, this stuttering gets even worse:

has(a.b) && has(a.b.c) ? a['b']['c'] == 'foo' : false

As a first step to making safe lookup more convenient, we could add a regular function .get() which takes a key and a default value and returns the map value if the key is present, otherwise the default value, so the above would be:

a.get('b', {}).get('c', '') == 'foo'

It could also work for lists, to avoid the size check.
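
For lists, the same hypothetical function would replace the usual size() guard:

// instead of: size(myList) > 0 && myList[0] == 'foo'
myList.get(0, '') == 'foo'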

A more comprehensive solution would require a macro that would capture any missing key/field/element errors and skip to a default instead, e.g.

get(a.b.c, '') == 'foo'

or

get(a.b.c == 'foo', false)

but care would need to be taken to make sure it wouldn't be used as a general try-catch construct. It would have to be equivalent to an expression using has() or in. However, we'd do the more limited regular function first.

Is evaluating an object without knowing the proto definitions conceptually intended?

This may not be the best place to discuss this, but I wasn't quite sure where to put it otherwise.

We have a Go service providing the CEL conformance functionality exposed via gRPC. We want to use it to evaluate expressions against arbitrary protobuf messages, about which the CEL service knows nothing. Is this possible?

Even when I use a protobuf message that I know is known to the service (as it's part of its own API), I don't get the response I expect.

Here is a code snippet (complete code could be made available if that helps):

private static final ManagedChannel channel =
    ManagedChannelBuilder.forAddress("127.0.0.1", 10001).usePlaintext().directExecutor().build();
private static final ConformanceServiceBlockingStub stub =
    ConformanceServiceGrpc.newBlockingStub(channel);

// ...

var evaluationExpression = "status.message == \"test\"";

// Parse the expression via the conformance service.
var parsed = stub.parse(ParseRequest.newBuilder().setCelSource(evaluationExpression).build());
var evalRequestBuilder = EvalRequest.newBuilder().setParsedExpr(parsed.getParsedExpr());

// The google.rpc.Status message to bind to the variable "status".
var status = Status.newBuilder().setMessage("test").build();

// Wrap the message as an ExprValue holding an Any-packed object value.
var builder = ExprValue.newBuilder();
var typeUrl = "/google.rpc.Status";
builder.getValueBuilder().getObjectValueBuilder().setTypeUrl(typeUrl);
builder
    .getValueBuilder()
    .getObjectValueBuilder()
    .setValue(Any.pack(status, typeUrl).toByteString());

// Bind the message and evaluate the parsed expression.
evalRequestBuilder.putBindings("status", builder.build());
var request = evalRequestBuilder.build();
var evaluated = stub.eval(request);
System.out.println("result: " + evaluated);

And this is the result I get:

result: result {
  error {
    errors {
      code: 2
      message: "unknown type: \'google.rpc.Status\'"
    }
  }
}

If I use an unknown type, I get
io.grpc.StatusRuntimeException: UNKNOWN: can't convert binding status: any: message type "google.rpc.StatusNonExistingType" isn't linked in

I get the impression that I'm missing some basic understanding and would appreciate any explanation and pointers toward how this is supposed to work. Thank you!

Flexible comment style

Hi,

It looks like only //-style line comments are supported in CEL. I wonder if this rule can be relaxed to also support #-style line comments.

Since CEL can be embedded in other config languages (e.g. Cloud Armor configs inside Terraform config), it would be ideal if flexible comment styles were supported. Having to write // comments within a language that uses #-style comments is a bit odd.

Clarify order of well-known-type conversion vs. selection, indexing, and type check

The language spec doesn't clearly explain how the automatic conversion of dynamic value protos interacts with type checking and runtime selection/indexing. For instance, given the following proto definitions:

message FakeAny {
    bytes value = 2;
}

and the following declarations and bindings

a: google.protobuf.Int64Value = 7
b: google.protobuf.Any = {...encoding of FakeAny{ value: "\377\377\377" }...}

do the following expressions type check, and what do they evaluate to?

a + 3
a.int_value + 3
b.value
b.value.value

I believe that what's intended is that the first and third will execute without error yielding 10 and b'\377\377\377' respectively, while the second and fourth should fail.

Clarify how self_eval_int_negative_min test will be passed

The literal 9223372036854775808 in the expression -9223372036854775808 is outside the range of positive int64 values. It's not clear this int64 object can exist as a CEL value before the negation function (-_) is applied to it.

This seems like it should be a range error when creating the initial integer value.

If it's not an immediate range error, this seems to imply the range check is deferred until the final value is returned. If negation is allowed on out-of-range int64 values, how much other computation can occur on out-of-range values?

String.size() definition is ambiguous

The Standard Definitions state that string.size() should be the string length, but is that the number of UTF-8 bytes or the number of Unicode code points?

It's a bit confusing because the values section first describes strings as "strings of UTF-8 code points" but later defines them as "sequences of Unicode code points". Should strings be treated as UTF-8 bytes or Unicode code points? I'd hope it's Unicode code points, but the Go implementation uses Go's len() function, which counts UTF-8 bytes (Go strings are really just arbitrary byte containers; see this blog post and this Stack Overflow post for details about Go). C++ is the same with respect to string encodings, but I couldn't find the size() function in my quick look.
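
A one-character example of where the two readings diverge ("ü" is one code point encoded as two UTF-8 bytes):

size("ü")  // 1 if size() counts Unicode code points, 2 if it counts UTF-8 bytes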

/cc @TristonianJones @ryanpbrewster

Add reduce macros

[1, 2, 3].reduce(s, x, s+x, 0), for example, would compute a sum. An alternative is L.reduce("_+_", 0), which would use implied variable bindings but severely limit the available features. The 4th parameter, the initial value, defaults to zero to make sum and count slightly simpler.

Specialized reductions would be available to avoid wordy constructs using the foundational reduce macro.

  • L.sum() -- implicitly L.sum(x, x)
  • L.min() -- L.min(x, x)
  • L.max() -- L.max(x, x)
  • L.count() -- L.count(x, x) == size(L.filter(x, x))

This would permit the introduction of mean(), stdev(), and variance(), allowing CEL to be applied to statistically-based decisions, e.g. size(L.filter(sample, sample > mean(benchmark) + 3.*stdev(benchmark))) > 1. We can then supply an appropriate benchmark value in a binding.

Update:

Additionally, an L.first() would also be helpful.

This is not based on the above reduce(). It's a kind of existence test and can use short-circuit processing to stop processing when the first value has been found.

For example, resource["Tags"].first(x, x["Key"] == "Name" ? x["Value"] : null, "Default"). This lets us examine JSON documents with a list of {"Key": x, "Value": y} values for the first instance of x == "Name" and extract the y value.

This can be slightly more pleasant than resource["Tags"].filter(x, x["Key"] == "Name")[0]["Value"] because the first() macro can return a default value instead of suffering from an index error in the event of a missing {"Key": "Name", ...} entry in the list being examined.

More Descriptive Missing Attribute Error Message

The current error message for missing attributes looks like this: "no such attribute: &{2 [0xc00039f030] 0xc000196cb0 0xc000196cb0 0xc000dac540}"

It would be nice if the message made it easier for humans to identify the attribute that is missing by including a variable name in the message.

Setting up the environment

Hi,

I wanted to have a look at cel-spec but I am having trouble setting up the environment and running it. Would it be possible to provide documentation on how to do that?

Thank you :)

Disallow subsumption in qualified variable names

Change

I'm not a fan of qualified (dotted) variable names, largely because they effectively change the AST at eval-time. For instance, the expression a.b.c parses to the AST

select(select(ident("a"), "b"), "c")

but at eval-time it might also be interpreted as

select(ident("a.b"), "c")

or

ident("a.b.c")

depending on whether bindings are given for a, a.b, or a.b.c, respectively. Complicated disambiguation rules are given in case several of these bindings are present at once. The implementation must either rewrite the AST in a pre-pass at evaluation time, or perform the equivalent re-interpretation on-the-fly, at the cost of complexity.

A potentially simpler semantics is to work "as if" providing a qualified name binding a.b.c implicitly creates only a binding a as a map with field "b", which itself is a map with field "c". If several qualified bindings are present with similar prefixes, they share the implicitly-constructed maps. This allows the AST to retain its original interpretation.
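
For example, under the proposed "as if" semantics, the single qualified binding a.b.c = 1 would behave as the binding

a = {"b": {"c": 1}}

so the AST select(select(ident("a"), "b"), "c") evaluates to 1 by ordinary map selection.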

Before anyone objects to the performance of this, the key here is the clause "as if". The implementation is free to analyze the expression to fast-path the access to nested maps - but now this optimization can apply to all maps, not just those implicitly constructed from qualified bindings.

TBD: analyze the interaction with non-empty containers.

But this interpretation / implementation forbids one thing: having a as both an explicit binding and an implicitly-constructed binding. I.e. if you provide a binding for a.b.c, it is illegal to also provide bindings for a or a.b.

So the request here is to disallow such "subsumed" bindings when qualified identifiers are bound.

Example

Examples given above. Contrast the current complexity of name resolution.

Alternatives considered

I wouldn't object if qualified identifiers went away entirely, but I hear that there is some demand for them.

Utilities for derived messages

Assuming #146 gets adopted for conditionally setting message fields, we still don't have one thing we might desire: a simple idiom for deriving one message from another by copying most of the fields, which would ideally be expressed as

MyProto{f1: orig.f1, f2: orig.f2, ...}

Because we don't have the same value when reading an unset field as when writing it, we'd have to write

MyProto{f1: has(orig.f1) ? orig.f1 : unset, f2: has(orig.f2) ? orig.f2 : unset, ...}

which is substantially more verbose.

Instead, we can introduce utility functions which can accomplish the same thing more concisely, such as orig.copy(fields or field_mask) which would create a copy restricted to the listed subset of fields, orig.exclude(fields or field_mask) which would create a copy without the listed fields, and orig.merge({"f1": v1, ...}) which would overwrite select fields.

Open CEL Governance

The current governance process for CEL is not documented. Although CEL is Google-developed, it is intended to be openly governed and the documentation in the spec repo should indicate as much.

Update has() macro specification to support repeated proto fields

In proto2 all non-repeated fields are represented by pointers which will be nil if not set. In proto3 all message fields are represented by pointers which will be nil if not set. The has() macro must also return false if the proto3 primitive field is equal to its zero value.

It seems worth reexamining whether the semantics for proto3 tests of defaultness should also be extended to repeated types (array, map) for both proto2 and proto3 where:

has(e.repeated_f) returns false if repeated_f contains zero elements. 

Stylistically the following expressions would be equivalent and harmless:

// equivalent to e.map_f.size() == 0 || has(e.map_f["key"])
!has(e.map_f) || has(e.map_f["key"])

// equivalent to e.list_f.size() > 0 && e.list_f[0]
has(e.list_f) && e.list_f[0]

This change better aligns has() between proto and JSON, especially for maps. The one key difference would be that for JSON lists, has() would only indicate that the field is defined and would not indicate a non-empty value:

// Currently valid expression over a JSON object.
!has(json.map_f) || has(json.map_f["key"])

// Slight difference from the proto usage, but this is already a difference.
has(json.list_f) && json.list_f.size() > 0 && json.list_f[0]

Compatible protobuf updates create incompatible changes in CEL meaning

Protocol buffer definitions are designed to change over time and remain compatible with old software and data. In particular, it is safe to define new messages, enums, services, etc.

However, CEL interpretation of qualified identifiers depends on the non-existence of entities in the namespace. When interpreting a sequence of period-separated identifiers, CEL prefers longer names over field selection. For instance, given a.b.c, CEL attempts to interpret as name a.b.c, then field c of a.b, and finally field c of field b of a. The latter interpretations depend on the prior interpretations not appearing in the namespace.

Lastly, to avoid surprises for users, CEL has user-defined functions "shadow" any built-in functions in the function namespace, and user-supplied variables "shadow" other bindings in the identifier namespace. This is necessary for updates to the built-in functions and identifiers without breaking existing CEL expressions.

So an expression a.b.c as a selection of fields .b.c or just .c might change its interpretation if presented with an updated protocol buffer environment containing a newly-defined entity a.b.c or a.b.

There is a convention in Protocol Buffers to give entities uppercase names, and a convention in CEL to name variables in lowercase, preventing such a problem. But CEL users don't necessarily have full control of the protos they need to import, and we'd like a stronger claim than "CEL is stable by convention".

To prevent this, CEL would have to first attempt to interpret a.b.c as a use of user-supplied variables a.b.c, a.b, or a in that order, before looking for entities supplied by the proto environment.

Missing timestamp(int) for timestamp from Unix Epoch

The spec currently supports one-way conversion from timestamp to Unix epoch, but ideally it should be possible to convert from Unix epoch to timestamp as well.

See google/cel-go#357 for a dev report requesting the same feature. Round-tripping seems to be an expected use case.
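
A round-trip sketch of the proposed conversion:

int(timestamp("2009-02-13T23:31:30Z"))  // 1234567890; already supported
timestamp(1234567890)                   // proposed: timestamp("2009-02-13T23:31:30Z")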

  • Add a new entry in the standard library function table docs.
  • Add conformance tests to ensure such conversions are possible on all runtimes.

Parser conformance test suite

Create a new conformance test suite for the parser, subsuming some of the functionality in the google/cel-go parser tests. The "simple" end-to-end tests always fail on parse errors, by design.

In such a suite, test that the non-constant reserved identifiers cause parse failures: as, break, const, continue, else, for, function, if, import, in, let, loop, package, namespace, return, var, void, while.

Error for missing map key inconsistent

The language spec says a lookup of a missing map key results in "no_such_field", but the conformance tests check for "no such key".

Langdef: runtime errors section
Conformance tests: simple, fields.textproto, test "map_no_such_key".

Variadic logical operators

The current AST representation of the logical operators (||, &&) uses 2-argument calls only, so a CEL expression like x || y || z is represented as _||_( _||_(x, y), z). Some users automatically generate CEL expressions containing long chains of one logical operator or the other, which creates a lopsided call tree that can exceed the recursion limit of protocol buffer implementations.

Currently, there is a workaround which converts the lopsided tree into a more balanced tree. Since the logical operators are associative, this doesn't change the semantics. However, it makes proto-level comparison of ASTs from different parsers sensitive to the precise balancing algorithm used, and it only reduces O(N) depth to O(log N).

Instead, the logical operators should simply be variadic, i.e. _||_(x, y, z) for the example above. No changes are required to the syntax, but the type checker and all runtimes would need to be able to support the new usage.

Clarify variable and function shadowing behavior

Clarify the behavior when the customer environment tries to use a variable, function, or type already in use by the standard library.

The primary goal is to preserve the meaning of expressions written for a given customer environment. A secondary goal is to allow the standard variables/types/functions to expand over time. A last goal is to allow all environments to use newly-added entries in the standard, possibly via a more verbose name.

governance.md references private/non-existent mailing list

The governance markdown doc references cel-lang-discuss@googlegroups.com, which as far as I can see is not a public group (or is non-existent). The cel-go-discuss group does exist and seems to be the only public mailing list for questions. If the cel-lang-discuss group exists, the request is to make it public so others can participate and learn. If it doesn't exist, the governance doc should be updated to reflect the mailing list in use, or another mailing list should be created.

CEL REPL

The conformance service API is sufficient for writing a portable REPL for CEL.

The utility would be somewhat limited by the compiled-in protobuf definitions, so it should be easy to extend the build rule with additional definitions.

Remove list and map type conversion functions

The spec contains mysterious functions:

list: (type(A), list(A)) -> list(A)
map: (type(A), type(B), map(A, B)) -> map(A, B)

which don't make much sense and are not implemented anywhere. They are apparently vestigial from the prototype type checker, which was used to automatically generate the list of standard functions. They should be removed.

Clarify lack of Unicode normalization

The spec states that strings are sequences of Unicode code points. Make it clear that CEL does not perform any normalization on strings or advanced collation - the code points are compared for lexicographic ordering but are otherwise not interpreted or modified.

Syntax: has with map keys

CEL treats map lookups and field dereferences similarly in many ways, except for the has() macro.
In some cases, we have to use bracket notation because the key contains characters that are illegal in identifiers. This prevents the use of the has() macro on such keys. For example:

has(request.headers['x-clientid'])

Because the header name contains dashes, we cannot use field notation.

The workaround is to use the in operator, but I wonder if permitting has(a[f]) would be better.
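
Side by side, the current workaround and the proposed form:

'x-clientid' in request.headers      // workaround today (existence only)
has(request.headers['x-clientid'])   // proposed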

Separate implementation-private builtins and identifier namespaces.

In the CEL implementations, operator expressions like "a + b" are parsed to become function calls like "_+_"(a, b), for a function named "_+_". This does not conflict with the namespace of functions that users can add to the CEL evaluator, since "_+_" is not a valid identifier. However, we have private builtins in the implementations for "_in_", and soon to have "_not_strictly_false_". These are valid identifiers and trespass into the space that should be reserved for user extensions of CEL.

We should either:
A) Change "_in_" and "_not_strictly_false_" to have some gratuitous punctuation to make them invalid as identifiers, or
B) Change the definition of identifiers to disallow a leading underscore.

Add a test case for `int("2.0")`

I think this should produce an integer 2 value.

It would be handy for working with source documents in JSON notation where the source document string representations of numbers could include a mixture of number syntax ("2.0") and integer syntax ("0"). It seems awkward to have to provide more complex type conversions like int(double(document["field"])) to deal with irregular string values.
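
The proposed conversion next to today's workaround:

int("2.0")                      // proposed: 2; currently an error
int(double(document["field"]))  // today's two-step workaround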

Annex the cel.* namespace for language extensions

Except for the reserved words, users are free to use whatever identifiers they wish for custom variables, types, and functions. The standard describes the variables, types, and functions provided in the standard environment. (We leverage the mechanisms of the protocol buffer package namespace to prevent collisions for message names.) User variables and functions will shadow these standard ones, so new additions to the standard won't change the meaning of user programs. However, it means that some users will not be able to access new functionality without potentially changing their user environment to rename things out of the way - which can be burdensome if they have a large library of expressions for that environment.

Proposed that the cel.* namespace be reserved for the standard CEL language. It is unlikely that any users are currently using this namespace, and those that are should not be surprised by this annexation. This will allow new entries to the language standard to be available to all users regardless of their environment.

Perhaps also add new entries to the unqualified namespace for more streamlined use when there is no collision.

Perhaps also copy all existing standard definitions into the cel.* namespace for uniformity, and to allow full use of unqualified names.

Perhaps use a modification of the container mechanism to implement the previous two properties.

Inconsistency in lexis and example

The lexis defines FLOAT_LIT ::= -? DIGIT* . DIGIT+ EXPONENT? | -? DIGIT+ EXPONENT

The text under numeric values states:

Double-precision floating-point is also supported, and the integer 7 would be written 7., 7.0, 7e0, or any equivalent representation using a decimal point or exponent.

It looks like these are at odds: 7. doesn't match the definition of FLOAT_LIT.

Support for Custom Conformance Environments

Currently, the conformance tests operate on the CEL standard environment. With the implementation of google/cel-go#316, it has become clear that extensions to core CEL behavior would benefit from being able to use the conformance framework as well.

In order to support this case and clearly identify which environment a conformance test operates on, it would be ideal to be able to specify the name of the environment, perhaps as it is understood by the conformance implementer, against which the expressions should be compiled and evaluated.

The feature would enable applications with custom CEL environments to test their extensions in a uniform way across implementations (e.g. Go, C++).

Allow numeric literals to include underscores for readability

Java, Rust, and a few other languages allow you to use underscores to make numeric literals easier to read:

1_000_000_000

vs

1000000000

might be a nice quality-of-life feature, and it's usually easy to add to the parser.

In general it's illegal to have leading underscores, and in general it's fine to have arbitrarily many adjacent underscores. There is some disagreement about whether trailing underscores are allowed. Java forbids them, Rust allows them. It's slightly easier to implement with trailing underscores allowed.

For reference, I believe the language grammar would change to look like this:

INT_LIT        ::= DIGITS | 0x HEXDIGITS
UINT_LIT       ::= INT_LIT [uU]
FLOAT_LIT      ::= DIGITS? . DIGITS EXPONENT? | DIGITS EXPONENT
EXPONENT       ::= [eE] [+-]? DIGITS

DIGIT          ::= [0-9]
DIGITS         ::= DIGIT (DIGIT | "_")*

HEXDIGIT       ::= [0-9abcdefABCDEF]
HEXDIGITS      ::= HEXDIGIT (HEXDIGIT | "_")*
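
Under this grammar, for example:

1_000_000   // valid
0xFF_FF     // valid
1__000      // valid: adjacent underscores are allowed
1000_       // valid: trailing underscore allowed
_1000       // invalid: a leading underscore makes it an identifier, not a literal
0x_FF       // invalid: an underscore may not precede the first hex digit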

Clarify homogeneous equality on aggregates

Clarify the somewhat surprising semantics of homogeneous equality on list and map aggregate values. In particular:

  • aggregates are unequal if the list length or map key sets do not match, even if there are elementwise type mismatches;
  • otherwise, the equality of aggregates is the CEL logical AND of the elementwise equalities.
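
For instance, under these rules (a sketch of the described semantics):

[1, 2] == [1]         // false: lengths differ, so elements are never compared
{"a": 1} == {"b": 1}  // false: key sets differ
[1, "a"] == [1, 2]    // lengths match, so "a" == 2 is evaluated and raises an error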
