
frameworks's Introduction

Open Policy Agent Frameworks

Open Policy Agent is a general-purpose policy system designed to policy-enable other projects and services. The OPA Frameworks repository defines opinionated APIs for policy that are less flexible than the OPA API but are well-suited to particular classes of use cases. For example, Role Based Access Control (RBAC), Attribute Based Access Control (ABAC), Access Control Lists (ACLs), and IAM can all be implemented on top of the OPA API and its policy language, and each could be defined as an OPA Framework. One analogy from the web development world that seems to help people is that Frameworks are to OPA as Rails is to Ruby.

frameworks's People

Contributors

acpana, adrianludwin, anderseknert, becky-hd, briantkennedy, brycecr, ctab, davis-haba, dependabot[bot], dmitrytokarev, fedepaol, jaydipgabani, josh-ferrell-sas, juliankatz, luckoseabraham, maxsmythe, mrjoelkamp, mrueg, nilekhc, ribbybibby, ritazh, sozercan, srenatus, step-security-bot, timothyhinrichs, tsandall


frameworks's Issues

Re-introduce `make generate` into CI

When updating conversion-gen to v0.20.2 in #111, I found a bug: certain conversion functions were not being included in the generated output.

This bug is being tracked (kubernetes/kubernetes#101567) and is already triaged and assigned.

To get around this bug, I manually changed the conversion file. This will meet our needs in the short term, but is not sustainable. Once the fix is available to us, we should upgrade our conversion-gen and get our automation working correctly again.

[local.Driver] Replace PutModule with a local.Arg called Builtin

Built-in modules are only added on startup, and we never need to dynamically add/remove these builtins at runtime.

Explicitly: all usages of PutModule should be removed; each can safely be replaced with Builtin. The Arg should initialize d.modules if it doesn't already exist, and then add the Module to it.

Recall that the client.Targets option accepts TargetHandlers, which include both GetName() and Library(), so this information is available to gatekeeper when it calls client.New() and passes in the Driver() argument. It should be gatekeeper's responsibility to configure Driver before passing it to Client (which makes sense since this is dependency injection).

So there should be a new local.Arg in local/args.go something like the following:

package local

// ...

func Builtin(name string, module *ast.Module) (Arg, error) {
	if module == nil {
		return nil, fmt.Errorf("builtin module %q must not be nil", name)
	}
	return func(d *driver) {
		if d.modules == nil {
			d.modules = make(map[string]*ast.Module)
		}
		d.modules[name] = module
	}, nil
}

This involves moving a lot of logic from client.init to gatekeeper. Note how currently, the file pkg/target/target_template_source.go in gatekeeper is tightly coupled to the contents of client.init. This prevents us from transitioning matching logic from Rego to Go, because callers of Client cannot configure how matching works; they must use a set of predefined Rego hooks.

Moving this logic to gatekeeper puts the responsibility of configuring Constraint matching on gatekeeper, which means that later, when we expose the ability to do matching in Go, gatekeeper can reconfigure Driver to use the new Go-based approach (without a bunch of back-and-forth reconciling between repos).

Broken downstream dependencies due to replace directive in go.mod

I'm getting some broken build dependencies in downstream projects after rolling forward to 1307ba7.

go build ./...
# k8s.io/kubectl/pkg/scheme
../../../golang/work/pkg/mod/k8s.io/[email protected]/pkg/scheme/scheme.go:38:9: undefined: unstructured.NewJSONFallbackEncoder
# k8s.io/client-go/rest
../../../golang/work/pkg/mod/k8s.io/[email protected]+incompatible/rest/request.go:598:31: not enough arguments in call to watch.NewStreamWatcher
	have (*versioned.Decoder)
	want (watch.Decoder, watch.Reporter)

After inspection, k8s.io/[email protected]+incompatible is being replaced by k8s.io/client-go v0.0.0-20191016110837-54936ba21026 in go.mod.

replace (
    k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.0.0-20191016113439-b64f2075a530
    k8s.io/apimachinery => k8s.io/apimachinery v0.0.0-20191004115701-31ade1b30762
    k8s.io/client-go => k8s.io/client-go v0.0.0-20191016110837-54936ba21026
)

The versions in the replace list seem to be required as dependencies, so they should be in the require list rather than the replace list; otherwise dependent modules will have broken dependencies at incompatible versions.

From the docs: exclude and replace directives only operate on the current (“main”) module. exclude and replace directives in modules other than the main module are ignored when building the main module.
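A hedged sketch of the suggested shape for this module's go.mod, reusing the pseudo-versions already pinned in the replace block above (the exact versions required may differ):

require (
    k8s.io/apiextensions-apiserver v0.0.0-20191016113439-b64f2075a530
    k8s.io/apimachinery v0.0.0-20191004115701-31ade1b30762
    k8s.io/client-go v0.0.0-20191016110837-54936ba21026
)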

[local.Driver] Copy Constraint matching library from gatekeeper to frameworks

In short: we want Driver to have access to the logic we currently use in gatekeeper for matching objects to mutators. Much of this logic already exists in gatekeeper, so really it just needs to be migrated to frameworks.

This code should probably live in its own package, pkg/client/match.

Files to copy:

  • gatekeeper pkg/mutation/match/match.go -> frameworks pkg/client/match/match.go
  • gatekeeper pkg/util/prefix_wildcard.go -> frameworks pkg/client/match/prefix_wildcard.go (don't put this in a "util" library)

(and their respective tests)

It's probably best to do this as its own PR before any additional work, so it's obvious if we end up changing any of the matching logic.

Improve support for `kubectl explain`

Currently, pointing the kubectl explain command at a constraint yields no description. spec and status are also missing descriptions. See this example:

❯ kubectl explain k8srequiredlabels
Alias tip: k explain k8srequiredlabels
KIND:     K8sRequiredLabels
VERSION:  constraints.gatekeeper.sh/v1beta1

DESCRIPTION:
     <empty>

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>

   status       <>

Doing this will require placing description information in the right place in Constraint CRDs. This comment from Jordan Liggitt suggests that structural schemas should do most of the heavy lifting here.

[client.Client] Record matchers for constraints

Prerequisites: #175 and #170

Now that we've copied the matching logic into frameworks, and callers explicitly call "AddConstraint" and "RemoveConstraint", we can start doing the bookkeeping in local.Driver that we need in order to move matching from Rego to Go. For this issue, we're just keeping a copy of the matchers for each Constraint.

Add a field to local.Driver called matchers. You can probably make this a map similar to the following:

type matcherKey struct {
  Kind, Name string
}
type Driver struct {
  ...
  matchers map[matcherKey]*Matcher
}

All that needs to happen is that when Constraints are added to the Driver, we unmarshal the Matcher field into the Matcher type we copied over previously. Use the bytes := json.Marshal(unstructured) and json.Unmarshal(bytes, matcher) pattern used elsewhere to convert to the Match type.
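A minimal sketch of that bookkeeping, assuming the copied Match type ends up in pkg/client/match, that Constraints carry their criteria under spec.match, and treating *match.Match as the Matcher value type in the map sketched above (all assumptions; names are illustrative):

package local

import (
	"encoding/json"

	"github.com/open-policy-agent/frameworks/constraint/pkg/client/match" // hypothetical package from #175
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func (d *Driver) recordMatcher(constraint *unstructured.Unstructured) error {
	// Pull spec.match out of the Constraint, if present.
	raw, found, err := unstructured.NestedMap(constraint.Object, "spec", "match")
	if err != nil {
		return err
	}

	matcher := &match.Match{}
	if found {
		// Round-trip through JSON to convert the untyped map into the typed Match struct.
		bytes, err := json.Marshal(raw)
		if err != nil {
			return err
		}
		if err := json.Unmarshal(bytes, matcher); err != nil {
			return err
		}
	}

	// d.matchers is assumed to be initialized when the Driver is constructed.
	d.matchers[matcherKey{Kind: constraint.GetKind(), Name: constraint.GetName()}] = matcher
	return nil
}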

We can't actually use these matchers yet since we haven't sharded the compilation environment - but we need these ready for when we do.

Add non-deny-all tests to e2e_test.go

Prerequisite: #150

See constraint/pkg/client/e2e_test.go.

The current tests in e2e_test.go all require the CT under test to reject all objects being reviewed. Instead, these tests should include cases specifying CTs that don't reject objects, or conditionally reject them. (Since these are E2E/integration tests and not unit tests, these paths should be covered)

This likely requires moving around how expectations are set - moving got from being specified inline in the test to denyAllCases (which needs to be renamed).

[local.Driver] Make PutData and DeleteData private

Prerequisites: #173 and #170

At this point, the only places PutData and DeleteData are called are in tests. This will flag the tests you need to alter to use (and test) AddInventory/RemoveInventory and AddConstraint/RemoveConstraint appropriately.

Quadratic runtime complexity for adding constraint templates

While testing config validator, I found that adding all 68 constraint templates from the policy-library takes about 15 seconds total. I added some timers and found that the bulk of this is in adding Constraint Templates, which appears to have a runtime complexity of O(n^2). This isn't a huge deal for large analysis jobs, but there are a few places where I could see this becoming an issue:

  • it slows down TDD so it would be nice to fix this for people developing new CT/C (Better UX).
  • At 68 constraints, the incremental load time is about 350ms. If gatekeeper gets loaded with a lot of templates, I'm not sure what the implications of taking this long will be, but it's probably something to be aware of.
  • for realtime FCV, we plan to put FCV into a knative endpoint, so this would mean cold-starts are really costly

(attachment: load-times chart)

Debug

Basic debug capabilities exist, but more would be better, including:

  • Improved output for the dump of module contents so code is human readable (currently the output is one giant string)
  • Wrapping of errors to allow easier tracing of root causes. This is especially critical because many errors could take the form of JSON unmarshalling errors
  • The ultimate debug tool would be a way to drop into a REPL for the running OPA driver

Failed to clone git repository

git clone git@github.com:open-policy-agent/frameworks.git
Cloning into 'frameworks'...
remote: Enumerating objects: 11192, done.
remote: Counting objects: 100% (932/932), done.
remote: Compressing objects: 100% (701/701), done.
remote: Total 11192 (delta 174), reused 899 (delta 165), pack-reused 10260
Receiving objects: 100% (11192/11192), 16.36 MiB | 6.56 MiB/s, done.
Resolving deltas: 100% (4985/4985), done.
git-lfs filter-process: git-lfs: command not found
fatal: the remote end hung up unexpectedly
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
git --version
git version 2.31.1

Move to go113 errors

Prerequisite: #156 since it removes several error paths

Use go113 error conventions. Specifically, this issue covers errors emitted in:

  • constraint/pkg/client
  • constraint/pkg/client/drivers/local
  • constraint/pkg/regorewriter

(Feel free to do others; the above are the essential packages to cover)

Each path doesn't need its own error - it's fine to have a core errInvalidArg that gets wrapped with additional information depending on where it is.

Define Is() on Errors and ErrorMap so tests don't need to each define their own validation of these errors. Also determine and clarify in comments the intended usage of ErrorMap - What is the key in the map used for?

The big idea here is to make our usage of errors more friendly to tests.
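A minimal sketch of the convention, using the errInvalidArg sentinel named above (the validateTargetName function and wrap sites are hypothetical):

package client

import (
	"errors"
	"fmt"
)

// Core sentinel error; call sites wrap it with context rather than defining new error types.
var errInvalidArg = errors.New("invalid argument")

func validateTargetName(name string) error {
	if name == "" {
		return fmt.Errorf("%w: target name must not be empty", errInvalidArg)
	}
	return nil
}

// Tests can then check the error category rather than the exact string:
//
//   if !errors.Is(err, errInvalidArg) {
//       t.Errorf("got error %v, want errInvalidArg", err)
//   }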

Update constraints to use latest stable version of OPA

It's been a while since we've updated OPA vendoring. It would be good to revendor sooner rather than later to pick up improvements in upstream OPA.

I'm filing an issue instead of submitting a PR to avoid conflicting with the template compiler changes.

Properly handle OpenAPISpec Default values

If someone puts a default value in the OpenAPISpec for a ConstraintTemplate, this results in an error. The ConstraintFramework should handle populating the default if present.

[local.Driver] Replace PutModules and DeleteModules with AddTemplate and RemoveTemplate

func (d *Driver) AddTemplate(ct *templates.ConstraintTemplate) error
func (d *Driver) RemoveTemplate(ctx context.Context, ct *templates.ConstraintTemplate) error

The driver itself should handle extracting libraries from the ConstraintTemplate and naming them - there's no need for Client to have any of this logic. Having Client not care how code from ConstraintTemplates is compiled is essential for us being able to shard ConstraintTemplates into their own environments under the hood.
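A rough sketch of what the Driver-side extraction might look like, assuming the templates.Target struct carries Rego and Libs; the module-naming scheme and the putModule helper are hypothetical:

package local

import (
	"fmt"

	"github.com/open-policy-agent/frameworks/constraint/pkg/apis/templates"
)

func (d *Driver) AddTemplate(ct *templates.ConstraintTemplate) error {
	for _, target := range ct.Spec.Targets {
		// Namespace modules by target and template so RemoveTemplate can find them later.
		prefix := fmt.Sprintf("templates.%s.%s", target.Target, ct.GetName())

		srcs := append([]string{target.Rego}, target.Libs...)
		for i, src := range srcs {
			if err := d.putModule(fmt.Sprintf("%s.idx%d", prefix, i), src); err != nil { // putModule: hypothetical private insert
				return err
			}
		}
	}
	return nil
}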

For RemoveTemplate - note the inclusion of context.Context since this method will later be responsible for removing the corresponding Constraints.

It's fine to do this and #170 as two separate PRs or as one PR - whatever works better for you.

Add Documentation for Error Handling

If/when clients start multiplexing requests across targets, the error handling becomes non-trivial. We should come up with some best practices and document them in the README.

[local.Driver] Add methods AddConstraint and RemoveConstraint

Should be done either after or concurrently with #171

Use these signatures:

func (d *Driver) AddConstraint(ctx context.Context, target string, constraint *unstructured.Unstructured) error

func (d *Driver) RemoveConstraint(ctx context.Context, target string, constraint *unstructured.Unstructured) error

Note that relpath is not specified by Client - this logic needs to move to Driver, as Client should not care about how Driver organizes Constraints. So the createConstraintPath() logic should be moved to the Driver. Note that AddConstraint should probably still call PutData for the required database transaction; similarly, RemoveConstraint should use DeleteData.
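A minimal sketch, assuming a path scheme loosely modeled on what createConstraintPath builds today (the exact layout is the Driver's choice and the constraintPath helper is hypothetical):

package local

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func constraintPath(target string, constraint *unstructured.Unstructured) string {
	// Hypothetical layout; the Driver is free to organize Constraints however it wants.
	return fmt.Sprintf("/constraints/%s/cluster/%s/%s", target, constraint.GetKind(), constraint.GetName())
}

func (d *Driver) AddConstraint(ctx context.Context, target string, constraint *unstructured.Unstructured) error {
	// PutData still provides the underlying database transaction.
	return d.PutData(ctx, constraintPath(target, constraint), constraint.Object)
}

func (d *Driver) RemoveConstraint(ctx context.Context, target string, constraint *unstructured.Unstructured) error {
	_, err := d.DeleteData(ctx, constraintPath(target, constraint))
	return err
}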

Add client.AddConstraint benchmarks

Prerequisite: #151

Add benchmarks for adding Constraints. (client.AddConstraint)

Adding Constraints is unlikely to be a bottleneck for normal use cases, but we should have these just so we're aware.

Use Constraints for the ConstraintTemplates chosen to resolve the above issue, as long as their Constraint parameters are of differing complexity. If they are of the same (or very similar) complexity, make up a ConstraintTemplate which uses a complex set of parameters.

Support removal of templates and constraints by name

The current APIs for removing templates and constraints - client.RemoveTemplate and client.RemoveConstraint - expect fully populated template/constraint resources in order to look them up in OPA prior to removal.

It would be useful to support name-based lookup during removal to support reference-based asynchronous removal workflows where the full resource is no longer available to the caller.

Constraints Root Should Be Lower

Constraints root should be rooted at the target.

Currently it is: data.constraints["<target name>"].cluster["constraints.gatekeeper.sh"].v1alpha1

This doesn't allow us to do things like support namespaced names, or potentially allow people to control constraint groups.

The remaining question if we open things up: what do we do about the version?

Migrate to GitHub Actions

travis-ci.org is going to be shut down in December, so we'll need to migrate off before then. Since open-policy-agent/{conftest, gatekeeper, opa} have been using GHA successfully for the past few months, we should migrate frameworks ASAP.

Compile Constraint Templates to Separate OPA Environments

Currently we are compiling all constraint templates into the same OPA artifact (d.compiler in the following code):

c := ast.NewCompiler().
	WithPathConflictsCheck(storage.NonEmpty(ctx, d.storage, txn)).
	WithCapabilities(d.capabilities)

if c.Compile(updatedModules); c.Failed() {
	d.storage.Abort(ctx, txn)
	return 0, c.Errors
}

for name, mod := range insert {
	if err := d.storage.UpsertPolicy(ctx, txn, name, []byte(mod.text)); err != nil {
		d.storage.Abort(ctx, txn)
		return 0, err
	}
}

if err := d.storage.Commit(ctx, txn); err != nil {
	return 0, err
}

d.compiler = c
d.modules = updatedModules

Which is then used to evaluate constraints here:

func (d *driver) eval(ctx context.Context, path string, input interface{}, cfg *drivers.QueryCfg) (rego.ResultSet, *string, error) {
	d.modulesMux.RLock()
	defer d.modulesMux.RUnlock()

	args := []func(*rego.Rego){
		rego.Compiler(d.compiler),
		rego.Store(d.storage),
		rego.Input(input),
		rego.Query(path),
	}

	if d.traceEnabled || cfg.TracingEnabled {
		buf := topdown.NewBufferTracer()
		args = append(args, rego.Tracer(buf))
		rego := rego.New(args...)
		res, err := rego.Eval(ctx)
		b := &bytes.Buffer{}
		topdown.PrettyTrace(b, *buf)
		t := b.String()
		return res, &t, err
	}

	rego := rego.New(args...)
	res, err := rego.Eval(ctx)
	return res, nil, err
}

With the coordination of executing constraint templates done in Rego here:

package hooks["{{.Target}}"]

violation[response] {
	data.hooks["{{.Target}}"].library.autoreject_review[rejection]
	review := get_default(input, "review", {})
	constraint := get_default(rejection, "constraint", {})
	spec := get_default(constraint, "spec", {})
	enforcementAction := get_default(spec, "enforcementAction", "deny")
	response = {
		"msg": get_default(rejection, "msg", ""),
		"metadata": {"details": get_default(rejection, "details", {})},
		"constraint": constraint,
		"review": review,
		"enforcementAction": enforcementAction,
	}
}

# Finds all violations for a given target
violation[response] {
	data.hooks["{{.Target}}"].library.matching_constraints[constraint]
	review := get_default(input, "review", {})
	inp := {
		"review": review,
		"parameters": get_default(get_default(constraint, "spec", {}), "parameters", {}),
	}
	inventory[inv]
	data.templates["{{.Target}}"][constraint.kind].violation[r] with input as inp with data.inventory as inv
	spec := get_default(constraint, "spec", {})
	enforcementAction := get_default(spec, "enforcementAction", "deny")
	response = {
		"msg": r.msg,
		"metadata": {"details": get_default(r, "details", {})},
		"constraint": constraint,
		"review": review,
		"enforcementAction": enforcementAction,
	}
}

# Finds all violations in the cached state of a given target
audit[response] {
	data.hooks["{{.Target}}"].library.matching_reviews_and_constraints[[review, constraint]]
	inp := {
		"review": review,
		"parameters": get_default(get_default(constraint, "spec", {}), "parameters", {}),
	}
	inventory[inv]
	data.templates["{{.Target}}"][constraint.kind].violation[r] with input as inp with data.inventory as inv
	spec := get_default(constraint, "spec", {})
	enforcementAction := get_default(spec, "enforcementAction", "deny")
	response = {
		"msg": r.msg,
		"metadata": {"details": get_default(r, "details", {})},
		"constraint": constraint,
		"review": review,
		"enforcementAction": enforcementAction,
	}
}

# get_default(data, "external", {}) seems to cause this error:
# "rego_type_error: undefined function data.hooks.<target>.get_default"
inventory[inv] {
	inv = data.external["{{.Target}}"]
}

inventory[{}] {
	not data.external["{{.Target}}"]
}

# get_default returns the value of an object's field or the provided default value.
# It avoids creating an undefined state when trying to access an object attribute that does
# not exist
get_default(object, field, _default) = object[field]

get_default(object, field, _default) = _default {
	not has_field(object, field)
}

has_field(object, field) {
	_ = object[field]
}

We should change the compile/execution flow so that each constraint template is compiled and evaluated as separate Rego artifacts, with the execution being coordinated by Golang. This will have a number of benefits:

  • No more quadratic runtime for ingestion per #79
  • The ability for individual template authors to leverage OPA's indexing, which the current Rego coordination code likely precludes. This may increase performance.
  • The ability to more tightly control cache duration for G8r's external data feature
  • Consolidate matcher implementation on Golang (which is used for mutation). This will remove code duplication, should be more performant, and will open up the door for more specialized indexing in the future
  • Allows for more granular metrics, per open-policy-agent/gatekeeper#1496

Library templates should use Rego methods, not string substitution

This code:

		if err := libTempl.Execute(libBuf, map[string]string{
			"ConstraintsRoot": fmt.Sprintf(`data.constraints["%s"].cluster["%s"]`, t.GetName(), constraintGroup),
			"DataRoot":        fmt.Sprintf(`data.external["%s"]`, t.GetName()),
		}); err != nil {
			return err
		}

Should be unnecessary. It would be nice if we could use strings of un-modified Rego and access various roots via import statements and/or helper functions.

This would make it easier to build and test TargetHandler library code while still abstracting away the root of the constraint and data trees.

Add client.AddTemplate benchmarks

Add two benchmarks for compiling ConstraintTemplate: one simple template, and one complex template.

For the simple CT (and the general benchmarking code), I recommend looking at this code. See BenchmarkClient_AddTemplate and makeModule.

For the simple CT, we're trying to find the answer to the question of "What is the largest number of CTs a user could possibly have?"

For the complex CT, find a reasonably complex one from the gatekeeper-library. It doesn't particularly matter which one - so long as it looks complex. The idea is to answer the question "What is the largest number of normal CTs a user can have?"

It'd be best for this to live in its own test file - maybe addtemplate_benchmark_test.go?
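A hedged sketch of the benchmark shape, in the spirit of the referenced BenchmarkClient_AddTemplate and makeModule; the exact client.AddTemplate signature and the CT/Client construction helpers are assumptions:

package client_test

import "testing"

func BenchmarkClient_AddTemplate_Simple(b *testing.B) {
	ct := simpleConstraintTemplate() // hypothetical helper returning a trivial CT

	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		b.StopTimer()
		c := newTestClient(b) // hypothetical helper; each iteration gets a fresh Client
		b.StartTimer()

		if _, err := c.AddTemplate(ct); err != nil {
			b.Fatal(err)
		}
	}
}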

Remove UpsertPolicy/DeletePolicy from local.driver.altermodules

File: constraint/pkg/client/drivers/local/local.go

We've determined these checks aren't necessary, so they just clutter up the code. Remove this behavior.

This also means we can remove .WithPathConflictsCheck(storage.NonEmpty(ctx, d.storage, txn)), and all calls which relate to the storage transaction in altermodules.

The tests that fail should be mocking Storage to fail on these calls - there isn't a way to execute the error paths naturally. If a non-storage-mocking test fails, then we might need to keep them for now. (Reach out to willbeason@ - this would be very surprising)

Also remove ast.CheckPathConflicts since there isn't any way this can fail (if you do manage to make it fail without mocking storage, add a test that showcases this behavior).

[local.Driver] Add methods AddCachedData and RemoveCachedData

Proposed signatures:

func (d *Driver) AddCachedData(ctx context.Context, obj interface{}) error
func (d *Driver) RemoveCachedData(ctx context.Context, obj interface{}) error

Under the hood these call PutData and DeleteData - AddCachedData just allows Driver to choose where to store the data (note that the path string argument is missing).

This should have a corresponding change in Gatekeeper - you may find it helpful to replace gatekeeper's dependency on frameworks with your local repo to figure this out.

For the purposes of this issue, it's fine to cast the data argument passed to client.AddData and return an error if it isn't the correct type. This will break client.testHandler - just use an Unstructured and pick fields that work similarly to how the client.targetData type works. Other than constructing an Unstructured instead of a targetData, this shouldn't require test code to change significantly.
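A minimal sketch, assuming the cached object is an *unstructured.Unstructured and that the inventory path layout is the Driver's choice (both assumptions):

package local

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func (d *Driver) AddCachedData(ctx context.Context, obj interface{}) error {
	u, ok := obj.(*unstructured.Unstructured)
	if !ok {
		return fmt.Errorf("expected *unstructured.Unstructured, got %T", obj)
	}
	// The Driver, not the Client, decides where cached data lives.
	path := fmt.Sprintf("/external/%s/%s/%s", u.GetKind(), u.GetNamespace(), u.GetName())
	return d.PutData(ctx, path, u.Object)
}

func (d *Driver) RemoveCachedData(ctx context.Context, obj interface{}) error {
	u, ok := obj.(*unstructured.Unstructured)
	if !ok {
		return fmt.Errorf("expected *unstructured.Unstructured, got %T", obj)
	}
	_, err := d.DeleteData(ctx, fmt.Sprintf("/external/%s/%s/%s", u.GetKind(), u.GetNamespace(), u.GetName()))
	return err
}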

Add client.Review benchmarks

Prerequisite: #151

See this benchmark for inspiration.

Add benchmarks testing how long it takes to run client.Review, using the CTs chosen to resolve the above issue. The idea of these benchmarks is to establish a theoretical maximum throughput for reviewing incoming objects, ignoring controller overhead.

The benchmarks should use a variety of numbers of CTs and Constraints. For each test, each CT should be identical to the others so we can establish runtime/memory/etc. complexity as a function of the number of CTs and the number of Constraints per CT.
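A hedged sketch of one such benchmark, assuming client.Review accepts a context and the object under review; the Client and review-object construction helpers are hypothetical:

package client_test

import (
	"context"
	"testing"
)

func BenchmarkClient_Review(b *testing.B) {
	ctx := context.Background()
	c := newClientWithNTemplates(b, 10, 100) // hypothetical: 10 identical CTs, 100 Constraints each
	obj := makeReviewObject()                // hypothetical object to review

	b.ResetTimer()
	for n := 0; n < b.N; n++ {
		if _, err := c.Review(ctx, obj); err != nil {
			b.Fatal(err)
		}
	}
}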

Add support for the new print() function

Taken from:

open-policy-agent/gatekeeper#1654

from @tsandall

In v0.34.0 we added a new print() function that helps with simple debugging use cases. Library embeddings have to be updated to capture print() output. Without opt-in, print() statements are just erased from the policy at compile-time. Note, print() enablement might be better solved in the constraint framework. Feel free to close and move the issue over to constraints if you want.

Conftest was recently updated to support print(). See this PR: open-policy-agent/conftest#629

For more info on the print() function see this blog post: https://blog.openpolicyagent.org/introducing-the-opa-print-function-809da6a13aee


from @maxsmythe

Nice!

QQ about print: is it still subject to indexing (as in a rule that is not called because of indexing would lead to print not being called), or is it always called?

This definitely sounds like something that should go into the constraint framework.

@willbeason FYI

Closing this and re-opening there.

Build a defaulting system that derives code from the ConstraintTemplate CRD yaml

ConstraintTemplate v1 introduced a new field: legacySchema. This field is crucial for correctly interpreting (i.e. transforming) non-structural schemas in ConstraintTemplate resources, namely those that were declared as v1beta1 CTs before structural schemas were introduced to kubernetes.

These defaults are already declared in the ConstraintTemplate CRD. They are placed there during CRD yaml generation, and are derived from kubebuilder annotations that live in constraint/pkg/apis/templates/v1/constrainttemplate_types.go (for example, that of legacySchema in CT v1).

For our golang logic to have these same defaults, we are required to write custom defaulting functions and include them in each api version package. This pattern requires that a new defaulting function be written each time a default is added. This toil will undoubtedly lead to mistakes and future bugs.

To remedy this, we should make a pipeline that saves the generated CT CRD schema as a string constant in the golang code. Generalized defaulting functions can be written that ingest this information and default values accordingly, just as is done in the API server.

Use string constants consistently

Strings like the following should live in their canonical locations in kubernetes packages:

"v1alpha1"
"v1beta1"
"constraints.gatekeeper.sh"

This aids discoverability, makes testing easier, and ensures that differences in manually-typed strings are impossible.

Excessive time spent in ast.(*parser).parseExpr

I wrote an FCV benchmark test and profiled it using go test -cpuprofile cpu.prof -bench BenchmarkReviewJSON -benchtime 60s ./pkg/gcv, and it looks like it's spending quite a bit of time in the OPA AST parser. This seems odd since the Rego is only added during benchmark setup.

(attachment: benchmark profile)

More Testing

We could always use more testing. Some options:

  • Create a fake driver to allow us to test how the client handles errors
  • Increase e2e tests to cover conditions such as multiple targets and idempotence
  • Performance testing

Rename "master" branch to "main" branch

Context: https://github.com/github/renaming

I haven't tried this myself, but these are the supposed steps to do this:

# Move the master branch to main.
$ git branch -m master main

# Push the new "main" branch to GitHub.
$ git push -u origin main

# Point HEAD to main.
$ git symbolic-ref refs/remotes/origin/HEAD refs/remotes/origin/main

# Log in to GitHub, open the repository, and click Settings > Branches.
# Select "main" as your default from the drop-down.
# Click "Update" and when prompted, click "I Understand".

# Delete the "master" branch.
$ git push origin --delete master

PS There may be more things to change that I'm not aware of, such as build/release scripts.

Support external vendoring of manifests using go toolchain

Add a build-tag-gated "tools" package to allow a dependent to vendor constraint template manifests using the standard Go toolchain.

The standard go mod vendor command will prune non-go packages as described in golang/go#26366.

We can cause a directory to be a go package by including a conditionally compiled go source file that does nothing. This trick is documented here.
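A minimal sketch of such a file on our side, assuming it lives at constraint/deploy/doc.go (the name and location are assumptions):

// +build tools

// Package deploy exists only so this directory is treated as a Go package,
// which keeps its manifest files from being pruned by `go mod vendor`.
package deploy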

Subsequently, a dependent module can vendor our manifests using a similar trick:

// +build tools

package tools

import _ "github.com/open-policy-agent/frameworks/constraint/deploy"

Design/Modify the driver.Driver interface

This is the current Driver interface:

type Driver interface {
	Init(ctx context.Context) error

	PutModule(ctx context.Context, name string, src string) error
	PutModules(ctx context.Context, namePrefix string, srcs []string) error
	DeleteModule(ctx context.Context, name string) (bool, error)
	DeleteModules(ctx context.Context, namePrefix string) (int, error)

	PutData(ctx context.Context, path string, data interface{}) error
	DeleteData(ctx context.Context, path string) (bool, error)

	Query(ctx context.Context, path string, input interface{}, opts ...QueryOpt) (*types.Response, error)

	Dump(ctx context.Context) (string, error)
}

It doesn't offer a clear delineation between libraries that are meant to be shared by all ConstraintTemplates and regular ConstraintTemplates. We probably want something along the lines of:

type Driver interface {
	PutConstraintTemplate(ctx context.Context, ct ConstraintTemplate) error
	DeleteConstraintTemplate(ctx context.Context, name string) error

	PutConstraint(ctx context.Context, c Constraint) error
	DeleteConstraint(ctx context.Context, kind string, name string) error
	
	PutInventory(ctx context.Context, path string, obj interface{}) error
	DeleteInventory(ctx context.Context, path string) (bool, error)

	Query(ctx context.Context, path string, input interface{}, opts ...QueryOpt) (*types.Response, error)

	Dump(ctx context.Context) (string, error)
}

(feel free to use something else if it makes sense; this is a rough approximation to get the idea of this issue across)

The major theme here is to make the Driver interface operate with the same abstractions as its callers. We don't dynamically add/remove core libraries - we know these at startup and so can pass them in as an argument.

Initialization processes should just happen when the Driver is created, rather than having a separate Init() method. Having an Init method outside of the constructor is error-prone - the constructor should do this work.
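A minimal sketch of folding Init() into construction, assuming a functional-options constructor (the init helper and exact names are hypothetical):

package local

import (
	"context"
	"fmt"
)

// New replaces the separate Init() call: a successfully constructed Driver is ready to use.
// (Arg is the existing functional-option type in this package; d.init is a hypothetical
// stand-in for whatever Init(ctx) currently does.)
func New(ctx context.Context, args ...Arg) (*Driver, error) {
	d := &Driver{}
	for _, arg := range args {
		arg(d)
	}
	if err := d.init(ctx); err != nil {
		return nil, fmt.Errorf("initializing driver: %w", err)
	}
	return d, nil
}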

This issue does not involve actually changing how Constraints are handled (e.g. filtering/passing them in). That should be its own PR as it may have performance implications.

Handling the remote driver may be tricky here. It isn't well unit-tested.

[local.Driver] Move query violation binding logic to Go

Don't modify Rego code for this!

It's fine for the Rego to still construct the violation binding; what we care about is using Go to get the values that are already in the Result's Expressions.

This is what the rego.Result type looks like as returned by eval:

{
  "expressions": [
    {
      "value": {
        "msg": "yep"
      },
      "text": "data.hooks.violation[result]",
      "location": {
        "row": 1,
        "col": 1
      }
    }
  ],
  "bindings": {
    "result": {
      "msg": "yep"
    }
  }
}

Note that all that bindings does is provide us a convenient place to get msg from when it's already present in expressions. So we can just fetch the message from expressions. Later (in a different issue), this means we'll be able to remove the result bindings definition in Rego from gatekeeper.

Specifically: in local.Driver.Query(), for each Expression in each rego.Result in rs, add an element to var results []*types.Result if text is "data.hooks.violation[result]" which contains the contents of "value.msg".
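A minimal sketch of that extraction, assuming the types.Result message field is Msg and that the expression value asserts to a map (both assumptions; the function wrapper is illustrative):

package local

import (
	"github.com/open-policy-agent/frameworks/constraint/pkg/types"
	"github.com/open-policy-agent/opa/rego"
)

func toResults(rs rego.ResultSet) []*types.Result {
	var results []*types.Result
	for _, r := range rs {
		for _, expr := range r.Expressions {
			// Only the violation hook's expression carries the data we want.
			if expr.Text != "data.hooks.violation[result]" {
				continue
			}
			val, ok := expr.Value.(map[string]interface{})
			if !ok {
				continue
			}
			msg, _ := val["msg"].(string)
			results = append(results, &types.Result{Msg: msg})
		}
	}
	return results
}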

The test code shouldn't need to change for this (unless you're just making it easier to read).
