coryodaniel / bonny

The Elixir based Kubernetes Development Framework

License: MIT License

Elixir 99.04% Dockerfile 0.31% Makefile 0.59% Shell 0.06%
kubernetes kubernetes-operator elixir k8s erlang kubernetes-api kubernetes-controller kubernetes-scheduler

bonny's Introduction

Bonny


Bonny: Kubernetes Development Framework

Extend the Kubernetes API with Elixir.

Bonny makes it easy to create Kubernetes Operators, Controllers, and Custom Schedulers.

If Kubernetes CRDs and controllers are new to you, read up on the terminology.

Getting Started

Kickstarting your first controller with Bonny is very straightforward. Bonny comes with some handy mix tasks to help you.

mix new your_operator

Now add bonny to your dependencies in mix.exs

def deps do
  [
    {:bonny, "~> 1.0"}
  ]
end

Install dependencies and initialize bonny. This task will ask you to answer a few questions about your operator.

Refer to the Kubernetes docs for API group and API version.

mix deps.get
mix bonny.init

Don't forget to add the generated operator module to your application supervisor.
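
For example, assuming the generated module is named YourOperator.Operator (the actual name depends on your answers to mix bonny.init), it can be added to the children list in your application module:

defmodule YourOperator.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # The operator module generated by `mix bonny.init`; depending on your Bonny
      # version it may also expect options (e.g. a conn), so check the generated code.
      YourOperator.Operator
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: YourOperator.Supervisor)
  end
end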

Configuration

mix bonny.init creates a configuration file config/bonny.exs and imports it to config/config.exs for you.

Configuring Bonny

Configuring Bonny is required for manifest generation via mix bonny.gen.manifest.

config :bonny,
  # Function to call to get a K8s.Conn object.
  # The function should return a %K8s.Conn{} struct or a {:ok, %K8s.Conn{}} tuple
  get_conn: {K8s.Conn, :from_file, ["~/.kube/config", [context: "docker-for-desktop"]]},

  # Set the Kubernetes API group for this operator.
  # This can be overridden using the @group attribute of a controller.
  group: "your-operator.example.com",

  # Name must consist of only lowercase letters and hyphens.
  # Defaults to the hyphenated mix app name.
  operator_name: "your-operator",

  # Name must consist of only lowercase letters and hyphens.
  # Defaults to the hyphenated mix app name.
  service_account_name: "your-operator",

  # Labels to apply to the operator's resources.
  labels: %{
    "kewl": "true"
  },

  # Operator deployment resources. These are the defaults.
  resources: %{
    limits: %{cpu: "200m", memory: "200Mi"},
    requests: %{cpu: "200m", memory: "200Mi"}
  }

Running outside of a cluster

Running an operator outside of Kubernetes is not recommended for production use, but can be very useful when testing.

To start your operator and connect it to an existing cluster, you must first:

  1. Configure your operator. The example above is a good place to start.
  2. Have some way of connecting to your cluster. The most common is to connect using your kubeconfig, as in the example:
# config.exs
config :bonny,
  get_conn: {K8s.Conn, :from_file, ["~/.kube/config", [context: "optional-alternate-context"]]}

If you've used mix bonny.init to generate your config, it created a YourOperator.Conn module for you. You can edit that instead (a sketch of such a module follows this list).

  3. If RBAC is enabled, have permissions for creating and modifying CustomResourceDefinition, ClusterRole, ClusterRoleBinding and ServiceAccount resources.
  4. Generate a manifest with mix bonny.gen.manifest and install it using kubectl: kubectl apply -f manifest.yaml
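
A minimal sketch of such a Conn module, assuming the module name YourOperator.Conn and a local kubeconfig (adjust to whatever mix bonny.init generated for you):

defmodule YourOperator.Conn do
  # Returns the connection Bonny uses to talk to the cluster.
  # The kubeconfig path and context here are assumptions; adjust them to your environment.
  def get do
    K8s.Conn.from_file("~/.kube/config", context: "docker-for-desktop")
  end
end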

Now you are ready to run your operator:

iex -S mix

Guides

Have a look at the guides that come with this repository. Some can even be opened as a livebook.

Talks

Example Operators built with this version of Bonny

  • Kompost - Providing self-service management of resources for devs

Example Operators built with an older version of Bonny

Telemetry

Bonny uses the telemetry library to emit event metrics.

Events: Bonny.Sys.Telemetry.events()

[
    [:reconciler, :reconcile, :start],
    [:reconciler, :reconcile, :stop],
    [:reconciler, :reconcile, :exception],
    [:watcher, :watch, :start],
    [:watcher, :watch, :stop],
    [:watcher, :watch, :exception],
    [:scheduler, :binding, :start],
    [:scheduler, :binding, :stop],
    [:scheduler, :binding, :exception],
    [:task, :execution, :start],
    [:task, :execution, :stop],
    [:task, :execution, :exception]
]
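
To consume these events, attach a handler with :telemetry. A minimal sketch (the handler id and module name are made up for illustration):

defmodule YourOperator.TelemetryLogger do
  require Logger

  def attach do
    :telemetry.attach_many(
      "your-operator-telemetry-logger",
      Bonny.Sys.Telemetry.events(),
      &__MODULE__.handle_event/4,
      nil
    )
  end

  # Log every Bonny event with its measurements (e.g. duration) and metadata.
  def handle_event(event, measurements, metadata, _config) do
    Logger.debug("#{inspect(event)}: #{inspect(measurements)} #{inspect(metadata)}")
  end
end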

Terminology

Custom Resource:

A custom resource is an extension of the Kubernetes API that is not necessarily available on every Kubernetes cluster. In other words, it represents a customization of a particular Kubernetes installation.

CRD Custom Resource Definition:

The CustomResourceDefinition API resource allows you to define custom resources. Defining a CRD object creates a new custom resource with a name and schema that you specify. The Kubernetes API serves and handles the storage of your custom resource.

Controller:

A custom controller is a controller that users can deploy and update on a running cluster, independently of the cluster’s own lifecycle. Custom controllers can work with any kind of resource, but they are especially effective when combined with custom resources. The Operator pattern is one example of such a combination. It allows developers to encode domain knowledge for specific applications into an extension of the Kubernetes API.

Operator:

A set of application-specific controllers deployed on Kubernetes and managed via kubectl and the Kubernetes API.

Contributing

I'm thankful for any contribution to this project. Check out the contribution guide.

Operator Blog Posts

bonny's People

Contributors

adriffaud, bradleyd, coryodaniel, dependabot[bot], elliottneilclark, freedomben, gerred, kbredemeier, kianmeng, mindreframer, mruoss, pedep, rafaelgaspar, rodesousa, sleipnir, spunkedy, velimir, victoriavilasb


bonny's Issues

Watcher crashes when group version does not exist on cluster.

Environment

  • Bonny version
  • Elixir & Erlang/OTP versions (elixir --version):
  • Operating system:

Current behavior

Watcher crashes when a group version isn't available:

15:54:14.565 [error] GenServer Ballast.Controller.V1.PoolPolicy terminating
** (CaseClauseError) no case clause matching: {:error, :unsupported_group_version, "ballast.bonny.run/v1"}
    (k8s) lib/k8s/client/runner/watch.ex:76: K8s.Client.Runner.Watch.get_resource_version/2
    (k8s) lib/k8s/client/runner/watch.ex:30: K8s.Client.Runner.Watch.run/3
    (bonny) lib/bonny/watcher.ex:31: Bonny.Watcher.handle_info/2
    (stdlib) gen_server.erl:637: :gen_server.try_dispatch/4
    (stdlib) gen_server.erl:711: :gen_server.handle_msg/6
    (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Last message: :watch
State: %Bonny.Watcher.Impl{buffer: %Bonny.Watcher.ResponseBuffer{lines: [], pending: ""}, controller: Ballast.Controller.V1.PoolPolicy, resource_version: nil, spec: %Bonny.CRD{group: "ballast.bonny.run", names: %{kind: "PoolPolicy", plural: "poolpolicies", shortNames: ["pp"], singular: "poolpolicy"}, scope: :cluster, version: "v1"}}

Expected behavior

A log message or event should be dispatched indicating that the group version wasn't found.

Add mix bonny.init task

Add a task for initializing a new bonny operator.

  • add config/test.exs w/ HTTP and Discovery stubs
  • add example discovery mocks
  • add config/prod.exs defaulted to SA
  • add config/dev.exs default to docker-for-desktop?
  • either a Makefile for building docker images or a task?
  • bonny/operator logger, see eviction operator
  • test_helper.exs
    • K8s.Client.DynamicHTTPProvider.start_link([])

Auto deploy operator manifest on boot

When an operator boots, it should deploy its CRDs as part of the boot process. This will make deploying and packaging operators simpler, as currently you need both an operator Docker image and the CRD manifests.

Deploying the operator should create a configuration manifest for the operator itself and the service accounts necessary to manage the CRDs the operator creates.

Support /status and /scale subresources

Support status and scale subresources

Requires: k8s 1.13 beta

Expected behavior

CRD should generate w/ subresources:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
  # subresources describes the subresources for custom resources.
  subresources:
    # status enables the status subresource.
    status: {}
    # scale enables the scale subresource.
    scale:
      # specReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Spec.Replicas.
      specReplicasPath: .spec.replicas
      # statusReplicasPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Replicas.
      statusReplicasPath: .status.replicas
      # labelSelectorPath defines the JSONPath inside of a custom resource that corresponds to Scale.Status.Selector.
      labelSelectorPath: .status.labelSelector

Will need to support additional functions on the controller: update_status/1 and update_scale/1.

Watch scoping

Question:

When I create a namespaced CRD, Bonny watches a single namespace.
But wouldn't it be better to leave that choice open for namespaced CRDs?

I think there are more scenarios where an operator needs to watch the whole cluster even with a namespaced CRD.

Or... maybe I just didn't see how to configure it. :s

Problem when I run locally

Environment

  • Bonny version: 0.3
  • Elixir & Erlang/OTP versions (elixir --version): 1.7
  • Operating system: Ubuntu

Current behavior

Hi,

I followed your article on Medium about Bonny. When I run the operator locally with iex, I always get this error message (my kube config is valid):

 (MatchError) no match of right hand side value: {:error, :cluster_not_registered}
(k8s) lib/k8s/cluster.ex:46: K8s.Cluster.url_for/2                                                                                                                             
(k8s) lib/k8s/client/runner/base.ex:89: K8s.Client.Runner.Base.run/4                                                                                                           
(bonny) lib/bonny/watcher/impl.ex:128: Bonny.Watcher.Impl.fetch_resource_version/1                                                                                             
(bonny) lib/bonny/watcher/impl.ex:62: Bonny.Watcher.Impl.get_resource_version/1                                                                                                
(bonny) lib/bonny/watcher/impl.ex:46: Bonny.Watcher.Impl.watch_for_changes/2                                                                                                   
(bonny) lib/bonny/watcher.ex:24: Bonny.Watcher.handle_info/2                                                                                                                   
(stdlib) gen_server.erl:637: :gen_server.try_dispatch/4                                                                                                                        
(stdlib) gen_server.erl:711: :gen_server.handle_msg/6    

Mix task for generating CRD swagger

While working on the k8s client, I've allowed for extending the client with a custom swagger spec.

To auto-generate the routes in the client, we would need a task that takes a k8s version, downloads the swagger spec, then merges in the swagger operations and definitions for each CRD.

This will allow for simple CRUD style interactions with CRDs in the k8s API.

resources = K8s.list("mygroup/myversion", "MyCRDType")

Customizable request headers

Headers are hard-coded. Add defaults to Bonny.Config and add a configuration parameter for overriding them.

config :bonny, headers: [
  {"Accept", "im-static"},
  {"Foo", {m,f,a}}
]
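
Resolving such a config could be as simple as the sketch below (the function name is hypothetical): static values pass through unchanged, and {m, f, a} tuples are evaluated at request time.

defp resolve_headers(configured_headers) do
  Enum.map(configured_headers, fn
    # {module, function, args} tuples are invoked when the request is built
    {name, {m, f, a}} -> {name, apply(m, f, a)}
    # anything else is treated as a static header value
    {name, value} -> {name, value}
  end)
end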

Basic k8s HTTP client

Add a basic k8s HTTP client or URL generator for a "batteries included" experience.

I'm not sure that it should be as fully fledged as Kazan. I would rather defer to people to use that library if they needed a full implementation of a k8s client.

Rather, I'd like to include a small wrapper around HTTPoison or simply a URL/path generator that:

  • encapsulates SSL options and headers with K8s.Conf so that once it's loaded at runtime, it doesn't have to be passed around to each HTTP call (for HTTPoison)
  • ships with a default list of namespaced vs. clustered resources and makes ad-hoc calls to /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/{name} to determine the scope of CRDs
  • supports misc. operations (scale, status)

Support the following methods, accepting maps {"apiVersion": "apps/v1", "kind": "Deployment"} for mutations/statuses/reads and tuples {version, kind} | {version, kind, name} for reads/statuses:

Write:

  • Create
  • Patch
  • Replace
  • Delete
  • Delete Collection

Read:

  • Read
  • List
  • List All Namespaces

Status:

  • Patch Status
  • Read Status
  • Replace Status

Scale:

  • Read Scale
  • Replace Scale
  • Patch Scale

dispatch handling and the Events API

While it is up to the individual controller to handle error cases, it would be nice to integrate the result of a controller dispatch with the Kubernetes Events API.

Return from add/modify/delete/reconcile would automatically create events:

  • :ok, {:ok, _} -> Normal
  • :error, {:error, _} -> Warning

There are a lot of fields for an Event. It would be nice to abstract as much of this away as possible, but allow for an explicit return value of something like {:ok, %K8s.Event{}} (doesn't exist yet) to override any logic Bonny provides.
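
The basic mapping itself could be a couple of function clauses, roughly (a sketch; the function name is hypothetical):

# Map controller return values to Kubernetes event types as proposed above.
defp event_type(:ok), do: "Normal"
defp event_type({:ok, _}), do: "Normal"
defp event_type(:error), do: "Warning"
defp event_type({:error, _}), do: "Warning"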

Handle DOWN on Watcher HTTP Calls

hello-operator-988b49645-lhsc2 hello-operator 01:25:37.352 [info]  SIGTERM received - shutting down
hello-operator-988b49645-lhsc2 hello-operator
hello-operator-988b49645-lhsc2 hello-operator =CRASH REPORT==== 30-Dec-2018::01:25:37.404011 ===
hello-operator-988b49645-lhsc2 hello-operator   crasher:
hello-operator-988b49645-lhsc2 hello-operator     initial call: tls_connection:init/1
hello-operator-988b49645-lhsc2 hello-operator     pid: <0.211.0>
hello-operator-988b49645-lhsc2 hello-operator     registered_name: []
hello-operator-988b49645-lhsc2 hello-operator     exception exit: {shutdown,sender_died,killed}
hello-operator-988b49645-lhsc2 hello-operator       in function  gen_statem:loop_event_result/9 (gen_statem.erl, line 1158)
hello-operator-988b49645-lhsc2 hello-operator       in call from tls_connection:init/1 (tls_connection.erl, line 134)
hello-operator-988b49645-lhsc2 hello-operator     ancestors: [tls_connection_sup,ssl_connection_sup,ssl_sup,<0.110.0>]
hello-operator-988b49645-lhsc2 hello-operator     message_queue_len: 2
hello-operator-988b49645-lhsc2 hello-operator     messages: [{'DOWN',#Ref<0.5003975.3906207745.114967>,process,<0.203.0>,
hello-operator-988b49645-lhsc2 hello-operator                           shutdown},
hello-operator-988b49645-lhsc2 hello-operator                   {'EXIT',<0.116.0>,shutdown}]
hello-operator-988b49645-lhsc2 hello-operator     links: []
hello-operator-988b49645-lhsc2 hello-operator     dictionary: [{ssl_pem_cache,ssl_pem_cache},{ssl_manager,ssl_manager}]
hello-operator-988b49645-lhsc2 hello-operator     trap_exit: true
hello-operator-988b49645-lhsc2 hello-operator     status: running
hello-operator-988b49645-lhsc2 hello-operator     heap_size: 4185
hello-operator-988b49645-lhsc2 hello-operator     stack_size: 27
hello-operator-988b49645-lhsc2 hello-operator     reductions: 13171
hello-operator-988b49645-lhsc2 hello-operator   neighbours:
hello-operator-988b49645-lhsc2 hello-operator
hello-operator-988b49645-lhsc2 hello-operator =SUPERVISOR REPORT==== 30-Dec-2018::01:25:37.520622 ===
hello-operator-988b49645-lhsc2 hello-operator     supervisor: {local,tls_connection_sup}
hello-operator-988b49645-lhsc2 hello-operator     errorContext: shutdown_error
hello-operator-988b49645-lhsc2 hello-operator     reason: {shutdown,sender_died,killed}
hello-operator-988b49645-lhsc2 hello-operator     offender: [{nb_children,1},
hello-operator-988b49645-lhsc2 hello-operator                {id,undefined},
hello-operator-988b49645-lhsc2 hello-operator                {mfargs,{tls_connection,start_link,[]}},
hello-operator-988b49645-lhsc2 hello-operator                {restart_type,temporary},
hello-operator-988b49645-lhsc2 hello-operator                {shutdown,4000},
hello-operator-988b49645-lhsc2 hello-operator                {child_type,worker}]

Replace HTTPoison calls w/ K8s.Client

Environment

  • Bonny version: 0.30
  • Elixir & Erlang/OTP versions (elixir --version):
  • Operating system:

Current behavior

After the port from k8s_conf to k8s, Bonny is still using HTTPoison directly under the hood.

Expected behavior

Migrate to K8s.Client and K8s.watch/N

Telemetry events

Environment

  • Bonny version: 0.2.3
  • Elixir & Erlang/OTP versions (elixir --version): (any supported)
  • Operating system: (any supported)

Current behavior

As a responsible operator, I like to instrument my long-running services for gathering metrics (with Prometheus, in my case). Currently I must do so in my own code only; I cannot easily capture events/data handled directly (and only) by the Bonny framework itself.

Desired behavior

Bonny could introduce a dependency on Telemetry for emitting runtime events from the framework, which can be captured by anyone who wishes to instrument such events. This library has been adopted by Ecto (currently implemented by ecto_sql in 3.x), among others within the Elixir community.

A less well-established alternative could be the OpenCensus BEAM implementations.

Bonny.Server.Scheduler

Kubernetes scheduler server.

All nodes and pods are automatically selected from Bonny.Config.cluster_name/0. pods/0 and nodes/0 are overridable callbacks of Bonny.Server.Scheduler.

defmodule MyScheduler do
  use Bonny.Server.Scheduler, name: "foo"

  @impl Bonny.Server.Scheduler
  def select_node_for_pod(_pod, nodes) do
    nodes
    |> Stream.filter(fn(node) ->
      name = K8s.Resource.name(node)
      String.contains?(name, "preempt")
    end)
    |> Enum.take(1)
    |> List.first
  end
end

MyScheduler.start_link()

Migrate from k8s_client to k8s?

Environment

  • Bonny version: 0.2.3
  • Elixir & Erlang/OTP versions (elixir --version): (any supported)
  • Operating system: (any supported)

Current behavior

The current version depends on a client library marked as deprecated. Users who also use the replacement k8s library wind up with two such clients in their dependency tree, and Bonny's own code can't benefit from improvements made in the successor library.

Expected behavior

A new release of Bonny which migrates to k8s would be wonderful!

Add bonny.gen.config

Create a mix task bonny.gen.config that will create a new configuration file at config/bonny.exs with the config example and comments from the README, and also inject a line at the bottom of config/config.exs that includes the new config file.

Auto detecting operator controllers for Watcher

Currently you have to explicitly set the controllers to watch in config.exs. This makes it easy to test. It's explicit, but it can also be error-prone...

Is the indirection worth the convenience?

Could be detected with something like:

@doc """
  Loads all controllers in all code paths.
  """
  @spec load_all() :: [] | [atom]
  def load_all, do: get_controllers(Bonny.Controller)

  # Loads all modules that extend a given module in the current code path.
  @spec get_controllers(atom) :: [] | [atom]
  defp get_controllers(controller_type) when is_atom(controller_type) do
    available_modules(controller_type) |> Enum.reduce([], &load_controller/2)
  end

  defp load_controller(module, modules) do
    if Code.ensure_loaded?(module), do: [module | modules], else: modules
  end

  defp available_modules(controller_type) do
    # Ensure the current projects code path is loaded
    Mix.Task.run("loadpaths", [])
    # Fetch all .beam files
    Path.wildcard(Path.join([Mix.Project.build_path(), "**/ebin/**/*.beam"]))
    # Parse the BEAM for behaviour implementations
    |> Stream.map(fn path ->
      {:ok, {mod, chunks}} = :beam_lib.chunks('#{path}', [:attributes])
      {mod, get_in(chunks, [:attributes, :behaviour])}
    end)
    # Filter out behaviours we don't care about and duplicates
    |> Stream.filter(fn {_mod, behaviours} -> is_list(behaviours) && controller_type in behaviours end)
    |> Enum.uniq()
    |> Enum.map(fn {module, _} -> module end)
  end

Add --skip-deployment flag to bonny.gen.manifest

For local development it would be nice to have a way to generate a manifest without a Deployment, so the operator can be run outside the cluster.

Add:

  • mix bonny.gen.manifest --skip-deployment
  • help message
    To skip the `deployment` for running an operator outside of the cluster (like in development) run:
      mix bonny.gen.manifest --skip-deployment

API Aggregation

API Aggregation

If I recall correctly, API aggregation is done via Go modules. We may need to make a Go module shim and do gRPC calls to the bonny operator.

Is API Aggregation worth the fuss given operators and webhooks?

Kubernetes Events abstraction

Environment

  • Bonny version: 0.2.3
  • Elixir & Erlang/OTP versions (elixir --version): (any supported)
  • Operating system: (any supported)

Current behavior

Developers consuming this library currently have to roll their own solution to emit Kubernetes events via the appropriate API.

Expected behavior

It would be wonderful to have a simple abstraction for emitting events as part of the Bonny library's contents.

Add a prometheus endpoint

  • Track metrics of success on dispatching
    • {apigroup, resource, version, event, success/fail}
  • Add /metrics endpoint

Automatically inject ownerReferences

Inject metadata.ownerReferences into resources.

I'm not positive the original idea below will work. Middleware is per cluster, and it would need per-resource data to inject the ownerReference.

Definitely worth looking into, but a simple solution for now could be to add an add_owner_references/2 function to the Controller behavior.

def added(todo = %{}) do
  %K8s.Operation{} # make an operation
  |> add_owner_references(todo) # attach the `todo` CRD's owner references
  |> K8s.run(conn)
end
apiVersion: v1
kind: Pod
metadata:
  ...
  ownerReferences:
  - apiVersion: apps/v1
    controller: true
    blockOwnerDeletion: true
    kind: ReplicaSet
    name: my-repset
    uid: d9607e19-f88f-11e6-a518-42010a800195
  ...
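
A rough sketch of what such a helper could look like, operating on plain resource maps rather than a %K8s.Operation{} (illustrative only, not an existing Bonny API; field names follow the ownerReferences example above):

def add_owner_references(resource, owner) do
  owner_ref = %{
    "apiVersion" => owner["apiVersion"],
    "kind" => owner["kind"],
    "name" => get_in(owner, ["metadata", "name"]),
    "uid" => get_in(owner, ["metadata", "uid"]),
    "controller" => true,
    "blockOwnerDeletion" => true
  }

  # Prepend the reference, creating metadata/ownerReferences if missing.
  Map.update(resource, "metadata", %{"ownerReferences" => [owner_ref]}, fn metadata ->
    Map.update(metadata, "ownerReferences", [owner_ref], &[owner_ref | &1])
  end)
end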

original idea

This will require

  • K8s.Middleware to modify all JSON payloads during a lifecycle event
  • Ensure lifecycle events are dispatched in their own process
  • Store custom resource's information by process ID so that it can be fetched by middleware
  • default implementation to make the delete lifecycle a no-op as deletion should happen via k8s/ownerReferences garbage collection

Related #79

generate only valid characters for operator names

First of all Daniel:

this. is. awesome!

Thanks for putting this into a single package and providing educational material around it; this is a crazy amount of work.
Just a couple of weeks ago I myself was thinking about writing a Kubernetes Operator in Elixir and felt some hesitation because of the rather large yak-shaving potential of this task (compared to Golang). But now, after seeing bonny, the effort is already reduced and this seems much more approachable. Really excited about what's possible!

So, I will try to use bonny for a real use case and will file smaller (and also bigger, if needed) issues as I notice them. If I see a clear fixing opportunity, I will open a PR.

Used version: :bonny, "0.2.2"

example:

mix bonny.gen.controller PgService pg-service

result:

@names %{
    plural: "pg-service",
    singular: "pg_service", # <--- would not be valid as URL / domain name
    kind: "PgService",
    short_names: []
  }

Bonny.Server.Controller

Prereqs: #58 #57

Refactor Reconciler and Watcher into smaller parts that the Bonny.Server.Controller can use

The higher-order Bonny.Controller will use Bonny.Server.Controller as well as add functionality around @rules, naming, etc.

defmodule MyController do
  use Bonny.Server.Controller, cluster: :default
end

Shorthand for use Bonny.Server.Watcher and use Bonny.Server.Reconciler

A smorgasbord of dialyzer errors

Environment

  • Bonny version: master

Current behavior

Dialyzer is emitting a few errors, see below.

Expected behavior

Output should be pristine.

Finding suitable PLTs
Checking PLT...
[:asn1, :certifi, :compiler, :crypto, :earmark, :eex, :elixir, :ex_doc, :hackney, :httpoison, :idna, :jason, :k8s, :kernel, :logger, :makeup, :makeup_elixir, :metrics, :mimerl, :mix, :nimble_parsec, :public_key, :ssl, :ssl_verify_fun, :stdlib, :telemetry, :unicode_util_compat, :yamerl, :yaml_elixir]
PLT is up to date!
Starting Dialyzer
[
  check_plt: false,
  init_plt: '/Users/odanielc/Workspace/coryodaniel/bonny/_build/dev/dialyxir_erlang-21.2.4_elixir-1.8.1_deps-dev.plt',
  files_rec: ['/Users/odanielc/Workspace/coryodaniel/bonny/_build/dev/lib/bonny/ebin'],
  warnings: [:unknown]
]
Total errors: 9, Skipped: 0
done in 0m1.73s
lib/bonny/telemetry.ex:22:contract_range
Contract cannot be correct because return type on line number 33 is mismatched.

Function:
Bonny.Telemetry.emit([atom(), ...], _measurements :: map(), _metadata :: map())

Type specification:
Contract head:
(atom(), map(), map()) :: no_return()

Contract head:
(atom(), map(), (... -> any)) :: no_return()

Contract head:
([atom()], map(), (... -> any)) :: no_return()

Success typing (line 33):
:ok
________________________________________________________________________________
lib/bonny/watcher.ex:15:no_return
Function init/1 has no local return.
________________________________________________________________________________
lib/bonny/watcher.ex:21:unused_fun
Function schedule_watcher/0 will never be called.
________________________________________________________________________________
lib/bonny/watcher/impl.ex:19:no_return
Function new/1 has no local return.
________________________________________________________________________________
lib/bonny/watcher/impl.ex:21:call
The call:
Bonny.Telemetry.emit([:watcher, :initialized], %{:api_version => _, :kind => _, _ => _})

breaks the contract
Contract head:
(atom(), (... -> any)) :: no_return()

Contract head:
(atom(), map()) :: no_return()

in argument
1st
________________________________________________________________________________
lib/bonny/watcher/impl.ex:36:no_return
Function watch_for_changes/2 has no local return.
________________________________________________________________________________
lib/bonny/watcher/impl.ex:37:call
The call:
Bonny.Telemetry.emit([:watcher, :started], %{:api_version => _, :kind => _, _ => _})

breaks the contract
Contract head:
(atom(), (... -> any)) :: no_return()

Contract head:
(atom(), map()) :: no_return()

in argument
1st
________________________________________________________________________________
lib/bonny/watcher/impl.ex:100:invalid_contract
Invalid type specification for function.

Function:
Bonny.Watcher.Impl.do_dispatch/3

Success typing:
@spec do_dispatch(_, :add | :delete | :modify, _) :: {:ok, pid()}
________________________________________________________________________________
lib/bonny/watcher/impl.ex:102:no_return
The created fun has no local return.
________________________________________________________________________________
done (warnings were emitted)

Bonny.Server.Reconciler

defmodule MyReconciler do
  use Bonny.Server.Reconciler, cluster: :default
  
  @doc "List operation to reconcile"
  @impl true
  def operation() do
    K8s.Client.list("v1", :pods, namespaces: :all)
  end

  @doc "Resource to reconcile"
  @impl true
  def reconcile(resource), do: :ok
end

Refactor Bonny.Controller to use Bonny.Server*

Once all 4 server modules are abstracted, refactor Bonny.Controller onto Bonny.Server.Controller.

Bonny.Controller will still provide CRD integration/macros/etc., while Bonny.Server.Controller will simply handle watching and lifecycles.

Bonny.Server.Watcher

Behaviour / use macro for creating a watcher.

defmodule MyWatcher do
  use Bonny.Server.Watcher, cluster: :default
  
  @doc "Operation to watch"
  @impl true
  def operation() do
    K8s.Client.list("v1", :pods, namespaces: :all)
  end

  @doc "Resource was added"
  @impl true
  def add(resource), do: :ok

  @doc "Resource was modified"
  @impl true
  def modify(resource), do: :ok

  @doc "Resource was deleted"
  @impl true
  def delete(resource), do: :ok
end

Dispatch operator events in a task

Consider dispatching operator events in a task. We expect the operator to handle its own errors, so dispatching in a task could make it extra speedy.

defmodule Bonny.TestTaskSupervisor do
  def async_nolink(_, fun), do: fun.()
  def start_child(_, fun), do: fun.()
end

@task_supervisor Application.get_env(:bonny, :task_supervisor) || Task.Supervisor
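
A minimal sketch of what task-based dispatch could look like, building on the @task_supervisor attribute above (the dispatch function itself is hypothetical):

def dispatch(controller, event, resource) do
  # Run the controller callback in its own supervised task so a crash there
  # doesn't take the watcher down with it.
  Task.Supervisor.async_nolink(@task_supervisor, fn ->
    apply(controller, event, [resource])
  end)
end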

Open API 3 Schema

Not sure if validation itself should be a part of Bonny since kubernetes >1.13 will validate the HTTP operation.

We should support generating the Open API schema as part of the CRD though. This will help with kubectl explain my-resource, enable validation at the kube API, and allow us to generate resource manifests (#10).

It could be very similar to how Phoenix generates models:

mix bonny.gen.controller Foo foos name:string fav_number:integer fav_color:string

Open API 3 Schema Validation

Elixir Library

Include schema in #26

Add mix bonny.test to e2e test an operator

Create a generator or mix task to deploy and verify the functionality of a CRD on a local cluster.

Thinking something along the lines of
mix bonny.test --user=[USER] --cluster=[CLUSTER] --namespace=[NAMESPACE] --keep-namespace

Steps

  • create namespace || bonny-${RAND}
  • mix bonny.gen.manifest -n [NAMESPACE] -o - | kubectl apply -f -
  • iex -S mix
  • mix bonny.gen.resource -o - | kubectl apply -f -
  • check /metrics for successful add/1
  • mix bonny.gen.resource --name=name-from-first-resource -o - | kubectl apply -f -
  • check /metrics for successful modify/1
  • kubectl delete crd-type/name-from-first-resource
  • check /metrics for successful delete/1
  • optionally destroy namespace

Dependencies

  • depends on OAI so we can generate a resource #10
  • depends on metrics #4 #5

Need to consider how to handle multiple CRDs.

Composable filters for node scheduling

Add submodules / functions to Bonny.Server.Scheduler for composable filtering of nodes:

  • affinity / anti-affinity for pods / nodes
  • labels
  • taints

Blocked by #55

subset_of_nodes = 
  Bonny.Server.Scheduler.Nodes.with_labels(["label-key", {"a-key", "with-a-value"}])
  |> Bonny.Server.Scheduler.Nodes.with_affinity(...)
  |> Bonny.Server.Scheduler.nodes()

Invalid JSON parsing when large watch streams are received

Environment

  • Bonny version: all
  • Elixir & Erlang/OTP versions (elixir --version): all
  • Operating system: all

Current behavior

When watching a resource with extremely high throughput, a streaming response chunk from k8s may contain only part of a JSON payload.

The current expectation is that each chunk is a complete JSON event. Partial events cause the process to crash and the event to be lost.

Expected behavior

Buffer HTTP responses and parse when the entire response is present.
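
A minimal sketch of such a buffer (Bonny's actual implementation lives in Bonny.Watcher.ResponseBuffer; this illustrative version just accumulates chunks and splits on newlines):

defmodule ChunkBuffer do
  defstruct lines: [], pending: ""

  # Append an HTTP chunk: only complete, newline-terminated lines become events;
  # the trailing fragment is kept until the next chunk arrives.
  def add_chunk(%__MODULE__{pending: pending} = buffer, chunk) do
    {complete, [rest]} =
      (pending <> chunk)
      |> String.split("\n")
      |> Enum.split(-1)

    %{buffer | lines: buffer.lines ++ complete, pending: rest}
  end

  # Pop the buffered complete lines (each one a full JSON watch event).
  def get_lines(%__MODULE__{lines: lines} = buffer) do
    {lines, %{buffer | lines: []}}
  end
end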

Dialyzer issues on controllers

Environment

  • Bonny version: 0.3+

Current behavior

When using dialyzer w/ a bonny controller a few warnings are emitted:

lib/ballast/controllers/v1/reserve_pool_policy.ex:1:invalid_contract
The @spec for the function does not match the success typing of the function.

Function:
Ballast.Controller.V1.ReservePoolPolicy.crd_spec/0

Success typing:
@spec crd_spec() :: %Bonny.CRD{
  :group => nil | binary(),
  :names => %{
    :kind => <<_::136>>,
    :plural => <<_::152>>,
    :shortNames => [<<_::24>>, ...],
    :singular => <<_::136>>
  },
  :scope => :cluster,
  :version => nil | binary()
}
________________________________________________________________________________
lib/ballast/controllers/v1/reserve_pool_policy.ex:1:guard_fail
Guard test:

  _ :: %{
    :kind => <<_::136>>,
    :plural => <<_::152>>,
    :shortNames => [<<_::24>>, ...],
    :singular => <<_::136>>
  }


===

false

can never succeed.
________________________________________________________________________________
lib/ballast/controllers/v1/reserve_pool_policy.ex:1:guard_fail
Guard test:
_ :: :cluster

===

false

can never succeed.
________________________________________________________________________________
lib/ballast/controllers/v1/reserve_pool_policy.ex:1:unused_fun
Function crd_spec_names/1 will never be called.

First-start reconciliation loop

Disclaimer: Code links are merely for my own reference and other less-familiar readers, and are by no means meant to imply that the maintainer does not know the contents of their own codebase. 😄

Environment

  • Bonny version: 0.2.3
  • Elixir & Erlang/OTP versions (elixir --version): (any supported)
  • Operating system: (any supported)

Current behavior

The current Bonny.Controller behaviour defines the add, modify and delete callbacks and includes these three events in the mix bonny.gen.controller templating.

Expected behavior

I would like to propose an additional callback of reconcile (or another representative terminology) that will, during the first boot cycle and again periodically on a configurable timer, examine existing CRDs within the operator's purview. Then it will ensure that any appropriate action has been taken based on their contents, i.e. that managed descendent resources exist with the desired content. With an eye for as much idempotency as possible, it should correct any configuration drift that may have happened while a Bonny-based operator was offline, misbehaving, etc.

This is a common practice in other operator codebases and frameworks, and would help enable the declarative, convergent behavior that we've come to appreciate from standard Kubernetes primitives.

Other notes

The current Bonny.Watcher implementation short-circuits the review of existing resources that match the watch request within the Bonny.Watcher.Impl module, which on the first request without a resource version:

Defaults to changes from the beginning of history

This feature probably also depends on #27 for clean access to the /status subresource to make it truly graceful to handle pre-existing CRDs which have already been realized.

Finally, I have half-formed ideas of trying this on my own project that may result in a PR here for discussion purposes, but I'm not yet using this library at work and will have difficulty committing the time soon.
