An open service broker that bridges the gap between Kubernetes deployments and the CF marketplace. When deployed with a Helm chart and added to the marketplace, `cf create-service` calls to Kibosh create the collection of Kubernetes resources described by the chart, and `cf bind-service` calls to Kibosh expose back any services and secrets created by the chart.
Values defined in `values.yaml` can be overridden or set via `cf create-service` or `cf update-service`. The format of the JSON string is nested. Also refer to the cf CLI documentation for an example of a valid JSON object.
Example for setting `mysqlUser` on `cf create-service` for the MySQL chart.

values.yaml:
```yaml
##
# mysqlUser:
# mysqlPassword:
```

```shell
cf create-service mysql medium mysql-kibosh-service -c '{"mysqlUser":"admin"}'
```
Example for setting `resources.requests.memory` on `cf update-service` for the MySQL chart.

values.yaml:
```yaml
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: 256Mi
    cpu: 100m
```

```shell
cf update-service mysql-kibosh-service -c '{"resources": {"requests": {"memory": "256Mi"}}}'
```
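The nested JSON passed via `-c` is overlaid onto the chart's defaults. A minimal sketch of that overlay behavior (the `deep_merge` helper is illustrative; Helm performs the actual value coalescing):

```python
# Sketch of how nested JSON passed via `-c` overrides values.yaml defaults.

def deep_merge(defaults, overrides):
    """Recursively overlay `overrides` onto `defaults`, returning a new dict."""
    result = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

# Defaults as they might appear in the MySQL chart's values.yaml (illustrative)
defaults = {
    "mysqlUser": "guest",
    "resources": {"requests": {"memory": "128Mi", "cpu": "100m"}},
}

# JSON from: cf update-service ... -c '{"resources": {"requests": {"memory": "256Mi"}}}'
overrides = {"resources": {"requests": {"memory": "256Mi"}}}

merged = deep_merge(defaults, overrides)
# Only memory changes; cpu and mysqlUser keep their defaults.
```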
For some in-depth discussion, see this blog post: Use Kubernetes Helm Packages to Build Pivotal Cloud Foundry tiles.
- Plans (cf marketplace)
Kibosh requires that the helm chart has an additional file, `plans.yaml`, at the root level that describes the plans:

```yaml
---
- name: "small"
  description: "default (small) plan for mysql"
  file: "small.yaml"
- name: "medium"
  description: "medium sized plan for mysql"
  file: "medium.yaml"
```

`file` is a filename that exists in the `plans` subdirectory of the chart. File names should consist of only lowercase letters, digits, `.`, or `-`. The standard `values.yaml` file in the helm chart sets the defaults. Each plan's yaml file is a set of values overriding the defaults present in `values.yaml`. Copy any key/value pairs to override from `values.yaml` into a new plan file and change their values. See kibosh-sample's sample-charts for a few examples.
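For example, a hypothetical `plans/small.yaml` for the MySQL chart might copy and override a couple of keys from `values.yaml` (the specific keys and values below are illustrative, not taken from the actual chart):

```yaml
---
# Keys copied from values.yaml, with values changed for the small plan.
mysqlUser: "admin"
resources:
  requests:
    memory: 128Mi
    cpu: 50m
```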
In order to successfully pull private images, we're imposing some requirements on the `values.yaml` file structure:

- Single image charts should use this structure:

```yaml
---
image: "my-image"
imageTag: "5.7.14"
```

- Multi-image charts should use this structure:

```yaml
---
images:
  thing1:
    image: "my-first-image"
    imageTag: "5.7.14"
  thing2:
    image: "my-second-image"
    imageTag: "1.2.3"
```
This feature is experimental and the syntax will likely change in the future.
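A minimal sketch of reading image references out of the two supported shapes (the `collect_images` helper is hypothetical, not Kibosh's actual code):

```python
# Extract (image, tag) pairs from a parsed values.yaml dict in either of the
# two structures described above.

def collect_images(values):
    """Return a list of (image, tag) pairs from a parsed values.yaml dict."""
    if "images" in values:  # multi-image chart
        return [(v["image"], v["imageTag"]) for v in values["images"].values()]
    return [(values["image"], values["imageTag"])]  # single-image chart

single = {"image": "my-image", "imageTag": "5.7.14"}
multi = {"images": {"thing1": {"image": "my-first-image", "imageTag": "5.7.14"},
                    "thing2": {"image": "my-second-image", "imageTag": "1.2.3"}}}
```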
By default, Kibosh will create all deployments in the same cluster. It's also possible for each plan to target a different cluster. In `plans.yaml`, the plan specifies a credentials file:
```yaml
---
- name: "small"
  description: "default (small) plan for mysql"
  file: "small.yaml"
  credentials: "small-creds.yaml"
```
The contents of this file mirror what would appear in the `.kube/config` file. For example, `small-creds.yaml` would contain:
```yaml
---
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: bXktY2VydA==
    server: https://pks.example.com
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    user: my-user
  name: my-cluster
current-context: my-cluster
kind: Config
preferences: {}
users:
- name: my-user
  user:
    token: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
```
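Note that fields like `certificate-authority-data` are base64-encoded, as in a regular kubeconfig. The placeholder value above decodes as follows:

```python
# Decode the base64-encoded certificate-authority-data from small-creds.yaml.
import base64

ca_data = "bXktY2VydA=="
print(base64.b64decode(ca_data).decode())  # -> my-cert
```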
- When defining a `Service` to expose back to any applications that are bound, `type: LoadBalancer` is a current requirement. `NodePort` is also an option, and Kibosh will add externalIPs and nodePort to the bind JSON, but `NodePort` carries significant risks and probably should not be used in production: it is not robust to cluster scaling events, upgrades, or other IP changes.
- Resizing disks has limitations. To support upgrade:
  - You can't resize a persistent volume claim (currently behind an alpha feature gate)
  - Selectors are immutable
    - This means that the chart name cannot change (the name is generally used in selectors)
When the environment settings for a private registry are present (`REG_SERVER`, `REG_USER`, `REG_PASS`), then Kibosh will transform images to pull them from the private registry. It assumes the image is already present (see the Kibosh deployment). It will patch the default service account in the instance namespaces to add in the registry credentials. Be sure that `REG_SERVER` contains any required path information; for example, on GCP: `gcr.io/my-project-name`.
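As a rough sketch of the kind of transformation involved (the helper name and exact rewrite rule below are assumptions for illustration, not Kibosh's actual implementation):

```python
# Illustrative sketch: re-point an image reference at the private registry
# named by REG_SERVER, keeping the image name and tag.

def rewrite_image(reg_server, image, image_tag):
    """Point an image reference at the private registry, keeping the tag."""
    # Strip any upstream registry/namespace prefix, keep the bare image name.
    name = image.rsplit("/", 1)[-1]
    return "%s/%s:%s" % (reg_server, name, image_tag)

# With REG_SERVER=gcr.io/my-project-name (note the required path for GCP):
print(rewrite_image("gcr.io/my-project-name", "mysql", "5.7.14"))
# -> gcr.io/my-project-name/mysql:5.7.14
```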
We welcome comments, questions, and contributions from community members. Please consider the following ways to contribute:
- File GitHub issues for questions, bugs, and new features, and comment and vote on the ones that you are interested in.
- If you want to contribute code, please make your code changes on a fork of this repository and submit a pull request to the master branch of Kibosh. We strongly suggest that you first file an issue to let us know of your intent, or comment on the issue you are planning to address.
To manually deploy the BOSH release, get the latest BOSH release (`kibosh-release-X.X.XX.tgz`) from the GitHub releases and upload it to your director.

Build a manifest by starting from the example bosh-lite manifest `lite-manifest.yml` and customize the cloud-specific settings (`az`, `vm_type`, etc). This manifest uses a set of input variables. See `values-sample.yml` for example values.
Install Go dependencies:

```shell
go get -u github.com/onsi/ginkgo/ginkgo
go get -u github.com/onsi/gomega
go get -u github.com/maxbrunsfeld/counterfeiter
go get -u github.com/golang/dep/cmd/dep
go get -u github.com/gosuri/uitable
```
Run `make bootstrap` from a clean checkout to set up initial dependencies. This will restore the locked dependency set specified by `Gopkg.toml` (we're no longer checking in `vendor`).
Copy `local_dev.sh.template` to `local_dev.sh` (which is in `.gitignore`) and configure the values (`cluster.certificate-authority-data`, `cluster.server`, and `user.token`) for a working cluster. Then run:

```shell
./local_dev.sh
```
Make sure minikube is running:

```shell
minikube start --vm-driver=hyperkit
```

Use `local_dev_minikube.sh` to set up all the secrets and start Kibosh:

```shell
local_dev_minikube.sh
```
In production, Tiller should be secured. It's probably good practice to use a secured Tiller in your local environment as well (at least some of the time) to catch issues.
To generate a set of credentials, run `tiller_ssl.sh` from inside `docs/tiller-ssl/`. This will create a CA cert, a cert/key pair for Tiller, and a client cert/key pair.

If debugging using the helm CLI, include the TLS flags. For example:

```shell
helm ls --all --tls-verify --tls-ca-cert docs/tiller-ssl/ca.cert.pem --tls-cert docs/tiller-ssl/tiller.cert.pem --tls-key docs/tiller-ssl/tiller.key.pem
```

See Helm's tiller_ssl.md for more details.
The Kibosh code loads charts from the `HELM_CHART_DIR`, which defaults to `charts`. This directory can either be a single chart (with all the changes described in the configuration, e.g. `plans.yaml` and `./plans`), or a directory where each subdirectory is a chart. The multiple charts feature isn't yet supported by tile-generator.
```
charts
├── mariadb
│   ├── Chart.yaml
│   ├── plans
│   │   ├── medium.yaml
│   │   └── small.yaml
│   ├── plans.yaml
│   ├── templates
│   ...
└── mysql
    ├── Chart.yaml
    ├── plans
    │   └── default.yaml
    ...
```
We have modified some example charts from the stable helm repository.
```shell
make test
```

To generate the test-doubles after any interface change, run:

```shell
make generate
```
The pipeline is backed by a cluster in the shared GKE account. The default admin user in GKE has a password, while Kibosh is configured to use a token. To create a user in the cluster and fetch the token, do something like:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kibosh-concourse-ci
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kibosh-concourse-ci
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kibosh-concourse-ci
  namespace: kube-system
```

```shell
kubectl create -f [above contents in file].yml
kubectl get secrets --namespace=kube-system | grep "kibosh-concourse-ci"
kubectl get secret --namespace=kube-system kibosh-concourse-ci-token-pfnqs -o yaml
```
To add a dependency:

```shell
dep ensure -add github.com/pkg/errors
```

To update a dependency:

```shell
dep ensure -update github.com/pkg/errors
```
Dependency vendoring with respect to helm & k8s is trickier. `dep` isn't able to build the tree without significant help. The `Gopkg.toml` has several overrides needed to get everything to compile.
Updating to a new version of helm/k8s will probably require re-visiting the override & constraint matrix. Useful inputs into this process are:

- The k8s Godeps
- Helm's Glide dependencies and dependency lock file
- Draft's `Gopkg.toml` file (they're doing the same thing we are, pulling in Helm as a library)
- This helm tracker issue also has some useful context

Also run the make target `cleandep` to wipe out the lock file and any local state when upgrading helm/k8s, to make sure it can be rebuilt cleanly from the specified constraints.
More dep links:
- Common dep commands: https://golang.github.io/dep/docs/daily-dep.html
- `Gopkg.toml` details: https://github.com/golang/dep/blob/master/docs/Gopkg.toml.md
Kibosh can also manage multiple charts more dynamically (without redeployment). This allows customers to add any available helm chart to their cf marketplace with minimal effort and cycle time.
There is also a corresponding CLI (`bazaarcli`) to manage these charts:

```shell
./bazaarcli.mac -t http://bazaar.v3.pcfdev.io -u admin -p 'monkey123' list
./bazaarcli.mac -t http://bazaar.v3.pcfdev.io -u admin -p 'monkey123' save ~/workspace/kibosh-sample/sample-charts/mysql-0.8.2.tgz
./bazaarcli.mac -t http://bazaar.v3.pcfdev.io -u admin -p 'monkey123' save ~/workspace/kibosh-sample/sample-charts/rabbitmq-1.1.9.tgz
./bazaarcli.mac -t http://bazaar.v3.pcfdev.io -u admin -p 'monkey123' list

cf enable-service-access mysql
cf enable-service-access rabbitmq
cf marketplace

./bazaarcli.mac -t http://bazaar.v3.pcfdev.io -u admin -p 'monkey123' delete rabbitmq
cf marketplace
```
To deploy in this way, start from the example bosh-lite manifest `lite-bazaar-manifest.yml` and customize the cloud-specific settings (`az`, `vm_type`, etc). See the deploying section for more details.
Alternatively, you can also deploy Bazaar as a tile. Follow the README on that page.
Sequence diagram source (via https://www.websequencediagrams.com/):

```
title Kibosh

operator->cf: deploy tile with kibosh and helm chart
kibosh->cf: add offering to marketplaces via errand
user->cf: cf create-service
cf->kibosh: OSBAPI api provision call
kibosh->k8s: deploy chart
user->cf: cf bind-service
cf->kibosh: OSBAPI api bind call
kibosh->k8s: k8s api to get secrets & services
k8s->kibosh: secrets and services
kibosh->cf: secrets and services as credentials json
cf->app: secrets and services as env vars
```
MVP architecture, including Kibosh packaged by tile-generator