
k8s-handle's Introduction

k8s-handle

Easy CI/CD for Kubernetes clusters with python and jinja2

k8s-handle is a command line tool that facilitates continuous delivery for Kubernetes applications. It also supports environments, so you can use the same deployment templates for different environments such as staging and production. k8s-handle is a Helm alternative, but without the package manager.


Features

  • Easy-to-use command line interface
  • Configure all variables in a single configuration file (config.yaml)
  • Templating for Kubernetes resource files (Jinja2) with includes, loops, if-else and so on
  • Loading variables from the environment
  • Includes in configuration (includes in config.yaml) for big deploys
  • Async and sync deploy modes (wait for Deployment, StatefulSet, DaemonSet readiness)
  • Strict mode: stop the deploy if any warning appears
  • Easy integration with a CI pipeline (GitLab CI, for example)
  • Ability to destroy resources (deploy and destroy from git branches, GitLab environments)

k8s-handle vs helm

  • k8s-handle acts as a template parser and provisioning tool, but no package manager is included, unlike Helm
  • k8s-handle doesn't need in-cluster tools like the Tiller server; you only need a ServiceAccount for deploys
  • k8s-handle is secure by default: you don't need to generate any certificates to deploy an application, since k8s-handle uses the Kubernetes REST API over HTTPS, like kubectl

Deploy process

Before you begin

$ cat > ~/.kube/kubernetes.ca.crt << EOF
> <paste your cluster CA here>
> EOF
$ cat > ~/.kube/config << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
    certificate-authority: kubernetes.ca.crt
    server: < protocol://masterurl:port >
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    namespace: my-namespace
    user: my-user
  name: my-context
current-context: my-context
users:
- name: my-user
  user:
    token: <your token>
EOF

Installation with pip

Requires Python 3.4 or higher

$ pip install k8s-handle
 -- or --
$ pip install --user k8s-handle

Usage with docker

$ cd $WORKDIR
$ git clone https://github.com/2gis/k8s-handle-example.git
$ cd k8s-handle-example
$ docker run --rm -v $(pwd):/tmp/ -v "$HOME/.kube:/root/.kube" 2gis/k8s-handle k8s-handle deploy -s staging --use-kubeconfig
INFO:templating:File "/tmp/k8s-handle/configmap.yaml" successfully generated
INFO:templating:Trying to generate file from template "secret.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/secret.yaml" successfully generated
INFO:templating:Trying to generate file from template "deployment.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/deployment.yaml" successfully generated
INFO:k8s.resource:ConfigMap "k8s-starter-kit-nginx-conf" already exists, replace it
INFO:k8s.resource:Secret "k8s-starter-kit-secret" already exists, replace it
INFO:k8s.resource:Deployment "k8s-starter-kit" does not exist, create it

                         _(_)_                          wWWWw   _
             @@@@       (_)@(_)   vVVVv     _     @@@@  (___) _(_)_
            @@()@@ wWWWw  (_)\    (___)   _(_)_  @@()@@   Y  (_)@(_)
             @@@@  (___)     `|/    Y    (_)@(_)  @@@@   \|/   (_)
              /      Y       \|    \|/    /(_)    \|      |/      |
           \ |     \ |/       | / \ | /  \|/       |/    \|      \|/
            \|//    \|///    \|//  \|/// \|///    \|//    |//    \|//
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Usage with CI/CD tools

If you are using GitLab CI, TeamCity or something else, you can use a Docker runner/agent; the script will be slightly different:

$ k8s-handle deploy -s staging

Configure a checkout of https://github.com/2gis/k8s-handle-example.git with the specific branch without-kubeconfig. You also need to set up the following env vars:

  • K8S_NAMESPACE
  • K8S_MASTER_URI
  • K8S_CA_BASE64 (optional)
  • K8S_TOKEN

Use the 2gis/k8s-handle image.
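A sketch of such a job, assuming the variables above are defined as protected CI variables (my-namespace is a placeholder):

deploy:
  image: 2gis/k8s-handle:latest
  variables:
    K8S_NAMESPACE: my-namespace
  script:
    - k8s-handle deploy -s staging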

Notice: if you use GitLab CI, you can configure the Kubernetes integration and just use the --use-kubeconfig flag.

Usage

$ k8s-handle deploy -s staging --use-kubeconfig
INFO:templating:Trying to generate file from template "configmap.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/configmap.yaml" successfully generated
INFO:templating:Trying to generate file from template "secret.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/secret.yaml" successfully generated
INFO:templating:Trying to generate file from template "deployment.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/deployment.yaml" successfully generated
INFO:k8s.resource:ConfigMap "k8s-starter-kit-nginx-conf" already exists, replace it
INFO:k8s.resource:Secret "k8s-starter-kit-secret" already exists, replace it
INFO:k8s.resource:Deployment "k8s-starter-kit" does not exist, create it

                         _(_)_                          wWWWw   _
             @@@@       (_)@(_)   vVVVv     _     @@@@  (___) _(_)_
            @@()@@ wWWWw  (_)\    (___)   _(_)_  @@()@@   Y  (_)@(_)
             @@@@  (___)     `|/    Y    (_)@(_)  @@@@   \|/   (_)
              /      Y       \|    \|/    /(_)    \|      |/      |
           \ |     \ |/       | / \ | /  \|/       |/    \|      \|/
            \|//    \|///    \|//  \|/// \|///    \|//    |//    \|//
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
$ kubectl get configmap 
NAME                         DATA      AGE
k8s-starter-kit-nginx-conf   1         1m
$ kubectl get secret | grep starter-kit
k8s-starter-kit-secret   Opaque                                1         1m
$ kubectl get deploy
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
k8s-starter-kit   1         1         1            1           3m

Now set replicas_count in config.yaml to 3 and run it again in sync mode:

$ k8s-handle deploy -s staging --use-kubeconfig --sync-mode
...
INFO:k8s.resource:Deployment "k8s-starter-kit" already exists, replace it
INFO:k8s.resource:desiredReplicas = 3, updatedReplicas = 3, availableReplicas = 1
INFO:k8s.resource:Deployment not completed on 1 attempt, next attempt in 5 sec.
INFO:k8s.resource:desiredReplicas = 3, updatedReplicas = 3, availableReplicas = 2
INFO:k8s.resource:Deployment not completed on 2 attempt, next attempt in 5 sec.
INFO:k8s.resource:desiredReplicas = 3, updatedReplicas = 3, availableReplicas = 3
INFO:k8s.resource:Deployment completed on 3 attempt
...
$ kubectl get deploy
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
k8s-starter-kit   3         3         3            3           7m

Example

You can start with the example at https://github.com/2gis/k8s-handle-example. It contains nginx with an index.html and all the Kubernetes resources needed to deploy them.

$ cd $WORKDIR
$ git clone https://github.com/2gis/k8s-handle-example.git
$ cd k8s-handle-example
$ k8s-handle deploy -s staging --use-kubeconfig --sync-mode
INFO:__main__:Using default namespace k8s-handle-test
INFO:templating:Trying to generate file from template "configmap.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/configmap.yaml" successfully generated
INFO:templating:Trying to generate file from template "deployment.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/deployment.yaml" successfully generated
INFO:templating:Trying to generate file from template "service.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/service.yaml" successfully generated
INFO:k8s.resource:ConfigMap "example-nginx-conf" does not exist, create it
INFO:k8s.resource:Deployment "example" does not exist, create it
INFO:k8s.resource:desiredReplicas = 1, updatedReplicas = 1, availableReplicas = None
INFO:k8s.resource:Deployment not completed on 1 attempt, next attempt in 5 sec.
INFO:k8s.resource:desiredReplicas = 1, updatedReplicas = 1, availableReplicas = None
INFO:k8s.resource:Deployment not completed on 2 attempt, next attempt in 5 sec.
INFO:k8s.resource:desiredReplicas = 1, updatedReplicas = 1, availableReplicas = 1
INFO:k8s.resource:Deployment completed on 3 attempt
INFO:k8s.resource:Service "example" does not exist, create it
$ kubectl -n k8s-handle-test get svc 
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
example   NodePort   10.100.132.168   <none>        80:31153/TCP   52s
$ curl http://<any node>:31153
<h1>Hello world!</h1>
Deployed with k8s-handle.

Docs

Configuration structure

k8s-handle works with two components:

  • config.yaml (or any other yaml file passed via the -c argument), which stores all configuration for the deploy
  • a templates directory, where you can store all required templates for Kubernetes resource files (the location can be changed via the TEMPLATES_DIR env var); a minimal layout is sketched below
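A minimal sketch of such a project, reusing names from the examples in this README (my-deployment.yaml.j2 is a placeholder):

$ ls -1
config.yaml
templates

config.yaml:

common:
  app_name: my-shiny-app
staging:
  replicas: 2
  templates:
  - template: my-deployment.yaml.j2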

Environments

If you have testing, staging, production-zone-1, production-zone-2, etc., you can easily cover all environments with one set of templates for your application, without duplication.

Common section

In the common section you can specify variables that you want to combine with the variables of the selected section:

common:
    app_name: my-shiny-app
    app_port: 8080

Both of these example variables will be added to the variables of the selected section. The common section is optional and can be omitted.

Any other sections

Let's specify a testing environment:

testing:
    replicas: 1
    request_cpu: 100m 
    request_memory: 128M
    some_option: disabled

In testing we usually don't need much performance from our application, so we can keep one replica and a small amount of resources for it. You can also set some options to a disabled state when you don't want to affect any integrated systems during testing.

staging:
    replicas: 2
    request_cpu: 200m 
    request_memory: 512M

Some teams use staging for integration and demo, so we can increase replicas and resources for our service.

production-zone-1:
    replicas: 50
    request_cpu: 1000m
    request_memory: 1G
    production: "true"
    never_give_up: "true"

In production we need to process n thousand RPS, so we set replicas to 50, increase resources, and set all production variables to ready-for-anything values.

Deploy specific environment

In your CI/CD script you can deploy any environment:

$ k8s-handle deploy -s staging # Or testing or production-zone-1

In GitLab CI, for example, you can create a manual job for each environment, as sketched below.
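A sketch of such manual jobs, assuming the section names used above:

deploy-staging:
  image: 2gis/k8s-handle:latest
  script:
    - k8s-handle deploy -s staging
  when: manual

deploy-production-zone-1:
  image: 2gis/k8s-handle:latest
  script:
    - k8s-handle deploy -s production-zone-1
  when: manual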

Templates

Templates in k8s-handle use Jinja2 syntax and support all standard filters, plus some special ones.

Filters

  • {{ my_var | b64encode }} - encode the value of my_var to base64
  • {{ my_var | b64decode }} - decode the value of my_var from base64
  • {{ my_var | hash_sha256 }} - compute the SHA-256 hash of the value of my_var
  • {{ my_var | to_yaml(flow_style=True, width=99999) }} - render a YAML representation of the given variable (flow_style=True renders in one line, False renders multiline; width is the max line width of rendered YAML lines)

Warning: you can use filters only in templates, not in config.yaml.
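For instance, a Secret template might use b64encode like this (a sketch; app_name and my_secret are assumed to be defined in the selected section of config.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: {{ app_name }}-secret
type: Opaque
data:
  password: {{ my_secret | b64encode }}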

Functions

  • {{ include_file('my_file.txt') }} - include my_file.txt in the resulting resource without parsing it, useful for including configs in a ConfigMap. my_file.txt is searched for in the parent directory of the templates dir (most of the time, the k8s-handle project dir):
$ ls -1
config.yaml
templates
my_file.txt
...
  • {{ list_files('dir/or/glob*') }} - returns a list of files in the specified directory, useful for including all files in a folder in a ConfigMap. The directory path is specified relative to the parent of the templates folder.

Note that both functions support Unix globs; you can import all files matching conf.d/*.conf, for example.
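A sketch of a ConfigMap template using both functions, assuming an nginx.conf file and a conf.d directory exist next to the templates dir, and that list_files returns paths that include_file accepts:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ app_name }}-conf
data:
  nginx.conf: |
    {{ include_file('nginx.conf') | indent(4) }}
{% for file in list_files('conf.d/*.conf') %}
  {{ file.split('/')[-1] }}: |
    {{ include_file(file) | indent(4) }}
{% endfor %}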

You can put *.j2 templates in the 'templates' directory and specify them in config.yaml:

testing:
    replicas: 1
    request_cpu: 100m 
    request_memory: 128M
    some_option: disabled
    templates:
    - template: my-deployment.yaml.j2

The same template can be used in every section you want:

staging:
    ...
    templates:
    - template: my-deployment.yaml.j2
    
production-zone-1:
  ...
  templates:
  - template: my-deployment.yaml.j2

You can use regular expressions (not globs) for template selection in TEMPLATES_DIR:

cluster-1:
  ...
  templates:
  - template: dir-1/.* # All files in TEMPLATES_DIR/dir-1 will be recognized as templates and rendered

Template loader path

k8s-handle uses the Jinja2 template engine and initializes it with the base folder specified in the TEMPLATES_DIR env variable. The Jinja environment treats template paths as relative to this base init directory.

Therefore, users must specify paths in {% include %} (and other) blocks relative to the base (TEMPLATES_DIR) folder, not relative to the importing template's location.

Example

We have the following templates dir content layout:

templates/
    subdirectory/
        template_A.yaml
        template_B.yaml

In this scheme, if template_A contains a Jinja2 include of template_B, that include statement must be

{% include "subdirectory/template_B.yaml" %}

even though the included template lies at the same level as the template where the include is used.

Tags

If you have a large deployment with many separate parts (e.g. a main application and a migration job), you may want to deploy them independently. In this case you have two options:

  • Use multiple isolated sections (like production_app, production_migration, etc.)
  • Use one section and tag your templates. For example:
    production:
      templates:
      - template: my-job.yaml.j2
        tags: migration
      - template: my-configmap.yaml.j2
        tags: ['app', 'config']
      - template: my-deployment.yaml.j2
        tags:
        - app
        - deployment
      - template: my-service.yaml.j2
        tags: "app,service"

Once your templates are tagged, you can use the --tags/--skip-tags keys for a partial deploy. For example, you can delete only the migration job:

k8s-handle destroy --section production --tags migration

The command line keys --tags and --skip-tags can be specified multiple times, for example:

k8s-handle deploy --section production --tags=tag1 --tags=tag2 --tags=tag3
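The inverse also works: --skip-tags processes everything except the matching templates. For example, to deploy the application without touching the migration job:

k8s-handle deploy --section production --skip-tags migration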

Groups

You can make groups for templates. For example:

production:
  templates:
  - group:
    - template: my-configmap.yaml.j2
    - template: my-deployment.yaml.j2
    - template: my-service.yaml.j2
    tags: service-one
  - group:
    - template: my-job.yaml.j2

This is useful for creating different sets of templates for different environments, or for tagging a bunch of templates at once.

Variables

Required parameters

k8s-handle needs several parameters to be set in order to connect to k8s, such as:

  • K8S master uri
  • K8S CA base64
  • K8S token

Each of these parameters can be set in several ways, in any combination, and they are applied in the following order (from highest to lowest precedence):

  1. From the command line via corresponding keys
  2. From the config.yaml section, lowercase, underscore-delimited, e.g. k8s_master_uri
  3. From the environment, uppercase, underscore-delimited, e.g. K8S_MASTER_URI

If the --use-kubeconfig flag is used, these explicitly specified parameters are ignored.

In addition, the K8S namespace parameter must also be specified. k8s-handle uses the namespace specified in the metadata: namespace block of a resource. If it is not present, the default namespace is used, which is evaluated in the following order (from highest to lowest precedence):

  1. From the config.yaml k8s_namespace key
  2. From the kubeconfig current-context field, if --use-kubeconfig flag is used
  3. From the environment K8S_NAMESPACE variable

If the namespace is not specified in the resource spec, and the default namespace is also not specified, this will lead to a provisioning error.

One common way is to specify the connection parameters and/or k8s_namespace in the common section of your config.yaml, but you can do it another way if necessary.

Thus, k8s-handle provides flexible ways to set the required parameters.
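For example, a common section that takes the connection parameters from environment variables (mirroring the GitLab CI setup shown later in this README; my-namespace is a placeholder):

common:
  k8s_master_uri: "{{ env='K8S_MASTER_URI' }}"
  k8s_token: "{{ env='K8S_TOKEN' }}"
  k8s_ca_base64: "{{ env='K8S_CA_BASE64' }}"
  k8s_namespace: my-namespace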

Merging with common

All variables defined in common will be merged with the deployed section and made available as the context dict during template rendering, for example:

common:
  common_var: common_value 
testing:
  testing_variable: testing_value

After the rendering of this template some-file.txt.j2:

common_var = {{ common_var }}
testing_variable = {{ testing_variable }}

the file some-file.txt will be generated with the following content:

common_var = common_value
testing_variable = testing_value

If the variable is declared both in common section and the selected one, the value from the selected section will be chosen.

If a particular variable is a dictionary in both sections (common and the selected one), the resulting variable will contain a merge of the two dictionaries, as sketched below.
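A sketch of this merge with a hypothetical labels variable:

common:
  labels:
    team: my-team
testing:
  labels:
    env: testing

When the testing section is deployed, labels contains both keys, team and env.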

Load variables from environment

If you want to use environment variables in your templates (for a Docker image tag generated by the build, for example), you can use the following construction in config.yaml:

common:
  image_version: "{{ env='TAG' }}"

Load variables from yaml file

common:
  test: "{{ file='include.yaml' }}"

include.yaml:

- 1
- 2 
- 3

template:

{{ test[0] }}
{{ test[1] }}
{{ test[2] }}

After rendering you get:

1
2
3

How to use in CI/CD

Gitlab CI

Native integration

Use the GitLab CI integration with Kubernetes (https://docs.gitlab.com/ee/user/project/clusters/index.html#adding-an-existing-kubernetes-cluster). .gitlab-ci.yaml:

deploy:
  image: 2gis/k8s-handle:latest
  script:
    - k8s-handle deploy --section <section_name> --use-kubeconfig

Through variables

Alternatively, you can set up GitLab CI variables:

  • K8S_TOKEN_STAGING = < serviceaccount token for staging >
  • K8S_TOKEN_PRODUCTION = < serviceaccount token for production >

Don't forget to mark the variables as protected.

Then add the following lines to config.yaml:

staging:
  k8s_master_uri: <kubernetes staging master uri>
  k8s_token: "{{ env='K8S_TOKEN_STAGING' }}"
  k8s_ca_base64: <kubernetes staging ca>
  
production:
  k8s_master_uri: <kubernetes production master uri>
  k8s_token: "{{ env='K8S_TOKEN_PRODUCTION' }}"
  k8s_ca_base64: <kubernetes production ca>

Now just run the proper GitLab job (without the --use-kubeconfig option):

deploy:
  image: 2gis/k8s-handle:latest
  script:
    - k8s-handle deploy --section <section_name>

Working modes

Sync mode

Works only with Deployment, Job, StatefulSet and DaemonSet.

By default, k8s-handle just applies resources to Kubernetes and exits. In sync mode, k8s-handle waits until the resources are up and running:

$ k8s-handle deploy --section staging  --sync-mode
...
INFO:k8s.resource:Deployment "k8s-starter-kit" already exists, replace it
INFO:k8s.resource:desiredReplicas = 3, updatedReplicas = 3, availableReplicas = 1
INFO:k8s.resource:Deployment not completed on 1 attempt, next attempt in 5 sec.
INFO:k8s.resource:desiredReplicas = 3, updatedReplicas = 3, availableReplicas = 2
INFO:k8s.resource:Deployment not completed on 2 attempt, next attempt in 5 sec.
INFO:k8s.resource:desiredReplicas = 3, updatedReplicas = 3, availableReplicas = 3
INFO:k8s.resource:Deployment completed on 3 attempt
...

You can specify the number of tries before k8s-handle exits with a non-zero exit code, and the delay between checks:

--tries <tries> (360 by default)
--retry-delay <retry-delay in seconds> (5 by default)
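For example, to wait up to 10 minutes, checking every 10 seconds:

$ k8s-handle deploy -s staging --sync-mode --tries 60 --retry-delay 10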

Strict mode

In some cases k8s-handle warns you about ambiguous situations and keeps working. In --strict mode, k8s-handle warns and exits with a non-zero code, for example when an environment variable in use is empty.

$ k8s-handle deploy -s staging --use-kubeconfig --strict
ERROR:__main__:RuntimeError: Environment variable "IMAGE_VERSION" is not set
$ echo $?
1

Destroy

In some cases you need to destroy previously created resources (a demo env, deploys from git branches, testing, etc.), so k8s-handle supports the destroy subcommand. Just use destroy instead of deploy. k8s-handle processes destroy the same way as deploy, but issues delete Kubernetes API calls instead of create or replace.

Sync mode is available for destroy as well.
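For example, tearing down the staging resources deployed above:

$ k8s-handle destroy -s staging --use-kubeconfig --sync-mode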

Diff

You can get a diff between the objects in the Kubernetes API and the local working copy of the configuration.

$ k8s-handle diff -s <section> --use-kubeconfig

Secrets are ignored for security reasons.

Operating without config.yaml

For most use cases, the common way to operate k8s-handle is via config.yaml, specifying connection parameters, targets (sections and tags) and variables in one file. The deploy command then first triggers the templating process: filling your spec templates with variables and creating resource spec files. Those files become targets for the provisioner module, which attempts to create the K8S resources.

But in some cases, such as when you intend to use your own templating engine, or need to generate specs beforehand and deploy them separately later, there may be a need to divide the process into separate steps:

  1. Templating
  2. Direct, kubectl apply-like provisioning without config.yaml context.

For this reason, the k8s-handle render, k8s-handle apply and k8s-handle delete commands are implemented.

Render

The render command creates specs from templates without deploying them.

Another purpose is to check the generation of the templates: previously, this functionality was achieved with the optional --dry-run flag. Support for --dry-run in the deploy and destroy commands remains for the sake of backward compatibility, but its further use is discouraged.

Just like with the deploy command, the -s/--section and --tags/--skip-tags targeting options are provided to make it handy to render several specs. Connection parameters don't need to be specified, since no k8s cluster availability checks are performed.

The templates directory path is taken from the TEMPLATES_DIR env variable and is 'templates' by default. Resources generated by this command end up in the directory set in the TEMP_DIR env variable, with the default value '/tmp/k8s-handle'. Users who want to preserve generated templates may need to change this default to avoid losing the generated resources.

TEMP_DIR="/home/custom_dir" k8s-handle render -s staging
2019-02-15 14:44:44 INFO:k8s_handle.templating:Trying to generate file from template "service.yaml.j2" in "/home/custom_dir"
2019-02-15 14:44:44 INFO:k8s_handle.templating:File "/home/custom_dir/service.yaml" successfully generated

Apply

The apply command with the required -r/--resource flag provisions a single resource spec to k8s.

The value of the -r key is treated as an absolute path if it starts with a slash; otherwise, it is treated as a path relative to the directory specified in the TEMP_DIR env variable.

No config.yaml-like file is required (and it is not taken into account even if it exists). The connection parameters can be set via --use-kubeconfig mode, which is available and the most handy, or via CLI/env flags and variables. Options related to output and syncing, like --sync-mode, --tries and --show-logs, are available as well.

$ k8s-handle apply -r /tmp/k8s-handle/service.yaml --use-kubeconfig
2019-02-15 14:22:58 INFO:k8s_handle:Default namespace "test"
2019-02-15 14:22:58 INFO:k8s_handle.k8s.resource:Using namespace "test"
2019-02-15 14:22:58 INFO:k8s_handle.k8s.resource:Service "k8s-handle-example" does not exist, create it

Delete

The delete command with the required -r/--resource flag acts similarly to the destroy command and tries to delete the directly specified resource from k8s, if it exists.

$ k8s-handle delete -r service.yaml --use-kubeconfig

2019-02-15 14:24:06 INFO:k8s_handle:Default namespace "test"
2019-02-15 14:24:06 INFO:k8s_handle.k8s.resource:Using namespace "test"
2019-02-15 14:24:06 INFO:k8s_handle.k8s.resource:Trying to delete Service "k8s-handle-example"
2019-02-15 14:24:06 INFO:k8s_handle.k8s.resource:Service "k8s-handle-example" deleted

Custom resource definitions and custom resources

Since version 0.5.5, k8s-handle supports the CustomResourceDefinition (CRD) and custom resource (CR) kinds. If your deployment involves such kinds, make sure the CRD is deployed before the CR, and check the correctness of the CRD's scope; see the sketch below.
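One way to keep that order, assuming templates are provisioned in the order they are listed, is to put the CRD template before the CR template in the same section (template names borrowed from the issue examples below):

production:
  templates:
  - template: OperatorConfigurationCustomResourceDefinition.yaml.j2
  - template: OperatorConfiguration.yaml.j2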

k8s-handle's People

Contributors

dekhtyarev, dependabot[bot], dmarkey, freakygranny, furiousassault, i-bogomazov, m-yakovenko, nevlkv, othernoscript, rajaravivarma-r, rvadim, seleznev, shreyderina


k8s-handle's Issues

Need support for HorizontalPodAutoscaler

Currently, we get this error:

ERROR:__main__:RuntimeError: Unknown kind "HorizontalPodAutoscaler" in generated file
ERROR: Job failed: exit code 1

Example manifest:

---

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ app_name }}
spec:
  maxReplicas: {{ hpa_max_replicas }}
  minReplicas: {{ hpa_min_replicas }}
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ app_name }}
  targetCPUUtilizationPercentage: 70

GitLab CI/CD and k8s-handle

In GitLab we use the following construction:

staging:
  stage: deploy
  image: rvadim/k8s-handle:latest
  script:
    - export IMAGE_VERSION="master-${CI_COMMIT_SHA:0:8}"
    - k8s-handle deploy --section staging

Since GitLab apparently does not override the existing entrypoint (ENTRYPOINT ["/usr/local/bin/python", "/opt/k8s-handle/k8s-handle.py"]), it tries to execute all commands inside it.

And we get the error:

usage: k8s-handle.py [-h] {deploy,destroy} ...
k8s-handle.py: error: invalid choice: 'sh' (choose from 'deploy', 'destroy')

DoD:

  • When commands (sh/bash) are invoked inside k8s-handle, they run as intended.
  • k8s-handle can be invoked as a separate command (and not necessarily the only one).

k8s-handle fails on serialized deploy (CRD -> Object)

Deploying a CRD-defined object fails even in the correct serialized sequence:

  1. Deploy CRD
  2. Deploy custom object

config.yaml:

  - template: "zalando-postgres-operator/OperatorConfigurationCustomResourceDefinition.yaml.j2"
  - template: "zalando-postgres-operator/OperatorConfiguration.yaml.j2"

Log:

2020-01-09 07:24:52 ERROR:k8s_handle:RuntimeError: No valid plural name of resource definition discovered

Allow omitting the common section in config.yaml

Since connection parameters are no longer required in the common section of config.yaml, it should be possible to omit this section. It's currently still impossible because of context = {key: context[key] for key in ['common', section]}, which raises a KeyError.

To be honest, this should have been done in the previous refactoring, but it was forgotten due to other problems.

Broken build on hub.docker.com

Build on hub.docker.com is broken:

...
Extracting MarkupSafe-1.1.1-py3.6-linux-x86_64.egg to /usr/local/lib/python3.6/site-packages
Adding MarkupSafe 1.1.1 to easy-install.pth file
Installed /usr/local/lib/python3.6/site-packages/MarkupSafe-1.1.1-py3.6-linux-x86_64.egg
error: urllib3 1.25.2 is installed but urllib3<1.25,>=1.21.1 is required by {'requests'}
Removing intermediate container 898aedf7dc43
The command '/bin/sh -c apk --no-cache add git ca-certificates bash openssl gcc libc-dev libffi-dev openssl-dev make && cd /opt/k8s-handle && python setup.py install && apk del gcc libc-dev libffi-dev openssl-dev' returned a non-zero code: 1

It needs to be repaired.

Need to deploy `kind: APIService` and `apiVersion: apiregistration.k8s.io/v1beta1`

Currently, we get an error:

ERROR:__main__:RuntimeError: Unknown kind "APIService" in generated file
ERROR: Job failed: exit code 1

Example for deploy:

---

apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100

Unit tests do not pass without setting env. variables

The project contains unit tests which, IMHO, should ideally be runnable right after cloning the repository, locally, without extra manipulations.

A possible solution: add setting and removing the env variables in the setUp / tearDown methods of the test case classes, leaving only the log level in tox.ini.

Overriding k8s_namespace does not work

In some cases you need to deploy into several namespaces within a single k8s cluster. The k8s_namespace variable is used for this, but at the moment this capability does not work.

Example:

  • config.yml
---

common:
  k8s_master_uri: "{{ env='K8S_MASTER_URI' }}"
  k8s_token: "{{ env='K8S_TOKEN' }}"
  k8s_ca_base64: "{{ env='K8S_CA_BASE64' }}"
  k8s_namespace: test

staging-test:
  kubectl:
  - template: configmap.yaml.j2

staging-hr1:
  k8s_namespace: hr-akxw5owd
  kubectl:
  - template: configmap.yaml.j2

staging-hr2:
  k8s_namespace: hr-eq6bknw0
  kubectl:
  - template: configmap.yaml.j2
  • During deploy it looks like this (the ASCII flowers are omitted):
$ k8s-handle deploy --section staging-test
2018-08-24 01:45:38 INFO:__main__:Using default namespace test
2018-08-24 01:45:38 INFO:templating:Trying to generate file from template "configmap.yaml.j2" in "/tmp/k8s-handle"
2018-08-24 01:45:38 INFO:templating:File "/tmp/k8s-handle/configmap.yaml" successfully generated
2018-08-24 01:45:38 INFO:k8s.resource:ConfigMap "k8s-handle-test-conf" does not exist, create it

$ k8s-handle deploy --section staging-hr1
2018-08-24 01:45:39 INFO:__main__:Using default namespace hr-akxw5owd
2018-08-24 01:45:39 INFO:templating:Trying to generate file from template "configmap.yaml.j2" in "/tmp/k8s-handle"
2018-08-24 01:45:39 INFO:templating:File "/tmp/k8s-handle/configmap.yaml" successfully generated
2018-08-24 01:45:39 INFO:k8s.resource:ConfigMap "k8s-handle-test-conf" already exists, replace it

$ k8s-handle deploy --section staging-hr2
2018-08-24 01:45:40 INFO:__main__:Using default namespace hr-eq6bknw0
2018-08-24 01:45:40 INFO:templating:Trying to generate file from template "configmap.yaml.j2" in "/tmp/k8s-handle"
2018-08-24 01:45:40 INFO:templating:File "/tmp/k8s-handle/configmap.yaml" successfully generated
2018-08-24 01:45:40 INFO:k8s.resource:ConfigMap "k8s-handle-test-conf" already exists, replace it

The final check shows that the configmap was created in only one of the namespaces:

$ kubectl -n test get cm | grep k8s-handle-test-conf | wc -l
1
$ kubectl -n hr-akxw5owd get cm | grep k8s-handle-test-conf | wc -l
0
$ kubectl -n hr-eq6bknw0 get cm | grep k8s-handle-test-conf | wc -l
0

DoD:

  • With the specified config.yml, the configmap is created in all 3 namespaces.

Accept connection parameters from CLI/env

We have to provide the possibility to specify the parameters that are treated as required in config.yaml from the command line and from environment variables, with the priority:

  1. CLI
  2. config.yaml final context (after the section-common merge)
  3. Env

and throw an error only if one or more of the required parameters is eventually missing.

Add support for variables that contain dashes

Currently, if a variable (or one of the child properties of a variable) in config.yaml contains a dash symbol, the deployment automatically fails.
It would be very useful to at least allow child properties to contain dashes, which are natively supported by the yaml format.

Use case: I'm building a template that would have different variable values based on region name, and I would have to remove the dash symbol from the region variable that comes from the deployment pipeline.

Error example:

ERROR:k8s_handle:RuntimeError: Variable names should never include dashes, check your vars, please: us-central1, us-east1

Update of PodDisruptionBudget causes an exception

First deploy works ok:

$ k8s-handle deploy --section staging-pdb
2018-08-24 02:07:16 INFO:__main__:Using default namespace test
2018-08-24 02:07:16 INFO:templating:Trying to generate file from template "poddisruptionbudget.yaml.j2" in "/tmp/k8s-handle"
2018-08-24 02:07:16 INFO:templating:File "/tmp/k8s-handle/poddisruptionbudget.yaml" successfully generated
2018-08-24 02:07:16 INFO:k8s.resource:PodDisruptionBudget "k8s-starter-kit-dekhtyarev" does not exist, create it
2018-08-24 02:07:16 ERROR:k8s.resource:Invalid value for `disrupted_pods`, must not be `None`

Redeploying without changes causes an exception:

$ k8s-handle deploy --section staging-pdb
2018-08-24 02:10:08 INFO:__main__:Using default namespace test
2018-08-24 02:10:08 INFO:templating:Trying to generate file from template "poddisruptionbudget.yaml.j2" in "/tmp/k8s-handle"
2018-08-24 02:10:08 INFO:templating:File "/tmp/k8s-handle/poddisruptionbudget.yaml" successfully generated
Traceback (most recent call last):
  File "/opt/k8s-handle/k8s-handle", line 117, in <module>
    main()
  File "/opt/k8s-handle/k8s-handle", line 90, in main
    p.run(resource)
  File "/opt/k8s-handle/k8s/resource.py", line 156, in run
    self._deploy(file_path)
  File "/opt/k8s-handle/k8s/resource.py", line 195, in _deploy
    if kube_client.get() is None:
  File "/opt/k8s-handle/k8s/resource.py", line 403, in get
    self.name, namespace=self.namespace)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/policy_v1beta1_api.py", line 1574, in read_namespaced_pod_disruption_budget
    (data) = self.read_namespaced_pod_disruption_budget_with_http_info(name, namespace, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/policy_v1beta1_api.py", line 1665, in read_namespaced_pod_disruption_budget_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 321, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 163, in __call_api
    return_data = self.deserialize(response_data, response_type)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 236, in deserialize
    return self.__deserialize(data, response_type)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 276, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 620, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 276, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 622, in __deserialize_model
    instance = klass(**kwargs)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/models/v1beta1_pod_disruption_budget_status.py", line 66, in __init__
    self.disrupted_pods = disrupted_pods
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/models/v1beta1_pod_disruption_budget_status.py", line 143, in disrupted_pods
    raise ValueError("Invalid value for `disrupted_pods`, must not be `None`")
ValueError: Invalid value for `disrupted_pods`, must not be `None`
ERROR: Job failed: exit code 1

yaml of PodDisruptionBudget:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  creationTimestamp: 2018-08-24T02:07:16Z
  generation: 1
  name: test-k8s-handle
  namespace: test
  resourceVersion: "257484967"
  selfLink: /apis/policy/v1beta1/namespaces/test/poddisruptionbudgets/test-k8s-handle
  uid: 6ca7f4ac-a742-11e8-bee7-fa163e24fbac
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: test-k8s-handle
status:
  currentHealthy: 1
  desiredHealthy: 1
  disruptedPods: null
  disruptionsAllowed: 0
  expectedPods: 1
  observedGeneration: 1

If an HPA exists, check that replicas are greater than or equal

The simple HPA implementation has one bug.

For example, the deploy sets 4 replicas, but during the deploy process the HPA sets 6 replicas:

2018-07-24 04:45:38 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:45:38 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 45 attempt, next attempt in 5 sec.
2018-07-24 04:45:43 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:45:43 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 46 attempt, next attempt in 5 sec.
2018-07-24 04:45:48 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:45:48 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 47 attempt, next attempt in 5 sec.
2018-07-24 04:45:53 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:45:53 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 48 attempt, next attempt in 5 sec.
2018-07-24 04:45:58 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:45:58 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 49 attempt, next attempt in 5 sec.
2018-07-24 04:46:03 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:46:03 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 50 attempt, next attempt in 5 sec.
2018-07-24 04:46:08 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:46:08 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 51 attempt, next attempt in 5 sec.
2018-07-24 04:46:14 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:46:14 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 52 attempt, next attempt in 5 sec.
2018-07-24 04:46:19 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:46:19 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 53 attempt, next attempt in 5 sec.
2018-07-24 04:46:24 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:46:24 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 54 attempt, next attempt in 5 sec.
2018-07-24 04:46:29 e7f64b77bac5 processor_k8s[1] INFO desiredReplicas = 4, updatedReplicas = None, availableReplicas = 6
2018-07-24 04:46:29 e7f64b77bac5 processor_k8s[1] INFO Deployment not completed on 55 attempt, next attempt in 5 sec.

Add custom resources support

After #85 is done, we will want to deploy our custom resources too.

Example (link):

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: example-com
spec:
  secretName: example-com-tls
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  commonName: example.com
  dnsNames:
  - example.com
  - www.example.com
  issuerRef:
    name: ca-issuer
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: Issuer

k8s-handle diff support

Hello,
How hard would it be to add kubectl diff functionality right into k8s-handle?
It is handy to have a check for a Merge Request that actually shows what would change in the cluster after all templates are rendered, etc.
Right now we are using something like:

k8s-handle render -s section

kubectl config set-credentials user --token="$K8S_TOKEN"
kubectl config set-cluster cluster --server=https://k8s --insecure-skip-tls-verify=true
kubectl config set-context context --cluster=cluster --user=user
kubectl config use-context context

kubectl diff -n namespace -R -f /tmp/k8s-handle

While running steps in Docker, this requires a container with both k8s-handle and kubectl - which complicates stuff.
An advantage of having diff in Python is that its format would be easier to customize (drop unnecessary fields, etc.).

Another command where knowledge about the objects in the cluster could be useful is deploy.
For new users, all these logs look scary:
INFO:k8s_handle.k8s.provisioner:Deployment "prometheus" already exists, replace it
Why replace the object if it hasn't changed? For example, kubectl apply shows whether the object was actually changed or not. It would be great to show a diff on deploy too, or to skip objects with zero diff.

What do you think?

PVC replace is attempted and fails

2019-01-14 02:31:09 INFO:k8s.resource:PersistentVolumeClaim "k8s-starter-kit-with-pvc" already exists, replace it
2019-01-14 02:31:09 ERROR:k8s.resource:Exception when calling "replace_namespaced_persistent_volume_claim": {
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "PersistentVolumeClaim \"k8s-starter-kit-with-pvc\" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims",
    "reason": "Invalid",
    "details": {
        "name": "k8s-starter-kit-with-pvc",
        "kind": "PersistentVolumeClaim",
        "causes": [
            {
                "reason": "FieldValueForbidden",
                "message": "Forbidden: is immutable after creation except resources.requests for bound claims",
                "field": "spec"
            }
        ]
    },
    "code": 422
}

But the PVC had not been changed.

The problem is the lack of an exit from https://github.com/2gis/k8s-handle/blob/master/k8s/resource.py#L216

add key `--show-logs`

Log output

Available only for Job (more precisely, for non-parallel Jobs).

After the pod created by the job finishes, its logs are printed to the screen.
A pod is considered finished when it transitions from the Running state to Succeeded/Failed/Unknown.
Runtime (streaming) mode is temporarily unavailable due to a bug: kubernetes-client/python#199

k8s-handle deploy --section <section_name> --config <config-name> --show-logs

If you need to, specify the number of lines:

--tail-lines <tail-lines> (by default, all logs since pod creation are printed)

P.S. Parallel Jobs can be left unimplemented for now; there have been no requests for them yet.

--show-logs is broken

k8s-handle deploy --section XXXXX --sync-mode --show-logs
...
2018-10-29 09:17:23 INFO:k8s.resource:Job not completed on 34 attempt, next attempt in 5 sec.
2018-10-29 09:17:28 INFO:k8s.resource:Job not completed on 35 attempt, next attempt in 5 sec.
2018-10-29 09:17:33 INFO:k8s.resource:Job completed on 36 attempt

                         _(_)_                          wWWWw   _
             @@@@       (_)@(_)   vVVVv     _     @@@@  (___) _(_)_
            @@()@@ wWWWw  (_)\    (___)   _(_)_  @@()@@   Y  (_)@(_)
             @@@@  (___)     `|/    Y    (_)@(_)  @@@@   \|/   (_)
              /      Y       \|    \|/    /(_)    \|      |/      |
           \ |     \ |/       | / \ | /  \|/       |/    \|      \|/
            \|//    \|///    \|//  \|/// \|///    \|//    |//    \|//
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

But no log output followed.

Meanwhile, the logs in the container looked like this:

...
2018/10/29 09:17:05 output_http:19,0,19,238,47,43
2018/10/29 09:17:10 output_http:0,0,20,220,44,43
2018/10/29 09:17:15 output_http:25,0,25,280,56,43
2018/10/29 09:17:20 output_http:5,0,14,170,34,43
...

@furiousassault says he knows what to do.

It is not possible to concatenate several environment variables into one value

Hi.
I ran into the impossibility of using several environment variables at once when declaring a variable in config.yaml. A small example:
When rolling out a service we sometimes create more than one deployment; we use both private and public registries, and we have several private registries. Also, on each deploy an image is selected by its git tag. To avoid creating a multitude of deployment templates, we made one universal template that adds all the required images in a loop. That's why all images are declared in config.yaml:

common:
  deployments:
    service_name:
      containers:
        app:
          image: "{{ env='CI_REGISTRY' }}/service-image:{{ env='TAG' }}"
        nginx:
          image: "{{ env='CI_REGISTRY' }}/custom-nginx:configmap"

With this notation the nginx image is fine, but for the app container the generated manifest contains "{{ env='CI_REGISTRY' }}/service-image:1.0.0". I looked into the code and realized that with the current implementation it is impossible to build the image name we need from several environment variables.

Add a timestamp to k8s-handle logs

Right now the logs look like this:

INFO:__main__:Using default namespace io
INFO:templating:Trying to generate file from template "configmap.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/configmap.yaml" successfully generated
INFO:templating:Trying to generate file from template "secret.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/secret.yaml" successfully generated
INFO:templating:Trying to generate file from template "deployment.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/deployment.yaml" successfully generated
INFO:templating:Trying to generate file from template "service.yaml.j2" in "/tmp/k8s-handle"
INFO:templating:File "/tmp/k8s-handle/service.yaml" successfully generated
INFO:k8s.resource:ConfigMap "k8s-starter-kit-nginx-conf" already exists, replace it
INFO:k8s.resource:Secret "k8s-starter-kit-secret" already exists, replace it
INFO:k8s.resource:Deployment "k8s-starter-kit" already exists, replace it
INFO:k8s.resource:Service "k8s-starter-kit" already exists, replace it

It would be great to add a timestamp before the log level, so that it looks something like this:

2018-08-07 04:22:15 INFO:__main__:Using default namespace io
2018-08-07 04:22:15 INFO:templating:Trying to generate file from template "configmap.yaml.j2" in "/tmp/k8s-handle"
2018-08-07 04:22:15 INFO:templating:File "/tmp/k8s-handle/configmap.yaml" successfully generated
2018-08-07 04:22:15 INFO:templating:Trying to generate file from template "secret.yaml.j2" in "/tmp/k8s-handle"
2018-08-07 04:22:15 INFO:templating:File "/tmp/k8s-handle/secret.yaml" successfully generated
2018-08-07 04:22:15 INFO:templating:Trying to generate file from template "deployment.yaml.j2" in "/tmp/k8s-handle"
2018-08-07 04:22:16 INFO:templating:File "/tmp/k8s-handle/deployment.yaml" successfully generated
2018-08-07 04:22:16 INFO:templating:Trying to generate file from template "service.yaml.j2" in "/tmp/k8s-handle"
2018-08-07 04:22:16 INFO:templating:File "/tmp/k8s-handle/service.yaml" successfully generated
2018-08-07 04:22:16 INFO:k8s.resource:ConfigMap "k8s-starter-kit-nginx-conf" already exists, replace it
2018-08-07 04:22:16 INFO:k8s.resource:Secret "k8s-starter-kit-secret" already exists, replace it
2018-08-07 04:22:16 INFO:k8s.resource:Deployment "k8s-starter-kit" already exists, replace it
2018-08-07 04:22:16 INFO:k8s.resource:Service "k8s-starter-kit" already exists, replace it

Thanks.

DoD:

  • The logs display the timestamp of each step.

Wrong error message while processing template with non-existing include.

Example:
We have template "X" that is specified in config.yaml. That template has jinja include `{% include Y.yaml %} in its body. The specified include does not exist.
We try to render this template and see error "Template 'X' not found", which is confusing and misleading: template X is present, the message should be "Template Y hasn't been found while processing X".

The second thing: the Jinja environment treats all paths (includes too) as specified relative to its base init directory. In other words, users must specify paths in {% include %} blocks relative to the base (TEMPLATES_DIR) folder, even if the included template lies at the same level as the template where the include is used. I do think this should be reflected in the README, even though it's Jinja2 behaviour.

skip-tags does not work

Hello,
it seems --skip-tags does not work. Steps to reproduce:

git clone git@github.com:2gis/k8s-handle-example.git

Edit config.yaml and add a tag:

staging:
  templates:
  - template: configmap.yaml.j2
  - template: deployment.yaml.j2
  - template: service.yaml.j2
    tags: manual

Render is empty:

$ docker run --rm -v `pwd`:/tmp -w /tmp 2gis/k8s-handle k8s-handle render -s staging --skip-tags manual

                         _(_)_                          wWWWw   _
             @@@@       (_)@(_)   vVVVv     _     @@@@  (___) _(_)_
            @@()@@ wWWWw  (_)\    (___)   _(_)_  @@()@@   Y  (_)@(_)
             @@@@  (___)     `|/    Y    (_)@(_)  @@@@   \|/   (_)
              /      Y       \|    \|/    /(_)    \|      |/      |
           \ |     \ |/       | / \ | /  \|/       |/    \|      \|/
            \|//    \|///    \|//  \|/// \|///    \|//    |//    \|//
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Separate templating and deploying processes

At the current moment, the k8s-handle deploy command consists of two phases:

  1. Templating. In this phase, Jinja templates are combined with config.yaml to produce k8s config files.
  2. Actual deploy. In this phase, k8s-handle uses the generated k8s config files and starts the deploy process.

I think it would be good to have an option to run these phases separately. The processes look pretty independent of each other, and optional templating would make it possible to use other templating techniques (not everybody is happy with the jinja + config.yaml approach) while preserving the convenience of the simple k8s-handle deploy logic.

Actually, I can already run the first phase without the actual deploy using the --dry-run flag. But that looks like a side effect, not an intention.

As far as I understand, separating the actual deployment needs some extra work. Here are some problems I can see:

  1. k8s-handle requires a config.yaml file with the common section. If the actual deploy were separated from the templating, k8s-handle should be able to work without config.yaml at all. Honestly, requiring the common section feels very synthetic to me, even apart from the current task.
  2. Deploy requires some special keys in config.yaml: k8s_namespace, k8s_master_uri, k8s_token and k8s_ca_base64. k8s-handle should be able to get these variables from CLI parameters.

I can suggest two ways of implementation:

  1. Add two new commands: template and deploy-only (bad names, just for example). These commands will have different parameters, and the current deploy command will combine both. This approach looks cleaner to me, and I think strategically it is the more reliable plan. But the --dry-run flag should be deprecated in that case, because it would duplicate the template command.
  2. Add more flags to the current deploy command; something like --no-templating could work. But I'm afraid of the increasing complexity of flag combinations.

bug: ERROR:k8s.resource:Exception when calling "delete_namespaced_service"

The destroy command fails when trying to delete an existing k8s service.

config.yaml

$ cat config.yaml
---
common:
  k8s_namespace: io

  app_name: k8s-handle-example
  app_port: 80

staging:
  templates:
  - template: service.yaml.j2

service.yaml.j2

$ cat templates/service.yaml.j2
apiVersion: v1
kind: Service
metadata:
  name: {{ app_name }}
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: {{ app_port }}
  selector:
    app: {{ app_name }}

We verified that the service does not currently exist:

$ kubectl get service k8s-handle-example
Error from server (NotFound): services "k8s-handle-example" not found

We deployed the service:

$ docker run --rm -v $(pwd):/tmp -v "$HOME/.kube:/root/.kube" 2gis/k8s-handle k8s-handle deploy -s staging --use-kubeconfig
2018-09-26 17:06:20 INFO:__main__:Using default namespace io
2018-09-26 17:06:20 INFO:templating:Trying to generate file from template "service.yaml.j2" in "/tmp/k8s-handle"
2018-09-26 17:06:20 INFO:templating:File "/tmp/k8s-handle/service.yaml" successfully generated
2018-09-26 17:06:20 INFO:k8s.resource:Service "k8s-handle-example" does not exist, create it

                         _(_)_                          wWWWw   _
             @@@@       (_)@(_)   vVVVv     _     @@@@  (___) _(_)_
            @@()@@ wWWWw  (_)\    (___)   _(_)_  @@()@@   Y  (_)@(_)
             @@@@  (___)     `|/    Y    (_)@(_)  @@@@   \|/   (_)
              /      Y       \|    \|/    /(_)    \|      |/      |
           \ |     \ |/       | / \ | /  \|/       |/    \|      \|/
            \|//    \|///    \|//  \|/// \|///    \|//    |//    \|//
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We verified that it appeared:

$ kubectl get service k8s-handle-example
NAME                 TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
k8s-handle-example   NodePort   10.100.164.187   <none>        80:32473/TCP   23s

Now we try to clean up after ourselves:

$ docker run --rm -v $(pwd):/tmp -v "$HOME/.kube:/root/.kube" 2gis/k8s-handle k8s-handle destroy -s staging --use-kubeconfig
2018-09-26 17:07:21 INFO:__main__:Using default namespace io
2018-09-26 17:07:21 INFO:templating:Trying to generate file from template "service.yaml.j2" in "/tmp/k8s-handle"
2018-09-26 17:07:21 INFO:templating:File "/tmp/k8s-handle/service.yaml" successfully generated
2018-09-26 17:07:21 INFO:k8s.resource:Trying to delete Service "k8s-handle-example"
2018-09-26 17:07:21 ERROR:k8s.resource:Exception when calling "delete_namespaced_service": {
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": " \"\" is invalid: []: Invalid value: v1.DeleteOptions{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, GracePeriodSeconds:(*int64)(nil), Preconditions:(*v1.Preconditions)(nil), OrphanDependents:(*bool)(nil), PropagationPolicy:(*v1.DeletionPropagation)(0xc435738660)}: DeletionPropagation need to be one of \"Foreground\", \"Background\", \"Orphan\" or nil",
    "reason": "Invalid",
    "details": {
        "causes": [
            {
                "reason": "FieldValueInvalid",
                "message": "Invalid value: v1.DeleteOptions{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, GracePeriodSeconds:(*int64)(nil), Preconditions:(*v1.Preconditions)(nil), OrphanDependents:(*bool)(nil), PropagationPolicy:(*v1.DeletionPropagation)(0xc435738660)}: DeletionPropagation need to be one of \"Foreground\", \"Background\", \"Orphan\" or nil",
                "field": "[]"
            }
        ]
    },
    "code": 422
}

Add CustomResourceDefinition kind support

k8s-handle doesn't support CustomResourceDefinition yet:

2019-02-13 04:05:19 ERROR:k8s_handle:RuntimeError: Unknown kind "CustomResourceDefinition" in generated file

API group:

$ kubectl api-resources --api-group=apiextensions.k8s.io
NAME                        SHORTNAMES   APIGROUP               NAMESPACED   KIND
customresourcedefinitions   crd,crds     apiextensions.k8s.io   false        CustomResourceDefinition

Example (link):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.certmanager.k8s.io
  labels:
    app: cert-manager
spec:
  additionalPrinterColumns:
  - JSONPath: .status.conditions[?(@.type=="Ready")].status
    name: Ready
    type: string
  - JSONPath: .spec.secretName
    name: Secret
    type: string
  - JSONPath: .spec.issuerRef.name
    name: Issuer
    type: string
    priority: 1
  - JSONPath: .status.conditions[?(@.type=="Ready")].message
    name: Status
    type: string
    priority: 1
  - JSONPath: .metadata.creationTimestamp
    description: |-
      CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.
      Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    name: Age
    type: date
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Certificate
    plural: certificates
    shortNames:
    - cert
    - certs
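
If support were added, the adapter would likely end up calling the apiextensions API of the kubernetes python client. A hedged sketch (assuming client 7.0+; the file name holding the manifest above is hypothetical):

import yaml
from kubernetes import client, config

config.load_kube_config()
api = client.ApiextensionsV1beta1Api()

with open("certificates-crd.yaml") as f:  # hypothetical file with the manifest above
    body = yaml.safe_load(f)

# CRDs are cluster-scoped, so there is no namespace argument
api.create_custom_resource_definition(body=body)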

--sync-mode=True/true is not covered by backward compatibility measures

When =true is used instead of a bare true, the previously adopted compatibility measures have no effect.
Without requiring modifications on the users' side, extra env vars and the like, the only option that comes to mind is to walk through sys.argv manually and filter the value out, printing a deprecation warning.
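
A minimal sketch of that sys.argv filtering (hypothetical code, not the project's actual implementation):

import sys
import warnings

def strip_sync_mode_value(argv):
    """Rewrite --sync-mode=true / --sync-mode=True into a bare --sync-mode flag."""
    filtered = []
    for arg in argv:
        if arg.lower() == "--sync-mode=true":
            warnings.warn(
                "passing a value to --sync-mode is deprecated, use the bare flag",
                DeprecationWarning,
            )
            filtered.append("--sync-mode")
        else:
            filtered.append(arg)
    return filtered

sys.argv = strip_sync_mode_value(sys.argv)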

apiextensions.k8s.io/v1 not supported in k8s/adapters

I have a problem deploying a CRD object with k8s-handle.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
...
2020-10-21 07:16:46 INFO:k8s_handle.templating:Trying to generate file from template "topolvm/node/crd.yaml.j2" in "/tmp/k8s-handle"
...
2020-10-21 07:17:08 ERROR:k8s_handle:RuntimeError: Unknown apiVersion "apiextensions.k8s.io/v1" in template "/tmp/k8s-handle/topolvm/node/crd.yaml"

kubernetes-client/python#1172: apiextensions.k8s.io/v1 became available with the 12.0.0b1 release.

Unwanted custom resource object merging occurs on redeploy

Steps to reproduce:

  1. Deploy some CRD into the cluster: kubectl apply -f https://github.com/zalando/postgres-operator/blob/master/manifests/operatorconfiguration.crd.yaml.
  2. Deploy some CRD-defined object with k8s-handle: operatorconfigurations.acid.zalan.do with .configuration.kubernetes.custom_pod_annotations: { "keya": "valuea" } defined.
  3. Redeploy the same object with the section changed to .configuration.kubernetes.custom_pod_annotations: { "keyb": "valueb" }.

As a result, the old and new custom_pod_annotations are merged, even though 2020-01-09 06:15:39 INFO:k8s_handle.k8s.provisioner:OperatorConfiguration "zalando-postgres-operator" already exists, replace it is logged (replace != merge).

kubectl -n kube-system get operatorconfigurations.acid.zalan.do zalando-postgres-operator -o jsonpath={.configuration.kubernetes.custom_pod_annotations}
map[keya:valuea keyb:valueb]
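
The kubernetes python client draws exactly this distinction: patch_namespaced_custom_object merges, replace_namespaced_custom_object replaces the whole object. A sketch of the replace call the log message promises (the body below is an illustrative fragment, not a complete OperatorConfiguration):

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

new_object = {
    "apiVersion": "acid.zalan.do/v1",
    "kind": "OperatorConfiguration",
    "metadata": {"name": "zalando-postgres-operator"},
    "configuration": {
        "kubernetes": {"custom_pod_annotations": {"keyb": "valueb"}},
    },
}

# replace (unlike patch) sends the full desired object, so old annotation
# keys such as "keya" would not survive the redeploy; in practice the
# current metadata.resourceVersion must also be included in the body
api.replace_namespaced_custom_object(
    group="acid.zalan.do",
    version="v1",
    namespace="kube-system",
    plural="operatorconfigurations",
    name="zalando-postgres-operator",
    body=new_object,
)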

Create pypi-package

For local debugging, installing/removing via pip is very useful.

But currently there is no setup.py in the project:

┖─── ♨  pip install git+https://github.com/2gis/k8s-handle.git
Collecting git+https://github.com/2gis/k8s-handle.git
  Cloning https://github.com/2gis/k8s-handle.git to /private/var/folders/55/s__vgbx920lg1lmp7c2fbjg00000gn/T/pip-req-build-0T5Ma8
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
    IOError: [Errno 2] No such file or directory: '/private/var/folders/55/s__vgbx920lg1lmp7c2fbjg00000gn/T/pip-req-build-0T5Ma8/setup.py'

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/55/s__vgbx920lg1lmp7c2fbjg00000gn/T/pip-req-build-0T5Ma8/

Deprecation checker blocks deploy on Kubernetes 1.9

I tried to deploy into Kubernetes 1.9 cluster and get following error:

$ k8s-handle deploy --use-kubeconfig --section deploy --sync-mode
2019-07-25 11:19:13 INFO:k8s_handle.templating:Trying to generate file from template "configmap.yaml.j2" in "/tmp/k8s-handle"
2019-07-25 11:19:13 INFO:k8s_handle.templating:File "/tmp/k8s-handle/configmap.yaml" successfully generated
2019-07-25 11:19:13 INFO:k8s_handle.templating:Trying to generate file from template "deployment.yaml.j2" in "/tmp/k8s-handle"
2019-07-25 11:19:13 INFO:k8s_handle.templating:File "/tmp/k8s-handle/deployment.yaml" successfully generated
2019-07-25 11:19:13 INFO:k8s_handle.templating:Trying to generate file from template "service.yaml.j2" in "/tmp/k8s-handle"
2019-07-25 11:19:13 INFO:k8s_handle.templating:File "/tmp/k8s-handle/service.yaml" successfully generated
2019-07-25 11:19:14 INFO:k8s_handle:Default namespace "io"
/usr/local/lib/python3.6/site-packages/urllib3-1.25.3-py3.6.egg/urllib3/connectionpool.py:851: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
  InsecureRequestWarning)
Traceback (most recent call last):
  File "/usr/local/bin/k8s-handle", line 11, in <module>
    load_entry_point('k8s-handle==0.0.0', 'console_scripts', 'k8s-handle')()
  File "/usr/local/lib/python3.6/site-packages/k8s_handle-0.0.0-py3.6.egg/k8s_handle/__init__.py", line 219, in main
    args.func(args_dict)
  File "/usr/local/lib/python3.6/site-packages/k8s_handle-0.0.0-py3.6.egg/k8s_handle/__init__.py", line 27, in handler_deploy
    _handler_deploy_destroy(args, COMMAND_DEPLOY)
  File "/usr/local/lib/python3.6/site-packages/k8s_handle-0.0.0-py3.6.egg/k8s_handle/__init__.py", line 68, in _handler_deploy_destroy
    args.get('show_logs')
  File "/usr/local/lib/python3.6/site-packages/k8s_handle-0.0.0-py3.6.egg/k8s_handle/__init__.py", line 107, in _handler_provision
    d = ApiDeprecationChecker(client.VersionApi().get_code().git_version[1:])
  File "/usr/local/lib/python3.6/site-packages/kubernetes-9.0.0-py3.6.egg/kubernetes/client/apis/version_api.py", line 55, in get_code
    (data) = self.get_code_with_http_info(**kwargs)
  File "/usr/local/lib/python3.6/site-packages/kubernetes-9.0.0-py3.6.egg/kubernetes/client/apis/version_api.py", line 124, in get_code_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python3.6/site-packages/kubernetes-9.0.0-py3.6.egg/kubernetes/client/api_client.py", line 334, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/local/lib/python3.6/site-packages/kubernetes-9.0.0-py3.6.egg/kubernetes/client/api_client.py", line 168, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python3.6/site-packages/kubernetes-9.0.0-py3.6.egg/kubernetes/client/api_client.py", line 355, in request
    headers=headers)
  File "/usr/local/lib/python3.6/site-packages/kubernetes-9.0.0-py3.6.egg/kubernetes/client/rest.py", line 231, in GET
    query_params=query_params)
  File "/usr/local/lib/python3.6/site-packages/kubernetes-9.0.0-py3.6.egg/kubernetes/client/rest.py", line 222, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Thu, 25 Jul 2019 11:19:14 GMT', 'Content-Length': '186'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"2gis-test\" cannot get path \"/version/\"","reason":"Forbidden","details":{},"code":403}

Looks like it should be a warning, not a fatal error. :(
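
A hedged sketch of the softer behaviour (names approximate the traceback above; this is not the project's actual code): catch the 403 from the /version call and log a warning instead of aborting the deploy.

import logging

from kubernetes import client
from kubernetes.client.rest import ApiException

log = logging.getLogger(__name__)

def cluster_version_or_none():
    try:
        return client.VersionApi().get_code().git_version[1:]
    except ApiException as e:
        log.warning('Unable to read cluster version (%s), skipping deprecation check', e.reason)
        return None

# _handler_provision would then only run ApiDeprecationChecker
# when cluster_version_or_none() returned a version string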

Add info to README about including ENV vars in a deployment from another file

  • Using a YAML include with a variable amount of indentation:
...
    resources:
      requests:
        cpu: {{ requests_cpu }}
        memory: {{ requests_memory }}
      limits:
        cpu: {{ limits_cpu }}
        memory: {{ limits_memory }}
    env:{% macro incvars() %}{% include "env.yaml.j2" with context %}{% endmacro %}
    {{ incvars()|indent(8) }}

README needs clean up

As pointed out in #108, there are several mistakes and inconsistencies in the project's README. It would be better to reformat and clean this file up.

Multiple Kubernetes resources per template file

k8s-handle expects only one YAML document per template file and throws an exception if it finds more:

2018-10-31 05:33:28 ERROR:__main__:RuntimeError: Unable to load yaml file: /tmp/k8s-handle/namespace.yaml, expected a single document in the stream
  in "<unicode string>", line 4, column 1:
    kind: Namespace
    ^
but found another document
  in "<unicode string>", line 7, column 1:
    ---
    ^

Multiple resources per template would be useful when you want to create many similar resources such as namespaces, resource quotas, or role bindings.

For example:

  • config.yaml:
common:
  k8s_master_uri: "{{ env='K8S_MASTER_URI' }}"
  k8s_token: "{{ env='K8S_TOKEN' }}"
  k8s_ca_base64: "{{ env='K8S_CA_BASE64' }}"

testing:
  kubectl:
  - template: namespaces.yaml.j2

  namespaces:
  - first
  - second
  - third
  • templates/namespaces.yaml.j2:
{% for namespace in namespaces %}
---

apiVersion: v1
kind: Namespace
metadata:
  name: {{ namespace }}
{% endfor %}
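
A minimal sketch of what multi-document support could build on: with PyYAML, yaml.safe_load_all() yields every document in the stream instead of raising on the second one (the generated file path is assumed from the example above):

import yaml

with open("/tmp/k8s-handle/namespaces.yaml") as f:
    for document in yaml.safe_load_all(f):
        if document is None:  # stray "---" produces empty documents
            continue
        print(document["kind"], document["metadata"]["name"])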

Add --tags / --skip-tags options for partial deploy

What is wanted:

  • Extend the template list in config.yaml with tags. Example:

      kubectl:
      - template: configmap.yaml.j2
      - template: secret.yaml.j2
      - template: job.yaml.j2
        tags:
        - tag1
        - tag2
    
  • Add command-line options for selective use of templates (for example, --tags and --skip-tags, similar to Ansible); see the sketch after this list.

Such a capability would be useful in the following cases:

  • There is a Job with migrations that needs to be destroyed before every redeploy.
    Currently it has to be moved into a separate section, although it could work something like this:
    $ k8s-handle destroy --section ${CI_ENVIRONMENT_NAME} --tags=migrate --sync-mode
    $ k8s-handle deploy --section ${CI_ENVIRONMENT_NAME} --sync-mode
    
  • The application needs to be deployed in parts (in separate CI jobs). Currently this can only be done by splitting it into separate sections.
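
A hypothetical sketch of the requested filtering over the template list from config.yaml (function and option names are assumptions, not an existing API):

def filter_templates(templates, tags=None, skip_tags=None):
    tags = set(tags or [])
    skip_tags = set(skip_tags or [])
    selected = []
    for entry in templates:
        entry_tags = set(entry.get("tags", []))
        if tags and not entry_tags & tags:
            continue  # --tags given, but this template carries none of them
        if entry_tags & skip_tags:
            continue  # --skip-tags matched
        selected.append(entry)
    return selected

templates = [
    {"template": "configmap.yaml.j2"},
    {"template": "secret.yaml.j2"},
    {"template": "job.yaml.j2", "tags": ["tag1", "tag2"]},
]
print(filter_templates(templates, tags=["tag1"]))  # only job.yaml.j2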

Need optional "true" value for the `--sync-mode` key

The original k8s-handle interface implied a true value passed to the --sync-mode argument. Current --sync-mode invocations use this value:

$ grep -ir sync-mode ./* | awk -F ":" {'print $2'}
    - k8s-handle deploy --section ${JOB_NAME[1]} --sync-mode true
    - k8s-handle deploy --section ${JOB_NAME[1]} --sync-mode true
    - k8s-handle destroy --section run-at-ci --sync-mode true || true
    - k8s-handle deploy --section run-at-ci --sync-mode true --show-logs
    - k8s-handle deploy --section es-shared --sync-mode true
    - k8s-handle deploy --section es-market --sync-mode true
    - k8s-handle deploy --section ${JOB_NAME[1]} --sync-mode true
    - k8s-handle destroy --section manage-${JOB_NAME[1]} --sync-mode true || true
    - k8s-handle deploy --section manage-${JOB_NAME[1]} --sync-mode true --show-logs
    - k8s-handle deploy --section exporter-${JOB_NAME[1]} --sync-mode true
    - k8s-handle deploy --section ${JOB_NAME[1]} --sync-mode true
    - k8s-handle deploy --section $SECTION_NAME --sync-mode true
    - k8s-handle deploy --section ${JOB_NAME[1]} --sync-mode true
    - k8s-handle deploy --section ${JOB_NAME[1]} --sync-mode true
    - k8s-handle deploy --section ${JOB_NAME[1]} --sync-mode true

To avoid breaking backward compatibility, please add optional handling of this value.

Thank you.
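
argparse can support both forms directly; a minimal sketch (hypothetical code) where the value after --sync-mode is optional:

import argparse

def str2bool(value):
    return str(value).lower() in ("true", "1", "yes")

parser = argparse.ArgumentParser()
parser.add_argument(
    "--sync-mode",
    nargs="?",      # the value is optional
    const=True,     # bare "--sync-mode" means True
    default=False,  # absent flag means False
    type=str2bool,  # "--sync-mode true" still parses
)

print(parser.parse_args(["--sync-mode", "true"]).sync_mode)  # True
print(parser.parse_args(["--sync-mode"]).sync_mode)          # True
print(parser.parse_args([]).sync_mode)                       # False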

Support PVC size expansion

Hello,
Currently PVC size increase is not supported via k8s-handle:

[10:39:09]W:	 [Step 4/4] 2020-04-03 07:39:09 INFO:k8s_handle.k8s.provisioner:PersistentVolumeClaim "thanos-compact" already exists, replace it
[10:39:09]W:	 [Step 4/4] 2020-04-03 07:39:09 ERROR:k8s_handle.k8s.provisioner:{'storage': '100Gi'} != {'storage': '200Gi'}
[10:39:09]W:	 [Step 4/4] 2020-04-03 07:39:09 ERROR:k8s_handle.k8s.adapters:Exception when calling "replace_namespaced_persistent_volume_claim": {
[10:39:09]W:	 [Step 4/4]     "kind": "Status",
[10:39:09]W:	 [Step 4/4]     "apiVersion": "v1",
[10:39:09]W:	 [Step 4/4]     "metadata": {},
[10:39:09]W:	 [Step 4/4]     "status": "Failure",
[10:39:09]W:	 [Step 4/4]     "message": "PersistentVolumeClaim \"thanos-compact\" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims",
[10:39:09]W:	 [Step 4/4]     "reason": "Invalid",
[10:39:09]W:	 [Step 4/4]     "details": {
[10:39:09]W:	 [Step 4/4]         "name": "thanos-compact",
[10:39:09]W:	 [Step 4/4]         "kind": "PersistentVolumeClaim",
[10:39:09]W:	 [Step 4/4]         "causes": [
[10:39:09]W:	 [Step 4/4]             {
[10:39:09]W:	 [Step 4/4]                 "reason": "FieldValueForbidden",
[10:39:09]W:	 [Step 4/4]                 "message": "Forbidden: is immutable after creation except resources.requests for bound claims",
[10:39:09]W:	 [Step 4/4]                 "field": "spec"
[10:39:09]W:	 [Step 4/4]             }
[10:39:09]W:	 [Step 4/4]         ]
[10:39:09]W:	 [Step 4/4]     },
[10:39:09]W:	 [Step 4/4]     "code": 422
[10:39:09]W:	 [Step 4/4] }
  Process exited with code 1

But I can edit it manually; it's not immutable:

$ k get pvc thanos-compact
NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
thanos-compact                         Bound    pvc-d060c6b0-f1c5-4423-bda3-88eb09678421   100Gi      RWO            default        3d20h
$ k edit pvc thanos-compact
persistentvolumeclaim/thanos-compact edited
$ k get pvc thanos-compact
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
thanos-compact   Bound    pvc-d060c6b0-f1c5-4423-bda3-88eb09678421   200Gi      RWO            default        3d20h

And after this, k8s-handle passes:

  2020-04-03 07:44:02 INFO:k8s_handle.k8s.provisioner:PersistentVolumeClaim "thanos-compact" already exists, replace it
  2020-04-03 07:44:02 INFO:k8s_handle.k8s.provisioner:PersistentVolumeClaim is not changed
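
The hint in the 422 above is that only spec.resources.requests is mutable for bound claims, which is exactly what kubectl edit changes. A hedged sketch of resizing via a patch instead of a full replace (the namespace is an assumption):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# patch only the mutable field instead of replacing the whole spec
patch = {"spec": {"resources": {"requests": {"storage": "200Gi"}}}}
v1.patch_namespaced_persistent_volume_claim(
    name="thanos-compact",
    namespace="default",  # assumption; use the claim's actual namespace
    body=patch,
)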

Problem: get default namespace from kubeconfig

Trying to deploy an app with kubeconfig.

It failed:

$ docker run --rm -v $(pwd):/tmp -v "$HOME/.kube:/root/.kube" 2gis/k8s-handle k8s-handle deploy -s staging --use-kubeconfig
2018-09-11 12:19:49 INFO:__main__:Using default namespace None
2018-09-11 12:19:49 INFO:templating:Trying to generate file from template "configmap.yaml.j2" in "/tmp/k8s-handle"
2018-09-11 12:19:49 INFO:templating:File "/tmp/k8s-handle/configmap.yaml" successfully generated
2018-09-11 12:19:49 INFO:templating:Trying to generate file from template "deployment.yaml.j2" in "/tmp/k8s-handle"
2018-09-11 12:19:49 INFO:templating:File "/tmp/k8s-handle/deployment.yaml" successfully generated
2018-09-11 12:19:49 INFO:templating:Trying to generate file from template "service.yaml.j2" in "/tmp/k8s-handle"
2018-09-11 12:19:49 INFO:templating:File "/tmp/k8s-handle/service.yaml" successfully generated
Traceback (most recent call last):
  File "/opt/k8s-handle/k8s-handle", line 117, in <module>
    main()
  File "/opt/k8s-handle/k8s-handle", line 90, in main
    p.run(resource)
  File "/opt/k8s-handle/k8s/resource.py", line 156, in run
    self._deploy(file_path)
  File "/opt/k8s-handle/k8s/resource.py", line 195, in _deploy
    if kube_client.get() is None:
  File "/opt/k8s-handle/k8s/resource.py", line 403, in get
    self.name, namespace=self.namespace)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 17481, in read_namespaced_config_map
    (data) = self.read_namespaced_config_map_with_http_info(name, namespace, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 17523, in read_namespaced_config_map_with_http_info
    raise ValueError("Missing the required parameter `namespace` when calling `read_namespaced_config_map`")
ValueError: Missing the required parameter `namespace` when calling `read_namespaced_config_map`

k8s_namespace is not set in config.yaml:

$ cat config.yaml 
---
common:
  app_name: k8s-handle-example
  app_port: 80

  replicas_count: 1
  image_path: nginx
  image_version: 1.13-alpine

  nginx_worker_process: 1
  nginx_worker_connections: 1024

staging:
  templates:
  - template: configmap.yaml.j2
  - template: deployment.yaml.j2
  - template: service.yaml.j2

DoD:

  • Deploy works
  • The default namespace is taken from the default context in "$HOME/.kube"
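
The kubernetes python client already exposes the active context, so the default namespace can be read from it; a minimal sketch:

from kubernetes import config

contexts, active_context = config.list_kube_config_contexts()
namespace = active_context["context"].get("namespace", "default")
print("Default namespace:", namespace)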

Add PriorityClass kind support

The priorityClass feature has been in beta and enabled by default since Kubernetes 1.11, but k8s-handle doesn't support it yet:

2018-10-29 09:47:46 INFO:templating:Trying to generate file from template "priorityclasses.yaml.j2" in "/tmp/k8s-handle"
2018-10-29 09:47:46 INFO:templating:File "/tmp/k8s-handle/priorityclasses.yaml" successfully generated
2018-10-29 09:47:46 INFO:__main__:Default namespace "None"
2018-10-29 09:47:46 INFO:__main__:Default namespace is not set. This may lead to provisioning error, if namespace is not set for each resource.
2018-10-29 09:47:46 ERROR:__main__:RuntimeError: Unknown kind "PriorityClass" in generated file

Examples of PriorityClasses:

---
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."

---
apiVersion: scheduling.k8s.io/v1alpha1
kind: PriorityClass
metadata:
  name: normal-priority
value: 100000
globalDefault: true

Supported API versions in the python client:

  • scheduling.k8s.io/v1alpha1 - 6.0+
  • scheduling.k8s.io/v1beta1 - 7.0+
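
A hedged sketch of what PriorityClass support could call in the python client (scheduling.k8s.io/v1beta1, client 7.0+; model and method names as I recall them from that client generation):

from kubernetes import client, config

config.load_kube_config()
api = client.SchedulingV1beta1Api()

body = client.V1beta1PriorityClass(
    metadata=client.V1ObjectMeta(name="high-priority"),
    value=1000000,
    global_default=False,
    description="This priority class should be used for XYZ service pods only.",
)

# PriorityClass is cluster-scoped, so there is no namespace argument
api.create_priority_class(body=body)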

Stop supporting backward compatibility for optional keys

We need to settle on how long we will keep conditional support for the old --dry-run and --show-logs argument format (#39) in the code, and remove it once that period expires.
