nytimes / drone-gke

Drone plugin for deploying containers to Google Kubernetes Engine (GKE)

Home Page: https://open.nytimes.com/continuous-deployment-to-google-cloud-platform-with-drone-7078fe0c2eaf

License: Apache License 2.0

Go 83.59% Dockerfile 0.70% Makefile 12.97% HCL 1.98% Shell 0.76%
drone-plugin kubernetes go containers ci-cd drone gke templating gcp kubectl

drone-gke's Introduction

drone-gke


Drone plugin to deploy container images to Kubernetes on Google Kubernetes Engine (GKE). For usage information and a listing of the available options, please take a look at the docs.

Simplify deploying to Google Kubernetes Engine: the plugin derives the API endpoints and credentials from the Google service account credentials, and opens the YAML manifests to templating and customization with each Drone build.

Links

Releases and versioning

Tool

This tool follows semantic versioning.

Use the minor version (x.X) releases for stable use cases (e.g. 0.9). Changes are documented in the release notes.

  • Pushes to the main branch will update the image tagged latest.
  • Releases will create images with each major/minor/patch tag value (e.g. 0.7.1 and 0.7).

Kubernetes API

Since Google Cloud SDK 237.0.0 (2019-03-05), the container image contains multiple versions of kubectl. The client version matching the cluster version will be used automatically. This follows the minor release support that GKE offers.

If you want to use a different version, specify it with the kubectl_version parameter.
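
For example, pinning the version in the step settings might look like this (a sketch only; the version value is illustrative and the full set of options is in the docs):

steps:
  - name: deploy
    image: nytimes/drone-gke
    settings:
      kubectl_version: "1.14"
      # ... cluster, zone, namespace, etc. ...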

Usage

⚠️ For usage within a .drone.yml pipeline, please take a look at the docs

Executing locally from the working directory:

# Deploy the manifest templates in local-example/
cd local-example/

# Set to the path of your GCP service account JSON-formatted key file
export JSON_TOKEN_FILE=xxx

# Set to your cluster
export PLUGIN_CLUSTER=yyy

# Set to your cluster's zone
export PLUGIN_ZONE=zzz

# Set to a namespace within your cluster
export PLUGIN_NAMESPACE=drone-gke

# Example variables referenced within .kube.yml
export PLUGIN_VARS="$(cat vars.json)"
# {
#   "app": "echo",
#   "env": "dev",
#   "image": "gcr.io/google_containers/echoserver:1.4"
# }

# Example secrets referenced within .kube.sec.yml
export SECRET_APP_API_KEY=123
export SECRET_BASE64_P12_CERT="cDEyCg=="

# Execute the plugin
docker run --rm \
  -v $(pwd):$(pwd) \
  -w $(pwd) \
  -e PLUGIN_TOKEN="$(cat $JSON_TOKEN_FILE)" \
  -e PLUGIN_CLUSTER \
  -e PLUGIN_ZONE \
  -e PLUGIN_NAMESPACE \
  -e PLUGIN_VARS \
  -e SECRET_APP_API_KEY \
  -e SECRET_BASE64_P12_CERT \
  nytimes/drone-gke --dry-run --verbose

# Remove --dry-run to deploy
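
For reference, a minimal sketch of a templated manifest that references the vars above (illustrative only, not the actual local-example/.kube.yml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.app}}
  labels:
    app: {{.app}}
    env: {{.env}}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{.app}}
  template:
    metadata:
      labels:
        app: {{.app}}
    spec:
      containers:
        - name: {{.app}}
          image: {{.image}}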

drone-gke's People

Contributors

adammck, balser, bennettyates, brianfoshee, caplan, chrisfrank, chuhlomin, ctborg, dependabot[bot], dgrizzanti, gadelkareem, jprobinson, kiafarhang, macdiva, mattgrunwald, melmaliacone, mlclmj, montmanu, msuterski, oliver-nyt, prathameshnyt, samdfonseca, smartfin, soeirosantos, stephansnyt, tonglil, yunzhu-li


drone-gke's Issues

Default variables not available in secrets templates

Describe the bug
The docs state that certain variables are always available to reference in any manifest; however, they're not available in the secrets manifest.

To reproduce
Steps to reproduce the behavior:
1. Create a secret template referencing a default variable and run the plugin:

$ cat > .kube.sec.yml <<EOF
apiVersion: v1
kind: Secret
metadata:
  annotations:
    git_commit: '{{.COMMIT}}'
  name: postgresql-credentials
type: Opaque
data:
  POSTGRESQL_PW: "{{.SECRET_POSTGRESQL_PW}}"
  POSTGRESQL_USER: "{{.SECRET_POSTGRESQL_USER}}"
EOF

$ docker run --rm -v "$(pwd):/app" -w /app \
  -e PLUGIN_TOKEN="$(jq -rcM '.' credentials.json)" \
  -e PLUGIN_CLUSTER=some-cluster \
  -e PLUGIN_ZONE=us-central1-c \
  -e PLUGIN_NAMESPACE=some-namespace \
  -e SECRET_POSTGRESQL_PW=password \
  -e SECRET_POSTGRESQL_USER=postgres \
  -e DRONE_COMMIT=abc123 \
  nytimes/drone-gke:0.10.1 --dry-run --verbose
  2. Observe the error:

Error rendering deployment manifest from template: template: .kube.sec.yml:5:19: executing ".kube.sec.yml" at <.COMMIT>: map has no entry for key "COMMIT"

Expected behavior
The secret manifest should render the default variables: BRANCH, BUILD_NUMBER, COMMIT, TAG, cluster, namespace, project, zone.

Version (please complete the following information):

  • Drone: drone version 1.2.1
  • drone-gke: 0.10.1
  • Kubernetes: Client Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.17-dispatcher", GitCommit:"a39a896b5018d0c800124a36757433c660fd0880", GitTreeState:"clean", BuildDate:"2021-01-28T21:47:26Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Secret Template applies secrets successfully but stage fails

Hi, I'm using the secret_template option in the drone-gke plugin, but I'm running into a weird situation where the secrets are applied successfully on my Kubernetes cluster while the stage returns an error:

Error (kubectl output redacted): Error: exit status 1

Here are screenshots after running it once and twice (screenshots omitted).

Here's the plugin configuration:

- name: deploy
  image: nytimes/drone-gke
  environment:
    TOKEN:
      from_secret: GOOGLE_CREDENTIALS
    SECRET_DOCKER_CFG:
      from_secret: KUBERNETES_PULL_SECRET
    # ... all the other secrets stored in drone ...
  settings:
    cluster: clustername
    expand_env_vars: true
    namespace: ${DRONE_BRANCH}
    zone: us-west1-a
    template: kubernetes/deployment-template.yml
    secret_template: kubernetes/secret-template.yml

The secret template looks like this:

apiVersion: v1
kind: Secret
metadata:
  name: secret1
type: Opaque
data:
  user_name: {{.SECRET_}}

apiVersion: v1
kind: Secret
metadata:
  name: secret2
type: Opaque
data:
  user_passt: {{.SECRET_}}

apiVersion: v1
kind: Secret
metadata:
  name: secret3
type: Opaque
data:
  user_id: {{.SECRET_}}

Any pointers as to what may be happening?

Handling secrets in drone v0.5+

Problem

Secrets handling has to change from the v0.4 to the v0.5 plugin style, and there isn't an obvious solution that's equal to or better than the current one.

Currently secrets are passed to this plugin in the drone v0.4 style like so:

Secrets map[string]string `json:"secrets"`

i.e. a JSON object (map) with string values under the key secrets, in other words a "named map". secrets_base64 behaves much the same way, so there's no need to call it out every time.

In drone v0.5+, secrets are passed to plugins as environment variables. From the docs:

The secrets in the above are exposed to the plugin as uppercase environment variables. The variable names are therefore important.

It turns out this makes it impossible for new style plugins to handle secrets by the same convention, a "named map".

Implications

Within drone-gke it's easy to select all secrets for processing/iteration because there's a very clear convention that they all belong under plugin vargs.Secrets. This makes it pretty safe to do something like exclude vargs.Secrets from verbose/debug output. This convention doesn't stop pipeline authors from injecting drone secrets outside of vargs.Secrets, so it's not perfect, but it's a little better than what is possible in drone v0.5.

Solutions

token is the only named secret in use in this plugin and this issue doesn't really explore that in depth.

Secret Variable Name Prefix

A similar convention could be adopted in drone v0.5, but I think it's less obvious how it works and it makes pipeline definitions slightly more repetitive. The convention is to have all secret targets start with secret_, then the plugin iterates over all environment variables and selects those with this prefix to process as secrets, meaning they are excluded from output and passed to secret templates for interpolation.

deploy:
  gke:
    ...
    secrets:
-      foo: $$FOO_PRD
-      bar: $$BAR_PRD
+      - source: FOO_PRD
+        target: secret_foo
+      - source: BAR_PRD
+        target: secret_bar

Here is a PR that implements this idea: https://github.com/stephansnyt/drone-gke/compare/feature/remove-drone-0.4-code...stephansnyt:secretmap?expand=1
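
For illustration, a rough Go sketch of the prefix convention described above (not the plugin's actual code): iterate over the environment and collect variables whose names start with SECRET_, so they can be interpolated into secret templates and kept out of debug output.

package main

import (
	"fmt"
	"os"
	"strings"
)

// collectSecrets gathers all environment variables with the SECRET_ prefix.
func collectSecrets() map[string]string {
	secrets := map[string]string{}
	for _, kv := range os.Environ() {
		parts := strings.SplitN(kv, "=", 2)
		if len(parts) != 2 {
			continue
		}
		if strings.HasPrefix(parts[0], "SECRET_") {
			secrets[parts[0]] = parts[1]
		}
	}
	return secrets
}

func main() {
	for name := range collectSecrets() {
		// Print only the names; the values stay out of the logs.
		fmt.Println("found secret:", name)
	}
}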

Provide An Explicit List Of Secret Names

It's doubtful this adds value over the solution given above, because it counts on pipeline authors to do the right thing twice instead of just once.

deploy:
  gke:
    ...
    secrets:
-      foo: $$FOO_PRD
-      bar: $$BAR_PRD
+      - source: FOO_PRD
+        target: foo
+      - source: BAR_PRD
+        target: bar
+    secret_variable_names:
+      - foo
+      - bar

Compare Environment Variables Against Keys In Secret Templates

The plugin could get the list of keys from secret templates, and look for corresponding environment variables. Then the plugin could select those to process as secrets, e.g. to exclude them from verbose/debug output.

This isn't perfect either, because secret templates could contain non-secret keys like app and env (so the template can be reused in multiple envs) alongside actual secret values.

Token not being found

Hello

I am having trouble with the token in drone. My yaml is as follows

image: nytimes/drone-gke:k8s-1.11
expand_env_vars: true
token: >
   $$GOOGLE_CREDS

It is still saying that the token is not set, but that env var is set as a secret.


Drone GKE Plugin built from 88679fcb947304ea35dde46ed60db3ce315c4a9d
--
 Missing required param: token

From the docs, you say you need to put it in secrets? And then I see blogs about putting it at the higher level, which is what I have done. Can this be cleared up for me? I can run it locally by setting the token env var, so I'm not sure why the plugin is not working.

Need ability to pass a cluster name to GKE plugin from a previous drone step

The problem: Hard coding cluster names into a GKE Drone step is not possible when dealing with Cloud Composer deployments.

We currently use a Drone Cloud Composer plugin to spin up a new Cloud Composer environment. The cluster name is generated dynamically by Google and is NOT predictable (e.g. us-central1-prd-1c7de959-gke).
This is where the complication arises. The next step in our drone file uses the drone-gke plugin to deploy a pod running a Cloud SQL Proxy onto the newly created GKE cluster (the proxy is needed to allow secure communication with the Cloud SQL DBs used by Airflow). The problem is that the GKE step to deploy the proxy pod doesn't know the name of the just-created Cloud Composer cluster, because it can't be hard-coded into the Drone file ahead of time.

What we'd like to do to address the problem:
We'd like to modify the GKE plugin to allow the passing of a GKE cluster name from a previous drone step in the same pipeline. The only method for communication between Drone steps in the same pipeline run seems to be a shared volume that any step can read and write to. Here's the sequence of events for the solution we're thinking about:
Drone step #1: create the Cloud Composer environment
Drone step #2: capture the name of the cluster via a gcloud CLI call and save the name to a file on the shared volume.
Drone step #3: The GKE plugin step examines the cluster name and if it starts with the prefix 'file-path:' (e.g. cluster_name: file-path: tmp/cluster_name.txt), then the plugin reads the cluster name from the file on the shared volume. If the prefix is not present then it uses the hard coded cluster name as is.
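
A minimal Go sketch of the proposed behavior in step #3 (hypothetical; not implemented in the plugin):

package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveClusterName returns the configured value as-is, unless it uses the proposed
// "file-path:" prefix, in which case the cluster name is read from that file on the
// shared volume.
func resolveClusterName(param string) (string, error) {
	const prefix = "file-path:"
	if !strings.HasPrefix(param, prefix) {
		return param, nil
	}
	path := strings.TrimSpace(strings.TrimPrefix(param, prefix))
	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("reading cluster name from %s: %w", path, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := resolveClusterName("file-path: tmp/cluster_name.txt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cluster:", name)
}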

Alternatives we've considered:
We could modify the Cloud Composer plugin to add the optional deployment of a SQL proxy onto the cluster but that would duplicate what the GKE plugin already does.

To do: go test -vet all

Not part of this PR, but in recent versions of Go, go test and go vet can be combined with go test -vet all.

Originally posted by @fsouza in #109

Update "wait" fields

Is your feature request related to a problem?
The plugin is only capable of "waiting" on Deployment workloads.

The current wait_deployments field is hard-coded to run kubectl rollout status for Deployments only (https://github.com/NYTimes/drone-gke/blob/57c5e00070efb62240c37b5f5d6d22c5878a3ddf/main.go#L628).

The rollout command is capable of checking the status for multiple workload types (https://github.com/kubernetes/kubernetes/blob/426ef9d349bb3a277c3e8826fb772b6bdb008382/pkg/kubectl/cmd/rollout/rollout_status.go#L95).

Describe the solution you'd like

  • Create a wait field:
wait:
  - deployment/xyz
  - statefulset/abc
  - ds/my-daemon-set
  • Deprecate the wait_deployments field (with a warning message in the logs).
  • Remove the wait_deployments field in a future version bump.

Describe alternatives you've considered
n/a

Additional context
n/a

When `dry_run` is enabled, `wait_deployments` should be ignored

Description

When dry_run is enabled and wait_deployments has been configured, the timeout command that implements the wait_deployments feature currently fails since no resources are applied (due to dry_run).

Test Case

Steps to reproduce the behavior:

  1. set dry_run: true
  2. configure wait_deployments to some valid reference

Expected behavior

drone-gke should not exit with a non-zero status due to failures from waiting on deployments that were not applied (dry_run: true)

The branch name should be stripped of slashes before being used in config

$ kubectl apply --record --filename /tmp/namespace.json
error: error when retrieving current configuration of:
&{0xc4200e8f00 0xc42004d340  feature/TY-357-some-issue /tmp/namespace.json 0xc420be4678  false}
from server for: "/tmp/namespace.json": invalid resource name "feature/TY-357-some-issue": [may not contain '/']
Error: Error: exit status 1

Refactor method of creating namespaces

Instead of specifying the namespace in the configuration, that parameter should point to a K8s resource manifest template.

This would make Namespaces follow the same pattern as Secrets (.kube.sec.yml file).

Namespaces could then be created (maybe even updated via kubectl apply -f) with labels from vars:.

This functionality is requested mainly for namespace-deploy-on-pull-requests, which Drone 0.4 is not capable of but Drone 0.5+ is.
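
A sketch of what such a Namespace manifest template could look like (file name and labels are hypothetical):

apiVersion: v1
kind: Namespace
metadata:
  name: {{.namespace}}
  labels:
    app: {{.app}}
    env: {{.env}}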

Refactor where templated manifest files are written to

Idea

Currently they are output into the /tmp directory, which means they are discarded when this plugin completes.

One useful feature is to preserve the output and upload the applied manifests to S3/GCS/storage so they can be viewed/used later.

The output directory MUST be changed to somewhere in the workspace.Path (NOT workspace.Root) in order for those plugins to be able to access and upload them (see https://github.com/drone-plugins/drone-google-cloudstorage/issues/10).

Implications

The Secrets manifest (.kube.sec.yml) is no longer "ephemeral" in this plugin's container, and persists until the end of the entire Drone build.

This risk should be acceptable since the secret is already in Drone as environment variables ($$SECRET).

However, this would be a concern for users of this plugin who are uploading their entire workspace.Path to S3, as the output directory is now in the workspace.Path!

Alternatives

  1. Write the output to somewhere in the workspace.Root instead of workspace.Path, and fork existing plugins to support accessing files in workspace.Root.

  2. Alert users of this change, and ask them to update their S3/GCS configs to ignore: this output directory.

Next steps

  1. Determine what to do.
  2. This is related to #27.

wait until deploy completes

Ideally kubectl apply would have something like a --wait option, so you know the actual rollout of Pods (etc) completed successfully, but it doesn't. So the success of a deploy here doesn't actually indicate the deploy succeeded, as the rollout happens asynchronously.

We could use kubectl rollout status after the kubectl apply, which blocks on deployment completion. It looks like you can even use -f to track all the resources in the config file.
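
A sketch of that approach, assuming the manifest file is .kube.yml:

kubectl apply --filename .kube.yml
kubectl rollout status --filename .kube.yml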

Refactor method of applying K8s resource files

As more complex K8s workloads are being created, teams would like to separate out their K8s resource manifests to be more maintainable.

cc @yunzhu-li.

Idea

Currently template: points to a single file for filling vars: variables.

The suggestion is to allow template: to point to a directory that could contain 0-n K8s manifests for filling vars: variables.

The resulting filled templates would be in some output: directory (default to workspace.Path + /drone-gke-output/*.yml).

Then kubectl apply -f would be performed on the entire output: directory.

Implications

  1. The same pattern needs to be followed for K8s Secrets manifests (for filling vars:*, secrets:, and secrets_base64 variables), but also output to the same output: directory.

*note vars: is not being filled into .kube.sec.yml today, but is a desired feature #8.

  2. Debug output of these files will not be accessible via the verbose: parameter, as it could/would have difficulty filtering out the "rendered" .kube.sec.yml file(s) in the output: directory. We want to avoid leaking secrets.

  3. A solution (workaround) for 2. is that since all files are written to the output: directory, those files can be uploaded to GCS or S3 for viewing/redeployment/replication at a later time. This is detailed in #29.

*note the output: directory may need to create a new temp directory in workspace.Path
(/drone/src/github.com/org/repo/drone-gke-output) instead of something in the workspace.Root
(/drone/drone-gke/).

It may be problematic if the temp directory contains non K8s manifest files, but it cannot error (see the next point for a use case).

This is to support restrictions other plugins have on the directory tree that can be accessed (see https://github.com/drone-plugins/drone-google-cloudstorage/issues/10).

There are further implications of this method, please see #29.

  4. An output: directory will also enable other plugins to speak Kubernetes. One use case is for a vault-k8s plugin to write a Vault-specific K8s resource to the output: folder for drone-gke to kubectl apply -f dir/.

  5. Following issue #28, this would also allow the same pattern to be followed for creating and deploying into namespaces with labels and names that are templated with vars: variables. However, controlling that the Namespace is created first (before other resources) is still TBD.

  6. For those that want to wait until the rollout of some resource is complete using kubectl rollout status (#26), that feature can be implemented with the -f and -R flags to manage all resources in the output: directory, versus specifying the exact resource. Specifics of that feature should be continued in that issue.

Releasing

There may be a way to implement these changes without breaking changes (transparently for plugin consumers).

If not, we may have to 1) create a new tag for the Docker container, 2) update the existing container to add a very clear DEPRECATION message, and 3) announce a limited support period.

Next steps

This is an RFC, soliciting comments and implications.

Rollout functionality checks resources in series

See https://github.com/NYTimes/drone-gke/pull/58/files/e7eae61cf7b7a157c9c951502309d327083fb7ff#r156678397.

Need to confirm whether checking the rollout of multiple resources can be performed in series, or whether the checks are affected by timing.

  • what happens when checking rollout of a resource that is already completed?
  • what if there are multiple rollouts of the same resource going on (for example, the next build started a rollout before the current build ran kubectl rollout)?

Set up CI

  • Use Travis
  • Remove .drone files
  • Publish to Docker Hub

GCP auth plugin will be unavailable in K8s 1.25

Is your feature request related to a problem?

The plugin outputs the following warning message due to changes introduced on the GCP side regarding GKE authentication (ref: https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke):
WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.

Therefore it'll become a problem for some users using the newer Kubernetes versions.

Describe the solution you'd like
The official solution provided by GCP is to set the USE_GKE_GCLOUD_AUTH_PLUGIN environment variable to True (check the ref doc above).

This can be set by default in the plugin's image (https://github.com/nytimes/drone-gke/blob/main/Dockerfile#L7). But it has to be double-checked how this affects previous Kubernetes versions; maybe it's worth adding some logic to conditionally enable it based on version detection at run time.
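
If it were simply enabled unconditionally, the change would be a single line in the Dockerfile (a sketch; the conditional, version-detecting variant discussed above would require more work):

ENV USE_GKE_GCLOUD_AUTH_PLUGIN=True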

Update golang version

Version 1.19 is no longer supported since the release of 1.21.

Update to version 1.20+

Variable shadowing occurs when a variable name's casing is changed

Describe the bug
It is possible to shadow the available vars by changing the casing of the vars: variable.

To reproduce

---
kind: pipeline
type: docker
name: default

steps:
  - name: gke
    image: nytimes/drone-gke
    pull: always
    environment:
      TOKEN:
        from_secret: GOOGLE_CREDENTIALS
    settings:
      dry_run: true
      verbose: true
      zone: us-east1-b
      cluster: test
      namespace: test
      vars:
        NAMESPACE: "override"

Proceeds successfully:

---START VARIABLES AVAILABLE FOR ALL TEMPLATES---
{
	"BRANCH": "test",
	"BUILD_NUMBER": "78",
	"COMMIT": "xxx",
	"NAMESPACE": "override",
	"TAG": "",
	"cluster": "test",
	"namespace": "test",
	"project": "my-project-x",
	"zone": "us-east1-b"
}
---END VARIABLES AVAILABLE FOR ALL TEMPLATES---

Expected behavior
Should error out with Error: var "namespace" shadows existing var.

Version (please complete the following information):

  • Drone: 1.9.0
  • drone-gke: latest
  • Kubernetes:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.9", GitCommit:"2e808b7cb054ee242b68e62455323aa783991f03", GitTreeState:"clean", BuildDate:"2020-01-18T23:33:14Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.12-gke.16", GitCommit:"013d168bdbdca7bc365fae800272aede20848d7e", GitTreeState:"clean", BuildDate:"2020-07-29T03:47:52Z", GoVersion:"go1.12.17b4", Compiler:"gc", Platform:"linux/amd64"}

Additional context

Potential fix: lowercase all variables when comparing for shadowing.

https://github.com/nytimes/drone-gke/blob/master/main.go#L430
https://github.com/nytimes/drone-gke/blob/master/main.go#L514
https://github.com/nytimes/drone-gke/blob/master/main.go#L532
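
A rough Go sketch of the potential fix above (not the plugin's actual code): compare variable names case-insensitively when checking for shadowing.

package main

import (
	"fmt"
	"strings"
)

// checkShadowing errors when a user-supplied var differs from a built-in var only by
// casing (e.g. NAMESPACE vs namespace), instead of silently accepting both.
func checkShadowing(builtin, userVars map[string]string) error {
	lowered := map[string]bool{}
	for k := range builtin {
		lowered[strings.ToLower(k)] = true
	}
	for k := range userVars {
		if lowered[strings.ToLower(k)] {
			return fmt.Errorf("var %q shadows existing var", k)
		}
	}
	return nil
}

func main() {
	builtin := map[string]string{"namespace": "test", "cluster": "test"}
	user := map[string]string{"NAMESPACE": "override"}
	fmt.Println(checkShadowing(builtin, user)) // var "NAMESPACE" shadows existing var
}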

Replace custom rollout timeout implementation with kubectl native alternative

Description

Currently, support for the wait_deployments and wait_seconds parameters of drone-gke is implemented in part by a separate utility – a musl variant of timeout(1). This command is used in combination with the kubectl rollout status subcommand to ensure that rollout(s) of specific resource(s) succeed within a specific duration.

I'm not sure when support for the --timeout argument was added to kubectl rollout status, but I believe it can now be used to achieve the same objective as the current implementation.

Desired Outcome

kubectl and drone-gke should both continue to function as expected when replacing the timeout command based implementation with the --timeout argument based implementation. In other words, both drone-gke and kubectl rollout status should continue to exit with a non-zero code if a rollout fails to complete within the specified duration; otherwise, each program should continue exiting with a 0 code.

The following is a pseudo-code example intended to demonstrate the core of a potential kubectl native alternative for the rollout duration threshold condition:

##
# $PLUGIN_WAIT_DEPLOYMENTS - a list of user specified resources which may have associated rollouts pending
# due to a previous `kubectl apply` execution (example: deployment/nginx)
##

##
# $PLUGIN_WAIT_SECONDS - a user specified rollout duration threshold (example: 180)
# any pending rollouts that are associated with the above resources must complete
# successfully within this duration; otherwise, the rollout is considered to have failed
# and the drone-gke process must exit with a non-zero status code
##

for resource in "${PLUGIN_WAIT_DEPLOYMENTS[@]}"; do
  ##
  # watch the rollout status of the latest revision associated with current $resource,
  # and wait until one of the following conditions is met:
  #
  # - the rollout has completed
  # - the duration represented by the --timeout argument has been exceeded
  ##
  kubectl rollout status "${resource}" --timeout "${PLUGIN_WAIT_SECONDS}s"

  ##
  # here, `$?` will be set to 0 if the rollout completed successfully within the specified duration.
  # otherwise `$?` will be set to a non-zero value.
  #
  # if during any iteration `kubectl rollout status` results in a non-zero exit code,
  # the `drone-gke` process must also exit with a non-zero code.
  ##
done

Alternatives Considered

None.

Related Resources

kubectl help rollout status

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-status-em-

#26

#58

#64

#121

Support specifying directories for template, secret template paths

Is your feature request related to a problem?

  1. Managing all k8s resources in a single file becomes difficult to maintain for larger projects
  2. Executing drone-gke multiple times for each individual file seems a bit excessive

Describe the solution you'd like

An API to drone-gke that is compatible with kubectl apply's -f / --filename argument:

-f, --filename=[]: Filename, directory, or URL to files that contains the configuration to apply

Describe alternatives you've considered

  1. Managing all resources in a single file
  2. Executing drone-gke once per manifest file

Additional context

Might be worth considering in combination with #104

Add support for loading templates from remote locations

Is your feature request related to a problem?

Currently, the plugin requires that resource manifest templates exist on disk:

drone-gke/main.go

Lines 516 to 524 in 6ac83b9

_, err := os.Stat(t)
if os.IsNotExist(err) {
	if t == c.String("kube-template") {
		return nil, fmt.Errorf("Error finding template: %s\n", err)
	}
	log("Warning: skipping optional template %s because it was not found\n", t)
	continue
}

This complicates sharing / re-use of templates.

Describe the solution you'd like

An ideal implementation would support loading resource manifest templates from a remote location, similar to kubectl apply [-f|-k].

Describe alternatives you've considered

Current alternatives / workarounds include:

  • pre-fetching remote manifest templates at build time (e.g., via curl, git submodules, etc)
  • use of kubectl / kustomize native features (instead of plugin)

Additional context

Any implementation of this feature request should consider potential interactions with #105 (i.e., how to load multiple templates from a remote location .. especially in the case of supporting directories as a valid argument)

Support for multiple templates

I would like to reuse parts of my YAML configurations across different pipeline steps for different cluster environments ex.: staging, testing and production. But for example not ingress configurations.

Currently this is possible by defining different YAML files for every environment and assigning the different files to the template parameter. But this creates a lot of code duplication and makes it harder to perform changes to the Kubernetes configurations, since the whole YAML configuration has to be duplicated.

Support for multiple template/secret files could allow configurations to be reused. Multiple templates are optional and could be defined by assigning an array to the template parameter instead of a string. This makes the feature both forward and backward compatible:

# Multiple template files:
template:
  - .kube.yml
  - .kube-ingress-test.yml

# Single template file:
template: .kube.yml

Besides the template parameter, the secret_template parameter could also get support for multiple templates to keep the plugin configuration consistent.

Publish images with `autotag`

http://plugins.drone.io/drone-plugins/drone-docker/#autotag

Pushes to master should only update latest.

Tags will create the image's tag, and update the image's existing "higher" tags.

action                       | git tag | image tags created | image tags updated
                             | 0.4.0   | 0.4.0              | 0.4 and 0
patch                        | 0.4.1   | 0.4.1              | 0.4 and 0
minor (backwards compatible) | 0.7.0   | 0.7.0 and 0.7      | 0
major (breaking changes)     | 1.0.0   | 1.0.0, 1.0, and 1  |
patch                        | 1.0.1   | 1.0.1              | 1.0 and 1

Cannot pull in Drone 0.7

When drone tries to pull the image it gets

panic: EOF
goroutine 1 [running]:
github.com/NYTimes/drone-gke/vendor/github.com/drone/drone-plugin-go/plugin.MustParse()

        /go/src/github.com/NYTimes/drone-gke/vendor/github.com/drone/drone-plugin-go/plugin/param.go:129 +0x4e
main.wrapMain(0x0, 0x0)
       /go/src/github.com/NYTimes/drone-gke/main.go:76 +0x5a3
main.main()
	/go/src/github.com/NYTimes/drone-gke/main.go:50 +0x22

Ability to deploy an existing kube.yml file

Is it possible with this plugin to put just the bare minimum parameters to deploy a deployment.yml-like file?

    image: nytimes/drone-gke

    zone: us-central1-a
    cluster: my-k8s-cluster
    namespace: $$BRANCH
    token: >
      $$GOOGLE_CREDENTIALS

We could use this to just log onto GKE and let it deploy the kube.yml, without requiring all the vars and secrets to be put into the kube.yml file.

kubectl rollout error: "timeout: unrecognized option: t"

Describe the bug
It appears that a breaking change was introduced into the busybox timeout command. timeout no longer recognizes the -t switch. Instead, the timeout duration is now passed as a positional argument.

To reproduce
Build a drone-gke container from master. The build process will pull in the latest version of the timeout utility, which has the breaking change. Run the updated plugin in a Drone pipeline.

The following error output should appear in the Drone pipeline output:

$ timeout -t 600 kubectl rollout status deployment skinny-gateway-server --namespace pubp-dev
timeout: unrecognized option: t
BusyBox v1.30.1 (2019-06-12 17:51:55 UTC) multi-call binary.

Usage: timeout [-s SIG] SECS PROG ARGS

Runs PROG. Sends SIG to it if it is not gone in SECS seconds.
Default SIG: TERM.
Error: Error: exit status 1

Expected behavior
The timeout command should run without failing.
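
Per the usage text above, the fix is presumably to drop the -t switch and pass the duration as a positional argument, e.g.:

timeout 600 kubectl rollout status deployment skinny-gateway-server --namespace pubp-dev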

Version (please complete the following information):

  • Drone: 0.8
  • drone-gke: master (6ac83b9)

Support server-side applies

Is your feature request related to a problem?
More context in this Slack thread, but TLDR we've got a failing build that we believe we could solve by doing a kubectl apply --server-side.

Describe the solution you'd like
A boolean flag we could pass, similar to dry_run, that, if true, would just append --server-side to the kubectl apply command the image runs.
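
In effect, with the flag enabled the plugin would run something like the following (a sketch; the manifest path is illustrative):

kubectl apply --server-side --filename /tmp/.kube.yml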

Describe alternatives you've considered
Off the top of my head, the only way around this would be:

  • Migrate off our use of kube-prometheus, where the issue is occurring (definitely going to happen eventually, but not any time soon)
  • Don't use this image and run all kubectl stuff in our pipeline directly (Ick!)

Additional context
Glancing at #185 a new flag doesn't seem like a huge lift, so if PRs are welcome I'm happy to give this a shot! 😁

Add support for configmaps

With the changes to the plugin based on the RFCs (#27, #28), configmaps can be supported in a similar fashion:

configmaps: config-folder/

Configmap files in config-folder get templated, and each output file gets kubectl create configmap FILE.
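
A sketch of the idea (configmap and file names are illustrative):

kubectl create configmap my-config --from-file=config-folder/my-config.yml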

Add support for regional clusters

For regional clusters, the --region argument to gcloud container clusters get-credentials is required. The plugin currently does not accept a region parameter. Also, the --region and --zone arguments to gcloud container clusters get-credentials are mutually exclusive; if both are provided, the command will fail.

To support regional clusters, the plugin will need to accept a new optional region parameter, and change the current zone parameter to be optional. It will need to select the corresponding argument used for gcloud container clusters get-credentials based on the parameters provided to the plugin.
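
A sketch of the selection logic (the region parameter is the proposed addition; project and cluster names are illustrative):

if [ -n "$PLUGIN_REGION" ]; then
  gcloud container clusters get-credentials my-cluster --project my-project --region "$PLUGIN_REGION"
else
  gcloud container clusters get-credentials my-cluster --project my-project --zone "$PLUGIN_ZONE"
fi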

Sample Log Output
Drone GKE Plugin built from 42efb697c0e95da3b3d94ecfee6ef29cd54dcd68

$ gcloud auth activate-service-account --key-file /tmp/gcloud.json
Activated service account credentials for: [[email protected]]

$ gcloud container clusters get-credentials my-cluster --project my-project --zone us-east1-b
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=404, message=The resource "projects/my-project/zones/us-east1-b/clusters/my-cluster" was not found.
Could not find [my-cluster] in [us-east1-b].
Did you mean [my-cluster] in [us-east1]?
Error: exit status 1

Add support for rollout reset

Is your feature request related to a problem?
When the tag on the image doesn't change, reapplying the deployment doesn't delete the pods. An unchanging tag can happen if you tag containers with :release/:master.

Describe the solution you'd like
Add kubectl rollout restart
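
For reference, a restart of a specific workload would look something like this (resource name illustrative):

kubectl rollout restart deployment/my-app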

Describe alternatives you've considered
I don't think there are any possible ones
