
krane

This project used to be called kubernetes-deploy. Check out our migration guide for more information including details about breaking changes.

krane is a command line tool that helps you ship changes to a Kubernetes namespace and understand the result. At Shopify, we use it within our much-beloved, open-source Shipit deployment app.

Why not just use the standard kubectl apply mechanism to deploy? It is indeed a fantastic tool; krane uses it under the hood! However, it leaves its users with some burning questions: What just happened? Did it work?

Especially in a CI/CD environment, we need a clear, actionable pass/fail result for each deploy. Providing this was the foundational goal of krane, which has grown to support the following core features:

👀 Watches the changes you requested to make sure they roll out successfully.

⁉️ Provides debug information for changes that failed.

🔢 Predeploys certain types of resources (e.g. ConfigMap, PersistentVolumeClaim) to make sure the latest version will be available when resources that might consume them (e.g. Deployment) are deployed.

🔐 Creates Kubernetes secrets from encrypted EJSON, which you can safely commit to your repository.

🏃 Runs tasks at the beginning of a deploy using bare pods (example use case: Rails migrations).

If you need the ability to render dynamic values in templates before deploying, you can use krane render. Alongside that, this repo also includes tools for running tasks and restarting deployments.




Table of contents

krane deploy

krane global deploy

krane restart

krane run

krane render

Contributing


Prerequisites

  • Ruby 2.7+
  • Your cluster must be running Kubernetes v1.24.0 or higher¹

Compatibility

¹ We run integration tests against the Kubernetes versions marked "Yes" in the compatibility chart below.

krane supports the officially supported upstream versions of Kubernetes and Ruby listed in this compatibility matrix; nevertheless, older releases are still likely to work.

Kubernetes version   Currently Tested?   Last officially supported in gem version
1.18                 No                  2.3.7
1.19                 No                  2.4.9
1.20                 No                  2.4.9
1.21                 No                  2.4.9
1.22                 No                  3.0.1
1.23                 No                  3.4.2
1.24                 Yes                 --
1.25                 No                  --
1.26                 Yes                 --
1.27                 Yes                 --
1.28                 Yes                 --

Installation

  1. Install kubectl (requires v1.22.0 or higher) and make sure it is available in your $PATH
  2. Set up your kubeconfig file for access to your cluster(s).
  3. gem install krane

Usage

krane deploy <app's namespace> <kube context>

Environment variables:

  • $KUBECONFIG: points to one or multiple valid kubeconfig files that include the context you want to deploy to. File names are separated by colons on Linux and macOS, and by semicolons on Windows. If omitted, krane will use the Kubernetes default of ~/.kube/config.
  • $GOOGLE_APPLICATION_CREDENTIALS: points to the credentials for an authenticated service account (required if your kubeconfig user's auth provider is GCP)
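For example, on Linux or macOS you might point $KUBECONFIG at two files (the paths here are hypothetical):

```shell
# Hypothetical kubeconfig paths; entries are colon-separated on Linux/macOS
export KUBECONFIG="$HOME/.kube/staging.yml:$HOME/.kube/production.yml"
# Show each file kubectl (and therefore krane) will consider
echo "$KUBECONFIG" | tr ':' '\n'
```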

Options:

Refer to krane help for the authoritative set of options.

  • --filenames / -f [PATHS]: Accepts a list of directories and/or filenames specifying the set of directories/files that will be deployed. Use - to read from STDIN.
  • --no-prune: Skips pruning of resources that are no longer in your Kubernetes template set. Not recommended, as it allows your namespace to accumulate cruft that is not reflected in your deploy directory.
  • --global-timeout=duration: Raise a timeout error if it takes longer than duration for any resource to deploy.
  • --selector: Instructs krane to only prune resources which match the specified label selector, such as environment=staging. If you use this option, all resource templates must specify matching labels. See Sharing a namespace below.
  • --selector-as-filter: Instructs krane to only deploy resources that are filtered by the specified labels in --selector. The deploy will not fail if not all resources match the labels. This is useful if you only want to deploy a subset of resources within a given YAML file. See Sharing a namespace below.
  • --no-verify-result: Skip verification that workloads correctly deployed.
  • --protected-namespaces=default kube-system kube-public: Fail validation if a deploy is targeted at a protected namespace.
  • --verbose-log-prefix: Add [context][namespace] to the log prefix

NOTICE: Deploy Secret resources at your own risk. Although we will fix any reported leak vectors with urgency, we cannot guarantee that sensitive information will never be logged.

Sharing a namespace

By default, krane will prune any resources in the target namespace which have the kubectl.kubernetes.io/last-applied-configuration annotation and are not a result of the current deployment process, on the assumption that there is a one-to-one relationship between application deployment and namespace, and that a deployment provisions all relevant resources in the namespace.

If you need to, you may specify --no-prune to disable all pruning behaviour, but this is not recommended.

If you need to share a namespace with resources which are managed by other tools or indeed other krane deployments, you can supply the --selector option, such that only resources with labels matching the selector are considered for pruning.

If you need to share a namespace with different set of resources using the same YAML file, you can supply the --selector and --selector-as-filter options, such that only the resources that match with the labels will be deployed. In each run of deploy, you can use different labels in --selector to deploy a different set of resources. Only the deployed resources in each run are considered for pruning.

Using templates

All templates must be YAML formatted. We recommend storing each app's templates in a single directory, {app root}/config/deploy/{env}. However, you may use multiple directories.

If you want dynamic templates, you may render ERB with krane render and then pipe that result to krane deploy -f -.

Customizing behaviour with annotations

  • krane.shopify.io/timeout-override: Override the tool's hard timeout for one specific resource. Both full ISO8601 durations and the time portion of ISO8601 durations are valid. Value must be between 1 second and 24 hours.
    • Example values: 45s / 3m / 1h / PT0.25H
    • Compatibility: all resource types
  • krane.shopify.io/required-rollout: Modifies how much of the rollout needs to finish before the deployment is considered successful.
    • Compatibility: Deployment
      • full: The deployment is successful when all pods in the new replicaSet are ready.
      • none: The deployment is successful as soon as the new replicaSet is created for the deployment.
      • maxUnavailable: The deploy is successful when minimum availability is reached in the new replicaSet. In other words, the number of new pods that must be ready is equal to spec.replicas - strategy.RollingUpdate.maxUnavailable (converted from percentages by rounding up, if applicable). This option is only valid for deployments that use the RollingUpdate strategy.
      • Percent (e.g. 90%): The deploy is successful when the number of new pods that are ready is equal to spec.replicas * Percent.
    • Compatibility: StatefulSet
      • full: The deployment is successful when all pods are ready.
  • krane.shopify.io/predeployed: Causes a Custom Resource to be deployed in the pre-deploy phase.
    • Compatibility: Custom Resource Definition
    • Default: true
    • true: The custom resource will be deployed in the pre-deploy phase.
    • All other values: The custom resource will be deployed in the main deployment phase.
  • krane.shopify.io/deploy-method-override: Causes a resource to be deployed by the specified kubectl command, instead of the default apply.
    • Compatibility: Cannot be used for PodDisruptionBudget, since it always uses create/replace-force
    • Accepted values: create, replace, and replace-force
    • Warning: Resources whose deploy method is overridden are no longer subject to pruning on deploy.
    • This feature is experimental and may be removed at any time.
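For instance, a Deployment that should fail fast and only needs 90% of its pods ready might combine two of these annotations (the resource name and values here are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical
  annotations:
    krane.shopify.io/timeout-override: "3m"
    krane.shopify.io/required-rollout: "90%"
```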

Running tasks at the beginning of a deploy

To run a task in your cluster at the beginning of every deploy, simply include a Pod template in your deploy directory. krane will first deploy any ConfigMap and PersistentVolumeClaim resources present in the provided templates, followed by any such pods. If the command run by one of these pods fails (i.e. exits with a non-zero status), the overall deploy will fail at this step (no other resources will be deployed).

Requirements:

  • The pod's name should include <%= deployment_id %> to ensure that a unique name will be used on every deploy (the deploy will fail if a pod with the same name already exists).
  • The pod's spec.restartPolicy must be set to Never so that it will be run exactly once. We'll fail the deploy if that run exits with a non-zero status.
  • The pod's spec.activeDeadlineSeconds should be set to a reasonable value for the performed task (not required, but highly recommended)

A simple example can be found in the test fixtures: test/fixtures/hello-cloud/unmanaged-pod-1.yml.erb.
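In outline, such a template might look like this (the image and command are hypothetical; the ERB deployment_id suffix keeps the name unique per deploy, as required above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: migrate-<%= deployment_id %>
spec:
  restartPolicy: Never          # required: run the task exactly once
  activeDeadlineSeconds: 600    # recommended: bound the task's runtime
  containers:
  - name: migrate
    image: my-app:latest        # hypothetical
    command: ["rake", "db:migrate"]
```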

The logs of all pods run in this way will be printed inline. If there is only one pod, the logs will be streamed in real-time. If there are multiple, they will be fetched when the pod terminates.


Deploying Kubernetes secrets (from EJSON)

Note: If you're a Shopify employee using our cloud platform, this setup has already been done for you. Please consult the CloudPlatform User Guide for usage instructions.

Since their data is only base64 encoded, Kubernetes secrets should not be committed to your repository. Instead, krane supports generating secrets from an encrypted ejson file in your template directory. Here's how to use this feature:

  1. Install the ejson gem: gem install ejson
  2. Generate a new keypair: ejson keygen (prints the keypair to stdout)
  3. Create a Kubernetes secret in your target namespace with the new keypair: kubectl create secret generic ejson-keys --from-literal=YOUR_PUBLIC_KEY=YOUR_PRIVATE_KEY --namespace=TARGET_NAMESPACE

Warning: Do not use apply to create the ejson-keys secret. krane will fail if ejson-keys is prunable. This safeguard is to protect against the accidental deletion of your private keys.

  4. (optional but highly recommended) Back up the keypair somewhere secure, such as a password manager, for disaster recovery purposes.
  5. In your template directory (alongside your Kubernetes templates), create secrets.ejson with the format shown below. The _type key should have the value "kubernetes.io/tls" for TLS secrets and "Opaque" for all others. The data key must be a JSON object, but its keys and values can be whatever you need.
{
  "_public_key": "YOUR_PUBLIC_KEY",
  "kubernetes_secrets": {
    "catphotoscom": {
      "_type": "kubernetes.io/tls",
      "data": {
        "tls.crt": "cert-data-here",
        "tls.key": "key-data-here"
      }
    },
    "monitoring-token": {
      "_type": "Opaque",
      "data": {
        "api-token": "token-value-here"
      }
    }
  }
}
  6. Encrypt the file: ejson encrypt /PATH/TO/secrets.ejson
  7. Commit the encrypted file and deploy. The deploy will create secrets from the data in the kubernetes_secrets key. The ejson file must be included in the resources passed to --filenames; it cannot be read through STDIN.

Note: Since leading underscores in ejson keys are used to skip encryption of the associated value, krane will strip these leading underscores when it creates the keys for the Kubernetes secret data. For example, given the ejson data below, the monitoring-token secret will have keys api-token and property (not _property):

{
  "_public_key": "YOUR_PUBLIC_KEY",
  "kubernetes_secrets": {
    "monitoring-token": {
      "_type": "Opaque",
      "data": {
        "api-token": "EJ[ENCRYPTED]",
        "_property": "some unencrypted value"
      }
    }
  }
}

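The stripping rule can be sketched as follows (an illustration of the behaviour described above, not krane's actual code):

```ruby
# Leading underscores on ejson data keys mark values as unencrypted;
# krane drops the leading underscore when building the Secret's data keys.
def secret_data_key(ejson_key)
  ejson_key.sub(/\A_/, "")
end
```

For example, secret_data_key("_property") returns "property", while keys without a leading underscore pass through unchanged.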
A warning about using EJSON secrets with --selector: when using EJSON to generate Secret resources and specifying a --selector for deployment, the labels from the selector are automatically added to the Secret. If the same EJSON file is deployed to the same namespace using different selectors, this will cause the resource to thrash - even if the contents of the secret were the same, the resource has different labels on each deploy.

Deploying custom resources

By default, krane does not check the status of custom resources; it simply assumes that they deployed successfully. In order to meaningfully monitor the rollout of custom resources, krane supports configuring pass/fail conditions using annotations on CustomResourceDefinitions (CRDs).

Requirements:

  • The custom resource must expose a status subresource with an observedGeneration field.
  • The krane.shopify.io/instance-rollout-conditions annotation must be present on the CRD that defines the custom resource.
  • (optional) The krane.shopify.io/instance-timeout annotation can be added to the CRD that defines the custom resource to override the global default timeout for all instances of that resource. This annotation can use ISO8601 format or unprefixed ISO8601 time components (e.g. '1H', '60S').
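On the CRD, the two annotations might look like this (the resource name is hypothetical):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.stable.shopify.io   # hypothetical
  annotations:
    krane.shopify.io/instance-rollout-conditions: "true"
    krane.shopify.io/instance-timeout: "1H"
```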

Specifying pass/fail conditions

The presence of a valid krane.shopify.io/instance-rollout-conditions annotation on a CRD will cause krane to monitor the rollout of all instances of that custom resource. Its value can either be "true" (giving you the defaults described in the next section) or a valid JSON string with the following format:

'{
  "success_conditions": [
    { "path": <JsonPath expression>, "value": <target value> }
    ... more success conditions
  ],
  "failure_conditions": [
    { "path": <JsonPath expression>, "value": <target value> }
    ... more failure conditions
  ]
}'

For all conditions, path must be a valid JsonPath expression that points to a field in the custom resource's status. value is the value that must be present at path in order to fulfill a condition. For a deployment to be successful, all success_conditions must be fulfilled. Conversely, the deploy will be marked as failed if any one of failure_conditions is fulfilled. success_conditions are mandatory, but failure_conditions can be omitted (the resource will simply time out if it never reaches a successful state).
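The semantics can be sketched as follows (a toy illustration, not krane's implementation; a real JsonPath lookup would replace the plain hash access):

```ruby
# success_conditions must ALL hold for success; ANY fulfilled failure
# condition marks the deploy failed; otherwise the rollout is still in progress.
def rollout_state(status, success_conditions, failure_conditions)
  return :failed  if failure_conditions.any? { |c| status[c[:path]] == c[:value] }
  return :success if success_conditions.all? { |c| status[c[:path]] == c[:value] }
  :in_progress
end
```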

In addition to path and value, a failure condition can also contain error_msg_path or custom_error_msg. error_msg_path is a JsonPath expression that points to a field you want to surface when a failure condition is fulfilled. For example, a status condition may expose a message field that contains a description of the problem it encountered. custom_error_msg is a string that can be used if your custom resource doesn't contain sufficient information to warrant using error_msg_path. Note that custom_error_msg has higher precedence than error_msg_path so it will be used in favor of error_msg_path when both fields are present.
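A failure condition using custom_error_msg might look like this (the path and message are hypothetical):

```json
{
  "failure_conditions": [
    {
      "path": "$.status.conditions[?(@.type == \"Failed\")].status",
      "value": "True",
      "custom_error_msg": "Sync failed; check the controller logs for details."
    }
  ]
}
```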

Warning:

You must ensure that your custom resource controller sets .status.observedGeneration to match the observed .metadata.generation of the monitored resource once its sync is complete. If this does not happen, krane will not check success or failure conditions and the deploy will time out.

Example

As an example, the following is the default configuration that will be used if you set krane.shopify.io/instance-rollout-conditions: "true" on the CRD that defines the custom resources you wish to monitor:

'{
  "success_conditions": [
    {
      "path": "$.status.conditions[?(@.type == \"Ready\")].status",
      "value": "True"
    }
  ],
  "failure_conditions": [
    {
      "path": "$.status.conditions[?(@.type == \"Failed\")].status",
      "value": "True",
      "error_msg_path": "$.status.conditions[?(@.type == \"Failed\")].message"
    }
  ]
}'

The paths defined here are based on the typical status properties defined by the Kubernetes community. The configuration expects the status subresource to contain a conditions array whose entries minimally specify type, status, and message fields.

You can see how these conditions relate to the following resource:

apiVersion: stable.shopify.io/v1
kind: Example
metadata:
  generation: 2
  name: example
  namespace: namespace
spec:
  ...
status:
  observedGeneration: 2
  conditions:
  - type: "Ready"
    status: "False"
    reason: "exampleNotReady"
    message: "resource is not ready"
  - type: "Failed"
    status: "True"
    reason: "exampleFailed"
    message: "resource is failed"
  • observedGeneration == metadata.generation, so krane will check this resource's success and failure conditions.
  • Since $.status.conditions[?(@.type == "Ready")].status == "False", the resource is not considered successful yet.
  • $.status.conditions[?(@.type == "Failed")].status == "True" means that a failure condition has been fulfilled and the resource is considered failed.
  • Since error_msg_path is specified, krane will log the contents of $.status.conditions[?(@.type == "Failed")].message, which in this case is: resource is failed.

Deploy walkthrough

Let's walk through what happens when you run the deploy task with this directory of templates. This particular example uses ERB templates as well, so we'll use the krane render task to achieve that.

You can test this out for yourself by running the following command:

krane render -f test/fixtures/hello-cloud --current-sha 1 | krane deploy my-namespace my-k8s-cluster -f -

As soon as you run this, you'll start seeing some output being streamed to STDERR.

Phase 1: Initializing deploy

In this phase, we:

  • Perform basic validation to ensure we can proceed with the deploy. This includes checking if we can reach the context, if the context is valid, if the namespace exists within the context, and more. We try to validate as much as we can before trying to ship something because we want to avoid having an incomplete deploy in case of a failure (this is especially important because there's no rollback support).
  • List out all the resources we want to deploy (as described in the template files we used).

Phase 2: Checking initial resource statuses

In this phase, we check resource statuses. For each resource listed in the previous step, we check Kubernetes for their status; in the first deploy this might show a bunch of items as "Not Found", but for the deploy of a new version, this is an example of what it could look like:

Certificate/services-foo-tls     Exists
Cloudsql/foo-production          Provisioned
Deployment/jobs                  3 replicas, 3 updatedReplicas, 3 availableReplicas
Deployment/web                   3 replicas, 3 updatedReplicas, 3 availableReplicas
Ingress/web                      Created
Memcached/foo-production         Healthy
Pod/db-migrate-856359            Unknown
Pod/upload-assets-856359         Unknown
Redis/foo-production             Healthy
Service/web                      Selects at least 1 pod

The next phase might be either "Predeploying priority resources" (if there are any) or "Deploying all resources". In this example we'll go through the former, as we do have predeployable resources.

Phase 3: Predeploying priority resources

This is the first phase that could modify the cluster.

In this phase we predeploy certain types of resources (e.g. ConfigMap, PersistentVolumeClaim, Secret, ...) to make sure the latest version will be available when resources that might consume them (e.g. Deployment) are deployed. This phase will be skipped if the templates don't include any resources that would need to be predeployed.

When this runs, we essentially run kubectl apply on those templates and periodically check the cluster for the current status of each resource so we can display error or success information. This will look different depending on the type of resource. If you're running the command described above, you should see something like this in the output:

Deploying ConfigMap/hello-cloud-configmap-data (timeout: 30s)
Successfully deployed in 0.2s: ConfigMap/hello-cloud-configmap-data

Deploying PersistentVolumeClaim/hello-cloud-redis (timeout: 300s)
Successfully deployed in 3.3s: PersistentVolumeClaim/hello-cloud-redis

Deploying Role/role (timeout: 300s)
Don't know how to monitor resources of type Role. Assuming Role/role deployed successfully.
Successfully deployed in 0.2s: Role/role

As you can see, different types of resources might have different timeout values and different success criteria; in some specific cases (such as with Role) we might not know how to confirm success or failure, so we use a higher timeout value and assume it did work.

Phase 4: Deploying all resources

In this phase, we:

  • Deploy all resources found in the templates, including resources that were predeployed in the previous step (which should be treated as a no-op by Kubernetes). We deploy everything so the pruning logic (described below) doesn't remove any predeployed resources.
  • Prune resources not found in the templates (you can disable this by using --no-prune).

Just like in the previous phase, we essentially run kubectl apply on those templates and periodically check the cluster for the current status of each resource so we can display error or success information.

If pruning is enabled (which, again, is the default), any resource whose kind is not on the prune blacklist and which exists in the namespace but not in the templates will be removed. A message about pruning will be printed in the next phase if any resources meet these criteria.

Result

The result section will show:

  • A global status: if all resources were deployed successfully, this will show up as "SUCCESS"; if at least one resource failed to deploy (due to an error or timeout), this will show up as "FAILURE".
  • A list of resources and their individual status: this will show up as something like "Available", "Created", and "1 replica, 1 availableReplica, 1 readyReplica".

At this point the command also returns a status code:

  • If it was a success, 0
  • If there was a timeout, 70
  • If any other failure happened, 1

On timeouts: Note that a single resource timeout or a global deploy timeout doesn't necessarily mean the operation failed. Since Kubernetes updates are asynchronous, something may simply have been too slow to report within the configured time; in those cases, running the deploy again will usually work (and should be a no-op for most, if not all, resources).
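In a CI script, those exit codes can be interpreted like this (a sketch based on the codes listed above; the "safe to retry" interpretation of timeouts is an assumption drawn from the note above, not guaranteed krane behaviour):

```ruby
# 0 = success, 70 = timeout (often safe to retry), anything else = failure
def deploy_outcome(exit_status)
  case exit_status
  when 0  then :success
  when 70 then :timeout
  else         :failure
  end
end
```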

krane global deploy

Ship non-namespaced resources to a cluster

krane global-deploy (accessible through the Ruby API as Krane::GlobalDeployTask) can deploy global (non-namespaced) resources such as PersistentVolume, Namespace, and CustomResourceDefinition. Its interface is very similar to krane deploy.

Usage

krane global-deploy <kube context>

$ cat my-template.yml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: testing-storage-class
      labels:
        app: krane
    provisioner: kubernetes.io/no-provisioner

$ krane global-deploy my-k8s-context -f my-template.yml --selector app=krane

Options:

Refer to krane global-deploy help for the authoritative set of options.

  • --filenames / -f [PATHS]: Accepts a list of directories and/or filenames to specify the set of directories/files that will be deployed. Use - to specify STDIN.
  • --no-prune: Skips pruning of resources that are no longer in your Kubernetes template set. Not recommended, as it allows your cluster to accumulate cruft that is not reflected in your deploy directory.
  • --selector: Instructs krane to only prune resources which match the specified label selector, such as environment=staging. If you use this option, all resource templates must specify matching labels. See Sharing a namespace above.
  • --selector-as-filter: Instructs krane to only deploy resources that are filtered by the specified labels in --selector. The deploy will not fail if not all resources match the labels. This is useful if you only want to deploy a subset of resources within a given YAML file. See Sharing a namespace above.
  • --global-timeout=duration: Raise a timeout error if it takes longer than duration for any resource to deploy.
  • --no-verify-result: Skip verification that resources correctly deployed.

krane restart

krane restart is a tool for restarting all of the pods in one or more deployments, stateful sets, and/or daemon sets. It triggers the restart by patching template metadata with the kubectl.kubernetes.io/restartedAt annotation (with the value being an RFC 3339 representation of the current time). Note that this is the same mechanism kubectl rollout restart itself uses to trigger restarts.

Usage

Option 1: Specify the deployments you want to restart

The following command will restart all pods in the web and jobs deployments:

krane restart <kube namespace> <kube context> --deployments=web jobs

Option 2: Annotate the deployments you want to restart

Add the annotation shipit.shopify.io/restart to all the deployments you want to target, like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  annotations:
    shipit.shopify.io/restart: "true"

With this done, you can use the following command to restart all of them:

krane restart <kube namespace> <kube context>

Options:

Refer to krane help restart for the authoritative set of options.

  • --selector: Only restarts Deployments which match the specified Kubernetes resource selector.
  • --deployments: Restart specific Deployment resources by name.
  • --global-timeout=duration: Raise a timeout error if it takes longer than duration for any resource to restart.
  • --no-verify-result: Skip verification that workloads correctly restarted.

krane run

krane run is a tool for triggering a one-off job, such as a rake task, outside of a deploy.

Prerequisites

  • You've already deployed a PodTemplate object whose template field contains a Pod specification that does not include the apiVersion or kind parameters. An example is provided in this repo in test/fixtures/hello-cloud/template-runner.yml.
  • The Pod specification in that template has a container named task-runner.

Based on this template, krane run will create a new pod with the entrypoint of the task-runner container overridden with the supplied arguments.

Usage

krane run <kube namespace> <kube context> --arguments=<arguments> --command=<command> --template=<template name>

Options:

  • --template=TEMPLATE: Specifies the name of the PodTemplate to use.
  • --env-vars=ENV_VARS: Accepts a list of environment variables to be added to the pod template. For example, --env-vars="ENV=VAL ENV2=VAL2" will make ENV and ENV2 available to the container.
  • --command=: Override the default command in the container image.
  • --no-verify-result: Skip verification of pod success
  • --global-timeout=duration: Raise a timeout error if the pod runs for longer than the specified duration
  • --arguments=ARGS: Override the default arguments for the command with a space-separated list of arguments.

krane render

krane render is a tool for rendering ERB templates to raw Kubernetes YAML. It's useful for outputting YAML that can be passed to other tools, for validation or introspection purposes.

Prerequisites

  • krane render does not require a running cluster or an active Kubernetes context, which is nice if you want to run it in a CI environment, potentially alongside a tool like https://github.com/garethr/kubeval to make sure your configuration is sound.

Usage

To render all templates in your template dir, run:

krane render -f ./path/to/template/dir

To render some templates in a template dir, run krane render with the names of the templates to render:

krane render -f ./path/to/template/dir/this-template.yaml.erb

To render a template in a template dir and output it to a file, run krane render with the name of the template and redirect the output to a file:

krane render -f ./path/to/template/dir/template.yaml.erb > template.yaml

Options:

  • --filenames / -f [PATHS]: Accepts a list of directories and/or filenames to specify the set of directories/files that will be deployed. Use - to specify STDIN.
  • --bindings=BINDINGS: Makes additional variables available to your ERB templates. For example, krane render --bindings=color=blue size=large -f some-template.yaml.erb will expose color and size to some-template.yaml.erb.
  • --current-sha=SHA: Exposes the given SHA as current_sha in ERB bindings.

You can add additional variables using the --bindings=BINDINGS option which can be formatted as a string, JSON string or path to a JSON or YAML file. Complex JSON or YAML data will be converted to a Hash for use in templates. To load a file, the argument should include the relative file path prefixed with an @ sign. An argument error will be raised if the string argument cannot be parsed, the referenced file does not include a valid extension (.json, .yaml or .yml) or the referenced file does not exist.

Bindings examples

# Comma-separated string. Exposes 'color' and 'size'
$ krane render --bindings=color=blue,size=large

# JSON string. Exposes 'color' and 'size'
$ krane render --bindings='{"color":"blue","size":"large"}'

# Load a JSON file from ./config
$ krane render --bindings='@config/production.json'

# Load a YAML file from ./config (.yaml or .yml supported)
$ krane render --bindings='@config/production.yaml'

# Load multiple files via a space-separated string
$ krane render --bindings='@config/production.yaml' '@config/common.yaml'

Using partials

krane supports composing templates from so-called partials in order to reduce duplication in Kubernetes YAML files. Given a directory DIR, partials are searched for in DIR/partials and in DIR/../partials, in that order. They can be embedded in other ERB templates using the helper method partial. For example, if an application needs a number of different CronJob resources, you could place a template called cron in one of those directories and then use it in the main deployment.yaml.erb like so:

<%= partial "cron", name: "cleanup",   schedule: "0 0 * * *", args: %w(cleanup),    cpu: "100m", memory: "100Mi" %>
<%= partial "cron", name: "send-mail", schedule: "0 0 * * *", args: %w(send-mails), cpu: "200m", memory: "256Mi" %>

Inside a partial, parameters can be accessed as normal variables, or via a hash called locals. Thus, the cron template could look like this:

---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron-<%= name %>
spec:
  schedule: "<%= schedule %>"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cron-<%= name %>
            image: ...
            args: <%= args %>
            resources:
              requests:
                cpu: "<%= cpu %>"
                memory: <%= memory %>
          restartPolicy: OnFailure

Both .yaml.erb and .yml.erb file extensions are supported. Refer to partials by their bare filename (e.g. use partial "cron" to reference cron.yaml.erb).

Limitations when using partials

Partials can be included almost anywhere in ERB templates, with one caveat: when using a partial to insert additional key-value pairs into a map, you must use YAML merge keys. For example, given a partial p defining two fields a and b,

a: 1
b: 2

you cannot do this:

x: yz
<%= partial 'p' %>

hoping to get

x: yz
a: 1
b: 2

but you can do:

<<: <%= partial 'p' %>
x: yz

This is a limitation of the current implementation.

Contributing

We ❤️ contributors! To make it easier for you and us, we've written a Contributing Guide.

You can also reach out to us on our slack channel, #krane, at https://kubernetes.slack.com. All are welcome!

Code of Conduct

Everyone is expected to follow our Code of Conduct.

License

The gem is available as open source under the terms of the MIT License.


krane's Issues

Bad output when scaling Deployment to zero


The Deployment was in fact successfully scaled to zero. The zero-replica integration test deploys zero replicas to begin with; we need one that deploys 1 and then scales to 0 in a second deploy.

Summary section inaccurate when deploy aborted

If you kill the deploy somewhere between the initiation and completion of an action, the deploy summary printed will be inaccurate. For example, if you abort the deploy between the predeploy and the main deploy, the summary will say "No actions taken". Ideas for solutions:

  1. Introduce a mechanism for tracking the action currently being attempted and reporting it on failure
  2. Don't print the summary section when the process has been killed
  3. Change the message printed to be more ¯\_(ツ)_/¯

Support Helm-style templating

  • {env}.yaml with most values app maintainers will change parameterized
  • partials
  • probably not ERB (cloudplatform team did prototypes of this in both ERB and Golang)

Rollback should skip unmanaged pods

Currently, Shipit rollbacks using this gem are identical to a deploy of the previous revision, which means that unmanaged pods will get run. This is likely contrary to expected behaviour (though I have not heard any specific complaints), and it unnecessarily slows down rollbacks while the pod is deployed and runs to completion. Note that our primary use case for unmanaged pods is Rails migrations, in which case this is launching a container with the old revision and running rake db:migrate as it would have when that revision was first deployed.

Support more resource types

We only selected a few that our own apps commonly use for the initial rollout. Currently, unrecognized types will be kubectl-applied, and a warning will be logged noting that the script does not know how to check whether they actually came up.

ImagePullBackOff and RunContainerError should fail deploys

There is currently rudimentary detection for ImagePullBackOff and RunContainerError, but it results in warnings most people ignore. These states should fail the deploy either immediately or after they have been seen a couple times in a row (livenessProbe-style).

Related #34.
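The "fail after the state has been seen a couple of times in a row" option could be sketched like this (class and method names are hypothetical, not part of the codebase):

```ruby
# Hypothetical sketch: tolerate a transient bad state (e.g. ImagePullBackOff)
# for a few polling cycles, then fail the deploy once it has been observed
# N times consecutively, livenessProbe-style.
class ConsecutiveFailureTracker
  def initialize(threshold)
    @threshold = threshold
    @streak = 0
  end

  # Call once per polling cycle; returns true when the deploy should fail.
  def observe(bad_state_seen)
    @streak = bad_state_seen ? @streak + 1 : 0
    @streak >= @threshold
  end
end

tracker = ConsecutiveFailureTracker.new(3)
# Seen twice, resolved, then seen three times in a row:
results = [true, true, false, true, true, true].map { |seen| tracker.observe(seen) }
puts results.inspect
# => [false, false, false, false, false, true]
```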

Use --kubeconfig instead of env var in kubectl invocations

Make Kubectl's commands use --kubeconfig instead of relying on the env var. This will enable us to stop modifying the env var in tests. We should still derive the value of the flag from the environment, as is standard in other tools, so this change will be transparent to end users.

Check if Image exists

If you deploy a SHA that doesn't exist, kubernetes-deploy will continue and the container will end up in ImagePullBackOff until the image is up.

Should we consider blocking in kubernetes-deploy or failing the deploy early, instead of resorting to a timeout?

How can we check this with the appropriate credentials?

It's worth noting that in Shopify we do this check in our present deployment wrapper around kubernetes-deploy (Capistrano).

What do you think @KnVerey @kirs?

Deployment rollout monitoring revamp

The fact that deployment rollout monitoring currently looks at all pods associated with that deployment instead of pods in the new ReplicaSet has caused several different bugs:

  • Deploys never succeed when there are evicted pods associated with the deployment, even though those pods are old (fixed another way)
  • Pod warnings get shown for pods that are being shut down, in the case where the last deploy was bad and the current one is actually succeeding (very confusing output)
  • Deploy success is delayed by waiting for all old pods to fully disappear
  • This false-positive deploy result is likely caused by this. Last poll before it "succeeded" was: (I now think it probably briefly became available before failing a probe, or something like that)
[KUBESTATUS] {"group":"Deployments","name":"jobs","status_string":"Waiting for rollout to finish: 0 of 1 updated replicas are available...","exists":true,"succeeded":true,"failed":false,"timed_out":false,"replicas":{"updatedReplicas":1,"replicas":1,"availableReplicas":1},"num_pods":1}

Related technical notes:

  • We are selecting related pods using an assumption that they are labelled with the deployment name. I believe this is a bad assumption that has flown under the radar so far because all our templates are labelled this way by convention. The new ReplicaSet version should not do this.
  • Here's how kubectl gets old/new rs
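Selecting pods by ownership rather than by label could look roughly like this (the helper is hypothetical; the field names follow the Kubernetes API):

```ruby
# Sketch: restrict rollout monitoring to pods owned by the new ReplicaSet,
# instead of every pod that happens to share the deployment-name label.
def pods_in_replica_set(pods, rs_name)
  pods.select do |pod|
    owners = pod.dig("metadata", "ownerReferences") || []
    owners.any? { |o| o["kind"] == "ReplicaSet" && o["name"] == rs_name }
  end
end

pods = [
  { "metadata" => { "name" => "web-old-1", "ownerReferences" => [{ "kind" => "ReplicaSet", "name" => "web-111" }] } },
  { "metadata" => { "name" => "web-new-1", "ownerReferences" => [{ "kind" => "ReplicaSet", "name" => "web-222" }] } },
]
puts pods_in_replica_set(pods, "web-222").map { |p| p.dig("metadata", "name") }.inspect
# => ["web-new-1"]
```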

pod.rb:82:in `block in fetch_logs': undefined method `to_datetime' for nil:NilClass

Got this exception while trying to deploy wedge-viewer for the first time, looks like the @deploy_started variable hasn't been set by the time we try to use it there :(

Full trace:


[INFO][2017-06-06 23:07:21 +0000]	----------------------------Phase 2: Checking initial resource statuses-----------------------------
[INFO][2017-06-06 23:07:23 +0000]	
[INFO][2017-06-06 23:07:23 +0000]	------------------------------------------Result: FAILURE-------------------------------------------
[FATAL][2017-06-06 23:07:23 +0000]	No actions taken
/usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.7.4/lib/kubernetes-deploy/kubernetes_resource/pod.rb:82:in `block in fetch_logs': undefined method `to_datetime' for nil:NilClass (NoMethodError)
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.7.4/lib/kubernetes-deploy/kubernetes_resource/pod.rb:77:in `each'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.7.4/lib/kubernetes-deploy/kubernetes_resource/pod.rb:77:in `each_with_object'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.7.4/lib/kubernetes-deploy/kubernetes_resource/pod.rb:77:in `fetch_logs'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.7.4/lib/kubernetes-deploy/kubernetes_resource/pod.rb:98:in `display_logs'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.7.4/lib/kubernetes-deploy/kubernetes_resource/pod.rb:26:in `sync'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.7.4/lib/kubernetes-deploy/runner.rb:90:in `each'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.7.4/lib/kubernetes-deploy/runner.rb:90:in `run'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.7.4/exe/kubernetes-deploy:61:in `<top (required)>'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/bin/kubernetes-deploy:22:in `load'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/bin/kubernetes-deploy:22:in `<main>'

Deploy here: https://shipit.shopify.io/shopify/wedge-viewer/production/deploys/452708

The first deploy for this stack failed in an interesting way where I forced shipit to deploy before the container was ready (because I forgot there was a container build step), so maybe its failure condition has something to do with why this is happening now? See it here: https://shipit.shopify.io/shopify/wedge-viewer/production/deploys/452703

K8s 1.6 HorizontalPodAutoscaler is no longer supported in extensions/v1beta1

Daemon set timeouts fail

/usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.12.0/lib/kubernetes-deploy/kubernetes_resource/daemon_set.rb:42:in `timeout_message': undefined method `map' for nil:NilClass (NoMethodError)
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.12.0/lib/kubernetes-deploy/kubernetes_resource.rb:130:in `debug_message'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.12.0/lib/kubernetes-deploy/runner.rb:168:in `block in record_statuses'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.12.0/lib/kubernetes-deploy/runner.rb:168:in `each'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.12.0/lib/kubernetes-deploy/runner.rb:168:in `record_statuses'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.12.0/lib/kubernetes-deploy/runner.rb:124:in `run'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/lib/ruby/gems/2.3.0/gems/kubernetes-deploy-0.12.0/exe/kubernetes-deploy:70:in `<top (required)>'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/bin/kubernetes-deploy:22:in `load'
	from /usr/lib/ruby-shopify/ruby-shopify-2.3.3/bin/kubernetes-deploy:22:in `<main>'
kubernetes-deploy ci ci-east --bindings=region=us-east1 exited with status 1

ingress fails to deploy but reports success

I have an invalid ingress that I tried to deploy. It reported success but the ingress did not change or show me the error.

Error should have been: error: ingresses "web" could not be patched: cannot convert int64 to string

What I saw:

[INFO][2017-09-08 19:28:45 +0000]	Deploying resources:
[INFO][2017-09-08 19:28:45 +0000]	- Ingress/web (timeout: 30s)
Successfully deployed in 8.8s: Ingress/web

Support Shipit visualization

This feature will mostly be on the shipit-engine side, but will require some work in this gem. The primary purpose of the KUBESTATUS logs is to support such a visualization. Some thoughts:

  • Ideally these logs would be hidden from the deploy output if possible (though it probably isn't) to make the deploy output itself more human-friendly than it typically is today.
  • Having the visualization help educate app maintainers about what actually happens during kubernetes deploys should be a goal. For example:
    • make surge/unavailability visible
    • don't make it look like pods restart rather than are replaced
  • If we stay with representing the entities being rolled out with coloured boxes, perhaps we could add a hover state revealing more info about that entity, e.g. the name and status string, and the logs if it is a pod and has failed.

Zero-replicas deployments never succeed

Example

The monitoring for both the deployment (which currently has a guard clause requiring at least one replica to be available) and the service (which waits for endpoints before succeeding) is causing this.

Enforce minimum kubernetes version

We currently require Kubernetes v1.6 and will soon move to v1.7. All three executables should check the version in the target cluster and abort early if it doesn't meet requirements.
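An early-abort check could be as simple as the following sketch (the flow is illustrative; a real implementation would read the server version from the target cluster, e.g. via kubectl version):

```ruby
# Hypothetical minimum-version guard to run before any deploy work starts.
MIN_K8S_VERSION = Gem::Version.new("1.6.0")

def cluster_version_supported?(git_version)
  # git_version looks like "v1.7.2", as reported by the API server
  Gem::Version.new(git_version.delete_prefix("v")) >= MIN_K8S_VERSION
end

puts cluster_version_supported?("v1.7.2") # => true
puts cluster_version_supported?("v1.5.7") # => false
```

Using Gem::Version avoids the classic string-comparison trap where "1.10" sorts before "1.6".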

Make Ingress wait to receive an IP

The class for ingresses currently uses a basic exists? check, although it isn't really ready until it has a public IP. It should be feasible to watch for this (see https://kubernetes.io/docs/api-reference/v1.6/#loadbalanceringress-v1-core), but I'm not 100% sure it is worthwhile. We'll need to look into whether there are cases when no IP would be expected and if so whether they're distinguishable. For example, IIRC the nginxinc ingress controller was not writing IPs back to ingress statuses when we were using it. And of course if you have no ingress controller deployed at all, you won't get an IP--I imagine this would be difficult to test in minikube.

Failed deploy reported successful

Syntax errors in the k8s template error out, but the deploy still reports as successful:

[WARN]	The following command failed: kubectl apply -f /tmp/testing-deployment.yml.erb20170209-12190-s9f69e --prune --all --prune-whitelist\=core/v1/ConfigMap --prune-whitelist\=core/v1/Pod --prune-whitelist\=core/v1/Service --prune-whitelist\=batch/v1/Job --prune-whitelist\=extensions/v1beta1/DaemonSet --prune-whitelist\=extensions/v1beta1/Deployment --prune-whitelist\=extensions/v1beta1/HorizontalPodAutoscaler --prune-whitelist\=extensions/v1beta1/Ingress --prune-whitelist\=apps/v1beta1/StatefulSet --namespace\=pipa-test --context\=pipa-test
[WARN]	error: unable to decode "/tmp/testing-deployment.yml.erb20170209-12190-s9f69e": [pos 290]: json: expect char '"' but got char '8'
Waiting for Deployment/pipa-test
[KUBESTATUS] {"group":"Pipa-test deployment","name":"pipa-test-1149558948-5l604","status_string":"Running (Ready: true)","exists":true,"succeeded":true,"failed":false,"timed_out":false}
...
Spent 0.58s waiting for Deployment/pipa-test
Deploy succeeded!

Should this error be caught in shipit-engine or in this gem?

Better error when the kube config has the wrong master IP

Currently seeing:

[INFO][2017-09-11 20:25:03 +0000]	
[INFO][2017-09-11 20:25:03 +0000]	------------------------------------Phase 1: Initializing deploy------------------------------------
[INFO][2017-09-11 20:25:03 +0000]	All required parameters and files are present
[INFO][2017-09-11 20:25:03 +0000]	Context pipa-test found
[INFO][2017-09-11 20:25:34 +0000]	
[INFO][2017-09-11 20:25:34 +0000]	------------------------------------------Result: FAILURE-------------------------------------------
[FATAL][2017-09-11 20:25:34 +0000]	Namespace pipa-test not found

This is a bit confusing to users since it says "Context pipa-test found" which seems to refer to the kube config file, but the actual error occurs at the namespace lookup.

More helpful output when deploys fail

We have more information on why it failed, so we should provide what we can to the user. For example, we could dump relevant pod logs and events, or minimally log an error along the lines of "Go look at your logs or bug tracker" for container-related failures. Note that the cloudplatform team is considering annotating namespaces with app info, possibly including logs/bugs urls, which could be used to enhance such messages if/when available.

Need to support a "restart" button in Shipit

Right now, app maintainers need to push a new commit, triggering a new container build, to "restart" their application. We need a true restart mechanism, ideally enabling users to choose to restart only a specific deployment rather than all of them if desired.

Note that there has been some discussion re: adding a feature like this to kubectl. It's still open, but my read on the tl;dr there is that the maintainers are conceptually opposed to implementing it, since restarts should not be necessary when no changes have been made to the spec and both liveness and readiness probes are passing.

As that k8s issue points out, rather than implementing selective pod deletion in accordance with the deployments rollout strategy, this can be achieved by patching the deployment's podspec, e.g. with an environment variable, a label or an annotation containing a timestamp.
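The annotation-patch approach could be sketched as follows (the annotation key is illustrative; changing the pod template is what forces the deployment controller to replace pods per its rollout strategy):

```ruby
require 'json'
require 'time'

# Sketch: build a strategic-merge patch that bumps a timestamp annotation
# on the deployment's pod template, triggering a rolling restart.
def restart_patch(now = Time.now.utc)
  {
    spec: {
      template: {
        metadata: {
          annotations: { "restartedAt" => now.iso8601 }
        }
      }
    }
  }
end

# The resulting JSON could be passed to `kubectl patch deployment <name> -p ...`
puts JSON.generate(restart_patch)
```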

Error: the server could not find the requested resource

Saw this:

[FATAL][2017-07-31 21:33:12 +0000]	> Error from kubectl:
[FATAL][2017-07-31 21:33:12 +0000]	    Error from server (InternalError): error when applying patch:
[FATAL][2017-07-31 21:33:12 +0000]	    {"spec":{"containers":[{"name":"command-runner","resources":{"limits":{"cpu":"1000m"}}}]}}
[FATAL][2017-07-31 21:33:12 +0000]	    to:
[FATAL][2017-07-31 21:33:12 +0000]	    &{0xc42100b680 0xc420152070 <app> n upload-assets-7b4d9314-1f4a36a4 /tmp/Pod-upload-assets-7b4d9314-1f4a36a420170731-5277-12341p6.yml 0xc4205cca28 0xc420f4c000 82075679 false}
[FATAL][2017-07-31 21:33:12 +0000]	    for: "/tmp/Pod-upload-assets-7b4d9314-1f4a36a420170731-5277-12341p6.yml": an error on the server ("Internal Server Error: \"/api/v1/namespaces/<app>/pods/upload-assets-7b4d9314-1f4a36a4\": the server could not find the requested resource") has prevented the request from succeeding (patch pods upload-assets-7b4d9314-1f4a36a4)
[FATAL][2017-07-31 21:33:12 +0000]	> Rendered template content:

It appears that the pod died while/before it was being updated with the resource limit.

Could this be a concurrency bug? Subsequent deploy passed fine.

Pod CrashBackoffLoop detection and handling

See this deploy for example. We should either:

  1. fail the deploy immediately; or
  2. warn the first 1 or 2 times this condition is seen and fail it on the next occurrence.

Theoretically, this could be caused by a condition that could resolve itself, but I don't recall ever seeing this be the case in practice. As a result I'd lean towards the simpler option (1) at least to start.

Demo Folder?

Hi Shopify,

I am trying to get started using this tool but would love an example or demo folder with sample app, template and template erb to hit the ground running. Is this something that exists in another repo?

Optimize polling interval

The initial interval was chosen pretty arbitrarily, and I have the subjective impression that it is too short, leading deploys to be noisier than necessary. We should gather some data on this and optimize it.

Reevaluate numberAvailable for Daemon Sets after k8s upgrade

What?

  • During testing with Daemon Sets and kubernetes-deploy, we ran into issues using the numberAvailable component of the status field due to it being inconsistent
  • Currently we are using numberReady but we need to reevaluate numberAvailable after a kubernetes version upgrade

related to #148

Increase integration test coverage

  • Regression tests for fixes we merged before the framework was ready (e.g. #31)
  • Cloudsql and Redis third party resources?
  • Up-front validation failures
  • Audit code for cases that result in hard deploy failures and make sure they're all covered

Better documentation

  • Improve readme (better explanation of core functionality and options, screenshots)
  • Convert all existing comment-based docs to rdoc or yard
  • Add comment-based docs for the key methods you need to use the tasks from Ruby instead of from the CLI (i.e. the run and run! methods of each Task class).

Prioritize failed pod as debug log source for split-result RS

We're currently using kubectl's built-in logic for selecting which pod to dump logs from for deployments/replicaSets. That code is here; the tl;dr is that it tries to select a pod that is more likely to actually have logs. When all the pods in an RS are failing, this is perfect. However, when some are succeeding, this logic is likely to select the good ones, which often have a large volume of irrelevant content. We should consider something like:

if @pods.map(&:deploy_succeeded?).uniq.length > 1 # split-result ReplicaSet
  most_useful_pod = @pods.find(&:deploy_failed?) || @pods.find(&:deploy_timed_out?)
  most_useful_pod.fetch_logs
else
  # current logic
end

It's worth noting that in most cases I've seen, the bad pods in a split-result ReplicaSet are failing at a very early stage (can't pull image, can't mount volume, etc.), so in practice the effect might be suppressing irrelevant logs rather than actually grabbing relevant ones.

cc @kirs @karanthukral

Generic Custom Resource Definition support

Instead of hardcoding details about what Shopify's TPR controllers are expected to create, we should provide generic support based on a conventional status field. According to @wfarr's investigation, such a status field will be easier to implement in the new CustomResourceDefinitions.

  • We'd query the cluster for valid CRD types at the beginning of the deploy, before creating resource instances.
  • CRD instances would be expected to expose status fields from which we can derive their success/failure
    • We should have a convention for this that lines up with the fields/values first-party controllers set
    • We can provide an override mechanism, likely an annotation, for specifying custom fields/values
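The conventional-status-field idea could be sketched like this (field names are a hypothetical convention, not an established API):

```ruby
# Sketch: derive a CRD instance's deploy success from a Ready-style
# condition under .status, mirroring how first-party controllers report.
def crd_deploy_succeeded?(instance)
  conditions = instance.dig("status", "conditions") || []
  ready = conditions.find { |c| c["type"] == "Ready" }
  !ready.nil? && ready["status"] == "True"
end

instance = { "status" => { "conditions" => [{ "type" => "Ready", "status" => "True" }] } }
puts crd_deploy_succeeded?(instance) # => true
```

An annotation-based override could then simply swap out which condition type and value this helper looks for.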

Pick a name?

When I extracted the script from shipit-engine, I named the repo as the snippet was named in Shipit. However this isn't a great name for the project because all Shipit snippets are named in a very straightforward manner:

$ ls lib/snippets
deploy-to-gke
extract-gem-version
fetch-gem-version
fetch-heroku-version
push-to-heroku
...

These are great names for a bash snippet in bin/, but not for an individual project.
My point here is that we didn't call it "sysv-resource-limiter", we called it "Semian".
I think this project would benefit from an expressive name.

thoughts? @sirupsen @KnVerey

Revisit supported ruby version

We nominally support ruby 2.1 or greater, and have Rubocop set up accordingly. However, it turns out this has not been true since we added the ActiveSupport dependency:

activesupport-5.1.1 requires ruby version >= 2.2.2, which is incompatible with the current version, ruby 2.1.8p440

Should we:

A) Change the dependency to fall in line with shipit-engine's minimum version as originally intended; or

B) Set an independent, higher requirement.

Given the extent to which we work with variable parsed JSON in this gem, the safe navigation operator introduced in 2.3 would be tremendously useful. And although we certainly designed this gem around Shipit's use case, it isn't all that functionally tied to Shipit. For those reasons, I vote for (B), and making that requirement ruby 2.3.

@Shopify/cloudplatform @kirs

Readiness probe message makes bad assumptions

A test in @karanthukral PR points out that the Pod code is assuming the readinessProbes are HTTP:
(screenshot: the failure message references an HTTP probe even though the pod's readiness probe is not HTTP)
^ This message makes no sense, oops! 😄

I noticed today that that message can also get displayed when the probes are fine but the rollout was reeeallly slow so the container just happens to be starting. Maybe we should change the beginning to be less confident, e.g. "Your pods are running, but are not ready. Please make sure they're passing their readiness probes". And then either not push a probe-specific message if probe_location is blank, or adjust it to work / make sense for both types of probes.

@karanthukral would you be interested in fixing this?

kubernetes-restart command not found

Ran into this trying to restart via shipit

Kube-restart failed
$ bundle exec kubernetes-restart identity-production tier3
pid: 20076
bundler: failed to load command: kubernetes-restart (/app/data/bundler/ruby/2.3.0/bin/kubernetes-restart)

@KnVerey @kirs right now folks trying to do a restart via shipit are hitting this bug
