bonnefoa / kubectl-fzf

438 stars · 7 watchers · 33 forks · 740 KB

A fast kubectl autocompletion with fzf

License: MIT License

Languages: Go 93.53%, Shell 4.75%, Makefile 0.83%, Dockerfile 0.60%, Python 0.30%
Topics: fzf, kubernetes, kubectl, fuzzy-search, bash, autocompletion, completion

kubectl-fzf's Introduction

Kubectl-fzf

kubectl-fzf provides a fast and powerful fzf autocompletion for kubectl.

[asciicast demo]

Features

  • Seamless integration with kubectl autocompletion
  • Fast completion
  • Label autocompletion
  • Automatic namespace switch

Requirements

  • go (minimum version 1.19)
  • awk
  • fzf

Installation

kubectl-fzf binaries

# Completion binary called during autocompletion
go install github.com/bonnefoa/kubectl-fzf/v3/cmd/kubectl-fzf-completion@main
# If you want to run the kubectl-fzf server locally
go install github.com/bonnefoa/kubectl-fzf/v3/cmd/kubectl-fzf-server@main

kubectl-fzf-completion needs to be in your $PATH, so make sure that your $GOPATH bin directory is included:

PATH=$PATH:$GOPATH/bin
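
To make this persistent, a minimal sketch (assuming zsh; adapt the rc file to your shell):

echo 'export PATH=$PATH:$(go env GOPATH)/bin' >> ~/.zshrc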

Shell autocompletion

Source the autocompletion functions:

# bash version
wget https://raw.githubusercontent.com/bonnefoa/kubectl-fzf/main/shell/kubectl_fzf.bash -O ~/.kubectl_fzf.bash
echo "source <(kubectl completion bash)" >> ~/.bashrc
echo "source ~/.kubectl_fzf.bash" >> ~/.bashrc

# zsh version
wget https://raw.githubusercontent.com/bonnefoa/kubectl-fzf/main/shell/kubectl_fzf.plugin.zsh -O ~/.kubectl_fzf.plugin.zsh
echo "source <(kubectl completion zsh)" >> ~/.zshrc
echo "source ~/.kubectl_fzf.plugin.zsh" >> ~/.zshrc

Zsh plugins: Antigen

You can use antigen to load it as a zsh plugin:

antigen bundle bonnefoa/kubectl-fzf@main shell/

kubectl-fzf-server

Install kubectl-fzf-server as a pod

You can deploy kubectl-fzf-server as a pod in your cluster.

From the k8s directory:

helm template --namespace myns --set image.kubectl_fzf_server.tag=v3 --set toleration=aToleration . | kubectl apply -f -
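
To check that the deployment came up, a quick sanity check (the exact pod name depends on the chart values):

kubectl get pods -n myns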

You can check the latest image version here.

Install kubectl-fzf-server as a systemd service

You can install kubectl-fzf-server as a systemd user service.

# Create user systemd config
mkdir -p ~/.config/systemd/user
wget https://raw.githubusercontent.com/bonnefoa/kubectl-fzf/main/systemd/kubectl_fzf_server.service -O ~/.config/systemd/user/kubectl_fzf_server.service
# Set fullpath of kubectl-fzf-server
sed -i "s#INSTALL_PATH#$GOPATH/bin#" ~/.config/systemd/user/kubectl_fzf_server.service

# Reload to pick up new service
systemctl --user daemon-reload

# Start the server
systemctl --user start kubectl_fzf_server.service

# Automatically enable it at startup
systemctl --user enable kubectl_fzf_server.service

# Get log
journalctl --user-unit=kubectl_fzf_server.service
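
# Check that the service is running
systemctl --user status kubectl_fzf_server.service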

Usage

kubectl-fzf-server: local version

flowchart TB
    subgraph TargetCluster
        k8s[api-server]
    end

    subgraph Laptop
        shell[Shell]
        fileNode([/tmp/kubectl_fzf_cache/TargetCluster/pods])
        comp[kubectl-fzf-completion]
        server[kubectl-fzf-server]
    end
    shell -- kubectl get pods TAB --> comp -- Read content and feed it to fzf --> fileNode
    server -- Write autocompletion information --> fileNode

    k8s <-- Watch --o server

kubectl-fzf-server watches cluster resources and keeps the current state of the cluster in local files. By default, files are written to /tmp/kubectl_fzf_cache (configurable via KUBECTL_FZF_CACHE).
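
You can confirm that the server is writing data by inspecting the cache directory (the layout follows the diagram above; the cluster name placeholder below is illustrative):

ls /tmp/kubectl_fzf_cache/<cluster-name>/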

Advantages:

  • Minimal setup needed.
  • The local cache is kept up to date.

Drawbacks:

  • It can be CPU and memory intensive on big clusters.
  • It can also be bandwidth intensive. The most expensive operation is the initial listing, at startup and on error/disconnection. Big namespaces increase the probability of errors during the initial listing.
  • It can generate load on the kube-api servers if multiple users are running it.

To create the cache files necessary for kubectl-fzf, just run the server in a tmux or screen session:

kubectl-fzf-server

It will watch the cluster in the current context. If you switch context, kubectl-fzf-server will detect it and start watching the new cluster. The initial resource listing can be long on big clusters, so autocompletion might need 30s+ to become available.
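
Switching is done with the standard kubectl context command (the context name here is just an example):

kubectl config use-context my-other-cluster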

connect: connection refused or similar messages are expected if there are network issues/interruptions; kubectl-fzf-server will automatically reconnect.

kubectl-fzf-server: pod version

flowchart TB
    subgraph TargetCluster
        k8s[api-server]
        server[kubectl-fzf-server]
    end

    subgraph Laptop
        shell[Shell]
        comp[kubectl-fzf-completion]
    end


    shell -- kubectl get pods TAB --> comp 
    comp -- Through port forward\nGET /k8s/resources/pods --> server

    k8s <-- Watch --o server

If the pod is deployed in your cluster, the autocompletion data will be fetched automatically using port forwarding.
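
The port forward is established for you by the completion binary, but conceptually it is equivalent to something like this (the namespace and service name are assumptions, not necessarily the ones the chart uses):

kubectl port-forward -n myns svc/kubectl-fzf-server 8080:8080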

Advantages:

  • No need to run a local kubectl-fzf-server
  • Only a single instance of kubectl-fzf-server per cluster is needed, lowering the load on the kube-api servers.

Drawbacks:

  • Resources need to be fetched remotely, which can increase completion time. A local cache is maintained to mitigate this.

Completion

Once kubectl-fzf-server is running, you will be able to use kubectl-fzf by invoking the kubectl completion:

# Get fzf completion on pods on all namespaces
kubectl get pod <TAB>

# Open fzf autocompletion on all available label
kubectl get pod -l <TAB>

# Open fzf autocompletion on all available field-selectors. Usually much faster for listing all pods running on a host than kubectl describe node.
kubectl get pod --field-selector <TAB>

# This will fallback to the normal kubectl completion (if sourced) 
kubectl <TAB>

Configuration

By default, the local port used for the port-forward is 8080. You can override it through an environment variable:

export KUBECTL_FZF_PORT_FORWARD_LOCAL_PORT=8081
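
Based on the endpoint shown in the port-forward diagram above, a quick smoke test of the forwarded port could look like this (a hypothetical check; the response format is not documented here):

curl http://localhost:8081/k8s/resources/pods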

Troubleshooting

Debug kubectl-fzf-completion

Build and test a completion with debug logs:

go build ./cmd/kubectl-fzf-completion && KUBECTL_FZF_LOG_LEVEL=debug ./kubectl-fzf-completion k8s_completion 'get pods '  

Force Tab completion to use the completion binary in the current directory:

export KUBECTL_FZF_COMPLETION_BIN=./kubectl-fzf-completion

Debug Tab Completion

To debug Tab completion, you can activate the shell debug logs:

export KUBECTL_FZF_COMP_DEBUG_FILE=/tmp/debug
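
Then trigger a Tab completion and follow the log from another terminal:

tail -f /tmp/debug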

Check that the completion function is correctly sourced:

type kubectl_fzf_completion
kubectl_fzf_completion is a shell function from /home/bonnefoa/.antigen/bundles/kubectl-fzf-main/shell/kubectl_fzf.plugin.zsh

Use zsh completion debug:

kubectl get pods <C-X>?
Trace output left in /tmp/zsh497886kubectl1 (up-history to view)

Debug kubectl-fzf-server

To launch kubectl-fzf-server with debug logs:

kubectl-fzf-server --log-level debug

kubectl-fzf's People

Contributors

andromedarabbit, bonnefoa, brouberol, gashirar, j33ty, kui, liusheng-magictavern-com, n0gu, octplane, tomxor


kubectl-fzf's Issues

Changing zstyle doesn't fix ESC/ctrl+c issue

The tool works fine, although when I'm in interactive mode and try to quit with ESC or Ctrl+C, it pops up three more times before I can actually quit.

I changed the zstyle line in ~/.oh-my-zsh/lib/completion.zsh (line 3 in the snippet below) to:

# case insensitive (all), partial-word and substring completion
if [[ "$CASE_SENSITIVE" = true ]]; then
  zstyle ':completion:*' matcher-list 'r:|=*' #HERE
else
  if [[ "$HYPHEN_INSENSITIVE" = true ]]; then
    zstyle ':completion:*' matcher-list 'm:{a-zA-Z-_}={A-Za-z_-}' 'r:|=*' 'l:|=* r:|=*'
  else
    zstyle ':completion:*' matcher-list 'm:{a-zA-Z}={A-Za-z}' 'r:|=*' 'l:|=* r:|=*'
  fi
fi
unset CASE_SENSITIVE HYPHEN_INSENSITIVE

but that didn't change anything really. Am I configuring the wrong oh-my-zsh file?

FZF Config error on tab completion

Hello! Thanks for making this plugin - I'm very excited to use it!

I'm running into an issue where tab completion does attempt to trigger fzf, but it appears that there is a configuration error. I have included the output of a few commands below:

k get pods invalid preview window layout: down:
invalid preview window layout: down:
invalid preview window layout: down:
k get nodes awk: syntax error at source line 1
 context is
         >>> {a[$5][ <<<
awk: illegal statement at source line 1
awk: illegal statement at source line 1
join: malformed -o option field
invalid preview window layout: down:
awk: syntax error at source line 1
 context is
         >>> {a[$5][ <<<
awk: illegal statement at source line 1
awk: illegal statement at source line 1
join: malformed -o option field
invalid preview window layout: down:
awk: syntax error at source line 1
 context is
         >>> {a[$5][ <<<
awk: illegal statement at source line 1
awk: illegal statement at source line 1
join: malformed -o option field
invalid preview window layout: down:

I've installed using the following:

go get -u github.com/bonnefoa/kubectl-fzf/cmd/cache_builder
# If you update, you need to recompile cache_builder with
go install github.com/bonnefoa/kubectl-fzf/cmd/cache_builder

and


# zsh version
echo "source <(kubectl completion zsh)" >> ~/.zshrc
echo "source $GOPATH/src/github.com/bonnefoa/kubectl-fzf/kubectl_fzf.sh" >> ~/.zshrc

I have run cache_builder successfully.

Any thoughts? I am on OSX.

ARM compatibility?

I can't seem to be able to install it on ARM architecture. Is it compatible, or do I have to wait?

Unauthorized error even after login

Hi,
I have a local cache_builder started as a daemon. To access most of my clusters I have to log in.

The problem is that cache_builder returns Unauthorized errors until I restart it:

E0201 08:48:53.598442    1067 reflector.go:138] pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:167: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized

Would it be possible to support "hot" client refresh?

Thanks!

Completion tries to cut a file that doesn't exist

Hi there,

Just installed the cache_builder. Works great, files are written to the standard tmp directory.
After sourcing the completion for zsh I get empty output:
[Screenshot from 2020-11-13 18-57-35]

If I interrupt, I get the following message:
cut: /tmp/kubectl_fzf_cache/xx/pods_resource: No such file or directory

Do you know what's wrong here and why cache_builder isn't creating this file?

How to install it using zplugin

Hi!

I was using this tool with my previous zplug installation. It was pretty easy to install.

I'm trying to install it using zplugin (which is my plugin manager from now on) and I am not able to make it work. Zplugin also does not have the defer option of zplug (I think the problem might be related to that).

I've already installed cache_builder and it is running, but it still does not work.

Can anyone help?

Issues with k8s.io/client-go and k8s.io/utils/trace

I tried blowing away ~/go/src/k8s.io but am still having issues trying to go get cache_builder:

➜  src go version
go version go1.12.9 darwin/amd64

➜  src go get -u github.com/bonnefoa/kubectl-fzf/cmd/cache_builder
# k8s.io/utils/trace
k8s.io/utils/trace/trace.go:100:57: invalid operation: stepThreshold == 0 || stepDuration > stepThreshold || klog.V(4) (mismatched types bool and klog.Verbose)
k8s.io/utils/trace/trace.go:112:56: invalid operation: stepThreshold == 0 || stepDuration > stepThreshold || klog.V(4) (mismatched types bool and klog.Verbose)
# k8s.io/client-go/transport
k8s.io/client-go/transport/round_trippers.go:70:11: cannot convert klog.V(9) (type klog.Verbose) to type bool
k8s.io/client-go/transport/round_trippers.go:72:11: cannot convert klog.V(8) (type klog.Verbose) to type bool
k8s.io/client-go/transport/round_trippers.go:74:11: cannot convert klog.V(7) (type klog.Verbose) to type bool
k8s.io/client-go/transport/round_trippers.go:76:11: cannot convert klog.V(6) (type klog.Verbose) to type bool

No Auth Provider found for name "gcp"

Hi, great project!

My fzf completion for k8s resources shows an empty list, and once I quit it, it displays the following error:

kubectl get pods awk: cannot open /tmp/kubectl_fzf_cache/my-cluster/pods_header (No such file or directory)
cat: /tmp/kubectl_fzf_cache/my-cluster/pods_header: No such file or directory
cut: invalid field range
Try 'cut --help' for more information.
cat: /tmp/kubectl_fzf_cache/my-cluster/pods: No such file or directory
cut: invalid field range
Try 'cut --help' for more information.
kubectl

I think it might be connected to the error I get when running cache_builder separately:

cache_builder
Fatal error: No Auth Provider found for name "gcp"

Any ideas how to fix that?

Edit: I found that @eekwong did some adjustments for gcp and it worked for me as well.
Here is the commit: eekwong@18b3ff0

It looks more like a workaround than a final solution, but maybe you could incorporate it somehow.

Any idea why kubectl autocompletion triggers a random alias?

I know the problem is on my side, but I cannot figure out why.

When I type kubectl <TAB>, the autocompletion will offer me the list of every command I can use. If I choose any of them and TAB again, it'll just start NPM in my terminal. If I have a package.json present, it'll start npm run test. If not, I get an ENOENT error.

If I remove my test alias, which calls npm run test, then it calls something else.

Thanks in advance if by any chance you could help me with that :)

SSH/Rsync error on tabbing the pods

When I type kubectl get <tab> it shows me the right list of resources. I select pods and tab again for the exact pod, and I get:

ssh: Could not resolve hostname rsync: Name or service not known
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]

But the process does succeed, I get the right pod name and everything works, except there's that error message. I.e. after selecting the pod name I end up with:

$ kubectl get pods ssh: Could not resolve hostname rsync: Name or service not known
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]
 argo-ui-5799bcf744-l8hqb -n argo 
NAME                       READY   STATUS        RESTARTS   AGE
argo-ui-5799bcf744-l8hqb   1/1     Terminating   0          9d

Same happens for other resources as well.

I'm on Ubuntu, using Zsh with Zplug and locally running cache_builder.

Unable to go get on mac

I get the following error and cannot install

$ go get -u github.com/bonnefoa/kubectl-fzf/cmd/cache_builder
package kubectlfzf/pkg/k8sresources: unrecognized import path "kubectlfzf/pkg/k8sresources" (import path does not begin with hostname)
package kubectlfzf/pkg/resourcewatcher: unrecognized import path "kubectlfzf/pkg/resourcewatcher" (import path does not begin with hostname)
package kubectlfzf/pkg/util: unrecognized import path "kubectlfzf/pkg/util" (import path does not begin with hostname)

[BUG] kubectl-fzf-server `--watch-namespaces` does not exclude all other namespaces.

TL;DR:
--watch-namespaces does not exclude all other namespaces. It still watches all namespaces.

Details

Good day,

Thanks for such a useful tool for making Kubernetes administration faster!

Got tired of the slow autocomplete in kubectl. Installed kubectl-fzf, but I use a private cluster that requires a VPN. The VPN is billed per data transferred, so I did not want to monitor all namespaces and resources. I didn't deploy kubectl-fzf-server to the cluster, though that is probably the best solution for my billing issues.

Issue

I started with: kubectl-fzf-server --watch-namespaces e2e-testing
But the output was unexpected: it was still watching all namespaces!

INFO[2023-10-20T12:06:04-04:00]resource_watcher.go:295 github.com/bonnefoa/kubectl-fzf/v3/internal/k8s/resourcewatcher.(*ResourceWatcher).watchResource() Start watch for pods on all namespaces
INFO[2023-10-20T12:09:59-04:00]resource_watcher.go:295 github.com/bonnefoa/kubectl-fzf/v3/internal/k8s/resourcewatcher.(*ResourceWatcher).watchResource() Start watch for configmaps on all namespaces
<etc.>

Workaround

I was able to work around this issue by using --exclude-namespaces with a negative Perl regex, '/^(e2e-testing)/'.

For example: kubectl-fzf-server --watch-namespaces 'e2e-testing' --exclude-namespaces '/^(e2e-testing)/'

Which results in the following output and works well:

INFO[2023-10-20T12:10:53-04:00]resource_watcher.go:195 github.com/bonnefoa/kubectl-fzf/v3/internal/k8s/resourcewatcher.(*ResourceWatcher).FetchNamespaces() namespace ack-lambda-controller not in watched namespace, excluding
<SNIP>
INFO[2023-10-20T12:10:53-04:00]resource_watcher.go:290 github.com/bonnefoa/kubectl-fzf/v3/internal/k8s/resourcewatcher.(*ResourceWatcher).watchResource() Start watch for pods on namespace [e2e-testing]
INFO[2023-10-20T12:10:53-04:00]resource_watcher.go:290 github.com/bonnefoa/kubectl-fzf/v3/internal/k8s/resourcewatcher.(*ResourceWatcher).watchResource() Start watch for configmaps on namespace [e2e-testing]
<SNIP>

Documentation

At first, I thought I would have to recompile kubectl-fzf-server since there was no documentation of the CLI arguments. But I did find them by tracing through the code, and was able to get more details with kubectl-fzf-server --help. The --help documentation says that the CLI arguments are parsed with regex. The regex is Perl, so I had to swap my standard Linux regex for Perl.

Requests

These are just what I saw and don't need to be implemented since there is a workaround. Documenting them for completeness.

  • Update the --help documentation to call out that it uses Perl regex.
  • Please add the kubectl-fzf-server arguments to the documentation.
  • Change how --watch-namespaces works so that it excludes all other namespaces.

Not working with multiple kubectl configs

Looks like cache_builder doesn't work with multiple paths in KUBECONFIG

$ cache_builder
I0603 13:15:47.314887   38163 main.go:190] Building role blacklist from "[]"
goroutine 1 [running]:
runtime/debug.Stack(0xc00004e05b, 0x39, 0x0)
        /usr/local/Cellar/go/1.15.5/libexec/src/runtime/debug/stack.go:24 +0x9f
runtime/debug.PrintStack()
        /usr/local/Cellar/go/1.15.5/libexec/src/runtime/debug/stack.go:16 +0x25
kubectlfzf/pkg/util.FatalIf(0x23c7ae0, 0xc0003da750)
        /Users/anthoninbonnefoy/git-repos/kubectl-fzf/pkg/util/util.go:197 +0x3a
main.getClientConfigAndCluster(0x8, 0x22dbc70, 0xc00035d410)
        /Users/anthoninbonnefoy/git-repos/kubectl-fzf/cmd/cache_builder/main.go:176 +0x105
main.start()
        /Users/anthoninbonnefoy/git-repos/kubectl-fzf/cmd/cache_builder/main.go:222 +0xca
main.main()
        /Users/anthoninbonnefoy/git-repos/kubectl-fzf/cmd/cache_builder/main.go:250 +0x785
Fatal error: open /Users/Martin/.kube/config-k3s:/Users/Martin/.kube/config: no such file or directory

$ echo $KUBECONFIG 
/Users/Martin/.kube/config-k3s:/Users/Martin/.kube/config
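
A possible workaround (a sketch, assuming the kubeconfigs merge cleanly) is to flatten them into a single file and point KUBECONFIG at it:

kubectl config view --flatten > ~/.kube/merged-config
export KUBECONFIG=~/.kube/merged-config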

Cache builder as startup service

Hi!

I'm trying to add cache_builder as a startup service on my computer, but I get the following error from systemctl status:

● cache-builder.service - Cache Builder for Kubectl-FZF Service
     Loaded: loaded (/etc/systemd/system/cache-builder.service; disabled; vendor preset: disabled)
     Active: failed (Result: exit-code) since Mon 2020-03-30 12:35:39 -03; 9min ago
    Process: 37840 ExecStart=/usr/local/bin/cache_builder (code=exited, status=255/EXCEPTION)
   Main PID: 37840 (code=exited, status=255/EXCEPTION)

mar 30 12:35:39 SPF1LT-PJ000443.ldap.quintoandar.com.br cache_builder[37840]:         /Users/anthoninbonnefoy/git-repos/kubectl-fzf/pkg/util/uti>
mar 30 12:35:39 SPF1LT-PJ000443.ldap.quintoandar.com.br cache_builder[37840]: main.getClientConfigAndCluster(0x8, 0x1382dd8, 0xc00027e390)
mar 30 12:35:39 SPF1LT-PJ000443.ldap.quintoandar.com.br cache_builder[37840]:         /Users/anthoninbonnefoy/git-repos/kubectl-fzf/cmd/cache_bu>
mar 30 12:35:39 SPF1LT-PJ000443.ldap.quintoandar.com.br cache_builder[37840]: main.start()
mar 30 12:35:39 SPF1LT-PJ000443.ldap.quintoandar.com.br cache_builder[37840]:         /Users/anthoninbonnefoy/git-repos/kubectl-fzf/cmd/cache_bu>
mar 30 12:35:39 SPF1LT-PJ000443.ldap.quintoandar.com.br cache_builder[37840]: main.main()
mar 30 12:35:39 SPF1LT-PJ000443.ldap.quintoandar.com.br cache_builder[37840]:         /Users/anthoninbonnefoy/git-repos/kubectl-fzf/cmd/cache_bu>
mar 30 12:35:39 SPF1LT-PJ000443.ldap.quintoandar.com.br cache_builder[37840]: Fatal error: open : no such file or directory
mar 30 12:35:39 SPF1LT-PJ000443.ldap.quintoandar.com.br systemd[1]: cache-builder.service: Main process exited, code=exited, status=255/EXCEPTION
mar 30 12:35:39 SPF1LT-PJ000443.ldap.quintoandar.com.br systemd[1]: cache-builder.service: Failed with result 'exit-code'.

Any idea on how to fix it? It appears that it is trying to use a different user upon startup. Also, when I run this as a normal command after startup, it works as expected.
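
One hedged guess: under systemd the service does not inherit the shell's KUBECONFIG, which would explain the empty path in "Fatal error: open : no such file or directory". Setting the variable explicitly in the unit file might help (the path below is hypothetical):

[Service]
Environment="KUBECONFIG=/home/youruser/.kube/config"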

can't use in clusters without read namespace permission

When I try to run the server against a cluster in which I have access to only a certain list of namespaces, the application exits with the following log:

FATA[2022-08-29T17:30:04+04:30]util.go:38 kubectlfzf/pkg/util.FatalIf() Fatal error: error fetching namespaces: namespaces is forbidden: User "xxxxxx" cannot list namespaces at the cluster scope: User "xxxxx" cannot list all namespaces in the cluster

I also tried setting the --excluded-resources nodes,namespaces --watched-namespaces xxxxx flags for the server, but it didn't help. One possible solution I can think of is to provide a way to bypass the execution of this line:

namespaces, err := clientset.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
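
To confirm the permission gap, kubectl's built-in authorization check can be used (xxxxx stands for one of the accessible namespaces, as above):

kubectl auth can-i list namespaces
kubectl auth can-i list pods -n xxxxx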

Configure tab to cycle through options

I didn't see where this is configurable, but it would be nice to have similar or configurable semantics to zsh's normal completion:

  1. tab to cycle through options
  2. space to accept

ARM64 support available?

I didn't get a chance to try, but will this (and cache_builder) compile on ARM64? (i.e. ODROID N2, Pi 4, etc.)
