charts's People

Contributors

air3ijai, alexandrechichmanian, antiarchitect, cortopy, drpebcak, farodin91, haegar, huytran-tiki, jared-schmidt-niceincontact, mrsrvman, naimadswdn, nefelim4ag, pservit, rhzs, rngcntr, rofafor, sangwa, sergeyshaykhullin, sergeyshevch, viceice, vy-nguyentan, zetaab

charts's Issues

Switch to Alpine Linux Docker image

Hello there.

There is now an Alpine Linux based image, e.g. eqalpha/keydb:alpine_x86_64_v6.0.16,
and its size is about 10x smaller (8.33 MB vs. 79.33 MB).

However, I cannot get it to work because the Alpine image does not bundle the bash binary, so we have to change the shebang
from

#!/bin/bash

to

#!/bin/sh

in order to use Alpine's built-in ash.
Some changes to the script are also needed.

Are you interested in switching to the Alpine-based image?
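For reference, a minimal values override that would point the chart at the Alpine tag, assuming the chart's existing imageRepository/imageTag keys (untested with the Alpine image):

imageRepository: eqalpha/keydb
imageTag: alpine_x86_64_v6.0.16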

Cannot set client-output-buffer-limit

Cannot set client-output-buffer-limit via the Helm chart; the pod fails to start with an invalid-syntax error.

Tried:

configExtraArgs:
  client-output-buffer-limit: normal 0 0 0 replica 268435456 67108864 60 pubsub 33554432 8388608 60

and:

configExtraArgs:
  client-output-buffer-limit: pubsub 33554432 8388608 60

serviceMonitor changes

If you have multiple keydb charts installed in the same cluster but in different namespaces, each Prometheus target will scrape all the KeyDB instances in the cluster across all namespaces.

The ServiceMonitor template needs to be changed from

  namespaceSelector:
    any: true

to:

  namespaceSelector:
    matchNames: 
    - {{.Release.Namespace}}

Note: this is not tested with Helm to confirm that it actually works.

[Feature Request] Split image name and tag

This would be very useful when using a private registry to cache upstream images, so we can override the image name once and not have to change the value every time the tag changes.
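A hypothetical values layout for the requested split (the keys below are illustrative, not the chart's current API):

image:
  repository: my-registry.example.com/eqalpha/keydb
  tag: x86_64_v6.3.2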

Scaling up/down - Downtime/Missing data

Hello,

We just tried your Helm chart and it works fine. We also ran some tests to check how it handles scaling up/down events.

Methodology

  1. Create Multi-Master setup with 2 nodes
  2. Run GET <KEY> continuously
  3. Scale up pods
  4. Scale down pods

Test script

while true; do
  reply=$(keydb-cli -h keydb-scaling.default.svc.cluster.local -p 6379 GET test)
  echo "`date +"%T.%3N"` - $reply"
  sleep 0.2
done

Scale up 2 --> 3

11:01:30.116 - test value
11:01:30.428 - 
11:01:30.656 - test value

Scale down 3 --> 2

11:13:11.826 - test value
11:13:12.056 - 
11:13:12.278 - test value

We see that we sometimes get an empty reply, probably because the data is not yet replicated while the pod has already been added to the service.

Is there any way to improve that?

Thank you!
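One possible mitigation, shown only as a sketch (not part of the chart, untested): a stricter readiness probe that reports ready only while no replication link is down. It assumes keydb-cli is available in the container and that KeyDB reports master<N>_link_status lines in INFO replication:

readinessProbe:
  exec:
    command:
      - sh
      - -c
      - |
        # Fail readiness if the server is unreachable or any replication
        # link is reported as down, so the pod is only added to the
        # Service once it has synced.
        if ! out=$(keydb-cli -h localhost -p 6379 INFO replication); then
          exit 1
        fi
        if echo "$out" | grep -q 'link_status:down'; then
          exit 1
        fi
  initialDelaySeconds: 10
  periodSeconds: 10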

redis.conf lost after StatefulSet update

Currently, when we use Access Control Lists (ACL) or update some configuration via the command-line interface (CLI), all changes are lost on pod update, because there is no persistence of the redis.conf file. This behavior is inconvenient, as it requires manually recreating the configuration after each update.

One possible solution is to use the /data directory as the default directory for this file. This approach would allow for the configuration file to persist across updates, ensuring that changes made with ACL or CLI are retained.

To implement this solution, we suggest modifying the secret-utils.yaml file located at https://github.com/Enapter/charts/blob/master/keydb/templates/secret-utils.yaml#L21. Specifically, we recommend updating the file to specify the /data directory as the default directory.

What do you think about it?
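A sketch of the idea for the server.sh script in secret-utils.yaml (the /data/keydb.conf path and the overall shape are illustrative, untested):

server.sh: |
  #!/bin/bash
  set -euxo pipefail
  # Keep a writable copy of the config on the persistent /data volume so that
  # changes made via CONFIG REWRITE or ACL SAVE survive pod restarts.
  if [ ! -f /data/keydb.conf ]; then
    cp /etc/keydb/redis.conf /data/keydb.conf
  fi
  exec keydb-server /data/keydb.conf "$@"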

podAntiAffinity - option to change it from values

Hi, can you add an option to change the podAntiAffinity rules from values?

For example, I want to change it to requiredDuringSchedulingIgnoredDuringExecution, which is not possible via the additionalAffinities variable right now because the STS template has hardcoded podAntiAffinity rules.

My anti-affinity rules:

  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: app.kubernetes.io/instance
          operator: In
          values:
          - "keydb" 
      topologyKey: "kubernetes.io/hostname"       

Thanks

cleanupTempfiles.minutes - default value

Hello,

We just tested how the pod handles multiple restarts during backups.

  1. At some point a snapshot creation may be started and then interrupted
  2. As a result, we may have an unfinished temporary backup file
    drwxr-xr-x. 1 root root          56 Jan  4 11:04 ..
    -rw-r--r--. 1 root root 20547669028 Jan  4 10:02 dump.rdb
    -rw-r--r--. 1 root root  5188599808 Jan  4 11:03 temp-1-3.rdb
    -rw-r--r--. 1 root root  1432674655 Jan  4 10:46 temp-1-9.rdb
    -rw-r--r--. 1 root root  1078273848 Jan  4 10:46 temp-2086607563.1.rdb
    -rw-r--r--. 1 root root           0 Jan  4 11:06 temp-2088105784.1.rdb
    
  3. At the next start KeyDB will load the data and then start to sync from the Master
  4. After the sync it will perform a new backup
  5. This backup can be interrupted and as a result we may have one more temp file.

Doing this in a loop, we may run out of disk space. It is certainly a corner case.

The current value for cleanupTempfiles.minutes is 60 minutes, so it will not delete temp files from previous crashes that happened just a few minutes ago.

What is the main reason to have such a big value?

For the Bitnami Redis chart we use the following:

master:
  preExecCmds: "rm -rf /data/temp*.*"

This deletes all temporary files right before Redis starts.

16 fixable vulnerabilities are present in eqalpha/keydb:x86_64_v6.3.2

Request to update the Docker image

I hope this message finds you well. I would like to use your image, but I have found multiple vulnerabilities in the latest version of the image on ArtifactHub.

There are 16 fixable vulnerabilities present in the latest image.

Could you please update the Docker image to fix these vulnerabilities?

Additionally, could I have access to the Dockerfile? I couldn't find it.

Thank you for your time and efforts.

Resource names are insufficiently truncated to account for generated name suffixes

When trying to install the Enapter chart using some automated tooling we have that generates long, unique, per-user Helm release names, the chart fails to bring up KeyDB because it does not truncate the names sufficiently, e.g.:

> helm install --values keydb.yaml really-really-long-generated-name-abcdef-123456 enapter/keydb
NAME: really-really-long-generated-name-abcdef-123456
LAST DEPLOYED: Tue Sep  8 10:44:56 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
> kubectl get statefulset
NAME                                                    READY   AGE
really-really-long-generated-name-abcdef-123456-keydb   0/3     10s
> kubectl describe statefulset really-really-long-generated-name-abcdef-123456-keydb
Name:               really-really-long-generated-name-abcdef-123456-keydb
Namespace:          default
CreationTimestamp:  Tue, 08 Sep 2020 10:48:14 +0100
Selector:           app.kubernetes.io/instance=really-really-long-generated-name-abcdef-123456,app.kubernetes.io/name=keydb
Labels:             app.kubernetes.io/instance=really-really-long-generated-name-abcdef-123456
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=keydb
                    app.kubernetes.io/version=6.0.13
                    helm.sh/chart=keydb-0.13.0
Annotations:        <none>
Replicas:           3 desired | 0 total
Update Strategy:    RollingUpdate
  Partition:        824641741084
Pods Status:        0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app.kubernetes.io/instance=really-really-long-generated-name-abcdef-123456
                app.kubernetes.io/managed-by=Helm
                app.kubernetes.io/name=keydb
                app.kubernetes.io/version=6.0.13
                helm.sh/chart=keydb-0.13.0
  Annotations:  checksum/secret-utils: 416dc9844a04e7e19ec741bdf97a09a32dbdb735995fb59e9589792329c32113
  Containers:
   keydb:
    Image:      eqalpha/keydb:x86_64_v6.0.13
    Port:       6379/TCP
    Host Port:  0/TCP
    Command:
      /utils/server.sh
    Liveness:     tcp-socket :keydb delay=15s timeout=1s period=10s #success=1 #failure=3
    Readiness:    tcp-socket :keydb delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /data from keydb-data (rw)
      /utils from utils (ro)
  Volumes:
   utils:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  really-really-long-generated-name-abcdef-123456-keydb-utils
    Optional:    false
   keydb-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
Volume Claims:  <none>
Events:
  Type     Reason        Age                From                    Message
  ----     ------        ----               ----                    -------
  Warning  FailedCreate  3s (x13 over 23s)  statefulset-controller  create Pod really-really-long-generated-name-abcdef-123456-keydb-0 in StatefulSet really-really-long-generated-name-abcdef-123456-keydb failed error: Pod "really-really-long-generated-name-abcdef-123456-keydb-0" is invalid: metadata.labels: Invalid value: "really-really-long-generated-name-abcdef-123456-keydb-5b96996b88": must be no more than 63 characters

Where keydb.yaml is the following:

persistentVolume:
  enabled: false

(This is just my local laptop test cluster where I don't have a PV provisioner set up.)

This problem occurs because, while the templates all sensibly truncate generated names to 63 characters (the normal Kubernetes length limit for anything that ends up in a label value or is used for DNS), this does not account for the fact that some resource types generate other resources whose names have additional suffixes applied to them.

So in the above example the StatefulSet has the generated name really-really-long-generated-name-abcdef-123456-keydb, to which a pod ordinal is appended to produce the pod name really-really-long-generated-name-abcdef-123456-keydb-0, which is still below the character limit. But the generated pod hash label, e.g. really-really-long-generated-name-abcdef-123456-keydb-5b96996b88, exceeds the 63-character limit, which causes Kubernetes to refuse to create the pods, so KeyDB can never come up.

Obviously our own tooling could do a better job of generating shorter release names, but equally the chart should try to avoid long names breaking it.
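A common mitigation in other charts, shown here only as a sketch (the 52-character budget and helper name are illustrative, not the chart's actual template), is to truncate the fullname far enough that controller-generated suffixes such as the pod-template-hash still fit within 63 characters:

{{/*
Truncate well below 63 characters so that suffixes appended by the
StatefulSet controller (pod ordinal, pod-template-hash) still fit in a label.
*/}}
{{- define "keydb.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 52 | trimSuffix "-" -}}
{{- end -}}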

KeyDB data won't replicate

Hi,

I tried the default setup on GKE v16 with Helm v3. helm install keydb enapter/keydb

Then I ran the Redis client:

$ kubectl run -it redis-cli --image=redis --restart=Never /bin/bash
root@redis-cli:/data# redis-cli -c -p 6379 -h 10.117.44.3
10.117.44.3:6379> set foo bar
OK
10.117.44.3:6379> get foo
-> Redirected to slot [12182] located at 10.117.44.7:6379
"bar"
10.117.44.7:6379> quit
root@redis-cli:/data# redis-cli -c -p 6379 -h 10.8.2.11
10.8.2.11:6379> get foo
(nil)          ----> THIS IS SUPPOSED TO RETURN "bar" in a multi-master environment

Any idea why I can't get the data on the second pod at 10.8.2.11?

uuidv4 usage causes GitOps OutOfSync

I started to use this chart with ArgoCD and I get an OutOfSync every time the config is refreshed.
Can we add the ability to specify a static non-existent key instead of using uuidv4 in ping_readiness_local.sh?
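A sketch of the idea (illustrative only; the key name and the script's shape are assumptions, not the chart's current content):

ping_readiness_local.sh: |
  #!/bin/sh
  # Use a fixed, never-written key instead of a templated uuidv4 so the
  # rendered manifest stays stable between helm template runs and does not
  # show up as an ArgoCD diff.
  response=$(keydb-cli -h localhost -p 6379 GET "__keydb-readiness-probe__")
  if [ $? -ne 0 ] || [ -n "$response" ]; then
    echo "$response"
    exit 1
  fi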

Chart claim volume

Hello,
I use the chart to deploy a multi-master KeyDB cluster.
I have set the value persistentVolume.enabled to false:
persistentVolume.enabled: "false"
Despite this, a volume is claimed, created and attached to each pod.
It seems to be a bug, can you confirm?
Thanks,
Arnaud
The values used are:

values:
  imageRepository: eqalpha/keydb
  imageTag: x86_64_v6.3.2
  imagePullPolicy: IfNotPresent
  nodes: 3
  multiMaster: "yes"
  activeReplicas: "yes"
  protectedMode: "no"
  appendonly: "no"
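One likely explanation, offered as an assumption rather than a confirmed diagnosis: "false" in quotes is a non-empty string, which Go template conditionals treat as truthy, so the chart still renders the volume claim. Setting the value as an unquoted boolean should actually disable it:

persistentVolume:
  enabled: false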

Add save "" support

Hi,

I would like to pass a simple --save "" argument to keydb-server to disable background saving, but configExtraArgs in values.yaml doesn't let me do that, because the templating messes up the result.

Could you help me with the correct syntax please?

KeyDB Exporter extraArgs doesn't work as intended

Follow-up of yesterday's PR. This config:

exporter:
    enabled: true
    extraArgs:
      - count-keys: "some-key*,some-other-key*"

compiles and generates this output:

args:
  - >-
    --count-keys
    "some-key*,some-other-key*"
    \

That, however, causes the exporter to fail with the following log:

flag provided but not defined: -count-keys "some-key*,some-other-key*" \

I have not yet figured out why it says -count-keys instead of --count-keys, but I assume that this is part of the issue.

KeyDB replication questions

How does KeyDB replication and the master-slave setup work with this chart?
As I understand it, each master node can be a slave of another. Is that true?
In a 3-node setup, do I have 3 master nodes? If one goes down, do I lose the cluster?
How do I specify the number of slaves on each master?
Does adding more nodes require resharding?

Zombie processes from readiness / liveness probes

I've got a lot of zombie processes.

       0 2382791  0.8  0.2 712820  8716 ?        Sl   May17  88:11 /var/lib/rancher/k3s/data/8c2b0191f6e36ec6f3cb68e2302fcc4be850c6db31ec5f8a74e4b3be403101d8/bin/containerd-shim-runc-v2 -namespace k8s.io -id 21da19fa6f824bc4dd21aafe9148d07e95886390c7ef9caad10dcb181b585f58 -address /run/k3s/containerd/containerd.sock
   65535 2382814  0.0  0.0    972     4 ?        Ss   May17   0:00  \_ /pause
       0 1523052  1.5  0.6 648720 23988 ?        Ssl  May19  96:49  \_ keydb-server 0.0.0.0:6379
       0 2207040  0.0  0.0      0     0 ?        Z    May20   0:00      \_ [ping_readiness_] <defunct>
       0 2398171  0.0  0.0      0     0 ?        Z    May20   0:00      \_ [ping_readiness_] <defunct>
       0 2419093  0.0  0.0      0     0 ?        Z    May20   0:00      \_ [ping_readiness_] <defunct>
       0 2921360  0.0  0.0      0     0 ?        Z    May20   0:00      \_ [ping_readiness_] <defunct>
       0 2921383  0.0  0.0      0     0 ?        Z    May20   0:00      \_ [ping_liveness_l] <defunct>
       0 3941935  0.0  0.0      0     0 ?        Z    May20   0:00      \_ [ping_liveness_l] <defunct>
       0 3941970  0.0  0.0      0     0 ?        Z    May20   0:00      \_ [ping_readiness_] <defunct>
       0 3942325  0.0  0.0      0     0 ?        Z    May20   0:00      \_ [ping_readiness_] <defunct>
       0  517206  0.0  0.0      0     0 ?        Z    May20   0:00      \_ [ping_readiness_] <defunct>
       0  517224  0.0  0.0      0     0 ?        Z    May20   0:00      \_ [ping_liveness_l] <defunct>
       0 1082427  0.0  0.0      0     0 ?        Z    May21   0:00      \_ [ping_readiness_] <defunct>
       0 1292829  0.0  0.0      0     0 ?        Z    May21   0:00      \_ [ping_readiness_] <defunct>
       0 3612252  0.0  0.0      0     0 ?        Z    May21   0:00      \_ [ping_readiness_] <defunct>
       0 3999899  0.0  0.0      0     0 ?        Z    May22   0:00      \_ [ping_readiness_] <defunct>
       0  316962  0.0  0.0      0     0 ?        Z    May22   0:00      \_ [ping_readiness_] <defunct>
       0 1221761  0.0  0.0      0     0 ?        Z    May22   0:00      \_ [ping_readiness_] <defunct>
       0 2383088  0.0  0.0      0     0 ?        Z    May22   0:00      \_ [ping_readiness_] <defunct>
       0 2770818  0.0  0.0      0     0 ?        Z    May23   0:00      \_ [ping_readiness_] <defunct>
       0 2899448  0.0  0.0      0     0 ?        Z    May23   0:00      \_ [ping_readiness_] <defunct>
       0 4044700  0.0  0.0      0     0 ?        Z    May23   0:00      \_ [ping_readiness_] <defunct>
       0  235003  0.0  0.0      0     0 ?        Z    May23   0:00      \_ [ping_readiness_] <defunct>
       0 1007972  0.0  0.0      0     0 ?        Z    May23   0:00      \_ [ping_readiness_] <defunct>
       0 1203442  0.0  0.0      0     0 ?        Z    May23   0:00      \_ [ping_readiness_] <defunct>
       0 1203464  0.0  0.0      0     0 ?        Z    May23   0:00      \_ [ping_liveness_l] <defunct>
       0 1203886  0.0  0.0      0     0 ?        Z    May23   0:00      \_ [ping_liveness_l] <defunct>
       0 1203888  0.0  0.0      0     0 ?        Z    May23   0:00      \_ [ping_readiness_] <defunct>
       0 1204235  0.0  0.0      0     0 ?        Z    May23   0:00      \_ [ping_readiness_] <defunct>
       0 2429265  0.0  0.0      0     0 ?        Z    06:33   0:00      \_ [ping_readiness_] <defunct>
       0 2451119  0.0  0.0      0     0 ?        Z    06:42   0:00      \_ [ping_readiness_] <defunct>
       0 2466469  0.0  0.0      0     0 ?        Z    06:49   0:00      \_ [ping_readiness_] <defunct>
       0 2557980  0.0  0.0      0     0 ?        Z    07:32   0:00      \_ [ping_readiness_] <defunct>

values.yml

persistentVolume:
  enabled: true
  storageClass: local-path
  size: 1Gi

resources:
  requests:
    memory: 64Mi
  limits:
    memory: 256Mi

loadBalancer:
  enabled: true
  extraSpec:
    externalTrafficPolicy: Local
    loadBalancerIP: 1.2.3.4

existingSecret: some-secret

PVC created with no access modes or storage class

I am using the following YAML to deploy KeyDB into my cluster:

---
# Source: keydb/templates/cm-utils.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: keydb-utils
  labels:
    helm.sh/chart: keydb-0.8.0
    app.kubernetes.io/name: keydb
    app.kubernetes.io/instance: keydb
    app.kubernetes.io/version: "5.3.3"
    app.kubernetes.io/managed-by: Helm
data:
  server.sh: |
    #!/bin/bash
    set -euxo pipefail

    host="$(hostname)"
    port="6379"
    replicas=()
    for node in {0..2}; do
      if [ "$host" != "keydb-${node}" ]; then
          replicas+=("--replicaof keydb-${node}.keydb ${port}")
      fi
    done
    keydb-server /etc/keydb/redis.conf \
        --active-replica yes \
        --multi-master yes \
        --appendonly no \
        --bind 0.0.0.0 \
        --port "$port" \
        --protected-mode no \
        --server-threads 2 \
        "${replicas[@]}"
---
# Source: keydb/templates/svc.yaml
# Headless service for proper name resolution
apiVersion: v1
kind: Service
metadata:
  name: keydb
  labels:
    helm.sh/chart: keydb-0.8.0
    app.kubernetes.io/name: keydb
    app.kubernetes.io/instance: keydb
    app.kubernetes.io/version: "5.3.3"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: server
    port: 6379
    protocol: TCP
    targetPort: keydb
  selector:
    app.kubernetes.io/name: keydb
    app.kubernetes.io/instance: keydb
---
# Source: keydb/templates/sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: keydb
  labels:
    helm.sh/chart: keydb-0.8.0
    app.kubernetes.io/name: keydb
    app.kubernetes.io/instance: keydb
    app.kubernetes.io/version: "5.3.3"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 3
  serviceName: keydb
  selector:
    matchLabels:
      app.kubernetes.io/name: keydb
      app.kubernetes.io/instance: keydb
  template:
    metadata:
      annotations:
        checksum/cm-utils: e0806d2d0698a10e54131bde1119e44c51842191a777c154c308eab52ebb2ec7
      labels:
        helm.sh/chart: keydb-0.8.0
        app.kubernetes.io/name: keydb
        app.kubernetes.io/instance: keydb
        app.kubernetes.io/version: "5.3.3"
        app.kubernetes.io/managed-by: Helm
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - keydb
            topologyKey: kubernetes.io/hostname
      containers:
      - name: keydb
        image: eqalpha/keydb:x86_64_v5.3.3
        imagePullPolicy: IfNotPresent
        command:
        - /utils/server.sh
        ports:
        - name: keydb
          containerPort: 6379
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: keydb
        readinessProbe:
          tcpSocket:
            port: keydb
        resources:
          limits:
            cpu: 200m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 1Gi
        volumeMounts:
        - name: keydb-data
          mountPath: /data
        - name: utils
          mountPath: /utils
          readOnly: true
      volumes:
      - name: utils
        configMap:
          name: keydb-utils
          defaultMode: 0700
          items:
          - key: server.sh
            path: server.sh
  volumeClaimTemplates:
  - metadata:
      name: keydb-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
      storageClassName: "gp2"

But the pod is not getting scheduled, with the following error:

 status:                                                                                                                                                                                                  
  conditions:                                                                                                                                                                                            
  - lastProbeTime: null                                                                                                                                                                                  
    lastTransitionTime: "2020-04-24T15:44:39Z"                                                                                                                                                           
    message: pod has unbound immediate PersistentVolumeClaims (repeated 3 times)                                                                                                                         
    reason: Unschedulable                                                                                                                                                                                
    status: "False"                                                                                                                                                                                      
    type: PodScheduled                                                                                                                                                                                   
  phase: Pending                                                                                                                                                                                         
  qosClass: Burstable

When I check the PVC, it's created with no access modes or storage class.

$ kubectl get  pvc
NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
keydb-data-keydb-0   Pending 

Secret specific key

By default, the key checked in the secret is "password".

Making this key configurable rather than hardcoded could be the solution.
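A hypothetical values shape for this (the second key name is illustrative, not part of the chart's current API):

existingSecret: my-keydb-secret
existingSecretPasswordKey: keydb-password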

[Feature request] Tolerations setting

Hi, thanks for creating this chart for KeyDB.

Could I open a PR to add a tolerations setting to this chart?

My use case is running KeyDB on a separate node pool.

I have already tried this with my local copy of the chart.

Thank you.
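A sketch of the requested values addition (the tolerations key is the proposed new value; the taint key and value are illustrative):

tolerations:
  - key: dedicated
    operator: Equal
    value: keydb
    effect: NoSchedule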

Allow setting of more Config Options

Right now the only way I see to change config options like save and maxmemory is to build my own Docker image with the redis.conf inside. Is it possible to expose configuring redis.conf through values.yaml?
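For reference, the chart's configExtraArgs value (referenced in other issues above) passes individual options to keydb-server as command-line flags; a sketch, untested:

configExtraArgs:
  maxmemory: 1gb
  maxmemory-policy: allkeys-lru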

Feature request: Install with restore from rdb.

Fresh install works perfectly.

We have a use case where we'd use KeyDB instead of redis, but we need to do install-with-restore (in our case from an rdb file stored in a bucket).

While it's reasonable to assume that the mechanism for getting the RDB from s3 might be too large an ask, it would be nice to have:

  1. Fetch an RDB (i.e. wget by default) - this could be a URL, or a passthrough to a shell script, to allow someone to write their own handler to get the file down to disk (see the init-container sketch after this list).
  2. restore the rdb as part of initial bootup
  3. mark the restore as done so a helm upgrade doesn't retry it.
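A sketch of how steps 1-3 could look as an extra init container; everything below (image, env name, URL, marker file) is illustrative and not part of the chart:

initContainers:
  - name: restore-rdb
    image: busybox:1.36
    command:
      - sh
      - -c
      - |
        # Restore only on first boot; the marker file doubles as the
        # "restore done" flag so a helm upgrade does not retry it.
        if [ ! -f /data/.restore-done ]; then
          wget -O /data/dump.rdb "$RESTORE_RDB_URL"
          touch /data/.restore-done
        fi
    env:
      - name: RESTORE_RDB_URL
        value: "https://example.com/backups/dump.rdb"
    volumeMounts:
      - name: keydb-data
        mountPath: /data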

If it's an unreasonable request, please let me know, and we will evaluate the effort required to fork and apply this ourselves.

Thanks!

Add redis-exporter port to svc

The service created, keydb, does not expose the redis-exporter port.

Could you please add the redis-exporter port to the service created by default? This would ease setting up the Prometheus ServiceMonitor.
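A sketch of the desired ports section on the service (9121 is the redis_exporter default port; the port name is illustrative):

ports:
  - name: server
    port: 6379
    protocol: TCP
    targetPort: keydb
  - name: redis-exporter
    port: 9121
    protocol: TCP
    targetPort: redis-exporter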

Allow using default StorageClass

The current configuration makes it impossible to automatically use whatever the default StorageClass is for the cluster. If persistence is enabled and no storage class is specified, the chart should fall back to the cluster default.
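A sketch of the Helm pattern commonly used for this (illustrative, not the chart's current template): omit storageClassName entirely when no class is set so the cluster default applies, and use "-" to force an empty storageClassName:

{{- if .Values.persistentVolume.storageClass }}
{{- if (eq "-" .Values.persistentVolume.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistentVolume.storageClass }}"
{{- end }}
{{- end }}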
