flant / loghouse

Ready to use log management solution for Kubernetes storing data in ClickHouse and providing web UI.

License: Apache License 2.0

Ruby 44.28% JavaScript 21.66% CSS 2.70% HTML 22.99% Shell 1.47% Dockerfile 2.94% Smarty 3.96%
kubernetes clickhouse fluentd logs

loghouse's Introduction


UPDATE (December'20): Please note loghouse is no longer being actively developed.

Back in 2017 (when we created this project), there was no other solution to get what we at Flant desperately needed. However, creating such tools is outside our primary focus, so while maintaining loghouse we have also been waiting for other projects to emerge. Luckily, the Kubernetes ecosystem grows and evolves at an amazing pace. Today, we are happy to acknowledge the existence of other solid log management solutions we can rely on. For most cases, we use Loki. (Graylog and Elasticsearch-based solutions, i.e. ELK and EFK, might be other options to consider.)

This means we don't need to improve loghouse anymore. Since it's still Open Source, you are very welcome to contribute (or even contact us about becoming a maintainer).


Ready-to-use log management solution for Kubernetes. It efficiently stores large volumes of logs (in a ClickHouse database), processes them using a simple query language, and lets you monitor them online through a web UI. Easy and quick to deploy into an already functioning Kubernetes cluster.

Status is alpha. However, we (Flant) have been using it in our production Kubernetes deployments since September 2017. The data structure might change during alpha releases, so please be careful when updating (all relevant information is published in the corresponding release notes). The data structure will become stable in the beta version.

Loghouse-dashboard UI demo in action (~3 MB):

loghouse web UI

Features

  • Collecting and storing logs from all Kubernetes pods efficiently:
    • fluentd (installed on each K8s node) processes up to 10,000 log entries per second while consuming 300 MB of RAM.
    • ClickHouse keeps disk space usage minimal. Examples of logs stored in our production deployments: 3.7 million entries require 1.2 GB, 300 million — 13 GB, 5.35 billion — 54 GB.
  • Simple query language: easily select entries by exact key values or regular expressions; multiple conditions are supported with AND/OR. Learn more in the query language docs.
  • Selecting entries based on additional container data available in the Kubernetes API (pod and container names, host, namespace, labels, etc).
  • Quick & straightforward deployment to Kubernetes via Dockerfiles and a Helm chart.
  • A cosy yet powerful web UI:
    • Papertrail-like user experience.
    • Customizable time frames: from date to date / from now back over a given period (last hour, last day, etc) / seek to a specific time and show logs around it.
    • Infinite scrolling of older log entries.
    • Saving your queries for future use.
    • Basic permissions (limiting the entries shown to users by specifying Kubernetes namespaces).
    • Exporting the current query's results to CSV (more formats will be supported).
  • Monitoring of fluentd and ClickHouse via Prometheus, with Grafana dashboards for both.

Installation

To install loghouse, you need Helm. The minimum supported Kubernetes version is 1.9. It is also assumed that cert-manager is already installed in your cluster.

The whole process is as simple as these two steps:

  1. Add the loghouse charts repository:

# helm repo add loghouse https://flant.github.io/loghouse/charts/

  2. Install the chart.

2.1. The easy way:

# helm fetch loghouse/loghouse --untar
# vim loghouse/values.yaml
# helm install --namespace loghouse -n loghouse loghouse

Note: use the --timeout 1200 flag in case of slow image pulling.

2.2. With specific parameters (check the variables in the chart's values.yaml; not documented yet):

# helm install -n loghouse loghouse/loghouse --set 'param=value' ...

The web UI (loghouse-dashboard) will be reachable at the address specified in the values.yaml config as loghouse_host. You'll be prompted for basic auth credentials, which are generated via htpasswd and configured in the auth parameter of your values.yaml.
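For example, an entry produced with `htpasswd -nb username password` would be placed in values.yaml roughly like this (the key names and hash are illustrative; consult the chart's actual values.yaml):

```yaml
loghouse_host: loghouse.example.com      # address the dashboard will answer on
auth: "username:$apr1$exampleHashValue"  # htpasswd-format credentials (illustrative)
```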

To clean up old logs via cron, you can use the script from this issue.
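For reference, a minimal cleanup sketch, assuming per-day tables named logsYYYYMMDD (the table name here is hypothetical; inspect system.tables in your installation for the actual layout):

```sql
-- Illustrative only: free space by removing a day of logs older than
-- the retention window. Verify the real table names in system.tables first.
DROP TABLE IF EXISTS logs.logs20171101;
```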

Architecture

loghouse architecture

A pod with fluentd collecting logs is installed on each node of your Kubernetes cluster. Technically, this is implemented as a DaemonSet with tolerations for all possible taints, so it covers all available nodes. Log directories from all hosts are mounted into the fluentd pods and watched by fluentd. The kubernetes_metadata filter is applied to all Docker container logs to fetch additional information about containers via the Kubernetes API. Another filter, record_modifier, then "prepares" all the data we have. The last step passes this data to a fluentd output plugin that executes the clickhouse-client CLI tool to insert new entries into the ClickHouse DBMS.
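The pipeline above corresponds roughly to a fluentd configuration like the following (a simplified sketch; the actual chart templates differ in paths, buffering, and plugin options):

```
<source>
  @type tail                      # watch mounted container log files
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  format json
</source>

<filter kubernetes.**>
  @type kubernetes_metadata       # enrich records via the Kubernetes API
</filter>

<filter kubernetes.**>
  @type record_modifier           # normalize/prepare fields for insertion
</filter>

<match kubernetes.**>
  @type exec                      # hand buffered chunks to a script that
  command bash /usr/local/bin/insert_ch.sh   # runs clickhouse-client
  format json
</match>
```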

Note on log format: if a log entry is in JSON, it will be stored according to its values' types, i.e. each field goes into the corresponding table: string_fields, number_fields, boolean_fields, null_fields or labels (the last one holds container labels to make further filtering and lookups easy). ClickHouse built-in functions are used to process these data types. If a log entry isn't in JSON, it is stored in the string_fields table.
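The per-type routing can be sketched in Ruby (a hypothetical helper for illustration, not the actual loghouse code):

```ruby
require 'json'

# Route each field of a parsed log entry into a type-specific group,
# mirroring the string_fields / number_fields / boolean_fields /
# null_fields layout described above.
def split_by_type(entry)
  groups = { string_fields: {}, number_fields: {}, boolean_fields: {}, null_fields: [] }
  entry.each do |key, value|
    case value
    when Numeric               then groups[:number_fields][key] = value
    when TrueClass, FalseClass then groups[:boolean_fields][key] = value
    when NilClass              then groups[:null_fields] << key
    else                            groups[:string_fields][key] = value.to_s
    end
  end
  groups
end

entry = JSON.parse('{"msg":"ready","code":200,"ok":true,"trace":null}')
split_by_type(entry)
```

A non-JSON line would simply be stored whole as a string field, skipping this routing.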

By default, ClickHouse DBMS is deployed as a single instance via a StatefulSet, which puts this instance on a random K8s node (this behaviour can be changed by using nodeSelector and tolerations to choose a specific node). ClickHouse stores its data in a hostPath volume or a Persistent Volume Claim (PVC) created with any storageClass you prefer. You can find other deployment options for ClickHouse (e.g. using an external ClickHouse instance) here.
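For instance, pinning ClickHouse to a dedicated node might look like this in values.yaml (the key names are illustrative; check the chart's actual values.yaml):

```yaml
clickhouse:
  nodeSelector:
    loghouse-clickhouse: "true"   # label the chosen node with this key/value
  tolerations:
    - key: dedicated
      operator: Equal
      value: clickhouse
      effect: NoSchedule
```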

Web UI (screenshot) is composed of two components:

  • frontend — nginx with basic authorization. This authorization is used to limit each user's access to logs from given Kubernetes namespaces only;
  • backend — Ruby application displaying logs from ClickHouse.

ClickHouse and Fluentd components have optional Prometheus exporters. You can find our default dashboards for Grafana here.

Upgrading

In v0.3.0, the Helm chart for loghouse was rewritten. All API versions as well as the Helm hook policy have been updated. To upgrade your loghouse installation to v0.3.0, you have to remove the conflicting objects:

kubectl -n loghouse delete jobs,ing --all

Warning! The database schema has also changed. Please make a backup of your ClickHouse data before upgrading. Note that the migration task will start automatically at the end of the upgrade process.

Roadmap

We're going to add log filtering, other deployment options (e.g. ClickHouse instances on each K8s node), migrate the frontend to AngularJS and the backend to Golang, add a command-line interface (CLI), and much more.

More details will be available in our issues.

loghouse's People

Contributors

andrewkoryakin, dependabot[bot], diafour, dm3ch, gsmetal, gyrter, ilya-lesikov, kolashtov, kramarama, max107, may-cat, quite4work, qw1mb0, shurup, tyranron, valent-ex, vitaliy-sn, z9r5, zuzzas

loghouse's Issues

Query improvements

Users should be able to save queries; an admin can save public queries.
Queries can be saved in a ConfigMap.

Simplify downloading records

The download button should start a background server-side process that archives the resulting records, so the user can see a progress bar for that process. Once archiving is complete, a secret temporary link becomes available; the user can download the file by clicking on it or share the link. The link should expire and the archive should be deleted from the server after 2 days.

Details:

  • Make it possible to download data in different formats (csv, json …)
  • Provide an option (checkbox) to compress (or not) the resulting file
  • Use ClickHouse capabilities (instead of Ruby) for producing records in several formats
  • Implement a queue (in the backend) for background processes
  • Treat a download job as a system entity (not a one-time task) that should be visible in the UI

Get rid of 2 selects for displaying/hiding columns

  • Remove the two column selects. Use a text input with autocomplete plus a select listing the displayed columns instead.
  • If a column is used in a strict-condition filter, this column should be hidden automatically. For example, clicking on an “instance=worker-1” label should add this condition to the filter and remove the “instance” column from the visible list.
  • Autocomplete must be empty if all columns are visible.
  • A Reset button. It should reveal all columns.

Highlight search results

When searching for a part of a big field by regular expression, it is very difficult to see where the matching part is. Search results should be highlighted.

Negation in queries

Sometimes it is necessary to use global negation in queries, e.g. not (expr1 and expr2).

Settings for log rotation (max_days, max_size)

Old logs should be cleaned up from time to time. Three options will be introduced for that purpose:

  • max_days N — keep records not older than N days;
  • max_size M — store no more than M GB of records;
  • max_days N + max_size M — if both options are in use, both limits (in days and GB) should always be applied.
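How the combined limits might interact can be sketched as follows (an illustrative Ruby sketch, not the actual implementation; per-day partition sizes are hypothetical):

```ruby
require 'date'

# Sketch of log rotation combining max_days and max_size (GB) limits.
# partitions maps a day to the size (in GB) of that day's log records.
def partitions_to_drop(partitions, max_days:, max_size:, today: Date.new(2017, 11, 27))
  # First enforce the age limit: anything older than max_days goes.
  drop = partitions.keys.select { |day| (today - day) > max_days }
  kept = partitions.reject { |day, _| drop.include?(day) }
  # Then enforce the size cap, dropping the oldest remaining days first.
  while kept.values.sum > max_size
    oldest = kept.keys.min
    drop << oldest
    kept.delete(oldest)
  end
  drop.sort
end
```

With max_days: 5 and max_size: 15 against three 10 GB days (Nov 20, 25, 27), the sketch drops Nov 20 by age and Nov 25 by size, keeping only the newest day.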

Timeout when installing on bare metal k8s

I'm on Kubernetes ver. 1.8.3 running on my own server.
I'm getting the following error:

root@db2new:~/kube# helm install -n loghouse loghouse
Error: release loghouse failed: Timeout: request did not complete within allowed duration
root@db2new:~/kube#

Tiller logs:

[storage] 2017/11/27 04:08:34 getting release history for "loghouse"
[tiller] 2017/11/27 04:08:34 uninstall: Release not loaded: loghouse
[tiller] 2017/11/27 04:08:41 preparing install for loghouse
[storage] 2017/11/27 04:08:41 getting release history for "loghouse"
[tiller] 2017/11/27 04:08:41 rendering loghouse chart using values
2017/11/27 04:08:41 info: manifest "loghouse/templates/tabix/tabix.yaml" is empty. Skipping.
2017/11/27 04:08:41 info: manifest "loghouse/templates/clickhouse/clickhouse-pvc.yaml" is empty. Skipping.
2017/11/27 04:08:41 info: manifest "loghouse/templates/tabix/tabix-svc.yaml" is empty. Skipping.
2017/11/27 04:08:41 info: manifest "loghouse/templates/clickhouse/clickhouse-ingress.yaml" is empty. Skipping.
2017/11/27 04:08:41 info: manifest "loghouse/templates/tabix/tabix-ingress.yaml" is empty. Skipping.
[tiller] 2017/11/27 04:08:42 performing install for loghouse
[tiller] 2017/11/27 04:08:42 executing 2 pre-install hooks for loghouse
[tiller] 2017/11/27 04:08:42 hooks complete for pre-install loghouse
[storage] 2017/11/27 04:08:42 getting release history for "loghouse"
[storage] 2017/11/27 04:08:42 creating release "loghouse.v1"
[kube] 2017/11/27 04:08:42 building resources from manifest
[kube] 2017/11/27 04:08:42 creating 19 resource(s)
[tiller] 2017/11/27 04:09:13 warning: Release "loghouse" failed: Timeout: request did not complete within allowed duration
[storage] 2017/11/27 04:09:13 updating release "loghouse.v1"
[tiller] 2017/11/27 04:09:13 failed install perform step: release loghouse failed: Timeout: request did not complete within allowed duration

It turns out the first installation to a fresh k8s node always succeeds, but all subsequent ones (after helm del --purge loghouse) fail with a timeout.

Autocomplete for query field

The query field should be autocompleted with the list of labels.
Prometheus's query field (see the screenshot in the original issue) should be used as a reference.

Add tooltips for fields, buttons, labels

  • A tooltip for each field providing details on what should be entered and how it affects the logs displayed
  • A tooltip for each button explaining its action
  • A tooltip for each label displaying the condition that would be added to the filter

How to reduce ClickHouse memory usage?

In our setup, we have several nodes with ~8 GB RAM. ClickHouse eats all the memory and does not give it back, so its memory usage grows over time in operation. When it exceeds what the node can provide, the node crashes with an OutOfMemory error.
Is there an option to reduce the memory usage of the ClickHouse service?

Smart “play” button

  • When the user selects “now” as the time in the “to” field, the play button should be activated automatically (with follow mode enabled).
  • Currently, if the user scrolls up in follow mode, the records view is not fixed: the screen keeps scrolling as new records arrive. Scrolling up should disable follow mode, and it should be re-enabled when the user scrolls back down. This logic may be implemented in some other way.

docker-compose

Is the current implementation for Kubernetes only, or can it also be run with docker-compose?

Clicking on time should display records with no filter applied

No clear decision yet.

Options available:

  • clicking on time will open a new tab with records from corresponding time and no (empty) filter applied
  • ctrl-clicking on time will open a new tab with records from corresponding time and no filter applied
  • clicking on time will clear the filter and show records from the corresponding time. However, there should be a way to “go back” (re-applying the previously used filter) — some kind of “breadcrumbs”.

Settings for records view

  • pagination. Records per page with 250 as default value;
  • time format. Now time is displayed as “2017-10-31 18:31:35.910607721”, this format should be adjustable;
  • timezone. Log records usually have a UTC timestamp, but the user may be in another timezone.

Need to use helm.sh/hook-delete-policy

[kube] root@kube ~ # vi loghouse/values.yaml
[kube] root@kube ~ # helm upgrade --namespace loghouse loghouse loghouse/
Error: UPGRADE FAILED: jobs.batch "loghouse-init-tables" already exists
[kube] root@kube ~ # kubectl -n loghouse delete jobs/loghouse-init-tables
job "loghouse-init-tables" deleted
[kube] root@kube ~ # kubectl -n loghouse delete jobs/loghouse-init-tables
job "loghouse-init-tables" deleted
[kube] root@kube ~ # helm upgrade --namespace loghouse loghouse loghouse/
Release "loghouse" has been upgraded. Happy Helming!
LAST DEPLOYED: Fri Dec 8 14:18:30 2017

No Endpoint Created on Ingress

I am using Kubernetes 1.8.1, and when I use Helm to deploy this, the ingress never gets a public IP endpoint, and there are no errors to speak of (nothing in events, etc). Any ideas?

Dashboard should be SPA

SPA (single-page application) has no page switching and is more responsive (all the data is loaded via AJAX). Query saving should be in a separate view or dialog (e.g. the graph editor in Grafana).

Angular is preferable since it's used in Grafana and kubernetes-dashboard. Angular+React (as in Grafana) may be another option.

Build loghouse on a RedHat-based distribution?

Hello!
I'm trying to build loghouse on a RedHat-based distribution, but I get some errors.
I looked at https://github.com/flant/loghouse/blob/master/Dockerfile and have a few questions.
Is ruby:2.3.4 the minimal version?
You add 2 files and run 2 commands:

ADD Gemfile /app/Gemfile
ADD Gemfile.lock /app/Gemfile.lock
RUN bundle config git.allow_insecure true
RUN bundle install

How can I do a simple build without Docker?
Thank you.
Account:
https://habrahabr.ru/users/chemtech/
Email:
patsev(dot)anton(at)gmail(dot)com
We can also talk in Russian.

Make more installation configurations available

All clusters are different: bare metal, GCE, AWS, Azure… There should be several installation configurations:

  • ClickHouse with a local index on each node. The best option for bare metal; it lowers network delays between fluentd and ClickHouse.
  • ClickHouse as a cluster. The best option for cloud-based K8s (GCE, AWS, Azure).
  • Standalone ClickHouse. For small K8s clusters or testing purposes. ClickHouse is installed on a single node while fluentd runs on each node.
  • External ClickHouse. ClickHouse is installed outside the cluster and used as an external service, while fluentd and the dashboard are installed in the cluster.

Wrong query language definitions

Wrong query language definitions (https://github.com/flant/loghouse/blob/master/docs/ru/query-language.md)

This query

SELECT string_fields.names, string_fields.values
FROM logs.logs
WHERE date = today() AND string_fields.values = '1'
ORDER BY timestamp
LIMIT 100

throws

Code: 43, e.displayText() = DB::Exception: Illegal types of arguments (Array(String), String) of function equals, e.what() = DB::Exception
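The error occurs because string_fields.values is an Array(String) column, which cannot be compared to a scalar string with `=`. A corrected form would use ClickHouse's built-in has() array function (assuming the intent is "any value equals '1'"):

```sql
SELECT string_fields.names, string_fields.values
FROM logs.logs
WHERE date = today()
  AND has(string_fields.values, '1')  -- array membership instead of scalar equality
ORDER BY timestamp
LIMIT 100
```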

Images:
flant/loghouse-clickhouse:0.0.2
flant/loghouse-fluentd:0.0.2
flant/loghouse-tabix:0.0.2

Clickhouse server crash

Hi,

I'm trying to deploy ClickHouse in my Kubernetes cluster, but the ClickHouse pod crashes at startup.

Here are the pod's logs:

╰─# k -n loghouse logs -f clickhouse-server-79dc494dfc-sz268                                                                                                                                          
Include not found: clickhouse_remote_servers
Include not found: clickhouse_compression
2017.11.02 03:01:08.091964 [ 1 ] <Warning> Application: Logging to console
2017.11.02 03:01:08.103679 [ 1 ] <Warning> ConfigProcessor: Include not found: networks
2017.11.02 03:01:10.103799 [ 2 ] <Warning> ConfigProcessor: Include not found: clickhouse_remote_servers
2017.11.02 03:01:10.103848 [ 2 ] <Warning> ConfigProcessor: Include not found: clickhouse_compression
2017.11.02 03:01:10.107723 [ 3 ] <Warning> ConfigProcessor: Include not found: networks

I used the Helm chart:

name: loghouse
version: 0.0.1

And I didn't make any changes in templates/clickhouse/clickhouse-configmap.yml.

Kubernetes v1.8.2
Calico CNI
IPv6 only (I made the changes to make it work about Service ClusterIP)
Debian 9.2

Display non-Kubernetes (system) logs

There are Docker logs, kernel logs, and others that have no namespace and no pod_name label, so it's difficult to distinguish user permissions for these records and to filter them. We need to find a solution.

Button resetting columns to their auto/default state

After the filter is changed and some columns are hidden (manually), the user should be able to restore columns to their “auto” state, i.e. display all columns except those hidden automatically.

Internal Server Error if a user is not added to the config

If you do not add the user to the loghouse configmap, then on a login attempt you get an Internal Server Error with the following in the logs:
95.213.149.180, 95.213.149.180 - - [04/Nov/2017:07:14:51 +0000] "GET /query HTTP/1.0" 500 30 0.0010 2017-11-04 07:14:52 - RuntimeError - no user permissions configured for user

Negative lookahead for regex

Sometimes we need to search for logs that do not match some regex, but re2 does not offer a simple solution out of the box (google/re2#156). A new operator !~ would be a simple and powerful solution for this situation.
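Until such an operator exists, the negation can be expressed at the ClickHouse level with NOT match(); the proposed `!~` would essentially translate to something like this (the column name is illustrative):

```sql
SELECT *
FROM logs.logs
WHERE NOT match(source, 'error [0-9]+')  -- keep rows that do NOT match the regex
```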

Query language improvements

Query language improvements:

  • Round brackets support
  • Search by owner. It is useful to search for records from a particular deployment/cronjob/statefulset/daemonset/etc.

Error installing loghouse to GKE

I'm trying to install loghouse to a Google Cloud-based Kubernetes cluster (node pool version is 1.8.3-gke.0).
I'm getting the following error:

➜  cm-scripts git:(master) ✗ helm install -n loghouse loghouse
Error: release loghouse failed: clusterroles.rbac.authorization.k8s.io "fluentd" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["namespaces"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]}] user=&{system:serviceaccount:kube-system:default 3d69e5b6-d03a-11e7-9bc4-42010af00233 [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]
➜  cm-scripts git:(master) ✗

Hiding/displaying labels should be implemented in a single select

There should be a single select filled with the labels currently displayed. Hidden labels which are available should be shown in the autocomplete list. If all labels are displayed, the autocomplete list will be empty. A “Clear” button resetting labels to their default state should be available.

Currently this default state is simply “display everything”, but in the future it should be improved via a special algorithm that hides labels depending on the filter query.

Internal Server Error on incorrect query

If you enter an incorrect query, you get an Internal Server Error, and the loghouse pod logs show:
2017-11-04 07:27:14 - LoghouseQuery::BadFormat - logs: Failed to match sequence (EXPRESSION subquery:(SUBQUERY?)) at line 1 char 1.:

clickhouse-client insert into wrong table

cat $1 | clickhouse-client --host="${CLICKHOUSE_SERVER}" --port="${CLICKHOUSE_PORT}" --user="${CLICKHOUSE_USER}" --password=${CLICKHOUSE_PASS} --database="${CLICKHOUSE_DB}" --compression true --query="INSERT INTO logs${TABLE} FORMAT JSONEachRow" && rm -f $1

INSERT INTO logs${TABLE}

does not match the ClickHouse table settings and tries to write into a non-existent table.

Add ability to store list of columns to show in query template

I would like to have the ability to specify which keys should be shown in a query template.

With this feature I would specify the list of hidden keys only once, when creating the query template. Then every time I open this template, I would only see the keys that I need.

In fluentd Code: 117. DB::Exception: Unknown field found while parsing JSONEachRow format: _SELINUX_CONTEXT

I see the following in the logs when deploying loghouse via Helm:

Code: 117. DB::Exception: Unknown field found while parsing JSONEachRow format: _SELINUX_CONTEXT
 2017-11-08 05:28:55 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2017-11-08 05:28:55 +0000 error_class="RuntimeError" error="command returns 29952: bash /usr/local/bin/insert_ch.sh /tmp/fluent-plugin-exec-20171108-7-kfyf2o" plugin_id="object:3fce70caaa48"
  2017-11-08 05:28:55 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/plugin/out_exec.rb:104:in `write'
  2017-11-08 05:28:55 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/buffer.rb:354:in `write_chunk'
  2017-11-08 05:28:55 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/buffer.rb:333:in `pop'
  2017-11-08 05:28:55 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/output.rb:342:in `try_flush'
  2017-11-08 05:28:55 +0000 [warn]: /var/lib/gems/2.3.0/gems/fluentd-0.12.40/lib/fluent/output.rb:149:in `run'
Code: 117. DB::Exception: Unknown field found while parsing JSONEachRow format: _SELINUX_CONTEXT
 2017-11-08 05:28:55 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2017-11-08 05:28:57 +0000 error_class="RuntimeError" error="command returns 29952: bash /usr/local/bin/insert_ch.sh /tmp/fluent-plugin-exec-20171108-7-fm3rdj" plugin_id="object:3fce70caaa48"
  2017-11-08 05:28:55 +0000 [warn]: suppressed same stacktrace
Code: 117. DB::Exception: Unknown field found while parsing JSONEachRow format: _SELINUX_CONTEXT
 2017-11-08 05:28:57 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2017-11-08 05:29:01 +0000 error_class="RuntimeError" error="command returns 29952: bash /usr/local/bin/insert_ch.sh /tmp/fluent-plugin-exec-20171108-7-1tfk6fi" plugin_id="object:3fce70caaa48"

What could this be related to?

Scheduling of download jobs (S3)

Since rotation deletes old log records, it would be great to make scheduled uploads of them to S3 available.

  • Add UI for scheduling settings.
  • A schedule can be added to a saved query or to a job created from a filter.
