
Monitoring Dashboard for AWS ParallelCluster

License: MIT No Attribution


Grafana Dashboard for AWS ParallelCluster

This is a sample solution based on Grafana for monitoring various components of an HPC cluster built with AWS ParallelCluster. There are six dashboards (plus a beta cost dashboard) that can be used as they are or customized to your needs.

  • ParallelCluster Summary - this is the main dashboard that shows general monitoring info and metrics for the whole cluster. It includes Slurm metrics and storage performance metrics.
  • HeadNode Details - this dashboard shows detailed metrics for the HeadNode, including CPU, memory, network, and storage usage.
  • Compute Node List - this dashboard shows the list of the available compute nodes. Each entry is a link to a more detailed page.
  • Compute Node Details - similar to the HeadNode Details dashboard, this one shows the same metrics for the compute nodes.
  • GPU Nodes Details - this dashboard shows GPU-related metrics collected using the NVIDIA DCGM container.
  • Cluster Logs - this dashboard shows all the logs of your HPC cluster. The logs are pushed by AWS ParallelCluster to Amazon CloudWatch Logs and reported here.
  • Cluster Costs (beta / in development) - this dashboard shows the costs associated with the AWS services used by your cluster. It includes: EC2, EBS, FSx, S3, and EFS.

Quickstart

Create a cluster using AWS ParallelCluster and include the following configuration:

PC 3.X

Update your cluster's config by adding the following snippet to the HeadNode and Scheduling sections:

CustomActions:
  OnNodeConfigured:
    Script: https://raw.githubusercontent.com/aws-samples/aws-parallelcluster-monitoring/main/post-install.sh
    Args:
      - v0.9
Iam:
  AdditionalIamPolicies:
    - Policy: arn:aws:iam::aws:policy/CloudWatchFullAccess
    - Policy: arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess
    - Policy: arn:aws:iam::aws:policy/AmazonSSMFullAccess
    - Policy: arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
Tags:
  - Key: 'Grafana'
    Value: 'true'

See the complete example config: pcluster.yaml.

AWS ParallelCluster

AWS ParallelCluster is an AWS supported Open Source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters in the AWS cloud. It automatically sets up the required compute resources and a shared filesystem and offers a variety of batch schedulers such as AWS Batch, SGE, Torque, and Slurm.

Solution components

This project is built with the following components:

  • Grafana is an open-source platform for monitoring and observability. Grafana allows you to query, visualize, alert on, and understand your metrics, as well as create, explore, and share dashboards, fostering a data-driven culture.
  • Prometheus is an open-source systems and service monitoring project from the Cloud Native Computing Foundation. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when a condition is observed to be true.
  • The Prometheus Pushgateway is an open-source tool that allows ephemeral and batch jobs to expose their metrics to Prometheus.
  • Nginx is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server.
  • Prometheus-Slurm-Exporter is a Prometheus collector and exporter for metrics extracted from the Slurm resource scheduling system.
  • Node_exporter is a Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.

Note: while almost all components are under the Apache 2.0 license, Prometheus-Slurm-Exporter is licensed under GPLv3. You need to be aware of this and accept the license terms before proceeding to install this component.

Example Dashboards

Cluster Overview

ParallelCluster

HeadNode Dashboard

Head Node

ComputeNodes Dashboard

Compute Node List

Logs

Logs

Cluster Cost

Costs

Quickstart

  1. Create a Security Group that allows you to access the HeadNode on ports 80 and 443. In the following example we open the security group to 0.0.0.0/0; however, we strongly advise restricting it further. More information on how to create your security groups can be found here
read -p "Please enter the vpc id of your cluster: " vpc_id
echo -e "creating a security group with $vpc_id..."
security_group=$(aws ec2 create-security-group --group-name grafana-sg --description "Open HTTP/HTTPS ports" --vpc-id ${vpc_id} --output text)
aws ec2 authorize-security-group-ingress --group-id ${security_group} --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id ${security_group} --protocol tcp --port 80 --cidr 0.0.0.0/0
  1. Create a cluster with the post install script post-install.sh, the Security Group you created above as AdditionalSecurityGroup on the HeadNode, and a few additional IAM Policies. You can find a complete AWS ParallelCluster template here. Please note that, at the moment, the installation script has only been tested using Amazon Linux 2.
CustomActions:
  OnNodeConfigured:
    Script: https://raw.githubusercontent.com/aws-samples/aws-parallelcluster-monitoring/main/post-install.sh
    Args:
      - v0.9
Iam:
  AdditionalIamPolicies:
    - Policy: arn:aws:iam::aws:policy/CloudWatchFullAccess
    - Policy: arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess
    - Policy: arn:aws:iam::aws:policy/AmazonSSMFullAccess
    - Policy: arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
Tags:
  - Key: 'Grafana'
    Value: 'true'
  1. Connect to https://headnode_public_ip or http://headnode_public_ip (all http connections will be automatically redirected to https) and authenticate with the default Grafana password. A landing page will be presented to you with links to the Prometheus database service and the Grafana dashboards.

Login Screen

Note: because the compute nodes continuously push metrics to the HeadNode, network traffic grows with cluster size. If you expect to run a large-scale cluster (hundreds of instances), we recommend using a slightly larger instance type for the head node than you had originally planned.

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.


aws-parallelcluster-monitoring's Issues

Prometheus fails to start

I noticed the prometheus container was hitting an issue:

[ec2-user@ip-172-31-26-12 ~]$ docker ps
CONTAINER ID   IMAGE                              COMMAND                  CREATED          STATUS                          PORTS     NAMES
0cc5f7d27c63   prom/prometheus                    "/bin/prometheus --c…"   12 minutes ago   Restarting (2) 31 seconds ago             prometheus
8b26a21de381   grafana/grafana                    "/run.sh"                12 minutes ago   Up 12 minutes                             grafana
7740aa93faf7   quay.io/prometheus/node-exporter   "/bin/node_exporter …"   12 minutes ago   Up 12 minutes                             node-exporter
19248efd1648   nginx                              "/docker-entrypoint.…"   12 minutes ago   Up 12 minutes                             nginx
f3d3c110005d   prom/pushgateway                   "/bin/pushgateway"       12 minutes ago   Up 12 minutes                             pushgateway
You have new mail in /var/spool/mail/ec2-user

The container shows:

[ec2-user@ip-172-31-26-12 ~]$ docker logs 0cc5f7d27c63
ts=2023-04-12T06:34:40.244Z caller=main.go:468 level=error msg="Error loading config (--config.file=/etc/prometheus/prometheus.yml)" file=/etc/prometheus/prometheus.yml err="parsing YAML file /etc/prometheus/prometheus.yml: EC2 SD configuration requires a region"

Looks like prometheus.yml needs the region.
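For reference, a minimal sketch of the relevant fragment of /etc/prometheus/prometheus.yml: the error above comes from an ec2_sd_configs entry that lacks the region key. The job name, region, and port below are illustrative, not taken from the repo:

```yaml
scrape_configs:
  - job_name: node_exporter
    ec2_sd_configs:
      - region: us-east-1   # the missing key the error complains about
        port: 9100          # node_exporter's default port
```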

Cannot see dashboards

Hello,
When I log in to Grafana, the dashboards are not there (it feels like it has the data but doesn't create the dashboards from it). Checking the master and compute nodes shows the following running containers:

CONTAINER ID   IMAGE                              COMMAND                  CREATED          STATUS          PORTS     NAMES
9594c6bde24f   nginx                              "/docker-entrypoint.…"   21 minutes ago   Up 21 minutes             nginx
b427735a4851   prom/pushgateway                   "/bin/pushgateway"       21 minutes ago   Up 21 minutes             pushgateway
42b4c6967e5a   prom/prometheus                    "/bin/prometheus --c…"   21 minutes ago   Up 21 minutes             prometheus
9c18bbc6826e   quay.io/prometheus/node-exporter   "/bin/node_exporter …"   21 minutes ago   Up 21 minutes             node-exporter
3516f5f1c85a   grafana/grafana                    "/run.sh"                21 minutes ago   Up 21 minutes             grafana

Compute
701c27e08b7b   quay.io/prometheus/node-exporter   "/bin/node_exporter …"   15 minutes ago   Up 15 minutes             node-exporter

I was wondering if anything is missing so I can rule it out as a potential cause.
Thanks.

FSx cost broken

FSx cost does not work when the cluster is deployed with a pre-existing FSx.
The issue is here

cannot build the cluster

fails at master instance creation

Cloudformation failed with error: The security token included in the request is expired

I checked the credentials, deleted them all, and ran aws configure again, but the issue persists.

IAM permission: Principle of least privilege

Great solution; however, the required IAM permissions provide a significant level of access to the head and compute nodes, restricting the ability to deploy the solution into certain environments due to security concerns:

- Policy: arn:aws:iam::aws:policy/CloudWatchFullAccess
- Policy: arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess
- Policy: arn:aws:iam::aws:policy/AmazonSSMFullAccess
- Policy: arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess

A set of policies that follow the principle of least privilege, providing the bare minimum required, would help address these security concerns.
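As a hypothetical starting point (not a vetted replacement), the four managed FullAccess policies could be narrowed to an inline policy listing only the kinds of actions the monitoring scripts appear to need; the exact action list would have to be confirmed by auditing the scripts:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MonitoringLeastPrivilegeSketch",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "cloudwatch:GetMetricData",
        "logs:DescribeLogGroups",
        "logs:FilterLogEvents",
        "pricing:GetProducts",
        "cloudformation:DescribeStacks",
        "ec2:DescribeInstances"
      ],
      "Resource": "*"
    }
  ]
}
```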

Region long names inconsistent between API and Botocore

aws_region_long_name=$(python /usr/local/bin/aws-region.py $cfn_region)

eu-west-1 => "Europe (Ireland)" here when using the botocore library (see https://github.com/boto/botocore/blob/develop/botocore/data/endpoints.json)

our get-pricing api requires eu-west-1 => "EU (Ireland)"

a quick fix is to add a patch:

aws_region_long_name=${aws_region_long_name/Europe/EU}

Question about passwords

Hello,
Which are the credentials to log in on the Grafana dashboards? My username/password combination is not working. Is there any way to change the password?
Thanks.

Validation errors

Hello,
Maybe there is something wrong in my script

[global]
update_check = true
sanity_check = true
cluster_template = w1cluster

[aws]
aws_region_name = us-east-1
aws_access_key_id = ***
aws_secret_access_key = ***

[cluster w1cluster]
vpc_settings = odyvpc
placement_group = DYNAMIC
placement = compute
key_name = llave_i3
master_instance_type = t3.micro
compute_instance_type = c5.large
cluster_type = spot
disable_hyperthreading = true
initial_queue_size = 2
max_queue_size = 2
maintain_initial_size = true
scheduler = slurm
base_os = alinux2
post_install = https://raw.githubusercontent.com/aws-samples/aws-parallelcluster-monitoring/main/post-install.sh
post_install_args = https://github.com/aws-samples/aws-parallelcluster-monitoring/tarball/main,aws-parallelcluster-monitoring,install-monitoring.sh
additional_iam_policies = arn:aws:iam::aws:policy/CloudWatchFullAccess,arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess,arn:aws:iam::aws:policy/AmazonSSMFullAccess,arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
tags = {“Grafana” : “true”}

[vpc odyvpc]
master_subnet_id = ***
vpc_id = ***

[aliases]
ssh = ssh {CFN_USER}@{MASTER_IP} {ARGS}

because it complains about validation as soon as it starts creating CloudWatchLogsSubstack.

$pcluster create w1cluster
Beginning cluster creation for cluster: w1cluster
Creating stack named: parallelcluster-w1cluster
Status: parallelcluster-w1cluster - ROLLBACK_IN_PROGRESS
Cluster creation failed.  Failed events:
  - AWS::EC2::SecurityGroup MasterSecurityGroup Resource creation cancelled
  - AWS::CloudFormation::Stack CloudWatchLogsSubstack Resource creation cancelled
  - AWS::CloudFormation::Stack EBSCfnStack Resource creation cancelled
  - AWS::EC2::EIP MasterEIP Resource creation cancelled
  - AWS::IAM::Role RootRole 2 validation errors detected: Value '?true?' at 'tags.1.member.value' failed to satisfy constraint: Member must satisfy regular expression pattern: [\p{L}\p{Z}\p{N}_.:/=+\-@]*; Value '?Grafana?' at 'tags.1.member.key' failed to satisfy constraint: Member must satisfy regular expression pattern: [\p{L}\p{Z}\p{N}_.:/=+\-@]+ (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: 2f882ed6-38fd-4736-9dbb-42a78abc7fe1; Proxy: null)
  - AWS::IAM::Role CleanupResourcesFunctionExecutionRole 2 validation errors detected: Value '?true?' at 'tags.1.member.value' failed to satisfy constraint: Member must satisfy regular expression pattern: [\p{L}\p{Z}\p{N}_.:/=+\-@]*; Value '?Grafana?' at 'tags.1.member.key' failed to satisfy constraint: Member must satisfy regular expression pattern: [\p{L}\p{Z}\p{N}_.:/=+\-@]+ (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: 4d8a9724-99f5-4d22-bc15-9c29c9f0de5f; Proxy: null)

Thanks.

Support parallel cluster v3

Will you be able to provide the documentation updates and necessary script changes to support ParallelCluster 3.x?

post-install.sh is incompatible with pcluster 2.11.4

While diagnosing a pcluster creation error I noticed that cat /etc/parallelcluster/cfnconfig
has

cfn_node_type=MasterServer

which breaks the script's assumption of

case ${cfn_node_type} in
    HeadNode)
        wget ${monitoring_url} -O ${monitoring_tarball}

and results in a non-zero exit code which chains back up to chef-client.

disable cron email

The cron daemon generates an email for each upload (every minute). Redirect the crontab output to /dev/null or a log file.

Message 44:
From [email protected]  Tue Aug 17 05:38:03 2021
Return-Path: <[email protected]>
Date: Tue, 17 Aug 2021 05:38:03 GMT
From: "(Cron Daemon)" <[email protected]>
To: [email protected]
Subject: Cron <ec2-user@ip-10-0-19-26> /usr/local/bin/1m-cost-metrics.sh
Content-Type: text/plain; charset=UTF-8
Auto-Submitted: auto-generated
Precedence: bulk
X-Cron-Env: <XDG_SESSION_ID=50>
X-Cron-Env: <XDG_RUNTIME_DIR=/run/user/1000>
X-Cron-Env: <LANG=en_US.UTF-8>
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/home/ec2-user>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=ec2-user>
X-Cron-Env: <USER=ec2-user>
Status: R

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    21    0     0  100    21      0  21000 --:--:-- --:--:-- --:--:-- 21000
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    32    0     0  100    32      0  32000 --:--:-- --:--:-- --:--:-- 32000
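A sketch of the suggested fix: redirecting the job's stdout and stderr silences the cron mail. The script path is taken from the message above; the one-minute schedule is an assumption based on "every minute":

```shell
# Illustrative crontab entry: run the cost-metrics upload every minute,
# discarding all output so the cron daemon generates no mail.
* * * * * /usr/local/bin/1m-cost-metrics.sh > /dev/null 2>&1
```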

GPU monitoring not working (for p3 instances)

After applying PR #4, GPU monitoring data is still not coming through in Grafana. I have a multi-queue ParallelCluster 2.10.0 setup with the following to enable monitoring:

post_install = https://raw.githubusercontent.com/amilsted/aws-parallelcluster-monitoring/main/post-install.sh
post_install_args = https://github.com/amilsted/aws-parallelcluster-monitoring/tarball/main,aws-parallelcluster-monitoring,install-monitoring.sh
additional_iam_policies = arn:aws:iam::aws:policy/CloudWatchFullAccess,arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess,arn:aws:iam::aws:policy/AmazonSSMFullAccess,arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
tags = {"Grafana" : "true"}

Build not working if missing a git config

Hi,
while building the prometheus-slurm-exporter binary I found this error during the head node post-install:

 cd /home/ec2-user/aws-parallelcluster-monitoring/prometheus-slurm-exporter; git status --porcelain
fatal: detected dubious ownership in repository at '/home/ec2-user/aws-parallelcluster-monitoring/prometheus-slurm-exporter'
To add an exception for this directory, call:

        git config --global --add safe.directory /home/ec2-user/aws-parallelcluster-monitoring/prometheus-slurm-exporter
error obtaining VCS status: exit status 128
        Use -buildvcs=false to disable VCS stamping.
make: *** [bin/prometheus-slurm-exporter] Error 1

Testing manually, I solved it with the following fix:

--- install-monitoring.sh.orig  2024-03-28 11:51:28.513604048 +0000
+++ install-monitoring.sh       2024-03-28 11:39:43.415747930 +0000
@@ -89,6 +89,7 @@
                # More info here: https://github.com/vpenso/prometheus-slurm-exporter/blob/master/LICENSE
                cd ${monitoring_home}
                git clone https://github.com/vpenso/prometheus-slurm-exporter.git
+                git config --global --add safe.directory ${monitoring_home}/prometheus-slurm-exporter
                sed -i 's/NodeList,AllocMem,Memory,CPUsState,StateLong/NodeList: ,AllocMem: ,Memory: ,CPUsState: ,StateLong:/' prometheus-slurm-exporter/node.go
                cd prometheus-slurm-exporter
                GOPATH=/root/go-modules-cache HOME=/root go mod download

Please, could you fix this in the repo?

Thanks,
Alberto

sed fails due to path containing "/"

The log_group_names variable is a full path containing "/". sed fails to interpret the substitution without a backslash (\) before each forward slash in the path.

sed -i "s/__LOG_GROUP__NAMES__/${log_group_names}/g" /home/${cfn_cluster_user}/${monitoring_dir_name}/grafana/dashboards/logs.json

Can be corrected in the log_group_names definition here:

log_group_names="\/aws\/parallelcluster\/$(echo ${stack_name} | cut -d "-" -f2-)"
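An alternative fix, shown here as a small self-contained sketch: sed accepts any character as the delimiter of the s command, so using | instead of / avoids escaping the slashes entirely. The file name and log-group path below are made up for the demo:

```shell
# Hypothetical demo of the substitution with '|' as the sed delimiter,
# so the slashes in the log-group path need no escaping.
log_group_names="/aws/parallelcluster/mycluster"
echo '{"expr": "__LOG_GROUP__NAMES__"}' > /tmp/logs.json
sed -i "s|__LOG_GROUP__NAMES__|${log_group_names}|g" /tmp/logs.json
cat /tmp/logs.json   # {"expr": "/aws/parallelcluster/mycluster"}
```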

grafana origin not allowed error when cluster build + FIX

After building the cluster, when we log on to the Grafana dashboard I get an "origin not allowed" error (and a bunch of related errors) when I view the master node or compute node details.

After looking at it, the fix is to add the proxy_set_header Host $http_host line to the nginx.conf file. Here is the updated one:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/nginx.crt;
    ssl_certificate_key /etc/ssl/nginx.key;
    server_name localhost;
    server_tokens off;

    root /usr/share/nginx/html;

    location /grafana/ {
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:3000/;
    }

    location /prometheus/ {
        proxy_pass http://localhost:9090/;
        proxy_set_header Host $http_host;
    }

    location /pushgateway/ {
        proxy_pass http://localhost:9091/;
        proxy_set_header Host $http_host;
    }

    location /slurmexporter/ {
        proxy_pass http://localhost:8080/;
        proxy_set_header Host $http_host;
    }
}

vpenso/prometheus-slurm-exporter parsing errors

git clone https://github.com/vpenso/prometheus-slurm-exporter.git

This issue vpenso/prometheus-slurm-exporter#55 impacts pcluster users. Quick fix is provided in this patch: MK4H/prometheus-slurm-exporter@c48dc3d

Until the fix is merged upstream, change

git clone https://github.com/vpenso/prometheus-slurm-exporter.git

to

git clone https://github.com/vpenso/prometheus-slurm-exporter.git
sed -i 's/NodeList,AllocMem,Memory,CPUsState,StateLong/NodeList: ,AllocMem: ,Memory: ,CPUsState: ,StateLong:/' prometheus-slurm-exporter/node.go

Add quickstart section to readme

I'm proposing you add a small quickstart section to the README.md that has everything you need to get started, i.e.

Quickstart

Create a cluster using AWS ParallelCluster with the following configuration:

[cluster yourcluster]
...
post_install = https://raw.githubusercontent.com/aws-samples/aws-parallelcluster-monitoring/main/post-install.sh
post_install_args = https://github.com/aws-samples/aws-parallelcluster-monitoring/tarball/main,aws-parallelcluster-monitoring,install-monitoring.sh
additional_iam_policies = arn:aws:iam::aws:policy/CloudWatchFullAccess,arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess,arn:aws:iam::aws:policy/AmazonSSMFullAccess,arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
tags = {“Grafana” : “true”}
...

The hosting location of the post_install script could also be a release artifact, then each release would have a corresponding versioned post_install script and tar.gz.

Happy to implement this and submit it as a PR; just wanted your thoughts before I get started.

Docs incorrectly suggest comma-separated post_install_args

The docs suggest this...

[cluster yourcluster]
...
post_install = https://raw.githubusercontent.com/aws-samples/aws-parallelcluster-monitoring/main/post-install.sh
post_install_args = https://github.com/aws-samples/aws-parallelcluster-monitoring/tarball/main,aws-parallelcluster-monitoring,install-monitoring.sh
additional_iam_policies = arn:aws:iam::aws:policy/CloudWatchFullAccess,arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess,arn:aws:iam::aws:policy/AmazonSSMFullAccess,arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
tags = {"Grafana" : "true"}

But that fails due to the script's incorrect assumption that

post_install_args = https://github.com/aws-samples/aws-parallelcluster-monitoring/tarball/main,aws-parallelcluster-monitoring,install-monitoring.sh

is a bash array initialization. Which it's not. So the script then fails at this point ...


monitoring_url=${cfn_postinstall_args[0]}
monitoring_dir_name=${cfn_postinstall_args[1]}               <===== THIS IS EMPTY STRING
monitoring_tarball="${monitoring_dir_name}.tar.gz"        <===== THIS IS ".tar.gz"
setup_command=${cfn_postinstall_args[2]}
monitoring_home="/home/${cfn_cluster_user}/${monitoring_dir_name}"

case ${cfn_node_type} in
    MasterServer)
        wget ${monitoring_url} -O ${monitoring_tarball}     <===== THIS TRIES TO SAVE ".tar.gz"
        mkdir -p ${monitoring_home}                                   <===== THIS IS IMPACTED BY EMPTY STRING
        tar xvf ${monitoring_tarball} -C ${monitoring_home} --strip-components 1  <==== THIS JUST FAILS
    ;;
    ComputeFleet)

    ;;
esac

What I did to fix the issue was hack the script up to change the post_install_args to be like this...

cfn_postinstall_args=("https://github.com/aws-samples/aws-parallelcluster-monitoring/tarball/main" "aws-parallelcluster-monitoring" "install-monitoring.sh")

And I was able to get an exit code of 0 :)
