
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: MIT-0

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

This package depends on and may incorporate or retrieve a number of third-party software packages (such as open source packages) at install-time or build-time or run-time ("External Dependencies"). The External Dependencies are subject to license terms that you must accept in order to use this package. If you do not accept all of the applicable license terms, you should not use this package. We recommend that you consult your company's open source approval policy before proceeding.

Provided below is a list of External Dependencies and the applicable license identification as indicated by the documentation associated with the External Dependencies as of Amazon's most recent review.

THIS INFORMATION IS PROVIDED FOR CONVENIENCE ONLY. AMAZON DOES NOT PROMISE THAT THE LIST OR THE APPLICABLE TERMS AND CONDITIONS ARE COMPLETE, ACCURATE, OR UP-TO-DATE, AND AMAZON WILL HAVE NO LIABILITY FOR ANY INACCURACIES. YOU SHOULD CONSULT THE DOWNLOAD SITES FOR THE EXTERNAL DEPENDENCIES FOR THE MOST COMPLETE AND UP-TO-DATE LICENSING INFORMATION.

YOUR USE OF THE EXTERNAL DEPENDENCIES IS AT YOUR SOLE RISK. IN NO EVENT WILL AMAZON BE LIABLE FOR ANY DAMAGES, INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, CONSEQUENTIAL, SPECIAL, INCIDENTAL, OR PUNITIVE DAMAGES (INCLUDING FOR ANY LOSS OF GOODWILL, BUSINESS INTERRUPTION, LOST PROFITS OR DATA, OR COMPUTER FAILURE OR MALFUNCTION) ARISING FROM OR RELATING TO THE EXTERNAL DEPENDENCIES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, EVEN IF AMAZON HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THESE LIMITATIONS AND DISCLAIMERS APPLY EXCEPT TO THE EXTENT PROHIBITED BY APPLICABLE LAW.

RStudio Server Open Source Edition - https://www.rstudio.com/products/rstudio - AGPL-3.0
Shiny Server Open Source Edition - https://www.rstudio.com/products/shiny/shiny-server - AGPL-3.0


Open Source RStudio/Shiny on AWS Fargate

This project delivers infrastructure code to run a scalable and highly available RStudio and Shiny Server installation on AWS using services such as AWS Fargate, Amazon ECS, Amazon EFS, AWS DataSync, and Amazon S3. The repository contains both the main project and sub-projects that host specific pieces of functionality from the main project, in case you are only interested in deploying a subset of it. The readme within each folder contains deployment instructions for that project.

Solution Architecture

The following diagram depicts the overall solution architecture of the project.


Figure 1. RStudio/Shiny Open Source Architecture on AWS

Numbered items refer to Figure 1.

  1. R users access RStudio Server and Shiny App via Amazon Route 53. Route 53 is a DNS service for incoming requests.
  2. Route 53 resolves incoming requests and forwards those onto AWS WAF (Web Application Firewall) for security checks.
  3. Valid requests reach an Application Load Balancer (ALB), which forwards them to the Amazon Elastic Container Service (Amazon ECS) cluster.
  4. The cluster service controls the containers and is responsible for scaling the number of instances up and down as needed (see the sketch after this list).
  5. Incoming requests are processed by RStudio server; users are authenticated and R sessions are spawned for valid requests. Shiny users are routed to the Shiny container.
  6. If an R session communicates with the public internet, outbound requests can be filtered via a proxy server and then sent to a NAT Gateway.
  7. The NAT Gateway sends outbound requests to be processed via an Internet Gateway. The route to the internet can also be configured via AWS Transit Gateway.
  8. The R users require data files to be transported onto the container. To facilitate this, files are transferred to Amazon Simple Storage Service (Amazon S3) using AWS Transfer for SFTP or S3 upload.
  9. The uploaded files from S3 are synced to Amazon Elastic File System (EFS) by AWS DataSync.
  10. Amazon EFS provides the persistent file system required by RStudio server. Data scientists can deploy Shiny apps from their RStudio Server container to the Shiny Server container easily by the shared file system.
  11. RStudio can be integrated with S3, and R sessions can query Amazon Athena tables built on S3 data using a JDBC connection. Athena is a serverless interactive query service that analyzes data in Amazon S3 using standard SQL.
  12. For ease of RStudio Server administration, you can deploy a bastion container in the public subnet to access the RStudio and Shiny containers using SSH.
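
To make items 3, 4, and 10 concrete, the sketch below shows how an ALB-fronted Fargate service with a shared EFS mount can be wired up in AWS CDK for Python. This is a minimal sketch under assumed names, not the project's actual stack code: the construct IDs, the rocker/rstudio image, port 8787, and the sizing values are illustrative placeholders.

    from aws_cdk import Stack, aws_ec2 as ec2, aws_ecs as ecs, aws_ecs_patterns as ecs_patterns, aws_efs as efs
    from constructs import Construct

    class RstudioSketchStack(Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # VPC and ECS cluster hosting the Fargate tasks (items 3 and 4)
            vpc = ec2.Vpc(self, "Vpc", max_azs=2)
            cluster = ecs.Cluster(self, "Cluster", vpc=vpc)

            # ALB-fronted Fargate service running RStudio Server
            service = ecs_patterns.ApplicationLoadBalancedFargateService(
                self, "RstudioService",
                cluster=cluster,
                cpu=1024,                # placeholder sizing
                memory_limit_mib=4096,
                desired_count=1,         # open source RStudio is not horizontally scalable
                task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                    image=ecs.ContainerImage.from_registry("rocker/rstudio"),  # placeholder image
                    container_port=8787,
                ),
            )

            # Shared, persistent EFS file system mounted into the container (item 10)
            file_system = efs.FileSystem(self, "SharedEfs", vpc=vpc)
            service.task_definition.add_volume(
                name="shiny-server",
                efs_volume_configuration=ecs.EfsVolumeConfiguration(
                    file_system_id=file_system.file_system_id,
                ),
            )
            service.task_definition.default_container.add_mount_points(
                ecs.MountPoint(
                    container_path="/srv/shiny-server",
                    source_volume="shiny-server",
                    read_only=False,
                )
            )
            # Allow NFS traffic from the service to the file system
            file_system.connections.allow_default_port_from(service.service)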

Deployment With AWS CodePipeline

The development resources for the RStudio/Shiny deployment (AWS CodeCommit for hosting the AWS CDK in Python code, AWS CodePipeline for deployment of services, Amazon ECR repositories for container images) are created in a central AWS account. From this account, AWS Fargate services for RStudio and Shiny, along with the integrated services like Amazon ECS, Amazon EFS, AWS DataSync, AWS KMS, AWS WAF, the Application Load Balancer, and Amazon VPC constructs like the Internet Gateway, NAT Gateway, Security Groups, etc., are deployed into another AWS account. There can be multiple RStudio/Shiny accounts and instances to suit your requirements. You can also host multiple non-production instances of RStudio/Shiny in a single account.

The RStudio/Shiny deployment accounts obtain the networking information for the publicly resolvable domain from a central networking account, and the data feed for the containers comes from a central data repository account. Users upload data to the S3 buckets in the central data account or configure an automated service like AWS Transfer for SFTP to programmatically upload files. The uploaded files are transferred to the containers using AWS DataSync and Amazon EFS. The RStudio/Shiny containers are integrated with Amazon Athena for directly interacting with tables built on top of S3 data in the central data account, as sketched below.
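
In this project the Athena interaction happens over JDBC from the R session; purely for orientation, a hedged boto3 equivalent of querying such a cross-account table looks like the sketch below (the database, table, and output location are placeholders).

    import boto3

    # Assumes credentials for the central data account are configured
    athena = boto3.client("athena")

    query = athena.start_query_execution(
        QueryString="SELECT * FROM sales_table LIMIT 10",         # placeholder table
        QueryExecutionContext={"Database": "central_data_lake"},  # placeholder database
        ResultConfiguration={"OutputLocation": "s3://athena-query-results-placeholder/"},
    )
    print(query["QueryExecutionId"])  # poll get_query_execution() until the query completes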

Before deploying the infrastructure code, it is assumed that AWS Shield or AWS Shield Advanced is already configured for the networking account, and that Amazon GuardDuty, AWS Config, and AWS CloudTrail are enabled in all accounts for monitoring and alerting on security events. It is recommended that you use an egress filter for network traffic destined for the internet; configuring such a filter is out of scope for this codebase.

All services in this deployment are deployed within a single AWS Region. The AWS services used in this architecture are managed services and configured for high availability. As soon as a service becomes unavailable, the service will automatically be brought up in the same Availability Zone (AZ) or in a different AZ within the same AWS Region. The following diagram depicts the deployment architecture of Open Source RStudio/Shiny on AWS.


Figure 2. RStudio/Shiny Open Source Deployment on AWS Serverless Infrastructure

Deployment Architecture

The infrastructure code provided in this repository creates all resources described in the architecture above.

Numbered items refer to Figure 2.

  1. The infrastructure code is developed using AWS CDK for Python and stored in an AWS CodeCommit repository.
  2. The CDK stacks are integrated into AWS CodePipeline for automated builds. The stacks are segregated into four different stages and are organised by AWS services.
  3. The container images used in the build are fetched from public Docker Hub using AWS CodePipeline and stored in Amazon ECR repositories for cross-account access. These images are accessed by the pipeline to create the Fargate containers in the deployment accounts.
  4. Secrets like RStudio front-end password, public key for bastion containers and central data account access keys are configured in AWS Secrets Manager using an AWS KMS key and passed into the deployment pipeline using parameters in cdk.json for cross-account access.
  5. The central networking account has the pre-configured base public domain. This is done outside the automated pipeline, and the base domain info is passed on as a parameter in cdk.json.
  6. The base public domain will be delegated to the deployment accounts using AWS SSM Parameter Store.
  7. An AWS Lambda function retrieves the delegated Route 53 zone for configuring the RStudio and Shiny sub-domains.
  8. HTTPS certificates from AWS Certificate Manager are applied to the RStudio and Shiny sub-domains.
  9. Amazon ECS cluster is created to control the RStudio, Shiny and Bastion containers and to scale up and down the number of containers as needed.
  10. An RStudio container is configured for the instance in a private subnet. The RStudio container is not horizontally scalable for the Open Source version of RStudio. If you create only one container, the container will be configured for multiple front-end users; you can specify the user names in cdk.json. You can also create one RStudio container for each data scientist depending on your compute requirements; a cdk.json parameter controls your installation type. You can also control the container memory/vCPU using cdk.json. Further details are provided in the readme. If your compute requirements exceed Fargate container compute limits, you can use the EC2 launch type of Amazon ECS, which offers a range of EC2 servers to fit your compute requirements. The code delivered with this repository caters for EC2 launch types as well, controlled by the installation type parameter in cdk.json.
  11. A bastion container will be created in the public subnet to help you ssh to RStudio and Shiny containers for administration tasks. The bastion container will be restricted by a security group and you can only access it from the IP range you provide in the cdk.json.
  12. Shiny containers will be configured in the private subnet to be horizontally scalable. You can specify the number of containers and memory you need for Shiny Server in cdk.json.
  13. Application Load Balancers are registered with RStudio and Shiny services for routing traffic to the containers and to perform health checks.
  14. AWS WAF rules are built to provide additional security to RStudio and Shiny endpoints. You can specify whitelisted IPs in the WAF stack to restrict access to RStudio and Shiny from only allowed IPs.
  15. Users will upload files to be analysed to a central data lake account either with manual S3 upload or programmatically using AWS Transfer for SFTP.
  16. AWS DataSync will push files from Amazon S3 to cross-account Amazon EFS on an hourly interval schedule.
  17. An AWS Lambda trigger will be configured to trigger the DataSync transfer on demand, outside of the hourly schedule, for files that require urgent analysis (see the sketch after this list). It is expected that the bulk of the data transfer will happen on the hourly schedule and the on-demand trigger will only be used when necessary.
  18. Amazon EFS file systems are attached to the containers for persistent storage. All containers will share the same file systems except the user home directories. This is to facilitate deployment of Shiny Apps from RStudio containers using shared file system and to access data uploaded in S3 buckets. These file systems will live through container recycles.
  19. You can create Amazon Athena tables on the central data account S3 buckets for direct interaction using JDBC from RStudio container. Access keys for cross account operation will be configured in the RStudio container R environment. It is recommended that you implement short term credential vending for this operation.
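
To make item 17 concrete, an on-demand DataSync trigger can be as small as the Lambda handler sketched below. This is a minimal sketch, not the project's actual function; the DATASYNC_TASK_ARN environment variable name is an assumption.

    import os

    import boto3

    datasync = boto3.client("datasync")

    def handler(event, context):
        # Triggered (e.g., by an S3 event) when a file needs urgent analysis;
        # starts the pre-configured DataSync task outside the hourly schedule.
        task_arn = os.environ["DATASYNC_TASK_ARN"]  # assumed environment variable
        response = datasync.start_task_execution(TaskArn=task_arn)
        return {"TaskExecutionArn": response["TaskExecutionArn"]}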

Prerequisites

To deploy the CDK stacks, you should have the following prerequisites:

  1. Access to 4 AWS accounts (https://signin.aws.amazon.com/signin?redirect_uri=https%3A%2F%2Fportal.aws.amazon.com%2Fbilling%2Fsignup%2Fresume&client_id=signup) (minimum 3) for a basic multi-account deployment
  2. Permission to deploy all AWS services mentioned in the solution overview
  3. Review RStudio and Shiny Open Source Licensing: AGPL v3 (https://www.gnu.org/licenses/agpl-3.0-standalone.html)
  4. Basic knowledge of R, RStudio Server, Shiny Server, Linux, AWS Developer Tools (AWS CDK in Python, CodePipeline, CodeCommit), AWS CLI and AWS services mentioned in the solution overview
  5. Review the readmes delivered with the code and ensure you understand how the parameters in cdk.json control the deployment and how to prepare your environment to deploy the CDK stacks via the pipeline detailed below.

Installation

  1. Create the AWS accounts to be used for deployment and ensure you have administrator access to each account. Typically, the following accounts are required:

     a. Central Development account - this is the account where the AWS Secrets Manager secrets, CodeCommit repository, ECR repositories, and CodePipeline will be created.
    
     b. Central Network account - the Route 53 base public domain will be hosted in this account.
    
     c. RStudio instance account - you can use as many of these accounts as required; each account deploys RStudio and Shiny containers for an instance (dev, test, uat, prod, etc.) along with a bastion container and associated services as described in the solution architecture.
    
     d. Central Data account - this is the account used for deploying the data lake resources, such as the S3 bucket for picking up ingested source files.
    
  2. Install (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) AWS CLI and create (https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) an AWS CLI profile for each account (pipeline, rstudio, network, datalake) so that AWS CDK can be used.

  3. Install (https://docs.aws.amazon.com/cdk/latest/guide/work-with-cdk-python.html) AWS CDK in Python and bootstrap each account and allow the Central Development account to perform cross-account deployment to all the other accounts.

     export CDK_NEW_BOOTSTRAP=1
     npx cdk bootstrap --profile <AWS CLI profile of central development account> --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess aws://<Central Development Account>/<Region>
    
     cdk bootstrap \
     --profile <AWS CLI profile of rstudio deployment account> \
     --trust <Central Development Account> \
     --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
     aws://<RStudio Deployment Account>/<Region>
    
     cdk bootstrap \
     --profile <AWS CLI profile of central network account> \
     --trust <Central Development Account> \
     --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
     aws://<Central Network Account>/<Region>
    
     cdk bootstrap \
     --profile <AWS CLI profile of central data account> \
     --trust <Central Development Account> \
     --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
     aws://<Central Data Account>/<Region>
    
  4. Ensure you have a Docker Hub account; otherwise, the pipeline might fail while pulling container images from Docker Hub with the error "You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limits".

  5. Build the Docker container images in Amazon ECR in the central development account by running the image build pipeline as instructed in the readme.

     a. Using the AWS console, create a CodeCommit repository to hold the source code for building the images - e.g. rstudio_docker_images
    
     b. Clone the GitHub repository and move into the rstudio_image_build folder
    
     c. Using the CLI, create a secret to store your Docker Hub login details as follows:
    
         aws secretsmanager create-secret --profile <AWS CLI profile of central development account> --name ImportedDockerId --secret-string '{"username":"<dockerhub username>","password":"<dockerhub password>"}'
    
     e. Pass the repository name (e.g. rstudio_docker_images) to the name parameter in cdk.json for the image build pipeline. See https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-create-repository.html for instructions on creating a CodeCommit repository.
    
     f. Pass the account numbers (comma separated) of the accounts where RStudio instances will be deployed in the cdk.json parameter rstudio_account_ids. Refer to the readme in the rstudio_image_build folder.
    
     g. Synthesize the image build stack 
    
         cdk synth --profile <AWS CLI profile of central development account>
    
     i. Commit the changes into the CodeCommit repo you created using git
    
     j. Deploy the pipeline stack for container image build
    
         cdk deploy --profile <AWS CLI profile of central development account>
    
     l. Log into the AWS console in the central development account and navigate to the CodePipeline service. Monitor the pipeline (the pipeline name is the name you provided in the name parameter in cdk.json) and confirm the Docker images build successfully.
    
  6. Move into the rstudio-fargate folder

  7. Provide the comma-separated account numbers where rstudio/shiny will be deployed in cdk.json against the parameter rstudio_account_ids.

  8. Synthesize the stack Rstudio-Configuration-Stack in the Central Development account

     cdk synth Rstudio-Configuration-Stack --profile <AWS CLI profile of central development account>
    
  9. Deploy the Rstudio-Configuration-Stack. This stack creates a new customer-managed KMS key (CMK) to use for creating the secrets with AWS Secrets Manager and outputs the AWS ARN for the key. Note down the ARN and set the parameter "encryption_key_arn" inside cdk.json to this ARN.

     cdk deploy Rstudio-Configuration-Stack --profile <AWS CLI profile of central development account>
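
     For orientation, a configuration stack of this shape could look like the following CDK for Python sketch. It is a hedged approximation, not the shipped Rstudio-Configuration-Stack; the construct IDs and output name are placeholders.

         from aws_cdk import CfnOutput, RemovalPolicy, Stack, aws_kms as kms
         from constructs import Construct

         class RstudioConfigurationSketchStack(Stack):
             def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
                 super().__init__(scope, construct_id, **kwargs)

                 # Customer-managed KMS key used to encrypt Secrets Manager secrets
                 key = kms.Key(
                     self, "RstudioEncryptionKey",
                     enable_key_rotation=True,
                     removal_policy=RemovalPolicy.RETAIN,  # keep the key if the stack is deleted
                 )

                 # Output the ARN so it can be copied into encryption_key_arn in cdk.json
                 CfnOutput(self, "EncryptionKeyArn", value=key.key_arn)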
    
  10. Run the script rstudio_config.sh after setting the required cdk.json parameters. Refer to the readme in the rstudio_fargate folder. Remember to run this script if you change parameters like rstudio_users, rstudio_install_types and rstudio_individual_containers in cdk.json.

    sh ./rstudio_config.sh <AWS CLI profile of the central development account> "arn:aws:kms:<region>:<account id of central development account>:key/<key hash>" <AWS CLI profile of central data account> <comma separated AWS CLI profiles of the rstudio deployment accounts>
    
  11. Run the script check_ses_email.sh with comma-separated profiles for the rstudio deployment accounts. This checks whether all user emails have been registered with Amazon SES for all the rstudio deployment accounts in the region before you deploy rstudio/shiny. Remember to run this script whenever you run rstudio_config.sh.

    sh ./check_ses_email.sh <comma separated AWS CLI profiles of the rstudio deployment accounts>
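
     Conceptually, the check boils down to querying the SES identity verification status for each user email. The boto3 sketch below is a hedged equivalent of what the script verifies; the email addresses are placeholders.

         import boto3

         # Run once per rstudio deployment account profile/region
         ses = boto3.client("ses")  # uses the active AWS CLI profile and region

         emails = ["user1@example.com", "user2@example.com"]  # placeholder user emails
         resp = ses.get_identity_verification_attributes(Identities=emails)
         for email in emails:
             attrs = resp["VerificationAttributes"].get(email)
             status = attrs["VerificationStatus"] if attrs else "NotRegistered"
             print(f"{email}: {status}")  # proceed only when all report 'Success'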
    
  12. Before committing the code into the CodeCommit repository, synthesize the pipeline stack against all the accounts involved in this deployment. This ensures all the necessary context values are populated into the cdk.context.json file and avoids DUMMY values being mapped.

    cdk synth --profile <AWS CLI profile of the central development account>
    cdk synth --profile <AWS CLI profile of the central network account>
    cdk synth --profile <AWS CLI profile of the central data account>
    cdk synth --profile <repeat for each AWS CLI profile of the RStudio deployment accounts>
    
  13. Deploy the Rstudio Fargate pipeline stack

    cdk deploy --profile <AWS CLI profile of the central development account> Rstudio-Pipeline-Stack
    

    Once the stack is deployed, monitor the pipeline by using the AWS CodePipeline service from the central development account. The name of the pipeline is RstudioDev. Different stacks will be visible in AWS CloudFormation from the relevant accounts.
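
    If you prefer to monitor from the command line, the hedged boto3 sketch below prints the status of each stage of the RstudioDev pipeline (it assumes credentials for the central development account are active):

        import boto3

        codepipeline = boto3.client("codepipeline")

        # RstudioDev is the pipeline name used by this project
        state = codepipeline.get_pipeline_state(name="RstudioDev")
        for stage in state["stageStates"]:
            latest = stage.get("latestExecution", {})
            print(f"{stage['stageName']}: {latest.get('status', 'NotStarted')}")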

Notes about the Deployment

  1. Once you have deployed RStudio and Shiny Server using the automated pipeline following the readme, you will be able to access the installation using a URL like below:

     Shiny server - https://shiny.<instance>.<r53_sub_domain>.<r53_base_domain> -- where instance, r53_base_domain and r53_sub_domain are the values you specified in cdk.json
    
     If you set rstudio_individual_containers to false in cdk.json,
    
     RStudio Server - https://rstudio.<instance>.<r53_sub_domain>.<r53_base_domain> -- where instance, r53_base_domain and r53_sub_domain are the values you specified in cdk.json
    
     If you set rstudio_individual_containers to true in cdk.json,
    
     RStudio Server - https://<user name>.rstudio.<instance>.<r53_sub_domain>.<r53_base_domain> -- where user name, instance, r53_base_domain and r53_sub_domain are the values you specified in cdk.json
    
  2. For RStudio Server, the default username is rstudio and the password is randomly generated and stored in AWS Secrets Manager. Individual user passwords are also randomly generated and stored in AWS Secrets Manager. Users receive their passwords by email at the email IDs configured in cdk.json. Only the users named rstudio have sudo access in the containers. The password for the default rstudio user is sent to the email given in sns_email_id in cdk.json.
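
     If you need to look up a password outside the email flow, it can be read from AWS Secrets Manager programmatically. The boto3 sketch below is illustrative; the secret name is a placeholder, as the actual names are generated by the stacks.

         import boto3

         secretsmanager = boto3.client("secretsmanager")  # use the deployment account profile

         # Placeholder secret name; look up the actual name in the Secrets Manager console
         secret = secretsmanager.get_secret_value(SecretId="rstudio-user-password-placeholder")
         print(secret["SecretString"])  # the stored password (may be plain text or JSON)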

  3. To work with your dataset in RStudio, you will need to upload files to the S3 bucket in the Central Data account. There are two folders in the S3 bucket: one for the hourly scheduled file transfer, and another that triggers the data transfer as soon as files arrive in the folder. These files are transferred to the EFS mounts (/s3_data_sync/hourly_sync and /s3_data_sync/instant_upload), which are mounted on all the RStudio and Shiny containers, as sketched below.
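
     Uploads can also be scripted. The boto3 sketch below is illustrative; the bucket name and folder prefixes are placeholders and should match the bucket and folders created in your central data account.

         import boto3

         s3 = boto3.client("s3")  # use the central data account profile

         bucket = "central-data-lake-bucket"  # placeholder bucket name

         # Hourly scheduled sync folder (placeholder prefix)
         s3.upload_file("dataset.csv", bucket, "hourly_sync/dataset.csv")

         # Instant-upload folder that triggers the on-demand DataSync transfer (placeholder prefix)
         s3.upload_file("urgent.csv", bucket, "instant_upload/urgent.csv")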

  4. The RStudio and Shiny containers share a common EFS mount (/srv/shiny-server) for sharing Shiny app files. From RStudio, save your files in /srv/shiny-server to deploy Shiny apps.

  5. RStudio containers are configured with a different persistent EFS mount for /home in each container. The Shiny containers share a similar /home EFS mount. Although the /home file systems persist through container restarts, recreating the containers deletes them. You are encouraged to save your work either in a Git repository or under your folder in /srv/shiny-server.

  6. Although EFS mounts are persistent and live through container restarts, when you delete the RstudioStage-<stage number>-Fargate-RstudioStack-<instance> or RstudioStage-<stage number>-EC2-RstudioStack-<instance> stacks, the /home file systems in the containers also get deleted. This is to facilitate automatic stack updates when you change cdk.json parameters like rstudio_individual_containers, rstudio_install_types or rstudio_users. Save your work and files in the other EFS mounts in the container, such as the /s3_data_sync/hourly_sync or /s3_data_sync/instant_upload/s3_instant_sync locations, before you recreate containers. You can also save files to a Git repository directly from the RStudio IDE.

  7. Deleting the RstudioStage-<stage number>-Efs-RstudioStack-<instance> stack deletes the other EFS mounts mentioned above. Although all EFS mount points are enabled for backup, you should check and verify that the backups serve your purpose.

  8. If you are changing the rstudio_users parameter in cdk.json, there is no need to delete anything. If you replace, add, delete or modify user names, the build will automatically update the stacks. The same holds true for the other configurable parameters like rstudio_install_types, rstudio_individual_containers, rstudio_container_memory_in_gb, shiny_container_memory_in_gb, rstudio_ec2_instance_types, number_of_shiny_containers and so on. The pipeline will automatically update the stacks with the changes. Remember to run rstudio_config.sh and check_ses_email.sh after your cdk.json changes.

     Please bear in mind that if you have already deployed the pipeline and created rstudio instances, you should not attempt to deploy an instance with the same instance name in another rstudio deployment account. This is because the rstudio and shiny URLs depend on the instance name to be unique.
    
     The only parameters that require stack deletion after you have deployed the pipeline and want to change the parameter values in cdk.json are:
    
             a. r53_base_domain
             b. r53_sub_domain
    
     This is because these two parameter values are used to form the rstudio and shiny URLs in Route 53 hosted zones, which are exported and used by downstream stacks via CloudFormation imports. Plan your URL formation for rstudio and shiny so that you do not need to change these parameters if you want to avoid stack deletion.
    
     Once you have deployed the pipeline, you can add new values to the parameter instances and the corresponding rstudio deployment account number in the parameter rstudio_account_ids. However, if you want to change a particular instance name, you will need to delete the existing stacks for that instance name first.
    
  9. The WAF rules will allow connections to the RStudio and Shiny containers only from the IPs/IP ranges you specify in cdk.json. If you do not want to restrict any IPs, do not provide any value against the parameter allowed_ips in cdk.json.

  10. Note that when you use the EC2 launch type, choose an EC2 instance type with enough memory for the pipeline to place a new task on the container instance during blue/green ECS deployment. Otherwise, the Fargate stack build may fail and you will need to delete stacks up to the Fargate stack before rerunning the pipeline.

  11. If you need to download libraries for your Shiny app, install the packages from your app.R files so that the packages are downloaded at runtime, or use the bastion container to log in to the containers and install libraries. If you recycle containers, these downloaded libraries need to be reloaded.

Deletions and Stack Ordering

Please refer to the individual readmes for the stack deletion sequence.

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.
