Infrastructure

Set up infrastructure for Weenhanceit projects. It covers one-time tasks that need to be done after bringing up a stock Ubuntu 18.04 server, and tasks that need to be done for each domain name/web site that you want to deploy to the server.

This document does not cover:

  • How to create an AWS server (or any other kind of server)
  • How to set up a DNS entry to point at the server. For users to access a web site set up according to this documentation, you need a DNS entry somewhere that associates the domain name with the IP address of the server.

Architecture

One server with multiple static sites and/or Rails applications, and one database instance on another server.

On the application server, Nginx proxies the requests through to the appropriate Rails application, or serves up the pages of the appropriate static web site. There is an instance of Puma for each of the Rails applications.

Each Rails application has its database configuration to talk to a database on the database server.

For applications that need Redis, we run a separate Redis instance for each application. The Redis instance listens on a domain socket at /tmp/redis.domain_name.
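As an illustration of that naming convention (the domain name below is a placeholder), you can derive an application's socket path and, if redis-cli is installed, check that the instance answers; redis-cli's -s flag connects via a Unix socket:

```shell
# Derive the socket path the way the per-application instances are named.
domain_name=example.com            # placeholder
socket=/tmp/redis.$domain_name
echo "$socket"

# If redis-cli is installed and the instance is running, this prints PONG.
command -v redis-cli >/dev/null && redis-cli -s "$socket" ping || true
```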

Prerequisites

This document assumes you've already set up an Ubuntu 18.04 server, and a Postgres instance.

Common First Steps

These steps are necessary whether you want to deploy a static web site or a Rails application.

  1. Log in to the server via SSH (<original key file> is from AWS or other provider)
    ssh -i ~/.ssh/<original key file> ubuntu@<URL-of-server>
  1. Get the set-up script:
    wget https://github.com/lcreid/rails-5-jade/raw/master/build/build.sh

The legacy way for step 2, pre-20.04, was to download and unzip an archive instead:

    wget https://github.com/weenhanceit/infrastructure/archive/master.zip
    sudo apt install unzip
    unzip master.zip

One-Time Server Set-Up

This step is only necessary after you first set up the server. It installs additional software needed on the server.

sudo ./build.sh -ncst server -r 6.1

You can do ./build.sh --help to see what all those parameters mean.

(The legacy way for the above, pre-20.04, was infrastructure-master/basic-app-server/build.sh.)

Also install the gem:

sudo gem install shared-infrastructure --no-document

Creating Users

Here's how to create named users with sudo privileges on the server. These instructions work if your desktop is running Ubuntu.

  1. Create a key pair on your local workstation if you don't already have one. Accept the defaults, and don't enter a pass phrase:
    mkdir ~/.ssh
    chmod 700 ~/.ssh  
    ssh-keygen -t rsa

This leaves a key pair in ~/.ssh/id_rsa (the private key) and ~/.ssh/id_rsa.pub (the public key).

  1. Log into the new (remote) server:
    ssh -i ~/.ssh/<original key file> ubuntu@<URL-of-server>
  1. Create the new user. You have to enter a password here. You will need the password when executing sudo, but it won't be used for logging in:
    sudo adduser --gecos "" <new-user-name>
    sudo adduser <new-user-name> sudo
  1. On the local workstation again, upload your public key to allow login to the remote server without a password:
    ssh-copy-id <new-user-name>@<URL-of-server>
  1. Test that the login works without a password:
    ssh <new-user-name>@<URL-of-server>
  1. If login works without a password, go back to the remote server, disable password logins, and clear the shell history so as not to leak information about users:
    # Disable password logins by setting PasswordAuthentication no in
    # /etc/ssh/sshd_config, then restarting sshd: sudo systemctl restart sshd
    # See, for example: https://www.cyberciti.biz/faq/how-to-disable-ssh-password-login-on-linux/
    history -c && history -w

Note that you don't want to expire the user's password as some on the internet suggest, because that prevents the user from doing sudo.

Creating a Static Web Site

This sets up an Nginx server block for a given domain name. In all the examples that follow, replace domain-name with your domain name.

sudo create-server-block -u <deploy-run-user-name> <domain-name>

The root directory of the static web site files is /var/www/<domain-name>/html. Note that the script doesn't seem to create the html directory, so you have to do that yourself.
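A minimal sketch of that manual step, using a scratch prefix so you can dry-run it locally; on the real server set the prefix to empty and run the commands with sudo (the domain name is a placeholder):

```shell
# Create the html root that create-server-block omits.
root=/tmp/infra-demo            # scratch prefix for a dry run; "" on the real server
domain_name=example.com         # placeholder
mkdir -p "$root/var/www/$domain_name/html"
ls -d "$root/var/www/$domain_name/html"
```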

Now you can deploy the static web site. [TODO: How to deploy.]

Once deployed, remember to reload the Nginx configuration:

sudo nginx -s reload

Creating a Rails Application

This sets up:

  • An Nginx server block for a given domain name. The server block proxies via a domain socket to Puma
  • A systemd service file that runs an instance of Puma receiving requests on the same domain socket.

Rails Environment Variables

Earlier versions of this gem initialized some environment variables in the systemd unit file. We no longer do so, because it's incompatible with the Rails 5.2 way of handling secrets, which is now our standard way of handling secrets.

Create the Rails Application

Creating the Rails application includes:

  • Running a program to set up:
    • Nginx configuration files to forward requests to your domain, to the Rails application
    • systemd files to start the Rails application automatically when the server starts
  • Creating a user in the database for the application

If the application does not use send_file to ask Nginx to send private files:

sudo create-rails-app -u <deploy-run-user-name> <domain-name>

If the application uses send_file to ask Nginx to send private files, add the -a flag:

sudo create-rails-app -u <deploy-run-user-name> -a <location> <domain-name>

where <location> is the name of the directory under Rails.root where the private files are found (typically /private).

If you forget to use the -a or -u flags, you can safely re-run this script later with the flag.

The above will tell you how to get a certificate for the site, but you can't do that yet. You need to deploy the application the first time, so the directories get created.

Before deploying the application, however, you need to set up the database. The following is what you need for an AWS RDS database:

psql -h <DB host name> -U <master user> -d postgres
create role <user name for application>
  with createdb
  login password '<password for application user>';
grant rds_superuser to <user name for application>;
\q
psql -h <DB host name> -U <user name for application> -d postgres
<enter password for application user>
create database <database for application>;
\q

The first step above will ask you for the password of the master user in the Postgres database.
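If you'd rather not type the SQL interactively, you can put it in a file and feed it to psql with the -f flag; the role name and password below are placeholders:

```shell
# Write the role-creation SQL to a file for review, then run it against the server.
cat > /tmp/create-app-role.sql <<'SQL'
create role app_user
  with createdb
  login password 'app_password';
grant rds_superuser to app_user;
SQL

# Then: psql -h <DB host name> -U <master user> -d postgres -f /tmp/create-app-role.sql
cat /tmp/create-app-role.sql
```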

For a self-hosted Postgres database set up with a postgres user, and local database access authenticated by Linux user:

sudo -iu postgres psql -d postgres
create role <user name for application>
  with createdb
  login password '<password for application user>';
alter user <user name for application> with superuser;
\q
psql -h <DB host name> -U <user name for application> -d postgres
<enter password for application user>
create database <database for application>;
\q

The root directory of the Rails application is /var/www/<domain-name>/html.

Now you can deploy the Rails app. Note that before you deploy, you have to do bundle binstubs puma in your Rails app, and then commit the bin directory to GitHub (or another accessible repository).

Assuming the Rails application has been prepared for deploy via Capistrano, try to deploy it now:

cap production deploy

The first deploy will fail, since neither the credentials nor the database exist. After the first deploy has failed, copy the master.key file to the appropriate place on the server. Go to the root directory of the Rails application on your workstation and type:

rsync config/master.key <URL-of-server>:/var/www/<domain-name>/shared/config

Then, log in to the application server as the deploy/run user and do:

cd /var/www/<domain name>/html/
rails db:setup

Once deployed, remember to reload the Nginx configuration:

sudo nginx -s reload

The deployment script should start the Puma service. Check Puma's status with:

sudo systemctl status <domain-name>

If it's not running, try to restart it with:

sudo systemctl restart <domain-name>

If Puma isn't running, /var/log/syslog is a good place to look for hints as to what's wrong. Note that systemd tries several times to start the application, so you will likely see the same set of messages repeated near the end of /var/log/syslog.

Set Up Redis for a Rails Application

The One-Time Server Set-Up installs Redis with a basic configuration useful for simple Redis testing. For production applications on a shared host, further build and configuration are required.

Our approach is to use at least one Redis instance per application. This allows us to configure each instance independently, and reduces the impact of a possible compromise of a Redis instance.

To create an instance of Redis for a Rails application at a given domain-name, type:

sudo ./create-redis-instance.sh <domain-name>

This creates the configuration file and necessary directories to run an instance of Redis for the application. A systemd unit file is created, so you can start the instance of Redis using:

sudo systemctl start redis.<domain-name>

and stop the instance of Redis using:

sudo systemctl stop redis.<domain-name>

You can see the status of the instance of Redis using:

sudo systemctl status redis.<domain-name>

The Redis log messages are written to /var/log/syslog.

The create script sets up the Redis instance to automatically start when the server starts, but it doesn't start the instance when it's first created. Remember to start the instance with:

sudo systemctl start redis.<domain-name>

Set Up the Rails Application for Sidekiq

The Sidekiq configuration in the Rails application must be set correctly to work with our standard production infrastructure. config/sidekiq.yml must contain at least this:

:queues:
  - default
  - mailers

config/initializers/sidekiq.rb must contain at least this:

url = case Rails.env
when "production"
  ENV["REDIS_URL"]
else
  "redis://localhost:6379"
end

Sidekiq.configure_server do |config|
  config.redis = { url: url }
end

Sidekiq.configure_client do |config|
  config.redis = { url: url }
end

Set Up Sidekiq for a Rails Application

To set up Sidekiq for a Rails application, first follow Set Up Redis for a Rails Application. Then create the Rails application, deploy it, and make sure it's running. Since Sidekiq is in many ways another instance of the Rails application, make sure the Rails application runs before trying to debug Sidekiq problems.

To create an instance of Sidekiq for a Rails application at a given domain-name, set up the environment variables for the Rails application, if they're not already set.

Then, type:

sudo -E ./create-sidekiq-instance.sh <domain-name>

(Don't forget the -E to sudo. It causes the environment variables to be passed to the script.) This creates the configuration file and necessary directories to run an instance of Sidekiq for the application. A systemd unit file is created, so you can start the instance of Sidekiq using:

sudo systemctl start sidekiq.<domain-name>

and stop the instance of Sidekiq using:

sudo systemctl stop sidekiq.<domain-name>

You can see the status of the instance of Sidekiq using:

sudo systemctl status sidekiq.<domain-name>

The Sidekiq log messages are written to /var/log/syslog.

The create script sets up the Sidekiq instance to automatically start when the server starts, but it doesn't start the instance when it's first created. Remember to start the instance with:

sudo systemctl start sidekiq.<domain-name>

TLS (formerly SSL)

All Internet traffic should be encrypted, if you want to be a good Internet citizen. We get certificates from Let's Encrypt, using the EFF's Certbot. You should read the documentation for both of those before you run the scripts to create HTTPS sites.

Certbot needs a running web server, and these scripts require that the web server be responding to port 80. So you have to do the above installation steps to create the site for HTTP before proceeding with the rest of this section, which configures the site for HTTPS.

For Rails apps, run this command and answer its questions to obtain and install certificates:

sudo certbot certonly --webroot -w /var/www/<domain-name>/html/public -d <domain-name> [-d <domain-name>]...

For static sites, run this command and answer its questions to obtain and install certificates:

sudo certbot certonly --webroot -w /var/www/<domain-name>/html -d <domain-name> [-d <domain-name>]...

(The difference is the path after the -w argument.)

For either type of site, re-run the site set-up script. It will detect the key files, and configure the site for TLS/HTTPS access, including redirecting HTTP requests to HTTPS.

Rails:

sudo create-rails-app -u <deploy-run-user-name> <domain-name>
sudo nginx -s reload

Static site:

sudo create-server-block -u <deploy-run-user-name> <domain-name>
sudo nginx -s reload

Test renewal with:

sudo certbot renew --dry-run

Testing TLS

Go to the SSL test page to test that the TLS implementation is working. You should get an A+ with the above set-up.

Default Site

We redirect anything asking for the IP address of the server to our home page. To set that up, go to the directory with all the scripts and run:

sudo ./create-default-server.sh
sudo nginx -s reload

Reverse Proxy

(Experimental implementation.) To create a reverse proxy to forward HTTPS requests to an HTTP-only site (this works, but keep reading before you do it):

sudo gem install shared-infrastructure
sudo create-reverse-proxy <proxy-domain-name> <target-url>

This creates a reverse proxy accessible via HTTP.

The Let's Encrypt certbot program for getting certificates expects a publicly accessible directory behind the proxy-domain-name. A proxy normally doesn't have that, and create-reverse-proxy doesn't create one.

One solution is to create a certificate covering multiple domains when configuring the original site. For example, if the main site is example.com but you want to reverse-proxy https://search.example.com to another address, you could create the site like this:

create-server-block example.com search.example.com

then get the certificate, which will cover both domains. Then create the reverse proxy like this:

create-reverse-proxy --certificate-domain example.com http://search.example.com

I thought this would fail because the site is now using HTTPS, so the certbot command wouldn't work. And it does indeed fail, because each site needs to serve its own challenge download. So the reverse proxy needs to have a location that maps to /.well-known.

Issues

Add Redis set-up to Ruby script

  • Get a stock 18.04 box
  • Run rails-5-jade set-up script with the right arguments
  • See where all the Redis files are at
  • See if Redis config is set up for systemd -- no, it's not. Well, actually it is set up to start running automatically and all.
  • Configure Redis to run for a different database for each user
  • Try it out until it's right
  • Write test cases for scripted version
  • Refactor a bit if necessary
  • Write the code to build Redis support

No `html` directory to start.

A Capistrano deploy expects the "root directory" of the Rails application to be a symbolic link that Capistrano points at different release directories. As long as non-Capistrano deploys work when they don't find the html directory, Capistrano works if the directory isn't there at all.

Ruby version wasn't working for multiple domains in one site

I had never implemented multiple domains for one site in the Ruby version. When I used the script, I broke autism-funding.ca, because it was the second domain on autism-funding.com. I didn't realize it until the cert came up for renewal. I spent days looking for the fix before I realized it, and then another two days because I didn't fix all the places.

Docker AKA Containerized Infrastructure

We should consider containerized infrastructure.

Advantages

  • Reduce the need for Jade Systems and WeenhanceIT to support a lot of our own infrastructure builds. Docker has pre-built Postgres, MS SQL Server, Nginx, Redis, Ruby, and other containers. Currently, we have both this project, for setting up the server and individual sites, and the Vagrant box builds that would require significant new development to build new Ubuntu 20.04 boxes. This project never did get the Redis install converted into the new Ruby implementation with tests, so there's a gap there that would need to be filled if we continued with the non-containers approach
  • The development model is more naturally close to the production model. The containerized development environment would involve a stand-alone Postgres server or a stand-alone Redis server. It would also be more practical to spin up an Nginx container to test the production configuration
  • Looks like it will be lighter weight, at least on Linux workstations
  • May afford some cost savings for production deployments -- although to take advantage of them on AWS we probably have to learn a whole new set of AWS complexities
  • More people are doing containers than Vagrant, so there are more resources on-line
  • Some of us can probably leverage some work time to learn about and practice Docker

Disadvantages

  • Time spent learning the new technology
  • Uncertainty whether we'll have more "it works for me" issues because the container technology has to provide different amounts of VM support, depending on the host workstation
  • The above is perhaps better expressed as: Linux workstation users will have the smoothest experience. macOS and Windows users may have to do more work to get things running and keep things running
  • Surprises down the line where things don't work the way we had hoped
  • It's not the Rails way to do things, if you want to work on Rails itself or submit PRs to Rails (but why not do our own thing)?

Outstanding Questions

  • How to deploy? You're supposed to deploy containers. Does it make sense to do a Capistrano deploy to a container? I suppose we could
  • What's the model for static web sites? Do they get their own container? What does that container look like?
  • How to manage the Nginx configuration files in staging and production? This may take us down the path of the Kubernetes-like tools -- yet another level of tooling
  • It seems like Dockerfiles need to change based on the host OS. May need some investigation here to see if it's true

To Do

  • Try a development environment for one of our apps (probably Outages or the Dashboard home site)
  • Write up the production deployment model AKA figure out how to put Nginx in front of multiple containers

Links

Don't use group `www-data`

Now that we use one user per site, is there a reason to give group www-data to all the files? Is it in fact a bad thing to do?

Logrotate

Need to set up logrotate for each application's log/production.log.

The Puma logs don't grow at all, so no need to rotate them.

Redis user

Should Redis run under the same user as the rest of the site? How do we control access to Redis anyway?

Create the user as part of the web site creation

Now that we're moving to one user per site, it might make sense to create the user for the site in the script.

One pitfall is that the script is written so that it can be run many times (e.g. to update the configuration of a site), but we only want to create the user once.

Restarting Rails Apps

The current way of restarting a Rails app requires sudo privileges because we start with systemd. And we start with systemd because we want the site to be available automatically on start-up.

We could simply try to restart with the tmp/restart.txt approach. The net seems to say that works for Puma.

There is an interesting way you can start services on demand ("socket activation"), which covers the automatic start issue. Maybe that can be combined with a service that runs under a user's permission, to make it so restarting with systemctl can be done by the deploy user.

Docker for Production

Split from #23. #23 had questions that apply more to a production deploy than to development use.

  • How to deploy? You're supposed to deploy containers. Does it make sense to do a Capistrano deploy to a container? I suppose we could
  • What's the model for static web sites? Do they get their own container? What does that container look like?
  • How to manage the Nginx configuration files in staging and production? This may take us down the path of the Kubernetes-like tools -- yet another level of tooling
  • Where do the Letsencrypt files go? And how to handle them?
  • How to handle Redis? Can each application have its own Redis container, or is it easy to have one Redis for everyone (Redis doesn't really do great authentication if it faces a hostile world).

An "official" view of Rails on Docker: https://github.com/docker/labs/tree/master/developer-tools/ruby

Nginx show maintenance page if site is down.

Add to the Nginx configuration to show a maintenance page if the site returns 503 (to be confirmed), or doesn't respond, or there should be some way to make it happen when planned maintenance.

How to use CodeDeploy to deploy to test/staging/training-type environments

There is an immediate need to deploy to staging, which would be an Internet-accessible site that is as identical as possible to production (except for data in the database). "Identical" includes the way it's deployed. The purpose in the short term at least is to be able to test and fix deploys without affecting production.

Because of the way we share environments on our production server, it's not obvious how CodeDeploy can be made to deploy to different directories, without a lot of risky manual intervention.

Restart after deploy without sudo

Partly because we now have a user per application, we might be able to enable a restart of the application (after a deploy, for example) without needing sudo privileges.

Create key files for user

One way of deploying from GitHub or BitBucket private repositories involves the server having its own key files. One puts the public key from the server on the repository. (The hubs don't allow you to use the same public key for two repositories, which makes it difficult to use one's default key from one's workstation.)

If that's the approach, it would make sense to generate the keys as part of the user build process.

Related to #16.

Support HTTPS for the Home Page

GitHub Pages with a custom domain doesn't support HTTPS (for reasons that make sense if you think about it -- how would they support everyone setting up their own certificates?).

One of the suggestions for GitHub Pages is to use a proxy for the custom domain, and have it proxy to the .github.io domain, which does support HTTPS (they have a wildcard certificate for *.github.io). Since we already have a server redirecting, we could have it proxy instead, and force the scheme to HTTPS.

So the Nginx default server would also be the server for weenhanceit.com. It would force HTTP to HTTPS, and would proxy requests to weenhanceit.github.io over HTTPS.

But then if we were doing all that, we might just generate the Jekyll locally, and upload it to our server as a static site, just like any other static site we host.

Implicit in this issue is redirecting the default site from our server(s) to weenhanceit.com.

Set RAILS_ENV in the environment build.

We need to use this script to set up more than one RAILS_ENV. Add a -e/--rails-env flag to set the RAILS_ENV. The RAILS_ENV is used in the systemd unit file, and in the logrotate file.

Migrate to Local Server

It's great to use AWS compared to real servers, but the reality is that it costs money, especially when you need a database.

You can self-host like in the old days. There are a number of ways:

  1. Everything on a single physical server.
  2. Multiple VMs on a single physical server.
  3. Docker containers on a single physical server.

At the end there's a section on issues common to all approaches.

Single Physical Server

C'mon. We stopped doing this 15 years ago.

Multiple VMs on a Single Physical Server

This is the model we know best. But we have to learn to set up the server. I did this maybe 15 years ago and I still have essentially the old set-up running on my server. There must be advances, e.g. pre-built template VMs (currently you have to install Ubuntu for every server you create).

There are multiple variants on this with the more modern tools for managing clouds, like the Canonical tool I looked at last year, whose name has completely slipped my mind right now.

The architecture, if it were like what we were doing on AWS:

  • A single VM, just bigger than the AWS ones, to run Nginx, all Rails instances, and Redis if needed.
  • A Postgres server.

If we wanted to be more clever, we could try:

  • A small VM for Nginx.
  • A small VM for each Rails instance, with Redis if needed.
  • A Postgres server.

This approach would solve the Redis set-up issue (it's pretty easy to use the out-of-the-box Redis set-up if it's only dealing with one application). It may take more resources.

This approach would also

Advantages

  • We mostly know how to do this.
  • We have scripts to do most of this already (although they're for Ubuntu 18.04, they're probably good enough).
  • In particular, we know what the architecture would look like.

Disadvantages

  • We never had a good solution for setting up Redis for various servers.
  • Take more resources than the Docker solution.
  • More set-up than Docker in some ways, in that you can get Redis, Nginx, Postgres, and Ruby images from the repository.
  • Traditional Postgres upgrades.

Docker Containers on a Single Physical Server

While we don't really know what this would look like, presumably we'd have something like this:

  • An Nginx container sending requests to each application we have deployed.
  • A Postgres container for all applications that need a database.
  • A Ruby application container for each application, or possibly more than one (just for fun).
  • A Redis container for each application that needs it.
  • A cron container. TBD whether there's a convenient way to do a single cron container, or if it's better to have one for each application that needs it. (This is one of the fun differences about a container -- it's a single process, so it doesn't have an operating system running periodic processes, like cron, in the background.)

We would probably want to run the Docker containers in a VM, to keep the host physical server in a fairly stock configuration, and to give us some flexibility in trying Docker things. We can try new stuff without breaking our existing VM, and if we really make a mess we just blow it away and start with a fresh VM.

Advantages

  • We can get Redis, Nginx, Postgres, and Ruby images from the repository.
  • Supposedly will take fewer resources.
  • While we have to learn the overall architecture, the fact that we can spin up a really lightweight Redis for each app that needs it is attractive.
  • Postgres upgrades by getting a new container from a new image.

Disadvantages

  • We have to learn how to set up a "good" Docker server.
  • If we discover that we need more than Docker Compose, there's another learning curve (probably for Kubernetes, which seems to be getting a bad rap of late for its complexity).
  • There are lots of differences between a container and a VM that will trip us up.
  • The whole question about how to build and deploy Rails containers will have to be addressed.
  • Postgres upgrades we have to see how Postgres databases like it when suddenly a newer version starts using it.

Issues Common to All Approaches

  • I have to let SSH through my firewall. I'm not doing that now. A number of machines internally don't allow any connections, but I'm not sure I want to rely on that. I might have to do a bit more monitoring and control of SSH access.
  • My IP will change whenever the power fails or I need to do electrical maintenance in the house.
  • We'll have more outages, and may need more monitoring because we won't otherwise know the outage is happening.

Jump Server

See:

A small jump server has some interesting benefits:

  • By doing administration from the jump server, we only need to support admin tasks from a Linux box, although going straight from desktop to target server isn't that hard.
  • Once we get it set up, it's likely easier to keep updated and secure.
  • A jump server is a model used in lots of places, so we're likely to find examples on the Internet to help us.

It also has issues or at least challenges:

  • The jump server needs access to the Internet for updates, access to GitHub, etc., but ideally it doesn't have any access to the rest of my internal network, or even to the host it runs on. This requires Larry to pretend he's a network admin, with Stan now four time zones away.
  • There may be use cases where we want to connect straight through from our workstations to containers, or at least the Docker host VM. I haven't set up that level of proxying/forwarding before, but it seems to be pretty much out-of-the-box functionality for ssh.

Network Noodling

  • The VMs should not be accessible from the containers.
  • The VMs should not have access to my local network, and therefore neither should the containers.
  • The VMs should not have access to their host.

Each of these levels of routing control needs to be enforced outside the controlled entity. So the networking in the docker-compose.yml has to be set up right. Or is there another way?

This might be really hard or impossible.

`chown` and `chmod` domain root.

The domain root has to belong to the deploy user and www-data group, and it has to have the sticky bit set (I believe). While we're at it, look at UMASK (the last one might be on the deploy side).
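A sketch of what that might look like; the bit in question is more likely the setgid bit (mode 2775), which makes new files inherit the directory's group. A scratch prefix lets you dry-run it; the domain, deploy user, and group are placeholders, and on the real server you'd use an empty prefix and sudo:

```shell
root=/tmp/perm-demo             # scratch prefix; "" and sudo on the real server
mkdir -p "$root/var/www/example.com"
# chown deploy:www-data "$root/var/www/example.com"   # needs root; shown for illustration
chmod 2775 "$root/var/www/example.com"                # setgid so the group propagates
stat -c '%a' "$root/var/www/example.com"
```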
