
kni-upi-lab's Introduction

KNI UPI Automation Framework

The KNI UPI Automation Framework (AF) provides a series of scripts that simplify the deployment of an OCP 4.x cluster using the UPI method of installation. The KNI AF is designed to leverage a prescribed hardware/network setup in order to simplify the deployment process.

The prescribed infrastructure architecture is shown below.

[KNI AF architecture diagram]

The architecture requires:

  • A provisioning/bastion server where several helper applications are run. CentOS 7.6 or greater, or RHEL, can be used.
  • One or more hardware nodes for Master Nodes
  • Zero or more hardware nodes for Worker Nodes
  • All master/worker hosts require two physical interfaces and an Out-Of-Band management network connection (IPMI).
    • Interface #1 will be used for provisioning and must have pxeboot capability. This interface is attached to the provisioning network.
    • Interface #2 will carry all API/Cluster and application traffic. This interface is attached to the baremetal network.
    • The OOB management network must be accessible from the provisioning host
  • Network
    • Provisioning network -- A private, non-routed network that carries pxeboot traffic
    • Baremetal network -- This network carries all OpenShift SDN traffic.
      • DHCP -- By default, DHCP is provided by an instance of DNSMASQ running on the provisioning host. An external DHCP server can be used instead.
      • DNS -- By default, DNS is provided by an instance of CoreDNS running on the provisioning host. An external DNS server can be used instead (See OpenShift UPI/baremetal documentation for DNS requirements)
      • NTP -- An NTP server must be accessible from this network

UPI Background

Prerequisites

Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster.

SSH Private Key

For OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, you must provide an SSH key that your ssh-agent process uses to the installer.

You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user’s ~/.ssh/authorized_keys list.

In most cases, this key is the id_rsa.pub of the installation account, which might need to be created on the provisioning host.
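
If the installation account does not already have a key pair, a minimal sketch for creating one on the provisioning host and loading it into ssh-agent (the key path and empty passphrase are assumptions; adjust to your policy):

ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa   # creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
eval "$(ssh-agent -s)"                             # start ssh-agent for this shell
ssh-add ~/.ssh/id_rsa                              # hand the private key to ssh-agent
cat ~/.ssh/id_rsa.pub                              # this public key goes into the sshKey field of install-config.yaml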

PULL-SECRET

From the OpenShift Infrastructure Providers page (https://cloud.redhat.com/openshift/install), download your installation pull secret. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
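
A minimal check that the downloaded pull secret is intact before using it (the file name pull-secret.json is an assumption; use whatever name you saved it under). The contents of this file are what you paste into the pullSecret field of install-config.yaml.

python -c "import json; json.load(open('pull-secret.json')); print('pull secret parses as JSON')"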

Quick Start

  1. Install a CentOS 7/RHEL provisioning host
  2. Install git and clone this repository (see the example commands after this list)
  3. For a fully-virtualized install, please see Virtualized Install
  4. For a disconnected installation of the cluster, please see Disconnected Install
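
Example commands for steps 1 and 2, assuming a yum-based host:

yum install -y git
git clone https://github.com/redhat-nfvpe/kni-upi-lab.git
cd kni-upi-lab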

Populate cluster/site-config.yaml

The site-config.yaml file describes the infrastructure environment for the cluster. Fill in the fields to fit your environment.

infrastructure:
  network:
    provisioningIpCidr: 172.22.0.0/24  # The provisioning network's CIDR
    baremetalIpCidr: 192.168.111.0/24  # The baremetal network's CIDR
    # baremetal network default gateway, set to proper IP if provHost/services/baremetalGateway == false
    baremetalGWIP: 192.168.111.4
    dns:
      # cluster DNS, change to proper IP address if provHost/services/clusterDNS == false
      cluster: 192.168.111.3
      external1: 10.11.5.19 
#     external2: 10.11.5.19 
#     external3: 10.11.5.19 

  provHost:
    interfaces:
      provisioning: eno2                  # Prov host provisioning network intf
      provisioningIpAddress: 172.22.0.10  # Prov host provisioning network intf IP address
      baremetal: ens1f0                   # Prov host baremetal network intf
      baremetalIpAddress: 192.168.111.6   # Prov host baremetal network intf IP address
      external: eno1                      # Prov host external (internet) interface
    bridges:
      provisioning: provisioning          # Name to use for the prov host provisioning bridge
      baremetal: baremetal                # Name to use for the prov host baremetal bridge
    services:
      # Does the provisioning host provide DHCP services for the baremetal network?
      baremetalDHCP: true
      # Does the provisioning host provide DNS services for the cluster?
      clusterDNS: true
      # Does the provisioning host provide a default gateway for the baremetal network?
      baremetalGateway: true

Populate cluster/install-config.yaml

The baseDomain field sets the cluster domain. The metadata/name field sets the name of the cluster. The remaining fields (except for hosts defined below) should not be changed.

Populate the hosts: section with information about the hardware hosts you are going to use for the cluster.

Field                Default          Definition
name                 none             Either master-N or worker-M
role                 none             Either master, worker, or nodeploy
bmc/address          none             ipmi://ipaddr
bmc/credentialsName  none             Name of the yaml file containing the IPMI user/password
bootMACAddress       none             MAC address of the provisioning network interface
sdnMacAddress        none             MAC address of the baremetal network interface
hardwareProfile      default          Not used
osProfile/
  type               rhcos            OS type: rhcos, centos, or rhel
  pxe                bios             Boot method: bios or uefi
  install_dev        sda              Disk to install the OS onto
  initrd             depends on type  Location of the initramfs (should not be set)
  kernel             depends on type  Location of the boot kernel (should not be set)
  kickstart          depends on type  Kickstart file to use (should not be set)

When role==nodeploy, the host is not provisioned. nodeploy can be used to temporarily remove / add nodes.

Example below:

   hosts:
      # Master nodes are always RHCOS-based
      # You must define either 1 or 3 masters
      -  name: master-0 # Must be of the form master-N or worker-M
         role: master   # Can be either master|worker|nodeploy
         bmc:
            address: ipmi://10.19.110.12
            credentialsName: ha-lab-ipmi
         bootMACAddress: 0C:C4:7A:DB:A9:93  # provisioning network mac
         sdnMacAddress: 0c:c4:7a:19:6f:92   # baremetal network mac
         # sdnIPAddress: 192.168.111.11     # Optional -- Set static IP
         hardwareProfile: default           # NOT USED
         osProfile:
            # With role == master, the osType is always rhcos
            # and with type rhcos, the following settings are available:
            pxe: bios         # PXE boot type: either bios (default if not specified) or uefi.
                              # All masters must have the same pxe value: either all defaulting
                              # to bios, or all with pxe: uefi.
            install_dev: sda  # where to install the operating system (sda is the default)
      -  name: worker-2
         role: worker
         bmc:
            address: ipmi://110.19.110.8
            credentialsName: ha-lab-ipmi
         bootMACAddress: 0C:C4:7A:DB:AC:03
         sdnMacAddress: 0c:c4:7a:19:6f:7e
         hardwareProfile: default
         osProfile:
            type: rhel
            # Don't set the following unless you know what you are doing
            initrd: assets/rhel8/images/pxeboot/initrd.img # (default if not specified)
            kernel: assets/rhel8/images/pxeboot/vmlinuz # (default if not specified)
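
Before running the automation, it can help to confirm that each BMC listed under bmc/address is reachable from the provisioning host. A minimal sketch using ipmitool, with the example address above and placeholder credentials:

ipmitool -I lanplus -H 10.19.110.12 -U <ipmi_user> -P <ipmi_password> chassis status      # should report the current power state
ipmitool -I lanplus -H 10.19.110.12 -U <ipmi_user> -P <ipmi_password> chassis bootdev pxe # confirms PXE boot can be selected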

Select version

Edit the common.sh file and either keep the current default values

OPENSHIFT_RHCOS_MAJOR_REL="4.2"
export OPENSHIFT_RHCOS_MAJOR_REL

OPENSHIFT_RHCOS_MINOR_REL="4.2.0"
export OPENSHIFT_RHCOS_MINOR_REL

or change to latest

OPENSHIFT_RHCOS_MAJOR_REL="latest"
export OPENSHIFT_RHCOS_MAJOR_REL

OPENSHIFT_RHCOS_MINOR_REL="latest"
export OPENSHIFT_RHCOS_MINOR_REL

Run prep_bm_host.sh script

The prep_bm_host.sh script is located in the repo directory and prepares the host for provisioning.

./prep_bm_host.sh

Make Procedure

make clean
make all
make con-start

If there are no errors,

cd terraform/cluster
terraform init
terraform apply --auto-approve
cd ..
cd ..
openshift-install --dir ocp wait-for install-complete

Wait for master nodes to deploy
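
While waiting, a minimal sketch for monitoring the masters, assuming the ocp asset directory used above and the oc client installed by the framework:

export KUBECONFIG=ocp/auth/kubeconfig   # kubeconfig generated by openshift-install
oc get nodes                            # masters appear here and should eventually report Ready
oc get clusterversion                   # shows overall installation progress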

cd ../workers
terraform init
terraform apply --auto-approve
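
Once the workers PXE-boot and install, they usually need their certificate signing requests approved before they register as nodes. A hedged sketch of the standard OpenShift CSR approval flow (the kubeconfig path assumes the layout used above):

export KUBECONFIG=../../ocp/auth/kubeconfig
oc get csr                                              # look for requests in Pending state
oc get csr -o name | xargs oc adm certificate approve   # approve them; run again for the second (serving-cert) round
oc get nodes                                            # the workers should move to Ready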

Clean up procedure if you want to reinstall.

cd terraform/cluster
terraform destroy --auto-approve
cd ../..
make clean
./clean_bm_host.sh 

Enjoy your OpenShift cluster.

Automation Operation

Procedure

  1. Configure DHCP.

  2. Provision the required load balancers.

  3. Configure the ports for your machines.

  4. Configure DNS.

  5. Ensure network connectivity.

DHCP

The KNI AF conditionally instantiates two dnsmasq containers to provide DHCP for the infrastructure. The dnsmasq containers provide DHCP for the provisioning network and the network where the OCP SDN resides. A CoreDNS server is also started to provide DNS services for the cluster.

Load Balancers

The KNI AF conditionally instantiates haproxy as a load balancer for the cluster. KNI AF can either provide the haproxy load balancer only during install or as a permanent part of the cluster setup. A load balancer is required for installation.

DNS

The KNI AF conditionally instantiates a coredns container to provide minimal DNS support for the cluster. The coredns server provides the authoritative and recursive DNS service required by OCP.
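
A quick way to sanity-check these helper services from the provisioning host after make con-start (the container runtime and container names may differ in your setup):

podman ps   # or 'docker ps' -- the dnsmasq, coredns, haproxy, and matchbox containers should be running
dig +short api.<cluster_name>.<base_domain> @192.168.111.3   # query the cluster DNS IP from site-config.yaml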

Networking requirements for user-provisioned infrastructure

All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking configured in initramfs during boot to fetch their Ignition config from the Machine Config Server.

During the initial boot, the machines require a DHCP server in order to establish a network connection to download their Ignition config files.

It is recommended to use the DHCP server to manage the machines for the cluster long-term. Ensure that the DHCP server is configured to provide persistent IP addresses and host names to the cluster machines.

The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another acceptable approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests.

You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster.

The following ports must be open between machines in the cluster:

  • 6443 -- Kubernetes API
  • 9000-9999 -- Host level services, including the node exporter on ports 9100-9101 and the Cluster Version Operator on port 9099
  • 10250-10259 -- The default ports that Kubernetes reserves
  • 10256 -- openshift-sdn
  • 30000-32767 -- Kubernetes NodePort

Network topology requirements

The infrastructure that you provision for your cluster must meet the following network topology requirements.

OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat.

Load balancers

Before you install OpenShift Container Platform, you must provision two layer-4 load balancers.

API load balancer:

  • 6443 -- Kubernetes API server. Bootstrap and control plane machines. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane.
  • 22623 -- Machine Config server. Bootstrap and control plane machines. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane.

Application Ingress load balancer:

  • 443 -- HTTPS traffic. The machines that run the Ingress router pods (compute/worker nodes by default).
  • 80 -- HTTP traffic. The machines that run the Ingress router pods (compute/worker nodes by default).

A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes.
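
A hedged way to spot-check the load balancer endpoints once the control plane starts coming up (-k is used because the cluster certificates are not publicly trusted; the URLs follow the DNS records defined below):

curl -k https://api.<cluster_name>.<base_domain>:6443/version        # Kubernetes API through the load balancer
curl -k https://api-int.<cluster_name>.<base_domain>:22623/config/worker -o /dev/null -w '%{http_code}\n'   # Machine Config server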

User-provisioned DNS requirements

The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file.

api.<cluster_name>.<base_domain>

This DNS record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.

api-int.<cluster_name>.<base_domain>

This DNS record must point to the load balancer for the control plane machines. This record must be resolvable from all the nodes within the cluster.

The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If it cannot resolve the node names, proxied API calls can fail, and you cannot retrieve logs from Pods.

Routes

*.apps.<cluster_name>.<base_domain>

A wildcard DNS record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
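
A minimal sketch for verifying these records resolve (dig is provided by the bind-utils package; substitute your cluster name and base domain):

dig +short api.<cluster_name>.<base_domain>
dig +short api-int.<cluster_name>.<base_domain>
dig +short test.apps.<cluster_name>.<base_domain>   # any name under *.apps should resolve via the wildcard record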

etcd

etcd-<index>.<cluster_name>.<base_domain>

OpenShift Container Platform requires DNS records for each etcd instance to point to the control plane machines that host the instances. The etcd instances are differentiated by <index> values, which start with 0 and end with n-1, where n is the number of control plane machines in the cluster. The DNS record must resolve to a unicast IPv4 address for the control plane machine, and the records must be resolvable from all the nodes in the cluster.

_etcd-server-ssl._tcp.<cluster_name>.<base_domain>

For each control plane machine, OpenShift Container Platform also requires an SRV DNS record for the etcd server on that machine with priority 0, weight 10 and port 2380. A cluster that uses three control plane machines requires the following records:

_service._proto.name. TTL class SRV priority weight port target
_etcd-server-ssl._tcp.<cluster_name>.<base_domain> 86400 IN SRV 0 10 2380 etcd-0.<cluster_name>.<base_domain>.
_etcd-server-ssl._tcp.<cluster_name>.<base_domain> 86400 IN SRV 0 10 2380 etcd-1.<cluster_name>.<base_domain>.
_etcd-server-ssl._tcp.<cluster_name>.<base_domain> 86400 IN SRV 0 10 2380 etcd-2.<cluster_name>.<base_domain>.
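
A minimal sketch for checking the etcd records (run against the DNS server the cluster nodes use):

dig +short SRV _etcd-server-ssl._tcp.<cluster_name>.<base_domain>   # should list one target per control plane machine
dig +short etcd-0.<cluster_name>.<base_domain>                      # each etcd-<index> name should resolve to a master's IP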

kni-upi-lab's People

Contributors

abays, aneeshkp, atyronesmith, larsks, madorn, oglok, yrobla, zshi-redhat


kni-upi-lab's Issues

Support for large root disks

The worker node gives an error when the disk size is >2 TB. To fix this, the following needs to be added to the CentOS kickstart file. This should be added to the documentation:

part biosboot --fstype=biosboot --size=1

Terraform apply is failing

Terraform apply is failing with the error below.

matchbox_profile.default: Creating...
module.masters.matchbox_profile.master[0]: Creating...
matchbox_profile.default: Creation complete after 0s [id=test1]
module.masters.matchbox_profile.master[0]: Creation complete after 0s [id=test1-master-0]
module.bootstrap.matchbox_profile.bootstrap: Creating...
matchbox_group.default: Creating...
module.masters.matchbox_group.master[0]: Creating...
matchbox_group.default: Creation complete after 0s [id=test1]
module.masters.matchbox_group.master[0]: Creation complete after 0s [id=test1-master-0]
module.bootstrap.matchbox_profile.bootstrap: Creation complete after 0s [id=test1-bootstrap]
module.bootstrap.matchbox_group.bootstrap: Creating...
module.bootstrap.local_file.vm_bootstrap: Creating...
module.bootstrap.local_file.vm_bootstrap: Creation complete after 0s [id=489ef9df0c85d5cbf1e9a422332c69e9e9c01bcd]
module.bootstrap.matchbox_group.bootstrap: Creation complete after 0s [id=test1-bootstrap]
module.masters.null_resource.ipmi_master[0] (local-exec): Set Boot Device to pxe
module.bootstrap.null_resource.vm_bootstrap: Creating...
module.bootstrap.null_resource.vm_bootstrap: Provisioning with 'local-exec'...
module.bootstrap.null_resource.vm_bootstrap (local-exec): Executing: ["/bin/sh" "-c" "rm -f /var/lib/libvirt/images/bootstrap.img || true\nqemu-img create -f qcow2 /var/lib/libvirt/images/bootstrap.img 800G\nchown qemu:qemu /var/lib/libvirt/images/bootstrap.img\nvirsh create /tmp/test1-bootstrap-vm.xml\n"]
module.bootstrap.null_resource.vm_bootstrap (local-exec): qemu-img: /var/lib/libvirt/images/bootstrap.img: Could not create file: No such file or directory
module.bootstrap.null_resource.vm_bootstrap (local-exec): Formatting '/var/lib/libvirt/images/bootstrap.img', fmt=qcow2 size=858993459200 encryption=off cluster_size=65536 lazy_refcounts=off
module.masters.null_resource.ipmi_master[0] (local-exec): Chassis Power Control: Cycle
module.masters.null_resource.ipmi_master[0]: Creation complete after 1s [id=1486622495056344077]
module.bootstrap.null_resource.vm_bootstrap (local-exec): chown: cannot access ‘/var/lib/libvirt/images/bootstrap.img’: No such file or directory
module.bootstrap.null_resource.vm_bootstrap (local-exec): /bin/sh: line 3: virsh: command not found


I checked and found that libvirt wasn't installed, so I tried the following and it worked.

rpm -qa |grep libv # it showed no output for libvirt
systemctl status libvirtd # it showed libvirt was not running
yum install virt-manager libvirt
systemctl start libvirtd
systemctl enable libvirtd

Centos/RHEL Image download

The CentOS binary download required before worker node installation is not covered in the documentation.

mount -o loop CentOS-7-x86_64-DVD-1810.iso /mnt/
cd /mnt
mkdir -p /home/kni-upi-lab/matchbox-data/var/lib/matchbox/assets/centos7
cp -av * /home/kni-upi-lab/matchbox-data/var/lib/matchbox/assets/centos7/
umount /mnt/

Bootstrap node auto-deletion

Bootstrap node auto-deletion is neither part of the automation, nor is the step to do it manually covered in the documentation.
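
Until this is automated, a hedged sketch of removing the bootstrap VM by hand once the control plane is up (the <cluster_name>-bootstrap domain name is an assumption based on the Terraform output shown in the "Terraform apply is failing" issue above; check with virsh list first):

virsh list --all                              # find the actual bootstrap domain name
virsh destroy <cluster_name>-bootstrap        # the VM is created with 'virsh create', so it is transient and destroy removes it
rm -f /var/lib/libvirt/images/bootstrap.img   # disk path taken from the Terraform local-exec output above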

issue with OCP_CLIENT_BINARY_URL and OCP_INSTALL_BINARY_URL return

If we set OPENSHIFT_RHCOS_MAJOR_REL and OPENSHIFT_RHCOS_MINOR_REL to "4.1", OCP_CLIENT_BINARY_URL and OCP_INSTALL_BINARY_URL return a wrong address. The problem is located at the end of the "images_and_binaries.sh" file, at the "grep install-linux.tar" line, more exactly.
Solution: delete the "tar" extension and the issue is solved.

OCP_CLIENT_BINARY_URL="${OCP_BINARIES["$OPENSHIFT_RHCOS_MAJOR_REL"]}$(curl -sS "${OCP_BINARIES["$OPENSHIFT_RHCOS_MAJOR_REL"]}" | grep client-linux. | cut -d '"' -f $FIELD_SELECTOR)"
and
OCP_INSTALL_BINARY_URL="${OCP_BINARIES["$OPENSHIFT_RHCOS_MAJOR_REL"]}$(curl -sS "${OCP_BINARIES["$OPENSHIFT_RHCOS_MAJOR_REL"]}" | grep install-linux. | cut -d '"' -f $FIELD_SELECTOR)"

When adding workers, the following warnings are shown

Steps to create

 cd ../workers
terraform init
terraform apply --auto-approve

You see the following warnings:

Warning: Value for undeclared variable

  on terraform.tfvars line 9:
   9: worker_baremetal_interface = "ens803f0"
Warning: Value for undeclared variable

  on terraform.tfvars line 12:
  12: worker_provisioning_interface = "ens803f1"

README file missing content

The README file doesn't have the following covered:

  1. Installing git and cloning the repository to start this work:
    yum install git
    git clone https://github.com/redhat-nfvpe/kni-upi-lab.git

  2. Updating cluster/ha-lab-ipmi-creds and providing the username/password in base64 format.

  3. Details on the OS requirements for the installer node, i.e. which OSes and versions are supported.

  4. If the baremetal and provisioning CIDRs are not the same as those used in the AF, where and how to change them.

  5. Instructions on whether the provisioning and baremetal network interfaces should be configured prior to running this automation, or whether just the management IP is enough.

  6. Add this instruction: run prep_bm_host.sh while connected through the management IP. No SSH from the baremetal or provisioning IP to the management IP will work, since the session will be disconnected during bridge creation.

Worker ROOTPW

The worker node kickstart file (centos-worker-kickstart.cfg) parameter rootpw is not automatically updated; one needs to update it manually. It should be covered in the documentation or should be automated.
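
Until that is automated, a hedged sketch for generating a hashed password to put into the kickstart file mentioned in this issue (rootpw --iscrypted is standard kickstart syntax; the example password is a placeholder):

# Generate a SHA-512 crypt hash (works with the Python 2 shipped on CentOS 7)
python -c 'import crypt, random, string; salt = "$6$" + "".join(random.choice(string.ascii_letters + string.digits) for _ in range(16)); print(crypt.crypt("MyWorkerRootPassword", salt))'
# Then set:  rootpw --iscrypted <hash>  in centos-worker-kickstart.cfg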

prep_bm_host warnings

I re-executed this script after running the clean_bm_host script, and I am getting the messages below; please see if these are OK and won't affect anything.

Configuring baremetal interface (em2) and bridge (baremetal)...
device em2 is not a slave of baremetal
bridge baremetal is still up; can't delete it

You are using pip version 8.1.2, however version 19.2.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.

Make all error related to openshift-install file download

Getting the error below while running "make all":

tar: openshift-install-linux-.tar.gz: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
mv: cannot stat ‘openshift-install’: No such file or directory
make: *** [/usr/local/bin/openshift-install] Error 1

The issue was resolved when we added the below lines to common.sh; not sure if this is the right fix.

OPENSHIFT_OCP_MINOR_REL="4.1.0"
export OPENSHIFT_OCP_MINOR_REL

No error is shown when you have wrong ssh key

The installer does not warn or fail when the user has entered the wrong ssh key in install-config.yaml.

In fact, the installer continues to install OpenShift and fails after ~30 minutes without a proper error message.
Expected: the BM preparation script should check for this, or make it interactive with a menu to choose the SSH key and other configurations, like the OpenShift installer does.

Terraform init is failing

Getting below error.

[root@localhost cluster]# terraform init
Initializing modules...
Initializing the backend...
Initializing provider plugins...

  • Checking for available provider plugins...
    Provider "matchbox" not available for installation.
    A provider named "matchbox" could not be found in the Terraform Registry.
    This may result from mistyping the provider name, or the given provider may
    be a third-party provider that cannot be installed automatically.
    In the latter case, the plugin must be installed manually by locating and
    downloading a suitable distribution package and placing the plugin's executable
    file in the following directory:
    terraform.d/plugins/linux_amd64
    Terraform detects necessary plugins by inspecting the configuration and state.
    To view the provider versions requested by each module, run
    "terraform providers".
    Error: no provider exists with the given name

A detailed analysis of this issue is present in the attachment.

terrform for matchbox provider error.txt

As a workaround, I tried the following and it worked. Please see if any changes are required in prep_bm_host.sh.

yum install wget

VERSION=v0.3.0

wget https://github.com/poseidon/terraform-provider-matchbox/releases/download/$VERSION/terraform-provider-matchbox-$VERSION-linux-amd64.tar.gz

tar xvf terraform-provider-matchbox-v0.3.0-linux-amd64.tar.gz

cd terraform-provider-matchbox-v0.3.0-linux-amd64/

mv terraform-provider-matchbox ~/.terraform.d/plugins/

Static IP specification in install-config.yaml is not working for master and worker

Here is the error I am getting,

[root@localhost kni-upi-lab]# make all
./scripts/gen_config_prov.sh
Generating /home/kni-upi-lab/dnsmasq/prov/etc/dnsmasq.d/dnsmasq.conf...
./scripts/gen_config_bm.sh bm
Using cached manifest values...
Key with no value for key "install-config.platform.hosts.0.sdnIPAddress" failed...
make: *** [dnsmasq/bm/etc/dnsmasq.d/dnsmasq.conf] Error 1

Second RHCOS worker doesn't join the cluster

A workaround is to manually execute:

ipmitool -I lanplus -H $IPMI_IP -U xxx -P xxx chassis bootdev pxe
ipmitool -I lanplus -H $IPMI_IP -U xxx -P xxx power cycle

It seems those instructions are not executed by Terraform on a second worker. It'd be handy to add a TF_LOG var somewhere.

Worker node doesn't install rhcos

Hello,
I'm facing some issues when trying to install a worker node with rhcos, even though the master installation goes well.
The following capture shows that it fails fetching image headers, but I don't know what it is about.
[screen capture of the failure]

Many thanks for any help.

Error from prep_bm_host

ipxe.efi is not getting downloaded. Here is the error.

Setting up tftpboot...

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:19 --:--:--     0
curl: (6) Could not resolve host: boot.ipxe.org; Unknown error

[root@localhost kni-upi-lab]# ls /var/lib/tftpboot/
undionly.kpxe
