
terraform-packet-openshift's Introduction

Terraform CI

OpenShift via Terraform on Packet

This collection of modules will deploy a bare metal OpenShift cluster consisting of one (1) ephemeral bootstrap node, three (3) control plane nodes, and a user-configured count of worker nodes1 on Packet. DNS records are automatically configured using Cloudflare.

Install Terraform

Terraform is just a single binary. Visit their download page, choose your operating system, make the binary executable, and move it into your path.

Here is an example for macOS:

curl -LO https://releases.hashicorp.com/terraform/0.12.26/terraform_0.12.26_darwin_amd64.zip
unzip terraform_0.12.26_darwin_amd64.zip
chmod +x terraform
sudo mv terraform /usr/local/bin/

Example for Linux:

wget https://releases.hashicorp.com/terraform/0.12.26/terraform_0.12.26_linux_amd64.zip
unzip terraform_0.12.26_linux_amd64.zip
sudo install terraform /usr/local/bin/

Additional requirements

local-exec provisioners require the use of:

  • curl
  • jq

To install jq on RHEL/CentOS:

wget https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
sudo install jq-linux64 /usr/local/bin/jq

To install jq on Debian/Ubuntu:

sudo apt-get install jq

Download this project

To download this project, run the following command:

git clone https://github.com/RedHatSI/terraform-packet-openshift.git
cd terraform-packet-openshift

Usage

  1. Follow this guide to configure your Packet Public Cloud project and collect the required parameters.

  2. Follow this guide to configure your Cloudflare account and collect the required parameters.

  3. Obtain an OpenShift Cluster Manager API Token for pullSecret generation.

  4. Configure TF_VARs applicable to your Packet project, Cloudflare zone, and OpenShift API Token:

    export TF_VAR_project_id="kajs886-l59-8488-19910kj"
    export TF_VAR_auth_token="lka6702KAmVAP8957Abny01051"
    
    export TF_VAR_cf_email="[email protected]"
    export TF_VAR_cf_api_key="21df29762169c002ca656"
    export TF_VAR_cf_zone_id="706767511sf7377900"
    
    export TF_VAR_cluster_basedomain="domain.com"
    export TF_VAR_ocp_cluster_manager_token="eyJhbGc...d8Agva"
  5. Initialize and validate terraform:

    terraform init
    terraform validate
  6. Provision all resources and start the installation. This process takes between 30 and 50 minutes:

    terraform apply
  7. Clean up the bootstrap node once provisioning and installation are complete by permanently (recommended) or temporarily setting count_bootstrap=0:

    terraform apply -var="count_bootstrap=0"

    If you need to obtain your kubeadmin credentials at a later time:

    terraform output
    

Experimental Statement

This repository is Experimental!


1 As of OpenShift Container Platform 4.5, you can deploy three-node clusters on bare metal. Setting count_compute=0 will support deployment of a 3-node cluster.
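
For example, a minimal terraform.tfvars for a three-node deployment (count_compute is the same variable used in the scale-up example further down; all other variables are left at their defaults) might contain just:

# Run workloads on the three control plane nodes; no dedicated worker nodes
count_compute = 0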

terraform-packet-openshift's People

Contributors

dfedorov-ciena, liveaverage, wtcross


terraform-packet-openshift's Issues

Publish this module in the Terraform registry

In order for users to more rapidly take advantage of this Terraform configuration, it should be packaged as a Terraform module (or set of modules).

https://www.terraform.io/docs/modules/publish.html
https://www.terraform.io/docs/registry/modules/publish.html

The modules should follow the best practices: https://www.terraform.io/docs/modules/index.html

https://registry.terraform.io/browse/modules?provider=packet

Modules should be reusable as the base of new projects:
terraform init --from-module=packet/openshift/packet packet-openshift

And modules should allow for reuse as dependencies in more complex projects:

module "openshift" {
  source = "packet/openshift/packet"
  version = "0.1.0"
  packet_token = "..."

  ...
}

provider "kubernetes" {
  config_path = module.openshift.kube_config 
  // this does not exist, but it could be a new output ${abspath(path.root)}/auth/kubeconfig (maybe path.module)
}
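
As a hedged sketch, the suggested kube_config output could look something like the following if it were added; the output name and path are assumptions taken from the comment above, not something the module provides today:

# Hypothetical output; follows the path suggested in the comment above
output "kube_config" {
  description = "Path to the kubeconfig generated by the OpenShift installer"
  value       = "${abspath(path.root)}/auth/kubeconfig"
}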

Steps to publish:

  1. Adopt file naming conventions (main.tf, outputs.tf, variables.tf)
  2. Ensure paths are module safe (path.module + "/assets/foo.sh"; see the sketch after this list)
  3. Ensure that all variables and outputs have a description
  4. Ensure that the README.md is present and in good shape (mentions module install and use)
  5. All sub-modules must also adhere to the previous four bullets as if they were root modules
  6. Include examples/ showing how to use this project as a module
  7. Rename the project to terraform-packet-openshift (or terraform-packet-redhat-openshift); GitHub automatically redirects visitors and git users using the old name
  8. Tag the project
  9. Publish the project: registry.terraform.io/sign-in
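
As an illustration of the path guidance in step 2, a module-safe reference might look like this sketch (assets/foo.sh is only the placeholder filename from that step):

# Resolves relative to this module's directory rather than the caller's working directory
locals {
  bootstrap_script = "${path.module}/assets/foo.sh"
}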

Uniform Standards Request: Experimental Repository

Hello!

We believe this repository is Experimental and therefore needs the following files updated:

If you feel the repository should be maintained or end-of-lifed, or that you'll need assistance creating these files, please let us know by filing an issue with https://github.com/packethost/standards.

Packet maintains a number of public repositories that help customers run various workloads on Packet. These repositories are in various states of completeness and quality, and because they are public, developers often find them and start using them. This creates problems:

  • Developers using low-quality repositories may infer that Packet generally provides a low quality experience.
  • Many of our repositories are put online with no formal communication with, or training for, customer success. This leads to a below average support experience when things do go wrong.
  • We spend a huge amount of time supporting users through various channels when, with better upfront planning, documentation, and testing, much of this support work could be eliminated.

To that end, we propose three tiers of repositories: Private, Experimental, and Maintained.

As a resource and example of a maintained repository, we've created https://github.com/packethost/standards. This is also where you can file any requests for assistance or modification of scope.

The Goal

Our repositories should be the example from which adjacent, competing projects look for inspiration.

Each repository should not look entirely different from other repositories in the ecosystem, having a different layout, a different testing model, or a different logging model, for example, without reason or recommendation from the subject matter experts from the community.

We should share our improvements with each ecosystem while seeking and respecting the feedback of these communities.

Whether or not strict guidelines have been provided for the project type, our repositories should ensure that the same components are offered across the board. How these components are provided may vary, based on the conventions of the project type. GitHub provides general guidance on this which they have integrated into their user experience.

Add documentation for cluster scale-up

Super easy, but good to have it noted:

## You should permanently set count_bootstrap=0 by updating your vars.tf, but for the sake of time:
terraform apply -var="count_compute=4" -var="count_bootstrap=0" --auto-approve

Once the node has been provisioned, approve the pending CSR as cluster admin:

oc get csr -oname | xargs oc adm certificate approve

Once the CSR is approved, you can quickly see the node appear, first as NotReady and then Ready:

[jrmorgan@localhost terraform]$ oc get nodes
NAME                        STATUS     ROLES    AGE   VERSION
master-0.og.pkt.shifti.us   Ready      master   35h   v1.17.1+912792b
master-1.og.pkt.shifti.us   Ready      master   35h   v1.17.1+912792b
master-2.og.pkt.shifti.us   Ready      master   35h   v1.17.1+912792b
worker-0.og.pkt.shifti.us   Ready      worker   34h   v1.17.1+912792b
worker-1.og.pkt.shifti.us   Ready      worker   34h   v1.17.1+912792b
worker-2.og.pkt.shifti.us   Ready      worker   34h   v1.17.1+912792b
worker-3.og.pkt.shifti.us   NotReady   worker   3s    v1.17.1+912792b
[jrmorgan@localhost terraform]$ oc get nodes
NAME                        STATUS   ROLES    AGE     VERSION
master-0.og.pkt.shifti.us   Ready    master   35h     v1.17.1+912792b
master-1.og.pkt.shifti.us   Ready    master   35h     v1.17.1+912792b
master-2.og.pkt.shifti.us   Ready    master   35h     v1.17.1+912792b
worker-0.og.pkt.shifti.us   Ready    worker   34h     v1.17.1+912792b
worker-1.og.pkt.shifti.us   Ready    worker   34h     v1.17.1+912792b
worker-2.og.pkt.shifti.us   Ready    worker   34h     v1.17.1+912792b
worker-3.og.pkt.shifti.us   Ready    worker   2m45s   v1.17.1+912792b

Optional Components

Hey @liveaverage:
I think we should try to keep OpenShift 100% vanilla and unopinionated in things such as NFS.
This can be done with variables such as enable_nfs = true; if it's not enabled, then NFS doesn't get installed. This will allow us to make these things modular, and if we want to add things like OpenShift Container Storage we can do so optionally.
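
A minimal sketch of that gating, assuming an enable_nfs variable and the existing NFS provisioner resource from the issue further down:

variable "enable_nfs" {
  description = "Deploy the NFS storage provisioner when true"
  type        = bool
  default     = false
}

resource "null_resource" "ocp_nfs_provisioner" {
  # Skipped entirely unless NFS is explicitly enabled
  count = var.enable_nfs ? 1 : 0

  # ... existing local-exec provisioner ...
}

One cost of count-gating is that any reference to the resource elsewhere then needs an index (e.g. null_resource.ocp_nfs_provisioner[0]).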

I'd also like to see if we could make NS1 and Route53 options for DNS as well.

Document deploy of OpenShift Container Storage (OCS)

This would help with #22, particularly with consideration of VM live migration when running CNV, and provide an integrated S3 endpoint for OCP tenants. A couple of notes:

  • only applies to worker count >= 3
  • requires a few extra steps to enable/subscribe to the operator
  • might require custom configuration for supplemental storage on worker nodes

Operator subscription manifest (and prereqs for a singleNamespace deployment):

apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: openshift-storage
spec: {}
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  serviceAccount:
    metadata:
      creationTimestamp: null
  targetNamespaces:
  - openshift-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ocs-operator
  namespace: openshift-storage
spec:
  channel: stable-4.3
  installPlanApproval: Automatic
  name: ocs-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: ocs-operator.v4.3.0
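
One way to wire this into the existing flow, following the same null_resource/local-exec pattern used elsewhere in the project, might be the sketch below; the manifest filename and resource name are assumptions:

resource "null_resource" "ocp_ocs_operator" {
  depends_on = [null_resource.ocp_installer_wait_for_completion]

  provisioner "local-exec" {
    command = <<-EOT
      export KUBECONFIG=${path.root}/artifacts/install/auth/kubeconfig
      # Apply the Namespace/OperatorGroup/Subscription manifest shown above (assumed saved at this path)
      ${path.root}/artifacts/oc apply -f ${path.root}/artifacts/install/ocs-operator.yaml
    EOT
  }
}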

Document (re)deploy of registry via operator patch

Something like this should work fine since we're using NFS storage:

oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"managementState": "Managed", "storage":{"pvc":{"claim":""}}}}'

That should cover enabling managementState and allowing an automated claim via the NFS provisioner.
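
If this gets folded into the Terraform flow, a rough sketch following the existing local-exec pattern could look like the following; the resource name is an assumption:

resource "null_resource" "ocp_registry_patch" {
  depends_on = [null_resource.ocp_installer_wait_for_completion]

  provisioner "local-exec" {
    command = <<-EOT
      export KUBECONFIG=${path.root}/artifacts/install/auth/kubeconfig
      # Re-enable the integrated registry and let the NFS provisioner satisfy the PVC
      ${path.root}/artifacts/oc patch configs.imageregistry.operator.openshift.io/cluster \
        --type merge -p '{"spec":{"managementState": "Managed", "storage":{"pvc":{"claim":""}}}}'
    EOT
  }
}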

Add support for NFS storage provisioner

90% completed, but need to test provisioner creation post-install. Snippet to be moved to a template:

resource "null_resource" "ocp_nfs_provisioner" {

  depends_on = [ null_resource.ocp_installer_wait_for_completion ]

  provisioner "local-exec" {
  command    = <<EOT
    export KUBECONFIG=${path.root}/artifacts/install/auth/kubeconfig;
    curl https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml > ${path.root}/artifacts/install/nfsp-rbac.yaml
    curl https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/deployment.yaml > ${path.root}/artifacts/install/nfsp-deployment.yaml
    curl https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/class.yaml > ${path.root}/artifacts/install/nfsp-class.yaml
    export oc=${path.root}/artifacts/oc
    $oc create namespace openshift-nfs-storage
    $oc label namespace openshift-nfs-storage "openshift.io/cluster-monitoring=true"
    NAMESPACE=`$oc project openshift-nfs-storage -q`
    sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ${path.root}/artifacts/install/nfsp-rbac.yaml
    sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ${path.root}/artifacts/install/nfsp-deployment.yaml
    sed -i'' "s/10.10.10.60/${var.bastion_ip}/g" ${path.root}/artifacts/install/nfsp-deployment.yaml
    sed -i'' "s/fuseim.*/storage.io\/nfs/g" ${path.root}/artifacts/install/nfsp-deployment.yaml
    sed -i'' "s/\/var\/nfs/\/mnt\/nfs\/ocp/g" ${path.root}/artifacts/install/nfsp-deployment.yaml
    sed -i'' "s/fuseim.*/storage.io\/nfs/g" ${path.root}/artifacts/install/nfsp-deployment.yaml
    sed -i'' "s/fuseim.*/storage.io\/nfs/g" ${path.root}/artifacts/install/nfsp-class.yaml
    $oc create -f ${path.root}/artifacts/install/nfsp-rbac.yaml
    $oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner
    $oc create -f ${path.root}/artifacts/install/nfsp-class.yaml
    $oc create -f ${path.root}/artifacts/install/nfsp-deployment.yaml
  EOT
  }
}
