IRSA on k3d

This is a guide on how to set up IRSA (IAM Roles for Service Accounts) on a local k3d cluster. The goal of this guide is to provide authentication between a local dev cluster and remote AWS resources (S3, etc.). It is primarily based on the guide from AWS, with specific steps for k3d, streamlined for simplicity with aws-cli.

The only prerequisites are to have k3d, aws-cli, and go installed locally, as well as access to an AWS account with permissions to operate on IAM and S3 resources. For a seamless copy-paste experience it also helps to run export AWS_PAGER="", which ensures that the aws-cli will not open an interactive pager after resource creation.

Generate the keypair

export PRIV_KEY="sa-signer.key"
export PUB_KEY="sa-signer.key.pub"
export PKCS_KEY="sa-signer-pkcs8.pub"
# Skipping passphrase for the key
ssh-keygen -t rsa -b 2048 -f $PRIV_KEY -m pem -P ""
ssh-keygen -e -m PKCS8 -f $PUB_KEY > $PKCS_KEY
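As a quick sanity check (assuming openssl is available locally), you can confirm that the private key is valid RSA and that the exported PKCS8 public key parses before handing them to the API server:

```shell
# Verify the RSA private key is internally consistent
openssl rsa -in "$PRIV_KEY" -check -noout
# Verify the PKCS8 public key parses cleanly
openssl pkey -pubin -in "$PKCS_KEY" -noout && echo "PKCS8 public key OK"
```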

Make S3 Bucket

Note: The commands below assume AWS_REGION is exported with your target region. By default they set S3_BUCKET to part of your AWS username plus a short random suffix and -irsa. You may want to change this to something you can remember easily; the default exists so the guide can be copy-pasted as-is.

export S3_BUCKET=$(aws sts get-caller-identity --query Arn --output text | cut -f 2 -d '/' | awk -F'.' '{print $1}')-$(openssl rand -base64 20 | tr -dc 'a-z' | head -c 3)-irsa
_bucket_name=$(aws s3api list-buckets --query "Buckets[?Name=='$S3_BUCKET'].Name | [0]" --output text)
if [ "$_bucket_name" = "None" ]; then
  aws s3api create-bucket --bucket $S3_BUCKET --create-bucket-configuration LocationConstraint=$AWS_REGION --object-ownership BucketOwnerPreferred
fi
aws s3api delete-public-access-block --bucket $S3_BUCKET
export HOSTNAME=s3.$AWS_REGION.amazonaws.com
export ISSUER_HOSTPATH=$HOSTNAME/$S3_BUCKET

Create OIDC documents

cat <<EOF > discovery.json
{
    "issuer": "https://$ISSUER_HOSTPATH",
    "jwks_uri": "https://$ISSUER_HOSTPATH/keys.json",
    "authorization_endpoint": "urn:kubernetes:programmatic_authorization",
    "response_types_supported": [
        "id_token"
    ],
    "subject_types_supported": [
        "public"
    ],
    "id_token_signing_alg_values_supported": [
        "RS256"
    ],
    "claims_supported": [
        "sub",
        "iss"
    ]
}
EOF

go run ./main.go -key $PKCS_KEY > keys.json

Then copy these documents to your S3 bucket:

aws s3 cp --acl public-read ./discovery.json s3://$S3_BUCKET/.well-known/openid-configuration
aws s3 cp --acl public-read ./keys.json s3://$S3_BUCKET/keys.json
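Before configuring the provider in IAM, it is worth confirming both documents are valid and publicly readable (a sketch, assuming curl and python3 are available):

```shell
# Validate the JSON locally, then fetch the published documents anonymously
python3 -m json.tool < discovery.json > /dev/null && echo "discovery.json is valid JSON"
curl -sf "https://$ISSUER_HOSTPATH/.well-known/openid-configuration" > /dev/null && echo "discovery document reachable"
curl -sf "https://$ISSUER_HOSTPATH/keys.json" > /dev/null && echo "keys.json reachable"
```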

Configure OIDC provider in AWS IAM

Note that since we are using S3 for our OIDC provider, the thumbprint list is not important but is required by the AWS CLI. In a production environment with a different provider you could follow this guide to find the thumbprint.

aws iam create-open-id-connect-provider --url https://$ISSUER_HOSTPATH --client-id-list irsa --thumbprint-list demodemodemodemodemodemodemodemodemodemo

Create your k3d cluster

Note that the volume mount requires running this command from the root of this repo; modify the mount path as needed if your files live elsewhere.

k3d cluster create -v $(pwd):/irsa \
  --k3s-arg "--kube-apiserver-arg=--service-account-key-file=/irsa/${PKCS_KEY}"@server:\* \
  --k3s-arg "--kube-apiserver-arg=--service-account-signing-key-file=/irsa/${PRIV_KEY}"@server:\* \
  --k3s-arg "--kube-apiserver-arg=--api-audiences=kubernetes.svc.default"@server:\* \
  --k3s-arg "--kube-apiserver-arg=--service-account-issuer=https://${ISSUER_HOSTPATH}"@server:\*
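Once the cluster is up, you can confirm the API server is signing tokens with your issuer by decoding a projected token's claims. This is a rough sketch: it assumes kubectl 1.24+ (which provides kubectl create token) and uses python3 to handle the unpadded base64url JWT segment:

```shell
# Issue a short-lived token for the default service account and decode its payload
kubectl create token default | cut -d. -f2 | python3 -c '
import sys, base64
seg = sys.stdin.read().strip()
# JWT segments are unpadded base64url; re-pad before decoding
print(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)).decode())'
# The "iss" claim should match https://$ISSUER_HOSTPATH
```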

Wait until the cluster default resources (networking, etc) are healthy before proceeding.

Apply the pod identity webhook

# Create namespace and cert job
kubectl apply -f deploy/namespace.yaml
kubectl apply -f deploy/create-job.yaml
# Sleep for secret creation
sleep 10
# Deploy webhook resources
kubectl apply -f deploy/auth.yaml
kubectl apply -f deploy/deployment-base.yaml
kubectl apply -f deploy/mutatingwebhook.yaml
kubectl apply -f deploy/service.yaml
# Sleep for webhook to be created
sleep 10
# Create webhook cert patch job
kubectl apply -f deploy/patch-job.yaml

Validate that the webhook pod is running and cert jobs completed successfully:

kubectl get po -n irsa

Create an IAM role and annotate a service account to use IRSA

From this point everything should be configured, and you can follow the typical IRSA flow:

  • Create an IAM Policy (for example: allow access to get objects from your bucket)
  • Create an IAM Role associated with your service account
  • Create a service account and pod with the irsa/role-arn annotation to assume

Note that the annotation is intentionally different from the standard EKS annotation, and can be changed by modifying the annotation-prefix in deploy/deployment-base.yaml.
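As a sketch of those steps, a service account and pod using the irsa/role-arn annotation might look like the following. The role ARN and image here are placeholders: substitute the ARN of the role you created, and any image containing the aws-cli:

```shell
# The role ARN below is a hypothetical example -- replace it with your own
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  annotations:
    irsa/role-arn: arn:aws:iam::123456789012:role/my-irsa-role
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-reader
spec:
  serviceAccountName: s3-reader
  containers:
    - name: aws-cli
      image: amazon/aws-cli
      command: ["sleep", "infinity"]
EOF
```

Once the pod is running, exec into it and run an aws-cli command (for example aws s3 ls against your bucket) to confirm the webhook injected the credentials.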

For a more in-depth demo, see the demo walkthrough, which goes through the above steps with examples to give a pod access to an S3 bucket.

Cleanup

If you were just doing this for a demo, you can clean up all the pieces you created as follows:

k3d cluster delete
account_id=$(aws sts get-caller-identity --query "Account" --output text)
aws iam delete-open-id-connect-provider --open-id-connect-provider-arn arn:aws:iam::${account_id}:oidc-provider/${ISSUER_HOSTPATH}
aws s3 rm s3://$S3_BUCKET --recursive
aws s3api delete-bucket --bucket $S3_BUCKET
# Cleanup local files
rm -rf $PRIV_KEY $PUB_KEY $PKCS_KEY discovery.json keys.json

Don't forget to also clean up the pieces from the walkthrough if you created those.

k3d-irsa's People

Contributors: mjnagel

k3d-irsa's Issues

Switch deploy manifests to helm chart

Currently the deploy manifests under deploy/ are simple and work for the demo deploy, but could be wrapped in a helm chart. This would solve a couple issues:

  • Configuration of the annotation key (this is called out in the docs)
  • Timing of jobs (leverage helm hooks to orchestrate these)
  • Configuration of the region (current setup does not mutate in the region if I recall correctly)

The current image used in this repo will also likely fall out of date eventually, so abstracting it into a helm value would enable easier updates for end users without requiring an update to this repo.

Alternatives to aws cli commands

Currently the setup in this repo relies on a lot of aws cli commands for setup/teardown. While this works for simple dev use cases, it could result in lingering resources since nothing is "in code". We could switch the AWS commands to instead use opentofu / similar to manage those pieces as IaC.

This would be a decent rewrite of the docs, but would hopefully make the setup more approachable with a simple tofu apply. We could still keep things simple and local for key setup, state, etc., but this would make the tutorial a bit easier to follow and less error prone.
