
zackbutcher / java-k8s


This project is forked from aws-samples/kubernetes-for-java-developers.



License: Apache License 2.0



A Day in Java Developer’s Life, with a taste of Kubernetes

Deploying your Java application in a Kubernetes cluster can feel like Alice in Wonderland: you keep going down the rabbit hole and don’t know how to make the ride comfortable. This repository explains how a Java application can be deployed, tested, debugged and monitored in Kubernetes. It also covers canary deployments and a deployment pipeline.

Application

We will use a simple Java application built using Thorntail (née WildFly Swarm). The application publishes a REST endpoint that can be invoked at http://{host}:{port}/resources/greeting.

The source code is in the app directory.

Build and Test using Maven

  1. Run application:

    cd app
    mvn wildfly-swarm:run
  2. Test the application:

    curl http://localhost:8080/resources/greeting

Build and Test using Docker

  1. Create Docker image:

    docker image build -t arungupta/greeting .
  2. Run container:

    docker container run --name greeting -p 8080:8080 -p 5005:5005 -d arungupta/greeting
  3. Access application:

    curl http://localhost:8080/resources/greeting
  4. Remove container:

    docker container rm -f greeting

Build and Test using Kubernetes

Kubernetes can be easily enabled on a development machine using Docker for Mac as explained at https://docs.docker.com/docker-for-mac/#kubernetes.

  1. Configure the kubectl CLI to use the Kubernetes cluster:

    kubectl config use-context docker-for-desktop
  2. Install the Helm CLI:

    brew install kubernetes-helm

    If the Helm CLI is already installed, upgrade it with brew upgrade kubernetes-helm.

  3. Install Helm in Kubernetes cluster:

    helm init

    If Helm has already been initialized on the cluster, then you may have to upgrade Tiller:

    helm init --upgrade
  4. Install the Helm chart (the objects it creates are sketched after this list):

    helm install --name myapp manifests/myapp
  5. Access the application:

    curl http://$(kubectl get svc/greeting-service \
    	-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/resources/greeting
  6. Delete the Helm chart:

    helm delete --purge myapp
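
The chart’s templates live in manifests/myapp and are not reproduced here. As a rough sketch, assuming the service name, label and image used in the commands above, the objects the chart creates look something like this:

# Hypothetical sketch of the chart's output; the real templates may differ
# in names, labels and fields.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greeting
  template:
    metadata:
      labels:
        app: greeting
    spec:
      containers:
      - name: greeting
        image: arungupta/greeting      # image built in the Docker section above
        ports:
        - containerPort: 8080          # the Thorntail application port
---
apiVersion: v1
kind: Service
metadata:
  name: greeting-service               # name used by the curl command in step 5
spec:
  type: LoadBalancer                   # provides .status.loadBalancer.ingress
  selector:
    app: greeting
  ports:
  - port: 80                           # assumed, since the curl omits a port
    targetPort: 8080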

Debug Docker and Kubernetes using IntelliJ

You can debug a Docker container and a Kubernetes Pod if they’re running locally on your machine. This was tested using Docker for Mac.

  1. Run container:

    docker container run --name greeting -p 8080:8080 -p 5005:5005 -d arungupta/greeting
  2. Check container:

    $ docker container ls -a
    CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                                            NAMES
    724313157e3c        arungupta/greeting   "java -jar app-swarm…"   3 seconds ago       Up 2 seconds        0.0.0.0:5005->5005/tcp, 0.0.0.0:8080->8080/tcp   greeting
  3. In IntelliJ, create a Run, Debug, Remote configuration that attaches to the debug port 5005 exposed by the container:

    (screenshot: docker debug1)
  4. Click on Debug and set a breakpoint in the class:

    (screenshot: docker debug2)
  5. Access the application using curl http://localhost:8080/resources/greeting to hit the breakpoint:

    (screenshot: docker debug3)

Kubernetes Cluster on AWS

kops

kops is a community-supported way to get a Kubernetes cluster up and running on AWS.

  1. Set AZs:

    export AWS_AVAILABILITY_ZONES="$(aws ec2 describe-availability-zones \
    	--query 'AvailabilityZones[].ZoneName' \
    	--output text | \
    	awk -v OFS="," '$1=$1')"
  2. Set state store:

    export KOPS_STATE_STORE=s3://kubernetes-aws-io

  3. Create cluster:

    kops create cluster \
    	--zones ${AWS_AVAILABILITY_ZONES} \
    	--master-count 1 \
    	--master-size m4.xlarge \
    	--node-count 3 \
    	--node-size m4.2xlarge \
    	--name cluster.k8s.local \
    	--yes

Migrate from Dev to Prod

  1. Get the list of configs:

    $ kubectl config get-contexts
    CURRENT   NAME                 CLUSTER                      AUTHINFO             NAMESPACE
              aws                  kubernetes                   aws
              cluster.k8s.local    cluster.k8s.local            cluster.k8s.local
    *         docker-for-desktop   docker-for-desktop-cluster   docker-for-desktop
  2. Change the context:

    kubectl config use-context cluster.k8s.local
  3. Get updated list of configs:

    $ kubectl config get-contexts
    CURRENT   NAME                 CLUSTER                      AUTHINFO             NAMESPACE
              aws                  kubernetes                   aws
    *         cluster.k8s.local    cluster.k8s.local            cluster.k8s.local
              docker-for-desktop   docker-for-desktop-cluster   docker-for-desktop
  4. Redeploy the application

Istio

Istio is a layer 4/7 proxy that routes and load balances traffic over HTTP, WebSocket, HTTP/2 and gRPC, and supports application protocols such as MongoDB and Redis. Istio uses the Envoy proxy to manage all inbound/outbound traffic in the service mesh.

Istio has a wide variety of traffic management features that live outside the application code, such as A/B testing, phased/canary rollouts, failure recovery, circuit breaker, layer 7 routing and policy enforcement (all provided by the Envoy proxy). Istio also supports ACLs, rate limits, quotas, authentication, request tracing and telemetry collection using its Mixer component. The goal of the Istio project is to support traffic management and security of microservices without requiring any changes to the application; it does this by injecting a sidecar into your pod that handles all network communications.

The following sections are also explained in a video playlist (istio kubernetes playlist).

Install and Configure

  1. Enable admission controllers as explained at https://istio.io/docs/setup/kubernetes/quick-start/#aws-w-kops, then perform a rolling update of the cluster so the change takes effect.

    Alternatively, create the cluster without --yes, edit the cluster to enable admission controllers, and then update the cluster using kops update cluster --name cluster.k8s.local --yes.

  2. Install and configure:

    curl -L https://github.com/istio/istio/releases/download/0.8.0/istio-0.8.0-osx.tar.gz | tar xzvf -
    cd istio-0.8.0
    export PATH=$PWD/bin:$PATH
    kubectl apply -f install/kubernetes/istio-demo.yaml
  3. Verify:

    $ kubectl get pods -n istio-system
    NAME                                        READY     STATUS      RESTARTS   AGE
    grafana-cd99bf478-59qmx                     1/1       Running     0          4m
    istio-citadel-ff5696f6f-zkpzt               1/1       Running     0          4m
    istio-cleanup-old-ca-6nmrg                  0/1       Completed   0          4m
    istio-egressgateway-58d98d898c-bjd4f        1/1       Running     0          4m
    istio-ingressgateway-6bc7c7c4bc-sc7s6       1/1       Running     0          4m
    istio-mixer-post-install-g67rd              0/1       Completed   0          4m
    istio-pilot-6c5c6b586c-nfwt9                2/2       Running     0          4m
    istio-policy-5c7fbb4b9f-f2xtn               2/2       Running     0          4m
    istio-sidecar-injector-dbd67c88d-j8882      1/1       Running     0          4m
    istio-statsd-prom-bridge-6dbb7dcc7f-ms846   1/1       Running     0          4m
    istio-telemetry-54b5bf4847-nlqjx            2/2       Running     0          4m
    istio-tracing-67dbb5b89f-9zd5j              1/1       Running     0          4m
    prometheus-586d95b8d9-mz9bm                 1/1       Running     0          4m
    servicegraph-6d86dfc6cb-tbwwt               1/1       Running     0          4m
  4. Deploy pod with sidecar:

    kubectl apply -f <(istioctl kube-inject -f manifests/app.yaml)
  5. Check the pods and note that each has two containers (one for the application and one for the sidecar):

    $ kubectl get pods
    NAME                        READY     STATUS    RESTARTS   AGE
    greeting-5ff78ddc8b-pbb4z   2/2       Running   0          1m
  6. Get list of containers in the pod:

    $ kubectl get pods -l app=greeting -o jsonpath='{.items[*].spec.containers[*].name}'
    greeting istio-proxy
  7. Get response:

    curl http://$(kubectl get svc/greeting-service \
    	-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/resources/greeting

Traffic Shifting

  1. Deploy application with two versions of greeting, one that returns Hello and another that returns Howdy:

    kubectl delete -f manifests/app.yaml
    kubectl apply -f <(istioctl kube-inject -f manifests/app-hello-howdy.yaml)
  2. Access the application multiple times to see the different responses:

    for i in {1..10}
    do
    	curl -q http://$(kubectl get svc/greeting-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/resources/greeting
    	echo
    done
  3. Set up an Istio rule that routes 75% of the traffic to the Hello version and 25% to the Howdy version of the greeting service (a sketch of such a rule follows this list):

    kubectl apply -f manifests/greeting-rule-75-25.yaml
  4. Invoke the service again to see the traffic split between the two versions.
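
The exact contents of manifests/greeting-rule-75-25.yaml are not shown in this README. With the Istio 0.8 v1alpha3 API, such a split is typically expressed as a DestinationRule plus a VirtualService along the following lines; the host and subset names here are assumptions for illustration:

# Illustrative sketch only; the rule shipped in the repository may differ.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: greeting-destination
spec:
  host: greeting-service               # assumed: the Kubernetes service name
  subsets:
  - name: hello                        # assumed: pods labeled version: hello
    labels:
      version: hello
  - name: howdy                        # assumed: pods labeled version: howdy
    labels:
      version: howdy
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeting-virtual-service
spec:
  hosts:
  - greeting-service
  http:
  - route:
    - destination:
        host: greeting-service
        subset: hello
      weight: 75                       # 75% of requests return Hello
    - destination:
        host: greeting-service
        subset: howdy
      weight: 25                       # 25% of requests return Howdy

The canary rule used in the next section follows the same pattern, with weights of 90 and 10.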

Canary Deployment

  1. Set up an Istio rule to divert 10% of the traffic to the canary:

    kubectl delete -f manifests/greeting-rule-75-25.yaml
    kubectl apply -f manifests/greeting-canary.yaml
  2. Access the application multiple times and observe that roughly 10% of the greeting messages are Howdy:

    for i in {1..50}
    do
    	curl -q http://$(kubectl get svc/greeting-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/resources/greeting
    	echo
    done

Distributed Tracing

Istio is deployed as a sidecar proxy into each of your pods; this means it can see and monitor all the traffic flows between your microservices and generate a graphical representation of your mesh traffic. We’ll use the application you deployed in the previous step to demonstrate this.

Set up access to the tracing dashboard using port forwarding:

kubectl port-forward \
	-n istio-system \
	$(kubectl get pod \
		-n istio-system \
		-l app=jaeger \
		-o jsonpath='{.items[0].metadata.name}') 16686:16686 &

Access the dashboard at http://localhost:16686.

(screenshot: istio dag)

Metrics using Grafana

  1. Install the Grafana add-on:

    kubectl apply -f install/kubernetes/addons/grafana.yaml
  2. Verify:

    $ kubectl get pods -l app=grafana -n istio-system
    NAME                       READY     STATUS    RESTARTS   AGE
    grafana-6bb556d859-v5tzt   1/1       Running   0          1m
  3. Port-forward the Grafana pod to expose the Istio dashboard:

    kubectl -n istio-system \
    	port-forward $(kubectl -n istio-system \
    		get pod -l app=grafana \
    		-o jsonpath='{.items[0].metadata.name}') 3000:3000 &
  4. View the Istio dashboard at http://localhost:3000/d/1/istio-dashboard?

  5. Invoke the endpoint:

    curl http://$(kubectl get svc/greeting-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/resources/greeting

(screenshot: istio dashboard)

Timeouts

Delays and timeouts can be injected into calls to services.

  1. Deploy the application:

    kubectl delete -f manifests/app.yaml
    kubectl apply -f <(istioctl kube-inject -f manifests/app-ingress.yaml)
  2. Add a 5 second delay to calls to the service (such a delay rule is sketched at the end of this section):

    kubectl apply -f manifests/greeting-delay.yaml
  3. Invoke the service using a 2 second timeout:

    export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http")].port}')
    export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
    curl --connect-timeout 2 http://$GATEWAY_URL/resources/greeting

The call will time out after 2 seconds.
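
manifests/greeting-delay.yaml is not reproduced in this README. With the v1alpha3 API, a fixed delay of this kind is usually injected through an HTTP fault rule roughly like the one below; the host name is an assumption, and a gateways entry would also be present if the route is bound to the istio-ingressgateway used above:

# Illustrative sketch only; the delay rule shipped in the repository may differ.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeting-delay
spec:
  hosts:
  - greeting-service
  http:
  - fault:
      delay:
        percent: 100                   # apply the delay to every request
        fixedDelay: 5s                 # hold each request for 5 seconds
    route:
    - destination:
        host: greeting-service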

Chaos using kube-monkey

kube-monkey is an implementation of Netflix’s Chaos Monkey for Kubernetes clusters. It randomly deletes Kubernetes pods in the cluster, encouraging and validating the development of failure-resilient services.

  1. Create kube-monkey configuration:

    kubectl apply -f manifests/kube-monkey-configmap.yaml
  2. Run kube-monkey:

    kubectl apply -f manifests/kube-monkey-deployment.yaml
  3. Deploy an app that opts in to pod deletion:

    kubectl apply -f manifests/app-kube-monkey.yaml

This application opts in to having up to 40% of its pods killed. The deletion schedule comes from the kube-monkey configuration, which restricts it to weekdays between 10am and 4pm.
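
The opt-in is expressed through kube-monkey labels on the Deployment and its pod template. manifests/app-kube-monkey.yaml is not reproduced here; the relevant parts look roughly like this (the identifier and replica count are assumptions):

# Illustrative sketch only; the manifest shipped in the repository may differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
  labels:
    kube-monkey/enabled: enabled           # opt in to chaos
    kube-monkey/identifier: greeting       # assumed; groups this victim's pods
    kube-monkey/mtbf: '1'                  # mean time between failures, in days
    kube-monkey/kill-mode: random-max-percent
    kube-monkey/kill-value: '40'           # kill up to 40% of the pods per run
spec:
  replicas: 5
  selector:
    matchLabels:
      app: greeting
  template:
    metadata:
      labels:
        app: greeting
        kube-monkey/enabled: enabled       # the pods repeat the opt-in labels
        kube-monkey/identifier: greeting
    spec:
      containers:
      - name: greeting
        image: arungupta/greeting
        ports:
        - containerPort: 8080

The 10am to 4pm weekday window comes from the ConfigMap applied in step 1, not from these labels.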

Deployment Pipeline

Skaffold is a command line utility that facilitates continuous development for Kubernetes applications. With Skaffold, you can iterate on your application source code locally then deploy it to a remote Kubernetes cluster.

  1. Download Skaffold:

    curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-darwin-amd64 \
    	&& chmod +x skaffold
  2. Run Skaffold in the application directory (its configuration file is sketched after this list):

    cd app
    skaffold dev
  3. Access the service:

    curl http://$(kubectl \
    	get svc/skaffold-greeting-service \
    	-o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
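
skaffold dev reads its configuration from a skaffold.yaml in the application directory. That file is not shown above; a minimal configuration for this kind of workflow might look like the following, where the API version, image name and manifest path are assumptions:

# Hypothetical sketch; the skaffold.yaml in the app directory may differ.
apiVersion: skaffold/v1alpha2
kind: Config
build:
  artifacts:
  - imageName: arungupta/greeting          # rebuilt and redeployed on every code change
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml                           # assumed path; defines skaffold-greeting-service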
