
doitintl / kubeip


Assign static public IPs to Kubernetes nodes (GKE, EKS)

Home Page: https://kubeip.com

License: MIT License

Languages: Makefile 2.28%, Go 95.26%, Dockerfile 0.83%, Mustache 1.63%
Topics: kubernetes-addons, golang, kubernetes, gke-cluster, gke, google-cloud, google-cloud-platform, aws, eks, eks-cluster

kubeip's Introduction


KubeIP v2

Welcome to KubeIP v2, a complete overhaul of the popular DoiT KubeIP v1 open-source project, originally developed by Aviv Laufer.

KubeIP v2 expands its support beyond Google Cloud (as in v1) to include AWS, and it's designed to be extendable to other cloud providers that allow assigning a static public IP to VMs. We've also transitioned from a Kubernetes controller to a standard DaemonSet, enhancing reliability and ease of use.

What happens with KubeIP v1?

KubeIP v1 is still available in the v1-main branch. No further development is planned. We will fix critical bugs and security issues, but we will not add new features.

What does KubeIP v2 do?

Kubernetes nodes don't necessarily need their own public IP addresses to communicate. However, there are certain situations where it's beneficial for nodes in a node pool to have their own unique public IP addresses.

For instance, in gaming applications, a console might need to establish a direct connection with a cloud virtual machine to reduce the number of hops.

Similarly, if you have multiple agents running on Kubernetes that need a direct server connection, and the server needs to whitelist all agent IPs, having dedicated public IPs can be useful. These scenarios, among others, can be handled on a cloud-managed Kubernetes cluster using Node Public IP.

KubeIP is a utility that assigns a static public IP to each node it manages. The IP is allocated to the node's primary network interface, chosen from a pool of reserved static IPs using platform-supported filtering and ordering.

If there are no static public IPs left, KubeIP will wait until one becomes available. When a node is removed, KubeIP releases its static public IP back into the pool of reserved static IPs.

How to use KubeIP?

Deploy KubeIP as a DaemonSet on your desired nodes using standard Kubernetes mechanisms. Once deployed, KubeIP will assign a static public IP to each node it operates on. If no static public IP is available, KubeIP will wait until one becomes available. When a node is deleted, KubeIP will release the static public IP and reassign an ephemeral public IP to the node.

IPv6 Support

KubeIP supports dual-stack IPv4/IPv6 GKE clusters and Google Cloud static public IPv6 addresses. To enable IPv6 support, set the ipv6 flag (or set the IPV6 environment variable) to true (the default is false).
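
For example, the environment variable documented above can be set on the KubeIP container (a minimal sketch, matching the DaemonSet env style shown further below):

- name: IPV6
  value: "true"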

Kubernetes Service Account

KubeIP requires a Kubernetes service account with the following permissions:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeip-service-account
  namespace: kube-system
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeip-cluster-role
rules:
  - apiGroups: [ "" ]
    resources: [ "nodes" ]
    verbs: [ "get" ]
  - apiGroups: [ "coordination.k8s.io" ]
    resources: [ "leases" ]
    verbs: [ "create", "get", "delete" ]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeip-cluster-role-binding
subjects:
  - kind: ServiceAccount
    name: kubeip-service-account
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: kubeip-cluster-role
  apiGroup: rbac.authorization.k8s.io

Kubernetes DaemonSet

Deploy KubeIP as a DaemonSet on your desired nodes using standard Kubernetes selectors. Once deployed, KubeIP will assign a static public IP to the node's primary network interface, selected from a list of reserved static IPs using platform-supported filtering. If no static public IP is available, KubeIP will wait until one becomes available. When a node is deleted, KubeIP will release the static public IP.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kubeip
spec:
  selector:
    matchLabels:
      app: kubeip
  template:
    metadata:
      labels:
        app: kubeip
    spec:
      serviceAccountName: kubeip-service-account
      terminationGracePeriodSeconds: 30
      priorityClassName: system-node-critical
      nodeSelector:
        kubeip.com/public: "true"
      containers:
        - name: kubeip
          image: doitintl/kubeip-agent
          resources:
            requests:
              cpu: 100m
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: FILTER
              value: PUT_PLATFORM_SPECIFIC_FILTER_HERE
            - name: LOG_LEVEL
              value: debug
            - name: LOG_JSON
              value: "true"

AWS

Make sure that the KubeIP DaemonSet is deployed on nodes that have a public IP (nodes running in a public subnet) and uses a Kubernetes service account bound to an IAM role with the following permissions:

Version: '2012-10-17'
Statement:
  - Effect: Allow
    Action:
      - ec2:AssociateAddress
      - ec2:DisassociateAddress
      - ec2:DescribeInstances
      - ec2:DescribeAddresses
    Resource: '*'
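
On EKS, one common way to bind such an IAM role to the Kubernetes service account is IAM Roles for Service Accounts (IRSA). A hedged sketch, assuming the cluster has an OIDC provider configured and an IAM role named kubeip-role already exists (both are assumptions, not part of the manifests above):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeip-service-account
  namespace: kube-system
  annotations:
    # assumption: replace with your AWS account ID and role name
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/kubeip-role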

KubeIP supports filtering of reserved Elastic IPs using tags and Elastic IP properties. To use this feature, add the filter flag (or set FILTER environment variable) to the KubeIP DaemonSet:

- name: FILTER
  value: "Name=tag:env,Values=dev;Name=tag:app,Values=streamer"

KubeIP AWS filter supports the same filter syntax as the AWS describe-addresses command. For more information, see describe-addresses. If you specify multiple filters, they are joined with an AND, and the request returns only results that match all the specified filters. Multiple filters must be separated by semicolons (;).
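
To preview which Elastic IPs a given filter would match, the same expression can be checked with the AWS CLI (a sketch; each semicolon-separated KubeIP filter becomes a separate --filters argument):

aws ec2 describe-addresses --filters "Name=tag:env,Values=dev" "Name=tag:app,Values=streamer"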

Google Cloud

Ensure that the KubeIP DaemonSet is deployed on nodes with a public IP (nodes in a public subnet) and uses a Kubernetes service account bound to an IAM role with the following permissions:

title: "KubeIP Role"
description: "KubeIP required permissions"
stage: "GA"
includedPermissions:
  - compute.instances.addAccessConfig
  - compute.instances.deleteAccessConfig
  - compute.instances.get
  - compute.addresses.get
  - compute.addresses.list
  - compute.addresses.use
  - compute.zoneOperations.get
  - compute.subnetworks.useExternalIp
  - compute.projects.get
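
A hedged sketch of creating this custom role and wiring it up via GKE Workload Identity (the role file name, Google service account name, and project ID are placeholders, not part of the original docs):

# create the custom role from the definition above and a Google service account
gcloud iam roles create kubeip --project <project-id> --file kubeip-role.yaml
gcloud iam service-accounts create kubeip-service-account --project <project-id>
gcloud projects add-iam-policy-binding <project-id> \
  --member "serviceAccount:kubeip-service-account@<project-id>.iam.gserviceaccount.com" \
  --role "projects/<project-id>/roles/kubeip"
# allow the Kubernetes service account to impersonate it, then annotate the KSA
gcloud iam service-accounts add-iam-policy-binding \
  kubeip-service-account@<project-id>.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:<project-id>.svc.id.goog[kube-system/kubeip-service-account]"
kubectl annotate serviceaccount kubeip-service-account -n kube-system \
  iam.gke.io/gcp-service-account=kubeip-service-account@<project-id>.iam.gserviceaccount.com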

KubeIP Google Cloud filter supports the same filter syntax as the Google Cloud gcloud compute addresses list command. For more information, see gcloud topic filter. If you specify multiple filters, they are joined with an AND, and the request returns only results that match all the specified filters. Multiple filters must be separated by semicolons (;).

To use this feature, add the filter flag (or set FILTER environment variable) to the KubeIP DaemonSet:

- name: FILTER
  value: "labels.env=dev;labels.app=streamer"

How to contribute to KubeIP?

KubeIP is an open-source project, and we welcome your contributions!

How to build KubeIP?

KubeIP is written in Go and can be built using standard Go tools. To build KubeIP, run the following command:

make build

How to run KubeIP?

KubeIP is a standard command-line application. To explore the available options, run the following command:

kubeip-agent run --help
NAME:
   kubeip-agent run - run agent

USAGE:
   kubeip-agent run [command options] [arguments...]

OPTIONS:
   Configuration

   --filter value [ --filter value ]  filter for the IP addresses [$FILTER]
   --ipv6                             enable IPv6 support (default: false) [$IPV6]
   --kubeconfig value                 path to Kubernetes configuration file (not needed if running in node) [$KUBECONFIG]
   --node-name value                  Kubernetes node name (not needed if running in node) [$NODE_NAME]
   --order-by value                   order by for the IP addresses [$ORDER_BY]
   --project value                    name of the GCP project or the AWS account ID (not needed if running in node) [$PROJECT]
   --region value                     name of the GCP region or the AWS region (not needed if running in node) [$REGION]
   --release-on-exit                  release the static public IP address on exit (default: true) [$RELEASE_ON_EXIT]
   --retry-attempts value             number of attempts to assign the static public IP address (default: 10) [$RETRY_ATTEMPTS]
   --retry-interval value             when the agent fails to assign the static public IP address, it will retry after this interval (default: 5m0s) [$RETRY_INTERVAL]
   --lease-duration value             duration of the kubernetes lease (default: 5) [$LEASE_DURATION]
   --lease-namespace value            namespace of the kubernetes lease (default: "default") [$LEASE_NAMESPACE]

   Development

   --develop-mode  enable develop mode (default: false) [$DEV_MODE]

   Logging

   --json             produce log in JSON format: Logstash and Splunk friendly (default: false) [$LOG_JSON]
   --log-level value  set log level (debug, info(*), warning, error, fatal, panic) (default: "info") [$LOG_LEVEL]
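
For example, a local run outside the cluster (a sketch; the node name, project, and region values are placeholders) could look like:

kubeip-agent run \
  --kubeconfig ~/.kube/config \
  --node-name <node-name> \
  --project <project-id> \
  --region <region> \
  --filter "labels.env=dev" \
  --log-level debug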

How to test KubeIP?

To test KubeIP, create a pool of reserved static public IPs, ensuring that the pool has enough IPs to assign to all nodes that KubeIP will operate on. Use labels to filter the pool of reserved static public IPs.

Next, create a Kubernetes cluster and deploy KubeIP as a DaemonSet on your desired nodes. Ensure that the nodes have a public IP (nodes in a public subnet). Configure KubeIP to use the pool of reserved static public IPs, using filters and order by.

Finally, scale the number of nodes in the cluster and verify that KubeIP assigns a static public IP to each node. Scale down the number of nodes in the cluster and verify that KubeIP releases the static public IP addresses.
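
A hedged end-to-end check on GKE (address names, labels, cluster and node-pool names are placeholders) might look like:

# reserve and label a static IP so it matches the KubeIP filter
gcloud compute addresses create kubeip-test-1 --region <region>
gcloud beta compute addresses update kubeip-test-1 --region <region> --update-labels env=dev,app=streamer
# scale the node pool and watch the external IPs change
gcloud container clusters resize <cluster-name> --node-pool <pool-name> --num-nodes 3
kubectl get nodes -o wide --watch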

AWS EKS Example

The examples/aws folder contains a Terraform configuration that creates an EKS cluster and deploys KubeIP as a DaemonSet on the cluster nodes in a public subnet. The Terraform configuration also creates a pool of reserved Elastic IPs and configures KubeIP to use the pool of reserved static public IPs.

To run the example, follow these steps:

cd examples/aws
terraform init
terraform apply

Google Cloud GKE Example

The examples/gcp folder contains a Terraform configuration that creates a GKE cluster and deploys KubeIP as a DaemonSet on the cluster nodes in a public subnet. The Terraform configuration also creates a pool of reserved static public IPs and configures KubeIP to use the pool of reserved static public IPs.

To run the example, follow these steps:

cd examples/gcp
terraform init
terraform apply -var="project_id=<your-project-id>"

To run the example with GKE dual-stack IPv4/IPv6 cluster, follow these steps:

cd examples/gcp
terraform init
terraform apply -var="project_id=<your-project-id>" -var="ipv6_support=true"

kubeip's People

Contributors

aimbot31, alexei-led, antonmatsiuk, avivl, bo98, caramelmilk, cloudmark, danielsel, dependabot[bot], eranchetz, grumvalski, haizaar, hernanliendo, jesserichmannyc, joshuafox, loganrobertclemons, nirforer, oscar-h64, pdecat, remoe, rlnrln, robbertkl, spark2ignite, think-divergent, ureesoriano, yinzara


kubeip's Issues

KUBEIP_NODEPOOL prefix/wildcard

Hello! We're going to create our node pool using terraform's google_container_node_pool with a name_prefix = "default-pool-". This means the names of our node pools will be something like default-pool-12345 where 12345 will be randomly generated. Is there any way to pass in a pattern/regex/wildcard to kubeip such that it will assign static ip addresses to a node pool that matches that pattern (e.g. default-pool-*)? Thanks!

Manage labels on nodes to allow having less static IPs than nodes

Documentation states that there must be at least as many static IP addresses as nodes.

I'd love to be able to use kubeip and have fewer static IP addresses than nodes.

Kubeip could probably add specific labels to the nodes and pods needing egress using these IPs could have nodeSelector or affinity constraints to be scheduled on those.

In my current use case, I currently have two NAT instances whose static IPs are whitelisted by several partners. Adding new IPs is not an easy task.

Addresses for newly created nodes update on ticker, not on creation

Describe the bug
When a new node is created, it is not updated until the "On ticker" check runs.
There is a "Did not found node pool" error when the node is created though, so the "created" event does reach the controller, it just fails for some reason.

To Reproduce
Steps to reproduce the behavior:

  1. Create a new node in the KUBEIP_NODEPOOL node pool

Expected behavior
The node IP address should immediately update.

Screenshots
The node was created at 14:01 and only updated at 14:08 on the ticker.
(The ticker was set to 10 minutes here for test, normally it's 1 minute.)

instance tagging

Please bring back instance tagging after IP assignment (it existed in v1) so we can check that the instance was assigned the IP address before running workloads.

Kubeip should update the node it is running on with care

Describe the bug

This is a reflection I've had two weeks ago when examining the kubeip code related to updating external NAT access configs, and I've now managed to trigger the issue as described below.

When Kubeip deletes the external NAT access config with the ephemeral public IP address for the node it is running on, it loses access to the Google Compute API, as the API is exposed on public IP addresses.
Kubeip is then prevented from accessing the Google Compute API to update the kubip_assigned tag on nodes or, worse, to create the new access config with the reserved static public IP address (more on that later).

To Reproduce

  1. Reserve a static public IP address and tag it so kubeip will manage it
  2. Create a single node GKE cluster to make sure kubeip runs on the node it will try to update
  3. Install the kubeip ConfigMap and Deployment on that cluster
  4. When kubeip starts, it notices that the node is not using a reserved address:
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:21:16Z" level=info msg="kubeIP is starting" Build Date="2018-10-02-10:55" Cluster name=test-pdecat-kubeip Project name=myproject Version=b6aae34d8cdd3aba3132c04afad51c701a8fcdd9
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:21:16Z" level=info msg="Starting forceAssignment" function=forceAssignment pkg=kubeip
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:21:16Z" level=info msg="Starting kubeip controller" pkg=kubeip-node
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:21:16Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:21:16Z" level=info msg="kubeip controller synced and ready" pkg=kubeip-node
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:21:17Z" level=info msg="Found un assigned node gke-test-pdecat-kubei-cluster1-pool-a-073827fa-s3c5" function=processAllNodes pkg=kubeip
  5. It then proceeds with that node and locates the free reserved IP address (here 35.233.39.239):
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:21:17Z" level=info msg="Working on gke-test-pdecat-kubei-cluster1-pool-a-073827fa-s3c5 in zone europe-west1-b" function=Kubeip pkg=kubeip
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:21:17Z" level=info msg="Found node without tag gke-test-pdecat-kubei-cluster1-pool-a-073827fa-s3c5" function=assignMissingTags pkg=kubeip
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:21:17Z" level=info msg="Found reserved address 35.233.39.239" function=replaceIP pkg=kubeip
  6. When it tries to replace the node's public IP address, it hangs on the computeService.Instances.AddAccessConfig() call, then times out after 40 seconds:
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:21:57Z" level=error msg="AddAccessConfig \"Post https://www.googleapis.com/compute/v1/projects/myproject/zones/europe-west1-b/instances/gke-test-pdecat-kubei-cluster1-pool-a-073827fa-s3c5/addAccessConfig?alt=json&networkInterface=nic0: read tcp 10.36.0.11:49752->74.125.133.95:443: read: connection reset by peer\""

Notice the connection reset by peer error. After that, the "Replaced IP for" message is never logged.

At this point, the node is left without the kubip_assigned tag.
In my initial reflection, I guessed it should also have been left without any public IP address, but somehow in this case, the AddAccessConfig call reached the Google Compute API with the old ephemeral IP address (here 35.205.240.76):


On the next kubeip ticker about 5 minutes later, a new node list collection occurred and the missing tag was added:

[kubeip-678bf66895-nkcqf] time="2018-10-03T12:26:16Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:26:16Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:26:16Z" level=info msg="Found node without tag gke-test-pdecat-kubei-cluster1-pool-a-073827fa-s3c5" function=assignMissingTags pkg=kubeip
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:26:17Z" level=info msg="Node ip is reserved 35.233.39.239" function=IsAddressReserved pkg=kubeip
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:26:17Z" level=info msg="Tagging gke-test-pdecat-kubei-cluster1-pool-a-073827fa-s3c5" function=AddTagIfMissing pkg=kubeip
[kubeip-678bf66895-nkcqf] time="2018-10-03T12:26:17Z" level=info msg="Tagging node gke-test-pdecat-kubei-cluster1-pool-a-073827fa-s3c5 as 35.233.39.239" function=tagNode pkg=kubeip

I'm currently trying to get more detailed traces and running other tests to check if the behavior is always the same.

Expected behavior

Kubeip should not fail when updating the public IP address and tags of the node it is running on.

Workaround

I've not actually been confronted with the issue yet in production clusters, but as a workaround I've already enabled Private Google Access on the GKE cluster's subnetwork to allow an instance with only a private IP address to still access the Google Compute API: https://cloud.google.com/vpc/docs/private-google-access

Resolution

I'm not sure yet.

Updating an access config to replace the public IP address in a single UpdateAccessConfig Google Compute API operation does not seem possible:

An instance can have only one external IP address. If the instance already has an external IP address, you must remove that address first by deleting the old access configuration. Then, you can add a new access configuration with the new external IP address.
Last time I tried, it failed stating the natIP field is immutable or something like that.

Maybe batching the delete and create operations in a single Google Compute API call would be possible but I've seen no indication that the second operation would wait for the first to successfully complete and therefore succeed itself.

Other potential alternatives I've thought about:

  • Update the kubeip documentation to recommend enabling Private Google Access on the GKE cluster subnetwork to workaround this issue.
  • Deploy kubeip with two or more replicas and each replica would avoid altering its own node. That would require some synchronization code between the replicas to avoid races when altering nodes.
  • Make kubeip evict itself from the node it is running on to force its migration to another node (the replacement is not guaranteed to be scheduled on another node, and it won't work in single-node cases).

Restrict kubeIP cluster permissions

The suggested Kubernetes deployment manifests assign the cluster-admin ClusterRole to the default/default ServiceAccount.

Preferably, a separate ServiceAccount is created with a limited set of permissions which is then assigned to kubeip.

Support GCP Global Addresses

Hi.

Is there any reason why it forces the region when assigning IP addresses in GCP?
Because sometimes it is needed to use Global Addresses instead of Regional ones. kubeip is fetching the region from the metadata server:
"Found 0 Addresses to remove project my-project in region europe-southwest1. Addresses []"

Unable to switch to using kubeip v2, returning region-related error

Running the latest version of KubeIP v2's DaemonSet on GKE, pods start in the correct place but then immediately throw this error:

func: "main.assignAddress"
msg: "failed to assign static public IP address to node (node id)"
error: "check if static public IP is already assigned to instance (node id): failed to list assigned addresses: failed to list available addresses: googleapi: Error 400: Invalid value for field 'region': 'us-central1-a'. Unknown region., invalid"

(It should fail, as it's got a static IP: it's currently using Kubeip v1, which we want to upgrade from since it seems to wipe other Kubernetes labels when setting its own. But I suspect this region error is different, as the region presumably should be us-central1? No region is defined in the user-facing KubeIP config, so I'm unclear what needs tweaking here.)

Any assistance appreciated!

"Found node without tag"

Hello,

I have 2 free reserved static IPs left, with tags like kubeip-node-pool:<pool_name>.
The Kubeip deployment is running on a completely different node pool.
I'm using the latest version of kubeip: doitintl/kubeip:latest | Apr 14, 2022, 8:46:39 PM
GKE version: 1.21.11-gke.1100
I don't understand why I get this:

level=info msg="Working on gke-xxx-prod-xxx-c-pool-apps-mcs-30d09a52-9090 in zone europe-west1-b" function=Kubeip pkg=kubeip"  
level=info msg="Found node without tag gke-xxx-prod-xxx-pool-frontal-api-3582aad4-m3ni" function=assignMissingTags pkg=kubeip"  
level=info msg="no free address found" 

Here's the configmap used :

Name:         kubeip-config
Namespace:    kube-system
Labels:       app=kubeip
Annotations:  <none>

Data
====
KUBEIP_FORCEASSIGNMENT: true
KUBEIP_LABELKEY: kubeip-node-pool
KUBEIP_LABELVALUE: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
KUBEIP_ADDITIONALNODEPOOLS: pool-frontal-api
KUBEIP_ALLNODEPOOLS: false
KUBEIP_CLEARLABELS: true
KUBEIP_COPYLABELS: true
KUBEIP_DRYRUN: false
KUBEIP_NODEPOOL: pool-apps-mcs
KUBEIP_ORDERBYDESC: true
KUBEIP_ORDERBYLABELKEY: priority
KUBEIP_TICKER: 5

Thank you.

Question: What happens if the IP-pool is too small?

I've installed kubeip in our GKE cluster that consists of 3 nodes.
The IP-pool consists of 3 static IP-adresses.

The intention is that those 3 IP adresses automatically get reassigned when GKE does node updates.
But what if GKE creates a 4th node and only starts to drain and remove an old node after the new one is fully up?

In that case, there are 4 nodes for a very short time. How does kubeip handle this case?
Should we register an additional IP just in case?

External IP addresses do not seem to be reassigned

Describe the bug
Static IPs are created, but the node pool is still being allocated ephemeral IPs instead of the labeled static IPs. KubeIP does not seem to be doing anything other than collecting the node list.

Edit:
One thing to note, as I was creating the cluster-admin-binding it did state that cluster-admin-binding already exists.

Expected behavior
IPs should be assigned.

Logs

time="2020-06-29T18:07:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:07:20Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2020-06-29T18:12:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:12:20Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2020-06-29T18:17:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:17:20Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2020-06-29T18:22:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:22:20Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2020-06-29T18:27:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:27:20Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2020-06-29T18:32:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:32:20Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2020-06-29T18:37:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:37:20Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2020-06-29T18:42:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:42:20Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2020-06-29T18:47:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:47:20Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2020-06-29T18:52:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:52:20Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2020-06-29T18:57:20Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2020-06-29T18:57:20Z" level=info msg="Collecting Node List..." function=processAllNodes 

Feature Request: kubeip helm chart

Is your feature request related to a problem? Please describe.
Instead of people creating their own way to automate the installation of kubeip ... please provide a helm chart.

Describe the solution you'd like
Fast and easy way to deploy kubeip to my environment via helm chart.

Minor Fix for the sed command in README to be working for macOS users.

amit:kubeip/ (master✗) $ sed -i -e "s/reserved/$GKE_CLUSTER_NAME/g" -e "s/default-pool/$KUBEIP_NODEPOOL/g" deploy/kubeip-configmap.yaml

sed: -e: No such file or directory

The error you're seeing is likely due to the differences in the sed command between macOS and Linux.
The -i option in macOS's version of sed expects an extension for backup files, whereas the Linux version does not.

To fix this on macOS, you can provide an empty string after the -i option to avoid creating a backup file:

sed -i '' -e "s/reserved/$GKE_CLUSTER_NAME/g" -e "s/default-pool/$KUBEIP_NODEPOOL/g" deploy/kubeip-configmap.yaml

If you're on Linux, the command you provided should work as-is.

This should be done for all the sed commands.

kubeip pods forbidden from using priorityClass system-cluster-critical

Describe the bug
The kubeip pods are not able to be started in GKE 1.11.2-gke.18 due to:
forbidden: pods with system-cluster-critical priorityClass is not permitted in default namespace

To Reproduce
Steps to reproduce the behavior:

  1. Deploy kubeip to GKE 1.11.2-gke.18 cluster
  2. See error (ex. pods "kubeip-7474fdb9c4-" is forbidden: pods with system-cluster-critical priorityClass is not permitted in default namespace)

Expected behavior
kubeip pods successfully deployed and started.

Additional context
This may be related to changes in Kubernetes 1.11 to restrict use of priorityClass = system-cluster-critical to the kube-system namespace

IPv6 / Dual-Stack Support

Hi,

first off, thanks for providing this functionality! 👍

Since I'm planning on running dual-stack clusters, I wanted to ask if there's support for IPv6 and dual-stack static IP addresses planned?

If not, any notes on how much effort is estimated to implement it? I'd be happy to add support if my time permits.

External IPs not being assigned

Describe the bug
I deployed Kubeip with the default settings (after changing pool names etc), and I left force assignment to true. Once deployed, my two external IPs were not assigned to either worker, so I assigned them manually. I'm using pre-emptible instances and when they were replaced overnight the IPs were left unassigned. Logs show nothing except the API query every 5 minutes. I am running the deployment in the same node pool as the IPs are in so I appreciate there may be a few minutes delay before the IPs are re-assigned when the workers are pre-empted, however this is acceptable to me. I would expect force assignment to do its thing here and reconcile things after a few minutes.

Expected behavior
IPs to be assigned to workers.

Additional context
Please let me know if I can provide any further logs.

Implementation in on-premises/Bare metal clusters.

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like

The possibility of deploying on a bare-metal cluster, such as a private cluster, or together with K3s or MicroK8s.

This would cover cases where a service-level load balancer solution such as MetalLB does not fully address the need.

Describe alternatives you've considered

Some applications require a static IP to function correctly in a HomeLab. When a Pod is restarted (for example, after an image version update in a deployment manifest), an application such as FreeIPA does not come up because of the "host" IP validation and the domain certificates at the Pod level. Help us, mere mortals who just want to carry on with our personal data in our home laboratories... =) please. Thank you.

Documentation fix - configure nodeSelector before deploying

Describe the bug
The documentation does not mention that the kubeip deployment YAML has a nodeSelector (for itself) - "cloud.google.com/gke-nodepool: pool-kubip". This needs to be either edited before deployment to match the node labels on which it will be deployed, or the label needs to be added to the node(s).
Doing it without this step would result in a deployment failure.

To Reproduce
Steps to reproduce the behavior:

  1. README.md does not have any mention of the nodeSelector

Expected behavior
kubeip to get deployed


Apply arbitrary labels to nodes

Similar to #15, I'd like to have fewer static IPs than nodes. However, I'd also like the ability to tag the node with a more customized label to allow the static IPs to serve different purposes (this is a small-scale deployment at this time).

For instance:

gcloud beta compute addresses update kubeip-ip-mx0 --update-labels kubeip=$GKE_CLUSTER_NAME,kubeip-node-labels=mx:second-label

gcloud beta compute addresses update kubeip-ip-web0 --update-labels kubeip=$GKE_CLUSTER_NAME,kubeip-node-labels=web

Would this be something that fits in the scope of this project? Would it be hard to add? I can take a stab when I have time if you can point me in the right direction.

Pool independent IPs

We have a gke cluster with multiple node pools. We would like to assign static IPs to all nodes in the cluster.

Currently it seems the only way to achieve this is to reserve multiple static IPs and then exclusively pre-allocate each IP to a specific node pool via the kubeip-node-pool=<pool_name> label.

This is kind of problematic since whenever we change the number of pools or rename pools (sometimes it's necessary to replace one pool with another in order to add, for example, a new OAuth permission), we also have to update the labels of all static IPs used in that cluster.

Would it be possible to simply have a single / cluster-wide "pool of ip addresses" which is used to satisfy the static IP address needs of multiple or all node pools in a specific cluster?

I.e. imagine this:

  1. We reserve 20 static IPs and label them with kubeip: <clusterName>
  2. We create a cluster with 3 nodepools with random names, each nodepool having 5 nodes (i.e. 15 total)
  3. Kubeip auto assigns static IPs to all 15 nodes.

I'm fine still having to specify KUBEIP_NODEPOOL and KUBEIP_ADDITIONALNODEPOOLS if that's really required, but I'd like to avoid "exclusively" tying a set of IPs to a single node pool.

Unable to create kubeip-sa ClusterRole in GKE

I am unable to create the following yaml definition in GKE:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubeip-sa
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get","list","watch","patch"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","list","watch"]

I am seeing the following error with kubectl apply -f test.yml

Error from server (Forbidden): error when creating "test.yml": clusterroles.rbac.authorization.k8s.io "kubeip-sa" is forbidden: attempt to grant extra privileges: [{[get] [] [nodes] [] []} {[list] [] [nodes] [] []} {[watch] [] [nodes] [] []} {[patch] [] [nodes] [] []} {[get] [] [pods] [] []} {[list] [] [pods] [] []} {[watch] [] [pods] [] []}] user=&{108986779098263313539 [system:authenticated] map[user-assertion.cloud.google.com:[AKUJVpkfdwSx+bWxo/aF+P5w9CGBq35lpuQHLrR6UoSqiHrAfH+K9HscFiH+0lA2EHESVwnigsnTJt6n3dC5xZSzO51HzAZTxIneD23JR7FOoLiT2cdi5EIyBNdaT7zX/kqBkLiYRTnQYa5NKFARsyVPk9Ql2GyOwv38udtfDoWky0JXIsFsS1Soqsiu/bwlFWwrL0jDpYK1gs5hVPYRat+ncQjZIkjy0OXntqXyQg==]]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]}] ruleResolutionErrors=[]

The service account I am using for this has already been granted cluster-admin

$ gcloud config list --format 'value(core.account)'
[email protected]

$ kubectl describe clusterrolebinding terraform-cluster-admin-binding
Name:         terraform-cluster-admin-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind  Name                                                Namespace
  ----  ----                                                ---------
  User  [email protected]  

I am not sure if this is an issue with GKE RBAC or with the specific permissions in the clusterrole?

Tests fail

Describe the bug
Make fails to build because the tests fail:

$ make
"go" build   -o "kubeip"
"go" test -race -v github.com/doitintl/kubeip github.com/doitintl/kubeip/pkg/client github.com/doitintl/kubeip/pkg/config github.com/doitintl/kubeip/pkg/controller github.com/doitintl/kubeip/pkg/kipcompute github.com/doitintl/kubeip/pkg/types github.com/doitintl/kubeip/pkg/utils
# github.com/doitintl/kubeip/pkg/controller
pkg/controller/controller.go:311:4: Infof call needs 1 arg but has 2 args
?   	github.com/doitintl/kubeip	[no test files]
?   	github.com/doitintl/kubeip/pkg/client	[no test files]
?   	github.com/doitintl/kubeip/pkg/config	[no test files]
make: *** [Makefile:19: test] Error 2

To Reproduce
Steps to reproduce the behavior:

  1. git clone https://github.com/doitintl/kubeip
  2. cd kubeip
  3. make
  4. See error

Expected behavior
The build should succeed.

Desktop (please complete the following information):

  • OS: ArchLinux

Assign static IPs to existing nodes, not just newly added ones

Currently static IPs are only assigned to nodes created after the kubeIP deployment is available.

This means that if a node is added while the deployment is unavailable (e.g. it was disrupted during a node upgrade event), it won't get a static IP. It also means that any nodes existing before the deployment is available won't get static IPs assigned.

I'd suggest checking all existing nodes on boot, and at a regular interval as well, to ensure consistency.

In addition to the boot-time and unavailable kubeIP deployment cases, if you have the exact number of nodes as IPs, then during a node repair where the new node is created before the static IP is released, it won't get an assigned IP (due to none being available). If the kubeIP controller implemented the behavior to check all nodes at a regular interval, the static IP may have been freed up and could be assigned.

kubeip does not work with GKE 1.12.6 single node?

I tried to set up kubeip on a single-node (one node pool with one node) cluster. After I applied the configuration, the node went into the "NotReady" state. The node stays in this state even after deleting the kubeip deployment.

Node label not applied when node running kubeIP has its IP address forcibly changed

Thanks for a great tool, we're wrestling with the target use case (a partner that requires us to make requests from a whitelisted IP) and this should really help!

I gave it a go on a test cluster which only has 2 nodes, and it worked as described. However, the node that was running kubeIP had its IP address forcibly changed and the logs look like this:

$ kubectl -n kube-ip logs kubeip-68678b8754-jpqdv
time="2018-06-28T05:44:45Z" level=info msg="kubeIP is starting" Build Date="2018-06-26-06:03" Cluster name=foo-bar Project name=foo-bar Version=bf0d9612f00f7363a58327c21857cd4102f5f6d6
time="2018-06-28T05:44:45Z" level=info msg="Starting forceAssignment" function=forceAssignment pkg=kubeip
time="2018-06-28T05:44:45Z" level=info msg="Starting kubeip controller" pkg=kubeip-node
time="2018-06-28T05:44:45Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2018-06-28T05:44:45Z" level=info msg="kubeip controller synced and ready" pkg=kubeip-node
time="2018-06-28T05:44:46Z" level=info msg="Found un assigned node gke-foo-bar-pool-a-4289fdda-s8cz" function=processAllNodes pkg=kubeip
time="2018-06-28T05:44:46Z" level=info msg="Working on gke-foo-bar-pool-a-4289fdda-s8cz in zone australia-southeast1-a" function=Kubeip pkg=kubeip
time="2018-06-28T05:44:47Z" level=info msg="Found reserved address 35.197.175.73" function=replaceIP pkg=kubeip
time="2018-06-28T05:45:27Z" level=error msg="AddAccessConfig \"Post https://www.googleapis.com/compute/v1/projects/foo-bar/zones/australia-southeast1-a/instances/gke-foo-bar-pool-a-4289fdda-s8cz/addAccessConfig?alt=json&networkInterface=nic0: read tcp 10.32.4.14:44296->216.58.200.106:443: read: connection reset by peer\""
time="2018-06-28T05:49:45Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2018-06-28T05:49:45Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2018-06-28T05:54:45Z" level=info msg="On Ticker" function=processAllNodes pkg=kubeip
time="2018-06-28T05:54:45Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip

When I examine the nodes via the GCP web console, I can see that the gke-foo-bar-pool-a-4289fdda-s8cz node successfully had its external IP changed to 35.197.175.73, however no kubip_assigned label was added to the node.

My guess is the address change triggered the read: connection reset by peer error, and setting the label never happened?

Our intention is to run kubeIP on a cluster with 10+ nodes and only assign fixed addresses to 2-3 of them, so the labels are required for pods that want to target those nodes (as described in #15).

Limit RBAC rights

The suggested RBAC rights (full cluster admin) are wildly excessive.

It would be great if a more limited RBAC role could be suggested in the repository.

KubeIP not able to assign an IP to a node in state NotReady

Describe the bug
If a new node is starting and is in the NotReady state, kubeIP removes the public IP (assigns 0.0.0.0), and the node loses its internet connection and can't provision properly.

To Reproduce
This behavior has been seen in a customer using KubeIP version 1.0.3
Expected behavior
KubeIP assigns a public fixed IP address from a managed pool.

Logs

time="2023-09-21T13:09:02Z" level=info msg="&{kubeip serve-prod spot-fixedip-v5 true [] 5ns false priority true true true false}"
time="2023-09-21T13:09:02Z" level=info msg="[]"
time="2023-09-21T13:09:02Z" level=info msg="kubeIP is starting" Build Date="2023-09-19T10:15:21+0000" Cluster name=serve-prod Project name=key-cistern-95411 Version=v0
time="2023-09-21T13:09:02Z" level=info msg="Processing initial force assignment check" function=forceAssignment pkg=kubeip
time="2023-09-21T13:09:02Z" level=info msg="Starting forceAssignmentOnce" function=forceAssignmentOnce pkg=kubeip
time="2023-09-21T13:09:02Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2023-09-21T13:09:02Z" level=info msg="Starting kubeip controller" pkg=kubeip-node
time="2023-09-21T13:09:02Z" level=info msg="Collected 3 Nodes of interest...calculating number of IPs required" function=processAllNodes pkg=kubeip
time="2023-09-21T13:09:02Z" level=info msg="Collected 3 Nodes of interest...processing 3 nodes instances within region europe-west1" function=processAllNodes pkg=kubeip
time="2023-09-21T13:09:02Z" level=info msg="Retrieving addresses used in project key-cistern-95411 in region europe-west1" function=processAllNodes pkg=kubeip
time="2023-09-21T13:09:03Z" level=info msg="kubeip controller synced and ready" pkg=kubeip-node
time="2023-09-21T13:09:03Z" level=info msg="Project key-cistern-95411 in region europe-west1 should use the following IPs [34.78.106.50 35.205.90.80 35.205.250.74]... Checking that the instances follow these assignments" function=processAllNodes pkg=kubeip
time="2023-09-21T13:09:03Z" level=info msg="Found 0 Addresses to remove project key-cistern-95411 in region europe-west1. Addresses []" function=processAllNodes pkg=kubeip
time="2023-09-21T13:09:03Z" level=info msg="Found unassigned node gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 in pool spot-fixedip-v5" function=processAllNodes pkg=kubeip
time="2023-09-21T13:09:03Z" level=info msg="Working on gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 in zone europe-west1-c" function=Kubeip pkg=kubeip
time="2023-09-21T13:09:03Z" level=info msg="Found unassigned node gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb in pool spot-fixedip-v5" function=processAllNodes pkg=kubeip
time="2023-09-21T13:09:03Z" level=info msg="Found unassigned node gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft in pool spot-fixedip-v5" function=processAllNodes pkg=kubeip
time="2023-09-21T13:09:03Z" level=info msg="Found node without tag gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60" function=assignMissingTags pkg=kubeip
time="2023-09-21T13:09:03Z" level=info msg="Found node without tag gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb" function=assignMissingTags pkg=kubeip
time="2023-09-21T13:09:04Z" level=info msg="Found node without tag gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft" function=assignMissingTags pkg=kubeip
time="2023-09-21T13:09:12Z" level=info msg="Deleted IP for gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 zone europe-west1-c" function=DeleteIP pkg=kubeip
time="2023-09-21T13:09:12Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 as 0.0.0.0" function=tagNode pkg=kubeip
time="2023-09-21T13:09:12Z" level=info msg="Clear label tag for node gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 with ip 0.0.0.0 and clear tags map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-boot-disk:pd-ssd cloud.google.com/gke-container-runtime:containerd cloud.google.com/gke-cpu-scaling-level:2 cloud.google.com/gke-logging-variant:DEFAULT cloud.google.com/gke-max-pods-per-node:110 cloud.google.com/gke-nodepool:spot-fixedip-v5 cloud.google.com/gke-os-distribution:ubuntu cloud.google.com/gke-spot:true cloud.google.com/machine-family:n1 failure-domain.beta.kubernetes.io/region:europe-west1 failure-domain.beta.kubernetes.io/zone:europe-west1-c kubernetes.io/arch:amd64 kubernetes.io/hostname:gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 node.kubernetes.io/masq-agent-ds-ready:true nodepool:preemtible-fixedip projectcalico.org/ds-ready:true topology.kubernetes.io/region:europe-west1 topology.kubernetes.io/zone:europe-west1-c]" function=tagNode pkg=kubeip
time="2023-09-21T13:09:12Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 as 0.0.0.0 with tags {"kubip_assigned":"0-0-0-0" ,"nodepool":null ,"[projectcalico.org/ds-ready](http://projectcalico.org/ds-ready%5C)":null} " function=tagNode pkg=kubeip
time="2023-09-21T13:09:17Z" level=info msg="Added IP for gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 zone europe-west1-c new ip 35.205.250.74" function=addIP pkg=kubeip
time="2023-09-21T13:09:17Z" level=info msg="Replaced IP for gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 zone europe-west1-c new ip 35.205.250.74" function=replaceIP pkg=kubeip
time="2023-09-21T13:09:17Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 as 35.205.250.74" function=tagNode pkg=kubeip
time="2023-09-21T13:09:17Z" level=info msg="Clear label tag for node gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 with ip 35.205.250.74 and clear tags map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-boot-disk:pd-ssd cloud.google.com/gke-container-runtime:containerd cloud.google.com/gke-cpu-scaling-level:2 cloud.google.com/gke-logging-variant:DEFAULT cloud.google.com/gke-max-pods-per-node:110 cloud.google.com/gke-nodepool:spot-fixedip-v5 cloud.google.com/gke-os-distribution:ubuntu cloud.google.com/gke-spot:true cloud.google.com/machine-family:n1 failure-domain.beta.kubernetes.io/region:europe-west1 failure-domain.beta.kubernetes.io/zone:europe-west1-c kubernetes.io/arch:amd64 kubernetes.io/hostname:gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 kubernetes.io/os:linux kubip_assigned:0-0-0-0 node.kubernetes.io/instance-type:n1-standard-2 node.kubernetes.io/masq-agent-ds-ready:true topology.kubernetes.io/region:europe-west1 topology.kubernetes.io/zone:europe-west1-c]" function=tagNode pkg=kubeip
time="2023-09-21T13:09:17Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-cj60 as 35.205.250.74 with tags {"kubip_assigned":"35-205-250-74"} " function=tagNode pkg=kubeip
time="2023-09-21T13:09:17Z" level=info msg="Working on gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb in zone europe-west1-c" function=Kubeip pkg=kubeip
time="2023-09-21T13:09:18Z" level=info msg="no free address found"
time="2023-09-21T13:09:18Z" level=info msg="Working on gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft in zone europe-west1-c" function=Kubeip pkg=kubeip
time="2023-09-21T13:09:18Z" level=info msg="no free address found"
time="2023-09-21T13:14:02Z" level=info msg="Tick received for force assignment check" function=forceAssignment pkg=kubeip
time="2023-09-21T13:14:02Z" level=info msg="Starting forceAssignmentOnce" function=forceAssignmentOnce pkg=kubeip
time="2023-09-21T13:14:02Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:02Z" level=info msg="Found unassigned node gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb in pool spot-fixedip-v5" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:02Z" level=info msg="Working on gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb in zone europe-west1-c" function=Kubeip pkg=kubeip
time="2023-09-21T13:14:02Z" level=info msg="Found unassigned node gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft in pool spot-fixedip-v5" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:02Z" level=info msg="Found node without tag gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb" function=assignMissingTags pkg=kubeip
time="2023-09-21T13:14:03Z" level=info msg="no free address found"
time="2023-09-21T13:14:03Z" level=info msg="Working on gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft in zone europe-west1-c" function=Kubeip pkg=kubeip
time="2023-09-21T13:14:03Z" level=info msg="Found node without tag gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft" function=assignMissingTags pkg=kubeip
time="2023-09-21T13:14:03Z" level=info msg="no free address found"
time="2023-09-21T13:14:37Z" level=info msg="Processing removal to node: gke-serve-prod-spot-fixedip-v4-0d87ac80-0khz " function=processItem pkg=kubeip-node
time="2023-09-21T13:14:37Z" level=info msg="Starting forceAssignmentOnce" function=forceAssignmentOnce pkg=kubeip
time="2023-09-21T13:14:37Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:37Z" level=info msg="Collected 3 Nodes of interest...calculating number of IPs required" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:37Z" level=info msg="Collected 3 Nodes of interest...processing 3 nodes instances within region europe-west1" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:37Z" level=info msg="Retrieving addresses used in project key-cistern-95411 in region europe-west1" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:38Z" level=info msg="Project key-cistern-95411 in region europe-west1 should use the following IPs [34.78.106.50 35.205.90.80 35.205.250.74]... Checking that the instances follow these assignments" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:38Z" level=info msg="Found 0 Addresses to remove project key-cistern-95411 in region europe-west1. Addresses []" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:38Z" level=info msg="Found unassigned node gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb in pool spot-fixedip-v5" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:38Z" level=info msg="Working on gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb in zone europe-west1-c" function=Kubeip pkg=kubeip
time="2023-09-21T13:14:38Z" level=info msg="Found unassigned node gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft in pool spot-fixedip-v5" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:38Z" level=info msg="Found node without tag gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb" function=assignMissingTags pkg=kubeip
time="2023-09-21T13:14:38Z" level=info msg="Found node without tag gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft" function=assignMissingTags pkg=kubeip
time="2023-09-21T13:14:42Z" level=info msg="Processing removal to node: gke-serve-prod-spot-fixedip-v4-0d87ac80-w9mq " function=processItem pkg=kubeip-node
time="2023-09-21T13:14:42Z" level=info msg="Starting forceAssignmentOnce" function=forceAssignmentOnce pkg=kubeip
time="2023-09-21T13:14:42Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:42Z" level=info msg="Collected 3 Nodes of interest...calculating number of IPs required" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:42Z" level=info msg="Collected 3 Nodes of interest...processing 3 nodes instances within region europe-west1" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:43Z" level=info msg="Retrieving addresses used in project key-cistern-95411 in region europe-west1" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:43Z" level=info msg="Project key-cistern-95411 in region europe-west1 should use the following IPs [34.78.106.50 35.205.90.80 35.205.250.74]... Checking that the instances follow these assignments" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:43Z" level=info msg="Found 0 Addresses to remove project key-cistern-95411 in region europe-west1. Addresses []" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:43Z" level=info msg="Found unassigned node gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb in pool spot-fixedip-v5" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:44Z" level=info msg="Found unassigned node gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft in pool spot-fixedip-v5" function=processAllNodes pkg=kubeip
time="2023-09-21T13:14:44Z" level=info msg="Found node without tag gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb" function=assignMissingTags pkg=kubeip
time="2023-09-21T13:14:44Z" level=info msg="Found node without tag gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft" function=assignMissingTags pkg=kubeip
time="2023-09-21T13:14:47Z" level=info msg="Deleted IP for gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb zone europe-west1-c" function=DeleteIP pkg=kubeip
time="2023-09-21T13:14:47Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb as 0.0.0.0" function=tagNode pkg=kubeip
time="2023-09-21T13:14:47Z" level=info msg="Clear label tag for node gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb with ip 0.0.0.0 and clear tags map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-boot-disk:pd-ssd cloud.google.com/gke-container-runtime:containerd cloud.google.com/gke-cpu-scaling-level:2 cloud.google.com/gke-logging-variant:DEFAULT cloud.google.com/gke-max-pods-per-node:110 cloud.google.com/gke-nodepool:spot-fixedip-v5 cloud.google.com/gke-os-distribution:ubuntu cloud.google.com/gke-spot:true cloud.google.com/machine-family:n1 failure-domain.beta.kubernetes.io/region:europe-west1 failure-domain.beta.kubernetes.io/zone:europe-west1-c kubernetes.io/arch:amd64 kubernetes.io/hostname:gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 node.kubernetes.io/masq-agent-ds-ready:true nodepool:preemtible-fixedip projectcalico.org/ds-ready:true topology.kubernetes.io/region:europe-west1 topology.kubernetes.io/zone:europe-west1-c]" function=tagNode pkg=kubeip
time="2023-09-21T13:14:47Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb as 0.0.0.0 with tags {"kubip_assigned":"0-0-0-0" ,"[projectcalico.org/ds-ready](http://projectcalico.org/ds-ready%5C)":null ,"nodepool":null} " function=tagNode pkg=kubeip
time="2023-09-21T13:14:54Z" level=info msg="Added IP for gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb zone europe-west1-c new ip 34.78.106.50" function=addIP pkg=kubeip
time="2023-09-21T13:14:54Z" level=info msg="Replaced IP for gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb zone europe-west1-c new ip 34.78.106.50" function=replaceIP pkg=kubeip
time="2023-09-21T13:14:54Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb as 34.78.106.50" function=tagNode pkg=kubeip
time="2023-09-21T13:14:54Z" level=info msg="Clear label tag for node gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb with ip 34.78.106.50 and clear tags map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-boot-disk:pd-ssd cloud.google.com/gke-container-runtime:containerd cloud.google.com/gke-cpu-scaling-level:2 cloud.google.com/gke-logging-variant:DEFAULT cloud.google.com/gke-max-pods-per-node:110 cloud.google.com/gke-nodepool:spot-fixedip-v5 cloud.google.com/gke-os-distribution:ubuntu cloud.google.com/gke-spot:true cloud.google.com/machine-family:n1 failure-domain.beta.kubernetes.io/region:europe-west1 failure-domain.beta.kubernetes.io/zone:europe-west1-c kubernetes.io/arch:amd64 kubernetes.io/hostname:gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb kubernetes.io/os:linux kubip_assigned:0-0-0-0 node.kubernetes.io/instance-type:n1-standard-2 node.kubernetes.io/masq-agent-ds-ready:true topology.kubernetes.io/region:europe-west1 topology.kubernetes.io/zone:europe-west1-c]" function=tagNode pkg=kubeip
time="2023-09-21T13:14:54Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb as 34.78.106.50 with tags {"kubip_assigned":"34-78-106-50"} " function=tagNode pkg=kubeip
time="2023-09-21T13:14:54Z" level=info msg="Working on gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft in zone europe-west1-c" function=Kubeip pkg=kubeip
time="2023-09-21T13:15:03Z" level=info msg="Deleted IP for gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft zone europe-west1-c" function=DeleteIP pkg=kubeip
time="2023-09-21T13:15:03Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft as 0.0.0.0" function=tagNode pkg=kubeip
time="2023-09-21T13:15:03Z" level=info msg="Clear label tag for node gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft with ip 0.0.0.0 and clear tags map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-boot-disk:pd-ssd cloud.google.com/gke-container-runtime:containerd cloud.google.com/gke-cpu-scaling-level:2 cloud.google.com/gke-logging-variant:DEFAULT cloud.google.com/gke-max-pods-per-node:110 cloud.google.com/gke-nodepool:spot-fixedip-v5 cloud.google.com/gke-os-distribution:ubuntu cloud.google.com/gke-spot:true cloud.google.com/machine-family:n1 failure-domain.beta.kubernetes.io/region:europe-west1 failure-domain.beta.kubernetes.io/zone:europe-west1-c kubernetes.io/arch:amd64 kubernetes.io/hostname:gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 node.kubernetes.io/masq-agent-ds-ready:true nodepool:preemtible-fixedip projectcalico.org/ds-ready:true topology.kubernetes.io/region:europe-west1 topology.kubernetes.io/zone:europe-west1-c]" function=tagNode pkg=kubeip
time="2023-09-21T13:15:03Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft as 0.0.0.0 with tags {"kubip_assigned":"0-0-0-0" ,"[projectcalico.org/ds-ready](http://projectcalico.org/ds-ready%5C)":null ,"nodepool":null} " function=tagNode pkg=kubeip
time="2023-09-21T13:15:08Z" level=info msg="Added IP for gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft zone europe-west1-c new ip 35.205.90.80" function=addIP pkg=kubeip
time="2023-09-21T13:15:08Z" level=info msg="Replaced IP for gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft zone europe-west1-c new ip 35.205.90.80" function=replaceIP pkg=kubeip
time="2023-09-21T13:15:08Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft as 35.205.90.80" function=tagNode pkg=kubeip
time="2023-09-21T13:15:08Z" level=info msg="Clear label tag for node gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft with ip 35.205.90.80 and clear tags map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/gke-boot-disk:pd-ssd cloud.google.com/gke-container-runtime:containerd cloud.google.com/gke-cpu-scaling-level:2 cloud.google.com/gke-logging-variant:DEFAULT cloud.google.com/gke-max-pods-per-node:110 cloud.google.com/gke-nodepool:spot-fixedip-v5 cloud.google.com/gke-os-distribution:ubuntu cloud.google.com/gke-spot:true cloud.google.com/machine-family:n1 failure-domain.beta.kubernetes.io/region:europe-west1 failure-domain.beta.kubernetes.io/zone:europe-west1-c kubernetes.io/arch:amd64 kubernetes.io/hostname:gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft kubernetes.io/os:linux kubip_assigned:0-0-0-0 node.kubernetes.io/instance-type:n1-standard-2 node.kubernetes.io/masq-agent-ds-ready:true topology.kubernetes.io/region:europe-west1 topology.kubernetes.io/zone:europe-west1-c]" function=tagNode pkg=kubeip
time="2023-09-21T13:15:08Z" level=info msg="Tagging node gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft as 35.205.90.80 with tags {"kubip_assigned":"35-205-90-80"} " function=tagNode pkg=kubeip
time="2023-09-21T13:15:08Z" level=info msg="Working on gke-serve-prod-spot-fixedip-v5-d09bfc56-qlbb in zone europe-west1-c" function=Kubeip pkg=kubeip
time="2023-09-21T13:15:09Z" level=info msg="no free address found"
time="2023-09-21T13:15:09Z" level=info msg="Working on gke-serve-prod-spot-fixedip-v5-d09bfc56-r0ft in zone europe-west1-c" function=Kubeip pkg=kubeip
time="2023-09-21T13:15:09Z" level=info msg="no free address found"

Additional context
Reference ticket with more information: https://doitintl.zendesk.com/agent/tickets/148774

Make kubeip compatible with Workload Identity

Is your feature request related to a problem? Please describe.
Workload Identity is the recommended way for workloads running on Google Kubernetes Engine (GKE) to access Google Cloud services without mounting a Google Service Account key.

Describe the solution you'd like
Right now, a Google Service Account key has to be generated and stored in a Kubernetes secret, which kubeip then uses. With Workload Identity, no key is needed: kubeip's Kubernetes service account can impersonate an IAM service account, giving kubeip the required permissions.
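
A minimal sketch of what that could look like on GKE, assuming Workload Identity is enabled on the cluster and node pool (the IAM service account and project below are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeip-service-account
  namespace: kube-system
  annotations:
    # Placeholder IAM service account; it must hold the compute permissions kubeip needs.
    iam.gke.io/gcp-service-account: kubeip-sa@my-project.iam.gserviceaccount.com

The IAM service account would additionally need a roles/iam.workloadIdentityUser binding for the member serviceAccount:my-project.svc.id.goog[kube-system/kubeip-service-account].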

Describe alternatives you've considered
Mounting the Service Account key in JSON format is the only supported method right now.

Additional context

deployment yaml Deployment object missing serviceAccountName

Describe the bug
After following the deploy steps for using the built image, I was getting the following in the log:
"ERROR: logging before flag.Parse: E1019 16:31:10.936989 1 reflector.go:205] github.com/doitintl/kubeip/pkg/controller/controller.go:159: Failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:default:default" cannot list pods at the cluster scope: Unknown user "system:serviceaccount:default:default"

To Reproduce
Steps to reproduce the behavior:

  1. Follow the README docker image deploy steps.
  2. Review log
  3. See error

Expected behavior
New IP assigned to node(s) in the pool.

Screenshots
n/a

Desktop (please complete the following information):
n/a

Smartphone (please complete the following information):
n/a

Additional context

use with multiple clusters in the same project

Firstly, thanks for taking the time to create this tool. It has solved a key problem for us! :)

Describe the bug
I have two clusters, one for UAT and one for Live. I have set KubeIP up on the UAT cluster smoothly; no errors and IP addresses are being assigned. Working like a dream.

However, applying the same* process to the Live cluster has not rendered the same result. The KubeIP pod seems stuck in the ContainerCreating state.

The process for setting up the Live cluster (which I did second, after UAT) was a little different because I did not have to set up the service accounts, etc. These are project-bound as far as I know. Is this correct? I think that this also means that I can use the same secret key.

This command:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config list --format 'value(core.account)')

Returned:

Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "cluster-admin-binding" already exists

Which did not happen the first time, when I was setting this up for UAT. I am a little suspicious that maybe this is the problem...

Also, this is how my static IP addresses are labelled. Is this configuration OK?

[screenshot: labels on the reserved static IP addresses]

To Reproduce
Steps to reproduce the behavior:

  1. Set up a cluster to use KubeIP (everything works fine);
  2. Attempt to set up a second cluster (the KubeIP pod gets stuck in a ContainerCreating state).

Expected behavior
I believe that the pod should launch and I should get static IPs assigned.

Screenshots
See above.

Additional context
None.

External IPs not being assigned if node's access config name != 'external-nat'

1. Describe the bug:

After successfully - and VERY easily :kudos: - deploying Kubeip on my GKE cluster, I noticed it wasn't assigning free external IPs to nodes with ephemeral IPs. In the logs, the following error was appearing:

level=error msg="AddAccessConfig \"googleapi: Error 400: At most one access config currently supported., badRequest\"

2. Cause:

The steps that Kubeip follows to replace an ephemeral IP with an external IP on a given node include, among others, deleting the node's existing access config and then adding a new access config that carries the reserved external IP.

The cause of the error in the log is that Kubeip was not successfully deleting the node's access config. And this happened because, in my case, the access config's name was not 'external-nat' (interestingly enough though, the gcloud API did return a 200).
Due to this, when Kubeip later attempted to add another access config with the external IP, the gcloud API returned the aforementioned error in my logs, stating that 1 is the max number of access configs allowed.

3. Expected behavior

IMHO, Kubeip should assign free external IPs to nodes with ephemeral IPs independently of the nodes' access config name.

4. Proposed solution:

Although my experience with Go is scarce, building products on top of AWS/Azure/GCloud/Kubernetes APIs is my daily job :)
My preferred approach for this case would be, instead of assuming that the state of the resources on GCloud will be X (i.e. that the name of the access config will be 'external-nat'), to query the API and actively retrieve the state of, in this case, the node's access config. Finally, rely on the access config's state retrieved from the API (more specifically, its name) to issue the DeleteAccessConfig request.
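
A rough sketch of that flow with the google.golang.org/api/compute/v1 Go client, not the code from the MR itself (project, zone, node name and IP are placeholders, and error handling is trimmed):

package main

import (
	"context"
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()

	// Uses Application Default Credentials, as kubeip does via its service account.
	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholders for illustration only.
	project, zone, node := "my-project", "europe-west1-c", "gke-my-node"
	reservedIP := "203.0.113.10"

	inst, err := svc.Instances.Get(project, zone, node).Do()
	if err != nil {
		log.Fatal(err)
	}

	// Query the node's primary interface and use whatever name its access config
	// actually has, instead of assuming it is called "external-nat".
	iface := inst.NetworkInterfaces[0]
	for _, ac := range iface.AccessConfigs {
		fmt.Printf("deleting access config %q on interface %q\n", ac.Name, iface.Name)
		if _, err := svc.Instances.DeleteAccessConfig(project, zone, node, ac.Name, iface.Name).Do(); err != nil {
			log.Fatal(err)
		}
	}

	// Attach the reserved static IP as the new access config.
	newAC := &compute.AccessConfig{Name: "external-nat", Type: "ONE_TO_ONE_NAT", NatIP: reservedIP}
	if _, err := svc.Instances.AddAccessConfig(project, zone, node, iface.Name, newAC).Do(); err != nil {
		log.Fatal(err)
	}
	// Note: both calls return long-running operations that real code should poll to completion.
}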

I have submitted an MR implementing this solution: #60
I'd appreciate if you could review it and please, feel free to suggest any change you see fit.
(the "patched" version is already running on my cluster and producing the expected results)

Support for internal IP

Is your feature request related to a problem? Please describe.
I have a requirement for the internal IPs of the VMs due to a combination of a VPN and a firewall.

Describe the solution you'd like
Assign internal IPs from an IP pool to the cluster VMs.

Describe alternatives you've considered
Use a reduced subnet to limit the range of available IPs. A proxy VM to proxy-pass the requests to the VPN.

Additional context
An external client needs to connect to our VMs (ports exposed on the VM) via VPN, and we can't risk the node pool recreating the VMs and losing the agreed internal IPs.

Feature Request: export prom style metrics about what kubeip is doing.

Is your feature request related to a problem? Please describe.
No

Describe the solution you'd like
Feature Request: export prom style metrics about what kubeip is doing in order to monitor it for errors.
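
A minimal sketch of what that could look like with prometheus/client_golang; the metric name, labels, and port here are made up for illustration:

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Hypothetical counter: how many IP assignments kubeip attempted, partitioned by result.
var ipAssignments = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "kubeip_ip_assignments_total",
		Help: "Static IP assignment attempts, partitioned by result.",
	},
	[]string{"result"}, // e.g. "success", "error", "no_free_address"
)

func main() {
	prometheus.MustRegister(ipAssignments)

	// kubeip would increment the counter wherever it assigns (or fails to assign) an IP.
	ipAssignments.WithLabelValues("success").Inc()

	// Expose the metrics endpoint for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9102", nil))
}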

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

Is roles/compute.storageAdmin role really needed?

I was surprised to see roles/compute.storageAdmin in the list of required roles, so I experimented with removing it and kubeIP seems to still work.

I might be missing something, but is it definitely required? The permissions kubeIP requires are quite broad, so anything that reduces the surface area would be helpful.

Is there no way to assign a static IP to the self node pool?

If I understand it right, the node pool that KubeIP itself runs on will not get a static IP, because KubeIP first removes the node's current IP (tagging it 0.0.0.0) for reassignment and then can no longer connect to gcloud. I tried it and it made the cluster completely unusable.

Can someone please explain why and how this happens?
Is there a way to assign all nodes a static IP?

KUBEIP_COPYLABELS not applying labels

Describe the bug
Using the sample from the README with KUBEIP_COPYLABELS=true as well as KUBEIP_CLEARLABELS=true, no "platform_whitelisted" label is created on the node; only "kubip_assigned" is created.
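
For reference, the configuration in question would look roughly like this. Only the two label-related keys come from this report (which, as described above, are expected to copy the reserved address's labels onto the node); the remaining values are placeholders in the style of the README sample:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeip-config
  namespace: default
  labels:
    app: kubeip
data:
  KUBEIP_LABELKEY: "kubeip"
  KUBEIP_LABELVALUE: "my-cluster"
  KUBEIP_NODEPOOL: "my-node-pool"
  KUBEIP_COPYLABELS: "true"
  KUBEIP_CLEARLABELS: "true"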

To Reproduce

  1. Tag your IP as per documentation
  2. Deploy DS/CM with KUBEIP_COPYLABELS=true
  3. Observe missing tag

Expected behavior
Tag to exist on nodes

Screenshots
./.

Desktop (please complete the following information):

  • GKE v1.27.3-gke.100

Smartphone (please complete the following information):
???

Additional context
Add any other context about the problem here.

NodePool Selector required

Most people who would use something like kubeIP will allocate a specific node pool to run pods that require a fixed set of static addresses.

kubeIP should support assigning static IP addresses only to nodes assigned to a specific node pool.

Setting KUBEIP_FORCEASSIGNMENT to "false" still reassigns the IP of the existing nodes

My cluster has only one node.

$ cat deploy/kubeip-configmap.yaml

apiVersion: v1
data:
  KUBEIP_LABELKEY: "kubeip"
  KUBEIP_LABELVALUE: "cluster-1"
  KUBEIP_NODEPOOL: "default-pool"
  KUBEIP_FORCEASSIGNMENT: "false"
kind: ConfigMap
metadata:
  labels:
    app: kubeip
  name: kubeip-config
  namespace: default

Log output:

$ kubectl logs -f kubeip-8968b4d98-htdw6
time="2018-06-28T08:09:01Z" level=info msg="kubeIP is starting" Build Date="2018-06-26-06:03" Cluster name=cluster-1 Project name=doit-playground Version=bf0d9612f00f7363a58327c21857cd4102f5f6d6
time="2018-06-28T08:09:01Z" level=info msg="Starting forceAssignment" function=forceAssignment pkg=kubeip
time="2018-06-28T08:09:01Z" level=info msg="Collecting Node List..." function=processAllNodes pkg=kubeip
time="2018-06-28T08:09:01Z" level=info msg="Starting kubeip controller" pkg=kubeip-node
time="2018-06-28T08:09:01Z" level=info msg="kubeip controller synced and ready" pkg=kubeip-node
time="2018-06-28T08:09:02Z" level=info msg="Found un assigned node gke-cluster-1-default-pool-e5acb86c-tzrs" function=processAllNodes pkg=kubeip
time="2018-06-28T08:09:02Z" level=info msg="Working on gke-cluster-1-default-pool-e5acb86c-tzrs in zone us-central1-a" function=Kubeip pkg=kubeip
time="2018-06-28T08:09:03Z" level=info msg="Found reserved address 35.188.2.192" function=replaceIP pkg=kubeip
time="2018-06-28T08:09:46Z" level=error msg="ZoneOperations.Get \"Get https://www.googleapis.com/compute/v1/projects/doit-playground/zones/us-central1-a/operations/operation-1530173357207-56faf3f66add8-f0e7d0d6-826d4735?alt=json: read tcp 10.8.0.13:37408->108.177.112.95:443: read: connection reset by peer\" operation-1530173357207-56faf3f66add8-f0e7d0d6-826d4735"
time="2018-06-28T08:09:46Z" level=info msg="Replaced IP for gke-cluster-1-default-pool-e5acb86c-tzrs zone us-central1-a new ip 35.188.2.192" function=replaceIP pkg=kubeip
time="2018-06-28T08:09:53Z" level=error msg="Get https://10.11.240.1:443/api/v1/nodes?labelSelector=kubip_assigned%3D35-188-2-192: read tcp 10.8.0.13:52992->10.11.240.1:443: read: connection reset by peer"
time="2018-06-28T08:09:53Z" level=info msg="Tagging node gke-cluster-1-default-pool-e5acb86c-tzrs as 35.188.2.192" function=tagNode pkg=kubeip

Unable to Build Docker Image

The README was not updated for the new changes; make image has been removed from the Makefile.

Error response from daemon: Dockerfile parse error line 19: Unknown flag: mount

I'm unable to build the image. What's the new process?
