Comments (63)
Hi folks, we are planning to upgrade our k8s distribution to the latest k8s version, but to support GCE we need to deploy the CCM.
Is there any position on the release process for this repo, or does everyone need to clone it and DIY?
Will the images be delivered publicly?
from cloud-provider-gcp.
@jprzychodzen Any updates on this issue? Seems like there are still no images available.
@aojea , thanks for the effort.
What is the reasoning behind the tag v26.2.4? Does it mean that this CCM vendors k8s.io v0.26.x and is mainly compatible with K8s 1.26?
I would suggest following the versioning policy of the other CCMs. They all release versions like v1.25.x, v1.26.x, v1.27.x, where the minor version of the CCM corresponds to the K8s version it vendors. That makes it much easier to track things and to pick the proper CCM version for the K8s version of your cluster. tl;dr v1.26.x is much more intuitive than v26.2.4.
Shouldn't we also have a corresponding github release? Currently I only see tags but no github release.
Do you also plan to release CCM versions that support (and vendor) K8s 1.24, 1.25, 1.27?
PS. I am not grumpy, I just wanted to provide constructive feedback.
@aojea Thanks! Could you also add the v27.1.0 release to the registry?
Tagged releases are still not pushed to the official registry (registry.k8s.io/cloud-provider-gcp/cloud-controller-manager), and even the staging registry doesn't contain the tagged versions.
I am bringing the topic raised in #300 (comment). What is the ETA for container images that can be consumed by end users?
Sent kubernetes/k8s.io#5231 to promote the image
I would like to help here. We will need to use this in CAPG, and I feel some parts are still missing to have everything in place.
How can I help? @aojea
@cpanato we now have automation to publish the images after each commit (https://console.cloud.google.com/gcr/images/k8s-staging-cloud-provider-gcp/GLOBAL/cloud-controller-manager); we just need to make them available in the registry so users can consume them. I think that is the only part missing.
@sdmodi @jprzychodzen we need to document our release process better (https://github.com/kubernetes/cloud-provider-gcp/tags). Are any of you tagging the releases?
$ crane ls registry.k8s.io/cloud-provider-gcp/cloud-controller-manager
sha256-e70becd7b8cc50a3ac80f36ad0db8781742a225563e759897f846ac728da87db.sig
sha256-f057f6c934d6afa73a38f94b71d7da2f99033e9a6e689d59b4ee1e689031ef00.sig
v26.2.4
v27.1.6
latest release published
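For anyone scripting against the registry without crane or regctl installed, the same tag list can be pulled from the OCI distribution API (curl against https://registry.k8s.io/v2/cloud-provider-gcp/cloud-controller-manager/tags/list) and parsed with plain shell. A minimal sketch, using a hard-coded sample response instead of a live call:

```shell
#!/bin/sh
# Extract tag names from an OCI /tags/list JSON response without jq.
# The JSON below is a made-up sample matching the tag list above; in
# practice it would come from:
#   curl -sSL https://registry.k8s.io/v2/cloud-provider-gcp/cloud-controller-manager/tags/list
json='{"name":"cloud-provider-gcp/cloud-controller-manager","tags":["v26.2.4","v27.1.6"]}'

list_tags() {
  # Split on commas, then keep quoted strings that look like version tags.
  printf '%s\n' "$1" | tr ',' '\n' | sed -n 's/.*"\(v[0-9][^"]*\)".*/\1/p'
}

list_tags "$json"
```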
here you go folks
$ docker pull registry.k8s.io/cloud-provider-gcp/cloud-controller-manager:v26.2.4
v26.2.4: Pulling from cloud-provider-gcp/cloud-controller-manager
Digest: sha256:e70becd7b8cc50a3ac80f36ad0db8781742a225563e759897f846ac728da87db
Status: Image is up to date for registry.k8s.io/cloud-provider-gcp/cloud-controller-manager:v26.2.4
registry.k8s.io/cloud-provider-gcp/cloud-controller-manager:v26.2.4
I've removed tag v26.2.4 and tag v26.4.0 is in place.
@jprzychodzen could you please check again? Thanks.
$ regctl tag list registry.k8s.io/cloud-provider-gcp/cloud-controller-manager
sha256-e70becd7b8cc50a3ac80f36ad0db8781742a225563e759897f846ac728da87db.sig
v26.2.4
Edit: It seems you were talking about git tags, not container images. Unfortunately we're now in a situation where the only published image doesn't have a tagged version of manifests to go with it, could somebody look into publishing an image for v26.4.0 (and ideally v27.1.0 too, pretty please :))
@cpanato 👋 Exciting! You probably already know, but it's possible to use GCP CCM with CAPG, we've been doing so in prod for about half a year now IIRC.
We had to make some minor adjustments to the deployment manifests shipped with this repo to get things working in our setup. In case you guys are still in the brainstorming phase, I'll leave some snippets of the relevant bits below in spoiler tags to not clutter up this issue; it took a fair amount of work to figure out and might help. :)
Please note that this is pretty old (about 6 months); parts of it might not be relevant anymore, but they were back when we set things up. It also likely contains some changes specific to us running a Shared VPC setup off a CAPG fork (couldn't find time to vet and polish our changes for upstreaming yet, I'll try to get around to it in Q4).
# GCP CCM DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cloud-controller-manager
  namespace: kube-system
spec:
  # ...
  template:
    # ...
    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      tolerations:
      - ...
      # We're adding this taint in the KubeadmControlPlane/MachineDeployment templates instead of relying on cilium's
      # default of having the operator do it at runtime (ran into a race condition IIRC), which causes CCM to be unschedulable
      # (due to the taint itself) and cilium to never fully start its agent(s) because it needs PodCIDR to be set on nodes, which
      # is now done by CCM instead of KCM, which doesn't get scheduled to nodes without this taint.
      - key: node.cilium.io/agent-not-ready
        value: "true"
        effect: NoExecute
      serviceAccountName: cloud-controller-manager
      containers:
      - name: cloud-controller-manager
        image: our-registry.com/gcp-ccm/cloud-controller-manager:v27.1.6
        command: ['/cloud-controller-manager']
        args:
        # The --help output of the controller binary suggests that profiling is enabled by default
        - --profiling=false
        - --v=4
        - --leader-elect=true
        # We generate a ConfigMap for this file using Kustomize and apply it together with the CAPI manifests in the
        # management cluster, then use it in KubeadmControlPlane.spec.kubeadmConfigSpec.files to have cloud-init
        # write its contents to a file on controlplane nodes. See below for contents but I'm fairly sure we only needed
        # to explicitly provide it to make Shared VPC work.
        - --cloud-config=/etc/kubernetes/cloud.config
        # Default stuff
        - --cloud-provider=gce
        - --use-service-account-credentials=true
        - --bind-address=127.0.0.1
        - --secure-port=10258
        # These took a bit of trial and error, most of them probably aren't universally applicable, as we run cilium without
        # kube-proxy and use Shared VPC + Secondary VPC Ranges for "native" routing (https://docs.cilium.io/en/stable/network/concepts/routing/#google-cloud)
        - --cluster-name=my-cluster
        - --cluster-cidr=10.0.0.0/8
        - --allocate-node-cidrs=true
        - --configure-cloud-routes=false
        - --cidr-allocator-type=CloudAllocator
        - --controllers=cloud-node,cloud-node-lifecycle,nodeipam,service
        env:
        # This probably won't work when running HA controlplanes, but without kube-proxy we don't get DNS resolution
        # for services until cilium is up and running, which doesn't happen until after CCM itself is deployed.
        - name: KUBERNETES_SERVICE_HOST
          value: "127.0.0.1"
        - name: KUBERNETES_SERVICE_PORT
          value: "6443"
        volumeMounts:
        - mountPath: /etc/kubernetes/cloud.config
          name: cloudconfig
          readOnly: true
      hostNetwork: true
      volumes:
      - hostPath:
          path: /etc/kubernetes/cloud.config
          type: ""
        name: cloudconfig
# KCM/CCM cloudconfig
[Global]
# ref: https://github.com/kubernetes/cloud-provider-gcp/blob/bd346bb711bdd32d1e9a502ec651231c0b93664d/providers/gce/gce.go#L186-L219
project-id = "my-project"
network-project-id = "my-vpc-host-project"
network-name = "my-network"
subnetwork-name = "my-subnetwork"
secondary-range-name = "pods-default"
We wrap the above files (and some other stuff) into a ConfigMap using Kustomize, then point a CAPI ClusterResourceSet
at it to install and reconcile CCM from within the management cluster. There might be more elegant ways to do this if CAPG were to default to CCM over KCM in the future.
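For illustration, the ClusterResourceSet wiring described above might look roughly like this. This is a sketch, not our actual manifest: the resource names and the ccm: gcp label are made up, and the referenced ConfigMap is the Kustomize-generated one holding the CCM manifests.

```yaml
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: gcp-ccm                  # illustrative name
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      ccm: gcp                   # illustrative label; set it on the target Cluster objects
  resources:
  - kind: ConfigMap
    name: gcp-ccm-manifests      # the Kustomize-generated ConfigMap with the CCM manifests
  strategy: Reconcile            # or ApplyOnce, depending on how much drift correction you want
```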
# CAPI KCP manifest
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      # ...
      apiServer:
        certSANs:
        - 127.0.0.1
        - ...
      clusterName: my-cluster
      controllerManager:
        extraArgs:
          # PodCIDR allocation is handled by cloud-controller-manager now
          allocate-node-cidrs: "false"
    initConfiguration:
      skipPhases:
      # This is for kube-proxy-less cilium
      - addon/kube-proxy
      nodeRegistration:
        name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
        taints:
        # Register nodes with a taint indicating that CNI is not ready to avoid possible race conditions
        # ref: https://docs.cilium.io/en/v1.13/installation/taints/
        - key: node.cilium.io/agent-not-ready
          value: "true"
          effect: NoExecute
        # Because specifying taints here overwrites default taints, we need to re-add controlplane markers
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
        kubeletExtraArgs:
          # Use cloud-controller-manager
          cloud-provider: external
    joinConfiguration:
      nodeRegistration:
        name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
        taints:
        - key: node.cilium.io/agent-not-ready
          value: "true"
          effect: NoExecute
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
        kubeletExtraArgs:
          # Use cloud-controller-manager
          cloud-provider: external
    files:
    - path: /etc/kubernetes/cloud.config
      contentFrom:
        secret:
          name: cloud-controller-manager-config
          key: cloud-config
    # This file is read by the google guest agent on node VMs, telling it to set up routes for IPs from the (Pod)CIDR slice
    # of the Secondary VPC Range assigned to it by CAPG (part of our fork)
    - path: /etc/default/instance_configs.cfg
      content: |
        [NetworkInterfaces]
        ip_alias = true
        ip_forwarding = false
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachineTemplate
metadata:
  name: my-controlplane
spec:
  # ...
  template:
    spec:
      # Use Shared VPC and a designated Secondary Range (part of our fork)
      subnet: projects/my-vpc-host-project/regions/europe-west3/subnetworks/my-subnetwork,aliases=pods-default:/24
      ipForwarding: Disabled
@sdmodi @aojea I think it makes sense, when we have a release, to publish the manifests in the GitHub release, to make it easier for people to consume them.
I would not like to run bazel to build the manifests locally; instead, consume them from upstream and adjust if needed.
Also, it looks like this repo is tagged with ccm/tag_version, and I think this will not build the expected image in cloudbuild, because the test-infra job will not be triggered by that tag: https://github.com/kubernetes/test-infra/blob/master/config/jobs/image-pushing/k8s-staging-cloud-provider-gcp.yaml#L76
I can be totally wrong.
@cpanato I think you are right
@jingxu97 @saad-ali @cici37 do you have an idea how the owners of this repository could build the image and start publishing it? We (kOps) are thinking of starting to implement an external CCM for GCP, but it is really difficult if basics like a CI release pipeline are missing.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/assign @jprzychodzen
/priority important-soon
ping @jprzychodzen
/assign @aojea
I guess this has happened, right?
I guess this has happened, right?
It hasn't. You can check with
curl -sSL registry.k8s.io/v2/cloud-controller-manager/tags/list | jq .tags[]
There is a postsubmit job that publishes images after each merge:
https://gcr.io/k8s-staging-cloud-provider-gcp/cloud-controller-manager
That registry.k8s.io/v2/cloud-controller-manager/tags/list sounds like the released images. I really don't have time for that, but I'm happy to help with reviews.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
https://console.cloud.google.com/gcr/images/k8s-staging-cloud-provider-gcp/GLOBAL/cloud-controller-manager has the released images
$ docker pull gcr.io/k8s-staging-cloud-provider-gcp/cloud-controller-manager:v26.2.4
v26.2.4: Pulling from k8s-staging-cloud-provider-gcp/cloud-controller-manager
fc251a6e7981: Already exists
1a8af71790f3: Already exists
db4f354738a1: Pull complete
Digest: sha256:e70becd7b8cc50a3ac80f36ad0db8781742a225563e759897f846ac728da87db
Status: Downloaded newer image for gcr.io/k8s-staging-cloud-provider-gcp/cloud-controller-manager:v26.2.4
gcr.io/k8s-staging-cloud-provider-gcp/cloud-controller-manager:v26.2.4
@aojea, are there also non-staging images?
/cc @msau42
has the released images
There is only a single image that is not tagged as alpha. Also, the registry URL says staging, which suggests that these images shouldn't be used in production. If these are indeed release images, it would be great if they could be tagged with a release tag and made available under the common URL that was used before: registry.k8s.io/cloud-controller-manager.
@justinsb you may be interested in this too ^^^
PS. I am not grumpy, just wanted to provide a constructive feedback.
lol, I know, I know, don't worry about my comment from the other issue, that was a misunderstanding
Shouldn't we also have a corresponding github release? Currently I only see tags but no github release.
yes, we should
What is the reasoning behind the tag v26.2.4? Does it mean that this CCM vendors k8s.io v0.26.x and is mainly compatible with K8s 1.26?
honestly I don't know. @sdmodi @jprzychodzen do we have some special meaning for the tags?
What is the reasoning behind the tag v26.2.4?
Actually, that was my mistake when creating tag v26.4.0.
Regarding the naming schema: when I was adding recent tags, I was following the convention set by previous maintainers. A single patch release of K8s (e.g. v1.2.3) will create a specific tag in the cloud-provider-gcp repo (v2.3.0 for our example). If we need to improve CCM between patch releases of k/k, we can use the patch number to indicate another release (v2.3.1 in this case).
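If I read that convention right, the mapping is mechanical: K8s vX.Y.Z yields CCM tag vY.Z.P, with P free for CCM-internal patch releases. A throwaway shell sketch of the mapping (the function name is mine, not anything official):

```shell
#!/bin/sh
# Map a Kubernetes release to the cloud-provider-gcp tag under the
# convention described above: vX.Y.Z -> vY.Z.<ccm-patch>.
k8s_to_ccm_tag() {
  k8s_version="$1"        # e.g. v1.2.3
  ccm_patch="${2:-0}"     # CCM-internal patch number, 0 for the first cut
  minor=$(printf '%s' "$k8s_version" | cut -d. -f2)
  patch=$(printf '%s' "$k8s_version" | cut -d. -f3)
  printf 'v%s.%s.%s\n' "$minor" "$patch" "$ccm_patch"
}

k8s_to_ccm_tag v1.2.3      # -> v2.3.0
k8s_to_ccm_tag v1.26.4 1   # -> v26.4.1
```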
@jprzychodzen are there any plans to retag v26.2.4 as v26.4.0 (not saying/requesting it needs to be done, just curious as to what I should be using)?
I've removed tag v26.2.4 and tag v26.4.0 is in place.
ok, binaries are published now too https://console.cloud.google.com/storage/browser/k8s-staging-cloud-provider-gcp/auth-linux-amd64-master;tab=objects?project=k8s-staging-cloud-provider-gcp&supportedpurview=project&prefix=&forceOnObjectsSortingFiltering=false
Any update on this? Seems like we have to use the registry.k8s.io/cloud-provider-gcp/cloud-controller-manager:v26.2.4 image now, but as far as I can see v26.2.4 doesn't mean anything. It is not even a CCM tag in the repo. Is there any plan for images to be auto-pushed on tag, so we can always find the correct image to deploy?
thank you @itspngu
I don't have permissions to tag releases on this repository.
I don't have permissions to tag releases on this repository.
I see, let me rephrase the question: do we want to tag releases on this repo? Do we want to adhere to a cadence? Match Kubernetes releases? ...
then we'll figure out the technical details, don't worry about that
It would be ideal to have periodic sync up with Kubernetes libraries and tag the releases accordingly. For example, in the current state we are synced up with k8s 1.28.0 when 1.28.2 is already available.
Ok, but what if we have a bug and we want to release something? What about v1.28.2-001, where 001 is the patch release of the CCM inside 1.28.2?
We can create a new tag for that. That would be ccm/v28.2.1. In this repo, the K8s minor version takes the major version tag. That leaves us the patch version for our internal changes.
sgtm, do you mind formalizing that by submitting a PR? It's just a brief doc or section in the README.
I'll do that today.
The images for v27.1.6 aren't available. Is there a work around I can use to get the images that are necessary?
The images for v27.1.6 aren't available. Is there a work around I can use to get the images that are necessary?
You'd have to check out the corresponding git tag of this repo and build KCM/CCM from source yourself in the meantime.
also looks like this repo is tagged with ccm/tag_version
and I think this will not build the expected image in cloudbuild, because the test-infra job will not be triggered with that tag: https://github.com/kubernetes/test-infra/blob/master/config/jobs/image-pushing/k8s-staging-cloud-provider-gcp.yaml#L76
I can be totally wrong.
I made a build process to do this and published here if anyone needs the GCP CPI :) https://github.com/mesosphere/cloud-provider-gcp/pkgs/container/cloud-controller-manager-gcp/136544418?tag=v27.1.6.d2iq.0
I hope that does the trick! Please let us know if that ends up being the issue
@aojea nice! Can we have a release here in GitHub with the manifests as well?
@aojea nice! Can we have a release here in GitHub with the manifests as well?
what do you mean (more specifically)?
@aojea nice! Can we have a release here in GitHub with the manifests as well?
what do you mean (more specifically)?
do we have any deployments/clusterrole/etc to publish as well? or just the image
I see, that is #359
oh bazel :)
Is anyone working on addressing cpanato's comment about cloudbuild not recognizing our tagging format? I think we are still not publishing images on tagged releases.
The CCM image for Kubernetes 1.29.0 is not yet available / published, there's only
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale