
openyurtio / openyurt

1.7K stars · 52 watchers · 390 forks · 25.77 MB

OpenYurt - Extending your native Kubernetes to edge (project under CNCF)

Home Page: https://openyurt.io

License: Apache License 2.0

Makefile 0.28% Go 97.61% Shell 2.00% Smarty 0.11% Ruby 0.01%
kubernetes k8s edge-computing cloud-native golang

openyurt's Issues

edge node cannot be properly set up if `--bootstrap-kubeconfig` is used by kubelet

Several users have reported that if the kubelet on an edge node uses --bootstrap-kubeconfig, /var/lib/openyurt/kubelet.conf keeps being reset. This is because /var/lib/openyurt/kubelet.conf is created from /etc/kubernetes/kubelet.conf, and /etc/kubernetes/kubelet.conf is itself regenerated whenever the kubelet relies on --bootstrap-kubeconfig.

[feature request] Add commands so that a single node can be converted to a standard Kubernetes node or an OpenYurt edge node

What would you like to be added:
Add commands, such as yurtctl convert edgenode and yurtctl revert edgenode, so that a single node can be converted from a standard Kubernetes node to an OpenYurt edge node, or reverted back again.

Why is this needed:
The yurtctl convert and yurtctl revert commands convert all edge nodes to an OpenYurt cluster or back to a plain Kubernetes cluster. If a new edge node is added after conversion, there is no command to convert just that node to an OpenYurt edge node or back to a Kubernetes node; we can only run yurtctl convert or yurtctl revert again against all edge nodes, which is error-prone and inefficient. So per-node convert and revert commands are needed, as sketched below.
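
A possible command-line shape for this, purely illustrative since neither the subcommand nor the flag exists yet; the --edge-nodes flag name is an assumption:

# Hypothetical usage of the proposed per-node subcommands.
yurtctl convert edgenode --edge-nodes node3   # convert one standard node into an OpenYurt edge node
yurtctl revert edgenode --edge-nodes node3    # revert a single OpenYurt edge node back to a standard node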

After completing the node autonomy test, the edge node status still stays Ready

Situation description

  1. I installed the Kubernetes cluster using kubeadm. The cluster version is 1.16, with one master and three nodes.

  2. After I finished installing OpenYurt manually, I started testing whether my installation was successful.

  3. I used the "Test node autonomy" chapter in https://github.com/alibaba/openyurt/blob/master/docs/tutorial/yurtctl.md for the test.

  4. After I completed the actions in the "Test node autonomy" chapter, the edge node status still stays Ready.

Operation steps

  1. I created a sample pod
kubectl apply -f-<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: bbox
spec:
  nodeName: node3       
  containers:
  - image: busybox
    command:
    - top
    name: bbox
EOF
  • node3 is the edge node. I chose the simplest way to schedule the sample pod to the edge node, although this method is not recommended in the Kubernetes documentation.
  2. I modified yurt-hub.yaml, changing the value of --server-addr= to a non-existent IP and port:
    - --server-addr=https://1.1.1.1:6448
    
  3. Then I used the curl -s http://127.0.0.1:10261 command to verify whether the edge node can work normally in offline mode. The result of the command is as expected:
    {
      "kind": "Status",
      "metadata": {
    
      },
      "status": "Failure",
      "message": "request( get : /) is not supported when cluster is unhealthy",
      "reason": "BadRequest",
      "code": 400
    }
    
  4. But node3's status still stays Ready, and yurt-hub enters the Pending state:
    kubectl get nodes
    NAME     STATUS   ROLES    AGE   VERSION
    master   Ready    master   23h   v1.16.6
    node1    Ready    <none>   23h   v1.16.6
    node2    Ready    <none>   23h   v1.16.6
    node3    Ready    <none>   23h   v1.16.6
    
    # kubectl get pods -n kube-system | grep yurt
    yurt-controller-manager-59544577cc-t948z   1/1     Running   0          5h42m
    yurt-hub-node3                             0/1     Pending   0          5h32m
    

Some configuration items and logs that may be used as reference

  1. Label information of each node
    root@master:~# kubectl describe nodes master | grep Labels
    Labels:             alibabacloud.com/is-edge-worker=false
    root@master:~# kubectl describe nodes node1 | grep Labels
    Labels:             alibabacloud.com/is-edge-worker=false
    root@master:~# kubectl describe nodes node2 | grep Labels
    Labels:             alibabacloud.com/is-edge-worker=false
    root@master:~# kubectl describe nodes node3 | grep Labels
    Labels:             alibabacloud.com/is-edge-worker=true
    
  2. Configuration of kube-controller-manager
        - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
        - --controllers=*,bootstrapsigner,tokencleaner,-nodelifecycle
        - --kubeconfig=/etc/kubernetes/controller-manager.conf
    
  3. /etc/kubernetes/manifests/yurthub.yml
    # cat yurthub.yml
    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        k8s-app: yurt-hub
      name: yurt-hub
      namespace: kube-system
    spec:
      volumes:
      - name: pki
        hostPath:
          path: /etc/kubernetes/pki
          type: Directory
      - name: kubernetes
        hostPath:
          path: /etc/kubernetes
          type: Directory
      - name: pem-dir
        hostPath:
          path: /var/lib/kubelet/pki
          type: Directory
      containers:
      - name: yurt-hub
        image: openyurt/yurthub:latest
        imagePullPolicy: Always
        volumeMounts:
        - name: kubernetes
          mountPath: /etc/kubernetes
        - name: pki
          mountPath: /etc/kubernetes/pki
        - name: pem-dir
          mountPath: /var/lib/kubelet/pki
        command:
        - yurthub
        - --v=2
        - --server-addr=https://1.1.1.1:6448
        - --node-name=$(NODE_NAME)
        livenessProbe:
          httpGet:
            host: 127.0.0.1
            path: /v1/healthz
            port: 10261
          initialDelaySeconds: 300
          periodSeconds: 5
          failureThreshold: 3
        resources:
          requests:
            cpu: 150m
            memory: 150Mi
          limits:
            memory: 300Mi
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      hostNetwork: true
      priorityClassName: system-node-critical
      priority: 2000001000
    
  4. /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # cat  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # Note: This dropin only works with kubeadm and kubelet v1.11+
    [Service]
    #Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/var/lib/openyurt/kubelet.conf"
    Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/var/lib/openyurt/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
    # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    
  5. /var/lib/openyurt/kubelet.conf
    # cat /var/lib/openyurt/kubelet.conf
    apiVersion: v1
    clusters:
    - cluster:
        server: http://127.0.0.1:10261
      name: default-cluster
    contexts:
    - context:
        cluster: default-cluster
        namespace: default
      name: default-context
    current-context: default-context
    kind: Config
    preferences: {}
    users:
    - name: default-auth
    
  6. Use kubectl describe to view yurt-hub pod information
    # kubectl describe pods yurt-hub-node3 -n kube-system
    Name:                 yurt-hub-node3
    Namespace:            kube-system
    Priority:             2000001000
    Priority Class Name:  system-node-critical
    Node:                 node3/
    Labels:               k8s-app=yurt-hub
    Annotations:          kubernetes.io/config.hash: 7be1318d63088969eafcd2fa5887f2ef
                          kubernetes.io/config.mirror: 7be1318d63088969eafcd2fa5887f2ef
                          kubernetes.io/config.seen: 2020-08-18T08:41:27.431580091Z
                          kubernetes.io/config.source: file
    Status:               Pending
    IP:
    IPs:                  <none>
    Containers:
      yurt-hub:
        Image:      openyurt/yurthub:latest
        Port:       <none>
        Host Port:  <none>
        Command:
          yurthub
          --v=2
          --server-addr=https://10.10.13.82:6448
          --node-name=$(NODE_NAME)
        Limits:
          memory:  300Mi
        Requests:
          cpu:     150m
          memory:  150Mi
        Liveness:  http-get http://127.0.0.1:10261/v1/healthz delay=300s timeout=1s period=5s #success=1 #failure=3
        Environment:
          NODE_NAME:   (v1:spec.nodeName)
        Mounts:
          /etc/kubernetes from kubernetes (rw)
          /etc/kubernetes/pki from pki (rw)
          /var/lib/kubelet/pki from pem-dir (rw)
    Volumes:
      pki:
        Type:          HostPath (bare host directory volume)
        Path:          /etc/kubernetes/pki
        HostPathType:  Directory
      kubernetes:
        Type:          HostPath (bare host directory volume)
        Path:          /etc/kubernetes
        HostPathType:  Directory
      pem-dir:
        Type:          HostPath (bare host directory volume)
        Path:          /var/lib/kubelet/pki
        HostPathType:  Directory
    QoS Class:         Burstable
    Node-Selectors:    <none>
    Tolerations:       :NoExecute
    Events:            <none>
    
  7. Find the yurt-hub container with docker ps on the edge node and view its log (last 20 lines):
    # docker logs 0c89efbe949b --tail 20
    I0818 13:54:13.293068       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: dial tcp 1.1.1.1:6448: connect: connection refused
    I0818 13:54:13.561262       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 331.836µs, left 10 requests in flight
    I0818 13:54:15.746576       1 util.go:177] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node3?timeout=10s with status code 200, spent 83.127µs, left 10 requests in flight
    I0818 13:54:15.828560       1 util.go:177] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 436.489µs, left 10 requests in flight
    I0818 13:54:15.829628       1 util.go:177] kubelet patch pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3/status with status code 200, spent 307.187µs, left 10 requests in flight
    I0818 13:54:17.831366       1 util.go:177] kubelet delete pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 147.492µs, left 10 requests in flight
    I0818 13:54:17.833762       1 util.go:177] kubelet create pods: /api/v1/namespaces/kube-system/pods with status code 201, spent 111.762µs, left 10 requests in flight
    I0818 13:54:22.273899       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: dial tcp 1.1.1.1:6448: connect: connection refused
    I0818 13:54:23.486523       1 util.go:177] kubelet watch configmaps: /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-flannel-cfg&resourceVersion=2161&timeout=7m54s&timeoutSeconds=474&watch=true with status code 200, spent 7m54.000780359s, left 9 requests in flight
    I0818 13:54:23.648871       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 266.182µs, left 10 requests in flight
    I0818 13:54:25.748497       1 util.go:177] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node3?timeout=10s with status code 200, spent 189.694µs, left 10 requests in flight
    I0818 13:54:25.830919       1 util.go:177] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 1.375535ms, left 10 requests in flight
    I0818 13:54:25.835015       1 util.go:177] kubelet patch pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3/status with status code 200, spent 1.363765ms, left 10 requests in flight
    I0818 13:54:33.733913       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 303.499µs, left 10 requests in flight
    I0818 13:54:34.261504       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    I0818 13:54:35.751002       1 util.go:177] kubelet update leases: /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node3?timeout=10s with status code 200, spent 144.723µs, left 10 requests in flight
    I0818 13:54:35.830895       1 util.go:177] kubelet get pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3 with status code 200, spent 1.146812ms, left 10 requests in flight
    I0818 13:54:35.834366       1 util.go:177] kubelet patch pods: /api/v1/namespaces/kube-system/pods/yurt-hub-node3/status with status code 200, spent 744.857µs, left 10 requests in flight
    I0818 13:54:42.274049       1 health_checker.go:151] ping cluster healthz with result, Get https://1.1.1.1:6448/healthz: dial tcp 1.1.1.1:6448: connect: connection refused
    I0818 13:54:43.818381       1 util.go:177] kubelet get nodes: /api/v1/nodes/node3?resourceVersion=0&timeout=10s with status code 200, spent 248.672µs, left 10 requests in flight
    
  8. Use kubectl logs to view the logs of yurt-controller-manager (last 20 lines):
    # kubectl logs yurt-controller-manager-59544577cc-t948z -n kube-system --tail 20
    E0818 13:56:07.239721       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:10.560864       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:13.288544       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:16.726605       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:19.623694       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:23.572803       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:26.809117       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:29.021205       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:31.271086       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:34.083918       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:37.493386       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:40.222869       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:44.149011       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:47.699211       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:50.177053       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:52.553163       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:55.573328       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:56:58.677034       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:57:02.844152       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    E0818 13:57:05.044990       1 leaderelection.go:330] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
    

Finally

I very much hope that you can help me solve this problem or point out my mistakes. If any other information is needed, please let me know.

kube-controller-manager does not run normally after "revert"

After revert, the node-controller ServiceAccount is created by yurtctl and the bound secret is generated, but kube-controller-manager does not work and keeps raising exceptions like this:

Error updating node thump2.fyre.ibm.com: Unauthorized
Failed while getting a Node to retry updating node health. Probably Node thump2.fyre.ibm.com was deleted

Restarting kube-controller-manager fixes the issue, but kube-controller-manager is a static pod (kubeadm, kind, OpenShift) managed by the kubelet, so it cannot be restarted gracefully. I think we need to document this.
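
As a workaround sketch (my assumption, not a documented OpenYurt procedure): since the kubelet stops a static pod when its manifest disappears and recreates it when the manifest reappears, kube-controller-manager can usually be bounced on a kubeadm-style cluster like this:

# Paths assume a kubeadm cluster; adjust for other distributions.
mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/   # kubelet tears the static pod down
sleep 20
mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/   # kubelet recreates it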

[feature request] yurt-hub adds impersonation user info for requests that do not come from kubelet

What would you like to be added:
If a request through yurt-hub does not come from kubelet, yurt-hub should add impersonation user info to the request header before proxying it. The impersonation header is:

Impersonate-User=system:serviceaccount:{namespace}:{name}

Why is this needed:
We use the node certificate as the yurt-hub certificate (the code is here), so requests such as listing endpoints from other components (e.g. kube-proxy) get a 403 response because of missing permissions. In order to grant the correct rights to these requests (those not coming from kubelet), we need to impersonate a particular user depending on the source of the request.

For users who want to use the impersonation feature:
in order to impersonate a particular component with the yurt-hub certificate, the user should prepare the impersonation settings (RBAC) before starting the component.
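
As a rough sketch of what that RBAC preparation could look like (the names below are made up for illustration, and the exact rules depend on which component is impersonated): the identity yurt-hub presents, i.e. the node certificate, would need the impersonate verb on the target ServiceAccount, for example:

# Allow holders of the node certificate (group system:nodes) to impersonate
# the kube-proxy ServiceAccount; names and the chosen subject are assumptions.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: yurt-hub-impersonator
rules:
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["impersonate"]
  resourceNames: ["kube-proxy"]
EOF
kubectl create clusterrolebinding yurt-hub-impersonator \
  --clusterrole=yurt-hub-impersonator --group=system:nodes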

Slack channel for discussion?

Hi Openyurt team,

Most CNCF projects adopt Slack as a communication tool for technical/planning discussion. Do we have a plan to open a Slack channel for OpenYurt?

Why doesn't OpenYurt support a k3s/k8s cluster as an edge?

In OpenYurt we can choose a node as an edge. But in our case, we want to use a k3s cluster as an edge, because we manage many clusters at the county level. The k3s cluster would register itself as a k8s node. In the cloud, we use a nodeSelector to deploy pods to the edge, and we want to use an AdmissionWebhook to proxy reads and writes to the edge. Is this better than OpenYurt?

support kubernetes v1.16.x version

In order to support k8s v1.16.x in OpenYurt, we need to improve the following associated components:

  • yurthub: process watch bookmark events
  • yurt-controller-manager: follow the kube-controller-manager of k8s v1.16.x
  • yurtctl: convert v1.16.x k8s clusters to OpenYurt

Forbidden error in yurt-controller-manager

I saw a permission forbidden error in kube-system/yurt-controller-manager:

E0615 08:50:17.179354       1 leaderelection.go:306] error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
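
The message shows the pod running as the kube-system:default ServiceAccount, which has no access to Leases; the intended fix is presumably to apply the RBAC manifests that ship with yurt-controller-manager so it runs with its own ServiceAccount. As a minimal illustrative workaround (the Role name below is made up), granting lease access to that ServiceAccount would also stop the error:

# Grant leader-election lease permissions in kube-system to the default ServiceAccount.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: yurt-leader-election
  namespace: kube-system
rules:
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["get", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: yurt-leader-election
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: yurt-leader-election
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF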

BTW, I tested yurtctl on my kubeadm env with k8s version v1.17.3; I think you can add this version to pkg/yurtctl/util/kubernetes/util.go 👍

The network connection between Cloud and Edge

Maybe it's a stupid question; apologies in advance.

I'm a little confused about the network scenario between cloud and edge.
OpenYurt introduces the tunnel server and tunnel agent; I guess they handle the case where the cloud and edge cannot reach each other directly by IP? But it seems the tunnel server only redirects requests to the kubelet (10250) and then sends them to the tunnel agent (correct me if that's not true).

My question is: in such a scenario, what about other requests from cloud to edge, for example Prometheus, Istio, or Knative?

It seems that in the KubeEdge case, extra manual steps (iptables rules and hostNetwork mode) are required to enable metrics-server; so, how about OpenYurt? Thanks.

[feature request] yurtctl convert supports a parameter for the kubelet service kubeadm conf path

What would you like to be added:
The convert command of yurtctl should support a parameter for the path of the kubelet service kubeadm conf file.

Why is this needed:
In CentOS, 10-kubeadm.conf is saved at /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf, while yurtctl convert uses /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by default.

I want to solve this problem, and I have a question about the parameter name: do you think kubelet-service-kubeadm-conf-path is too long? Thanks!
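
If the proposed parameter were added, usage might look like this (the flag does not exist yet; the flag name is the one proposed above):

# Hypothetical: point yurtctl convert at the CentOS drop-in location.
yurtctl convert --provider ack \
  --kubelet-service-kubeadm-conf-path /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf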

failed to delete the configMap(yurtctl-lock)

ENV

k8s version: v1.16.12
openyurt version: no -v or version command found; code synced from GitHub on 2020-07-30
work node: CentOS 7, kernel: 3.10.0-957.1.3.el7.x86_64, docker: 17.12.1-ce
master node: Ubuntu 16.04, kernel: 4.15.0-45-generic, docker: 19.03.6

DESC

While converting a k8s cluster to an OpenYurt cluster, I waited a few minutes and then used Ctrl+C to exit.
After that, executing any yurtctl command produces an error (error screenshot omitted).
By the way, the configmap named yurtctl-lock can't be deleted. The api-server is normal.
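
A generic troubleshooting sketch for the stale lock (the kube-system namespace is my assumption; check where the configmap actually lives first):

kubectl get configmap --all-namespaces | grep yurtctl-lock   # locate the lock configmap
kubectl get configmap yurtctl-lock -n kube-system -o yaml    # inspect it, e.g. for finalizers
kubectl delete configmap yurtctl-lock -n kube-system         # retry the delete and note any error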

What does the function kubeutil.RunServantJob() actually do?

Will the source code of the openyurt/yurtctl-servant image be open sourced?
It deploys yurt-hub and resets the kubelet service in a Kubernetes Job using the openyurt/yurtctl-servant:latest image, but I can't tell what changes the job makes to the kubelet.

ping cluster healthz with result, connection reset by peer

yurthub logs

I0814 20:04:52.261636       1 util.go:177] kubelet get pods: /api/xxxxxx with status code 200, spent 942.859µs, left 38 requests in flight
I0814 20:04:52.264445       1 util.go:177] kubelet patch pods: /api/xxxxxx with status code 200, spent 858.975µs, left 38 requests in flight
I0814 20:04:52.265982       1 util.go:177] kubelet get pods: /api/xxxxxxx with status code 200, spent 746.922µs, left 38 requests in flight
I0814 20:04:52.267781       1 util.go:177] kubelet patch pods: /api/xxxxxxx with status code 200, spent 582.398µs, left 38 requests in flight
I0814 20:04:52.869514       1 health_checker.go:151] ping cluster healthz with result, Get https://xxxxxxxx:8443/healthz: write tcp 10.110.18.89:41558->10.110.18.98:8443: write: connection reset by peer
I0814 20:04:53.696785       1 util.go:177] kubelet get leases: /apis/xxxxxxx with status code 200, spent 131.039µs, left 38 requests in flight
I0814 20:04:53.697706       1 util.go:177] kubelet update leases: /apis/xxxxxxx with status code 200, spent 79.877µs, left 38 requests in flight
I0814 20:04:56.061666       1 util.go:177] kubelet get nodes: /api/xxxxxxx with status code 200, spent 296.485µs, left 38 requests in flight
I0814 20:05:02.008666       1 util.go:177] kubelet watch secrets: /api/xxxxxx with code 200, spent 7m33.000467637s, left 37 requests in flight
I0814 20:05:02.261132       1 util.go:177] kubelet get pods: /api/xxxxxxxx with status code 200, spent 753.254µs, left 38 requests in flight
I0814 20:05:02.263179       1 util.go:177] kubelet patch pods: /api/xxxxxxxx with status code 200, spent 631.923µs, left 38 requests in flight
I0814 20:05:02.264435       1 util.go:177] kubelet get pods: /api/xxxxxxx with status code 200, spent 507.423µs, left 38 requests in flight
I0814 20:05:02.267621       1 util.go:177] kubelet patch pods: /api/xxxxxxx with status code 200, spent 528.197µs, left 38 requests in flight
I0814 20:05:02.869575       1 health_checker.go:151] ping cluster healthz with result, Get https://xxxxxxx:8443/healthz: write tcp 10.110.18.89:41558->10.110.18.98:8443: write: connection reset by peer
I0814 20:05:03.699013       1 util.go:177] kubelet get leases: /apis/xxxxxxx with status code 200, spent 123.254µs, left 38 requests in flight
I0814 20:05:03.699828       1 util.go:177] kubelet update leases: /apis/xxxxxxx with status code 200, spent 57.788µs, left 38 requests in flight
I0814 20:05:06.067954       1 util.go:177] kubelet get nodes: /api/xxxxxx with status code 200, spent 248.469µs, left 38 requests in flight
I0814 20:05:12.262824       1 util.go:177] kubelet get pods: /api/xxxxxxx with status code 200, spent 904.375µs, left 38 requests in flight
I0814 20:05:12.264815       1 util.go:177] kubelet patch pods: /api/xxxxxxx with status code 200, spent 550.946µs, left 38 requests in flight
I0814 20:05:12.265909       1 util.go:177] kubelet get pods: /api/xxxxxxx with status code 200, spent 505.152µs, left 38 requests in flight
I0814 20:05:12.267276       1 util.go:177] kubelet patch pods: /api/xxxxxxx with status code 200, spent 496.811µs, left 38 requests in flight
I0814 20:05:12.869516       1 health_checker.go:151] ping cluster healthz with result, Get https://xxxxxxx:8443/healthz: write tcp 10.110.18.89:41558->10.110.18.98:8443: write: connection reset by peer
  • using the command curl https://xxxxx:8443/healthz returns ok
  • yurthub runs as a static pod, using the image openyurt/yurthub:v0.1.1
  • ps -fp $(pidof yurthub) -o cmd=
yurthub --v=2 --server-addr=https://xxxxxx:8443 --node-name=xxxxxx
  • kubectl get no xxxx shows NotReady
  • kubectl describe no xxxx shows:
Conditions:
  Type                 Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----                 ------    -----------------                 ------------------                ------              -------
  NetworkUnavailable   False     Fri, 14 Aug 2020 16:59:27 +0800   Fri, 14 Aug 2020 16:59:27 +0800   CalicoIsUp          Calico is running on this node
  MemoryPressure       Unknown   Fri, 14 Aug 2020 21:00:15 +0800   Fri, 14 Aug 2020 21:00:58 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure         Unknown   Fri, 14 Aug 2020 21:00:15 +0800   Fri, 14 Aug 2020 21:00:58 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure          Unknown   Fri, 14 Aug 2020 21:00:15 +0800   Fri, 14 Aug 2020 21:00:58 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready                Unknown   Fri, 14 Aug 2020 21:00:15 +0800   Fri, 14 Aug 2020 21:00:58 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  • docker version
Client:
 Version:           18.09.8
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        0dd43dd87f
 Built:             Wed Jul 17 17:41:19 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.8
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       0dd43dd
  Built:            Wed Jul 17 17:07:25 2019
  OS/Arch:          linux/amd64
  Experimental:     false

I restarted yurthub in the end, and now it works well.

yurt-controller-manager's log shows an error: error retrieving resource lock kube-system/yurt-controller-manager

I followed the GitHub Manually Setup doc, but I found an error in yurt-controller-manager:

error retrieving resource lock kube-system/yurt-controller-manager: leases.coordination.k8s.io "yurt-controller-manager" is forbidden: User "system:serviceaccount:kube-system:default" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"

and I think the yurt-controller-manager is not working, because when I disconnect yurthub on the edge node, the edge node's status is still Ready.

project cannot compile

Running make reports an error:

go: finding module for package golang.org/x/oauth2
go: finding module for package github.com/Azure/go-ansiterm/winterm
go: finding module for package github.com/Azure/go-ansiterm
go: finding module for package github.com/Sirupsen/logrus
go: found github.com/Azure/go-ansiterm/winterm in github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78
go: found github.com/Sirupsen/logrus in github.com/Sirupsen/logrus v1.6.0
go: github.com/alibaba/openyurt/cmd/yurt-controller-manager/app imports
        k8s.io/apiserver/pkg/util/term imports
        github.com/docker/docker/pkg/term imports
        github.com/docker/docker/pkg/term/windows imports
        github.com/Sirupsen/logrus: github.com/Sirupsen/[email protected]: parsing go.mod:
        module declares its path as: github.com/sirupsen/logrus
                but was required as: github.com/Sirupsen/logrus
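
This looks like the well-known Sirupsen vs. sirupsen casing conflict in logrus. A commonly used workaround (not necessarily what the maintainers chose here) is a replace directive in go.mod:

# Redirect the old mixed-case import path to the canonical lowercase module.
go mod edit -replace github.com/Sirupsen/logrus=github.com/sirupsen/logrus@v1.6.0
go mod tidy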

yurt-hub can't start when I reboot the node

yurt-hub can't start when I reboot the node.

The reason is that the yurt-hub configuration takes the apiserver address from environment variables, but the kubelet does not inject those apiserver environment variables because yurt-hub is not running properly, so in the end neither yurt-hub nor the kubelet runs normally.

edge node support arm/arm64 arch

In order to support the arm/arm64 architectures for edge nodes, we need to refactor the following components.

  1. yurthub: add arm/arm64 image building
  2. yurtctl: adapt for deploying arm/arm64 image

@hwq830 would you take care of it?

I deployed OpenYurt on my kubeadm k8s cluster, but the node autonomy test failed

Hi everyone, I tried to deploy OpenYurt on my kubeadm k8s cluster v1.14; I used the ack option.
At first, the container needed the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file; I referred to this issue and copied one there. I'm not sure if that's OK, but OpenYurt ran well.

kube-system   yurt-controller-manager-6947f6f748-7qgtn   1/1     Running   0          14m   192.168.235.203   k8s-master   <none>           <none>
kube-system   yurt-hub-k8s-node01                        1/1     Running   0          14m   10.1.11.58        k8s-node01   <none>           <none>

But when I wanted to test node autonomy, I got this before changing the yurt-hub.yaml file:

[root@k8s-node01 ~]# curl -s http://127.0.0.1:10261
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "forbidden: User \"system:node:k8s-node01\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
    
  },
  "code": 403

After I changed the yurt-hub.yaml file, it showed the correct "BadRequest" reason, but I waited several minutes and the node is still Ready. I don't know why.
And another question: after OpenYurt started, it changed the kubeconfig file from /etc/kubernetes/kubelet.conf to /var/lib/openyurt/kubelet.conf, but I compared them and there seem to be no differences between them. Is that right? Thank you for any help.
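
For what it's worth, judging by the kubeconfig shown in the manual-setup issue further down this page, the two files are not expected to be identical: the copy under /var/lib/openyurt should point the kubelet at the local yurt-hub endpoint. A quick way to compare just the server fields:

# The OpenYurt copy is expected to use the local yurt-hub address (http://127.0.0.1:10261),
# while the original kubelet.conf points at the apiserver.
grep 'server:' /etc/kubernetes/kubelet.conf /var/lib/openyurt/kubelet.conf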

tunnel-server: server connection closed

I have set up a k8s cluster with the master and a worker node on separate networks. I referenced this tutorial to set up the tunnel server and agent, but I can't access the pod on the edge node through yurt-tunnel. The logs from the tunnel-server:

$ kubectl logs yurt-tunnel-server-74cfdd4bc7-7rrmr -n kube-system
I1110 12:53:57.737387       1 cmd.go:143] server will accept yurttunnel-agent requests at: 192.168.1.101:10262, server will accept master https requests at: 192.168.1.101:10263server will accept master http request at: 192.168.1.101:10264
W1110 12:53:57.737429       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1110 12:53:57.968315       1 iptables.go:474] clear conntrack entries for ports ["10250" "10255"] and nodes ["192.168.1.101" "192.168.122.55" "127.0.0.1"]
E1110 12:53:57.992841       1 iptables.go:491] clear conntrack for 192.168.1.101:10250 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
E1110 12:53:58.011089       1 iptables.go:491] clear conntrack for 192.168.122.55:10250 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
E1110 12:53:58.025873       1 iptables.go:491] clear conntrack for 127.0.0.1:10250 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
E1110 12:53:58.035197       1 iptables.go:491] clear conntrack for 192.168.1.101:10255 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
E1110 12:53:58.042357       1 iptables.go:491] clear conntrack for 192.168.122.55:10255 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
E1110 12:53:58.048433       1 iptables.go:491] clear conntrack for 127.0.0.1:10255 failed: "conntrack v1.4.4 (conntrack-tools): 0 flow entries have been deleted.\n", error message: exit status 1
I1110 12:54:03.073595       1 csrapprover.go:52] starting the crsapprover
I1110 12:54:03.209064       1 csrapprover.go:174] successfully approve yurttunnel csr(csr-sqfnw)
I1110 12:54:08.070368       1 anpserver.go:101] start handling request from interceptor
I1110 12:54:08.070787       1 anpserver.go:137] start handling https request from master at 192.168.1.101:10263
I1110 12:54:08.070872       1 anpserver.go:151] start handling http request from master at 192.168.1.101:10264
I1110 12:54:08.071365       1 anpserver.go:189] start handling connection from agents
I1110 12:54:09.087254       1 server.go:418] Connect request from agent ubuntu-standard-pc-i440fx-piix-1996
I1110 12:54:09.087319       1 backend_manager.go:99] register Backend &{0xc000158480} for agentID ubuntu-standard-pc-i440fx-piix-1996
W1110 12:54:24.273510       1 server.go:451] stream read error: rpc error: code = Canceled desc = context canceled
I1110 12:54:24.273532       1 backend_manager.go:119] remove Backend &{0xc000158480} for agentID ubuntu-standard-pc-i440fx-piix-1996
I1110 12:54:24.273562       1 server.go:531] <<< Close backend &{0xc000158480} of agent ubuntu-standard-pc-i440fx-piix-1996
I1110 12:54:37.682857       1 csrapprover.go:174] successfully approve yurttunnel csr(csr-6lcjl)
I1110 12:54:42.969063       1 server.go:418] Connect request from agent ubuntu-standard-pc-i440fx-piix-1996
I1110 12:54:42.969111       1 backend_manager.go:99] register Backend &{0xc000158180} for agentID ubuntu-standard-pc-i440fx-piix-1996

Logging in to the edge node, the tunnel-agent container log indicates a "connection closed" error. Any idea how to solve this issue? Thanks.

I1110 12:54:37.583915       1 cmd.go:106] neither --kube-config nor --apiserver-addr is set, will use /etc/kubernetes/kubelet.conf as the kubeconfig
I1110 12:54:37.583964       1 cmd.go:110] create the clientset based on the kubeconfig(/etc/kubernetes/kubelet.conf).
I1110 12:54:37.647689       1 cmd.go:135] yurttunnel-server address: 192.168.1.101:31302
I1110 12:54:37.647990       1 anpagent.go:54] start serving grpc request redirected from yurttunel-server: 192.168.1.101:31302
E1110 12:54:37.657318       1 clientset.go:155] rpc error: code = Unavailable desc = connection closed
I1110 12:54:42.970218       1 stream.go:255] Connect to server 3656d4f5-746f-411f-b0eb-c1c69b1ff2c1
I1110 12:54:42.970241       1 clientset.go:184] sync added client connecting to proxy server 3656d4f5-746f-411f-b0eb-c1c69b1ff2c1
I1110 12:54:42.970266       1 client.go:122] Start serving for serverID 3656d4f5-746f-411f-b0eb-c1c69b1ff2c1

[feature request] the definition of NodePool and UnitedDeployment

What would you like to be added:

In the edge scenario, compute nodes have strong regional attributes. There may be only one worker node in a given physical location, or there may be several. Therefore, from the perspective of edge worker Node resources, nodes need to be divided into different node pools (NodePool), each representing a shared set of features. Once nodes are grouped into pools, there is a corresponding demand for grouped application management: users need to deploy applications per unit, i.e. deploy applications onto the nodes of different pools based on the node pool concept, and scale or upgrade applications at the node-pool level; likewise, network access and communication follow node-pool boundaries.

Why is this needed:

So we define NodePool and UnitedDeployment for these two scenarios, using Kubernetes CRDs as the abstraction.

NodePool:

Nodes can be added to and removed from the node pool.
Nodes within the node pool can be managed uniformly, e.g. labels, annotations, and taints.

UnitedDeployment:
UnitedDeployment can use one of the k8s Deployment, DaemonSet, or StatefulSet as the template for deployment in different node pools. At the same time, you can set the number of pod replicas deployed in each node pool.

ref #124

cc @huangyuqi @Fei-Guo @rambohe-ch @charleszheng44

task list:

yurtctl convert fails

The commands executed are as follows (all images are pulled from the Aliyun registry):

[root@sdcentosvm1 ~]# yurtctl convert --deploy-yurttunnel --provider ack --cloud-nodes sdcentosvm1 \
 --yurt-controller-manager-image registry.cn-hangzhou.aliyuncs.com/lxk8s/yurt:yurt-controller-manager \
--yurt-tunnel-agent-image registry.cn-hangzhou.aliyuncs.com/lxk8s/yurt:yurt-tunnel-agent \
--yurt-tunnel-server-image registry.cn-hangzhou.aliyuncs.com/lxk8s/yurt:yurt-tunnel-server \
--yurtctl-servant-image registry.cn-hangzhou.aliyuncs.com/lxk8s/yurt:yurtctl-servant \
--yurthub-image registry.cn-hangzhou.aliyuncs.com/lxk8s/yurt:yurthub

Output log

I1121 22:41:38.924948    5766 convert.go:209] mark sdcentosvm1 as the cloud-node
I1121 22:41:38.940194    5766 convert.go:217] mark sdcentosvm2 as the edge-node
I1121 22:41:39.250719    5766 convert.go:273] yurt-tunnel-server is deployed
I1121 22:41:39.460884    5766 convert.go:281] yurt-tunnel-agent is deployed
I1121 22:41:39.460928    5766 convert.go:285] deploying the yurt-hub and resetting the kubelet service...
E1121 22:43:39.638069    5766 util.go:306] fail to run servant job(yurtctl-servant-convert-sdcentosvm2): wait for job to be complete timeout
I1121 22:43:39.638981    5766 convert.go:295] the yurt-hub is deployed

k8s cluster node information

[root@sdcentosvm1 ~]# kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
sdcentosvm1   Ready    master   5h4m   v1.18.12
sdcentosvm2   Ready    <none>   5h3m   v1.18.12

pods

[root@sdcentosvm1 ~]# kubectl get pods -A
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   coredns-7ff77c879f-7pskc                    1/1     Running   2          5h25m
kube-system   coredns-7ff77c879f-twsr9                    1/1     Running   2          5h25m
kube-system   etcd-sdcentosvm1                            1/1     Running   2          5h25m
kube-system   kube-apiserver-sdcentosvm1                  1/1     Running   2          5h25m
kube-system   kube-controller-manager-sdcentosvm1         1/1     Running   2          5h25m
kube-system   kube-flannel-ds-6w7c8                       1/1     Running   2          5h24m
kube-system   kube-flannel-ds-l8blb                       1/1     Running   2          5h24m
kube-system   kube-proxy-fr7gp                            1/1     Running   2          5h25m
kube-system   kube-proxy-sf982                            1/1     Running   2          5h24m
kube-system   kube-scheduler-sdcentosvm1                  1/1     Running   2          5h25m
kube-system   yurt-controller-manager-68cf8c7899-w5zsq    1/1     Running   0          4m16s
kube-system   yurt-hub-sdcentosvm2                        1/1     Running   0          4m10s
kube-system   yurt-tunnel-agent-kmcmx                     1/1     Running   0          4m15s
kube-system   yurt-tunnel-server-6447f794fb-mp22m         1/1     Running   0          4m16s
kube-system   yurtctl-servant-convert-sdcentosvm2-v8r2t   0/1     Error     5          4m15s


Details of pod yurtctl-servant-convert-sdcentosvm2-v8r2t

[root@sdcentosvm1 ~]# kubectl describe pod yurtctl-servant-convert-sdcentosvm2-v8r2t -n kube-system
Name:         yurtctl-servant-convert-sdcentosvm2-v8r2t
Namespace:    kube-system
Priority:     0
Node:         sdcentosvm2/192.168.36.130
Start Time:   Sat, 21 Nov 2020 22:56:46 +0800
Labels:       controller-uid=97f980fb-f600-41ff-acf6-0b60ac625b1f
              job-name=yurtctl-servant-convert-sdcentosvm2
Annotations:  <none>
Status:       Running
IP:           10.244.1.2
IPs:
  IP:           10.244.1.2
Controlled By:  Job/yurtctl-servant-convert-sdcentosvm2
Containers:
  yurtctl-servant:
    Container ID:  docker://1d49a1b77741dba6c1a16d3517a9c4015c7681d9aa9b84f57bf91c69e20dbd3d
    Image:         registry.cn-hangzhou.aliyuncs.com/lxk8s/yurt:yurtctl-servant
    Image ID:      docker-pullable://registry.cn-hangzhou.aliyuncs.com/lxk8s/yurt@sha256:9f32829d2738fbbe926ecef72b1ff82b6d8d7e4e314c7f37f523186e66d4c4d0
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
    Args:
      sed -i 's|__kubernetes_service_host__|$(KUBERNETES_SERVICE_HOST)|g;s|__kubernetes_service_port_https__|$(KUBERNETES_SERVICE_PORT_HTTPS)|g;s|__node_name__|$(NODE_NAME)|g;s|__yurthub_image__|registry.cn-hangzhou.aliyuncs.com/lxk8s/yurt:yurthub|g' /var/lib/openyurt/setup_edgenode && cp /var/lib/openyurt/setup_edgenode /tmp && nsenter -t 1 -m -u -n -i /var/tmp/setup_edgenode convert ack
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 21 Nov 2020 23:00:38 +0800
      Finished:     Sat, 21 Nov 2020 23:00:48 +0800
    Ready:          False
    Restart Count:  5
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /tmp from host-var-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tvn4f (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  host-var-tmp:
    Type:          HostPath (bare host directory volume)
    Path:          /var/tmp
    HostPathType:  Directory
  default-token-tvn4f:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tvn4f
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                    From     Message
  ----     ------   ----                   ----     -------
  Normal   Started  4m53s (x4 over 6m12s)  kubelet  Started container yurtctl-servant
  Normal   Pulling  4m3s (x5 over 6m14s)   kubelet  Pulling image "registry.cn-hangzhou.aliyuncs.com/lxk8s/yurt:yurtctl-servant"
  Normal   Pulled   4m2s (x5 over 6m13s)   kubelet  Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/lxk8s/yurt:yurtctl-servant"
  Normal   Created  4m2s (x5 over 6m12s)   kubelet  Created container yurtctl-servant
  Warning  BackOff  69s (x18 over 5m50s)   kubelet  Back-off restarting failed container

docker version

[root@sdcentosvm1 ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:03:45 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:02:21 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

CNCF SIG-Runtime discussion/presentation?

Hello OpenYurt team,

I'm one of the co-chairs of the CNCF SIG-Runtime, and since you are a CNCF Sandbox project I think it would be great for you to present/discuss the project in one of our meetings, for example things such as adoption, architecture, etc.

Let me know if this is something you'd be interested in doing. If yes, please feel free to add it to our agenda or reach out to me (raravena80 at gmail.com).

Thanks!

When deploying the tunnel server, I encountered a parameter error.

As described above, I met an error; the pod log is:

Error: unknown flag: --server-count
Usage:
Launch yurttunnel-server [flags]

Flags:
--add_dir_header If true, adds the file directory to the header
--alsologtostderr log to standard error as well as files
--bind-address string the ip address on which the yurttunnel-server will listen. (default "0.0.0.0")
--cert-dns-names string DNS names that will be added into server's certificate. (e.g., dns1,dns2)
--cert-ips string IPs that will be added into server's certificate. (e.g., ip1,ip2)
--egress-selector-enable if the apiserver egress selector has been enabled.
--enable-iptables if allow iptable manager to set the dnat rule. (default true)
-h, --help help for Launch
--iptables-sync-period int the synchronization period of the iptable manager. (default 60)
--kube-config string path to the kubeconfig file.
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--log_file string If non-empty, use this log file
--log_file_max_size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
--logtostderr log to standard error instead of files (default true)
--skip_headers If true, avoid header prefixes in the log messages
--skip_log_headers If true, avoid headers when opening log files
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level number for the log level verbosity
--version print the version information of the yurttunnel-server.
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging

F0916 07:28:18.939231 1 server.go:32] yurttunnel-server failed: unknown flag: --server-count

when manually setting up OpenYurt, /var/lib/openyurt/kubelet.conf is reset after restarting kubelet.service

steps:
1. follow https://github.com/alibaba/openyurt/blob/master/docs/tutorial/manually-setup.md
2. at the last step (https://github.com/alibaba/openyurt/blob/master/docs/tutorial/manually-setup.md#reset-the-kubelet) I set the kubeconfig (/var/lib/openyurt/kubelet.conf) like below:

apiVersion: v1
clusters:
- cluster:
    server: http://127.0.0.1:10261
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth

3. after restarting kubelet.service, the kubeconfig (/var/lib/openyurt/kubelet.conf) returns to the original one.

How do you keep the kubeconfig pointing at the correct one?

yaml error

There's an error in /config/setup/yurthub.yaml, line 12.
Can this quality really be used in industry?

rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: EOF"

yurt-tunnel-agent can not run.
Log:

I1103 11:23:42.583775 25717 cmd.go:110] create the clientset based on the kubeconfig(/root/my/admin.conf).
I1103 11:23:42.584908 25717 loader.go:375] Config loaded from file: /root/my/admin.conf
I1103 11:23:42.585568 25717 cmd.go:135] yurttunnel-server address: 192.168.5.240:10262
I1103 11:23:42.585606 25717 certificate_store.go:129] Loading cert/key pair from "/var/lib/yurt-tunnel-agent/pki/yurttunnel-agent-current.pem".
I1103 11:23:42.592953 25717 certificate_manager.go:254] Certificate rotation is enabled.
I1103 11:23:42.593147 25717 anpagent.go:54] start serving grpc request redirected from yurttunel-server: 192.168.5.240:10262
I1103 11:23:42.593166 25717 certificate_manager.go:507] Certificate expiration is 2030-07-29 04:23:07 +0000 UTC, rotation deadline is 2028-10-30 13:59:30.229113936 +0000 UTC
I1103 11:23:42.593232 25717 certificate_manager.go:260] Waiting 70042h35m47.635884354s for next certificate rotation
E1103 11:23:42.595043 25717 clientset.go:155] rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: EOF"

yurtctl convert --provider minikube timeout

OS: CentOS 7, k8s: v1.16.0
[bearlu@control-plane amd64]$ ./yurtctl convert --provider minikube
I1115 12:04:28.705547 4730 convert.go:217] mark minikube as the edge-node
I1115 12:04:28.713233 4730 convert.go:217] mark minikube-m02 as the edge-node
I1115 12:04:28.764969 4730 convert.go:285] deploying the yurt-hub and resetting the kubelet service...
E1115 12:06:28.800491 4730 util.go:288] fail to run servant job(yurtctl-servant-convert-minikube): wait for job to be complete timeout
E1115 12:06:28.813800 4730 util.go:288] fail to run servant job(yurtctl-servant-convert-minikube-m02): wait for job to be complete timeout
I1115 12:06:28.813852 4730 convert.go:295] the yurt-hub is deployed

openyurt cloud-edge environment lab: asking for advice

hello,
system environment: k8s version 1.14.8, OS: Ubuntu 18.04, docker version: 19.03.0
The cloud-edge configuration has already been set up; running _output/bin/yurtctl convert --provider ack --cloud-nodes a-cloud reports success.
But on the edge node, yurthub is listening on port 10261 with connections in TIME_WAIT, the connection between yurthub and the cluster IP on port 443 is ESTABLISHED, and the connection between the kubelet and the apiserver is ESTABLISHED:
tcp 0 0 127.0.0.1:10261 127.0.0.1:38630 TIME_WAIT -
tcp 0 0 101.124.28.104:43814 101.124.47.10:6443 ESTABLISHED 55343/kubelet
tcp 0 0 101.124.28.104:49896 10.20.0.1:443 ESTABLISHED 18094/yurthub

By executing systemctl restart kubelet, /var/lib/openyurt/kubelet.conf is restored to its original content.

thank you very much!

Do we plan to push the images to another registry?

Since Docker Hub introduced rate limiting and the deployment has no pull secret, pulls are treated as unauthenticated requests; Docker Hub limits unauthenticated requests by IP, which is bad for a cluster that uses a single public IP for internet access.

So, do we have a plan to push the images to another registry, like gcr.io or anywhere without such a limitation?
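
Until the images are mirrored elsewhere, yurtctl already lets you point each component at an alternative registry, as the Aliyun-based convert invocation elsewhere on this page does; for example (the registry path is a placeholder):

# Pull the OpenYurt components from a self-chosen mirror instead of Docker Hub.
yurtctl convert --provider ack \
  --yurt-controller-manager-image <your-registry>/yurt-controller-manager:latest \
  --yurt-tunnel-server-image <your-registry>/yurt-tunnel-server:latest \
  --yurt-tunnel-agent-image <your-registry>/yurt-tunnel-agent:latest \
  --yurtctl-servant-image <your-registry>/yurtctl-servant:latest \
  --yurthub-image <your-registry>/yurthub:latest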

Check the state of the node before running revert

I think we should check the state of the worker node to make sure it is Ready before running revert.
Otherwise the worker node can be left in an inconsistent state: the label/annotation is removed but yurthub is not removed and the kubelet is not reset, which causes issues when trying to run convert again.
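
A minimal pre-flight check along those lines could look like this (plain kubectl, not an existing yurtctl feature):

# Refuse to revert a node whose Ready condition is not "True".
NODE=node3   # example node name
READY=$(kubectl get node "$NODE" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
if [ "$READY" != "True" ]; then
  echo "node $NODE is not Ready (status: $READY), refusing to revert" >&2
  exit 1
fi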

UnitedDeployment usages

I would like to understand how UnitedDeployment can be used:

  1. How is a Subset used to select a group of Nodes (e.g. by NodePool name)? Does it require defining a NodePool CR first?
  2. If I need to deploy 2 deployments (with different deploymentTemplates) in the same unit, how do I define the UnitedDeployment? Is there any sample for reference? Thanks!

tunnel-server: Couldn't load target `TUNNEL-PORT':No such file or directory

I have set up a k8s cluster with the master and a worker node on separate networks. I referenced this tutorial to set up the tunnel server and agent, but I still can't access the pod on the edge node through yurt-tunnel. The tunnel server log shows the error below; any idea what I might be missing? Thanks.

PS: My setup is based on ubuntu 18.04.5 with k8s v1.19.3 installed.

Regards,
Tonny

I1109 15:09:46.895389       1 cmd.go:143] server will accept yurttunnel-agent requests at: 192.168.1.101:10262, server will accept master https requests at: 192.168.1.101:10263server will accept master http request at: 192.168.1.101:10264
W1109 15:09:46.905219       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
E1109 15:09:47.896815       1 iptables.go:189] failed to delete rule that nat chain OUTPUT jumps to TUNNEL-PORT: error checking rule: exit status 2: iptables v1.6.0: Couldn't load target `TUNNEL-PORT':No such file or directory

Try `iptables -h' or 'iptables --help' for more information.

Combine the same constants into a single constant.go

We have the same constants spread across YurtCtl, YurtTunnel, and the e2e tests, such as
LabelEdgeWorker, YurtEdgeNodeLabel, and YURT_NODE_LABLE. We should make all the labels consistent, especially since we have the ability to change the label prefix via a compile option. A global constant.go may be a better choice in this case.

No output when using an incorrect "provider"

I took a try with yurtctl on my local kubeadm env. I didn't notice that the provider must be minikube or ack, so I omitted it and then got no output after running the command.

BTW, will we not support kubeadm or kind or others?

[feature request] all resources can be cached by yurt-hub

What would you like to be added:
Currently only the resources in the resourceToKindMap can be cached by the yurt-hub component, which is a limitation. yurt-hub should cache all Kubernetes resources, including CRD resources defined by users.

Why is this needed:
When the network between cloud and edge is disconnected, if a pod (e.g. calico) on the edge node uses resources (such as CRDs) that are not in the above map and needs to keep running, it cannot be restarted successfully because those resources are not cached by yurt-hub.

asking for advice: communication fails over EIP

hello,
I want to ask for some advice:
1. When deploying OpenYurt, communication between the edge node and the cloud node works over the public IP, but fails over the EIP (elastic IP): the edge node cannot join the k8s cluster via the kubeadm join ...... command.
2. The edge node uses an EIP and is joined to the Aliyun IoT platform successfully over the EIP. So may I ask: does the Aliyun IoT platform use the public IP or the EIP (elastic IP)?

thank you very much!

[BUG] _output/bin/yurtctl: No such file or directory

What happened:
I ran the commands step-by-step as described in Getting started, and then got the error: _output/bin/yurtctl: No such file or directory

What you expected to happen:
OpenYurt installs successfully

How to reproduce it (as minimally and precisely as possible):
Run the commands step-by-step as described in Getting started.

Anything else we need to know?:

Environment:

  • OpenYurt version: master branch
  • Kubernetes version (use kubectl version):
  • OS (e.g: cat /etc/os-release):CentOS Linux 8
  • Kernel (e.g. uname -a):Linux k8s-0001 4.18.0-240.1.1.el8_3.x86_64 #1 SMP Thu Nov 19 17:20:08 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:minikube
  • Others:
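
The error suggests the yurtctl binary was never built into _output/bin. Assuming the repository Makefile provides the usual build target (an assumption; check the Makefile in your checkout), something like this should produce it before running the Getting started commands:

# Build the project binaries locally; yurtctl should land under _output/.
cd openyurt
make build
ls _output/bin/yurtctl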
