
sdn's Issues

when kube-proxy is disabled, sdn should not startMetricsServer

I have deployed openshift-sdn successfully and then updated it by editing the configmap sdn-config:

mode: "disabled"

After that I deployed a separate kube-proxy and re-deployed sdn. This time the sdn pod got stuck in phase RunContainerError. Logs:

W1210 10:17:38.676349   44123 proxy.go:63] Built-in kube-proxy is disabled
E1210 10:17:38.678956   44123 proxy.go:254] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
I1210 10:17:38.701436   44123 multitenant.go:158] SyncVNIDRules: 0 unused VNIDs
E1210 10:17:43.679198   44123 proxy.go:254] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E1210 10:17:48.681093   44123 proxy.go:254] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use

I tried updating sdn's configmap to clear the kube-proxy metrics address:

metricsBindAddress: ""

but that seems to have no effect.

I checked the code and found this:

func (sdn *OpenShiftSDN) runProxy(waitChan chan<- bool) {
	if string(sdn.ProxyConfig.Mode) == "disabled" {
		klog.Warningf("Built-in kube-proxy is disabled")
		sdn.startMetricsServer()
		close(waitChan)
		return
	}
...
func (sdn *OpenShiftSDN) startMetricsServer() {
	if sdn.ProxyConfig.MetricsBindAddress == "" {
		return
	}

I think we should skip the call to sdn.startMetricsServer() when sdn.ProxyConfig.Mode is "disabled".
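A minimal sketch of that change (based only on the snippet above, not an actual upstream patch) would be to return before starting the metrics server in the disabled case:

func (sdn *OpenShiftSDN) runProxy(waitChan chan<- bool) {
	if string(sdn.ProxyConfig.Mode) == "disabled" {
		klog.Warningf("Built-in kube-proxy is disabled")
		// Skip sdn.startMetricsServer() here: port 10249 may already be
		// bound by an externally deployed kube-proxy, which is exactly
		// the bind error shown in the logs above.
		close(waitChan)
		return
	}
	// ... normal proxy startup continues below ...
}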

NodeLocal DNS causes "172.30.0.0/16 conflicts with host network: 172.30.0.10/32"

We recently had to set up NodeLocal DNS because we ran into random resolution issues. Since applying it, the resolution issues are completely gone.

After a master node reboot we ran into this error:

I0309 15:16:40.762994       1 master.go:52] Initializing SDN master
F0309 15:16:40.768189       1 network_controller.go:59] Error starting OpenShift Network Controller: service IP: 172.30.0.0/16 conflicts with host network: 172.30.0.10/32
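The conflict appears to come from a straightforward overlap check: NodeLocal DNS installs its listen address 172.30.0.10 as a /32 host address on the node, and that address sits inside the serviceNetwork 172.30.0.0/16. A minimal illustration using only the Go standard library (the actual network_controller.go logic differs):

_, serviceNet, _ := net.ParseCIDR("172.30.0.0/16")
nodeLocalIP, _, _ := net.ParseCIDR("172.30.0.10/32")
// serviceNet.Contains(nodeLocalIP) == true, which is reported as
// "service IP: 172.30.0.0/16 conflicts with host network: 172.30.0.10/32".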

For now we worked around the issue by not scheduling NodeLocal DNS on the master nodes.

Is there a chance for openshift/sdn to support NodeLocal DNS?

Thanks!

openshift-sdn does not allow to configure a custom mac

openshift-sdn does not allow to configure a custom mac.

I am copying here @booxter's thorough explanation from a similar issue on ovn-kubernetes:

OK, let me expand because it sounds like perhaps Alon asked the project to come up with its own solution for hardware addresses, while in reality there is prior art in CNI plugins for the same.

The Network Plumbing working group of Kubernetes came up with a standard for network attachment definitions: https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit

This document explains how additional networks can be attached to pods and is implemented by the Multus CNI plugin. It defines, among other things, the mac attribute that can be passed via CNI_ARGS into the plugin. If it's passed, the plugin is supposed to enforce it for the prepared binding. This feature is already implemented in multiple CNI plugins (e.g. ovs-cni, sriov-cni), and it can also be implemented non-natively via tuning plugin chaining. Other projects like KubeVirt / kubemacpool rely on this attribute for some features to work.

It would be very helpful if OVN-Kubernetes plugin adds support for this attribute like other plugins did, so that we can use it in KubeVirt / kubemacpool environments.

Thank you for consideration.

Fail to start sdn pod when clusterCIDR is equal to hostCIDR

When running a single node, the user might configure the network like this:

 networking:
   clusterNetwork:
   - cidr: 10.217.0.0/23
     hostPrefix: 23
   machineNetwork:
   - cidr: 192.168.126.0/24
   networkType: OpenShiftSDN
   serviceNetwork:
   - 10.217.2.0/23

We tried this for CRC and sadly it fails (see crc-org/snc#311). The installer fails: the sdn pod exits with the following error.

[root@crc-d4mxd-master-0 core]# crictl ps -a
CONTAINER           IMAGE                                                                                                                    CREATED             STATE               NAME                      ATTEMPT             POD ID
424b2df2c1e48       eab80d387b5835140e41965e775371ab9f75cc64422605bd56f7b8b89bd52381                                                         7 seconds ago       Running             kube-multus               3                   1e8937168a96d
0265f5cfb4d29       2c5d2c2b51082e6ce5deca684aaa7a8f3c970616f7d192accfa34bc75011fb6c                                                         4 minutes ago       Exited              sdn                       10                  77f4ca6be7c2a
34f470bfffa5b       5283a59259736046ba55075e4f4ff03675d8d41553fbbdc3d1e6d267c5360c4d                                                         4 minutes ago       Exited              kube-rbac-proxy           7                   77f4ca6be7c2a
1dd95ba351421       eab80d387b5835140e41965e775371ab9f75cc64422605bd56f7b8b89bd52381                                                         10 minutes ago      Exited              kube-multus               2                   1e8937168a96d
72eb46c63539e       2c5d2c2b51082e6ce5deca684aaa7a8f3c970616f7d192accfa34bc75011fb6c                                                         11 minutes ago      Running             sdn-controller            1                   d4e6726774492
e17fa833d1c39       9e292852b769f6133e6e25f7a6b6b4f457d5c00ddd7735bffa39724868056a01                                                         30 minutes ago      Exited              whereabouts-cni           0                   1e8937168a96d
0700071978d56       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0de69542fc5c98f06e794bc6d522b76ca626d9089a49215510dcba158f1250b   30 minutes ago      Exited              whereabouts-cni-bincopy   0                   1e8937168a96d
58142749dd746       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831ac3823614ef1230fbc786d990ee186cfe3a54540ed266decabdf64475032c   31 minutes ago      Exited              routeoverride-cni         0                   1e8937168a96d
0af004bcc0682       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9222d21a1664062c0f0be3e0269392ea951fc346b1d51f831b4b080aca752b61   31 minutes ago      Exited              cni-plugins               0                   1e8937168a96d
63b8097640355       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:42de8035ebe256cc1efe062cf8eef5a42d06fd4657469a5b5cd16c18520e08f8   31 minutes ago      Exited              sdn-controller            0                   d4e6726774492
034e70174b347       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:42de8035ebe256cc1efe062cf8eef5a42d06fd4657469a5b5cd16c18520e08f8   31 minutes ago      Running             openvswitch               0                   9393c86f957d1
dec2262d6670e       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ad0033cdf25dca68753355915935bf2471d4d11ba568c3eb331cae403d4fa2c   31 minutes ago      Exited              multus-binary-copy        0                   1e8937168a96d
596a0b780af04       quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3829b56f156b88642013133072be9de9ec600c570db5153f9b45ae5868aa5257   31 minutes ago      Running             network-operator          0                   a579fffb6333f
[root@crc-d4mxd-master-0 core]# crictl logs 0265f5cfb4d29
I0122 09:25:03.704947   21855 cmd.go:121] Reading proxy configuration from /config/kube-proxy-config.yaml
I0122 09:25:03.705897   21855 feature_gate.go:243] feature gates: &{map[]}
I0122 09:25:03.705920   21855 cmd.go:216] Watching config file /config/kube-proxy-config.yaml for changes
I0122 09:25:03.705935   21855 cmd.go:216] Watching config file /config/..2021_01_22_08_57_59.443374971/kube-proxy-config.yaml for changes
I0122 09:25:03.725141   21855 node.go:152] Initializing SDN node "crc-d4mxd-master-0" (192.168.126.11) of type "redhat/openshift-ovs-networkpolicy"
I0122 09:25:03.725278   21855 cmd.go:159] Starting node networking (v0.0.0-alpha.0-233-g7106dab9)
I0122 09:25:03.725283   21855 node.go:340] Starting openshift-sdn network plugin
I0122 09:25:03.804877   21855 sdn_controller.go:139] [SDN setup] full SDN setup required (cluster CIDR not found)
F0122 09:25:04.035397   21855 cmd.go:111] Failed to start sdn: node SDN setup failed: file exists

Looking at the code, the failure seems to be related to the fact that route -n doesn't show any route to the cluster CIDR, which is normal in the single-node case.

[root@crc-d4mxd-master-0 core]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.126.1   0.0.0.0         UG    100    0        0 ens3
192.168.126.0   0.0.0.0         255.255.255.0   U     100    0        0 ens3

Would it be enough to remove this check in the case where clusterCIDR == hostCIDR?
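A minimal sketch of that special case (the function and field names here are assumptions, not the actual sdn_controller.go code): when the node's host subnet is exactly the cluster CIDR, the per-node route already covers the whole cluster network, so the cluster-route check, and the route addition that then fails with "file exists", could be skipped.

// Illustrative helper only, not upstream code.
func needsClusterRoute(clusterCIDR, hostSubnetCIDR *net.IPNet) bool {
	// Single-node case: 10.217.0.0/23 with hostPrefix 23 makes the host
	// subnet route identical to the cluster CIDR route, so adding a second
	// one would fail with EEXIST ("file exists").
	return clusterCIDR.String() != hostSubnetCIDR.String()
}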

why unsupported network policies are converted to deny-all

I ran a workload that has an unsupported NetworkPolicy, and it is converted to deny-all per this line. This makes debugging difficult. Would it make sense to just drop the unsupported NetworkPolicy instead of treating it as deny-all?
@deads2k

I1031 14:34:49.442501    4018 networkpolicy.go:482] Unsupported NetworkPolicy xxxx (named port values ("metrics") are not implemented); treating as deny-all

fix up startup / informer usage

(We probably won't actually fix this, but I did the analysis and want to dump it somewhere.)

We keep running into problems with the fact that openshift-sdn-node doesn't wait for its informers to be ready at startup.

Here's what sdn-node startup currently looks like:

pkg/cmd/openshift-sdn/node/cmd.go: run()

  sdn.init()
    sdn.buildInformers()
    sdn.initSDN()
      sdnnode.New()
        NewNetworkPolicyPlugin()
          newNodeVNIDMap()
        NewOVSController()
        common.NewEgressDNS()
        newHostSubnetWatcher()
        newPodManager()
        newEgressIPWatcher()
          common.NewEgressIPTracker()
    sdn.initProxy()
      sdnproxy.New()
        common.NewEgressDNS()

  sdn.start()
    sdn.runSDN()
      sdn.osdnNode.Start()
        node.getLocalSubnet()
        newNodeIPTables()
        nodeIPTables.Setup()
        node.SetupSDN()
        hostSubnets.Start()
          watchHostSubnets()
        node.policy.Start()
          vnids.Start()
            populateVNIDs()
              common.ListAllNetNamespaces()
            watchNetNamespaces()
          initNamespaces()
            common.ListAllNamespaces()
            common.ListAllNetworkPolicies()
          watchNamespaces()
          watchPods()
          watchNetworkPolicies()
        node.SetupEgressNetworkPolicy()
          common.ListAllEgressNetworkPolicies()
          plugin.policy.GetVNID()
          plugin.watchEgressNetworkPolicies()
        node.egressIP.Start()
          eip.tracker.Start()
            watchHostSubnets()
            watchNetNamespaces()
            watchNodes()
            go WaitForCacheSync()
        node.watchServices()
        node.podManager.InitRunningPods()
          m.policy.GetVNID()
        node.podManager.Start()
        node.reattachPods()
          node.podManager.handleCNIRequest()
            podManager.setup()
              m.kClient.CoreV1().Pods().Get()
              m.policy.GetVNID()
        node.FinishSetupSDN()
    sdn.runProxy()
      newProxyServer()
      wrapProxy()
        sdn.SetBaseProxies()
          NewHybridProxier()
        sdn.osdnProxy.Start()
          common.GetParsedClusterNetwork()
          common.ListAllEgressNetworkPolicies()
          proxy.watchEgressNetworkPolicies()
          proxy.watchNetNamespaces()
      startProxyServer()
        informers.NewSharedInformerFactoryWithOptions()
        informerFactory.Start()

    sdn.informers.start()

Ideally we would call sdn.informers.start() between sdn.init() and sdn.start() (and then get rid of all of the ListAll* methods and just use the informer listers). But this would require reorganizing things so that all of the "watch" methods get called at init time. Right now the split between init and start is mostly there to allow for unit tests that don't use clients/informers, but that could be fixed by just using fake clients/informers in the unit tests.
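For illustration, the ordering being proposed is roughly the standard client-go pattern below (generic informer API only; the names do not match the openshift-sdn internals): register handlers first, start the factory, then block on cache sync before any startup code reads from the listers.

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func startInformersAndWait(client kubernetes.Interface, stopCh <-chan struct{}) error {
	factory := informers.NewSharedInformerFactory(client, 30*time.Minute)

	podInformer := factory.Core().V1().Pods().Informer()
	// Event handlers (the various watch* methods in the tree above) would
	// be registered here, at "init" time, before the factory is started.

	factory.Start(stopCh)
	if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced) {
		return fmt.Errorf("informer caches failed to sync")
	}

	// From this point on it is safe to replace the ListAll* client calls
	// with reads from factory.Core().V1().Pods().Lister() and friends.
	return nil
}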

OVS segfault

I've deployed a small OKD 3.11 cluster for testing purposes, and the SDN seems flaky. I left it alone for a couple of days and came back to find that one of the OVS pods had segfaulted:

Dec 07 15:46:34 stacks1 kernel: ovs-ctl[50650]: segfault at 7fa22eddaf80 ip 00007fa22eb17eb0 sp 00007ffe14001358 error 4 in libc-2.17.so[7fa22ea92000+1c3000]

The OVS and SDN pods on that node did restart themselves as expected, but that caused me to encounter #47. I'm not certain how to capture core dumps from Docker containers, if I'm honest, so if you have any pointers on how to do that I'd appreciate it.

❯ oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
[...]
openshift v3.11.0+f6acaf1-349
kubernetes v1.11.0+d4cacc0

OKD 3.11 - crio - failed to find plugin "loopback" in path [/opt/cni/bin/ /opt/loopback/bin]

I am currently running a large OKD v3.11.0+b750162-89 cluster with crio as the runtime. I am seeing an issue with pods getting stuck in the ContainerCreating state. Upon investigation, it appears pods are getting created before the SDN is fully up and running; once it is up, however, it still fails to create the pod.

Here is an example:

pod start time: Wed, 20 May 2020 06:35:34 -0400

events:

  Type     Reason                  Age                 From                                   Message
  ----     ------                  ----                ----                                   -------
  Warning  FailedCreatePodSandBox  55m                 kubelet, ip-X.ec2.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fluent-bit-962zf_openshift-logging_a2bd2a99-9a85-11ea-a7a8-0287f3ab0050_0(301ea30ea0ad94070c5d934f3d1799cc4225afa865ff498619057bc08ad09b6d): failed to find plugin "loopback" in path [/opt/cni/bin/ /opt/loopback/bin]
  Warning  FailedCreatePodSandBox  54m                 kubelet, ip-X.ec2.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fluent-bit-962zf_openshift-logging_a2bd2a99-9a85-11ea-a7a8-0287f3ab0050_0(99f7918f079c7e829d731926f01351419d931be67d00cac1de20726382c8810f): failed to find plugin "loopback" in path [/opt/cni/bin/ /opt/loopback/bin]
  Warning  FailedCreatePodSandBox  54m                 kubelet, ip-X.ec2.internal  Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_fluent-bit-962zf_openshift-logging_a2bd2a99-9a85-11ea-a7a8-0287f3ab0050_0(9518362bebc1185be0c9df17e38dfd4931a64cd05aa948f7a75f2796673cc3e5): OpenShift SDN network process is not (yet?) available
  Warning  FailedCreatePodSandBox  4m (x220 over 54m)  kubelet, ip-X.ec2.internal  Failed create pod sandbox: rpc error: code = Unknown desc = pod sandbox with name "k8s_fluent-bit-962zf_openshift-logging_a2bd2a99-9a85-11ea-a7a8-0287f3ab0050_0" already exists

SDN start time: Wed, 20 May 2020 06:35:34 -0400

crio logs:

May 20 10:35:52 ip-X.ec2.internal crio[1555]: time="2020-05-20 10:35:52.173892500Z" level=error msg="Error deleting network: failed to find plugin "openshift-sdn" in path [/opt/cni/bin/ /opt/openshift-sdn/bin]"
May 20 10:36:08 ip-X.ec2.internal crio[1555]: time="2020-05-20 10:36:08.350781469Z" level=error msg="Error adding network: failed to find plugin "loopback" in path [/opt/cni/bin/ /opt/loopback/bin]"
May 20 10:36:08 ip-X.ec2.internal crio[1555]: time="2020-05-20 10:36:08.350805073Z" level=error msg="Error while adding to cni lo network: failed to find plugin "loopback" in path [/opt/cni/bin/ /opt/loopback/bin]"
May 20 10:36:08 ip-X.ec2.internal crio[1555]: time="2020-05-20 10:36:08.350822489Z" level=error msg="Error deleting network: failed to find plugin "openshift-sdn" in path [/opt/cni/bin/ /opt/openshift-sdn/bin]"
May 20 10:36:12 ip-X.ec2.internal crio[1555]: time="2020-05-20 10:36:12.674429655Z" level=error msg="Error adding network: OpenShift SDN network process is not (yet?) available"
May 20 10:36:12 ip-X.ec2.internal crio[1555]: time="2020-05-20 10:36:12.674450141Z" level=error msg="Error while adding to cni network: OpenShift SDN network process is not (yet?) available"
May 20 10:36:12 ip-X.ec2.internal crio[1555]: time="2020-05-20 10:36:12.678066342Z" level=error msg="Error deleting network: failed to send CNI request: Post http://dummy/: dial unix /var/run/openshift-sdn/cni-server.sock: connect: no such file or directory"
May 20 10:36:19 ip-X.ec2.internal crio[1555]: time="2020-05-20 10:36:19.536904826Z" level=error msg="Error adding network: OpenShift SDN network process is not (yet?) available"
May 20 10:36:19 ip-X.ec2.internal crio[1555]: time="2020-05-20 10:36:19.536930324Z" level=error msg="Error while adding to cni network: OpenShift SDN network process is not (yet?) available"
May 20 10:36:19 ip-X.ec2.internal crio[1555]: time="2020-05-20 10:36:19.540556043Z" level=error msg="Error deleting network: failed to send CNI request: Post http://dummy/: dial unix /var/run/openshift-sdn/cni-server.sock: connect: no such file or directory"

^ After the last log entry (May 20 10:36:19) there appear to be no more issues talking to the SDN.

SDN startup logs:

2020/05/20 10:36:08 socat[3636] E connect(5, AF=1 "/var/run/openshift-sdn/cni-server.sock", 40): No such file or directory
User "sa" set.
Context "default-context" modified.
which: no openshift-sdn in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin)
I0520 10:36:18.909405    3563 start_network.go:193] Reading node configuration from /etc/origin/node/node-config.yaml
I0520 10:36:18.912000    3563 start_network.go:200] Starting node networking ip-X.ec2.internal (v3.11.0+d699176-406)
W0520 10:36:18.912136    3563 server.go:195] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0520 10:36:18.912165    3563 feature_gate.go:230] feature gates: &{map[]}
I0520 10:36:18.914229    3563 transport.go:160] Refreshing client certificate from store
I0520 10:36:18.914255    3563 certificate_store.go:131] Loading cert/key pair from "/etc/origin/node/certificates/kubelet-client-current.pem".
I0520 10:36:18.929569    3563 node.go:151] Initializing SDN node of type "redhat/openshift-ovs-networkpolicy" with configured hostname "ip-X.ec2.internal" (IP ""), iptables sync period "30s"
I0520 10:36:18.934523    3563 node.go:281] Starting openshift-sdn network plugin
I0520 10:36:19.214899    3563 sdn_controller.go:138] [SDN setup] full SDN setup required (Link not found)
I0520 10:36:19.643931    3563 vnids.go:148] Associate netid 12328631 to namespace "X" with mcEnabled false
....
W0520 10:36:19.908659    3563 pod.go:218] No sandbox for pod datadog-agent/datadog-agent-hv9p4
W0520 10:36:19.908684    3563 pod.go:218] No sandbox for pod openshift-monitoring/node-exporter-ps4bh
W0520 10:36:19.908690    3563 pod.go:218] No sandbox for pod openshift-node/sync-dw5cj
W0520 10:36:19.908696    3563 pod.go:218] No sandbox for pod openshift-sdn/ovs-gxfbq
W0520 10:36:19.908704    3563 pod.go:218] No sandbox for pod openshift-sdn/sdn-rkk56
I0520 10:36:19.908709    3563 node.go:352] Starting openshift-sdn pod manager
E0520 10:36:19.908931    3563 cniserver.go:150] failed to remove old pod info socket: remove /var/run/openshift-sdn: device or resource busy
E0520 10:36:19.908967    3563 cniserver.go:153] failed to remove contents of socket directory: remove /var/run/openshift-sdn: device or resource busy
I0520 10:36:19.913154    3563 node.go:379] openshift-sdn network plugin registering startup
I0520 10:36:19.913277    3563 node.go:397] openshift-sdn network plugin ready
I0520 10:36:19.915712    3563 network.go:97] Using iptables Proxier.
I0520 10:36:19.916200    3563 networkpolicy.go:331] SyncVNIDRules: 0 unused VNIDs
I0520 10:36:19.918356    3563 network.go:129] Tearing down userspace rules.
I0520 10:36:19.930669    3563 proxier.go:216] Setting proxy IP to X and initializing iptables
I0520 10:36:19.949401    3563 proxy.go:86] Starting multitenant SDN proxy endpoint filter
I0520 10:36:19.955340    3563 config.go:202] Starting service config controller
I0520 10:36:19.955358    3563 controller_utils.go:1025] Waiting for caches to sync for service config controller
I0520 10:36:19.955371    3563 network.go:231] Started Kubernetes Proxy on 0.0.0.0
I0520 10:36:19.955387    3563 start_network.go:232] Waiting for the SDN proxy startup to complete...
I0520 10:36:19.955386    3563 config.go:102] Starting endpoints config controller
I0520 10:36:19.955404    3563 controller_utils.go:1025] Waiting for caches to sync for endpoints config controller
I0520 10:36:19.955478    3563 network.go:55] Starting DNS on 127.0.0.1:53
I0520 10:36:19.956467    3563 server.go:76] Monitoring dnsmasq to point cluster queries to 127.0.0.1
I0520 10:36:19.956562    3563 logs.go:49] skydns: ready for queries on cluster.local. for tcp://127.0.0.1:53 [rcache 0]
I0520 10:36:19.956603    3563 logs.go:49] skydns: ready for queries on cluster.local. for udp://127.0.0.1:53 [rcache 0]

If I destroy the pod and allow Kubernetes to recreate it, the new pod comes up just fine (in this case it's a DaemonSet, so it gets assigned to the same node).

Append a route to the new cluster CIDR in existing containers

I have been using openshift-sdn with a small cluster network for a long time. Now that I want to add more nodes to my cluster, I find that this cluster network has been used up.

Whether by changing the args or by updating the clusternetwork directly, I have now added a new CIDR 10.132.0.0/14 to my cluster network "default":

# kubectl  get clusternetwork -oyaml 
apiVersion: v1
items:
- apiVersion: network.openshift.io/v1
  clusterNetworks:
  - CIDR: 10.178.40.0/21
    hostSubnetLength: 10
  - CIDR: 10.132.0.0/14
    hostSubnetLength: 8
  hostsubnetlength: 10
  kind: ClusterNetwork
  metadata:
    creationTimestamp: 2020-07-09T03:04:22Z
    generation: 1
    name: default
    namespace: ""
    resourceVersion: "36528919"
    selfLink: /apis/network.openshift.io/v1/clusternetworks/default
    uid: e3b4a921-c190-11ea-b605-fa163e6fe7d6
  network: 10.178.40.0/21
  pluginName: redhat/openshift-ovs-multitenant
  serviceNetwork: 10.178.32.0/21
  vxlanPort: 4789
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

But as this doc says, nodes must be deleted and re-created, and I have pods running on these nodes that cannot be shut down.

Then I found this pr: 38780ce. As I understand it, I can restart the sdn pod on a node to rebuild the routes/iptables/OpenFlow rules for the new CIDR in the clusternetwork, and it will not cause my pod containers to be re-created (it calls the reattach method).

But this way, existing pods' containers can't reach the new CIDR because they lack a route to it. Why doesn't the agent inject a route to the new CIDR into old containers when it reattaches them?
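A minimal sketch of what such an injection could look like (an assumed helper, not the actual reattach code): enter the pod's network namespace and add a route for the new CIDR via the standard containernetworking and netlink libraries.

import (
	"net"

	"github.com/containernetworking/plugins/pkg/ns"
	"github.com/vishvananda/netlink"
)

// addClusterCIDRRoute is illustrative only: it adds a route for a newly
// configured cluster CIDR inside an existing pod's netns, via the pod's eth0.
func addClusterCIDRRoute(podNetnsPath, newClusterCIDR string, gw net.IP) error {
	_, dst, err := net.ParseCIDR(newClusterCIDR)
	if err != nil {
		return err
	}
	return ns.WithNetNSPath(podNetnsPath, func(_ ns.NetNS) error {
		link, err := netlink.LinkByName("eth0") // pod-side veth
		if err != nil {
			return err
		}
		return netlink.RouteAdd(&netlink.Route{
			LinkIndex: link.Attrs().Index,
			Dst:       dst,
			Gw:        gw,
		})
	})
}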

Pods networking is broken after openvswitch is restarted

Description

After the ovs pod is restarted, all pods on the corresponding node end up with broken networking. The gateway is not reachable, so no egress connections are possible.
If ovs-ofctl -O OpenFlow13 dump-ports-desc br0 is run inside the ovs pod, the output doesn't show the old vethXXX interfaces; however, they are still present on the host.

Version
  • The output of git describe of openshift-ansible
openshift-ansible-3.11.146-1-22-g37e13e5
  • ovs image version:
docker.io/openshift/origin-node:v3.11
Steps To Reproduce
  1. Delete/restart the ovs pod on the compute node.
  2. Run ovs-ofctl -O OpenFlow13 dump-ports-desc br0, verify that veth interfaces are missing.
Expected Results

Pod networking is not broken after ovs is restarted; the old vethXXX interfaces are picked up by ovs after the restart.

Additional Information
  • Operating system and version: CentOS 7
