okd-project / okd

The self-managing, auto-upgrading, Kubernetes distribution for everyone

Home Page: https://okd.io

License: Apache License 2.0

HCL 45.82% Python 36.65% Shell 10.77% Smarty 0.54% Jinja 6.22%


okd's Issues

Several errors when installing OKD 4.4.0 on AWS

When I try to install OKD on AWS, the installation crashes with several errors.

$ openshift-install version
openshift-install unreleased-master-2054-g7e1413f19c0af31bd7919d3067080a9ec787e135-dirty
built from commit 7e1413f19c0af31bd7919d3067080a9ec787e135
release image registry.svc.ci.openshift.org/origin/release@sha256:3b68e0f037c0ef3359ba3707d623a00402d0467d5ebafa8fea34ad326a27ed30

Installer Console Output

$ openshift-install create cluster
INFO Consuming Install Config from target directory 
INFO Creating infrastructure resources...         
INFO Waiting up to 30m0s for the Kubernetes API at https://api.dev.example.com:6443... 
INFO API v1.17.0 up                               
INFO Waiting up to 30m0s for bootstrapping to complete... 
INFO Destroying the bootstrap resources...        
ERROR                                              
ERROR Warning: Resource targeting is in effect     
ERROR                                              
ERROR You are creating a plan with the -target option, which means that the result 
ERROR of this plan may not represent all of the changes requested by the current 
ERROR configuration.                               
ERROR                                                      
ERROR The -target option is not for routine use, and is provided only for 
ERROR exceptional situations such as recovering from errors or mistakes, or when 
ERROR Terraform specifically suggests to use it as part of an error message. 
ERROR                                              
ERROR                                              
ERROR Warning: Applied changes may be incomplete   
ERROR                                              
ERROR The plan was created with the -target option in effect, so some changes 
ERROR requested in the configuration may have been ignored and the output values may 
ERROR not be fully updated. Run the following command to verify that no other 
ERROR changes are pending:                         
ERROR     terraform plan                           
ERROR                                               
ERROR Note that the -target option is not suitable for routine use, and is provided 
ERROR only for exceptional situations such as recovering from errors or mistakes, or 
ERROR when Terraform specifically suggests to use it as part of an error message. 
ERROR
INFO Waiting up to 30m0s for the cluster at https://api.dev.example.com:6443 to initialize... 
E0121 14:09:10.603015   29692 reflector.go:280] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: Get https://api.dev.example.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=6227&timeoutSeconds=434&watch=true: dial tcp 3.125.232.148:6443: connect: connection refused
E0121 14:09:11.670914   29692 reflector.go:280] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: Get https://api.dev.example.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=6227&timeoutSeconds=501&watch=true: dial tcp 3.125.227.80:6443: connect: connection refused
E0121 14:23:19.657660   29692 reflector.go:280] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: Get https://api.dev.example.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=12851&timeoutSeconds=428&watch=true: dial tcp 3.124.42.151:6443: connect: connection refused
E0121 14:23:26.410308   29692 reflector.go:280] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: the server is currently unable to handle the request (get clusterversions.config.openshift.io)
I0121 14:23:37.751957   29692 trace.go:116] Trace[1562460260]: "Reflector ListAndWatch" name:k8s.io/client-go/tools/watch/informerwatcher.go:146 (started: 2020-01-21 14:23:27.411104 +0100 CET m=+2282.244719334) (total time: 10.340616233s):
Trace[1562460260]: [10.340594715s] [10.340594715s] Objects listed
E0121 14:25:45.974279   29692 reflector.go:280] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: Get https://api.dev.example.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=12958&timeoutSeconds=453&watch=true: dial tcp 3.125.232.148:6443: connect: connection refused
E0121 14:25:47.045923   29692 reflector.go:280] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: Get https://api.dev.example.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=12958&timeoutSeconds=563&watch=true: dial tcp 3.124.42.151:6443: connect: connection refused
E0121 14:25:48.109525   29692 reflector.go:280] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: Get https://api.dev.example.com:6443/apis/config.openshift.io/v1/clusterversions?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dversion&resourceVersion=12958&timeoutSeconds=549&watch=true: dial tcp 3.125.232.148:6443: connect: connection refused
E0121 14:25:55.373676   29692 reflector.go:280] k8s.io/client-go/tools/watch/informerwatcher.go:146: Failed to watch *v1.ClusterVersion: the server is currently unable to handle the request (get clusterversions.config.openshift.io)
I0121 14:26:12.074432   29692 trace.go:116] Trace[1614350600]: "Reflector ListAndWatch" name:k8s.io/client-go/tools/watch/informerwatcher.go:146 (started: 2020-01-21 14:25:56.37865 +0100 CET m=+2431.209468384) (total time: 15.695454553s):
Trace[1614350600]: [15.695429463s] [15.695429463s] Objects listed
ERROR Cluster operator authentication Degraded is True with IngressStateEndpoints_MissingSubsets::RouteStatus_FailedHost: IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server
RouteStatusDegraded: route is not available at canonical host oauth-openshift.apps.dev.example.com: [] 
INFO Cluster operator authentication Progressing is Unknown with NoData:  
INFO Cluster operator authentication Available is Unknown with NoData:  
INFO Cluster operator console Progressing is True with RouteSyncProgressingFailedHost: RouteSyncProgressing: route is not available at canonical host [] 
INFO Cluster operator console Available is Unknown with NoData:  
INFO Cluster operator image-registry Available is False with NoReplicasAvailable: The deployment does not have available replicas 
INFO Cluster operator image-registry Progressing is True with DeploymentNotCompleted: The deployment has not completed 
ERROR Cluster operator ingress Degraded is True with IngressControllersDegraded: Some ingresscontrollers are degraded: default 
INFO Cluster operator ingress Progressing is True with Reconciling: Not all ingress controllers are available.
Moving to release version "4.4.0-0.okd-2020-01-20-231545".
Moving to ingress-controller image version "registry.svc.ci.openshift.org/origin/4.4-2020-01-20-231545@sha256:c5aa779b80bf6b7f9e98a4f85a3fec5a17543ce89376fc13e924deedcd7298cf". 
INFO Cluster operator ingress Available is False with IngressUnavailable: Not all ingress controllers are available. 
INFO Cluster operator insights Disabled is True with Disabled: Health reporting is disabled 
INFO Cluster operator kube-storage-version-migrator Available is False with _NoMigratorPod: Available: deployment/migrator.openshift-kube-storage-version-migrator: no replicas are available 
INFO Cluster operator monitoring Progressing is True with RollOutInProgress: Rolling out the stack. 
ERROR Cluster operator monitoring Degraded is True with UpdatingAlertmanagerFailed: Failed to rollout the stack. Error: running task Updating Alertmanager failed: waiting for Alertmanager Route to become ready failed: waiting for RouteReady of alertmanager-main: no status available for alertmanager-main 
INFO Cluster operator monitoring Available is False with :  
INFO Cluster operator support Disabled is True with : Health reporting is disabled 
FATAL failed to initialize the cluster: Working towards 4.4.0-0.okd-2020-01-20-231545: 99% complete

install-config.yaml

I set the replicas to 1 and just for this issue I also anonymized the domain.

apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 2
controlPlane:
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: dev
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: eu-central-1
publish: External
pullSecret: '{"auths":{"fake":{"auth": "bar"}}}'
sshKey: |
  ssh-rsa ...

Gathered data about cluster

https://drive.google.com/file/d/11mi2Z_3Oye3bErvQwHJ4f-2pRrPebVAZ/view?usp=sharing

Outdated MCO in 4.3 payload

MCO from the fcos branch is being promoted to the 4.4 release, but not to 4.3.

The fcos branch has recently been rebased on the latest master, so it's safe to point 4.3:machine-config-operator to the 4.4 image. This would unblock 4.3 installs.

/assign @smarterclayton

DOC: OKD4 on Libvirt/KVM

Hi,
following the example of @jomeier in #27, I'd like to share the walkthrough I followed to bootstrap and install an OKD 4.3 cluster on top of libvirt+KVM.
This walkthrough is based on my experience with Red Hat OpenShift Container Platform 4 installations. If you have better ideas or things I could do better, please tell me in the comments. :-)

Assumptions:

  • The host OS is CentOS 7, so I'm using its version of libvirt and qemu-kvm.
  • You've already configured a load balancer and DNS servers.
  • I still use DHCP to obtain the IPs, but with MAC-based reservations in the libvirt network configuration, so I don't need a standalone DHCP server.

Pre-tasks:

  1. Download the following from the Fedora CoreOS website:
  • Installer ISO
  • Kernel
  • Initramfs
  • Raw image

Although we're not installing Fedora CoreOS from PXE, we still need the kernel and initramfs, as we're going to use the Direct Kernel Boot feature of libvirt to inject kernel boot parameters.

  2. Provision a webserver
    Create a simple webserver to host the Ignition configs and the raw image. These files will be fetched by the CoreOS installer and deployed to the VMs.
    On my CentOS host I used lighttpd, as it is small enough for what I need.
    Since my server is hosted by a third-party provider, I bound the web server to the libvirt internal network IP assigned to my host (the default gateway). This way only the VMs can access the resources it hosts, and there's no way to reach it from the public IP.
    Just remember to use a port other than 80, since that one will already be bound by HAProxy. After that, ensure that SELinux allows the port you chose, or configure it accordingly (see the sketch just below).
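A rough sketch of this step, assuming lighttpd from EPEL, port 8080, and /var/www/okd as the document root (all of these are my choices, not requirements):

# relevant /etc/lighttpd/lighttpd.conf settings:
#   server.port          = 8080
#   server.bind          = "192.168.122.1"   # libvirt gateway IP on the host
#   server.document-root = "/var/www/okd"    # ignition configs + raw image
sudo yum install -y epel-release lighttpd policycoreutils-python
sudo mkdir -p /var/www/okd
# copy bootstrap.ign/master.ign/worker.ign (generated later) and the raw image here
sudo semanage port -a -t http_port_t -p tcp 8080                       # allow the port in SELinux
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/okd(/.*)?"  # let the web server read the files
sudo restorecon -Rv /var/www/okd
sudo systemctl enable --now lighttpd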

Walkthrough:

  1. Configure the libvirt network
    Create a simple NAT network. Even the "default" network created by libvirt is fine.
    There is no need to specify the IP reservations at this point, since the VMs don't exist yet. You can add the records inside the dhcp tag later (see the virsh commands after the XML below).
    Just remember to stop and start the network after adding them.
<network connections='3'>
  <name>openshift</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='GATEWAY' netmask='NETMASK'>
    <dhcp>
      <range start='DHCP START IP RANGE' end='DHCP END IP RANGE'/>
      <host mac='BOOTSTRAP MAC ADDRESS' name='okd4bs' ip='BOOTSTRAP IP'/>
      <host mac='MASTER 0 MAC ADDRESS' name='okd4m0' ip='MASTER 0 IP'/>
      <host mac='MASTER 1 MAC ADDRESS' name='okd4m1' ip='MASTER 1 IP'/>
      <host mac='MASTER 2 MAC ADDRESS' name='okd4m2' ip='MASTER 2 IP'/>
      <host mac='WORKER 0 MAC ADDRESS' name='okd4w0' ip='WORKER 0 IP'/>
      <host mac='WORKER 1 MAC ADDRESS' name='okd4w1' ip='WORKER 1 IP'/>
    </dhcp>
  </ip>
</network>
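For reference, the later DHCP additions and the network restart can be done with virsh (the network name comes from the XML above):

virsh net-edit openshift      # add the <host .../> reservations under <dhcp>
virsh net-destroy openshift   # stop the network...
virsh net-start openshift     # ...and start it again so the reservations take effect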
  2. Create the install-config.yaml
    Log in to the Red Hat Cluster Manager portal and obtain a pull secret. I chose Bare Metal as the UPI provider.
    Create an SSH key pair for SSH login to the cluster VMs.
    Then create your install-config.yaml, like the following example:
---
apiVersion: v1
baseDomain: YOUR_DOMAIN
compute:
- hyperthreading: Enabled   
  name: worker
  replicas: 0 
controlPlane:
  hyperthreading: Enabled   
  name: master 
  replicas: 3 
metadata:
  name: CLUSTER_NAME
networking:
  clusterNetwork:
  - cidr: 10.100.0.0/14 
    hostPrefix: 23 
  networkType: OpenShiftSDN
  serviceNetwork: 
  - 172.30.0.0/16
platform:
  none: {} 
pullSecret: 'SECRET FROM RED HAT CLUSTER MANAGER'
sshKey: 'YOUR SSH PUBLIC KEY' 

Modify it according to the size and configuration of your cluster, and remember to back it up, because the OpenShift installer will remove it after generating the Ignition configuration files.
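Since the installer consumes this file, a trivial way to keep a copy (the backup name is just my choice):

cp install-config.yaml install-config.yaml.bak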

  3. Generate the Ignition configuration files:
$ openshift-install create ignition-configs
  4. Provision the VMs
    I will provision a cluster with three control-plane nodes and two workers, so I will create 5 VMs plus 1 for bootstrapping. I chose Red Hat Atomic 7.4 as the OS type.
    Configure the VMs to boot with the kernel and initramfs you previously downloaded, with the following parameters:
    KERNEL: /path/of/the/kernel
    INITRAMFS: /path/of/initramfs
    BOOT ARGS: ip=dhcp rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=vda coreos.inst.image_url=http://WEB_SERVER_IP:PORT/fedora-coreos-31.20191217.2.0-metal.x86_64.raw.xz coreos.inst.ignition_url=http://WEB_SERVER_IP:PORT/SERVER_ROLE.ign
    Replace SERVER_ROLE with bootstrap for the bootstrap server, master for the control-plane servers, and worker for the worker servers.
    Replace WEB_SERVER_IP:PORT with the endpoint of the webserver you previously configured.

NOTE: the CoreOS installer automatically reboots the server after installation, so shut down the VMs once the installation is finished, otherwise the installer will keep reinstalling the OS.
At this point you can add the VMs' MAC addresses to the network configuration in libvirt, just like in the example above. After the installation, remove the direct kernel boot arguments, as we don't need them anymore (a virt-install sketch for this step follows below).
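For reference, creating one of these VMs with Direct Kernel Boot can also be done from the CLI with virt-install; this is only a sketch, and every name, path, MAC and size here is an assumption from my setup:

virt-install \
  --name okd4m0 \
  --vcpus 4 --memory 16384 \
  --disk size=120,bus=virtio \
  --network network=openshift,mac=52:54:00:00:00:10 \
  --os-variant generic \
  --boot kernel=/var/lib/libvirt/images/fcos-kernel,initrd=/var/lib/libvirt/images/fcos-initramfs.img,kernel_args="ip=dhcp rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=vda coreos.inst.image_url=http://WEB_SERVER_IP:PORT/fedora-coreos-31.20191217.2.0-metal.x86_64.raw.xz coreos.inst.ignition_url=http://WEB_SERVER_IP:PORT/master.ign" \
  --noautoconsole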

  5. Start the bootstrap server
    After every server in your cluster has been provisioned, start the bootstrap server.
    By default, Fedora CoreOS will install the OS from the official OSTree image, so we have to wait a few minutes for the machine-config-daemon to pull and install the pivot image from quay.io. This image is necessary for the kubelet service, as the official Fedora CoreOS image does not include hyperkube.
    After the image has been pulled and installed, the server will reboot by itself.
    When the server is up again, wait for the API service and the MachineConfig server to come up (check ports 6443 and 22623; see the sketch below).
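A quick way to check for those services (the hostnames are placeholders; run the ss command on the bootstrap node itself):

ss -tlnp | grep -E ':6443|:22623'
curl -k https://api.<cluster>.<domain>:6443/version
curl -kI https://api-int.<cluster>.<domain>:22623/config/master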

  6. Start the other servers
    Now that the bootstrap server is ready, you can safely start every other server in your cluster.
    Just like the bootstrap server, the control-plane and worker nodes will boot with the official Fedora CoreOS image, which does not contain hyperkube, so the kubelet service will not start and therefore cluster bootstrapping won't begin.
    Wait for the machine-config-daemon to pull the same image as on the bootstrap server, and for the reboot.
    If for some reason a server does not reboot and rpm-ostree status does not show any pivot image, you can force it manually with:
    $ sudo machine-config-daemon firstboot-complete-machineconfig
    The servers will reboot themselves and then try to join the cluster.
    After the reboot, rpm-ostree status should show something like:

[core@okd4m0 ~]$ rpm-ostree status
State: idle
AutomaticUpdates: disabled
Deployments:
● pivot://quay.io/openshift/okd-content@sha256:830ede6ea29c7b3e227261e4b2f098cbe2c98a344b2b54f3f05bb011e56b0209
              CustomOrigin: Managed by machine-config-operator
                 Timestamp: 2019-11-15T18:25:12Z

  ostree://fedora:fedora/x86_64/coreos/testing
                   Version: 31.20191217.2.0 (2019-12-18T14:11:27Z)
                    Commit: fd3a3a1549de2bb9031f0767d10d2302c178dec09203a1db146e0ad28f38c498
              GPGSignature: Valid signature by 7D22D5867F2A4236474BF7B850CB390B3C3359C4

Wait for the installation to finish, then log in to the console with the kubeadmin credentials or configure an additional identity provider.
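A sketch of how the tail end of the installation can be followed from the machine where openshift-install was run (paths are the installer defaults, relative to the install directory):

openshift-install wait-for bootstrap-complete --log-level=info
openshift-install wait-for install-complete --log-level=info
export KUBECONFIG=$PWD/auth/kubeconfig
oc get nodes
oc get clusteroperators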

Enjoy!

Any idea when Azure support comes back?

Hi everybody,

do you have any idea when Azure support will come back to the installer for OKD4? It seems to have been stripped out of the preview for some reason.

Many friendly greetings,

Josef

MCO reporting node not found after restart of mcd pods

While debugging an issue related to pull secrets in MCO, the machine config daemon did not come back up after a pod delete and instead reported the following:

W0115 15:30:05.615694 2039508 daemon.go:557] Got an error from auxiliary tools: error: cannot apply annotation for SSH access due to: unable to update node "nil": node "master-0" not found

The same log can be found on all mcd pods in the cluster post-reboot.
Is this related to the fork changes for MCO?

Version: registry-svc-ci-openshift-org-origin-4-4-2020-01-14-215321


Interesting, on a brand new install without any changes, I can't get any worker nodes to join at all:

Jan 15 16:40:43 worker-1 hyperkube[31249]: E0115 16:40:43.533178   31249 reflector.go:156] k8s.io/kubernetes/pkg/kubelet/kubelet.go:458: Failed to list *v1.Node: Get https://api-int.okd.blah.net:6443/api/v1/nodes?fieldSelector=metadata.name%3Dworker-1&limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kube-apiserver-lb-signer")
Jan 15 16:40:43 worker-1 hyperkube[31249]: E0115 16:40:43.543711   31249 kubelet.go:2271] node "worker-1" not found

EDIT: Still not sure why this happens; it has happened multiple times since then, and I seem to get it to work by re-running the Ignition script and therefore reinstalling FCOS on the node in question.

OKD support for AWS provider

We would like to keep this master issue to link new issues related to AWS deployments, so we can keep track of them and let the community know whether this provider is supported or has known issues.

Support for vSphere

Hi OKD working group,
After some initial problems with pull-secret={"auths":{"fake":{"auth": "bar"}}}, I was able to install OKD 4.3 on vSphere. I followed this [OpenShift 4.2 vSphere Install Quickstart](https://blog.openshift.com/openshift-4-2-vsphere-install-quickstart/).
I encountered the following issue: the image registry was not able to start because of missing (ReadWriteMany) persistent storage (PV). After changing the storage to emptyDir with
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
and
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState": "Managed"}}'
(the value before was Removed), the image registry was able to launch.

Now I'll deep dive into okd4. Thanks to @LorbusChris for fast help via chat.

Best Regards
Reamer

etcd bootstrap failures

Noticed in latest build (4.4.0-0.okd-2020-01-29-103659)

etcd fails to bootstrap correctly. This occurs very often on bootstrap or master nodes, but not always.

Jan 29 17:45:56 bootstrap.okd.example.net bootkube.sh[830]: {"level":"warn","ts":"2020-01-29T17:45:56.812Z","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-2eb8c07f-110f-4509-86bc-1204fc6ef953/172.16.20.29:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 172.16.20.29:2379: connect: connection refused\""}
Jan 29 17:45:56 bootstrap.okd.example.net bootkube.sh[830]: https://172.16.20.29:2379 is unhealthy: failed to commit proposal: context deadline exceeded
Jan 29 17:45:56 bootstrap.okd.example.net bootkube.sh[830]: Error: unhealthy cluster

We can see that etcd isn't getting certificates:

2020-01-29T17:43:10.301492157+00:00 stderr F + kube-client-agent request --kubeconfig=/etc/kubernetes/kubeconfig --orgname=system:etcd-servers --assetsdir=/etc/ssl/etcd --dnsnames=localhost,etcd.kube-system.svc,etcd.kube-system.svc.cluster.local,etcd.openshift-etcd.svc,etcd.openshift-etcd.svc.cluster.local --commonname=system:etcd-server:172.16.20.29 --ipaddrs=172.16.20.29,127.0.0.1
2020-01-29T17:43:20.456237172+00:00 stderr F ERROR: logging before flag.Parse: E0129 17:43:20.456082       7 agent.go:145] unable to retrieve approved CSR: the server could not find the requested resource (get certificatesigningrequests.certificates.k8s.io system:etcd-server:172.16.20.29). Retrying.
2020-01-29T17:43:23.459441822+00:00 stderr P ERROR: logging before flag.Parse: 
2020-01-29T17:43:23.459516973+00:00 stderr F E0129 17:43:23.459187       7 agent.go:145] unable to retrieve approved CSR: the server could not find the requested resource (get certificatesigningrequests.certificates.k8s.io system:etcd-server:172.16.20.29). Retrying.
2020-01-29T17:43:26.457881285+00:00 stderr F ERROR: logging before flag.Parse: E0129 17:43:26.457771       7 agent.go:145] unable to retrieve approved CSR: the server could not find the requested resource (get certificatesigningrequests.certificates.k8s.io system:etcd-server:172.16.20.29). Retrying.
2020-01-29T17:43:29.457852868+00:00 stderr F ERROR: logging before flag.Parse: E0129 17:43:29.457774       7 agent.go:145] unable to retrieve approved CSR: the server could not find the requested resource (get certificatesigningrequests.certificates.k8s.io system:etcd-server:172.16.20.29). Retrying.
2020-01-29T17:43:30.457434550+00:00 stderr F ERROR: logging before flag.Parse: E0129 17:43:30.457358       7 agent.go:145] unable to retrieve approved CSR: the server could not find the requested resource (get certificatesigningrequests.certificates.k8s.io system:etcd-server:172.16.20.29). Retrying.
2020-01-29T17:43:30.457434550+00:00 stderr F Error: error requesting certificate: error obtaining signed certificate from signer: timed out waiting for the condition
2020-01-29T17:43:30.457716467+00:00 stderr F Usage:
2020-01-29T17:43:30.457716467+00:00 stderr F   kube-client-agent request --FLAGS [flags]
2020-01-29T17:43:30.457716467+00:00 stderr F 
2020-01-29T17:43:30.457716467+00:00 stderr F Flags:
2020-01-29T17:43:30.457716467+00:00 stderr F       --assetsdir string    Directory location for the agent where it stores signed certs
2020-01-29T17:43:30.457716467+00:00 stderr F       --commonname string   Common name for the certificate being requested
2020-01-29T17:43:30.457716467+00:00 stderr F       --dnsnames string     Comma separated DNS names of the node to be provided for the X509 certificate
2020-01-29T17:43:30.457716467+00:00 stderr F   -h, --help                help for request
2020-01-29T17:43:30.457716467+00:00 stderr F       --ipaddrs string      Comma separated IP addresses of the node to be provided for the X509 certificate
2020-01-29T17:43:30.457716467+00:00 stderr F       --kubeconfig string   Path to the kubeconfig file to connect to apiserver. If "", InClusterConfig is used which uses the service account kubernetes gives to pods.
2020-01-29T17:43:30.457716467+00:00 stderr F       --orgname string      CA private key file for signer
2020-01-29T17:43:30.457716467+00:00 stderr F 
2020-01-29T17:43:30.457716467+00:00 stderr F ERROR: logging before flag.Parse: F0129 17:43:30.457651       7 main.go:18] Error executing kube-client-agent: error requesting certificate: error obtaining signed certificate from signer: timed out waiting for the condition

No logs in etcd-signer container on bootstrap node.

OKD support for OpenStack provider

We would like to keep this master issue to link new issues related to OpenStack deployments, so we can keep track of them and let the community know whether this provider is supported or has known issues.

machine-config-host-pull.service almost always fails because network is not up yet

As the title says, machine-config-host-pull.service almost always fails in my environment. As far as I can tell, this is because it tries to pull the pivot image before network-online.target is reached, so the network's not available yet.


Please see the attached systemd boot chart for an example of this. Also note that restarting the service manually, once I can SSH into the node, works.
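A possible workaround sketch (untested here, and my assumption rather than the project's fix) would be a systemd drop-in that orders the unit after network-online.target:

sudo mkdir -p /etc/systemd/system/machine-config-host-pull.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/machine-config-host-pull.service.d/10-wait-online.conf
[Unit]
Wants=network-online.target
After=network-online.target
EOF
sudo systemctl daemon-reload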

OKD support for Azure provider

We would like to keep this master issue to link new issues related to Azure deployments, so we can keep track of them and let the community know whether this provider is supported or has known issues.

OKD support for VMware provider

We would like to keep this master issue to link new issues related to VMware deployments, so we can keep track of them and let the community know whether this provider is supported or has known issues.

MCO /etc/crio/crio.conf file conflict upon upgrading releases

When updating from 4.4.0-0.okd-2020-01-14-215321 to 4.4.0-0.okd-2020-01-15-152306 (not sure if this happens for other release combinations), MCO attempts to overwrite the /etc/crio/crio.conf file but fails because the nodes contain a different version of this file than expected.

I couldn't fit the log output here because of the size of the config files; find it at https://pastebin.com/kkdwSye7

[FCOS] Azure: No workers created if a custom image is used

Hi,

I manually uploaded the decompressed FCOS image to an Azure storage blob in a resource group and let Terraform create an FCOS image from that. I had to patch openshift-install (in /data/data/azure/main.tf) so that I could do that.

The idea is that all my OKD cluster VMs use this image and I don't have to upload it every time I create a new cluster, because in my case that is very slow (more than 25 minutes, followed by a timeout).

The k8s control plane comes up this way, but no workers are created.

I debugged this and saw that the machineset for the workers still references the "original" FCOS image, not my own image, which is in a different resource group.

The image for the machineset is defined in:

/pkg/asset/machines/azure/machines.go
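As a possible manual workaround (an assumption on my side, not an installer feature), the worker MachineSet could be pointed at the custom image by editing its providerSpec:

oc -n openshift-machine-api get machinesets
oc -n openshift-machine-api edit machineset <infra-id>-worker-<region>
#   ...and under spec.template.spec.providerSpec.value.image set something like:
#   image:
#     resourceID: /resourceGroups/<rg>/providers/Microsoft.Compute/images/<fcos-image>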

I would like to add a parameter to the Azure installer where I can specify the storage blob where the FCOS vhd file is stored.

What do you think about this idea?

Greetings,

Josef

New OKD update URL: Unable to retrieve available updates: unexpected HTTP status: 400 Bad Request

Hi,

I changed the URL for updated OKD images in the ClusterVersion manifest to:
https://origin-release.svc.ci.openshift.org/graph

apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  creationTimestamp: '2019-11-30T20:45:11Z'
  generation: 11
  name: version
  resourceVersion: '5843386'
  selfLink: /apis/config.openshift.io/v1/clusterversions/version
  uid: 0abf886d-ccb5-47b0-b417-394811ad651b
spec:
  channel: fast-4.3
  clusterID: d960adc9-68c7-43cc-953c-fb81e886008d
  upstream: 'https://origin-release.svc.ci.openshift.org/graph'
status:
  availableUpdates: null
  conditions:
    - lastTransitionTime: '2019-11-30T21:14:55Z'
      message: Done applying 4.3.0-0.okd-2019-11-15-182656
      status: 'True'
      type: Available
    - lastTransitionTime: '2019-12-03T09:40:28Z'
      status: 'False'
      type: Failing
    - lastTransitionTime: '2019-11-30T21:14:55Z'
      message: Cluster version is 4.3.0-0.okd-2019-11-15-182656
      status: 'False'
      type: Progressing
    - lastTransitionTime: '2019-12-10T10:39:29Z'
      message: >-
        Unable to retrieve available updates: unexpected HTTP status: 400 Bad
        Request
      reason: RemoteFailed
      status: 'False'
      type: RetrievedUpdates
...

As shown in the status conditions, the Cluster Version Operator reports an HTTP 400 error.

If I curl this URL I get a JSON document listing Docker images. I can pull these images manually:

~$ curl https://origin-release.svc.ci.openshift.org/graph
{
  "nodes": [
    {
      "version": "4.3.0-0.okd-2019-12-09-174357",
      "payload": "registry.svc.ci.openshift.org/origin/release@sha256:c394a5db6ea1b8e534b5fc6dcfb27cb15c08312f9c75b06f2e0fb3ee8dc6f339"
    },
    {
      "version": "4.3.0-0.okd-2019-12-05-213603",
      "payload": "registry.svc.ci.openshift.org/origin/release@sha256:f8552ab8af4f3130d6ccd1423b6027735d73c6d45c5dff61ccdcb304de6386d7"
    },
    {
      "version": "4.3.0-0.okd-2019-12-05-210603",
      "payload": "registry.svc.ci.openshift.org/origin/release@sha256:5f23b43883fffc7ae89d3d86b7b2b7372f087a6471939c5ae03e85122f1904b4"
    },
    {
      "version": "4.3.0-0.okd-2019-12-05-193216",
      "payload": "registry.svc.ci.openshift.org/origin/release@sha256:ac6663eea65aa6b50361c5e782090745e5608097625cf0906ef30f713c101ad1"
    },
    {
      "version": "4.3.0-0.okd-2019-12-05-185724",
      "payload": "registry.svc.ci.openshift.org/origin/release@sha256:d0ea09b6e1089284ca65c4525710b0292b95bb1663fd43001499227c66d92a8b"
    },
...

I found the URL on https://github.com/orgs/openshift/projects/1 👍

Patch CVO in installer to update upstream to https://origin-release.svc.ci.openshift.org/graph?stable-XXX

Added by vrutkovs
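For reference, a change like this can also be applied to a running cluster with a merge patch on the ClusterVersion resource (a sketch, using the URL quoted above):

oc patch clusterversion version --type merge \
  -p '{"spec":{"upstream":"https://origin-release.svc.ci.openshift.org/graph"}}'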

Greetings,

Josef

open-vm-tools doesn't start

Description
I have two problems with open-vm-tools.

  1. open-vm-tools is not enabled.

  2. open-vm-tools is included as a 32-bit binary instead of 64-bit. yumdownloader should download the right binary. See openshift/release

Steps to reproduce the issue:

  1. Install 4.4.0-0.okd (https://origin-release.svc.ci.openshift.org/) - the version I used
  2. Check open-vm-tools

Describe the results you received:

[core@master1 ~]$ systemctl status vmtoolsd.service 
● vmtoolsd.service - Service for virtual machines hosted on VMware
   Loaded: loaded (/usr/lib/systemd/system/vmtoolsd.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: http://github.com/vmware/open-vm-tools
[core@master1 ~]$ sudo file /usr/bin/vmtoolsd
/usr/bin/vmtoolsd: ELF 32-bit LSB pie executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, BuildID[sha1]=d27d5ce82743beade43af0b4ca9bd35c55a6b4ca, for GNU/Linux 3.2.0, stripped

Describe the results you expected:
open-vm-tools should be running so that PVs are supported on vSphere.

Additional environment details (platform, options, etc.):

  • Platform: vSphere

OKD support for GCP provider

We would like to keep this master issue to link new issues related to GCP deployments, so we can keep track of them and let the community know whether this provider is supported or has known issues.

OKD project state and relation to OpenShift

I am deeply confused by the current state of the OKD project. Until the release of version 4, I thought that OpenShift was developed in an open source process typical for Red Hat, i.e. that OKD was the upstream project where development work was done and OpenShift Container Platform was the supported version, based on OKD code.

However, something has changed with the release of OpenShift 4. I don't understand what the current relation between OpenShift and OKD is. There has been no official OKD release for version 4, even though OpenShift is already at version 4.2. The OKD web page suggests that the latest release was 3.11.

Could you please explain the current development model for OpenShift and the relation between OKD and OpenShift?

OperatorHub UI broken in 4.4.0-0.okd-2020-01-28-022517 in console app

Switching between categories in the OperatorHub in the web console appends entities to the list of previously shown entities instead of replacing them (in 4.3 this worked correctly).

For example, if I go to https://console-openshift-console.apps.mycluster/operatorhub/all-namespaces?category=Storage I see 8 items (with a counter of 8 at the top right). Each item is actually doubled, so there are only 4 unique items. Switching to the "Streaming and messaging" category appends more items, so the UI shows 12 items overall.
Only checking a filter checkbox resets the view after switching between categories.

Additionally, compared to the content of operatorhub.io, not all items present there are shown in the UI (e.g. rook is missing).

Azure: machine-config-server on control plane serves ignition configs only through localhost

Hi,

today I tried the new FCOS vhd image for Azure, with install-config.yaml set to platform: none.

Bootstrapping worked: the bootstrap VM no longer responds on port 22623 (it could be deleted), I have a control plane (3 masters) running, and I can get all pods if I run

sudo KUBECONFIG=/etc/kubernetes/kubeconfig oc get pods --all-namespaces

Currently I don't have load balancers in my setup.

api-int.xxx and api.xxx are hardcoded to point to master-0. Before the control plane was ready, they pointed to the bootstrap VM.

I tried to add additional worker VMs, but they can't get the worker.ign files from the machine-config-server running on the masters.

If I SSH into master-0 and try these curls, one works and the other ones don't:

curl -k https://localhost:22623/config/worker  <- works, worker.ign is served
curl -k https://127.0.0.1:22623/config/worker <- Connection refused
curl -k https://10.1.0.5:22623/config/worker <- Connection refused (10.1.0.5 is private IP of my master-0 VM)

So it seems as if the machine-config-server is not accessible from outside the VM.

[core@master-0 ~]$ sudo netstat -tulpn | grep 22623
tcp6       0      0 :::22623                :::*                    LISTEN      10862/machine-confi

Greetings,

Josef

Ability to create a small non-resilient cluster

It'd be useful to be able to create a small non-resilient cluster (e.g. 1x master, 1x infra, 2x workers) that could fit in VMs on a laptop (<16 GB). I appreciate that I could use CRC, but that's not entirely representative of a real cluster. Without this I'd need a fair amount of kit, or a fairly large (for personal or research use) AWS bill - could this feature bring OKD to more people?

I think in the old days of 3.11 you could do something like this; from memory you could even put master and infra on the same VM. Anyone else interested, or maybe this is just something I'd like :)

FCOS on GCP is missing gcp-routes.sh and gcp-routes.service

Fedora CoreOS is missing the following files on GCP:

gcp-routes.sh and gcp-routes.service.

Without these files the bootstrap and master nodes are never reported healthy to the load balancer. I added the needed files and the bootstrap node instantly reports healthy.

Deployment on OpenStack Fails

I have been attempting to deploy OKD4 on my OpenStack Stein cluster. The cluster has all of the required services: compute, block, object, network, etc.

When attempting to deploy OKD4 on OpenStack, Terraform fails with the following errors:

ERROR                                              
ERROR Error: Unsupported argument                  
ERROR                                              
ERROR   on ../../tmp/openshift-install-867057577/bootstrap/main.tf line 75, in data "ignition_file" "dhcp_conf": 
ERROR   75:   filesystem = "root"                  
ERROR                                              
ERROR An argument named "filesystem" is not expected here. 
ERROR                                              
ERROR                                              
ERROR Error: Unsupported argument                  
ERROR                                              
ERROR   on ../../tmp/openshift-install-867057577/bootstrap/main.tf line 88, in data "ignition_file" "dns_conf": 
ERROR   88:   filesystem = "root"                  
ERROR                                              
ERROR An argument named "filesystem" is not expected here. 
ERROR                                              
ERROR                                              
ERROR Error: Unsupported argument                  
ERROR                                              
ERROR   on ../../tmp/openshift-install-867057577/bootstrap/main.tf line 101, in data "ignition_file" "hostname": 
ERROR  101:   filesystem = "root"                  
ERROR                                              
ERROR An argument named "filesystem" is not expected here. 
ERROR                                              
ERROR                                              
ERROR Error: Unsupported argument                  
ERROR                                              
ERROR   on ../../tmp/openshift-install-867057577/masters/main.tf line 7, in data "ignition_file" "hostname": 
ERROR    7:   filesystem = "root"                  
ERROR                                              
ERROR An argument named "filesystem" is not expected here. 
ERROR                                              
ERROR Failed to read tfstate: open /tmp/openshift-install-867057577/terraform.tfstate: no such file or directory 
FATAL failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply using Terraform 

The same install-config.yaml file works fine on OCP 4.
Install-config.yaml:

apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    openstack:
      type: ocp.master
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    openstack:
      type: ocp.master
  replicas: 3
metadata:
  creationTimestamp: null
  name: ec
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  openstack:
    cloud: openstack
    computeFlavor: ocp.master
    externalNetwork: ext-net
    lbFloatingIP: 172.16.19.164
    octaviaSupport: "0"
    region: "ca-east"
    trunkSupport: "0"
publish: External
pullSecret: '{"auths":{"fake":{"auth": "bar"}}}'
sshKey: ssh-rsa AAAAB3...

Also, it seems that the OpenStack terraform vars are still specifying the rhcos image:

{
  "openstack_base_image_name": "ec-zp8b5-rhcos",
  "openstack_base_image_url": "https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.3/43.80.20191002.1/x86_64/rhcos-43.80.20191002.1-openstack.x86_64.qcow2",
  "openstack_external_network": "ext-net",
  "openstack_credentials_cloud": "openstack",
  "openstack_master_flavor_name": "ocp.master",
  "openstack_lb_floating_ip": "172.16.19.164",
  "openstack_api_int_ip": "10.0.0.5",
  "openstack_node_dns_ip": "10.0.0.6",
  "openstack_ingress_ip": "10.0.0.7",
  "openstack_trunk_support": "0",
  "openstack_octavia_support": "0"

I attempted to change the image name to fedora-coreos with no luck:

{
  "openstack_base_image_name": "fedora-coreos-30.20191014.0",
...
}

AWS: crio.conf missing on bootstrap node

During installation on AWS with default settings, bootkube.sh fails with an error before restarting:

Dec 16 23:01:59 ip-10-0-8-136 bootkube.sh[4611]: sed: can't read /etc/crio/crio.conf: No such file or directory

I got past it by creating /etc/crio/ and /etc/crio/crio.conf (with the default contents from a Fedora install).
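A rough sketch of that manual workaround on the bootstrap node (copying crio.conf from a stock Fedora machine is my interpretation of the step above, not an official fix; the source path is hypothetical):

sudo mkdir -p /etc/crio
sudo cp /path/to/fedora-default-crio.conf /etc/crio/crio.conf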

I'm not sure what subproject this issue should go under.

docs: clarify that OKD is currently deployed on top of FCOS base

The only mention of FCOS in the current documentation is a reference that "AWS is currently the best place to start with the OKD4 preview while we get Fedora CoreOS machine images set up in the other clouds."

The README and other docs should be clear about the fact that FCOS is currently the only supported base, and that there are plans to eventually let you run on anything you want.

Error deploying sample builds

I have a new OKD install, version 4.3.0-0.okd-2019-11-15-182656, but I'm getting the errors below when trying to do a generic deployment to test builds (S2I and Docker builds).

error instantiating Build from BuildConfig project-py/docker-build (0): Error resolving ImageStreamTag ruby:latest in namespace openshift: unable to find latest tagged image

Generated from buildconfig-controller
12 times in the last few seconds
error instantiating Build from BuildConfig project-py/s2i-build (0): Error resolving ImageStreamTag ruby:2.4 in namespace openshift: unable to find latest tagged image
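One way to narrow this down (a sketch, not a confirmed fix; the imagestream name comes from the error above): check whether the ruby imagestream tags in the openshift namespace were ever imported, and retrigger the import if not.

oc get is ruby -n openshift
oc describe is ruby -n openshift | grep -A2 '2.4'
oc import-image ruby -n openshift --all --confirm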

AWS: Error launching source instance: timeout while waiting for state to become 'success'

Version

$ openshift-install version
openshift-install unreleased-master-2025-g23f40299a6ab4f4f9bfbfd1ec1d15dfab90a6ecf-dirty
built from commit 23f40299a6ab4f4f9bfbfd1ec1d15dfab90a6ecf
release image quay.io/openshift/okd@sha256:5d2e42d555d24bb1a60d20eb03ae7e288f4347cad6a68c5428d00598c5816678

Platform:

aws

What happened?

When installing OKD on AWS, this error always occurs. Only 1 out of 15 times did this part succeed, and then it ran into a different error.

ERROR                                              
ERROR Error: Error launching source instance: timeout while waiting for state to become 'success' (timeout: 30s) 
ERROR                                              
ERROR   on ../../../../../private/var/folders/b2/n35kbcsx1296ys4z6ndjsq4m0000gn/T/openshift-install-085413156/master/main.tf line 93, in resource "aws_instance" "master": 
ERROR   93: resource "aws_instance" "master" {     
ERROR                                              
ERROR                                              
FATAL failed to fetch Cluster: failed to generate asset "Cluster": failed to create cluster: failed to apply using Terraform

What you expected to happen?

The cluster should install on AWS without errors.

How to reproduce it?

I tried to install OKD on AWS using the openshift-install build from the okd repository. I also tried to deploy it with the current version of this repository, with a modified release image.

$ openshift-install create cluster --log-level debug

Doc: OKD4 on Proxmox

Hi,

although this is not full documentation, I'd like to say a few words about how OKD4 can be installed on Proxmox (without storage).

  1. Assumptions:

    • You use PFSense for load balancing, DNS resolution and as the DHCP server in your private OKD network
    • Your config files and scripts are located on the Proxmox host in this directory: /root/install-config
    • The name of your storage in Proxmox is 'local' (all disk images are stored here)
    • Your cloud provider uses VLANs and wants your MTU set to 1400 (in my case it's Hetzner)
    • Proxmox version: 6 (Virtual Environment 6.0-15 or higher)
    • You installed the OpenShift installer (openshift-install) from the OKD preview 1
    • MAC addresses for each VM must be unique. I use static IP mapping in PFSense's DHCP server.
    • I had to set up a second server for the workers in Proxmox (different story!), but the scripts are similar for the workers; change master.ign to worker.ign.
  2. Create SSH pubkey

  3. Create install-config.yaml

    apiVersion: v1
    baseDomain: <your domain name e.g. example.com>
    compute:
    - name: worker
      replicas: 0
    controlPlane:
      name: master
      replicas: 3
    metadata:
      name: okd
    networking:
      clusterNetworks:
      - cidr: 10.254.0.0/16
        hostPrefix: 24
      networkType: OpenShiftSDN
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}
    pullSecret: '<Your pull secret from https://cloud.redhat.com/openshift/install/vsphere/user-provisioned>'
    sshKey: <Your public SSH key beginning with ssh-rsa ...>
    
  4. Create ignition files:
    IMPORTANT: Save your install-config.yaml because the openshift installer will delete it :-)

    openshift-install create ignition-configs
    
  5. Install Proxmox and PFSense (I use its DNS resolver and DHCP server)
    If your cloud provider requires an MTU of 1400, you may have to patch Proxmox to be able to set it on your VMs. I followed the instructions on https://forum.proxmox.com/threads/set-mtu-on-guest.45078/page-2 for that. Setting up networking on Hetzner with only one NIC per VM in a VLAN was a nightmare (routing, ...). But now it works.

  6. Put your DNS entries in the DNS forwarder of PFSense. The etcd SVC entries should be entered in Services->DNS Resolver->General Settings->Custom Options like this:

    server:
    local-data: "_etcd-server-ssl._tcp.okd.<YOUR HOSTNAME e.g. example.com>  60 IN    SRV 0        10     2380 etcd-0.okd.<YOUR HOSTNAME e.g. example.com>."
    local-data: "_etcd-server-ssl._tcp.okd.<YOUR HOSTNAME e.g. example.com>  60 IN    SRV 0        10     2380 etcd-1.okd.<YOUR HOSTNAME e.g. example.com>."
    local-data: "_etcd-server-ssl._tcp.okd.<YOUR HOSTNAME e.g. example.com>  60 IN    SRV 0        10     2380 etcd-2.okd.<YOUR HOSTNAME e.g. example.com>."
    
    local-zone: "apps.okd.<YOUR HOSTNAME e.g. example.com>" redirect
    local-data: "apps.okd.<YOUR HOSTNAME e.g. example.com> 60 IN A <IP ADDRESS OF YOUR INGRESS ROUTER/DOMAIN NAME>"
    

    I had to add the last two entries for apps.okd because during the installation some OKD services communicate with apps.okd.... addresses, and I had network hairpinning problems that I couldn't resolve otherwise.

  7. Download the FCOS image for QEMU: https://builds.coreos.fedoraproject.org/prod/streams/testing/builds/31.20191217.2.0/x86_64/fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2.xz or similar. Decompress it and rename it to fedora-coreos.qcow2 (see the commands below).
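    One way to fetch and prepare the image on the Proxmox host (the decompression step is my addition, since qm importdisk expects a plain qcow2; URL as above):

    cd /root/install-config
    curl -LO https://builds.coreos.fedoraproject.org/prod/streams/testing/builds/31.20191217.2.0/x86_64/fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2.xz
    xz -d fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2.xz
    mv fedora-coreos-31.20191217.2.0-qemu.x86_64.qcow2 fedora-coreos.qcow2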

  8. Scripts for bootstrap and master machines. Run them on the Proxmox host:

    create-cluster.sh

    ./create-bootstrap.sh 109 bootstrap <some unique MAC address #0 for static IP through DHCP e.g.: 12:34:56:78:90:12>
    ./create-masters.sh
    

    create-bootstrap.sh

    ID=$1
    NAME=$2
    MACADDR=$3
    
    qm stop $ID
    sleep 10
    
    qm destroy $ID
    sleep 10
    
    qm create $ID --name $NAME --memory 2048 --net0 virtio,bridge=vmbr1,macaddr=$MACADDR,mtu=1400
    qm importdisk $ID fedora-coreos.qcow2 local
    qm set $ID --scsihw virtio-scsi-pci --scsi0 local:$ID/vm-$ID-disk-0.raw
    qm set $ID --boot c --bootdisk scsi0
    qm set $ID --serial0 socket --vga serial0
    
    echo "args: -fw_cfg name=opt/com.coreos/config,file=/root/install-config/bootstrap.ign" >> /etc/pve/qemu-server/$ID.conf
    
    qm start $ID
    

    create-masters.sh

    ./create-master.sh 110 master0 <some unique MAC address #1 for static IP through DHCP e.g.: 12:34:56:78:90:12>
    ./create-master.sh 111 master1 <some unique MAC address #2 for static IP through DHCP e.g.: 12:34:56:78:90:12>
    ./create-master.sh 112 master2 <some unique MAC address #3 for static IP through DHCP e.g.: 12:34:56:78:90:12>
    

    create-master.sh

    ID=$1
    NAME=$2
    MACADDR=$3
    
    qm stop $ID
    sleep 10
    
    qm destroy $ID
    sleep 10
    
    # !!! Important: Minimum 2 cores and 4GByte RAM. Without that SDN won't start !!!
    qm create $ID --name $NAME --cores 3 --memory 12000 --net0 virtio,bridge=vmbr1,macaddr=$MACADDR,mtu=1400
    qm importdisk $ID fedora-coreos.qcow2 local
    qm set $ID --scsihw virtio-scsi-pci --scsi0 local:$ID/vm-$ID-disk-0.raw
    qm set $ID --boot c --bootdisk scsi0
    qm set $ID --serial0 socket --vga serial0
    
    echo "args: -fw_cfg name=opt/com.coreos/config,file=/root/install-config/master.ign" >> /etc/pve/qemu-server/$ID.conf
    
    qm start $ID
    

I hope that's enough to get you started. The installation and configuration of PFSense is worth an article of its own :-)

Have fun.

Greetings,

Josef

Bare Metal bootstrap fails with OVN

When testing a bare metal installation with OVN, bootstrap keeps failing at the following point:

Jan 28 18:16:57 bootstrap.okd.example.net bootkube.sh[21846]: "99_openshift-machineconfig_99-worker-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-worker-ssh.yaml": no matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Jan 28 18:16:57 bootstrap.okd.example.net bootkube.sh[21846]: [#3181] failed to create some manifests:
Jan 28 18:16:57 bootstrap.okd.example.net bootkube.sh[21846]: "99_openshift-machineconfig_99-master-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-master-ssh.yaml": no matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"
Jan 28 18:16:57 bootstrap.okd.example.net bootkube.sh[21846]: "99_openshift-machineconfig_99-worker-ssh.yaml": unable to get REST mapping for "99_openshift-machineconfig_99-worker-ssh.yaml": no matches for kind "MachineConfig" in version "machineconfiguration.openshift.io/v1"

It seems that OVN keeps getting stuck here:

Jan 28 18:10:13 master-0.okd.example.net hyperkube[6711]: time="2020-01-28T18:08:52Z" level=debug msg="exec(630): /usr/bin/ovs-ofctl --no-stats --no-names dump-flows br-int table=41,ip,nw_src=10.10.0.0/16"
Jan 28 18:10:13 master-0.okd.example.net hyperkube[6711]: time="2020-01-28T18:08:52Z" level=fatal msg="Timeout error while obtaining addresses for k8s-master-0.okd.example.net (timed out waiting for the condition)"

When it says it is obtaining address for k8s-master-0.okd.example.net, is that a DNS FQDN?

Networking Config:

networking:
  clusterNetwork:
  - cidr: 10.10.0.0/16
    hostPrefix: 23
  networkType: OVNKubernetes
  serviceNetwork: 
  - 10.11.0.0/16

Saw some interesting logs in ovnkube-master container:

time="2020-01-28T17:47:25Z" level=error msg="macAddress annotation not found for node \"master-0.okd.example.net\" "
time="2020-01-28T17:47:25Z" level=error msg="k8s.ovn.org/l3-gateway-config annotation not found for node \"master-0.okd.example.net\""
time="2020-01-28T17:47:25Z" level=info msg="Allocated node master-2.okd.example.net HostSubnet 10.129.0.0/23"
time="2020-01-28T17:47:25Z" level=info msg="Setting annotations map[k8s.ovn.org/node-subnets:{\"default\":\"10.129.0.0/23\"} ovn_host_subnet:<nil>] on node master-2.okd.example.net"

ovs-daemons logs:

2020-01-28T18:24:10.356Z|00043|jsonrpc|WARN|unix#109: receive error: Connection reset by peer
2020-01-28T18:24:10.357Z|00044|reconnect|WARN|unix#109: connection dropped (Connection reset by peer)

nbdb / sbdb logs:

2020-01-28T18:41:52.781278548+00:00 stderr F 2020-01-28T18:41:52Z|00995|jsonrpc|WARN|ssl:172.16.20.20:44506: receive error: Protocol error
2020-01-28T18:41:52.781522487+00:00 stderr F 2020-01-28T18:41:52Z|00996|reconnect|WARN|ssl:172.16.20.20:44506: connection dropped (Protocol error)

I ran a capture on master-0 and I'm seeing a bunch of retransmissions to the bootstrap node (172.16.20.29); pcap attached:
master-0.pcap.zip

docs: describe how to ssh into FCOS images to troubleshoot

You can SSH into FCOS machines to troubleshoot issues during initial deployment/bootstrap if you have direct access to the running machine: ssh core@<ip address or hostname of VM>. This is not documented anywhere except the old Container Linux CoreOS docs. It should probably be mentioned in these docs as well.
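For example (a sketch; the key path is whatever public key you put in install-config.yaml, and bootkube.service only exists on the bootstrap node):

ssh -i ~/.ssh/id_rsa core@<ip address or hostname of VM>
journalctl -b -f -u bootkube.service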

fcos nodes fail to start crio service

Version

4.3.0-0.okd-2020-01-24-215611 on FedoraCoreOS31

What happened?

Master nodes (FCOS image) fail to start the crio service due to a missing /usr/share/containers/oci/hooks.d directory:

Jan 24 15:43:31 osinfra-cqr7z-master-2 crio[3282]: time="2020-01-24 15:43:31.186522987Z" level=fatal msg="runtime config: invalid hooks_dir: stat /usr/share/containers/oci/hooks.d: no such file or directory: stat /usr/share/containers/oci/hooks.d: no such file or directory"

What you expected to happen?

The /usr/share/containers/oci/hooks.d directory should exist in the image, or the configuration should be adapted so it is not referenced.

How to reproduce it (as minimally and precisely as possible)?

install-config.yaml

apiVersion: v1
baseDomain: fake.com
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 3
metadata:
  name: osinfra
networking:
  clusterNetworks:
  - cidr: 10.254.0.0/16
    hostPrefix: 24
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 192.168.190.0/24
platform:
  none: {}
pullSecret: '{"auths":{"fake":{"auth": "bar"}}}'
sshKey: my_key

Follow the instructions at https://github.com/openshift/installer/blob/fcos/docs/user/openstack/install_upi.md with the fcos-31 image, up to and including starting the master nodes. Log in to any master node after it has refetched the image.
The system reports that crio.service failed to start.

Anything else we need to know?

After manually editing /etc/crio/crio.conf and restarting the crio service, bootstrapping continues.
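A sketch of that manual workaround (re-pointing hooks_dir to a writable location is my own approach, since /usr is read-only on an OSTree system):

sudo sed -i 's|/usr/share/containers/oci/hooks.d|/etc/containers/oci/hooks.d|' /etc/crio/crio.conf
sudo mkdir -p /etc/containers/oci/hooks.d
sudo systemctl restart crio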

The same problem exists in worker nodes as well.

Ultimately this leads to a degraded machine-config operator state.

Docs: Upgrade from Alpha to Beta to Stable.

I'm wondering whether it's possible to upgrade from alpha to beta and later to the stable version without setting up an entirely new cluster. Maybe someone can add this information to the README.

Azure: Upload of decompressed FCOS image is very slow

Hi,

I couldn't manage to get the new Azure installer working, because the upload of the decompressed image (the FCOS image is xz-compressed) takes so long (more than 30 minutes on my PC) that this procedure does not seem feasible to me.

Please support a procedure where users can upload the FCOS image on their own to a storage account in a different resource group, which is then referenced by all cluster VMs.

A method to configure this storage blob container is necessary (through env variables which are fed to the installer?).

The upload would then only be needed once for all OKD clusters, instead of being repeated for each cluster.

Best regards,

Josef

Azure: VMs cant resolve hostnames -> Cant pull release image

Hi,

I sshed into my bootstrap VM and saw with journalctl that FCOS can't pull the origin release image.

A

curl google.de

ends with:

curl: (6) Could not resolve host: google.de

There is no /etc/resolv.conf file in my bootstrap VM.

The private DNS zone was created successfully in my resource group.

Greetings,

Josef

docs: Describe current state of IPI vs UPI testing

Per discussion with @vrutkovs:

Right, this needs careful wording in the README of the okd repo.
At the moment we have 2 blocking e2e tests for okd-specific repos - AWS IPI install and vSphere UPI install in our CI. Other platforms - like gcp/azure/metal - would be added as optional in order to test platform-specific things. We don't have the capacity to test all platforms at once, sadly. Also note that OKD nightlies are promoted from OCP nightlies, which run all kinds of platform tests, so I think we're sufficiently covered by those.
In terms of supported installs, both IPI and UPI are equal (although reproducing IPI issues is easier).
Let's file a bug in the okd repo to describe the current state of this in the README.

OKD support for Libvirt provider

We would like to use this master issue to link new issues related to Libvirt deployments, so we can keep track of them and let the community know whether a specific provider deployment is supported or has known issues.

Masters and Workers can't pivot behind the proxy

Hi,

I'm trying to install OKD4 on vSphere like this:

https://docs.openshift.com/container-platform/4.2/installing/installing_vsphere/installing-vsphere.html#installing-vsphere

Of course I had to change the Ignition version in append-bootstrap.ign from 2.x.x to 3.0.0 to get it running on the bootstrap server.

I set the platform to 'none' because I create the Ignition files myself and manually create the VMs in vSphere with them.

I'm behind a corporate proxy. Because of that I used the proxy settings in the install-config.yaml.

If I SSH into the bootstrap server, the proxy env vars are set. The bootstrap server initialization runs successfully, and it serves the Ignition files for masters and workers on port 22623.

If I SSH into the master server, I see this in the journal:

Nov 30 09:32:02 localhost sh[1108]: Error: unable to pull quay.io/openshift/okd-content@sha256:6625bb97a35604080af348340f0788df36455352b1a039f073cb6894c548fb78: unable to pull image: Error initializing source docker://quay.io/openshift/okd-content@sha256:6625bb97a35604080af348340f0788df36455352b1a039f073cb6894c548fb78: pinging docker registry returned: Get https://quay.io/v2/: dial tcp 3.230.48.144:443: i/o timeout

This also doesn't work:
sudo podman run hello-world

It seems as if proxy.sh is missing from /etc/profile.d on the master.
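
As a hedged per-node workaround sketch (not the proper fix, which would be the cluster-wide proxy configuration actually reaching the nodes), the proxy can be injected into all systemd-managed services via the manager's default environment; the proxy host below is a placeholder:

# run on an affected master/worker node
sudo mkdir -p /etc/systemd/system.conf.d
sudo tee /etc/systemd/system.conf.d/10-proxy.conf <<'EOF'
[Manager]
DefaultEnvironment="HTTP_PROXY=http://proxy.example.com:3128" "HTTPS_PROXY=http://proxy.example.com:3128" "NO_PROXY=localhost,127.0.0.1,.cluster.local,.svc"
EOF
sudo systemctl daemon-reexec
sudo systemctl restart crio   # already-running units need a restart to pick up the new environment

Interactive podman still needs the variables exported in the shell (sudo strips them by default), so the proxy.sh question above remains relevant.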

Greetings,

Josef

WebUI: api-explorer link not working in 4.3.0-0.okd-2019-11-15-182656

Hi,

Home -> Explore

results in a white page in the browser.

Debug window in Chrome:

  • Network Tab: sharing-config 404
  • Console:
main-chunk-7dcd7779403e0937e4e7.min.js:1 Active plugins: [@console/app, @console/ceph-storage-plugin, @console/container-security, @console/dev-console, @console/knative-plugin, @console/kubevirt-plugin, @console/metal3-plugin, @console/network-attachment-definition-plugin, @console/noobaa-storage-plugin, @console/operator-lifecycle-manager]
main-chunk-7dcd7779403e0937e4e7.min.js:1 Loaded cached API resources from localStorage
main-chunk-7dcd7779403e0937e4e7.min.js:1 MACHINE_AUTOSCALER was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 CONSOLE_NOTIFICATION was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 CONSOLE_CLI_DOWNLOAD was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 CONSOLE_EXTERNAL_LOG_LINK was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 CONSOLE_YAML_SAMPLE was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 NET_ATTACH_DEF was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 MACHINE_CONFIG was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 PROMETHEUS was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 OPERATOR_LIFECYCLE_MANAGER was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 METAL3 was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 CLUSTER_API was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 MACHINE_HEALTH_CHECK was detected.
api-explorer:1 A cookie associated with a resource at http://itsm.rsint.net/ was set with `SameSite=None` but without `Secure`. A future release of Chrome will only deliver cookies marked `SameSite=None` if they are also marked `Secure`. You can review cookies in developer tools under Application>Storage>Cookies and see more details at https://www.chromestatus.com/feature/5633521622188032.
main-chunk-7dcd7779403e0937e4e7.min.js:1 GET https://console-openshift-console.apps.c2.okd4.cloud.rsint.net/api/kubernetes/api/v1/namespaces/openshift-logging/configmaps/sharing-config 404
v @ main-chunk-7dcd7779403e0937e4e7.min.js:1
b @ main-chunk-7dcd7779403e0937e4e7.min.js:1
F @ main-chunk-7dcd7779403e0937e4e7.min.js:1
(anonymous) @ main-chunk-7dcd7779403e0937e4e7.min.js:1
(anonymous) @ main-chunk-7dcd7779403e0937e4e7.min.js:1
(anonymous) @ main-chunk-7dcd7779403e0937e4e7.min.js:1
5049 @ main-chunk-7dcd7779403e0937e4e7.min.js:1
r @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
4987 @ main-chunk-7dcd7779403e0937e4e7.min.js:1
r @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
d @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
c @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:1
main-chunk-7dcd7779403e0937e4e7.min.js:1 destroying websocket: /api/kubernetes/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion
main-chunk-7dcd7779403e0937e4e7.min.js:1 WebSocket connection to 'wss://console-openshift-console.apps.c2.okd4.cloud.rsint.net/api/kubernetes/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion' failed: WebSocket is closed before the connection is established.
r.destroy @ main-chunk-7dcd7779403e0937e4e7.min.js:1
(anonymous) @ main-chunk-7dcd7779403e0937e4e7.min.js:1
(anonymous) @ main-chunk-7dcd7779403e0937e4e7.min.js:1
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:5664
(anonymous) @ main-chunk-7dcd7779403e0937e4e7.min.js:1
t.clear @ main-chunk-7dcd7779403e0937e4e7.min.js:1
t.componentWillUnmount @ main-chunk-7dcd7779403e0937e4e7.min.js:1
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Aa @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Pa @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Fa @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
e.unstable_runWithPriority @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348750
uo @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Wc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
kc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
e.unstable_runWithPriority @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348750
uo @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
po @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
ho @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
jc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
enqueueSetState @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
C.setState @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348421
(anonymous) @ main-chunk-7dcd7779403e0937e4e7.min.js:1
Promise.then (async)
t.loadComponent @ main-chunk-7dcd7779403e0937e4e7.min.js:1
t.componentDidMount @ main-chunk-7dcd7779403e0937e4e7.min.js:1
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
e.unstable_runWithPriority @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348750
uo @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Wc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
jc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
as @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
cs @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Pc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
ps @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
render @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
5049 @ main-chunk-7dcd7779403e0937e4e7.min.js:1
r @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
4987 @ main-chunk-7dcd7779403e0937e4e7.min.js:1
r @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
d @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
c @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:1
main-chunk-7dcd7779403e0937e4e7.min.js:1 websocket error: /api/kubernetes/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion
main-chunk-7dcd7779403e0937e4e7.min.js:1 websocket closed: /api/kubernetes/apis/config.openshift.io/v1/clusterversions?watch=true&fieldSelector=metadata.name%3Dversion CloseEvent {isTrusted: true, wasClean: false, code: 1006, reason: "", type: "close", …}
vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456 RangeError: Invalid array length
    at a (api-explorer-chunk-7c53cc9b628c5c594dc5.min.js:1)
    at api-explorer-chunk-7c53cc9b628c5c594dc5.min.js:1
    at zi (vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456)
    at Zc (vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456)
    at Bc (vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456)
    at Rc (vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456)
    at kc (vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456)
    at vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
    at e.unstable_runWithPriority (vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348750)
    at uo (vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456)
ka @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Va.n.callback @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Ro @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
zo @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
e.unstable_runWithPriority @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348750
uo @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Wc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
kc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
e.unstable_runWithPriority @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348750
uo @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
po @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
ho @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
jc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
enqueueSetState @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
C.setState @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348421
(anonymous) @ main-chunk-7dcd7779403e0937e4e7.min.js:1
Promise.then (async)
t.loadComponent @ main-chunk-7dcd7779403e0937e4e7.min.js:1
t.componentDidMount @ main-chunk-7dcd7779403e0937e4e7.min.js:1
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
e.unstable_runWithPriority @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348750
uo @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Wc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
jc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
as @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
cs @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
Pc @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
ps @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
render @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:348456
5049 @ main-chunk-7dcd7779403e0937e4e7.min.js:1
r @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
4987 @ main-chunk-7dcd7779403e0937e4e7.min.js:1
r @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
d @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
c @ runtime~main-bundle-06897e4b2e572953ca6a.min.js:1
(anonymous) @ vendors~main-chunk-6ad83b434dca7b9e2093.min.js:1
main-chunk-7dcd7779403e0937e4e7.min.js:1 stopped watching console.openshift.io~v1~ConsoleNotification before finishing incremental loading.
main-chunk-7dcd7779403e0937e4e7.min.js:1 loaded apiregistration.k8s.io~v1~APIService
main-chunk-7dcd7779403e0937e4e7.min.js:1 websocket open: /api/kubernetes/apis/apiregistration.k8s.io/v1/apiservices?watch=true&resourceVersion=39468
main-chunk-7dcd7779403e0937e4e7.min.js:1 MACHINE_AUTOSCALER was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 CONSOLE_EXTERNAL_LOG_LINK was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 CONSOLE_NOTIFICATION was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 CONSOLE_CLI_DOWNLOAD was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 CONSOLE_YAML_SAMPLE was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 NET_ATTACH_DEF was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 MACHINE_CONFIG was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 PROMETHEUS was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 OPERATOR_LIFECYCLE_MANAGER was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 METAL3 was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 MACHINE_HEALTH_CHECK was detected.
main-chunk-7dcd7779403e0937e4e7.min.js:1 CLUSTER_API was detected.
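
In case it helps with triage, here is a hedged sketch of the server-side checks that could be paired with the browser trace above (resource names follow the usual openshift-console layout; please verify in your cluster):

# is the console deployment healthy, and does it log the sharing-config 404?
oc get pods -n openshift-console
oc logs -n openshift-console deployment/console --tail=100
# does the console operator report a related degraded condition?
oc get clusteroperator console -o yaml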

Greetings,

Josef

OKD support for BareMetal provider

We would like to use this master issue to link new issues related to BareMetal deployments, so we can keep track of them and let the community know whether a specific provider deployment is supported or has known issues.
