
openshift / microshift

635 stars · 26 watchers · 189 forks · 92.97 MB

A small form factor OpenShift/Kubernetes optimized for edge computing

Home Page: https://microshift.io

License: Apache License 2.0

Go 37.47% Makefile 1.92% Shell 32.82% Python 13.20% Jinja 0.20% JavaScript 1.08% SCSS 0.01% HTML 0.10% Smarty 0.06% RobotFramework 13.10% Dockerfile 0.04%
kubernetes iot containers k8s openshift edge-computing hacktoberfest hacktoberfest2021

microshift's Introduction

MicroShift

MicroShift is a project that optimizes OpenShift Kubernetes for small form factor and edge computing.

Edge devices deployed out in the field pose very different operational, environmental, and business challenges from those of cloud computing. These motivate different engineering trade-offs for Kubernetes at the far edge than for cloud or near-edge scenarios.

MicroShift design goals cater to this:

  • make frugal use of system resources (CPU, memory, network, storage, etc.)
  • tolerate severe networking constraints
  • update securely, safely, speedily, and seamlessly (without disrupting workloads)
  • build on and integrate cleanly with edge-optimized operating systems like RHEL for Edge
  • provide a consistent development and management experience with standard OpenShift

These properties should also make MicroShift a great tool for other use cases such as Kubernetes applications development on resource-constrained systems, scale testing, and provisioning of lightweight Kubernetes control planes.

System Requirements

To run MicroShift, the minimum system requirements are:

  • x86_64 or aarch64 CPU architecture
  • Red Hat Enterprise Linux 9 with Extended Update Support (9.2 or later)
  • 2 CPU cores
  • 2GB of RAM
  • 2GB of free system root storage for MicroShift and its container images

These requirements include the resources needed by the operating system, unless explicitly stated otherwise.

Depending on user workload requirements, it may be necessary to add more resources, e.g. CPU and RAM for better performance, disk space in the root partition for container images, or an LVM volume group for container storage.

Deploying MicroShift on Edge Devices

For production deployments, MicroShift can be run on bare metal hardware or hypervisors supported and certified for the Red Hat Enterprise Linux 9 operating system.

User Documentation

To install, configure and run MicroShift, refer to the following documentation:

Contributor Documentation

To build MicroShift from source and contribute to its development, refer to the following documentation:

Community

Community documentation sources are managed at https://github.com/redhat-et/microshift-documentation and published on https://microshift.io.

To get started with MicroShift, please refer to the Getting Started section of the MicroShift User Documentation.

For information about getting in touch with the MicroShift community, check our community page.

microshift's People

Contributors

agullon, benluddy, chiragkyal, cooktheryan, copejon, dependabot[bot], dhellmann, dhensel-rh, eggfoobar, eslutsky, fzdarsky, ggiguash, husky-parul, iranzo, jakobmoellerdev, jogeo, mangelajo, microshift-rebase-script[bot], oglok, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, pacevedom, pliurh, pmtk, rootfs, sallyom, sjug, stlaz, zshi-redhat


microshift's Issues

Review /hack/patches

We applied these patches to vendored OpenShift packages in the early days. Patching vendored packages is bad, as it makes re-vendoring a nightmare.

We should review those patches, as some may be or become obsolete (e.g. disabling the signal handler in openshift-controller-manager), and delete the obsolete ones. Any remaining patches should be named appropriately and upstreamed.

Write documentation

Write a minimal README.md that explains how to get started quickly using install.sh and which options can be parameterized in MicroShift.

Add uninstall.sh script

With @rootfs's help I got some of the commands, and added some cleanups for the systemd service.
I used something like this, which is probably not suitable for most users because it might be too destructive:

[root@server ~]# cat microshift-uninstall.sh
#!/bin/bash

V()
{
  # Echo the command, then run it. eval is needed so that the quoted
  # pipelines below (mount | grep ... | xargs umount) actually execute
  # as pipelines instead of being passed as literal arguments to mount.
  echo "### $*"
  eval "$*"
}

V systemctl stop microshift
V systemctl disable microshift
V rm -rf /etc/systemd/system/microshift
V rm -rf /usr/lib/systemd/system/microshift
V systemctl daemon-reload
V systemctl reset-failed

V crictl stop $(crictl ps -q) -t 1
# V crictl rm $(crictl ps -q -a) -t 1

V "mount | grep overlay | awk '{print \$3}' | xargs umount"
V "mount | grep kubelet | awk '{print \$3}' | xargs umount"
V pkill -9 pause

V rm -rf /var/lib/microshift
V rm -rf /var/lib/rook
V rm -rf /var/lib/etcd
V rm -rf /var/lib/kubelet
V rm -rf $HOME/.kube

V mkdir -p /var/lib/kubelet
V chcon -R -t container_file_t /var/lib/kubelet/

Service- and role-specific kubeconfigs

Currently, there is only one CA and all components use the kubeadmin kubeconfig. Change to one CA and one kubeconfig per component. As part of this, minimize cert and kubeconfig generation depending on the role of the instance; e.g., if we run with --roles node only, there should be no need to initialize the control-plane config.

CLI option to set API server port

Currently the API port 6443 is hardcoded. This becomes an issue when the port is already taken (especially when multiple MicroShift clusters run on the same host). We need to parameterize the API server port.

Unable to install Strimzi operator on microshift

I am unable to install the Strimzi operator on MicroShift; I am getting the following error.

2021-07-21 15:38:45 INFO  Main:61 - ClusterOperator 0.24.0 is starting
2021-07-21 15:38:46 INFO  Main:63 - Cluster Operator configuration is ClusterOperatorConfig(namespaces=[*],reconciliationIntervalMs=120000,operationTimeoutMs=300000,connectBuildTimeoutMs=300000,createClusterRoles=false,versions=versions{2.7.0={proto: 2.7 msg: 2.7 kafka-image: quay.io/strimzi/kafka@sha256:95cfe9000afda2f7def269ca46472d3803ee85146c521884884d8505a7187daf connect-image: quay.io/strimzi/kafka@sha256:95cfe9000afda2f7def269ca46472d3803ee85146c521884884d8505a7187daf connects2i-image: quay.io/strimzi/kafka@sha256:95cfe9000afda2f7def269ca46472d3803ee85146c521884884d8505a7187daf mirrormaker-image: quay.io/strimzi/kafka@sha256:95cfe9000afda2f7def269ca46472d3803ee85146c521884884d8505a7187daf mirrormaker2-image: quay.io/strimzi/kafka@sha256:95cfe9000afda2f7def269ca46472d3803ee85146c521884884d8505a7187daf}, 2.7.1={proto: 2.7 msg: 2.7 kafka-image: quay.io/strimzi/kafka@sha256:8959b7968ab8b3306906cdbff2ebb8d63329af37e58124a601843795c4ef5bd6 connect-image: quay.io/strimzi/kafka@sha256:8959b7968ab8b3306906cdbff2ebb8d63329af37e58124a601843795c4ef5bd6 connects2i-image: quay.io/strimzi/kafka@sha256:8959b7968ab8b3306906cdbff2ebb8d63329af37e58124a601843795c4ef5bd6 mirrormaker-image: quay.io/strimzi/kafka@sha256:8959b7968ab8b3306906cdbff2ebb8d63329af37e58124a601843795c4ef5bd6 mirrormaker2-image: quay.io/strimzi/kafka@sha256:8959b7968ab8b3306906cdbff2ebb8d63329af37e58124a601843795c4ef5bd6}, 2.8.0={proto: 2.8 msg: 2.8 kafka-image: quay.io/strimzi/kafka@sha256:fbb08410d9595029bc4a02ed859971264e6ce2dc85dd6a9855eaa7bb58b52a25 connect-image: quay.io/strimzi/kafka@sha256:fbb08410d9595029bc4a02ed859971264e6ce2dc85dd6a9855eaa7bb58b52a25 connects2i-image: quay.io/strimzi/kafka@sha256:fbb08410d9595029bc4a02ed859971264e6ce2dc85dd6a9855eaa7bb58b52a25 mirrormaker-image: quay.io/strimzi/kafka@sha256:fbb08410d9595029bc4a02ed859971264e6ce2dc85dd6a9855eaa7bb58b52a25 mirrormaker2-image: quay.io/strimzi/kafka@sha256:fbb08410d9595029bc4a02ed859971264e6ce2dc85dd6a9855eaa7bb58b52a25}},imagePullPolicy=null,imagePullSecrets=null,operatorNamespace=operators,operatorNamespaceLabels=null,rbacScope=CLUSTER,customResourceSelector=null,featureGates=FeatureGates(controlPlaneListener=false,ServiceAccountPatching=false))
2021-07-21 15:38:47 ERROR PlatformFeaturesAvailability:152 - Detection of Kubernetes version failed.
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:53) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.ClusterOperationsImpl.fetchVersion(ClusterOperationsImpl.java:57) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.DefaultKubernetesClient.getVersion(DefaultKubernetesClient.java:501) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.DefaultKubernetesClient.getVersion(DefaultKubernetesClient.java:496) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.strimzi.operator.PlatformFeaturesAvailability.lambda$getVersionInfoFromKubernetes$5(PlatformFeaturesAvailability.java:150) ~[io.strimzi.operator-common-0.24.0.jar:0.24.0]
	at io.vertx.core.impl.ContextImpl.lambda$null$0(ContextImpl.java:160) ~[io.vertx.vertx-core-4.1.0.jar:4.1.0]
	at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:96) ~[io.vertx.vertx-core-4.1.0.jar:4.1.0]
	at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:158) ~[io.vertx.vertx-core-4.1.0.jar:4.1.0]
	at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76) ~[io.vertx.vertx-core-4.1.0.jar:4.1.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty.netty-common-4.1.65.Final.jar:4.1.65.Final]
	at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.text.ParseException: Unparseable date: "2021-07-07-002919"
	at java.text.DateFormat.parse(DateFormat.java:395) ~[?:?]
	at io.fabric8.kubernetes.client.VersionInfo$Builder.withBuildDate(VersionInfo.java:106) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.ClusterOperationsImpl.fetchVersionInfoFromResponse(ClusterOperationsImpl.java:73) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.ClusterOperationsImpl.fetchVersion(ClusterOperationsImpl.java:55) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	... 11 more
2021-07-21 15:38:47 ERROR Main:91 - Failed to gather environment facts
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:53) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.ClusterOperationsImpl.fetchVersion(ClusterOperationsImpl.java:57) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.DefaultKubernetesClient.getVersion(DefaultKubernetesClient.java:501) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.DefaultKubernetesClient.getVersion(DefaultKubernetesClient.java:496) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.strimzi.operator.PlatformFeaturesAvailability.lambda$getVersionInfoFromKubernetes$5(PlatformFeaturesAvailability.java:150) ~[io.strimzi.operator-common-0.24.0.jar:0.24.0]
	at io.vertx.core.impl.ContextImpl.lambda$null$0(ContextImpl.java:160) ~[io.vertx.vertx-core-4.1.0.jar:4.1.0]
	at io.vertx.core.impl.AbstractContext.dispatch(AbstractContext.java:96) ~[io.vertx.vertx-core-4.1.0.jar:4.1.0]
	at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:158) ~[io.vertx.vertx-core-4.1.0.jar:4.1.0]
	at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76) ~[io.vertx.vertx-core-4.1.0.jar:4.1.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [io.netty.netty-common-4.1.65.Final.jar:4.1.65.Final]
	at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.text.ParseException: Unparseable date: "2021-07-07-002919"
	at java.text.DateFormat.parse(DateFormat.java:395) ~[?:?]
	at io.fabric8.kubernetes.client.VersionInfo$Builder.withBuildDate(VersionInfo.java:106) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.ClusterOperationsImpl.fetchVersionInfoFromResponse(ClusterOperationsImpl.java:73) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	at io.fabric8.kubernetes.client.dsl.internal.ClusterOperationsImpl.fetchVersion(ClusterOperationsImpl.java:55) ~[io.fabric8.kubernetes-client-5.4.1.jar:?]
	... 11 more

The steps I took to deploy strimzi are below.

Confirm that operatorhub is installed

$ kubectl get pods -n olm
NAME                                READY   STATUS    RESTARTS   AGE
catalog-operator-64bd4f69f6-dwqr9   1/1     Running   0          2m43s
olm-operator-789475dcf9-jft2z       1/1     Running   0          2m43s
operatorhubio-catalog-pdpxl         1/1     Running   0          2m21s
packageserver-759454b88b-ff47x      1/1     Running   0          2m21s
packageserver-759454b88b-xdsd8      1/1     Running   0          2m21s

Install the kafka operator

kubectl create -f https://operatorhub.io/install/strimzi-kafka-operator.yaml
kubectl get csv -n operators -w

Add hostname check to script

If the hostname of the device is not a fully qualified domain name (FQDN), the cluster will fail on restart. install.sh could include a check that the hostname is fully qualified.
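
A minimal sketch of such a check, assuming it is added near the top of install.sh and that a name containing at least one dot counts as fully qualified (message wording and placement are up to the maintainers):

# Sketch: abort early if the hostname is not fully qualified.
HOSTNAME_FQDN=$(hostname -f 2>/dev/null || hostname)
if [[ "${HOSTNAME_FQDN}" != *.* ]]; then
    echo "ERROR: hostname '${HOSTNAME_FQDN}' is not fully qualified; set an FQDN before installing MicroShift" >&2
    exit 1
fi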

No dynamic PV provisioning with storageclass kubevirt-hostpath-provisioner?

I created a PVC but the PV is not being allocated automatically.
Do I have to manually create every PV like pv-test.sh is doing?

Here is what I tried:

[root@server ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 (Ootpa)

[root@server ~]# microshift version
Microshift Version: v0.2-0-g028fbb3
[root@server ~]# kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
db-noobaa-db-pg-0   Pending                                      kubevirt-hostpath-provisioner   11m
[root@server ~]# kubectl describe pvc
Name:          db-noobaa-db-pg-0
Namespace:     noobaa
StorageClass:  kubevirt-hostpath-provisioner
Status:        Pending
Volume:
Labels:        app=noobaa
               noobaa-db=postgres
Annotations:   volume.beta.kubernetes.io/storage-provisioner: kubevirt.io/hostpath-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       noobaa-db-pg-0
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  ExternalProvisioning  92s (x63 over 16m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "kubevirt.io/hostpath-provisioner" or manually created by system administrator
[root@server ~]# kubectl get pv
No resources found
[root@server ~]# kubectl get sc
NAME                                      PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
kubevirt-hostpath-provisioner (default)   kubevirt.io/hostpath-provisioner   Delete          Immediate           false                  18h
[root@server ~]# kubectl get all -n kubevirt-hostpath-provisioner
NAME                                      READY   STATUS    RESTARTS   AGE
pod/kubevirt-hostpath-provisioner-v7psx   1/1     Running   0          5m51s

NAME                                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/kubevirt-hostpath-provisioner   1         1         1       1            1           <none>          18h

[root@server ~]# kubectl logs kubevirt-hostpath-provisioner-v7psx -n kubevirt-hostpath-provisioner
I0701 19:27:32.393613       1 hostpath-provisioner.go:82] initiating kubevirt/hostpath-provisioner on node: server.kube-nfs-kerberos.redhat-et
I0701 19:27:32.395143       1 hostpath-provisioner.go:277] creating provisioner controller with name: kubevirt.io/hostpath-provisioner
I0701 19:27:32.395360       1 controller.go:772] Starting provisioner controller kubevirt.io/hostpath-provisioner_kubevirt-hostpath-provisioner-v7psx_e4930e8e-bc65-48b5-a06e-7ba1c79400a5!
I0701 19:27:32.495474       1 controller.go:821] Started provisioner controller kubevirt.io/hostpath-provisioner_kubevirt-hostpath-provisioner-v7psx_e4930e8e-bc65-48b5-a06e-7ba1c79400a5!
I0701 19:27:32.495548       1 hostpath-provisioner.go:95] isCorrectNodeByBindingMode mode: Immediate

Add E2E CI Automation

We need CI automation to deploy a testing environment (#131) and execute smoke tests (#100). This should be executed when a PR is opened, when a PR receives a new commit, and after a PR is merged into main. Tests triggered by PRs should block the PR until the tests pass. Tests triggered on merges to main should post the pass/fail of the test as a notification to project owners/admins.

Use consistent naming

We currently mix "microshift" and "ushift". Proposal is to use "microshift" consistently everywhere (help messages, dirs, binary, ...).

Optionally start OpenShift components

In the current implementation, all components in pkg/assets are started. We need to fine-tune the process so that startup is configurable and only the components allowed in the configuration file are turned on.

get pods takes an hour (60 minutes) to complete

I'm on macOS and did the following:

docker pull docker.io/rootfs/ushift:macos
docker volume create ushift-vol
docker run -d --rm --name ushift --privileged -v /lib/modules:/lib/modules -v ushift-vol:/var/lib -p 6443:6443 docker.io/rootfs/ushift:macos
docker exec -ti ushift bash
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
systemctl status microshift

So far so good, but then calls via kubectl freeze (it does not matter whether I run them from inside or outside the container):

exit
docker cp ushift:/var/lib/microshift/resources/kubeadmin/kubeconfig ./kubeconfig
kubectl get pods -A -w --kubeconfig ./kubeconfig
NAMESPACE                       NAME                                  READY   STATUS              RESTARTS   AGE
kube-system                     kube-flannel-ds-xg54z                 1/1     Running             0          21h
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-5zj7n   0/1     Evicted             0          11m
openshift-dns                   dns-default-4dsx5                     0/3     ContainerCreating   0          21h
openshift-ingress               router-default-78d9fc46bb-jwjqt       0/1     Pending             0          21h
openshift-service-ca            service-ca-66bcdfc59-gw78f            0/1     Pending             0          21h

Issue/PR Templates

#157 adds a generic PR template, plus Bug Report and Enhancement issue templates, to help structure contributor submissions.

[DOC] Need developer quick-start documentation

First, Thank you for merging my first commits on the install.sh script! This is a really cool project!

Issue:

Following the repo's code and running the procedure below on clean Fedora 34 and RHEL 8.4 boxes, I reach the same failure.

Both the service-ca-controller and kubevirt-hostpath-provisioner containers fail on the path /var/run/secrets/kubernetes.io/serviceaccount/token.

Failure:

[root@fedora34-n01 microshift]# crictl logs bad3e6efce244
W0616 17:19:25.677054       1 cmd.go:200] Using insecure, self-signed certificates
I0616 17:19:25.677259       1 crypto.go:588] Generating new CA for service-ca-controller-signer@1623863965 cert, and key in /tmp/serving-cert-282934419/serving-signer.crt, /tmp/serving-cert-282934419/serving-signer.key
I0616 17:19:25.933813       1 observer_polling.go:52] Starting from specified content for file "/var/run/secrets/serving-cert/tls.crt"
I0616 17:19:25.933885       1 observer_polling.go:52] Starting from specified content for file "/var/run/secrets/serving-cert/tls.key"
F0616 17:19:25.933943       1 cmd.go:125] open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied

[root@fedora34-n01 microshift]# crictl logs e95272e1aa1ca
F0616 17:19:19.983312       1 hostpath-provisioner.go:259] Failed to create config: open /var/run/secrets/kubernetes.io/serviceaccount/token: permission denied

[root@fedora34-n01 microshift]# ls /var/run/secrets
ls: cannot access '/var/run/secrets': No such file or directory

Pods:

[root@fedora34-n01 microshift]# crictl ps --all
CONTAINER           IMAGE                                                                                                           CREATED             STATE               NAME                            ATTEMPT             POD ID
bad3e6efce244       60cd591c9b8ca14a10722093c9e53fef1a995e4b16d8c3da7fa1c0265dcebc2d                                                23 seconds ago      Exited              service-ca-controller           14                  8ecc0a87aed9a
e95272e1aa1ca       quay.io/kubevirt/hostpath-provisioner@sha256:4f742df37462129a4307cdcd2b5eaf7a4069f455997d61ab26f362760eadd6bc   29 seconds ago      Exited              kubevirt-hostpath-provisioner   14                  6b86ff08e5bc2
a95e73774497f       8522d622299ca431311ac69992419c956fbaca6fa8289c76810c9399d17c69de                                                47 minutes ago      Running             kube-flannel                    0                   c432caafe9f3a
fdd543d040261       quay.io/coreos/flannel@sha256:4a330b2f2e74046e493b2edc30d61fdebbdddaaedcb32d62736f25be8d3c64d5                  47 minutes ago      Exited              install-cni                     0                   c432caafe9f3a

Method:

# as root
dnf install make go -y
go get -u github.com/go-bindata/go-bindata/...
git clone https://github.com/redhat-et/microshift && cd microshift
git checkout install
make microshift
./install.sh   # removed the '--now' flag from the systemctl enable microshift
systemctl start microshift

install.sh not working on RHEL 8.4

OS Details

NAME="Red Hat Enterprise Linux"
VERSION="8.4 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.4"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.4 (Ootpa)"

Issue

I'm attempting to install on an AWS EC2 instance running RHEL 8.4. I successfully ran
curl -sfL https://raw.githubusercontent.com/redhat-et/microshift/main/install.sh > install.sh. However, when executing install.sh, it gets stuck (hangs) on line 167 (register_subs) below:

# Script execution
get_distro
get_arch
if [ $DISTRO = "rhel" ]; then
    register_subs
fi

Within the function register_subs(), the command sudo subscription-manager status seems to be the one that's hanging. When I run this command in my shell myself, it displays

[root@ip-166-28-35-54 ec2-user]# sudo subscription-manager status
+-------------------------------------------+
   System Status Details
+-------------------------------------------+
Overall Status: Unknown

and hangs until I CTRL+C.
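
One possible mitigation (a sketch, not the project's actual fix) is to bound the call with coreutils timeout so install.sh fails fast with a clear message instead of hanging; note that subscription-manager status can also return non-zero on registered-but-non-compliant systems, so the exact check would need care:

# Sketch: fail fast if subscription-manager does not respond.
if ! timeout 60 sudo subscription-manager status; then
    echo "ERROR: subscription-manager did not respond or reported a problem; check the system's subscription state" >&2
    exit 1
fi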

[Enhancement]: hostpath image should not be latest

Set the hostpath image to a version rather than latest

Design Document Link

PR: TBD

What would you like to be added:

Pin the hostpath image to a specific version and verify it works

Why is this needed:

Using latest may set us up for issues in the future

install error on subscription-manager register (This system is already registered. Use --force to override)

Hi
I failed running the install script on RHEL 8.4:

[root@server ~]# curl -sfL https://raw.githubusercontent.com/redhat-et/microshift/main/install.sh | sh -
This system is already registered. Use --force to override

Running with -x:

[root@server ~]# curl -sfL https://raw.githubusercontent.com/redhat-et/microshift/main/install.sh | sh -x -
+ set -e -o pipefail
++ curl -s https://api.github.com/repos/redhat-et/microshift/releases
++ grep tag_name
++ cut -d '"' -f 4
+ VERSION=v0.2
+ get_distro
++ egrep '^(ID)=' /etc/os-release
++ sed 's/"//g'
++ cut -f2 -d=
+ DISTRO=rhel
+ [[ rhel != @(rhel|fedora|centos) ]]
+ get_arch
++ uname -m
+ ARCH=x86_64
+ '[' rhel = rhel ']'
+ register_subs
+ sudo subscription-manager register --auto-attach
This system is already registered. Use --force to override

I bypassed it by saving the script, and editing it to comment out this line.
I am not familiar enough with subscription-manager to suggest a PR, but if you can guide me on the right approach I would be happy to suggest a fix.

Thanks!
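
One possible approach (a sketch; using subscription-manager identity to detect an existing registration is an assumption, and the maintainers may prefer a different check) would be to skip registration when the system is already registered:

# Sketch: only register if the system is not registered yet.
register_subs() {
    if sudo subscription-manager identity >/dev/null 2>&1; then
        echo "System is already registered, skipping subscription-manager register"
    else
        sudo subscription-manager register --auto-attach
    fi
    # any remaining steps of the original register_subs would follow here
}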

Add release script

Add a script to manage MicroShift releases. The script should accept a commit hash as a command-line argument specifying the commit to build and publish. The script should generate a version for the build in the following format: 4.7.0-0.microshift-YYYY-MM-DD-HHMMSS. This version must be applied to the commit as a git tag and pushed to the repo, as well as used to tag release images.
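
A sketch of the version-generation and tagging part, assuming the commit hash is passed as the first argument (the quay.io/microshift/microshift image name comes from this issue's requirements):

# Sketch: generate the release version for a given commit and tag it.
COMMIT="$1"
VERSION="4.7.0-0.microshift-$(date -u +%Y-%m-%d-%H%M%S)"
git tag "${VERSION}" "${COMMIT}"
git push origin "${VERSION}"
# The same version string is then reused for the container image tag, e.g.
#   quay.io/microshift/microshift:${VERSION}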

Build artifacts should be

  • Microshift cross-compiled binaries (amd64 & arm64). The name of the binaries should be formatted as microshift-$OS-$ARCH, e.g. microshift-linux-amd64. The binaries should be published to https://github.com/redhat-et/microshift/releases and the generated version should be set as the release title.
  • There should be a sha256 file containing the checksums of all release artifacts.
  • A container image. The image should be tagged as quay.io/microshift/microshift:$VERSION and pushed to the quay repo. The container should be a multi-architecture[0] image, in order to provide a clean and uniform user experience when pulling images.

[0] https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/

Clean up templating of resources

We currently mix Go templating and string concatenation when writing configs for assets. Change to consistently use one or the other.

Add smoke testing scripts

Create a set of smoke tests to be executed against a running MicroShift cluster. Each test should encapsulate its workflow and manifests as an isolated unit of work, e.g. ./test/pv-test.sh, and should clean up after itself on completion. (A sketch of one such test follows the list below.)

The manifests should define common OpenShift workloads (e.g. create a deployment and wait for pods to run; define a pod and PVC, wait for the PVC to provision and the pod to run; etc.) and produce a quantifiable pass/fail result. The test script is responsible for deploying the manifests, monitoring their state, and reporting pass/fail as well as logs relevant to the components being tested.

Tests to add:

  • Pod scaling: increasing or decreasing the replicas value of a Deployment scales pods accordingly
  • Cluster Networking: create a Pod and associated Service, ensure Pod is reachable via the Service endpoint
  • TBD
  • ...
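
A minimal sketch of what one such self-contained test could look like (namespace and image are illustrative; a real test would deploy the manifests described in this issue):

#!/bin/bash
# Sketch: pod-scaling smoke test — create a deployment, scale it, verify, clean up.
set -euo pipefail
NS="smoke-$(date +%s)"
kubectl create namespace "${NS}"
trap 'kubectl delete namespace "${NS}" --wait=false' EXIT
kubectl -n "${NS}" create deployment smoke --image=busybox -- sleep 3600
kubectl -n "${NS}" scale deployment/smoke --replicas=3
kubectl -n "${NS}" rollout status deployment/smoke --timeout=120s
[ "$(kubectl -n "${NS}" get pods --field-selector=status.phase=Running -o name | wc -l)" -eq 3 ] && echo PASS || { echo FAIL; exit 1; }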

Pre-loading container images into CRI-O

MicroShift needs a built-in mechanism to pre-load container images into nodes' CRI-O instances so that it can bootstrap its component services a) without network connectivity and b) without local image registry.

The mechanism should monitor /var/lib/microshift/images for oci-archives or multi-image docker-archives as produced by podman save and pre-load them into CRI-O on cluster initialization or whenever that directory changes.
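
A rough sketch of the loading step under those assumptions, using skopeo's oci-archive and containers-storage transports (the destination reference is a placeholder; deriving it from the archive, handling docker-archives, and guaranteeing digest preservation are left open here):

# Sketch: load OCI archives from the watched directory into the
# containers-storage instance that CRI-O reads images from.
IMAGE_DIR=/var/lib/microshift/images
for archive in "${IMAGE_DIR}"/*.tar; do
    [ -e "${archive}" ] || continue
    REF="quay.io/example/image:tag"   # placeholder: must match the reference used in manifests
    skopeo copy "oci-archive:${archive}" "containers-storage:${REF}"
done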

Requirements:

  • MUST NOT change image digests, so the images can be referenced from K8s manifests by digest, too.
  • MUST be compatible with multi-arch images and having multiple tarballs/versions of images (e.g. for updates)
  • SHOULD unzip tarballs if necessary.
  • SHOULD support deleting pre-loaded images from /var/lib/microshift/images and subsequently pruning from CRI-O.

Add version info

Add apimachinery version info, add a new command to display the version, populate it via GO_LD_EXTRAFLAGS (until the Makefile is switched to a library), and remove the version patches for component-base and client-go.
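
A sketch of how such version information is typically injected at build time with -ldflags (the variable paths and the ./cmd/microshift output path are illustrative, not necessarily the exact ones MicroShift ends up using):

# Sketch: embed version metadata into the binary at build time.
GIT_VERSION=$(git describe --tags --always)
BUILD_DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ)
GO_LD_EXTRAFLAGS="-X k8s.io/component-base/version.gitVersion=${GIT_VERSION} -X k8s.io/component-base/version.buildDate=${BUILD_DATE}"
go build -ldflags "${GO_LD_EXTRAFLAGS}" -o microshift ./cmd/microshift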

[BUG] Certs are not initialised when /var/lib/microshift dir is present

What happened:

MicroShift doesn't initialise certs, configs, etc. if /var/lib/microshift dir exists prior to running MicroShift.

What you expected to happen:

MicroShift should initialise certs when they are actually missing.

How to reproduce it (as minimally and precisely as possible):

  1. mkdir /var/lib/microshift
  2. install & run microshift

Anything else we need to know?:

A reason the /var/lib/microshift directory might be present before the "first run" is the preloading of images or manifests.

Environment:

Relevant Logs

How can I use internal image registry with Microshift?

Hi team,

I find this project very useful and am trying to use it with my own project, but as soon as I ran it I noticed that an image registry is not available. Apparently, by design, MicroShift doesn't install any operators by default, so neither the Image Registry Operator nor configs.imageregistry.operator.openshift.io/cluster exists.

Any advice on how to use an image registry with MicroShift? Is there a plan to provide an easy way to install and enable one?

Different pods get same IP address and get restarted constantly

I sometimes see the following behavior:

[vagrant@fedora ~]$ kubectl get pods -A -o wide -w
NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
kube-system                     kube-flannel-ds-d55c4                 1/1     Running   0          25m   10.0.2.15   fedora   <none>           <none>
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-cqkv9   1/1     Running   0          25m   10.42.0.2   fedora   <none>           <none>
openshift-dns                   dns-default-qkhxz                     3/3     Running   0          25m   10.42.0.2   fedora   <none>           <none>
openshift-ingress               router-default-78d9fc46bb-5vbk2       1/1     Running   3          25m   10.42.0.3   fedora   <none>           <none>
openshift-service-ca            service-ca-66bcdfc59-sslh4            1/1     Running   2          26m   10.42.0.3   fedora   <none>           <none>

After a reboot, the pods get different IP addresses and become stable:

NAMESPACE                       NAME                                  READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
kube-system                     kube-flannel-ds-d55c4                 1/1     Running   0          27m   10.0.2.15   fedora   <none>           <none>
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-cqkv9   1/1     Running   0          27m   10.42.0.5   fedora   <none>           <none>
openshift-dns                   dns-default-qkhxz                     3/3     Running   0          27m   10.42.0.6   fedora   <none>           <none>
openshift-ingress               router-default-78d9fc46bb-5vbk2       1/1     Running   0          27m   10.42.0.4   fedora   <none>           <none>
openshift-service-ca            service-ca-66bcdfc59-sslh4            1/1     Running   0          27m   10.42.0.7   fedora   <none>           <none>

Remove unnecessary k8s command line flags

K8s injects lots of flags, most of which aren't meaningful or even used in a Microshift context, e.g. the cloud-provider and docker flags. We should only expose a clean, minimal set of args. The issue, of course, is AddGoFlagSet, but removing that means some of the K8s components don't start. If this cannot be easily solved, we may fall back to hiding those flags (flagset.MarkHidden).

Integrate into OpenShift CI

Integrate this repo into OpenShift CI, running at least go fmt, go vet, and go test, as well as uploading image builds. Remove Travis.

Blocked on #75.

Create install.sh

Create an install.sh script in the root of the repo for installing MicroShift. The script shall perform the necessary installation and configuration of dependencies (from RPMs) on the host system, then install and (by default) start the MicroShift binary (not a container on Podman). Allow installing it as a systemd unit. Stretch goal: allow running as a container on Podman managed by systemd (as generated by podman generate systemd).
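
For the stretch goal, a rough sketch of running MicroShift as a Podman container managed by systemd (image name and run flags are illustrative, borrowed from elsewhere on this page):

# Sketch: run MicroShift as a container and let systemd manage it.
podman run -d --name microshift --privileged \
    -v /lib/modules:/lib/modules -v microshift-data:/var/lib -p 6443:6443 \
    quay.io/microshift/microshift:latest
podman generate systemd --new --name microshift > /etc/systemd/system/microshift.service
systemctl daemon-reload
systemctl enable --now microshift.service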

Smoke/E2E Testing Infra Automation

Add a script to deploy infra for smoke and e2e testing. The script should deploy a VM instance on a cloud platform (PSI, AWS, GCP, etc.), then download and deploy MicroShift. The script should return a kubeconfig to allow external access to the MicroShift cluster; this implies that the instance's network should be configured to support external client connections. The OS environment should be configured using install.sh (n.b. at present the install script installs a release of MicroShift, managed by systemd).

The script should handle deployment failures gracefully and not orphan resources. The script should also be capable of tearing down testing infra on demand (meaning it should remember what it deployed). In this way, the script will be used to bookend smoke and e2e testing: first run to deploy the infra, then to clean up once testing completes (a skeleton sketch follows the requirements list below).

Requirements:

  • Should support AWS, GCP, Azure and PSI (our private cluster farm)
  • Should parameterize an SSH key file path and configure an ssh connection to the instance
  • Should parameterize operating system by taking a cloud specific image identifier (e.g. an AMI ID on AWS)
  • Should parameterize a git commit and repo url (defaulting to main:HEAD of github.com/redhat-et/microshift). The script will download the source, compile it, and install it.
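
A skeleton sketch of the bookending behaviour (all names are hypothetical and cloud-specific provisioning is elided), recording what was deployed in a state file so teardown can find it:

#!/bin/bash
# Sketch: deploy/teardown wrapper that records created resources.
set -euo pipefail
STATE_FILE=".microshift-ci-infra.json"
case "${1:-}" in
  deploy)
    # ... provision a VM on the chosen cloud, install MicroShift via install.sh ...
    echo '{"instance_id": "i-0123456789abcdef0"}' > "${STATE_FILE}"   # placeholder
    # ... fetch and print the kubeconfig for external access ...
    ;;
  teardown)
    [ -f "${STATE_FILE}" ] || { echo "nothing to tear down"; exit 0; }
    # ... delete the resources recorded in ${STATE_FILE} ...
    rm -f "${STATE_FILE}"
    ;;
  *) echo "usage: $0 {deploy|teardown}" >&2; exit 1 ;;
esac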

[BUG] error building at STEP "RUN yum install glibc-static -y": error while running runtime: exit status 1

What happened:

glibc-static issues when attempting to build from source

What you expected to happen:

Successful build from source

How to reproduce it (as minimally and precisely as possible):

  1. I'm running the release.sh script and it triggers the failure.
  2. I believe @guymguym, @sallyom, and @husky-parul have attempted to build the project from source as well and hit a similar issue.

Anything else we need to know?:

Environment:

  • Microshift version (use microshift version): All
  • Hardware configuration: amd64
  • OS (e.g: cat /etc/os-release): Fedora but using ubi8 during build
  • Kernel (e.g. uname -a):
  • Others:

Relevant Logs

STEP 10: RUN yum install glibc-static -y
exec container process `/bin/sh`: Exec format error
STEP 11: FROM registry.access.redhat.com/ubi8/ubi-minimal:8.4
Error: error building at STEP "RUN yum install glibc-static -y": error while running runtime: exit status 1
make[2]: *** [Makefile:86: _build_containerized] Error 125
make[2]: Leaving directory '/home/rcook/git/microshift'
make[1]: *** [Makefile:101: build-containerized-cross-build-linux-arm64] Error 2
make[1]: Leaving directory '/home/rcook/git/microshift'
make: *** [Makefile:106: build-containerized-cross-build] Error 2

Refactor embedded services, improve startup and shutdown behavior

  1. Currently, the component services embedded into Microshift have their initialization, configuration, and execution logic spread over multiple places in the repo. That logic should be consolidated into one place per component and at the same time be standardized onto a common interface for managing component service lifecycle.

  2. Standardizing the service interface also allows us to implement a generic logic for ensuring startup-ordering (waiting for dependencies to become ready before starting a dependent service) as well as cancellation.

  3. For some component services (kube-apiserver, kube-scheduler, ...) we do not implement cancellation yet: Executing these services via their Cobra commands means they try to grab the system interrupts (SIGTERM et al.) for themselves and only stop on these interrupts rather than a stop channel.

  4. While at it, implement signalling readiness of MicroShift (incl. its embedded services) to systemd and cleanly shut down on SIGTERM and SIGHUP signals.

Failed to get CPU

I attempted to install Istio on the cluster and it looks like it crashes the MicroShift deployment when it attempts to get the CPU resource metric.

I was trying to install istio-1.10.3

-- Journal begins at Wed 2021-07-21 12:31:16 EDT, ends at Wed 2021-07-21 16:17:29 EDT. --
Jul 21 16:12:48 fedora microshift[944]: I0721 16:12:48.133828     944 event.go:291] "Event occurred" object="istio-system/istiod" kind="HorizontalPodAutoscaler" apiVersion="autoscaling/v2beta2" type="Warning" reason="FailedGetResourceMetric" message="failed to get cpu ut>
Jul 21 16:12:48 fedora microshift[944]: I0721 16:12:48.133917     944 event.go:291] "Event occurred" object="istio-system/istiod" kind="HorizontalPodAutoscaler" apiVersion="autoscaling/v2beta2" type="Warning" reason="FailedComputeMetricsReplicas" message="invalid metrics>
Jul 21 16:12:48 fedora microshift[944]: E0721 16:12:48.149944     944 horizontal.go:227] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/activator: invalid metrics (1 invalid out of 1), first error is: failed to get cpu>
Jul 21 16:12:48 fedora microshift[944]: I0721 16:12:48.150741     944 event.go:291] "Event occurred" object="knative-serving/activator" kind="HorizontalPodAutoscaler" apiVersion="autoscaling/v2beta2" type="Warning" reason="FailedGetResourceMetric" message="failed to get >
Jul 21 16:12:48 fedora microshift[944]: I0721 16:12:48.150840     944 event.go:291] "Event occurred" object="knative-serving/activator" kind="HorizontalPodAutoscaler" apiVersion="autoscaling/v2beta2" type="Warning" reason="FailedComputeMetricsReplicas" message="invalid m>
Jul 21 16:12:48 fedora microshift[944]: E0721 16:12:48.157789     944 horizontal.go:227] failed to compute desired number of replicas based on listed metrics for Deployment/knative-serving/webhook: invalid metrics (1 invalid out of 1), first error is: failed to get cpu u>
Jul 21 16:12:48 fedora microshift[944]: I0721 16:12:48.157978     944 event.go:291] "Event occurred" object="knative-serving/webhook" kind="HorizontalPodAutoscaler" apiVersion="autoscaling/v2beta2" type="Warning" reason="FailedGetResourceMetric" message="failed to get cp>
Jul 21 16:12:48 fedora microshift[944]: I0721 16:12:48.158060     944 event.go:291] "Event occurred" object="knative-serving/webhook" kind="HorizontalPodAutoscaler" apiVersion="autoscaling/v2beta2" type="Warning" reason="FailedComputeMetricsReplicas" message="invalid met>
Jul 21 16:12:48 fedora microshift[944]: {"level":"warn","ts":"2021-07-21T16:12:48.307-0400","caller":"etcdserver/util.go:163","msg":"apply request took too long","took":"144.617917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/even>
Jul 21 16:12:48 fedora microshift[944]: {"level":"info","ts":"2021-07-21T16:12:48.307-0400","caller":"traceutil/trace.go:145","msg":"trace[620089184] range","detail":"{range_begin:/registry/events/istio-system/istiod.1693e767a773fc38; range_end:; response_count:1; respon>
Jul 21 16:12:48 fedora microshift[944]: {"level":"warn","ts":"2021-07-21T16:12:48.549-0400","caller":"etcdserver/util.go:163","msg":"apply request took too long","took":"198.767186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods>
Jul 21 16:12:48 fedora microshift[944]: {"level":"info","ts":"2021-07-21T16:12:48.550-0400","caller":"traceutil/trace.go:145","msg":"trace[21190676] range","detail":"{range_begin:/registry/pods/olm/packageserver-8848f6957-cwkrs; range_end:; response_count:1; response_rev>
Jul 21 16:12:48 fedora microshift[944]: {"level":"warn","ts":"2021-07-21T16:12:48.920-0400","caller":"etcdserver/util.go:163","msg":"apply request took too long","took":"359.244258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/even>
Jul 21 16:12:48 fedora microshift[944]: {"level":"info","ts":"2021-07-21T16:12:48.921-0400","caller":"traceutil/trace.go:145","msg":"trace[1570538332] range","detail":"{range_begin:/registry/events/knative-serving/webhook.1693e767a83d4626; range_end:; response_count:1; r>

Originally posted by @tosin2013 in #172 (comment)

microshift VERSION problem during install

In install.sh, the get_microshift function fails with "not found" during curl.

The cause could be incorrect release identification when the VERSION variable is set, because the releases API returns multiple tags:

$ curl -s https://api.github.com/repos/redhat-et/microshift/releases | grep tag_name | cut -d '"' -f 4
0.4.7-0.microshift-2021-07-07-002815
v0.2

Setting the VERSION variable manually makes it work:

#!/bin/sh
set -e -o pipefail

# Usage:
# ./install.sh

VERSION=$(curl -s https://api.github.com/repos/redhat-et/microshift/releases | grep tag_name | cut -d '"' -f 4)
VERSION=0.4.7-0.microshift-2021-07-07-002815
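
One possible workaround (a sketch; note the /releases/latest endpoint only returns the release marked "latest", which may or may not be the behaviour the installer ultimately wants):

# Sketch: query only the release marked "latest" so that tags from other
# releases do not leak into VERSION.
VERSION=$(curl -s https://api.github.com/repos/redhat-et/microshift/releases/latest | grep '"tag_name"' | cut -d '"' -f 4)
echo "Selected MicroShift version: ${VERSION}"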

Refactor embedded services to service interface and manager

With #152 and #153 we've started restructuring MicroShift such that embedded services (etcd, kube-apiserver, etc.) have their logic self-contained (one file per service), conform to a common Service interface, and use a service manager to implement startup dependencies and cancellation consistently.

So far, only etcd and kube-apiserver have been migrated over. We need all other components to be migrated, too:

  • kube-controller-manager
  • kube-scheduler
  • openshift-preparation (namespaces, SCCs, CRDs, etc.)
  • openshift-apiserver
  • openshift-controller-manager
  • kubelet
  • kubeproxy
  • component-loader (see #144)

Services can likely be migrated over one-by-one in multiple PRs / by multiple people.
