devfile / devworkspace-operator
License: Apache License 2.0
Currently, the DevWorkspace operator is only deployable by setting environment variables and running kustomize. To support deploying the operator via chectl, we need to publish processed yamls that can be referenced in chectl without the need to run kustomization. This could potentially be done via GitHub releases, to avoid filling the repo with thousands of lines of yaml.
We need to complete the endpoint implementations needed for Theia plugins:
Theia defines protocol, discoverable, type, and path in https://github.com/eclipse/che-plugin-registry/blob/master/v3/plugins/redhat/java11/0.63.0a/meta.yaml, which have somewhat different locations in the devfile 2.0 format.
Che-Theia (or other workspaces) may want to know which registries (plug-in and/or devfile) were used to create the workspace. (For example, che-theia uses this to list all other plug-ins and the ones that are enabled.)
There is a config map for the devworkspace controller, but workspaces won't have permission to read from it.
Registry URLs should be available inside a DevWorkspace annotation, a config map, or a mounted information file in /var/run/secrets/devfile.io/ that can be used to grab these URLs or anything else.
As part of our next milestone, we need to support storage similar to asynchronous storage in Che. Hopefully, we can provision async storage in the controller itself (similar to how common storage is currently implemented). We need to investigate what work is required and implement required changes if possible.
Now that devfile/api#221 is merged, we should use the new field to give a short description explaining why a workspace failed (e.g. "plugin not found", "openshift-oauth routing is only supported on OpenShift").
This will require updating the devfile/api dependency, and IIRC there are some incompatibilities that need to be fixed.
The secure routing we proposed for the devworkspace was htpasswd. It's simple, but it does not seem to provide a good UX for Kubernetes cluster users. So, instead, we should investigate OpenID Connect routing, which can be used if the K8s cluster is configured with such authentication: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens
The devworkspace operator should ideally be free from any OpenID provider implementation. An OpenID server we can try for such an integration: https://github.com/dexidp/dex
The fields commands and projects of a DevWorkspace are not managed by the DevWorkspace operator. That means that tools that use those fields need to:
For example, in a Che-Theia workspace scenario, if a new project is added to a DevWorkspace, Che-Theia needs to immediately git clone it and add a new workspace. If the editor is not Che-Theia but IntelliJ, we need to implement the same on that editor too.
That has a couple of problems:
Possible reconciliations handled by the DevWorkspace controller:
- projects --> #205
- commands --> CM mounted as files?

Implement support for devfile plugins specified by URI
We currently support basic metrics (while in dev mode/experimental features enabled) to track how long it takes to start a DevWorkspace. This support should be expanded to report more info about the operator, and the Operator metrics endpoint should be used.
Kubebuilder docs
We need to implement a Volume component that defines a PVC or emptyDir with size configuration.
```yaml
components:
  - name: maven
    container:
      image: registry.redhat.io/codeready-workspaces/stacks-java-rhel8:2.1
      mountSources: true
      volumeMounts:
        - name: my-storage
          path: /home/jboss/.settings
  - name: my-storage
    volume:
      size: 500Mi
```
If a component references a non-declared volume, should the devworkspace fail to start, or fail to be created? Probably fail to start, since such validation needs external resources to be fetched, as in the plugins case. See below.
Should mountSources: true be converted to a volume with the name projects?
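The fail-to-start validation could be a simple pass over the flattened components. A minimal Go sketch, with stand-in types (the real operator would use the devfile/api types, and the function name is hypothetical):

```go
package main

import "fmt"

// Minimal stand-ins for the devfile component shapes discussed above.
type VolumeMount struct{ Name string }

type Component struct {
	Name     string
	IsVolume bool          // true for `volume:` components
	Mounts   []VolumeMount // volumeMounts of a `container:` component
}

// undeclaredVolumes returns the names of mounted volumes that no volume
// component declares. A non-empty result would make the workspace fail
// to start, per the discussion above.
func undeclaredVolumes(components []Component) []string {
	declared := map[string]bool{}
	for _, c := range components {
		if c.IsVolume {
			declared[c.Name] = true
		}
	}
	var missing []string
	for _, c := range components {
		for _, m := range c.Mounts {
			if !declared[m.Name] {
				missing = append(missing, m.Name)
			}
		}
	}
	return missing
}

func main() {
	components := []Component{
		{Name: "maven", Mounts: []VolumeMount{{Name: "my-storage"}}},
		{Name: "my-storage", IsVolume: true},
	}
	// Everything is declared here, so no error is reported.
	fmt.Println(undeclaredVolumes(components))
}
```

The same pass could also implement the projects question: treat `mountSources: true` as an implicit mount of a volume named `projects` that is always considered declared.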
A volume can be reused/configured from a plugin, as here: https://github.com/devfile/api/blob/master/samples/devfiles/spring-boot-http-booster-devfile.yaml.
```yaml
...
components:
  - name: java-support
    plugin:
      id: redhat/java8/latest
      components:
        - name: vscode-java
          container:
            memoryLimit: 2Gi
        - name: m2 # it already has a volume defined; we just configure it
          volume:
            size: 2G
...
  - name: maven-tooling
    container:
      image: registry.redhat.io/codeready-workspaces/stacks-java-rhel8:2.1
      mountSources: true
      memoryLimit: 768Mi
      volumeMounts:
        - name: m2 # using volume from plugin definition
          path: /home/jboss/.m2
```
Which PVC strategy should we support in devworkspaces? Probably common only: one PVC per namespace. Should we implement the same isolation mechanism as in Che?
PVC structure:
```
/workspaceId1
  /volumeName1
  /volumeName2
/workspaceId2
  /volumeName1
  /volumeName2
```
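The layout above maps directly to the subPath values on volume mounts; a trivial Go sketch (the `volumeSubpath` helper is hypothetical, not the operator's actual code):

```go
package main

import (
	"fmt"
	"path"
)

// volumeSubpath builds the per-workspace subpath inside the common PVC,
// following the /workspaceId/volumeName layout sketched above.
func volumeSubpath(workspaceID, volumeName string) string {
	return path.Join("/", workspaceID, volumeName)
}

func main() {
	fmt.Println(volumeSubpath("workspaceId1", "volumeName1")) // /workspaceId1/volumeName1
}
```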
? Since plugins need volumes in initContainers, we may face issues with init subpaths from an initContainer, and we probably need to run a separate job to do it.

When we have a dev workspace, we can query resources with:
```
$ kubectl get devworkspaces --all-namespaces
```
but runtime information is missing from these objects, as well as the components. That information is stored in other custom resources. It would be nice if, from a DevWorkspace, we could have pointers to the runtime-info custom resources and component custom resources. There is a pattern-like rule with a workspace-id suffix, but it would be more convenient to know the related resource names from the DevWorkspace resource itself.
We could use labels on all workspace-related objects, carrying an identifier of the dev workspace. It may be simpler to use the workspace name rather than the workspaceId we have, since namespace/name fully identifies a devworkspace CR instance.
Most workspace-related objects are namespace-scoped (so a namespace label seems redundant), but some of them are cluster-scoped, like the OpenShift OAuth client. So to grab all workspace-related objects, we need the namespace label as well.
Example labels: io.devfile.dev_workspace.name + io.devfile.dev_workspace.namespace
Che-Theia knows the identifier of the dev workspace and will then be able to do the queries.
* name/namespace could be combined into one label if it makes sense, and if the namespace/name format satisfies the label value format (e.g. we won't exceed the maximum length). Having separate labels for name and namespace would be more straightforward, since the namespace is optional when doing a namespace-scoped query (which should be 100% of Che-Theia's cases).
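The length concern above is concrete: Kubernetes label values are limited to 63 characters and a restricted character set, so a combined "namespace.name" value can overflow where separate labels would not. A small Go sketch of that check (the `validLabelValue` helper is illustrative, not operator code):

```go
package main

import (
	"fmt"
	"regexp"
)

// Kubernetes label values must be at most 63 characters; non-empty values
// must start and end with an alphanumeric, with [-_.] allowed in between.
const maxLabelLen = 63

var labelValueRe = regexp.MustCompile(`^[A-Za-z0-9]([A-Za-z0-9._-]*[A-Za-z0-9])?$`)

// validLabelValue reports whether v can be stored as a label value.
func validLabelValue(v string) bool {
	if len(v) > maxLabelLen {
		return false
	}
	if v == "" {
		return true
	}
	return labelValueRe.MatchString(v)
}

func main() {
	// A combined "namespace.name" value risks exceeding 63 characters,
	// which is one argument for keeping separate name/namespace labels.
	fmt.Println(validLabelValue("che.my-workspace"))
	fmt.Println(validLabelValue(string(make([]byte, 64)))) // 64 chars: too long
}
```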
I created a DevWorkspace using the DevWorkspace Operator on minikube (no Eclipse Che there). Then, from the che-theia container, I tried to get workspacerouting objects and it failed:
```
/projects/tmp $ ./kubectl get workspaceroutings.controller.devfile.io/routing-workspaceeb55021d3cff42e0 -n che
Error from server (Forbidden): workspaceroutings.controller.devfile.io "routing-workspaceeb55021d3cff42e0" is forbidden: User "system:serviceaccount:che:workspaceeb55021d3cff42e0-sa" cannot get resource "workspaceroutings" in API group "controller.devfile.io" in the namespace "che"
```
but I can access the dev workspace object:
```
/projects/tmp $ ./kubectl get devworkspaces/theia -n che
NAME    WORKSPACE ID                PHASE     URL
theia   workspaceeb55021d3cff42e0   Running   http://workspaceeb55021d3cff42e0-theia-3100.192.168.64.31.nip.io
```
or the pod object:
```
/projects/tmp $ ./kubectl get pods -n che
NAME                                         READY   STATUS    RESTARTS   AGE
workspaceeb55021d3cff42e0-77f7bd767f-tld2s   3/3     Running   3          16d
```
Trying on the host (where minikube is launched), the command succeeds:
```
$ kubectl get workspaceroutings.controller.devfile.io/routing-workspaceeb55021d3cff42e0 -n che
NAME                                AGE
routing-workspaceeb55021d3cff42e0   16d
```
Lots of environment variables defined in containers use the CHE_ prefix. They shouldn't, since the operator is agnostic of Che. I would like a clear mapping between old and new names.
Also, some environment variables could be made available through files. For example, K8s stores some config in files under /var/run/secrets/kubernetes.io/serviceaccount. CHE_MACHINE_TOKEN (or others) could be a candidate for using files instead of env variables, e.g. /var/run/secrets/devfile.io/machine-token.
| ENV NAME | Replacement |
|---|---|
| CHE_API | N/A |
| CHE_API_INTERNAL | N/A |
| CHE_WORKSPACE_ID | N/A (end user info) |
| CHE_MACHINE_TOKEN | N/A |
| CHE_WORKSPACE_TELEMETRY_BACKEND_PORT | CHE_WORKSPACE_TELEMETRY_BACKEND_PORT (not defined per che server; not yet implemented). We need to check whether Devfile 2.x allows plugins to bring their env vars into containers, as supported in plugin meta.yaml |
| CHE_MACHINE_NAME | DEV_WORKSPACE_COMPONENT_NAME |
| CHE_PROJECTS_ROOT | COMPONENT_PROJECTS_ROOT or PROJECTS_ROOT |
Non-CHE_ variables:

| ENV NAME | Replacement |
|---|---|
| NO_PROXY | NO_PROXY |
| HTTP_PROXY | HTTP_PROXY |
| HTTPS_PROXY | HTTPS_PROXY |
Che-Theia also expects:

| ENV NAME | Replacement |
|---|---|
| PRODUCT_JSON | config map for json |
From the POC stage, the DevWorkspace controller runs mkdir containers as init containers, but that will not solve file-permission issues if other init containers mount some subfolders, because (as Che used to do it) subfolders must be initialized before they are mounted.
Since it somehow works with the current approach, and Angel heard that it might be resolved on the K8s side (we don't have a good reference), we MUST either make sure it works on K8s/OpenShift clusters and remove the mkdir init container entirely, OR, if the permissions issue still exists on some K8s/OpenShift clusters, rework it properly so that subfolders are initialized from a separate pod before they are mounted into the devworkspace pod.
The Web Terminal Tooling plugin handles /exec/init calls by attempting to inject kubeconfig into the first container in the pod. If this fails, the whole call fails, so the plugin only works when the tooling container is first in the list. However, the devfile/api functions for merging plugin components into a devworkspace merge components in the order 1. parent, 2. plugins, 3. main content, leaving the tooling container last in the list. This causes the web terminal to fail with the changes from #240.
The web terminal should resolve the first compatible container (i.e. if it can't resolve an exec in the first container, it should try the second, etc.). Alternatively, we need a way of specifying where the web terminal should inject kubeconfig.
Investigate devfile flattening + the devworkspace startup workflow.
While preparing a proposal, we need to take into account:
- Plugins in Devfile 2.0: https://docs.google.com/presentation/d/17qAUiTY752INugy3iajb8ujku8eJXWmOnPF1SaTBxMA/edit#slide=id.g7862af9181_0_0
- Investigating faster workspace startup #209
- Git Clone First #205
How to grab user/ssh/preferences etc. from a DevWorkspace: is it the responsibility of, for example, Che-Theia to create such resources if they don't exist, or can DevWorkspaces provide config maps? We need a solution usable in Che-Theia to start Theia smoothly (no missing services), for example:
user settings - config map
workspace setting - config map
ssh keys - secret
^ these objects should declare, via annotations, that they should be mounted into devworkspaces, as env vars or as files (and to which path)
--> the controller should take these objects into account and add them to the devworkspace-related deployment
--> another controller should manage user
The basic idea is that each 'component/controller/client' should handle a specific part.
Where to start? Who creates the config map if it's not there (before we add this new controller micro-service)?
che-theia? No, as it should be mounted first. Let's move it to a custom chectl command:
```
$ chectl workspace:create <namespace>
```
(it should use the workspace engine selected by the user when doing server:start)
It will take care of checking whether there is a config map for this namespace, and if not, prompt the user for its name, git settings, etc. The namespace needs a custom label to identify the user.
I saw this issue on crc, and only updating helped me solve it. Now I see it on a real OpenShift cluster: everything else seems to work fine (deployment, pods, ...) except namespace removal (it terminates forever), and the webhook server fails to start with this error:
```
2020-12-07T15:15:47.140Z INFO webhook.server ERROR: Could not evaluate if admission webhook configurations are available {"error": "unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request"}
2020-12-07T15:15:47.140Z ERROR cmd Failed to create webhooks {"error": "unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request"}
main.main
    /devworkspace-operator/webhook/main.go:78
runtime.main
    /usr/local/go/src/runtime/proc.go:203
```
Since the webhook server gates all pod/exec calls, I wonder if we can make it safer and not fail when we face this error.
If the devworkspace operator is configured with a default routing class, this routing class is not persisted in the workspace routing object. This then makes it impossible for the external solvers to reconcile such routings.
To support the new plugins model, we need to implement applying a container component on preStart events. Then we'll be able to describe plugins like the following (adapted from the samples at https://github.com/devfile/api/tree/master/samples):
```yaml
schemaVersion: 2.0.0
metadata:
  publisher: redhat
  name: vsx-template
  type: template
parameters:
  VSX_LIST # ????
components:
  - name: vsx-installer
    container:
      image: vsx-installer # technically it's an adapted artifacts plugin broker, which is not in place yet
      volumeMounts:
        - name: vsx
          path: "/vsx"
      env:
        - name: VSX_LIST
          value: ""
  - name: theia-remote-injector
    container:
      image: "quay.io/eclipse/che-theia-endpoint-runtime-binary:7.20.0"
      volumeMounts:
        - name: remote-endpoint
          path: "/remote-endpoint"
      env:
        - name: PLUGIN_REMOTE_ENDPOINT_EXECUTABLE
          value: /remote-endpoint/plugin-remote-endpoint
        - name: REMOTE_ENDPOINT_VOLUME_NAME
          value: remote-endpoint
  - name: remote-endpoint
    volume:
      emptyDir: {} # (2)
commands:
  - id: copyVsx
    apply:
      component: vsx-installer
  - id: injectRemoteILauncher
    apply:
      component: theia-remote-injector
events:
  preStart:
    - copyVsx
    - injectRemoteILauncher
```
(1) We don't want to get copies of vsxInstaller and injectRemoteILauncher, but the model does not allow defining how identical components merge. So maybe it should be implementation-specific for the plugin component: if a different plugin brings a component with the same name, we try to merge them. Everything except VSX_LIST should be the same.
It may be a bit simpler, in terms of interface declaration, to define different env vars in different plugins, like VSX_JAVA_8, VSX_JAVA_DEBUG, ...; otherwise we would have to hardcode that only VSX_LIST is merged by appending.
(2) emptyDir volumes are not implemented yet: devfile/api#189
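The "hardcode that only VSX_LIST is merged by appending" rule from note (1) could look like the following Go sketch. The `mergeEnv` helper and the comma-separated append convention are assumptions about one possible implementation-specific behaviour, not settled semantics:

```go
package main

import "fmt"

type EnvVar struct{ Name, Value string }

// mergeEnv merges the env vars of two same-named component definitions.
// Per note (1), only VSX_LIST merges by appending (comma-separated);
// any other conflicting variable is a hard error.
func mergeEnv(base, incoming []EnvVar) ([]EnvVar, error) {
	index := map[string]int{}
	merged := append([]EnvVar{}, base...)
	for i, e := range merged {
		index[e.Name] = i
	}
	for _, e := range incoming {
		i, exists := index[e.Name]
		switch {
		case !exists:
			index[e.Name] = len(merged)
			merged = append(merged, e)
		case e.Name == "VSX_LIST":
			if merged[i].Value == "" {
				merged[i].Value = e.Value
			} else if e.Value != "" {
				merged[i].Value += "," + e.Value
			}
		case merged[i].Value != e.Value:
			return nil, fmt.Errorf("conflicting values for env var %s", e.Name)
		}
	}
	return merged, nil
}

func main() {
	// The template declares an empty VSX_LIST; a plugin contributes its list.
	base := []EnvVar{{Name: "VSX_LIST", Value: ""}}
	java := []EnvVar{{Name: "VSX_LIST", Value: "java-dbg.vsix,java.vsix"}}
	merged, _ := mergeEnv(base, java)
	fmt.Println(merged[0].Value) // java-dbg.vsix,java.vsix
}
```

The per-plugin variables alternative (VSX_JAVA_8, VSX_JAVA_DEBUG, ...) would make this special case unnecessary, at the cost of a wider interface.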
```yaml
schemaVersion: 2.0.0
metadata:
  publisher: redhat
  name: java8
  version: latest
  displayName: Language Support for Java 8
  title: Language Support for Java(TM) by ...
  description: Java Linting, Intellisense ...
  icon: https://.../logo-eclipseche.svg
  repository: https://github.../vscode-java
  category: Language
  firstPublicationDate: "2020-02-20"
  pluginType: che-theia-vsx
parent:
  id: redhat/theia-vsx-template/latest
  components:
    - name: vsx-installer
      container:
        env:
          - name: VSX_LIST
            value: java-dbg.vsix,java.vsix
components:
  - name: vscode-java
    container:
      image: ...che-sidecar-java # (3)
      memoryLimit: "1500Mi"
      volumeMounts:
        - name: m2
          path: "/home/theia/.m2"
  - name: m2
    volume: {}
```
(3) The plugin sidecar has an entrypoint with an env var stub that should be injected by the remote injector. See https://github.com/che-dockerfiles/che-sidecar-java/blob/master/Dockerfile#L32. Currently che-plugin-broker encapsulates this logic and applies the configuration if the plugin is theia or vscode: https://github.com/eclipse/che-plugin-broker/blob/40cdcfb0e54ef1bf170690045802cc6710c33dfc/brokers/metadata/broker.go#L134. In Devfile 2.0 there is an issue about providing an env var to all containers: devfile/api#149. But what about the remote injector's emptyDir volumes? Should we contribute them to every container as well? Or, maybe producing duplicates but more consistent: the plugin container should define the remote-endpoint volumeMount.
```yaml
schemaVersion: 2.0.0
metadata:
  name: spring-boot-http-booster
  type: workspace
projects:
  - name: spring-boot-http-booster
    git:
      remotes:
        origin: https://github.com/snowdrop/spring-boot-http-booster
      checkoutFrom:
        revision: master
components:
  # Should we explicitly define theia as a plugin? Probably yes, or we should
  # analyze the resolved plugin configuration for some indicator of whether
  # it's an editor - before providing the default one.
  - name: java-support
    plugin:
      id: redhat/java8/latest
      components:
        - name: vscode-java
          container:
            memoryLimit: 2Gi
        - name: m2
          volume:
            size: 2G
  - name: maven-tooling
    container:
      image: registry.redhat.io/codeready-workspaces/stacks-java-rhel8:2.1
      mountSources: true
      memoryLimit: 768Mi
      env:
        - name: JAVA_OPTS
          value: >-
            -XX:MaxRAMPercentage=50.0 -XX:+UseParallelGC -XX:MinHeapFreeRatio=10
            -XX:MaxHeapFreeRatio=20 -XX:GCTimeRatio=4
            -XX:AdaptiveSizePolicyWeight=90 -Dsun.zip.disableMemoryMapping=true
            -Xms20m -Djava.security.egd=file:/dev/./urandom -Duser.home=/home/jboss
        - name: MAVEN_OPTS
          value: $(JAVA_OPTS)
      endpoints:
        - name: 8080-tcp
          targetPort: 8080
          exposure: public
      volumeMounts:
        - name: m2
          path: /home/jboss/.m2
commands:
  - id: build
    exec:
      component: maven-tooling
      commandLine: mvn -Duser.home=${HOME} -DskipTests clean install
      workingDir: '${PROJECTS_ROOT}/spring-boot-http-booster'
      env:
        - name: MAVEN_OPTS
          value: "-Xmx200m"
```
As we implement the full devfile/api plugins functionality, we should consider whether the flattening process should be separated out into a subcontroller, similar to what we had for Component subresources
The devfile format allows specifying a parent, but it's not implemented yet on the devworkspace operator side. So, this issue is about implementing it.
Include the conversion between v1alpha1 and v1alpha2 in devfile/api.
Info: https://book.kubebuilder.io/multiversion-tutorial/conversion.html
Part of #175
Add readiness and liveness probes for the controller and webhook server.
This came from issues.redhat.com/browse/CRW-683 and issues.redhat.com/browse/CRW-916.
It should be possible to start a devworkspace without an IDE, like the following:
```yaml
kind: DevWorkspace
apiVersion: workspace.devfile.io/v1alpha2
metadata:
  name: java-sample
spec:
  started: true
  template:
    projects:
      - name: frontend
        git:
          remotes:
            origin: https://github.com/spring-projects/spring-petclinic
    components:
      - name: maven
        container:
          image: quay.io/eclipse/che-java8-maven:nightly
```
That's really a prerequisite for supporting TLS on the routing, since the Theia webview does not work without it. To avoid importing a CA into the browser, it makes sense to include the single-host issue here.
On OpenShift (#201):
- TLS: I think on OpenShift we should just enable edge termination with a Redirect policy for insecure traffic, and rely on the cluster certificates;
- SingleHost: for simplicity, we should just set the same host, which includes the workspaceID, and use e.g. component/endpoint names in the path rule.
On Kubernetes there are different ways to go. The simplest seems to be: generate a wildcard certificate, e.g. with cert-manager, and configure it as the default one (https://kubernetes.github.io/ingress-nginx/user-guide/tls). Then the TLS and single-host implementation is the same as for OpenShift.
Alternatives:
- to avoid requiring a wildcard certificate, we could generate a certificate per operator, which would then be propagated to the workspace namespaces. The workspaceID then goes into the path rule. This is a bit more difficult because of the need to propagate the secret;
- to avoid requiring a wildcard certificate, each workspace could own its own certificate. The difficulty: the operator then depends on cert-manager and has to manage Certificate CRs as well.
I think it's worth going with the simplest option and changing it later.
Currently, the DevWorkspace Operator uses one PVC for all workspaces and provides isolation with subpaths. These subpaths are never cleaned up, which means that if a user recreates workspaces, they will probably exceed the file system quota.
We need to clean up subpaths that belong to removed workspaces. I see it implemented with an additional finalizer, which sets up a dedicated Deployment/Pod/Job that mounts the root of the workspace PVC and cleans up the needed files. Note that this pod can be blocked if another workspace is running at the same time (since the PVC is RWO). To avoid unneeded runs of such a cleanup pod, we may store the initialized subpaths in the PVC annotations.
Also, pay attention to being able to remove subpaths: we may need to initialize them in the right way, where we mount the PVC root, initialize subpaths, and only then mount the subpaths, so that we can clean them up later without permission issues. See #211.
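The core of the cleanup decision is a set difference between directories present in the PVC and still-existing DevWorkspaces. A Go sketch (the `orphanedSubpaths` helper is illustrative; the real implementation would list directories from the mounted PVC and workspace IDs from the cluster):

```go
package main

import (
	"fmt"
	"sort"
)

// orphanedSubpaths returns the per-workspace directories in the common PVC
// that no longer correspond to an existing DevWorkspace and should be
// removed by the cleanup Pod/Job described above.
func orphanedSubpaths(pvcDirs, liveWorkspaceIDs []string) []string {
	live := map[string]bool{}
	for _, id := range liveWorkspaceIDs {
		live[id] = true
	}
	var orphans []string
	for _, dir := range pvcDirs {
		if !live[dir] {
			orphans = append(orphans, dir)
		}
	}
	sort.Strings(orphans) // deterministic output for logging
	return orphans
}

func main() {
	dirs := []string{"workspaceId1", "workspaceId2", "workspaceId3"}
	live := []string{"workspaceId2"}
	fmt.Println(orphanedSubpaths(dirs, live)) // [workspaceId1 workspaceId3]
}
```

The PVC-annotation optimization mentioned above would let the finalizer skip scheduling the cleanup pod entirely when the removed workspace's ID is not in the recorded set.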
We have a goal to dogfood, i.e. develop the DevWorkspace Operator with Devfile 2.0. This issue is about contributing an initial Devfile 2.0 we can start with, that will have the needed tools (kubectl, oc, kustomize, controller-gen, ...) and commands to:
Update the DevWorkspace Controller to Operator SDK 1.0 (compatible with the new kubebuilder)
In cases where the devworkspace operator deployment is stuck in a crash loop, every time it starts it manages the webhook server's serviceaccount in such a way that a new sa-token is created.
```
❯ kc get secrets
NAME                                      TYPE                                  DATA   AGE
default-token-tcwqm                       kubernetes.io/service-account-token   3      31m
devworkspace-operator-webhook-cert        kubernetes.io/tls                     3      31m
devworkspace-webhook-server-token-52mhn   kubernetes.io/service-account-token   3      29m
devworkspace-webhook-server-token-5427w   kubernetes.io/service-account-token   3      24m
devworkspace-webhook-server-token-gbgjc   kubernetes.io/service-account-token   3      4m16s
devworkspace-webhook-server-token-hj8jt   kubernetes.io/service-account-token   3      19m
devworkspace-webhook-server-token-lhlvg   kubernetes.io/service-account-token   3      27m
devworkspace-webhook-server-token-mb6dk   kubernetes.io/service-account-token   3      31m
devworkspace-webhook-server-token-mdljr   kubernetes.io/service-account-token   3      9m23s
devworkspace-webhook-server-token-ns25v   kubernetes.io/service-account-token   3      14m
devworkspace-webhook-server-token-r67bb   kubernetes.io/service-account-token   3      30m
devworkspace-webhook-server-token-v92b7   kubernetes.io/service-account-token   3      30m
devworkspace-webhook-server-token-xlh7g   kubernetes.io/service-account-token   3      30m
devworkspace-webhook-server-token-zbstn   kubernetes.io/service-account-token   3      83s
```
- `container` components, commands and events (cf. the following gist: https://github.com/davidfestal/api/blob/devfile-2.0-vscode-plugin-management/samples/plugin-sample/all-in-one-theia-nodejs.devworkspace.yaml)
- `apply` of `container` components on `preStart` events => create an initContainer #183
- `Volume` component (or infer it when there is a mount, for now?) #185
- `conversion-gen` calls to generate conversion code for all the parts that are the same
- mark `component`s that come from a plugin, to be able to gather them in the che-theia UI and distinguish them from user-runtime containers? => add an optional attribute on non-plugin components?
- `DevWorkspaceTemplateSpecContent` in its status? There should be an option (false by default) to enable the use of the Devfile 2.0 plugin mechanism for plugins loaded through ID or URL; `true` by default
- (1 // 2) > 3 > 4 > 5
Currently, the devworkspace operator runs workspaces in deployments with a Recreate strategy. This is required because rolling deployments can hang if they mount a RWO volume, but it also means that most modifications to a devworkspace result in a short period where the workspace is offline.
We should look into ways to configure whether rolling deployments should be used, and potentially enable rolling deployments automatically (e.g. if a workspace doesn't mount any PVCs, or something like async storage is used).
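The automatic case could reduce to a small decision function. A Go sketch, where the `Workspace` fields are illustrative placeholders rather than the operator's actual API:

```go
package main

import "fmt"

// Workspace captures just what the strategy decision above needs.
type Workspace struct {
	MountsRWOVolume bool // any PVC mounted with ReadWriteOnce access
	AsyncStorage    bool // async storage used instead of a direct PVC mount
}

// deploymentStrategy picks RollingUpdate when the deployment cannot hang
// on a RWO volume, and falls back to the current Recreate behaviour
// otherwise (the case where a rolling update could deadlock).
func deploymentStrategy(w Workspace) string {
	if !w.MountsRWOVolume || w.AsyncStorage {
		return "RollingUpdate"
	}
	return "Recreate"
}

func main() {
	fmt.Println(deploymentStrategy(Workspace{MountsRWOVolume: true})) // Recreate
	fmt.Println(deploymentStrategy(Workspace{AsyncStorage: true}))    // RollingUpdate
}
```

An explicit per-workspace or operator-level configuration flag could still override this automatic choice.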
We want to demonstrate/validate that a Theia based DevWorkspace can be used to work on a real-world, cloud-native project (this one).
The ones described here
Operators bootstrapped by kubebuilder are set up with a scaffold for indexing objects created on the cluster. This is currently unimplemented as of PR #187
Set up indexing (see doc) as appropriate
With the advent of external workspace routing controllers, there has arisen a need to configure them on a per-workspace basis. In other words, it should be possible to pass routing-class-specific configuration down to the external controller.
== Proposed Solution
There is a precedent for handling this kind of "polymorphism" in Kubernetes with the Ingress annotations, e.g. https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/.
I would like to propose configuring the workspace routing controller using annotations on the DevWorkspace object. Let's say we have a workspace routing controller handling the myrouting routing class. We would be able to configure it on the DevWorkspace object like this:
```yaml
kind: DevWorkspace
apiVersion: workspace.devfile.io/v1alpha1
metadata:
  name: cloud-shell
  annotations:
    controller.devfile.io/restricted-access: "true"
    controller.devfile.io/other-cotroller-annotation: "yes"
    myrouting.routingclass.controller.devfile.io/answer: "42"
spec:
  started: true
  routingClass: myrouting
  template:
    ...
```
This would create a workspace routing object with the following 2 annotations:
```yaml
kind: WorkspaceRouting
apiVersion: workspace.devfile.io/v1alpha1
metadata:
  name: ...
  annotations:
    controller.devfile.io/restricted-access: "true"
    myrouting.routingclass.controller.devfile.io/answer: "42"
...
```
restricted-access is already passed down by the existing code. myrouting.routingclass.controller.devfile.io/answer is considered a configuration property of the controller and is therefore passed down to the WorkspaceRouting object. controller.devfile.io/other-cotroller-annotation is NOT passed down, because it is unrelated to the workspace routing.
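The pass-down rule is a simple prefix filter. A Go sketch of the selection logic described above (the `routingAnnotations` function is hypothetical, not the operator's current code):

```go
package main

import (
	"fmt"
	"strings"
)

// routingAnnotations selects which DevWorkspace annotations are copied to
// the WorkspaceRouting object: the restricted-access annotation plus any
// annotation under <routingClass>.routingclass.controller.devfile.io/.
// Unrelated controller.devfile.io annotations are dropped.
func routingAnnotations(routingClass string, annotations map[string]string) map[string]string {
	classPrefix := routingClass + ".routingclass.controller.devfile.io/"
	out := map[string]string{}
	for k, v := range annotations {
		if k == "controller.devfile.io/restricted-access" || strings.HasPrefix(k, classPrefix) {
			out[k] = v
		}
	}
	return out
}

func main() {
	in := map[string]string{
		"controller.devfile.io/restricted-access":             "true",
		"controller.devfile.io/other-cotroller-annotation":    "yes",
		"myrouting.routingclass.controller.devfile.io/answer": "42",
	}
	out := routingAnnotations("myrouting", in)
	// Only 2 of the 3 annotations survive the filter.
	fmt.Println(len(out), out["myrouting.routingclass.controller.devfile.io/answer"])
}
```

Scoping the prefix to the workspace's routingClass means a `myrouting.…` annotation is not leaked to a workspace using a different routing class.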
== Alternative Solution
We could also specify the configuration directly in the spec of the DevWorkspace, e.g.:
```yaml
kind: DevWorkspace
apiVersion: workspace.devfile.io/v1alpha1
metadata:
  name: cloud-shell
  annotations:
    controller.devfile.io/restricted-access: "true"
    controller.devfile.io/other-cotroller-annotation: "yes"
spec:
  started: true
  routingClass: myrouting
  routingAnnotations:
    answer: "42"
  template:
    ...
```
This feels a little bit less idiomatic Kubernetes to me though.
I'm creating this issue to bring up the problem I faced, in case anyone else sees the same thing.
minikube version: v1.15.1
Some images can't seem to correctly pull .vsix files.
Working:
- quay.io/fedora/fedora:34
- quay.io/eclipse/che-nodejs10-ubi:nightly

Not working:
- quay.io/samsahai/curl:latest -- alpine-based, used in the VSX installer in devfiles
- quay.io/eclipse/che-plugin-registry:nightly -- alpine-based

To reproduce:
```shell
IMAGES=(
  "quay.io/fedora/fedora:34"
  "quay.io/eclipse/che-nodejs10-ubi:nightly"
  "quay.io/samsahai/curl:latest"
  "quay.io/eclipse/che-plugin-registry:nightly"
)

for image in "${IMAGES[@]}"; do
  cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: download-test-$(echo ${image} | sed 's|[^a-z0-9]|-|g')
  labels:
    app: download-test
spec:
  restartPolicy: Never
  containers:
    - image: ${image}
      name: test-download
      command:
        - '/bin/sh'
      args:
        - '-c'
        - >
          if which wget >/dev/null; then
            wget -S https://github.com/golang/vscode-go/releases/download/v0.16.1/go-0.16.1.vsix -O /tmp/test
          elif which curl >/dev/null; then
            curl -L https://github.com/golang/vscode-go/releases/download/v0.16.1/go-0.16.1.vsix > /tmp/test
          fi
EOF
done
```
Check pods:
```
$ kubectl get po
NAME                                                        READY   STATUS      RESTARTS   AGE
download-test-quay-io-eclipse-che-nodejs10-ubi-nightly      0/1     Completed   0          5s
download-test-quay-io-eclipse-che-plugin-registry-nightly   0/1     Error       0          4s
download-test-quay-io-fedora-fedora-34                      0/1     Completed   0          5s
download-test-quay-io-samsahai-curl-latest                  0/1     Error       0          4s
```
To clean up:
```
kubectl delete all -l 'app=download-test'
```
Currently with a dev workspace I can create a workspace and I have access to:
But where do I grab the component status? (With che server it was within the workspace.runtime object.) It seems for now I have to look at the pod object and compare container names with component names, but that looks very fragile.
Currently, if a workspace specifies a plugin that cannot be retrieved, the controller retries in a loop (with backoff). Instead, we should fail workspace startup with an understandable message.
Loading a Che workspace currently takes 45 seconds or more. We want to speed this up as much as possible, ideally to under 10 seconds, because fast software is the best software.
We have the opportunity to experiment with new ideas with the DevWorkspace that we could not test before. In particular:
In this gdoc, @amisevsk has added some considerations and a proposal.
Workspace resources are available with devworkspaces, but workspace routings are with workspaceroutings. If, instead of Workspace, we use DevWorkspace, WorkspaceRouting should be renamed to DevWorkspaceRouting.
To move forward faster with the ability to use the devworkspace controller as the workspace engine in chectl, we need to provide a quick solution to make basic routing work again. Later, these hacks should be replaced with proper routing with authentication enabled.
Initially, we used Che specific job that simply uses openssl to create certificates.
It's better to use https://github.com/newrelic/k8s-webhook-cert-manager or https://github.com/jet/kube-webhook-certgen(used by nginx ingress controller).
At the time we'll have DevWorkspace Operator it may not be actual anymore and we may fully rely on OLM for certificates for webhook server but we'll see. I even am glad to see any better propose an alternative.
Update:
pointing to the latest documentation about webhooks in Kubebuilder, which is now the basis of the new OperatorSDK 1.0:
https://book.kubebuilder.io/cronjob-tutorial/running.html and https://book.kubebuilder.io/cronjob-tutorial/cert-manager.html
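For reference, the openssl-based approach used by the Che-specific job can be roughly sketched as follows. This is a minimal sketch; the service and namespace names are illustrative, not the ones the job actually uses.

```shell
set -e
SERVICE=devworkspace-webhookserver   # illustrative webhook service name
NAMESPACE=devworkspace-controller    # illustrative namespace
CERT_DIR=$(mktemp -d)

# 1. Create a self-signed CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=webhook-ca" \
  -keyout "${CERT_DIR}/ca.key" -out "${CERT_DIR}/ca.crt"

# 2. Create a key and CSR for the webhook service's in-cluster DNS name.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=${SERVICE}.${NAMESPACE}.svc" \
  -keyout "${CERT_DIR}/tls.key" -out "${CERT_DIR}/server.csr"

# 3. Sign the serving certificate with the CA.
openssl x509 -req -days 365 \
  -in "${CERT_DIR}/server.csr" \
  -CA "${CERT_DIR}/ca.crt" -CAkey "${CERT_DIR}/ca.key" -CAcreateserial \
  -out "${CERT_DIR}/tls.crt"

# ca.crt would go into the webhook configuration's caBundle, and
# tls.crt/tls.key into a secret mounted by the webhook server.
echo "certificates written to ${CERT_DIR}"
```

Tools like k8s-webhook-cert-manager and kube-webhook-certgen automate essentially these steps plus patching the webhook configuration's `caBundle`.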
This issue is about investigating how reducing the number of containers influences workspace startup time.
Any language-pack devfile (Java, Go, etc.) can be chosen as the object of investigation, for example Java Maven: https://github.com/eclipse/che-devfile-registry/blob/master/devfiles/java-maven/devfile.yaml
Currently, if you run it on OSIO you'll get the following images:

```
# init containers
quay.io/eclipse/che-theia-endpoint-runtime-binary:7.22.0
quay.io/eclipse/che-plugin-artifacts-broker:v3.4.0

# authentication
quay.io/eclipse/che-jwtproxy:0.10.0

# run for every Theia-based workspace
quay.io/eclipse/che-theia:7.22.0
quay.io/eclipse/che-machine-exec:7.22.0

# language-specific sidecar: runs VSX extensions and has tools preinstalled for them
quay.io/eclipse/che-sidecar-java:11-86274e3

# container designed for the user to open a terminal and execute commands
quay.io/eclipse/che-java11-maven:7.22.0

# OSIO-specific plugin container provisioned in every workspace
quay.io/eclipse/che-workspace-telemetry-woopra-plugin:latest
```
According to https://docs.google.com/document/d/1V8RA6_wEd20vTRKL60yPXmRIN8A_lo5ps8mb0T9FUwY/edit#heading=h.s3u51mcvho3h, each image adds 2 seconds to startup even if it's already cached on the node.
We need to investigate which of these can be merged and how much that speeds up workspace startup. For example: the plugin-artifacts broker can be removed if we download VSX extensions into the plugin sidecar at build time, and the same applies to the endpoint runtime binary.
Che Theia can be merged with Che Machine Exec.
Then we could go different ways:
prepare a language-specific flat all-in-one image, like Theia + Java plugin (the user is still able to add additional images if needed),
or run Theia + a prepared language-specific flat tooling image (the user is still able to add additional images if needed).
See #209 (comment)
I'm not sure what we can do with JWTProxy + Telemetry sidecars.
Currently, the devworkspace operator relies on a configmap to define a few options (default routingClass, etc.). However, operators as managed by OLM don't come with configmaps; instead, we create a configmap on the cluster during startup as a way of manually configuring the operator after the fact.
We should remove the configmap functionality, as it's generally not used for operators. Instead, configuration should be defined in a standard way, e.g. via OLM descriptors.
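One standard OLM mechanism is to override the operator deployment's environment through the `Subscription`'s `spec.config.env`. The sketch below is illustrative only: the channel/source values and the `DEFAULT_ROUTING_CLASS` variable are assumptions, since the operator would first have to expose its configmap options as environment variables.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: devworkspace-operator
  namespace: openshift-operators
spec:
  channel: stable                  # illustrative channel name
  name: devworkspace-operator
  source: community-operators      # illustrative catalog source
  sourceNamespace: openshift-marketplace
  config:
    env:
      # hypothetical variable replacing the configmap's default routingClass entry
      - name: DEFAULT_ROUTING_CLASS
        value: basic
```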
In the scope of eclipse-che/che#18990, the Dashboard is going to create a dockercfg secret in the DevWorkspace namespace.
We need to implement a mechanism to use such a secret as an imagePullSecret.
Possible alternatives:
`controller.devfile.io/secret-kind: imagePullSecret`
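A sketch of what a marked secret could look like, assuming the controller would watch for the proposed `controller.devfile.io/secret-kind` marker (its use as a label, the secret name, and the namespace are all assumptions for illustration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: devworkspace-pull-secret
  namespace: user-devworkspaces              # illustrative namespace
  labels:
    controller.devfile.io/secret-kind: imagePullSecret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6e319        # base64 of {"auths":{}} — placeholder
```

The controller would then append any secret carrying this marker to each workspace pod's `spec.imagePullSecrets`.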
"Async storage" as used by Che requires
This allows workspaces to avoid issues around backing storage for PVCs (e.g. Gluster volumes have trouble synchronizing when many files are touched -- e.g. in javascript .node_modules
) while also providing persistence (unlike ephemeral volumes).
We should reuse the sync components used by Che to implement this in the DevWorkspace operator.
A basic implementation of async storage would
For the first implementation, we should target
This is mainly to avoid edge cases in managing out-of-devworkspace resources when there are multiple devworkspaces:
It's not possible to download Go modules without using proxy.golang.org due to a couple of issues:

Running `go mod download` in the current master gives the error:

```
go: github.com/eclipse/che-plugin-broker@v3.1.1-0.20200207223144-b20597f15e4c+incompatible: invalid version: unknown revision b20597f15e4c
```

Replacing `v3.1.1-0.20200207223144-b20597f15e4c+incompatible` with `v3.1.1` in the go.mod file and running again updates the dependency to `require github.com/eclipse/che-plugin-broker v3.1.1+incompatible`, but then fails with:

```
go: github.com/openshift/api@…+incompatible: invalid pseudo-version: preceding tag (v3.9.0) not found
```

The default go package in Fedora 31 and 32 uses `GOPROXY=direct`, which causes the issue above; this will likely impact RHEL-based distros as well.
Workaround: set `GOPROXY=https://proxy.golang.org,direct` before downloading modules, or download and use a separate Go binary (which uses proxy.golang.org by default), e.g.:

```shell
pushd $(mktemp -d)
go get golang.org/dl/go${VERSION}
go${VERSION} download
alias go=go${VERSION}
popd
```
Once modules have been successfully cached, the error above is avoided even without using proxy.golang.org.