operator-framework / operator-lifecycle-manager

A management framework for extending Kubernetes with Operators

Home Page: https://olm.operatorframework.io

License: Apache License 2.0

Languages: Go 98.91%, Makefile 0.27%, Shell 0.45%, Dockerfile 0.06%, Starlark 0.30%, Mustache 0.01%
Topics: kubernetes, kubernetes-operator, olm, crd, kubernetes-applications

operator-lifecycle-manager's Introduction

Operator Lifecycle Manager


Documentation

User documentation can be found on the OLM website.

Overview

This project is a component of the Operator Framework, an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Read more in the introduction blog post and learn about practical use cases at the OLM website.

OLM extends Kubernetes to provide a declarative way to install, manage, and upgrade Operators and their dependencies in a cluster. It provides the following features:

Over-the-Air Updates and Catalogs

Kubernetes clusters are increasingly kept up to date through elaborate update mechanisms, often automatically and in the background. Operators, being cluster extensions, should follow suit. OLM has a concept of catalogs from which Operators are available to install and are kept up to date. In this model, OLM lets maintainers author granular update paths and gives commercial vendors a flexible publishing mechanism using channels.

Dependency Model

With OLM's packaging format, Operators can express dependencies on the platform and on other Operators. They can rely on OLM to respect these requirements as long as the cluster is up. In this way, OLM's dependency model ensures Operators keep working throughout their long lifecycle, across multiple updates of the platform or other Operators.

Discoverability

OLM advertises installed Operators and their services into the namespaces of tenants. Tenants can discover which managed services are available and which Operator provides them. Administrators can rely on catalog content projected into a cluster, enabling discovery of Operators available to install.

Cluster Stability

Operators must claim ownership of their APIs. OLM prevents conflicting Operators that own the same APIs from being installed, ensuring cluster stability.

Declarative UI controls

Operators can behave like managed service providers. Their user interface on the command line is the API itself. For graphical consoles, OLM annotates those APIs with descriptors that drive the creation of rich interfaces and forms, letting users interact with the Operator in a natural, cloud-like way.

Prerequisites

  • git
  • go version v1.12+.
  • docker version 17.03+.
  • kubectl version v1.11.3+.
  • Access to a Kubernetes v1.11.3+ cluster.

Getting Started

Check out the Getting Started section in the docs.

Installation

Install OLM on a Kubernetes cluster by following the installation guide.

For a complete end-to-end example of how OLM fits into the Operator Framework, see the Operator Framework website and the Getting Started guide on OperatorHub.io.

Contributing your Operator

Have an awesome Operator you want to share? Check out the publishing docs to learn about contributing to OperatorHub.io.

Subscribe to a Package and Channel

Cloud Services can be installed from the catalog by subscribing to a channel in the corresponding package.

Kubernetes-native Applications

An Operator is an application-specific controller that extends the Kubernetes API to create, configure, manage, and operate instances of complex applications on behalf of a user.

OLM requires that applications be managed by an operator, but that doesn't mean each application needs an operator written from scratch. Depending on the level of control required, you may:

  • Package up an existing set of resources for OLM with helm-app-operator-kit without writing a single line of Go.
  • Use the operator-sdk to quickly build an operator from scratch.

The primary vehicle for describing operator requirements with OLM is a ClusterServiceVersion. Once you have an application packaged for OLM, you can deploy it with OLM by creating its ClusterServiceVersion in a namespace with a supporting OperatorGroup.
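
As a rough illustration, here is a minimal OperatorGroup sketch using the operators.coreos.com/v1alpha2 API version that appears elsewhere in this document; the name and target namespace are hypothetical placeholders:

apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: my-group           # hypothetical name
  namespace: my-namespace  # hypothetical namespace where member CSVs are created
spec:
  targetNamespaces:
  - my-namespace           # namespaces the member Operators are configured to watch

A ClusterServiceVersion created in my-namespace would then be installed with that namespace as its watch scope.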

ClusterServiceVersions can be collected into CatalogSources which will allow automated installation and dependency resolution via an InstallPlan, and can be kept up-to-date with a Subscription.
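
For orientation, a minimal CatalogSource sketch backed by a ConfigMap, using the spec fields (sourceType, configMap, displayName, publisher) described by the CatalogSource CRD quoted later in this document; the names are hypothetical:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog            # hypothetical name
  namespace: olm
spec:
  sourceType: internal        # entries are read from a ConfigMap
  configMap: my-catalog-data  # hypothetical ConfigMap holding CSVs, CRDs, and packages
  displayName: My Catalog
  publisher: example.com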

Learn more about the components used by OLM by reading about the architecture and philosophy.

Key Concepts

CustomResourceDefinitions

OLM standardizes interactions with operators by requiring that the interface to an operator be via the Kubernetes API. Because we expect users to define the interfaces to their applications, OLM currently uses CRDs to define the Kubernetes API interactions.

Examples: EtcdCluster CRD, EtcdBackup CRD
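
As a trimmed sketch of what such a CRD looks like, here is roughly the shape of the EtcdCluster definition referenced above (using the apiextensions.k8s.io/v1beta1 API of this era; treat the exact group and version strings as illustrative):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: etcdclusters.etcd.database.coreos.com
spec:
  group: etcd.database.coreos.com   # API group served for this resource
  version: v1beta2
  scope: Namespaced
  names:
    kind: EtcdCluster
    listKind: EtcdClusterList
    plural: etcdclusters
    singular: etcdcluster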

Descriptors

OLM introduces the notion of “descriptors” of both spec and status fields in Kubernetes API responses. Descriptors are intended to indicate various properties of a field so that tools can make decisions about its content. For example, this can drive connecting two operators together (e.g., passing the connection string from a MySQL instance to a consuming application) and be used to drive rich interactions in a UI.

See an example of a ClusterServiceVersion with descriptors
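
Below is a hedged sketch of how descriptors are attached, based on the shape of the owned-CRD entries in a CSV's spec.customresourcedefinitions; the paths and descriptor URNs are illustrative examples of the kind used in upstream CSVs:

- name: etcdclusters.etcd.database.coreos.com
  kind: EtcdCluster
  version: v1beta2
  displayName: etcd Cluster
  specDescriptors:
  - path: size                        # spec field being described
    displayName: Size
    description: The desired number of member Pods for the etcd cluster.
    x-descriptors:
    - 'urn:alm:descriptor:com.tectonic.ui:podCount'
  statusDescriptors:
  - path: serviceName                 # status field being described
    displayName: Service
    description: The service at which the running etcd cluster can be reached.
    x-descriptors:
    - 'urn:alm:descriptor:io.kubernetes:Service'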

Dependency Resolution

To minimize the effort required to run an application on Kubernetes, OLM handles dependency discovery and resolution for applications running on OLM.

This is achieved through additional metadata on the application definition. Each operator must define:

  • The CRDs that it is responsible for managing.
    • e.g., the etcd operator manages EtcdCluster.
  • The CRDs that it depends on.
    • e.g., the vault operator depends on EtcdCluster, because Vault is backed by etcd.

Basic dependency resolution is then possible by finding, for each “required” CRD, the corresponding operator that manages it and installing it as well. Dependency resolution can be further constrained by the way a user interacts with catalogs.
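
For example, here is a sketch of the relevant CSV excerpt for the vault-operator scenario above; the exact group and version strings are illustrative:

# Excerpt from a Vault operator CSV: it owns VaultService and requires
# EtcdCluster, which OLM resolves by also installing the etcd operator.
customresourcedefinitions:
  owned:
  - name: vaultservices.vault.security.coreos.com
    kind: VaultService
    version: v1alpha1
    displayName: Vault Service
  required:
  - name: etcdclusters.etcd.database.coreos.com
    kind: EtcdCluster
    version: v1beta2
    displayName: etcd Cluster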

Granularity

Dependency resolution is driven through the (Group, Version, Kind) of CRDs. This means that no updates can occur to a given CRD (of a particular Group, Version, Kind) unless they are completely backward compatible.

There is no way to express a dependency on a particular version of an operator (e.g. etcd-operator v0.9.0) or application instance (e.g. etcd v3.2.1). This encourages application authors to depend on the interface and not the implementation.

Discovery, Catalogs, and Automated Upgrades

OLM has the concept of catalogs, which are repositories of application definitions and CRDs.

Catalogs contain a set of Packages, which map “channels” to a particular application definition. Channels allow package authors to write different upgrade paths for different users (e.g. alpha vs. stable).

Example: etcd package
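
A sketch of what such a package definition looks like in the operator-registry format; the channel and CSV names are illustrative, chosen to line up with the Subscription example below:

packageName: etcd
defaultChannel: singlenamespace-alpha
channels:
- name: singlenamespace-alpha
  currentCSV: etcdoperator.v0.9.2              # illustrative CSV name
- name: clusterwide-alpha
  currentCSV: etcdoperator.v0.9.2-clusterwide  # illustrative CSV name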

Users can subscribe to channels and have their operators automatically updated when new versions are released.

Here's an example of a subscription:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: olm
spec:
  channel: singlenamespace-alpha
  name: etcd
  source: operatorhubio-catalog
  sourceNamespace: olm

This will keep the etcd ClusterServiceVersion up to date as new versions become available in the catalog.

Catalogs are served internally to OLM over a gRPC interface from operator-registry pods. Catalog data formats such as bundles are documented there as well.

Samples

To explore sample Operators that use OLM, see https://operatorhub.io/ and its resources in the Community Operators repository.

Community and how to get involved

Contributing

Check out the contributor documentation. Also, see the proposal docs and issues for ongoing or planned work.

Reporting bugs

See reporting bugs for details about reporting any issues.

Reporting flaky tests

See reporting flaky tests for details about reporting flaky tests.

License

Operator Lifecycle Manager is under Apache 2.0 license. See the LICENSE file for details.

operator-lifecycle-manager's People

Contributors

akihikokuroda, alecmerdler, anik120, ankitathomas, awgreene, benluddy, charltonaustin, dcondomitti, dependabot[bot], dinhxuanvu, dtfranz, ecordell, exdx, harishsurf, hasbro17, joelanford, josephschorr, jpeeler, jzelinskie, kevinrizza, m1kola, madorn, njhale, openshift-merge-robot, perdasilva, robszumski, stevekuznetsov, timflannagan, tkashem, tmshort


operator-lifecycle-manager's Issues

Question: How to "enable" the OLM in the console for Minishift?

(screenshot)

I have added "myproject" as a watched namespace:

spec:
  containers:
  - command:
    - /bin/olm
    - -watchedNamespaces
    - myproject
    - -debug
    env:
    - name: OPERATOR_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: OPERATOR_NAME
      value: olm-operator
    image: quay.io/coreos/olm:local

but that does not seem to be enough. What else would I need?

Question: installing the OLM via the CVO

I checked the release payload of the CVO and found many duplicated YAML files. Is that expected?

[core@ip-10-0-2-239 ~]$ oc adm release extract --from=registry.svc.ci.openshift.org/openshift/origin-release@sha256:6dbef1b08ad892e7c01da342de7fd233d58650e771da151b4fafbef65350b847 --to=release-payload
[core@ip-10-0-2-239 ~]$ ls release-payload/ | grep 30
0000_30_00-namespace.yaml
0000_30_00-namespace_2.yaml
0000_30_01-olm-operator.serviceaccount.yaml
0000_30_01-olm-operator.serviceaccount_2.yaml
0000_30_02-clusterserviceversion.crd.yaml
0000_30_02-clusterserviceversion.crd_2.yaml
0000_30_03-installplan.crd.yaml
0000_30_03-installplan.crd_2.yaml
0000_30_04-subscription.crd.yaml
0000_30_04-subscription.crd_2.yaml
0000_30_05-catalogsource.crd.yaml
0000_30_05-catalogsource.crd_2.yaml
0000_30_06-rh-operators.configmap.yaml
0000_30_06-rh-operators.configmap_2.yaml
0000_30_07-certified-operators.configmap.yaml
0000_30_07-certified-operators.configmap_2.yaml
0000_30_08-certified-operators.catalogsource.yaml
0000_30_08-certified-operators.catalogsource_2.yaml
0000_30_09-rh-operators.catalogsource.yaml
0000_30_09-rh-operators.catalogsource_2.yaml
0000_30_10-olm-operator.deployment.yaml
0000_30_10-olm-operator.deployment_2.yaml
0000_30_11-catalog-operator.deployment.yaml
0000_30_11-catalog-operator.deployment_2.yaml
0000_30_12-aggregated.clusterrole.yaml
0000_30_12-aggregated.clusterrole_2.yaml
0000_30_13-operatorgroup.crd.yaml
0000_30_13-operatorgroup.crd_2.yaml
0000_30_14-olm-operators.configmap.yaml
0000_30_14-olm-operators.configmap_2.yaml
0000_30_15-olm-operators.catalogsource.yaml
0000_30_15-olm-operators.catalogsource_2.yaml
0000_30_16-operatorgroup-default.yaml
0000_30_16-operatorgroup-default_2.yaml
0000_30_17-packageserver.subscription.yaml
0000_30_17-packageserver.subscription_2.yaml

And they are different; I guess they come from two different manifest files. Why do we do that?
For example, the content of 0000_30_16-operatorgroup-default.yaml:

[core@ip-10-0-2-239 ~]$ cat release-payload/0000_30_16-operatorgroup-default.yaml
---
# Source: olm/templates/0000_30_16-operatorgroup-default.yaml
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: global-operators
  namespace: openshift-operators

The content of 0000_30_16-operatorgroup-default_2.yaml:

[core@ip-10-0-2-239 ~]$ cat release-payload/0000_30_16-operatorgroup-default_2.yaml
---
# Source: olm/templates/0000_30_16-operatorgroup-default.yaml
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: global-operators
  namespace: openshift-operators
---
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: olm-operators
  namespace: openshift-operator-lifecycle-manager

@ecordell Could you help clarify this?

Help needed with CR display name

Even though the display name of the provided API is correctly displayed in a tab:

(screenshot)

But the raw CRD names are used for the singular and plural forms:

(screenshot)

How can I configure my CSV so that it reads Eclipse Che Clusters and Create Eclipse Che Cluster (for the button)?

manifests: OLM is creating a namespace without run-level

On installation, the CVO errors until openshift-apiserver is running because of the missing openshift.io/run-level: "1" label on openshift-operators.

I1212 21:46:09.241861       1 cvo.go:269] Error syncing operator openshift-cluster-version/version: Could not update operatorgroup "openshift-operators/global-operators" (operators.coreos.com/v1alpha2, 108 of 218): the server has forbidden updates to this resource

OLM runs at a fairly high priority order (30) in the release payload and blocks installation of other operators for quite some time.

What is the use of the openshift-operators namespace, and why is it in the release payload? And if it is important for OLM, can you add the openshift.io/run-level: "1" label like other important SLOs?
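
For reference, a sketch of what the requested change would look like on the namespace manifest (a plain Kubernetes Namespace object; the label value is taken from the report above):

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators
  labels:
    openshift.io/run-level: "1"   # run-level label requested above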

CSV support for ClusterRoles

StrategyDeploymentPermissions in CSV spec.install.spec.permissions supports specifying rules for RBAC Roles only, as per https://github.com/operator-framework/operator-lifecycle-manager/blob/master/pkg/controller/install/deployment.go#L78-L80 .

Is there a way to add ClusterRoles? I have an operator that needs access to node, persistentvolumeclaim, and storageclass resources, which require cluster-level access.

While deploying without OLM, for RBAC permissions, I create a ClusterRole and ClusterRoleBinding, and bind the ClusterRole to the ServiceAccount of my operator.
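
For illustration, a minimal sketch of that manual workaround; the names are hypothetical placeholders and the rules only cover the resources mentioned above:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-operator-cluster-access   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-operator-cluster-access   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-operator-cluster-access
subjects:
- kind: ServiceAccount
  name: my-operator           # hypothetical operator ServiceAccount
  namespace: my-namespace     # hypothetical namespace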

If this support can be added in the future, I can try implementing it and create a PR.

make run-local missing charts

When running make run-local, I get this error:

. ./scripts/install_local.sh local build/resources
namespace "local" created
error: the path "build/resources" does not exist
make: *** [run-local] Error 1

Shouldn't the manifests task be added to the run-local one?

`The ConfigMap "rh-operators" is invalid` while installing on upstream k8s

Following the instructions at Install the latest released version of OLM for upstream Kubernetes on k8s 1.11, I get the following message:

$ kubectl apply -f deploy/upstream/manifests/latest/
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager configured
serviceaccount/olm-operator-serviceaccount unchanged
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-kube-system configured
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com configured
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com configured
catalogsource.operators.coreos.com/rh-operators unchanged
deployment.apps/olm-operator unchanged
deployment.apps/catalog-operator unchanged
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit configured
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view configured
apiservice.apiregistration.k8s.io/v1alpha1.packages.apps.redhat.com configured
clusterrolebinding.rbac.authorization.k8s.io/packagemanifest:system:auth-delegator configured
rolebinding.rbac.authorization.k8s.io/packagemanifest-auth-reader unchanged
clusterrolebinding.rbac.authorization.k8s.io/packagemanifest-view configured
clusterrolebinding.rbac.authorization.k8s.io/package-apiserver-clusterrolebinding configured
secret/package-server-certs unchanged
deployment.apps/package-server unchanged
service/package-server unchanged
The ConfigMap "rh-operators" is invalid: metadata.annotations: Too long: must have at most 262144 characters

Is there a workaround for the annotation length limit?

OLM complains with "Policy rule not satisfied for service account"

I built OLM on Minikube using make run-local, then I manually injected the CRDs, ServiceAccounts, Roles, and RoleBindings.

OLM complains with Policy rule not satisfied for service account even though the resources are present.
This is one of the errors; I have 3 ServiceAccounts and I see the same message for each:

  - dependents:
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: cluster rule:{"verbs":["get","list","watch","create","update","delete"],"apiGroups":[""],"resources":["configmaps"]}
      status: NotSatisfied
      version: v1beta1
    group: ""
    kind: ServiceAccount
    message: Policy rule not satisfied for service account
    name: rook-ceph-osd
    status: PresentNotSatisfied
    version: v1

However, the service account is present in the local namespace:

$ kubectl -n local get serviceaccount rook-ceph-osd -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: 2019-02-25T08:23:41Z
  labels:
    operator: rook
    storage-backend: ceph
  name: rook-ceph-osd
  namespace: local
  resourceVersion: "404764"
  selfLink: /api/v1/namespaces/local/serviceaccounts/rook-ceph-osd
  uid: a85a87c9-38d6-11e9-bed6-f06b275bb031
secrets:
- name: rook-ceph-osd-token-px7zl

As well as the role:

kubectl -n local get role rook-ceph-osd -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: 2019-02-25T08:23:52Z
  name: rook-ceph-osd
  namespace: local
  resourceVersion: "404807"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/local/roles/rook-ceph-osd
  uid: af32366f-38d6-11e9-bed6-f06b275bb031
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - delete

And the bindings:

kubectl -n local get rolebindings rook-ceph-osd -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: 2019-02-25T08:23:52Z
  name: rook-ceph-osd
  namespace: local
  resourceVersion: "404804"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/local/rolebindings/rook-ceph-osd
  uid: af1b103f-38d6-11e9-bed6-f06b275bb031
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-ceph-osd
subjects:
- kind: ServiceAccount
  name: rook-ceph-osd
  namespace: local

Here is my CSV:

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  creationTimestamp: null
  name: rook.v0.1.0
  namespace: placeholder
spec:
  displayName: rook
  description: |
      Rook: Storage Orchestration for Kubernetes

      Rook runs as a cloud-native service for optimal integration with applications in need of storage, and handles the heavy-lifting behind the scenes such as provisioning and management.      ## Before Your Start
      Rook orchestrates battle-tested open-source storage technologies including Ceph, which has years of production deployments and runs some of the worlds largest clusters.
      Rook is open source software released under the Apache 2.0 license. Rook has an active developer and user community.

  keywords: ['rook', 'ceph', 'storage', 'object storage', 'open source']
  version: 0.1.0
  minKubeVersion: 1.11.0
  maturity: alpha
  maintainers:
  - name: Red Hat, Inc.
    email: [email protected]
  provider:
    name: Red Hat, Inc.
  labels:
    alm-owner-etcd: etcdoperator
    operated-by: rookoperator
  selector:
    matchLabels:
      alm-owner-etcd: etcdoperator
      operated-by: rookoperator
  links:
  - name: Blog
    url: https://blog.rook.io
  - name: Documentation
    url: https://rook.github.io/docs/rook/master/
  - name: rook Operator Source Code
    url: https://github.com/rook/rook/tree/master/pkg/operator/ceph

  icon:
  - base64data: #TODO
    mediatype: image/png
  apiservicedefinitions: {}
  customresourcedefinitions:
    owned:
    - kind: CephCluster
      name: cephclusters.ceph.rook.io
      version: v1
      displayName: Ceph Cluster
      description: Represents a Ceph cluster.
    - kind: CephBlockPool
      name: cephblockpools.ceph.rook.io
      version: v1
      displayName: Ceph Block Pool
      description: Represents a Ceph Block Pool.
    - kind: CephFilesystem
      name: cephfilesystems.ceph.rook.io
      version: v1
      displayName: Ceph Filesystem
      description: Represents a Ceph Filesystem.
    - kind: CephNFS
      name: cephnfses.ceph.rook.io
      version: v1
      displayName: Ceph NFS
      description: Represents a Ceph NFS interface.
    - kind: CephObjectStore
      name: cephobjectstores.ceph.rook.io
      version: v1
      displayName: Ceph Object Store
      description: Represents a Ceph Object Store.
    - kind: CephObjectStoreUser
      name: cephobjectstoreusers.ceph.rook.io
      version: v1
      displayName: Ceph Object Store User
      description: Represents a Ceph Object Store User.
    - kind: Volume
      name: volumes.rook.io
      version: v1alpha2
      displayName: Ceph Volume
      description: Represents a Ceph volume.
  installModes:
  - type: OwnNamespace
    supported: true
  - type: SingleNamespace
    supported: true
  - type: MultiNamespace
    supported: false
  - type: AllNamespaces
    supported: true
  install:
    spec:
      deployments:
      - name: rook-ceph-operator
        spec:
          replicas: 1
          selector: null # Not supported?
          strategy: {}
          template:
            metadata:
              creationTimestamp: null
              labels:
                app: rook-ceph-operator
            spec:
              containers:
              - args:
                - ceph
                - operator
                env:
                - name: ROOK_ALLOW_MULTIPLE_FILESYSTEMS
                  value: "false"
                - name: ROOK_LOG_LEVEL
                  value: INFO
                - name: ROOK_MON_HEALTHCHECK_INTERVAL
                  value: 45s
                - name: ROOK_MON_OUT_TIMEOUT
                  value: 300s
                - name: ROOK_DISCOVER_DEVICES_INTERVAL
                  value: 60m
                - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
                  value: "false"
                - name: ROOK_ENABLE_SELINUX_RELABELING
                  value: "true"
                - name: ROOK_ENABLE_FSGROUP
                  value: "true"
                - name: NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                image: rook/ceph:master
                name: rook-ceph-operator
                resources: {}
                volumeMounts:
                - mountPath: /var/lib/rook
                  name: rook-config
                - mountPath: /etc/ceph
                  name: default-config-dir
              serviceAccountName: rook-ceph-system
              volumes:
              - emptyDir: {}
                name: rook-config
              - emptyDir: {}
                name: default-config-dir
      permissions:
      - rules:
        - apiGroups:
          - ""
          resources:
          - configmaps
          verbs:
          - get
          - list
          - watch
          - create
          - update
          - delete
        serviceAccountName: rook-ceph-osd
      - rules:
        - apiGroups:
          - ""
          resources:
          - pods
          - services
          - configmaps
          - nodes
          - nodes/proxy
          verbs:
          - get
          - list
          - watch
        - apiGroups:
          - batch
          resources:
          - jobs
          verbs:
          - get
          - list
          - watch
          - create
          - update
          - delete
        - apiGroups:
          - ceph.rook.io
          resources:
          - '*'
          verbs:
          - '*'
        serviceAccountName: rook-ceph-mgr
      - rules:
        - apiGroups:
          - ""
          resources:
          - pods
          - configmaps
          - events
          - persistentvolumes
          - persistentvolumeclaims
          - endpoints
          - secrets
          - pods/log
          - services
          verbs:
          - get
          - list
          - watch
          - patch
          - create
          - update
          - delete
        - apiGroups:
          - apps
          resources:
          - daemonsets
          - replicasets
          - deployments
          verbs:
          - get
          - list
          - watch
          - create
          - update
          - delete
        - apiGroups:
          - ""
          resources:
          - nodes
          - nodes/proxy
          verbs:
          - get
          - list
          - watch
        - apiGroups:
          - storage.k8s.io
          resources:
          - storageclasses
          verbs:
          - get
          - list
          - watch
        - apiGroups:
          - batch
          resources:
          - jobs
          verbs:
          - get
          - list
          - watch
          - create
          - update
          - delete
        - apiGroups:
          - ceph.rook.io
          resources:
          - '*'
          verbs:
          - '*'
        - apiGroups:
          - rook.io
          resources:
          - '*'
          verbs:
          - '*'
        serviceAccountName: rook-ceph-system
    strategy: deployment

Am I missing something obvious?
Thanks in advance for your help.

make run-local is broken

I tried to follow https://github.com/operator-framework/operator-lifecycle-manager/blob/master/Documentation/install/install.md#run-locally-with-minikube and got:

$ make run-local
./scripts/package-release.sh  deploy/tectonic-alm-operator/manifests/ deploy/tectonic-alm-operator/values.yaml
Usage: ./scripts/package-release.sh semver chart values
* semver: semver-formatted version for this package
* chart: the directory to output the chart
* values: the values file
make: *** [tectonic-release] Error 1

none of the deployments work with OKD 3.11

The latest deployment for OKD makes packageserver restart.
The 0.7.4 deployment makes the operator go into CrashLoopBackOff.

Catalogs don't work, and issuing a CSV alone to openshift-operators fails; in the cluster console the status is failed and no deployments are created.

Any ideas how to install OLM with OKD 3.11?

`make run-local` for Minikube fails with "packageserver" deployment failure

I was trying to install this on Minikube, using make run-local as given in the README (https://github.com/operator-framework/operator-lifecycle-manager/blob/master/Documentation/install/install.md).

The installation fails with the below error message:

deployment "olm-operator" successfully rolled out
deployment "catalog-operator" successfully rolled out
Error from server (NotFound): deployments.extensions "packageserver" not found
retrying check rollout status for deployment "packageserver"...
Error from server (NotFound): deployments.extensions "packageserver" not found
retrying check rollout status for deployment "packageserver"...
Error from server (NotFound): deployments.extensions "packageserver" not found
retrying check rollout status for deployment "packageserver"...
Error from server (NotFound): deployments.extensions "packageserver" not found
retrying check rollout status for deployment "packageserver"...
Error from server (NotFound): deployments.extensions "packageserver" not found
retrying check rollout status for deployment "packageserver"...
Error from server (NotFound): deployments.extensions "packageserver" not found
retrying check rollout status for deployment "packageserver"...
Error from server (NotFound): deployments.extensions "packageserver" not found
retrying check rollout status for deployment "packageserver"...
Error from server (NotFound): deployments.extensions "packageserver" not found
retrying check rollout status for deployment "packageserver"...
Error from server (NotFound): deployments.extensions "packageserver" not found
retrying check rollout status for deployment "packageserver"...
Error from server (NotFound): deployments.extensions "packageserver" not found
retrying check rollout status for deployment "packageserver"...
deployment "packageserver" failed to roll out
make: *** [run-local] Error 1

minikube version: v0.26.1
Output of $kubectl version

Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:16Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

VM provider: Virtualbox
Platform: MacOS mojave 10.14.1

OLM GUI does not have permission to list its OLM CRDs

On a current Kubernetes cluster (see below), the "Kubernetes Marketplace" page shows:

Restricted Access
You don't have access to this section due to cluster policy.
packagemanifests.packages.apps.redhat.com is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "packagemanifests" in API group "packages.apps.redhat.com" at the cluster scope

I installed OLM this way:

  1. Run Kubernetes using hack/local-up-cluster.sh (~today-ish master)
  2. Install OLM using kubectl create -f deploy/upstream/manifests/latest/ (with current OLM master)
  3. Run console using ./scripts/run_console_local.sh

make manifests error when rendering templates with helm

When running the manifests command ver=0.0.0-local values_file=Documentation/install/local-values-shift.yaml make manifests, I get this error:

mkdir -p build/resources
helm template -n olm -f build/chart/values.yaml build/chart --output-dir build/resources
Error: rendering template failed: runtime error: invalid memory address or nil pointer dereference

What could be the cause of it?

Can't deploy OLM onto OpenShift

I'm attempting to get OLM running and viewable from the web console and hitting some rough patches. My latest issue is with oc cluster up failing due to "unknown flag --service-catalog" as shown below.

30 minutes ago I was browsing the service-catalog web interface just fine, but failed to view the rest of the OKD interface. I will write up another issue for that if I can reproduce it soon.

Currently have commit 0a5a3... checked out.

[dwhatley@precision-t operator-lifecycle-manager]$ minishift version
minishift v1.23.0+91235ee

[dwhatley@precision-t operator-lifecycle-manager]$ minishift delete
You are deleting the Minishift VM: 'minishift'. Do you want to continue [y/N]?: y
Deleting the Minishift VM...
Minishift VM deleted.


[dwhatley@precision-t operator-lifecycle-manager]$ make run-local-shift
. ./scripts/build_local_shift.sh
-- Starting profile 'minishift'
-- Check if deprecated options are used ... OK
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.10.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.10.0' is supported ... OK
-- Checking if requested hypervisor 'kvm' is supported on this platform ... OK
-- Checking if KVM driver is installed ... 
   Driver is available at /usr/local/bin/docker-machine-driver-kvm ... 
   Checking driver binary is executable ... OK
-- Checking if Libvirt is installed ... OK
-- Checking if Libvirt default network is present ... OK
-- Checking if Libvirt default network is active ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting the OpenShift cluster using 'kvm' hypervisor ...
-- Minishift VM will be configured with ...
   Memory:    4 GB
   vCPUs :    2
   Disk size: 20 GB
-- Starting Minishift VM ................. OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ... 
   Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ... 
   Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 1% used OK
   Importing 'openshift/origin:v3.10.0' .............................. OK
   Importing 'openshift/origin-docker-registry:v3.10.0' .............. OK
   Importing 'openshift/origin-haproxy-router:v3.10.0' ............... OK
-- OpenShift cluster will be configured with ...
   Version: v3.10.0
-- Copying oc binary from the OpenShift container image to VM ... OK
-- Starting OpenShift cluster -- Extra 'oc' cluster up flags (experimental) ... 
   '--service-catalog'
.Error during 'cluster up' execution: Error starting the cluster. ssh command error:
command : /var/lib/minishift/bin/oc cluster up --base-dir /var/lib/minishift/base --public-hostname 192.168.42.121 --routing-suffix 192.168.42.121.nip.io --service-catalog
err     : exit status 1
output  : Error: unknown flag: --service-catalog

Unhelpful "RequirementsNotMet" message when attempting to deploy Template Service Broker Operator CSV

My team has been working on writing a CSV that will allow the Template Service Broker (TSB) to be deployed via OLM.

Currently, you must deploy the TSB Operator in the openshift-template-service-broker namespace when deploying with OLM (this is probably a requirement for the TSB in general, I'm somewhat new to actually using it). It took me a while to realize this, because the error message that OLM spits out doesn't point towards the actual problem.

I see the following status on my CSV instance YAML if I try to deploy somewhere other than the openshift-template-service-broker namespace:

status:
  conditions:
    - lastTransitionTime: '2018-10-29T20:28:17Z'
      lastUpdateTime: '2018-10-29T20:28:17Z'
      message: requirements not yet checked
      phase: Pending
      reason: RequirementsUnknown
    - lastTransitionTime: '2018-10-29T20:28:17Z'
      lastUpdateTime: '2018-10-29T20:28:17Z'
      message: one or more requirements couldn't be found
      phase: Pending
      reason: RequirementsNotMet
  lastTransitionTime: '2018-10-29T20:28:17Z'
  lastUpdateTime: '2018-10-29T20:28:17Z'
  message: one or more requirements couldn't be found
  phase: Pending
  reason: RequirementsNotMet
  requirementStatus:
    - group: apiextensions.k8s.io
      kind: CustomResourceDefinition
      name: templateservicebrokers.osb.openshift.io
      status: Present
      uuid: 5c8facba-dbab-11e8-8b28-1866da0d45a8
      version: v1beta1
    - group: ''
      kind: ServiceAccount
      name: apiserver
      status: NotPresent
      version: v1
    - group: ''
      kind: ServiceAccount
      name: template-service-broker-operator
      status: NotPresent
      version: v1
    - group: ''
      kind: ServiceAccount
      name: template-service-broker-client
      status: NotPresent
      version: v1

However, I can verify manually that the 3 ServiceAccounts that are supposedly NotPresent do in fact exist.

[dwhatley@precision-t template-service-broker-operator]$ oc get sa -n myproject
NAME                               SECRETS   AGE
apiserver                          2         12m
builder                            2         1h
default                            2         1h
deployer                           2         1h
template-service-broker-client     2         12m
template-service-broker-operator   2         12m

And in fact, the installplan for the TSB shows that all of the required ServiceAccounts were created successfully:

(screenshot)

As to why this is occurring, my unverified belief is that OpenShift has a hard-coded security policy which allows the TSB to function properly only in the designated openshift-template-service-broker namespace. I think this is necessary due to the high privilege level that the TSB operates at. Still working on getting more details about my "unverified belief", will post a comment here if I find something more concrete.

Here's a link to the CatalogSource where the TSB CSV (yay acronyms) I'm trying to deploy comes from: https://github.com/fusor/catasb/blob/fc14e50852f0cc36fbf6d61eca49012fe4476b00/ansible/roles/olm_setup/templates/osb-operators.configmap.upstream.yaml

Latest console image is broken

Setup: Minikube, Kubernetes 1.11.4, make run-local, ./scripts/run_console_local.sh:

(screenshot)

When changing from latest to quay.io/openshift/origin-console:v3.11, it works again.

Also, it looks like latest points to a v4.0 version, so maybe it would be a good idea to pin the tag in run_console_local.sh?

upstream deployment: olm pod keeps crashing

I followed the install guide to deploy the upstream version of OLM.

kubectl create -f deploy/upstream/manifests/latest/

Upon inspecting the olm namespace, I noticed that one of the pods (registry?) keeps crashing:

k -n olm get po  -w
NAME                                READY   STATUS             RESTARTS   AGE
catalog-operator-77ffcc7f88-w8ctc   1/1     Running            0          12s
olm-operator-c77b64c95-57lb9        0/1     CrashLoopBackOff   1          12s
olm-operators-bh6pv                 1/1     Running            0          10s
olm-operator-c77b64c95-57lb9   0/1   Error   2     17s
olm-operator-c77b64c95-57lb9   0/1   CrashLoopBackOff   2     26s
olm-operator-c77b64c95-57lb9   0/1   Error   3     40s
olm-operator-c77b64c95-57lb9   0/1   CrashLoopBackOff   3     41s
olm-operator-c77b64c95-57lb9   0/1   Error   4     89s
olm-operator-c77b64c95-57lb9   0/1   CrashLoopBackOff   4     96s

Looks like a bug

k -n olm logs po/olm-operator-c77b64c95-57lb9
flag provided but not defined: -writeStatusName
Usage of /bin/olm:
  -alsologtostderr
        log to standard error as well as files
  -debug
        use debug log level
  -interval duration
        wake up interval (default 5m0s)
  -kubeconfig string
        absolute path to the kubeconfig file
  -log_backtrace_at value
        when logging hits line file:N, emit a stack trace
  -log_dir string
        If non-empty, write log files in this directory
  -logtostderr
        log to standard error instead of files
  -stderrthreshold value
        logs at or above this threshold go to stderr
  -v value
        log level for V logs
  -version
        displays olm version
  -vmodule value
        comma-separated list of pattern=N settings for file-filtered logging
  -watchedNamespaces -watchedNamespaces=""
        comma separated list of namespaces for olm operator to watch. If not set, or set to the empty string (e.g. -watchedNamespaces=""), olm operator will watch all namespaces in the cluster.

olm-operator local run expects `master` image tag which is unavailable

Environment:

Using make run-local-shift.

Issue:

I saw that the olm-operator pod went into ImagePullBackOff, failing to pull the image quay.io/coreos/olm with the master tag.

Workaround

I was able to fix this by editing the pod definition to use one of the existing tags, for example 0.7.3.

Looks like this needs to be fixed somewhere in the makefile or bringup scripts. Please feel free to let me know if any further details are needed. :)

vendoring fails when running `make vendor`

I tried to vendor the codebase and got an error about the missing repo github.com/coreos-inc/tectonic-operators; my guess is that this repo is private and only accessible to certain people. Can this code be made available if it is private, or can the Gopkg link be changed to the appropriate location?

$ make vendor                                 
/home/hummer/go/bin/dep ensure -v -vendor-only
(1/69) Wrote github.com/PuerkitoBio/[email protected]   
(2/69) Wrote github.com/go-openapi/swag@master                                                           
(3/69) Wrote github.com/PuerkitoBio/urlesc@master   
(4/69) Wrote github.com/davecgh/[email protected]                                                                        
(5/69) Wrote github.com/go-openapi/jsonreference@master                                                  
(6/69) Wrote github.com/beorn7/perks@master         
(7/69) Wrote github.com/pmezard/[email protected]   
(8/69) Wrote github.com/coreos/[email protected]     
(9/69) Wrote github.com/go-openapi/loads@master     
(10/69) Wrote github.com/asaskevich/govalidator@v8                                                       
(11/69) Wrote k8s.io/kube-openapi@master            
(12/69) Wrote github.com/go-openapi/strfmt@master
(13/69) Wrote github.com/go-openapi/runtime@master  
(14/69) Wrote github.com/golang/glog@master         
(15/69) Wrote github.com/golang/groupcache@master                                                        
(16/69) Wrote github.com/ghodss/[email protected]         
(17/69) Wrote github.com/go-openapi/errors@master
(18/69) Wrote github.com/gorilla/[email protected]         
(19/69) Wrote github.com/google/gofuzz@master       
(20/69) Wrote github.com/howeyc/gopass@master                                                            
(21/69) Wrote github.com/pmorie/[email protected]                                      
(22/69) Wrote github.com/prometheus/[email protected]
(23/69) Wrote github.com/go-openapi/analysis@master
(24/69) Wrote github.com/imdario/[email protected]
(25/69) Wrote github.com/gorilla/[email protected]
(26/69) Wrote github.com/golang/[email protected]
(27/69) Wrote github.com/prometheus/client_model@master
(28/69) Wrote github.com/go-openapi/validate@master
(29/69) Wrote github.com/gregjones/httpcache@master
(30/69) Wrote github.com/hashicorp/golang-lru@master
(31/69) Failed to write github.com/coreos-inc/tectonic-operators@beryllium-m1
(32/69) Failed to write github.com/google/btree@master
(33/69) Failed to write github.com/modern-go/[email protected]
(34/69) Failed to write github.com/sirupsen/[email protected]
(35/69) Failed to write github.com/juju/[email protected]
(36/69) Failed to write github.com/pmorie/[email protected]
(37/69) Failed to write github.com/emicklei/[email protected]
(38/69) Failed to write github.com/prometheus/common@master
(39/69) Failed to write github.com/matttproud/[email protected]
(40/69) Failed to write github.com/json-iterator/[email protected]
(41/69) Failed to write github.com/prometheus/procfs@master
(42/69) Failed to write github.com/golang/[email protected]
(43/69) Failed to write github.com/mitchellh/mapstructure@master
(44/69) Failed to write github.com/mailru/easyjson@master
(45/69) Failed to write github.com/go-openapi/spec@master
(46/69) Failed to write github.com/googleapis/[email protected]
(47/69) Failed to write github.com/gogo/[email protected]
grouped write of manifest, lock and vendor: error while writing out vendor tree: failed to write dep tree: failed to export github.com/coreos-inc/tectonic-operators: no valid source could be created:
        failed to set up sources from the following URLs:
https://github.com/coreos-inc/tectonic-operators
: remote repository at https://github.com/coreos-inc/tectonic-operators does not exist, or is inaccessible: fatal: could not read Username for 'https://github.com': terminal prompts disabled
: exit status 128
        failed to set up sources from the following URLs:
ssh://[email protected]/coreos-inc/tectonic-operators
: remote repository at ssh://[email protected]/coreos-inc/tectonic-operators does not exist, or is inaccessible: ERROR: Repository not found.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists. 
: exit status 128
        failed to set up sources from the following URLs:
git://github.com/coreos-inc/tectonic-operators
: remote repository at git://github.com/coreos-inc/tectonic-operators does not exist, or is inaccessible: fatal: remote error:
  Repository not found.
: exit status 128                             
        failed to set up sources from the following URLs:
http://github.com/coreos-inc/tectonic-operators
: remote repository at http://github.com/coreos-inc/tectonic-operators does not exist, or is inaccessible: fatal: could not read Username for 'https://github.com': terminal prompts disabled
: exit status 128
make: *** [Makefile:112: vendor] Error 1

change catalog operator default namespace

Currently it comes up expecting tectonic-system. There is no such namespace when you deploy the operator from Git. Passing the -namespace flag works, but this namespace is nowhere in the documentation.

make schema-check fails with missing vendored dependency

steps to reproduce:

  1. git clone
  2. set gopath
  3. make schema-check
go run ./cmd/validator/main.go ./deploy/chart/catalog_resources
vendor/github.com/emicklei/go-restful/container.go:17:2: cannot find package "github.com/emicklei/go-restful/log" in any of:
	/Users/jzelinskie/CoreOS/go/olm/src/github.com/operator-framework/operator-lifecycle-manager/vendor/github.com/emicklei/go-restful/log (vendor tree)
	/usr/local/opt/go/libexec/src/github.com/emicklei/go-restful/log (from $GOROOT)
	/Users/jzelinskie/CoreOS/go/olm/src/github.com/emicklei/go-restful/log (from $GOPATH)
make: *** [schema-check] Error 1

package-server pod keeps crashing

package-server panics regularly:

k -n olm logs -f package-server-68948c8996-rjhqt
time="2018-12-04T13:44:41Z" level=info msg="Using in-cluster kube client config"
time="2018-12-04T13:44:41Z" level=info msg="package-server configured to watch namespaces []"
time="2018-12-04T13:44:41Z" level=info msg="Using in-cluster kube client config"
time="2018-12-04T13:44:41Z" level=info msg="connection established. cluster-version: v1.14.0-alpha.0.654+6f0bd529b0c1b6-dirty"
time="2018-12-04T13:44:41Z" level=info msg="operator ready"
time="2018-12-04T13:44:41Z" level=info msg="starting informers..."
time="2018-12-04T13:44:41Z" level=info msg="waiting for caches to sync..."
I1204 13:44:41.306000       1 reflector.go:202] Starting reflector *v1alpha1.CatalogSource (5m0s) from github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:97
I1204 13:44:41.306032       1 reflector.go:240] Listing and watching *v1alpha1.CatalogSource from github.com/operator-framework/operator-lifecycle-manager/pkg/lib/queueinformer/queueinformer_operator.go:97
time="2018-12-04T13:44:41Z" level=info msg="olm/rh-operators added"
I1204 13:44:41.406209       1 shared_informer.go:123] caches populated
time="2018-12-04T13:44:41Z" level=info msg="starting workers..."
time="2018-12-04T13:44:41Z" level=info msg="getting from queue" key=olm/rh-operators queue=catsrc
[restful] 2018/12/04 13:44:41 log.go:33: [restful/swagger] listing is available at https://:5443/swaggerapi
[restful] 2018/12/04 13:44:41 log.go:33: [restful/swagger] https://:5443/swaggerui/ is mapped to folder /swagger-ui/
I1204 13:44:41.468606       1 serve.go:96] Serving securely on [::]:5443
I1204 13:44:44.297924       1 wrap.go:42] GET /healthz: (2.97766ms) 200 [[kube-probe/1.14+] 172.17.0.1:34082]
I1204 13:44:48.481561       1 wrap.go:42] GET /healthz: (89.312µs) 200 [[kube-probe/1.14+] 172.17.0.1:34092]
I1204 13:44:49.515358       1 authorization.go:73] Forbidden: "/", Reason: ""
I1204 13:44:49.515550       1 wrap.go:42] GET /: (3.272649ms) 403 [[Go-http-client/2.0] 192.168.121.204:51824]
I1204 13:44:49.521427       1 authorization.go:73] Forbidden: "/", Reason: ""
I1204 13:44:49.521565       1 wrap.go:42] GET /: (262.421µs) 403 [[Go-http-client/2.0] 192.168.121.204:51824]
I1204 13:44:50.033601       1 wrap.go:42] GET /apis/packages.apps.redhat.com/v1alpha1: (1.813112ms) 200 [[Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:63.0) Gecko/20100101 Firefox/63.0] 192.168.121.204:51856]
I1204 13:44:50.861271       1 wrap.go:42] GET /apis/packages.apps.redhat.com/v1alpha1?timeout=32s: (1.392206ms) 200 [[hyperkube/v1.14.0 (linux/amd64) kubernetes/6f0bd52/controller-discovery] 192.168.121.204:51856]
I1204 13:44:52.783994       1 wrap.go:42] GET /openapi/v2: (17.134802ms) 304 [[] 192.168.121.204:51856]
I1204 13:44:54.308413       1 wrap.go:42] GET /healthz: (3.333876ms) 200 [[kube-probe/1.14+] 172.17.0.1:34132]
I1204 13:44:57.919663       1 wrap.go:42] GET /apis/packages.apps.redhat.com/v1alpha1?timeout=32s: (4.722212ms) 200 [[hyperkube/v1.14.0 (linux/amd64) kubernetes/6f0bd52/system:serviceaccount:kube-system:resourcequota-controller] 192.168.121.204:51856]
I1204 13:44:58.180481       1 wrap.go:42] GET /apis/packages.apps.redhat.com/v1alpha1/packagemanifests?labelSelector=%21olm-visibility&limit=250: (13.400315ms) 200 [[Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:63.0) Gecko/20100101 Firefox/63.0] 192.168.121.204:51856]
I1204 13:44:58.315275       1 get.go:245] Starting watch for /apis/packages.apps.redhat.com/v1alpha1/packagemanifests, rv= labels=!olm-visibility fields= timeout=58m12.916358481s
I1204 13:44:58.491744       1 wrap.go:42] GET /healthz: (231.533µs) 200 [[kube-probe/1.14+] 172.17.0.1:34162]
I1204 13:44:59.105104       1 wrap.go:42] GET /apis/packages.apps.redhat.com/v1alpha1/packagemanifests?labelSelector=%21olm-visibility&watch=true: (791.871669ms) hijacked [[Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:63.0) Gecko/20100101 Firefox/63.0] 192.168.121.204:51888]
I1204 13:45:02.528263       1 wrap.go:42] GET /apis/packages.apps.redhat.com/v1alpha1/packagemanifests?labelSelector=%21olm-visibility&limit=250: (1.487231ms) 200 [[Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:63.0) Gecko/20100101 Firefox/63.0] 192.168.121.204:51856]
I1204 13:45:02.590947       1 get.go:245] Starting watch for /apis/packages.apps.redhat.com/v1alpha1/packagemanifests, rv= labels=!olm-visibility fields= timeout=49m56.208095793s
I1204 13:45:04.307826       1 wrap.go:42] GET /healthz: (184.978µs) 200 [[kube-probe/1.14+] 172.17.0.1:34196]
I1204 13:45:05.797241       1 wrap.go:42] GET /apis/packages.apps.redhat.com/v1alpha1?timeout=32s: (4.075613ms) 200 [[hyperkube/v1.14.0 (linux/amd64) kubernetes/6f0bd52/system:serviceaccount:kube-system:generic-garbage-collector] 192.168.121.204:51856]
panic: close of closed channel
goroutine 242 [running]:
github.com/operator-framework/operator-lifecycle-manager/pkg/package-server/provider.(*InMemoryProvider).Subscribe.func1(0xc4203de2a0, 0xc4202d7400, 0x1, 0x1, 0x1)
        /go/src/github.com/operator-framework/operator-lifecycle-manager/pkg/package-server/provider/inmem.go:302 +0xcb
created by github.com/operator-framework/operator-lifecycle-manager/pkg/package-server/provider.(*InMemoryProvider).Subscribe
        /go/src/github.com/operator-framework/operator-lifecycle-manager/pkg/package-server/provider/inmem.go:296 +0x27c

I installed OLM this way:

  1. run kubernetes using hack/local-up-cluster.sh
  2. install OLM using kubectl create -f deploy/upstream/manifests/latest/ (with today's OLM master)
  3. run console using ./scripts/run_console_local.sh
  4. tune up permissions, see #597
  5. randomly click around in the console; creating a subscription hits it pretty regularly.

Creation of CRD defined in 05-catalogsource.crd.yaml fails

When trying to deploy using Ansible, the creation of this CRD fails with this message:

The CustomResourceDefinition "catalogsources.operators.coreos.com" is invalid: spec.validation.openAPIV3Schema: Invalid value: apiextensions.JSONSchemaProps{ID:"", Schema:"", Ref:(*string)(nil), Description:"Represents a subscription to a source and channel", Type:"object", Format:"", Title:"", Default:(*apiextensions.JSON)(nil), Maximum:(*float64)(nil), ExclusiveMaximum:false, Minimum:(*float64)(nil), ExclusiveMinimum:false, MaxLength:(*int64)(nil), MinLength:(*int64)(nil), Pattern:"", MaxItems:(*int64)(nil), MinItems:(*int64)(nil), UniqueItems:false, MultipleOf:(*float64)(nil), Enum:[]apiextensions.JSON(nil), MaxProperties:(*int64)(nil), MinProperties:(*int64)(nil), Required:[]string{"spec"}, Items:(*apiextensions.JSONSchemaPropsOrArray)(nil), AllOf:[]apiextensions.JSONSchemaProps(nil), OneOf:[]apiextensions.JSONSchemaProps(nil), AnyOf:[]apiextensions.JSONSchemaProps(nil), Not:(*apiextensions.JSONSchemaProps)(nil), Properties:map[string]apiextensions.JSONSchemaProps{"spec":apiextensions.JSONSchemaProps{ID:"", Schema:"", Ref:(*string)(nil), Description:"Spec for a catalog source.", Type:"object", Format:"", Title:"", Default:(*apiextensions.JSON)(nil), Maximum:(*float64)(nil), ExclusiveMaximum:false, Minimum:(*float64)(nil), ExclusiveMinimum:false, MaxLength:(*int64)(nil), MinLength:(*int64)(nil), Pattern:"", MaxItems:(*int64)(nil), MinItems:(*int64)(nil), UniqueItems:false, MultipleOf:(*float64)(nil), Enum:[]apiextensions.JSON(nil), MaxProperties:(*int64)(nil), MinProperties:(*int64)(nil), Required:[]string{"sourceType"}, Items:(*apiextensions.JSONSchemaPropsOrArray)(nil), AllOf:[]apiextensions.JSONSchemaProps(nil), OneOf:[]apiextensions.JSONSchemaProps(nil), AnyOf:[]apiextensions.JSONSchemaProps(nil), Not:(*apiextensions.JSONSchemaProps)(nil), Properties:map[string]apiextensions.JSONSchemaProps{"secrets":apiextensions.JSONSchemaProps{ID:"", Schema:"", Ref:(*string)(nil), Description:"A set of secrets that can be used to access the contents of the catalog. It is best to keep this list small, since each will need to be tried for every catalog entry.", Type:"array", Format:"", Title:"", Default:(*apiextensions.JSON)(nil), Maximum:(*float64)(nil), ExclusiveMaximum:false, Minimum:(*float64)(nil), ExclusiveMinimum:false, MaxLength:(*int64)(nil), MinLength:(*int64)(nil), Pattern:"", MaxItems:(*int64)(nil), MinItems:(*int64)(nil), UniqueItems:false, MultipleOf:(*float64)(nil), Enum:[]apiextensions.JSON(nil), MaxProperties:(*int64)(nil), MinProperties:(*int64)(nil), Required:[]string(nil), Items:(*apiextensions.JSONSchemaPropsOrArray)(0xc43113fda0), AllOf:[]apiextensions.JSONSchemaProps(nil), OneOf:[]apiextensions.JSONSchemaProps(nil), AnyOf:[]apiextensions.JSONSchemaProps(nil), Not:(*apiextensions.JSONSchemaProps)(nil), Properties:map[string]apiextensions.JSONSchemaProps(nil), AdditionalProperties:(*apiextensions.JSONSchemaPropsOrBool)(nil), PatternProperties:map[string]apiextensions.JSONSchemaProps(nil), Dependencies:apiextensions.JSONSchemaDependencies(nil), AdditionalItems:(*apiextensions.JSONSchemaPropsOrBool)(nil), Definitions:apiextensions.JSONSchemaDefinitions(nil), ExternalDocs:(*apiextensions.ExternalDocumentation)(nil), Example:(*apiextensions.JSON)(nil)}, "sourceType":apiextensions.JSONSchemaProps{ID:"", Schema:"", Ref:(*string)(nil), Description:"The type of the source. 
Currently the only supported type is \"internal\".", Type:"string", Format:"", Title:"", Default:(*apiextensions.JSON)(nil), Maximum:(*float64)(nil), ExclusiveMaximum:false, Minimum:(*float64)(nil), ExclusiveMinimum:false, MaxLength:(*int64)(nil), MinLength:(*int64)(nil), Pattern:"", MaxItems:(*int64)(nil), MinItems:(*int64)(nil), UniqueItems:false, MultipleOf:(*float64)(nil), Enum:[]apiextensions.JSON{"internal"}, MaxProperties:(*int64)(nil), MinProperties:(*int64)(nil), Required:[]string(nil), Items:(*apiextensions.JSONSchemaPropsOrArray)(nil), AllOf:[]apiextensions.JSONSchemaProps(nil), OneOf:[]apiextensions.JSONSchemaProps(nil), AnyOf:[]apiextensions.JSONSchemaProps(nil), Not:(*apiextensions.JSONSchemaProps)(nil), Properties:map[string]apiextensions.JSONSchemaProps(nil), AdditionalProperties:(*apiextensions.JSONSchemaPropsOrBool)(nil), PatternProperties:map[string]apiextensions.JSONSchemaProps(nil), Dependencies:apiextensions.JSONSchemaDependencies(nil), AdditionalItems:(*apiextensions.JSONSchemaPropsOrBool)(nil), Definitions:apiextensions.JSONSchemaDefinitions(nil), ExternalDocs:(*apiextensions.ExternalDocumentation)(nil), Example:(*apiextensions.JSON)(nil)}, "configMap":apiextensions.JSONSchemaProps{ID:"", Schema:"", Ref:(*string)(nil), Description:"The name of a ConfigMap that holds the entries for an in-memory catalog.", Type:"string", Format:"", Title:"", Default:(*apiextensions.JSON)(nil), Maximum:(*float64)(nil), ExclusiveMaximum:false, Minimum:(*float64)(nil), ExclusiveMinimum:false, MaxLength:(*int64)(nil), MinLength:(*int64)(nil), Pattern:"", MaxItems:(*int64)(nil), MinItems:(*int64)(nil), UniqueItems:false, MultipleOf:(*float64)(nil), Enum:[]apiextensions.JSON(nil), MaxProperties:(*int64)(nil), MinProperties:(*int64)(nil), Required:[]string(nil), Items:(*apiextensions.JSONSchemaPropsOrArray)(nil), AllOf:[]apiextensions.JSONSchemaProps(nil), OneOf:[]apiextensions.JSONSchemaProps(nil), AnyOf:[]apiextensions.JSONSchemaProps(nil), Not:(*apiextensions.JSONSchemaProps)(nil), Properties:map[string]apiextensions.JSONSchemaProps(nil), AdditionalProperties:(*apiextensions.JSONSchemaPropsOrBool)(nil), PatternProperties:map[string]apiextensions.JSONSchemaProps(nil), Dependencies:apiextensions.JSONSchemaDependencies(nil), AdditionalItems:(*apiextensions.JSONSchemaPropsOrBool)(nil), Definitions:apiextensions.JSONSchemaDefinitions(nil), ExternalDocs:(*apiextensions.ExternalDocumentation)(nil), Example:(*apiextensions.JSON)(nil)}, "displayName":apiextensions.JSONSchemaProps{ID:"", Schema:"", Ref:(*string)(nil), Description:"Pretty name for display", Type:"string", Format:"", Title:"", Default:(*apiextensions.JSON)(nil), Maximum:(*float64)(nil), ExclusiveMaximum:false, Minimum:(*float64)(nil), ExclusiveMinimum:false, MaxLength:(*int64)(nil), MinLength:(*int64)(nil), Pattern:"", MaxItems:(*int64)(nil), MinItems:(*int64)(nil), UniqueItems:false, MultipleOf:(*float64)(nil), Enum:[]apiextensions.JSON(nil), MaxProperties:(*int64)(nil), MinProperties:(*int64)(nil), Required:[]string(nil), Items:(*apiextensions.JSONSchemaPropsOrArray)(nil), AllOf:[]apiextensions.JSONSchemaProps(nil), OneOf:[]apiextensions.JSONSchemaProps(nil), AnyOf:[]apiextensions.JSONSchemaProps(nil), Not:(*apiextensions.JSONSchemaProps)(nil), Properties:map[string]apiextensions.JSONSchemaProps(nil), AdditionalProperties:(*apiextensions.JSONSchemaPropsOrBool)(nil), PatternProperties:map[string]apiextensions.JSONSchemaProps(nil), Dependencies:apiextensions.JSONSchemaDependencies(nil), 
AdditionalItems:(*apiextensions.JSONSchemaPropsOrBool)(nil), Definitions:apiextensions.JSONSchemaDefinitions(nil), ExternalDocs:(*apiextensions.ExternalDocumentation)(nil), Example:(*apiextensions.JSON)(nil)}, "publisher":apiextensions.JSONSchemaProps{ID:"", Schema:"", Ref:(*string)(nil), Description:"The name of an entity that publishes this catalog", Type:"string", Format:"", Title:"", Default:(*apiextensions.JSON)(nil), Maximum:(*float64)(nil), ExclusiveMaximum:false, Minimum:(*float64)(nil), ExclusiveMinimum:false, MaxLength:(*int64)(nil), MinLength:(*int64)(nil), Pattern:"", MaxItems:(*int64)(nil), MinItems:(*int64)(nil), UniqueItems:false, MultipleOf:(*float64)(nil), Enum:[]apiextensions.JSON(nil), MaxProperties:(*int64)(nil), MinProperties:(*int64)(nil), Required:[]string(nil), Items:(*apiextensions.JSONSchemaPropsOrArray)(nil), AllOf:[]apiextensions.JSONSchemaProps(nil), OneOf:[]apiextensions.JSONSchemaProps(nil), AnyOf:[]apiextensions.JSONSchemaProps(nil), Not:(*apiextensions.JSONSchemaProps)(nil), Properties:map[string]apiextensions.JSONSchemaProps(nil), AdditionalProperties:(*apiextensions.JSONSchemaPropsOrBool)(nil), PatternProperties:map[string]apiextensions.JSONSchemaProps(nil), Dependencies:apiextensions.JSONSchemaDependencies(nil), AdditionalItems:(*apiextensions.JSONSchemaPropsOrBool)(nil), Definitions:apiextensions.JSONSchemaDefinitions(nil), ExternalDocs:(*apiextensions.ExternalDocumentation)(nil), Example:(*apiextensions.JSON)(nil)}}, AdditionalProperties:(*apiextensions.JSONSchemaPropsOrBool)(nil), PatternProperties:map[string]apiextensions.JSONSchemaProps(nil), Dependencies:apiextensions.JSONSchemaDependencies(nil), AdditionalItems:(*apiextensions.JSONSchemaPropsOrBool)(nil), Definitions:apiextensions.JSONSchemaDefinitions(nil), ExternalDocs:(*apiextensions.ExternalDocumentation)(nil), Example:(*apiextensions.JSON)(nil)}}, AdditionalProperties:(*apiextensions.JSONSchemaPropsOrBool)(nil), PatternProperties:map[string]apiextensions.JSONSchemaProps(nil), Dependencies:apiextensions.JSONSchemaDependencies(nil), AdditionalItems:(*apiextensions.JSONSchemaPropsOrBool)(nil), Definitions:apiextensions.JSONSchemaDefinitions(nil), ExternalDocs:(*apiextensions.ExternalDocumentation)(nil), Example:(*apiextensions.JSON)(nil)}: if subresources for custom resources are enabled, only properties can be used at the root of the schema

After making the following changes to the file, it works:

Original:

  validation:
    openAPIV3Schema:
      type: object
      description: Represents a subscription to a source and channel
      required:
      - spec
      properties:
        spec:
          type: object
          description: Spec for a catalog source.
          required:
          - sourceType

Modified:

  validation:
    openAPIV3Schema:
      properties:
        spec:
          type: object
          description: Spec for a catalog source.
          required:
          - sourceType

HTH
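
A hedged way to verify that the patched CRD is accepted (the file path below is a placeholder for wherever the CRD manifest lives):

# Re-apply the patched CRD manifest (placeholder path)
kubectl apply -f <path-to>/catalogsource.crd.yaml
# Wait for the API server to mark the CRD as established
kubectl wait --for=condition=Established crd/catalogsources.operators.coreos.com --timeout=60s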

`./scripts/run_console_local.sh` doesn't provide a usable console with `make run-local` or `make run-local-shift`

I've been trying to get started with interacting with OLM via the Web UI and have run into issues using minikube / minishift: I run `make run-local` or `make run-local-shift`, followed by `./scripts/run_console_local.sh` to start the OKD console.

Navigating to https://my_ip:8443 only shows the service catalog and nothing else, so I figured that running `run_console_local.sh` might help.

Once I am successful, I'm expecting a new section of the UI that I can interact with, as shown in the screenshot from README.md (below).

[screenshot: sub-view]

The actual result is not as functional: menu buttons are clickable, but not much else happens on the page.

I am also seeing a bunch of errors resulting from the ./scripts/run_console_local.sh script:

[dwhatley@precision-t operator-lifecycle-manager]$ ./scripts/run_console_local.sh 
Using https://192.168.42.152:8443
2018/08/29 20:40:27 cmd/main: cookies are not secure because base-address is not https!
2018/08/29 20:40:27 cmd/main: running with AUTHENTICATION DISABLED!
2018/08/29 20:40:27 cmd/main: Binding to 0.0.0.0:9000...
2018/08/29 20:40:27 cmd/main: not using TLS
2018/08/29 20:40:29 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:40:29 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
[... Lots more of these ...]
2018/08/29 20:41:18 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:41:19 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:41:22 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:41:24 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:41:24 http: proxy error: dial tcp 192.168.42.152:8443: connect: connection refused
2018/08/29 20:41:25 http: proxy error: dial tcp 192.168.42.152:8443: i/o timeout
2018/08/29 20:41:25 http: proxy error: dial tcp 192.168.42.152:8443: i/o timeout
2018/08/29 20:41:25 http: proxy error: dial tcp 192.168.42.152:8443: i/o timeout

Will continue to debug, but looking for pointers from the experts on where to start :). Perhaps docs need an update as well if this isn't a working method anymore.
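
A hedged starting point (assuming the cluster address 192.168.42.152:8443 from the log above): the repeated "connection refused" errors mean the console's proxy cannot reach the cluster API endpoint at all, so checking that the VM and API server are up usually narrows this down:

# Is the local cluster VM still running?
minikube status        # or: minishift status
# Can the API server be reached at the address the console proxies to?
curl -k https://192.168.42.152:8443/healthz
# Are the OLM pods themselves running?
kubectl get pods --all-namespaces | grep -E 'olm|catalog'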

make run-local-shift fails on minishift due to the lack of helm

minishift doesn't ship helm by default (it can be installed via minishift-addons), so `make run-local-shift` fails with `./scripts/package-release.sh: line 26: helm: command not found` (a workaround sketch follows after the log below).

[stirabos@t470s operator-lifecycle-manager]$ make run-local-shift 
. ./scripts/build_local_shift.sh
-- Starting profile 'minishift'
The 'minishift' VM is already running.
Logged into "https://192.168.42.201:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-dns
    kube-proxy
    kube-public
    kube-system
  * myproject
    openshift
    openshift-apiserver
    openshift-controller-manager
    openshift-core-operators
    openshift-infra
    openshift-node
    openshift-web-console

Using project "myproject".
Sending build context to Docker daemon 53.41 MB
Step 1/6 : FROM golang:1.10
Trying to pull repository docker.io/library/golang ... 
1.10: Pulling from docker.io/library/golang
05d1a5232b46: Pull complete 
5cee356eda6b: Pull complete 
89d3385f0fd3: Pull complete 
80ae6b477848: Pull complete 
94ebfeaaddf3: Pull complete 
e132030a369d: Pull complete 
c67c5750c788: Pull complete 
Digest: sha256:a325153f1f1e2edb76a76ad789aff172b89dd6178e8f74d39ef87c04d87d1961
Status: Downloaded newer image for docker.io/golang:1.10
 ---> a4afc24299ee
Step 2/6 : WORKDIR /go/src/github.com/operator-framework/operator-lifecycle-manager
 ---> 80c03d68299f
Removing intermediate container 7a9f43d79279
Step 3/6 : COPY . .
 ---> e9063da745a2
Removing intermediate container 2a58de286bf7
Step 4/6 : RUN make build && cp bin/olm /bin/olm && cp bin/catalog /bin/catalog
 ---> Running in 421ecb380f26

building bin/catalog
building bin/olm
building bin/package-server
building bin/validator
 ---> 139d17cad118
Removing intermediate container 421ecb380f26
Step 5/6 : COPY deploy/chart/catalog_resources /var/catalog_resources
 ---> c288d3e16ea7
Removing intermediate container dcb0a8633750
Step 6/6 : CMD /bin/olm
 ---> Running in 6e79ac9a47a2
 ---> 4befc491f3d9
Removing intermediate container 6e79ac9a47a2
Successfully built 4befc491f3d9
mkdir -p build/resources
. ./scripts/package-release.sh 1.0.0-local build/resources Documentation/install/local-values-shift.yaml
./scripts/package-release.sh: line 26: helm: command not found
cp: cannot stat '/tmp/tmp.LLaaVzSG0x/chart/olm/templates/.': No such file or directory
make: *** [Makefile:63: run-local-shift] Error 1
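
As a workaround, a hedged sketch (the Helm version below is just an example of a Helm 2 release; scripts/package-release.sh appears to only need the client for templating):

# Install a Helm 2 client on the PATH
curl -LO https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
tar -zxvf helm-v2.14.3-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version --client
# Then re-run the failing target
make run-local-shift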

Getting started guide does not work

Having deployed OLM, I created a memcached CSV. After that, I created the CRD and RBAC. However, the operator is flagged as failed:

[screenshot]

Where can I find a reason for this failure?
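
A hedged sketch of where to look (namespace and CSV name are placeholders): the CSV's status conditions, the namespace events, and the olm-operator logs usually carry the failure reason:

kubectl -n <namespace> get clusterserviceversions
kubectl -n <namespace> describe clusterserviceversion <memcached-csv-name>   # check status.conditions and requirementStatus
kubectl -n <namespace> get events --sort-by=.lastTimestamp
kubectl -n olm logs deployment/olm-operator                                  # assumes OLM is installed in the olm namespace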

CSV waits for wrong CRD version to be available

I took my first steps with OLM and experimented a bit by writing my own CSV. In my first attempt I got the CRD version wrong and used 'v1beta1' instead of 'v1alpha1', so I removed the CSV and re-applied it with the corrected version.

However, I was first stuck in the RequirementsNotMet condition, with OLM still waiting on the previously specified CSV, but now I'm in this state for the CSV:

// ...
spec:
  customresourcedefinitions:
    owned:
    - description: Camel in the cloud
      displayName: Syndesis
      kind: Syndesis
      name: syndesises.syndesis.io
      specDescriptors:
      - description: Specify how many integrations are allowed for this installation
        displayName: Integration Limit
        path: integration.limit
        x-descriptors:
        - urn:alm:descriptor:com.tectonic.ui:label
      version: v1alpha1
// ....
status:
  // ...
  message: install strategy completed with no errors
  phase: Succeeded
  reason: InstallSucceeded
  requirementStatus:
  - group: apiextensions.k8s.io
    kind: CustomResourceDefinition
    name: syndesises.syndesis.io
    status: Present
    uuid: a76cbe04-cc6a-11e8-acbc-0ed2eff6cb2c
    version: v1beta1

which actually says that version v1beta1 is present (but v1alpha1 is configured in the spec).

Also, there is only a v1alpha1 CRD registered in the cluster (oc get crd syndesises.syndesis.io -o json | jq .spec):

{
  "group": "syndesis.io",
  "names": {
    "kind": "Syndesis",
    "listKind": "SyndesisList",
    "plural": "syndesises",
    "singular": "syndesis"
  },
  "scope": "Namespaced",
  "version": "v1alpha1"
}

Could it be that the requirementStatus uses some stale cached version?
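
One hedged way to rule out a stale informer cache (assuming a default install where OLM runs in the olm namespace with the app=olm-operator label) is to restart the olm-operator pod and watch whether the requirementStatus is recomputed:

kubectl -n olm delete pod -l app=olm-operator
kubectl -n olm logs -f deployment/olm-operator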

MountVolume.SetUp failed for volume "config-volume" : secrets "alertmanager-alertmanager-main" not found

okd 3.11

When creating an Alertmanager:


Time | Type | Reason | Message
-- | -- | -- | --
9:57:11 AM | Warning | Failed Mount | MountVolume.SetUp failed for volume "config-volume" : secrets "alertmanager-alertmanager-main" not found (9 times in the last 3 minutes)
9:57:06 AM | Warning | Failed Mount | Unable to mount volumes for pod "alertmanager-alertmanager-main-0_prometheus-monitoring(c0bafd64-0fc3-11e9-983e-fa163eb48449)": timeout expired waiting for volumes to attach or mount for pod "prometheus-monitoring"/"alertmanager-alertmanager-main-0". list of unmounted volumes=[config-volume]. list of unattached volumes=[config-volume alertmanager-alertmanager-main-db default-token-v5lgg]
9:55:03 AM | Normal | Scheduled | Successfully assigned prometheus-monitoring/alertmanager-alertmanager-main-0 to okd311-1
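
A hedged workaround sketch (the secret name and key follow the prometheus-operator convention of alertmanager-<name> with an alertmanager.yaml key; treat the exact names as assumptions): creating the expected secret lets the config-volume mount succeed:

# alertmanager.yaml is a local Alertmanager configuration file
kubectl -n prometheus-monitoring create secret generic alertmanager-alertmanager-main \
  --from-file=alertmanager.yaml=alertmanager.yaml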


How to enable all namespaces so that OLM watches them?

I have not found any info on how to enable all namespaces (or particular namespaces) to be watched by OLM. The issue I am having is that in 4.0, by default, only the openshift-operators namespace is watched, but I cannot run the operator and the resources it creates in this namespace for certain reasons.

Is there a doc on how to enable a particular namespace in OLM?
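
For reference, a hedged sketch (names are placeholders, and the exact apiVersion depends on the OLM release, v1alpha2 in older versions and v1 later): the set of watched namespaces is controlled by OperatorGroups; an OperatorGroup with an empty spec selects all namespaces, while listing targetNamespaces restricts the watch:

cat <<EOF | kubectl apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: my-operator-group      # placeholder name
  namespace: my-namespace      # placeholder: the namespace the operator should run in
spec:
  targetNamespaces:
  - my-namespace               # omit targetNamespaces entirely to watch all namespaces
EOF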

OLM failed -- Tag latest not found in repository quay.io/coreos/olm

-bash-4.2# oc version
oc v3.11.0+62803d0-1
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
-bash-4.2# 

OLM Ansible Install:

-bash-4.2# sudo ansible-playbook -i ${HOME}/installcentos/openshift-ansible/inventory/hosts.localhost ${HOME}/installcentos/openshift-ansible/playbooks/olm/config.yml 

PLAY [Initialization Checkpoint Start] *****************************************

TASK [Set install initialization 'In Progress'] ********************************
ok: [localhost]

PLAY [Populate config host groups] *********************************************

TASK [Load group name mapping variables] ***************************************
ok: [localhost]

TASK [Evaluate groups - g_nfs_hosts is single host] ****************************
skipping: [localhost]

TASK [Evaluate oo_all_hosts] ***************************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_masters] *****************************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_first_master] ************************************************
ok: [localhost]

TASK [Evaluate oo_new_etcd_to_config] ******************************************

TASK [Evaluate oo_masters_to_config] *******************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_etcd_to_config] **********************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_first_etcd] **************************************************
ok: [localhost]

TASK [Evaluate oo_etcd_hosts_to_upgrade] ***************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_etcd_hosts_to_backup] ****************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_nodes_to_config] *********************************************
ok: [localhost] => (item=localhost)

TASK [Evaluate oo_lb_to_config] ************************************************

TASK [Evaluate oo_nfs_to_config] ***********************************************

TASK [Evaluate oo_glusterfs_to_config] *****************************************

TASK [Evaluate oo_etcd_to_migrate] *********************************************
ok: [localhost] => (item=localhost)
 [WARNING]: Could not match supplied host pattern, ignoring: oo_lb_to_config

 [WARNING]: Could not match supplied host pattern, ignoring: oo_nfs_to_config


PLAY [Ensure that all non-node hosts are accessible] ***************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY [Initialize basic host facts] *********************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [openshift_sanitize_inventory : include_tasks] ****************************
included: /root/installcentos/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml for localhost

TASK [openshift_sanitize_inventory : Check for usage of deprecated variables] ***
ok: [localhost]

TASK [openshift_sanitize_inventory : debug] ************************************
skipping: [localhost]

TASK [openshift_sanitize_inventory : set_stats] ********************************
skipping: [localhost]

TASK [openshift_sanitize_inventory : set_fact] *********************************
ok: [localhost]

TASK [openshift_sanitize_inventory : Standardize on latest variable names] *****
ok: [localhost]

TASK [openshift_sanitize_inventory : Normalize openshift_release] **************
skipping: [localhost]

TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : include_tasks] ****************************
included: /root/installcentos/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml for localhost

TASK [openshift_sanitize_inventory : set_fact] *********************************

TASK [openshift_sanitize_inventory : Ensure that dynamic provisioning is set if using dynamic storage] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Check for deprecated prometheus/grafana install] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure clusterid is set along with the cloudprovider] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure ansible_service_broker_remove and ansible_service_broker_install are mutually exclusive] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure template_service_broker_remove and template_service_broker_install are mutually exclusive] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure that all requires vsphere configuration variables are set] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : ensure provider configuration variables are defined] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure removed web console extension variables are not set] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : Ensure that web console port matches API server port] ***
skipping: [localhost]

TASK [openshift_sanitize_inventory : At least one master is schedulable] *******
skipping: [localhost]

TASK [Detecting Operating System from ostree_booted] ***************************
ok: [localhost]

TASK [set openshift_deployment_type if unset] **********************************
skipping: [localhost]

TASK [initialize_facts set fact openshift_is_atomic] ***************************
ok: [localhost]

TASK [Determine Atomic Host Docker Version] ************************************
skipping: [localhost]

TASK [assert atomic host docker version is 1.12 or later] **********************
skipping: [localhost]

PLAY [Retrieve existing master configs and validate] ***************************

TASK [openshift_control_plane : stat] ******************************************
ok: [localhost]

TASK [openshift_control_plane : slurp] *****************************************
ok: [localhost]

TASK [openshift_control_plane : set_fact] **************************************
ok: [localhost]

TASK [openshift_control_plane : Check for file paths outside of /etc/origin/master in master's config] ***
ok: [localhost]

TASK [openshift_control_plane : set_fact] **************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Initialize special first-master variables] *******************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Disable web console if required] *****************************************

TASK [set_fact] ****************************************************************
skipping: [localhost]

PLAY [Setup yum repositories for all hosts] ************************************
skipping: no hosts matched

PLAY [Install packages necessary for installer] ********************************

TASK [Gathering Facts] *********************************************************
skipping: [localhost]

TASK [Determine if chrony is installed] ****************************************
skipping: [localhost]

TASK [Install ntp package] *****************************************************
skipping: [localhost]

TASK [Start and enable ntpd/chronyd] *******************************************
skipping: [localhost]

TASK [Ensure openshift-ansible installer package deps are installed] ***********
skipping: [localhost]

PLAY [Initialize cluster facts] ************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [get openshift_current_version] *******************************************
ok: [localhost]

TASK [set_fact openshift_portal_net if present on masters] *********************
ok: [localhost]

TASK [Gather Cluster facts] ****************************************************
changed: [localhost]

TASK [Set fact of no_proxy_internal_hostnames] *********************************
skipping: [localhost]

TASK [Initialize openshift.node.sdn_mtu] ***************************************
ok: [localhost]

TASK [set_fact l_kubelet_node_name] ********************************************
ok: [localhost]

PLAY [Initialize etcd host variables] ******************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

TASK [set_fact] ****************************************************************
ok: [localhost]

PLAY [Determine openshift_version to configure on first master] ****************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [include_role : openshift_version] ****************************************

TASK [openshift_version : Use openshift_current_version fact as version to configure if already installed] ***
ok: [localhost]

TASK [openshift_version : Set openshift_version to openshift_release if undefined] ***
skipping: [localhost]

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "msg": "openshift_pkg_version was not defined. Falling back to -3.11.0"
}

TASK [openshift_version : set_fact] ********************************************
ok: [localhost]

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "msg": "openshift_image_tag was not defined. Falling back to v3.11.0"
}

TASK [openshift_version : set_fact] ********************************************
ok: [localhost]

TASK [openshift_version : assert openshift_release in openshift_image_tag] *****
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"
}

TASK [openshift_version : assert openshift_release in openshift_pkg_version] ***
ok: [localhost] => {
    "changed": false, 
    "msg": "All assertions passed"
}

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "openshift_release": "3.11"
}

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "openshift_image_tag": "v3.11.0"
}

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "openshift_pkg_version": "-3.11.0*"
}

TASK [openshift_version : debug] ***********************************************
ok: [localhost] => {
    "openshift_version": "3.11.0"
}

PLAY [Set openshift_version for etcd, node, and master hosts] ******************
skipping: no hosts matched

PLAY [Verify Requirements] *****************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [Run variable sanity checks] **********************************************
ok: [localhost]

TASK [Validate openshift_node_groups and openshift_node_group_name] ************
ok: [localhost]

PLAY [Verify Node NetworkManager] **********************************************
skipping: no hosts matched

PLAY [Initialization Checkpoint End] *******************************************

TASK [Set install initialization 'Complete'] ***********************************
ok: [localhost]

PLAY [OLM Install Checkpoint Start] ********************************************

TASK [Set OLM install 'In Progress'] *******************************************
ok: [localhost]

PLAY [Operator Lifecycle Manager] **********************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [olm : include_tasks] *****************************************************
included: /root/installcentos/openshift-ansible/roles/olm/tasks/install.yaml for localhost

TASK [olm : create operator-lifecycle-manager project] *************************
changed: [localhost]

TASK [olm : Make temp directory for manifests] *********************************
ok: [localhost]

TASK [olm : Copy manifests to temp directory] **********************************
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/aggregated-edit.clusterrole.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/aggregated-view.clusterrole.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/catalogsource.crd.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/certified-operators.catalogsource.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/certified-operators.configmap.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/installplan.crd.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/olm-operator.clusterrole.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/olm-operator.rolebinding.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/olm-operator.serviceaccount.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/rh-operators.catalogsource.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/rh-operators.configmap.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/subscription.crd.yaml)
changed: [localhost] => (item=/root/installcentos/openshift-ansible/roles/olm/files/clusterserviceversion.crd.yaml)

TASK [olm : Set olm-operator template] *****************************************
changed: [localhost]

TASK [olm : Set catalog-operator template] *************************************
changed: [localhost]

TASK [olm : Apply olm-operator-serviceaccount ServiceAccount manifest] *********
changed: [localhost]

TASK [olm : Apply operator-lifecycle-manager ClusterRole manifest] *************
changed: [localhost]

TASK [olm : Apply olm-operator-binding-operator-lifecycle-manager ClusterRoleBinding manifest] ***
changed: [localhost]

TASK [olm : Apply clusterserviceversions.operators.coreos.com CustomResourceDefinition manifest] ***
changed: [localhost]

TASK [olm : Apply catalogsources.operators.coreos.com CustomResourceDefinition manifest] ***
changed: [localhost]

TASK [olm : Apply installplans.operators.coreos.com CustomResourceDefinition manifest] ***
changed: [localhost]

TASK [olm : Apply subscriptions.operators.coreos.com CustomResourceDefinition manifest] ***
changed: [localhost]

TASK [olm : Apply rh-operators ConfigMap manifest] *****************************
changed: [localhost]

TASK [olm : Apply rh-operators CatalogSource manifest] *************************
changed: [localhost]

TASK [olm : Apply certified-operators ConfigMap manifest] **********************
changed: [localhost]

TASK [olm : Apply certified-operators CatalogSource manifest] ******************
changed: [localhost]

TASK [olm : Apply olm-operator Deployment manifest] ****************************
changed: [localhost]

TASK [olm : Apply catalog-operator Deployment manifest] ************************
changed: [localhost]

TASK [olm : Apply aggregate-olm-edit ClusterRole manifest] *********************
changed: [localhost]

TASK [olm : Apply aggregate-olm-view ClusterRole manifest] *********************
changed: [localhost]

TASK [olm : include_tasks] *****************************************************
skipping: [localhost]

PLAY [OLM Install Checkpoint End] **********************************************

TASK [Set OLM install 'Complete'] **********************************************
ok: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=80   changed=20   unreachable=0    failed=0   


INSTALLER STATUS ***************************************************************
Initialization  : Complete (0:00:52)
OLM Install     : Complete (0:00:38)
-bash-4.2# 

Checking:

-bash-4.2# oc adm manage-node localhost.localdomain --list-pods | grep operator-lifecycle-manager

Listing matched pods on node: localhost.localdomain

operator-lifecycle-manager          catalog-operator-599574b497-lgbpk              0/1       ImagePullBackOff   0          2m
operator-lifecycle-manager          olm-operator-66bf8f6bbc-7nwfk                  0/1       ErrImagePull       0          2m
-bash-4.2# oc get events -n operator-lifecycle-manager
LAST SEEN   FIRST SEEN   COUNT     NAME                                                 KIND         SUBOBJECT                           TYPE      REASON              SOURCE                           MESSAGE
2m          2m           1         olm-operator-66bf8f6bbc-7nwfk.157a02e4b60d64ec       Pod                                              Normal    Scheduled           default-scheduler                Successfully assigned operator-lifecycle-manager/olm-operator-66bf8f6bbc-7nwfk to localhost.localdomain
2m          2m           1         olm-operator-66bf8f6bbc.157a02e4b5a3339e             ReplicaSet                                       Normal    SuccessfulCreate    replicaset-controller            Created pod: olm-operator-66bf8f6bbc-7nwfk
2m          2m           1         olm-operator.157a02e4b4297a2e                        Deployment                                       Normal    ScalingReplicaSet   deployment-controller            Scaled up replica set olm-operator-66bf8f6bbc to 1
2m          2m           1         catalog-operator-599574b497.157a02e50838831e         ReplicaSet                                       Normal    SuccessfulCreate    replicaset-controller            Created pod: catalog-operator-599574b497-lgbpk
2m          2m           1         catalog-operator-599574b497-lgbpk.157a02e508eaa92e   Pod                                              Normal    Scheduled           default-scheduler                Successfully assigned operator-lifecycle-manager/catalog-operator-599574b497-lgbpk to localhost.localdomain
2m          2m           1         catalog-operator.157a02e507431281                    Deployment                                       Normal    ScalingReplicaSet   deployment-controller            Scaled up replica set catalog-operator-599574b497 to 1
2m          2m           2         catalog-operator-599574b497-lgbpk.157a02e5df26c029   Pod          spec.containers{catalog-operator}   Normal    Pulling             kubelet, localhost.localdomain   pulling image "quay.io/coreos/catalog"
2m          2m           2         olm-operator-66bf8f6bbc-7nwfk.157a02e57da5ccaf       Pod          spec.containers{olm-operator}       Normal    Pulling             kubelet, localhost.localdomain   pulling image "quay.io/coreos/olm"
2m          2m           2         catalog-operator-599574b497-lgbpk.157a02e6b7be0514   Pod          spec.containers{catalog-operator}   Warning   Failed              kubelet, localhost.localdomain   Failed to pull image "quay.io/coreos/catalog": rpc error: code = Unknown desc = Tag latest not found in repository quay.io/coreos/catalog
2m          2m           2         catalog-operator-599574b497-lgbpk.157a02e6b7be9959   Pod          spec.containers{catalog-operator}   Warning   Failed              kubelet, localhost.localdomain   Error: ErrImagePull
2m          2m           2         olm-operator-66bf8f6bbc-7nwfk.157a02e61b128505       Pod          spec.containers{olm-operator}       Warning   Failed              kubelet, localhost.localdomain   Error: ErrImagePull
2m          2m           2         olm-operator-66bf8f6bbc-7nwfk.157a02e61b11521b       Pod          spec.containers{olm-operator}       Warning   Failed              kubelet, localhost.localdomain   Failed to pull image "quay.io/coreos/olm": rpc error: code = Unknown desc = Tag latest not found in repository quay.io/coreos/olm
2m          2m           7         catalog-operator-599574b497-lgbpk.157a02e70afe6319   Pod                                              Normal    SandboxChanged      kubelet, localhost.localdomain   Pod sandbox changed, it will be killed and re-created.
2m          2m           7         olm-operator-66bf8f6bbc-7nwfk.157a02e64f2c0964       Pod                                              Normal    SandboxChanged      kubelet, localhost.localdomain   Pod sandbox changed, it will be killed and re-created.
2m          2m           6         catalog-operator-599574b497-lgbpk.157a02e7b45556c5   Pod          spec.containers{catalog-operator}   Warning   Failed              kubelet, localhost.localdomain   Error: ImagePullBackOff
2m          2m           6         catalog-operator-599574b497-lgbpk.157a02e7b455140c   Pod          spec.containers{catalog-operator}   Normal    BackOff             kubelet, localhost.localdomain   Back-off pulling image "quay.io/coreos/catalog"
2m          2m           6         olm-operator-66bf8f6bbc-7nwfk.157a02e6d52e6f75       Pod          spec.containers{olm-operator}       Normal    BackOff             kubelet, localhost.localdomain   Back-off pulling image "quay.io/coreos/olm"
2m          2m           6         olm-operator-66bf8f6bbc-7nwfk.157a02e6d52edcce       Pod          spec.containers{olm-operator}       Warning   Failed              kubelet, localhost.localdomain   Error: ImagePullBackOff
-bash-4.2# docker pull quay.io/coreos/olm
Using default tag: latest
Trying to pull repository quay.io/coreos/olm ... 
Pulling repository quay.io/coreos/olm
Tag latest not found in repository quay.io/coreos/olm
-bash-4.2# 
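
A hedged workaround sketch (the release tag is a placeholder, not a verified value): the Deployments reference quay.io/coreos/olm and quay.io/coreos/catalog without a tag, so the kubelet falls back to :latest, which is not published; pinning the images to an existing release tag should let the pods start:

oc -n operator-lifecycle-manager set image deployment/olm-operator olm-operator=quay.io/coreos/olm:<release-tag>
oc -n operator-lifecycle-manager set image deployment/catalog-operator catalog-operator=quay.io/coreos/catalog:<release-tag>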

Typos in architecture doc

The architecture.md doc includes some typos. In the last paragraph of "Catalog (Registry) Design":

"Within a package, channels which point to a particular CSV." This is not a valid sentence. Perhaps the word "which" should be removed? Or maybe the rest of the sentence is missing?

"Because CSVs explicitly reference the CSV that they replace, a package manifest provides the catalog Operator needs to update a CSV to the latest version in a channel (stepping through each intermediate version)." This sentence appears to merge into some other sentence around the words "provides the catalog Operator needs to update". I don't have enough domain knowledge to suggest a fix.

Subscription stuck at Upgrading

I have installed OpenShift 4.0 (installer v0.11.0)

Luckily, in this version the OperatorHub console page is fixed, and I can see Operators.

Upon installing an operator, a subscription is created; however, it is stuck in the Upgrading phase, no matter which upgrade policy is chosen.

This happens with RH, certified, and community operators, as well as the one I have added myself (by editing the OperatorSource for community operators).

[screenshots]

Which logs may I look at to try to figure out what's happening?

Sometimes, it takes 10-20 mins for status to become up-to-date.
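
A hedged sketch of where to look (namespaces assume a default OpenShift 4.0 layout, and the subscription name is a placeholder): the catalog-operator logs and the InstallPlan created for the subscription usually show why an upgrade is pending:

oc -n openshift-operator-lifecycle-manager logs deployment/catalog-operator
oc -n openshift-operators get subscriptions,installplans
oc -n openshift-operators describe subscription <subscription-name>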

Unable to retrieve pull secret openshift-operator-lifecycle-manager/coreos-pull-secret for openshift-operator-lifecycle-manager/olm-operator...

Seeing this spammed over and over in my master logs on a fresh 4.0 install:

Dec 06 13:57:29 test1-master-0 hyperkube[2893]: W1206 13:57:29.875130    2893 kubelet_pods.go:841] Unable to retrieve pull secret openshift-operator-lifecycle-manager/coreos-pull-secret for openshift-operator-lifecycle-manager/olm-operator-68957f688b-59nsx due to secrets "coreos-pull-secret" not found.  The image pull may not succeed.
Dec 06 13:57:36 test1-master-0 hyperkube[2893]: W1206 13:57:36.375690    2893 kubelet_pods.go:841] Unable to retrieve pull secret openshift-operator-lifecycle-manager/coreos-pull-secret for openshift-operator-lifecycle-manager/catalog-operator-5c75f9c68d-r52rx due to secrets "coreos-pull-secret" not found.  The image pull may not succeed.
...
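
A hedged sketch for confirming the cause (the warning is harmless if the images pull anyway): check whether the referenced pull secret actually exists and whether the pods are running:

oc -n openshift-operator-lifecycle-manager get secret coreos-pull-secret
oc -n openshift-operator-lifecycle-manager get deployment olm-operator \
  -o jsonpath='{.spec.template.spec.imagePullSecrets}{"\n"}'
oc -n openshift-operator-lifecycle-manager get pods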

operator does not upgrade packageserver

After a cluster upgrade from 4.0-2019-02-08-055616 to 4.0.0-0.alpha-2019-02-08-131655, the packageserver deployment is not upgraded:

$ oc get clusterversion
NAME      VERSION                           AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.alpha-2019-02-08-131655   True        True          11m

$ oc get deployment olm-operator -oyaml
...
spec:
...
  template:
...
    spec:
      containers:
      - args:
        - -writeStatusName
        - operator-lifecycle-manager
        command:
        - /bin/olm
        env:
        - name: OPERATOR_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: OPERATOR_NAME
          value: olm-operator
        image: registry.svc.ci.openshift.org/openshift/origin-v4.0-2019-02-08-131655@sha256:c379e068eca7768313ed026f8fac04057920526d388c42dfd585a3a0c4227677

$ oc get deployment packageserver -oyaml
...
spec:
...
  template:
...
    spec:
      containers:
      - command:
        - /bin/package-server
...
        image: registry.svc.ci.openshift.org/openshift/origin-v4.0-2019-02-08-055616@sha256:c379e068eca7768313ed026f8fac04057920526d388c42dfd585a3a0c4227677

@smarterclayton @derekwaynecarr
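
A hedged sketch of what to check (assuming the packageserver deployment is owned by a CSV installed through the packageserver subscription in the OLM namespace, so its image comes from that CSV rather than from the olm-operator deployment):

oc -n openshift-operator-lifecycle-manager get subscription packageserver -o yaml
oc -n openshift-operator-lifecycle-manager get clusterserviceversions | grep -i packageserver
oc -n openshift-operator-lifecycle-manager logs deployment/catalog-operator | grep -i packageserver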

Failed to update catalog source `rh-operators` status

Hello there, I just installed OLM on a new k8s cluster and I see these log messages from the catalog-operator pod:

kubectl logs -f -n olm catalog-operator-54ddc48bd-jgqfp
time="2018-10-30T15:23:37Z" level=info msg="Using in-cluster kube client config"
time="2018-10-30T15:23:37Z" level=info msg="Using in-cluster kube client config"
time="2018-10-30T15:23:37Z" level=info msg="connection established. cluster-version: v1.10.0"
time="2018-10-30T15:23:37Z" level=info msg="operator ready"
time="2018-10-30T15:23:37Z" level=info msg="starting informers..."
time="2018-10-30T15:23:37Z" level=info msg="waiting for caches to sync..."
time="2018-10-30T15:23:37Z" level=info msg="olm/rh-operators added"
time="2018-10-30T15:23:37Z" level=info msg="starting workers..."
time="2018-10-30T15:23:37Z" level=info msg="getting from queue" key=olm/rh-operators queue=catsrc
time="2018-10-30T15:23:37Z" level=info msg="retrying olm/rh-operators"
E1030 15:23:37.631372       1 queueinformer_operator.go:136] Sync "olm/rh-operators" failed: failed to update catalog source rh-operators status: the server could not find the requested resource (put catalogsources.operators.coreos.com rh-operators)
time="2018-10-30T15:23:37Z" level=info msg="getting from queue" key=olm/rh-operators queue=catsrc
time="2018-10-30T15:23:37Z" level=info msg="retrying olm/rh-operators"
E1030 15:23:37.655494       1 queueinformer_operator.go:136] Sync "olm/rh-operators" failed: failed to update catalog source rh-operators status: the server could not find the requested resource (put catalogsources.operators.coreos.com rh-operators)
time="2018-10-30T15:23:37Z" level=info msg="getting from queue" key=olm/rh-operators queue=catsrc
time="2018-10-30T15:23:37Z" level=info msg="retrying olm/rh-operators"
E1030 15:23:37.693061       1 queueinformer_operator.go:136] Sync "olm/rh-operators" failed: failed to update catalog source rh-operators status: the server could not find the requested resource (put catalogsources.operators.coreos.com rh-operators)
time="2018-10-30T15:23:37Z" level=info msg="getting from queue" key=olm/rh-operators queue=catsrc
time="2018-10-30T15:23:37Z" level=info msg="retrying olm/rh-operators"
E1030 15:23:37.736738       1 queueinformer_operator.go:136] Sync "olm/rh-operators" failed: failed to update catalog source rh-operators status: the server could not find the requested resource (put catalogsources.operators.coreos.com rh-operators)
time="2018-10-30T15:23:37Z" level=info msg="getting from queue" key=olm/rh-operators queue=catsrc
time="2018-10-30T15:23:37Z" level=info msg="retrying olm/rh-operators"
E1030 15:23:37.798421       1 queueinformer_operator.go:136] Sync "olm/rh-operators" failed: failed to update catalog source rh-operators status: the server could not find the requested resource (put catalogsources.operators.coreos.com rh-operators)
time="2018-10-30T15:23:37Z" level=info msg="getting from queue" key=olm/rh-operators queue=catsrc

Can you give me any hint on how I can debug this issue? Thanks in advance!

Environment

  • Kubernetes v1.10
  • OLM ver 0.7.4
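
A hedged sketch of things to compare (a "could not find the requested resource" on a PUT to catalogsources often points at a mismatch between the installed CRD version and the API version the operator writes, e.g. manifests from one release combined with images from another; note also that Kubernetes v1.10 may simply be older than what current OLM manifests target):

kubectl get crd catalogsources.operators.coreos.com -o jsonpath='{.spec.version}{"\n"}'
kubectl -n olm get deployment catalog-operator -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n olm get catalogsource rh-operators -o yaml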

Installation sometimes fails

Sometimes the installation fails with:

error: unable to recognize "operator-lifecycle-manager/deploy/upstream/manifests/latest/0000_30_09-rh-operators.catalogsource.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"

This looks like a synchronization issue, where the CRD for the CatalogSource is not yet registered when the CatalogSource manifest is applied.

Full log:

namespace/olm created
clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
serviceaccount/olm-operator-serviceaccount created
clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
configmap/rh-operators created
deployment.apps/olm-operator created
deployment.apps/catalog-operator created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
apiservice.apiregistration.k8s.io/v1alpha1.packages.apps.redhat.com created
clusterrolebinding.rbac.authorization.k8s.io/packagemanifest:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/packagemanifest-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/packagemanifest-view created
clusterrolebinding.rbac.authorization.k8s.io/package-apiserver-clusterrolebinding created
secret/package-server-certs created
deployment.apps/package-server created
service/package-server created
error: unable to recognize "operator-lifecycle-manager/deploy/upstream/manifests/latest/0000_30_09-rh-operators.catalogsource.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
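
A hedged sketch of working around the race (the manifest path comes from the error above): wait for the CatalogSource CRD to be established, then re-apply the remaining manifest; kubectl apply is idempotent, so re-applying the whole directory also works:

kubectl wait --for=condition=Established crd/catalogsources.operators.coreos.com --timeout=60s
kubectl apply -f operator-lifecycle-manager/deploy/upstream/manifests/latest/0000_30_09-rh-operators.catalogsource.yaml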

Use CR definitions instead of configmap data

Hello,

I just would like to ask if it makes sense to use the catalog source CR instead of a configmap, since managing multiple CSVs and CRDs in a configmap field can become a bit messy if you have lots of those.

Does it make more sense to add that info into the catalog source object instead?
It could use the same three fields used in the configmap, but each could be a list of references to its respective CRs:

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  generation: 1
  name: custom-operators
spec:
  displayName: My Operators
  publisher: myself
  sourceType: embedded
  data:
    clusterServiceVersions:
      - csv-name1
      - csv-name2
      - csv-name3
    customResourceDefinitions:
      - crd-name1
      - crd-name2
      - crd-name3
    packages:
      - pkg-name1
      - pkg-name2
      - pkg-name3

It could also use some kind of label matching so it could scale a bit better (no need to list all dependencies in the catalog source cr):

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  generation: 1
  name: custom-operators
spec:
  displayName: My Operators
  publisher: myself
  sourceType: selector
  selector:
    matchLabels:
      catalog: custom-operators

Attempting upstream installation against kube >= 1.11 fails with validation errors

I invoked kubectl apply -f deploy/upstream/manifests/0.5.0 as per the installation instructions. I see the same errors with minikube supplied with --kubernetes-version v1.11.0 and running hack/local-up-cluster.sh against kube master. 0.4.0 has the same problem.

error validating "deploy/upstream/manifests/0.5.0/03-clusterserviceversion.crd.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.customresourcedefinitions.properties.owned.items): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaPropsOrArray: got "map", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.customresourcedefinitions.properties.required.items): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaPropsOrArray: got "map", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.icon.properties.mediatype.enum[0]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.icon.properties.mediatype.enum[1]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.icon.properties.mediatype.enum[2]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.icon.properties.mediatype.enum[3]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.install.oneOf[0].properties.strategy.enum[0]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.install.oneOf[1].properties.spec.properties.deployments.items): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaPropsOrArray: got "map", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.install.oneOf[1].properties.spec.properties.permissions.items): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaPropsOrArray: got "map", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.install.oneOf[1].properties.strategy.enum[0]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.keywords.items): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaPropsOrArray: got "map", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.links.items): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaPropsOrArray: got "map", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.maintainers.items): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaPropsOrArray: got "map", expected "", 
ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.maturity.enum[0]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.maturity.enum[1]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.maturity.enum[2]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.maturity.enum[3]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.maturity.enum[4]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.maturity.enum[5]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.maturity.enum[6]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.maturity.enum[7]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.selector.properties.matchExpressions): unknown field "descriptions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaProps, ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.selector.properties.matchExpressions.items): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaPropsOrArray: got "map", expected ""]; if you choose to ignore these errors, turn validation off with --validate=false
error validating "deploy/upstream/manifests/0.5.0/05-catalogsource.crd.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.configMap): unknown field "string" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaProps, ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.secrets.items): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaPropsOrArray: got "map", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.sourceType.enum[0]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected ""]; if you choose to ignore these errors, turn validation off with --validate=false
error validating "deploy/upstream/manifests/0.5.0/06-installplan.crd.yaml": error validating data: [ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.anyOf[0]): unknown field "approval" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaProps, ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.anyOf[1]): unknown field "approval" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaProps, ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.approval.enum[0]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.approval.enum[1]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.approval.enum[2]): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSON: got "string", expected "", ValidationError(CustomResourceDefinition.spec.validation.openAPIV3Schema.properties.spec.properties.clusterServiceVersionNames.items): invalid type for io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.JSONSchemaPropsOrArray: got "map", expected ""]; if you choose to ignore these errors, turn validation off with --validate=false

What version(s) of Kubernetes are known to support installation of OLM?
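
A hedged note rather than a definitive answer: the failures above come from kubectl's client-side schema validation of the CRD manifests, and the error text itself points at the workaround of turning that validation off; which Kubernetes versions a given OLM release supports is best taken from its release notes:

kubectl apply -f deploy/upstream/manifests/0.5.0 --validate=false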

Unable to deploy OLM on minishift

Env:

[leseb@tarox~/operator-lifecycle-manager][master !] minishift status
Minishift:  Running
Profile:    minishift
OpenShift:  Running (openshift v3.11.0+948efc6-96)
DiskUsage:  11% of 39G (Mounted On: /mnt/sda1)
CacheUsage: 1.863 GB (used by oc binary, ISO or cached images)

Error log after running make run-local-shift:

[leseb@tarox~/operator-lifecycle-manager][master !] make run-local-shift                                                                                                                              [34/17639]
go: finding github.com/googleapis/gnostic/OpenAPIv2 latest
go: finding github.com/googleapis/gnostic/compiler latest
go: finding github.com/google/gofuzz latest
go: finding github.com/emicklei/go-restful/log latest
go: finding github.com/google/btree latest
go: finding github.com/modern-go/concurrent latest
. ./scripts/build_local_shift.sh
-- Starting profile 'minishift'
The 'minishift' VM is already running.
Logged into "https://192.168.42.248:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-dns
    kube-proxy
    kube-public
    kube-system
  * myproject
    openshift
    openshift-apiserver
    openshift-controller-manager
    openshift-core-operators
    openshift-infra
    openshift-node
    openshift-service-cert-signer
    openshift-web-console

Using project "myproject".
Sending build context to Docker daemon 77.39 MB
Step 1/3 : FROM golang:1.11
 ---> 901414995ecd
Step 2/3 : WORKDIR /go/src/github.com/operator-framework/operator-lifecycle-manager
 ---> Using cache
 ---> 34ea57fdac84
Step 3/3 : COPY . .
 ---> b06482cd582e
Removing intermediate container 24fb4d455f3f
Successfully built b06482cd582e
mkdir -p build/resources
. ./scripts/package-release.sh 1.0.0 build/resources Documentation/install/local-values-shift.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_00-namespace.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_14-olm-operators.configmap.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_02-clusterserviceversion.crd.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_03-installplan.crd.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_04-subscription.crd.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_05-catalogsource.crd.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_13-operatorgroup.crd.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_01-olm-operator.serviceaccount.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_12-aggregated.clusterrole.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_10-olm-operator.deployment.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_11-catalog-operator.deployment.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_15-olm-operators.catalogsource.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_16-operatorgroup-default.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_17-packageserver.subscription.yaml
. ./scripts/install_local.sh local build/resources
namespace "local" created
namespace "local" configured
clusterrole.authorization.openshift.io "system:controller:operator-lifecycle-manager" created
serviceaccount "olm-operator-serviceaccount" created
clusterrolebinding.authorization.openshift.io "olm-operator-binding-local" created
customresourcedefinition.apiextensions.k8s.io "clusterserviceversions.operators.coreos.com" created
customresourcedefinition.apiextensions.k8s.io "installplans.operators.coreos.com" created
customresourcedefinition.apiextensions.k8s.io "subscriptions.operators.coreos.com" created
customresourcedefinition.apiextensions.k8s.io "catalogsources.operators.coreos.com" created
deployment.apps "olm-operator" created
deployment.apps "catalog-operator" created
clusterrole.rbac.authorization.k8s.io "aggregate-olm-edit" created
clusterrole.rbac.authorization.k8s.io "aggregate-olm-view" created
customresourcedefinition.apiextensions.k8s.io "operatorgroups.operators.coreos.com" created
configmap "olm-operators" replaced
catalogsource.operators.coreos.com "olm-operators" created
operatorgroup.operators.coreos.com "global-operators" created
operatorgroup.operators.coreos.com "olm-operators" created
subscription.operators.coreos.com "packageserver" created
Waiting for rollout to finish: 0 of 1 updated replicas are available...

error: deployment "olm-operator" exceeded its progress deadline
make: *** [Makefile:68: run-local-shift] Error 1

However, oc get deployments indicates the resource exists but is not available:

[leseb@tarox~] oc get deployments --all-namespaces
NAMESPACE                       NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
local                           catalog-operator                         1         1         1            0           17m
local                           olm-operator                             1         1         1            0           18m
openshift-core-operators        openshift-service-cert-signer-operator   1         1         1            1           26m
openshift-core-operators        openshift-web-console-operator           1         1         1            1           24m
openshift-service-cert-signer   apiservice-cabundle-injector             1         1         1            1           25m
openshift-service-cert-signer   service-serving-cert-signer              1         1         1            1           25m
openshift-web-console           webconsole                               1         1         1            1           24m

Description of the resource:

[leseb@tarox~] oc describe deployment.apps/olm-operator -n local
Name:                   olm-operator
Namespace:              local
CreationTimestamp:      Fri, 08 Feb 2019 15:27:51 +0100
Labels:                 app=olm-operator
Annotations:            deployment.kubernetes.io/revision=1
                        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"olm-operator"},"name":"olm-operator","namespace":"local"},"sp...
Selector:               app=olm-operator
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=olm-operator
  Service Account:  olm-operator-serviceaccount
  Containers:
   olm-operator:
    Image:      quay.io/coreos/olm:local
    Port:       8080/TCP
    Host Port:  0/TCP
    Command:
      /bin/olm
    Args:
      -watchedNamespaces
      local
      -debug
      -writeStatusName

    Liveness:   http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      OPERATOR_NAMESPACE:   (v1:metadata.namespace)
      OPERATOR_NAME:       olm-operator
    Mounts:                <none>
  Volumes:                 <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    False   ProgressDeadlineExceeded
OldReplicaSets:  <none>
NewReplicaSet:   olm-operator-6ffbdc7886 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  21m   deployment-controller  Scaled up replica set olm-operator-6ffbdc7886 to 1

Could this simply be a timing issue?
Or is my minishift version too old? (As far as I know, minishift does not offer any 4.0 alpha.)

Thanks.
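
For anyone hitting the same progress-deadline error, a minimal debugging sketch, assuming the "local" namespace and the olm-operator deployment shown above (the pod name is a placeholder), to find out why the readiness probe never passes:

# List the pods behind the deployment and note the name of the failing pod.
oc -n local get pods -l app=olm-operator

# Check the pod's events for readiness-probe failures or image pull problems.
oc -n local describe pod <olm-operator-pod-name>

# Check the operator logs for startup errors (for example RBAC or API discovery failures).
oc -n local logs deployment/olm-operator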

make build fails

The make build command fails with an error.
Steps to reproduce:

  1. Get the latest from the master branch
  2. Run make build

I am seeing the following error:

operator-framework/operator-lifecycle-manager (master) $ make build
if [ 1 = 1true ]; then \
	GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go test -o bin/alm -c -covermode=count -coverpkg ./pkg/... github.com/operator-framework/operator-lifecycle-manager/cmd/alm; \
else \
	GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o bin/alm github.com/operator-framework/operator-lifecycle-manager/cmd/alm; \
fi
vendor/github.com/emicklei/go-restful/container.go:17:2: cannot find package "github.com/emicklei/go-restful/log" in any of:
	/{$GOPATH}/src/github.com/operator-framework/operator-lifecycle-manager/vendor/github.com/emicklei/go-restful/log (vendor tree)
	/usr/lib/golang/src/github.com/emicklei/go-restful/log (from $GOROOT)
	/{$GOPATH}/src/github.com/emicklei/go-restful/log (from $GOPATH)
make: *** [Makefile:40: bin/alm] Error 1

The subpackage 'github.com/emicklei/go-restful/log' is missing from the vendor tree.
Running make vendor should fix it, but I am running into another issue: dep ensure -v -vendor-only seems to hang indefinitely on my local workstation.
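
A possible workaround sketch for the missing vendored subpackage, assuming dep is installed and the repository lives under $GOPATH (the indefinite hang is likely a separate network or proxy problem on the workstation):

# Re-vendor go-restful so its log subpackage lands in the vendor tree.
cd "$GOPATH/src/github.com/operator-framework/operator-lifecycle-manager"
rm -rf vendor/github.com/emicklei/go-restful
dep ensure -v -vendor-only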

Non operator-sdk operators

To enable a new operator, the admin needs to (sketched below):

  1. Install the CRDs and RBAC for that operator.
  2. Enable a subscription.

To simplify this, could the catalog/OLM operator install the CRDs once an install plan is executed?

Many operators out there install their CRDs on startup and don't ship a separate CRD YAML. Is there a guide on how to use them with OLM?
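
A minimal sketch of the two manual enablement steps (install CRDs/RBAC, then subscribe) for a hypothetical operator; the package, channel, and catalog names are placeholders, not values from this repository:

# 1. Install the operator's CRDs and RBAC (as shipped by the operator author).
kubectl apply -f my-operator-crds.yaml -f my-operator-rbac.yaml

# 2. Subscribe to the operator's package so OLM installs it and keeps it updated.
kubectl apply -f - <<EOF
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator
  namespace: operators
spec:
  name: my-operator          # package name in the catalog (placeholder)
  channel: stable            # channel to track (placeholder)
  source: my-catalog         # CatalogSource name (placeholder)
  sourceNamespace: olm       # namespace where the CatalogSource lives
EOF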

OLM compatibility with cluster monitoring Operator

I am currently running OKD 3.11 with the cluster monitoring operator installed, which leverages both the Prometheus and Alertmanager Operators. The OKD documentation says the cluster monitoring operator is dedicated to monitoring the cluster itself, and that applications should be monitored through OLM:
"Users interested in leveraging Prometheus for application monitoring on OpenShift should consider using OLM to easily deploy a Prometheus Operator and setup new Prometheus instances to monitor and alert on their applications."
Since the cluster-monitoring-operator already provides the Prometheus Operator, I was wondering whether installing both OLM and cluster monitoring will lead to conflicts.
And if the two components can be used together, the application Prometheus instance will have to connect to the Alertmanager provided by the cluster monitoring stack, which lives in a dedicated namespace. Is it secure to allow network communication to the "openshift-monitoring" namespace?
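
For the cross-namespace Alertmanager question, a minimal sketch of how an application Prometheus instance created through the Prometheus Operator could be pointed at the cluster monitoring Alertmanager; the alertmanager-main service name and the web port are assumptions about the openshift-monitoring stack, and my-app is a placeholder namespace:

oc apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: app-prometheus
  namespace: my-app
spec:
  serviceMonitorSelector: {}              # pick up all ServiceMonitors in this namespace
  alerting:
    alertmanagers:
    - namespace: openshift-monitoring     # Alertmanager run by the cluster monitoring stack
      name: alertmanager-main             # assumed service name
      port: web                           # assumed port name
EOF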

ClusterRoleBinding against aggregated-apiserver-clusterrole without role manifest

There is a dependency against the aggregated-apiserver-clusterrole cluster role, but the role is not installed into the cluster: https://github.com/operator-framework/operator-lifecycle-manager/blob/master/manifests/0000_30_13-packageserver.yaml#L64

This causes openshift/cluster-openshift-apiserver-operator#38.

The cluster role originally comes from the sample-apiserver: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/sample-apiserver/artifacts/example/rbac.yaml. Please add it to the manifests here to stop the noise in the logs.
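
A sketch of what the missing manifest could look like; the rules below are illustrative placeholders, and the real rules should be copied from the sample-apiserver rbac.yaml linked above:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: aggregated-apiserver-clusterrole
rules:
# Placeholder rule; replace with the rules from the sample-apiserver rbac.yaml.
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "watch", "list"]
EOF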
