
ceph-csi-operator's People

Contributors

madhu-1, nb-ohad, subhamkrai, iamniting, parth-gr

Stargazers

[Hurd]Donkey, Federico Lucifredi, Praveen M

Watchers

Mike Perez, Sage Weil, Josh Durgin, Yehuda Sadeh, Travis Nielsen, Casey Bodley, Humble Devassy Chirammal, Satoru Takeuchi, Neha Ojha, Divyansh Kamboj

ceph-csi-operator's Issues

Integration in Ceph Mgr or cephadm

Describe the feature you'd like to have

Once the ceph-csi-operator project is stable, write a Ceph Mgr module so that an existing Ceph cluster can easily be used to provide storage functionality to one or more Kubernetes clusters.

Who is the end user and what is the use case where this feature will be valuable?

Users that have an existing Ceph cluster and want to set up Ceph-CSI storage functionality on their Kubernetes clusters.

How will we know we have a good solution? (acceptance criteria)

The Ceph Mgr module should offer sufficient options to enable at least CephFS and RBD storage for common workloads (NFS can follow later if there is a need). It is expected that ceph-csi-operator will be deployed by Ceph Mgr on the target Kubernetes cluster(s).

CRD apiVersion for stable features running in production should be v1beta1

Describe the feature you'd like to have

The CSI operator CRDs should be v1 instead of v1alpha1 when we expect them to be running in production.

The conversation was started in the CSI operator design doc here, but it was marked resolved after some offline conversation without documenting the reasoning. Let's discuss the expectations for v1 openly.

Who is the end user and what is the use case where this feature will be valuable?

Users look at CRD API stability as a clue to whether a project is production-ready.
We are taking a stable CSI driver and moving its settings to a new CSI operator that we expect to be used in production quickly. Production users will expect a stable API version, even though the operator is new.

Many users may not care about the API version as long as we claim to support it, but some users will see an alpha version and refuse to run it in production. For upstream Rook, I consider using alpha CRDs in production deployments a blocker.

If we are going to support it in production anyway, we have to provide full support for all of our customers, for all new and upgraded clusters, with full feature parity for all customized settings. To me, that is the definition of v1, not alpha. Why not just ship with v1 now? Then we don't have to worry about conversion to v1 later. As suggested, we expect stability from the start if we are putting it in production.

And if we do support alpha in production, let's define the blocking criteria for moving it to v1.
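
For concreteness, the version under discussion is the string users write in a manifest's apiVersion and the version the operator registers in its scheme. A minimal Go sketch, assuming only the csi.ceph.io group from the design doc (the variable names are illustrative, not the operator's actual code):

package apis

import "k8s.io/apimachinery/pkg/runtime/schema"

// The served CRD version is exactly what users put in apiVersion:
// "csi.ceph.io/v1alpha1" today versus "csi.ceph.io/v1" if we ship stable now.
var (
	GroupVersionAlpha = schema.GroupVersion{Group: "csi.ceph.io", Version: "v1alpha1"}
	GroupVersionV1    = schema.GroupVersion{Group: "csi.ceph.io", Version: "v1"}
)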

Redundant `CephCSI` prefix in all the CRDs

The naming of the APIs in the design doc so far is:

  1. CephCSIOperatorConfig (v1a1)
  2. CephCSIDriver (v1a1)
  3. CephCSICephCluster (v1a1)
  4. CephCSIConfig (v1a1)
  5. CephCSIConfigMapping (v1)

The group we chose is csi.ceph.io. First point: do we need the CephCSI prefix in all the CRDs? It will be repeated in the Kind/Resource as well as in the Group. Second, most of these APIs describe a config from which we derive the desired state, so should we also have Config in the name as an abstraction?

After a rename, the APIs would read as follows (in group/resource form):

  1. operators.csi.ceph.io
  2. drivers.csi.ceph.io
  3. cephclusters.csi.ceph.io (maybe clusters.csi.ceph.io)
  4. - (maybe we need to remove this abstraction)
  5. mappings.csi.ceph.io (maybe elaborate on what the mapping is)

The only technical con I can think of is that, during kubectl operations, we might be forced to type out the full, non-overlapping group/resource (unless we can use a shortName) if there is a conflicting Kind/Resource. Since everything is a config, could we make the group config.csi.ceph.io?
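
To make the kubectl point concrete, a hedged sketch using the names listed above (illustrative only, not the operator's registered resources): when a bare resource name clashes with another group's CRD, the user has to type the fully qualified group/resource.

package apis

import "k8s.io/apimachinery/pkg/runtime/schema"

// Before the rename: kubectl get cephcsidrivers.csi.ceph.io
var currentDrivers = schema.GroupResource{Group: "csi.ceph.io", Resource: "cephcsidrivers"}

// After the rename: kubectl get drivers.csi.ceph.io
// If "drivers" alone is ambiguous, users fall back to this fully qualified
// form (or to a shortName, if one is registered on the CRD).
var renamedDrivers = schema.GroupResource{Group: "csi.ceph.io", Resource: "drivers"}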

RFE: Add a new xReady field to the CR status to indicate it is ready for consumption

I usually see phase/message/reason in resource Conditions, but it seems to me that Conditions have been going out of vogue. I think this is probably because they are incredibly hard to deal with in code and user scripts compared to having typed fields for status.

Many of the recent sig-storage KEPs use a someResourceReady: bool (e.g., bucketReady: bool, snapshotReady: bool) as well, so that users can know with a boolean whether their resource is ready.

Personally, I don't have a preference between message and reason. Both are alphabetically close to phase, which should mean users see the outputs near each other as status items are added.

But I would suggest some sort of xReady bool that indicates when the config is ready.

Originally posted by @BlaineEXE in #1 (comment)
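
To illustrate the proposal, a minimal sketch of a typed status with an xReady-style flag next to free-form reason/message fields; the struct and field names are hypothetical, not the operator's actual API:

package apis

// CephCSIConfigStatus is a hypothetical status block with a typed ready flag.
type CephCSIConfigStatus struct {
	// Ready is true once the generated configuration can be consumed.
	Ready bool `json:"ready"`
	// Reason and Message keep free-form context next to the typed flag.
	Reason  string `json:"reason,omitempty"`
	Message string `json:"message,omitempty"`
}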

Add new controller for NodeFencing

This architecture is one-way only: CRD changes -> configure drivers. I know that CSI drivers are usually pretty simple and can often be just a Deployment and a DaemonSet, so maybe this really is the architecture. But the operator pattern follows control theory, where system state also results in the controller taking actions.

Are there any other inputs that trigger reconciliation? For example:

  • Are there ConfigMaps that cause reconciliations?
  • Does the controller respond to any other events, like nodes going online/offline or getting new labels applied?
I feel like this is a bigger question: is this operator taking on the node loss fencing behavior? https://github.com/rook/rook/blob/682d639532534068a6154664119e10f0016f547d/design/ceph/node-loss-rbd-cephfs.md

I feel that, since the node fencing is both Kube-specific and involves CSI-Addons CRDs, it makes sense that the Ceph-CSI driver would be the right controller for this, so that non-Rook users could benefit from the fencing that RBD and CephFS need to function well in failure conditions.

Originally posted by @BlaineEXE in #1 (comment)
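
As an illustration of the two-way pattern being asked about, here is a minimal controller-runtime sketch of a reconciler that watches Node objects rather than only CRDs. Everything in it is hypothetical (the type name, and the fencing step is only a comment); it is not a claim about how this operator is built:

package controllers

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// NodeFenceReconciler reacts to node state, not just to CRD edits.
type NodeFenceReconciler struct {
	client.Client
}

func (r *NodeFenceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var node corev1.Node
	if err := r.Get(ctx, req.NamespacedName, &node); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady && cond.Status != corev1.ConditionTrue {
			// Here the controller would create or verify a CSI-Addons
			// NetworkFence for the node's addresses; omitted in this sketch.
		}
	}
	return ctrl.Result{}, nil
}

func (r *NodeFenceReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// Watching Nodes makes the controller respond to nodes going offline
	// or getting new labels, in addition to CRD changes.
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Node{}).
		Complete(r)
}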

New struct for volumes and volumemounts

Can we have something like the struct below, to make it easier for the user? The name then needs to be defined only once for both the volume and its mount:
import corev1 "k8s.io/api/core/v1"

// Volumes pairs a pod volume source with its container mount so the name
// only has to be specified once for both.
type Volumes struct {
	Name         string              `json:"name"`
	Volumes      corev1.VolumeSource `json:",inline"`
	VolumeMounts corev1.VolumeMount  `json:",inline"`
}

Originally posted by @Madhu-1 in #24 (comment)
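
For illustration, a hedged sketch (the helper name expand is hypothetical) of how such a combined entry could be mapped back onto the standard pod spec fields, reusing the single Name for both:

package apis

import corev1 "k8s.io/api/core/v1"

// expand maps one combined Volumes entry to the pod-level volume and the
// container-level mount, so the user-provided Name is written only once.
func expand(v Volumes) (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{Name: v.Name, VolumeSource: v.Volumes}
	mount := v.VolumeMounts
	mount.Name = v.Name
	return vol, mount
}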
