samba-operator's Introduction

Samba Operator

An operator for Samba as a service on PVCs in kubernetes.

Description

This project implements the samba-operator. It is responsible for the SmbShare, SmbSecurityConfig, and SmbCommonConfig custom resources:

  • SmbShare describes an SMB share that will be used to share data with clients.
  • SmbSecurityConfig describes domain- and/or user-based security properties for one or more shares.
  • SmbCommonConfig describes general configuration properties for SMB shares.

Trying it out (Quick Start)

Prerequisites

You need a running Kubernetes cluster; minikube, for example, is sufficient.

If you wish to use Active Directory domain-based security, you need one or more domain controllers that are visible to Pods within the Kubernetes cluster.

If you wish to access shares from outside the Kubernetes cluster, your cluster must support Services with type LoadBalancer.

Start the operator

To install the CRDs and other resources and start the operator, invoke:

make deploy

To use your own image, use:

make deploy IMG=<my-registry/and/image:tag>

To delete the operator and CRDs from the cluster, run:

make delete-deploy

Alternatively, if you do not wish to use make to deploy the operator, you can apply the manifests directly with kubectl:

kubectl apply -k config/default

To remove the operator and all related resources, use:

kubectl delete -k config/default

Creating new Shares

Use a PVC you define

A share can be created from a pre-existing PVC, one that is not directly managed by the operator.

Assuming you have a PVC named mypvc, you can create a new SmbShare using the example YAML below:

apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbShare
metadata:
  name: smbshare1
spec:
  storage:
    pvc:
      name: "mypvc"
  readOnly: false
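
For reference, mypvc can be any ordinary PVC. A minimal sketch, using the default storage class and a placeholder size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi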

Use a PVC embedded in the SmbShare

A share can be created that embeds a PVC definition. In this case the operator will automatically manage the PVC along with the share. This example assumes you have a default storage class enabled.

For example:

apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbShare
metadata:
  name: smbshare2
spec:
  storage:
    pvc:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  readOnly: false
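
After applying either variant, you can watch the operator create the backing resources. For example, saving the example above as smbshare2.yaml (exact generated resource names may vary):

$ kubectl apply -f smbshare2.yaml
$ kubectl get smbshares,pvc,pods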

Testing it with a Local Connection

Assuming a local Linux-based environment, you can test a connection to the container by forwarding the SMB port and using a local install of smbclient to access the share:

$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
my-smbservice-7f779ddc8c-nb6k6    1/1     Running   0          62m
samba-operator-5758b4dbbf-gk9pk   1/1     Running   0          70m
$ kubectl port-forward pod/my-smbservice-7f779ddc8c-nb6k6  4455:445
Forwarding from 127.0.0.1:4455 -> 445
Forwarding from [::1]:4455 -> 445
Handling connection for 4455
$ smbclient -p 4455 -U sambauser //localhost/share
Enter SAMBA\sambauser's password:
Try "help" to get a list of possible commands.
smb: \> ls
.                                   D        0  Fri Aug 28 14:43:26 2020
..                                  D        0  Fri Aug 28 14:32:53 2020
x                                   A   359386  Fri Aug 28 14:35:18 2020
gefcanilant                         A  5141264  Fri Aug 28 14:43:26 2020

4184064 blocks of size 1024. 4141292 blocks available
smb: \>

Above we forward the normal SMB port to an unprivileged local port, assuming you'll be running this as a normal user.
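
As an alternative to smbclient, the forwarded port can also be mounted directly, assuming cifs-utils is installed and you have root privileges (port 4455 as forwarded above):

$ sudo mount -t cifs -o port=4455,username=sambauser //localhost/share /mnt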

Documentation

For additional details on how to set up shares that authenticate via Active Directory, use a load balancer, etc., please refer to the Samba Operator Documentation.

Containers on quay.io

This operator uses the container image built from samba-in-kubernetes/samba-container, as found on quay.io.

The operator container image built from this codebase is published on quay.io as well.

samba-operator's Issues

Add centosci test target for kubernetes 1.24

No rush on this one, but we're currently testing our PRs on k8s 1.23, and 1.24 was released earlier this month. I think we should target 1.24 (only) for a while and then consider widening the set of kube versions we test against later this year. Feel free to discuss.

YAML files "churn" when executing make commands

I think this is fallout from #131

When I run commands such as make manifests, many of the YAML files show up as changed in git even though I haven't touched the sources. I think we should do one of the following:

  • Configure our YAML-generating tools to conform to the new styling.
  • Configure the Makefile to automatically format the output of make manifests and the like.
  • Change our formatting/linting rules to conform to the default output of the tools used for make manifests and the like.

`mountPath` integration test fails when run alone

If the mountPath test happens to run first, or is run alone, it fails because the smbclient test pod is not present in the k8s cluster. The following error is from one such occurrence in the CentOS CI runs:

=== RUN   TestIntegration
=== RUN   TestIntegration/deploy
=== RUN   TestIntegration/deploy/default
=== RUN   TestIntegration/deploy/default/TestImageAndTag
=== RUN   TestIntegration/deploy/default/TestOperatorReady
=== RUN   TestIntegration/smbShares
=== RUN   TestIntegration/smbShares/mountPath
    mount_path_test.go:70: 
        	Error Trace:	mount_path_test.go:70
        	            				suite.go:118
        	            				integration_test.go:15
        	Error:      	Received unexpected error:
        	            	failed to flush cache: ['rm' '-f' '/var/lib/samba/lock/gencache.tdb']: failed executing command (pod:samba-operator-system/smbclient container:client): pods "smbclient" not found [exit: 1; stdout: ; stderr: ]
        	Test:       	TestIntegration/smbShares/mountPath

We may have to create the smbclient pod, if it is missing from the cluster, as part of SetupSuite().
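
Until then, a manual workaround is to pre-create the pod before running the tests (assuming the client pod manifest lives at tests/files/client-test-pod.yaml, as the smbclient issue further below suggests):

$ kubectl -n samba-operator-system get pod smbclient >/dev/null 2>&1 || \
      kubectl -n samba-operator-system apply -f tests/files/client-test-pod.yaml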

Tests should assert that all resources created for a SmbShare are deleted

Tests should assert that all resources created for a SmbShare are deleted when said SmbShare is deleted.

Basically we want to check that things are cleaned up properly. There's currently some breakage here due to the namespace/SetControllerReference problem reported in #87, but I suspect there's more. By adding such tests we can be confident that this gets fixed and stays fixed.

Drop port 139

I've been researching how best to expose SMB shares to systems outside the k8s cluster. One thing that stands out to me is that we really don't need port 139. It's pretty old, obsolete, and having it as part of the container spec is just going to be confusing. I think we should focus our efforts on "modern" SMB until we have strong demand otherwise.

samba-operator-controller-manager continuously restarting with status OOMKilled

After deploying the operator with make deploy on OpenShift, the pod is restarting constantly:

$ kubectl -n samba-operator-system get pods
NAME                                                READY   STATUS      RESTARTS   AGE
samba-operator-controller-manager-5c4766cfc-hnp2k   1/2     OOMKilled   2          5m58s
$ kubectl -n samba-operator-system describe pods
Name:         samba-operator-controller-manager-5c4766cfc-hnp2k
Namespace:    samba-operator-system
Priority:     0
Node:         ip-10-0-153-215.ec2.internal/10.0.153.215
Start Time:   Fri, 30 Apr 2021 10:18:13 +0200
Labels:       control-plane=controller-manager
              pod-template-hash=5c4766cfc
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.131.0.62"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "10.131.0.62"
                    ],
                    "default": true,
                    "dns": {}
                }]
              openshift.io/scc: restricted
Status:       Running
IP:           10.131.0.62
IPs:
  IP:           10.131.0.62
Controlled By:  ReplicaSet/samba-operator-controller-manager-5c4766cfc
Containers:
  kube-rbac-proxy:
    Container ID:  cri-o://9dcea85dea85669f5d32af871cc81f45dd0ab2a98f762d8d3763afd3d9c5d5f2
    Image:         gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
    Image ID:      gcr.io/kubebuilder/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Fri, 30 Apr 2021 10:18:18 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tnfbb (ro)
  manager:
    Container ID:  cri-o://99320227bde1457d9fc019ecf479ce75fba630a428c80109f875023bae8a831e
    Image:         quay.io/samba.org/samba-operator:latest
    Image ID:      quay.io/samba.org/samba-operator@sha256:d1dbcea58e9800d17c40064d238ded061700eb5fb9e643c7b2884b834e6dd812
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --metrics-addr=127.0.0.1:8080
      --enable-leader-election
    State:          Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Fri, 30 Apr 2021 10:25:40 +0200
      Finished:     Fri, 30 Apr 2021 10:26:06 +0200
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Fri, 30 Apr 2021 10:24:26 +0200
      Finished:     Fri, 30 Apr 2021 10:24:51 +0200
    Ready:          False
    Restart Count:  4
    Limits:
      cpu:     100m
      memory:  30Mi
    Requests:
      cpu:        100m
      memory:     20Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tnfbb (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-tnfbb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tnfbb
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Normal   Scheduled       8m                    default-scheduler  Successfully assigned samba-operator-system/samba-operator-controller-manager-5c4766cfc-hnp2k to ip-10-0-153-215.ec2.internal
  Normal   AddedInterface  7m59s                 multus             Add eth0 [10.131.0.62/23]
  Normal   Pulling         7m58s                 kubelet            Pulling image "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0"
  Normal   Pulled          7m56s                 kubelet            Successfully pulled image "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0" in 2.107868137s
  Normal   Created         7m56s                 kubelet            Created container kube-rbac-proxy
  Normal   Started         7m56s                 kubelet            Started container kube-rbac-proxy
  Normal   Pulled          7m52s                 kubelet            Successfully pulled image "quay.io/samba.org/samba-operator:latest" in 3.780498563s
  Normal   Pulled          3m20s                 kubelet            Successfully pulled image "quay.io/samba.org/samba-operator:latest" in 139.668453ms
  Normal   Pulled          2m40s                 kubelet            Successfully pulled image "quay.io/samba.org/samba-operator:latest" in 141.670246ms
  Normal   Pulling         108s (x4 over 7m56s)  kubelet            Pulling image "quay.io/samba.org/samba-operator:latest"
  Normal   Created         108s (x4 over 7m52s)  kubelet            Created container manager
  Normal   Started         108s (x4 over 7m52s)  kubelet            Started container manager
  Normal   Pulled          108s                  kubelet            Successfully pulled image "quay.io/samba.org/samba-operator:latest" in 130.90248ms
  Warning  BackOff         57s (x6 over 2m54s)   kubelet            Back-off restarting failed container

This seems to be related to the resource configuration in the Deployment/samba-operator-controller-manager. Increasing the memory limit from 30Mi to 100Mi seems to make the Pod run (without any workloads, that is):

          resources:
            limits:
              cpu: 100m
              memory: 100Mi
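
The same change can be applied to a running deployment without editing the sources, assuming the manager is the second container (index 1, as in the describe output above):

$ kubectl -n samba-operator-system patch deployment samba-operator-controller-manager \
    --type json \
    -p '[{"op": "replace", "path": "/spec/template/spec/containers/1/resources/limits/memory", "value": "100Mi"}]'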

Add test cases for smb metrics container

We need to add test cases to verify that the metrics container is created when it should be, and that it works (for some value of "works").

I don't think we need to run a full-blown Prometheus, but it may be good to at least verify that the HTTP endpoint is valid.
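
A minimal smoke test could port-forward the metrics endpoint and check that it serves Prometheus text. The Service name and port below are placeholders, not the operator's actual values:

$ kubectl -n samba-operator-system port-forward svc/<share>-metrics 9090:<metrics-port> &
$ curl -s http://localhost:9090/metrics | head -n 5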

updating SmbdContainerImage and ServiceAccountName via environment does not change the share's deployment's image

This is on version v0.2.

> oc logs -n samba-operator-system deploy/samba-operator-controller-manager
2022-06-15T09:33:03.716Z        INFO    setup   loaded configuration successfully       {"config": {"SmbdContainerImage":"quay.io/samba.org/samba-server:v0.2","SmbdMetricsContainerImage":"quay.io/samba.org/samba-metrics:v0.2","SvcWatchContainerImage":"quay.io/samba.org/svcwatch:v0.2","SmbdContainerName":"samba","WinbindContainerName":"wb","WorkingNamespace":"samba-operator-system","SambaDebugLevel":"","StatePVCSize":"1Gi","ClusterSupport":"","SmbServicePort":445,"SmbdPort":445,"ServiceAccountName":"samba","MetricsExporterMode":"disabled","PodName":"samba-operator-controller-manager-7486c6dcf5-zq7w7","PodNamespace":"samba-operator-system","PodIP":"172.20.11.77"}}
[...]
2022-06-15T09:33:26.516Z        INFO    controllers.SmbShare    Updating state for SmbShare  {"smbshare": "namespace/my-share", "SmbShare.Namespace": "namespace", "SmbShare.Name": "myshare", "SmbShare.UID": "f69ba631-de0b-4856-b3fb-b0cb9f2a4ca1"}
2022-06-15T09:33:31.718Z        INFO    controllers.SmbShare    Done updating SmbShare resources      {"smbshare": "namespace/my-share"}

But:

> oc get po -n namespace myshare | grep -E '(serviceAccountName|image):'
    image: quay.io/samba.org/samba-server:latest
  serviceAccountName: default
    image: quay.io/samba.org/samba-server:latest
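
For the record, the configuration shown in the log above was injected through SAMBA_OP_* environment variables on the manager deployment. The exact variable names below are assumptions, based only on the SAMBA_OP_ prefix seen elsewhere in this repo:

$ kubectl -n samba-operator-system set env deploy/samba-operator-controller-manager \
    SAMBA_OP_SMBD_CONTAINER_IMAGE=quay.io/samba.org/samba-server:v0.2 \
    SAMBA_OP_SERVICE_ACCOUNT_NAME=samba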

[ctdb/ss] Explicitly handle cases of SmbShare contents changing

In the future we will probably want to fully reconcile certain changes to the SmbShare, such as changing from a non-clustered to a clustered instance. However, in the short term it should be enough to recognize the situation and refuse to do anything (too) destructive.

Fix annoying 'dos charset 'CP850' unavailable' warnings

Tools like smbclient and net generate dos charset warnings when run. It looks something like:

# smbclient -U foo -L //localhost
lp_load_ex: changing to config backend registry
Password for [WORKGROUP\foo]:
dos charset 'CP850' unavailable - using ASCII                      <-------- HERE
session setup failed: NT_STATUS_LOGON_FAILURE

It's minor but unnecessary, as it can be eliminated by setting an smb.conf option (dos charset, AFAICT).
Let's set that option and eliminate an annoyance.
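
A minimal sketch of the fix in smb.conf terms, assuming ASCII (the fallback the warning already picks) is what we want to make explicit:

[global]
        dos charset = ASCII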

samba client pod is not coming up after `k apply -f client-test-pod.yaml`

Steps:

  1. Create a minikube deployment.
  2. Clone the repo and make deploy.
  3. $ k config set-context --current --namespace=samba-operator-system
  4. $ k get pods
     NAME                                                 READY   STATUS    RESTARTS   AGE
     samba-operator-controller-manager-844d976b7b-nlgqb   2/2     Running   0

[a@dhcp47-98 files]$ k apply -f client-test-pod.yaml
pod/smbclient created

[a@dhcp47]$ k get pods
NAME                                                 READY   STATUS              RESTARTS   AGE
samba-ad-server-86b7dd9856-shvxp                     1/1     Running             0          46m
samba-operator-controller-manager-844d976b7b-nlgqb   2/2     Running             0          19h
smbclient                                            0/1     ContainerCreating   0          30m   <-- pod is not coming up


events:

35m Normal Scheduled pod/smbclient Successfully assigned samba-operator-system/smbclient to minikube
34m Warning FailedMount pod/smbclient MountVolume.SetUp failed for volume "data" : configmap "sample-data1" not found
27m Warning FailedMount pod/smbclient MountVolume.SetUp failed for volume "data" : configmap "sample-data1" not found
25m Warning FailedMount pod/smbclient Unable to attach or mount volumes: unmounted volumes=[kube-api-access-45s5r data], unattached volumes=[kube-api-access-45s5r data]: timed out waiting for the condition
4m36s Normal Scheduled pod/smbclient Successfully assigned samba-operator-system/smbclient to minikube
2m28s Warning FailedMount pod/smbclient MountVolume.SetUp failed for volume "data" : configmap "sample-data1" not found
2m33s Warning FailedMount pod/smbclient Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data kube-api-access-88s6g]: timed out waiting for the condition
87s Normal Scheduled pod/smbclient Successfully assigned samba-operator-system/smbclient to minikube
24s Warning FailedMount pod/smbclient MountVolume.SetUp failed for volume "data" : configmap "sample-data1" not found
16s Warning FailedMount pod/smbclient Unable to attach or mount volumes: unmounted volumes=[data kube-api-access-88s6g], unattached volumes=[data kube-api-access-88s6g]: timed out waiting for the condition


Please let me know if I missed any steps.
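
The FailedMount events point at a missing ConfigMap named sample-data1 that the client pod mounts as its data volume. Creating it should unblock the pod (the key and content here are placeholders):

$ kubectl -n samba-operator-system create configmap sample-data1 --from-literal=hello.txt='hello world'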

smbclient/gencache name caching can cause test failures

Ideally, the tests run like they do in the CI: once per cluster. But when developing and/or testing locally there are good reasons to reuse a k8s cluster. The tests don't tear down certain resources, like the smbclient pod. Unfortunately, using smbclient to connect to one DNS name places a cache record in gencache.tdb that seems to outlive the TTL of the record as served up by CoreDNS in the k8s cluster.

This issue is more of a reminder that the problem lurks than something that needs to be resolved immediately.
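
When it bites, the stale cache can be cleared manually in the client pod; this is the same command the test harness runs (see the mountPath failure above):

$ kubectl -n samba-operator-system exec smbclient -c client -- rm -f /var/lib/samba/lock/gencache.tdb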

AD share is not able to fetch its own SID

I installed the Samba Operator 0.2 on an OpenShift 4.8 bare-metal cluster and created some AD shares.

  1. The created share's export pod starts.
  2. In AD (Samba 4.12.2) the computer object is created.
  3. The pod goes into CrashLoopBackOff; the wb container cannot start:
winbindd version 4.15.7 started.
Copyright Andrew Tridgell and the Samba Team 1992-2021
initialize_winbindd_cache: clearing cache and re-creating with version number 2
Could not fetch our SID - did we join?
unable to initialize domain list

YAMLs:

apiVersion: v1
kind: Secret
metadata:
  name: join1
  namespace: samba-shares
type: Opaque
stringData:
  join.json: |
    {"username": "samba-container-join", "password": ":-)"}
---
apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbSecurityConfig
metadata:
  name: addomain
  namespace: samba-shares
spec:
  mode: active-directory
  realm: ad.domain.com
  joinSources:
  - userJoin:
      secret: join1
      key: join.json
---
apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbCommonConfig
metadata:
  name: freigabe
  namespace: samba-shares
spec:
  network:
    publish: external
---
apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbShare
metadata:
  name: testshare
  namespace: samba-shares
spec:
  commonConfig: freigabe
  securityConfig: addomain
  readOnly: false
  storage:
    pvc:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

samba-tool on the AD server shows that the entry is created:

# samba-tool computer show TESTSHARE 
dn: CN=TESTSHARE,OU=Containers,OU=Domain Computers,DC=ad,DC=domain,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
objectClass: computer
cn: TESTSHARE
instanceType: 4
whenCreated: 20220615103058.0Z
uSNCreated: 144306
name: TESTSHARE
objectGUID: 3adabc17-a938-47fa-843c-1e864b86e19e
badPwdCount: 0
codePage: 0
countryCode: 0
badPasswordTime: 0
lastLogoff: 0
primaryGroupID: 515
objectSid: S-1-5-21-2358220382-4025805735-3930986455-1375
accountExpires: 9223372036854775807
sAMAccountName: TESTSHARE$
sAMAccountType: 805306369
servicePrincipalName: HOST/TESTSHARE.ad.domain.com
servicePrincipalName: RestrictedKrbHost/TESTSHARE.ad.domain.com
servicePrincipalName: HOST/TESTSHARE
servicePrincipalName: RestrictedKrbHost/TESTSHARE
objectCategory: CN=Computer,CN=Schema,CN=Configuration,DC=ad,DC=domain,DC=com
isCriticalSystemObject: FALSE
dNSHostName: testshare.ad.domain.com
lastLogonTimestamp: 132997626582395210
msDS-SupportedEncryptionTypes: 31
pwdLastSet: 132997630161230470
userAccountControl: 4096
lastLogon: 132997630162023640
logonCount: 6
whenChanged: 20220615104727.0Z
uSNChanged: 144314
distinguishedName: CN=TESTSHARE,OU=Containers,OU=Domain Computers,DC=ad,DC=domain,DC=com

3) Debug the pod / wb container:

# oc get pods
NAME                                   READY   STATUS             RESTARTS   AGE
testshare-testshare-5986c96565-92gx9   1/2     CrashLoopBackOff   12         41m
# oc logs testshare-testshare-5986c96565-92gx9 -c wb
winbindd version 4.15.7 started.
Copyright Andrew Tridgell and the Samba Team 1992-2021
initialize_winbindd_cache: clearing cache and re-creating with version number 2
Could not fetch our SID - did we join?
unable to initialize domain list
sh-5.1# samba-container 
[global]
	disable spoolss = yes
	fileid:algorithm = fsid
	load printers = no
	printcap name = /dev/null
	printing = bsd
	smb ports = 445
	vfs objects = fileid
	idmap config * : backend = autorid
	idmap config * : range = 2000-9999999
	realm = AD.DOMAIN.COM
	security = ads
	workgroup = AD
	netbios name = testshare

[testshare]
	path = /mnt/75067755-fe82-4f3c-841f-1ad7df34b5c8
	read only = no

and the same when I start debugging:

[root@testshare-5986c96565-92gx9-debug /]# samba-container run winbindd
winbindd version 4.15.7 started.
Copyright Andrew Tridgell and the Samba Team 1992-2021
initialize_winbindd_cache: clearing cache and re-creating with version number 2
Could not fetch our SID - did we join?
unable to initialize domain list

So: there is a SID, AD accepts the join, and yet the Pod cannot fetch its own SID.

[Feature] Add an option in SmbShare to expose the share as a Service

For users it would be very convenient to have a Service that can be used to connect to the SmbShare by name.

For example, using tests/files/smbshare1.yaml as SmbShare, and adding the following Service:

apiVersion: v1
kind: Service
metadata:
  name: tshare1
  namespace: samba-operator-system
spec:
  selector:
    samba-operator.samba.org/service: tshare1
  ports:
  - port: 445
    protocol: TCP

This makes it possible to use //tshare1/My Share to connect, independent of the IP address that the Pod currently has:

$ kubectl -n samba-operator-system exec -ti centos -- /bin/bash
[root@centos /]# smbclient -U sambauser '//tshare1/My Share'
Enter SAMBA\sambauser's password: 
Try "help" to get a list of possible commands.
smb: \> 

Or, if the consumer of the SmbShare runs in a different namespace, it can use //tshare1.samba-operator-system.svc.cluster.local/My Share:

$ kubectl exec -ti centos -- /bin/bash
[root@centos /]# smbclient -U sambauser '//tshare1.samba-operator-system.svc.cluster.local/My Share'
Enter SAMBA\sambauser's password: 
Try "help" to get a list of possible commands.
smb: \> 

It would be nice if the scripts worked with a KUBECONFIG env of the form path1:path2:path3

It is possible to set the KUBECONFIG env such that it lists multiple kubeconfig files. The kubectl command knows how to parse this and uses the first kubeconfig file in the list that is available.

I don't know if it is possible to do something similar in bash scripts. Maybe it is possible to provide similar behavior by using the client library from https://pkg.go.dev/k8s.io/cli-runtime. Just wanted to put it out there.
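
A rough bash sketch of the first-available-file behavior described above (note that kubectl itself actually merges all the listed files rather than picking one):

IFS=':' read -r -a cfgs <<< "${KUBECONFIG:-$HOME/.kube/config}"
for cfg in "${cfgs[@]}"; do
    # use the first kubeconfig file that actually exists
    if [ -f "$cfg" ]; then
        export KUBECONFIG="$cfg"
        break
    fi
done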

Document how to expose a share outside the k8s cluster

Support for additional configuration that will allow access to a share from outside the k8s cluster was added a few months back but not documented. In fact, all the basic docs are a bit out of date. These should be updated to reflect the current state of the operator.

[ctdb/ss] Add support for AD DNS Registration

The non-clustered SmbShare instances are able to host additional containers that watch for changes to a Service's (public) IP addresses and register those in the AD DNS. This should be supported and tested in clustered (ctdb) mode too.

rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated

Running make deploy emits numerous warnings, including:

Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole

I think any/all uses of this ClusterRole kind come from kubebuilder/operator-sdk and its tool chain, but we should probably figure out how to update anyway, since we don't even need to be backwards compatible with k8s pre-1.17.
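
Locating the offending manifests before updating is a one-liner (searching the config/ tree the operator is deployed from):

$ grep -R "rbac.authorization.k8s.io/v1beta1" config/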

smbtorture failure - smb2.rw.invalid

When testing against the samba-operator's samba server using a rook-supplied CephFS PVC, we see a failure when running the smb2.rw.invalid test.

[root@smbclient samba-integration]# /bin/smbtorture --fullname --target=samba3 --user=sambauser%samba //10.244.2.14/smbshare3  smb2.rw.invalid
smbtorture 4.15.5
Using seed 1651410917
time: 2022-05-01 13:15:17.325545
test: smb2.rw.invalid
time: 2022-05-01 13:15:17.327756
dos charset 'CP850' unavailable - using ASCII
time: 2022-05-01 13:15:17.444662
failure: smb2.rw.invalid [
../../source4/torture/smb2/read_write.c:331: status was NT_STATUS_DISK_FULL, expected NT_STATUS_OK: Incorrect status
]

The failing part of the test code is:

        w.in.file.handle = h;    
        w.in.offset = 0xfffffff0000 - 1; /* MAXFILESIZE - 1 */
        w.in.data.data = buf;
        w.in.data.length = 1;
        status = smb2_write(tree, &w);
        if (TARGET_IS_SAMBA3(torture) || TARGET_IS_SAMBA4(torture)) {
                CHECK_STATUS(status, NT_STATUS_OK);
                CHECK_VALUE(w.out.nwritten, 1);
        } else {
                CHECK_STATUS(status, NT_STATUS_DISK_FULL);
        }

This is not seen with an ext4 underlying filesystem.

Versions:
samba-4.15.6-0.fc35.x86_64

mount point:
10.111.173.90:6789,10.110.224.62:6789,10.101.104.103:6789:/volumes/csi/csi-vol-0cb59f87-c54e-11ec-ad3d-1e1dd7acb57d/ae264282-34b6-4255-a25c-6d8f60d9fc5e /mnt/dc189f61-d413-4b76-bb99-4b86beb30c0a ceph rw,relatime,name=csi-cephfs-node,secret=,acl,mds_namespace=myfs 0 0

Document how to consume a share inside the k8s cluster

I used csi-driver-smb and a PV. If that's the intended way, I can write some documentation.

In the future it would, of course, be great to allow consuming SmbShares from pods in the same namespace without the administrator having to manually create a PV.
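
For reference, a minimal csi-driver-smb PersistentVolume pointing at an operator-managed share Service might look like the sketch below. The driver installation, share path, and credentials Secret are assumptions, not something the operator creates for you:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: smbshare1-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: smbshare1-pv  # must be unique within the cluster
    volumeAttributes:
      source: //smbshare1.samba-operator-system.svc.cluster.local/smbshare1
    nodeStageSecretRef:
      name: smb-creds  # Secret holding username/password for the share
      namespace: default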

Samba Pod fails to start on OpenShift

The following error was addressed with #71:

Traceback (most recent call last):
  File "/usr/local/bin/samba-container", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 447, in main
    cfunc(cli, config)
  File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 97, in run_container
    init_container(cli, config)
  File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 84, in init_container
    import_config(cli, config)
  File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 49, in import_config
    paths.ensure_samba_dirs()
  File "/usr/local/lib/python3.9/site-packages/sambacc/paths.py", line 37, in ensure_samba_dirs
    _mkdir(wb_sockets_dir)
  File "/usr/local/lib/python3.9/site-packages/sambacc/paths.py", line 43, in _mkdir
    os.mkdir(path)
PermissionError: [Errno 13] Permission denied: '/run/samba/winbindd'

Unfortunately, it is not sufficient and the next problem looks like this:

$ oc -n samba-operator-system logs pvc-0c6867c2-5875-405a-a4da-6f11d11c9e12-b58654f9-bmgn5
Failed to initialize the registry: WERR_ACCESS_DENIED
Failed to initialize the registry: WERR_ACCESS_DENIED
Can't load /etc/samba/smb.conf - run testparm to debug it
Traceback (most recent call last):
  File "/usr/local/bin/samba-container", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 447, in main
    cfunc(cli, config)
  File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 97, in run_container
    init_container(cli, config)
  File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 84, in init_container
    import_config(cli, config)
  File "/usr/local/lib/python3.9/site-packages/sambacc/main.py", line 54, in import_config
    loader.import_config(iconfig)
  File "/usr/local/lib/python3.9/site-packages/sambacc/netcmd_loader.py", line 59, in import_config
    self._check(cli, proc)
  File "/usr/local/lib/python3.9/site-packages/sambacc/netcmd_loader.py", line 52, in _check
    raise LoaderError("failed to run {}".format(cli))
sambacc.netcmd_loader.LoaderError: failed to run ['net', 'conf', 'import', '/dev/stdin']

Add sample for OpenShift's DNS operator to resolve AD-Zone

This issue is more of a stopgap until proper documentation is written for people searching for that information. I intend to write this up properly.

OpenShift's DNS operator does not allow editing the coredns configmap when in managed state. It does support changing the CRD though. The following file does the same on OpenShift as the file in tests/files/coredns-snippet.template.
The file can be applied with

oc patch dns.operator/default --type merge --patch-file /path/to/file

(Of course AD_SERVER_IP has to be the actual IP.)

spec:
  servers:
  - name: ad-zone
    zones:
    - ad.schaeffer-ag.de
    forwardPlugin:
      upstreams:
      - AD_SERVER_IP

support for ExternalDNS

Awesome project so far. However, there is a community of users that will use the "user" authentication method (i.e. no AD) but still want to expose samba shares outside the k8s cluster.

Since there is no AD, AD-DNS isn't really an option. However, k8s does provide ExternalDNS as an option for exposing services externally via integration with external DNS infrastructure.

It may be interesting to have an option to expose via either AD-DNS or ExternalDNS.
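
For example, with ExternalDNS deployed in the cluster, annotating the share's LoadBalancer Service would be enough to publish a DNS record. A sketch, where the hostname and Service name are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: smbshare1
  annotations:
    external-dns.alpha.kubernetes.io/hostname: smbshare1.example.com
spec:
  type: LoadBalancer
  selector:
    samba-operator.samba.org/service: smbshare1
  ports:
  - port: 445
    protocol: TCP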

Creating SMBShares in different namespaces

Looking at the code, the operator creates the resources (PVC, Deployment, Service) in the operator's namespace and then sets the SmbShare resource as the owner of these resources (so that K8s garbage-collects them).

But K8s does not allow cross-namespace ownership. So if the SmbShare is created in a different namespace, this operation will fail. Note that the return value is not checked:

controllerutil.SetControllerReference(s, pvc, m.scheme)

In the demo shown at SambaXP, the kubectl context is set to the operator's namespace right after provisioning the operator. As a result, all the resources created in the demo land in that namespace and this issue does not occur.

Is the intention to force users to create SmbShares only in the operator's namespace?

ctdb-is-experimental: Failing to multi-attach in a clustered setup

apiVersion: samba-operator.samba.org/v1alpha1
kind: SmbShare
metadata:
  name: smbshare3
spec:
  scaling:
    availabilityMode: clustered
    minClusterSize: 2
  storage:
    pvc:
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  readOnly: false
  shareName: "smbshare3"
[sprabhu@fedora samba-operator]$ kubectl get pods
NAME                               READY   STATUS     RESTARTS   AGE
samba-ad-server-86b7dd9856-zkptq   1/1     Running    0          5h22m
smbshare3-0                        3/3     Running    0          6m31s
smbshare3-1                        0/3     Init:0/4   0          6m10s

Checking with kubectl describe pod smbshare3-1, we see:

  Type     Reason                  Age    From                     Message
  ----     ------                  ----   ----                     -------
  Normal   Scheduled               3m20s  default-scheduler        Successfully assigned default/smbshare3-1 to minikube-m02
  Warning  FailedAttachVolume      3m20s  attachdetach-controller  Multi-Attach error for volume "pvc-10d65634-15ab-4db1-ad35-243e9589d861" Volume is already used by pod(s) smbshare3-0
  Normal   SuccessfulAttachVolume  3m19s  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-7d070fb6-76c0-48d7-a629-a99e3d4ee2a6"
  Warning  FailedMount             77s    kubelet                  Unable to attach or mount volumes: unmounted volumes=[smbshare3-pvc-smb], unattached volumes=[smbshare3-state-ctdb ctdb-config ctdb-volatile samba-container-config ctdb-sockets samba-state-dir kube-api-access-8kcnn ctdb-persistent smbshare3-pvc-smb]: timed out waiting for the condition

[sprabhu@fedora samba-operator]$ kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
smbshare3-pvc     Bound    pvc-10d65634-15ab-4db1-ad35-243e9589d861   1Gi        RWO            rook-cephfs    7m41s
smbshare3-state   Bound    pvc-7d070fb6-76c0-48d7-a629-a99e3d4ee2a6   1Gi        RWX            rook-cephfs    7m41s
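
The Multi-Attach error is consistent with the share PVC (smbshare3-pvc) being ReadWriteOnce: two pods on different nodes cannot both attach it. A hedged guess at a workaround, assuming the storage class (rook-cephfs here) supports RWX, is to request ReadWriteMany in the embedded PVC spec; whether the operator should do this automatically in clustered mode is arguably part of this issue:

  storage:
    pvc:
      spec:
        accessModes:
          - ReadWriteMany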

kustomization.yaml in developer guide is incomplete

The contents of config/developer/kustomization.yaml should also include

resources:
- ../crd
- ../rbac
- ../manager-full

for it to work properly.

Without this, the command make DEVELOPER=1 deploy will fail with the output:

/home/sprabhu/go/bin/controller-gen "crd:trivialVersions=true,crdVersions=v1" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/developer && /home/sprabhu/go/bin/kustomize edit set image controller=quay.io/spuiuk/smbshare_devel:test
/home/sprabhu/go/bin/kustomize build config/developer | kubectl apply -f -
Error: merging from generator &{0xc0004b6360 { map[] map[]} {{system controller-cfg merge {[SAMBA_OP_SAMBA_DEBUG_LEVEL=10 SAMBA_OP_CLUSTER_SUPPORT=ctdb-is-experimental] [] [] } }}}: id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"controller-cfg", Namespace:"system"} does not exist; cannot merge or replace

[ctdb/ss] Test clustered mode in the default CI

Currently the tests for clustered SmbShares are not run in the CI. This is because the CI uses a single node k8s cluster with no ability to provision RWX PVCs.

At the bare minimum the test cluster must support Read-Write-Many. Ideally it would also support >=3 k8s nodes.

I don't know if the github CI is sufficient for this.

expose to external using TCP ingress

I'm using Traefik as the ingress controller.

Is it possible to ask the operator to create the Service that exposes the share as a ClusterIP only, so that an IngressRouteTCP resource can expose the service through the ingress controller?
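
For context, a Traefik IngressRouteTCP in front of a ClusterIP share Service might look like the sketch below. The entrypoint and Service names are assumptions, and since SMB is not SNI-based the match has to be a catch-all:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: smbshare1
spec:
  entryPoints:
    - smb   # a TCP entrypoint defined in Traefik's static config, e.g. ":445"
  routes:
    - match: HostSNI(`*`)
      services:
        - name: smbshare1
          port: 445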

Why two CRD types?

I was trying out this operator and following along with the readme, but I couldn't figure out the justification for the two different CRDs. It seems like the SmbPvc is meant to create an SmbService and a PVC and glue them together. If I'm right, I'm not quite sure what the advantage is.

If I may, my first thought is that the CRDs should be oriented toward the tasks the users (cluster/storage admins) will be performing. In that light, SmbService makes some sense, but you could perhaps go even more granular and create SmbShare CRDs instead.

IMO, if you want to bind the lifecycle of a PV/PVC from a lower part of the storage stack, either by name (for an existing PVC) or by directly specifying the PVC parameters, I'd go with something like:

source:
  pvc:
    name: "mypvcname"

and, as an alternate form:

source:
  pvc:
    spec:
      # ...embedded pvc spec...
