
Comments (10)

edmc-ss commented on June 16, 2024

Welcome to ProxyFS, jnamdar!

Apologies about the lack of the CentOS box... indeed, I believe the path to it needs tweaking.
That particular box has the vbox tools installed allowing "synced folders" to work.
I trust you found a workable substitute.
In any event, we should be updating the CentOS box version we are referencing.

Nice to see you were able to get your Swift Proxy "NoAuth" pipeline up and working.
One caution about that pipeline. As you can imagine, it doesn't want to have the pfs_middleware
installed (but does need meta_middleware installed)... the reverse of the "normal" Swift Proxy.
Another caution... if you enable encryption in your cluster, you need to be sure the encryption middleware is in both pipelines (and after pfs_middleware in the "normal" pipeline).

Keystone Auth for the Swift API pipeline should "just work"... but I must admit to never having tried it. The pfs_middleware that enables ProxyFS sits after the Auth step in the pipeline, so we should be just fine. But if you are asking about Keystone Auth being integrated on the SMB side, I don't have any experience with that alas. Is that the need for you?

Your attempt to get SMB mounting working against your modified setup is very interesting. I first thought one of the authorization steps had been missed, so I checked a couple of things.

  1. Samba, due to its history of leveraging Linux "authorization" by mapping SMB users to Linux users, requires that the SMB user also exist in the Linux user database. I noted, however, that your smbpasswd -a step would have failed if that were not the case.

  2. I then note that if you haven't added your SMB user to the valid users = line in /etc/samba/smb.conf - and restarted or otherwise "triggered" a refresh - it won't (yet) know about your added SMB user. Alas, what you'd get in this case is a mount error(13): Permission denied, not the mount error(5): Input/output error you received.
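
For reference, a minimal sketch of the relevant smb.conf share section (the share name, path, and user here reflect a saio-style setup and are assumptions about your config, not its exact contents):

[proxyfs]
    path = /mnt/CommonVolume
    vfs objects = proxyfs
    valid users = vagrant

# after editing, trigger a refresh, e.g.:
systemctl restart smb        # or: smbcontrol all reload-config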

I just re-verified what I've noted usually happens when getting the error you received: I sent a SIGKILL to the proxyfsd process. Without something restarting it, the mount will fail in precisely the way you see... and I strongly suspect that is what happened. What's going on there is that Samba has been configured (for this volume anyway) to invoke the vfs plug-in included in a couple of submodules of ProxyFS (I believe you must have done a git submodule update --recursive at some point, so you should be fine). Anyway, the jrpcclient submodule is actually the one connecting to the proxyfsd process over a couple of TCP ports (the supplied .conf's have those as ports 12345 & 32345 I believe) on the PrivateIPAddr interface. Port 32345 is used for the read and write data path (only coming from the Samba instance adjacent on the node in your case)... while port 12345 is for all other RPCs... coming from both the adjacent Samba instance as well as all of the pfs_middleware instances in your collection of Swift Proxy nodes.

Anyway, I'm thinking the proxyfsd process has failed. It would be very interesting to see the log from that process (/var/log/proxyfsd/proxyfsd.log).
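
A quick sanity check on that theory (a sketch; the process name and ports are the ones discussed above, and pgrep/ss availability on your CentOS node is assumed):

pgrep -a proxyfsd                        # is the daemon running at all?
ss -tlnp | grep -E ':(12345|32345)'      # is it listening on the RPC and data-path ports?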

Next, it looks like you got a "500 Internal Error" when later attempting a Swift API method targeting this Swift Account. This makes total sense. What's going on here is there is a Header in the Swift Account that indicates this is a so-called "BiModal" Account... meaning that ProxyFS manages its content. In your proxy-server.conf's [filter:pfs] section, there are instructions for which proxyfsd instance (IPAddr & Port#) to contact (note the above comments about TCP Port 12345). The pfs_middleware attempts to contact this proxyfsd_host (this can actually be a list btw) asking ProxyFS for the IPAddr of the proxyfsd instance actually serving the Volume. In this case, it should be told "hey, yes, it's me - you've come to the right place". Anyway, it tries really hard to contact the proxyfsd process indicated...and if it fails, you'll see this "500 Internal Error".

So with that, I believe all signs point to a failure of your proxyfsd process. If you can post the log file I mentioned, perhaps we can get to the bottom of this quickly.

Now just one more thing to check out. As you noted, you needed to configure what we call the "NoAuth" Swift Proxy instance adjacent to your proxyfsd process. It should be serving localhost:8090 I believe. If you log onto the "node" running proxyfsd, you should be able to talk to the underlying Swift Account. Just make sure this is possible. Do a HEAD on the "BiModal" Swift Account... you should see the BiModal Header I mentioned earlier indicating it is, in fact, BiModal. If you do a GET on it, you should see at least one Container, likely named ".__checkpoint__". This is the Container that receives all of the file system's metadata for that Volume in the form of a "write log" consisting of LogSegment objects numbered in ascending order (starting with 0000000000000002 or so). There will be holes, as the same 64-bit "Nonce" number sequence is used for all sorts of things that must be named uniquely (e.g. Inode#'s, Container names, etc...).
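
Something along these lines should work (a sketch assuming the NoAuth proxy on localhost:8090 and an underlying Account named AUTH_test; substitute your own Account name):

# HEAD the Account via the NoAuth proxy; the BiModal Header should appear in the response
curl -I http://localhost:8090/v1/AUTH_test

# GET the Account; the .__checkpoint__ Container should be listed
curl http://localhost:8090/v1/AUTH_test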

If you don't see the BiModal Header on the Account... or the .checkpoint Container in the Account, then what it sounds like to me is the Volume needs to be formatted. If you look in the start_and_mount_pfs script, you should see a function named format_volume_if_necessary that invokes a tool called mkproxyfs. It takes just a few options... this script uses "-I" to say "format it if it's not already formatted". That's probably what you want... There are other options to say either "only format if the Account is empty" or "hey, empty it first before formatting" (obviously dangerous).
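
For the saio layout, those invocations look roughly like this (the .conf path is the saio/vagrant one and is an assumption about your environment; -F is assumed to take the same arguments as -I):

# "-I": format only if the Volume is not already formatted (non-destructive otherwise)
mkproxyfs -I CommonVolume /vagrant/src/github.com/swiftstack/ProxyFS/saio/proxyfs.conf SwiftClient.RetryLimit=1

# "-F": empty the Account first, then format (destructive!)
mkproxyfs -F CommonVolume /vagrant/src/github.com/swiftstack/ProxyFS/saio/proxyfs.conf SwiftClient.RetryLimit=1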

One note about formatting. ProxyFS requires a bit more guarantee than an Eventually Consistent storage system such as Swift provides. It cannot handle getting back "stale" information that an EC system may supply if, say, you did a 2nd PUT of an Object (a subsequent GET could return nothing, the first version, or the 2nd version). To avoid this, ProxyFS never PUTs the same Object (or Container) twice. That's why I mentioned the "Nonce" above. As such, the only way it could "know" that it had never used a given Nonce Value before is if the Account starts out completely empty. So just a heads up that you should either never reuse an Account... or empty it first (e.g. with mkproxyfs -F).

Hope all of this discussion gives you things to look at to get your system running again. I don't know which of these are going to help identify the issue... but hopefully at least one of them will.

And, again, welcome to ProxyFS!


jnamdar commented on June 16, 2024

Hello, thank you @edmc-ss for the thorough answer.

Here is the output of my proxyfsd.log:

time="2019-06-21T13:42:21.915145+02:00" level=warning msg="config variable 'TrackedLock.LockHoldTimeLImit' defaulting to '0s': [TrackedLock] missing" function=parseConfMap goroutine=8 package=trackedlock pid=4773
time="2019-06-21T13:42:21.915240+02:00" level=warning msg="config variable 'TrackedLock.LockCheckPeriod' defaulting to '0s': [TrackedLock] missing" function=parseConfMap goroutine=8 package=trackedlock pid=4773
time="2019-06-21T13:42:21.915302+02:00" level=info msg="trackedlock pkg: LockHoldTimeLimit 0 sec  LockCheckPeriod 0 sec" function=parseConfMap goroutine=8 package=trackedlock pid=4773
time="2019-06-21T13:42:21.915698+02:00" level=info msg="evtlog.Up(): event logging is false" function=Up goroutine=8 package=evtlog pid=4773
time="2019-06-21T13:42:21.917613+02:00" level=info msg="SwiftClient.RetryLimit 11, SwiftClient.RetryDelay 1.000 sec, SwiftClient.RetryExpBackoff 1.5" function=reloadConfig goroutine=8 package=swiftclient pid=4773
time="2019-06-21T13:42:21.917694+02:00" level=info msg="SwiftClient.RetryLimitObject 8, SwiftClient.RetryDelayObject 1.000 sec, SwiftClient.RetryExpBackoffObject 1.9" function=reloadConfig goroutine=8 package=swiftclient pid=4773
time="2019-06-21T13:42:21.917780+02:00" level=info msg="SwiftClient.ChecksumChunkedPutChunks disabled\n" function=reloadConfig goroutine=8 package=swiftclient pid=4773
time="2019-06-21T13:42:21.918737+02:00" level=info msg="Transitions Package Registration List: [logger trackedlock dlm evtlog stats swiftclient headhunter halter inode fs fuse jrpcfs statslogger liveness httpserver]" function=up gor
outine=8 package=transitions pid=4773
time="2019-06-21T13:42:21.920466+02:00" level=info msg="ChunkedFreeConnections: min=512 mean=512 max=512  NonChunkedFreeConnections: min=127 mean=127 max=127" function=logStats goroutine=18 package=statslogger pid=4773
time="2019-06-21T13:42:21.920542+02:00" level=info msg="Memory in Kibyte (total): Sys=68288 StackSys=352 MSpanSys=32 MCacheSys=16 BuckHashSys=3 GCSys=2182 OtherSys=518" function=logStats goroutine=18 package=statslogger pid=4773
time="2019-06-21T13:42:21.920605+02:00" level=info msg="Memory in Kibyte (total): HeapInuse=2552 HeapIdle=62632 HeapReleased=0 Cumulative TotalAlloc=1951" function=logStats goroutine=18 package=statslogger pid=4773
time="2019-06-21T13:42:21.920659+02:00" level=info msg="GC Stats (total): NumGC=0  NumForcedGC=0  NextGC=4369 KiB  PauseTotalMsec=0  GC_CPU=0.00%" function=logStats goroutine=18 package=statslogger pid=4773
time="2019-06-21T13:42:21.920711+02:00" level=info msg="Swift Client Ops (total): Account QueryOps=0 ModifyOps=0 Container QueryOps=0 ModifyOps=0 Object QueryOps=0 ModifyOps=0" function=logStats goroutine=18 package=statslogger pid=4773
time="2019-06-21T13:42:21.920784+02:00" level=info msg="Swift Client ChunkedPut Ops (total): FetchOps=0 ReadOps=0 SendOps=0 CloseOps=0" function=logStats goroutine=18 package=statslogger pid=4773
time="2019-06-21T13:42:21.963298+02:00" level=info msg="Inode cache discard ticker for 'volume: CommonVolume' is: 1s MaxBytesInodeCache: 10485760" function=startInodeCacheDiscard goroutine=8 package=inode pid=4773
time="2019-06-21T13:42:21.963768+02:00" level=info msg="Adopting ReadCache Parameters..." function=adoptVolumeGroupReadCacheParameters goroutine=8 package=inode pid=4773
time="2019-06-21T13:42:21.963852+02:00" level=info msg="...ReadCacheQuotaFraction(0.2) of memSize(0x000000016AF16000) totals 0x000000000742447A" function=adoptVolumeGroupReadCacheParameters goroutine=8 package=inode pid=4773
time="2019-06-21T13:42:21.963924+02:00" level=info msg="...0x00000074 cache lines (each of size 0x00100000) totalling 0x0000000007400000 for Volume Group CommonVolumeGroup" function=adoptVolumeGroupReadCacheParameters goroutine=8 package=inode pid=4773
time="2019-06-21T13:42:21.963990+02:00" level=info msg="Checkpoint per Flush for volume CommonVolume is true" function=ServeVolume goroutine=8 package=fs pid=4773
time="2019-06-21T13:42:21.965235+02:00" level=warning msg="Couldn't mount CommonVolume.FUSEMountPoint == CommonMountPoint" error="fusermount: exit status 1" function=performMount goroutine=8 package=fuse
time="2019-06-21T13:42:21.970704+02:00" level=warning msg="config variable 'TrackedLock.LockHoldTimeLImit' defaulting to '0s': [TrackedLock] missing" function=parseConfMap goroutine=8 package=trackedlock pid=4773
time="2019-06-21T13:42:21.970803+02:00" level=warning msg="config variable 'TrackedLock.LockCheckPeriod' defaulting to '0s': [TrackedLock] missing" function=parseConfMap goroutine=8 package=trackedlock pid=4773
time="2019-06-21T13:42:21.971845+02:00" level=info msg="trackedlock pkg: LockHoldTimeLimit 0 sec  LockCheckPeriod 0 sec" function=parseConfMap goroutine=8 package=trackedlock pid=4773
time="2019-06-21T13:42:21.971963+02:00" level=info msg="evtlog.Signaled(): event logging is now false (was false)" function=SignaledFinish goroutine=8 package=evtlog pid=4773
time="2019-06-21T13:42:21.972334+02:00" level=info msg="transitions.Up() returning successfully" function=func1 goroutine=8 package=transitions pid=4773
time="2019-06-21T13:42:21.972410+02:00" level=info msg="proxyfsd is starting up (version 1.10.1.0.2-5-g16af550) (PID 4773); invoked as '/vagrant/bin/proxyfsd' '/vagrant/src/github.com/swiftstack/ProxyFS/saio/proxyfs.conf'" function=Daemon goroutine=8 package=proxyfsd pid=4773
time="2019-06-21T13:42:22.177267+02:00" level=warning msg="config variable 'TrackedLock.LockHoldTimeLImit' defaulting to '0s': [TrackedLock] missing" function=parseConfMap goroutine=1 package=trackedlock pid=4868
time="2019-06-21T13:42:22.177397+02:00" level=warning msg="config variable 'TrackedLock.LockCheckPeriod' defaulting to '0s': [TrackedLock] missing" function=parseConfMap goroutine=1 package=trackedlock pid=4868
time="2019-06-21T13:42:22.177453+02:00" level=info msg="trackedlock pkg: LockHoldTimeLimit 0 sec  LockCheckPeriod 0 sec" function=parseConfMap goroutine=1 package=trackedlock pid=4868
time="2019-06-21T13:42:22.177890+02:00" level=info msg="evtlog.Up(): event logging is false" function=Up goroutine=1 package=evtlog pid=4868
time="2019-06-21T13:42:22.179261+02:00" level=info msg="SwiftClient.RetryLimit 1, SwiftClient.RetryDelay 1.000 sec, SwiftClient.RetryExpBackoff 1.5" function=reloadConfig goroutine=1 package=swiftclient pid=4868
time="2019-06-21T13:42:22.179325+02:00" level=info msg="SwiftClient.RetryLimitObject 8, SwiftClient.RetryDelayObject 1.000 sec, SwiftClient.RetryExpBackoffObject 1.9" function=reloadConfig goroutine=1 package=swiftclient pid=4868
time="2019-06-21T13:42:22.179377+02:00" level=info msg="SwiftClient.ChecksumChunkedPutChunks disabled\n" function=reloadConfig goroutine=1 package=swiftclient pid=4868
time="2019-06-21T13:42:22.179498+02:00" level=info msg="Transitions Package Registration List: [logger trackedlock evtlog stats swiftclient headhunter]" function=up goroutine=1 package=transitions pid=4868
time="2019-06-21T13:42:22.180059+02:00" level=warning msg="config variable 'TrackedLock.LockHoldTimeLImit' defaulting to '0s': [TrackedLock] missing" function=parseConfMap goroutine=1 package=trackedlock pid=4868
time="2019-06-21T13:42:22.180117+02:00" level=warning msg="config variable 'TrackedLock.LockCheckPeriod' defaulting to '0s': [TrackedLock] missing" function=parseConfMap goroutine=1 package=trackedlock pid=4868
time="2019-06-21T13:42:22.180163+02:00" level=info msg="trackedlock pkg: LockHoldTimeLimit 0 sec  LockCheckPeriod 0 sec" function=parseConfMap goroutine=1 package=trackedlock pid=4868
time="2019-06-21T13:42:22.180212+02:00" level=info msg="evtlog.Signaled(): event logging is now false (was false)" function=SignaledFinish goroutine=1 package=evtlog pid=4868
time="2019-06-21T13:42:22.180258+02:00" level=info msg="transitions.Up() returning successfully" function=func1 goroutine=1 package=transitions pid=4868
time="2019-06-21T13:42:22.180296+02:00" level=info msg="mkproxyfs is starting up (version 1.10.1.0.2-5-g16af550) (PID 4868); invoked as '/vagrant/bin/mkproxyfs' '-I' 'CommonVolume' '/vagrant/src/github.com/swiftstack/ProxyFS/saio/proxyfs.conf' 'SwiftClient.RetryLimit=1'" function=Format goroutine=1 package=mkproxyfs pid=4868
time="2019-06-21T13:42:22.198627+02:00" level=info msg="transitions.Down() called" function=down goroutine=1 package=transitions pid=4868
time="2019-06-21T13:42:22.198869+02:00" level=info msg="SwiftClient.RetryLimit 1, SwiftClient.RetryDelay 1.000 sec, SwiftClient.RetryExpBackoff 1.5" function=reloadConfig goroutine=1 package=swiftclient pid=4868
time="2019-06-21T13:42:22.199055+02:00" level=info msg="SwiftClient.RetryLimitObject 8, SwiftClient.RetryDelayObject 1.000 sec, SwiftClient.RetryExpBackoffObject 1.9" function=reloadConfig goroutine=1 package=swiftclient pid=4868
time="2019-06-21T13:42:22.199246+02:00" level=info msg="SwiftClient.ChecksumChunkedPutChunks disabled\n" function=reloadConfig goroutine=1 package=swiftclient pid=4868
time="2019-06-21T13:42:22.200547+02:00" level=info msg="tracklock.Down() called" function=Down goroutine=1 package=trackedlock pid=4868

This line seems relevant:

time="2019-06-21T13:42:21.965235+02:00" level=warning msg="Couldn't mount CommonVolume.FUSEMountPoint == CommonMountPoint" error="fusermount: exit status 1" function=performMount goroutine=8 package=fuse

Edit: I take that back, this error doesn't appear anymore (I must've messed with something in the configuration).

I formatted the volume by calling mkproxyfs -F like you said; it did actually flush it (the mounted volume became empty, whereas it had a folder with a file in it before). I then recreated this folder and copied a file into it.

I do have a .__checkpoint__ container in the account; it looks like this with a curl (controller is the machine hosting my proxy):

[root@controller swift]# curl http://controller:8090/v1/AUTH_test/.__checkpoint__/
0000000000000002
0000000000000067
000000000000007A

Regards

Edit2: From the error

[2019/06/18 13:41:48.831109,  2] ../lib/util/modules.c:196(do_smb_load_module)
  Module 'proxyfs' loaded
[2019/06/18 13:41:48.834266,  1] vfs_proxyfs.c:230(vfs_proxyfs_connect)
  proxyfs_mount_failed: Volume : CommonVolume Connection_path /mnt/CommonVolume Service proxyfs user vagrant errno 19
[2019/06/18 13:41:48.834293,  1] ../source3/smbd/service.c:636(make_connection_snum)
  make_connection_snum: SMB_VFS_CONNECT for service 'proxyfs' at '/mnt/CommonVolume' failed: No such device

, I gather that the proxyfs_mount function from jrpcclient failed when trying to mount the Samba share. How would I go about troubleshooting it?

I also dug further into the error I get when executing [root@controller adminuser]# swift -A http://controller:8080/auth/v1.0 -U test:tester -K testing stat --debug. Port 12345 was blocked by the firewall on the VM running the proxyfsd service, which would explain the "unreachable host" error.
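
For reference, opening those ports looked something like this (assuming firewalld on the ProxyFS VM; 32345 added for the Samba data path as well):

firewall-cmd --permanent --add-port=12345/tcp   # JSON-RPC port used by pfs_middleware and vfs/jrpcclient
firewall-cmd --permanent --add-port=32345/tcp   # read/write data path port used by the adjacent Samba
firewall-cmd --reload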

With that fixed, though, when trying to list the containers in the test account, it now fails with the following error stack:

Jun 21 19:08:31 controller proxy-server: - - 21/Jun/2019/17/08/31 HEAD /auth/v1.0 HTTP/1.0 400 - Swift - - - - tx16277828dba2402fadaae-005d0d0f0f - 0.0004 RL - 1561136911.990171909 1561136911.990561962 -
Jun 21 19:08:31 controller proxy-server: 192.168.71.37 192.168.71.37 21/Jun/2019/17/08/31 GET /auth/v1.0 HTTP/1.0 200 - python-swiftclient-3.6.0 - - - - tx16277828dba2402fadaae-005d0d0f0f - 0.0038 - - 1561136911.988802910 1561136911.992594004 -
Jun 21 19:08:31 controller proxy-server: STDERR: 192.168.71.37 - - [21/Jun/2019 17:08:31] "GET /auth/v1.0 HTTP/1.1" 200 417 0.004867 (txn: tx16277828dba2402fadaae-005d0d0f0f)
Jun 21 19:08:31 controller proxy-server: STDERR: (19870) accepted ('192.168.71.37', 43720)
Jun 21 19:08:32 controller proxy-server: 192.168.71.37 192.168.71.37 21/Jun/2019/17/08/31 GET /v1/AUTH_test%3Fformat%3Djson HTTP/1.0 500 - python-swiftclient-3.6.0 AUTH_tka5e664b10... - - - tx9f625f0b9b6a4c65962c8-005d0d0f0f - 0.0028 - - 1561136911.997173071 1561136911.999938011 -
Jun 21 19:08:32 controller proxy-server: Erreur : une erreur s'est produite: Connexion refusée (txn: tx9f625f0b9b6a4c65962c8-005d0d0f0f)

, which basically means "connection refused". I'm not sure where the connection attempt is being made, though...


jnamdar commented on June 16, 2024

I'm trying to understand how ProxyFS stores objects. Reading this page, I get that when I create an object using Filesystem access (via the Samba share for instance), ProxyFS uses exclusively the NoAuth pipeline. This pipeline includes the meta middleware in order to update the account's metadata (in the .__checkpoint__ container I'm guessing?).

The other pipeline (the usual one) seems to be only used when trying to access objects the usual way, and the pfs middleware allows us to request objects in a ProxyFS-managed account.

If I'm right about these pipelines' roles, where exactly would authentication play a part when writing/reading object via Filesystem Access? I don't see how adding Keystone authentication to the usual pipeline would help since it's not used by Filesystem Access.

Do you think steps such as asking for a Keystone token, and adding it to every request's header would have to be directly implemented in the SMB VFS/jrpcclient/jrpcfs/swiftclient layers?
Ideally the user would provide credentials such as projectname:username/password when mounting the samba share, and those credentials would be sent through every layer to ask for the token.

Looking forward to understanding ProxyFS better 😄


edmc-ss commented on June 16, 2024

Hello again jnamdar... and I must applaud all the excellent questions/topics you are raising.

Before going further, I want to make sure you are aware of the Slack "group" we've got that might make interacting with other ProxyFS folks more responsive. The Slack group is here and you can invite yourself here.

First off, I want to address the only "interesting" line I saw in the proxyfsd.log file you included:
time="2019-06-21T13:42:21.965235+02:00" level=warning msg="Couldn't mount CommonVolume.FUSEMountPoint == CommonMountPoint" error="fusermount: exit status 1" function=performMount goroutine=8 package=fuse

What's going on here is that ProxyFS attempts to provide a FUSE mount point for the file system at the path your .conf file specified. As is the case with all mounts, the mount point must be a directory. Indeed, on many systems, the directory must be empty. If either the directory does not exist... or is not empty (on systems that require it to be so), you'll get this error message... It's not fatal (hence, the "level=warning"), but may not be what you desire.

A little background on how ProxyFS presents the file system via SMB vs NFS would be helpful here. Both smbd and nfsd are able to present a local file system to the network via those two protocols. For SMB, however, you probably noticed the "vfs" library (indeed, there are two: vfs, which is Samba-specific, and jrpcclient, which is generic and used by vfs). Using vfs (& jrpcclient), Samba is able to communicate directly (via a couple of TCP sockets per client) with the proxyfsd process. So, in the case of SMB/Samba, we are actually not presenting a file system that is locally available.

NFS is a different beast. At one time, the ProxyFS plan was to leverage nfs-ganesha, a tool that does for NFS what Samba does for SMB. As it turns out, nfs-ganesha has an "FSAL" mechanism that enables one to "plug in" a file system just like Samba's "VFS" mechanism. Hence, there was an intention to code up an "fsal" library to plug into nfs-ganesha that would leverage that same jrpcclient library to communicate with proxyfsd. Alas, other priorities arose and the team never implemented the "fsal" library.

As a result, in order to present a file system via NFS, ProxyFS needed to expose the file system "locally". It does so via FUSE. As you can imagine, this whole SMB and NFS protocol thing is quite outside the purview of ProxyFS... so ProxyFS didn't want to "insist" on a FUSE exposure of the file system. As such, the FUSE mount step is allowed to fail since it isn't required (particularly in the SMB exposure case). It's very convenient, though, so SwiftStack's Controller always ensures this directory exists (and is empty) so that the FUSE mount actually works. In any event, /etc/exports is populated with a line (or more) to expose the FUSE mount point via NFS.
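
As an illustration, such an /etc/exports line might look like the following (the mount point path, client subnet, and export options here are assumptions, not the exact saio contents):

# expose the FUSE mount point read-write to the client subnet
/CommonMountPoint 192.168.22.0/24(rw,sync,fsid=1000,no_subtree_check)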

I'm speculating that your original attempt had skipped the step of creating the directory to which the FUSE mount point was attempted.

Now, here's something I do whenever I successfully mount... I "stat" the root of the mount. What I should see is that the Inode Number of the "root directory" of my mount is "1". If it's not "1", then you've probably not successfully mounted... or NFS is just exporting the directory and ProxyFS isn't presenting it. In any event, it's a handy quick check to make sure things are active.
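
For example (the FUSE mount point path follows the saio-style naming and is an assumption, as is the client-side mount path):

# on the node running proxyfsd, against the FUSE mount point
stat -c 'root inode = %i' /CommonMountPoint     # expect: root inode = 1

# on an NFS/SMB client, against the client-side mount
stat -c 'root inode = %i' /mnt/CommonVolume     # expect: root inode = 1 if ProxyFS is really behind it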

I'm gonna close this response as you've posted a follow-up... so I'll respond to that one in a separate post.


edmc-ss commented on June 16, 2024

Hello again jnamdar,

Your speculation about the use of the "NoAuth" Swift Proxy versus the "normal" Swift Proxy is totally correct! Indeed, your question about "who provides authorization" is very key. As you can imagine, the world of SMB and NFS (distinctly different between them as well) is entirely different than for Swift & S3 API (also distinctly different between them as luck would have it). To put it vaguely, it is sometimes the protocol server and sometimes the file system and sometimes the Swift or S3 API pipelines that provide authorization. Let's talk about each:

SMB:

  • file/directory-granular access control via ACLs/ACEs
  • Samba stores ACLs in "extended attributes" of the file/directory
  • Samba enforces ACLs... so access to the file system is as an "all seeing user"
  • For so-called "SMB users" (i.e. where you are using smbpasswd), SMB users are mapped to local Linux users (uid/gid) and Samba honors the owner:group:other "mode" of files/directories

NFS:

  • The client machine is trusted to authenticate a user... providing the uid:gid:other-gids(up to 16) via the NFS protocol
  • NFSd accesses the FUSE mount point with the euid:egid of the client-provided "credentials"
  • If configured, NFSd will augment the "other-gids" with a full set of gids to which the uid is a member

Swift API:

  • An element of the "normal" Swift Proxy pipeline enforces that the provided AuthToken enables the client to access either the selected Account OR the selected Container.
  • Once past this stage of the pipeline, the file system is accessed via an "all seeing user".

S3 API:

  • Provided via the "s3api" middleware in the "normal" Swift Proxy pipeline
  • The S3 API Key is used to allow or deny access to a "bucket" (which is a Container in Swift parlance)
  • Once past this stage of the pipeline, the file system is accessed via an "all seeing user".
  • Although S3 API specification allows it, Swift's s3api middleware provides no per-object granular access control.

As you can well imagine, the impedance mismatch between access control among these four protocols is tremendous. What we generally tell our customers is that they should apply per-protocol access control and not rely upon any reasonable "mapping" between them. In other words, don't expect a "chmod" to alter the access control for users coming in via the Swift or S3 API.

Hope the above makes some sense. I'll stop this post here to respond to your remaining topics in a subsequent post.


edmc-ss commented on June 16, 2024

Keystone Auth would be an awesome addition... though I don't understand how it might apply to NFS. As mentioned previously, the authorization strategy of NFSv3 is to entirely trust the "client". So let me just "punt" on that one.

As for SMB, the protocol is very rich as it turns out. With SMB, it's the Server (i.e. Samba in this case) that provides the necessary Auth. SMB supports something called SPNEGO... the acronym stands for "Simple and Protected GSSAPI Negotiation Mechanism". SPNEGO supports all kinds of "plug in" Auth mechanisms spanning from the very old Lan Manager ("LM") all the way up to Kerberos (as implemented by Active Directory btw). While this is way out of my skill set, it should be possible to provide an Auth plug-in for Keystone... and I'd be surprised if somebody hasn't done that already. To my understanding, Keystone is well within the capabilities of the SPNEGO Auth processing model. I would solicit help from the Samba folks on that one...


edmc-ss commented on June 16, 2024

Re the error you are getting from the "normal" Swift Proxy, I'm just curious what your "pfs" middleware filter section looks like. It should be something like this:

[filter:pfs]
use = egg:pfs_middleware#pfs
proxyfsd_host = 127.0.0.1
proxyfsd_port = 12345
bypass_mode = read-write

Notice that the port# is that same "12345" that is used by Samba's vfs/jrpcclient library. It uses the very same JSON-RPC mechanism to communicate with the ProxyFS instance (proxyfsd). The "host" should be any one of the PrivateIPAddrs that any of your ProxyFS instances are listening on. Indeed, it can be a list :-). What happens is that the pfs_middleware determines (from the Account's Header check) that the Account is being managed by ProxyFS and asks proxyfsd_host:proxyfsd_port which ProxyFS instance is actually managing the Account/Volume/FileSystem at this point in time. Kind of like a redirect.

What I suspect you may be seeing is that your "normal" Swift Proxy cannot communicate with the desired ProxyFS instance... not sure though.
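
A quick way to test exactly that, from the node running the "normal" Swift Proxy (the host/port here are just the values from the [filter:pfs] example above; substitute your PrivateIPAddr):

# succeeds only if a TCP connection to proxyfsd's JSON-RPC port can be established
timeout 3 bash -c '</dev/tcp/127.0.0.1/12345' && echo "proxyfsd reachable" || echo "connection refused/timed out"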


edmc-ss commented on June 16, 2024

It's a bit unfortunate that your starting point is ProxyFS/saio (or the "runway" equivalent), as that setup is trivially straightforward. I'm suspecting that your cluster's topology is quite a bit more complicated than just "localhost" :-). Perhaps you could focus on the PrivateIPAddr discussion above and see about making connections.


jnamdar commented on June 16, 2024

I agree; the saio environment seems especially suited to test/development purposes, but it's the only installation method I found for ProxyFS. Is there any way I could check out another installation process that does not put everything on a single machine?

Regards


edmc-ss commented on June 16, 2024

I've actually made a start on a branch called "ArmbianExample" that will ultimately network 3 little ODroid HC1's into a small Swift cluster... with ProxyFS running on it in an HA arrangement.

Beyond that, I guess I'd be remiss if I didn't plug SwiftStack's actual product. You can try that out and see what it does as well... It certainly "wizards" a lot of the setup we've been talking about. Happy to help you come up to speed on that product :-).

