
proxyfs's People

Contributors

alistairncoles avatar balajisrao avatar blakem avatar brandon-m-wang avatar bschatz-swift avatar charmster avatar clayg avatar creachadair avatar dbishop avatar edmc-ss avatar ehudkaldor avatar foxlisk avatar gerardgine avatar jarnold avatar notmyname avatar orion avatar smerritt avatar thiagodasilva avatar tipabu avatar


proxyfs's Issues

Expose JSON-RPC interface to proxyfsd via pfs_middleware

This is part of the general move to expose more of ProxyFS directly to clients; see also: #229, #230, #241

Introduce a new HTTP method (PROXYFS?) that can be used on the volume/account to allow clients to make JsonRpcClient requests directly. The method will only be valid at the account level (and only for bimodal accounts), so we can use that to look up which proxyfsd instance to talk to. Require JSON request bodies, probably with some limit on size, and send them straight to proxyfsd. Emit the JSON response verbatim back to the client. If the call times out, respond 502 Bad Gateway.

Similar to the X-Bypass-ProxyFS header, this should be limited to Swift owners; once we have a bypass_mode option, it should (as a first pass) require read-write.

Bonus points:

  • parse requests to verify format before sending to proxyfs
  • whitelist of RPC methods to allow when bypass_mode = read-only
  • allow client to specify an RPC timeout (keeping some reasonable default if not specified)
  • consider allowing multiple JSON objects per request, instead of requiring pipelined HTTP requests
  • consider sniffing for errors and responding 400
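The proposed flow can be sketched as a small handler; everything here is illustrative (the body-size cap, the `rpc_call` callable standing in for the JSON-RPC client, and the status-code mapping follow the description above, not actual pfs_middleware code):

```python
import json

MAX_RPC_BODY_BYTES = 65536  # assumed request-size cap; not a real ProxyFS constant


def handle_proxyfs_method(body, rpc_call):
    """Validate a JSON-RPC request body and forward it to proxyfsd.

    `rpc_call` is a hypothetical callable standing in for the JSON-RPC
    client. Returns an (http_status, response_body) pair, mirroring the
    behavior described in this issue: 400 for malformed JSON, 502 on
    RPC timeout, otherwise the proxyfsd response verbatim.
    """
    if len(body) > MAX_RPC_BODY_BYTES:
        return 413, b'request body too large'
    try:
        request = json.loads(body)
    except ValueError:
        return 400, b'malformed JSON'
    try:
        response = rpc_call(request)  # may raise on timeout
    except TimeoutError:
        return 502, b'proxyfsd RPC timed out'
    return 200, json.dumps(response).encode('utf-8')
```

The "whitelist of RPC methods" bonus item would slot in as one more check between the JSON parse and the `rpc_call` dispatch.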

How to start using ProxyFS with our existing SAIO?

hello everyone,
I would like to use ProxyFS with my existing SAIO, but unfortunately I didn't find any useful documentation describing how ProxyFS works or what the architecture looks like. I don't know how to start. I'd appreciate any suggestions.

How to install or compile ProxyFS on Ubuntu

hello members,

I am thinking about installing or compiling ProxyFS on Ubuntu 18.04 and then connecting it to the existing OpenStack Swift on my server, but this project doesn't have a user manual for that.
Do you have any suggestions for me?

Support delimiter requests for container listings

Currently, trying to list a ProxyFS "container" through the pfs middleware ignores delimiter query params, causing all objects in the container to be listed.

This would probably be fairly cheap to do for delimiter=/, but the more general case still requires that we walk the whole tree. Maybe it'd be enough for 90% of use cases to support / and return 400 for any other delimiter?

Fixing this should fix several Swift and Swift3 functional test failures (as well as make large volumes manageable for Swift API users).
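The cheap delimiter=/ case amounts to collapsing a sorted flat listing into Swift-style "subdir" entries. A minimal sketch (function name and signature are illustrative; real Swift listings also honor markers and limits):

```python
def collapse_listing(names, delimiter='/', prefix=''):
    """Collapse a sorted flat object listing the way Swift's delimiter
    query param does: names containing the delimiter past the prefix
    are rolled up into a single trailing-delimiter "subdir" entry."""
    seen = []
    for name in names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        idx = rest.find(delimiter)
        entry = name if idx < 0 else prefix + rest[:idx + len(delimiter)]
        if not seen or seen[-1] != entry:  # input is sorted, so dedupe adjacently
            seen.append(entry)
    return seen
```

With a filesystem backing the listing, `/` is cheap precisely because each "subdir" entry corresponds to a real directory that never needs to be descended into.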

Backend Swift IP is not configurable

SwiftStack ProxyFS 1.5.3 cannot be configured to use a Swift IP address other than 127.0.0.1, because the address is hard-coded in the Go swiftclient.

For performance, this may be by design, since pointing at a proxy server other than localhost could add latency from going over the IP network. However, the hard-coded 127.0.0.1 is problematic in my case because I'd like my proxy-server to bind to something other than 0.0.0.0 (i.e. set Swift's bind_ip to a specific out-bound network IP). Then proxyfsd cannot talk to the proxy-server.

I thought I might create a new proxy-server conf bound to 127.0.0.1, but running a duplicate proxy-server just for a different bind_ip adds resource overhead.

I tried changing the code to make the backend Swift IP configurable [1], and it appears to work in my local environment. If it's suitable for the ProxyFS community, I'll push a pull request to support this upstream.

Any ideas?

1: bloodeagle40234@3fe824b
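Conceptually, the change would expose the backend endpoint as a config option rather than a constant. A sketch of what that might look like; the key names here are illustrative, not the actual ProxyFS config keys:

```ini
[SwiftClient]
# Hypothetical option names -- check the real ProxyFS .conf format.
NoAuthIPAddr:  192.168.1.20   # backend Swift proxy IP (currently hard-coded 127.0.0.1)
NoAuthTCPPort: 8090           # noauth proxy-server port
```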

Support more than '/' as a delimiter

...which also means adding delimiter support when listing an account.

Supporting just / was fairly cheap since we're working with a filesystem, but there are Swift tests that expect more than that. We should probably do it, even though it involves a full filesystem walk.

regression_test.py is misnamed

I know this has been on the TBD list for a while, but perhaps now is a good time to actually do it, as people are starting to take a look at the project. A better presentation would probably be a Makefile with one or more supporting scripts written in Python.

certificate for proxyfs.org is expired

I cannot visit the webpage because of this:

$ openssl s_client -connect proxyfs.org:443 -servername proxyfs.org                                                                                          
CONNECTED(00000003)
depth=2 O = Digital Signature Trust Co., CN = DST Root CA X3
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
verify return:1
depth=0 CN = proxyfs.org
verify error:num=10:certificate has expired
notAfter=Oct 29 23:01:40 2018 GMT
verify return:1
depth=0 CN = proxyfs.org
notAfter=Oct 29 23:01:40 2018 GMT
verify return:1
---
Certificate chain
 0 s:CN = proxyfs.org
   i:C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
 1 s:C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
   i:O = Digital Signature Trust Co., CN = DST Root CA X3
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIGLzCCBRegAwIBAgISA1QiASGGxkfdbhnPH3H/i9OBMA0GCSqGSIb3DQEBCwUA
MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD
ExpMZXQncyBFbmNyeXB0IEF1dGhvcml0eSBYMzAeFw0xODA3MzEyMzAxNDBaFw0x
ODEwMjkyMzAxNDBaMBYxFDASBgNVBAMTC3Byb3h5ZnMub3JnMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwdjHEJ4IwZPR7NftZtQoksruw7Vj5NB1AFIg
mQH93Pi8HY/NINrZNF3aDgtVC0oKA0MaX54aY3eM9ibXz3D0rOWSgoxJ2XFptxGe
s7VjujsxYvocVzWUQ7yjOum9dQtxZjLQi58BKxs1Ca5J8dbRX9x1O18KLgh5mqxE
RjLafHiFTYR9RAn3rwiXhMRhhLFFY2oOA4aPwyP0DNrDI3gM74qpRFUT21EPiJda
US8evu+7/oyapWURbU95/uqyyCrK8fW4z7RlfF9NABrjnukGFLQ7O7X1UpvjBOIJ
1+6KDy5ylbfkPSNJTp+jnG2V5/MLsl4Ccpx+0mZyzNroDRu28wIDAQABo4IDQTCC
Az0wDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUFBwMBBggrBgEFBQcD
AjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBQiW5v5FdzFYFBZ8bOmg2LQsAzxeTAf
BgNVHSMEGDAWgBSoSmpjBH3duubRObemRWXv86jsoTBvBggrBgEFBQcBAQRjMGEw
LgYIKwYBBQUHMAGGImh0dHA6Ly9vY3NwLmludC14My5sZXRzZW5jcnlwdC5vcmcw
LwYIKwYBBQUHMAKGI2h0dHA6Ly9jZXJ0LmludC14My5sZXRzZW5jcnlwdC5vcmcv
MEUGA1UdEQQ+MDyCC3Byb3h5ZnMuY29tggtwcm94eWZzLm9yZ4IPd3d3LnByb3h5
ZnMuY29tgg93d3cucHJveHlmcy5vcmcwgf4GA1UdIASB9jCB8zAIBgZngQwBAgEw
geYGCysGAQQBgt8TAQEBMIHWMCYGCCsGAQUFBwIBFhpodHRwOi8vY3BzLmxldHNl
bmNyeXB0Lm9yZzCBqwYIKwYBBQUHAgIwgZ4MgZtUaGlzIENlcnRpZmljYXRlIG1h
eSBvbmx5IGJlIHJlbGllZCB1cG9uIGJ5IFJlbHlpbmcgUGFydGllcyBhbmQgb25s
eSBpbiBhY2NvcmRhbmNlIHdpdGggdGhlIENlcnRpZmljYXRlIFBvbGljeSBmb3Vu
ZCBhdCBodHRwczovL2xldHNlbmNyeXB0Lm9yZy9yZXBvc2l0b3J5LzCCAQMGCisG
AQQB1nkCBAIEgfQEgfEA7wB1ANt0r+7LKeyx/so+cW0s5bmquzb3hHGDx12dTze2
H79kAAABZPLKSV4AAAQDAEYwRAIgFRpJrpkxFZgIU+qdG9W8WOFq5LO4J3x0KzFa
/3yTxU8CIESLhZ4lAxWuQjpc68iNcWYK2ECC6izwT7JRd3Y+NEplAHYApFASaQVa
FVReYhGrN7wQP2KuVXakXksXFEU+GyIQaiUAAAFk8spJRwAABAMARzBFAiEAwFPH
gNF5apiMJxafuYZAU3DZzZjyIMTffCM8+H0t5U0CIHtuV66UFVsjT6IVnQe1Z4Gu
k4rhyl9lAtn7Ki5DC/iXMA0GCSqGSIb3DQEBCwUAA4IBAQAVxKj/I5aI6Lahe2Xe
1wJo2U9+za8rgruQLYIMeK4YqpPOkk7DsimY7njGGmiInfwt3O9/lNdfQmtTahCb
hyMe9d1/LQ+xI9pkN8nLm76zRhesRhMXUaFljb5fcUE4774tPGoW2QiLqJtG0M8s
ofw0v0vksN5xZd7WG9491W8JO6/C4rn7N8RjtjFm3zAlkcnf4pk1n7K4S7LFTQPa
r/qj/O65/ZAbMkp45sJ7+40M6jOfDXI35k0r6RlEaL5cWLYI+N9zz1R2+/L/kmn2
c78lfQXc2Qn2AubcMhCrzxCVaDfSCiK+jkYcoktWhnxNNMNQ0Bmhc3GlzO03pnPV
sKqJ
-----END CERTIFICATE-----
subject=CN = proxyfs.org

issuer=C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3

---
No client certificate CA names sent
Peer signing digest: SHA512
Peer signature type: RSA
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 3450 bytes and written 439 bytes
Verification error: certificate has expired
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: 8B3ED6EF49D9FBEBF591F2245CFB68C7C0D1A86D542BCD5204605ACE9E36F50B
    Session-ID-ctx:
    Master-Key: 7209069D975ACBED587A6018914EB1943B138DC8E21D52B0BF9ADDCAD76BD50430822A1293293F689B9971FCA8657B8D
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 86400 (seconds)
    TLS session ticket:
    0000 - 8e 93 45 58 77 65 4c 87-68 39 83 77 59 2e cc 7c   ..EXweL.h9.wY..|
    0010 - 7b 05 d0 67 99 3e f4 06-30 61 b3 d9 af 21 d8 7f   {..g.>..0a...!..
    0020 - 8c 3e 0d e3 33 42 bf 7c-44 5e 31 c6 44 9f 29 3d   .>..3B.|D^1.D.)=
    0030 - 9d 30 5a bf 87 5e c6 8f-50 80 b0 45 db fb 97 84   .0Z..^..P..E....
    0040 - 60 44 a7 6e 88 28 d7 20-d6 b2 ec e5 a0 ce 62 49   `D.n.(. ......bI
    0050 - ff 80 99 b6 a5 04 70 dd-2c 4b 58 33 7f 5d 52 fe   ......p.,KX3.]R.
    0060 - c7 b6 60 11 5c 9c 5e 38-51 c0 94 65 a4 67 3c c2   ..`.\.^8Q..e.g<.
    0070 - 0b 63 37 49 cc f7 84 11-28 06 00 17 88 8f 65 29   .c7I....(.....e)
    0080 - c6 2d 9b 52 a4 cc 69 b4-ef fd 8c 71 96 1d ef 66   .-.R..i....q...f
    0090 - e7 4e de 12 eb 61 2b a2-d8 40 75 01 f7 75 a6 97   [email protected]..
    00a0 - 6f 7b d6 1c 2a 4c e3 c8-94 c8 d0 cb 99 6c c3 b0   o{..*L.......l..
    00b0 - c5 e6 65 a1 cc 42 1b f4-f6 f2 66 e0 3e 85 bb 94   ..e..B....f.>...

    Start Time: 1546601504
    Timeout   : 7200 (sec)
    Verify return code: 10 (certificate has expired)
    Extended master secret: no
---
DONE

stringer may be missing in non-swift-runway environments

When I was running python regression_test.py according to README.md, the command failed with lines like the following.

(NOTE: this is not the full output, just some of the failure lines)
blunder/api.go:58: running "stringer": exec: "stringer": executable file not found in $PATH
inode/validate_test.go:79:175: blunder.FsError(blunder.Errno(validationErr)).String undefined (type blunder.FsError has no field or method String)
FAIL github.com/swiftstack/ProxyFS/inode [build failed]
go install succeeded!
go generate failed!
go test failed!
go vet succeeded!

It looks like the stringer tool wasn't installed. After I installed stringer via go get golang.org/x/tools/cmd/stringer, the failures were resolved and the run succeeded.

Connect ProxyFS to existing OpenStack Swift/Keystone installation

Hi,

Firstly, thanks for this application and for giving us the opportunity to use it.

To try it out, I deployed ProxyFS on a CentOS 7.4 VM using the Vagrantfile in the saio subfolder. By the way, the vagrant box referenced in this file seems to be down; I used this box (config.vm.box = "CentosBox/Centos-7-v7.4-Minimal-CLI") with the virtualbox provider to continue.

After vagrant_provision.sh finished running, I compiled the ProxyFS project using make: everything went well. I then used the start_and_mount_pfs script to mount the NFS and SMB shares.
I can create folders/files in both shares without issue, and view everything with the swift CLI:

[vagrant@localhost ~]$ ll /mnt/smb_proxyfs_mount/
total 0
drwxr-xr-x. 2 vagrant vagrant 0 Jun 17 16:25 test
drwxr-xr-x. 2 vagrant vagrant 0 Jun 14 15:48 test_container
drwxr-xr-x. 2 vagrant vagrant 0 Jun 14 15:56 test_container2
[vagrant@localhost ~]$ ll /mnt/smb_proxyfs_mount/test_container
total 0
-rwxr-xr-x. 1 vagrant vagrant 8 Jun 14 15:48 test_file.txt
[vagrant@localhost ~]$ cat /mnt/smb_proxyfs_mount/test_container/test_file.txt
abcdefg
[vagrant@localhost ~]$ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing list
test
test_container
test_container2
[vagrant@localhost ~]$ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing list test_container
test_file.txt
[vagrant@localhost ~]$ curl -i http://127.0.0.1:8080/v1/AUTH_test/test_container/test_file.txt -X GET -H "X-Auth-Token: AUTH_tka85032f655f249cca7d43b5c71184858"
HTTP/1.1 200 OK
Content-Length: 8
Accept-Ranges: bytes
Last-Modified: Fri, 14 Jun 2019 13:48:25 GMT
Etag: "pfsv2/AUTH_test/00000311/00000001-32"
X-Timestamp: 1560520104.65309
Content-Type: text/plain
X-Trans-Id: txb4100d18d9de43d094d35-005d08d72a
X-Openstack-Request-Id: txb4100d18d9de43d094d35-005d08d72a
Date: Tue, 18 Jun 2019 12:20:58 GMT

abcdefg

I've been looking for a way to use ProxyFS in my existing OpenStack Swift/Keystone installation:

  • Swift version rocky (installed with this link)
  • Keystone v3 version rocky (installed using this link)

So far, I have been able to deploy a CentOS 7.4 VM using the Vagrantfile in the saio subfolder. I removed everything regarding the installation of Swift (including the creation of the user swift) since I already have one installed.

I then fiddled with the ProxyFS configuration on this VM to point to my existing Swift proxy server. I installed the pfs and meta middlewares on the machine hosting my Swift proxy server and added them to the pipeline.
I also launched another instance of the proxy server listening on port 8090 with the /etc/swift/proxy-server/proxy-noauth.cond.d/20_settings.conf file:
/usr/bin/python2 /usr/bin/swift-proxy-server /etc/swift/proxy-server/proxy-noauth.cond.d

Finally I used the script start_and_mount_pfs, after removing the lines about starting Swift, to launch ProxyFS and mount the NFS and SMB network shares.

The NFS share seems to work well (I can create folders and write files), but I'm getting an error trying to mount the SMB one. Relevant info: since I haven't created a swift user, I replaced it in smb.conf with the vagrant user that already existed in the VM, and used smbpasswd -a vagrant.
Command line error:

[vagrant@localhost ~]$ sudo mount -t cifs -o user=vagrant,uid=1000,gid=1000,vers=3.0,iocharset=utf8,actimeo=0 //127.0.0.1/proxyfs /mnt/smb_proxyfs_mount/
Password for vagrant@//127.0.0.1/proxyfs:  *******
mount error(5): Input/output error
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

What I find in /var/log/samba/log.smbd after adding log level = 3 passdb:5 auth:5 in smb.conf:

[2019/06/18 13:41:48.820712,  3] ../lib/util/access.c:361(allow_access)
  Allowed connection from 127.0.0.1 (127.0.0.1)
[2019/06/18 13:41:48.821084,  3] ../source3/smbd/oplock.c:1322(init_oplocks)
  init_oplocks: initializing messages.
[2019/06/18 13:41:48.821353,  3] ../source3/smbd/process.c:1958(process_smb)
  Transaction 0 of length 106 (0 toread)
[2019/06/18 13:41:48.821806,  3] ../source3/smbd/smb2_negprot.c:290(smbd_smb2_request_process_negprot)
  Selected protocol SMB3_00
[2019/06/18 13:41:48.821849,  5] ../source3/auth/auth.c:491(make_auth_context_subsystem)
  Making default auth method list for server role = 'standalone server', encrypt passwords = yes
[2019/06/18 13:41:48.821873,  5] ../source3/auth/auth.c:48(smb_register_auth)
  Attempting to register auth backend trustdomain
[2019/06/18 13:41:48.821926,  5] ../source3/auth/auth.c:60(smb_register_auth)
  Successfully added auth method 'trustdomain'
[2019/06/18 13:41:48.821945,  5] ../source3/auth/auth.c:48(smb_register_auth)
  Attempting to register auth backend ntdomain
[2019/06/18 13:41:48.821956,  5] ../source3/auth/auth.c:60(smb_register_auth)
  Successfully added auth method 'ntdomain'
[2019/06/18 13:41:48.821970,  5] ../source3/auth/auth.c:48(smb_register_auth)
  Attempting to register auth backend guest
[2019/06/18 13:41:48.821983,  5] ../source3/auth/auth.c:60(smb_register_auth)
  Successfully added auth method 'guest'
[2019/06/18 13:41:48.821994,  5] ../source3/auth/auth.c:48(smb_register_auth)
  Attempting to register auth backend sam
[2019/06/18 13:41:48.822004,  5] ../source3/auth/auth.c:60(smb_register_auth)
  Successfully added auth method 'sam'
[2019/06/18 13:41:48.822015,  5] ../source3/auth/auth.c:48(smb_register_auth)
  Attempting to register auth backend sam_ignoredomain
[2019/06/18 13:41:48.822026,  5] ../source3/auth/auth.c:60(smb_register_auth)
  Successfully added auth method 'sam_ignoredomain'
[2019/06/18 13:41:48.822060,  5] ../source3/auth/auth.c:48(smb_register_auth)
  Attempting to register auth backend winbind
[2019/06/18 13:41:48.822076,  5] ../source3/auth/auth.c:60(smb_register_auth)
  Successfully added auth method 'winbind'
[2019/06/18 13:41:48.822086,  5] ../source3/auth/auth.c:378(load_auth_module)
  load_auth_module: Attempting to find an auth method to match guest
[2019/06/18 13:41:48.822099,  5] ../source3/auth/auth.c:403(load_auth_module)
  load_auth_module: auth method guest has a valid init
[2019/06/18 13:41:48.822110,  5] ../source3/auth/auth.c:378(load_auth_module)
  load_auth_module: Attempting to find an auth method to match sam
[2019/06/18 13:41:48.822122,  5] ../source3/auth/auth.c:403(load_auth_module)
  load_auth_module: auth method sam has a valid init
[2019/06/18 13:41:48.823791,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'gssapi_spnego' registered
[2019/06/18 13:41:48.823830,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'gssapi_krb5' registered
[2019/06/18 13:41:48.823904,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'gssapi_krb5_sasl' registered
[2019/06/18 13:41:48.823935,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'spnego' registered
[2019/06/18 13:41:48.823949,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'schannel' registered
[2019/06/18 13:41:48.823964,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'naclrpc_as_system' registered
[2019/06/18 13:41:48.823976,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'sasl-EXTERNAL' registered
[2019/06/18 13:41:48.823988,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'ntlmssp' registered
[2019/06/18 13:41:48.824000,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'ntlmssp_resume_ccache' registered
[2019/06/18 13:41:48.824014,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'http_basic' registered
[2019/06/18 13:41:48.824030,  3] ../auth/gensec/gensec_start.c:918(gensec_register)
  GENSEC backend 'http_ntlm' registered
[2019/06/18 13:41:48.824789,  5] ../source3/auth/auth.c:491(make_auth_context_subsystem)
  Making default auth method list for server role = 'standalone server', encrypt passwords = yes
[2019/06/18 13:41:48.824822,  5] ../source3/auth/auth.c:378(load_auth_module)
  load_auth_module: Attempting to find an auth method to match guest
[2019/06/18 13:41:48.824836,  5] ../source3/auth/auth.c:403(load_auth_module)
  load_auth_module: auth method guest has a valid init
[2019/06/18 13:41:48.824847,  5] ../source3/auth/auth.c:378(load_auth_module)
  load_auth_module: Attempting to find an auth method to match sam
[2019/06/18 13:41:48.824859,  5] ../source3/auth/auth.c:403(load_auth_module)
  load_auth_module: auth method sam has a valid init
[2019/06/18 13:41:48.825052,  3] ../auth/ntlmssp/ntlmssp_util.c:69(debug_ntlmssp_flags)
  Got NTLMSSP neg_flags=0xa0080205
[2019/06/18 13:41:48.825484,  3] ../auth/ntlmssp/ntlmssp_server.c:452(ntlmssp_server_preauth)
  Got user=[vagrant] domain=[LOCALHOST] workstation=[] len1=0 len2=132
[2019/06/18 13:41:48.825565,  3] ../source3/param/loadparm.c:3823(lp_load_ex)
  lp_load_ex: refreshing parameters
[2019/06/18 13:41:48.825665,  3] ../source3/param/loadparm.c:542(init_globals)
  Initialising global parameters
[2019/06/18 13:41:48.825810,  3] ../source3/param/loadparm.c:2752(lp_do_section)
  Processing section "[global]"
[2019/06/18 13:41:48.825983,  2] ../source3/param/loadparm.c:2769(lp_do_section)
  Processing section "[proxyfs]"
[2019/06/18 13:41:48.826162,  3] ../source3/param/loadparm.c:1592(lp_add_ipc)
  adding IPC service
[2019/06/18 13:41:48.826198,  5] ../source3/auth/auth_util.c:123(make_user_info_map)
  Mapping user [LOCALHOST]\[vagrant] from workstation []
[2019/06/18 13:41:48.826220,  5] ../source3/auth/user_info.c:62(make_user_info)
  attempting to make a user_info for vagrant (vagrant)
[2019/06/18 13:41:48.826236,  5] ../source3/auth/user_info.c:70(make_user_info)
  making strings for vagrant's user_info struct
[2019/06/18 13:41:48.826244,  5] ../source3/auth/user_info.c:108(make_user_info)
  making blobs for vagrant's user_info struct
[2019/06/18 13:41:48.826251,  3] ../source3/auth/auth.c:178(auth_check_ntlm_password)
  check_ntlm_password:  Checking password for unmapped user [LOCALHOST]\[vagrant]@[] with the new password interface
[2019/06/18 13:41:48.826259,  3] ../source3/auth/auth.c:181(auth_check_ntlm_password)
  check_ntlm_password:  mapped user is: [LOCALHOST]\[vagrant]@[]
[2019/06/18 13:41:48.826554,  3] ../source3/passdb/lookup_sid.c:1680(get_primary_group_sid)
  Forcing Primary Group to 'Domain Users' for vagrant
[2019/06/18 13:41:48.826646,  4] ../source3/auth/check_samsec.c:183(sam_account_ok)
  sam_account_ok: Checking SMB password for user vagrant
[2019/06/18 13:41:48.826661,  5] ../source3/auth/check_samsec.c:165(logon_hours_ok)
  logon_hours_ok: user vagrant allowed to logon at this time (Tue Jun 18 11:41:48 2019
  )
[2019/06/18 13:41:48.827099,  5] ../source3/auth/server_info_sam.c:122(make_server_info_sam)
  make_server_info_sam: made server info for user vagrant -> vagrant
[2019/06/18 13:41:48.827130,  3] ../source3/auth/auth.c:249(auth_check_ntlm_password)
  check_ntlm_password: sam authentication for user [vagrant] succeeded
[2019/06/18 13:41:48.827153,  5] ../source3/auth/auth.c:292(auth_check_ntlm_password)
  check_ntlm_password:  PAM Account for user [vagrant] succeeded
[2019/06/18 13:41:48.827160,  2] ../source3/auth/auth.c:305(auth_check_ntlm_password)
  check_ntlm_password:  authentication for user [vagrant] -> [vagrant] -> [vagrant] succeeded
[2019/06/18 13:41:48.827343,  3] ../source3/auth/token_util.c:548(finalize_local_nt_token)
  Failed to fetch domain sid for WORKGROUP
[2019/06/18 13:41:48.827371,  3] ../source3/auth/token_util.c:580(finalize_local_nt_token)
  Failed to fetch domain sid for WORKGROUP
[2019/06/18 13:41:48.827624,  5] ../source3/passdb/pdb_interface.c:1749(lookup_global_sam_rid)
  lookup_global_sam_rid: looking up RID 513.
[2019/06/18 13:41:48.827655,  5] ../source3/passdb/pdb_tdb.c:658(tdbsam_getsampwrid)
  pdb_getsampwrid (TDB): error looking up RID 513 by key RID_00000201.
[2019/06/18 13:41:48.827672,  5] ../source3/passdb/pdb_interface.c:1825(lookup_global_sam_rid)
  Can't find a unix id for an unmapped group
[2019/06/18 13:41:48.827679,  5] ../source3/passdb/pdb_interface.c:1535(pdb_default_sid_to_id)
  SID S-1-5-21-2240567756-3470875878-3910347872-513 belongs to our domain, but there is no corresponding object in the database.
[2019/06/18 13:41:48.827699,  5] ../source3/passdb/pdb_interface.c:1749(lookup_global_sam_rid)
  lookup_global_sam_rid: looking up RID 513.
[2019/06/18 13:41:48.827711,  5] ../source3/passdb/pdb_tdb.c:658(tdbsam_getsampwrid)
  pdb_getsampwrid (TDB): error looking up RID 513 by key RID_00000201.
[2019/06/18 13:41:48.827723,  5] ../source3/passdb/pdb_interface.c:1825(lookup_global_sam_rid)
  Can't find a unix id for an unmapped group
[2019/06/18 13:41:48.827729,  5] ../source3/passdb/pdb_interface.c:1535(pdb_default_sid_to_id)
  SID S-1-5-21-2240567756-3470875878-3910347872-513 belongs to our domain, but there is no corresponding object in the database.
[2019/06/18 13:41:48.827829,  3] ../source3/smbd/password.c:144(register_homes_share)
  Adding homes service for user 'vagrant' using home directory: '/home/vagrant'
[2019/06/18 13:41:48.828148,  3] ../lib/util/access.c:361(allow_access)
  Allowed connection from 127.0.0.1 (127.0.0.1)
[2019/06/18 13:41:48.828191,  3] ../libcli/security/dom_sid.c:210(dom_sid_parse_endp)
  string_to_sid: SID vagrant is not in a valid format
[2019/06/18 13:41:48.828274,  3] ../source3/passdb/lookup_sid.c:1680(get_primary_group_sid)
  Forcing Primary Group to 'Domain Users' for vagrant
[2019/06/18 13:41:48.828374,  3] ../source3/smbd/service.c:576(make_connection_snum)
  Connect path is '/mnt/CommonVolume' for service [proxyfs]
[2019/06/18 13:41:48.828407,  3] ../libcli/security/dom_sid.c:210(dom_sid_parse_endp)
  string_to_sid: SID vagrant is not in a valid format
[2019/06/18 13:41:48.828483,  3] ../source3/passdb/lookup_sid.c:1680(get_primary_group_sid)
  Forcing Primary Group to 'Domain Users' for vagrant
[2019/06/18 13:41:48.828562,  3] ../source3/smbd/vfs.c:113(vfs_init_default)
  Initialising default vfs hooks
[2019/06/18 13:41:48.828589,  3] ../source3/smbd/vfs.c:139(vfs_init_custom)
  Initialising custom vfs hooks from [/[Default VFS]/]
[2019/06/18 13:41:48.828598,  3] ../source3/smbd/vfs.c:139(vfs_init_custom)
  Initialising custom vfs hooks from [proxyfs]
[2019/06/18 13:41:48.831109,  2] ../lib/util/modules.c:196(do_smb_load_module)
  Module 'proxyfs' loaded
[2019/06/18 13:41:48.834266,  1] vfs_proxyfs.c:230(vfs_proxyfs_connect)
  proxyfs_mount_failed: Volume : CommonVolume Connection_path /mnt/CommonVolume Service proxyfs user vagrant errno 19
[2019/06/18 13:41:48.834293,  1] ../source3/smbd/service.c:636(make_connection_snum)
  make_connection_snum: SMB_VFS_CONNECT for service 'proxyfs' at '/mnt/CommonVolume' failed: No such device
[2019/06/18 13:41:48.834344,  3] ../source3/smbd/smb2_server.c:3097(smbd_smb2_request_error_ex)
  smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_UNSUCCESSFUL] || at ../source3/smbd/smb2_tcon.c:135
[2019/06/18 13:41:48.960403,  3] ../source3/smbd/server_exit.c:246(exit_server_common)
  Server exit (NT_STATUS_END_OF_FILE)
[2019/06/18 13:41:48.966933,  3] ../source3/lib/util_procid.c:54(pid_to_procid)
  pid_to_procid: messaging_dgm_get_unique failed: No such file or directory

It looks like the Samba authentication went well, but the lines that look relevant to me are the following:

[2019/06/18 13:41:48.831109,  2] ../lib/util/modules.c:196(do_smb_load_module)
  Module 'proxyfs' loaded
[2019/06/18 13:41:48.834266,  1] vfs_proxyfs.c:230(vfs_proxyfs_connect)
  proxyfs_mount_failed: Volume : CommonVolume Connection_path /mnt/CommonVolume Service proxyfs user vagrant errno 19
[2019/06/18 13:41:48.834293,  1] ../source3/smbd/service.c:636(make_connection_snum)
  make_connection_snum: SMB_VFS_CONNECT for service 'proxyfs' at '/mnt/CommonVolume' failed: No such device

I tried troubleshooting this, but no luck so far. Would anyone be able to help on this?
Here's my df -H output if needed:

Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/cl-root           19G  3.1G   16G  17% /
devtmpfs                     3.1G     0  3.1G   0% /dev
tmpfs                        3.1G     0  3.1G   0% /dev/shm
tmpfs                        3.1G  9.0M  3.1G   1% /run
tmpfs                        3.1G     0  3.1G   0% /sys/fs/cgroup
/dev/sda1                    1.1G  240M  824M  23% /boot
tmpfs                        609M     0  609M   0% /run/user/1000
CommonMountPoint             110T     0  110T   0% /CommonMountPoint
127.0.0.1:/CommonMountPoint  110T     0  110T   0% /mnt/nfs_proxyfs_mount

I also tried to get containers and objects I created via the NFS share with the Object Storage API, but I got the following error on my Swift Proxy server:

[root@controller adminuser]# swift -A http://controller:8080/auth/v1.0 -U test:tester -K testing stat --debug
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): controller:8080
DEBUG:urllib3.connectionpool:http://controller:8080 "GET /auth/v1.0 HTTP/1.1" 200 0
DEBUG:swiftclient:REQ: curl -i http://controller:8080/auth/v1.0 -X GET
DEBUG:swiftclient:RESP STATUS: 200 OK
DEBUG:swiftclient:RESP HEADERS: {u'Content-Length': u'0', u'X-Trans-Id': u'tx6493625ff99f4486a7f5b-005d08d170', u'X-Auth-Token-Expires': u'76663', u'X-Auth-Token': u'AUTH_tk24c8619d99964285a356cbf294531184', u'X-Storage-Token': u'AUTH_tk24c8619d99964285a356cbf294531184', u'Date': u'Tue, 18 Jun 2019 11:56:32 GMT', u'X-Storage-Url': u'http://controller:8080/v1/AUTH_test', u'Content-Type': u'text/html; charset=UTF-8', u'X-Openstack-Request-Id': u'tx6493625ff99f4486a7f5b-005d08d170'}
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): controller:8080
DEBUG:urllib3.connectionpool:http://controller:8080 "HEAD /v1/AUTH_test HTTP/1.1" 500 0
INFO:swiftclient:REQ: curl -i http://controller:8080/v1/AUTH_test -I -H "X-Auth-Token: AUTH_tk24c8619d99964285a356cbf294531184"
INFO:swiftclient:RESP STATUS: 500 Internal Error
INFO:swiftclient:RESP HEADERS: {u'Date': u'Tue, 18 Jun 2019 11:56:32 GMT', u'Content-Length': u'17', u'Content-Type': u'text/plain', u'X-Openstack-Request-Id': u'tx3bdf2145377d4050a7044-005d08d170', u'X-Trans-Id': u'tx3bdf2145377d4050a7044-005d08d170'}

Relevant lines in /var/log/messages regarding the error (the French message below reads "Error: an error occurred: Host unreachable"):

Jun 18 13:57:57 controller proxy-server: STDERR: (23786) accepted ('192.168.71.37', 52024)
Jun 18 13:57:57 controller proxy-server: - - 18/Jun/2019/11/57/57 HEAD /auth/v1.0 HTTP/1.0 400 - Swift - - - - tx1c9994434391428a82261-005d08d1c5 - 0.0002 RL - 1560859077.899780035 1560859077.899970055 -
Jun 18 13:57:57 controller proxy-server: 192.168.71.37 192.168.71.37 18/Jun/2019/11/57/57 GET /auth/v1.0 HTTP/1.0 200 - python-swiftclient-3.6.0 - - - - tx1c9994434391428a82261-005d08d1c5 - 0.0021 - - 1560859077.899091005 1560859077.901160955 -
Jun 18 13:57:57 controller proxy-server: STDERR: 192.168.71.37 - - [18/Jun/2019 11:57:57] "GET /auth/v1.0 HTTP/1.1" 200 417 0.002583 (txn: tx1c9994434391428a82261-005d08d1c5)
Jun 18 13:57:57 controller proxy-server: STDERR: (23786) accepted ('192.168.71.37', 52026)
Jun 18 13:57:57 controller proxy-server: 192.168.71.37 192.168.71.37 18/Jun/2019/11/57/57 HEAD /v1/AUTH_test%3Fformat%3Djson HTTP/1.0 500 - python-swiftclient-3.6.0 AUTH_tk24c8619d9... - - - txc715be53ba9e476483a71-005d08d1c5 - 0.0013 - - 1560859077.906188011 1560859077.907531023 -
Jun 18 13:57:57 controller proxy-server: Erreur : une erreur s'est produite: Hôte inaccessible (txn: txc715be53ba9e476483a71-005d08d1c5)
Jun 18 13:57:57 controller proxy-server: STDERR: 192.168.71.37 - - [18/Jun/2019 11:57:57] "HEAD /v1/AUTH_test HTTP/1.1" 500 222 0.001975 (txn: txc715be53ba9e476483a71-005d08d1c5)

On another subject, does ProxyFS support Keystone authentication, instead of the tempauth used in the main pipeline?

More broadly, has anyone tried to connect ProxyFS to an existing OpenStack Swift/Keystone installation?

Regards

Capacity control on volume is not supported

Perhaps I'm missing the conf, but it seems ProxyFS doesn't support capacity control on each volume. To share volumes among working groups, capacity control (account quotas, in Swift terms) would be useful for limiting the total size users can store.
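For reference, plain Swift handles this with the account_quotas middleware, which reads an X-Account-Meta-Quota-Bytes header set via an account POST by a reseller admin; whether ProxyFS would honor it for bimodal volumes is exactly the open question here. A small helper building such a request (the function name is illustrative):

```python
def quota_post_headers(auth_token, quota_bytes):
    """Headers for a Swift account POST that sets an account quota.

    Swift's account_quotas middleware reads X-Account-Meta-Quota-Bytes;
    the request must come from a reseller admin.
    """
    return {
        'X-Auth-Token': auth_token,
        'X-Account-Meta-Quota-Bytes': str(quota_bytes),
    }

# e.g. requests.post(storage_url, headers=quota_post_headers(token, 10 * 2**30))
```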

Question: can the ProxyFS FUSE mount be used on multiple machines at the same time?

As far as I understand, ProxyFS is very similar to https://github.com/s3ql/s3ql, but it has the "segment" concept, which is pretty nice. What I'm worried about is being forced to have only a single FUSE mount at a time, so that only one machine can have the whole ProxyFS volume mounted (yes, you can use NFS and the API, but then all traffic passes through that one machine).
Could you tell me whether this is correct, or whether you can mount on multiple machines at the same time so no traffic passes through a single machine?

Regards,

Stuck writing and reading files larger than 1 MB

Running the fio benchmark with a 1 MB file size works fine, but when I change the global size to, e.g., 10Mi, it gets stuck.
benchmark.fio

[global]
directory=/mnt
end_fsync=1
filename_format=fio.$jobnum.$filenum
group_reporting
iodepth=1
ioengine=psync
size=100Mi

[4KiB_write]
blocksize=4Ki
readwrite=write

[4KiB_read]
blocksize=4Ki
readwrite=read

I captured the trace log below when the benchmark got stuck.
iclient log

[2022-04-10T04:23:03.734579155Z][TRACE] <== rpcAdjustInodeTableEntryOpenCount(adjustInodeTableEntryOpenCountResponse: &{}, err: <nil>)
[2022-04-10T04:23:03.734680208Z][TRACE] <== DoOpen(openOut: &{FH:5407 OpenFlags:0 Padding:0}, errno: errno 0)
[2022-04-10T04:23:03.734747176Z][TRACE] <== DoGetXAttr(getXAttrOut: <nil>, errno: no data available)
[2022-04-10T04:23:03.734947844Z][TRACE] ==> DoGetAttr(inHeader: &{Len:56 OpCode:3 Unique:24 NodeID:5131 UID:0 GID:0 PID:2750 Padding:0}, getAttrIn: &{Flags:1 Dummy:0 FH:5407})
[2022-04-10T04:23:03.7350326Z][TRACE] <== DoGetAttr(getAttrOut: &{AttrValidSec:0 AttrValidNSec:250000000 Dummy:0 Attr:{Ino:5131 Size:104857600 Blocks:204800 ATimeSec:1649564583 MTimeSec:1649564583 CTimeSec:1649524097 ATimeNSec:722828167 MTimeNSec:722828167 CTimeNSec:561918465 Mode:33188 NLink:1 UID:0 GID:0 RDev:0 BlkSize:512 Padding:0}}, errno: errno 0)
[2022-04-10T04:23:03.735083518Z][TRACE] ==> DoWrite(inHeader: &{Len:4176 OpCode:16 Unique:26 NodeID:5131 UID:0 GID:0 PID:2749 Padding:0}, writeIn: &{FH:5404 Offset:4096 Size:4096: WriteFlags:0 LockOwner:0 Flags:32770 Padding:0 len(Data):4096})
[2022-04-10T04:23:03.735395006Z][TRACE] <== DoWrite(writeOut: &{Size:4096 Padding:0}, errno: errno 0)
[2022-04-10T04:23:03.735363175Z][TRACE] ==> DoRead(inHeader: &{Len:80 OpCode:15 Unique:28 NodeID:5131 UID:0 GID:0 PID:2750 Padding:0}, readIn: &{FH:5407 Offset:0 Size:16384 ReadFlags:0 LockOwner:0 Flags:32768 Padding:0})
[2022-04-10T04:23:03.735540696Z][TRACE] ==> DoGetXAttr(inHeader: &{Len:68 OpCode:22 Unique:30 NodeID:5131 UID:0 GID:0 PID:2749 Padding:0}, getXAttrIn: &{Size:0 Padding:0 Position:0 Padding2:0 Name:[115 101 99 117 114 105 116 121 46 99 97 112 97 98 105 108 105 116 121]})

"Support" storage policies

I don't think we should explicitly lie to users (even if that would allow us to get more bimodal tests working), but we should do some "reasonable" things when clients look at X-Storage-Policy headers:

  • container GETs should use the volume's DefaultPhysicalContainerLayout's ContainerStoragePolicy
  • container PUTs that include the volume's DefaultPhysicalContainerLayout's ContainerStoragePolicy, or that don't specify an X-Storage-Policy at all, should proceed as normal
  • container PUTs that include a different X-Storage-Policy should be rejected with a 409
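The container PUT rule above could be sketched as follows. This is a minimal illustration under stated assumptions: the function name and the `volume_policy` parameter (standing in for the volume's DefaultPhysicalContainerLayout's ContainerStoragePolicy) are hypothetical, not actual pfs_middleware code.

```python
# Sketch of the proposed X-Storage-Policy handling for container PUTs.
# volume_policy stands in for the volume's
# DefaultPhysicalContainerLayout.ContainerStoragePolicy; the function name
# and return values are illustrative, not real pfs_middleware APIs.

def check_container_put_policy(requested_policy, volume_policy):
    """Return an HTTP status for a container PUT given its policy header.

    requested_policy is the X-Storage-Policy header value, or None when
    the client did not send the header.
    """
    if requested_policy is None or requested_policy == volume_policy:
        return 201  # proceed as a normal container creation
    return 409  # reject PUTs that ask for a different storage policy
```

For example, a PUT with no X-Storage-Policy header, or one matching the volume's policy, would proceed (201), while a PUT naming any other policy would be rejected (409).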
