gssapi / gssproxy

A proxy for GSSAPI | Docs at https://github.com/gssapi/gssproxy/tree/main/docs

License: Other

Makefile 1.42% M4 2.44% C 85.97% Python 6.56% Shell 0.16% RPC 3.47%
gssapi kerberos krb5 nfs nfs-server nfs-client kernel afs cifs gss-proxy

gssproxy's People

Contributors

alphix, aweits, bertogg, c0rn3j, cipherboy, cryptomilk, eliba, frozencemetery, gd, jacobshivers, jas4711, jcpunk, kloczek, listout, mw-a, nicowilliams, okapia, opoplawski, rmainz, scottmayhew, simo5, soapgentoo, stanislavlevin, vlendec


gssproxy's Issues

Unified config file is processed despite error messages

Hi,

I switched to gssproxy recently—after years of relying on rpc.svcgssd. My installation on Debian bullseye came with a unified configuration file in /etc/gssproxy/gssproxy.conf, but my logs are being spammed with these messages:

Error when reading config directory: File /etc/gssproxy/gssproxy.conf did not match provided patterns. Skipping.
Error when reading config directory: File /etc/gssproxy/gssproxy.conf~ did not match provided patterns. Skipping.

The messages are not correct. The regular file (although perhaps not the backup) is being read and processed!

I believe it is the result of the call here.

Maybe unified config files are superseded, but the documentation still recommends them. Which should I use, please?

Also, should gssproxy ignore editor backup files? Thank you!

Kind regards
Felix Lechner
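The skip messages come from matching each config-directory entry against a filename pattern. A minimal sketch of that logic, assuming a numbered-snippet pattern (the pattern shown is an illustration, not necessarily gssproxy's actual one):

```python
import fnmatch

# Files in the config directory are checked against a filename pattern;
# anything that does not match is skipped with the logged message.
# The pattern "[0-9][0-9]-*.conf" is an assumption for illustration.
files = ["gssproxy.conf", "gssproxy.conf~", "20-nfs.conf"]
loaded = [f for f in files if fnmatch.fnmatch(f, "[0-9][0-9]-*.conf")]
print(loaded)
```

Under such a pattern both the unified file and its editor backup would fail to match, which would be consistent with the two log messages quoted above.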

0.8.4: When stopping: free(): invalid pointer

I'm not very familiar with what gssproxy is supposed to do, but I'm using gssproxy 0.8.4 on archlinux on a fairly simple desktop that is also an nfs client. I noticed gssproxy has been crashing for months, which sometimes makes my machine hang on shutdown.

I have some interesting logs, on start:

gssproxy[XXX]: Error when reading config directory: File /etc/gssproxy/gssproxy.conf did not match provided patterns. Skipping.

This file contains the default as provided by my distro; its content is:

[gssproxy]

When stopping gssproxy:

gssproxy[XXX]: free(): invalid pointer

Stack trace of thread XXX:
#0  0x00007f1f5eb8cd22 raise (libc.so.6 + 0x3cd22)
#1  0x00007f1f5eb76862 abort (libc.so.6 + 0x26862)
#2  0x00007f1f5ebced28 __libc_message (libc.so.6 + 0x7ed28)
#3  0x00007f1f5ebd692a malloc_printerr (libc.so.6 + 0x8692a)
#4  0x00007f1f5ebd7cfc _int_free (libc.so.6 + 0x87cfc)
#5  0x00007f1f5ebdb9e8 __libc_free (libc.so.6 + 0x8b9e8)
#6  0x00007f1f5edd9fbc verto_cleanup (libverto.so.0 + 0x2fbc)
#7  0x000056306f8bc783 n/a (gssproxy + 0x5783)
#8  0x00007f1f5eb77b25 __libc_start_main (libc.so.6 + 0x27b25)
#9  0x000056306f8bc92e n/a (gssproxy + 0x592e)

I see a reference to verto_cleanup; this is provided by libverto.so.0.0, which is provided by krb5 1.19.2.

gssproxy 0.9.0 exits when idle and crashes

With version 0.9.0, the gssproxy service exits when it's unused for 1000 seconds. However, there is nothing that would automatically start it again when it's actually needed – there's no systemd .socket for userspace clients, and as far as I know, there's no way to autostart the service for NFS server so it has to remain running continuously. The latter part is what's causing the most problems in my case, as the machines serve NFS with GSSAPI and gssproxy is used as the replacement for rpc.svcgssd, so when it exits I end up with hung clients. I had to add Restart=always to the service to avoid this issue.

(Actually at first I thought that the service keeps crashing, but after enabling debug logging I found out that the crash is just a side effect of the daemon trying to exit, probably same as #36.)
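The Restart=always workaround described above can be applied without editing the shipped unit, via a standard systemd drop-in (the drop-in file name is arbitrary):

```ini
# /etc/systemd/system/gssproxy.service.d/override.conf
[Service]
Restart=always
```

After creating it, run systemctl daemon-reload and restart the service.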

krb5 traces missing from journald logs at debug level 3

As discovered in #43, the way systemd starts daemons ends up creating a redirection to a socket that breaks the ability to use '/dev/stderr' as a path name to effectively reopen fd 2.

This means debug level 3 ends up not containing krb5 traces because the file open fails.
We should see whether we can find a mechanism to deal with this; in the worst case, we could create a socket of our own and redirect its output back to stderr, or perhaps pass an FD number instead of a path to KRB5_TRACE, maybe by asking krb5 to add a KRB5_TRACE_FD option.
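The broken reopen can be demonstrated outside gssproxy: once fd 2 is a socket (as under systemd), reopening it by the /dev/stderr path typically fails on Linux. This standalone demo is an illustration, not gssproxy code:

```python
import os
import socket

# Make fd 2 a socket, the way systemd's journal redirection does, then try
# to reopen it by path, which is what KRB5_TRACE=/dev/stderr requires.
a, b = socket.socketpair()  # keep both ends alive while fd 2 points at one
saved = os.dup(2)
os.dup2(a.fileno(), 2)
try:
    fd = os.open("/dev/stderr", os.O_WRONLY)
    os.close(fd)
    result = "reopen succeeded"
except OSError:
    result = "reopen failed"  # on Linux, open() on a socket fd path errors
finally:
    os.dup2(saved, 2)  # restore the real stderr
print(result)
```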

gssproxy breaks no_root_squash export option with knfsd

Trying to test krb5 NFS exports with the "no_root_squash" export option, but it's not working and any request from root on the client ends up getting squashed to nobody. The client defaults to using the machine credentials for the root account (nfs/@realm).

The server kernel upcalls to gssproxy to ACCEPT_SEC_CONTEXT for the client's machine cred. It fails to match that to a local account on the server, and downcalls with the uid and gid set to -1. The kernel then just assumes that the account doesn't exist and maps it to "nobody".

I think for nfsd we need machine creds to be reported as uid=0/gid=0 and let the kernel decide whether to squash them or not. Is there an option for this already in gssproxy?
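The policy being proposed can be sketched as a tiny decision function (downcall_uid is a hypothetical helper, not gssproxy's current behavior):

```python
def downcall_uid(matched_uid, is_machine_cred):
    # Hypothetical policy sketch: a matched local account wins; an unmatched
    # machine cred is reported as root so the kernel's (no_)root_squash
    # export option decides; anything else stays unknown (-1 -> "nobody").
    if matched_uid is not None:
        return matched_uid
    if is_machine_cred:
        return 0
    return -1
```

This contrasts with the reported behavior, where an unmatched machine cred is downcalled as -1 and the kernel maps it straight to "nobody".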

Elaborate on when `cred_store = ccache:...` should be used

Current docs aren't very clear about when to use this. As a result, reasonable users often make the assumption that this should be something like cred_store = ccache:/tmp/krb5cc_%u or cred_store = ccache:KEYRING:%u or what have you.

We should make explicit what this is actually intended for and that it shouldn't be otherwise used.
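For contrast, a hedged sketch of the intended shape, i.e. a cache owned by one specific service rather than a per-user template (all names and paths here are illustrative):

```ini
[service/mysvc]
  mechs = krb5
  cred_store = keytab:/etc/mysvc.keytab
  cred_store = ccache:FILE:/var/lib/gssproxy/mysvc.ccache
  cred_usage = initiate
  euid = 990
```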

We leak

On current rawhide, with valgrind and debuginfos installed, if we do make check CHECKARGS='--force-valgrind --valgrind-cmd valgrind\ --track-origins=yes\ --leak-check=full', two problems occur.

First, we leak a whole bunch:

[root@localhost testdir]# grep -ir 'definitely lost:' | grep -v '0 bytes in 0 blocks'
test_0.log:==38725==    definitely lost: 864 bytes in 5 blocks
test_24.log:==38758==    definitely lost: 1,198 bytes in 4 blocks
test_7.log:==38733==    definitely lost: 120 bytes in 3 blocks
test_22.log:==38756==    definitely lost: 1,199 bytes in 4 blocks
test_18.log:==38752==    definitely lost: 1,199 bytes in 4 blocks
test_1.log:==38727==    definitely lost: 1,199 bytes in 4 blocks
gssproxy.log:==38718==    definitely lost: 32,302 bytes in 28 blocks
test_10.log:==38736==    definitely lost: 184 bytes in 6 blocks
test_6.log:==38732==    definitely lost: 120 bytes in 3 blocks
test_14.log:==38748==    definitely lost: 1,200 bytes in 4 blocks
test_9.log:==38735==    definitely lost: 120 bytes in 3 blocks
test_5.log:==38731==    definitely lost: 120 bytes in 3 blocks
test_4.log:==38730==    definitely lost: 184 bytes in 6 blocks
test_12.log:==38746==    definitely lost: 1,166 bytes in 4 blocks
test_11.log:==38739==    definitely lost: 80 bytes in 2 blocks
test_11.log:==38740==    definitely lost: 80 bytes in 2 blocks
test_11.log:==38738==    definitely lost: 814 bytes in 2 blocks
test_8.log:==38734==    definitely lost: 184 bytes in 6 blocks
test_26.log:==38760==    definitely lost: 128 bytes in 4 blocks
[root@localhost testdir]# 

Second, not all the tests pass:

...
[PASS] (15) Accept test returned 0
Testing positive program name matching...
  Testing basic acquire creds...
[FAIL] (16) Acquire test returned 255 (expected zero)
[INFO] To debug this test case, run:
    make check CHECKARGS='--debug-num=16'
Testing negative program name matching...
  Testing basic acquire creds...
[PASS] (17) Acquire test returned 255
[FAIL] Program test returned 255 (expected zero)
Testing basic SIGHUP with no change
  Testing basic init/accept context
[PASS] (18) Init test returned 0
...

I've played with timeouts and there doesn't seem to be a value that gets them all passing. Unfortunately I don't have time to dig further into this right now.

Security context mech oids should be static

Per RFC 2744 section 5.1:

   mech_type            Object ID, modify, optional Security mechanism
                        used.  The returned OID value will be a pointer
                        into static storage, and should be treated as
                        read-only by the caller (in particular, it does
                        not need to be freed).  If not required, specify
                        NULL.

This assumption is shared by the krb5 mechglue, which assumes this parameter can be requested without needing additional memory overhead. However, this causes the following valgrind trace from t_acquire (test_0.log):

==38725== 25 (16 direct, 9 indirect) bytes in 1 blocks are definitely lost in loss record 14 of 67
==38725==    at 0x483BAE9: calloc (vg_replace_malloc.c:760)
==38725==    by 0x53E325D: gp_conv_gssx_to_oid_alloc (gp_conv.c:82)
==38725==    by 0x53E8B04: gpm_accept_sec_context (gpm_accept_sec_context.c:75)
==38725==    by 0x53EFCCD: gssi_accept_sec_context (gpp_accept_sec_context.c:101)
==38725==    by 0x488ADF2: gss_accept_sec_context (g_accept_sec_context.c:266)
==38725==    by 0x401AD8: main (t_acquire.c:83)

Multi-homed NFS client does not select correct service principal

The NFSv4.0 callback client in the Linux NFS server invokes gssproxy (somehow) to acquire the credential for its callback channel.

On multi-homed systems, GSSX_ARG_ACQUIRE_CRED always selects the principal associated with "uname -n". When creating an NFS client on alternate network interfaces, GSSX_ARG_ACQUIRE_CRED needs to select the principal associated with that interface, not the one associated with "uname -n".

Autoconf complains about obsolete options

There's a lot of moaning from autoconf about obsolete options, e.g.:

configure.ac:12: warning: The macro `AC_PROG_CC_C99' is obsolete.
configure.ac:12: You should run autoupdate.
./lib/autoconf/c.m4:1659: AC_PROG_CC_C99 is expanded from...
configure.ac:12: the top level
configure.ac:23: warning: The macro `AC_OUTPUT_COMMANDS' is obsolete.
configure.ac:23: You should run autoupdate.
./lib/autoconf/status.m4:1025: AC_OUTPUT_COMMANDS is expanded from...
m4/po.m4:23: AM_PO_SUBDIRS is expanded from...
m4/gettext.m4:59: AM_GNU_GETTEXT is expanded from...
configure.ac:23: the top level
configure.ac:23: warning: The macro `AC_TRY_LINK' is obsolete.
configure.ac:23: You should run autoupdate.
./lib/autoconf/general.m4:2920: AC_TRY_LINK is expanded from...
lib/m4sugar/m4sh.m4:692: _AS_IF_ELSE is expanded from...
lib/m4sugar/m4sh.m4:699: AS_IF is expanded from...
./lib/autoconf/general.m4:2249: AC_CACHE_VAL is expanded from...
./lib/autoconf/general.m4:2270: AC_CACHE_CHECK is expanded from...
m4/gettext.m4:59: AM_GNU_GETTEXT is expanded from...
configure.ac:23: the top level
configure.ac:23: warning: The macro `AC_TRY_LINK' is obsolete.
configure.ac:23: You should run autoupdate.
./lib/autoconf/general.m4:2920: AC_TRY_LINK is expanded from...
lib/m4sugar/m4sh.m4:692: _AS_IF_ELSE is expanded from...
lib/m4sugar/m4sh.m4:699: AS_IF is expanded from...
...

Time to follow the advice and run autoupdate, or is there something I'm missing?

Valgrind CI pass

The test suite supports running under valgrind if additional flags are passed. Two tests are known to not work (Simo believes this is because they rely on program name matching). So we would need a separate pass.

We would also want to generate a suppressions file for libverto/libev (but this is probably not worth it - these are fixed-size one-time leaks, and valgrind has trouble generating useful traces for it).

Make check failed to execute normally.

version info:
krb5: 1.19
gssproxy: 0.8.4
nss_wrapper: 1.1.11
openldap: 2.6.0
socket_wrapper: 1.1.9

When I run `make check`, some errors occur:

./tests/runtests.py


To pass arguments to the test suite, use CHECKARGS:
    make check CHECKARGS='--debug-num=<num>'
A full set of available options can be seen with --help


Waiting for LDAP server to start...
krb5kdc: starting...
Traceback (most recent call last):
  File "/root/rpmbuild/BUILD/gssproxy-0.8.4/./tests/runtests.py", line 89, in runtests_main
    gproc, gpsocket = setup_gssproxy(testdir, gssproxyenv)
  File "/root/rpmbuild/BUILD/gssproxy-0.8.4/tests/testlib.py", line 766, in setup_gssproxy
    gssproxy_reload(testdir, gproc.pid, {
  File "/root/rpmbuild/BUILD/gssproxy-0.8.4/tests/testlib.py", line 801, in gssproxy_reload
    raise Exception("timed out while waiting for gssproxy to reload")
Exception: timed out while waiting for gssproxy to reload
Killing LDAP(559570)
Killing KDC(559579)
make: *** [Makefile:2248: check] Error 1

Variable substitution for hostname / FQDN

I would like to request a new config variable as a substitution for the host's current FQDN. This would allow me to have a reusable gssproxy config file with krb5_principal set to a service principal name. The service principal contains the current FQDN, e.g. myservice/host.ipa.example. If gssproxy supported an additional variable, I could ship the config file in RPM packages.

[service/myservice]
  mechs = krb5
  cred_store = keytab:/var/lib/ipa/gssproxy/myservice.keytab
  allow_client_ccache_sync = true
  cred_usage = initiate
  euid = myservice
  # krb5_principal = myservice/host.ipa.example
  krb5_principal = myservice/%h

The krb5_principal option is required to initiate credentials without an explicit principal name in the user application. For example, the ipa command line tool and its API need krb5_principal to authenticate automatically.

# sudo -u myservice /bin/bash
bash-5.1$ GSS_USE_PROXY=1 ipa ping
--------------------------------------------
IPA server version 4.10.1. API version 2.251
--------------------------------------------

Without an explicit krb5_principal, the command fails, because GSSX_ARG_ACQUIRE_CRED runs with input_cred_handle: <Null>.

FreeIPA ticket https://pagure.io/freeipa/issue/9442 has some context.
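The requested substitution amounts to a simple template expansion against the host's FQDN. A sketch, assuming a hypothetical %h specifier (expand_principal is an illustration, not gssproxy code):

```python
import socket

def expand_principal(template, fqdn=None):
    # Hypothetical '%h' specifier: replaced by the host's current FQDN,
    # as obtained from socket.getfqdn() unless one is supplied.
    fqdn = socket.getfqdn() if fqdn is None else fqdn
    return template.replace("%h", fqdn)

print(expand_principal("myservice/%h", fqdn="host.ipa.example"))
```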

Reference that the libidmap interface is not implemented and that one has to use krb5.conf for id-mapping requirements.

I think it would be sufficient to write that the libidmap interface is not implemented and that one has to use krb5.conf for id-mapping requirements.

On Friday, 3 May 2024 at 15:00:47 CEST, Simo Sorce wrote:

I would accept a patch to the NFS.md doc if you have a clear idea of what to write, or even just an issue that describes precisely the kind of change you'd think would make this clearer in the doc.

Originally posted by @trupf in #100 (comment)

An actual error map

    /* placeholder,                                                                                      
     * we will need an actual map but to speed up testing just make a sum with                           
     * a special base and hope no conflicts will happen in the mechglue */

(from gss_plugin.c)
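What "an actual map" could look like, sketched in Python (the base value and helper names are hypothetical, not gssproxy's real scheme):

```python
# Replace the "special base + sum" placeholder with a real bidirectional
# error map: every distinct minor code gets a stable, collision-free mapped
# value, and the original code can always be recovered.
ERR_BASE = 0x40000000  # hypothetical base for proxied minor codes

_to_mapped = {}
_from_mapped = {}

def map_error(minor):
    """Assign each distinct minor code a unique mapped value."""
    if minor not in _to_mapped:
        mapped = ERR_BASE + len(_to_mapped)
        _to_mapped[minor] = mapped
        _from_mapped[mapped] = minor
    return _to_mapped[minor]

def unmap_error(mapped):
    """Recover the original minor code from a mapped value."""
    return _from_mapped[mapped]
```

Unlike the summing placeholder, two different minor codes can never collide in the mechglue.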

Static idmapping works differently when gssproxy is installed on the NFS server

I'm not actually sure if this is a gssproxy issue or related to something else...

I have the following entries in /etc/idmapd.conf on the server:

....
[Translation]
Method = static,nsswitch

[Static]
backup/[email protected] = borg
backup/[email protected] = borg
....

The intention is that the principal used in the keytab authenticates on the server, has the access rights of user borg, and that the id is mapped for file and directory ownership. This actually works when gssproxy is not installed on the server. But when it is installed, all files owned by user "borg" on the server are displayed as owned by "nobody" on the client, and file access is not granted. If I then remove the static mapping entry, the correct ownership is displayed, but access to the files is of course not permitted (which is correct in this case, as user backup is not allowed to access borg's files...).
Then again, authentication and user mapping with a keytab for "[email protected]" (without the host name part) do work with correct mapping and access rights even with gssproxy, but then I would have to use the same keytab on different clients, which is not the intention...
So I think the static mapping should only be used for authentication to the NFSv4 server; for ownership, the original user should still be used and displayed. At least that is how it works without gssproxy.

Expired credentials are never renewed

It seems gssproxy doesn't renew the client cache on expiration.

If the client cache does not exist, gssproxy acquires credentials and everything works until the client cache expires. If the client cache file is removed, gssproxy acquires new credentials and the service using it continues to work.

The service uses SASL GSSAPI to access LDAP and runs as a separate user (1000).

I have tested it with ldapsearch:
sudo -u 1000 GSS_USE_PROXY=yes ldapsearch -Y GSSAPI

klist -c /run/krb5_ccache shows that the cache is expired.

The krb5 trace states "The referenced credential has expired"...

BTW. SELinux is in permissive mode.

Service configuration:

[service/example-service]
  mechs = krb5
  cred_store = keytab:/etc/krb5.keytab
  cred_store = ccache:FILE:/run/krb5_ccache
  cred_store = client_keytab:/etc/krb5.keytab
  min_lifetime = 360
  cred_usage = both
  euid = 1000
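Given min_lifetime = 360 above, the expected behavior is essentially a remaining-lifetime check before handing out cached creds. A sketch of that check (needs_renewal is a hypothetical helper, not gssproxy code):

```python
import time

def needs_renewal(cred_expiry, min_lifetime, now=None):
    # Re-acquire once the remaining lifetime drops below the threshold;
    # an already-expired credential has negative remaining lifetime and
    # therefore always triggers renewal.
    now = time.time() if now is None else now
    return (cred_expiry - now) < min_lifetime
```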

Example log with debug_level=3:

gssproxy[643415]: [2021/12/11 00:36:11]: [status] Sending data [0x7f6378076900 (168)]: successful write of 168
gssproxy[643415]: [2021/12/11 00:36:11]: [status] Sending data: 0x7f6378076900 (168)
gssproxy[643415]: [2021/12/11 00:36:11]: [status] Handling query reply: 0x7f6378076900 (168)
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: [status] Handling query output: 0x7f6378076900 (168)
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: [status] Returned buffer 8 (GSSX_INIT_SEC_CONTEXT) from [0x561388f0e370 (216)]: [0x7f6378076900 (168)]
gssproxy[643415]:     GSSX_RES_INIT_SEC_CONTEXT( status: { 851968 { 1 2 840 113554 1 2 2 } 2529638944 "Unspecified GSS failure.  Minor code may provide more information" "Ticket expired" [  ] } context_handle: <Null> output_token: <Null> )
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: No impersonator credentials detected
gssproxy[643415]:     GSSX_ARG_INIT_SEC_CONTEXT( call_ctx: { "" [  ] } context_handle: <Null> cred_handle: <Null> target_name: { "[email protected]" { 1 2 840 113554 1 2 1 4 } [  ] [  ] [ ] } mech_type: { 1 2 840 113554 1 2 2 } req_flags: 58 time_req: 0 input_cb: <Null> input_token: <Null> [ { [ 73796e635f6d6f6469666965645f63726564730 ] [ 64656661756c740 ] } ] )
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: gp_rpc_execute: executing 8 (GSSX_INIT_SEC_CONTEXT) for service "example-service", euid: 1000,socket: (null)
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: [status] Executing request 8 (GSSX_INIT_SEC_CONTEXT) from [0x561388f0e370 (216)]
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: [status] Processing request [0x561388f0e370 (216)]
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: Connection matched service example-service
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: [status] Handling query input: 0x561388f0e370 (216)
gssproxy[643415]: [2021/12/11 00:36:11]: [status] Sending data [0x7f6378023de0 (164)]: successful write of 164
gssproxy[643415]: [2021/12/11 00:36:11]: [status] Sending data: 0x7f6378023de0 (164)
gssproxy[643415]: [2021/12/11 00:36:11]: [status] Handling query reply: 0x7f6378023de0 (164)
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: [status] Handling query output: 0x7f6378023de0 (164)
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: [status] Returned buffer 6 (GSSX_ACQUIRE_CRED) from [0x561388f053e0 (848)]: [0x7f6378023de0 (164)]
gssproxy[643415]:     GSSX_RES_ACQUIRE_CRED( status: { 720896 { 1 2 840 113554 1 2 2 } 100001 "The referenced credential has expired" "Success" [  ] } output_cred_handle: { { "" <None> [  ] [  ] [ ] } [ ] [  ] 0 } )
gssproxy[643415]:     GSSX_ARG_ACQUIRE_CRED( call_ctx: { "" [  ] } input_cred_handle: { { "service1/[email protected]" { 1 2 840 113554 1 2 2 1 } [ 410b692affffff8648ffffff86fffffff7121220002f696d61702f7372762d68747a2d66736e2d312e64652e692e696e666f6d6161732e687240494e464f4d4141532e4852 ] [ 420b692affffff8648ffffff86fffffff7121220002f696d61702f7372762d68747a2d66736e2d312e64652e692e696e666f6d6161732e687240494e464f4d4141532e48520000 ] [ ] } [ { { "service1/[email protected]" { 1 2 840 113554 1 2 2 1 } [ 410b692affffff8648ffffff86fffffff7121220002f696d61702f7372762d68747a2d66736e2d312e64652e692e696e666f6d6161732e687240494e464f4d4141532e4852 ] [ 420b692affffff8648ffffff86fffffff7121220002f696d61702f7372762d68747a2d66736e2d312e64652e692e696e666f6d6161732e687240494e464f4d4141532e48520000 ] [ ] } { 1 2 840 113554 1 2 2 } INITIATE 85446 0 } ] [ ffffffd37dffffff8b1ffffff94ffffff8fffffffa05d1a3effffff96ffffff957144ffffffe5ffffffe078ffffff91ffffffc9ffffffd9ffffffb4541ffffffb5bffffffa270ffffffaaffffff862cffffffdeffffffb8481873ffffff9fffffffd44b61213ffffffcfffffff934f03bffffffb412ffffffeeffffffea75ffffffdfffffffd7646a28ffffff9a2f643134b322921407fffffff9b43ffffffffffffffb9ffffffdc7a39fffffff0ffffffe9492e4ffffffff24dffffff90fffffffafffffff128ffffffcf15ffffffaefffffff7ffffffc9ffffffb5ffffff994b5effffff8c73ffffff8dffffffa7ffffffd3ffffffd2ffffffcb571639fffffff07affffffa97affffffb836ffffff8e6d79744c357dffffffacffffffeaffffffbcffffffdc6cffffffa8ffffff844b5effffff877ffffffdc23ffffffcf22ffffff87ffffff83ffffffdf79ffffffd9ffffff912d3ffffffcaffffffbb43dffffffc8137c4cffffffe46448ffffffab262dffffffdb314469ffffffcf1b28ffffffe11121ffffffc94870ffffff8dfffffff5b2d43ffffffd8ffffffa555216e78ffffff9fffffffba6fffffffd430ffffffbaffffffbc6effffff83fffffff86b281bffffffc9137365ffffffa4ffffffaf2ffffffe0fffffff811561c51774effffffcb6a7dffffffe0ffffffecffffffa42affffffc1ffffffa7ffffff826074ffffff99757affffffb043322fffffffc5ffffff8cffffffb3ffffff8affffffd04cffffffe0ffffffb22964ffffffc2ffffff8d68ffffffaefffffff
80ffffffc741fffffff1ffffff902b ] 0 } add_cred: 0 desired_name: <Null> time_req: 0 desired_mechs: { } cred_usage: INITIATE initiator_time_req: 0 acceptor_time_req: 0 )
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: gp_rpc_execute: executing 6 (GSSX_ACQUIRE_CRED) for service "example-service", euid: 1000,socket: (null)
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: [status] Executing request 6 (GSSX_ACQUIRE_CRED) from [0x561388f053e0 (848)]
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: [status] Processing request [0x561388f053e0 (848)]
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: Connection matched service example-service
gssproxy[643415]: [CID 12][2021/12/11 00:36:11]: [status] Handling query input: 0x561388f053e0 (848)
gssproxy[643415]: [2021/12/11 00:36:11]: Client [2021/12/11 00:36:11]: (/usr/bin/ldapsearch) [2021/12/11 00:36:11]:  connected (fd = 12)[2021/12/11 00:36:11]:  (pid = 643985) (uid = 1000) (gid = 1000)[2021/12/11 00:36:11]:  (context = unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023)[2021/12/11 00:36:11]:

The purpose of Encrypted/Credentials/v1@X-GSSPROXY:

Hi,

I've successfully set up GSS-Proxy with an NFS client and Constrained Delegation against Active Directory. I think I've understood how things work, but I still miss a couple of bits!
For reference, the config is

[service/nfs-client]
 mechs = krb5
 cred_store = keytab:/etc/krb5.keytab
 cred_store = ccache:FILE:/var/lib/gssproxy/clients/krb5cc_%U
 cred_usage = initiate
 allow_any_uid = yes
 impersonate = true
 euid = 0

  • The first one is this rather mysterious/surprising Encrypted/Credentials/v1@X-GSSPROXY: ticket(s) that get added to the user's default Kerberos cache:
$ klist
Ticket cache: KCM:1850627282:33793
Default principal: [email protected]

Valid starting       Expires              Service principal
01/01/1970 01:00:00  01/01/1970 01:00:00  Encrypted/Credentials/v1@X-GSSPROXY:
01/01/1970 01:00:00  01/01/1970 01:00:00  Encrypted/Credentials/v1@X-GSSPROXY:

=> What are those for and how do they fit in the grand scheme of things?

  • The second one is the location of the actual Service Ticket (the one for nfs/[email protected]) got on behalf of the user. From KRB5_TRACE logs, I can see many MEMORY: references:
[754] 1713649111.465532: Resolving unique ccache of type MEMORY
[754] 1713649111.465533: Initializing MEMORY:PjKD2SW with default princ [email protected]
[754] 1713649111.465534: Storing [email protected] -> [email protected] in MEMORY:PjKD2SW
[754] 1713649111.465535: Storing [email protected] -> krb5_ccache_conf_data/proxy_impersonator@X-CACHECONF: in MEMORY:PjKD2SW
[754] 1713649111.465536: Storing [email protected] -> krb5_ccache_conf_data/refresh_time@X-CACHECONF: in MEMORY:PjKD2SW
[754] 1713649111.465537: Storing [email protected] -> krb5_ccache_conf_data/pa_type/krbtgt\/SOMEDOMAIN.COM\@SOMEDOMAIN.COM@X-CACHECONF: in MEMORY:PjKD2SW
[754] 1713649111.465538: Storing [email protected] -> krbtgt/[email protected] in MEMORY:PjKD2SW
[...]
[3831] 1713542008.050652: Get cred via TGT krbtgt/[email protected] after requesting nfs/[email protected] (canonicalize on)

I guess that GSS-Proxy is somehow storing some bits in a memory cache. The /var/lib/gssproxy/clients/krb5cc_xxxx files do get populated though, but they only hold the TGT.
=> Is there any reason to use a MEMORY cache? (Maybe it's mandatory in the impersonation scenario?)

  • Last but not least, for Constrained Delegation to actually work against an Active Directory environment, the first ticket, requested by the host for itself on behalf of the user (the s4u2self part), needs to be forwardable in order to be accepted by AD and trigger the s4u2proxy part. Unfortunately, the default [libdefaults] section of /etc/krb5.conf after a domain join through realmd does not set forwardable to true.

=> Is there any way for GSS-Proxy to enforce forwardability in the impersonation scenario? (regardless of /etc/krb5.conf setting)
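On the krb5.conf side, the setting being referred to is standard:

```ini
# /etc/krb5.conf
[libdefaults]
    forwardable = true
```

Whether GSS-Proxy can enforce forwardability independently of this setting is the open question here.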

Can we get a 0.9.2 release?

Greetings team,

Looking at the past releases we have one roughly every year, with 0.9.1 from Jun 2022.

Since then we have accumulated a handful of admittedly small fixes: null pointer derefs, fixes to print modifiers, systemd unit hardening, a musl build fix, deprecation warnings, and typos (lots of typos). Personally I am mostly looking forward to 05140b3 and ec46345.

Can we get a minor release with the above goodies?

Thanks in advance

lifetime of credentials

It seems gssproxy doesn't expose the lifetime of credentials, or doesn't do it properly.

In IPA env(WSGI, GSS_USE_PROXY=yes) I inquire the lifetime of creds as:

import gssapi

store = {'ccache': '/run/ipa/ccaches/xxx'}
creds = gssapi.Credentials(usage="initiate", name=None, store=store)
print(creds.lifetime)

which always shows the initial lifetime of the credentials (in my example it was always 20) even when the credentials are expired.

While the decrypted ccache

import gssapi

store = {'ccache': '/root/decryptedccache'}
creds = gssapi.Credentials(usage="initiate", name=None, store=store)
print(creds.lifetime)

shows the correct remaining lifetime of the creds and raises ExpiredCredentialsError on expiration.

Is this behaviour of the proxied lifetime expected, a bug, or just not implemented yet?

A "usermode" to use gssproxy with flatpaks

Flatpaks are a way to get user applications (generally GUI) packaged in an operating-system-agnostic way, with some isolation for the user session.
This also means flatpaks have difficulty dealing with GSSAPI, as it often relies on the host's krb5 configuration for the default realm and other settings.
Using gssproxy for privilege separation would also mean the host krb5 config could be used, and the TGT would not be leaked into the flatpak environment.

This calls for a simplified user mode, where gssproxy is run as a user and can itself be intercepted by the host gss-proxy if needed.

Crash in gssrpc_xdr_free()

Using Epiphany Tech Preview, every time I navigate to a web page that requires Kerberos authentication, my network process crashes in gssproxy code. I've never seen this before until a few minutes ago, but now it's become 100% reproducible across browser restarts, so somehow I've gotten my system into a bad state that triggers this bug. Not sure how, though.

Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f4a545bb591 in gssrpc_xdr_bytes (xdrs=0x7ffe2289ec80, cpp=0x8, sizep=0x0, maxsize=4294967295) at xdr.c:441
441		char *sp = *cpp;  /* sp is the actual string pointer */
[Current thread is 1 (Thread 0x7f4ab1e3d100 (LWP 8))]
(gdb) bt full
#0  0x00007f4a545bb591 in gssrpc_xdr_bytes (xdrs=0x7ffe2289ec80, cpp=0x8, sizep=0x0, maxsize=4294967295) at xdr.c:441
        sp = <optimized out>
        nodesize = <optimized out>
#1  0x00007f4a545d9599 in xdr_octet_string () at rpcgen/gss_proxy_xdr.c:18
#2  0x00007f4a545d95bd in xdr_gssx_buffer (xdrs=<optimized out>, objp=<optimized out>) at rpcgen/gss_proxy_xdr.c:44
#3  0x00007f4a545d9746 in xdr_gssx_name (xdrs=xdrs@entry=0x7ffe2289ec80, objp=objp@entry=0x0)
    at rpcgen/gss_proxy_xdr.c:186
#4  0x00007f4a545d97f6 in xdr_gssx_cred (xdrs=0x7ffe2289ec80, 
    xdrs@entry=<error reading variable: value has been optimized out>, objp=0x0, 
    objp@entry=<error reading variable: value has been optimized out>) at rpcgen/gss_proxy_xdr.c:225
#5  0x00007f4a545bb00b in gssrpc_xdr_free (proc=<optimized out>, objp=<optimized out>) at xdr.c:81

                  x = {x_op = XDR_FREE, x_ops = 0x7f4a545e3fac <gssi_init_sec_context+524>, x_public = 0xffffffff <error: Cannot access memory at address 0xffffffff>, x_private = 0x0, x_base = 0x0, x_handy = 978656736}
#6  0x00007f4a545e41bd in gssi_init_sec_context
    (minor_status=minor_status@entry=0x7ffe2289f1e8, claimant_cred_handle=claimant_cred_handle@entry=0x0, context_handle=0x55ef3a2c04b0, target_name=0x55ef3a435080, mech_type=0x55ef3a56bc90, req_flags=req_flags@entry=32, time_req=<optimized out>, input_cb=<optimized out>, input_token=<optimized out>, actual_mech_type=<optimized out>, output_token=<optimized out>, ret_flags=<optimized out>, time_rec=<optimized out>) at src/mechglue/gpp_init_sec_context.c:174
        behavior = <optimized out>
        ctx_handle = 0x55ef3a52f590
        cred_handle = 0x55ef3a3f37b0
        out_cred = 0x55ef3a421570
        tmaj = <optimized out>
        tmin = 0
        maj = 0
        min = 0
#7  0x00007f4ab23e1fe0 in gss_init_sec_context
    (minor_status=minor_status@entry=0x7ffe2289f1e8, claimant_cred_handle=claimant_cred_handle@entry=0x0, context_handle=context_handle@entry=0x55ef3a551da8, target_name=target_name@entry=0x55ef3a5506f0, req_mech_type=<optimized out>, req_flags=req_flags@entry=32, time_req=<optimized out>, input_chan_bindings=<optimized out>, input_token=<optimized out>, actual_mech_type=<optimized out>, output_token=<optimized out>, ret_flags=<optimized out>, time_rec=<optimized out>) at g_init_sec_context.c:211
        status = <optimized out>
        temp_minor_status = 32586
        union_name = 0x55ef3a5506f0
        union_cred = <optimized out>
        internal_name = 0x55ef3a435080
        union_ctx_id = 0x55ef3a2c04a0
        selected_mech = 0x55ef3a634210
        mech = 0x55ef3a633fb0
        input_cred_handle = 0x0
#8  0x00007f4ab2408351 in init_ctx_call_init
    (minor_status=minor_status@entry=0x7ffe2289f1e8, sc=sc@entry=0x55ef3a551d80, spcred=spcred@entry=0x0, acc_negState=acc_negState@entry=4294967295, target_name=target_name@entry=0x55ef3a5506f0, req_flags=req_flags@entry=0, time_req=<optimized out>, mechtok_in=<optimized out>, bindings=<optimized out>, mechtok_out=<optimized out>, time_rec=<optimized out>, send_token=<optimized out>) at spnego_mech.c:929
        ret = <optimized out>
        tmpret = <optimized out>
        tmpmin = 21999
        mech_req_flags = 32
        mcred = <optimized out>
#9  0x00007f4ab2409f16 in spnego_gss_init_sec_context
    (minor_status=minor_status@entry=0x7ffe2289f1e8, claimant_cred_handle=claimant_cred_handle@entry=0x0, context_handle=0x55ef3a3d97c0, target_name=0x55ef3a5506f0, mech_type=<optimized out>, req_flags=req_flags@entry=0, time_req=<optimized out>, bindings=<optimized out>, input_token=<optimized out>, actual_mech=<optimized out>, output_token=<optimized out>, ret_flags=<optimized out>, time_rec=<optimized out>) at spnego_mech.c:1087
        send_token = INIT_TOKEN_SEND
        tmpmin = 0
        ret = <optimized out>
        negState = 4294967295
        acc_negState = 4294967295
        mechtok_in = 0x0
        mechListMIC_in = 0x0
        mechListMIC_out = 0x0
        mechtok_out = {length = 631, value = 0x55ef3a66a710}
        spcred = 0x0
        spnego_ctx = 0x55ef3a551d80
#10 0x00007f4ab23e1fe0 in gss_init_sec_context (minor_status=minor_status@entry=0x7ffe2289f1e8, claimant_cred_handle=claimant_cred_handle@entry=0x0, context_handle=context_handle@entry=0x55ef3a34fea0, target_name=0x55ef3a613cf0, req_mech_type=req_mech_type@entry=0x7f4ab52211e0 <gss_mech_spnego>, req_flags=req_flags@entry=0, time_req=<optimized out>, input_chan_bindings=<optimized out>, input_token=<optimized out>, actual_mech_type=<optimized out>, output_token=<optimized out>, ret_flags=<optimized out>, time_rec=<optimized out>) at g_init_sec_context.c:211
        status = <optimized out>
        temp_minor_status = 0
        union_name = 0x55ef3a613cf0
        union_cred = <optimized out>
        internal_name = 0x55ef3a5506f0
        union_ctx_id = 0x55ef3a3d97b0
        selected_mech = 0x55ef3a63fc80
        mech = 0x55ef3a63fc80
        input_cred_handle = 0x0
#11 0x00007f4ab51aef88 in soup_gss_client_step (conn=conn@entry=0x55ef3a34fe90, challenge=challenge@entry=0x7f4ab52001d5 "", error_message=error_message@entry=0x7ffe2289f2f0) at ../libsoup/auth/soup-auth-negotiate.c:596
        maj_stat = <optimized out>
        min_stat = 0
        in = {length = 0, value = 0x0}
        out = {length = 0, value = 0x0}
        ret = 0
#12 0x00007f4ab51af5ac in soup_gss_build_response (conn=conn@entry=0x55ef3a34fe90, auth=<optimized out>, error_message=error_message@entry=0x7ffe2289f2f0) at ../libsoup/auth/soup-auth-negotiate.c:494
#13 0x00007f4ab51af86c in soup_auth_negotiate_update_connection (auth=0x7f48c0001a20 [SoupAuthNegotiate], msg=0x55ef3a345e60 [SoupMessage], header=<optimized out>, state=0x55ef3a34fe90) at ../libsoup/auth/soup-auth-negotiate.c:265
        success = 1
        conn = 0x55ef3a34fe90
        error_message = 0x0
        __func__ = "soup_auth_negotiate_update_connection"
#14 0x00007f4ab51b18d1 in soup_connection_auth_update (auth=0x7f48c0001a20 [SoupAuthNegotiate], msg=0x55ef3a345e60 [SoupMessage], auth_params=<optimized out>) at ../libsoup/auth/soup-connection-auth.c:153
        cauth = 0x7f48c0001a20 [SoupAuthNegotiate]
        conn = 0x55ef3a34fe90
        iter = {dummy1 = 0x55ef3a3cfd20, dummy2 = 0x7f48c0001a20, dummy3 = 0x7ffe2289f3d0, dummy4 = 8, dummy5 = 32586, dummy6 = 0x7f4a00000000}
        auth_header = 0x55ef3a3e6aa0
        key = 0x7ffe2289f3d0
        value = 0x7f4ab51ee6f0 <soup_str_case_hash+64>
        result = <optimized out>
#15 0x00007f4ab51a9b8a in soup_auth_new (type=<optimized out>, msg=msg@entry=0x55ef3a345e60 [SoupMessage], auth_header=<optimized out>) at ../libsoup/auth/soup-auth.c:291
        auth = 0x7f48c0001a20 [SoupAuthNegotiate]
        params = 0x55ef3a3cfd20
        scheme = 0x7f4ab51fe808 "Negotiate"
        uri = <optimized out>
        authority = <optimized out>
        __func__ = "soup_auth_new"
        priv = 0x7f48c0001a00
#16 0x00007f4ab51b04d8 in create_auth (priv=priv@entry=0x55ef39ecf540, msg=msg@entry=0x55ef3a345e60 [SoupMessage]) at ../libsoup/auth/soup-auth-manager.c:337
        j = 0
        header = 0x55ef3a3804d0 "Negotiate"
        auth_class = 0x55ef39f02790
        challenges = 0x55ef39f2a040
        auth = <optimized out>
        i = 3
#17 0x00007f4ab51b0f6b in auth_got_headers (msg=0x55ef3a345e60 [SoupMessage], manager=0x55ef39ecf570) at ../libsoup/auth/soup-auth-manager.c:632
        priv = 0x55ef39ecf540
        auth = <optimized out>
        prior_auth = <optimized out>
        prior_auth_failed = 0
#18 0x00007f4ab4f294d2 in g_closure_invoke (closure=0x55ef3a60a5b0, return_value=return_value@entry=0x0, n_param_values=1, param_values=param_values@entry=0x7ffe2289f670, invocation_hint=invocation_hint@entry=0x7ffe2289f5f0) at ../gobject/gclosure.c:832
        marshal = 0x7f4ab51e7e90 <status_handler_metamarshal>
        marshal_data = 0x191
        in_marshal = 0
        real_closure = 0x55ef3a60a590
        __func__ = "g_closure_invoke"
#19 0x00007f4ab4f3e1a8 in signal_emit_unlocked_R (node=node@entry=0x55ef3a342010, detail=detail@entry=0, instance=instance@entry=0x55ef3a345e60, emission_return=emission_return@entry=0x0, instance_and_params=instance_and_params@entry=0x7ffe2289f670) at ../gobject/gsignal.c:3796
        tmp = <optimized out>
        handler = 0x55ef3a400600
        accumulator = 0x0
        emission = {next = 0x0, instance = 0x55ef3a345e60, ihint = {signal_id = 32, detail = 0, run_type = (G_SIGNAL_RUN_FIRST | G_SIGNAL_ACCUMULATOR_FIRST_RUN)}, state = EMISSION_RUN, chain_type = 0x4 [void]}
        hlist = <optimized out>
        handler_list = 0x55ef3a380ec0
        return_accu = 0x0
        accu = {g_type = 0x0, data = {{v_int = 0, v_uint = 0, v_long = 0, v_ulong = 0, v_int64 = 0, v_uint64 = 0, v_float = 0, v_double = 0, v_pointer = 0x0}, {v_int = 0, v_uint = 0, v_long = 0, v_ulong = 0, v_int64 = 0, v_uint64 = 0, v_float = 0, v_double = 0, v_pointer = 0x0}}}
        signal_id = 32
        max_sequential_handler_number = 10223
        return_value_altered = <optimized out>
#20 0x00007f4ab4f45115 in g_signal_emit_valist (instance=<optimized out>, signal_id=<optimized out>, detail=<optimized out>, var_args=var_args@entry=0x7ffe2289f7f0) at ../gobject/gsignal.c:3549
        instance_and_params = 0x7ffe2289f670
        signal_return_type = <optimized out>
        param_values = 0x7ffe2289f688
        node = <optimized out>
        i = <optimized out>
        n_params = <optimized out>
        __func__ = "g_signal_emit_valist"
#21 0x00007f4ab4f452e3 in <emit signal ??? on instance 0x55ef3a345e60 [SoupMessage]> (instance=<optimized out>, signal_id=<optimized out>, detail=detail@entry=0) at ../gobject/gsignal.c:3606
        var_args = {{gp_offset = 24, fp_offset = 48, overflow_arg_area = 0x7ffe2289f8d0, reg_save_area = 0x7ffe2289f810}}
#22 0x00007f4ab51e8943 in soup_message_got_headers (msg=<optimized out>) at ../libsoup/soup-message.c:1212
#23 0x00007f4ab51c6408 in on_frame_recv_callback (session=<optimized out>, frame=0x55ef3a579208, user_data=0x55ef3a501f80) at ../libsoup/http2/soup-client-message-io-http2.c:731
        status = 401
        io = 0x55ef3a501f80
        data = 0x55ef3a435aa0
        __func__ = "on_frame_recv_callback"
#24 0x00007f4ab23acd67 in session_call_on_frame_received (frame=0x55ef3a579208, session=0x55ef3a578f30) at ../../lib/nghttp2_session.c:3658
        rv = <optimized out>
        rv = <optimized out>
        frame = 0x55ef3a579208
        stream = 0x55ef3a37bea0
        __PRETTY_FUNCTION__ = "session_after_header_block_received"
        data_readlen = <optimized out>
        trail_padlen = <optimized out>
        final = <optimized out>
        first = <optimized out>
        last = <optimized out>
        iframe = 0x55ef3a579208
        readlen = 1347
        padlen = <optimized out>
        rv = <optimized out>
        busy = 0
        cont_hd = {length = 140729477888544, stream_id = 1683520941, type = 74 'J', flags = 127 '\177', reserved = 0 '\000'}
        stream = <optimized out>
        pri_fieldlen = <optimized out>
        mem = 0x55ef3a579910
        __PRETTY_FUNCTION__ = "nghttp2_session_mem_recv"
#25 session_after_header_block_received (session=0x55ef3a578f30) at ../../lib/nghttp2_session.c:4180
        rv = <optimized out>
        frame = 0x55ef3a579208
        stream = 0x55ef3a37bea0
        __PRETTY_FUNCTION__ = "session_after_header_block_received"
        data_readlen = <optimized out>
        trail_padlen = <optimized out>
        final = <optimized out>
        first = <optimized out>
        last = <optimized out>
        iframe = 0x55ef3a579208
        readlen = 1347
        padlen = <optimized out>
        rv = <optimized out>
        busy = 0
        cont_hd = {length = 140729477888544, stream_id = 1683520941, type = 74 'J', flags = 127 '\177', reserved = 0 '\000'}
        stream = <optimized out>
        pri_fieldlen = <optimized out>
        mem = 0x55ef3a579910
        __PRETTY_FUNCTION__ = "nghttp2_session_mem_recv"
#26 nghttp2_session_mem_recv (session=0x55ef3a578f30, in=0x7ffe2289ffac "", in@entry=0x7ffe2289fa60 "", inlen=inlen@entry=2017) at ../../lib/nghttp2_session.c:6823
        data_readlen = <optimized out>
        trail_padlen = <optimized out>
        final = <optimized out>
        first = <optimized out>
        last = <optimized out>
        iframe = 0x55ef3a579208
        readlen = 1347
        padlen = <optimized out>
        rv = <optimized out>
        busy = 0
        cont_hd = {length = 140729477888544, stream_id = 1683520941, type = 74 'J', flags = 127 '\177', reserved = 0 '\000'}
        stream = <optimized out>
        pri_fieldlen = <optimized out>
        mem = 0x55ef3a579910
        __PRETTY_FUNCTION__ = "nghttp2_session_mem_recv"
#27 0x00007f4ab51c50a5 in io_read (io=0x55ef3a501f80, blocking=<optimized out>, cancellable=0x0, error=0x7ffe228a1ab0) at ../libsoup/http2/soup-client-message-io-http2.c:411
        buffer = "\000\005C\001\004\000\000\000\001 H\003\064\060\061v\204\252cU\347a\226\337=\277J\005\225\065\021*\b\002\022\201r\340\031\270\310Tţ\177_\221I|\245\211\323M\037d\234v \251\203\206\374+=\\\003\066\065\062\000\211 \311\071V!\352M\207\243\232\250\353!'\260\277JSj\022\265\205\356:\r \322_\245)\037\225\207\061`\a\000\207AR\261\016~\246/⇆\374qn\301\273vMZb\311~\002\216VI\033\201Z6]\225f\204\310\326\031^mg$\fo2F\236i\247\027@\276\324\342[\020c\325\000~\324\326\064\317\003\003\265\063\261aGE(c\005\065\320\177E.K\372\330\373Sp\351.\343$\260i=E\373S"...
        read = 2017
        ret = <optimized out>
        __func__ = "io_read"
#28 0x00007f4ab51c52b0 in io_read_ready (stream=<optimized out>, io=0x55ef3a501f80) at ../libsoup/http2/soup-client-message-io-http2.c:437
        error = 0x0
        progress = <optimized out>
        conn = 0x55ef3a34a210 [SoupConnection]
#29 0x00007f4ab4e2c661 in g_main_dispatch (context=<optimized out>) at ../glib/gmain.c:3444
        dispatch = 0x7f4ab5024820 <pollable_source_dispatch>
        prev_source = 0x0
        begin_time_nsec = 8785558435284
        was_in_call = 0
        user_data = 0x55ef3a501f80
        callback = 0x7f4ab51c5200 <io_read_ready>
        cb_funcs = 0x7f4ab4f102c0 <g_source_callback_funcs>
        cb_data = 0x55ef3a420bd0
        need_destroy = <optimized out>
        source = 0x55ef3a56f490
        current = 0x55ef39eeba20
        i = 3
        __func__ = "g_main_dispatch"
#30 g_main_context_dispatch (context=<optimized out>) at ../glib/gmain.c:4162
#31 0x00007f4ab4e2cbb8 in g_main_context_iterate (context=0x55ef39ec9780, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4238
        max_priority = 2147483647
        timeout = 540
        some_ready = 1
        nfds = 15
        allocated_nfds = <optimized out>
        fds = <optimized out>
        begin_time_nsec = 8785504442538
#32 0x00007f4ab4e2ce9f in g_main_loop_run (loop=0x55ef39eca930) at ../glib/gmain.c:4438
        __func__ = "g_main_loop_run"
#33 0x00007f4ab83e8eb0 in WTF::RunLoop::run() () at /usr/lib/debug/source/sdk/webkit2gtk-5.0.bst/Source/WTF/wtf/glib/RunLoopGLib.cpp:108
        runLoop = @0x7f4aa80100e0: {<WTF::FunctionDispatcher> = {_vptr.FunctionDispatcher = 0x7f4ab87c79f0 <vtable for WTF::RunLoop+16>}, <WTF::ThreadSafeRefCounted<WTF::RunLoop, (WTF::DestructionThread)0>> = {<WTF::ThreadSafeRefCountedBase> = {m_refCount = std::atomic<unsigned int> = { 14 }}, <No data fields>}, m_currentIteration = {m_start = 1, m_end = 1, m_buffer = {<WTF::VectorBufferBase<WTF::Function<void()>, WTF::FastMalloc>> = {m_buffer = 0x7f4aa8307e00, m_capacity = 16, m_size = 0}, <No data fields>}}, m_nextIterationLock = {static isHeldBit = 1 '\001', static hasParkedBit = 2 '\002', m_byte = {value = std::atomic<unsigned char> = { 0 '\000' }}}, m_nextIteration = {m_start = 0, m_end = 0, m_buffer = {<WTF::VectorBufferBase<WTF::Function<void()>, WTF::FastMalloc>> = {m_buffer = 0x0, m_capacity = 0, m_size = 0}, <No data fields>}}, m_isFunctionDispatchSuspended = false, m_hasSuspendedFunctions = false, static s_runLoopSourceFunctions = {prepare = 0x0, check = 0x0, dispatch = 0x7f4ab83e8cf0 <_FUN(GSource*, GSourceFunc, gpointer)>, finalize = 0x0, closure_callback = 0x0, closure_marshal = 0x0}, m_mainContext = {m_ptr = 0x55ef39ec9780}, m_mainLoops = {<WTF::VectorBuffer<WTF::GRefPtr<_GMainLoop>, 0, WTF::FastMalloc>> = {<WTF::VectorBufferBase<WTF::GRefPtr<_GMainLoop>, WTF::FastMalloc>> = {m_buffer = 0x7f4aa8008180, m_capacity = 16, m_size = 1}, <No data fields>}, <No data fields>}, m_source = {m_ptr = 0x55ef39eca950}, m_observers = {m_set = {m_impl = {{m_table = 0x0, m_tableForLLDB = 0x0}}}}}
        mainContext = 0x55ef39ec9780
        innermostLoop = 0x55ef39eca930
        nestedMainLoop = <optimized out>
#34 0x00007f4ab9433430 in WebKit::AuxiliaryProcessMainBase<WebKit::NetworkProcess, false>::run(int, char**) (argc=3, argv=0x7ffe228a1e58, this=0x7ffe228a1cb0) at /usr/lib/debug/source/sdk/webkit2gtk-5.0.bst/Source/WebKit/Shared/AuxiliaryProcessMain.h:71
        auxiliaryMain = {m_storage = {__data = "\340\313M\274J\177", '\000' <repeats 26 times>, "\001\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\f", '\000' <repeats 15 times>, "\001\000\000\000\000\000\000\000\300\000\003\250J\177\000", __align = {<No data fields>}}}
#35 WebKit::AuxiliaryProcessMainBase<WebKit::NetworkProcess, false>::run(int, char**) (argv=0x7ffe228a1e58, argc=3, this=0x7ffe228a1cb0) at /usr/lib/debug/source/sdk/webkit2gtk-5.0.bst/Source/WebKit/Shared/AuxiliaryProcessMain.h:58
        auxiliaryMain = {m_storage = {__data = "\340\313M\274J\177", '\000' <repeats 26 times>, "\001\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\f", '\000' <repeats 15 times>, "\001\000\000\000\000\000\000\000\300\000\003\250J\177\000", __align = {<No data fields>}}}
#36 WebKit::AuxiliaryProcessMain<WebKit::NetworkProcessMainSoup>(int, char**) (argc=3, argv=0x7ffe228a1e58) at /usr/lib/debug/source/sdk/webkit2gtk-5.0.bst/Source/WebKit/Shared/AuxiliaryProcessMain.h:97
        auxiliaryMain = {m_storage = {__data = "\340\313M\274J\177", '\000' <repeats 26 times>, "\001\000\000\000\000\000\000\000\001\000\000\000\000\000\000\000\f", '\000' <repeats 15 times>, "\001\000\000\000\000\000\000\000\300\000\003\250J\177\000", __align = {<No data fields>}}}
#37 0x00007f4ab886154a in __libc_start_call_main (main=main@entry=0x55ef3878b060 <main>, argc=argc@entry=3, argv=argv@entry=0x7ffe228a1e58) at ../sysdeps/nptl/libc_start_call_main.h:58
        self = <optimized out>
        result = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140729477897816, -4170227206851225090, 3, 0, 94485932989840, 139958966870016, -4170227206836545026, -4086642367529707010}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x3, 0x7ffe228a1e50}, data = {prev = 0x0, cleanup = 0x0, canceltype = 3}}}
        not_first_call = <optimized out>
#38 0x00007f4ab886160b in __libc_start_main_impl (main=0x55ef3878b060 <main>, argc=3, argv=0x7ffe228a1e58, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=<optimized out>) at ../csu/libc-start.c:389
#39 0x000055ef3878b095 in _start ()

Error messages from Kerberos are not logged

I am using the rpc-gssd and gssproxy mechanism, and I found a mistake in my /etc/krb5.conf:

    default_ccache_name = DIR:/home/%{username}/.k5_ccache

This used to work for regular users needing a ticket, but it fails when root tries to mount an NFS volume, as there is no /home/root directory. It sounds trivial, but the investigation took a while:

With verbosity activated, rpc-gssd logs:

ERROR: GSS-API: error in gss_acquire_cred(): GSS_S_FAILURE (Unspecified GSS failure.  Minor code may provide more information) - (0x9ae73ac3)

which is not helpful.

In this case Kerberos constructs a readable error message; gssproxy, however, simply stores the raw Kerberos error code as a "minor code" and is later unable to display the message.

Is it possible to improve gssproxy so that errors coming from the Kerberos API are logged properly? Thanks

I was able to verify that a change like

diff --git a/src/mechglue/gpp_creds.c b/src/mechglue/gpp_creds.c
index 677834d..84db676 100644
--- a/src/mechglue/gpp_creds.c
+++ b/src/mechglue/gpp_creds.c
@@ -327,6 +327,11 @@ OM_uint32 gppint_retrieve_remote_creds(uint32_t *min, const char *ccache_name,
 
 done:
     if (ctx) {
+        if (ret) {
+            char* msg = krb5_get_error_message(ctx, ret);
+            gpm_save_internal_status(ret, msg);
+            krb5_free_error_message(ctx, msg);
+        }
         krb5_free_cred_contents(ctx, &cred);
         krb5_free_cred_contents(ctx, &icred);
         if (ccache) krb5_cc_close(ctx, ccache);

makes the error message a lot more helpful:

rpc.gssd[54289]: ERROR: GSS-API: error in gss_acquire_cred(): GSS_S_FAILURE (Unspecified GSS failure.  Minor code may provide more information) - Credential cache directory /home/root/.k5_ccache does not exist
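For anyone hitting the same configuration trap: independent of the logging fix, a ccache type that does not depend on a per-user home directory sidesteps the original failure. One common choice is the kernel-keyring ccache (this is a workaround suggestion, not the fix for the reporting issue itself):

```ini
# /etc/krb5.conf -- KEYRING ccaches live in the kernel keyring, so
# root's NFS mounts do not need a /home/root directory to exist.
[libdefaults]
    default_ccache_name = KEYRING:persistent:%{uid}
```

The %{uid} expansion is performed by libkrb5 and resolves for root as well, unlike a %{username} path under /home.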

[0.9.0] doesn't allow disabled systemd

With version 0.9.0, despite --with-initscript=none, the build errors out with:

configure: error: conditional "HAVE_SYSTEMD_DAEMON" was never defined.
Usually this means the macro was only invoked conditionally.

I think this commit is responsible, but I haven't tracked it down further. It might be the m4 systemd macro.

145c7ce

Migrate test suite to 389ds

Currently the test suite uses OpenLDAP as the server. However, we'd like to be able to run it on RHEL/CentOS, which means it ought to use 389ds instead.

n.b.: While it's tempting to use freeipa rather than doing this by hand, there are two large problems with that. First, freeipa takes forever to download and install because it's setting up so much else that we don't need (we just need a KDC). Second, freeipa's server isn't usable in distros that aren't Fedora-like, while gssproxy is; within reason, I'd like to keep parity between where we can run our code and our tests.

Unexpected failure in realpath: 13 (Permission denied)

Hi

I'm trying to set up gssproxy for Apache. It works fine in a normal environment, but when I run it in a podman container I get an error. It is not yet clear to me whether this is the real source of my problems or just an incidental error message, but it seemed worth reporting anyway.

gssproxy[5215]: Unexpected failure in realpath: 13 (Permission denied)

Indeed there is a permission issue:

root@3e205a109ea6:~# ps auxww|grep apac
root        5088  0.0  0.1  12668  6720 ?        Ss   10:23   0:00 /usr/sbin/apache2 -k start
www-data    5154  0.0  0.1 1938280 6664 ?        Sl   10:23   0:00 /usr/sbin/apache2 -k start
www-data    5155  0.0  0.1 2003816 10268 ?       Sl   10:23   0:00 /usr/sbin/apache2 -k start
root        5246  0.0  0.0   3180   712 pts/0    S+   10:28   0:00 grep apac
root@3e205a109ea6:~# ls -la /proc/5088/exe
lrwxrwxrwx 1 root root 0 Sep 14 10:28 /proc/5088/exe -> /usr/sbin/apache2
root@3e205a109ea6:~# ls -la /proc/5154/exe
ls: cannot read symbolic link '/proc/5154/exe': Permission denied
lrwxrwxrwx 1 root root 0 Sep 14 10:28 /proc/5154/exe
root@3e205a109ea6:~# 
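The failure matches the /proc/&lt;pid&gt;/exe symlink above: the kernel's ptrace access check refuses the readlink for the privilege-dropped workers. A small diagnostic in Python (a hypothetical helper for reproducing the check, not gssproxy code) makes it easy to probe which peer PIDs are resolvable from inside the container:

```python
import errno
import os

def peer_exe(pid):
    """Resolve /proc/<pid>/exe, roughly what gssproxy's realpath check does.

    Returns (path, None) on success, or (None, errno name) on failure;
    EACCES means the kernel's ptrace access check rejected us, as with
    the www-data workers above.
    """
    try:
        return os.readlink(f"/proc/{pid}/exe"), None
    except OSError as e:
        return None, errno.errorcode.get(e.errno, str(e.errno))

# Our own process should always be resolvable:
print(peer_exe(os.getpid()))
```

Running this as root inside the container against the apache worker PIDs (5154/5155 above) should report EACCES, matching the `ls` failure.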

RFE: implement a flex/bison-based INI file parser

Currently gssproxy uses libini_config to parse its configuration file.
That library pulls in the libbasicobjects, libcollection, libpath_utils, and libref_array libraries, all of this to parse only a few tokens.

As gssproxy is a core distribution package, I think it would be much better to rewrite that part using flex/bison to reduce the size of the minimal system. https://github.com/ezaquarii/bison-flex-cpp-example could probably serve as a good starting point.
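For a sense of scale, the grammar gssproxy actually needs is tiny: section headers, key = value pairs, and comments. A rough sketch in Python (illustration of the grammar's size only; the proposal is flex/bison in C, and the real libini_config grammar additionally handles includes and quoting):

```python
import re

# Minimal parser for the gssproxy-style INI subset:
# [section] headers, key = value pairs, '#'/';' comments.
SECTION = re.compile(r'^\[(?P<name>[^\]]+)\]$')
KEYVAL = re.compile(r'^(?P<key>[^=\s][^=]*?)\s*=\s*(?P<val>.*)$')

def parse_ini(text):
    config = {}
    section = None
    for lineno, raw in enumerate(text.splitlines(), 1):
        line = raw.strip()
        if not line or line[0] in '#;':
            continue  # blank line or comment
        m = SECTION.match(line)
        if m:
            section = config.setdefault(m.group('name').strip(), {})
            continue
        m = KEYVAL.match(line)
        if m and section is not None:
            section[m.group('key').strip()] = m.group('val').strip()
            continue
        raise ValueError(f"line {lineno}: cannot parse {raw!r}")
    return config

conf = parse_ini("""
[gssproxy]
  debug = true

[service/nfs-server]
  mechs = krb5
  socket = /run/gssproxy.sock
""")
print(conf["service/nfs-server"]["mechs"])  # krb5
```

The hard part of a rewrite would not be the grammar but matching libini_config's merge and error-reporting semantics exactly.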

User mode proxy should quit after a short timeout

Since the user mode gssproxy instance is socket-activated, it should quit after a short idle timeout to reduce system resource utilization; it will simply be activated again when needed. Typical timeouts for this would be 1 or 5 minutes, depending on preference. (5 minutes might be better here, because the first authentication seems to be fairly slow.)
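Sketching what the systemd side of this could look like, with hypothetical unit names and a hypothetical --idle-timeout option (gssproxy itself would need to grow the exit-on-idle logic; systemd's job is only to re-activate it):

```ini
# gssproxy-user.socket (hypothetical): systemd keeps listening even
# while the service is stopped, so no request is lost.
[Socket]
ListenStream=%t/gssproxy.sock

# gssproxy-user.service (hypothetical): the daemon exits after five
# idle minutes; the next client connection starts it again.
[Service]
ExecStart=/usr/sbin/gssproxy -i --idle-timeout=300
```

Because the socket unit holds the listening fd across service restarts, clients never see the daemon stopping.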
