
udica's Introduction


udica - Generate SELinux policies for containers!


Overview

This repository contains a tool for generating SELinux security profiles for containers. The whole concept is based on the "block inheritance" feature of the CIL intermediate language supported by the SELinux userspace. The tool creates a policy that combines rules inherited from specified CIL blocks (templates) and rules discovered by inspecting the container's JSON file, which contains mount point and port definitions.

The final policy can be loaded immediately or moved to another system, where it can be loaded via semodule.
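For example, a minimal sketch of moving a generated policy to another host and loading it there (host and file names are hypothetical; the udica CIL templates referenced by the policy must also be available on the target system):

$ scp my_container.cil user@target-host:
$ ssh user@target-host
$ sudo semodule -i my_container.cil /usr/share/udica/templates/base_container.cil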

What's with the weird name?

The name of this tool is derived from the Slovak word "udica" [uɟit͡sa], which means "fishing rod". It is a reference to the saying "Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime." Here udica is the fishing rod that allows you to get the fish (container policy) yourself, instead of always having to ask your local fisherman (SELinux expert) to catch (create) it for you ;)

State

This tool is still in an early phase of development. Any feedback, ideas, and pull requests are welcome. We're still adding new features, parameters, and policy blocks that could be used.

Proof of concept

The tool was created based on the following PoC, where the process of creating a policy is described: https://github.com/fedora-selinux/container-selinux-customization

Supported container engines

Udica supports the following container engines:

  • CRI-O v1.14.10+
  • docker v1.13+
  • podman v2.0+
  • containerd v1.5.0+ (using nerdctl v0.14+ or crictl)

Installing

Install the udica tool with all dependencies:

$ sudo dnf install -y podman setools-console git container-selinux
$ git clone https://github.com/containers/udica
$ cd udica && sudo python3 ./setup.py install

Alternatively, you can run udica directly from git:

$ python3 -m udica --help

Another way to install udica is from the Fedora repositories:

# dnf install udica -y

Or you can use the Python Package Index (PyPI):

# pip install udica

Make sure that SELinux is in Enforcing mode:

# setenforce 1
# getenforce
Enforcing

Current situation

Let's start a podman container with the following parameters:

# podman run -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
  • The container bind mounts /home with read-only permissions
  • The container bind mounts /var/spool with read/write permissions
  • The container publishes container port 21 to the host

The container runs with the container_t type and the c447,c628 category pair.
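You can confirm the type and category pair from the host, for example (a sketch; PIDs and categories will differ on your system):

# ps -efZ | grep container_t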

Accessing the mounted /home does not work:

[root@37a3635afb8f /]# cd /home/
[root@37a3635afb8f home]# ls
ls: cannot open directory '.': Permission denied

This is because there is no allow rule for container_t to access /home (home_root_t):

# sesearch -A -s container_t -t home_root_t
#

Accessing the mounted /var/spool does not work either:

[root@37a3635afb8f home]# cd /var/spool/
[root@37a3635afb8f spool]# ls
ls: cannot open directory '.': Permission denied
[root@37a3635afb8f spool]# touch test
touch: cannot touch 'test': Permission denied

This is because there is no allow rule for container_t to access /var/spool (var_spool_t):

# sesearch -A -s container_t -t var_spool_t -c dir -p read
#

On the other hand, network access is allowed completely:

# sesearch -A -s container_t -t port_type -c tcp_socket
allow container_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };
allow sandbox_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };

# sesearch -A -s container_t -t port_type -c udp_socket
allow container_net_domain port_type:udp_socket { name_bind recv_msg send_msg };
allow sandbox_net_domain port_type:udp_socket { name_bind recv_msg send_msg };

It would be great to restrict this access and allow the container to bind only to TCP port 21, or to ports with the same SELinux label.

Creating SELinux policy for container

To create a policy for a container, the container for which the policy will be generated has to be running. The container from the previous chapter will be used.

Let's find the container ID using the podman ps command:

# podman ps
CONTAINER ID   IMAGE                             COMMAND   CREATED          STATUS              PORTS   NAMES
37a3635afb8f   docker.io/library/fedora:latest   bash      15 minutes ago   Up 15 minutes ago           heuristic_lewin

The container ID is 37a3635afb8f.

To create a policy for it, the udica tool can be used. The '-j' parameter specifies the container JSON file; the last argument is the SELinux policy name for the container.

# podman inspect 37a3635afb8f > container.json
# udica -j container.json  my_container

or

# podman inspect 37a3635afb8f | udica  my_container

Policy my_container with container id 37a3635afb8f created!

Please load these modules using:
# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}

Restart the container with: "--security-opt label=type:my_container.process" parameter

The policy is generated. Let's follow the instructions from the output:

# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}

# podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash

The container is now running with the my_container.process type:

# ps -efZ | grep my_container.process
unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 root 2275 434  1 13:49 pts/1 00:00:00 podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
system_u:system_r:my_container.process:s0:c270,c963 root 2317 2305  0 13:49 pts/0 00:00:00 bash

SELinux now allows access to /home and /var/spool mount points:

[root@814ec56079e5 /]# cd /home
[root@814ec56079e5 home]# ls
lvrabec

[root@814ec56079e5 ~]# cd /var/spool/
[root@814ec56079e5 spool]# touch test
[root@814ec56079e5 spool]#

SELinux now allows binding to TCP/UDP port 21, but not to port 80:

[root@5bd8cb2ad911 /]# nc -lvp 21
Ncat: Version 7.60 ( https://nmap.org/ncat )
Ncat: Generating a temporary 1024-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
Ncat: SHA-1 fingerprint: 6EEC 102E 6666 5F96 CC4F E5FA A1BE 4A5E 6C76 B6DC
Ncat: Listening on :::21
Ncat: Listening on 0.0.0.0:21

[root@5bd8cb2ad911 /]# nc -lvp 80
Ncat: Version 7.60 ( https://nmap.org/ncat )
Ncat: Generating a temporary 1024-bit RSA key. Use --ssl-key and --ssl-cert to use a permanent one.
Ncat: SHA-1 fingerprint: 6EEC 102E 6666 5F96 CC4F E5FA A1BE 4A5E 6C76 B6DC
Ncat: bind to :::80: Permission denied. QUITTING.

Creating SELinux policy for confined user

Each Linux user on an SELinux-enabled system is mapped to an SELinux user. By default, administrators can choose between the following SELinux users when confining a user account: root, staff_u, sysadm_u, user_u, xguest_u, guest_u (and unconfined_u, which does not limit the user's actions).

To give administrators more options in confining users, udica now provides a way to generate a custom SELinux user (and corresponding roles and types) based on the specified parameters. The new user policy is assembled using a set of predefined policy macros based on use-cases (managing network, administrative tasks, etc.).

To generate a confined user, use the "confined_user" keyword followed by a list of options:

Option                     Use case
-a, --admin_commands       Use administrative commands (vipw, passwd, ...)
-g, --graphical_login      Use a graphical login environment
-m, --mozilla_usage        Use Mozilla Firefox
-n, --networking           Manage basic networking (ip, ifconfig, traceroute, tcpdump, ...)
-d, --security_advanced    Manage SELinux settings (semanage, semodule, sepolicy, ...)
-i, --security_basic       Use read-only security-related tools (seinfo, getsebool, sesearch, ...)
-s, --sudo                 Run commands as root using sudo
-l, --user_login           Basic rules common to all users (tty, pty, ...)
-c, --ssh_connect          Connect over SSH
-b, --basic_commands       Use basic commands (date, ls, ps, man, systemctl --user, journalctl --user, passwd, ...)

The new user also needs to be assigned an MLS/MCS level and range. By default these are set to s0 and s0:c0.c1023 respectively, which works well in targeted policy mode. For more details, see the Red Hat Multi-Level Security documentation.

$ udica confined_user -abcdgilmns --level s0 --range "s0:c0" custom_user

Created custom_user.cil
Run the following commands to apply the new policy:
Install the new policy module
# semodule -i custom_user.cil /usr/share/udica/macros/confined_user_macros.cil
Create a default context file for the new user
# sed -e 's|user|custom_user|g' /etc/selinux/targeted/contexts/users/user_u > /etc/selinux/targeted/contexts/users/custom_user_u
Map the new selinux user to an existing user account
# semanage login -a -s custom_user_u custom_user
Fix labels in the user's home directory
# restorecon -RvF /home/custom_user

As prompted by udica, the new user policy needs to be installed into the system along with the confined_user_macros file, and a default context file needs to be created, before the policy is ready to be used.

The last step is either assigning the new SELinux user to an existing Linux user (using semanage login), or specifying it when creating a new Linux user account (in which case there is no need to run restorecon on the new home directory):

useradd -Z custom_user_u <new_user_name>

The created policy defines a new SELinux user <user_name>_u, a corresponding role <user_name>_r, and a list of types (varying based on the selected options): <user_name>_t, <user_name>_sudo_t, <user_name>_ssh_agent_t, ...

See the Red Hat Confined Users documentation for more details about confined users, their assignment, the available roles, and the access they allow.
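As a quick sanity check (a sketch; the account name is hypothetical), you can list the login mappings with semanage and, after logging in as the mapped account, inspect the resulting context:

# semanage login -l | grep custom_user
$ id -Z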

SELinux labels vs. objects they represent

Policies generated by udica work with SELinux labels, as opposed to filesystem paths, port numbers, etc. This means that allowing access to a given path (e.g. a directory mounted into your container), port number, or any other resource may also allow access to other resources you didn't specify, since the same SELinux label can be assigned to multiple resources.

For example, a container using port 21 will also be given access to ports 989 and 990 by udica, since all the listed ports share a single label:

# sudo semanage port -l | grep 21
ftp_port_t                     tcp      21, 989, 990

Similarly, bind mounting a sub-directory of your home directory will result in a container policy allowing access to almost all data in the home directory, unless a non-default label is used for the mounted path (see the sketch after the listing below).

# sudo semanage fcontext -l | grep user_home_t
/home/[^/]+/.+                                     all files          unconfined_u:object_r:user_home_t:s0
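If exposing the whole home directory is not acceptable, one option (a sketch using standard SELinux tooling rather than a udica feature; the path is hypothetical) is to give the shared sub-directory a dedicated label such as container_file_t before generating the policy, so udica picks up the narrower label instead of user_home_t:

# semanage fcontext -a -t container_file_t '/home/<user>/shared(/.*)?'
# restorecon -RvF /home/<user>/shared

Alternatively, podman's :Z volume option relabels the mounted content privately for the container.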

Running from a container

To build the udica container image into your local registry, run the following command:

$ make image

Once the image is built, it's possible to run udica from within a container. The necessary directories to bind mount are:

  • /sys/fs/selinux
  • /etc/selinux/
  • /var/lib/selinux/

For reference, this is one way to call the container via podman:

podman run --user root --privileged -ti \
    -v /sys/fs/selinux:/sys/fs/selinux \
    -v /etc/selinux/:/etc/selinux/ \
    -v /var/lib/selinux/:/var/lib/selinux/ \
    --rm --name=udica udica
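For example, a hypothetical invocation that pipes a container's inspection JSON to the containerized udica on stdin (assuming the image's entry point runs udica itself; adjust the trailing arguments to match your image):

# podman inspect <container_id> | podman run -i --user root --privileged \
    -v /sys/fs/selinux:/sys/fs/selinux \
    -v /etc/selinux/:/etc/selinux/ \
    -v /var/lib/selinux/:/var/lib/selinux/ \
    --rm --name=udica udica my_container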

Testing

The udica repository contains unit tests for the basic functionality of the tool. To run the tests, use:

$ make test

On SELinux-enabled systems you can also run (root access required):

# python3 tests/test_integration.py

Udica in OpenShift

Udica can run in OpenShift and generate SELinux policies for pods in the same instance. The SELinux policy helper operator is a controller that watches all pods in the system. It attempts to generate a policy for a pod when the pod is annotated with the "generate-selinux-policy" tag and is in a running state. To generate the policy, it spawns a pod with the selinux-k8s tool, which uses udica to generate the policy and outputs a ConfigMap containing it. One possible way to annotate a pod is sketched below.
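For example, a hypothetical annotation command (the exact annotation key and value expected by the operator may differ; check the selinux-policy-helper-operator documentation):

$ oc annotate pod <pod-name> generate-selinux-policy=""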

A real example is demonstrated in the following demo.

Demo

asciicast

Known issues

  • It's not possible to detect the capabilities used by a container under the docker engine, so you have to use the '-c' option to specify the capabilities of a docker container manually.
  • It's not possible to generate a custom local policy using the "audit2allow -M" tool from AVCs whose source context was generated by udica. Use the '--append-rules' option for this purpose instead.
  • In some situations udica fails to identify which container engine is used, so the "--container-engine" parameter has to be used to tell udica how the JSON inspection file should be parsed, as shown in the sketch below.
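For example, a sketch of forcing the docker parser when auto-detection fails (the container ID and file name are placeholders):

# docker inspect <container_id> > container.json
# udica --container-engine docker -j container.json my_container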

udica's People

Contributors

alegrey91, ashcrow, bachradsusi, cevich, janzarsky, jaormx, jean-edouard, martinbasti, mjahoda, mskott, renovate[bot], rhatdan, tomastomecek, tomsweeneyredhat, tristancacqueray, tscherf, ttreuthardt, vmojzis, wonder93, wrabcak


udica's Issues

Meta task silently fails with permission error

Describe the bug

/usr/local/bin/entrypoint.sh |& ${_TIMESTAMP}
[01:18:08] START - All [+xxxx] lines that follow are relative to right now.
[+0001s] Activated service account credentials for: [[email protected]]
[+0003s] ERROR: (gcloud.compute.images.update) HTTPError 403: Required 'compute.images.get' permission for 'projects/SECRET/global/images/fedora-32-podman-6530021898584064'
[+0004s] ERROR: (gcloud.compute.images.update) HTTPError 403: Required 'compute.images.get' permission for 'projects/SECRET/global/images/fedora-31-podman-6530021898584064'
[01:18:12] END - [+0004s] total duration since START

To Reproduce

  1. Submit pull request or merge pull request

Expected behavior

The meta task should never fail, and probably shouldn't fail silently (my fault).

Additional context

I checked the permissions of the service account, and they appear to have 'compute.images.get' access.

udica cannot use the container ID once it is provided

Describe the bug
Help message of udica contains:
-i CONTAINERID, --container-id CONTAINERID
Running container ID

but udica still needs a docker file or directory.

To Reproduce
Steps to reproduce the behavior:

ps -efZ | grep mycontainer

unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 root 7712 6221 0 09:46 pts/0 00:00:00 podman run --security-opt label=type:mycontainer.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
system_u:system_r:mycontainer.process:s0:c62,c167 root 7801 7791 0 09:46 pts/0 00:00:00 bash
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 root 7900 7525 0 09:53 pts/1 00:00:00 grep --color=auto mycontainer

podman ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
063c3ef6f436 docker.io/library/fedora:latest bash 7 minutes ago Up 7 minutes ago 0.0.0.0:21->21/tcp sad_mahavira

udica -i 063c3ef6f436 mycontainer

Traceback (most recent call last):
File "/usr/local/bin/udica", line 11, in <module>
load_entry_point('udica==0.1.1', 'console_scripts', 'udica')()
File "/usr/local/lib/python3.6/site-packages/udica-0.1.1-py3.6.egg/udica/__main__.py", line 56, in main
File "/usr/lib64/python3.6/subprocess.py", line 287, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib64/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/usr/lib64/python3.6/subprocess.py", line 1364, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'docker': 'docker'

Expected behavior
Either the container ID is sufficient for udica to work successfully, or documentation should advise users to create the 'docker' file or directory.

Creating a policy as non-root not working as expected

I am testing the functionality that was added with this PR #78

When attempting to create a policy, per the PR and the tests, I don't get the same results. When launching a container to create a policy from, i.e.

podman run --device /dev/tty0 fedora /bin/bash

I don't get the device to map. I am running this rootless. When doing an inspect on the container, I see this

"Devices": []

If I run the container as root, the device maps. I am not sure if this is the way udica/podman is supposed to work. Obviously, udica can't create a policy that would allow the device if it's not part of the container metadata. Does udica require the container used to build the policy to have the correct permissions? If so, perhaps this needs to be added to the documentation. If not, I am wondering if devices somehow work differently than volumes, for example. Perhaps devices can't even map if they are denied by SELinux. Since the device can't even be mounted, udica would never know that it needs to build a policy for it. I hope this makes sense; I am new to udica, but it appears to be solving an issue I've had for quite some time, it just isn't working the way I expect it to.

If udica is run directly from git without installing, it fails

Describe the bug

README.md says:

Alternatively you can run udica directly from git:

$ python3 -m udica --help

--help works but when you try to generate a policy you get:

# python3 -m udica -i 78f693e41d93 bz1123

Policy bz1123 created!
Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/p/devel/github/containers/udica/udica/__main__.py", line 134, in <module>
    main()
  File "/home/p/devel/github/containers/udica/udica/__main__.py", line 129, in main
    load_policy(opts)
  File "/home/p/devel/github/containers/udica/udica/policy.py", line 193, in load_policy
    chdir(TEMPLATES_STORE)
FileNotFoundError: [Errno 2] No such file or directory: '/usr/share/udica/templates'

created policy blocks access to /dev/null

Describe the bug
Custom policy created with udica blocks access to /dev/null

To Reproduce
Steps to reproduce the behavior:

  1. create container with podman, create json file with podman inspect, feed json file to udica
  2. create policy with udica, install with module with semodule -i ...
  3. start container with additional parameter --security-opt label=type:POLICYNAME.process

Expected behavior
Container works as before, only a bit more protected in case of unusual behavior.

Additional context
Container does not start. Running
podman start -i -a container returns Couldn't open /dev/null: Permission denied
Running the container without --security-opt ... works without problem, thus I suspect that the generated policy is a bit too strict.

Possibly missing needed attributes for a given policy?

Describe the bug

Udica could possibly be missing some attributes/rules for a given policy generation. I ran udica for a given container (in this case I was trying it for rook/ceph) that was getting avc denials, then updated the container to have the type, but it is still getting denied. Audit2allow still shows a potential rule that might have allowed the container to do what it was trying to do. explained below:

To Reproduce
Steps to reproduce the behavior:

  1. attempt to install rook-ceph, in this case I am working on the init-container known as chown-container-data-dir for one of the deployments which is being denied (there are others but this is the first one being denied).

  2. find the crashing container, inspect and run udica against it. Here is the resulting policy generated:

cat chown-container-data-dir.cil
(block chown-container-data-dir
    (blockinherit container)
    (allow process var_lib_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write )))
    (allow process var_lib_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write )))
    (allow process var_lib_t ( sock_file ( append getattr open read write )))
    (allow process var_lib_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write )))
    (allow process var_lib_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write )))
    (allow process var_lib_t ( sock_file ( append getattr open read write )))
    (allow process var_lib_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write )))
    (allow process var_lib_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write )))
    (allow process var_lib_t ( sock_file ( append getattr open read write )))
)
  3. Follow the instructions provided by udica, in my case:
    semodule -i chown-container-data-dir.cil /usr/share/udica/templates/base_container.cil
    then update the container to use the new type/label.

  4. Even with the new policy in place, the container is still being denied:

type=SYSCALL msg=audit(1627570460.894:2008): arch=c000003e syscall=260 success=no exit=-13 a0=ffffff9c a1=55ef2249e3a0 a2=a7 a3=a7 items=0 ppid=77030 pid=84832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="chown" exe="/usr/bin/coreutils" subj=system_u:system_r:chown-container-data-dir.process:s0:c82,c419 key=(null)
type=AVC msg=audit(1627570460.894:2008): avc:  denied  { setattr } for  pid=84832 comm="chown" name="data" dev="nvme0n1p2" ino=33783456 scontext=system_u:system_r:chown-container-data-dir.process:s0:c82,c419 tcontext=system_u:object_r:container_var_lib_t:s0 tclass=dir permissive=0

notice the scontext: scontext=system_u:system_r:chown-container-data-dir.process:s0:c82,c419

Expected behavior
the container with the policy in place, and with the type set to proper type it should get past the denial. this of course would mean (according to my understanding) it would not miss any needed allow rules (see below)

Additional context

These outputs might be useful; let's look at that specific source and target with class dir:

sesearch -A -s chown-container-data-dir.process -t container_var_lib_t -c dir
allow container_domain container_var_lib_t:dir { add_name getattr ioctl lock open read remove_name search write };
allow container_domain file_type:dir { getattr open search };
allow svirt_sandbox_domain container_var_lib_t:dir { add_name getattr ioctl lock open read remove_name search write };
allow svirt_sandbox_domain file_type:dir { getattr open search };

and audit2allow "thinks" an extra setattr is needed, only care about the first section specific to chown-container-data-dir.process, as I don't want to give entire container_t setattr obviously:

audit2allow -a
#============= chown-container-data-dir.process ==============
allow chown-container-data-dir.process container_var_lib_t:dir setattr;
#============= container_t ==============
#!!!! This avc can be allowed using the boolean 'container_manage_cgroup'
allow container_t cgroup_t:file write;
allow container_t container_var_lib_t:dir setattr;

possibly some significant sections from the inspect json:

crictl -r unix:///run/containerd/containerd.sock inspect a5c7874801937 | grep mountLabel
        "mountLabel": "system_u:object_r:container_file_t:s0:c203,c480"


[root@ip-10-42-32-235 ~]# crictl -r unix:///run/containerd/containerd.sock inspect a5c7874801937 | grep selinux
        "selinuxRelabel": true
        "selinuxRelabel": true
        "selinuxRelabel": false
        "selinuxRelabel": false
        "selinuxRelabel": false
        "selinuxRelabel": true
        "selinuxRelabel": true
        "selinuxRelabel": true
          "selinux_relabel": true
          "selinux_relabel": true
          "selinux_relabel": true
          "selinux_relabel": true
          "selinux_relabel": true
          "selinux_options": {
        "selinuxLabel": "system_u:system_r:chown-container-data-dir.process:s0:c203,c480"


[root@ip-10-42-32-235 ~]# crictl -r unix:///run/containerd/containerd.sock inspect a5c7874801937 | grep selinux_options -C 3
          "namespace_options": {
            "pid": 1
          },
          "selinux_options": {
            "type": "chown-container-data-dir.process"
          },
          "run_as_user": {},

let me know if anything else might be useful, like the full inspect json.

Or if I missed something obvious...I apologize but thank you for your consideration.

CentOS Stream 8, udica returns errors when building CIL...

Describe the bug
When running udica, the following error is returned:
Traceback (most recent call last):
File "/usr/bin/udica", line 11, in <module>
load_entry_point('udica==0.2.6', 'console_scripts', 'udica')()
File "/usr/lib/python3.6/site-packages/udica/__main__.py", line 216, in main
container_caps = sorted(engine_helper.get_caps(container_inspect, opts))
TypeError: 'NoneType' object is not iterable

To Reproduce
Steps to reproduce the behavior:

  1. podman inspect f8d0cb6c653e >b13test.json
  2. udica -j b13test.json b13test
  3. Aforementioned output is displayed

Expected behavior
Expected output :
Policy b13test with container id f8d0cb6c653e created!

Additional context
See b13test.json as attached file
b13test.zip

$ podman version
Version: 4.0.0-dev
API Version: 4.0.0-dev
Go Version: go1.16.7
Built: Thu Sep 30 17:17:20 2021
OS/Arch: linux/amd64

$ udica --version
0.2.6

$ more /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"

Su not working in container with udica generated policy

Describe the bug
When a container is run under a udica-generated policy, the su command does not work.

To Reproduce
Steps to reproduce the behavior:

  1. Install this rule generated by udica:
(block container-sabnzbd
   (blockinherit container)
   (blockinherit net_container)
   (blockinherit restricted_net_container)
   (allow process process ( capability ( audit_write chown dac_override fowner fsetid kill mknod net_bind_service net_raw setfcap setgid setpcap setuid sys_chroot ))) 

   (allow process unreserved_port_t ( tcp_socket (  name_bind ))) 
   (allow process container_file_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write ))) 
   (allow process container_file_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write ))) 
   (allow process container_file_t ( sock_file ( append getattr open read write ))) 
   (allow process container_file_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write ))) 
   (allow process container_file_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write ))) 
   (allow process container_file_t ( sock_file ( append getattr open read write ))) 
   (allow process public_content_rw_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write ))) 
   (allow process public_content_rw_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write ))) 
   (allow process public_content_rw_t ( sock_file ( append getattr open read write ))) 
)
  2. podman run --security-opt label=type:container-sabnzbd.process -it debian:buster /bin/sh
  3. su => su: System error

Expected behavior
Su should work as expected, like when not specifying --security-opt label=type:container-sabnzbd.process.

Solution

Udica seems to need the (allow process process ( netlink_audit_socket ( nlmsg_read nlmsg_relay nlmsg_tty_audit ))) rule from container-selinux. Adding this rule fixes the problem; I'm not sure, though, whether it's possible to auto-detect when it is required.

port ranges are not supported

Describe the bug
When running udica on a container with ports that are part of a range, it crashes. For example, port 8612 is part of the range 8610-8614.

To Reproduce
Steps to reproduce the behavior:

  1. podman run -p 8612 fedora bash
  2. udica -i <container_id> my_container
    Traceback (most recent call last):
    File "/usr/bin/udica", line 11, in <module>
    load_entry_point('udica==0.1.4', 'console_scripts', 'udica')()
    File "/usr/lib/python3.7/site-packages/udica/__main__.py", line 107, in main
    create_policy(opts, container_caps, container_mounts, container_ports)
    File "/usr/lib/python3.7/site-packages/udica/policy.py", line 118, in create_policy
    policy.write(' (allow process ' + list_ports(item['hostPort']) + ' ( ' + perms.socket[item['protocol']] + ' ( name_bind ))) \n')
    TypeError: can only concatenate str (not "NoneType") to str

Expected behavior
no traceback

Additional context
$ seinfo --portcon 8612
Portcon: 5
portcon sctp 1024-65535 system_u:object_r:unreserved_port_t:s0
portcon tcp 1024-32767 system_u:object_r:unreserved_port_t:s0
portcon tcp 8610-8614 system_u:object_r:ipp_port_t:s0
portcon udp 1024-32767 system_u:object_r:unreserved_port_t:s0
portcon udp 8610-8614 system_u:object_r:ipp_port_t:s0

The same thing happens when running with a port that has no context, for example 35000.

Add support for docker containers

$ python3 -m udica -i ddbd-c -n ddbd
Traceback (most recent call last):
  File "/usr/lib64/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/tt/g/udica/udica/__main__.py", line 43, in <module>
    main()
  File "/home/tt/g/udica/udica/__main__.py", line 29, in main
    container_caps = parse_cap(container_caps_data)
  File "/home/tt/g/udica/udica/parse.py", line 10, in parse_cap
    return data.split('\n')[1].split(',')
IndexError: list index out of range

ddbd-c is a docker container, so I'm assuming that udica can't find it and raises a traceback.

Author: @TomasTomecek

allowing port 21 also means allowing ports 989 and 990

Describe the bug
Users of udica may be confused by the fact that allowing port 21 also means that ports 989 and 990 are allowed too, because from the SELinux policy point of view they are labeled the same way: ftp_port_t.

To Reproduce
Steps to reproduce the behavior:

  1. podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
  2. nc -lvp 21
  3. nc -lvp 989
  4. nc -lvp 990

Expected behavior
Documentation should contain a note about this behavior.

Additional context
Ephemeral ports (32768-61000) are allowed too unless the content of /proc/sys/net/ipv4/ip_local_port_range is changed.

Create a library containing the template policy

Is your feature request related to a problem? Please describe.
I use the udica tool to create fine-grained SELinux policies. While this is a good development tool, it's really heavy to install the whole package and its dependencies in production, where they are not needed. In a production context, I only need the content of /usr/share/udica/templates/. If we had a package with only those templates, it would be easy to install that library and then install our modules.

Describe the solution you'd like
Separate the templates from the udica package to be more production friendly and avoid installing all the dependencies (including Python). It would basically be a library that can be used alone, without udica.

Describe alternatives you've considered
Manage these templates by myself, but that requires maintenance every time the templates are modified.

Additional context
I run my containers on Fedora CoreOS, and having fine-grained SELinux policies greatly increases my OS security.

if container triggers SELinux denials then audit2allow cannot generate policy from them

Describe the bug
If a container is running under a policy generated by udica and the container triggers some SELinux denials, then these denials cannot be transformed into a local policy module via audit2allow, because the compilation fails.

To Reproduce
Steps to reproduce the behavior:

  1. podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
  2. run in the container as root: nc -lvp 22
  3. run on the host as root: ausearch -m avc -i | audit2allow -M mypolicy

Expected behavior
Either the problem gets documented as a known bug or it is fixed.

Pip installs templates in wrong place

Describe the bug
pip install udica puts the data_files (/usr/share/udica/templates/*.cil) in the wrong place. pip install git+https://github.com/containers/udica works fine.

To Reproduce
Steps to reproduce the behavior:

  1. pip install udica and then pip uninstall udica gives this file listing:

Uninstalling udica-0.2.1:
/usr/local/bin/udica
/usr/local/lib/python3.6/site-packages/udica-0.2.1.dist-info/INSTALLER
/usr/local/lib/python3.6/site-packages/udica-0.2.1.dist-info/LICENSE
/usr/local/lib/python3.6/site-packages/udica-0.2.1.dist-info/METADATA
/usr/local/lib/python3.6/site-packages/udica-0.2.1.dist-info/RECORD
/usr/local/lib/python3.6/site-packages/udica-0.2.1.dist-info/WHEEL
/usr/local/lib/python3.6/site-packages/udica-0.2.1.dist-info/entry_points.txt
/usr/local/lib/python3.6/site-packages/udica-0.2.1.dist-info/top_level.txt
/usr/local/lib/python3.6/site-packages/udica/__init__.py
/usr/local/lib/python3.6/site-packages/udica/__main__.py
/usr/local/lib/python3.6/site-packages/udica/__pycache__/__init__.cpython-36.pyc
/usr/local/lib/python3.6/site-packages/udica/__pycache__/__main__.cpython-36.pyc
/usr/local/lib/python3.6/site-packages/udica/__pycache__/parse.cpython-36.pyc
/usr/local/lib/python3.6/site-packages/udica/__pycache__/perms.cpython-36.pyc
/usr/local/lib/python3.6/site-packages/udica/__pycache__/policy.cpython-36.pyc
/usr/local/lib/python3.6/site-packages/udica/parse.py
/usr/local/lib/python3.6/site-packages/udica/perms.py
/usr/local/lib/python3.6/site-packages/udica/policy.py
/usr/local/lib/python3.6/site-packages/usr/share/licenses/udica/LICENSE
/usr/local/lib/python3.6/site-packages/usr/share/udica/ansible/deploy-module.yml
/usr/local/lib/python3.6/site-packages/usr/share/udica/templates/base_container.cil
/usr/local/lib/python3.6/site-packages/usr/share/udica/templates/config_container.cil
/usr/local/lib/python3.6/site-packages/usr/share/udica/templates/home_container.cil
/usr/local/lib/python3.6/site-packages/usr/share/udica/templates/log_container.cil
/usr/local/lib/python3.6/site-packages/usr/share/udica/templates/net_container.cil
/usr/local/lib/python3.6/site-packages/usr/share/udica/templates/tmp_container.cil
/usr/local/lib/python3.6/site-packages/usr/share/udica/templates/tty_container.cil
/usr/local/lib/python3.6/site-packages/usr/share/udica/templates/virt_container.cil
/usr/local/lib/python3.6/site-packages/usr/share/udica/templates/x_container.cil
Proceed (y/n)? y
Successfully uninstalled udica-0.2.1

  2. pip install git+https://github.com/containers/udica and then pip uninstall udica gives the correct listing:

Uninstalling udica-0.2.1:
/usr/local/bin/udica
/usr/local/lib/python3.6/site-packages/udica-0.2.1-py3.6.egg-info
/usr/local/lib/python3.6/site-packages/udica/__init__.py
/usr/local/lib/python3.6/site-packages/udica/__main__.py
/usr/local/lib/python3.6/site-packages/udica/__pycache__/__init__.cpython-36.pyc
/usr/local/lib/python3.6/site-packages/udica/__pycache__/__main__.cpython-36.pyc
/usr/local/lib/python3.6/site-packages/udica/__pycache__/parse.cpython-36.pyc
/usr/local/lib/python3.6/site-packages/udica/__pycache__/perms.cpython-36.pyc
/usr/local/lib/python3.6/site-packages/udica/__pycache__/policy.cpython-36.pyc
/usr/local/lib/python3.6/site-packages/udica/parse.py
/usr/local/lib/python3.6/site-packages/udica/perms.py
/usr/local/lib/python3.6/site-packages/udica/policy.py
/usr/share/licenses/udica/LICENSE
/usr/share/udica/ansible/deploy-module.yml
/usr/share/udica/templates/base_container.cil
/usr/share/udica/templates/config_container.cil
/usr/share/udica/templates/home_container.cil
/usr/share/udica/templates/log_container.cil
/usr/share/udica/templates/net_container.cil
/usr/share/udica/templates/tmp_container.cil
/usr/share/udica/templates/tty_container.cil
/usr/share/udica/templates/virt_container.cil
/usr/share/udica/templates/x_container.cil
Proceed (y/n)? y
Successfully uninstalled udica-0.2.1

Expected behavior
pip install udica should show the same behaviour as pip install git+https://github.com/containers/udica

Additional context

Any plans to support containerd?

Is your feature request related to a problem? Please describe.
I just noticed it's not listed among the various container runtimes, but it is widely used.

Describe the solution you'd like
containerd support

Describe alternatives you've considered
there is nothing I'm aware of.

Additional context
If containerd is implied by one of the others, then this is obviously an unnecessary request, but it might be a good idea to mention it somewhere.

Udica could be able to update generated policy based on AVC denial messages

If a container is running under a policy generated by udica and the container triggers some SELinux denials, then these denials cannot be transformed into a local policy module via audit2allow, but udica itself could be able to update the policy, so users would use udica instead of audit2allow.

e.g.

  1. podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
  2. run in the container as root: nc -lvp 22
  3. run on the host as root: udica --modify --avc audit.log my_container

This would update my_container.cil with the rules needed for the container to bind to port 22, and the user would just install the module again.

https://bugzilla.redhat.com/show_bug.cgi?id=1732704

Idea by @bachradsusi .

lost network connectivity

Describe the bug
My container lost the ability to connect to the network. This occurred when creating an SELinux security policy to allow mounting/accessing a host folder inside a container.

To Reproduce
Steps to reproduce the behavior:

cont_id=$(podman create --rm \
  -v /hostfolder:/containerfolder:rw \
  -p 7777:7777 \
  registry.access.redhat.com/rhel7:latest)
expected_str=$(podman inspect $cont_id | sudo udica container_mount_myhostfolder)
podman rm $cont_id
# The above lines create an SELinux policy module
sudo semodule -i container_mount_myhostfolder.cil /usr/share/udica/templates/{base_container.cil,net_container.cil}
# The above line installs the SELinux policy module
# The line below runs the container with the specified SELinux block
podman run -it --rm \
  --security-opt label=type:container_mount_myhostfolder.process \
  -v /hostfolder:/containerfolder:rw \
  --replace \
  --name thankyoupodmanteam \
  registry.access.redhat.com/rhel7:latest sh -c 'curl https://duckduckgo.com'

The logged SELinux denial is below:

sudo ausearch -m avc --start recent
 node=f.q.d.n type=AVC msg=audit(1656031606.687:43271466): avc:  denied  { name_connect } for  pid=1694771 comm="curl" dest=1234 scontext=system_u:system_r:container_mount_myhostfolder.process:s0:c328,c348 tcontext=system_u:object_r:http_cache_port_t:s0 tclass=tcp_socket permissive=0

Expected behavior
I would anticipate the container still being able to connect to the network, especially if it is able to bind to a port.

Additional context

Using audit2why I was able to deduce what to add to the CIL file that udica outputs, but I still think udica should permit network connectivity by default, as that is what I personally was intuitively expecting.

sudo ausearch -m avc --start today | audit2why
q_depth should be larger than 512 for safety margin
node=f.q.d.n type=AVC msg=audit(1656033096.558:43646554): avc:  denied  { name_connect } for  pid=1698931 comm="curl" dest=1234 scontext=system_u:system_r:container_mount_myhostfolder.process:s0:c205,c312 tcontext=system_u:object_r:http_cache_port_t:s0 tclass=tcp_socket permissive=0

        Was caused by:
                Missing type enforcement (TE) allow rule.

                You can use audit2allow to generate a loadable module to allow this access.

sudo ausearch -m avc --start today | audit2allow
q_depth should be larger than 512 for safety margin


#============= container_mount_myhostfolder.process ==============
allow container_mount_myhostfolder.process http_cache_port_t:tcp_socket name_connect;

Based on the above, I manually added the following rule to the CIL file and obtained success:

    (allow process http_cache_port_t ( tcp_socket (  name_connect ))) 

So the full successful CIL file resulted in:

(block container_mount_myhostfolder
    (blockinherit container)
    (blockinherit restricted_net_container)
    (allow process http_cache_port_t ( tcp_socket (  name_bind ))) 
    (allow process http_cache_port_t ( tcp_socket (  name_connect ))) 
    (allow process http_port_t ( tcp_socket (  name_bind ))) 
    (allow process default_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write ))) 
    (allow process default_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write ))) 
    (allow process default_t ( fifo_file ( getattr read write append ioctl lock open ))) 
    (allow process default_t ( sock_file ( append getattr open read write ))) 
)

Maybe I need to tell udica not to use restricted_net_container?

    (blockinherit restricted_net_container)

Cannot Create Policy -- TypeError("in method 'selabel_lookup'.....

Hi

I am having issues creating a custom policy.

/usr/bin/udica --container-engine docker -j container_name.json my_container

("Couldn't create policy:", TypeError("in method 'selabel_lookup', argument 3 of type 'char const *'",))

NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"

container-selinux.noarch 2:2.68-1.el7 installed
libselinux.x86_64 2.5-12.amzn2.0.2 installed
libselinux-python.x86_64 2.5-12.amzn2.0.2 @amzn2-core
libselinux-utils.x86_64 2.5-12.amzn2.0.2 installed
selinux-policy.noarch 3.13.1-192.amzn2.6 installed
selinux-policy-devel.noarch 3.13.1-192.amzn2.6 @amzn2-core
selinux-policy-targeted.noarch 3.13.1-192.amzn2.6 installed
libselinux.i686 2.5-12.amzn2.0.2 amzn2-core
libselinux-devel.x86_64 2.5-12.amzn2.0.2 amzn2-core
libselinux-ruby.x86_64 2.5-12.amzn2.0.2 amzn2-core
libselinux-static.x86_64 2.5-12.amzn2.0.2 amzn2-core
pcp-selinux.x86_64 3.12.2-5.amzn2 amzn2-core
selinux-policy-doc.noarch 3.13.1-192.amzn2.6 amzn2-core
selinux-policy-minimum.noarch 3.13.1-192.amzn2.6 amzn2-core
selinux-policy-mls.noarch 3.13.1-192.amzn2.6 amzn2-core
selinux-policy-sandbox.noarch 3.13.1-192.amzn2.6 amzn2-core
setools-console.x86_64 3.3.8-2.amzn2.0.2 @amzn2-core
setools-libs.x86_64 3.3.8-2.amzn2.0.2 @amzn2-core
setools.x86_64 3.3.8-2.amzn2.0.2 amzn2-core
setools-devel.x86_64 3.3.8-2.amzn2.0.2 amzn2-core
setools-gui.x86_64 3.3.8-2.amzn2.0.2 amzn2-core
setools-libs.i686 3.3.8-2.amzn2.0.2 amzn2-core
setools-libs-tcl.x86_64 3.3.8-2.amzn2.0.2 amzn2-core

  • installed udica with -- pip install udica
  • Docker is container runtime
  • Exported container_name.json using - docker inspect ID > container_name.json

Please let me know what information I can provide.

Any assistance greatly appreciated!

missing /usr/local/lib/python3.6/site-packages directory

Describe the bug
First run of setup.py leads to:

python3 ./setup.py install

running install
error: can't create or remove files in install directory

The following error occurred while trying to add or remove files in the
installation directory:

[Errno 2] No such file or directory: '/usr/local/lib/python3.6/site-packages/test-easy-install-7253.write-test'

The installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:

/usr/local/lib/python3.6/site-packages/

This directory does not currently exist. Please create it and try again, or
choose a different installation directory (using the -d or --install-dir
option).

Expected behavior
The documentation should recommend creating the directory, because it may not exist.

Disable Dependabot after Renovate trial run

Is your feature request related to a problem? Please describe.
Both Renovate and Dependabot are enabled on this repository.

Describe the solution you'd like
Assuming Renovate's behavior is acceptable compared to Dependabot, I will disable Dependabot after 30-ish days.

Describe alternatives you've considered
If there's a problem with the Renovate configuration/operation, I can limit it to only manage CI VM image updates - leaving Dependabot enabled.

Additional context
#121

Couldn't create policy: 'Source' (v0.1.9, Podman v1.0.5, RHEL 8.0)

Describe the bug
udica v0.1.9 doesn't work with Podman v1.0.5 included in RHEL 8.0

To Reproduce

$ podman run -it --name foo --rm -v /root:/root2 centos
$ podman inspect foo > foo.json
$ udica foo < foo.json 
Couldn't create policy: 'Source'

Expected behavior
It should work

Additional context
foo.json

[
    {
        "ID": "25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb",
        "Created": "2019-09-17T03:25:25.843359189+09:00",
        "Path": "/bin/bash",
        "Args": [
            "/bin/bash"
        ],
        "State": {
            "OciVersion": "1.0.1-dev",
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 120464,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2019-09-17T03:25:26.198201953+09:00",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "67fa590cfc1c207c30b837528373f819f6262c884b7e69118d060a0c04d70ab8",
        "ImageName": "docker.io/library/centos:latest",
        "Rootfs": "",
        "ResolvConfPath": "/var/run/containers/storage/overlay-containers/25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb/userdata/resolv.conf",
        "HostnamePath": "/var/run/containers/storage/overlay-containers/25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb/userdata/hostname",
        "HostsPath": "/var/run/containers/storage/overlay-containers/25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb/userdata/hosts",
        "StaticDir": "/var/lib/containers/storage/overlay-containers/25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb/userdata",
        "LogPath": "/var/lib/containers/storage/overlay-containers/25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb/userdata/ctr.log",
        "Name": "foo",
        "RestartCount": 0,
        "Driver": "overlay",
        "MountLabel": "system_u:object_r:container_file_t:s0:c481,c549",
        "ProcessLabel": "system_u:system_r:container_t:s0:c481,c549",
        "AppArmorProfile": "",
        "EffectiveCaps": [
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_FSETID",
            "CAP_FOWNER",
            "CAP_MKNOD",
            "CAP_NET_RAW",
            "CAP_SETGID",
            "CAP_SETUID",
            "CAP_SETFCAP",
            "CAP_SETPCAP",
            "CAP_NET_BIND_SERVICE",
            "CAP_SYS_CHROOT",
            "CAP_KILL",
            "CAP_AUDIT_WRITE"
        ],
        "BoundingCaps": [
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_FSETID",
            "CAP_FOWNER",
            "CAP_MKNOD",
            "CAP_NET_RAW",
            "CAP_SETGID",
            "CAP_SETUID",
            "CAP_SETFCAP",
            "CAP_SETPCAP",
            "CAP_NET_BIND_SERVICE",
            "CAP_SYS_CHROOT",
            "CAP_KILL",
            "CAP_AUDIT_WRITE"
        ],
        "ExecIDs": [],
        "GraphDriver": {
            "Name": "overlay",
            "Data": {
                "LowerDir": "/var/lib/containers/storage/overlay/877b494a9f30e74e61b441ed84bb74b14e66fb9cc321d83f3a8a19c60d078654/diff",
                "MergedDir": "/var/lib/containers/storage/overlay/a4365f4d1fa69e6cf93009c8a324868c48a67e62f3b74da46bd5a94be40c81e4/merged",
                "UpperDir": "/var/lib/containers/storage/overlay/a4365f4d1fa69e6cf93009c8a324868c48a67e62f3b74da46bd5a94be40c81e4/diff",
                "WorkDir": "/var/lib/containers/storage/overlay/a4365f4d1fa69e6cf93009c8a324868c48a67e62f3b74da46bd5a94be40c81e4/work"
            }
        },
        "Mounts": [
            {
                "destination": "/sys",
                "type": "sysfs",
                "source": "sysfs",
                "options": [
                    "nosuid",
                    "noexec",
                    "nodev",
                    "ro"
                ]
            },
            {
                "destination": "/proc",
                "type": "proc",
                "source": "proc",
                "options": [
                    "nosuid",
                    "noexec",
                    "nodev"
                ]
            },
            {
                "destination": "/dev",
                "type": "tmpfs",
                "source": "tmpfs",
                "options": [
                    "nosuid",
                    "strictatime",
                    "mode=755",
                    "size=65536k"
                ]
            },
            {
                "destination": "/root2",
                "type": "bind",
                "source": "/root",
                "options": [
                    "rbind",
                    "rw",
                    "rprivate"
                ]
            },
            {
                "destination": "/etc/resolv.conf",
                "type": "bind",
                "source": "/var/run/containers/storage/overlay-containers/25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb/userdata/resolv.conf",
                "options": [
                    "bind",
                    "private"
                ]
            },
            {
                "destination": "/dev/mqueue",
                "type": "mqueue",
                "source": "mqueue",
                "options": [
                    "nosuid",
                    "noexec",
                    "nodev"
                ]
            },
            {
                "destination": "/dev/pts",
                "type": "devpts",
                "source": "devpts",
                "options": [
                    "nosuid",
                    "noexec",
                    "newinstance",
                    "ptmxmode=0666",
                    "mode=0620",
                    "gid=5"
                ]
            },
            {
                "destination": "/etc/hosts",
                "type": "bind",
                "source": "/var/run/containers/storage/overlay-containers/25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb/userdata/hosts",
                "options": [
                    "bind",
                    "private"
                ]
            },
            {
                "destination": "/dev/shm",
                "type": "bind",
                "source": "overlay-containers",
                "options": [
                    "bind",
                    "private"
                ]
            },
            {
                "destination": "/etc/hostname",
                "type": "bind",
                "source": "/var/run/containers/storage/overlay-containers/25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb/userdata/hostname",
                "options": [
                    "bind",
                    "private"
                ]
            },
            {
                "destination": "/run/.containerenv",
                "type": "bind",
                "source": "/var/run/containers/storage/overlay-containers/25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb/userdata/.containerenv",
                "options": [
                    "bind",
                    "private"
                ]
            },
            {
                "destination": "/run/secrets",
                "type": "bind",
                "source": "/var/run/containers/storage/overlay-containers/25a1e040fd7cfa83061756c8228f3e65a085e3f688aebbd1096bad2611e3d7fb/userdata/run/secrets",
                "options": [
                    "bind",
                    "private"
                ]
            },
            {
                "destination": "/sys/fs/cgroup",
                "type": "cgroup",
                "source": "cgroup",
                "options": [
                    "rprivate",
                    "nosuid",
                    "noexec",
                    "nodev",
                    "relatime",
                    "ro"
                ]
            }
        ],
        "Dependencies": [],
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": [],
            "SandboxKey": "/var/run/netns/cni-903f4326-e4e4-852c-7bad-a87dc0a72550",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "10.88.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "10.88.0.73",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "52:29:38:0b:83:16"
        },
        "ExitCommand": null,
        "Namespace": "",
        "IsInfra": false,
        "HostConfig": {
            "ContainerIDFile": "",
            "LogConfig": null,
            "NetworkMode": "bridge",
            "PortBindings": null,
            "AutoRemove": true,
            "CapAdd": [],
            "CapDrop": [],
            "DNS": [],
            "DNSOptions": [],
            "DNSSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "host",
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 65536000,
            "Runtime": "runc",
            "ConsoleSize": null,
            "CpuShares": null,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": null,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": null,
            "CpuQuota": null,
            "CpuRealtimePeriod": null,
            "CpuRealtimeRuntime": null,
            "CpuSetCpus": "",
            "CpuSetMems": "",
            "Devices": null,
            "DiskQuota": 0,
            "KernelMemory": null,
            "MemoryReservation": null,
            "MemorySwap": null,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": [],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "Tmpfs": []
        },
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": {
                "uid": 0,
                "gid": 0
            },
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": true,
            "OpenStdin": true,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "TERM=xterm",
                "HOSTNAME=",
                "container=podman"
            ],
            "Cmd": [
                "/bin/bash"
            ],
            "Image": "docker.io/library/centos:latest",
            "Volumes": null,
            "WorkingDir": "/",
            "Entrypoint": "",
            "Labels": {
                "org.label-schema.build-date": "20190801",
                "org.label-schema.license": "GPLv2",
                "org.label-schema.name": "CentOS Base Image",
                "org.label-schema.schema-version": "1.0",
                "org.label-schema.vendor": "CentOS"
            },
            "Annotations": {
                "io.kubernetes.cri-o.ContainerType": "sandbox",
                "io.kubernetes.cri-o.TTY": "true"
            },
            "StopSignal": 15
        }
    }
]

Couldn't create policy: 'source'

Describe the bug
Unable to generate policy.

# podman inspect 8e | udica -j -  my_container
Couldn't create policy: 'source'

To Reproduce
Steps to reproduce the behavior:

  1. Fresh Fedora 30 AWS instance
  2. Follow installation instructions in README.md

Expected behavior
Policy generated
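
A minimal sketch (not udica's actual code) of reading the Mounts section defensively, assuming the reported KeyError comes from indexing a mount entry that lacks the expected source key; such entries would be skipped with a warning instead of aborting policy generation:

import sys

def mount_sources(mounts):
    """Collect host paths from a podman/docker Mounts list.

    Accepts both lower-case ("source") and capitalised ("Source") keys and
    skips entries that carry neither, instead of raising KeyError: 'source'.
    """
    sources = []
    for entry in mounts:
        source = entry.get("source") or entry.get("Source")
        if not source:
            print(f"Skipping mount without a source: {entry.get('destination', entry)}",
                  file=sys.stderr)
            continue
        sources.append(source)
    return sources

# Shapes taken from the inspect output above; the second entry is hypothetical
print(mount_sources([
    {"destination": "/run/secrets", "type": "bind", "source": "/var/run/..."},
    {"destination": "/example", "type": "bind"},  # no "source" key at all
]))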

not able to execute udica command successfully for generating policy

Describe the bug
While trying to run the udica -j container.json my_container command, I get the ImportError below. Tried with Python 3.4 and 3.6.3 as well as with Python 2.7.5 on CentOS 7.

I tried to install udica from source via git clone and via pip (after uninstalling, at a different time). Some sources mentioned installing the pysed module, but this doesn't resolve the issue either.

Traceback (most recent call last):
  File "/usr/bin/udica", line 9, in <module>
    load_entry_point('udica==0.1.9', 'console_scripts', 'udica')()
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 378, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2566, in load_entry_point
    return ep.load()
  File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 2260, in load
    entry = __import__(self.module_name, globals(),globals(), ['__name__'])
  File "build/bdist.linux-x86_64/egg/udica/__main__.py", line 22, in <module>
  File "build/bdist.linux-x86_64/egg/udica/policy.py", line 20, in <module>
ImportError: cannot import name replace

To Reproduce
Steps to reproduce the behavior:

  1. yum install python-pip
  2. pip install udica
  3. udica -j container.json my_container

Expected behavior
The udica command executes successfully and is able to generate the policy.

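
The paths in the traceback (/usr/lib/python2.7/site-packages) show that the console script was installed for Python 2, which udica does not support; installing it with a Python 3 pip (for example pip3 install udica) or from the distribution package avoids the ImportError. A minimal guard of the kind that would turn this into a clear message (illustrative only, not udica's code):

import sys

# Fail early with an explicit message instead of an obscure ImportError later
if sys.version_info < (3,):
    sys.exit("udica requires Python 3, but this interpreter is Python "
             + ".".join(str(part) for part in sys.version_info[:3]))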

Create Travis CI for udica

Is your feature request related to a problem? Please describe.
Create Travis CI for udica

Describe the solution you'd like
Have CI testing that runs after every pull request and commit

Add "--device-access" option

Is your feature request related to a problem? Please describe.
Udica doesn't allow users to specify devices the container should have access to.

Describe the solution you'd like
Add "--device-access" option, which lets user specify a device to which udica should allow access.

Based on the discussion in containers/container-selinux#167

Lack of check for the presence of sections

Describe the bug
If some sections, for example Mounts, are not present, udica will crash with a KeyError as the cause.

Even if I did something completely wrong, it should be indicated clearly; a bare KeyError does not indicate it well IMO.

To Reproduce
Make any container that does not have Mounts, NetworkSettings or HostConfig
run udica on it
Example steps
podman pod create --name a
podman run -it --rm --pod a fedora /bin/bash
podman inspect k8s.gcr.io/pause:3.5

Expected behavior
Policy for mounts shouldn't be added if mounts are not present, etc.

Proposed Fix
https://github.com/WellIDKRealy/udica

I didn't test it extensively, so I have no idea if I broke something.
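
A minimal sketch, assuming the crash is a plain KeyError on a missing top-level section of the inspect output, of how the sections could be collected tolerantly so that absent ones simply contribute no rules (section names here are illustrative):

import sys

SECTIONS = ("Mounts", "NetworkSettings", "HostConfig")  # illustrative names

def collect_sections(container):
    """Pick the sections used for rule generation, tolerating absent ones."""
    sections = {}
    for name in SECTIONS:
        value = container.get(name)
        if not value:
            # Report it and generate no rules for this area instead of crashing
            print(f"Note: '{name}' not present in the inspect output, "
                  f"skipping the related rules.", file=sys.stderr)
            value = {}
        sections[name] = value
    return sections

# Image metadata (e.g. from "podman inspect" of an image) often lacks these sections
print(collect_sections({"NetworkSettings": {"Ports": {}}}))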

Policies are not generated in order/not reproducible

Describe the bug

When generating SELinux policies in CI, one expects that subsequent calls to Udica will generate the same policy; however, this doesn't seem to be the case. While the policies are equivalent, the order of the items in the policy differs. This makes it really hard to detect whether new changes have come into the policy as the container evolves, and thus prevents us from checking this in CI.

For instance:

$ diff /tmp/ci/selinuxd.cil selinuxd/security/selinuxd.cil
5,7d4
<     (allow process sysfs_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write ))) 
<     (allow process sysfs_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write ))) 
<     (allow process sysfs_t ( sock_file ( append getattr open read write ))) 
22a20,22
>     (allow process sysfs_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write ))) 
>     (allow process sysfs_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write ))) 
>     (allow process sysfs_t ( sock_file ( append getattr open read write ))) 

While that diff doesn't differ in content, the issue is that the section was emitted in a different order in the policy.

To Reproduce
Steps to reproduce the behavior:

  1. generate a policy for a container and store the file
  2. run the policy generation again and store the file
  3. diff them

Expected behavior
Running Udica for a container should always generate the same policy in the same order (so commands like diff show they're equivalent).
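
A minimal sketch of the kind of change that would make the output reproducible, assuming the rules are collected into a list before being written out: sorting and de-duplicating them yields byte-identical policies across runs, regardless of the order in which mounts or ports were processed:

def render_block(name, inherits, rules):
    """Render a CIL block with rules in a stable, sorted order."""
    lines = [f"(block {name}"]
    lines += [f"    (blockinherit {template})" for template in inherits]
    # sorted(set(...)) removes duplicates and fixes the order across runs
    lines += [f"    {rule}" for rule in sorted(set(rules))]
    lines.append(")")
    return "\n".join(lines)

rules = [
    "(allow process sysfs_t ( sock_file ( append getattr open read write )))",
    "(allow process sysfs_t ( dir ( add_name create getattr open read write )))",
]
print(render_block("selinuxd", ["container"], rules))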

Cannot remove policies installed by previous version

Describe the bug
I have installed some SELinux policies created with udica v0.2.5 which I cannot remove anymore after upgrading to udica v0.2.6.
container-selinux also got updated to the corresponding version.

To Reproduce
Steps to reproduce the behavior:

  1. create and install a custom policy using udica v0.2.5
  2. upgrade udica to v0.2.6 and container-selinux to 2.170.0
  3. try to remove the custom policy

Expected behavior
policy gets removed

Additional context
Output when running semodule -r my-mosquitto-server after upgrading to udica v0.2.6:

libsemanage.semanage_direct_remove_key: Removing last my-mosquitto-server module (no other my-mosquitto-server module exists at another priority).
Re-declaration of type process
Previous declaration of type at /var/lib/selinux/targeted/tmp/modules/602/base_container/cil:2
Failed to copy block to blockinherit at /var/lib/selinux/targeted/tmp/modules/602/my-postgres-server/cil:3
Failed to copy block contents into blockinherit
Failed to resolve AST
semodule:  Failed!

Result: the module is still there and I cannot remove it.

Invalid syntax when only using one template

Describe the bug
The tool suggests a command with invalid syntax when only one template is used.

To Reproduce
Steps to reproduce the behavior:

  1. # podman run -it fedora:latest /bin/bash
  2. # podman inspect -l | udica mycontainer
    Policy mycontainer created!

Please load these modules using:
# semodule -i mycontainer.cil /usr/share/udica/templates/{base_container.cil}

Restart the container with: "--security-opt label=type:mycontainer.process" parameter

  1. # semodule -i mycontainer.cil /usr/share/udica/templates/{base_container.cil}
    libsemanage.map_file: Unable to open /usr/share/udica/templates/{base_container.cil}
    (No such file or directory).
    libsemanage.semanage_direct_install_file: Unable to read file /usr/share/udica/templates/{base_container.cil}
    (No such file or directory).
    semodule: Failed on /usr/share/udica/templates/{base_container.cil}!

Expected behavior
The tool provides a valid command for loading the policy.

Additional context
Tested on F28
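
The failing suggestion comes from bash brace expansion: a braced list with a single element and no comma, {base_container.cil}, is not expanded and is passed to semodule literally. A minimal sketch of how the hint could be built so it is valid for one or several templates (function and variable names here are illustrative):

def semodule_hint(policy, templates, template_dir="/usr/share/udica/templates"):
    """Build the 'semodule -i ...' suggestion without single-element braces."""
    if len(templates) == 1:
        paths = f"{template_dir}/{templates[0]}"
    else:
        paths = f"{template_dir}/{{{','.join(templates)}}}"
    return f"semodule -i {policy}.cil {paths}"

print(semodule_hint("mycontainer", ["base_container.cil"]))
print(semodule_hint("lms_policy", ["base_container.cil", "net_container.cil"]))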

Create manpages for udica

Is your feature request related to a problem? Please describe.
Manpages are missing.

Describe the solution you'd like
Have proper manpages for udica

generated policy does not enable published ports

Describe the bug
I created a rootless pod for an application that listens on 9000 and 3483 tcp, and 3483 udp. Using the policy generated by udica, I get name_bind denials on all 3 ports.

To Reproduce

$ podman run \
    -d \
    --name lms \
    -p 9000:9000 \
    -p 3483:3483 \
    -p 3483:3483/udp \
    -v ~/squeezebox:/srv/squeezebox:Z \
    -v ~/Music:/srv/music \
    localhost/lmsserver
$ podman inspect lms | sudo udica lms_policy
$ sudo semodule -i lms_policy.cil /usr/share/udica/templates/{base_container.cil,net_container.cil}
$ podman run \
    -d \
    --name lms \
    -p 9000:9000 \
    -p 3483:3483 \
    -p 3483:3483/udp \
    -v ~/squeezebox:/srv/squeezebox:Z \
    -v ~/Music:/srv/music \
    --security-opt label=type:lms_policy.process \
    localhost/lmsserver

Expected behavior
The application should be able to listen on the published ports

Additional context
This is the generated policy:

(block lms_policy
    (blockinherit container)
    (blockinherit restricted_net_container)
    (allow process process ( capability ( audit_write chown dac_override fowner fsetid kill mknod net_bind_service net_raw setfcap setgid setpcap setuid sys_chroot ))) 

    (allow process user_home_t ( dir ( open read getattr lock search ioctl add_name remove_name write ))) 
    (allow process user_home_t ( file ( getattr read write append ioctl lock map open create  ))) 
    (allow process user_home_t ( sock_file ( getattr read write append open  ))) 
    (allow process audio_home_t ( dir ( open read getattr lock search ioctl add_name remove_name write ))) 
    (allow process audio_home_t ( file ( getattr read write append ioctl lock map open create  ))) 
    (allow process audio_home_t ( sock_file ( getattr read write append open  ))) 

These are the AVC messages:

time->Sun Jul 26 20:28:57 2020
type=AVC msg=audit(1595813337.948:433): avc:  denied  { name_bind } for  pid=139206 comm=squeezeboxserve src=3483 scontext=system_u:system_r:lms_policy.process:s0:c258,c933 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=udp_socket permissive=0
----
time->Sun Jul 26 20:41:41 2020
type=AVC msg=audit(1595814101.696:458): avc:  denied  { name_bind } for  pid=139590 comm=nc src=9000 scontext=system_u:system_r:lms_policy.process:s0:c403,c885 tcontext=system_u:object_r:http_port_t:s0 tclass=tcp_socket permissive=0
----
time->Sun Jul 26 21:06:31 2020
type=AVC msg=audit(1595815591.384:549): avc:  denied  { name_bind } for  pid=140806 comm=squeezeboxserve src=3483 scontext=system_u:system_r:lms_policy.process:s0:c7,c498 tcontext=system_u:object_r:unreserved_port_t:s0 tclass=tcp_socket permissive=0
----

Re-running udica with these messages adds these lines to the policy:

    (allow process unreserved_port_t ( udp_socket ( name_bind ))) 
    (allow process http_port_t ( tcp_socket ( name_bind ))) 
    (allow process unreserved_port_t ( tcp_socket ( name_bind ))) 

With this revised policy, the application can operate successfully.

policy on sockets

When we run something such as:

docker run -v /opt/nfast:/opt/nfast:Z debian /opt/nfast/bin/ckinfo

it b0rks, as nfast tries to connect to a socket on the host:

ckinfo: C_Initialize failed rv = 00000006 (CKR_FUNCTION_FAILED)

and the logs state type=AVC msg=audit(1563352304.654:943005): avc:
denied { connectto } path="/opt/nfast/sockets/nserver

So, to get things to work we need one of:

  1. disable selinux (not a good plan)
  2. --permissive (not a good plan either)
  3. --security-opt label:disable

The best so far is 3), as nfast (including all the other stuff) is not supported
and would have other issues with a Hardware Security Module (HSM).

We would love to see support for this kind of operation in udica.

Roeland

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.


Detected dependencies

github-actions
.github/workflows/checks.yml
  • actions/checkout v3
  • actions/checkout v3
regex
.cirrus.yml
  • containers/automation_images 20231208t193858z-f39f38d13


--append-rules does not work (or I am doing it wrong)

Describe the bug
Using --append-rules does not change the resulting .cil file.

To Reproduce
Steps to reproduce the behavior:

  1. Generate a custom policy for a container using udica -j alpine-ssh.json alpine-ssh.
  2. Load the custom policy using semodule -i alpine-ssh.cil /usr/share/udica/templates/base_container.cil
  3. Start the container again, this time using the --security-opt label=type:alpine-ssh.process parameter
  4. Get avc denial message and dump into file: ausearch -m AVC,USER_AVC -ts recent > avcfile
  5. Generate new custom policy using udica -j alpine-ssh.json --append-rules avcfile alpine-ssh2
  6. Load the new policy and run the container with the new policy.

Expected behavior
Container runs with new policy without generating AVC messages.

What really happens
Newly generated policy alpine-ssh2.cil is identical to the old one alpine-ssh.cil (the one without the --append-rules).
Loading the new policy does not make any difference and the container still fails, generating AVC messages.

Additional context
The container is a very basic alpine container which has openssh-client installed.
Container created using buildah:

ctr=$(buildah from docker.io/library/alpine:3)
buildah run $ctr -- apk update
buildah run $ctr -- apk add openssh-server openssh-client openssh-sftp-server
buildah commit $ctr alpine-ssh

AVC denials appear when the container tries to establish a new ssh connection to the outside.
Container is run without capabilities as non-root user (works fine when not using udica):
podman run --cap-drop=all -it 2259a108709b /bin/sh
The .cil file generated by udica:

(block alpine-ssh
    (blockinherit container)
)

avcfile generated using ausearch -m AVC,USER_AVC -ts recent > avcfile and looks similar to the one in tests/append_avc_file:

time->Mon Jul  5 11:50:12 2021
type=AVC msg=audit(1625453412.556:1282): avc:  denied  { name_connect } for  pid=5543 comm="ssh" dest=22 scontext=system_u:system_r:alpine-ssh.process:s0:c7,c136 tcontext=system_u:object_r:ssh_port_t:s0 tclass=tcp_socket permissive=0
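
For reference, a minimal sketch (not the actual --append-rules implementation) of how an AVC record like the one above maps to the CIL allow rule one would expect to be appended; it only uses the fields visible in the message and assumes the rule lands inside the policy block, where the source domain is simply process:

import re

AVC_RE = re.compile(
    r"denied\s+\{\s*(?P<perms>[^}]+?)\s*\}"
    r".*tcontext=\w+:\w+:(?P<ttype>\w+):"
    r".*tclass=(?P<tclass>\w+)"
)

def avc_to_cil(line):
    """Turn one AVC denial line into a CIL allow rule in the style udica emits."""
    match = AVC_RE.search(line)
    if not match:
        return None
    perms = " ".join(match.group("perms").split())
    return f"(allow process {match.group('ttype')} ( {match.group('tclass')} ( {perms} )))"

avc = ('type=AVC msg=audit(1625453412.556:1282): avc:  denied  { name_connect } '
       'for  pid=5543 comm="ssh" dest=22 '
       'scontext=system_u:system_r:alpine-ssh.process:s0:c7,c136 '
       'tcontext=system_u:object_r:ssh_port_t:s0 tclass=tcp_socket permissive=0')
print(avc_to_cil(avc))   # (allow process ssh_port_t ( tcp_socket ( name_connect )))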

Generating policies for systemd-based containers

Udica is a great tool, and it would be great if one could also use it to generate policies for systemd-based containers (or even systemd-confined processes, not whole containers like Podman or Docker).

For instance, given a systemd portable service's unit file, Udica would generate an SELinux policy taking into consideration the directories that were mapped to the process.

Same for systemd-nspawn containers or even normal processes confined by systemd properties configured in unit files.

Not sure if Udica is the best project for this (from my perspective it looks like it is rather for Podman/Docker based containers), so I'm asking you guys; if it were, maybe I could help you with that a bit.

Support --tmpfs mount

Is your feature request related to a problem? Please describe.

In some cases you need to mount a tmpfs for security or other reasons,
so for example you may need to run the following:

podman run -e MYSQL_ROOT_PASSWORD=my-secret-pw --tmpfs /run -d mysql

Describe the solution you'd like
udica does not allow this in the created configuration

Describe alternatives you've considered
Not sure if there is a policy I can add; I don't know SELinux. If you have an alternative solution, I would appreciate it too :)

Runtime security for containers using udica

Runtime Security
After creating my_container.process for a container, can we make it apply to the container without restarting the containers?

Describe the solution you'd like

Running a udica daemon that captures the container specs to create the policy, and sending SIGHUP to the daemon to hot-reload it

Describe alternatives you've considered

Running DaemonSets on all nodes, or one daemon serving all nodes.

Error generating policies on containers mapping nfs shares as bind volumes.

I have a podman rootless container for plex and it maps several nfs mounts on the host as bind volumes for media access purposes.

When attempting to generate a policy with udica with:
podman inspect plex | udica -j - plex_container

Udica throws the error:
Couldn't create policy: [Errno 2] No such file or directory

Eventually it turned out the issue was the volume bind mounts to the media I have in the container. If I remove those volume mappings, the udica command completes without errors.

This is on Fedora Server 35 running stock podman 3.4.4 and udica 0.2.6.

Here's the inspect output:

[linuxadmin@podman01 ~]$ podman inspect plex
[
    {
        "Id": "492a745625a4b984f6a2195f5ae620d688a5c85476b8c728cacb4081a48f0f31",
        "Created": "2022-01-31T14:17:16.333559052-06:00",
        "Path": "/init",
        "Args": [
            "/init"
        ],
        "State": {
            "OciVersion": "1.0.2-dev",
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 190274,
            "ConmonPid": 190271,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2022-01-31T14:17:16.812703093-06:00",
            "FinishedAt": "0001-01-01T00:00:00Z",
            "Healthcheck": {
                "Status": "",
                "FailingStreak": 0,
                "Log": null
            },
            "CgroupPath": "/user.slice/user-1000.slice/[email protected]/user.slice/libpod-492a745625a4b984f6a2195f5ae620d688a5c85476b8c728cacb4081a48f0f31.scope"
        },
        "Image": "5f8b6863b4cd80418bceaa3457204a08775c83445c861c0fd2208ee6c8c4b9d5",
        "ImageName": "lscr.io/linuxserver/plex:latest",
        "Rootfs": "",
        "Pod": "",
        "ResolvConfPath": "/run/user/1000/containers/overlay-containers/492a745625a4b984f6a2195f5ae620d688a5c85476b8c728cacb4081a48f0f31/userdata/resolv.conf",
        "HostnamePath": "/run/user/1000/containers/overlay-containers/492a745625a4b984f6a2195f5ae620d688a5c85476b8c728cacb4081a48f0f31/userdata/hostname",
        "HostsPath": "/run/user/1000/containers/overlay-containers/492a745625a4b984f6a2195f5ae620d688a5c85476b8c728cacb4081a48f0f31/userdata/hosts",
        "StaticDir": "/home/linuxadmin/.local/share/containers/storage/overlay-containers/492a745625a4b984f6a2195f5ae620d688a5c85476b8c728cacb4081a48f0f31/userdata",
        "OCIConfigPath": "/home/linuxadmin/.local/share/containers/storage/overlay-containers/492a745625a4b984f6a2195f5ae620d688a5c85476b8c728cacb4081a48f0f31/userdata/config.json",
        "OCIRuntime": "crun",
        "ConmonPidFile": "/run/user/1000/containers/overlay-containers/492a745625a4b984f6a2195f5ae620d688a5c85476b8c728cacb4081a48f0f31/userdata/conmon.pid",
        "PidFile": "/run/user/1000/containers/overlay-containers/492a745625a4b984f6a2195f5ae620d688a5c85476b8c728cacb4081a48f0f31/userdata/pidfile",
        "Name": "plex",
        "RestartCount": 0,
        "Driver": "overlay",
        "MountLabel": "system_u:object_r:container_file_t:s0:c411,c417",
        "AppArmorProfile": "",
        "EffectiveCaps": [
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_FOWNER",
            "CAP_FSETID",
            "CAP_KILL",
            "CAP_NET_BIND_SERVICE",
            "CAP_SETFCAP",
            "CAP_SETGID",
            "CAP_SETPCAP",
            "CAP_SETUID",
            "CAP_SYS_CHROOT"
        ],
        "BoundingCaps": [
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_FOWNER",
            "CAP_FSETID",
            "CAP_KILL",
            "CAP_NET_BIND_SERVICE",
            "CAP_SETFCAP",
            "CAP_SETGID",
            "CAP_SETPCAP",
            "CAP_SETUID",
            "CAP_SYS_CHROOT"
        ],
        "ExecIDs": [],
        "GraphDriver": {
            "Name": "overlay",
            "Data": {
                "LowerDir": "/home/linuxadmin/.local/share/containers/storage/overlay/2c97f7c8d723a779e48b7f530b6c91f5442760bdb34f6617da43a3b8c51df256/diff:/home/linuxadmin/.local/share/containers/storage/overlay/7c8ee95248119161b93a19b920523dac152187a622c0a831afa77349bdd4087f/diff:/home/linuxadmin/.local/share/containers/storage/overlay/074a283794ff06a4bc22acb003fb62f4d26ee6a5674f8d707f30c6aa218cd4d4/diff:/home/linuxadmin/.local/share/containers/storage/overlay/169a8776fc282d460c1af4a396c741a5fb0e52b90ab3fc33f1d527a4c12a3b24/diff:/home/linuxadmin/.local/share/containers/storage/overlay/0721805f7accda2321f64aa1a39e84dddf197ab090e80851b9987a3038d406a1/diff:/home/linuxadmin/.local/share/containers/storage/overlay/1331a334e48e3951eb5f0ff195d2d1016ebd134061a4a1aedb604935ed44888e/diff:/home/linuxadmin/.local/share/containers/storage/overlay/87c6c1b32e8533e268ceef1f0db37225bf2a164b7914b33e2a1680323ec9510e/diff:/home/linuxadmin/.local/share/containers/storage/overlay/3fbab1d5b51f115925dd9bce225185f2a659e47c84eed63611add689b4f7b2ee/diff:/home/linuxadmin/.local/share/containers/storage/overlay/1f33901d7523dffca31543f9bfdecbb4eb1a5cf67e8ce4704c00636df2d70e52/diff",
                "MergedDir": "/home/linuxadmin/.local/share/containers/storage/overlay/eba6f80bc31ca411248a2283268cd55134f9afb28ace0da3b8f652006b941cc3/merged",
                "UpperDir": "/home/linuxadmin/.local/share/containers/storage/overlay/eba6f80bc31ca411248a2283268cd55134f9afb28ace0da3b8f652006b941cc3/diff",
                "WorkDir": "/home/linuxadmin/.local/share/containers/storage/overlay/eba6f80bc31ca411248a2283268cd55134f9afb28ace0da3b8f652006b941cc3/work"
            }
        },
        "Mounts": [
            {
                "Type": "volume",
                "Name": "plex_config",
                "Source": "/home/linuxadmin/.local/share/containers/storage/volumes/plex_config/_data",
                "Destination": "/config",
                "Driver": "local",
                "Mode": "",
                "Options": [
                    "nosuid",
                    "nodev",
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/mnt/nfs/anime",
                "Destination": "/mnt/anime",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/mnt/nfs/movies",
                "Destination": "/mnt/movies",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/mnt/nfs/tv",
                "Destination": "/mnt/tv",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/mnt/nfs/videos",
                "Destination": "/mnt/videos",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Dependencies": [],
        "NetworkSettings": {
            "EndpointID": "",
            "Gateway": "",
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "",
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "1900/udp": null,
                "3005/tcp": null,
                "32400/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "32400"
                    }
                ],
                "32410/udp": null,
                "32412/udp": null,
                "32413/udp": null,
                "32414/udp": null,
                "32469/tcp": null,
                "5353/udp": null,
                "8324/tcp": null
            },
            "SandboxKey": "/run/user/1000/netns/cni-eca4d9fe-0801-4e46-6cf0-054a488b53db",
            "Networks": {
                "app_net": {
                    "EndpointID": "",
                    "Gateway": "10.89.0.1",
                    "IPAddress": "10.89.0.43",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "de:28:00:6e:3d:2e",
                    "NetworkID": "app_net",
                    "DriverOpts": null,
                    "IPAMConfig": null,
                    "Links": null
                }
            }
        },
        "ExitCommand": [
            "/usr/bin/podman",
            "--root",
            "/home/linuxadmin/.local/share/containers/storage",
            "--runroot",
            "/run/user/1000/containers",
            "--log-level",
            "warning",
            "--cgroup-manager",
            "systemd",
            "--tmpdir",
            "/run/user/1000/libpod/tmp",
            "--runtime",
            "crun",
            "--storage-driver",
            "overlay",
            "--events-backend",
            "journald",
            "container",
            "cleanup",
            "--rm",
            "492a745625a4b984f6a2195f5ae620d688a5c85476b8c728cacb4081a48f0f31"
        ],
        "Namespace": "",
        "IsInfra": false,
        "Config": {
            "Hostname": "492a745625a4",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "TERM=xterm",
                "container=podman",
                "HOME=/root",
                "PGID=1000",
                "PLEX_MEDIA_SERVER_INFO_VENDOR=Docker",
                "NVIDIA_DRIVER_CAPABILITIES=compute,video,utility",
                "PLEX_MEDIA_SERVER_USER=abc",
                "PLEX_MEDIA_SERVER_HOME=/usr/lib/plexmediaserver",
                "PLEX_DOWNLOAD=https://downloads.plex.tv/plex-media-server-new",
                "TZ=US/Chicago",
                "PLEX_ARCH=amd64",
                "DEBIAN_FRONTEND=noninteractive",
                "PLEX_MEDIA_SERVER_MAX_PLUGIN_PROCS=6",
                "PLEX_MEDIA_SERVER_INFO_DEVICE=Docker Container (LinuxServer.io)",
                "PUID=1000",
                "VERSION=docker",
                "PLEX_MEDIA_SERVER_APPLICATION_SUPPORT_DIR=/config/Library/Application Support",
                "LANGUAGE=en_US.UTF-8",
                "LANG=en_US.UTF-8",
                "HOSTNAME=492a745625a4"
            ],
            "Cmd": null,
            "Image": "lscr.io/linuxserver/plex:latest",
            "Volumes": null,
            "WorkingDir": "/",
            "Entrypoint": "/init",
            "OnBuild": null,
            "Labels": {
                "PODMAN_SYSTEMD_UNIT": "container-plex.service",
                "build_version": "Linuxserver.io version:- 1.25.3.5409-f11334058-ls98 Build-date:- 2022-01-25T04:57:44+01:00",
                "maintainer": "thelamer",
                "org.opencontainers.image.authors": "linuxserver.io",
                "org.opencontainers.image.created": "2022-01-25T04:57:44+01:00",
                "org.opencontainers.image.description": "[Plex](https://plex.tv) organizes video, music and photos from personal media libraries and streams them to smart TVs, streaming boxes and mobile devices. This container is packaged as a standalone Plex Media Server. has always been a top priority. Straightforward design and bulk actions mean getting things done faster.",
                "org.opencontainers.image.documentation": "https://docs.linuxserver.io/images/docker-plex",
                "org.opencontainers.image.licenses": "GPL-3.0-only",
                "org.opencontainers.image.ref.name": "863fa5fb6bd3d3abfca0df017b1993c27dd1707e",
                "org.opencontainers.image.revision": "863fa5fb6bd3d3abfca0df017b1993c27dd1707e",
                "org.opencontainers.image.source": "https://github.com/linuxserver/docker-plex",
                "org.opencontainers.image.title": "Plex",
                "org.opencontainers.image.url": "https://github.com/linuxserver/docker-plex/packages",
                "org.opencontainers.image.vendor": "linuxserver.io",
                "org.opencontainers.image.version": "1.25.3.5409-f11334058-ls98"
            },
            "Annotations": {
                "io.container.manager": "libpod",
                "io.kubernetes.cri-o.Created": "2022-01-31T14:17:16.333559052-06:00",
                "io.kubernetes.cri-o.TTY": "false",
                "io.podman.annotations.autoremove": "TRUE",
                "io.podman.annotations.cid-file": "/run/user/1000/container-plex.service.ctr-id",
                "io.podman.annotations.init": "FALSE",
                "io.podman.annotations.privileged": "FALSE",
                "io.podman.annotations.publish-all": "FALSE",
                "org.opencontainers.image.stopSignal": "15"
            },
            "StopSignal": 15,
            "CreateCommand": [
                "/usr/bin/podman",
                "container",
                "run",
                "--cidfile=/run/user/1000/container-plex.service.ctr-id",
                "--cgroups=no-conmon",
                "--rm",
                "--sdnotify=conmon",
                "--replace",
                "--name",
                "plex",
                "--device",
                "/dev/dri:/dev/dri",
                "--env",
                "TZ=US/Chicago",
                "--env",
                "PUID=1000",
                "--env",
                "PGID=1000",
                "--env",
                "VERSION=docker",
                "--memory",
                "8g",
                "--memory-swap",
                "16g",
                "--network",
                "app_net",
                "--volume",
                "plex_config:/config",
                "--volume",
                "/mnt/nfs/anime:/mnt/anime",
                "--volume",
                "/mnt/nfs/movies:/mnt/movies",
                "--volume",
                "/mnt/nfs/tv:/mnt/tv",
                "--volume",
                "/mnt/nfs/videos:/mnt/videos",
                "--publish",
                "32400:32400/tcp",
                "--detach=True",
                "lscr.io/linuxserver/plex"
            ],
            "Umask": "0022",
            "Timeout": 0,
            "StopTimeout": 10
        },
        "HostConfig": {
            "Binds": [
                "plex_config:/config:rw,rprivate,nosuid,nodev,rbind",
                "/mnt/nfs/anime:/mnt/anime:rw,rprivate,rbind",
                "/mnt/nfs/movies:/mnt/movies:rw,rprivate,rbind",
                "/mnt/nfs/tv:/mnt/tv:rw,rprivate,rbind",
                "/mnt/nfs/videos:/mnt/videos:rw,rprivate,rbind"
            ],
            "CgroupManager": "systemd",
            "CgroupMode": "private",
            "ContainerIDFile": "/run/user/1000/container-plex.service.ctr-id",
            "LogConfig": {
                "Type": "journald",
                "Config": null,
                "Path": "",
                "Tag": "",
                "Size": "0B"
            },
            "NetworkMode": "bridge",
            "PortBindings": {
                "1900/udp": null,
                "3005/tcp": null,
                "32400/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "32400"
                    }
                ],
                "32410/udp": null,
                "32412/udp": null,
                "32413/udp": null,
                "32414/udp": null,
                "32469/tcp": null,
                "5353/udp": null,
                "8324/tcp": null
            },
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": true,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": [],
            "CapDrop": [
                "CAP_AUDIT_WRITE",
                "CAP_MKNOD",
                "CAP_NET_RAW"
            ],
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": [],
            "GroupAdd": [],
            "IpcMode": "private",
            "Cgroup": "",
            "Cgroups": "default",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "private",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "Tmpfs": {},
            "UTSMode": "private",
            "UsernsMode": "",
            "ShmSize": 65536000,
            "Runtime": "oci",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 8589934592,
            "NanoCpus": 0,
            "CgroupParent": "user.slice",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 17179869184,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 2048,
            "Ulimits": [],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "CgroupConf": null
        }
    }
]

Not sure if this is a bug or not. For now my workaround to generate the initial policy was to remove the nfs mounts from the container and attach them afterwards. Is this a known issue for Udica when nfs bind volumes are present on the container?

Add X and tty container blocks

When I run a container using X socket and tty:

$ podman run --security-opt label=type:retroshare.process --net host -v /tmp/.X11-unix:/tmp/.X11-unix -v /home/plautrba/.retroshare:/root/.retroshare -i -t retroshare bash -c 'export DISPLAY=:0; retroshare'

I need to allow the following rules:

#============= retroshare.process ==============
allow retroshare.process devtty_t:chr_file { open read write };
allow retroshare.process dri_device_t:chr_file { getattr ioctl read write };
allow retroshare.process proc_t:file { open read };
allow retroshare.process proc_t:lnk_file read;
allow retroshare.process unconfined_dbusd_t:unix_stream_socket connectto;
allow retroshare.process urandom_device_t:chr_file { open read };
allow retroshare.process xserver_t:fd use;
allow retroshare.process xserver_t:unix_stream_socket connectto;

#============= xserver_t ==============
allow xserver_t retroshare.process:dir search;
allow xserver_t retroshare.process:file { open read };

It would be great to have container blocks for these two areas which could be used via udica options, --X-access, --tty-access, or something like that.

Author: @bachradsusi

udica needs to enable the container port, not the host port

Describe the bug
When a container is run with a network port remapped, udica generates a policy that allows access to the host port. The container port is the one that needs to be enabled.

To Reproduce
Steps to reproduce the behavior:

  1. run a container with a remapped port.
  2. generate a policy for the container with udica
  3. run the container with the new policy

Expected behavior
The container should be able to access the port

Additional context
With podman run -p 9001:9090 ..., udica generates a policy with this line:

(allow process tor_port_t ( tcp_socket ( name_bind )))

The container application gets a permission error accessing port 9090, and the host reports an AVC name_bind error on port 9090.
When I change the line in the policy to:

(allow process websm_port_t ( tcp_socket ( name_bind )))

the container application can run successfully.
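
A minimal sketch of the selection the issue asks for, assuming the generator walks the port bindings from the inspect output: the name_bind rule should be derived from the container-side port (the dictionary key, e.g. "9090/tcp"), not from the HostPort value the binding maps to (9001 in the example above):

def container_ports(port_bindings):
    """Yield (port, protocol) pairs the container itself binds to.

    The keys of PortBindings/Ports are the container ports ("9090/tcp");
    the HostPort values are what the host exposes and are irrelevant for
    the in-container name_bind permission.
    """
    for key, binding in (port_bindings or {}).items():
        if binding is None:
            continue  # exposed but not published
        port, _, proto = key.partition("/")
        yield int(port), proto or "tcp"

bindings = {"9090/tcp": [{"HostIp": "", "HostPort": "9001"}]}
print(list(container_ports(bindings)))   # -> [(9090, 'tcp')]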

Error when creating a policy: `Couldn't create policy: 'PERFMON'`

Describe the bug
Unable to create a policy from a container on Fedora 33.

Udica version

0.2.3

podman version

Version:      3.1.0
API Version:  3.1.0
Go Version:   go1.15.8
Built:        Mon Apr 12 14:39:16 2021
OS/Arch:      linux/amd64

Podman inspect output

podman inspect selinuxd
[
    {
        "Id": "b87ebdc9a0aaec8458ceab844f56889136b413706fd2eda1bfcd1c6c6c0d52fd",
        "Created": "2021-04-23T06:24:43.858960438Z",
        "Path": "/usr/bin/selinuxdctl",
        "Args": [
            "daemon"
        ],
        "State": {
            "OciVersion": "1.0.2-dev",
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 14679,
            "ConmonPid": 14674,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2021-04-23T06:24:52.923486801Z",
            "FinishedAt": "0001-01-01T00:00:00Z",
            "Healthcheck": {
                "Status": "",
                "FailingStreak": 0,
                "Log": null
            }
        },
        "Image": "287e912c9e11e56391b395b84ec1929df469d9de610d07cab70e21f3eb28e7ca",
        "ImageName": "quay.io/jaosorior/selinuxd-fedora:latest",
        "Rootfs": "",
        "Pod": "",
        "ResolvConfPath": "/run/containers/storage/overlay-containers/b87ebdc9a0aaec8458ceab844f56889136b413706fd2eda1bfcd1c6c6c0d52fd/userdata/resolv.conf",
        "HostnamePath": "/run/containers/storage/overlay-containers/b87ebdc9a0aaec8458ceab844f56889136b413706fd2eda1bfcd1c6c6c0d52fd/userdata/hostname",
        "HostsPath": "/run/containers/storage/overlay-containers/b87ebdc9a0aaec8458ceab844f56889136b413706fd2eda1bfcd1c6c6c0d52fd/userdata/hosts",
        "StaticDir": "/var/lib/containers/storage/overlay-containers/b87ebdc9a0aaec8458ceab844f56889136b413706fd2eda1bfcd1c6c6c0d52fd/userdata",
        "OCIConfigPath": "/var/lib/containers/storage/overlay-containers/b87ebdc9a0aaec8458ceab844f56889136b413706fd2eda1bfcd1c6c6c0d52fd/userdata/config.json",
        "OCIRuntime": "crun",
        "ConmonPidFile": "/run/containers/storage/overlay-containers/b87ebdc9a0aaec8458ceab844f56889136b413706fd2eda1bfcd1c6c6c0d52fd/userdata/conmon.pid",
        "Name": "selinuxd",
        "RestartCount": 0,
        "Driver": "overlay",
        "MountLabel": "system_u:object_r:container_file_t:s0:c242,c471",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "EffectiveCaps": [
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_DAC_READ_SEARCH",
            "CAP_FOWNER",
            "CAP_FSETID",
            "CAP_KILL",
            "CAP_SETGID",
            "CAP_SETUID",
            "CAP_SETPCAP",
            "CAP_LINUX_IMMUTABLE",
            "CAP_NET_BIND_SERVICE",
            "CAP_NET_BROADCAST",
            "CAP_NET_ADMIN",
            "CAP_NET_RAW",
            "CAP_IPC_LOCK",
            "CAP_IPC_OWNER",
            "CAP_SYS_MODULE",
            "CAP_SYS_RAWIO",
            "CAP_SYS_CHROOT",
            "CAP_SYS_PTRACE",
            "CAP_SYS_PACCT",
            "CAP_SYS_ADMIN",
            "CAP_SYS_BOOT",
            "CAP_SYS_NICE",
            "CAP_SYS_RESOURCE",
            "CAP_SYS_TIME",
            "CAP_SYS_TTY_CONFIG",
            "CAP_MKNOD",
            "CAP_LEASE",
            "CAP_AUDIT_WRITE",
            "CAP_AUDIT_CONTROL",
            "CAP_SETFCAP",
            "CAP_MAC_OVERRIDE",
            "CAP_MAC_ADMIN",
            "CAP_SYSLOG",
            "CAP_WAKE_ALARM",
            "CAP_BLOCK_SUSPEND",
            "CAP_AUDIT_READ",
            "CAP_PERFMON",
            "CAP_BPF"
        ],
        "BoundingCaps": [
            "CAP_CHOWN",
            "CAP_DAC_OVERRIDE",
            "CAP_DAC_READ_SEARCH",
            "CAP_FOWNER",
            "CAP_FSETID",
            "CAP_KILL",
            "CAP_SETGID",
            "CAP_SETUID",
            "CAP_SETPCAP",
            "CAP_LINUX_IMMUTABLE",
            "CAP_NET_BIND_SERVICE",
            "CAP_NET_BROADCAST",
            "CAP_NET_ADMIN",
            "CAP_NET_RAW",
            "CAP_IPC_LOCK",
            "CAP_IPC_OWNER",
            "CAP_SYS_MODULE",
            "CAP_SYS_RAWIO",
            "CAP_SYS_CHROOT",
            "CAP_SYS_PTRACE",
            "CAP_SYS_PACCT",
            "CAP_SYS_ADMIN",
            "CAP_SYS_BOOT",
            "CAP_SYS_NICE",
            "CAP_SYS_RESOURCE",
            "CAP_SYS_TIME",
            "CAP_SYS_TTY_CONFIG",
            "CAP_MKNOD",
            "CAP_LEASE",
            "CAP_AUDIT_WRITE",
            "CAP_AUDIT_CONTROL",
            "CAP_SETFCAP",
            "CAP_MAC_OVERRIDE",
            "CAP_MAC_ADMIN",
            "CAP_SYSLOG",
            "CAP_WAKE_ALARM",
            "CAP_BLOCK_SUSPEND",
            "CAP_AUDIT_READ",
            "CAP_PERFMON",
            "CAP_BPF"
        ],
        "ExecIDs": [],
        "GraphDriver": {
            "Name": "overlay",
            "Data": {
                "LowerDir": "/var/lib/containers/storage/overlay/f5079b9338fd0ea2aa28909e980400a5d03ec79d0d50f5d5beee7bbe7e33c87d/diff:/var/lib/containers/storage/overlay/4f6c1911868506b4e4876db275784eaa72e47ef76b763a2f7595696e379624e6/diff:/var/lib/containers/storage/overlay/36e6d1ca1019d1f90e809a0fd8ec92e9d84fa47afeeefa8898d2beed206f745a/diff:/var/lib/containers/storage/overlay/ad9e92539a859d4f075a713cd426d917f15c200a9b42c631f1eb4aff752ed706/diff:/var/lib/containers/storage/overlay/560fc2df26ee7f35189813d3837095337bd73eb166b569108acef00da10728c3/diff:/var/lib/containers/storage/overlay/27d65299ea8a2ae3431fa4161da0a141426e49da67273947ad5a439df69bba96/diff:/var/lib/containers/storage/overlay/efcf60e50823c88769df575821c86f5bc1390f7d34bcf9464a40d105bf0bd99e/diff",
                "MergedDir": "/var/lib/containers/storage/overlay/aafbc7489b4a074711bbdc2dc01ac3f8395ee284f29829bec4e7163adffe2200/merged",
                "UpperDir": "/var/lib/containers/storage/overlay/aafbc7489b4a074711bbdc2dc01ac3f8395ee284f29829bec4e7163adffe2200/diff",
                "WorkDir": "/var/lib/containers/storage/overlay/aafbc7489b4a074711bbdc2dc01ac3f8395ee284f29829bec4e7163adffe2200/work"
            }
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/sys/fs/selinux",
                "Destination": "/sys/fs/selinux",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "noexec",
                    "nosuid",
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/var/lib/selinux",
                "Destination": "/var/lib/selinux",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/selinux",
                "Destination": "/etc/selinux",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/etc/selinux.d",
                "Destination": "/etc/selinux.d",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Dependencies": [],
        "NetworkSettings": {
            "EndpointID": "",
            "Gateway": "10.88.0.1",
            "IPAddress": "10.88.0.3",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "2e:e4:66:48:33:08",
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/run/netns/cni-2f49e675-a459-6845-ac97-7e2e896a948f",
            "Networks": {
                "podman": {
                    "EndpointID": "",
                    "Gateway": "10.88.0.1",
                    "IPAddress": "10.88.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "2e:e4:66:48:33:08",
                    "NetworkID": "podman",
                    "DriverOpts": null,
                    "IPAMConfig": null,
                    "Links": null
                }
            }
        },
        "ExitCommand": [
            "/usr/bin/podman",
            "--root",
            "/var/lib/containers/storage",
            "--runroot",
            "/run/containers/storage",
            "--log-level",
            "warning",
            "--cgroup-manager",
            "systemd",
            "--tmpdir",
            "/run/libpod",
            "--runtime",
            "crun",
            "--storage-driver",
            "overlay",
            "--storage-opt",
            "overlay.mountopt=nodev",
            "--events-backend",
            "journald",
            "container",
            "cleanup",
            "b87ebdc9a0aaec8458ceab844f56889136b413706fd2eda1bfcd1c6c6c0d52fd"
        ],
        "Namespace": "",
        "IsInfra": false,
        "Config": {
            "Hostname": "b87ebdc9a0aa",
            "Domainname": "",
            "User": "root",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "TERM=xterm",
                "container=oci",
                "DISTTAG=f33container",
                "FGC=f33",
                "HOME=/root",
                "HOSTNAME=b87ebdc9a0aa"
            ],
            "Cmd": [
                "daemon"
            ],
            "Image": "quay.io/jaosorior/selinuxd-fedora:latest",
            "Volumes": null,
            "WorkingDir": "/",
            "Entrypoint": "/usr/bin/selinuxdctl",
            "OnBuild": null,
            "Labels": {
                "description": "selinuxd is a daemon that listens for files in /etc/selinux.d/ and installs the relevant policies.",
                "io.buildah.version": "1.19.4",
                "license": "MIT",
                "name": "selinuxd",
                "vendor": "Fedora Project",
                "version": "33"
            },
            "Annotations": {
                "io.container.manager": "libpod",
                "io.containers.trace-syscall": "of:/tmp/selinuxd-seccomp.json",
                "io.kubernetes.cri-o.Created": "2021-04-23T06:24:43.858960438Z",
                "io.kubernetes.cri-o.TTY": "false",
                "io.podman.annotations.autoremove": "FALSE",
                "io.podman.annotations.init": "FALSE",
                "io.podman.annotations.privileged": "TRUE",
                "io.podman.annotations.publish-all": "FALSE",
                "org.opencontainers.image.stopSignal": "15"
            },
            "StopSignal": 15,
            "CreateCommand": [
                "podman",
                "run",
                "--name",
                "selinuxd",
                "-d",
                "--annotation",
                "io.containers.trace-syscall=of:/tmp/selinuxd-seccomp.json",
                "--privileged",
                "-v",
                "/sys/fs/selinux:/sys/fs/selinux",
                "-v",
                "/var/lib/selinux:/var/lib/selinux",
                "-v",
                "/etc/selinux:/etc/selinux",
                "-v",
                "/etc/selinux.d:/etc/selinux.d",
                "quay.io/jaosorior/selinuxd-fedora:latest",
                "daemon"
            ],
            "Umask": "0022"
        },
        "HostConfig": {
            "Binds": [
                "/sys/fs/selinux:/sys/fs/selinux:rw,rprivate,noexec,nosuid,rbind",
                "/var/lib/selinux:/var/lib/selinux:rw,rprivate,rbind",
                "/etc/selinux:/etc/selinux:rw,rprivate,rbind",
                "/etc/selinux.d:/etc/selinux.d:rw,rprivate,rbind"
            ],
            "CgroupManager": "systemd",
            "CgroupMode": "private",
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "k8s-file",
                "Config": null,
                "Path": "/var/lib/containers/storage/overlay-containers/b87ebdc9a0aaec8458ceab844f56889136b413706fd2eda1bfcd1c6c6c0d52fd/userdata/ctr.log",
                "Tag": "",
                "Size": "0B"
            },
            "NetworkMode": "bridge",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": [],
            "CapDrop": [
                "CAP_CHECKPOINT_RESTORE"
            ],
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": [],
            "GroupAdd": [],
            "IpcMode": "private",
            "Cgroup": "",
            "Cgroups": "default",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "private",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [],
            "Tmpfs": {},
            "UTSMode": "private",
            "UsernsMode": "",
            "ShmSize": 65536000,
            "Runtime": "oci",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": 0,
            "OomKillDisable": false,
            "PidsLimit": 2048,
            "Ulimits": [
                {
                    "Name": "RLIMIT_NOFILE",
                    "Soft": 1048576,
                    "Hard": 1048576
                },
                {
                    "Name": "RLIMIT_NPROC",
                    "Soft": 4194304,
                    "Hard": 4194304
                }
            ],
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "CgroupConf": null
        }
    }
]

Error output

Couldn't create policy: 'PERFMON'
Error: Process completed with exit code 4.

Expected behavior
It should generate the policy

Additional context
We have this set up in our CI. You can see the failure here: JAORMX/selinuxd#73
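
The failing key suggests that CAP_PERFMON (added in Linux 5.8) is simply missing from the capability table this udica version translates EffectiveCaps with, so the lookup raises the KeyError reported as 'PERFMON'. A minimal sketch, with a deliberately incomplete illustrative table, of a lookup that warns about unknown capabilities instead of aborting:

import sys

# Illustrative subset only; a complete table would cover every Linux capability
CAP_TO_POLICY = {
    "CAP_CHOWN": "chown",
    "CAP_DAC_OVERRIDE": "dac_override",
    "CAP_NET_BIND_SERVICE": "net_bind_service",
    "CAP_SYS_ADMIN": "sys_admin",
}

def capability_permissions(effective_caps):
    """Translate CAP_* names to policy permissions, warning on unknown ones."""
    perms = []
    for cap in effective_caps:
        if cap not in CAP_TO_POLICY:
            print(f"Warning: capability {cap} is unknown to this table and "
                  f"was not added to the policy.", file=sys.stderr)
            continue
        perms.append(CAP_TO_POLICY[cap])
    return perms

print(capability_permissions(["CAP_CHOWN", "CAP_PERFMON", "CAP_BPF"]))
# -> ['chown']; CAP_PERFMON and CAP_BPF only produce warnings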

SQL Client cannot connect to instance in web

Describe the bug
I have a PHP application that I develop and run without problems with the following:

podman run --privileged --rm -v (pwd)/einvoices/:/var/www/html/apidian2020/storage/app/public/:Z -p 9000:80 api-facturacion:production

Since I wanted to try udica, I decided to start:

podman run --rm -v (pwd)/einvoices/:/var/www/html/apidian2020/storage/app/public/:rw -p 9000:80 api-facturacion:production

This created the following policy:

(block rocky-einvoice
    (blockinherit container)
    (blockinherit restricted_net_container)
    (allow process process ( capability ( audit_write chown dac_override fowner fsetid kill net_bind_service setfcap setgid setpcap setuid sys_chroot )))

    (allow process mysqld_port_t ( tcp_socket (  name_bind )))
    (allow process http_port_t ( tcp_socket (  name_bind )))
    (allow process user_home_t ( dir ( add_name create getattr ioctl lock open read remove_name rmdir search setattr write )))
    (allow process user_home_t ( file ( append create getattr ioctl lock map open read rename setattr unlink write )))
    (allow process user_home_t ( sock_file ( append getattr open read write )))
)

I installed in selinux:

sudo semodule -i rocky-einvoice.cil /usr/share/udica/templates/{base_container.cil,net_container.cil}

and run:

podman run --security-opt label=type:rocky-einvoice.process --rm -v (pwd)/einvoices/:/var/www/html/apidian2020/storage/app/public/:rw -p 9000:80 api-facturacion:production

and the MySQL migration dies; what I can see from journalctl is this:

AVC avc:  denied  { name_connect } for  pid=883848 comm="php" dest=3306 scontext=system_u:system_r:rocky-einvoice.process:s0:c116,c642 tcontext=system_u:object_r:mysqld_port_t:s0 tclass=tcp_socket permissive=0

Any help? Thanks!

To Reproduce
Create an instance in any cloud platform and run:

podman run --security-opt label=type:rocky-einvoice.process -it --rm --net host mysql mysql -vvv -h$(host) -uroot -p

Result:

ERROR 2003 (HY000): Can't connect to MySQL server on 'redactedhost' (13)

rocky-einvoice is the policy generated by udica

logs:

AVC avc:  denied  { name_connect } for  pid=916447 comm="mysql" dest=3306 scontext=system_u:system_r:rocky-einvoice.process:s0:c263,c861 tcontext=system_u:object_r:mysqld_port_t:s0 tclass=tcp_socket permissive=0

Expected behavior:

Being able to connect to the SQL instance externally

Additional context
Any resources to learn more about udica? Thanks! Amazing product!

add a command-line option to print the udica version

Is your feature request related to a problem? Please describe.
Currently, the only way to determine your udica version is to examine the source code.

Describe the solution you'd like
Add support for "--version" and "-V" command-line options, which would just print the version string and then exit.

Describe alternatives you've considered
None

Additional context
An option like this is commonly available for command-line tools.

udica crashes parsing json file, 'NoneType' has no attribute 'split'

Describe the bug
udica crashes when parsing the JSON file, giving an error message that an AttributeError occurred in policy.py: 'NoneType' has no attribute 'split'. A policy file is created; however, I am not sure if it is complete/usable (I'm no SELinux expert).

To Reproduce
Steps to reproduce the behavior:

  1. create json file with podman inspect containername > data.json
  2. as root, run udica -j data.json policyname
  3. udica crashes and returns 1

Expected behavior
udica creates a great new SELinux policy module and exits cleanly (returns 0)

Additional context
entire error message:

Traceback (most recent call last):
  File "/usr/bin/udica", line 11, in <module>
    load_entry_point('udica==0.1.7', 'console_scripts', 'udica')()
  File "/usr/lib/python3.7/site-packages/udica/__main__.py", line 109, in main
    create_policy(opts, container_caps, container_mounts, container_ports)
  File "/usr/lib/python3.7/site-packages/udica/policy.py", line 172, in create_policy
    contexts = list_contexts(item['source'])
  File "/usr/lib/python3.7/site-packages/udica/policy.py", line 64, in list_contexts
    contexts.append(context.split(':')[2])
AttributeError: 'NoneType' object has no attribute 'split'

EDIT:
I forgot to mention that the container is unprivileged, set up and run by the user podmanuser. This user also creates the JSON file, which is then copied to root as apparently udica needs to be run as root. Thus running podman ps -a as root will not list the container. Maybe that's an issue for udica?

Add policy generation for fifo_files

Is your feature request related to a problem? Please describe.
When using udica to generate SELinux policies I am unable to get access to the fifo_files in my container mounts.
Describe the solution you'd like
I would like the policy generated by udica to include the same access to fifo_files as it does for sock_files within the mount points of my containers.
Describe alternatives you've considered
Modify the CIL policy by hand before loading the module. Have a flag in udica for the different object classes that I want to be able to access within the mounts of my container.
Additional context
Containers can currently manage FIFOs with the following type label: container_file_t
https://github.com/containers/container-selinux/blob/d89a599e3d3c362ec178600ed04c72f337c10d28/container.te#L796
