
ospd-openvas's Introduction


ospd-openvas


ospd-openvas is an OSP server implementation to remotely control OpenVAS Scanner and Notus Scanner.

Once running, you need to configure OpenVAS Scanner and Notus Scanner for the Greenbone Vulnerability Manager, for example via the web interface Greenbone Security Assistant. Then you can create scan tasks to use both scanners.

Installation

Requirements

Python 3.7 and later is supported.

ospd-openvas has dependencies on the following Python packages:

  • defusedxml
  • deprecated
  • lxml
  • packaging
  • paho-mqtt
  • psutil
  • python-gnupg
  • redis

Mandatory configuration

The ospd-openvas startup parameter --lock-file-dir, or the lock_file_dir config parameter in the ospd.conf config file, needs to point to the same location/path that the gvmd daemon and the openvas command line tool use (default: <install-prefix>/var/run). Example configs for both are shipped within the config sub-folder of this project.

In addition, to be able to use Notus, ospd-openvas must connect to an MQTT broker such as Mosquitto for communication. The correct broker address must be given with the parameter --mqtt-broker-address (default: localhost) and the corresponding port with --mqtt-broker-port (default: 1883).
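
For example, assuming an install prefix of /usr/local and a local Mosquitto broker, ospd-openvas could be started with matching settings like this (values are illustrative):

ospd-openvas --lock-file-dir /usr/local/var/run --mqtt-broker-address localhost --mqtt-broker-port 1883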

Please see the Details section of the GVM release notes for more details.

Optional configuration

Please note that although you can run openvas (launched from an ospd-openvas process) as a user without elevated privileges, it is recommended that you start openvas as root since a number of Network Vulnerability Tests (NVTs) require root privileges to perform certain operations like packet forgery. If you run openvas as a user without permission to perform these operations, your scan results are likely to be incomplete.

As openvas will be launched from an ospd-openvas process via sudo, the following configuration is required in the sudoers file:

sudo visudo

Add this line to allow the user running ospd-openvas to launch openvas with root permissions:

<user> ALL = NOPASSWD: <install prefix>/sbin/openvas

If you set an install prefix, you have to update the path in the sudoers file too:

Defaults        secure_path=<existing paths...>:<install prefix>/sbin

Usage

There are no special usage aspects for this module beyond the generic usage guide.

Please follow the general usage guide for ospd-based scanners:

https://github.com/greenbone/ospd-openvas/blob/main/docs/USAGE-ospd-scanner.md

Support

For any question on the usage of ospd-openvas please use the Greenbone Community Portal. If you found a problem with the software, please create an issue on GitHub. If you are a Greenbone customer you may alternatively or additionally forward your issue to the Greenbone Support Portal.

Maintainer

This project is maintained by Greenbone AG.

Contributing

Your contributions are highly appreciated. Please create a pull request on GitHub. Bigger changes need to be discussed with the development team via the issues section at GitHub first.

For development you should use poetry to keep your Python packages separated in different environments. First install poetry via pip:

python3 -m pip install --user poetry

Afterwards run

poetry install

in the checkout directory of ospd-openvas (the directory containing the pyproject.toml file) to install all dependencies including the packages only required for development.

The ospd-openvas repository uses autohooks to apply linting and auto formatting via git hooks. Please ensure the git hooks are active.

poetry install
poetry run autohooks activate --force

License

Copyright (C) 2018-2022 Greenbone AG

Licensed under the GNU Affero General Public License v3.0 or later.

ospd-openvas's People

Contributors

arnostiefvater, bjoernricks, cfi-gb, davidak, dependabot-preview[bot], dependabot[bot], dexus, enderbee, fabaff, flowdalic, fv3rdugo, greenbonebot, j-licht, janowagner, jjnicola, k3v3n, kraemii, kroosec, mattmundell, mergify[bot], nichtsfrei, pascalholthaus, szlin, thorsten-passfeld, timopollmeier, wiegandm, y0urself, yywing


ospd-openvas's Issues

Feature Request: Align logging timezone with gvmd, gsad and openvas (UTC)

Environment:

Ubuntu 18.04
Greenbone Vulnerability Manager 9.0.1~git-e250176b-gvmd-9.0
GIT revision e250176b-gvmd-9.0
Manager DB revision 221
gvm@ov-master-eqi:~$ gsad --version
Greenbone Security Assistant 9.0.1~git-9fb2e63cd-gsa-9.0
gvm@ov-master-eqi:~$ openvas --version
OpenVAS 7.0.1
gvm-libs 11.0.1

Expected behaviour:
Use the same timezone for logging as gvmd, gsad and openvas.

Current behaviour:
Currently ospd-openvas uses a specific time zone for event logging. This would be fine if gsad, gvmd and openvas used it as well. Unfortunately, gsad, gvmd and openvas use UTC for event logging, so it becomes difficult to track issues between gvmd, openvas and ospd-openvas because the timezones are not the same, especially since none of those time zones is the local time zone.

It would be nice if all tools could use the same time zone, and preferably the local time zone of the machine.

This feature request is complementary to #287
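
For reference, aligning the ospd-openvas side with the other tools would essentially mean rendering log timestamps in UTC. A minimal sketch of the idea using only Python's standard logging module (not the actual ospd-openvas logging setup):

import logging
import time

# Render asctime in UTC instead of the local/otherwise configured time zone
logging.Formatter.converter = time.gmtime
logging.basicConfig(
    format="%(asctime)s %(name)s: %(levelname)s: %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    level=logging.INFO,
)
logging.getLogger("ospd.network").info("timestamps now rendered in UTC")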

Container "greenbone/ospd-openvas:stable" does not start

I followed the manual on https://greenbone.github.io/docs/latest/21.04/container to use the Greenbone Docker containers.
The container "greenbone/ospd-openvas:stable" does not start. It's restarting all the time

Expected behavior

It should start

Actual behavior

It keeps restarting every time.

Steps to reproduce

Just start the Containers from the compose file

GVM versions

Latest stable docker containers

Environment

Operating system:
Ubuntu 20.04.4 LTS

Installation method / source: (packages, source installation)
Docker-compose

Logfiles

ospd-openvas_1  | usage: ospd-openvas [-h] [--version] [-s [CONFIG]] [--log-config [LOG_CONFIG]]
ospd-openvas_1  |                     [-p PORT] [-b ADDRESS] [-u UNIX_SOCKET]
ospd-openvas_1  |                     [--pid-file PID_FILE] [--lock-file-dir LOCK_FILE_DIR]
ospd-openvas_1  |                     [-m SOCKET_MODE] [-k KEY_FILE] [-c CERT_FILE]
ospd-openvas_1  |                     [--ca-file CA_FILE] [-L LOG_LEVEL] [-f]
ospd-openvas_1  |                     [-t STREAM_TIMEOUT] [-l LOG_FILE] [--niceness NICENESS]
ospd-openvas_1  |                     [--scaninfo-store-time SCANINFO_STORE_TIME]
ospd-openvas_1  |                     [--list-commands] [--max-scans MAX_SCANS]
ospd-openvas_1  |                     [--min-free-mem-scan-queue MIN_FREE_MEM_SCAN_QUEUE]
ospd-openvas_1  |                     [--max-queued-scans MAX_QUEUED_SCANS]
ospd-openvas_1  |                     [--mqtt-broker-address MQTT_BROKER_ADDRESS]
ospd-openvas_1  |                     [--mqtt-broker-port MQTT_BROKER_PORT]
ospd-openvas_1  |                     [--notus-feed-dir NOTUS_FEED_DIR]
ospd-openvas_1  |                     [--disable-notus-hashsum-verification DISABLE_NOTUS_HASHSUM_VERIFICATION]
ospd-openvas_1  | ospd-openvas: error: argument --disable-notus-hashsum-verification: expected one argument
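
The last log line shows that --disable-notus-hashsum-verification was passed without a value. A minimal argparse sketch (illustrative only, not the actual ospd-openvas argument parser) reproduces the same failure mode:

import argparse

parser = argparse.ArgumentParser(prog="ospd-openvas")
# An option declared to take a value ...
parser.add_argument("--disable-notus-hashsum-verification", default="false")

# ... accepts an explicit argument
parser.parse_args(["--disable-notus-hashsum-verification", "true"])
# ... but exits with "expected one argument" when passed as a bare flag
parser.parse_args(["--disable-notus-hashsum-verification"])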

ospd-openvas/config/ospd.conf

There's a mistake (a duplicated "unix_socket" option) in ospd-openvas/config/ospd.conf which will produce:

RuntimeError: Error while parsing config file [..]ospd.conf. Error was While reading from '[..]ospd.conf' [line 5]: option 'unix_socket' in section 'OSPD - openvas' already exists

The second occurrence should be "pid_file", of course.
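
The error text comes from Python's configparser, which refuses a repeated option inside one section. A minimal sketch (values are illustrative) that reproduces the same message:

import configparser

config = configparser.ConfigParser()
# The duplicated unix_socket key raises configparser.DuplicateOptionError,
# which matches the error text in the RuntimeError above.
config.read_string("""
[OSPD - openvas]
unix_socket = /run/ospd/ospd-openvas.sock
unix_socket = /run/ospd/ospd-openvas.pid
""")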

python 3.9 support, importlib.metadata.PackageNotFoundError: ospd-openvas

Environment: Fedora 33

python3-3.9.0-1.fc33.x86_64

export PYTHONPATH=/opt/atomicorp/lib/python3.9/site-packages 

[gvm@fc33 ~]$ /opt/atomicorp/bin/ospd-openvas --help
Traceback (most recent call last):
  File "/opt/atomicorp/bin/ospd-openvas", line 33, in <module>
    sys.exit(load_entry_point('ospd-openvas==21.4.0', 'console_scripts', 'ospd-openvas')())
  File "/opt/atomicorp/bin/ospd-openvas", line 22, in importlib_load_entry_point
    for entry_point in distribution(dist_name).entry_points
  File "/usr/lib64/python3.9/importlib/metadata.py", line 524, in distribution
    return Distribution.from_name(distribution_name)
  File "/usr/lib64/python3.9/importlib/metadata.py", line 187, in from_name
    raise PackageNotFoundError(name)
importlib.metadata.PackageNotFoundError: ospd-openvas

I believe the format Python 3.9 uses for handling metadata has changed. So my guess is that the Python setup isn't installing things correctly when you declare a PYTHONPATH for 3.9:

export PYTHONPATH=/opt/atomicorp/lib/python3.9/site-packages
python3 setup.py install --prefix=/opt/atomicorp/

This does, however, work just fine if you do not declare PYTHONPATH and write (or overwrite, in this case) to the system site-packages at /usr/lib/python3.9/site-packages.
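
On Python 3.9 the console script resolves its entry point through importlib.metadata, which only finds packages whose .dist-info/.egg-info directories are on sys.path. A small, illustrative check for whether the metadata is visible from the current interpreter and PYTHONPATH:

from importlib import metadata

try:
    dist = metadata.distribution("ospd-openvas")
    print("found", dist.metadata["Name"], dist.version)
except metadata.PackageNotFoundError:
    # No .dist-info/.egg-info for ospd-openvas on sys.path, which is exactly
    # what the traceback above reports.
    print("ospd-openvas metadata not found on sys.path")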

[22.4.2] - raise Exception("GPG verification of notus sha256sums failed")

Running into an error with the latest version 22.4.2 (downgrading back to 22.4.0 resolves the problem).

Expected behavior

Starting up and running without any issues (like 22.4.0 did and still does for me).

Actual behavior

Running into the following error since upgrading from 22.4.0 to 22.4.2.

Sep 06 18:34:57 hostname ospd-openvas[4407]: Traceback (most recent call last):
Sep 06 18:34:57 hostname ospd-openvas[4407]:   File "/usr/host/bin/ospd-openvas", line 33, in <module>
Sep 06 18:34:57 hostname ospd-openvas[4407]:     sys.exit(load_entry_point('ospd-openvas==22.4.2', 'console_scripts', 'ospd-openvas')())
Sep 06 18:34:57 hostname ospd-openvas[4407]:   File "/usr/lib/python3.10/site-packages/ospd_openvas/daemon.py", line 1243, in main
Sep 06 18:34:57 hostname ospd-openvas[4407]:     daemon_main('OSPD - openvas', OSPDopenvas, NotusParser())
Sep 06 18:34:57 hostname ospd-openvas[4407]:   File "/usr/lib/python3.10/site-packages/ospd/main.py", line 164, in main
Sep 06 18:34:57 hostname ospd-openvas[4407]:     daemon.init(server)
Sep 06 18:34:57 hostname ospd-openvas[4407]:   File "/usr/lib/python3.10/site-packages/ospd_openvas/daemon.py", line 524, in init
Sep 06 18:34:57 hostname ospd-openvas[4407]:     self.update_vts()
Sep 06 18:34:57 hostname ospd-openvas[4407]:   File "/usr/lib/python3.10/site-packages/ospd_openvas/daemon.py", line 649, in update_vts
Sep 06 18:34:57 hostname ospd-openvas[4407]:     self.nvti.notus.reload_cache()
Sep 06 18:34:57 hostname ospd-openvas[4407]:   File "/usr/lib/python3.10/site-packages/ospd_openvas/notus.py", line 119, in reload_cache
Sep 06 18:34:57 hostname ospd-openvas[4407]:     if self._verifier(f):
Sep 06 18:34:57 hostname ospd-openvas[4407]:   File "/usr/lib/python3.10/site-packages/ospd_openvas/gpg_sha_verifier.py", line 121, in verify
Sep 06 18:34:57 hostname ospd-openvas[4407]:     assumed_name = sha256sums().get(hash_sum)
Sep 06 18:34:57 hostname ospd-openvas[4407]:   File "/usr/lib/python3.10/site-packages/ospd_openvas/gpg_sha_verifier.py", line 63, in internal_reload
Sep 06 18:34:57 hostname ospd-openvas[4407]:     return config.on_verification_failure(None)
Sep 06 18:34:57 hostname ospd-openvas[4407]:   File "/usr/lib/python3.10/site-packages/ospd_openvas/notus.py", line 50, in on_hash_sum_verification_failure
Sep 06 18:34:57 hostname ospd-openvas[4407]:     raise Exception("GPG verification of notus sha256sums failed")
Sep 06 18:34:57 hostname ospd-openvas[4407]: Exception: GPG verification of notus sha256sums failed
Sep 06 18:34:57 hostname ospd-openvas[4407]: Exception ignored in atexit callback: <function exit_cleanup at 0x7f5245740310>
Sep 06 18:34:57 hostname ospd-openvas[4407]: Traceback (most recent call last):
Sep 06 18:34:57 hostname ospd-openvas[4407]:   File "/usr/lib/python3.10/site-packages/ospd/main.py", line 86, in exit_cleanup
Sep 06 18:34:57 hostname ospd-openvas[4407]:     sys.exit()
Sep 06 18:34:57 hostname ospd-openvas[4407]: SystemExit:
Sep 06 18:34:57 hostname systemd[1]: ospd-openvas.service: Main process exited, code=exited, status=1/FAILURE
Sep 06 18:34:57 hostname systemd[1]: ospd-openvas.service: Failed with result 'exit-code'.

Steps to reproduce

  1. upgrade ospd-openvas from the previously working 22.4.0 to 22.4.2
  2. start service
  3. run into error

GVM versions

gsa: Greenbone Security Assistant 22.04.0

gvm: Greenbone Vulnerability Manager 22.4.0~dev1 (<- note: ~dev1 was somehow introduced between tag 22.4 and the actual release tag 22.4.0 with the change to PROJECT_DEV_VERSION 1 in CMakeLists.txt: greenbone/gvmd@v22.4...v22.4.0)
Manager DB revision 250

openvas-scanner: OpenVAS 22.4.0

gvm-libs: gvm-libs 22.4.0

Environment

Operating system: Exherbo Linux

Installation method / source: source-based packages

Logfiles

/var/log/gvm/ospd-openvas.log

OSPD[14136] 2022-09-06 16:52:33,999: INFO: (ospd.main) Starting OSPd OpenVAS version 22.4.2.
OSPD[14136] 2022-09-06 16:52:34,007: WARNING: (ospd_openvas.messaging.mqtt) Could not connect to MQTT broker, error was: [Errno 111] Connection refused. Trying again in 10s.
OSPD[14136] 2022-09-06 16:52:44,020: WARNING: (ospd_openvas.messaging.mqtt) Could not connect to MQTT broker, error was: [Errno 111] Connection refused. Trying again in 10s.
OSPD[14136] 2022-09-06 16:52:44,054: INFO: (ospd_openvas.daemon) Loading VTs. Scans will be [requested|queued] until VTs are loaded. This may take a few minutes, please wait...
OSPD[14136] 2022-09-06 16:52:44,242: WARNING: (gnupg) potential problem: ERROR: add_keyblock_resource 33587201
OSPD[14136] 2022-09-06 16:52:44,243: WARNING: (gnupg) potential problem: ERROR: keydb_search 33554445
OSPD[14136] 2022-09-06 16:52:44,243: WARNING: (gnupg) potential problem: ERROR: keydb_search 33554445
OSPD[14136] 2022-09-06 16:52:44,243: WARNING: (gnupg) gpg returned a non-zero error code: 2
OSPD[14136] 2022-09-06 16:52:44,252: INFO: (ospd.main) Shutting-down server ...

Note for the MQTT broker WARNING: I've not yet set up MQTT & packaged notus-scanner, so of course I already had that warning with 22.4.0 previously as well.

Additional information:

# ls -la /var/lib/notus/advisories
insgesamt 46828
drwxrwxr-x 2 gvm gvm     4096  6. Sep 12:42 .
drwxrwxr-x 4 gvm gvm     4096  6. Sep 12:42 ..
-rw-rw-r-- 1 gvm gvm 14294650  6. Sep 06:38 euleros.notus
-rw-rw-r-- 1 gvm gvm  9050712  6. Sep 06:38 mageia.notus
-rw-rw-r-- 1 gvm gvm      318  6. Sep 06:38 sha256sums
-rw-rw-r-- 1 gvm gvm      833  6. Sep 06:38 sha256sums.asc
-rw-rw-r-- 1 gvm gvm  2522789  6. Sep 06:38 slackware.notus
-rw-rw-r-- 1 gvm gvm 22062329  6. Sep 06:38 suse.notus
# ls -la /var/lib/gvm/gvmd/gnupg
insgesamt 32
drwx------ 4 gvm gvm 4096  6. Sep 18:56 .
drwxr-xr-x 4 gvm gvm 4096  6. Sep 17:35 ..
drwx------ 2 gvm gvm 4096 21. Okt 2019  openpgp-revocs.d
drwx------ 2 gvm gvm 4096 21. Okt 2019  private-keys-v1.d
-rw------- 1 gvm gvm  818 21. Okt 2019  pubring.kbx
-rw------- 1 gvm gvm   32 21. Okt 2019  pubring.kbx~
-rw------- 1 gvm gvm  600  6. Sep 18:56 random_seed
-rw------- 1 gvm gvm 1280 21. Okt 2019  trustdb.gpg
# cat /etc/gvm/ospd-openvas.conf 
[OSPD - openvas]
log_level = INFO
socket_mode = 0o770
unix_socket = /run/ospd/ospd-openvas.sock
pid_file = /run/ospd/ospd-openvas.pid
log_file = /var/log/gvm/ospd-openvas.log
lock_file_dir = /run/ospd

I also tried adding notus-feed-dir = /var/lib/notus/advisories to the ospd-openvas.conf as I've seen it's also passed in your systemd file suggestion at https://greenbone.github.io/docs/latest/22.4/source-build/index.html#setting-up-services-for-systemd but it didn't make any difference.
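
To narrow down whether the keyring can verify the feed signature at all, the check can be approximated with python-gnupg. A minimal sketch using the paths from the listings above (the GnuPG home actually used by ospd-openvas may differ):

import gnupg

gpg = gnupg.GPG(gnupghome="/var/lib/gvm/gvmd/gnupg")
with open("/var/lib/notus/advisories/sha256sums.asc", "rb") as signature:
    verified = gpg.verify_file(signature, "/var/lib/notus/advisories/sha256sums")

# If valid is False, GPG verification of the notus sha256sums would fail,
# matching the exception raised by ospd-openvas 22.4.2.
print(verified.valid, verified.status)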

openvas.service error on systemctl start openvas

Hi.
When running systemctl start openvas I got this error. I've tried different branches and tags, and now I'm on the 20.8.2 oldstable branch for ospd-openvas and also for ospd.

`openvas.service - Control the OpenVAS service
Loaded: loaded (/etc/systemd/system/openvas.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2021-12-13 13:04:05 UTC; 35s ago
Process: 7261 ExecStartPre=/usr/bin/rm -rf /opt/gvm/var/run/ospd-openvas.pid /opt/gvm/var/run/ospd.sock /opt/gvm/var/run/gvmd.sock (code=exited, status=0/SUCCESS)
Process: 7262 ExecStart=/usr/bin/python3 /opt/gvm/bin/ospd-openvas --pid-file /opt/gvm/var/run/ospd-openvas.pid --log-file /opt/gvm/var/log/gvm/ospd-openvas.log --lock-file-dir /opt/gv>
Main PID: 7262 (code=exited, status=1/FAILURE)

Dec 13 13:04:05 openvas python3[7262]: return self.execute_command("KEYS", pattern, **kwargs)
Dec 13 13:04:05 openvas python3[7262]: File "/opt/gvm/lib/python3.8/site-packages/redis-4.1.0rc2-py3.8.egg/redis/client.py", line 1156, in execute_command
Dec 13 13:04:05 openvas python3[7262]: conn = self.connection or pool.get_connection(command_name, **options)
Dec 13 13:04:05 openvas python3[7262]: File "/opt/gvm/lib/python3.8/site-packages/redis-4.1.0rc2-py3.8.egg/redis/connection.py", line 1240, in get_connection
Dec 13 13:04:05 openvas python3[7262]: connection = self.make_connection()
Dec 13 13:04:05 openvas python3[7262]: File "/opt/gvm/lib/python3.8/site-packages/redis-4.1.0rc2-py3.8.egg/redis/connection.py", line 1280, in make_connection
Dec 13 13:04:05 openvas python3[7262]: return self.connection_class(**self.connection_kwargs)
Dec 13 13:04:05 openvas python3[7262]: TypeError: init() got an unexpected keyword argument 'redis_connect_func'
Dec 13 13:04:05 openvas systemd[1]: openvas.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:04:05 openvas systemd[1]: openvas.service: Failed with result 'exit-code'.`

Resumed tasks may generate invalid target value error on ospd-openvas side

Environment:

Ubuntu 18.04
Greenbone Vulnerability Manager 9.0.1~git-e250176b-gvmd-9.0
GIT revision e250176b-gvmd-9.0
Manager DB revision 221
gvm@ov-master-eqi:~$ gsad --version
Greenbone Security Assistant 9.0.1~git-9fb2e63cd-gsa-9.0
gvm@ov-master-eqi:~$ openvas --version
OpenVAS 7.0.1
gvm-libs 11.0.1

Expected behaviour:
Tasks resumed shouldn't produce any target errors on ospd-openvas side.

Actual behaviour:
If a task stops while running for whatever reason, resuming it may generate continuous Invalid target value errors on the ospd-openvas side. The target has not been changed, and starting the task again (without resuming it) will not produce those errors:


2020-07-10 11:58:10,604 OSPD - openvas: INFO: (ospd.network) : Invalid target value
2020-07-10 11:58:10,772 OSPD - openvas: INFO: (ospd.network) : Invalid target value
2020-07-10 11:58:11,618 OSPD - openvas: INFO: (ospd.network) : Invalid target value
2020-07-10 11:58:11,779 OSPD - openvas: INFO: (ospd.network) : Invalid target value
2020-07-10 11:58:12,628 OSPD - openvas: INFO: (ospd.network) : Invalid target value
....

It is not clear whether this bug has any effect on scan results, since the openvas logs don't show any errors and the scans resume without problems, while no error is reported on the gsad / gvmd side.

How To reproduce:

  • Start a task
  • After at least 50-60% progress has been achieved, stop the task.
  • Once stopped; resume the task.
  • Look at the ospd-openvas logs for the above errors.

Note: The issue does not always occur, but is likely to occur on big tasks (/22 or larger).

Allow redis >=5.0.0 for redis 7.2 support

Expected behavior

Work with the current stable/GA Redis version, which is now 7.2 and for which python redis >=5.0.0 added support: https://github.com/redis/redis-py/releases/tag/v5.0.0.

Actual behavior

https://github.com/greenbone/ospd-openvas/blob/main/pyproject.toml#L47 restricts redis to <5.0.0:

redis = ">=3.5.3,<5.0.0"

Steps to reproduce

  1. Try to update to latest stable redis 7.2

GVM versions

gsa: 22.05.2

gvm: 22.8.0

openvas-scanner: 22.7.3

gvm-libs: 22.7.0

Environment

Operating system:
Exherbo Linux

Installation method / source:
source installation

Logfiles

Redis-server killed by oom_killer at 19GB memory allocation

I have a relatively small installation of Greenbone community containers running on Docker (one VM), with only 85 targets and 8 tasks.
The VM has 6 [email protected] and 16GB of vRAM.

When I start more than one scan in Greenbone, all scans are stopped because openvas crashes.
When the problem happens, redis tries to allocate more memory than the total memory and swap of the VM.

Sep 18 14:11:05 ITSM02 kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=docker-65dd470fde11d33d27a502c2e9aef0d0d01dc287d15746f68807275f63050d52.scope,mems_allowed=0,global_oom,task_memcg=/system.slice/docker-1def25045ac5ffb3f939d482581ed9ce4050d5ad08c31c4f0b86d57008537e0b.scope,task=redis-server,pid=3496,uid=100
Sep 18 14:11:05 ITSM02 kernel: Out of memory: Killed process 3496 (redis-server) total-vm:19497288kB, anon-rss:12841672kB, file-rss:0kB, shmem-rss:0kB, UID:100 pgtables:33344kB oom_score_adj:0
Sep 18 14:11:06 ITSM02 kernel: oom_reaper: reaped process 3496 (redis-server), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Sep 18 14:11:07 ITSM02 kernel: show_signal_msg: 3 callbacks suppressed
Sep 18 14:11:07 ITSM02 kernel: openvas[117580]: segfault at 0 ip 000055d92e44ffe6 sp 00007ffc8a9932c0 error 4 in openvas[55d92e44f000+9000]
Sep 18 14:11:07 ITSM02 kernel: Code: ff ff 48 8d 3d 4b 80 00 00 48 89 c3 e8 a3 f8 ff ff 89 de 48 89 c7 48 8b 05 c7 d1 00 00 ff 50 18 48 8d 35 5d 80 00 00 48 89 c3 <48> 8b 00 48 89 df ff 50 28 48 89 c5 48 8b 03 48 8b 80 d0 00 00 00
Sep 18 14:11:07 ITSM02 kernel: openvas[166925]: segfault at 0 ip 000055c611a13fe6 sp 00007ffe63419aa0 error 4 in openvas[55c611a13000+9000]
Sep 18 14:11:07 ITSM02 kernel: Code: ff ff 48 8d 3d 4b 80 00 00 48 89 c3 e8 a3 f8 ff ff 89 de 48 89 c7 48 8b 05 c7 d1 00 00 ff 50 18 48 8d 35 5d 80 00 00 48 89 c3 <48> 8b 00 48 89 df ff 50 28 48 89 c5 48 8b 03 48 8b 80 d0 00 00 00

The redis server print a log:

2023-09-18T09:42:39+02:00 8:M 18 Sep 2023 07:42:39.305 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see jemalloc/jemalloc#1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2023-09-18T09:42:39+02:00 8:M 18 Sep 2023 07:42:39.347 * The server is now ready to accept connections at /run/redis/redis.sock
2023-09-18T14:11:07+02:00 Killed
2023-09-18T14:11:11+02:00 9:C 18 Sep 2023 12:11:11.537 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2023-09-18T14:11:11+02:00 9:C 18 Sep 2023 12:11:11.537 # Redis version=7.0.11, bits=64, commit=00000000, modified=0, pid=9, just started

The vm.overcommit_memory is set in sysctl.conf like this:

vm.overcommit_ratio = 25
vm.overcommit_memory = 1

The vm.overcommit_ratio is calculated from the swap size.

After redis is killed, the openvas log shows:

2023-09-18T12:36:26+02:00 OSPD[8] 2023-09-18 10:36:26,308: INFO: (ospd.ospd) Starting scan 539f2dae-6495-422e-b043-f0ab4c9fdb44.
2023-09-18T12:42:52+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:42:52+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:42:52+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:42:52+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:42:52+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:54:01+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:02+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:02+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:03+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:03+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:03+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:03+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T12:56:05+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0
2023-09-18T13:11:53+02:00 OSPD[8] 2023-09-18 11:11:53,367: INFO: (ospd.command.command) Scan 217bd3d9-a4bf-4bca-80f2-e972aff67d5f added to the queue in position 2.
2023-09-18T13:11:56+02:00 OSPD[8] 2023-09-18 11:11:56,580: INFO: (ospd.ospd) Currently 1 queued scans.
2023-09-18T13:11:56+02:00 OSPD[8] 2023-09-18 11:11:56,784: INFO: (ospd.ospd) Starting scan 217bd3d9-a4bf-4bca-80f2-e972aff67d5f.
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 493, in read_response
2023-09-18T14:11:07+02:00 response = self._parser.read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 15, in read_response
2023-09-18T14:11:07+02:00 result = self._read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 25, in _read_response
2023-09-18T14:11:07+02:00 raw = self._buffer.readline()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 115, in readline
2023-09-18T14:11:07+02:00 self._read_from_socket()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 65, in _read_from_socket
2023-09-18T14:11:07+02:00 data = self._sock.recv(socket_read_size)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 ConnectionResetError: [Errno 104] Connection reset by peer
2023-09-18T14:11:07+02:00 During handling of the above exception, another exception occurred:
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/bin/ospd-openvas", line 8, in
2023-09-18T14:11:07+02:00 sys.exit(main())
2023-09-18T14:11:07+02:00 ^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 1244, in main
2023-09-18T14:11:07+02:00 daemon_main('OSPD - openvas', OSPDopenvas, NotusParser())
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/main.py", line 153, in main
2023-09-18T14:11:07+02:00 daemon.run()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 1103, in run
2023-09-18T14:11:07+02:00 self.scheduler()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 703, in scheduler
2023-09-18T14:11:07+02:00 self.check_feed()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 677, in check_feed
2023-09-18T14:11:07+02:00 current_feed = self.nvti.get_feed_version()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/nvticache.py", line 70, in get_feed_version
2023-09-18T14:11:07+02:00 return OpenvasDB.get_single_item(self.ctx, NVTI_CACHE_NAME)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 268, in get_single_item
2023-09-18T14:11:07+02:00 return ctx.lindex(name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/commands/core.py", line 2685, in lindex
2023-09-18T14:11:07+02:00 return self.execute_command("LINDEX", name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 508, in execute_command
2023-09-18T14:11:07+02:00 return conn.retry.call_with_retry(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 49, in call_with_retry
2023-09-18T14:11:07+02:00 fail(error)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 512, in
2023-09-18T14:11:07+02:00 lambda error: self._disconnect_raise(conn, error),
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 498, in _disconnect_raise
2023-09-18T14:11:07+02:00 raise error
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 46, in call_with_retry
2023-09-18T14:11:07+02:00 return do()
2023-09-18T14:11:07+02:00 ^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 509, in
2023-09-18T14:11:07+02:00 lambda: self._send_command_parse_response(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 485, in _send_command_parse_response
2023-09-18T14:11:07+02:00 return self.parse_response(conn, command_name, **options)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 525, in parse_response
2023-09-18T14:11:07+02:00 response = connection.read_response()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 501, in read_response
2023-09-18T14:11:07+02:00 raise ConnectionError(
2023-09-18T14:11:07+02:00 redis.exceptions.ConnectionError: Error while reading from /run/redis/redis.sock : (104, 'Connection reset by peer')
2023-09-18T14:11:07+02:00 OSPD[8] 2023-09-18 12:11:07,017: ERROR: (ospd.ospd) 217bd3d9-a4bf-4bca-80f2-e972aff67d5f: Exception Error while reading from /run/redis/redis.sock : (104, 'Connection reset by peer') while scanning
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 493, in read_response
2023-09-18T14:11:07+02:00 response = self._parser.read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 15, in read_response
2023-09-18T14:11:07+02:00 result = self._read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 25, in _read_response
2023-09-18T14:11:07+02:00 raw = self._buffer.readline()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 115, in readline
2023-09-18T14:11:07+02:00 self._read_from_socket()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 65, in _read_from_socket
2023-09-18T14:11:07+02:00 data = self._sock.recv(socket_read_size)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 ConnectionResetError: [Errno 104] Connection reset by peer
2023-09-18T14:11:07+02:00 During handling of the above exception, another exception occurred:
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 584, in start_scan
2023-09-18T14:11:07+02:00 self.exec_scan(scan_id)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 1174, in exec_scan
2023-09-18T14:11:07+02:00 target_is_finished = kbdb.target_is_finished(scan_id)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 576, in target_is_finished
2023-09-18T14:11:07+02:00 status = self._get_single_item(f'internal/{scan_id}')
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 470, in _get_single_item
2023-09-18T14:11:07+02:00 return OpenvasDB.get_single_item(self.ctx, name)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 268, in get_single_item
2023-09-18T14:11:07+02:00 return ctx.lindex(name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/commands/core.py", line 2685, in lindex
2023-09-18T14:11:07+02:00 return self.execute_command("LINDEX", name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 508, in execute_command
2023-09-18T14:11:07+02:00 return conn.retry.call_with_retry(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 49, in call_with_retry
2023-09-18T14:11:07+02:00 fail(error)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 512, in
2023-09-18T14:11:07+02:00 lambda error: self._disconnect_raise(conn, error),
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 498, in _disconnect_raise
2023-09-18T14:11:07+02:00 raise error
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 46, in call_with_retry
2023-09-18T14:11:07+02:00 return do()
2023-09-18T14:11:07+02:00 ^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 509, in
2023-09-18T14:11:07+02:00 lambda: self._send_command_parse_response(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 485, in _send_command_parse_response
2023-09-18T14:11:07+02:00 return self.parse_response(conn, command_name, **options)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 525, in parse_response
2023-09-18T14:11:07+02:00 response = connection.read_response()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 501, in read_response
2023-09-18T14:11:07+02:00 raise ConnectionError(
2023-09-18T14:11:07+02:00 redis.exceptions.ConnectionError: Error while reading from /run/redis/redis.sock : (104, 'Connection reset by peer')
2023-09-18T14:11:07+02:00 Process Process-4:
2023-09-18T14:11:07+02:00 OSPD[8] 2023-09-18 12:11:07,166: ERROR: (ospd.ospd) 539f2dae-6495-422e-b043-f0ab4c9fdb44: Exception Error while reading from /run/redis/redis.sock : (104, 'Connection reset by peer') while scanning
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 493, in read_response
2023-09-18T14:11:07+02:00 response = self._parser.read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 15, in read_response
2023-09-18T14:11:07+02:00 result = self._read_response(disable_decoding=disable_decoding)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/resp2.py", line 25, in _read_response
2023-09-18T14:11:07+02:00 raw = self._buffer.readline()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 115, in readline
2023-09-18T14:11:07+02:00 self._read_from_socket()
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/_parsers/socket.py", line 65, in _read_from_socket
2023-09-18T14:11:07+02:00 data = self._sock.recv(socket_read_size)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 ConnectionResetError: [Errno 104] Connection reset by peer
2023-09-18T14:11:07+02:00 During handling of the above exception, another exception occurred:
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 584, in start_scan
2023-09-18T14:11:07+02:00 self.exec_scan(scan_id)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/daemon.py", line 1174, in exec_scan
2023-09-18T14:11:07+02:00 target_is_finished = kbdb.target_is_finished(scan_id)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 576, in target_is_finished
2023-09-18T14:11:07+02:00 status = self._get_single_item(f'internal/{scan_id}')
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 470, in _get_single_item
2023-09-18T14:11:07+02:00 return OpenvasDB.get_single_item(self.ctx, name)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd_openvas/db.py", line 268, in get_single_item
2023-09-18T14:11:07+02:00 return ctx.lindex(name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/commands/core.py", line 2685, in lindex
2023-09-18T14:11:07+02:00 return self.execute_command("LINDEX", name, index)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 508, in execute_command
2023-09-18T14:11:07+02:00 return conn.retry.call_with_retry(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 49, in call_with_retry
2023-09-18T14:11:07+02:00 fail(error)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 512, in
2023-09-18T14:11:07+02:00 lambda error: self._disconnect_raise(conn, error),
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 498, in _disconnect_raise
2023-09-18T14:11:07+02:00 raise error
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/retry.py", line 46, in call_with_retry
2023-09-18T14:11:07+02:00 return do()
2023-09-18T14:11:07+02:00 ^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 509, in
2023-09-18T14:11:07+02:00 lambda: self._send_command_parse_response(
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 485, in _send_command_parse_response
2023-09-18T14:11:07+02:00 return self.parse_response(conn, command_name, **options)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/client.py", line 525, in parse_response
2023-09-18T14:11:07+02:00 response = connection.read_response()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/redis/connection.py", line 501, in read_response
2023-09-18T14:11:07+02:00 raise ConnectionError(
2023-09-18T14:11:07+02:00 redis.exceptions.ConnectionError: Error while reading from /run/redis/redis.sock : (104, 'Connection reset by peer')
2023-09-18T14:11:07+02:00 Traceback (most recent call last):
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2023-09-18T14:11:07+02:00 self.run()
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
2023-09-18T14:11:07+02:00 self._target(*self._args, **self._kwargs)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 598, in start_scan
2023-09-18T14:11:07+02:00 self.set_scan_progress(scan_id)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 672, in set_scan_progress
2023-09-18T14:11:07+02:00 self._get_scan_progress_raw(scan_id)
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 814, in _get_scan_progress_raw
2023-09-18T14:11:07+02:00 current_progress['overall'] = self.get_scan_progress(scan_id)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 1342, in get_scan_progress
2023-09-18T14:11:07+02:00 progress = self.scan_collection.get_progress(scan_id)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/scan.py", line 370, in get_progress
2023-09-18T14:11:07+02:00 return self.scans_table[scan_id].get('progress', ScanProgress.INIT)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "", line 2, in get
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/managers.py", line 822, in _callmethod
2023-09-18T14:11:07+02:00 kind, result = conn.recv()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 249, in recv
2023-09-18T14:11:07+02:00 buf = self._recv_bytes()
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 413, in _recv_bytes
2023-09-18T14:11:07+02:00 buf = self._recv(4)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 378, in _recv
2023-09-18T14:11:07+02:00 chunk = read(handle, remaining)
2023-09-18T14:11:07+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:07+02:00 ConnectionResetError: [Errno 104] Connection reset by peer
2023-09-18T14:11:08+02:00 Process Process-3:
2023-09-18T14:11:08+02:00 Traceback (most recent call last):
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2023-09-18T14:11:08+02:00 self.run()
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
2023-09-18T14:11:08+02:00 self._target(*self._args, **self._kwargs)
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 598, in start_scan
2023-09-18T14:11:08+02:00 self.set_scan_progress(scan_id)
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 672, in set_scan_progress
2023-09-18T14:11:08+02:00 self._get_scan_progress_raw(scan_id)
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 814, in _get_scan_progress_raw
2023-09-18T14:11:08+02:00 current_progress['overall'] = self.get_scan_progress(scan_id)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 1342, in get_scan_progress
2023-09-18T14:11:08+02:00 progress = self.scan_collection.get_progress(scan_id)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/scan.py", line 370, in get_progress
2023-09-18T14:11:08+02:00 return self.scans_table[scan_id].get('progress', ScanProgress.INIT)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "", line 2, in get
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/managers.py", line 822, in _callmethod
2023-09-18T14:11:08+02:00 kind, result = conn.recv()
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 249, in recv
2023-09-18T14:11:08+02:00 buf = self._recv_bytes()
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 413, in _recv_bytes
2023-09-18T14:11:08+02:00 buf = self._recv(4)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 378, in _recv
2023-09-18T14:11:08+02:00 chunk = read(handle, remaining)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 ConnectionResetError: [Errno 104] Connection reset by peer
2023-09-18T14:11:08+02:00 Exception ignored in atexit callback: <function exit_cleanup at 0x7f4530af9120>
2023-09-18T14:11:08+02:00 Traceback (most recent call last):
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/main.py", line 69, in exit_cleanup
2023-09-18T14:11:08+02:00 daemon.daemon_exit_cleanup()
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/ospd.py", line 448, in daemon_exit_cleanup
2023-09-18T14:11:08+02:00 self.scan_collection.clean_up_pickled_scan_info()
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/scan.py", line 252, in clean_up_pickled_scan_info
2023-09-18T14:11:08+02:00 if self.get_status(scan_id) == ScanStatus.QUEUED:
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/local/lib/python3.11/dist-packages/ospd/scan.py", line 352, in get_status
2023-09-18T14:11:08+02:00 status = self.scans_table.get(scan_id, {}).get('status', None)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "", line 2, in get
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/managers.py", line 818, in _callmethod
2023-09-18T14:11:08+02:00 self._connect()
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/managers.py", line 805, in _connect
2023-09-18T14:11:08+02:00 conn = self._Client(self._token.address, authkey=self._authkey)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 501, in Client
2023-09-18T14:11:08+02:00 c = SocketClient(address)
2023-09-18T14:11:08+02:00 ^^^^^^^^^^^^^^^^^^^^^
2023-09-18T14:11:08+02:00 File "/usr/lib/python3.11/multiprocessing/connection.py", line 629, in SocketClient
2023-09-18T14:11:08+02:00 s.connect(address)
2023-09-18T14:11:08+02:00 FileNotFoundError: [Errno 2] No such file or directory
2023-09-18T14:11:11+02:00 [WARN tini (7)] Tini is not running as PID 1 and isn't registered as a child subreaper.
2023-09-18T14:11:11+02:00 Zombie processes will not be re-parented to Tini, so zombie reaping won't work.
2023-09-18T14:11:11+02:00 To fix the problem, use the -s option or set the environment variable TINI_SUBREAPER to register Tini as a child subreaper, or run Tini as PID 1.
2023-09-18T14:11:15+02:00 OSPD[8] 2023-09-18 12:11:15,495: INFO: (ospd.main) Starting OSPd OpenVAS version 22.6.0.
2023-09-18T14:11:15+02:00 OSPD[8] 2023-09-18 12:11:15,513: INFO: (ospd_openvas.messaging.mqtt) Successfully connected to MQTT broker
2023-09-18T14:11:25+02:00 OSPD[8] 2023-09-18 12:11:25,658: INFO: (ospd_openvas.daemon) Loading VTs. Scans will be [requested|queued] until VTs are loaded. This may take a few minutes, please wait...
2023-09-18T14:13:30+02:00 OSPD[8] 2023-09-18 12:13:30,122: INFO: (ospd_openvas.daemon) Finished loading VTs. The VT cache has been updated from version 0 to 202309150621.
2023-09-18T14:32:45+02:00 OSPD[8] 2023-09-18 12:32:45,567: INFO: (ospd.command.command) Scan 02317102-1868-48cd-882b-07772ac65ee3 added to the queue in position 2.
2023-09-18T14:32:50+02:00 OSPD[8] 2023-09-18 12:32:50,956: INFO: (ospd.ospd) Currently 1 queued scans.
2023-09-18T14:32:51+02:00 OSPD[8] 2023-09-18 12:32:51,172: INFO: (ospd.ospd) Starting scan 02317102-1868-48cd-882b-07772ac65ee3.
2023-09-18T14:39:34+02:00 Oops, ksba_cert_get_image failed: imagelen=51 hdr=4 len=735 off=0

[20.8.0] Unexpected interrupted return code

The scanner itself exits fine, but ospd-openvas interprets it as an error.
ospd-openvas log:

(ospd.ospd) aae6b036-7a1c-47d4-8866-0e07c8799b48: Host scan finished.
(ospd.ospd) aae6b036-7a1c-47d4-8866-0e07c8799b48: Scan interrupted.
(ospd.ospd) aae6b036-7a1c-47d4-8866-0e07c8799b48: Scan stopped with errors.
(ospd.ospd) aae6b036-7a1c-47d4-8866-0e07c8799b48: Scan interrupted.

openvas log:

Vulnerability scan d667271d-11ef-4e68-bc0c-39fa0659e778 finished for host XXXXX in 993.56 seconds
Vulnerability scan d667271d-11ef-4e68-bc0c-39fa0659e778 finished in 1000 seconds: 1 hosts

No error was logged by openvas.

openvas --version

OpenVAS 20.8.0
gvm-libs 20.8.0

ospd version:

20.8.1

Most new code since 2005: (C) 2020 Greenbone Networks GmbH
Nessus origin: (C) 2004 Renaud Deraison [email protected]
License GPLv2: GNU GPL version 2
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

feed-update.lock issues

GVM versions

gsa: 9.0.0 git-87b20cb24-gsa-9.0
gvm: 9.0.1 git-7518695a-gvmd-9.0

Environment

Ubuntu 18.04 TLS
Greenbone build from git

We noticed that after some days the automatic download of NVTs no longer works properly. Running greenbone-nvt-sync under the gvm user gives no output; it exits immediately. I think it's because the feed-update.lock lock file is present in the run directory when it should not be there.

When I remove 'feed-update.lock', I can successfully run greenbone-nvt-sync.

Also, when I reboot the server, gvmd complains that ospd.sock isn't present. This is true, because the ospd-openvas process seems to be locked when starting. Again, when I manually remove 'feed-update.lock', the ospd scanner continues its startup, and once it has loaded, gvmd can read its socket.

If I leave the 'feed-update.lock' in place, it never ever gets deleted. There seems to be an issue with 'feed-update.lock' not being correctly handled/deleted. I have this in three separate setups with the exact same behavior.

Feature Request: Align Tasks UUID between gvmd / ospd-openvas

Environment:

Ubuntu 18.04
Greenbone Vulnerability Manager 9.0.1~git-e250176b-gvmd-9.0
GIT revision e250176b-gvmd-9.0
Manager DB revision 221
gvm@ov-master-eqi:~$ gsad --version
Greenbone Security Assistant 9.0.1~git-9fb2e63cd-gsa-9.0
gvm@ov-master-eqi:~$ openvas --version
OpenVAS 7.0.1
gvm-libs 11.0.1

Current behaviour:
gvmd uses one UUID for a task, while ospd-openvas uses a different one for the same task. This makes issue and bug tracking very difficult in a master / slave architecture when multiple scans are running.

Expected behaviour:
Share the same tasks UUIDs between gvmd and ospd-openvas.

openvas finishes task, ospd-openvas keeps looking elsewhere...

OpenVAS 7.0.1
gvm-libs 11.0.1
OSP Server for openvas: 1.0.1
OSP: 1.2
OSPd: 2.0.1
python2.6
Ubuntu 18.04 LTS
Redis 4.09 with GVMd tuned configuration file

Hello

I have a scan running on a somewhat large task (3642 IPs, with many dead hosts). When I run this task, openvas is launched by ospd-openvas without problems. Both are located on the same machine.
After some time, openvas finishes scanning the task as it's supposed to:


sd main:MESSAGE:2020-05-14 20h42.23 utc:6675: Test complete
sd main:MESSAGE:2020-05-14 20h42.23 utc:6675: Total time to scan all hosts : 115770 seconds


However, ospd-openvas seems to have lost communication with openvas & gvmd in the middle, as its last log entry reads 2020-05-13 (while the last openvas log entry is from 2020-05-14). No error was logged. The process is still running and loaded:


PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6669 gvm 20 0 725872 537896 5744 R 87.5 3.2 829:27.94 python3.6
6666 gvm 20 0 375068 178548 5120 S 6.2 1.1 8:30.31 python3.6


On the gvmd side, the task is still running but has been stuck at 7% for more than a day. The problem is not systematic; out of 10 launches it occurs around 5-6 times. The other 4-5 times the scan finishes successfully.

My ospd-openvas process is still running, in case I can do anything to help investigate what's going on.

Thank you

[1.0.0] Error while connecting to redis is not caught.

When the daemon can't read/write to the redis socket, a Python exception is thrown but not caught.

File "/usr/bin/ospd-openvas", line 11, in
load_entry_point('ospd-openvas==1.0.0', 'console_scripts', 'ospd-openvas')()
File "/usr/lib/python3.6/site-packages/ospd_openvas/daemon.py", line 1454, in main
daemon_main('OSPD - openvas', OSPDopenvas)
File "/usr/lib/python3.6/site-packages/ospd/main.py", line 159, in main
daemon.init()
File "/usr/lib/python3.6/site-packages/ospd_openvas/daemon.py", line 283, in init
self.openvas_db.db_init()
File "/usr/lib/python3.6/site-packages/ospd_openvas/db.py", line 139, in db_init
self.max_db_index()
File "/usr/lib/python3.6/site-packages/ospd_openvas/db.py", line 117, in max_db_index
ctx = self.kb_connect()
File "/usr/lib/python3.6/site-packages/ospd_openvas/db.py", line 195, in kb_connect
'Redis Error: Not possible to connect to the kb.'
ospd_openvas.errors.OspdOpenvasError: Redis Error: Not possible to connect to the kb.
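
For reference, the underlying condition can be reproduced and handled in isolation with the redis client; catching the connection error is essentially what this issue asks the daemon to do. A minimal sketch (the socket path is illustrative):

import redis

try:
    ctx = redis.Redis(unix_socket_path="/run/redis/redis.sock")
    ctx.ping()
except redis.exceptions.ConnectionError as err:
    # This is the situation that currently surfaces as
    # "Redis Error: Not possible to connect to the kb."
    print(f"Redis not reachable: {err}")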

v20.4.1 sudoers problem

Hi,

Just upgraded to tag version v20.4.1. When running a task I get permission problems. I've got my sudoers in place, just like with v20.4.0.

libgvm boreas:WARNING:2021-06-25 18h59.19 utc:3263: set_socket: failed to open ICMPV4 socket: Operation not permitted
libgvm boreas:WARNING:2021-06-25 18h59.19 utc:3263: start_alive_detection. Boreas could not initialise alive detection. Boreas was not able to open a new socket. Exit Boreas.

visudo
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/opt/gvm/sbin"

/etc/sudoers.d/gvm
gvm ALL = NOPASSWD: /opt/gvm/sbin/openvas
gvm ALL = NOPASSWD: /opt/gvm/sbin/gsad

Root dir is '/opt/gvm/'.

Has anything changed with v20.4.1? I don't understand what's going on here...

Regards,
Bastiaan

failure when starting a remote scan, KeyError: 'end_time'

I'm attempting to set up a distributed scan environment with gvmd and gsa running on a central system and ospd-openvas instances spread around to execute the tasks and send them back to the central system. I found a super convenient Docker implementation of this environment that handles the setup and connection of these instances here: https://github.com/Secure-Compliance-Solutions-LLC/GVM-Docker. I'm able to get both an instance of the gvm controller and an instance of the ospd-openvas scanner running and connected. My problem comes when I try to initiate my first remote scan. When I queue the scan to start, it is sent to the remote scanner and immediately prints out the stack trace pasted below in openvas.log.

OSPD[50] 2020-12-10 16:49:28,867: INFO: (ospd.command.command) Scan 37dda0d5-cb26-4f33-a238-88216a587923 added to the queue in position 1.

Traceback (most recent call last):
File "/usr/local/bin/ospd-openvas", line 11, in
load_entry_point('ospd-openvas==20.8.0', 'console_scripts', 'ospd-openvas')()
File "/usr/local/lib/python3.8/dist-packages/ospd_openvas-20.8.0-py3.8.egg/ospd_openvas/daemon.py", line 1383, in main
File "/usr/local/lib/python3.8/dist-packages/ospd-20.8.1-py3.8.egg/ospd/main.py", line 160, in main
File "/usr/local/lib/python3.8/dist-packages/ospd-20.8.1-py3.8.egg/ospd/ospd.py", line 1255, in run
File "/usr/local/lib/python3.8/dist-packages/ospd-20.8.1-py3.8.egg/ospd/ospd.py", line 1398, in clean_forgotten_scans
File "/usr/local/lib/python3.8/dist-packages/ospd-20.8.1-py3.8.egg/ospd/ospd.py", line 1487, in get_scan_end_time
File "/usr/local/lib/python3.8/dist-packages/ospd-20.8.1-py3.8.egg/ospd/scan.py", line 424, in get_end_time
File "", line 2, in getitem
File "/usr/lib/python3.8/multiprocessing/managers.py", line 850, in _callmethod
raise convert_to_error(kind, result)
KeyError: 'end_time'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/managers.py", line 827, in _callmethod
conn = self._tls.connection
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/ospd-20.8.1-py3.8.egg/ospd/main.py", line 81, in exit_cleanup
File "/usr/local/lib/python3.8/dist-packages/ospd-20.8.1-py3.8.egg/ospd/ospd.py", line 438, in daemon_exit_cleanup
File "/usr/local/lib/python3.8/dist-packages/ospd-20.8.1-py3.8.egg/ospd/scan.py", line 242, in clean_up_pickled_scan_info
File "/usr/local/lib/python3.8/dist-packages/ospd-20.8.1-py3.8.egg/ospd/scan.py", line 340, in get_status
File "", line 2, in get
File "/usr/lib/python3.8/multiprocessing/managers.py", line 831, in _callmethod
self._connect()
File "/usr/lib/python3.8/multiprocessing/managers.py", line 818, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 502, in Client
c = SocketClient(address)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 629, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
connect to /data/ospd.sock port -2 failed: Connection refused
connect to /data/ospd.sock port -2 failed: Connection refused

I'm unsure if the docker environment I'm using could be causing this, but it seems to just be an error when starting a remotely initiated scan. Any help would be appreciated.

Exception when start daemon when empty pid file existed

Expected behavior

Daemon can start normally.

Actual behavior

Daemon can't start if an empty pid file exists.

Steps to reproduce

  1. Create an empty pid file.
  2. Start ospd-openvas with the pid_file parameter pointing to the empty pid file

Logfiles

Traceback (most recent call last):
  File "/usr/local/bin/ospd-openvas", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/ospd_openvas/daemon.py", line 1268, in main
    daemon_main('OSPD - openvas', OSPDopenvas, NotusParser())
  File "/usr/local/lib/python3.10/dist-packages/ospd/main.py", line 139, in main
    if not create_pid(args.pid_file):
  File "/usr/local/lib/python3.10/dist-packages/ospd/misc.py", line 117, in create_pid
    process = psutil.Process(int(pf_pid))
ValueError: invalid literal for int() with base 10: ''
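A minimal sketch of a more defensive pid-file check, loosely modelled on the create_pid() call in the traceback; the helper name and exact behaviour are illustrative, not the upstream code:

from pathlib import Path

import psutil


def pid_file_is_stale(pid_file: str) -> bool:
    # Treat an empty or unparsable pid file as stale instead of crashing
    # with int('') as in the traceback above.
    content = Path(pid_file).read_text().strip()
    if not content:
        return True
    try:
        pid = int(content)
    except ValueError:
        return True
    # Stale as well if no process with that pid exists anymore.
    return not psutil.pid_exists(pid)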

[22.4.0] Test test_port_convert fails

The test tests/test_port_convert.py fails with:
==================================== ERRORS ====================================
_________________ ERROR collecting tests/test_port_convert.py __________________
tests/test_port_convert.py:46: in
logging.disable()
E TypeError: disable() missing 1 required positional argument: 'level'
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.30 seconds ============================
Reading the Python docs:
https://docs.python.org/3/library/logging.html#module-logging
logging.disable() requires a level argument (it only became optional in Python 3.7).
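For reference, passing the level explicitly keeps the call working on interpreters where the argument has no default:

import logging

# Equivalent to the default introduced in Python 3.7, but works everywhere.
logging.disable(logging.CRITICAL)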

AttributeError: 'NoneType' object has no attribute 'pop'

With ospd-openvas 1.0.0 - might be related to the timeout change - worked around this by calling continue when _custom is None

jan 14 12:59:55 openvas ospd-openvas[25014]: Traceback (most recent call last):
jan 14 12:59:55 openvas ospd-openvas[25014]: File "/usr/bin/ospd-openvas", line 11, in
jan 14 12:59:55 openvas ospd-openvas[25014]: load_entry_point('ospd-openvas==1.0.0', 'console_scripts', 'ospd-openvas')()
jan 14 12:59:55 openvas ospd-openvas[25014]: File "/usr/lib/python3/dist-packages/ospd_openvas/daemon.py", line 1454, in main
jan 14 12:59:55 openvas ospd-openvas[25014]: daemon_main('OSPD - openvas', OSPDopenvas)
jan 14 12:59:55 openvas ospd-openvas[25014]: File "/usr/lib/python3/dist-packages/ospd/main.py", line 159, in main
jan 14 12:59:55 openvas ospd-openvas[25014]: daemon.init()
jan 14 12:59:55 openvas ospd-openvas[25014]: File "/usr/lib/python3/dist-packages/ospd_openvas/daemon.py", line 293, in init
jan 14 12:59:55 openvas ospd-openvas[25014]: self.load_vts()
jan 14 12:59:55 openvas ospd-openvas[25014]: File "/usr/lib/python3/dist-packages/ospd_openvas/daemon.py", line 413, in load_vts
jan 14 12:59:55 openvas ospd-openvas[25014]: _name = _custom.pop('name')
jan 14 12:59:55 openvas ospd-openvas[25014]: AttributeError: 'NoneType' object has no attribute 'pop'
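A minimal sketch of the workaround described above; the helper is illustrative, the real code lives in OSPDopenvas.load_vts():

import logging

logger = logging.getLogger(__name__)


def pop_vt_name(vt_id, custom):
    # `custom` is whatever the NVT cache returned for this VT; for broken
    # entries it can be None, which is what triggered the traceback above.
    if custom is None:
        logger.warning('Skipping VT %s: no custom metadata in the cache', vt_id)
        return None
    return custom.pop('name')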

[20.8.0] Service unavailable when verifying ospd-openvas scanner over tcp

gvm@ov-master-eqi-test:~$ gvmd --version
Greenbone Vulnerability Manager 20.08.0~git-3a5ec8bc-gvmd-20.08
GIT revision 3a5ec8bc-gvmd-20.08
Manager DB revision 233

OSP Server for openvas: 20.8.0
OSP: 20.8.1
OSPd OpenVAS: 20.8.1

OS: Ubuntu 18.04 LTS with gnutls 3.6.15, python 3.7

Problem: When running an ospd-openvas scanner on a TCP socket, gvmd fails to connect to it even though the certificate handshake is correct. Both gvmd and ospd are running on the same system. The same problem obviously happens if ospd is run on a different IP. See below.

Scanner is launched with the following command:

/opt/gvm/bin/ospd-scanner/bin/python3.7 /opt/gvm/bin/ospd-scanner/bin/ospd-openvas -s /opt/gvm/etc/openvas/ospd.conf --log-file /opt/gvm/var/log/gvm/ospd-scanner-remote.log -p 9392 -b 127.0.0.1 --pid-file /opt/gvm/var/run/ospd-openvas-remote.pid --lock-file-dir /opt/gvm/var/run/ -k /opt/gvm/var/lib/gvm/private/CA/serverkey.pem -c /opt/gvm/var/lib/gvm/CA/servercert.pem --ca-file /opt/gvm/var/lib/gvm/CA/cacert.pem -L DEBUG -f

On Gvmd, scanner is added using:

gvmd --create-scanner=TestRemoteScanner --scanner-type=OpenVAS --scanner-port=9392 --scanner-host=127.0.0.1 --scanner-ca-pub=/opt/gvm/var/lib/gvm/CA/cacert.pem --scanner-key-priv=/opt/gvm/var/lib/gvm/private/CA/clientkey.pem --scanner-key-pub=/opt/gvm/var/lib/gvm/CA/clientcert.pem

Scanner is created successfully. Now If I try to verify the scanner:

gvm@ov-master-eqi-test:/opt/gvm/src/20.08/ospd-ospd-20.08/ospd$ gvmd --get-scanners
8840a9e5-f1c0-45f1-835b-550c32fc3001  OpenVAS  127.0.0.1  9392  TestRemoteScanner
gvm@ov-master-eqi-test:/opt/gvm/src/20.08/ospd-ospd-20.08/ospd$ gvmd --verify-scanner=8840a9e5-f1c0-45f1-835b-550c32fc3001
Failed to verify scanner.

If I manually check the certificate handshake using:

openssl s_client -connect 127.0.0.1:9392 -cert /opt/gvm/var/lib/gvm/CA/clientcert.pem -key /opt/gvm/var/lib/gvm/private/CA/clientkey.pem -CAfile /opt/gvm/var/lib/gvm/CA/cacert.pem -reconnect -showcerts -debug

The connection succeeds, and sending <get_version/> manually with openssl is answered with

<get_version_response status="200" status_text="OK"><protocol><name>OSP</nam

So the cert infrastructure created with gvm-manage-certs is correct and the initial dialog seems to work fine.

However, on GVMD side, I have the following logs:

md   main:  DEBUG:2020-09-28 12h54.37 UTC:98909: <= client  "<verify_scanner scanner_id="8840a9e5-f1c0-45f1-835b-550c32fc3001"/>"
md    gmp:  DEBUG:2020-09-28 12h54.37 UTC:98909:    XML  start: verify_scanner (1)
md    gmp:  DEBUG:2020-09-28 12h54.37 UTC:98909:    client state set: 519
md    gmp:  DEBUG:2020-09-28 12h54.37 UTC:98909:    XML    end: verify_scanner
lib  serv:  DEBUG:2020-09-28 12h54.37 UTC:98909:    Connected to server 127.0.0.1 port 9392.
lib  serv:  DEBUG:2020-09-28 12h54.37 UTC:98909:    Shook hands with server 127.0.0.1 port 9392.
lib  serv:  DEBUG:2020-09-28 12h54.37 UTC:98909:    send 14 from <get_version/>[...]
lib  serv:  DEBUG:2020-09-28 12h54.37 UTC:98909: => <get_version/>
lib  serv:  DEBUG:2020-09-28 12h54.37 UTC:98909: => done
lib   xml:  DEBUG:2020-09-28 12h54.37 UTC:98909:    asking for 1048576
md   main:  DEBUG:2020-09-28 12h54.37 UTC:98909: -> client: <verify_scanner_response status="503" status_text="Service unavailable"/>
md    gmp:  DEBUG:2020-09-28 12h54.37 UTC:98909:    client state set: 1
md   main:  DEBUG:2020-09-28 12h54.37 UTC:98909: => client  73 bytes
md   main:  DEBUG:2020-09-28 12h54.37 UTC:98909: => client  done
md   main:  DEBUG:2020-09-28 12h54.37 UTC:98909:    EOF reading from client
md   main:  DEBUG:2020-09-28 12h54.37 UTC:98909:    Cleaning up
md   main:  DEBUG:2020-09-28 12h54.37 UTC:98909:    Exiting

AttributeError: 'NoneType' object has no attribute 'pop'

gvm-cli --protocol OSP socket --sockpath /var/run/ospd/ospd.sock --xml "<start_scan><scan_id>111111</scan_id>www.xx.comU:8011</start_scan>"

OSPD[2155] 2023-03-18 02:08:33,877: ERROR: (ospd.ospd) While handling client command:
Traceback (most recent call last):
File "/opt/atomicorp/lib/python3.8/site-packages/ospd/ospd.py", line 558, in handle_client_stream
self.handle_command(data, stream)
File "/opt/atomicorp/lib/python3.8/site-packages/ospd/ospd.py", line 1077, in handle_command
response = command.handle_xml(tree)
File "/opt/atomicorp/lib/python3.8/site-packages/ospd/command/command.py", line 617, in handle_xml
scan_id = self._daemon.create_scan(
File "/opt/atomicorp/lib/python3.8/site-packages/ospd/ospd.py", line 1255, in create_scan
return self.scan_collection.create_scan(
File "/opt/atomicorp/lib/python3.8/site-packages/ospd/scan.py", line 321, in create_scan
credentials = target.pop('credentials')
AttributeError: 'NoneType' object has no attribute 'pop'

version:
OpenVAS 22.4.1
gvm-libs 22.4.1
Most new code since 2005: (C) 2022 Greenbone Networks GmbH
Nessus origin: (C) 2004 Renaud Deraison [email protected]
License GPLv2: GNU GPL version 2
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

why?

ospd.openvas [21.4.3] - randomly stays stuck in INIT phase when a new scan is launched

Expected behavior

  1. Launch a scan
  2. Scan will pass the normal phases; QUEUED, INIT, RUNNING
  3. Scan will exit once completed successfully
  4. If a scan is stuck for whatever reason on the scanner side, GSA should allow the user to force-stop the task. There should be some timeout for the REQUESTED status, at least as a workaround for the current bug.

Actual behavior

  1. Run a scan via GSA 21.4.3 against a slave ospd.openvas daemon hosted on the network
  2. The scan occasionally remains stuck in the INIT phase on ospd.openvas, and therefore stays stuck forever in the "REQUESTED" state on GSA
  3. The only way to force GSA to recover control of the scan is to kill the related ospd.openvas process, which forces the scan task into the Stopped state.

Steps to reproduce

None, as this unfortunately happens randomly.

GVM versions

gsa: Greenbone Security Assistant 21.4.3

gvm: Greenbone Vulnerability Manager 21.4.3
Manager DB revision 242

openvas-scanner: OpenVAS 21.4.3
gvm-libs 21.4.3

Environment

Operating system:
Linux ov-slave-kolding 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
gvm@ov-slave-kolding:~$ cat /etc/lsb-release
DISTRIB_ID=LinuxMint
DISTRIB_RELEASE=19.1
DISTRIB_CODENAME=tessa
DISTRIB_DESCRIPTION="Linux Mint 19.1 Tessa"

Installation method / source: source installation

Logfiles

OSPD[16806] 2022-01-25 10:32:11,509: DEBUG: (ospd.server) New connection from ('10.194.157.7', 55698)
OSPD[16806] 2022-01-25 10:32:11,523: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:11,524: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,525: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:11,525: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Check scan process: 
	Progress 0
	 Status: INIT
OSPD[16806] 2022-01-25 10:32:11,525: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,526: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:11,526: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,527: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Results sent successfully to the client. Cleaning temporary result list.
OSPD[16806] 2022-01-25 10:32:11,605: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:11,607: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,608: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:11,608: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Check scan process: 
	Progress 0
	 Status: INIT
OSPD[16806] 2022-01-25 10:32:11,608: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,609: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:11,609: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,610: DEBUG: (ospd.server) New connection from ('10.194.157.7', 55700)
OSPD[16806] 2022-01-25 10:32:11,612: DEBUG: (ospd.ospd) Returning 0 results
OSPD[16806] 2022-01-25 10:32:11,612: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Results sent successfully to the client. Cleaning temporary result list.
OSPD[16806] 2022-01-25 10:32:11,709: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:11,711: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,712: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:11,712: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Check scan process: 
	Progress 0
	 Status: INIT
OSPD[16806] 2022-01-25 10:32:11,712: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,713: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:11,713: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,715: DEBUG: (ospd.ospd) Returning 0 results
OSPD[16806] 2022-01-25 10:32:11,715: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Results sent successfully to the client. Cleaning temporary result list.
OSPD[16806] 2022-01-25 10:32:11,748: DEBUG: (ospd.server) New connection from ('10.194.157.7', 55702)
OSPD[16806] 2022-01-25 10:32:11,843: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:11,845: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,846: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:11,846: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Check scan process: 
	Progress 0
	 Status: INIT
OSPD[16806] 2022-01-25 10:32:11,846: DEBUG: (ospd.server) New connection from ('10.194.157.7', 55714)
OSPD[16806] 2022-01-25 10:32:11,847: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,848: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:11,848: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,849: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Results sent successfully to the client. Cleaning temporary result list.
OSPD[16806] 2022-01-25 10:32:11,943: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:11,945: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,945: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:11,945: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Check scan process: 
	Progress 0
	 Status: INIT
OSPD[16806] 2022-01-25 10:32:11,946: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,946: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:11,947: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:11,948: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Results sent successfully to the client. Cleaning temporary result list.
OSPD[16806] 2022-01-25 10:32:13,126: DEBUG: (ospd_openvas.daemon) Current feed version: 202201241112
OSPD[16806] 2022-01-25 10:32:13,126: DEBUG: (ospd_openvas.daemon) Plugin feed version: 202201241112
OSPD[16806] 2022-01-25 10:32:13,127: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:13,127: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:16,936: DEBUG: (ospd.server) New connection from ('10.194.157.7', 55746)
OSPD[16806] 2022-01-25 10:32:17,031: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:17,033: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,034: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,034: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Check scan process: 
	Progress 0
	 Status: INIT
OSPD[16806] 2022-01-25 10:32:17,034: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,035: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,036: DEBUG: (ospd.server) New connection from ('10.194.157.7', 55748)
OSPD[16806] 2022-01-25 10:32:17,036: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,037: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Results sent successfully to the client. Cleaning temporary result list.
OSPD[16806] 2022-01-25 10:32:17,125: DEBUG: (ospd.server) New connection from ('10.194.157.7', 55750)
OSPD[16806] 2022-01-25 10:32:17,133: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:17,135: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,136: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,136: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Check scan process: 
	Progress 0
	 Status: INIT
OSPD[16806] 2022-01-25 10:32:17,136: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,137: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,137: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,138: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Results sent successfully to the client. Cleaning temporary result list.
OSPD[16806] 2022-01-25 10:32:17,222: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:17,224: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,224: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,225: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Check scan process: 
	Progress 0
	 Status: INIT
OSPD[16806] 2022-01-25 10:32:17,225: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,226: DEBUG: (ospd.server) New connection from ('10.194.157.7', 55752)
OSPD[16806] 2022-01-25 10:32:17,226: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,227: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,228: DEBUG: (ospd.ospd) Returning 0 results
OSPD[16806] 2022-01-25 10:32:17,229: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Results sent successfully to the client. Cleaning temporary result list.
OSPD[16806] 2022-01-25 10:32:17,321: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:17,322: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,323: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,323: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Check scan process: 
	Progress 0
	 Status: INIT
OSPD[16806] 2022-01-25 10:32:17,323: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,324: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,325: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,326: DEBUG: (ospd.ospd) Returning 0 results
OSPD[16806] 2022-01-25 10:32:17,326: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Results sent successfully to the client. Cleaning temporary result list.
OSPD[16806] 2022-01-25 10:32:17,378: DEBUG: (ospd.server) New connection from ('10.194.157.7', 55754)
OSPD[16806] 2022-01-25 10:32:17,474: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:17,475: DEBUG: (ospd.server) New connection from ('10.194.157.7', 55756)
OSPD[16806] 2022-01-25 10:32:17,476: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,477: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,477: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Check scan process: 
	Progress 0
	 Status: INIT
OSPD[16806] 2022-01-25 10:32:17,477: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,478: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,479: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,479: DEBUG: (ospd.ospd) 402df51e-4b62-473a-a611-42a59f014770: Results sent successfully to the client. Cleaning temporary result list.
OSPD[16806] 2022-01-25 10:32:17,567: DEBUG: (ospd.ospd) Handling get_scans command request.
OSPD[16806] 2022-01-25 10:32:17,569: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan status: INIT,
OSPD[16806] 2022-01-25 10:32:17,570: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Current scan progress: 0,
OSPD[16806] 2022-01-25 10:32:17,570: DEBUG: (ospd.ospd) 1864345b-0055-46dd-8e8d-ef57a731d261: Check scan process: 
	Progress 0
	 Status: INIT
....

unable to change socket_mode

I have tried specifying it both in the config file /etc/openvas/ospd.conf and on the command line via --socket-mode=0o777, but neither worked.

Would you have any ideas?

I'm using the latest version 1.0.0

Incorrect progress calculation due to excluded_hosts results in Interrupted scan

Originally asked as a forum question: https://forum.greenbone.net/t/excluded-hosts-causing-interrupted-scans/16257

This is related to other issues such as #335 and https://github.com/greenbone/openvas-scanner/pull/1509/files

I apologize if opening a ticket here isn't the correct way to continue the discussion!

Expected behavior

My use case is scanning a single specific FQDN on a single specific IP.

I expect to be able to create a FQDN target and exclude a list of IPs.

Steps to reproduce

For example, a.com resolves to 1.1.1.1 and 2.2.2.2, and I want to scan this single hostname on 2.2.2.2 only. I create a target for a.com, set excluded_hosts=1.1.1.1 and expand_vhosts=0.

I expect this to do a DNS lookup for a.com and find 1.1.1.1 and 2.2.2.2, and to exclude 1.1.1.1 and scan 2.2.2.2. It should use the one vhost a.com and expand to no other domains on 2.2.2.2.

Actual behavior

This works.

However, when the scan completes it is set to Interrupted rather than Done. I believe there's no error in the scan, and that this is because of the OSPD progress calculation.

For example, with 1 included domain and 1 excluded IP, ospd-openvas.log:

Host scan finished.
Host scan got interrupted. Progress: 50, Status: RUNNING
Scan interrupted.
Scan process is dead and its progress is 50

With 1 included domain and 2 excluded IPs the result is a scan Interrupted at 33%.

My assumptions based on the code

The problem appears to be in OSPD. Initially the host count is unknown. The progress is calculated in ospd/scan.py calculate_target_progress().

This calls get_count_total(), which sees there's no cached count in the scans table. This then calls get_host_count() which finds the one host and calls update_count_total(count_total=1) to cache the count for subsequent calls.

We have the 1 host (a.com) that takes part in the progress calculation. The excluded host (1.1.1.1) is ignored, i.e. simplify_exclude_host_count() sees that the excluded IPs aren't part of the host list and treats them as invalid_exc_hosts rather than counting them.

All is good until the OSPD daemon later updates the cached total with a different value.

ospd_openvas/daemon.py report_results() gets called while the scan is in progress. I haven't investigated the circumstances of this.

Part of this updates the host count, calling ospd/ospd.py set_scan_total_hosts(count_total=2), which calls ospd/scan.py update_count_total(count_total=2), i.e. this is a count that includes the excluded hosts.

Now the progress calculation uses a host count of 2. Since the "invalid" excluded host (1.1.1.1) is still ignored, only 1 of the 2 counted hosts ever finishes, so the progress ends at 50% and the scan is flagged Interrupted.
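A small worked example of the mismatch, using the 1-domain / 1-excluded-IP case from above; the function is only an illustration of the arithmetic, not the ospd implementation:

def progress_percent(finished_hosts, count_total, excluded_counted):
    # Excluded hosts are only subtracted when ospd recognised them as part
    # of the target; the "invalid" excluded IP contributes nothing here.
    return int(finished_hosts / (count_total - excluded_counted) * 100)


# Initial cached total: 1 host (a.com), excluded 1.1.1.1 treated as invalid.
print(progress_percent(1, count_total=1, excluded_counted=0))  # 100

# After report_results() overwrites the cache with count_total=2 (a count
# that includes the excluded IP), the same finished host yields only 50%,
# so the scan is flagged Interrupted.
print(progress_percent(1, count_total=2, excluded_counted=0))  # 50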

There could well be a work-around/alternative way to scan a specific domain and IP pair that avoids this problem! I'm afraid I haven't found it though.

For a newcomer to the code the host count logic in the different parts of the Greenbone ecosystem is a bit confusing :) I couldn't see a good place to make a change that wouldn't break something else!

GVM versions

I’ve built from source:

OSPd OpenVAS version 22.6.1
Greenbone Security Assistant 22.07.0
Greenbone Vulnerability Manager 23.0.1
Manager DB revision 255
OpenVAS 22.7.6
gvm-libs 22.7.3

Environment

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.1 LTS"

[1.0.0] Socket permissions will reset during startup

When the log message "DEBUG: (ospd_openvas.daemon) Loading vts in memory." appears, the socket permissions are as requested by --socket-mode. But after the message "DEBUG: (ospd_openvas.daemon) Finish loading up vts." the socket permissions are reset to 0755.

ospd-openvas:22.4.6 - scan process never stop

Hi,
I use GVM Community containers 22.4 (https://greenbone.github.io/docs/latest/22.4/container/index.html).
When I run a scan, everything works perfectly ... except when the scan ends. The "openvas --scan-start" process never stops even though the task is done under GSA.

Log ospd-openvas container:
06/04/2023 14:32:32OSPD[1] 2023-04-06 12:32:32,624: DEBUG: (ospd.ospd) 3f2f4d82-83ec-47ca-9728-c07a2cd103cd: Current scan status: FINISHED,

docker top
UID PID PPID C STIME TTY TIME CMD
gvm 19793 19772 6 13:49 pts/0 00:02:58 /usr/bin/python3 /usr/local/bin/ospd-openvas -f --config /etc/gvm/ospd-openvas.conf --mqtt-broker-address mqtt-broker --notus-feed-dir /var/lib/notus/advisories-m 666
gvm 19874 19793 0 13:49 pts/0 00:00:17 /usr/bin/python3 /usr/local/bin/ospd-openvas -f --config /etc/gvm/ospd-openvas.conf --mqtt-broker-address mqtt-broker --notus-feed-dir /var/lib/notus/advisories-m 666
gvm 31298 19793 10 14:15 ? 00:01:52 openvas --scan-start 3f2f4d82-83ec-47ca-9728-c07a2cd103cd

TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

In ospd-openvas 1.0.0, I worked around it with if None == timeout: timeout = 120 to continue testing

jan 14 10:50:24 openvas ospd-openvas[16659]: Traceback (most recent call last):
File "/usr/bin/ospd-openvas", line 11, in
load_entry_point('ospd-openvas==1.0.0', 'console_scripts', 'ospd-openvas')()
File "/usr/lib/python3/dist-packages/ospd_openvas/daemon.py", line 1454, in main
daemon_main('OSPD - openvas', OSPDopenvas)
File "/usr/lib/python3/dist-packages/ospd/main.py", line 159, in main
daemon.init()
File "/usr/lib/python3/dist-packages/ospd_openvas/daemon.py", line 293, in init
self.load_vts()
File "/usr/lib/python3/dist-packages/ospd_openvas/daemon.py", line 410, in load_vts
_vt_params = self.nvti.get_nvt_params(vt_id)
File "/usr/lib/python3/dist-packages/ospd_openvas/nvticache.py", line 82, in get_nvt_params
if int(timeout) > 0:
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
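A minimal sketch of that workaround, with the 120-second fallback the reporter used; the helper is illustrative of the check that could guard the int(timeout) call in get_nvt_params():

DEFAULT_SCRIPT_TIMEOUT = 120  # fallback value chosen by the reporter


def normalize_timeout(timeout):
    # The NVT cache can return None for a missing timeout preference,
    # which is what made int(timeout) raise above.
    if timeout is None:
        return DEFAULT_SCRIPT_TIMEOUT
    return int(timeout)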

[22.4.6] AttributeError: 'NoneType' object has no attribute 'startswith'

This error message appears after I used virtualenv to install. I tried to solve it through the conf, but it doesn't work. Is this a bug?
[root@localhost bin]# ./ospd-openvas
Traceback (most recent call last):
  File "./ospd-openvas", line 8, in <module>
    sys.exit(main())
  File "/opt/ospd-scanner/lib/python3.7/site-packages/ospd_openvas/daemon.py", line 1243, in main
    daemon_main('OSPD - openvas', OSPDopenvas, NotusParser())
  File "/opt/ospd-scanner/lib/python3.7/site-packages/ospd/main.py", line 126, in main
    daemon = daemon_class(**vars(args))
  File "/opt/ospd-scanner/lib/python3.7/site-packages/ospd_openvas/daemon.py", line 452, in __init__
    self.main_db = MainDB()
  File "/opt/ospd-scanner/lib/python3.7/site-packages/ospd_openvas/db.py", line 589, in __init__
    super().__init__(self.DEFAULT_INDEX, ctx)
  File "/opt/ospd-scanner/lib/python3.7/site-packages/ospd_openvas/db.py", line 411, in __init__
    self.ctx = OpenvasDB.create_context(kbindex)
  File "/opt/ospd-scanner/lib/python3.7/site-packages/ospd_openvas/db.py", line 113, in create_context
    decode_responses=True,
  File "/opt/ospd-scanner/lib/python3.7/site-packages/redis/client.py", line 902, in from_url
    connection_pool = ConnectionPool.from_url(url, **kwargs)
  File "/opt/ospd-scanner/lib/python3.7/site-packages/redis/connection.py", line 1370, in from_url
    url_options = parse_url(url)
  File "/opt/ospd-scanner/lib/python3.7/site-packages/redis/connection.py", line 1259, in parse_url
    url.startswith("redis://")
AttributeError: 'NoneType' object has no attribute 'startswith'

[OSPD - openvas]

General

pid_file = /var/run/ospd/openvas.pid
lock_file_dir = /var/run/
stream_timeout = 1
max_scans = 3
min_free_mem_scan_queue = 1000
max_queued_scans = 0

Log config

log_level = DEBUG
log_file = /var/log/gvm/openvas.log
log_config = /.config/ospd-logging.conf

Unix socket settings

socket_mode = 0o770
unix_socket = /var/run/ospd/openvas.sock

TLS socket settings and certificates.

#port = 9390
#bind_address = 0.0.0.0
#key_file = install-prefix/var/lib/gvm/private/CA/serverkey.pem
#cert_file = install-prefix/var/lib/gvm/CA/servercert.pem
#ca_file = install-prefix/var/lib/gvm/CA/cacert.pem

[OSPD - some wrapper]
log_level = DEBUG
socket_mode = 0o770
unix_socket = /var/run/ospd/ospd-wrapper.sock
pid_file = /var/run/ospd/ospd-wrapper.pid
log_file = /var/log/gvm/ospd-wrapper.log

KeyError: 'vt_groups' on command 'start_scan'

I'm trying to create an independent scanner with openvas 7.0, ospd-openvas and gvm-cli to control it.

This error happened on Debian 10, with everything installed from the latest stable release. More precisely, in the case of ospd-openvas, I installed it from pip.

$ ospd-openvas --version
OSP Server for openvas: 1.0.0
OSP: 1.2
OSPd: 2.0.0

When I try the examples from the OSP API and send the "start_scan" command, the server responds with status code 200, but in the backend the scan fails with the following stack trace:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/ospd/ospd.py", line 777, in parallel_scan
    ret = self.exec_scan(scan_id, target)
  File "/usr/local/lib/python3.7/dist-packages/ospd_openvas/daemon.py", line 1351, in exec_scan
    nvts_list, nvts_params = self.process_vts(nvts)
  File "/usr/local/lib/python3.7/dist-packages/ospd_openvas/daemon.py", line 1087, in process_vts
    vtgroups = vts.pop('vt_groups')
KeyError: 'vt_groups'

start_scan command:

<start_scan target="192.168.1.20" ports="80, 443"><scanner_params><target_port>443</target_port><use_https>1</use_https><profile>fast_scan</profile></scanner_params></start_scan>

An error appears in the scan report: Host process failure ('vt_groups').
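For illustration, a defensive variant of the failing line in process_vts() would avoid the KeyError when the client sends no VT selection at all (a sketch only, not the upstream fix):

vts = {}  # what process_vts() receives for a <start_scan> without any VT selection
vtgroups = vts.pop('vt_groups', None)  # None instead of KeyError: 'vt_groups'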

Any ideas about this error? I was unable to discover the reason for this error.

NVT updates won't finish properly when an openvas scan is finished but still running for gsad / gvmd, and new scan requests then get stuck

Environment:

OpenVAS 7.0.1
gvm-libs 11.0.1
OSP Server for openvas: 1.0.1
OSP: 1.2
OSPd: 2.0.1
python2.6
Ubuntu 18.04 LTS
Redis 4.09 with GVMd tuned configuration file

Current behaviour:
If you have a scan shown as completed on the openvas / ospd-openvas side but still in Running state on the gvmd / gsad side, and an NVT update starts, then the update won't finish properly, preventing new scans from starting.

In the following example, the scan has terminated properly and later an NVT update is launched:

sd   main:MESSAGE:2020-07-19 19h39.45 utc:32689: Total time to scan all hosts : 49142 seconds
lib  nvticache:MESSAGE:2020-07-24 09h15.32 utc:5936: Updated NVT cache from version 202007071016 to 202007230955

On the ospd-openvas side, the scan is seen as finished but the update never finishes:

2020-07-23 04:38:48,951 OSPD - openvas: INFO: (ospd.ospd) 68cab482-415c-4475-9f9b-d2dcd6f562b3: Scan finished.
2020-07-24 11:15:10,793 OSPD - openvas: INFO: (ospd_openvas.daemon) A feed update process is running. Trying again later...
2020-07-24 11:15:10,793 OSPD - openvas: DEBUG: (ospd_openvas.daemon) The feed was not upload or it is outdated, but other process is locking the update. Trying again later...
2020-07-24 11:15:20,857 OSPD - openvas: DEBUG: (ospd_openvas.daemon) Loading NVTs in Redis DB
2020-07-24 11:15:32,836 OSPD - openvas: DEBUG: (ospd_openvas.daemon) Feed lock file removed.

However, the update is still considered pending, even hours after the last logged event, and the following update log line never shows up:
OSPD - openvas: INFO: (ospd_openvas.daemon) Finish loading up vts.

As a consequence, further scan requests automatically fail since ospd-openvas still considers there is a pending update:
2020-07-25 10:56:50,170 OSPD - openvas: INFO: (ospd_openvas.daemon) c6ccea2a-3ca3-47a2-be89-d6319fb8f9d3: There is a pending feed update. The scan can not be started.

Expected behaviour:
When a scan is done on the openvas / ospd-openvas side, NVT updates shouldn't get stuck and should finish properly.

How to reproduce:

  1. Start a scan on a somewhat large task, so that results calculation by gvmd keeps running after ospd-openvas completes
  2. When the openvas / ospd-openvas side has completed the task scan, run a feed update on the scanner
  3. After a reasonable delay, when the feed update should have finished, launch a new scan
  4. The scan will automatically be set to the "Done" state with the report set to "Error", as the pending update is stuck forever.

Pulling the active scans with gvm-cli

Environment:
Greenbone Vulnerability Manager 20.08.0git-c04cad16-gvmd-20.08
Greenbone Security Assistant 20.08.0git-d26e061f9-gsa-20.08
OSP Server for openvas: 20.8.0
OSP: 20.8.1
OSPd OpenVAS: 20.8.1
gvm-cli 20.10.2.dev1 (API version 20.11.3)
Ubuntu 20.04

I have an issue where gvmd believes that running scans are terminating, often "interrupted", and I've yet to figure out quite why.
In the meantime, I am also using the "max scans" setting in ospd-openvas to try and keep things trimmed down to 2 parallel scans, and I also often find that although gvmd says the scans are not running, ospd thinks they are; scans queue and I often get "(ospd.ospd) Not possible to run a new scan. Max scan limit set to 2 reached." in the logs. I realise I could just remove this limit, but I'd like to have things a bit more exact:

I was trying to make it all a bit more "closed-loop" by using gvm-cli to pull a list of known running scans using

gvm-cli --protocol OSP socket --socketpath /opt/gvm/var/run/ospd.sock --xml="<get_scans/>"

but unlike the GMP protocol, this does not produce a list, just an error: "Response Error 400. No scan_id attribute".
Is there a way of simply pulling the number of running scans on the ospd side without knowing the "scan_id", so I can compare it to the number of running scans within gvm and restart the service if they differ? Even more ideally, I'd also like to know the UUIDs or some other attribute which could be used to match task to scan, so I can selectively cull stuff in the scanner.
I realise the UUIDs are dissimilar, so I can't query a specific scan without scouring logs to see what's running and get the UUIDs, which is difficult and hacky to script while logs rotate and so on behind the scenes.

It's my first delve into the use of gvm-cli with the OSP protocol so I may have missed something!

Kind Regards
Andy

ospd.errors.RequiredArgument: set_redisctx: Argument ctx is required

I was just testing the new ospd-openvas scanner and encountered the following errors when running a scan:

# ospd-openvas -f --log-level DEBUG
2019-11-15 09:05:17,837 OSPD - openvas: DEBUG: (ospd_openvas.daemon) Loading vts in memory.
2019-11-15 09:05:34,495 OSPD - openvas: DEBUG: (ospd_openvas.daemon) Finish loading up vts.
2019-11-15 09:05:44,575 OSPD - openvas: DEBUG: (ospd.server) New connection from /var/run/ospd/ospd.sock
2019-11-15 09:05:54,613 OSPD - openvas: DEBUG: (ospd.server) New connection from /var/run/ospd/ospd.sock
2019-11-15 09:06:04,652 OSPD - openvas: DEBUG: (ospd.server) New connection from /var/run/ospd/ospd.sock
2019-11-15 09:06:14,694 OSPD - openvas: DEBUG: (ospd.server) New connection from /var/run/ospd/ospd.sock
2019-11-15 09:06:37,020 OSPD - openvas: DEBUG: (ospd.server) New connection from /var/run/ospd/ospd.sock
2019-11-15 09:06:47,059 OSPD - openvas: DEBUG: (ospd.server) New connection from /var/run/ospd/ospd.sock
2019-11-15 09:07:08,526 OSPD - openvas: DEBUG: (ospd.server) New connection from /var/run/ospd/ospd.sock
2019-11-15 09:07:09,368 OSPD - openvas: DEBUG: (ospd.server) New connection from /var/run/ospd/ospd.sock
2019-11-15 09:07:19,524 OSPD - openvas: INFO: (ospd.ospd) b8c9f193-5966-4df2-b571-16221cc1c87b: Scan started.
2019-11-15 09:07:19,525 OSPD - openvas: INFO: (ospd.ospd) x.x.x.x: Host scan started on ports T:1-65535,U:1-65535.
2019-11-15 09:07:19,539 OSPD - openvas: ERROR: (ospd.ospd) While scanning x.x.x.x:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/ospd/ospd.py", line 777, in parallel_scan
    ret = self.exec_scan(scan_id, target)
  File "/usr/lib/python3/dist-packages/ospd_openvas/daemon.py", line 1277, in exec_scan
    self.openvas_db.set_redisctx(ctx)
  File "/usr/lib/python3/dist-packages/ospd_openvas/db.py", line 133, in set_redisctx
    raise RequiredArgument('set_redisctx', 'ctx')
ospd.errors.RequiredArgument: set_redisctx: Argument ctx is required
2019-11-15 09:07:19,591 OSPD - openvas: DEBUG: (ospd.server) New connection from /var/run/ospd/ospd.sock
2019-11-15 09:07:21,534 OSPD - openvas: INFO: (ospd.ospd) b8c9f193-5966-4df2-b571-16221cc1c87b: Scan finished.

scanner still running after stopping it in the GSA Web UI.

Environment

  • openvas: 21.10.0dev1git-0b879efc-master
  • gvm-libs 21.10.0dev1git-55465356-master
  • ospd: 21.10.0.dev1
  • ospd-openvas: 21.10.0.dev1

Issue

Hello, I have noticed that after stopping the running full and fast scan task,
the scanner process still keeps running in the background.

bash# ps -ef | grep openvas
843382 root      1:05 {ospd-openvas} /usr/bin/python3 /usr/bin/ospd-openvas -f --unix-socket /var/run/ospd/ospd.sock --socket-mode 0o666 --log-level DEBUG
845224 root      0:46 {ospd-openvas} /usr/bin/python3 /usr/bin/ospd-openvas -f --unix-socket /var/run/ospd/ospd.sock --socket-mode 0o666 --log-level DEBUG
845884 root      0:10 openvas --scan-start <uuid>
846042 root      0:00 openvas: testing <ip>
846049 root      0:00 openvas: testing <ip> (/var/lib/openvas/plugins/nmap.nasl)
852420 root      0:00 grep openvas
bash# ps -ef | grep nmap
846049 root      0:00 openvas: testing <ip> (/var/lib/openvas/plugins/nmap.nasl)
846051 root      0:00 nmap -n -Pn -oG /tmp/nmap-<ip>-1887023174 -sT -sU -p T:1-3,5,7,9,11,13,17-25,27,29,31,33...

Then I read the code and noticed the following code snippets in ospd-openvas:

# daemon.py -> OSPDopenvas -> exec_scan

    # Check if the client stopped the whole scan
    if scan_stopped:
        logger.debug('%s: Scan stopped by the client', scan_id)

        self.stop_scan_cleanup(kbdb, scan_id, openvas_process)

        # clean main_db, but wait for scanner to finish.
        while not kbdb.target_is_finished(scan_id):
            logger.debug('%s: Waiting for openvas to finish', scan_id)
            time.sleep(1)
        self.main_db.release_database(kbdb)
        return


# daemon.py -> OSPDopenvas -> stop_scan_cleanup
    logger.debug('Stopping process: %s', ovas_process)

    while ovas_process.is_running():
        if ovas_process.status() == psutil.STATUS_ZOMBIE:
            ovas_process.wait()
        else:
            time.sleep(0.1)

It seems ospd never sends any signal to the scanner when stopping,
and just waits for the process to finish its current job.
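As a hedged sketch only (not the upstream mechanism, which may notify the scanner through the Redis kb), actively terminating the openvas process from the cleanup loop could look like this with psutil:

import psutil


def terminate_openvas(ovas_process: psutil.Process, timeout: float = 30.0) -> None:
    # Illustrative: send SIGTERM to the "openvas --scan-start <uuid>" process
    # and escalate to SIGKILL if it does not exit within the timeout.
    if not ovas_process.is_running():
        return
    ovas_process.terminate()
    try:
        ovas_process.wait(timeout=timeout)
    except psutil.TimeoutExpired:
        ovas_process.kill()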

Does it have some other mechanism to notify the scanner process?

Or is this behavior by design?

Current docker image cannot start because of redis-py bug

Expected behavior

The docker container for this project should build and connect to the redis server container without error.

Actual behavior

The current docker container build (as of Feb 8 ~09:00 UTC) fails to start and returns the following error:

AttributeError: 'UnixDomainSocketConnection' object has no attribute '_command_packer'

Steps to reproduce

Follow these instructions. Notice that the container doesn't start properly.

Proposed solution

redis-py 4.5.1 fixed this bug. Rebuild this container and push it, ensuring that redis-py v4.5.1 is installed.

Working installation of GVM11 stops working on ospd-openvas

Hi,

I am getting the following error from ospd-openvas when issuing the command ospd-openvas --help

Traceback (most recent call last):
  File "/opt/gvm/bin/ospd-openvas", line 6, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3095, in <module>
    @_call_aside
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3079, in _call_aside
    f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 3108, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 570, in _build_master
    ws.require(__requires__)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 888, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 774, in resolve
    raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'ospd-openvas==1.0.1' distribution was not found and is required by the application

The installation this is happening on was totally stable, and this change came from one day to the next. How can I address this?

Thank you!

Start ospd-openvas - socket + port

/usr/local/bin/ospd-openvas --pid-file /var/run/ospd/ospd.pid --unix-socket /var/run/ospd/ospd.sock -m 0777 --key-file /usr/local/var/lib/gvm/private/CA/serverkey.pem --cert-file /usr/local/var/lib/gvm/CA/servercert.pem --ca-file /usr/local/var/lib/gvm/CA/cacert.pem -L DEBUG -f

When "-b 0.0.0.0 -p 51234" is not specified, the .sock file is created.

/usr/local/bin/ospd-openvas --pid-file /var/run/ospd/ospd.pid --unix-socket /var/run/ospd/ospd.sock -m 0777 -b 0.0.0.0 -p 51234 --key-file /usr/local/var/lib/gvm/private/CA/serverkey.pem --cert-file /usr/local/var/lib/gvm/CA/servercert.pem --ca-file /usr/local/var/lib/gvm/CA/cacert.pem -L DEBUG -f

When "-b 0.0.0.0 -p 51234" is specified, the .sock file is not created.

Is it possible to start the service listening on both the port and the socket?

TypeError: lrem() got an unexpected keyword argument 'count'

With ospd-openvas 1.0.0

2020-01-14 13:05:29,249 OSPD - openvas: ERROR: (ospd.ospd) While scanning aa,bb,cc,dd/xx:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ospd/ospd.py", line 777, in parallel_scan
ret = self.exec_scan(scan_id, target)
File "/usr/lib/python3/dist-packages/ospd_openvas/daemon.py", line 1441, in exec_scan
self.openvas_db.remove_list_item('internal/dbindex', i)
File "/usr/lib/python3/dist-packages/ospd_openvas/db.py", line 291, in remove_list_item
ctx.lrem(key, count=LIST_ALL, value=value)

Scans consistently end up in state Interrupted

Since updating to the latest version with the stable tag last week (the 14th), we have some scans that now consistently end up in state interrupted instead of finishing.

The symptoms appear to be more or less exactly those in #951, for example:

OSPD[7] 2024-03-15 01:14:15,731: INFO: (ospd.ospd) ac4be7ec-972b-417f-bd68-05a753c53c52: Host scan finished.
OSPD[7] 2024-03-15 01:14:15,733: INFO: (ospd.ospd) ac4be7ec-972b-417f-bd68-05a753c53c52: Host scan got interrupted. Progress: 63, Status: RUNNING
OSPD[7] 2024-03-15 01:14:15,734: INFO: (ospd.ospd) ac4be7ec-972b-417f-bd68-05a753c53c52: Scan interrupted.
OSPD[7] 2024-03-15 01:14:16,126: INFO: (ospd.ospd) ac4be7ec-972b-417f-bd68-05a753c53c52: Scan process is dead and its progress is 63

For this scan, the target that fails looks basically like this:

included: 10.1.2.2-10.1.2.254
excluded: 10.1.2.111, 10.1.2.112, 10.1.2.113, 10.1.2.117, 10.1.2.118, 10.1.2.119, 10.1.2.171, 10.1.2.240

When I do a test run now, I appear to get 14 hosts alive and included in the range, and I note that 14/(14+8) = 0.636..., so not accounting for the excluded hosts would appear to roughly explain the progress not reaching 100%. This may be a coincidence; I have not looked too closely at the changes.

Kali linux : every scan aborted since last update - ospd-openvas error

Hello.
Since the last update on Kali (system and Greenbone components), every scan is automatically aborted after a short period of activity.
ospd-openvas.log reports this:

OSPD[1841762] 2022-10-26 14:12:06,550: INFO: (ospd.command.command) Scan cb56381e-60fb-4dcf-b9d7-c9fe61a255a6 added to the queue in position 1.
OSPD[1841762] 2022-10-26 14:12:12,013: INFO: (ospd.ospd) Currently 1 queued scans.
OSPD[1841762] 2022-10-26 14:12:12,397: INFO: (ospd.ospd) Starting scan cb56381e-60fb-4dcf-b9d7-c9fe61a255a6.
OSPD[1841762] 2022-10-26 14:22:39,614: ERROR: (ospd.ospd) While handling client command:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/managers.py", line 810, in _callmethod
conn = self._tls.connection
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/ospd/ospd.py", line 484, in handle_client_stream
self.handle_command(data, stream)
File "/usr/lib/python3/dist-packages/ospd/ospd.py", line 1223, in handle_command
response = command.handle_xml(tree)
File "/usr/lib/python3/dist-packages/ospd/command/command.py", line 453, in handle_xml
self._daemon.check_scan_process(scan_id)
File "/usr/lib/python3/dist-packages/ospd/ospd.py", line 1448, in check_scan_process
status = self.get_scan_status(scan_id)
File "/usr/lib/python3/dist-packages/ospd/ospd.py", line 670, in get_scan_status
status = self.scan_collection.get_status(scan_id)
File "/usr/lib/python3/dist-packages/ospd/scan.py", line 358, in get_status
status = self.scans_table[scan_id].get('status')
File "", line 2, in get
File "/usr/lib/python3.10/multiprocessing/managers.py", line 814, in _callmethod
self._connect()
File "/usr/lib/python3.10/multiprocessing/managers.py", line 801, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/lib/python3.10/multiprocessing/connection.py", line 507, in Client
c = SocketClient(address)
File "/usr/lib/python3.10/multiprocessing/connection.py", line 635, in SocketClient
s.connect(address)
ConnectionRefusedError: [Errno 111] Connection refused
OSPD[1841762] 2022-10-26 14:22:40,428: WARNING: (ospd.ospd) Error sending data to the client while executing a scan cb56381e-60fb-4dcf-b9d7-c9fe61a255a6.
OSPD[1866096] 2022-10-26 14:25:38,841: INFO: (ospd.main) Starting OSPd OpenVAS version 21.4.5.dev1.

Don't know if it's related, but all the memory is occupied by Redis (8 GB + 3 GB of swap) (openvas running, but no scan running), and a /24 "normal" scan uses 100% of CPU.

Thanks

UnicodeDecodeError: 'ascii' codec can't decode

The latest version: 20.8.0
When I started the ospd service, this error occurred:

Jan 28 00:36:02 localhost.localdomain python3[18089]:   File "/opt/gvm/lib/python3.6/site-packages/ospd-20.8.1-py3.6.egg/ospd/main.py", line 124, in main
Jan 28 00:36:02 localhost.localdomain python3[18089]:   File "/opt/gvm/lib/python3.6/site-packages/ospd_openvas-20.8.0-py3.6.egg/ospd_openvas/daemon.py", line 443, in __>
Jan 28 00:36:02 localhost.localdomain python3[18089]:   File "/opt/gvm/lib/python3.6/site-packages/ospd_openvas-20.8.0-py3.6.egg/ospd_openvas/db.py", line 565, in __init>
Jan 28 00:36:02 localhost.localdomain python3[18089]:   File "/opt/gvm/lib/python3.6/site-packages/ospd_openvas-20.8.0-py3.6.egg/ospd_openvas/db.py", line 362, in __init>
Jan 28 00:36:02 localhost.localdomain python3[18089]:   File "/opt/gvm/lib/python3.6/site-packages/ospd_openvas-20.8.0-py3.6.egg/ospd_openvas/db.py", line 97, in create_>
Jan 28 00:36:02 localhost.localdomain python3[18089]:   File "/opt/gvm/lib/python3.6/site-packages/ospd_openvas-20.8.0-py3.6.egg/ospd_openvas/db.py", line 75, in get_dat>
Jan 28 00:36:02 localhost.localdomain python3[18089]:   File "/opt/gvm/lib/python3.6/site-packages/ospd_openvas-20.8.0-py3.6.egg/ospd_openvas/openvas.py", line 104, in g>
Jan 28 00:36:02 localhost.localdomain python3[18089]: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 221: ordinal not in range(128)
Jan 28 00:36:02 localhost.localdomain systemd[1]: ospd.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 00:36:02 localhost.localdomain systemd[1]: ospd.service: Failed with result 'exit-code'.

It should be decoded as UTF-8 instead of ASCII.
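A minimal sketch of that suggestion: decode the scanner's output explicitly as UTF-8 instead of relying on the default (ASCII) locale; the command below is only an example of reading openvas output:

import subprocess

raw = subprocess.check_output(['openvas', '--version'])
# Explicit decoding avoids UnicodeDecodeError when the output contains
# non-ASCII bytes such as the 0xe2 from the log above.
text = raw.decode('utf-8', errors='replace')
print(text)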

[1.0.0] discrepancy between help output and man page

ospd-openvas --help states:

-u UNIX_SOCKET, --unix-socket UNIX_SOCKET
                        Unix file socket to listen on. Default:
                        /var/run/ospd/ospd.sock

while docs/ospd-openvas.8 tells:

.TP
.BI "-u " UNIX_SOCKET ", --unix-socket "UNIX_SOCKET
Unix file socket to listen on. Default: /tmp/ospd.sock

and also
-p PORT, --port PORT TCP Port to listen on. Default: 0
vs.
TCP Port to listen on. Default: 1234

paho-mqtt pypi package breaking changes

Expected behavior

ospd-openvas should connect to MQTT, creating an MQTTClient using paho-mqtt.
Note that version 1.6.1 still works; this only fails when upgrading to paho-mqtt 2.0.0.

Actual behavior

With paho-mqtt version 2.0.0, the following error is produced for both ospd-openvas and notus-scanner:

Traceback (most recent call last):
File "/opt/gvm/gvmpy/lib/python3.11/site-packages/paho/mqtt/client.py", line 874, in __del__
self._reset_sockets()
File "/opt/gvm/gvmpy/lib/python3.11/site-packages/paho/mqtt/client.py", line 1133, in _reset_sockets
self._sock_close()
File "/opt/gvm/gvmpy/lib/python3.11/site-packages/paho/mqtt/client.py", line 1119, in _sock_close
if not self._sock:
^^^^^^^^^^
AttributeError: 'MQTTClient' object has no attribute '_sock'
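For context, paho-mqtt 2.0 changed the client constructor: it now expects a callback API version as its first argument, so code written against 1.6.x fails when the client is created (a minimal sketch, assuming paho-mqtt >= 2.0 is installed):

import paho.mqtt.client as mqtt

# Works with paho-mqtt >= 2.0; with 1.6.x this first argument did not exist,
# which is why client creation breaks after the upgrade.
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION1, client_id='ospd-openvas')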

Steps to reproduce

install paho-mqtt version 2.0.0

Error with several different versions of ospd-openvas (tested latest and 2 previous releases)

OpenVAS does not clear stale pid files when current pid matches old pid

Expected behavior

When starting ospd-openvas in Docker after a hard shutdown, OpenVAS recognizes that the PID file at /run/ospd/ospd-openvas.pid is stale, removes it, and continues to start.

Actual behavior

OpenVAS displays the following error:

OSPD[1] 2022-08-09 15:25:51,151: ERROR: (ospd.misc) There is an already running process. See /run/ospd/ospd-openvas.pid.

Steps to reproduce

  1. Follow the instructions at https://greenbone.github.io/docs/latest/22.4/container/index.html to set up the Greenbone Community Containers.
  2. Kill the ospd-openvas suddenly with docker compose <options> kill ospd-openvas
  3. Attempt to restart the container with docker compose <options> up ospd-openvas

GVM versions

gsa: 22.04.0

gvm: 22.4.0~dev1

openvas-scanner: 22.4.1~dev1

gvm-libs: 22.4.1~dev1

Environment

Operating system:

$ uname -a && cat /etc/lsb-release 
Linux PC2287 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.3 LTS"

Installation method / source: Official docker-compose.yml file on https://greenbone.github.io/docs/latest/22.4/container/index.html

Proposed fix

There was already an attempt to fix this in 200079a. The fix doesn't work in a docker environment because with docker OpenVAS runs in a pid namespace where it always sees its pid as 1. Therefore the check at

if process_name == new_process_name:
yields a false positive.

That line should be updated to also check if the pid in the file matches the pid of the current process, and clear the file if it does.
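A minimal sketch of that check, combining the existing process-name comparison idea with a pid comparison against os.getpid(); names are illustrative, not the ospd code:

import os
from pathlib import Path

import psutil


def pid_file_can_be_removed(pid_file: str) -> bool:
    pid = int(Path(pid_file).read_text().strip())
    if pid == os.getpid():
        # Inside a container pid namespace every restart sees itself as pid 1,
        # so a pid file pointing at our own pid must be a leftover.
        return True
    # Also stale if the recorded pid no longer belongs to a running process.
    return not psutil.pid_exists(pid)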

SSL Handshake error doing mutual authentication

Hello

I am having issues connecting my gvmd setup to a remote scanner using the certificates created with gvm-manage-certs. Adding the scanner via GSAD and hitting the "Verify" button, I get the following error on the remote box:

2019-12-17 15:29:59,390 OSPD - openvas: DEBUG: (ospd.server) New connection from ('xxx.xxx.xxx.xxx', 51080)
----------------------------------------
Exception happened during processing of request from ('xxx.xxx.xxx.xxx', 51080)
Traceback (most recent call last):
  File "/usr/lib/python3.7/socketserver.py", line 650, in process_request_thread
    self.finish_request(request, client_address)
  File "/usr/lib/python3.7/socketserver.py", line 360, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python3.7/socketserver.py", line 720, in __init__
    self.handle()
  File "/opt/openvas/lib/python3.7/site-packages/ospd-2.0.0-py3.7.egg/ospd/server.py", line 126, in handle
    self.server.handle_request(self.request, self.client_address)
  File "/opt/openvas/lib/python3.7/site-packages/ospd-2.0.0-py3.7.egg/ospd/server.py", line 166, in handle_request
    self.server.handle_request(request, client_address)
  File "/opt/openvas/lib/python3.7/site-packages/ospd-2.0.0-py3.7.egg/ospd/server.py", line 294, in handle_request
    req_socket = self.tls_context.wrap_socket(request, server_side=True)
  File "/usr/lib/python3.7/ssl.py", line 423, in wrap_socket
    session=session
  File "/usr/lib/python3.7/ssl.py", line 870, in _create
    self.do_handshake()
  File "/usr/lib/python3.7/ssl.py", line 1139, in do_handshake
    self._sslobj.do_handshake()
OSError: [Errno 0] Error
----------------------------------------

On the remote I copied the following certificates:

  • cacert.pem
  • servercert.pem
  • serverkey.pem

For the local scanner credentials, I've used the following certs:

  • cacert.pem
  • clientcert.pem
  • clientkey.pem

Versions on the local side :

gsa.vers
v9.0.0
gvm-libs.vers
v11.0.0
gvm-tools.vers
v2.0.0
gvmd.vers
v9.0.0
openvas-smb.vers
v1.0.5
openvas.vers
v7.0.0
ospd-openvas.vers
v1.0.0
ospd.vers
v2.0.0

Versions on the remote side:

./ospd-openvas --version
OSP Server for openvas: 1.0.0
OSP: 1.2
OSPd: 2.0.0

Copyright (C) 2014, 2015, 2018, 2019 Greenbone Networks GmbH
License GPLv2+: GNU GPL version 2 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

All binaries / scripts are grabbed from GitHub and compiled on the latest Kali Linux version (rolling).

Appreciate any input / suggestions
