
Comments (28)

GeorgeVat commented on June 15, 2024

@GeorgeVat Have you made any changes in VM configuration? I am not able to access <public_ip_of_instance>:15672.

I just changed the docker-compose.yml file and force-recreated my containers.
[screenshot attached]

Pink-Bumblebee commented on June 15, 2024

Hello @siddo1008, please attach the service-api and service-uat logs so we can investigate your issue.
Do I understand correctly from the screenshot that you had been using RP for 6 months and only now ran into this issue?
BTW, you are running a somewhat mixed set of RP service versions (I compared the versions of your services against the Releases information in the documentation).

siddo1008 commented on June 15, 2024

@Pink-Bumblebee Thanks for responding to my case. Yes, RP worked fine for the last 6 months; it went into a loading state starting yesterday. Below are the logs you asked for.
Service API
[screenshots of service-api logs attached]

Service UAT
[screenshots of service-uat logs attached]

siddo1008 commented on June 15, 2024

@Pink-Bumblebee Any update on this one?

Pink-Bumblebee commented on June 15, 2024

@siddo1008 , it is quite difficult to analyze a log file by reading screenshots.
I can only see that something is wrong with RabbitMQ. It is better to attach the log files in text format.
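For reference, a rough way to capture those logs as text files; a sketch assuming the service names api and uat and the project name reportportal used elsewhere in this thread (run from the folder containing docker-compose.yml):

    # Dump full container logs (stdout + stderr) to attachable text files
    docker compose -p reportportal logs --no-color api > service-api.txt 2>&1
    docker compose -p reportportal logs --no-color uat > service-uat.txt 2>&1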

Standard procedure for long-running instances: https://reportportal.io/docs/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingVacuumFull/

siddo1008 commented on June 15, 2024

@Pink-Bumblebee Okay, I will attach the log file in text format for better visibility.

siddo1008 commented on June 15, 2024

service-api.txt
service-uat.txt
@Pink-Bumblebee ^ Please help me

Pink-Bumblebee commented on June 15, 2024

@siddo1008 , the situation is the same: something is wrong with your RabbitMQ. You could try connecting to it in a browser at <your_rp_server_address>:15672 with login and password 'rabbitmq'. Then you can check the status of your RabbitMQ and its queues.
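If the management UI is not reachable (its 15672 port mapping is commented out in the compose file shown later in this thread), a rough alternative is to query RabbitMQ from inside its container; a sketch assuming the service name rabbitmq and project name reportportal, run from the compose folder:

    # Overall broker status (node health, memory, listeners)
    docker compose -p reportportal exec rabbitmq rabbitmqctl status

    # Queue depths on the analyzer vhost used by ReportPortal
    docker compose -p reportportal exec rabbitmq rabbitmqctl list_queues -p analyzer name messages messages_unacknowledged

    # Queue depths on the default vhost
    docker compose -p reportportal exec rabbitmq rabbitmqctl list_queues name messages messages_unacknowledged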

siddo1008 commented on June 15, 2024

@Pink-Bumblebee I am not able to access port 15672 even though I have added it to the Network settings.

[screenshot attached]

GeorgeVat commented on June 15, 2024

I probably have the same issue as @siddo1008: my service-api container becomes unhealthy after trying to connect to RabbitMQ.
I have now accessed localhost:15672 (RabbitMQ); what indicators should I look at to troubleshoot it?

siddo1008 commented on June 15, 2024

@GeorgeVat Have you made any changes in VM configuration? I am not able to access <public_ip_of_instance>:15672.

Pink-Bumblebee commented on June 15, 2024

@siddo1008 , @GeorgeVat , may I ask: what are the system resources of the machine you are installing on?
Is it a Mac or a virtual Linux machine? How much memory does it have, or how much memory is assigned to the VM?
I could also propose adding 2 extra containers to your installation: dozzle and glances. These 2 containers add simple monitoring instruments and can be quite useful.

  glances:
    container_name: new_glances
    image: nicolargo/glances:latest-full
    restart: always
    pid: host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /etc/os-release:/etc/os-release:ro
    environment:
      - TZ=Europe/Minsk
      - GLANCES_OPT=--webserver
    ports:
      - "61208:61208"
    networks:
      - reportportal
    labels:
      - "traefik.port=61208"
      - "traefik.frontend.rule=Host:glances.docker.localhost"

  dozzle:
    container_name: dozzle
    image: amir20/dozzle:latest
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - "8888:8080"
    environment:
      DOZZLE_ENABLE_ACTIONS: "true"
    networks:
      - reportportal
    healthcheck:
      test: ["CMD", "/dozzle", "healthcheck"]
      interval: 3s
      timeout: 30s
      retries: 5
      start_period: 30s

The web UIs are on ports 8888 (Dozzle) and 61208 (Glances).
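A quick usage sketch, assuming the two services above are added to the same docker-compose.yml and reportportal project:

    # Start only the two monitoring containers
    docker compose -p reportportal up -d dozzle glances

    # Then open in a browser (replace localhost with the VM address if remote):
    #   http://localhost:8888   -> Dozzle: live container logs
    #   http://localhost:61208  -> Glances: CPU / memory / disk per container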

siddo1008 commented on June 15, 2024

@Pink-Bumblebee OS - Linux (Ubuntu 20.04)
Size - 2 vCPUs, 7 GiB memory

Pink-Bumblebee commented on June 15, 2024

@siddo1008 please try to increase both, if possible. Memory should be at least 8 GB; 16 GB is better. BTW, as I remember, Docker has an internal limit for memory usage of roughly 50% of system memory.
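To see whether memory or disk is actually the bottleneck, a few quick checks on the Docker host (a sketch; nothing ReportPortal-specific beyond the ./data volumes used in this compose setup):

    # Host memory and swap
    free -h

    # Per-container CPU / memory usage snapshot
    docker stats --no-stream

    # Filesystem usage (the compose volumes live under ./data)
    df -h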

siddo1008 commented on June 15, 2024

@Pink-Bumblebee But I don't understand how resources are causing this issue. ReportPortal worked smoothly for 6 months, and all of a sudden I'm facing this issue.

GeorgeVat commented on June 15, 2024

@Pink-Bumblebee
Client: Docker Engine - Community
Version: 24.0.7
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.11.2
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.21.0
Path: /usr/libexec/docker/cli-plugins/docker-compose
scan: Docker Scan (Docker Inc.)
Version: v0.12.0
Path: /usr/libexec/docker/cli-plugins/docker-scan

Server:
Containers: 20
Running: 19
Paused: 0
Stopped: 1
Images: 16
Server Version: 24.0.7
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version:
runc version: v1.1.10-0-g18a0cb0
init version: de40ad0
Security Options:
seccomp
Profile: builtin
Kernel Version: 4.18.0-348.2.1.el8_5.x86_64
Operating System: Rocky Linux 8.5 (Green Obsidian)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 125.8GiB
Name:
ID:
Docker Root Dir: /var/lib/docker
Debug Mode: false
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

These are my system's Docker resources.

Pink-Bumblebee commented on June 15, 2024

@siddo1008 , sorry, I cannot guess what happened on your instance or when. I'm just trying to collect as much information as possible and propose standard actions.
You need to check what has happened on your instance: not enough disk space, an overloaded database, or something else.

Pink-Bumblebee commented on June 15, 2024

@GeorgeVat , I've tried to install ReportPortal using the docker-compose file that you sent to support. On my virtual machine (Linux, 4 CPU, 24 GB RAM) everything started and looks like it is operating. I say "looks like" because I have no experience with your additional Selenium containers.

GeorgeVat commented on June 15, 2024

@Pink-Bumblebee I think the problem occurs when I have auto-analysis on together with my email server integration. When no failures occur in my test cases everything seems to work fine, but when a failure occurs it goes to analyze it and, I think, tries to send an email at the same time. For now I have just unticked the boxes in the UI (Auto-Analysis, Auto-Unique Error) and so far it works fine, even with failing test cases.

siddo1008 commented on June 15, 2024

@Pink-Bumblebee I recreated the containers with sudo docker compose -p reportportal up -d --force-recreate. The application is up and running, I can see all the containers in a healthy state, and I am able to log in. But when I trigger an automation pipeline the application gets stuck in a loading state again. I checked all the containers and the service-api container goes into an unhealthy state. So I stopped the automation pipeline and restarted the VM; the application came back up, but there are no logs in the application for the automation suite I ran before the crash. This happens every time I run an automation pipeline. What can be done in this case?
service-api-after-automation-pipeline-trigger.txt
service-api-before-pipeline-trigger.txt
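To catch the moment the api container flips to unhealthy during a pipeline run, a sketch (the exact container name is generated by compose; check it with docker ps first):

    # Watch container status/health while the automation pipeline runs
    watch -n 5 'docker ps --format "table {{.Names}}\t{{.Status}}"'

    # Or query one container's health state directly (name taken from docker ps)
    docker inspect --format '{{.State.Health.Status}}' <api-container-name>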

Pink-Bumblebee commented on June 15, 2024

@siddo1008 , in both your files you have a lot of warnings:

WARN 1 --- [io-8585-exec-54] o.s.b.a.jdbc.DataSourceHealthIndicator   : DataSource health check failed
 WARN 1 --- [io-8585-exec-54] o.s.jdbc.support.SQLErrorCodesFactory    : Error while extracting database name

Please check your postgres.
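A quick way to check Postgres from the Docker host; a sketch assuming the compose service name postgres and the credentials from the compose file in this thread (rpuser / reportportal), run from the compose folder:

    # Is Postgres accepting connections?
    docker compose -p reportportal exec postgres pg_isready -U rpuser -d reportportal

    # How many client connections are open, and in what state?
    docker compose -p reportportal exec postgres psql -U rpuser -d reportportal -c "SELECT state, count(*) FROM pg_stat_activity GROUP BY state;"

    # Recent Postgres container logs (look for shutdowns or OOM kills)
    docker compose -p reportportal logs --tail=200 postgres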

siddo1008 commented on June 15, 2024

@Pink-Bumblebee Below are the logs for Postgres.

PostgreSQL Database directory appears to contain a database; Skipping initialization

2024-04-24 13:01:33.174 UTC [1] LOG:  starting PostgreSQL 12.16 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
2024-04-24 13:01:33.174 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2024-04-24 13:01:33.174 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2024-04-24 13:01:33.215 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-04-24 13:01:33.436 UTC [22] LOG:  database system was shut down at 2024-04-24 13:01:12 UTC
2024-04-24 13:01:33.563 UTC [1] LOG:  database system is ready to accept connections
2024-04-24 15:30:07.820 UTC [1] LOG:  received fast shutdown request
2024-04-24 15:30:07.832 UTC [1] LOG:  aborting any active transactions
2024-04-24 15:30:07.833 UTC [7072] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.834 UTC [7061] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.835 UTC [7071] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.835 UTC [7063] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.836 UTC [7060] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.840 UTC [7059] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.842 UTC [7062] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.843 UTC [7058] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.844 UTC [7048] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.844 UTC [7049] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.851 UTC [5792] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.851 UTC [5791] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.851 UTC [7040] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.851 UTC [5818] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.852 UTC [5783] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.856 UTC [5773] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.856 UTC [5764] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.856 UTC [5763] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.858 UTC [5760] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.860 UTC [5802] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.861 UTC [5801] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.871 UTC [5800] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.871 UTC [5790] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.878 UTC [5772] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.878 UTC [5762] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.879 UTC [5759] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.884 UTC [5758] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.884 UTC [5757] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.892 UTC [5733] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.901 UTC [5730] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.927 UTC [5722] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.946 UTC [5700] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.946 UTC [5702] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.946 UTC [5711] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.948 UTC [5721] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.948 UTC [5731] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.949 UTC [5732] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.949 UTC [5734] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.949 UTC [5745] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.949 UTC [5747] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.949 UTC [5755] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.950 UTC [5782] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.950 UTC [5735] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.951 UTC [5712] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.952 UTC [5761] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.952 UTC [5718] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.952 UTC [5746] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.952 UTC [5720] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.946 UTC [5703] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.884 UTC [5756] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.021 UTC [5668] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.027 UTC [5680] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.049 UTC [5677] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:07.946 UTC [5679] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.090 UTC [5719] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.092 UTC [5717] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.092 UTC [5716] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.095 UTC [5715] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.105 UTC [5714] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.111 UTC [5713] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.117 UTC [5701] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.123 UTC [5699] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.124 UTC [5698] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.124 UTC [5688] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.124 UTC [5697] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.321 UTC [1] LOG:  background worker "logical replication launcher" (PID 28) exited with exit code 1
2024-04-24 15:30:08.332 UTC [7080] FATAL:  terminating connection due to administrator command
2024-04-24 15:30:08.389 UTC [23] LOG:  shutting down
2024-04-24 15:30:08.945 UTC [7081] FATAL:  the database system is shutting down
2024-04-24 15:30:09.229 UTC [7082] FATAL:  the database system is shutting down
2024-04-24 15:30:09.667 UTC [1] LOG:  database system is shut down

PostgreSQL Database directory appears to contain a database; Skipping initialization

2024-04-25 05:06:36.883 UTC [1] LOG:  starting PostgreSQL 12.16 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
2024-04-25 05:06:36.883 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2024-04-25 05:06:36.883 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2024-04-25 05:06:36.959 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-04-25 05:06:37.328 UTC [22] LOG:  database system was shut down at 2024-04-24 15:30:09 UTC
2024-04-25 05:06:37.659 UTC [1] LOG:  database system is ready to accept connections
2024-04-25 07:34:05.203 UTC [1] LOG:  received fast shutdown request
2024-04-25 07:34:05.214 UTC [1] LOG:  aborting any active transactions
2024-04-25 07:34:05.214 UTC [5764] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.214 UTC [5773] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.216 UTC [5763] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.217 UTC [5755] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.219 UTC [5754] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.219 UTC [5746] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.220 UTC [5743] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.226 UTC [5742] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.227 UTC [5741] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.228 UTC [5740] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.236 UTC [5739] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.236 UTC [5729] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.237 UTC [5716] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.241 UTC [5634] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.243 UTC [5709] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.246 UTC [5708] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.251 UTC [5747] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.256 UTC [5696] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.256 UTC [5695] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.257 UTC [5690] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.260 UTC [5687] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.279 UTC [5673] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.282 UTC [5671] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.289 UTC [5648] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.289 UTC [5649] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.289 UTC [5659] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.289 UTC [5660] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.289 UTC [5661] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.289 UTC [5670] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.289 UTC [5672] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.290 UTC [5675] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.290 UTC [5686] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.291 UTC [5691] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.292 UTC [5693] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.292 UTC [5694] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.292 UTC [5697] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.292 UTC [5706] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.292 UTC [5707] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.292 UTC [5710] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.292 UTC [5711] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.293 UTC [5715] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.293 UTC [5726] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.293 UTC [5727] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.293 UTC [5730] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.294 UTC [5728] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.294 UTC [5704] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.294 UTC [5692] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.294 UTC [5713] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.311 UTC [5637] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.317 UTC [5610] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.317 UTC [5650] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.319 UTC [5647] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.319 UTC [5678] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.320 UTC [5674] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.320 UTC [5635] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.321 UTC [5658] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.321 UTC [5738] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.322 UTC [5676] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.288 UTC [5636] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.324 UTC [5662] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.338 UTC [5717] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.340 UTC [5714] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.353 UTC [5689] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.378 UTC [5712] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.294 UTC [5718] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.422 UTC [5688] FATAL:  terminating connection due to administrator command
2024-04-25 07:34:05.498 UTC [1] LOG:  background worker "logical replication launcher" (PID 28) exited with exit code 1
2024-04-25 07:34:05.635 UTC [23] LOG:  shutting down
2024-04-25 07:34:06.134 UTC [1] LOG:  database system is shut down

PostgreSQL Database directory appears to contain a database; Skipping initialization

2024-04-25 07:34:51.111 UTC [1] LOG:  starting PostgreSQL 12.16 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
2024-04-25 07:34:51.111 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2024-04-25 07:34:51.111 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2024-04-25 07:34:51.148 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-04-25 07:34:51.373 UTC [22] LOG:  database system was shut down at 2024-04-25 07:34:05 UTC
2024-04-25 07:34:51.497 UTC [1] LOG:  database system is ready to accept connections
2024-04-25 09:43:51.010 UTC [1] LOG:  received fast shutdown request
2024-04-25 09:43:51.197 UTC [1] LOG:  aborting any active transactions
2024-04-25 09:43:51.198 UTC [5793] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.198 UTC [5792] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.200 UTC [5784] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.204 UTC [5762] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.206 UTC [5783] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.207 UTC [5760] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.208 UTC [5759] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.210 UTC [5781] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.212 UTC [5772] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.223 UTC [5671] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.224 UTC [5669] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.224 UTC [5749] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.225 UTC [5748] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.227 UTC [5745] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.233 UTC [5668] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.235 UTC [5658] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.240 UTC [5657] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.240 UTC [5638] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.252 UTC [5629] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.252 UTC [5672] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.263 UTC [5737] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.279 UTC [5734] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.281 UTC [5732] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.307 UTC [5721] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.314 UTC [5720] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.323 UTC [5719] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.328 UTC [5718] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.332 UTC [5708] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.349 UTC [5705] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.387 UTC [5703] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.421 UTC [5691] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.428 UTC [5688] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.442 UTC [5687] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.485 UTC [5684] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.490 UTC [5683] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.491 UTC [5673] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.497 UTC [5667] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.523 UTC [5771] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.524 UTC [5764] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.525 UTC [5763] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.532 UTC [5717] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.599 UTC [5761] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.624 UTC [5757] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.626 UTC [5736] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.660 UTC [5735] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.662 UTC [5733] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.680 UTC [5730] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.681 UTC [5723] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.681 UTC [5709] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.681 UTC [5707] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.681 UTC [5704] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.681 UTC [5694] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.681 UTC [5693] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.681 UTC [5692] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.681 UTC [5690] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.682 UTC [5686] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.682 UTC [5675] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.682 UTC [5674] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.682 UTC [5656] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.682 UTC [5670] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.683 UTC [5773] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.683 UTC [5689] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.683 UTC [5758] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.683 UTC [5731] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.682 UTC [5685] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:51.681 UTC [5722] FATAL:  terminating connection due to administrator command
2024-04-25 09:43:52.153 UTC [1] LOG:  background worker "logical replication launcher" (PID 28) exited with exit code 1
2024-04-25 09:43:52.190 UTC [23] LOG:  shutting down
2024-04-25 09:43:52.876 UTC [6194] FATAL:  the database system is shutting down
2024-04-25 09:43:53.330 UTC [6195] FATAL:  the database system is shutting down
2024-04-25 09:43:53.733 UTC [6196] FATAL:  the database system is shutting down
2024-04-25 09:43:53.852 UTC [1] LOG:  database system is shut down

PostgreSQL Database directory appears to contain a database; Skipping initialization

2024-04-25 09:44:41.537 UTC [1] LOG:  starting PostgreSQL 12.16 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
2024-04-25 09:44:41.538 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2024-04-25 09:44:41.538 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2024-04-25 09:44:41.564 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-04-25 09:44:41.912 UTC [21] LOG:  database system was shut down at 2024-04-25 09:43:53 UTC
2024-04-25 09:44:41.976 UTC [1] LOG:  database system is ready to accept connections
2024-04-25 12:56:48.271 UTC [1] LOG:  received fast shutdown request
2024-04-25 12:56:48.294 UTC [1] LOG:  aborting any active transactions
2024-04-25 12:56:48.295 UTC [8608] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.296 UTC [8596] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.296 UTC [8595] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.297 UTC [8607] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.298 UTC [8581] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.301 UTC [8572] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.302 UTC [8558] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.302 UTC [8598] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.304 UTC [8556] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.305 UTC [8553] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.307 UTC [8597] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.311 UTC [8545] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.323 UTC [8544] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.324 UTC [8542] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.324 UTC [8540] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.326 UTC [8532] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.327 UTC [8529] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.329 UTC [8593] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.330 UTC [8591] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.332 UTC [8528] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.336 UTC [8515] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.340 UTC [8583] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.340 UTC [8507] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.355 UTC [8525] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.360 UTC [8580] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.367 UTC [8579] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.370 UTC [8570] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.370 UTC [8569] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.370 UTC [8568] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.370 UTC [8557] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.388 UTC [8541] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.388 UTC [8530] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.389 UTC [8527] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.395 UTC [8526] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.411 UTC [8517] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.421 UTC [8516] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.450 UTC [8503] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.463 UTC [8490] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.475 UTC [8475] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.477 UTC [8457] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.483 UTC [8446] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.519 UTC [8447] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.525 UTC [8476] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.539 UTC [8506] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.545 UTC [8505] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.556 UTC [8555] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.580 UTC [8501] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.586 UTC [8492] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.591 UTC [8502] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.591 UTC [8554] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.600 UTC [8500] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.600 UTC [8491] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.600 UTC [8489] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.600 UTC [8488] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.602 UTC [8487] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.656 UTC [8448] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.670 UTC [8465] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.681 UTC [8467] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.682 UTC [8594] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.682 UTC [8592] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.682 UTC [8606] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.682 UTC [8571] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.683 UTC [8543] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.683 UTC [8531] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.681 UTC [8477] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:48.728 UTC [8466] FATAL:  terminating connection due to administrator command
2024-04-25 12:56:49.029 UTC [1] LOG:  background worker "logical replication launcher" (PID 27) exited with exit code 1
2024-04-25 12:56:49.047 UTC [22] LOG:  shutting down
2024-04-25 12:56:50.175 UTC [9177] FATAL:  the database system is shutting down
2024-04-25 12:56:50.492 UTC [9178] FATAL:  the database system is shutting down
2024-04-25 12:56:51.208 UTC [9179] FATAL:  the database system is shutting down
2024-04-25 12:56:51.433 UTC [1] LOG:  database system is shut down

PostgreSQL Database directory appears to contain a database; Skipping initialization

2024-04-25 12:57:38.961 UTC [1] LOG:  starting PostgreSQL 12.16 on x86_64-pc-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
2024-04-25 12:57:38.961 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2024-04-25 12:57:38.961 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2024-04-25 12:57:38.979 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-04-25 12:57:39.292 UTC [22] LOG:  database system was shut down at 2024-04-25 12:56:50 UTC
2024-04-25 12:57:39.339 UTC [1] LOG:  database system is ready to accept connections

siddo1008 commented on June 15, 2024

@Pink-Bumblebee ^

siddo1008 commented on June 15, 2024

@Pink-Bumblebee May I upgrade all the service images and check whether the application works smoothly?
Providing my docker-compose.yml:

version: '3.8'
services:

  gateway:
    image: traefik:v2.0.7
    logging: &logging
      driver: "json-file"
      options:
        max-size: 100m
        max-file: "5"
    ports:
      - "8080:8080" # HTTP exposed
      - "8081:8081" # HTTP Administration exposed
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command:
      - --providers.docker=true
      - --providers.docker.constraints=Label(`traefik.expose`, `true`)
      - --entrypoints.web.address=:8080
      - --entrypoints.traefik.address=:8081
      - --api.dashboard=true
      - --api.insecure=true
    networks:
      - reportportal
    restart: always

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
    logging:
      <<: *logging
    volumes:
      - ./data/elasticsearch:/usr/share/elasticsearch/data
    environment:
      - "ES_JAVA_OPTS=-Dlog4j2.formatMsgNoLookups=true"
      - "bootstrap.memory_lock=true"
      - "discovery.type=single-node"
      - "logger.level=INFO"
      - "xpack.security.enabled=true"
      - "ELASTIC_PASSWORD=elastic1q2w3e"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
      # ports:
      # - "9200:9200"
    healthcheck:
      test: ["CMD", "curl","-s" ,"-f", "http://elastic:elastic1q2w3e@localhost:9200/_cat/health"]
    networks:
      - reportportal
    restart: always

  minio:
    image: minio/minio:RELEASE.2020-10-27T04-03-55Z
    logging:
      <<: *logging
    # ports:
    #   - 9000:9000
    volumes:
      ## For unix host
      - ./data/storage:/data
      ## For windows host
      # - minio:/data
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    networks:
      - reportportal
    restart: always

  postgres:
    image: postgres:12-alpine
    logging:
      <<: *logging
    shm_size: '512m'
    environment:
      POSTGRES_USER: rpuser
      POSTGRES_PASSWORD: rppass
      POSTGRES_DB: reportportal
    volumes:
      ## For unix host
      - ./data/postgres:/var/lib/postgresql/data
      ## For windows host
      # - postgres:/var/lib/postgresql/data
    ## If you need to access the DB locally. Could be a security risk to expose DB.
    # ports:
    #   - "5432:5432"
    command:
      -c checkpoint_completion_target=0.9
      -c work_mem=96MB
      -c wal_writer_delay=20ms
      -c synchronous_commit=off
      -c wal_buffers=32MB
      -c min_wal_size=2GB
      -c max_wal_size=4GB
    ## Optional, for SSD Data Storage. If you are using the HDD, set up this command to '2'
    #  -c effective_io_concurrency=200
    ## Optional, for SSD Data Storage. If you are using the HDD, set up this command to '4'
    #  -c random_page_cost=1.1
    ## Optional can be scaled. Example for 4 CPU, 16GB RAM instance, where only the database is deployed
    #  -c max_worker_processes=4
    #  -c max_parallel_workers_per_gather=2
    #  -c max_parallel_workers=4
    #  -c shared_buffers=4GB
    #  -c effective_cache_size=12GB
    #  -c maintenance_work_mem=1GB
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $$POSTGRES_DB -U $$POSTGRES_USER"]
      interval: 10s
      timeout: 120s
      retries: 10
    networks:
      - reportportal
    restart: always

  ## An analyzer instance for main functionality, don't forget to use the same storage settings (either MinIO or filesystem) for analyzer_train
  analyzer:
    image: reportportal/service-auto-analyzer:5.7.6
    logging:
      <<: *logging
    environment:
      LOGGING_LEVEL: info
      AMQP_EXCHANGE_NAME: analyzer-default
      AMQP_VIRTUAL_HOST: analyzer
      AMQP_URL: amqp://rabbitmq:rabbitmq@rabbitmq:5672
      ES_HOSTS: http://elastic:elastic1q2w3e@elasticsearch:9200
      MINIO_SHORT_HOST: minio:9000
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    depends_on:
      elasticsearch:
        condition: service_started
      rabbitmq:
        condition: service_healthy
    networks:
      - reportportal
    restart: always

  ## An analyzer instance for training, don't forget to set up the same storage (either MinIO or filesystem) as for the main analyzer
  analyzer-train:
    image: reportportal/service-auto-analyzer:5.7.5
    logging:
      <<: *logging
    environment:
      LOGGING_LEVEL: info
      AMQP_EXCHANGE_NAME: analyzer-default
      AMQP_VIRTUAL_HOST: analyzer
      AMQP_URL: amqp://rabbitmq:rabbitmq@rabbitmq:5672
      ES_HOSTS: http://elastic:elastic1q2w3e@elasticsearch:9200
      MINIO_SHORT_HOST: minio:9000
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
      INSTANCE_TASK_TYPE: train
      UWSGI_WORKERS: 1
    depends_on:
      elasticsearch:
        condition: service_started
      rabbitmq:
        condition: service_healthy
    networks:
      - reportportal
    restart: always

  ## Metrics service for analyzing how the analyzer is working and managing retrained models
  metrics-gatherer:
    image: reportportal/service-metrics-gatherer:5.7.5
    logging:
      <<: *logging
    environment:
      LOGGING_LEVEL: info
      ES_HOST: http://elasticsearch:9200/
      ES_USER: elastic
      ES_PASSWORD: elastic1q2w3e
      POSTGRES_USER: "rpuser"
      POSTGRES_PASSWORD: "rppass"
      POSTGRES_DB: "reportportal"
      POSTGRES_HOST: "postgres"
      POSTGRES_PORT: 5432
      ALLOWED_START_TIME: "22:00"
      ALLOWED_END_TIME: "08:00"
      #TZ: Europe/Minsk you can change a timezone like this to specify when metrics are gathered
      AMQP_URL: amqp://rabbitmq:rabbitmq@rabbitmq:5672
      AMQP_VIRTUAL_HOST: analyzer
    depends_on:
      - elasticsearch
    networks:
      - reportportal
    restart: always

  ## Initial reportportal db schema. Run once.
  migrations:
    image: reportportal/migrations:5.9.0
    logging:
      <<: *logging
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      POSTGRES_SERVER: postgres
      POSTGRES_PORT: 5432
      POSTGRES_DB: reportportal
      POSTGRES_USER: rpuser
      POSTGRES_PASSWORD: rppass
    networks:
      - reportportal
    restart: on-failure

   ## ReportPortal service for different migrations.
   ## Read more: https://github.com/reportportal/migrations-complex/blob/master/README.md
#  migrations-complex:
#    image: reportportal/migrations-complex:1.0.0
#    ports:
#      - '5020:5020'
#    environment:
#      RP_DB_HOST: postgres
#      RP_DB_USER: rpuser
#      RP_DB_PASS: rppass
#      RP_DB_NAME: reportportal
#      RP_TOKEN_MIGRATION: "true"

  rabbitmq:
    image: bitnami/rabbitmq:3.12.2-debian-11-r8
    logging:
      <<: *logging
    # ports:
    #   - "5672:5672"
    #   - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: "rabbitmq"
      RABBITMQ_DEFAULT_PASS: "rabbitmq"
    healthcheck:
      test: ["CMD", "rabbitmqctl", "status"]
      interval: 30s
      timeout: 30s
      retries: 5
    networks:
      - reportportal
    restart: always

  uat:
    image: reportportal/service-authorization:5.8.2
    logging:
      <<: *logging
    # ports:
    #   - "9999:9999"
    environment:
      RP_DB_HOST: postgres
      RP_DB_USER: rpuser
      RP_DB_PASS: rppass
      RP_DB_NAME: reportportal
      RP_BINARYSTORE_TYPE: minio
      DATASTORE_ENDPOINT: http://minio:9000/
      DATASTORE_ACCESSKEY: minio
      DATASTORE_SECRETKEY: minio123
      RP_SESSION_LIVE: 86400 # in seconds
      RP_SAML_SESSION-LIVE: 4320
      ## The initial password for superadmin user on the FIRST launch. If the password was changed from the UI, this value can't change the password on redeployments.
      RP_INITIAL_ADMIN_PASSWORD: "MyDevPassword123!"
    healthcheck:
      test: curl -f http://0.0.0.0:9999/health
      interval: 60s
      timeout: 30s
      retries: 10
      start_period: 60s
    labels:
      - "traefik.http.middlewares.uat-strip-prefix.stripprefix.prefixes=/uat"
      - "traefik.http.routers.uat.middlewares=uat-strip-prefix@docker"
      - "traefik.http.routers.uat.rule=PathPrefix(`/uat`)"
      - "traefik.http.routers.uat.service=uat"
      - "traefik.http.services.uat.loadbalancer.server.port=9999"
      - "traefik.http.services.uat.loadbalancer.server.scheme=http"
      - "traefik.expose=true"
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - reportportal
    restart: always

  index:
    image: reportportal/service-index:5.8.0
    logging:
      <<: *logging
    depends_on:
      gateway:
        condition: service_started
    environment:
      LB_URL: http://gateway:8081/
      TRAEFIK_V2_MODE: 'true'
    labels:
      - "traefik.http.routers.index.rule=PathPrefix(`/`)"
      - "traefik.http.routers.index.service=index"
      - "traefik.http.services.index.loadbalancer.server.port=8080"
      - "traefik.http.services.index.loadbalancer.server.scheme=http"
      - "traefik.expose=true"
    networks:
      - reportportal
    restart: always

  api:
    image: reportportal/service-api:5.9.2
    logging:
      <<: *logging
    depends_on:
      rabbitmq:
        condition: service_healthy
      gateway:
        condition: service_started
      postgres:
        condition: service_healthy
    environment:
      ## Double entry moves test logs from PostgreSQL to Elastic-type engines
      ## Ref: https://reportportal.io/blog/double-entry-in-5.7.2
      RP_ELASTICSEARCHLOGMESSAGE_HOST: http://elasticsearch:9200/
      RP_DB_HOST: postgres
      RP_DB_USER: rpuser
      RP_DB_PASS: rppass
      RP_DB_NAME: reportportal
      RP_AMQP_USER: rabbitmq
      RP_AMQP_PASS: rabbitmq
      RP_AMQP_APIUSER: rabbitmq
      RP_AMQP_APIPASS: rabbitmq
      RP_AMQP_ANALYZER-VHOST: analyzer
      RP_BINARYSTORE_TYPE: minio
      DATASTORE_ENDPOINT: http://minio:9000/
      DATASTORE_ACCESSKEY: minio
      DATASTORE_SECRETKEY: minio123
      LOGGING_LEVEL_ORG_HIBERNATE_SQL: info
      RP_REQUESTLOGGING: "false"
      MANAGEMENT_HEALTH_ELASTICSEARCH_ENABLED: "false"
      RP_ENVIRONMENT_VARIABLE_ALLOW-DELETE-ACCOUNT: "false"
      JAVA_OPTS: -Xmx1g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp  -Dcom.sun.management.jmxremote.rmi.port=12349 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false  -Dcom.sun.management.jmxremote.port=9010 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=0.0.0.0
    healthcheck:
      test: curl -f http://0.0.0.0:8585/health
      interval: 60s
      timeout: 30s
      retries: 10
      start_period: 60s
    labels:
      - "traefik.http.middlewares.api-strip-prefix.stripprefix.prefixes=/api"
      - "traefik.http.routers.api.middlewares=api-strip-prefix@docker"
      - "traefik.http.routers.api.rule=PathPrefix(`/api`)"
      - "traefik.http.routers.api.service=api"
      - "traefik.http.services.api.loadbalancer.server.port=8585"
      - "traefik.http.services.api.loadbalancer.server.scheme=http"
      - "traefik.expose=true"
    networks:
      - reportportal
    restart: always

  jobs:
    image: reportportal/service-jobs:5.8.2
    logging:
      <<: *logging
    depends_on:
      rabbitmq:
        condition: service_healthy
      gateway:
        condition: service_started
      postgres:
        condition: service_healthy
    environment:
      ## Double entry moves test logs from PostgreSQL to Elastic-type engines
      ## Ref: https://reportportal.io/blog/double-entry-in-5.7.2
      RP_ELASTICSEARCH_HOST: http://elasticsearch:9200/
      RP_ELASTICSEARCH_USERNAME: elastic
      RP_ELASTICSEARCH_PASSWORD:  elastic1q2w3e
      RP_DB_HOST: postgres
      RP_DB_USER: rpuser
      RP_DB_PASS: rppass
      RP_DB_NAME: reportportal
      RP_AMQP_USER: rabbitmq
      RP_AMQP_PASS: rabbitmq
      RP_AMQP_APIUSER: rabbitmq
      RP_AMQP_APIPASS: rabbitmq
      RP_AMQP_ANALYZER-VHOST: analyzer
      DATASTORE_TYPE: minio
      DATASTORE_ENDPOINT: http://minio:9000/
      DATASTORE_ACCESSKEY: minio
      DATASTORE_SECRETKEY: minio123
      RP_ENVIRONMENT_VARIABLE_CLEAN_ATTACHMENT_CRON: 0 0 */24 * * *
      RP_ENVIRONMENT_VARIABLE_CLEAN_LOG_CRON: 0 0 */24 * * *
      RP_ENVIRONMENT_VARIABLE_CLEAN_LAUNCH_CRON: 0 0 */24 * * *
      RP_ENVIRONMENT_VARIABLE_CLEAN_STORAGE_CRON: 0 0 */24 * * *
      RP_ENVIRONMENT_VARIABLE_STORAGE_PROJECT_CRON: 0 */5 * * * *
      RP_ENVIRONMENT_VARIABLE_CLEAN_STORAGE_CHUNKSIZE: 1000
      RP_PROCESSING_LOG_MAXBATCHSIZE: 2000
      RP_PROCESSING_LOG_MAXBATCHTIMEOUT: 6000
      RP_AMQP_MAXLOGCONSUMER: 1
    healthcheck:
      test: curl -f http://0.0.0.0:8686/health || exit 1
      interval: 60s
      timeout: 30s
      retries: 10
      start_period: 60s
    labels:
      - traefik.http.middlewares.jobs-strip-prefix.stripprefix.prefixes=/jobs
      - traefik.http.routers.jobs.middlewares=jobs-strip-prefix@docker
      - traefik.http.routers.jobs.rule=PathPrefix(`/jobs`)
      - traefik.http.routers.jobs.service=jobs
      - traefik.http.services.jobs.loadbalancer.server.port=8686
      - traefik.http.services.jobs.loadbalancer.server.scheme=http
      - traefik.expose=true
    networks:
      - reportportal
    restart: always

  ui:
    image: reportportal/service-ui:5.9.0
    environment:
      RP_SERVER_PORT: "8080"
    labels:
      - "traefik.http.middlewares.ui-strip-prefix.stripprefix.prefixes=/ui"
      - "traefik.http.routers.ui.middlewares=ui-strip-prefix@docker"
      - "traefik.http.routers.ui.rule=PathPrefix(`/ui`)"
      - "traefik.http.routers.ui.service=ui"
      - "traefik.http.services.ui.loadbalancer.server.port=8080"
      - "traefik.http.services.ui.loadbalancer.server.scheme=http"
      - "traefik.expose=true"
    networks:
      - reportportal
    restart: always


networks:
  reportportal:

Pink-Bumblebee commented on June 15, 2024

Quoting the Postgres log above, in particular:

2024-04-24 15:30:07.833 UTC [7072] FATAL:  terminating connection due to administrator command

Check your Postgres; it is accessible via port 5432.
Please read this: https://community.dhis2.org/t/fatal-terminating-connection-due-to-administrator-command/51134, https://stackoverflow.com/questions/5787066/whats-the-cause-of-pgerror-fatal-terminating-connection-due-to-administrator.
Also maybe these 2 articles from documentation will be helpful: https://reportportal.io/docs/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingPGRepack, https://reportportal.io/docs/issues-troubleshooting/HowToCleanUpTheReportPortalDatabaseUsingVacuumFull

BTW, you haven't deleted the 'superadmin' user, have you?

siddo1008 commented on June 15, 2024

@Pink-Bumblebee I tried VACUUM FULL; it completed in 4 minutes, but the database size remained the same. There is also no change in the issue: the application is still stuck in the loading state. And no, I haven't deleted the 'superadmin' user.
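To see where the space actually sits (and whether there was anything for VACUUM FULL to reclaim), a sketch assuming the same postgres service and credentials as above:

    # Total database size
    docker compose -p reportportal exec postgres psql -U rpuser -d reportportal -c "SELECT pg_size_pretty(pg_database_size('reportportal'));"

    # Ten largest tables (including their indexes and TOAST data)
    docker compose -p reportportal exec postgres psql -U rpuser -d reportportal -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) FROM pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10;"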

siddo1008 commented on June 15, 2024

@Pink-Bumblebee BTW, I checked df -h and below is the usage of all filesystems. FYI, this instance is dedicated only to ReportPortal. Will this help you analyze the situation?
[screenshot of df -h output attached]

Keonik1 commented on June 15, 2024

Same issue. #2061

Short description

After installing according to the instructions from the website, I started working through the "Get started" instructions. A few minutes (10-20) after opening the section about sorting failed tests (here is the section), the api service stopped interacting with rabbitmq, although as far as I can see there are no errors in rabbitmq itself.
I have repeated this several times, and this behavior always starts a few minutes after opening that section.
If I don't open it, it can still crash after a few hours.

I don't know what this could be related to, just a few guesses:

  • something is preventing the system resources from being fully utilized
  • weak disk subsystem
  • I am behind a proxy and services do not have access to the Internet.
  • one of the processes hangs and blocks communication for the API (after about an hour it recovered on its own; I did not restart anything, only looked into what had happened, but then after some time it crashed again)

Infrastructure

  • OS: debian 11.5

  • specs: VM (on proxmox hypervisor), 4 cores, 16GiB RAM

  • docker version

    Client: Docker Engine - Community
     Version:           25.0.1
     API version:       1.44
     Go version:        go1.21.6
     Git commit:        29cf629
     Built:             Tue Jan 23 23:09:52 2024
     OS/Arch:           linux/amd64
     Context:           default
    
    Server: Docker Engine - Community
     Engine:
      Version:          25.0.1
      API version:      1.44 (minimum version 1.24)
      Go version:       go1.21.6
      Git commit:       71fa3ab
      Built:            Tue Jan 23 23:09:52 2024
      OS/Arch:          linux/amd64
      Experimental:     false
     containerd:
      Version:          1.6.27
      GitCommit:        a1496014c916f9e62104b33d1bb5bd03b0858e59
     runc:
      Version:          1.1.11
      GitCommit:        v1.1.11-0-g4bccb38
     docker-init:
      Version:          0.19.0
      GitCommit:        de40ad0
  • docker-compose file - latest from master (2024-06-04), not changed except the volumes location (at the very bottom of the file); note that the RabbitMQ ports are commented out in it (see the note after this list)

    docker-compose.yml
    version: "3.8"
    services:
    
      ## External dependencies
    
      gateway:
        image: traefik:v2.11.2
        container_name: traefik
        logging: &logging
          driver: "json-file"
          options:
            max-size: 100m
            max-file: "5"
        ports:
          - "8080:8080" # ReportPortal UI
          - "8081:8081" # Traefik dashboard
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        command:
          - --providers.docker=true
          - --providers.docker.constraints=Label(`traefik.expose`, `true`)
          - --entrypoints.web.address=:8080
          - --entrypoints.traefik.address=:8081
          - --api.dashboard=true
          - --api.insecure=true
        networks:
          - reportportal
        restart: always
    
      opensearch:
        image: opensearchproject/opensearch:2.11.0
        container_name: opensearch
        logging:
          <<: *logging
        environment:
          discovery.type: single-node
          plugins.security.disabled: "true"
          bootstrap.memory_lock: "true"
          OPENSEARCH_JAVA_OPTS: -Xms512m -Xmx512m
          DISABLE_INSTALL_DEMO_CONFIG: "true"
        ulimits:
          memlock:
            soft: -1
            hard: -1
        ## Expose OpenSearch
        # ports:
        #   - "9200:9200"
        #   - "9600:9600"
        volumes:
          - opensearch:/usr/share/opensearch/data
        healthcheck:
          test: ["CMD", "curl","-s" ,"-f", "http://0.0.0.0:9200/_cat/health"]
        networks:
          - reportportal
    
      postgres:
        image: postgres:12.17-alpine3.17
        container_name: postgres
        logging:
          <<: *logging
        shm_size: '512m'
        environment:
          POSTGRES_USER: &db_user rpuser
          POSTGRES_PASSWORD: &db_password rppass
          POSTGRES_DB: &db_name reportportal
        volumes:
          - postgres:/var/lib/postgresql/data
        ## Expose Database
        # ports:
        #   - "5432:5432"
        command:
          ## PostgreSQL performance tuning
          ## Ref: https://reportportal.io/docs/installation-steps/OptimalPerformanceHardwareSetup#5-postgresql-performance-tuning
          -c checkpoint_completion_target=0.9
          -c work_mem=96MB
          -c wal_writer_delay=20ms
          -c synchronous_commit=off
          -c wal_buffers=32MB
          -c min_wal_size=2GB
          -c max_wal_size=4GB
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -d $$POSTGRES_DB -U $$POSTGRES_USER"]
          interval: 10s
          timeout: 120s
          retries: 10
        networks:
          - reportportal
        restart: always
    
      rabbitmq:
        image: bitnami/rabbitmq:3.12.2-debian-11-r8
        container_name: rabbitmq
        logging:
          <<: *logging
        ## Expose RabbitMQ
        # ports:
        #   - "5672:5672"
        #   - "15672:15672"
        environment:
          RABBITMQ_DEFAULT_USER: &rabbitmq_user rabbitmq
          RABBITMQ_DEFAULT_PASS: &rabbitmq_password rabbitmq
        healthcheck:
          test: ["CMD", "rabbitmqctl", "status"]
          interval: 30s
          timeout: 30s
          retries: 5
        networks:
          - reportportal
        restart: always
    
      ## ReportPortal services
    
      index:
        image: reportportal/service-index:5.11.0
        container_name: reportportal-index
        logging:
          <<: *logging
        depends_on:
          gateway:
            condition: service_started
        environment:
          LB_URL: http://gateway:8081
          TRAEFIK_V2_MODE: 'true'
        healthcheck:
          test: wget -q --spider http://0.0.0.0:8080/health
          interval: 30s
          timeout: 30s
          retries: 10
          start_period: 10s
        labels:
          - "traefik.http.routers.index.rule=PathPrefix(`/`)"
          - "traefik.http.routers.index.service=index"
          - "traefik.http.services.index.loadbalancer.server.port=8080"
          - "traefik.http.services.index.loadbalancer.server.scheme=http"
          - "traefik.expose=true"
        networks:
          - reportportal
        restart: always
    
      ui:
        image: reportportal/service-ui:5.11.1
        container_name: reportportal-ui
        environment:
          RP_SERVER_PORT: "8080"
        healthcheck:
          test: wget -q --spider http://0.0.0.0:8080/health
          interval: 30s
          timeout: 30s
          retries: 10
          start_period: 10s
        labels:
          - "traefik.http.middlewares.ui-strip-prefix.stripprefix.prefixes=/ui"
          - "traefik.http.routers.ui.middlewares=ui-strip-prefix@docker"
          - "traefik.http.routers.ui.rule=PathPrefix(`/ui`)"
          - "traefik.http.routers.ui.service=ui"
          - "traefik.http.services.ui.loadbalancer.server.port=8080"
          - "traefik.http.services.ui.loadbalancer.server.scheme=http"
          - "traefik.expose=true"
        networks:
          - reportportal
        restart: always
    
      api:
        image: reportportal/service-api:5.11.1
        container_name: reportportal-api
        logging:
          <<: *logging
        depends_on:
          rabbitmq:
            condition: service_healthy
          gateway:
            condition: service_started
          postgres:
            condition: service_healthy
        environment:
          ## Double entry moves test logs from PostgreSQL to Elastic-type engines
          ## Ref: https://reportportal.io/blog/double-entry-in-5.7.2
          ## RP_ELASTICSEARCH_HOST: http://opensearch:9200
          RP_DB_HOST: postgres
          RP_DB_USER: *db_user
          RP_DB_PASS: *db_password
          RP_DB_NAME: *db_name
          RP_AMQP_HOST: &rabbitmq_host rabbitmq
          RP_AMQP_PORT: &rabbitmq_port 5672
          RP_AMQP_USER: *rabbitmq_user
          RP_AMQP_PASS: *rabbitmq_password
          RP_AMQP_APIUSER: *rabbitmq_user
          RP_AMQP_APIPASS: *rabbitmq_password
          RP_AMQP_ANALYZER-VHOST: analyzer
          DATASTORE_TYPE: filesystem
          LOGGING_LEVEL_ORG_HIBERNATE_SQL: info
          RP_REQUESTLOGGING: "false"
          AUDIT_LOGGER: "OFF"
          MANAGEMENT_HEALTH_ELASTICSEARCH_ENABLED: "false"
          RP_ENVIRONMENT_VARIABLE_ALLOW_DELETE_ACCOUNT: "false"
          JAVA_OPTS: >
            -Xmx1g 
            -XX:+HeapDumpOnOutOfMemoryError 
            -XX:HeapDumpPath=/tmp  
            -Dcom.sun.management.jmxremote.rmi.port=12349 
            -Dcom.sun.management.jmxremote 
            -Dcom.sun.management.jmxremote.local.only=false  
            -Dcom.sun.management.jmxremote.port=9010 
            -Dcom.sun.management.jmxremote.authenticate=false 
            -Dcom.sun.management.jmxremote.ssl=false 
            -Djava.rmi.server.hostname=0.0.0.0
          RP_JOBS_BASEURL: http://jobs:8686
          COM_TA_REPORTPORTAL_JOB_INTERRUPT_BROKEN_LAUNCHES_CRON: PT1H
          RP_ENVIRONMENT_VARIABLE_PATTERN-ANALYSIS_BATCH-SIZE: 100
          RP_ENVIRONMENT_VARIABLE_PATTERN-ANALYSIS_PREFETCH-COUNT: 1
          RP_ENVIRONMENT_VARIABLE_PATTERN-ANALYSIS_CONSUMERS-COUNT: 1
        volumes:
          - storage:/data/storage
        healthcheck:
          test: curl -f http://0.0.0.0:8585/health
          interval: 60s
          timeout: 30s
          retries: 10
          start_period: 60s
        labels:
          - "traefik.http.middlewares.api-strip-prefix.stripprefix.prefixes=/api"
          - "traefik.http.routers.api.middlewares=api-strip-prefix@docker"
          - "traefik.http.routers.api.rule=PathPrefix(`/api`)"
          - "traefik.http.routers.api.service=api"
          - "traefik.http.services.api.loadbalancer.server.port=8585"
          - "traefik.http.services.api.loadbalancer.server.scheme=http"
          - "traefik.expose=true"
        networks:
          - reportportal
        restart: always
    
      uat:
        image: reportportal/service-authorization:5.11.1
        container_name: reportportal-uat
        logging:
          <<: *logging
        environment:
          RP_DB_HOST: postgres
          RP_DB_USER: *db_user
          RP_DB_PASS: *db_password
          RP_DB_NAME: *db_name
          RP_AMQP_HOST: *rabbitmq_host
          RP_AMQP_PORT: *rabbitmq_port
          RP_AMQP_USER: *rabbitmq_user
          RP_AMQP_PASS: *rabbitmq_password
          RP_AMQP_APIUSER: *rabbitmq_user
          RP_AMQP_APIPASS: *rabbitmq_password
          DATASTORE_TYPE: filesystem
          RP_SESSION_LIVE: 86400 # in seconds
          RP_SAML_SESSION-LIVE: 4320
          ## RP_INITIAL_ADMIN_PASSWORD - the initial password of the superadmin user for the first launch. This value can't change the password on redeployments.
          RP_INITIAL_ADMIN_PASSWORD: "erebus"
          JAVA_OPTS: -Djava.security.egd=file:/dev/./urandom -XX:MinRAMPercentage=60.0 -XX:MaxRAMPercentage=90.0
        volumes:
          - storage:/data/storage
        healthcheck:
          test: curl -f http://0.0.0.0:9999/health
          interval: 60s
          timeout: 30s
          retries: 10
          start_period: 60s
        labels:
          - "traefik.http.middlewares.uat-strip-prefix.stripprefix.prefixes=/uat"
          - "traefik.http.routers.uat.middlewares=uat-strip-prefix@docker"
          - "traefik.http.routers.uat.rule=PathPrefix(`/uat`)"
          - "traefik.http.routers.uat.service=uat"
          - "traefik.http.services.uat.loadbalancer.server.port=9999"
          - "traefik.http.services.uat.loadbalancer.server.scheme=http"
          - "traefik.expose=true"
        depends_on:
          postgres:
            condition: service_healthy
        networks:
          - reportportal
        restart: always
    
      jobs:
        image: reportportal/service-jobs:5.11.1
        container_name: reportportal-jobs
        logging:
          <<: *logging
        depends_on:
          rabbitmq:
            condition: service_healthy
          gateway:
            condition: service_started
          postgres:
            condition: service_healthy
        environment:
          ## Double entry moves test logs from PostgreSQL to Elastic-type engines
          ## Ref: https://reportportal.io/blog/double-entry-in-5.7.2
          ## RP_ELASTICSEARCH_HOST: http://opensearch:9200
          ## RP_ELASTICSEARCH_USERNAME: ""
          ## RP_ELASTICSEARCH_PASSWORD: ""
          RP_DB_HOST: postgres
          RP_DB_USER: *db_user
          RP_DB_PASS: *db_password
          RP_DB_NAME: *db_name
          RP_AMQP_HOST: *rabbitmq_host
          RP_AMQP_PORT: *rabbitmq_port
          RP_AMQP_USER: *rabbitmq_user
          RP_AMQP_PASS: *rabbitmq_password
          RP_AMQP_APIUSER: *rabbitmq_user
          RP_AMQP_APIPASS: *rabbitmq_password
          RP_AMQP_ANALYZER-VHOST: analyzer
          DATASTORE_TYPE: filesystem
          RP_ENVIRONMENT_VARIABLE_CLEAN_ATTACHMENT_CRON: 0 0 */24 * * *
          RP_ENVIRONMENT_VARIABLE_CLEAN_LOG_CRON: 0 0 */24 * * *
          RP_ENVIRONMENT_VARIABLE_CLEAN_LAUNCH_CRON: 0 0 */24 * * *
          RP_ENVIRONMENT_VARIABLE_CLEAN_STORAGE_CRON: 0 0 */24 * * *
          RP_ENVIRONMENT_VARIABLE_STORAGE_PROJECT_CRON: 0 */5 * * * *
          RP_ENVIRONMENT_VARIABLE_CLEAN_EXPIREDUSER_CRON:  0 0 */24 * * *
          RP_ENVIRONMENT_VARIABLE_CLEAN_EXPIREDUSER_RETENTIONPERIOD: 365
          RP_ENVIRONMENT_VARIABLE_NOTIFICATION_EXPIREDUSER_CRON: 0 0 */24 * * * 
          RP_ENVIRONMENT_VARIABLE_CLEAN_EVENTS_RETENTIONPERIOD: 365
          RP_ENVIRONMENT_VARIABLE_CLEAN_EVENTS_CRON: 0 30 05 * * *
          RP_ENVIRONMENT_VARIABLE_CLEAN_STORAGE_CHUNKSIZE: 20000
          RP_PROCESSING_LOG_MAXBATCHSIZE: 2000
          RP_PROCESSING_LOG_MAXBATCHTIMEOUT: 6000
          RP_AMQP_MAXLOGCONSUMER: 1
          JAVA_OPTS: >
            -Djava.security.egd=file:/dev/./urandom
            -XX:+UseG1GC
            -XX:+UseStringDeduplication
            -XX:G1ReservePercent=20
            -XX:InitiatingHeapOccupancyPercent=60
            -XX:MaxRAMPercentage=70.0
            -XX:+HeapDumpOnOutOfMemoryError
            -XX:HeapDumpPath=/tmp
        volumes:
          - storage:/data/storage
        healthcheck:
          test: curl -f http://0.0.0.0:8686/health || exit 1
          interval: 60s
          timeout: 30s
          retries: 10
          start_period: 60s
        labels:
          - traefik.http.middlewares.jobs-strip-prefix.stripprefix.prefixes=/jobs
          - traefik.http.routers.jobs.middlewares=jobs-strip-prefix@docker
          - traefik.http.routers.jobs.rule=PathPrefix(`/jobs`)
          - traefik.http.routers.jobs.service=jobs
          - traefik.http.services.jobs.loadbalancer.server.port=8686
          - traefik.http.services.jobs.loadbalancer.server.scheme=http
          - traefik.expose=true
        networks:
          - reportportal
        restart: always
    
      analyzer:
        image: &analyzer_img reportportal/service-auto-analyzer:5.11.0-r1
        container_name: reportportal-analyzer
        logging:
          <<: *logging
        environment:
          LOGGING_LEVEL: info
          AMQP_EXCHANGE_NAME: analyzer-default
          AMQP_VIRTUAL_HOST: analyzer
          AMQP_URL: amqp://rabbitmq:rabbitmq@rabbitmq:5672
          # ES_USER: 
          # ES_PASSWORD: 
          ES_HOSTS: http://opensearch:9200
          ANALYZER_BINARYSTORE_TYPE: filesystem
        volumes:
          - storage:/data/storage
        depends_on:
          opensearch:
            condition: service_started
          rabbitmq:
            condition: service_healthy
        networks:
          - reportportal
        restart: always
    
      analyzer-train:
        image: *analyzer_img
        container_name: reportportal-analyzer-train
        logging:
          <<: *logging
        environment:
          LOGGING_LEVEL: info
          AMQP_EXCHANGE_NAME: analyzer-default
          AMQP_VIRTUAL_HOST: analyzer
          AMQP_URL: amqp://rabbitmq:rabbitmq@rabbitmq:5672
          # ES_USER: 
          # ES_PASSWORD: 
          ES_HOSTS: http://opensearch:9200
          INSTANCE_TASK_TYPE: train
          UWSGI_WORKERS: 1
          ANALYZER_BINARYSTORE_TYPE: filesystem
        volumes:
          - storage:/data/storage
        depends_on:
          opensearch:
            condition: service_started
          rabbitmq:
            condition: service_healthy
        networks:
          - reportportal
        restart: always
    
      metrics-gatherer:
        image: reportportal/service-metrics-gatherer:5.11.0-r1
        container_name: reportportal-metrics-gatherer
        logging:
          <<: *logging
        environment:
          LOGGING_LEVEL: info
          # ES_USER: 
          # ES_PASSWORD: 
          ES_HOST: http://opensearch:9200
          POSTGRES_USER: *db_user
          POSTGRES_PASSWORD: *db_password
          POSTGRES_DB: *db_name
          POSTGRES_HOST: postgres
          POSTGRES_PORT: 5432
          ALLOWED_START_TIME: "22:00"
          ALLOWED_END_TIME: "08:00"
          # TZ: Europe/Minsk you can change a timezone like this to specify when metrics are gathered
          AMQP_URL: amqp://rabbitmq:rabbitmq@rabbitmq:5672
          AMQP_VIRTUAL_HOST: analyzer
        depends_on:
          opensearch:
            condition: service_started
        networks:
          - reportportal
        restart: always
    
      migrations:
        image: reportportal/migrations:5.11.1
        container_name: reportportal-migrations
        logging:
          <<: *logging
        depends_on:
          postgres:
            condition: service_healthy
        environment:
          POSTGRES_SERVER: postgres
          POSTGRES_PORT: 5432
          POSTGRES_DB: *db_name
          POSTGRES_USER: *db_user
          POSTGRES_PASSWORD: *db_password
          OS_HOST: opensearch
          OS_PORT: 9200
          OS_PROTOCOL: http
          # OS_USER: 
          # OS_PASSWORD: 
        networks:
          - reportportal
        restart: on-failure
    
    
    volumes:
      opensearch:
        driver: local
        driver_opts:
          type: none
          device: "/data/reportportal/opensearch"
          o: bind
      storage:
        driver: local
        driver_opts:
          type: none
          device: "/data/reportportal/storage"
          o: bind
      postgres:
        driver: local
        driver_opts:
          type: none
          device: "/data/reportportal/postgres"
          o: bind
    
    networks:
      reportportal:
    
  • free -h

                   total        used        free      shared  buff/cache   available
    Mem:            15Gi       6,4Gi       6,9Gi       116Mi       2,4Gi       8,7Gi
    Swap:          979Mi          0B       979Mi
    
  • sudo df -h

    Filesystem                      Size  Used Avail Use% Mounted on
    udev                            7.8G     0  7.8G   0% /dev
    tmpfs                           1.6G  2.4M  1.6G   1% /run
    /dev/mapper/myvm--vg-root   28G   19G  7.3G  73% /
    tmpfs                           7.9G     0  7.9G   0% /dev/shm
    tmpfs                           5.0M     0  5.0M   0% /run/lock
    /dev/sdb1                        30G  312M   28G   2% /data
    /dev/sda2                       471M   80M  367M  18% /boot
    /dev/sda1                       511M  3.5M  508M   1% /boot/efi
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/9c9c8ba89db66586e7a2991c62ee3ace676fa8c2e3361332b53382c99d39db6a/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/9c5705d2e73b04b1725d03d85b2754b4f6b116859f742ff7224a67218d575858/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/bbef1a0d1a67c9927f3dfbf5a8cf452d7c736054689874f600d1459957dc50dd/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/5d11ee25a9ee8eddc5f2327c55fccf3e4570e70cf36b8e0103cbf22fd3e00f9a/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/b96431246d238971317ef9d930dc38e6c7591e6d635fb183815a5e85f627d8bb/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/8dde76e7250c78b60836f8722d9b9dd548c2c41054f65bf44257d80072f2b913/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/ad2a697c13dda8bf99093d52cc4a00b58f2441f67b6fe35a9e4658e63394b79d/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/da3eac37ed84ca65ca23048f7ad1b8d448033b6229df12a1e6fb43ffa6bb392d/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/1c9923bea5a1343c01de233aecd6508c08108c652209749539bec442c85bbf26/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/03f1734e7b67fd622b10dcdf9e2910f6dd0a5a0aefd1afde008afcb1ea55ee6d/merged
    tmpfs                           1.6G     0  1.6G   0% /run/user/1000
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/2618021359f4a14fe49cd50b7fa61e73eb5cd5611e4407185f284d95d6836d36/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/a0fb1b0815da6567c6d75700c0fa5b370107093b2b302ce3923a06fd4bafb908/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/734b4be83dfce6279f4f385a6a912e46a95effd6a5e2171ae8dcbbd7fddc10c9/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/3a6cb0e540ecc2a3152636ff0ebc0c0bf8986f8af2ac6ca210c48ab4b199b75c/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/94b34a5e582cf88ff2b84169db9afc41d9105d3fabfef59f0aff9eb77cf79bbd/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/8e5eb61230bc46137447877de0501cd2728f557adcf7039e2e0167fc6244245e/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/05eeb0056f6929cce0cee698c0c848574e5c959b506bc45ffce676680a5df9fc/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/04fb99187ba1355026ba47dce51340226e4ee359e25169606e943ceebe3ec2e5/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/db714f7b6a3f3b3c6b17a464e65222c1f2a9ef01dd64d8776e5f9de81dbf90c0/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/f425bc315aa030a7d97fade8f80c5c6c1a57ee1aafe88d5de8c34607cb013997/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/f6b0e84ca2ab87981d4651fab4afcea142600dfc290b3e3eb161501a87bea0b7/merged
    overlay                          28G   19G  7.3G  73% /var/lib/docker/overlay2/9d290e2e592bd31383bafadce4e68e506d70034d6aa904b32f0839a28bed85dd/merged
    
  • log file from docker compose logs: reportportal_docker.log
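
One side note on the compose file above: the RabbitMQ ports are commented out, so the management UI on :15672 is not reachable from the host. If you want to look at the queues in a browser, a minimal change (a sketch against this compose file, not an official recommendation) is to uncomment the port mappings under the rabbitmq service and recreate only that container:

```bash
# In docker-compose.yml, rabbitmq service, uncomment:
#   ports:
#     - "5672:5672"
#     - "15672:15672"

# Recreate just the rabbitmq container with the new port mappings
docker compose up -d --force-recreate rabbitmq

# Quick check of the management API with the default rabbitmq/rabbitmq credentials
curl -u rabbitmq:rabbitmq http://localhost:15672/api/overview
```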

from reportportal.
