thehive-project / docker-templates
Docker configurations for TheHive, Cortex and 3rd party tools
Home Page: https://thehive-project.org
License: GNU Affero General Public License v3.0
Hi there,
I noticed a critical issue with the Cortex container: it doesn't check whether ES is up and running.
It gets into a state where it fails to create the cortex_5 index on first start and then fails the database migration:
[info] play.api.Play - Application started (Prod) (no global state)
[error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_5/_search?scroll=60000ms
StringEntity({"seq_no_primary_term":"true","query":{"bool":{"must":[{"term":{"relations":{"value":"worker"}}},{"match_all":{}}]}},"from":0,"sort":[{"_doc":{"order":"desc"}}]},Some(application/json))
=> ElasticError(index_not_found_exception,no such index [cortex_5],Some(_na_),Some(cortex_5),None,List(ElasticError(index_not_found_exception,no such index [cortex_5],Some(_na_),Some(cortex_5),None,null,None,None,None,List())),None,None,None,List())
[error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_5/_search?scroll=60000ms
StringEntity({"seq_no_primary_term":"true","query":{"bool":{"must":[{"term":{"relations":{"value":"job"}}},{"term":{"status":{"value":"Waiting"}}}]}},"from":0,"sort":[{"_doc":{"order":"desc"}}]},Some(application/json))
=> ElasticError(index_not_found_exception,no such index [cortex_5],Some(_na_),Some(cortex_5),None,List(ElasticError(index_not_found_exception,no such index [cortex_5],Some(_na_),Some(cortex_5),None,null,None,None,None,List())),None,None,None,List())
[warn] o.e.d.SearchWithScroll - Search error
org.elastic4play.IndexNotFoundException$: null
at org.elastic4play.IndexNotFoundException$.<clinit>(Errors.scala)
at org.elastic4play.database.DBConfiguration.$anonfun$execute$2(DBConfiguration.scala:155)
at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307)
at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:41)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:56)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:93)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:93)
The best solution is to add wait-for-it to the Docker image:
https://docs.docker.com/compose/startup-order/
I am a bit unsure about how to modify the command, but here is a good starting point:
FROM thehiveproject/cortex:latest
COPY wait-for-it.sh /opt/cortex/wait-for-it.sh
RUN chmod +x /opt/cortex/wait-for-it.sh
And then maybe something like this (note the command must be split into separate array elements, and the job-directory flag needs its leading dashes):
cortex:
  build:
    context: ./fixcortex
  container_name: cortexfix
  restart: unless-stopped
  command: ["/opt/cortex/wait-for-it.sh", "elasticsearch_thp:9200", "--", "--job-directory", "${JOB_DIRECTORY}"]
  environment:
    - 'JOB_DIRECTORY=${JOB_DIRECTORY}'
  volumes:
    - './vol/cortex/application.conf:/etc/cortex/application.conf'
    - './vol/cortex/jobs:${JOB_DIRECTORY}'
    - '/var/run/docker.sock:/var/run/docker.sock'
  depends_on:
    - elasticsearch_thp
  ports:
    - 9001:9001
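A sketch of an alternative that needs no custom image and no knowledge of the Cortex entrypoint, assuming a Compose version that honours healthcheck conditions (Compose file format 2.1, or docker compose v2 with the Compose Specification; classic docker-compose v1 with a 3.x file ignores the condition):

```yaml
services:
  elasticsearch_thp:
    image: 'elasticsearch:7.11.1'
    healthcheck:
      test: ['CMD-SHELL', 'curl -sf http://localhost:9200/_cluster/health || exit 1']
      interval: 10s
      timeout: 5s
      retries: 12
  cortex:
    image: 'thehiveproject/cortex:latest'
    depends_on:
      elasticsearch_thp:
        condition: service_healthy
```

With this, Cortex is only started once ES answers its health endpoint, so index creation no longer races against ES startup.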
All of the samples with cortex break in different ways.
docker network create proxy
should be added to the README for these.
docker-compose up
does not work, but is so noisy it's hard to debug.
docker-compose up elasticsearch
got elasticsearch to work.
docker-compose up cortex
in another tmux tab after elasticsearch gives:
cortex | [error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_5/_search?scroll=60000ms
cortex | StringEntity({"seq_no_primary_term":"true","query":{"bool":{"must":[{"term":{"relations":{"value":"worker"}}},{"match_all":{}}]}},"from":0,"sort":[{"_doc":{"order":"desc"}}]},Some(application/json))
cortex | => ElasticError(index_not_found_exception,no such index [cortex_5],Some(_na_),Some(cortex_5),None,List(ElasticError(index_not_found_exception,no such index [cortex_5],Some(_na_),Some(cortex_5),None,null,None,None,None,List())),None,None,None,List())
cortex | [warn] o.e.d.SearchWithScroll - Search error
cortex | org.elastic4play.IndexNotFoundException$: null
cortex | at org.elastic4play.IndexNotFoundException$.<clinit>(Errors.scala)
cortex | at org.elastic4play.database.DBConfiguration.$anonfun$execute$2(DBConfiguration.scala:155)
cortex | at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307)
cortex | at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:41)
cortex | at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
cortex | at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:56)
cortex | at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:93)
cortex | at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
cortex | at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
cortex | at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:93)
^CGracefully stopping... (press Ctrl+C again to force)
Ubuntu 20.04
Docker 20.10.12
Docker-Compose v2.2.3
Cloned repo, run docker-compose up -d from the thehive4-cassandra directory
Got error :
thehive4 | [info] c.d.d.c.Cluster [|] New Cassandra host cassandra/172.19.0.2:9042 added
thehive4 | [info] o.j.d.Backend [|] Configuring index [search]
thehive4 | [warn] o.t.s.u.Retry [|] An error occurs (java.lang.IllegalArgumentException: Could not instantiate implementation: org.janusgraph.diskstorage.lucene.LuceneIndex), retrying (3)
thehive4 | [info] c.d.d.c.ClockFactory [|] Using native clock to generate timestamps.
thehive4 | [info] c.d.d.c.p.DCAwareRoundRobinPolicy [|] Using data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRound
Hi there,
a lot has changed since version 3!
Is there any documentation about how the new databases and file storage are used?
Cheers.
While testing hive4+misp+elasticsearch, the following error comes up.
It seems like play.http.secret.key="changethissosomethingsecret"
needs to be set as an environment variable (or secret)?
[info] ScalligraphApplication [|] Loading application ...
[info] o.t.s.ScalligraphModule [|] Loading scalligraph module
[info] a.e.s.Slf4jLogger [|] Slf4jLogger started
[info] a.r.a.t.ArteryTcpTransport [|] Remoting started with transport [Artery tcp]; listening on address [akka://[email protected]:38545] with UID [6474092054687545453]
[info] a.c.Cluster [|] Cluster Node [akka://[email protected]:38545] - Starting up, Akka version [2.6.10] ...
[info] a.c.Cluster [|] Cluster Node [akka://[email protected]:38545] - Registered cluster JMX MBean [akka:type=Cluster]
[info] a.c.Cluster [|] Cluster Node [akka://[email protected]:38545] - Started up successfully
[info] a.c.Cluster [|] Cluster Node [akka://[email protected]:38545] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
[info] a.c.s.SplitBrainResolver [|] SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://[email protected]:38545#6474092054687545453], selfDc [default].
[error] a.a.OneForOneStrategy [|] Unable to provision, see the following errors:
1) Error in custom provider, Configuration error: Configuration error[
The application secret has not been set, and we are in prod mode. Your application is not secure.
To set the application secret, please read http://playframework.com/documentation/latest/ApplicationSecret
]
while locating play.api.http.HttpConfiguration$HttpConfigurationProvider
while locating play.api.http.HttpConfiguration
for the 1st parameter of play.api.http.HttpConfiguration$CookiesConfigurationProvider.<init>(HttpConfiguration.scala:378)
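Since the error comes from Play's application-secret check, one way to satisfy it is to set play.http.secret.key in the mounted application.conf. A sketch; the ./vol/thehive path is an assumption matching the templates' usual bind mount, so adjust it to the compose file in use:

```shell
# Generate a long random secret and append it to the mounted config.
mkdir -p ./vol/thehive   # no-op when the template's volume dir already exists
SECRET=$(openssl rand -base64 48)
echo "play.http.secret.key=\"${SECRET}\"" >> ./vol/thehive/application.conf
```

Restart the thehive container afterwards so the new config is picked up.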
Hello, I am getting this error on all the thehive4 template variations that use the elasticsearch container.
For example, for thehive4-cortex31-nodered:
[0.000s][error][logging] Error opening log file 'logs/gc.log': Permission denied
[0.000s][error][logging] Initialization of output 'file=logs/gc.log' using options 'filecount=32,filesize=64m' failed.
error:
Invalid -Xlog option '-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m', see error log for details.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
at org.elasticsearch.tools.launchers.JvmErgonomics.flagsFinal(JvmErgonomics.java:126)
at org.elasticsearch.tools.launchers.JvmErgonomics.finalJvmOptions(JvmErgonomics.java:88)
at org.elasticsearch.tools.launchers.JvmErgonomics.choose(JvmErgonomics.java:59)
at org.elasticsearch.tools.launchers.JvmOptionsParser.jvmOptions(JvmOptionsParser.java:137)
at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:95)
The ES container is always in a restarting state.
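The "Permission denied" on logs/gc.log points at the bind-mounted logs directory rather than at the JVM flags themselves. A sketch of the usual fix, assuming the official elasticsearch image runs as uid/gid 1000 (verify against the image in use):

```shell
# Pre-create the bind-mounted directories and hand them to the
# elasticsearch user inside the container (uid 1000 is an assumption
# about the official image).
mkdir -p ./vol/elasticsearch/data ./vol/elasticsearch/logs
sudo chown -R 1000:1000 ./vol/elasticsearch
```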
n8n container keeps restarting with the below error.
› Error: There was an error: Error parsing n8n-config file
› "/home/node/.n8n/config". It does not seem to be valid JSON.
ln: /home/node/.n8n: File exists
I was using the config below for the docker containers:
https://github.com/TheHive-Project/Docker-Templates/tree/main/docker/thehive4-cortex31-n8n
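The two messages together suggest the bind-mounted config file exists but is empty or truncated, so n8n cannot parse it as JSON. A sketch of a reset; the ./vol/n8n host path and the /home/node/.n8n mount are assumptions, so check the template's volumes section for the real paths:

```shell
# Seed the config with an empty JSON object so n8n can parse it.
# ./vol/n8n is an assumed host path -- check the compose file's volumes.
mkdir -p ./vol/n8n
echo '{}' > ./vol/n8n/config
sudo chown -R 1000:1000 ./vol/n8n   # the n8n image runs as user "node" (uid 1000)
```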
We are using a dockerized Hive at version 3.4. According to the migration path, I would need to upgrade to 4.0 first before going to 4.1. What is the recommended way to do this?
From my understanding the solution would be to have a Docker template for 4.0, run it and migrate, then run and upgrade 4.1 with the same data?
Friends,
is there any reason not to have any image templates using Lucene?
I see that almost all of them use ES 7.
Hi there, I found another warning that looks important during Cortex container startup:
[warn] o.t.c.s.JobRunnerSrv - The package cortexutils for python hasn't been found
[warn] o.t.c.s.JobRunnerSrv - The package cortexutils for python2 hasn't been found
[warn] o.t.c.s.JobRunnerSrv - The package cortexutils for python3 hasn't been found
I believe there is an issue within the hive image.
For example, take the thehive4-berkleydb-cortex31 template.
Change the docker-compose file like so:
version: '3.8'
services:
  elasticsearch_thp:
    image: 'elasticsearch:7.11.1'
    container_name: elasticsearch_thp
    restart: unless-stopped
    ports:
      - '0.0.0.0:9200:9200'
    environment:
      - http.host=0.0.0.0
      - discovery.type=single-node
      - cluster.name=hive
      - script.allowed_types=inline
      - thread_pool.search.queue_size=100000
      - thread_pool.write.queue_size=10000
      - gateway.recover_after_nodes=1
      - xpack.security.enabled=false
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms256m -Xmx256m
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - './vol/elasticsearch/data:/usr/share/elasticsearch/data'
      - './vol/elasticsearch/logs:/usr/share/elasticsearch/logs'
  cortex:
    image: 'thehiveproject/cortex:latest'
    container_name: cortex
    restart: unless-stopped
    command:
      --job-directory ${JOB_DIRECTORY}
    environment:
      - 'JOB_DIRECTORY=${JOB_DIRECTORY}'
    volumes:
      - './vol/cortex/application.conf:/etc/cortex/application.conf'
      - './vol/cortex/jobs:${JOB_DIRECTORY}'
      - '/var/run/docker.sock:/var/run/docker.sock'
    depends_on:
      - elasticsearch_thp
    ports:
      - '0.0.0.0:9001:9001'
  thehive:
    image: 'thehiveproject/thehive4:latest'
    container_name: thehive4
    restart: unless-stopped
    ports:
      - '0.0.0.0:9000:9000'
    volumes:
      - ./vol/thehive/application.conf:/etc/thehive/application.conf
      #- ./vol/thehive/db:/opt/thp/thehive/db
      #- ./vol/thehive/index:/opt/thp/thehive/index
      #- ./vol/thehive/data:/opt/thp/thehive/data
    command: '--no-config --no-config-secret'
Notice how I commented out thehive's volumes so that it should not hit any local permission issue.
When you launch the compose file, the thehive image keeps restarting (this was happening originally with the local volumes mounted as well).
Checking the logs gives:
Error injecting constructor, java.nio.file.AccessDeniedException: /opt/thp
So I am wondering whether the folder is incorrect or there is an issue within the application.
@nadouani for visibility.
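For the AccessDeniedException on /opt/thp, the usual culprit is host-side ownership of the bind mounts: the thehive4 image drops root privileges, so directories created by root on the host become unwritable from inside the container. A sketch, assuming the container user is uid 1000 (verify against the image):

```shell
# Pre-create the bind-mounted directories and give them to the
# container user (uid 1000 is an assumption about the thehive4 image).
mkdir -p ./vol/thehive/db ./vol/thehive/index ./vol/thehive/data
sudo chown -R 1000:1000 ./vol/thehive
```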
Ubuntu 20.04
Docker 20.10.12
Docker-Compose v2.2.3
Cloned repo, ran docker-compose up -d from the thehive4-cortex31-nginx-https directory
Got error "network proxy declared as external, but could not be found"
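The nginx/https templates reference an external network named proxy, and Compose never creates external networks for you; creating it once beforehand clears the error:

```shell
# One-time setup: create the external network the template expects.
docker network create proxy
```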
Hi there,
I am test running the latest image of TheHive4+Cortex.
During the first step, which is the migration step, I see a lot of errors in the logs:
[error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_6/_search?
StringEntity({"query":{"match":{"relations":{"query":"user"}}},"size":0},Some(application/json))
=> ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,List(ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,null,None,None,None,List())),None,None,None,List())
[error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_6/_search?
StringEntity({"seq_no_primary_term":"true","query":{"ids":{"values":["init"]}},"size":1},Some(application/json))
=> ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,List(ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,null,None,None,None,List())),None,None,None,List())
[info] o.t.c.s.ErrorHandler - GET /api/user/current returned 520
org.elastic4play.IndexNotFoundException$: null
at org.elastic4play.IndexNotFoundException$.<clinit>(Errors.scala)
at org.elastic4play.database.DBConfiguration.$anonfun$execute$2(DBConfiguration.scala:155)
at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307)
at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:41)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:56)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:93)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:93)
[error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_6/_search?
StringEntity({"query":{"match":{"relations":{"query":"user"}}},"size":0},Some(application/json))
=> ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,List(ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,null,None,None,None,List())),None,None,None,List())
[error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_6/_search?
StringEntity({"query":{"match":{"relations":{"query":"user"}}},"size":0},Some(application/json))
=> ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,List(ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,null,None,None,None,List())),None,None,None,List())
[error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_6/_search?
StringEntity({"query":{"match":{"relations":{"query":"user"}}},"size":0},Some(application/json))
=> ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,List(ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,null,None,None,None,List())),None,None,None,List())
[error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_6/_search?
StringEntity({"query":{"match":{"relations":{"query":"user"}}},"size":0},Some(application/json))
=> ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,List(ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,null,None,None,None,List())),None,None,None,List())
[error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_6/_search?
StringEntity({"query":{"match":{"relations":{"query":"user"}}},"size":0},Some(application/json))
=> ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,List(ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,null,None,None,None,List())),None,None,None,List())
[error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_6/_search?
StringEntity({"query":{"match":{"relations":{"query":"user"}}},"size":0},Some(application/json))
=> ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,List(ElasticError(index_not_found_exception,no such index [cortex_6],Some(_na_),Some(cortex_6),None,null,None,None,None,List())),None,None,None,List())
Then I click to start the migration:
[info] c.s.e.h.JavaClient$ - Creating HTTP client on http://elasticsearch:9200
[info] c.s.e.h.JavaClient$ - Creating HTTP client on http://elasticsearch:9200
[info] c.s.e.h.JavaClient$ - Creating HTTP client on http://elasticsearch:9200
[info] c.s.e.h.JavaClient$ - Creating HTTP client on http://elasticsearch:9200
[info] c.s.e.h.JavaClient$ - Creating HTTP client on http://elasticsearch:9200
[info] o.e.s.MigrationSrv - Create a new empty database
[info] o.e.s.MigrationSrv - Migrate database from version 0, add operations for version 2
[info] o.e.s.MigrationSrv - Migrate database from version 0, add operations for version 3
[info] o.e.s.MigrationSrv - Migrate database from version 0, add operations for version 4
[info] o.e.s.MigrationSrv - Migrate database from version 0, add operations for version 5
[info] o.e.s.MigrationSrv - Migrate database from version 0, add operations for version 6
[warn] o.e.c.RestClient - request [PUT http://elasticsearch:9200/cortex_6?include_type_name=false] returned 1 warnings: [299 Elasticsearch-7.11.1-ff17057114c2199c9c1bbecc727003a907c0db7a "[types removal] Using include_type_name in create index requests is deprecated. The parameter will be removed in the next major version."]
[info] o.e.s.MigrationSrv - Migrating 0 entities from sequence
[info] o.e.s.MigrationSrv - Migrating 0 entities from artifact
[info] o.e.s.MigrationSrv - Migrating 0 entities from audit
[info] o.e.s.MigrationSrv - Migrating 0 entities from data
[info] o.e.s.MigrationSrv - Migrating 0 entities from dblist
[info] o.e.s.MigrationSrv - Migrating 0 entities from job
[info] o.e.s.MigrationSrv - Migrating 0 entities from organization
[info] o.e.s.MigrationSrv - Migrating 0 entities from report
[info] o.e.s.MigrationSrv - Migrating 0 entities from user
migrateEntity(audit) has finished : Success(())
[info] o.e.s.MigrationSrv - Migrating 0 entities from worker
migrateEntity(sequence) has finished : Success(())
migrateEntity(data) has finished : Success(())
migrateEntity(job) has finished : Success(())
migrateEntity(dblist) has finished : Success(())
migrateEntity(organization) has finished : Success(())
migrateEntity(user) has finished : Success(())
migrateEntity(report) has finished : Success(())
[info] o.e.s.MigrationSrv - Migrating 0 entities from workerConfig
migrateEntity(workerConfig) has finished : Success(())
migrateEntity(worker) has finished : Success(())
migrateEntity(artifact) has finished : Success(())
[info] o.e.s.MigrationSrv - End of migration
Then I create the cortex admin....
Hi there,
I am following this:
https://github.com/TheHive-Project/Docker-Templates/tree/main/docker/thehive4-berkleydb-cortex31
my docker is deployed on 192.168.2.14 so I go to the admin creation page:
http://192.168.2.14:9001
it redirects to:
http://192.168.2.14:9001/index.html#!/maintenance
which is fine I can see the form.
I then input the admin creds; nothing happens, so I check the browser debug console:
VM778:1 POST http://192.168.2.14:9001/api/user 500 (Internal Server Error)
(anonymous) @ VM778:1
(anonymous) @ angular.js:13692
D @ angular.js:13418
o @ angular.js:13159
o @ angular.js:18075
(anonymous) @ angular.js:18123
$digest @ angular.js:19241
$apply @ angular.js:19630
(anonymous) @ angular.js:29127
it @ angular.js:3891
e @ angular.js:3879
angular.js:15697 Possibly unhandled rejection: {"data":{"type":"InternalError","message":"Unknown error: ElasticError(mapper_parsing_exception,failed to parse,None,None,None,List(ElasticError(mapper_parsing_exception,failed to parse,None,None,None,null,None,None,None,List())),Some(CausedBy(illegal_argument_exception,unknown join name [user] for field [relations],Map())),None,None,List())"},"status":500,"config":{"method":"POST","transformRequest":[null],"transformResponse":[null],"jsonpCallbackParam":"callback","url":"./api/user","data":{"login":"[email protected]","name":"admin","password":"secret","roles":["superadmin"],"organization":"cortex"},"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json;charset=utf-8","X-CORTEX-XSRF-TOKEN":"5ddad865f4711c9c488900686a2351f20e8c1763-1626505451786-0bdf1c356049105dd517a93f"}},"statusText":"Internal Server Error","xhrStatus":"complete"}
docker logs from cotex:
docker logs -t cortex | grep error
2021-07-17T07:15:45.676344275Z [warn] o.e.d.SearchWithScroll - Search error
2021-07-17T07:15:45.713471917Z Info{architecture=x86_64, clusterStore=null, cgroupDriver=cgroupfs, containers=23, containersRunning=3, containersStopped=20, containersPaused=0, cpuCfsPeriod=true, cpuCfsQuota=true, debug=false, dockerRootDir=/mnt/data/docker, storageDriver=overlay2, driverStatus=[[Backing Filesystem, extfs], [Supports d_type, true], [Native Overlay Diff, true]], executionDriver=null, experimentalBuild=false, httpProxy=, httpsProxy=, id=3ETS:3FZM:DUY5:DR4R:Q5EK:NUBR:BQ72:7B52:YFEX:734C:CY6F:RAAC, ipv4Forwarding=true, images=514, indexServerAddress=https://index.docker.io/v1/, initPath=null, initSha1=null, kernelMemory=true, kernelVersion=5.8.0-59-generic, labels=[], memTotal=67371700224, memoryLimit=true, cpus=16, eventsListener=0, fileDescriptors=42, goroutines=51, name=tigerman, noProxy=, oomKillDisable=true, operatingSystem=Ubuntu 20.04.2 LTS, osType=linux, plugins=Plugins{volumes=[local], networks=[bridge, host, ipvlan, macvlan, null, overlay]}, registryConfig=RegistryConfig{indexConfigs={docker.io=IndexConfig{name=docker.io, mirrors=[], secure=true, official=true}}, insecureRegistryCidrs=[127.0.0.0/8]}, serverVersion=20.10.5, swapLimit=true, swarm=SwarmInfo{cluster=null, controlAvailable=false, error=, localNodeState=inactive, nodeAddr=, nodeId=, nodes=null, managers=null, remoteManagers=null}, systemStatus=[], systemTime=Sat Jul 17 07:15:45 UTC 2021}
2021-07-17T07:15:45.739443498Z [warn] o.e.d.SearchWithScroll - Search error
2021-07-17T07:16:07.153709520Z [error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_5/_search?
2021-07-17T07:16:07.167783036Z [error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_5/_search?
2021-07-17T07:16:07.535941529Z [error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_5/_search?
2021-07-17T07:16:07.686078110Z [error] o.e.d.DBConfiguration - ElasticSearch request failure: POST:/cortex_5/_search?
2021-07-17T07:16:50.468686826Z [error] o.e.d.DBConfiguration - ElasticSearch request failure: PUT:/cortex_5/_doc/admin%40thehive.local?refresh=wait_for&op_type=create&[email protected]
2021-07-17T07:16:50.469404170Z org.elastic4play.InternalError: Unknown error: ElasticError(mapper_parsing_exception,failed to parse,None,None,None,List(ElasticError(mapper_parsing_exception,failed to parse,None,None,None,null,None,None,None,List())),Some(CausedBy(illegal_argument_exception,unknown join name [user] for field [relations],Map())),None,None,List())
2021-07-17T07:17:25.019371132Z [error] o.e.d.DBConfiguration - ElasticSearch request failure: PUT:/cortex_5/_doc/admin%40thehive.local?refresh=wait_for&op_type=create&[email protected]
2021-07-17T07:17:25.020175603Z org.elastic4play.InternalError: Unknown error: ElasticError(mapper_parsing_exception,failed to parse,None,None,None,List(ElasticError(mapper_parsing_exception,failed to parse,None,None,None,null,None,None,None,List())),Some(CausedBy(illegal_argument_exception,unknown join name [user] for field [relations],Map())),None,None,List())
So I check the elasticsearch indexes:
http://192.168.2.14:9200/_cat/indices/
and cortex one is there:
yellow open cortex_5 JPIpUnkBRguHhEP-Kac26Q 5 1 1 0 6.5kb 6.5kb
My best guess is that Cortex did not push the right mappings.
So I check the mappings:
http://192.168.2.14:9200/cortex_5/_mapping
And they seem to be there:
{"cortex_5":{"mappings":{"date_detection":false,"numeric_detection":false,"properties":{"attachment":{"type":"nested","properties":{"contentType":{"type":"keyword"},"hashes":{"type":"keyword"},"id":{"type":"keyword"},"name":{"type":"keyword"},"size":{"type":"long"}}},"author":{"type":"text","fielddata":true},"avatar":{"type":"binary"},"base":{"type":"boolean"},"baseConfig":{"type":"keyword"},"binary":{"type":"binary"},"cacheTag":{"type":"keyword"},"command":{"type":"text","fielddata":true},"config":{"type":"binary"},"configuration":{"type":"binary"},"createdAt":{"type":"date","format":"epoch_millis||basic_date_time_no_millis"},"createdBy":{"type":"keyword"},"data":{"type":"binary"},"dataType":{"type":"keyword"},"dataTypeList":{"type":"keyword"},"dblist":{"type":"keyword"},"description":{"type":"text","fielddata":true},"details":{"type":"nested","properties":{"_id":{"type":"keyword"},"dataTypeList":{"type":"keyword"},"description":{"type":"text","fielddata":true},"endDate":{"type":"date","format":"epoch_millis||basic_date_time_no_millis"},"errorMessage":{"type":"text","fielddata":true},"input":{"type":"binary"},"jobCache":{"type":"long"},"jobTimeout":{"type":"long"},"label":{"type":"keyword"},"message":{"type":"text","fielddata":true},"name":{"type":"keyword"},"organization":{"type":"keyword"},"pap":{"type":"long"},"parameters":{"type":"binary"},"rate":{"type":"long"},"rateUnit":{"type":"keyword"},"roles":{"type":"keyword"},"startDate":{"type":"date","format":"epoch_millis||basic_date_time_no_millis"},"status":{"type":"keyword"},"tlp":{"type":"long"},"updatedAt":{"type":"date","format":"epoch_millis||basic_date_time_no_millis"},"updatedBy":{"type":"keyword"}}},"dockerImage":{"type":"text","fielddata":true},"endDate":{"type":"date","format":"epoch_millis||basic_date_time_no_millis"},"errorMessage":{"type":"text","fielddata":true},"fromCache":{"type":"boolean"},"full":{"type":"binary"},"input":{"type":"binary"},"jobCache":{"type":"long"},"jobTimeout":{"type":"long"},"key":{"type":"keyword"},"label":{"type":"keyword"},"license":{"type":"text","fielddata":true},"login":{"type":"keyword"},"message":{"type":"text","fielddata":true},"name":{"type":"keyword"},"objectId":{"type":"keyword"},"objectType":{"type":"keyword"},"operation":{"type":"keyword"},"operations":{"type":"binary"},"organization":{"type":"keyword"},"otherDetails":{"type":"text","fielddata":true},"pap":{"type":"long"},"parameters":{"type":"binary"},"password":{"type":"keyword"},"preferences":{"type":"binary"},"rate":{"type":"long"},"rateUnit":{"type":"keyword"},"relations":{"type":"join","eager_global_ordinals":true,"relations":{"dblist":[],"sequence":[],"data":[],"audit":[],"organization":["worker","workerConfig"],"report":"artifact","job":"report","user":[]}},"requestId":{"type":"keyword"},"roles":{"type":"keyword"},"rootId":{"type":"keyword"},"sequenceCounter":{"type":"long"},"startDate":{"type":"date","format":"epoch_millis||basic_date_time_no_millis"},"status":{"type":"keyword"},"summary":{"type":"binary"},"tags":{"type":"keyword"},"tlp":{"type":"long"},"type":{"type":"keyword"},"updatedAt":{"type":"date","format":"epoch_millis||basic_date_time_no_millis"},"updatedBy":{"type":"keyword"},"url":{"type":"text","fielddata":true},"value":{"type":"keyword"},"version":{"type":"keyword"},"workerDefinitionId":{"type":"keyword"},"workerId":{"type":"keyword"},"workerName":{"type":"keyword"}}}}}
Let me know what else I should try.
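Since the index exists but its relations join mapping rejects the user document, one thing worth trying on a fresh install is to drop cortex_5 and let Cortex recreate it with fresh mappings. This is destructive and only safe when there is no data to keep:

```shell
# WARNING: deletes the cortex_5 index and everything in it.
curl -XDELETE 'http://192.168.2.14:9200/cortex_5'
# Restart Cortex so it re-runs index creation.
docker restart cortex
```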
I'm running "thehive4-cortex31-n8n", but when TheHive loads, it's pre-configured and I cannot update the database,
and I don't know what the default username/password is.
Could you help with it, please?
Hi there,
is anybody working on updating thehive for the most recent version of ES?
I tried version 7.13.0 and got an error as described here.
Many thanks.
Starting thehive4-minimal via the docker-compose file causes the errors below. The docker daemon is working correctly, as it is servicing other containers.
Status: Downloaded newer image for thehiveproject/thehive4:latest
Creating thehive4-minimal_thehive_1 ... done
Attaching to thehive4-minimal_thehive_1
thehive_1 | [info] ScalligraphApplication [|] Loading application ...
thehive_1 | [info] o.t.s.ScalligraphModule [|] Loading scalligraph module
thehive_1 | [info] a.e.s.Slf4jLogger [|] Slf4jLogger started
thehive_1 | [info] a.r.a.t.ArteryTcpTransport [|] Remoting started with transport [Artery tcp]; listening on address [akka://[email protected]:32869] with UID [1700451959803731164]
thehive_1 | [info] a.c.Cluster [|] Cluster Node [akka://[email protected]:32869] - Starting up, Akka version [2.6.10] ...
thehive_1 | [info] a.c.Cluster [|] Cluster Node [akka://[email protected]:32869] - Registered cluster JMX MBean [akka:type=Cluster]
thehive_1 | [info] a.c.Cluster [|] Cluster Node [akka://[email protected]:32869] - Started up successfully
thehive_1 | [info] a.c.Cluster [|] Cluster Node [akka://[email protected]:32869] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
thehive_1 | [info] a.c.s.SplitBrainResolver [|] SBR started. Config: strategy [KeepMajority], stable-after [20 seconds], down-all-when-unstable [15 seconds], selfUniqueAddress [akka://[email protected]:32869#1700451959803731164], selfDc [default].
thehive_1 | [info] o.r.Reflections [|] Reflections took 288 ms to scan 1 urls, producing 160 keys and 2415 values
thehive_1 | [info] o.t.t.ClusterSetup [|] Initialising cluster
thehive_1 | [info] a.c.Cluster [|] Cluster Node [akka://[email protected]:32869] - Node [akka://[email protected]:32869] is JOINING itself (with roles [dc-default], version [0.0.0]) and forming new cluster
thehive_1 | [info] a.c.Cluster [|] Cluster Node [akka://[email protected]:32869] - is the new leader among reachable nodes (more leaders may exist)
thehive_1 | [info] a.c.Cluster [|] Cluster Node [akka://[email protected]:32869] - Leader is moving node [akka://[email protected]:32869] to [Up]
thehive_1 | [info] o.t.t.ClusterListener [|] Member is Up: akka://[email protected]:32869
thehive_1 | [info] a.c.s.SplitBrainResolver [|] This node is now the leader responsible for taking SBR decisions among the reachable nodes (more leaders may exist).
thehive_1 | [info] a.c.s.ClusterSingletonManager [|] Singleton manager starting singleton actor [akka://application/system/singletonManagerJanusClusterManager/JanusClusterManager]
thehive_1 | [info] a.c.s.ClusterSingletonManager [|] ClusterSingletonManager state change [Start -> Oldest]
thehive_1 | [info] a.c.s.ClusterSingletonProxy [|] Singleton identified at [akka://application/system/singletonManagerJanusClusterManager/JanusClusterManager]
thehive_1 | Oops, cannot start the server.
thehive_1 | java.nio.file.AccessDeniedException: /opt/thp/thehive/db/je.properties
thehive_1 | at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
thehive_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
thehive_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
thehive_1 | at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
thehive_1 | at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
thehive_1 | at java.nio.file.Files.newOutputStream(Files.java:216)
thehive_1 | at org.thp.scalligraph.janus.JanusDatabase$.$anonfun$openDatabase$1(JanusDatabase.scala:49)
thehive_1 | at org.thp.scalligraph.janus.JanusDatabase$.$anonfun$openDatabase$1$adapted(JanusDatabase.scala:45)
thehive_1 | at scala.Option.foreach(Option.scala:407)
thehive_1 | at org.thp.scalligraph.janus.JanusDatabase$.openDatabase(JanusDatabase.scala:45)
thehive_1 | at org.thp.scalligraph.janus.JanusDatabaseProvider.$anonfun$get$3(JanusDatabaseProvider.scala:105)
thehive_1 | at scala.util.Success.$anonfun$map$1(Try.scala:255)
thehive_1 | at scala.util.Success.map(Try.scala:213)
thehive_1 | at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
thehive_1 | at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
thehive_1 | at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
thehive_1 | at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
thehive_1 | at org.thp.scalligraph.ContextPropagatingDisptacher$$anon$1.$anonfun$execute$2(ContextPropagatingDisptacher.scala:56)
thehive_1 | at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
thehive_1 | at org.thp.scalligraph.DiagnosticContext$$anon$2.withContext(ContextPropagatingDisptacher.scala:75)
thehive_1 | at org.thp.scalligraph.ContextPropagatingDisptacher$$anon$1.$anonfun$execute$1(ContextPropagatingDisptacher.scala:56)
thehive_1 | at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
thehive_1 | at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
thehive_1 | at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
thehive_1 | at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
thehive_1 | at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
thehive_1 | at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
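The je.properties AccessDeniedException is again a bind-mount ownership mismatch between the host and the unprivileged container user. Besides chown-ing the host directory, the container user can be overridden to match the host directory's owner. A sketch; the UID/GID variables are assumed to come from a .env file, and whether the image tolerates an arbitrary uid should be verified:

```yaml
  thehive:
    image: 'thehiveproject/thehive4:latest'
    # Run as the host user that owns the ./vol/thehive bind mounts.
    user: '${UID}:${GID}'
```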
Update all docker-compose files related to TheHive 4 to use 4.1 images.