
Comments (5)

zanematthew commented on August 15, 2024

This should be marked as closed. You can log in to Kibana; it just takes time to start up (~5 minutes, maybe more). Also, as documented in the README, you'll want to increase the memory limit.
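
A quick way to confirm when Kibana is actually up (a minimal sketch, assuming the default localhost:5601 port mapping) is to poll it until it answers HTTP at all; with X-Pack security enabled, even a 401 response means the server is listening:

until curl --silent --output /dev/null http://localhost:5601/api/status; do
  echo 'waiting for Kibana...'   # connection refused while Kibana is still starting
  sleep 10
done
echo 'Kibana is answering on port 5601'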

tlovett1 commented on August 15, 2024

{"type":"log","@timestamp":"2018-02-08T01:24:20Z","tags":["license","warning","xpack"],"pid":1,"message":"License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. Error: No Living connections"}

jarpy commented on August 15, 2024

It's likely that the Elasticsearch container is not running. It might be running out of RAM, for example. Have a look at the log output for the Elasticsearch container and see if it's dying. Using docker-compose up | grep elasticsearch can help with the large amount of logging.
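
For example, a few quick checks against the container itself (assuming it is named elasticsearch, as in the compose file later in this thread):

docker ps -a --filter name=elasticsearch                  # STATUS column shows whether it exited
docker logs --tail 100 elasticsearch                      # last log lines from the container
docker inspect -f '{{.State.OOMKilled}}' elasticsearch    # prints true if it was killed for running out of memory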

babacoders commented on August 15, 2024

I am also facing the same issue.

I cloned the repository and ran docker-compose up, and the services look good:

root@ubuntu:~# docker ps
CONTAINER ID        IMAGE                                                          COMMAND                  CREATED             STATUS              PORTS               NAMES
a8b74fee2bdd        docker.elastic.co/beats/metricbeat:6.1.3                       "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              metricbeat
74b5dc937dc2        docker.elastic.co/apm/apm-server:6.1.3                         "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              apm_server
2d3be1ab5451        docker.elastic.co/beats/filebeat:6.1.3                         "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              filebeat
ccc3ea1c514e        docker.elastic.co/beats/heartbeat:6.1.3                        "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              heartbeat
cf884fe86ff4        docker.elastic.co/beats/auditbeat:6.1.3                        "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              setup_auditbeat
8c9b6e23f5d0        docker.elastic.co/beats/metricbeat:6.1.3                       "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              setup_metricbeat
3d86ef8351ee        docker.elastic.co/apm/apm-server:6.1.3                         "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              setup_apm_server
823b4184dae3        docker.elastic.co/beats/filebeat:6.1.3                         "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              setup_filebeat
a7d79bd9b05a        docker.elastic.co/beats/packetbeat:6.1.3                       "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              setup_packetbeat
ea7496991b0f        docker.elastic.co/beats/heartbeat:6.1.3                        "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              setup_heartbeat
1166e1030179        docker.elastic.co/kibana/kibana:6.1.3                          "/bin/bash /usr/lo..."   9 hours ago         Up 9 hours                              kibana
5fbccf85eea1        docker.elastic.co/beats/auditbeat:6.1.3                        "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              auditbeat
5d96d05dd05f        centos:7                                                       "/bin/bash -c 'cat..."   9 hours ago         Up 9 hours                              setup_logstash
0eb4a95da802        centos:7                                                       "/bin/bash -c 'cat..."   9 hours ago         Up 9 hours                              setup_kibana
f9d8c1b646ea        docker.elastic.co/elasticsearch/elasticsearch-platinum:6.1.3   "/usr/local/bin/do..."   9 hours ago         Up 9 hours                              elasticsearch

When I checked the logs:

root@ubuntu:~# docker logs f9d8
[2018-02-12T17:37:46,063][INFO ][o.e.n.Node               ] [] initializing ...
[2018-02-12T17:37:46,133][INFO ][o.e.e.NodeEnvironment    ] [jrXdhxD] using [1] data paths, mounts [[/ (none)]], net usable_space [44.8gb], net total_space [56.3gb], types [aufs]
[2018-02-12T17:37:46,134][INFO ][o.e.e.NodeEnvironment    ] [jrXdhxD] heap size [1007.3mb], compressed ordinary object pointers [true]
[2018-02-12T17:37:46,136][INFO ][o.e.n.Node               ] node name [jrXdhxD] derived from node ID [jrXdhxDiT8GKtKltsG6Fng]; set [node.name] to override
[2018-02-12T17:37:46,136][INFO ][o.e.n.Node               ] version[6.1.3], pid[1], build[af51318/2018-01-26T18:22:55.523Z], OS[Linux/4.8.0-22-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_161/25.161-b14]
[2018-02-12T17:37:46,137][INFO ][o.e.n.Node               ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
[2018-02-12T17:37:48,004][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [aggs-matrix-stats]
[2018-02-12T17:37:48,004][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [analysis-common]
[2018-02-12T17:37:48,004][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [ingest-common]
[2018-02-12T17:37:48,004][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [lang-expression]
[2018-02-12T17:37:48,005][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [lang-mustache]
[2018-02-12T17:37:48,005][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [lang-painless]
[2018-02-12T17:37:48,005][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [mapper-extras]
[2018-02-12T17:37:48,005][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [parent-join]
[2018-02-12T17:37:48,005][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [percolator]
[2018-02-12T17:37:48,005][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [reindex]
[2018-02-12T17:37:48,005][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [repository-url]
[2018-02-12T17:37:48,005][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [transport-netty4]
[2018-02-12T17:37:48,005][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded module [tribe]
[2018-02-12T17:37:48,006][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded plugin [ingest-geoip]
[2018-02-12T17:37:48,006][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded plugin [ingest-user-agent]
[2018-02-12T17:37:48,006][INFO ][o.e.p.PluginsService     ] [jrXdhxD] loaded plugin [x-pack]
[2018-02-12T17:37:50,906][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/121] [Main.cc@128] controller (64 bit): Version 6.1.3 (Build 49803b19919585) Copyright (c) 2018 Elasticsearch BV
[2018-02-12T17:37:51,461][INFO ][o.e.d.DiscoveryModule    ] [jrXdhxD] using discovery type [zen]
[2018-02-12T17:37:53,499][INFO ][o.e.n.Node               ] initialized
[2018-02-12T17:37:53,501][INFO ][o.e.n.Node               ] [jrXdhxD] starting ...
[2018-02-12T17:37:54,071][INFO ][o.e.t.TransportService   ] [jrXdhxD] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2018-02-12T17:37:54,132][WARN ][o.e.b.BootstrapChecks    ] [jrXdhxD] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2018-02-12T17:37:57,203][INFO ][o.e.c.s.MasterService    ] [jrXdhxD] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {jrXdhxD}{jrXdhxDiT8GKtKltsG6Fng}{CrSPn47TR-uBXotjRqhofQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=12598865920, ml.max_open_jobs=20, ml.enabled=true}
[2018-02-12T17:37:57,215][INFO ][o.e.c.s.ClusterApplierService] [jrXdhxD] new_master {jrXdhxD}{jrXdhxDiT8GKtKltsG6Fng}{CrSPn47TR-uBXotjRqhofQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=12598865920, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {jrXdhxD}{jrXdhxDiT8GKtKltsG6Fng}{CrSPn47TR-uBXotjRqhofQ}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=12598865920, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-02-12T17:37:57,268][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [jrXdhxD] publish_address {100.98.26.129:9200}, bound_addresses {[::]:9200}
[2018-02-12T17:37:57,269][INFO ][o.e.n.Node               ] [jrXdhxD] started
[2018-02-12T17:37:57,491][INFO ][o.e.g.GatewayService     ] [jrXdhxD] recovered [0] indices into cluster_state
[2018-02-12T17:37:58,990][INFO ][o.e.l.LicenseService     ] [jrXdhxD] license [5c7e79b3-d1e9-4f07-9793-ea3f20ae698e] mode [trial] - valid
[2018-02-12T17:38:04,015][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.monitoring-es-6-2018.02.12] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[1], mappings [doc]
[2018-02-12T17:38:04,482][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.watches] creating index, cause [auto(bulk api)], templates [.watches], shards [1]/[1], mappings [doc]
[2018-02-12T17:38:05,030][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watches/3a2eYZLURqWSE8If4ojDQg] update_mapping [doc]
[2018-02-12T17:39:05,388][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.triggered_watches] creating index, cause [auto(bulk api)], templates [.triggered_watches], shards [1]/[1], mappings [doc]
[2018-02-12T17:39:05,815][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.monitoring-alerts-6] creating index, cause [auto(bulk api)], templates [.monitoring-alerts], shards [1]/[1], mappings [doc]
[2018-02-12T17:39:05,897][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.watcher-history-7-2018.02.12] creating index, cause [auto(bulk api)], templates [.watch-history-7], shards [1]/[1], mappings [doc]
[2018-02-12T17:39:06,195][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.12/mxj9Ak5aQmqvoGPks67j-A] update_mapping [doc]
[2018-02-12T17:39:06,283][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.12/mxj9Ak5aQmqvoGPks67j-A] update_mapping [doc]
[2018-02-13T00:00:04,457][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.monitoring-es-6-2018.02.13] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[1], mappings [doc]
[2018-02-13T00:00:19,346][INFO ][o.e.c.m.MetaDataCreateIndexService] [jrXdhxD] [.watcher-history-7-2018.02.13] creating index, cause [auto(bulk api)], templates [.watch-history-7], shards [1]/[1], mappings [doc]
[2018-02-13T00:00:19,451][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.13/y1cnVrseR3atoRioU811gw] update_mapping [doc]
[2018-02-13T00:00:30,855][INFO ][o.e.c.m.MetaDataMappingService] [jrXdhxD] [.watcher-history-7-2018.02.13/y1cnVrseR3atoRioU811gw] update_mapping [doc]
[2018-02-13T01:38:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] triggering scheduled [ML] maintenance tasks
[2018-02-13T01:38:00,004][INFO ][o.e.x.m.a.DeleteExpiredDataAction$TransportAction] [jrXdhxD] Deleting expired data
[2018-02-13T01:38:00,081][INFO ][o.e.x.m.a.DeleteExpiredDataAction$TransportAction] [jrXdhxD] Completed deletion of expired data
[2018-02-13T01:38:00,083][INFO ][o.e.x.m.MlDailyMaintenanceService] Successfully completed [ML] maintenance tasks
Then, trying to create an index:

curl -XPUT 'localhost:9200/idx'
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication token for REST request [/idx]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication token for REST request [/idx]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}

I added the entry below to increase the RAM available to Elasticsearch:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-platinum:${TAG}
    container_name: elasticsearch
    network_mode: host
    # single environment block; a second `environment:` key would be a duplicate
    # mapping key, and one of the two blocks would be lost
    environment:
      - http.host=0.0.0.0
      - transport.host=127.0.0.1
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ES_JAVA_OPTS=-Xmx4g -Xms4g   # 4 GB JVM heap
    ports: ['127.0.0.1:9200:9200']
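
Separately, the Elasticsearch log above shows the bootstrap warning "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]". If Elasticsearch ever refuses to start over that check, the usual remedy per Elastic's Docker documentation is to raise the limit on the Docker host, for example:

sysctl -w vm.max_map_count=262144                       # apply immediately (run as root)
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf      # keep the setting across reboots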

Can you please suggest?

jarpy commented on August 15, 2024

I think that's a different thing.

curl -XPUT 'localhost:9200/idx'
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication token for REST request [/idx]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication token for REST request [/idx]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}root@ubuntu:~/openusm/logging#

I would expect that request to fail, since authentication is enabled.

Try curl -XPUT 'elastic:changeme@localhost:9200/idx' instead.
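
Equivalently, the credentials can be passed with curl's -u flag (the password is whatever ELASTIC_PASSWORD was set to for the elasticsearch container; changeme is assumed here as the default):

curl -u elastic:changeme -XPUT 'localhost:9200/idx'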
