
stack-docker's Introduction

ARCHIVED

This project is no longer maintained. For alternative getting started experiences, you may want to try one of these options:

stack-docker

This example Docker Compose configuration demonstrates many components of the Elastic Stack, all running on a single machine under Docker.

Prerequisites

  • Docker and Docker Compose.
    • Windows and Mac users get Compose installed automatically with Docker for Windows/Mac.

    • Linux users can read the install instructions or can install via pip:

pip install docker-compose
  • Windows users must set the following two environment variables:

    • COMPOSE_CONVERT_WINDOWS_PATHS=1
    • PWD=/path/to/checkout/for/stack-docker
      • For example, I use the path /c/Users/nick/elastic/stack-docker
      • Note: your paths must be in the form /c/path/to/place; using C:\path\to\place will not work
    • You can set these in several ways:
      1. Temporarily, in PowerShell: $Env:COMPOSE_CONVERT_WINDOWS_PATHS=1
      2. Permanently, in PowerShell: [Environment]::SetEnvironmentVariable("COMPOSE_CONVERT_WINDOWS_PATHS", "1", "Machine")

      Note: you will need to refresh or open a new PowerShell session for this environment variable to take effect

      3. Alternatively, add the environment variables in System Properties.
  • At least 4GiB of RAM for the containers. Windows and Mac users must configure their Docker virtual machine to have more than the default 2 GiB of RAM:

Docker VM memory settings

  • Linux users must set the following configuration as root:
sysctl -w vm.max_map_count=262144

By default, the kernel's vm.max_map_count limit is too low for Elasticsearch; a sketch of applying and persisting the setting is shown below.
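A minimal sketch of applying the setting now and persisting it across reboots (assuming a conventional sysctl setup; some distributions use /etc/sysctl.d/ instead of /etc/sysctl.conf):

# Apply immediately (run as root)
sysctl -w vm.max_map_count=262144
# Persist across reboots
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf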

Starting the stack

First we need to:

  1. set the default password
  2. create keystores to store passwords
  3. install dashboards, index patterns, etc. for Beats and APM

This is accomplished using the setup.yml file:

docker-compose -f setup.yml up

Please take note: after the setup completes, it will output the password used for the elastic login.

Now we can launch the stack with docker-compose up -d to create a demonstration Elastic Stack with Elasticsearch, Kibana, Logstash, Auditbeat, Metricbeat, Filebeat, Packetbeat, and Heartbeat.
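For reference, a minimal sketch of the whole workflow described above, using only standard Docker Compose commands:

# One-shot setup: passwords, keystores, dashboards (note the generated password)
docker-compose -f setup.yml up
# Start the full stack in the background
docker-compose up -d
# Optionally check container status and follow Kibana's logs until it is ready
docker-compose ps
docker-compose logs -f kibana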

Point a browser at http://localhost:5601 to see the results.

NOTE: Elasticsearch is now set up with self-signed certificates.

Log in as elastic with whatever auto-generated elastic password came out of the setup.
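If you want to verify Elasticsearch directly, a hedged example, assuming it is exposed at https://localhost:9200 as noted in the issues below (-k skips verification of the self-signed certificate; <generated-password> is a placeholder for the password printed by setup):

curl -k -u elastic:<generated-password> https://localhost:9200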

stack-docker's People

Contributors

ajpahl1008, dadoonet, dliappis, elasticdog, fatmcgav, fxdgear, fyalavuz, jarpy, jbudz, recena, spinscale

stack-docker's Issues

Role Kibana

I need to create a user with all roles, but the default user does not have access to X-Pack to create a new user.
I do not know how to create a default user with full access.
I searched for a property to set a starting role, but I did not find anything.
Can anyone help?

APM server setup fails

Just tried setting this up on my mac, following the directions in the readme.

I get the following error while running docker-compose -f setup.yml up:

setup_1  | setup_apm_server       | Loaded index template
setup_1  | setup_apm_server       | Loading dashboards (Kibana must be running and reachable)
setup_1  | setup_apm_server       | Exiting: fail to create the Kibana loader: Error creating Kibana client: Error creating Kibana client: fail to get the Kibana version: HTTP GET request to /api/status fails: <nil>. Response: {"message":"[doc][config:6.5.2]: version conflict, document already exists (current version [1]): [version_conflict_engine_exception] [doc][config:6.5.2]: version conflict, document already exists (current version [1]), with { index_uuid=\"m8fWXMZmSB... (truncated).
setup_1  | setup_apm_server exited with code 1

Even though the apm-server setup fails, the overall setup exits cleanly.

So then I try docker-compose up -d, and get this:

Creating apm_server      ... error
...

ERROR: for apm-server  Cannot create container for service apm-server: invalid mount config for type "bind": bind source path does not exist: /Users/rontoland/Code/stack-docker/config/apm-server/apm-server.keystore
ERROR: Encountered errors while bringing up the project.

Everything else seems to come up OK, but it seems there's a problem getting apm-server set up?

elasticsearch dies on Windows

Downloaded this repo, extracted to local path, ran docker-compose up. Everything appeared to start ok. Navigate to localhost:5601 and get redirected to http://localhost:5601/login?next=%2F with ERR_TOO_MANY_REDIRECTS

This repeats in the on-screen logs:

heartbeat_1         | 2017/06/01 16:42:09.412778 scheduler.go:294: INFO Scheduled job 'http@http://kibana:5601' already active.
heartbeat_1         | 2017/06/01 16:42:09.412802 scheduler.go:294: INFO Scheduled job 'icmp-host-ip@kibana' already active.
heartbeat_1         | 2017/06/01 16:42:09.412805 scheduler.go:294: INFO Scheduled job 'icmp-host-ip@elasticsearch' already active.
heartbeat_1         | 2017/06/01 16:42:09.412809 scheduler.go:294: INFO Scheduled job 'http@http://elasticsearch:9200' already active.
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:09Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:09Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
logstash_1          | [2017-06-01T16:42:10,286][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
logstash_1          | [2017-06-01T16:42:10,288][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
logstash_1          | [2017-06-01T16:42:10,288][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://elastic:xxxxxx@elasticsearch:9200/, :path=>"/"}
logstash_1          | [2017-06-01T16:42:10,291][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x3c6ad621 URL:http://logstash_system:xxxxxx@elasticsearch:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
logstash_1          | [2017-06-01T16:42:10,291][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x8363560 URL:http://logstash_system:xxxxxx@elasticsearch:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
logstash_1          | [2017-06-01T16:42:10,293][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x45e416c3 URL:http://elastic:xxxxxx@elasticsearch:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:11Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:11Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:11Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:11Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:12Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:12Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:14Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:14Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:14Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1            | {"type":"log","@timestamp":"2017-06-01T16:42:14Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}

checked running containers

$ docker ps
CONTAINER ID        IMAGE                                       COMMAND                  CREATED             STATUS              PORTS                      NAMES
ff485e736be5        docker.elastic.co/logstash/logstash:5.4.1   "/usr/local/bin/do..."   42 minutes ago      Up 42 minutes       5044/tcp, 9600/tcp         elasticstack_logstash_1
fd1c78457958        docker.elastic.co/kibana/kibana:5.4.1       "/bin/sh -c /usr/l..."   42 minutes ago      Up 42 minutes       127.0.0.1:5601->5601/tcp   elasticstack_kibana_1
6473e6b8f0b7        docker.elastic.co/beats/packetbeat:5.4.1    "packetbeat -v -e ..."   42 minutes ago      Up 42 minutes                                  elasticstack_packetbeat_1
8cf96cc725b6        docker.elastic.co/beats/heartbeat:5.4.1     "heartbeat -e"           42 minutes ago      Up 42 minutes                                  elasticstack_heartbeat_1
03c3d500fb0a        docker.elastic.co/beats/filebeat:5.4.1      "filebeat -e"            42 minutes ago      Up 42 minutes                                  elasticstack_filebeat_1
673508ac78f4        docker.elastic.co/beats/metricbeat:5.4.1    "metricbeat -e"          42 minutes ago      Up 42 minutes                                  elasticstack_metricbeat_1

no elasticsearch in the list, checking logs

docker logs elasticstack_elasticsearch_1
[2017-06-01T16:01:24,748][INFO ][o.e.n.Node               ] [] initializing ...
[2017-06-01T16:01:24,888][INFO ][o.e.e.NodeEnvironment    ] [3lDg5IK] using [1] data paths, mounts [[/ (overlay)]], net usable_space [48.8gb], net total_space [58.8gb], spins? [possibly], types [overlay]
[2017-06-01T16:01:24,889][INFO ][o.e.e.NodeEnvironment    ] [3lDg5IK] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-06-01T16:01:24,890][INFO ][o.e.n.Node               ] node name [3lDg5IK] derived from node ID [3lDg5IKgS2mZmPIGQvudxw]; set [node.name] to override
[2017-06-01T16:01:24,891][INFO ][o.e.n.Node               ] version[5.4.1], pid[1], build[2cfe0df/2017-05-29T16:05:51.443Z], OS[Linux/4.9.30-moby/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_131/25.131-b12]
[2017-06-01T16:01:24,891][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+DisableExplicitGC, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch]
[2017-06-01T16:01:30,610][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded module [aggs-matrix-stats]
[2017-06-01T16:01:30,611][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded module [ingest-common]
[2017-06-01T16:01:30,611][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded module [lang-expression]
[2017-06-01T16:01:30,611][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded module [lang-groovy]
[2017-06-01T16:01:30,612][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded module [lang-mustache]
[2017-06-01T16:01:30,613][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded module [lang-painless]
[2017-06-01T16:01:30,613][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded module [percolator]
[2017-06-01T16:01:30,614][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded module [reindex]
[2017-06-01T16:01:30,614][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded module [transport-netty3]
[2017-06-01T16:01:30,614][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded module [transport-netty4]
[2017-06-01T16:01:30,619][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded plugin [ingest-geoip]
[2017-06-01T16:01:30,620][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded plugin [ingest-user-agent]
[2017-06-01T16:01:30,620][INFO ][o.e.p.PluginsService     ] [3lDg5IK] loaded plugin [x-pack]

I have a similar issue when trying to run just the Elasticsearch Docker image: it starts then dies, and I'm not seeing anything in the Elasticsearch logs that gives me any clues.

$ docker version
Client:
 Version:      17.06.0-ce-rc1
 API version:  1.30
 Go version:   go1.8.1
 Git commit:   7f8486a
 Built:        Wed May 31 02:54:33 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.06.0-ce-rc1
 API version:  1.30 (minimum version 1.12)
 Go version:   go1.8.1
 Git commit:   7f8486a
 Built:        Wed May 31 03:00:14 2017
 OS/Arch:      linux/amd64
 Experimental: true

$ docker-compose version
docker-compose version 1.13.0, build 1719ceb8
docker-py version: 2.2.1
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.2j  26 Sep 2016

Using Docker for Windows Edge, as the stable channel doesn't support Compose file format 3.2.

Filebeat (5.6): index created, but no documents

Thanks for a great way to get started with the entire Elastic Stack 👍

I've previously used the now-deprecated "library" Docker images (library/elastic etc). The transition to these images has been close to friction-free, except for the fact that logs from Filebeat are not added to the Elasticsearch index.

I'm running the latest version of Docker on Windows:

> docker --version
Docker version 17.09.0-ce, build afdb6d4

Steps to reproduce

  1. Check out the branch 5.6 and add a volume to filebeat so that /mnt/log contains some log files. I had to adjust the JVM heap size for the Elasticsearch container by adding the environment variable 'ES_JAVA_OPTS=-Xms512m -Xmx512m'.
  2. Run docker-compose up and wait for the containers to start up
  3. Load list of indexes in Elasticsearch http://localhost:9200/_cat/indices?v

Result

I see a Filebeat index (filebeat-2017.10.06), but it does not contain any documents.

Looking at the log from the Filebeat container, I see the following (some log entries removed for readability)

INFO Harvester started for file: /mnt/log/my_log_file.log
ERR Connecting error publishing events (retrying): Get http://elasticsearch:9200: dial tcp 172.18.0.2:9200: getsockopt: connection refused
INFO Connected to Elasticsearch version 5.6.2
INFO Non-zero metrics in the last 30s: libbeat.es.call_count.PublishEvents=602 libbeat.es.publish.read_bytes=287614 libbeat.es.publish.write_bytes=19860998 libbeat.es.published_and_acked_events=29991 libbeat.publisher.published_events=28655 publish.events=28672 registrar.states.current=31 registrar.states.update=28672 registrar.writes=14

I'm not familiar with the inner workings of Filebeat, but if I read the logs right it looks like it successfully connected to Elasticsearch (after a few retries) and published log entries to it. Furthermore, I have more or less ruled out authentication or connectivity problems, as the Filebeat index is created in Elasticsearch.

Do you have any idea what the problem might be or if I've missed something? Any help would be greatly appreciated!

Permission denied while setting up

With a fresh copy of the repository, running on CentOS 7, I got an access control error with the following output.

$ docker-compose -f setup.yml up

Starting 02adc5c8189e_stack-docker_setup_1_4eb31c99477b ... done
Attaching to 02adc5c8189e_stack-docker_setup_1_4eb31c99477b
02adc5c8189e_stack-docker_setup_1_4eb31c99477b | cat: can't open './scripts/setup.sh': Permission denied
02adc5c8189e_stack-docker_setup_1_4eb31c99477b exited with code 0

Environment details

  • cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • uname -a
Linux blink-monolith 4.19.0-1.el7.elrepo.x86_64 #1 SMP Mon Oct 22 10:40:32 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
  • docker-compose version
docker-compose version 1.23.0, build c8524dc
docker-py version: 3.5.1
CPython version: 2.7.5
OpenSSL version: OpenSSL 1.0.2k-fips  26 Jan 2017
  • docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: docker-1.13.1-75.git8633870.el7.centos.x86_64
 Go version:      go1.9.4
 Git commit:      8633870/1.13.1
 Built:           Fri Sep 28 19:45:08 2018
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: docker-1.13.1-75.git8633870.el7.centos.x86_64
 Go version:      go1.9.4
 Git commit:      8633870/1.13.1
 Built:           Fri Sep 28 19:45:08 2018
 OS/Arch:         linux/amd64
 Experimental:    false
  • ls -la
drwxrwxr-x.  7 restrict restrict  4096 Kas  5 14:42 .
drwx------. 11 restrict restrict  4096 Kas  5 14:52 ..
drwxrwxr-x. 12 restrict restrict  4096 Kas  5 14:42 config
-rw-rw-r--.  1 restrict restrict  5197 Kas  5 14:42 docker-compose.setup.yml
-rw-rw-r--.  1 restrict restrict 11020 Kas  5 14:42 docker-compose.yml
-rw-rw-r--.  1 restrict restrict    32 Kas  5 14:42 .env
drwxrwxr-x.  8 restrict restrict  4096 Kas  5 14:51 .git
-rw-rw-r--.  1 restrict restrict   725 Kas  5 14:42 .gitignore
-rw-rw-r--.  1 restrict restrict 11357 Kas  5 14:42 LICENSE
-rw-rw-r--.  1 restrict restrict  1082 Kas  5 14:42 Makefile
-rw-rw-r--.  1 restrict restrict  2483 Kas  5 14:42 README.md
drwxrwxr-x.  2 restrict restrict    42 Kas  5 14:42 screenshots
drwxrwxr-x.  2 restrict restrict  4096 Kas  5 14:42 scripts
-rw-rw-r--.  1 restrict restrict   447 Kas  5 14:42 setup.yml
drwxrwxr-x.  2 restrict restrict    23 Kas  5 14:42 stack
-rw-rw-r--.  1 restrict restrict    87 Kas  5 14:42 .travis.yml

The username is restrict.

After hours of debugging, I have come up with nothing. Maybe it's an SELinux issue; do you have any idea how to make this work?

Sorry for lots of debug output.

Container setup_elasticsearch Vs elasticsearch

I am not sure I understand the difference between the setup_xxxx and xxxx containers, for example setup_elasticsearch and elasticsearch.
For instance, setup.sh tries to restart elasticsearch, but only setup_elasticsearch is available at that stage.
Also, after setup and run, we end up with two containers per component, e.g. two for elasticsearch, two for kibana, etc.

Allow easy way to specify a data directory

The primary use case, for me, is to be able to run integration tests against the cluster created by stack-docker. Unfortunately the built-in filesystem is too slow, so I would like to have it mounted to /tmp on the host machine, which is running tmpfs.

Auditbeat fails to start

Exiting: 1 error: 1 error: failed to create audit client: failed to create audit client: protocol not supported

This error happens when the host kernel has not been compiled with Audit.
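A hedged way to check whether the running kernel was built with audit support (the config file locations are assumptions and vary by distribution):

# Look for CONFIG_AUDIT=y in the kernel's build configuration
grep CONFIG_AUDIT= /boot/config-$(uname -r)
# Or, if the kernel exposes its config via procfs:
zgrep CONFIG_AUDIT= /proc/config.gz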

Cannot specify a default index pattern

Left default of logstash-* in as the pattern, @timestamp as the Time Filter Field Name, clicked "Create", clicked star to set as default index, and the default index pattern refuses to save. Clicking on any other navigation item ("Discover" for example) returns me to the management page to define a default index pattern.

At this point, I cannot get past this element. docker-compose up -d and login via http://localhost:5601 is as far as I can get.

Thanks.

Kibana 6.4.0 crashes with fatal error

When using 6.4.0 RC images, Kibana fails to start, crashing with this message:

FATAL "elasticsearch.password" setting was not applied. Check for spelling errors and ensure that expected plugins are installed.

Here is a script to reproduce the error:

#!/bin/bash

set -xeuo pipefail

version=6.4.0
build=89d563cd
products='elasticsearch kibana logstash apm-server auditbeat filebeat heartbeat metricbeat packetbeat'

for product in ${products}; do
  curl https://staging.elastic.co/${version}-${build}/docker/${product}-6.4.0.tar.gz | docker load
done

sed -i "s/6[.][0-9][.][0-9]/${version}/" .env

docker-compose -f setup.yml up -d
watch docker logs kibana

I'm trying to determine if something has changed with the use of the keystore in Kibana 6.4.0, or if there is a problem with the 6.4.0 RC Docker image.

Kibana does not come up

Tried following the steps in the README.md file. http://localhost:5061 does not work.
Another thing I noticed is that Elasticsearch is inaccessible after Logstash starts. If I start Elasticsearch individually from Docker it is accessible at http://localhost:9200, but as soon as Logstash starts it becomes inaccessible.

Docker Version 17.03.1-ce-mac12 (17661)
stack-docker version 5.4.3
Attached log file.
elastic-stask-docker.txt

Error response from daemon: No such container: elasticsearch

I'm attempting to use this project for the first time, from 898d08b, and I'm hitting the error in the subject during execution of setup.sh. The docker restart elasticsearch command is reached before the elasticsearch container is ever started (following the README), so the setup_logstash and setup_kibana containers sleep indefinitely waiting for the elasticsearch container to start.

Here are my logs. I added some output to help me understand what was going on:

$ docker-compose -f setup.yml up
<...snip...>
setup_1  | setup_elasticsearch    | Move logstash certs to logstash config dir...
setup_1  | setup_elasticsearch    | Move kibana certs to kibana config dir...
setup_1  | setup_elasticsearch    | Move elasticsearch certs to elasticsearch config dir...
setup_1  | setup_elasticsearch exited with code 0
setup_1  | Restarting elasticsearch...
setup_1  | Error response from daemon: No such container: elasticsearch
setup_1  | RC: 1
setup_1  | Found orphan containers (elasticstack-docker_setup_1) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
setup_1  | Pulling setup_logstash (docker.elastic.co/logstash/logstash:6.3.0)...
setup_1  | 6.3.0: Pulling from logstash/logstash
setup_1  | Digest: sha256:3f80943ab3bc4ca9ef50eb712cdd720a1e61edeecae5f15388724039e432435a
setup_1  | Status: Downloaded newer image for docker.elastic.co/logstash/logstash:6.3.0
setup_1  | Pulling setup_kibana (docker.elastic.co/kibana/kibana:6.3.0)...
setup_1  | 6.3.0: Pulling from kibana/kibana
setup_1  | Digest: sha256:fd18353e0f980d812e31d52c5c706b614b33dde7728f2be1bb9fc18a531ea364
setup_1  | Status: Downloaded newer image for docker.elastic.co/kibana/kibana:6.3.0
Creating elasticsearch ... done
Creating setup_kibana   ... done
Creating setup_logstash ... done
Attaching to setup_kibana, setup_logstash
setup_1  | setup_kibana           | -rw-rw-r-- 1 root root 1200 Jul 25 01:11 /usr/share/kibana/config/ca/ca.crt
setup_1  | setup_logstash         | -rw-rw-r-- 1 root root 1200 Jul 25 01:11 /usr/share/logstash/config/ca/ca.crt
setup_1  | setup_logstash         | Sleeping while waiting for elasticsearch container...
setup_1  | setup_logstash         | Sleeping while waiting for elasticsearch container...
setup_1  | setup_logstash         | Sleeping while waiting for elasticsearch container...

Docker version:

Client:
 Version:           18.06.0-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        0ffa825
 Built:             Wed Jul 18 19:11:02 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Compose:

docker-compose version 1.21.2, build a133471

I ended up temporarily dropping back to d52aac2 to get the stack started.

Any help would be greatly appreciated!

No PDF Reports since 6.5.2

Hello,

since 6.5.2, PDF reports time out with the following error:

TimeoutError: waiting for selector "[data-shared-item],[data-shared-items-count]" failed: timeout 30000ms exceeded

I disabled all services except kibana and elasticsearch in docker-compose.yml.

Docker runs on Linux.

Logs:

{"type":"response","@timestamp":"2018-12-15T08:33:38Z","tags":[],"pid":1,"method":"post","statusCode":200,"req":{"url":"/api/reporting/generate/printablePdf?jobParams=(browserTimezone%3AEurope%2FBerlin%2Clayout%3A(dimensions%3A(height%3A1268%2Cwidth%3A1228)%2Cid%3Apreserve_layout)%2CobjectType%3Adashboard%2CrelativeUrls%3A!('%2Fapp%2Fkibana%23%2Fdashboard%2F63fbdd30-f89b-11e8-b6ae-93487c46f2e0%3F_g%3D(refreshInterval%3A(pause%3A!!t%2Cvalue%3A0)%2Ctime%3A(from%3Anow-7d%2Cmode%3Aquick%2Cto%3Anow))%26_a%3D(description%3A!'!'%2Cfilters%3A!!()%2CfullScreenMode%3A!!f%2Coptions%3A(darkTheme%3A!!f%2ChidePanelTitles%3A!!f%2CuseMargins%3A!!t)%2Cpanels%3A!!((embeddableConfig%3A()%2CgridData%3A(h%3A15%2Ci%3A!'1!'%2Cw%3A24%2Cx%3A0%2Cy%3A0)%2Cid%3A!'2be74ec0-f8d2-11e8-b6ae-93487c46f2e0!'%2CpanelIndex%3A!'1!'%2Ctype%3Avisualization%2Cversion%3A!'6.3.0!')%2C(embeddableConfig%3A()%2CgridData%3A(h%3A15%2Ci%3A!'2!'%2Cw%3A24%2Cx%3A24%2Cy%3A0)%2Cid%3A!'2e0f4260-f8d3-11e8-b6ae-93487c46f2e0!'%2CpanelIndex%3A!'2!'%2Ctype%3Avisualization%2Cversion%3A!'6.3.0!')%2C(embeddableConfig%3A()%2CgridData%3A(h%3A15%2Ci%3A!'3!'%2Cw%3A24%2Cx%3A0%2Cy%3A15)%2Cid%3Ac8d52580-f8d3-11e8-b6ae-93487c46f2e0%2CpanelIndex%3A!'3!'%2Ctype%3Avisualization%2Cversion%3A!'6.3.0!')%2C(embeddableConfig%3A()%2CgridData%3A(h%3A15%2Ci%3A!'4!'%2Cw%3A24%2Cx%3A24%2Cy%3A15)%2Cid%3Ab89c2970-f8d3-11e8-b6ae-93487c46f2e0%2CpanelIndex%3A!'4!'%2Ctype%3Avisualization%2Cversion%3A!'6.3.0!')%2C(embeddableConfig%3A()%2CgridData%3A(h%3A15%2Ci%3A!'5!'%2Cw%3A24%2Cx%3A0%2Cy%3A30)%2Cid%3A!'0d88e270-f8d4-11e8-b6ae-93487c46f2e0!'%2CpanelIndex%3A!'5!'%2Ctype%3Avisualization%2Cversion%3A!'6.3.0!'))%2Cquery%3A(language%3Alucene%2Cquery%3A!'!')%2CtimeRestore%3A!!f%2Ctitle%3A!'Testa%2BOverview!'%2CviewMode%3Aview)')%2Ctitle%3A'Testa%20Overview')","method":"post","headers":{"host":"stkn.org:5601","connection":"keep-alive","content-length":"0","origin":"http://stkn.org:5601","kbn-version":"6.5.3","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36","content-type":"application/json","accept":"*/*","referer":"http://stkn.org:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"de,de-DE;q=0.9,en;q=0.8,en-US;q=0.7"},"remoteAddress":"192.168.48.1","userAgent":"192.168.48.1","referer":"http://stkn.org:5601/app/kibana"},"res":{"statusCode":200,"responseTime":121,"contentLength":9},"message":"POST 
/api/reporting/generate/printablePdf?jobParams=(browserTimezone%3AEurope%2FBerlin%2Clayout%3A(dimensions%3A(height%3A1268%2Cwidth%3A1228)%2Cid%3Apreserve_layout)%2CobjectType%3Adashboard%2CrelativeUrls%3A!('%2Fapp%2Fkibana%23%2Fdashboard%2F63fbdd30-f89b-11e8-b6ae-93487c46f2e0%3F_g%3D(refreshInterval%3A(pause%3A!!t%2Cvalue%3A0)%2Ctime%3A(from%3Anow-7d%2Cmode%3Aquick%2Cto%3Anow))%26_a%3D(description%3A!'!'%2Cfilters%3A!!()%2CfullScreenMode%3A!!f%2Coptions%3A(darkTheme%3A!!f%2ChidePanelTitles%3A!!f%2CuseMargins%3A!!t)%2Cpanels%3A!!((embeddableConfig%3A()%2CgridData%3A(h%3A15%2Ci%3A!'1!'%2Cw%3A24%2Cx%3A0%2Cy%3A0)%2Cid%3A!'2be74ec0-f8d2-11e8-b6ae-93487c46f2e0!'%2CpanelIndex%3A!'1!'%2Ctype%3Avisualization%2Cversion%3A!'6.3.0!')%2C(embeddableConfig%3A()%2CgridData%3A(h%3A15%2Ci%3A!'2!'%2Cw%3A24%2Cx%3A24%2Cy%3A0)%2Cid%3A!'2e0f4260-f8d3-11e8-b6ae-93487c46f2e0!'%2CpanelIndex%3A!'2!'%2Ctype%3Avisualization%2Cversion%3A!'6.3.0!')%2C(embeddableConfig%3A()%2CgridData%3A(h%3A15%2Ci%3A!'3!'%2Cw%3A24%2Cx%3A0%2Cy%3A15)%2Cid%3Ac8d52580-f8d3-11e8-b6ae-93487c46f2e0%2CpanelIndex%3A!'3!'%2Ctype%3Avisualization%2Cversion%3A!'6.3.0!')%2C(embeddableConfig%3A()%2CgridData%3A(h%3A15%2Ci%3A!'4!'%2Cw%3A24%2Cx%3A24%2Cy%3A15)%2Cid%3Ab89c2970-f8d3-11e8-b6ae-93487c46f2e0%2CpanelIndex%3A!'4!'%2Ctype%3Avisualization%2Cversion%3A!'6.3.0!')%2C(embeddableConfig%3A()%2CgridData%3A(h%3A15%2Ci%3A!'5!'%2Cw%3A24%2Cx%3A0%2Cy%3A30)%2Cid%3A!'0d88e270-f8d4-11e8-b6ae-93487c46f2e0!'%2CpanelIndex%3A!'5!'%2Ctype%3Avisualization%2Cversion%3A!'6.3.0!'))%2Cquery%3A(language%3Alucene%2Cquery%3A!'!')%2CtimeRestore%3A!!f%2Ctitle%3A!'Testa%2BOverview!'%2CviewMode%3Aview)')%2Ctitle%3A'Testa%20Overview') 200 121ms - 9.0B"}
{"type":"response","@timestamp":"2018-12-15T08:33:41Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/api/reporting/jobs/list?page=0&ids=jpp7fnkl00012ac5d34hmx78","method":"get","headers":{"host":"stkn.org:5601","connection":"keep-alive","kbn-system-api":"true","kbn-version":"6.5.3","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36","content-type":"application/json","accept":"*/*","referer":"http://stkn.org:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"de,de-DE;q=0.9,en;q=0.8,en-US;q=0.7"},"remoteAddress":"192.168.48.1","userAgent":"192.168.48.1","referer":"http://stkn.org:5601/app/kibana"},"res":{"statusCode":200,"responseTime":80,"contentLength":9},"message":"GET /api/reporting/jobs/list?page=0&ids=jpp7fnkl00012ac5d34hmx78 200 80ms - 9.0B"}
{"type":"response","@timestamp":"2018-12-15T08:33:51Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/api/reporting/jobs/list?page=0&ids=jpp7fnkl00012ac5d34hmx78","method":"get","headers":{"host":"stkn.org:5601","connection":"keep-alive","kbn-system-api":"true","kbn-version":"6.5.3","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36","content-type":"application/json","accept":"*/*","referer":"http://stkn.org:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"de,de-DE;q=0.9,en;q=0.8,en-US;q=0.7"},"remoteAddress":"192.168.48.1","userAgent":"192.168.48.1","referer":"http://stkn.org:5601/app/kibana"},"res":{"statusCode":200,"responseTime":28,"contentLength":9},"message":"GET /api/reporting/jobs/list?page=0&ids=jpp7fnkl00012ac5d34hmx78 200 28ms - 9.0B"}
{"type":"response","@timestamp":"2018-12-15T08:34:01Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/api/reporting/jobs/list?page=0&ids=jpp7fnkl00012ac5d34hmx78","method":"get","headers":{"host":"stkn.org:5601","connection":"keep-alive","kbn-system-api":"true","kbn-version":"6.5.3","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36","content-type":"application/json","accept":"*/*","referer":"http://stkn.org:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"de,de-DE;q=0.9,en;q=0.8,en-US;q=0.7"},"remoteAddress":"192.168.48.1","userAgent":"192.168.48.1","referer":"http://stkn.org:5601/app/kibana"},"res":{"statusCode":200,"responseTime":29,"contentLength":9},"message":"GET /api/reporting/jobs/list?page=0&ids=jpp7fnkl00012ac5d34hmx78 200 29ms - 9.0B"}
{"type":"response","@timestamp":"2018-12-15T08:34:11Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/api/reporting/jobs/list?page=0&ids=jpp7fnkl00012ac5d34hmx78","method":"get","headers":{"host":"stkn.org:5601","connection":"keep-alive","kbn-system-api":"true","kbn-version":"6.5.3","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36","content-type":"application/json","accept":"*/*","referer":"http://stkn.org:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"de,de-DE;q=0.9,en;q=0.8,en-US;q=0.7"},"remoteAddress":"192.168.48.1","userAgent":"192.168.48.1","referer":"http://stkn.org:5601/app/kibana"},"res":{"statusCode":200,"responseTime":26,"contentLength":9},"message":"GET /api/reporting/jobs/list?page=0&ids=jpp7fnkl00012ac5d34hmx78 200 26ms - 9.0B"}
{"type":"response","@timestamp":"2018-12-15T08:34:21Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/api/reporting/jobs/list?page=0&ids=jpp7fnkl00012ac5d34hmx78","method":"get","headers":{"host":"stkn.org:5601","connection":"keep-alive","kbn-system-api":"true","kbn-version":"6.5.3","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36","content-type":"application/json","accept":"*/*","referer":"http://stkn.org:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"de,de-DE;q=0.9,en;q=0.8,en-US;q=0.7"},"remoteAddress":"192.168.48.1","userAgent":"192.168.48.1","referer":"http://stkn.org:5601/app/kibana"},"res":{"statusCode":200,"responseTime":34,"contentLength":9},"message":"GET /api/reporting/jobs/list?page=0&ids=jpp7fnkl00012ac5d34hmx78 200 34ms - 9.0B"}
{"type":"response","@timestamp":"2018-12-15T08:34:22Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/api/reporting/jobs/output/jpp7fnkl00012ac5d34hmx78","method":"get","headers":{"host":"stkn.org:5601","connection":"keep-alive","kbn-system-api":"true","kbn-version":"6.5.3","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36","content-type":"application/json","accept":"*/*","referer":"http://stkn.org:5601/app/kibana","accept-encoding":"gzip, deflate","accept-language":"de,de-DE;q=0.9,en;q=0.8,en-US;q=0.7"},"remoteAddress":"192.168.48.1","userAgent":"192.168.48.1","referer":"http://stkn.org:5601/app/kibana"},"res":{"statusCode":200,"responseTime":31,"contentLength":9},"message":"GET /api/reporting/jobs/output/jpp7fnkl00012ac5d34hmx78 200 31ms - 9.0B"}

Pick version and component

A few points based on my usage:

  1. It would be nice to add clear examples in the README on how to launch a specific version:
    TAG=6.3.1 docker-compose -f setup.yml up
    This will require a change in setup.yml to pass TAG through as an environment variable as well.

  2. It would also be useful to be able to set up only some of the components using just arguments, similar to
    docker-compose up -d elasticsearch kibana, i.e. something to limit the setup to only some components of the stack (Kibana and Elasticsearch).
    It is possible now by editing setup.sh to include only setup_kibana and commenting out the other parts, but I was wondering if there is a way to make it simpler using only environment variables. Documenting that somewhere in the README would help as well (see the sketch after this list).

  3. It would be great to document the URL of Elasticsearch: https://localhost:9200 (since Kibana is on HTTP but Elasticsearch is not). It can be a bit confusing.
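A sketch combining the two requests above, using only the commands already mentioned in this issue and assuming setup.yml is changed to pass TAG through as suggested in item 1:

# Pin a specific stack version for setup and runtime
TAG=6.3.1 docker-compose -f setup.yml up
# Start only part of the stack
TAG=6.3.1 docker-compose up -d elasticsearch kibana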

env variable ELASTIC_PASSWORD not honored through ash

Hi

I need to set ELASTIC_PASSWORD and edited ./scripts/setup.sh

if [ -z "${ELASTIC_PASSWORD}" ]; then                                                  
    printf "generating random password since variable ELASTIC_PASSWORD not set\n"      
    PW=$(openssl rand -base64 16;)                                                     
    ELASTIC_PASSWORD="${ELASTIC_PASSWORD:-$PW}"                                        
    export ELASTIC_PASSWORD                                                            
fi                                                                                     

but the setup.yml pipe to ash does not seem to honor environment variables:
command: ['cat ./scripts/setup.sh | tr -d "\r" | ash']

it is always undefined.

And before you ask why:
I had a previous setup with lots of Logstash, Beats, and rsyslog instances using the old password,
and some of them I do not have access to.

invalid mount config for type "bind": bind source path does not exist

Hello, having some issues bringing up the stack,

root@min1 /e/elastic# uname -a
Linux min1 3.10.0-862.14.4.el7.x86_64 #1 SMP Wed Sep 26 15:12:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux


root@min1 /e/elastic# docker version
Client:
 Version:           18.09.0
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        4d60db4
 Built:             Wed Nov  7 00:48:22 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:19:08 2018
  OS/Arch:          linux/amd64
  Experimental:     false

I ran docker-compose -f setup.yml up; the entire setup completes and gives me a password.


root@min1 /e/elastic# docker-compose -f setup.yml up --remove-orphans
Creating network "elastic_default" with the default driver
Creating elastic_setup_1_2ca56a7c094b ... done
Attaching to elastic_setup_1_2564d05a80d6
setup_1_2564d05a80d6 | Creating network "elastic_stack" with the default driver
setup_1_2564d05a80d6 | Found orphan containers (elastic_setup_1_2564d05a80d6) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating setup_elasticsearch ... done
Attaching to setup_elasticsearch
setup_1_2564d05a80d6 | setup_elasticsearch    | Determining if x-pack is installed...
setup_1_2564d05a80d6 | setup_elasticsearch    | === CREATE Keystore ===
setup_1_2564d05a80d6 | setup_elasticsearch    | Elastic password is: "zqEag8W0Q5unC56UkEmnkg=="
setup_1_2564d05a80d6 | setup_elasticsearch    | Created elasticsearch keystore in /usr/share/elasticsearch/config
setup_1_2564d05a80d6 | setup_elasticsearch    | Setting bootstrap.password...
setup_1_2564d05a80d6 | setup_elasticsearch    | === CREATE SSL CERTS ===
setup_1_2564d05a80d6 | setup_elasticsearch    | Remove old ca zip...
setup_1_2564d05a80d6 | setup_elasticsearch    | Creating docker-cluster-ca.zip...
setup_1_2564d05a80d6 | setup_elasticsearch    | CA directory exists, removing...
setup_1_2564d05a80d6 | setup_elasticsearch    | Unzip ca files...
setup_1_2564d05a80d6 | setup_elasticsearch    | Archive:  /config/ssl/docker-cluster-ca.zip
setup_1_2564d05a80d6 | setup_elasticsearch    |    creating: /config/ssl/ca/
setup_1_2564d05a80d6 | setup_elasticsearch    |   inflating: /config/ssl/ca/ca.crt   
setup_1_2564d05a80d6 | setup_elasticsearch    |   inflating: /config/ssl/ca/ca.key   
setup_1_2564d05a80d6 | setup_elasticsearch    | Remove old docker-cluster.zip zip...
setup_1_2564d05a80d6 | setup_elasticsearch    | Create cluster certs zipfile...
setup_1_2564d05a80d6 | setup_elasticsearch    | Unzipping cluster certs zipfile...
setup_1_2564d05a80d6 | setup_elasticsearch    | Archive:  /config/ssl/docker-cluster.zip
setup_1_2564d05a80d6 | setup_elasticsearch    |    creating: /config/ssl/docker-cluster/elasticsearch/
setup_1_2564d05a80d6 | setup_elasticsearch    |   inflating: /config/ssl/docker-cluster/elasticsearch/elasticsearch.crt  
setup_1_2564d05a80d6 | setup_elasticsearch    |   inflating: /config/ssl/docker-cluster/elasticsearch/elasticsearch.key  
setup_1_2564d05a80d6 | setup_elasticsearch    |    creating: /config/ssl/docker-cluster/kibana/
setup_1_2564d05a80d6 | setup_elasticsearch    |   inflating: /config/ssl/docker-cluster/kibana/kibana.crt  
setup_1_2564d05a80d6 | setup_elasticsearch    |   inflating: /config/ssl/docker-cluster/kibana/kibana.key  
setup_1_2564d05a80d6 | setup_elasticsearch    |    creating: /config/ssl/docker-cluster/logstash/
setup_1_2564d05a80d6 | setup_elasticsearch    |   inflating: /config/ssl/docker-cluster/logstash/logstash.crt  
setup_1_2564d05a80d6 | setup_elasticsearch    |   inflating: /config/ssl/docker-cluster/logstash/logstash.key  
setup_1_2564d05a80d6 | setup_elasticsearch    | Move logstash certs to logstash config dir...
setup_1_2564d05a80d6 | setup_elasticsearch    | Move kibana certs to kibana config dir...
setup_1_2564d05a80d6 | setup_elasticsearch    | Move elasticsearch certs to elasticsearch config dir...
setup_1_2564d05a80d6 | setup_elasticsearch exited with code 0
setup_1_2564d05a80d6 | Found orphan containers (elastic_setup_1_2564d05a80d6) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Creating elasticsearch ... done
Creating setup_kibana   ... done
Creating setup_logstash ... done
Attaching to setup_kibana, setup_logstash
setup_1_2564d05a80d6 | setup_kibana           | -rw-rw-r--. 1 root root 1200 Nov 26 16:39 /usr/share/kibana/config/ca/ca.crt
setup_1_2564d05a80d6 | setup_logstash         | -rw-rw-r--. 1 root root 1200 Nov 26 16:39 /usr/share/logstash/config/ca/ca.crt
setup_1_2564d05a80d6 | setup_logstash         | {"error":{"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: passwords must be at least [6] characters long;"}],"type":"validation_exception","reason":"Validation Failed: 1: passwords must be at least [6] characters long;"},"status":400}=== CREATE Keystore ===
setup_1_2564d05a80d6 | setup_logstash         | Remove old logstash.keystore
setup_1_2564d05a80d6 | setup_kibana           | {"error":{"root_cause":[{"type":"validation_exception","reason":"Validation Failed: 1: passwords must be at least [6] characters long;"}],"type":"validation_exception","reason":"Validation Failed: 1: passwords must be at least [6] characters long;"},"status":400}=== CREATE Keystore ===
setup_1_2564d05a80d6 | setup_kibana           | Remove old kibana.keystore
setup_1_2564d05a80d6 | setup_kibana           | Created Kibana keystore in /usr/share/kibana/data/kibana.keystore
setup_1_2564d05a80d6 | setup_kibana           | Setting elasticsearch.password: "zqEag8W0Q5unC56UkEmnkg=="
setup_1_2564d05a80d6 | setup_kibana exited with code 0
setup_1_2564d05a80d6 | setup_logstash         | 
setup_1_2564d05a80d6 | setup_logstash         | WARNING: The keystore password is not set. Please set the environment variable `LOGSTASH_KEYSTORE_PASS`. Failure to do so will result in reduced security. Continue without password protection on the keystore? [y/N] Created Logstash keystore at /usr/share/logstash/config/logstash.keystore
setup_1_2564d05a80d6 | setup_logstash         | Setting ELASTIC_PASSWORD...
setup_1_2564d05a80d6 | setup_logstash         | 
setup_1_2564d05a80d6 | setup_logstash         | Enter value for ELASTIC_PASSWORD: Added 'elastic_password' to the Logstash keystore.
setup_1_2564d05a80d6 | setup_logstash exited with code 0
setup_1_2564d05a80d6 | Found orphan containers (elastic_setup_1_2564d05a80d6) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
setup_1_2564d05a80d6 | elasticsearch is up-to-date
Creating kibana ... done
Creating setup_packetbeat ... 
setup_1_2564d05a80d6 | Creating setup_metricbeat ... 
setup_1_2564d05a80d6 | Creating setup_filebeat   ... 
Creating setup_packetbeat ... done
Creating setup_metricbeat ... done
Creating setup_filebeat   ... done
setup_1_2564d05a80d6 | 
Creating setup_apm_server ... done
Creating setup_heartbeat  ... done
setup_1_2564d05a80d6 | 
setup_1_2564d05a80d6 | ERROR: for setup_auditbeat  Cannot start service setup_auditbeat: linux spec capabilities: Unknown capability to add: "CAP_AUDIT_READ"
setup_1_2564d05a80d6 | Encountered errors while bringing up the project.
setup_1_2564d05a80d6 | Setup completed successfully. To start the stack please run:
setup_1_2564d05a80d6 | 	 docker-compose up -d
setup_1_2564d05a80d6 | 
setup_1_2564d05a80d6 | If you wish to remove the setup containers please run:
setup_1_2564d05a80d6 | 	docker-compose -f docker-compose.yml -f docker-compose.setup.yml down --remove-orphans
setup_1_2564d05a80d6 | 
setup_1_2564d05a80d6 | You will have to re-start the stack after removing setup containers.
setup_1_2564d05a80d6 | 
setup_1_2564d05a80d6 | Your 'elastic' user password is: "zqEag8W0Q5unC56UkEmnkg=="
elastic_setup_1_2564d05a80d6 exited with code 0

When I start up the entire thing, I get bind errors:

root@min1 /e/elastic# docker-compose up 
Recreating elasticsearch ... done
Recreating kibana        ... done
Creating logstash        ... done
Creating metricbeat      ... 
Creating apm_server      ... error
Creating filebeat        ... 
Creating metricbeat      ... error
Creating packetbeat      ... 

ERROR: for apm_server  Cannot create container for service apm-server: invalid mount config for type "bind": bind source path does not exist: /etc/elastic/config/apm-server/apm-server.keystore
Creating heartbeat       ... 
Creating auditbeat       ... error
ERROR: for metricbeat  Cannot create container for service metricbeat: invalid mount config for type "bind": bind source path does not exist: /etc/elastic/config/metricbeat/metricbeat.keystore

Creating packetbeat      ... error
Creating filebeat        ... done

ERROR: for packetbeat  Cannot create container for service packetbeat: invalid mount config for type "bind": bind source path does noCreating heartbeat       ... done

ERROR: for auditbeat  Cannot create container for service auditbeat: invalid mount config for type "bind": bind source path does not exist: /etc/elastic/config/auditbeat/auditbeat.keystore

ERROR: for apm-server  Cannot create container for service apm-server: invalid mount config for type "bind": bind source path does not exist: /etc/elastic/config/apm-server/apm-server.keystore

ERROR: for metricbeat  Cannot create container for service metricbeat: invalid mount config for type "bind": bind source path does not exist: /etc/elastic/config/metricbeat/metricbeat.keystore

ERROR: for packetbeat  Cannot create container for service packetbeat: invalid mount config for type "bind": bind source path does not exist: /etc/elastic/config/packetbeat/packetbeat.keystore
ERROR: Encountered errors while bringing up the project.

Swarm Support

Hello Admins,

Can we also add Docker Swarm support? I am able to configure and run Elastic Stack 6.5.0 on a Swarm cluster. I would like to contribute, but I'm not sure how to add that to the current structure.

Hanging on setup

Hi, on two different machines, docker-compose -f setup.yml up has hung at this point:

Creating elasticsearch ... done
Creating setup_logstash ... done
Creating setup_kibana   ... done
Attaching to setup_logstash, setup_kibana
setup_1  | setup_kibana           | -rw-rw-r-- 1 root root 1200 Sep  2 22:05 /usr/share/kibana/config/ca/ca.crt
setup_1  | setup_logstash         | -rw-rw-r-- 1 root root 1200 Sep  2 22:05 /usr/share/logstash/config/ca/ca.crt

Is the process done here, and do I have to manually tear down the containers? Or have they truly hung? The host box is Ubuntu 18.01. Please let me know if I can provide anything else, thanks!

Ability to Run Logstash with Centralized Pipeline Management

I would like to be able to use stack-docker to run Logstash with Centralized Pipeline Management.

To do this we would need to either make the logstash.yml file publicly available for editing OR automatically configure the logstash instance to use centralized pipeline management and specify the name of the pipelines that could be run, something like "script-1", "script-2" in the xpack.management.pipeline.id parameter (see configuration for more details).

I think the best approach might be to have a separate Logstash instance that is correctly configured with a username, password, and predefined script names. However, this instance might be commented out, because it might cause confusion to have two different Logstash instances running.

I would really like to have this feature for demonstrations.

.env is not used for ELASTIC_PASSWORD

Hi,

if the password is changed in the .env file, none of the Beats (Metricbeat, Filebeat, Auditbeat) can be registered, due to lack of permissions.
The reason is that the password is not taken from the $ELASTIC_PASSWORD environment variable but from the configuration files (e.g. /usr/share/filebeat/filebeat.yml for Filebeat).

Steps to reproduce

  1. Clone the repo
  2. Change the password in .env file
  3. docker-compose up
  4. check the status of containers. Example output for filebeat:
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elasticsearch:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]

Error running elastic stack.

[root@localhost start]# docker-compose up -d
WARNING: The ELASTIC_PASSWORD variable is not set. Defaulting to a blank string.
WARNING: The TAG variable is not set. Defaulting to a blank string.
ERROR: no such image: docker.elastic.co/elasticsearch/elasticsearch-platinum:: invalid reference format
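The warnings indicate that the variables the compose file interpolates (normally supplied via the repository's .env file) are unset, which produces the empty tag in the image reference. A hedged illustration of supplying them explicitly; the values are placeholders, not project defaults:

# Export the variables docker-compose expects, then retry
export TAG=<stack-version>
export ELASTIC_PASSWORD='<your-password>'
docker-compose up -d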

Using Apache 2.0 as License

By following the setup, we get it installed, but it has a 14-day license; once it expires, can we no longer use the setup?

Please document whether it is possible to use the Apache license, even if it means reduced features.

Can't load dataset

I set up Elasticsearch and all the containers are up and running.

But I'm facing an issue while loading the dataset into Elasticsearch. I set up the mappings and am trying to load the dataset (I'm following the tutorial: https://www.elastic.co/guide/en/kibana/current/tutorial-load-dataset.html).

On running the curl upload command -
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json,
I'm getting the error 'curl: (52) Empty reply from server'.

On listing the docker containers it shows the below containers are running successfully.
docker.elastic.co/kibana/kibana:6.3.0 "/usr/local/bin/kiba…" 2 hours ago Up 2 hours (healthy) 0.0.0.0:5601->5601/tcp kibana
a3aa8556c225 docker.elastic.co/logstash/logstash:6.3.0 "/usr/local/bin/dock…" 2 hours ago Up 2 hours (healthy) 5044/tcp, 9600/tcp logstash
79c75ecde762 docker.elastic.co/elasticsearch/elasticsearch:6.3.0 "/usr/local/bin/dock…" 2 hours ago Up 2 hours (healthy) 0.0.0.0:9200->9200/tcp, 9300/tcp elasticsearch

But when I check http://localhost:9200/bank it is not up and running. Not sure what is wrong here.
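Given the README's note that this stack serves Elasticsearch over HTTPS with a self-signed certificate and authentication, a hedged variant of the bulk upload command (password is a placeholder; -k skips certificate verification):

curl -k -u elastic:<password> -H 'Content-Type: application/x-ndjson' -XPOST 'https://localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json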

Cannot startup kibana

In my setup I have a reverse proxy that points to the Docker stacks.
When I run your stack, I cannot connect to port 5601.
When I run the official ELK stack compose, I do have access to port 5601.

Any ideas why I cannot connect to your 5601 port?

  • connection refused

X-Pack?

I would say that you can't have a canonical presentation for Elastic without incorporating X-Pack. I totally realize that this makes configuration more complicated, but if it is "ours" we need to show it.

Logstash created pipeline in Kibana Management doesn't work, while same pipeline work as main

Hi, I added a netflow pipeline in Kibana Management -> Logstash -> Pipelines:

input {
  udp {
    port => 40006
    codec => netflow {
      versions => [5, 9]
    }
    type => netflow
  }
}

output {
  if ( [type] == "netflow" ) {
    elasticsearch {
      index    => "logstash-netflow-%{host}-%{+YYYY.MM.dd}"
      hosts    => [ 'elasticsearch' ]
      user     => 'elastic'
      password => "${ELASTIC_PASSWORD}"
      ssl      => true
      cacert   => '/usr/share/logstash/config/certs/ca/ca.crt'
    }
  } else {
    elasticsearch {
      index    => "logstash-n-%{type}-%{+YYYY.MM.dd}"
      hosts    => [ 'elasticsearch' ]
      user     => 'elastic'
      password => "${ELASTIC_PASSWORD}"
      ssl      => true
      cacert   => '/usr/share/logstash/config/certs/ca/ca.crt'
    }
  }
}

and it did not work, but the main

input {
  heartbeat {
    interval => 5
    message  => 'Hello from Logstash 💓'
  }
}

output {
  elasticsearch {
    hosts    => [ 'elasticsearch' ]
    user     => 'elastic'
    password => "${ELASTIC_PASSWORD}"  # read password from logstash.keystore
    ssl => true
    cacert => '/usr/share/logstash/config/certs/ca/ca.crt'
  }
}

pipeline did work perfectly. Then I copied the content of my pipeline to ./config/logstash/pipeline/logstash.conf, deleted the pipeline I had created in Kibana, and did docker-compose up -d, and my pipeline started to work perfectly fine.

What do I need to do to make additional pipelines work besides the main one? Ideally I would like to be able to add multiple files under ./config/logstash/pipeline/, as I want everything configured just from docker-compose, with as little manual configuration as possible.

Update to 5.X

This needs to be updated to a modern version (i.e., not 2.1).

can't open './scripts/setup.sh': No such file or directory

Does docker-compose -f setup.yml up work on Windows?
I did all the preceding steps as requested:

COMPOSE_CONVERT_WINDOWS_PATHS=1
PWD=/path/to/checkout/for/stack-docker

My path is: /c/Github-repos/stack-docker-master

It doesn't work: when running "docker-compose -f setup.yml up" I still get "setup_1 | cat: can't open './scripts/setup.sh': No such file or directory".
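A hedged sanity check before re-running the setup is to confirm, in the same PowerShell session you run docker-compose from, that both variables are actually set (the expected values in the comments are examples for this checkout):

# In the PowerShell session used for docker-compose:
echo $Env:COMPOSE_CONVERT_WINDOWS_PATHS   # expect: 1
echo $Env:PWD                             # expect e.g.: /c/Github-repos/stack-docker-master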

"docker-compose up" fails.

Steps to reproduce

$ cat /etc/fedora-release
Fedora release 25 (Twenty Five)

$ docker-compose version
docker-compose version 1.9.0, build 2585387
docker-py version: 1.10.6
CPython version: 2.7.13
OpenSSL version: OpenSSL 1.0.2k-fips 26 Jan 2017

$ docker-compose up
ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.elasticsearch: 'healthcheck'
Unsupported config option for services.kibana: 'healthcheck'
services.filebeat.depends_on contains an invalid type, it should be an array
services.heartbeat.depends_on contains an invalid type, it should be an array
services.kibana.depends_on contains an invalid type, it should be an array
services.logstash.depends_on contains an invalid type, it should be an array
services.metricbeat.depends_on contains an invalid type, it should be an array
services.packetbeat.depends_on contains an invalid type, it should be an array
services.create_logstash_index_pattern.depends_on contains an invalid type, it should be an array
services.import_dashboards.depends_on contains an invalid type, it should be an array
services.set_default_index_pattern.depends_on contains an invalid type, it should be an array
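For context, docker-compose 1.9.0 appears to be too old to parse the options this project's docker-compose.yml relies on: healthcheck and the mapping ("condition") form of depends_on belong to a newer Compose file format, which is exactly what the unsupported-option and "should be an array" errors point at. A minimal sketch of the kind of configuration that triggers them (service names assumed from this project):

version: '2.1'
services:
  kibana:
    healthcheck:                 # not understood by old docker-compose clients
      test: ["CMD-SHELL", "curl -s http://localhost:5601 >/dev/null || exit 1"]
      interval: 30s
      retries: 10
  filebeat:
    depends_on:                  # mapping form; old clients expect a plain array here
      kibana:
        condition: service_healthy

Upgrading docker-compose to a recent release (for example via pip install --upgrade docker-compose) should make these parse errors go away.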

beats container failing to be created

Hi,

I'm getting the following errors when trying to bring the stack up.

ERROR: for apm_server Cannot create container for service apm-server: invalid mount config for type "bind": bind source path does not exist

ERROR: for metricbeat Cannot create container for service metricbeat: invalid mount config for type "bind": bind source path does not exist

ERROR: for packetbeat Cannot create container for service packetbeat: invalid mount config for type "bind": bind source path does not exist

ERROR: for heartbeat Cannot create container for service heartbeat: invalid mount config for type "bind": bind source path does not exist

ERROR: for filebeat Cannot create container for service filebeat: invalid mount config for type "bind": bind source path does not exist

ERROR: for auditbeat Cannot create container for service auditbeat: invalid mount config for type "bind": bind source path does not exist

ERROR: for apm-server Cannot create container for service apm-server: invalid mount config for type "bind": bind source path does not exist

ERROR: for metricbeat Cannot create container for service metricbeat: invalid mount config for type "bind": bind source path does not exist

I have tried using an absolute path and even removed the volume mount, but the error still persists.
Any ideas?

Thanks,
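A hedged first step for debugging this is to have Compose print its fully resolved configuration and then confirm that every bind-mount source path it reports actually exists on the host (on Windows, also that it is in the /c/... form):

# Show the resolved compose file, including the host-side source of every bind mount
docker-compose config

# Then confirm each reported host path exists, e.g.
ls ./config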

SecretStoreException::UnknownException "Error while trying to load the Logstash keystore"

Hello

After the installation completed without problems, trying to bring up Logstash produces the following error:

ERROR: Failed to load settings file from "path.settings". Aborting... path.setting=/usr/share/logstash/config, exception=Java::OrgLogstashSecretStore::SecretStoreException::UnknownException, message=>Error while trying to load the Logstash keystore
[ERROR] 2018-08-02 14:47:36.244 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

I have no idea where to start looking.

I'm running it on Windows and the rest of the containers boot up fine.

Is there any way I can try to access the keystore manually to see if it is corrupt or anything?

Thank you!
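One hedged way to inspect the keystore manually, assuming the official Logstash image layout (and, if this project protects the keystore with LOGSTASH_KEYSTORE_PASS, that the variable is also set for the command):

# List the keys stored in the Logstash keystore using a one-off container
docker-compose run --rm logstash \
  bin/logstash-keystore --path.settings /usr/share/logstash/config list

If the keystore really is corrupt, this list command typically fails with the same SecretStoreException; recreating the keystore (create, then add ELASTIC_PASSWORD again) would be the usual next step.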

Auditbeat fails to start after setup

Using the docker-volumes branch
All beats and services start successfully except auditbeat.
After checking the logs, I see the error:

Exiting: error loading config file: config file ("config/auditbeat.yml") must be owned by the beat user (uid=0) or root

Any way around this? The volume that has the config file is owned by root, just like the others.
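A hedged workaround is to pass Beats' --strict.perms=false flag, which disables the ownership check on the config file, for example via the service's command in docker-compose (the -c path below simply mirrors the one in the error message; adjust it to however this project starts Auditbeat):

auditbeat:
  # --strict.perms=false turns off the "must be owned by root/the beat user" check
  command: auditbeat -e --strict.perms=false -c config/auditbeat.yml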

setup-beat.sh: Kibana not ready yet, but curl gives a successful exit code

Hi,

I tried installing 6.5.1 and encountered the following problem: when setup-beat.sh runs, Kibana is not ready yet and responds with a message (and an HTTP error status) saying so, but curl without -f still exits with a success code, so the wait loop stops too early. With the change below, curl treats the HTTP error status as a failure and the script keeps waiting.

Worked for me with this fix.

diff --git a/scripts/setup-beat.sh b/scripts/setup-beat.sh
index 2295d6fe..e5833c4c 100755
--- a/scripts/setup-beat.sh
+++ b/scripts/setup-beat.sh
@@ -4,7 +4,7 @@ set -euo pipefail
 
 beat=$1
 
-until curl -s -k http://kibana:5601; do
+until curl -f -s -k http://kibana:5601; do
   echo "Waiting for kibana..."
   sleep 5
 done

Kibana 6.6.0 crashes with Fatal error after initial setup

I am using elastic/stack-docker 6.6.0 on Windows 10 and followed the instructions in the Readme.md. The initial setup runs well, but when I run docker-compose up I see some Elasticsearch errors and Java exceptions, and Kibana reports a fatal error.

elasticsearch | [2019-02-08T17:45:59,892][ERROR][o.e.x.s.a.e.ReservedRealm] [SKQpBzc] failed to retrieve password hash for reserved user [elastic] elasticsearch | org.elasticsearch.action.UnavailableShardsException: at least one primary shard for the security index is unavailable

kibana | FATAL [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/doc/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}

How do I prevent the crash? Am I missing anything obvious? I tried version 6.5.2 and it didn't crash.
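A hedged way to narrow this down, assuming Elasticsearch's port 9200 is published to the host (otherwise run the same curl from inside a container), is to check cluster health and the shards of the security index before Kibana comes up; replace <elastic-password> with the password printed by the setup:

# Self-signed certs, hence -k
curl -k -u elastic:<elastic-password> "https://localhost:9200/_cluster/health?pretty"
curl -k -u elastic:<elastic-password> "https://localhost:9200/_cat/shards/.security*?v"

If the .security shards stay unassigned, the "failed to retrieve password hash" error and the Kibana 503 follow directly from that.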

APM Server not being able to authenticate with Elastic Server

Hello,
I stumbled upon this issue while trying to use docker-compose to set up a 6.5.1 stack (I modified the .env file for this).

Problem: I don't see activity in APM (Kibana), although the Java agent connects successfully to the APM Server and is able to send data to it.

Diagnostic: running docker-compose logs apm-server shows Unauthorized (...) failed to authenticate user [elastic]:

apm_server       | 2018-11-29T20:06:57.694Z	INFO	pipeline/output.go:93	Attempting to reconnect to backoff(elasticsearch(https://elasticsearch:9200)) with 6 reconnect attempt(s)
apm_server       | 2018-11-29T20:08:01.988Z	ERROR	pipeline/output.go:100	Failed to connect to backoff(elasticsearch(https://elasticsearch:9200)): 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":["Bearer realm=\"security\"","Basic realm=\"security\" charset=\"UTF-8\""]}}],"type":"security_exception","reason":"failed to authenticate user [elastic]","header":{"WWW-Authenticate":["Bearer realm=\"security\"","Basic realm=\"security\" charset=\"UTF-8\""]}},"status":401}

I replaced password: "${ELASTIC_PASSWORD}" with password: mypassword in apm-server.yml and the server started working perfectly. I now see services in Kibana/APM, traces, etc.

Is this an issue with the apm-server.keystore?
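A hedged way to check is to list the APM Server keystore from inside the container; APM Server uses the same keystore mechanism as the other Beats, so ELASTIC_PASSWORD should show up there if the setup created it (the service name is assumed from this compose project):

# List the keys known to the APM Server keystore
docker-compose exec apm-server apm-server keystore list

If the key is missing or the listing fails, the keystore (rather than the password itself) is the likely culprit.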

Nameserver 192.168.65.1 incorrect in filebeat container

Hello,

I receive the following error when starting the filebeat container:

2019-01-22T09:25:25.738Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://elasticsearch:9200)) with 1 reconnect attempt(s)
2019-01-22T09:25:25.765Z WARN transport/tcp.go:53 DNS lookup failure "elasticsearch": lookup elasticsearch on 192.168.65.1:53: no such host
2019-01-22T09:25:27.765Z ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://elasticsearch:9200)): Get http://elasticsearch:9200: lookup elasticsearch on 192.168.65.1:53: no such host

For example, the logstash container does not have this issue, and I found a difference in resolv.conf.

Wrong resolv.conf (filebeat):
nameserver 192.168.65.1
domain home

logstash and kibana resolv.conf (works):
nameserver 127.0.0.11
options ndots:0

The problem is that when I change it for filebeat, nothing happens and I am still unable to resolve "elasticsearch".

2019-01-22T09:39:05.221Z INFO pipeline/output.go:93 Attempting to reconnect to backoff(elasticsearch(http://elasticsearch:9200)) with 6 reconnect attempt(s)
2019-01-22T09:39:05.223Z WARN transport/tcp.go:53 DNS lookup failure "elasticsearch": lookup elasticsearch on 127.0.0.11:53: read udp 127.0.0.1:54728->127.0.0.11:53: read: connection refused

What's also an issue is that whenever I restart the filebeat container, resolv.conf is reset to the wrong values.
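A hedged check: the 127.0.0.11 resolver only exists on user-defined Docker networks, so a container that falls back to 192.168.65.1 is usually not attached to the same Compose network as the rest of the stack. Comparing the attached networks should confirm it (the container names below are assumptions; substitute whatever docker ps shows):

# Compare which Docker networks each container is attached to
docker inspect --format '{{json .NetworkSettings.Networks}}' filebeat
docker inspect --format '{{json .NetworkSettings.Networks}}' logstash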
