
docker-elk's Introduction

Elastic stack (ELK) on Docker


Run the latest version of the Elastic stack with Docker and Docker Compose.

It gives you the ability to analyze any data set by using the searching/aggregation capabilities of Elasticsearch and the visualization power of Kibana.

Based on the official Docker images from Elastic: Elasticsearch, Logstash, and Kibana.

Other available stack variants:

  • tls: TLS encryption enabled in Elasticsearch, Kibana (opt in), and Fleet
  • searchguard: Search Guard support

Important

Platinum features are enabled by default for a trial duration of 30 days. After this evaluation period, you will retain access to all the free features included in the Basic license, without any manual intervention required and without losing any data. Refer to the How to disable paid features section to opt out of this behaviour.


tl;dr

docker-compose up setup
docker-compose up

Animated demo


Philosophy

We aim to provide the simplest possible entry into the Elastic stack for anybody who feels like experimenting with this powerful combo of technologies. This project's default configuration is purposely minimal and unopinionated. It does not rely on any external dependency, and uses as little custom automation as necessary to get things up and running.

Instead, we believe in good documentation so that you can use this repository as a template, tweak it, and make it your own. sherifabdlnaby/elastdocker is one example among others of a project that builds upon this idea.


Contents

  1. Requirements
  2. Usage
  3. Configuration
  4. Extensibility
  5. JVM tuning
  6. Going further

Requirements

Host setup

Note

Especially on Linux, make sure your user has the required permissions to interact with the Docker daemon.

By default, the stack exposes the following ports:

  • 5044: Logstash Beats input
  • 50000: Logstash TCP input
  • 9600: Logstash monitoring API
  • 9200: Elasticsearch HTTP
  • 9300: Elasticsearch TCP transport
  • 5601: Kibana
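In docker-compose.yml, these port mappings take roughly the following shape (a sketch for orientation; check the actual Compose file, which may differ between stack versions):

```yaml
services:
  elasticsearch:
    ports:
      - "9200:9200"   # HTTP API
      - "9300:9300"   # TCP transport
  logstash:
    ports:
      - "5044:5044"   # Beats input
      - "50000:50000" # TCP input
      - "9600:9600"   # monitoring API
  kibana:
    ports:
      - "5601:5601"   # web UI
```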

Warning

Elasticsearch's bootstrap checks were purposely disabled to facilitate the setup of the Elastic stack in development environments. For production setups, we recommend users to set up their host according to the instructions from the Elasticsearch documentation: Important System Configuration.
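As a concrete example of such host configuration, one of the bootstrap checks requires vm.max_map_count to be at least 262144 on Linux. A minimal sketch of a sysctl drop-in (the file name is hypothetical; the setting itself comes from the Elasticsearch documentation):

```ini
# /etc/sysctl.d/99-elasticsearch.conf (hypothetical file name)
# Raise the mmap count limit required by Elasticsearch in production
vm.max_map_count = 262144
```

Apply it with sudo sysctl --system, or with sysctl -w vm.max_map_count=262144 for the current boot only.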

Docker Desktop

Windows

If you are using the legacy Hyper-V mode of Docker Desktop for Windows, ensure File Sharing is enabled for the C: drive.

macOS

The default configuration of Docker Desktop for Mac allows mounting files from /Users/, /Volumes/, /private/, /tmp and /var/folders exclusively. Make sure the repository is cloned in one of those locations, or follow the instructions from the documentation to add more locations.

Usage

Warning

You must rebuild the stack images with docker-compose build whenever you switch branches or update the version of an already existing stack.

Bringing up the stack

Clone this repository onto the Docker host that will run the stack with the command below:

git clone https://github.com/deviantony/docker-elk.git

Then, initialize the Elasticsearch users and groups required by docker-elk by executing the command:

docker-compose up setup

If everything went well and the setup completed without error, start the other stack components:

docker-compose up

Note

You can also run all services in the background (detached mode) by appending the -d flag to the above command.

Give Kibana about a minute to initialize, then access the Kibana web UI by opening http://localhost:5601 in a web browser and use the following (default) credentials to log in:

  • user: elastic
  • password: changeme

Note

Upon the initial startup, the elastic, logstash_internal and kibana_system Elasticsearch users are initialized with the values of the passwords defined in the .env file ("changeme" by default). The first one is the built-in superuser; the other two are used by Logstash and Kibana respectively to communicate with Elasticsearch. This task is only performed during the initial startup of the stack. To change users' passwords after they have been initialized, please refer to the instructions in the next section.

Initial setup

Setting up user authentication

Note

Refer to Security settings in Elasticsearch to disable authentication.

Warning

Starting with Elastic v8.0.0, it is no longer possible to run Kibana using the bootstrapped privileged elastic user.

The "changeme" password set by default for all the users above is insecure. For increased security, we will reset the passwords of these Elasticsearch users to random secrets.

  1. Reset passwords for default users

    The commands below reset the passwords of the elastic, logstash_internal and kibana_system users. Take note of them.

    docker-compose exec elasticsearch bin/elasticsearch-reset-password --batch --user elastic
    docker-compose exec elasticsearch bin/elasticsearch-reset-password --batch --user logstash_internal
    docker-compose exec elasticsearch bin/elasticsearch-reset-password --batch --user kibana_system

    If the need for it arises (e.g. if you want to collect monitoring information through Beats and other components), feel free to repeat this operation at any time for the rest of the built-in users.

  2. Replace usernames and passwords in configuration files

    Replace the password of the elastic user inside the .env file with the password generated in the previous step. Its value isn't used by any core component, but extensions use it to connect to Elasticsearch.

    Note: in case you don't plan on using any of the provided extensions, or prefer to create your own roles and users to authenticate these services, it is safe to remove the ELASTIC_PASSWORD entry from the .env file altogether after the stack has been initialized.

    Replace the password of the logstash_internal user inside the .env file with the password generated in the previous step. Its value is referenced inside the Logstash pipeline file (logstash/pipeline/logstash.conf).

    Replace the password of the kibana_system user inside the .env file with the password generated in the previous step. Its value is referenced inside the Kibana configuration file (kibana/config/kibana.yml).

    See the Configuration section below for more information about these configuration files.

  3. Restart Logstash and Kibana to re-connect to Elasticsearch using the new passwords

    docker-compose up -d logstash kibana
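After step 2, the .env file contains entries along these lines (the variable names follow this repository's conventions, but verify them against your checkout; the values are placeholders for the generated secrets):

```ini
ELASTIC_PASSWORD='<generated elastic password>'
LOGSTASH_INTERNAL_PASSWORD='<generated logstash_internal password>'
KIBANA_SYSTEM_PASSWORD='<generated kibana_system password>'
```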

Note

Learn more about the security of the Elastic stack at Secure the Elastic Stack.

Injecting data

Launch the Kibana web UI by opening http://localhost:5601 in a web browser, and use the following credentials to log in:

  • user: elastic
  • password: <your generated elastic password>

Now that the stack is fully configured, you can go ahead and inject some log entries.

The shipped Logstash configuration allows you to send data over the TCP port 50000. For example, you can use one of the following commands — depending on your installed version of nc (Netcat) — to ingest the content of the log file /path/to/logfile.log in Elasticsearch, via Logstash:

# Execute `nc -h` to determine your `nc` version

cat /path/to/logfile.log | nc -q0 localhost 50000          # BSD
cat /path/to/logfile.log | nc -c localhost 50000           # GNU
cat /path/to/logfile.log | nc --send-only localhost 50000  # nmap

You can also load the sample data provided by your Kibana installation.

Cleanup

Elasticsearch data is persisted inside a volume by default.

In order to entirely shutdown the stack and remove all persisted data, use the following Docker Compose command:

docker-compose down -v

Version selection

This repository stays aligned with the latest version of the Elastic stack. The main branch tracks the current major version (8.x).

To use a different version of the core Elastic components, simply change the version number inside the .env file. If you are upgrading an existing stack, remember to rebuild all container images using the docker-compose build command.

Important

Always pay attention to the official upgrade instructions for each individual component before performing a stack upgrade.

Older major versions are also supported on separate branches:

Configuration

Important

Configuration is not dynamically reloaded; you will need to restart individual components after any configuration change.

How to configure Elasticsearch

The Elasticsearch configuration is stored in elasticsearch/config/elasticsearch.yml.

You can also specify the options you want to override by setting environment variables inside the Compose file:

elasticsearch:

  environment:
    network.host: _non_loopback_
    cluster.name: my-cluster

Please refer to the following documentation page for more details about how to configure Elasticsearch inside Docker containers: Install Elasticsearch with Docker.

How to configure Kibana

The Kibana default configuration is stored in kibana/config/kibana.yml.

You can also specify the options you want to override by setting environment variables inside the Compose file:

kibana:

  environment:
    SERVER_NAME: kibana.example.org

Please refer to the following documentation page for more details about how to configure Kibana inside Docker containers: Install Kibana with Docker.

How to configure Logstash

The Logstash configuration is stored in logstash/config/logstash.yml.

You can also specify the options you want to override by setting environment variables inside the Compose file:

logstash:

  environment:
    LOG_LEVEL: debug

Please refer to the following documentation page for more details about how to configure Logstash inside Docker containers: Configuring Logstash for Docker.

How to disable paid features

You can cancel an ongoing trial before its expiry date — and thus revert to a basic license — either from the License Management panel of Kibana, or using Elasticsearch's start_basic Licensing API. Please note that the second option is the only way to recover access to Kibana if the license isn't either switched to basic or upgraded before the trial's expiry date.

Changing the license type by switching the value of Elasticsearch's xpack.license.self_generated.type setting from trial to basic (see License settings) will only work if done prior to the initial setup. After a trial has been started, the loss of features from trial to basic must be acknowledged using one of the two methods described in the first paragraph.
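For the setting-based approach, the override is a single line in elasticsearch/config/elasticsearch.yml (only effective if applied before the initial setup, as explained above):

```yaml
# Generate a Basic license instead of a 30-day Trial on first startup
xpack.license.self_generated.type: basic
```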

How to scale out the Elasticsearch cluster

Follow the instructions from the Wiki: Scaling out Elasticsearch

How to re-execute the setup

To run the setup container again and re-initialize all users for which a password was defined inside the .env file, simply "up" the setup Compose service again:

$ docker-compose up setup
 ⠿ Container docker-elk-elasticsearch-1  Running
 ⠿ Container docker-elk-setup-1          Created
Attaching to docker-elk-setup-1
...
docker-elk-setup-1  | [+] User 'monitoring_internal'
docker-elk-setup-1  |    ⠿ User does not exist, creating
docker-elk-setup-1  | [+] User 'beats_system'
docker-elk-setup-1  |    ⠿ User exists, setting password
docker-elk-setup-1 exited with code 0

How to reset a password programmatically

If for any reason you are unable to use Kibana to change the password of your users (including built-in users), you can use the Elasticsearch API instead and achieve the same result.

In the example below, we reset the password of the elastic user (notice "/user/elastic" in the URL):

curl -XPOST -D- 'http://localhost:9200/_security/user/elastic/_password' \
    -H 'Content-Type: application/json' \
    -u elastic:<your current elastic password> \
    -d '{"password" : "<your new password>"}'

Extensibility

How to add plugins

To add plugins to any ELK component you have to:

  1. Add a RUN statement to the corresponding Dockerfile (e.g. RUN logstash-plugin install logstash-filter-json)
  2. Add the associated plugin configuration to the service configuration (e.g. Logstash input/output)
  3. Rebuild the images using the docker-compose build command
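As an illustration, a Logstash Dockerfile extended with an extra filter plugin could look roughly like the sketch below (the ELASTIC_VERSION build argument and base image are assumptions based on this repository's layout; check the actual Dockerfile):

```dockerfile
ARG ELASTIC_VERSION

# Official Logstash image from Elastic
FROM docker.elastic.co/logstash/logstash:${ELASTIC_VERSION}

# Install an additional plugin, e.g. the JSON filter
RUN logstash-plugin install logstash-filter-json
```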

How to enable the provided extensions

A few extensions are available inside the extensions directory. These extensions provide features which are not part of the standard Elastic stack, but can be used to enrich it with extra integrations.

The documentation for these extensions is provided inside each individual subdirectory, on a per-extension basis. Some of them require manual changes to the default ELK configuration.

JVM tuning

How to specify the amount of memory used by a service

The startup scripts for Elasticsearch and Logstash can append extra JVM options from the value of an environment variable, allowing the user to adjust the amount of memory that can be used by each component:

Service         Environment variable
Elasticsearch   ES_JAVA_OPTS
Logstash        LS_JAVA_OPTS

To accommodate environments where memory is scarce (Docker Desktop for Mac has only 2 GB available by default), the Heap Size allocation is capped by default in the docker-compose.yml file to 512 MB for Elasticsearch and 256 MB for Logstash. If you want to override the default JVM configuration, edit the matching environment variable(s) in the docker-compose.yml file.

For example, to increase the maximum JVM Heap Size for Logstash:

logstash:

  environment:
    LS_JAVA_OPTS: -Xms1g -Xmx1g

When these options are not set:

  • Elasticsearch starts with a JVM Heap Size that is determined automatically.
  • Logstash starts with a fixed JVM Heap Size of 1 GB.

How to enable a remote JMX connection to a service

As for the Java Heap memory (see above), you can specify JVM options to enable JMX and map the JMX port on the Docker host.

Update the {ES,LS}_JAVA_OPTS environment variable with the following content (the example maps the JMX service on port 18080; you can change that). Do not forget to update the -Djava.rmi.server.hostname option with the IP address of your Docker host (replace DOCKER_HOST_IP):

logstash:

  environment:
    LS_JAVA_OPTS: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=18080 -Dcom.sun.management.jmxremote.rmi.port=18080 -Djava.rmi.server.hostname=DOCKER_HOST_IP -Dcom.sun.management.jmxremote.local.only=false

Going further

Plugins and integrations

See the following Wiki pages:

docker-elk's People

Contributors

a-wakeel, alkanarslan, antoineco, binary-signal, bobspadger, brint, bubbajames-docker, cdituri, danielabbatt, dawitnida, dependabot[bot], deviantony, dm, docker-elk-updater[bot], doubledi, github-actions[bot], harissistek, harry-wood, jihchi, jwsy, kg-ops, kmisachenka, matulko, michaeltarleton, mkozjak, monobook, mtdavidson, opw0011, penderi, thepill


docker-elk's Issues

Elasticsearch high CPU usage

Hello,

I am facing an issue with Elasticsearch: I see it reach 176% CPU usage on my server, so I was wondering whether this is a bug in this release or in the stack itself. It does not happen immediately, but after two hours I see these peaks in CPU usage that sometimes lead to an unresponsive server due to the load.

Is anyone else facing this issue as well?

My java version is

$~# java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

And a log I was recording with top to search for the CPU peaks

root@g:~# cat cpu-log.txt | grep colord

 6.9   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.8   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.8   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.8   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.7   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.7   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.6   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.6   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.6   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.6   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.6   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.5   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.5   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.5   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 6.5   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 7.3   389 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

**83.5  5864 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0**

57.5  5864 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0

 **110  6949 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0**

**86.6  8100 colord   /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/share/elasticsearch -cp /usr/share/elasticsearch/lib/elasticsearch-2.3.5.jar:/usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start -Des.network.host=0.0.0.0**

How to process log file on host system

Say I have a web app that is logging to a file that I have mounted with Docker volumes, and I want Logstash to process this file. Is there a way to configure the Logstash container to get access to this file? I am a bit of a newbie when it comes to Docker and ELK, but I can't seem to find any good docs on this particular case. Thanks!
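One possible approach, sketched below (not part of this repository's default configuration; the host path and mount point are hypothetical), is to bind-mount the host log file into the Logstash container and point a file input at the mounted path:

```yaml
# docker-compose.yml fragment for the logstash service
logstash:
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
    # hypothetical host path; adjust to your web app's log location
    - /var/log/mywebapp:/var/log/mywebapp:ro
```

Inside the pipeline configuration, a file input with path => ["/var/log/mywebapp/app.log"] would then pick up the mounted file.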

specify max log json file size in docker compose

Hi, I am using docker 1.11.2. I am trying to specify the max file size for json-file in docker-compose.yml, like this,

elasticsearch:
   image: elasticsearch:latest
   command: elasticsearch -Des.network.host=0.0.0.0
   ports:
      - "9200:9200"
      - "9300:9300"
   options:
       max-size: 50m
logstash:
   build: logstash/
   command: logstash -f /etc/logstash/conf.d/logstash.conf
   volumes:
      - ./logstash/config:/etc/logstash/conf.d
   ports:
      - "5000:5000"
   links:
      - elasticsearch
   options:
      max-size: 50m
kibana:
   build: kibana/
   volumes:
     - ./kibana/config/:/opt/kibana/config/
   ports:
      - "5601:5601"
   links:
     - elasticsearch
   options:
       max-size: 50m

I got the following errors,

ERROR: The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for elasticsearch: 'options'
Unsupported config option for kibana: 'options'
Unsupported config option for logstash: 'options'

How to set the max file size right in docker-compose.yml?
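For reference, per-service log rotation is expressed through a logging key rather than a bare options key. A sketch in Compose file format v2 and later (Compose format v1, as used in the file above, spelled this log_driver/log_opt instead, so verify against your Compose version):

```yaml
services:
  elasticsearch:
    image: elasticsearch:latest
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "3"
```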

Unable to start docker-elk

Disabling SELinux is not an option for me. But

.-root@centos ~
-$ chcon -R system_u:object_r:admin_home_t:s0 docker-elk/

doesn't help. It still can't get access

kibana_1         | FATAL CLI ERROR Error: EACCES: permission denied, open '/opt/kibana/config/kibana.yml'
kibana_1         |     at Error (native)
kibana_1         |     at Object.fs.openSync (fs.js:549:18)
kibana_1         |     at Object.fs.readFileSync (fs.js:397:15)
kibana_1         |     at module.exports (/opt/kibana/src/cli/serve/read_yaml_config.js:52:31)
kibana_1         |     at Command.callee$1$0$ (/opt/kibana/src/cli/serve/serve.js:67:22)
kibana_1         |     at tryCatch (/opt/kibana/node_modules/babel-runtime/regenerator/runtime.js:67:40)
kibana_1         |     at GeneratorFunctionPrototype.invoke [as _invoke] (/opt/kibana/node_modules/babel-runtime/regenerator/runtime.js:315:22)
kibana_1         |     at GeneratorFunctionPrototype.prototype.(anonymous function) [as next] (/opt/kibana/node_modules/babel-runtime/regenerator/runtime.js:100:21)
kibana_1         |     at invoke (/opt/kibana/node_modules/babel-runtime/regenerator/runtime.js:136:37)
kibana_1         |     at enqueueResult (/opt/kibana/node_modules/babel-runtime/regenerator/runtime.js:185:17)
kibana_1         |     at new Promise (/opt/kibana/node_modules/babel-core/node_modules/core-js/modules/es6.promise.js:201:7)
kibana_1         |     at Promise.exports.(anonymous function).target.(anonymous function).function.target.(anonymous function).F (/opt/kibana/node_modules/babel-runtime/node_modules/core-js/library/modules/$.export.js:30:36)
kibana_1         |     at AsyncIterator.enqueue (/opt/kibana/node_modules/babel-runtime/regenerator/runtime.js:184:12)
kibana_1         |     at AsyncIterator.prototype.(anonymous function) [as next] (/opt/kibana/node_modules/babel-runtime/regenerator/runtime.js:100:21)
kibana_1         |     at Object.runtime.async (/opt/kibana/node_modules/babel-runtime/regenerator/runtime.js:209:12)
kibana_1         |     at Command.callee$1$0 (/opt/kibana/src/cli/serve/serve.js:51:32)
logstash_1       | {:timestamp=>"2016-09-02T14:20:02.365000+0000", :message=>"No config files found: /etc/logstash/conf.d/logstash.conf\nCan you make sure this path is a logstash config file?", :level=>:error}
dockerelk_logstash_1 exited with code 1

What does .-root@centos ~ mean? Does it mean that chcon should be executed as root?
And as which user should docker-compose be run?

Adding logstash references/different use cases

Hey,

I was thinking it would be worth while to add a reference folder for logstash in docker-elk, which can display how one can connect to different data sources. This can act as different use cases for docker-elk and logstash.

For example, imagine if someone was using docker-elk and they wanted to connect to a database. It would be really nice if there was a folder to the effect of "logstash-references" in which they can open up and get direction on how to connect to a database using logstash, which would essentially make their workflow very efficient, imho.

I am still thinking this through, but that's the initial thought I had.

Any other thoughts on this?

Logstash container exits a little after start

docker-compose.yml:

elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/multilinejson.conf
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
    - ./logstash/data:/logstash
  ports:
    - "5000:5000"
  links:
    - elasticsearch
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/kibana.yml:/opt/kibana/config/kibana.yml
  ports:
    - "5601:5601"
  links:
    - elasticsearch

multilinejson.conf:

input 
{   
    file 
    {
        codec => multiline
        {
            pattern => '^\{'
            negate => true
            what => previous                
        }
        path => ["/logstash/test01.json"]
        start_position => "beginning"
        sincedb_path => "/dev/null"
        exclude => "*.gz"
    }
}

filter 
{
    mutate
    {
        replace => [ "message", "%{message}}" ]
        gsub => [ 'message','\n','']
    }
    if [message] =~ /^{.*}$/ 
    {
        json { source => "message" }
    }

}

output
{ 
  elasticsearch {
    protocol => "http"
    codec => json
    hosts => "elasticsearch:9200"
    index => "json"
    embedded => true
  }

    stdout { codec => rubydebug }
}

test01.json

{
    "SOURCE":"Source A",
    "Model":"ModelABC",
    "Qty":"3"
}

logstash protocol error

I am using Windows 7 with boot2docker. while using docker-compose up I am facing this error

Creating dockerelkmaster_logstash_1 ERROR: Cannot start container c25a5accedb6b554f40c7c7177ff933b703e26e454d8f1348b92fcc58278516f: stat /c/Users/****/My Documents/docker-elk-master/docker-elk-master/logstash/config: protocol error

However, elasticsearch is created successfully.

Could you please suggest what's wrong here?

cannot start syslog listener

I cannot run the syslog listener on port 514 because the (nonexistent) user that Logstash runs as does not have the privileges to bind ports below 1024. I've tried running the docker-compose up command as root as well. This is probably because I don't know Docker well at all :\
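One common workaround, since the in-container user cannot bind ports below 1024, is to listen on a high port inside the container and let Docker (which runs as root on the host) map host port 514 to it. A minimal sketch against the stock docker-elk layout; the port numbers here are illustrative:

```yaml
# docker-compose.yml excerpt: map privileged host port 514
# to an unprivileged port 5514 inside the Logstash container.
logstash:
  ports:
    - "514:5514"
    - "514:5514/udp"
```

with a matching input in the Logstash pipeline:

```
input {
  syslog {
    port => 5514   # unprivileged, so the in-container user can bind it
  }
}
```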


How to update plugins inside logstash container

Howdy.
I'm trying to use this Docker setup with NetFlow v10/IPFIX. The logstash-codec-netflow plugin present is version 2.0.5 (gems/logstash-codec-netflow-2.0.5).

I tried adding the line to the Dockerfile, but since the plugin is already present as an old version, it doesn't update to 3.1.0 (current).

I'm getting an error when I try to use logstash-plugin update.

root@be651f0dbb6f:/opt/logstash/bin# logstash-plugin update logstash-codec-netflow        
You are updating logstash-codec-netflow to a new version 3.1.0, which may not be compatible with 2.0.5. are you sure you want to proceed (Y/N)?
Y
Updating logstash-codec-netflow
Error Bundler::InstallError, retrying 1/10
An error occurred while installing ffi (1.9.13), and Bundler cannot continue.
Make sure that `gem install ffi -v '1.9.13'` succeeds before bundling.
WARNING: SSLSocket#session= is not supported

At first I found that gem wasn't installed, so I had to install Ruby. I ended up with:

root@be651f0dbb6f:/opt# whereis ruby
ruby: /usr/bin/ruby /usr/bin/ruby2.1 /usr/lib/ruby /usr/share/man/man1/ruby.1.gz
root@be651f0dbb6f:/opt# whereis gem 
gem: /usr/bin/gem2.1 /usr/bin/gem /usr/share/man/man1/gem.1.gz
root@be651f0dbb6f:/opt/logstash/bin# gem install ffi -v '1.9.13'                       
Fetching: ffi-1.9.13.gem (100%)
Building native extensions.  This could take a while...
ERROR:  Error installing ffi:
    ERROR: Failed to build gem native extension.

    current directory: /var/lib/gems/2.1.0/gems/ffi-1.9.13/ext/ffi_c
/usr/bin/ruby2.1 -r ./siteconf20160707-912-ebyoz5.rb extconf.rb
mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

extconf failed, exit code 1

Gem files will remain installed in /var/lib/gems/2.1.0/gems/ffi-1.9.13 for inspection.
Results logged to /var/lib/gems/2.1.0/extensions/x86_64-linux/2.1.0/ffi-1.9.13/gem_make.out
root@be651f0dbb6f:/opt/logstash/bin# logstash-plugin update logstash-codec-netflow     
You are updating logstash-codec-netflow to a new version 3.1.0, which may not be compatible with 2.0.5. are you sure you want to proceed (Y/N)?
Y
Updating logstash-codec-netflow
Error Bundler::InstallError, retrying 1/10
An error occurred while installing ffi (1.9.13), and Bundler cannot continue.
Make sure that `gem install ffi -v '1.9.13'` succeeds before bundling.
WARNING: SSLSocket#session= is not supported
Error Bundler::InstallError, retrying 2/10

Use case for question:

Since 2.0.5 doesn't support v9 and v10 as well as 3.1.0 now does, my Logstash log consists almost exclusively of this:

{:timestamp=>"2016-07-07T19:21:51.463000+0000", :message=>"No matching template for flow id 259", :level=>:warn}
{:timestamp=>"2016-07-07T19:21:51.490000+0000", :message=>"Unsupported Netflow version v10", :level=>:warn}
{:timestamp=>"2016-07-07T19:21:51.909000+0000", :message=>"No matching template for flow id 259", :level=>:warn}
{:timestamp=>"2016-07-07T19:21:51.989000+0000", :message=>"Unsupported Netflow version v10", :level=>:warn}
{:timestamp=>"2016-07-07T19:21:52.457000+0000", :message=>"No matching template for flow id 258", :level=>:warn}
{:timestamp=>"2016-07-07T19:21:52.957000+0000", :message=>"No matching template for flow id 258", :level=>:warn}
{:timestamp=>"2016-07-07T19:21:53.353000+0000", :message=>"No matching template for flow id 258", :level=>:warn}
{:timestamp=>"2016-07-07T19:21:54.054000+0000", :message=>"No matching template for flow id 258", :level=>:warn}
{:timestamp=>"2016-07-07T19:21:54.690000+0000", :message=>"Unsupported Netflow version v10", :level=>:warn}

FYI: Rubyn00b. I broke the container many times and had to rebuild trying to do this.

Thanks for any assistance.
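The mkmf.rb error above means the Ruby development headers are missing, so native gems like ffi cannot compile. A hedged sketch of a Dockerfile-based fix, assuming a Debian-based Logstash 2.x image with the plugin tool on the PATH (package names may differ on other base images):

```dockerfile
FROM logstash:2.3

# Build dependencies for native gem extensions
# (ffi needs ruby.h and the libffi headers).
RUN apt-get update \
 && apt-get install -y --no-install-recommends ruby-dev libffi-dev build-essential \
 && rm -rf /var/lib/apt/lists/*

# Update the bundled netflow codec once the toolchain is in place.
RUN logstash-plugin update logstash-codec-netflow
```

Rebuilding the image this way keeps the change reproducible instead of patching a running container by hand.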

Kibana container shutting down in less than a day

When running docker-compose up on Azure, the Kibana container shuts down after ~1 day. Everything else keeps running but that.

The containers logs is as follows:

{"type":"log","@timestamp":"2016-04-28T01:14:45+00:00","tags":["status","plugin:elasticsearch","error"],"pid":1,"name":"plugin:elasticsearch","state":"red","message":"Status changed from green to red - Request Timeout after 3000ms","prevState":"green","prevMsg":"Kibana index ready"}
{"type":"log","@timestamp":"2016-04-28T01:15:25+00:00","tags":["status","plugin:elasticsearch","info"],"pid":1,"name":"plugin:elasticsearch","state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2016-04-28T01:16:20+00:00","tags":["status","plugin:elasticsearch","error"],"pid":1,"name":"plugin:elasticsearch","state":"red","message":"Status changed from green to red - Request Timeout after 3000ms","prevState":"green","prevMsg":"Kibana index ready"}
{"type":"log","@timestamp":"2016-04-28T01:17:04+00:00","tags":["status","plugin:elasticsearch","info"],"pid":1,"name":"plugin:elasticsearch","state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2016-04-28T01:17:42+00:00","tags":["status","plugin:elasticsearch","error"],"pid":1,"name":"plugin:elasticsearch","state":"red","message":"Status changed from green to red - Request Timeout after 3000ms","prevState":"green","prevMsg":"Kibana index ready"}
{"type":"log","@timestamp":"2016-04-28T01:19:11+00:00","tags":["status","plugin:elasticsearch","info"],"pid":1,"name":"plugin:elasticsearch","state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Request Timeout after 3000ms"}
{"type":"log","@timestamp":"2016-04-28T01:20:16+00:00","tags":["status","plugin:elasticsearch","error"],"pid":1,"name":"plugin:elasticsearch","state":"red","message":"Status changed from green to red - Request Timeout after 3000ms","prevState":"green","prevMsg":"Kibana index ready"}

Default centos|redhat selinux policies prevent proper start-up

Worth adding something to the README.md that selinux must either be in permissive mode or the correct contexts must be applied for fig-elk to start up properly with docker-compose up.

I've simply set selinux as permissive to address my needs for now but if I apply actual selinux contexts to fix the problem properly I'll open a pull-request and close this issue.
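On SELinux-enforcing hosts, an alternative to switching to permissive mode is to let Docker relabel the bind-mounted files itself with the `z`/`Z` volume suffix (available since Docker 1.7). A sketch against this project's compose file; the path shown is the stock one:

```yaml
logstash:
  volumes:
    # ":Z" asks Docker to apply a private SELinux label to the host
    # directory so the container is allowed to read it.
    - ./logstash/config:/etc/logstash/conf.d:Z
```

The manual equivalent is applying a context such as `svirt_sandbox_file_t` with chcon.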

logstash-forwarder support

It would be nice if you could add logstash-forwarder support.
I believe this requires SSL, but I'm not sure how to set that up.
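For reference, logstash-forwarder speaks the lumberjack protocol, and the corresponding Logstash input does indeed require TLS. A minimal sketch of the server side; the port and certificate paths are illustrative, and the certificate must also be distributed to each forwarder:

```
input {
  lumberjack {
    port            => 5043
    # Hypothetical paths; mount your own cert/key into the container.
    ssl_certificate => "/etc/logstash/certs/logstash-forwarder.crt"
    ssl_key         => "/etc/logstash/certs/logstash-forwarder.key"
  }
}
```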

Does logstash support exec command?

Hello,

I was wondering whether the Logstash image in this project supports the exec input by default in logstash.conf, or whether the plugin has to be preinstalled in the Dockerfile.

Thanks in advance!
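The exec input plugin should be bundled with the default Logstash distribution (the plugin list command will confirm), so no Dockerfile change is needed for basic use. A minimal sketch; the command and interval are just examples:

```
input {
  exec {
    command  => "uptime"   # any shell command available inside the container
    interval => 30         # seconds between runs
  }
}
```

Note the command runs inside the Logstash container, so it only sees what is installed there.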

how logstash forwarder works?

Hello, I installed this in less than 30 minutes, and it's amazing work. One question: where is the logstash-forwarder located in your repo? By using the logstash-forwarder, I could collect logs from many agents.

elasticsearch not accepting remote connections

I am trying to connect to elasticsearch using
https://github.com/uken/fluent-plugin-elasticsearch
(This plugin is a part of Deis Workflow)

However, the connection is refused:

2016-10-11T03:31:56.475556021Z 2016-10-11 03:31:56 +0000 [info]: Connection opened to Elasticsearch cluster => {:host=>"52.66.185.54", :port=>9200, :scheme=>"http"}
2016-10-11T03:31:57.317253495Z 2016-10-11 03:31:57 +0000 [warn]: retry succeeded. plugin_id="object:2203710" chunk_id="53e8e734e9846b84d1554162ec3d661a"
2016-10-11T03:40:55.360427649Z 2016-10-11 03:40:55 +0000 [info]: Filtering works with worse performance, because [Fluent::KubernetesMetadataFilter] uses `#filter_stream` method.
2016-10-11T03:52:29.920548158Z 2016-10-11 03:52:29 +0000 [warn]: Could not push logs to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 52.66.18.54:9200 (Errno::ECONNREFUSED)
2016-10-11T03:52:35.186148891Z 2016-10-11 03:52:35 +0000 [warn]: failed to flush the buffer. plugin_id="object:2203710" retry_time=0 next_retry=2016-10-11 03:52:36 +0000 chunk="53e8ecd8b434faa31c32701b6bd059fc" error_class=Fluent::ElasticsearchOutput::ConnectionFailure error="Can not reach Elasticsearch cluster ({:host=>\"52.66.185.54\", :port=>9200, :scheme=>\"http\"})!" 

My config is unchanged, except that I removed the elasticsearch output from Logstash.

ERR: Kibana: Configure an index pattern

Simply not working at all.

Am I supposed to take any extra steps, and if so, where is the documentation?

#!/bin/sh
## sudo apt-get install git netcat
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
docker-compose up -d
dmesg | nc localhost 5000

Then in browser go to http://localhost:5601/ and see the show-stopper.

Error: No config files found

I'm trying to run this docker compose as is, without changing anything, and it fails with the following error:

logstash_1      | {:timestamp=>"2016-08-01T07:57:36.721000+0000", :message=>"No config files found: /etc/logstash/conf.d/logstash.conf\nCan you make sure this path is a logstash config file?", :level=>:error}

The config file exists, and contains the default configuration as per the cloned master branch of this repository.

Here is the full output of docker-compose up:

~/docker-elk$ docker-compose up
Creating dockerelk_elasticsearch_1
Creating dockerelk_logstash_1
Creating dockerelk_kibana_1
Attaching to dockerelk_elasticsearch_1, dockerelk_logstash_1, dockerelk_kibana_1
kibana_1        | Stalling for Elasticsearch
elasticsearch_1 | [2016-08-01 07:57:24,742][WARN ][bootstrap                ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
elasticsearch_1 | [2016-08-01 07:57:25,054][INFO ][node                     ] [Morlun] version[2.3.4], pid[1], build[e455fd0/2016-06-30T11:24:31Z]
elasticsearch_1 | [2016-08-01 07:57:25,054][INFO ][node                     ] [Morlun] initializing ...
elasticsearch_1 | [2016-08-01 07:57:25,897][INFO ][plugins                  ] [Morlun] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
elasticsearch_1 | [2016-08-01 07:57:25,946][INFO ][env                      ] [Morlun] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [22.9gb], net total_space [28.4gb], spins? [possibly], types [ext4]
elasticsearch_1 | [2016-08-01 07:57:25,947][INFO ][env                      ] [Morlun] heap size [1007.3mb], compressed ordinary object pointers [true]
elasticsearch_1 | [2016-08-01 07:57:30,007][INFO ][node                     ] [Morlun] initialized
elasticsearch_1 | [2016-08-01 07:57:30,007][INFO ][node                     ] [Morlun] starting ...
elasticsearch_1 | [2016-08-01 07:57:30,283][INFO ][transport                ] [Morlun] publish_address {172.17.0.8:9300}, bound_addresses {[::]:9300}
elasticsearch_1 | [2016-08-01 07:57:30,294][INFO ][discovery                ] [Morlun] elasticsearch/wMoCsRxwSgGDYQJC5uYE3g
elasticsearch_1 | [2016-08-01 07:57:33,474][INFO ][cluster.service          ] [Morlun] new_master {Morlun}{wMoCsRxwSgGDYQJC5uYE3g}{172.17.0.8}{172.17.0.8:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
elasticsearch_1 | [2016-08-01 07:57:33,548][INFO ][http                     ] [Morlun] publish_address {172.17.0.8:9200}, bound_addresses {[::]:9200}
elasticsearch_1 | [2016-08-01 07:57:33,550][INFO ][node                     ] [Morlun] started
kibana_1        | Starting Kibana
elasticsearch_1 | [2016-08-01 07:57:33,891][INFO ][gateway                  ] [Morlun] recovered [0] indices into cluster_state
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["warning","config"],"pid":1,"key":"bundled_plugin_ids","val":["plugins/dashboard/index","plugins/discover/index","plugins/doc/index","plugins/kibana/index","plugins/markdown_vis/index","plugins/metric_vis/index","plugins/settings/index","plugins/table_vis/index","plugins/vis_types/index","plugins/visualize/index"],"message":"Settings for \"bundled_plugin_ids\" were not applied, check for spelling errors and ensure the plugin is loaded."}
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["status","plugin:sense","info"],"pid":1,"name":"plugin:sense","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["status","plugin:kibana","info"],"pid":1,"name":"plugin:kibana","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["status","plugin:elasticsearch","info"],"pid":1,"name":"plugin:elasticsearch","state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["status","plugin:kbn_vislib_vis_types","info"],"pid":1,"name":"plugin:kbn_vislib_vis_types","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["status","plugin:markdown_vis","info"],"pid":1,"name":"plugin:markdown_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["status","plugin:metric_vis","info"],"pid":1,"name":"plugin:metric_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["status","plugin:spyModes","info"],"pid":1,"name":"plugin:spyModes","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["status","plugin:statusPage","info"],"pid":1,"name":"plugin:statusPage","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["status","plugin:table_vis","info"],"pid":1,"name":"plugin:table_vis","state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:35+00:00","tags":["listening","info"],"pid":1,"message":"Server running at http://0.0.0.0:5601"}
logstash_1      | {:timestamp=>"2016-08-01T07:57:36.721000+0000", :message=>"No config files found: /etc/logstash/conf.d/logstash.conf\nCan you make sure this path is a logstash config file?", :level=>:error}
dockerelk_logstash_1 exited with code 1
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:40+00:00","tags":["status","plugin:elasticsearch","info"],"pid":1,"name":"plugin:elasticsearch","state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
elasticsearch_1 | [2016-08-01 07:57:41,141][INFO ][cluster.metadata         ] [Morlun] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [config]
elasticsearch_1 | [2016-08-01 07:57:41,345][INFO ][cluster.routing.allocation] [Morlun] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
kibana_1        | {"type":"log","@timestamp":"2016-08-01T07:57:43+00:00","tags":["status","plugin:elasticsearch","info"],"pid":1,"name":"plugin:elasticsearch","state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}

Kibana Unable to connect to ElasticSearch

Kibana keeps getting a Bad Gateway error connecting to Elasticsearch.
I checked the Elasticsearch instance; it's working properly.

{"statusCode":502,"error":"Bad Gateway","message":"connect ECONNREFUSED"}

any idea about this issue?

License expired message

Hi, I've installed the shield-support branch of docker-elk (I just created my own image to deploy to the machine: https://github.com/lucj/docker-elk/tree/shield-support) and I now have the "Status: Red" message with:

 [security_exception] current license is non-compliant for [shield], with: {"license.expired.feature":"shield"}

How do you handle the license update when using Shield?

Docker Log Drivers and ELK Stack

Docker supports various types of log drivers as a way of gathering logs from your Docker containers. One common scenario for docker-elk might be to extend the docker-compose file to add an app, and have the log drivers built into Docker pipe that app's logs to the ELK stack. For example, you can easily configure Logstash with a syslog endpoint and then use the Docker syslog driver to pipe logs to Logstash.

The problem appears to be that Docker tries to make a connection to the syslog endpoint before starting the containers for the ELK stack so the whole docker-compose up command fails. I am wondering if there is any way around this or other alternatives to solving this problem.
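One way around the startup ordering is to run the ELK stack and the application as separate Compose projects, bringing up Logstash before any container that uses the syslog driver. A sketch of the application side, assuming Compose v2 syntax and a Logstash syslog input already listening on host port 514; the service name and image are illustrative:

```yaml
services:
  myapp:
    image: myapp:latest        # hypothetical application image
    logging:
      driver: syslog
      options:
        # Docker itself connects to this address at container start,
        # hence the requirement that Logstash is already listening.
        syslog-address: "tcp://127.0.0.1:514"
```

Start docker-elk first (`docker-compose up -d` in its directory), then bring up the application project.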

Index error

After running `docker-compose up`, Kibana returns an error which I can't get past.

(screenshot)

Input file in logstash

Hello,

I'm trying to use your project with a input of type "file" in logstash, instead of sending them with TCP.
I mount my log folder on my logstash container, I can see them in the container, no problem.

But in Kibana I can't create an index, so I guess Logstash doesn't push my log files to Elasticsearch.
Do you know if it comes from your configuration?

Your work is really awesome, BTW!

Kibana Dockerfile - failing @installing sense

At image build time, line RUN kibana plugin --install elastic/sense is waiting endlessly for running elasticsearch instance...

Reading documentation, this is probably "new" thing: https://www.elastic.co/guide/en/sense/current/installing.html#installing

{"name":"Kibana","hostname":"fcdb0b513411","pid":6,"level":50,"err":"Request error, retrying -- connect ECONNREFUSED","msg":"","time":"2015-12-18T09:13:11.860Z","v":0}
{"name":"Kibana","hostname":"fcdb0b513411","pid":6,"level":40,"msg":"Unable to revive connection: http://localhost:9200/","time":"2015-12-18T09:13:11.863Z","v":0}

Kibana fails to connect to Elasticsearch node.

Hi,

I got the nodes running (thanks for the work!) but it seems there's an issue between Kibana 3 and the Elasticsearch version started by this compose. Straight off the bat it gives me the following message:
(screenshot)

CORS does seem to be enabled in Elasticsearch:
(screenshot)

Any ideas? Editing the Elasticsearch config inside the container is not really easy, btw. Is it possible to put that in an accessible folder as well, just like the Logstash config?

Unable to start kibana

I admit to not knowing a lot about Docker and/or ELK, but when I try to start the stack using a freshly cloned repo with no changes, on a clean CentOS 7 VM, I get:

(screenshot)

My google-fu has failed to find anything obvious. Am I missing something here?

docker-compose start kibana failed!--No such file or directory/usr/bin/env: bash

I am using docker-machine on Windows 7. All of the ELK stack comes up except Kibana; it shows these error logs:

$ docker-compose.exe up
dockerelk_elasticsearch_1 is up-to-date
Starting dockerelk_logstash_1
Creating dockerelk_kibana_1
Attaching to dockerelk_elasticsearch_1, dockerelk_logstash_1, dockerelk_kibana_1
: No such file or directory/usr/bin/env: bash

Using docker logs, I see these errors:

$ docker logs 0f4a58ad6b89
: No such file or directory

In docker-compose.yml command line is not working

The command line (logstash -f /etc/logstash/conf.d/logstash.conf) in docker-compose.yml is not working.
After docker-elk has started (docker-compose up), we have to go into the Logstash container and run the command (logstash -f /etc/logstash/conf.d/logstash.conf) or (logstash agent -f /etc/logstash/conf.d/logstash.conf -w 10 &). Only then do my logs show up in Kibana.

Service 'kibana' failed to build

Thanks for the package!

Service 'kibana' failed to build: The command '/bin/sh -c kibana plugin --install elastic/sense' returned a non-zero code: 137

Do you have an idea about that?

docker-machine & volumes

not an issue with your work (it's great, thank you for this), but if anyone is using docker-machine to provision this, hopefully you'll find this first and save yourself the 15-20 min of WTF I just went through trying to debug why this wasn't working.

hadn't used docker-machine with docker-compose till now, but it would appear that:

if you try to use this with compose, you need to first move over the entire project directory to the directory returned by docker-machine ssh <name> pwd

here's the S/O to the issue:
http://stackoverflow.com/questions/30040708/how-to-mount-local-volumes-in-docker-machine

feel free to close this out. I just had to write that explanation out cuz there were weird symptoms (like..localhost:9200 instead of elasticsearch:9200 level weird. things worked 90% of the way.)

Yellow cluster health: Unassigned shards

Hi! Thank you very much for your image, it is great! :)

I have one issue - I always get yellow cluster status after startup:

curl http://localhost:9200/_cluster/health
{
  "cluster_name": "elasticsearch",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 2,
  "number_of_data_nodes": 1,
  "active_primary_shards": 1,
  "active_shards": 1,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 1,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0
}

Where does that unassigned replica come from? What could be the problem?
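The unassigned shard is a replica: every index defaults to one replica, and a replica is never allocated on the same node as its primary, so a single-data-node cluster always reports yellow. A hedged sketch of an index template that disables replicas for a one-node setup; the template name and pattern are illustrative, and the `template` field matches the pre-5.x template API of this era:

```json
PUT _template/zero_replicas
{
  "template": "*",
  "settings": {
    "number_of_replicas": 0
  }
}
```

With this in place, newly created indices allocate only primaries and the cluster reports green.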

Persisting elasticsearch data

At the bottom of the readme, there is a section entitled "How can I store Elasticsearch data?". However, even without following those directions, I can already bring up the docker images, get data into ES, bring the images down, bring them back up again, and everything is persisted.

I'm new to docker, so I still don't totally get the persistence thing, but I thought that it was possibly a mistake in the readme.

Service 'kibana' failed to build

On Ubuntu 14.04 LTS
ERROR: Service 'kibana' failed to build: The command '/bin/sh -c kibana plugin --install elastic/sense' returned a non-zero code: 137

Any advice?

In logstash.conf I would like to use the nginx access.log file. Now, how do I check the output?

// grok filter

NGINX_ACCESS %{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}
// logstash.conf
input {

  file {
    type => "nginx"
    start_position => "beginning"
    path => [ "/var/log/nginx/access.log" ]
  }
}
filter {
  if [type] == "nginx" {
    grok {
    patterns_dir => "/etc/logstash/patterns"
    match => { "message" => "%{NGINX_ACCESS}" }
    remove_tag => ["_grokparsefailure"]
    add_tag => ["nginx_access"]
    }

    geoip {
      source => "remote_addr"
    }
  }
}


output {
    elasticsearch {

        hosts => "elasticsearch:9200"
    }

}
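To check what Logstash is actually emitting, a stdout output alongside Elasticsearch makes the parsed events visible in the container logs. A sketch of the adjusted output section, keeping the same hosts value as above:

```
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
  # Pretty-prints every event to the container's stdout for debugging.
  stdout { codec => rubydebug }
}
```

Then `docker logs -f <logstash container>` shows each parsed access-log event.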

Problem when "installing" the second time around

Hi,

I seem to have a problem with the compose build. It all worked fine for me the first time around, but after I started over (removed the git repo and deleted all old images) the docker-compose up fails.

The error message:

Service 'kibana' needs to be built, but --no-build was passed

Maybe it's a local thing?

I'm running docker 1.8.3 on CoreOS.

Sense Query fails

I worked through the instructions.

I can call cURL:

$ curl -XGET "http://localhost:9200/_cluster/health"
{"cluster_name":"elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":6,"active_shards":6,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":6,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":50.0}

but the same command in sense fails:

url: http://localhost:5601/app/sense
server: http://localhost:9200/

response:

Error connecting to 'http://localhost:9200/_search':

Client request error: connect ECONNREFUSED 127.0.0.1:9200

What am I missing?

How to install x-pack without error

I want to install the X-Pack plugin in docker-elk, but installation fails with an authentication exception: I can't access Elasticsearch because I don't know the Elasticsearch account and password.
What should I do to install X-Pack?
All of these versions are alpha-5.

Kibana fails to build

On running docker-compose up as root I get the following error:

Optimizing and caching browser bundles...
Killed
ERROR: Service 'kibana' failed to build: The command '/bin/sh -c kibana plugin --install elastic/sense' returned a non-zero code: 137

System error: not a directory

Hi, I ran into a problem when I was working with your images.

After all the downloads and builds I got this error.

Creating dockerelkmaster_kibana_1
ERROR: Cannot start container 3a4721181937361993ea01bb7bb350834acaf080c3a40c1330d91cf5848234eb: [8] System error: not a directory

I tried several modifications to your docker-compose.yml file, and the problem seems to be located in the volumes section. If I comment out these lines the process continues.

#  volumes:
#    - ./kibana/config/kibana.yml:/opt/kibana/config/kibana.yml

But your customizations didn't get applied.

I'm using docker-compose version: 1.5.1 and Docker version 1.9.0, build 76d6bc9

Is this normal?

Dynamic cluster using docker-compose with external storage

Hi,
Thanks for your project. I can't find a good source of information for setting up a cluster of ES 2.X containers locally (i.e. on a single host) which can be dynamically scaled and has the data stored back on the host using the -v flag.

This is what I've tried:

elk:
  image: elasticsearch:2.X
  ports:
    - "9200:9200"
  volumes:
   - /path_on_host_to/data:/path_in_container_to/elasticsearch/data

elkslave:
  image: elasticsearch:2.X
  links:
    - elk:elk
  volumes:
    - /path_on_host_to/data:/path_in_container_to/elasticsearch/data   
    - /path_on_host_to_config/elasticsearch-slave.yml:/etc/elasticsearch/elasticsearch.yml

where elasticsearch-slave.yml is:

network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["elk"]

But it doesn't appear to work.

Ideally, I would be able to do something like
docker-compose scale elkslave=3

Shield / auth for Kibana?

Hi! Thanks for this terrific work, I've tried nearly all docker elk images and find yours the best-crafted!

I'm wondering if auth for Kibana is already implemented or can be done?

Thanks

Multicast discovery support in logstash with Kibana 4

For now, the elasticsearch output in the logstash configuration must be specified that way:

output {
    elasticsearch { 
        protocol => "http"
        host => "elasticsearch"
    }
}

So Logstash will not be considered as an ES node and Kibana 4 will boot correctly. I'd like to keep the following configuration in order to use multicast discovery:

output {
    elasticsearch { }
}

See elastic/kibana#1629 (comment)
