
shazChaudhry / docker-elastic

346 stars · 12 watchers · 188 forks · 2.14 MB

Deploy the Elastic Stack in a Docker Swarm cluster. Ship application logs and metrics to Elasticsearch using Beats and the GELF plugin.

Language: Shell (100%)

Topics: logging, gelf, logstash, metricbeat, kibana, travis, filebeat, log-aggregation, elasticsearch, jenkins-container

docker-elastic's People

Contributors: lewismc, shazChaudhry


docker-elastic's Issues

Create APM client

An APM agent is needed to send metrics to the APM server that was stood up in issue #34.
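
For reference, a minimal sketch of what wiring an instrumented application to that APM server could look like in the stack file. The my-app service and its image are hypothetical; the ELASTIC_APM_* variables are the standard environment variables that Elastic APM agents read:

  my-app:                            # hypothetical application service
    image: my-org/my-app:latest      # assumes an Elastic APM agent is bundled with the app
    networks:
      - elastic                      # same overlay network as apm-server
    environment:
      - ELASTIC_APM_SERVICE_NAME=my-app
      - ELASTIC_APM_SERVER_URL=http://apm-server:8200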

This node is not a swarm manager. Use "docker swarm init"

docker network create --driver overlay --attachable elastic
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.

curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/_cat/health?v&pretty'
<html>
  <head>
    <!-- Bootstrap -->
    <link rel="stylesheet" type="text/css" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
    <style>
      body {
        padding-top: 50px
      }
    </style>
  </head>
  <body>
  <div class="container">
      <div class="panel panel-warning">
        <div class="panel-heading">
          <h3 class="panel-title">Docker Flow Proxy: 503 Service Unavailable</h3>
        </div>
      <div class="panel-body">
        No server is available to handle this request.
      </div>
  </div>
</body>
</html>

docker swarm init
Swarm initialized: current node (fcw7n672e51e6x8o6p1v4swj4) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4btyozhvqpc56rp1m3j37d500dmrg7x6gmoncs4qk4gre1x28c-ah0o1pmkp5nlxg12p8lfxhg7w 10.233.60.118:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

After joining the node to the swarm with:

docker swarm join --token SWMTKN-1-4btyozhvqpc56rp1m3j37d500dmrg7x6gmoncs4qk4gre1x28c-ah0o1pmkp5nlxg12p8lfxhg7w 10.233.60.118:2377

curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/_cat/health?v&pretty'
epoch      timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1553232949 05:35:49  DevOps  green           1         1      8   8    0    0        0             0                  -                100.0%

Maybe add docker swarm init / docker swarm join to the README?

not working

Hi,

I am trying to use this with 7.17.9, and it is not working any more.

It seems like the nodes are not able to discover the master. Can you please help with this? It would help a lot.

ELB does not appear to support the UDP protocol

The solution in this repository expects Docker containers to send UDP messages to Logstash. However, it appears that UDP is not supported on AWS Elastic Load Balancer, which is extremely annoying.

Currently, I am using the GELF log driver with Docker services to send UDP messages to Logstash. Investigate what other log driver options are available; one candidate is sketched below.
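
One option worth checking (hedged; confirm against the documentation for your Docker and Logstash versions) is that Docker's gelf log driver also accepts a tcp:// address, which a load balancer can forward; the Logstash gelf input would then need a matching TCP listener. A sketch, with elb.example.com as a placeholder hostname:

    docker service create \
      --name my-app \
      --log-driver gelf \
      --log-opt gelf-address=tcp://elb.example.com:12201 \
      my-org/my-app:latest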

How to add elastic cluster monitoring the right way?

I have tried the module configuration below.

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression

processors:
- add_cloud_metadata:

output.elasticsearch:
  hosts: ['elasticsearch:9200'] # Monitoring cluster
  protocol: "http"

setup.kibana:
  host: "http://kibana:5601/monitoringkibana" #Monitoring KIbana
  protocol: "http"
  ssl.enabled: false
  
xpack.monitoring.enabled: true
setup.dashboards.enabled: true

metricbeat.modules:
- module: elasticsearch
  metricsets:
    - node
    - node_stats
    - index
    - index_summary
    - shard
  period: 20s
  hosts: ["http://192.168.0.1:9201"] #Production Elasticseaarch which I want to create monitoring for

I am only getting OS-level metrics, not Elasticsearch cluster monitoring.
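
For comparison, a minimal sketch of the route the Metricbeat elasticsearch module documents for Stack Monitoring (hedged; with xpack.enabled: true the module collects metrics in the format the Kibana Stack Monitoring UI reads, instead of the individually listed metricsets):

    metricbeat.modules:
      - module: elasticsearch
        xpack.enabled: true                  # ship data in Stack Monitoring format
        period: 10s
        hosts: ["http://192.168.0.1:9201"]   # production cluster to monitor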

Need to filter the message section of logs

@shazChaudhry Hope you are doing well.

I am working on configuring ELK with Filebeat on Docker. I have all the logs available on the Kibana dashboard, coming from the Filebeat container.

Now my query is how to filter the message content of a log file that is coming from another server.

I have included the following lines in the logstash.conf file on the ELK stack server.

filter {
  # /var/log/xxx/error.log
  if ([log][file][path] =~ "/logs/error.log") {
    grok {
      match => { "message" => "%{DATE:date} %{TIME:time} | %{LOGLEVEL:loglevel} | %{IP:client_ip} [%{NUMBER:bytes}] %{WORD:method} /%{NOTSPACE:request_page} HTTP/%{NUMBER:http_version} | %{GREEDYDATA:logmessage}" }
    }
  }
}

However, it is not working.
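
One observation (an assumption about the failure, not a confirmed diagnosis): grok match strings are regular expressions, so the literal |, [, and ] in the pattern above act as regex metacharacters unless escaped. A sketch of the same pattern with those characters escaped:

    grok {
      match => { "message" => "%{DATE:date} %{TIME:time} \| %{LOGLEVEL:loglevel} \| %{IP:client_ip} \[%{NUMBER:bytes}\] %{WORD:method} /%{NOTSPACE:request_page} HTTP/%{NUMBER:http_version} \| %{GREEDYDATA:logmessage}" }
    }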

Here is my filebeat-docker.yml:

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

processors:
  - add_cloud_metadata: ~

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - "/var/log/apache2/*.log"
    exclude_files: ['.gz$']
    json.message_key: log
    include_lines: ['^ERR', '^WARN']

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'

Can you please advise on this?

Sabil.

Permission denied when writing to /etc/sysctl.conf

Hi. Thanks for a really useful repository.

I saw you wrote this in README.md:
sudo echo 'vm.max_map_count=262144' >> /etc/sysctl.conf (to persist reboots)
However, as far as I can tell from my research:

You can't use sudo to affect output redirection; > and >> (and, for completeness, <) are effected with the privilege of the calling user, because redirection is done by the calling shell, not the called subprocess.

You can see more details in this question.
So I think the command should be:
sudo sh -c 'echo "vm.max_map_count=262144" >> /etc/sysctl.conf'

Thank you.
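
An equivalent alternative (a common idiom, noted here for completeness) pipes the unprivileged echo into tee, which runs with elevated privileges and does the appending:

    echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf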

Deployment not working: host not found

Hi,

I hope you can help me with an issue getting this version up and running. I used Docker 18.06.3-ce and followed your instructions to start up the environment. I changed the node1 entry in the environment variables to the hostname of my swarm master.

But I get the following error:
{"type": "server", "timestamp": "2019-11-18T09:25:15,179+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "DevOps", "node.name": "vala-n6", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [vala-n1] to bootstrap a cluster: have discovered []; discovery will continue using [] from hosts providers and [{vala-n6}{40UudO_vSM-tjXJt69uBMQ}{fjbzNvJaTDSmDmLKbufmEA}{10.0.4.15}{10.0.4.15:9300}{ml.machine_memory=16691216384, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }
{"type": "server", "timestamp": "2019-11-18T09:25:15,186+0000", "level": "WARN", "component": "o.e.d.SeedHostsResolver", "cluster.name": "DevOps", "node.name": "vala-n6", "message": "failed to resolve host [elasticsearch]" ,

The proxy also produces the following error:
2019/11/18 09:37:54 Error: Fetching config from swarm listener failed: Get http://swarm-listener:8080/v1/docker-flow-swarm-listener/get-services: dial tcp: lookup swarm-listener on 127.0.0.11:53: no such host. Will retry in 5 seconds.

Do you have an idea what the issue could be and how I can solve it? I have six nodes, named vala-n1 to vala-n6.

BR, and thanks for your help.
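
Both errors above look like service-name resolution failing on the overlay network. A quick hedged check (assuming the external, attachable elastic overlay network created earlier on this page) is to test DNS from a throwaway container on that network:

    docker run --rm --network elastic alpine nslookup elasticsearch
    docker run --rm --network elastic alpine nslookup swarm-listener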

Using HTTPS connections doesn't work

Hello,

In the process of upgrading the ELK stack from 6.x to 7.x, I tried to follow your config, using dnsrr (to avoid issues with the ingress address).
The cluster seems up (I can curl from the Elasticsearch container), but other services, like Kibana, can't access it.

I use authentication for the connection, over HTTPS. Did you try it with that setup?

Regards

Adding SSL to Kibana for alerts

I'm trying to add alerts, but first I need to activate transport layer security. I have generated my self-signed certificate using this bash script:


#!/bin/bash

# Generate Root Key rootCA.key with 2048
openssl genrsa -passout pass:"$1" -des3 -out rootCA.key 2048

# Generate Root PEM (rootCA.pem) with 1024 days validity.
openssl req -passin pass:"$1" -subj "/C=US/ST=Random/L=Random/O=Global Security/OU=IT Department/CN=Local Certificate"  -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem

# Add root cert as trusted cert
if [[ "$OSTYPE" == "linux-gnu"* ]]; then
        # Linux
        yum -y install ca-certificates
        update-ca-trust force-enable
        cp rootCA.pem /etc/pki/ca-trust/source/anchors/
        update-ca-trust
        #meeting ES requirement
        sysctl -w vm.max_map_count=262144
elif [[ "$OSTYPE" == "darwin"* ]]; then
        # Mac OSX
        security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain rootCA.pem
else
        # Unknown.
        echo "Couldn't find desired Operating System. Exiting Now ......"
        exit 1
fi

# Generate Kib01 Cert
openssl req -subj "/C=US/ST=Random/L=Random/O=Global Security/OU=IT Department/CN=localhost"  -new -sha256 -nodes -out kib01.csr -newkey rsa:2048 -keyout kib01.key
openssl x509 -req -passin pass:"$1" -in kib01.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out kib01.crt -days 500 -sha256 -extfile  <(printf "subjectAltName=DNS:localhost,DNS:kib01")

I have added the following SSL variables to the Kibana service:

  - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=Vss86whNwQrKjA3D8aKTCRN6SnZLX4rv
  - SERVER_SSL_ENABLED=false
  - SERVER_SSL_KEY=config/certs/kib01.key
  - SERVER_SSL_CERTIFICATE=config/certs/kib01.crt
  - SERVER_SSL_KEYPASSPHRASE=testest123
  - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/rootCA.pem

docker-compose.yml:

version: "3.8"

# 10 Things to Consider When Planning Your Elasticsearch Project: https://ecmarchitect.com/archives/2015/07/27/4031
# Using Apache JMeter to Test Elasticsearch: https://ecmarchitect.com/archives/2014/09/02/3915

services:

  swarm-listener:
    image: dockerflow/docker-flow-swarm-listener:latest
    hostname: swarm-listener
    networks:
      - elastic
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    environment:
      - DF_NOTIFY_CREATE_SERVICE_URL=http://proxy:8080/v1/docker-flow-proxy/reconfigure
      - DF_NOTIFY_REMOVE_SERVICE_URL=http://proxy:8080/v1/docker-flow-proxy/remove
    deploy:
      placement:
        constraints: [node.role == manager]

  proxy:
    image: dockerflow/docker-flow-proxy:latest
    hostname: proxy
    ports:
      - "80:80"
      - "443:443"
      - "9200:9200"
      - "8200:8200"
    networks:
      - elastic
    environment:
      - LISTENER_ADDRESS=swarm-listener
      - MODE=swarm
      - BIND_PORTS=9200,8200
    deploy:
      replicas: 2

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION:-7.7.0}
    environment:
      # https://github.com/docker/swarmkit/issues/1951
      - node.name={{.Node.Hostname}}
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=${INITIAL_MASTER_NODES:-node1}
      - cluster.name=DevOps
      - ELASTIC_PASSWORD=${ELASTICSEARCH_PASSWORD:-changeme}
      - xpack.security.enabled=true
      - xpack.monitoring.collection.enabled=true
      - xpack.security.audit.enabled=true
      - xpack.license.self_generated.type=trial
      - network.host=0.0.0.0
    networks:
      - elastic
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    deploy:
      mode: 'global'
      endpoint_mode: dnsrr
      labels:
        - com.df.notify=true
        - com.df.distribute=true
        - com.df.servicePath=/
        - com.df.port=9200
        - com.df.srcPort=9200

  logstash:
    image: docker.elastic.co/logstash/logstash:${ELASTIC_VERSION:-7.7.0}
    hostname: "{{.Node.Hostname}}-logstash"
    environment:
      - XPACK_MONITORING_ELASTICSEARCH_URL=http://elasticsearch:9200
      - XPACK_MONITORING_ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME:-elastic}
      - XPACK_MONITORING_ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD:-changeme}
    ports:
      - "12201:12201/udp"
    networks:
      - elastic
    configs:
      - source: ls_config
        target: /usr/share/logstash/pipeline/logstash.conf

  kibana:
    image: docker.elastic.co/kibana/kibana:${ELASTIC_VERSION:-7.7.0}
    hostname: "{{.Node.Hostname}}-kibana"
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=${ELASTICSEARCH_USERNAME:-elastic}
      - ELASTICSEARCH_PASSWORD=${ELASTICSEARCH_PASSWORD:-changeme}
      - SERVER_NAME="{{.Node.Hostname}}-kibana"
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=Vss86whNwQrKjA3D8aKTCRN6SnZLX4rv
      - SERVER_SSL_ENABLED=false
      - SERVER_SSL_KEY=config/certs/kib01.key
      - SERVER_SSL_CERTIFICATE=config/certs/kib01.crt
      - SERVER_SSL_KEYPASSPHRASE=testest123
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/rootCA.pem
    configs:
      - source: key_config
        target: /usr/share/kibana/config/certs/kib01.key
      - source: crt_config
        target: /usr/share/kibana/config/certs/kib01.crt
      - source: root_config
        target: /usr/share/kibana/config/certs/rootCA.pem
    networks:
      - elastic
    volumes:
      - kibana:/usr/share/kibana/data
    deploy:
      labels:
        - com.df.notify=true
        - com.df.distribute=true
        - com.df.servicePath=/
        - com.df.port=5601
        - com.df.srcPort=80

  apm-server:
    image: docker.elastic.co/apm/apm-server:${ELASTIC_VERSION:-7.7.0}
    hostname: "{{.Node.Hostname}}-apm-server"
    networks:
      - elastic
    command: >
        --strict.perms=false -e
        -E apm-server.rum.enabled=true
        -E setup.kibana.host=kibana:5601
        -E setup.kibana.username=${ELASTICSEARCH_USERNAME}
        -E setup.kibana.password=${ELASTICSEARCH_PASSWORD}
        -E setup.template.settings.index.number_of_replicas=0
        -E apm-server.kibana.enabled=true
        -E apm-server.kibana.host=kibana:5601
        -E apm-server.kibana.username=${ELASTICSEARCH_USERNAME}
        -E apm-server.kibana.password=${ELASTICSEARCH_PASSWORD}
        -E output.elasticsearch.hosts=["elasticsearch:9200"]
        -E output.elasticsearch.username=${ELASTICSEARCH_USERNAME}
        -E output.elasticsearch.password=${ELASTICSEARCH_PASSWORD}
        -E xpack.monitoring.enabled=true
    deploy:
      labels:
        - com.df.notify=true
        - com.df.distribute=true
        - com.df.servicePath=/
        - com.df.port=8200
        - com.df.srcPort=8200

networks:
    elastic:
      external: true

volumes:
  elasticsearch:
  kibana:

configs:
  ls_config:
    file: $PWD/elk/logstash/config/pipeline/logstash.conf
  key_config:
    file: $PWD/keyskeys/kib01.key
  crt_config:
    file: $PWD/keyskeys/kib01.crt
  root_config:
    file: $PWD/keyskeys/rootCA.pem

No results so far.


Can you please advise as to how I can configure Kibana to work with SSL?
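
One detail that stands out (an observation, not a verified fix): SERVER_SSL_ENABLED is set to false, and Kibana only serves HTTPS when server.ssl.enabled is true. A sketch of the adjusted service variables, reusing the paths from the compose file above:

      - SERVER_SSL_ENABLED=true                       # must be true for Kibana to serve HTTPS
      - SERVER_SSL_KEY=config/certs/kib01.key
      - SERVER_SSL_CERTIFICATE=config/certs/kib01.crt

Also worth checking: kib01.key was generated with -nodes (no passphrase), so SERVER_SSL_KEYPASSPHRASE may not apply to it.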

By the way, thanks @shazChaudhry for the repo. It is very useful to me :)

Thanks in advance.

Could you help me out on a couple of things please?

Hello,
I'm trying to deploy your cluster in the hope that I can learn more about the ELK stack.
I have some questions:
1- Can you please explain this:
alias git='docker run -it --rm --name git -v $PWD:/git -w /git indiehosters/git git'
2- I am unable to create this (I did not install Beats):
On the Kibana Management tab, configure an index pattern, Index name or pattern = filebeat-*
3- Edit the "filebeat-docker-compose.yml" file and change the environment variables for the Kibana and Elasticsearch hosts.
- Does this need the actual IP address?
Thanks

Use a better logstash pipeline filter

elk/logstash/config/pipeline/logstash.conf

# The following filter is a hack! The "de_dot" filter would be better.
filter {
  ruby {
    code => "
      event.to_hash.keys.each { |k| event[ k.gsub('.','_') ] = event.remove(k) if k.include?'.' }
    "
  }
}
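
For reference, a sketch of the suggested replacement (hedged: de_dot ships as the logstash-filter-de_dot plugin, which may need to be installed first with bin/logstash-plugin install logstash-filter-de_dot, and its documentation notes it is relatively expensive):

    filter {
      de_dot {
        separator => "_"   # replace '.' in field names with '_'
      }
    }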

Elasticsearch cluster fails when changing the data path

Hello,

When I launch the stack with this volume path, it works fine:

    volumes:
      - elasticsearch:/usr/share/elasticsearch/data

If I change the volume path to another location:

    volumes:
      - ./elasticsearch:/usr/share/elasticsearch/data

it shows me this error and never forms the cluster:

{"type": "server", "timestamp": "2020-05-28T07:35:05,570Z", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "DevOps", "node.name": "ig-dockerdev-01", "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node1] to bootstrap a cluster: have discovered [{ig-dockerdev-01}{yCtLfa_IS_OOL6gvDnMnsA}{f8BlMJRGQ7GJ7ChMnUZjMA}{10.0.15.145}{10.0.15.145:9300}{dimrt}{xpack.installed=true, transform.node=true}, {ig-dockerdev-03}{KoHiSiuITV2YmVGkEXxtIQ}{b95IUP-jRBuhyR-mvyetAQ}{10.0.15.143}{10.0.15.143:9300}{dimrt}{xpack.installed=true, transform.node=true}, {ig-dockerdev-02}{OVrgsX9NRp6OIh2Jo5FcQQ}{Ysl3vSx4SBWwT_Mq3U4Lkw}{10.0.15.144}{10.0.15.144:9300}{dimrt}{xpack.installed=true, transform.node=true}]; discovery will continue using [10.0.15.144:9300, 10.0.15.143:9300] from hosts providers and [{ig-dockerdev-01}{yCtLfa_IS_OOL6gvDnMnsA}{f8BlMJRGQ7GJ7ChMnUZjMA}{10.0.15.145}{10.0.15.145:9300}{dimrt}{xpack.installed=true, transform.node=true}] from last-known cluster state; node term 0, last-accepted version 0 in term 0" }

Configure filebeat to fetch from local filesystem

I am running a non-containerized application locally which produces timeseries logs in a text file (in addition to executing other application functionality). I wanted to containerize all of the ELK stack for analyzing the timeseries logs so this project is just what I needed. I then wanted to use filebeat to feed the local log4j text log to logstash and into Elasticsearch.

Having deployed everything as follows:

$ docker stack services elastic
ID                  NAME                     MODE                REPLICAS            IMAGE                                                 PORTS
hwv9mjnnbady        elastic_kibana           replicated          1/1                 docker.elastic.co/kibana/kibana:7.9.1
k2yjhizxkcdd        elastic_logstash         replicated          1/1                 docker.elastic.co/logstash/logstash:7.9.1             *:12201->12201/udp
m3xqajcqmopx        elastic_swarm-listener   replicated          1/1                 dockerflow/docker-flow-swarm-listener:latest
vuvk78117tmj        elastic_elasticsearch    global              1/1                 docker.elastic.co/elasticsearch/elasticsearch:7.9.1
y76njf0lta3q        elastic_apm-server       replicated          1/1                 docker.elastic.co/apm/apm-server:7.9.1
yl3f8ha2frzt        elastic_proxy            replicated          2/2                 dockerflow/docker-flow-proxy:latest                   *:80->80/tcp, *:443->443/tcp, *:8200->8200/tcp, *:9200->9200/tcp

and then Filebeat:

$ docker stack services filebeat
ID                  NAME                MODE                REPLICAS            IMAGE                                    PORTS
51di10aflys8        filebeat_filebeat   global              1/1                 docker.elastic.co/beats/filebeat:7.9.1

The problem I am having is configuring the application log directory for Filebeat to fetch from. For example, my logs are written to /Users/$name/some/directory/application.log.

Can you please advise as to how I configure filebeat-docker-compose.yml to pick up the local logs?
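
A sketch of one common approach (hedged; the container mount point is an assumption, and the host path must be visible to the Docker daemon on the node where Filebeat runs): bind-mount the host log directory into the Filebeat service, then point a log input at it.

    # In the filebeat service of filebeat-docker-compose.yml:
    volumes:
      - /Users/$name/some/directory:/var/log/app:ro   # host log dir, mounted read-only

    # In the Filebeat configuration:
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/app/application.log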

Thanks in advance.

Elasticsearch fails to start

I have started experimenting with this, and I am facing what seems to be a trivial issue that is just hard for me to solve.

"stacktrace": ["org.elasticsearch.bootstrap.StartupException: ElasticsearchException[failed to bind service]; nested: IndexFormatTooNewException[Format version is not supported (resource BufferedChecksumIndexInput(SimpleFSIndexInput(path=\"/usr/share/elasticsearch/data/nodes/0/_state/segments_2\"))): 10 (needs to be between 7 and 9)];",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:174) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:127) ~[elasticsearch-cli-7.7.0.jar:7.7.0]",
"at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.7.0.jar:7.7.0]",
"at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.7.0.jar:7.7.0]",
"Caused by: org.elasticsearch.ElasticsearchException: failed to bind service",
"at org.elasticsearch.node.Node.<init>(Node.java:638) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.node.Node.<init>(Node.java:264) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:227) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:227) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:393) ~[elasticsearch-7.7.0.jar:7.7.0]",
"at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.7.0.jar:7.7.0]",
"... 6 more",

Only the Elasticsearch container fails to start; everything else starts up and waits for Elasticsearch.
Does anyone have any idea what I am missing?
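
A hedged reading of the stack trace: IndexFormatTooNewException means the data directory was last written by a newer Elasticsearch/Lucene version than the 7.7.0 image now trying to open it, and Elasticsearch cannot downgrade data. The options are to run the newer version again or, if the data is disposable, to remove the old volume. A sketch of the latter (the volume name is an assumption based on a stack named elastic):

    docker stack rm elastic
    docker volume rm elastic_elasticsearch   # warning: deletes the old index data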

Deploying Beats - Why are indices not created?

Hey, I'm trying out your code, but I don't see filebeat-*, metricbeat-*, or any other Beat index when I curl for indices (per this line in the README: "Running the following command should print elasticsearch index and one of the rows should have filebeat-*"). Any ideas why this could happen?
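
For context, the index listing referred to can be produced with the _cat API (a sketch reusing the credential variables from the health check earlier on this page):

    curl -XGET -u ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} ${ELASTICSEARCH_HOST}':9200/_cat/indices?v'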

