elastic-central-logging is an example project that demonstrates the use of the Elastic Stack to provide central logging functionality.
The project leverages Docker technology to deliver a containerized Elastic Stack, orchestrated using Docker Compose.
To simulate real-world usage of the Elastic Stack, Yelp's elastalert is included to apply alerting thresholds to the aggregated logging data. If a threshold is breached, elastalert will send an alert email (using a dummy SMTP server, FakeSMTP).
"A picture paints a thousand words" - so here is a visual representation of the project architecture.
elastic-central-logging is composed of the following components, each of which is deployed within a component-specific Docker container.
Component | Description |
---|---|
Elasticsearch | A distributed, RESTful search and analytics engine capable of solving a growing number of use cases. The heart of the Elastic Stack. |
Kibana | Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack. |
Logstash | Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to Elasticsearch. |
Filebeat | Filebeat offers a lightweight way to forward and centralize logs and files. |
Metricbeat | Collects metrics from your systems and services. From CPU to memory, Redis to NGINX, and much more, Metricbeat is a lightweight way to send system and service statistics. |
Elasticalert | Elasticalert uses the Yelp elastalert framework for alerting on anomalies, spikes, or other patterns of interest from data in Elasticsearch. |
Servicemock | A Spring Boot application that generates multiple log files with random log data with different log levels and exceptions. |
The process flow of the project can be described as follows:

- The servicemock component generates log files in the servicemock/logs directory. 3 separate log files are generated with the following format:
  - env.SERVICEMOCK_NAME-ENGLISH-$HOSTNAME.log
  - env.SERVICEMOCK_NAME-ESPERANTO-$HOSTNAME.log
  - env.SERVICEMOCK_NAME-LATIN-$HOSTNAME.log
- The filebeat component monitors the log files in the servicemock/logs directory and ships any new updates contained within the logs to the logstash component.
- The logstash component ingests the log file data provided by the filebeat component and stashes (persists) the data in the elasticsearch component.
- The metricbeat component gathers system metric data (CPU usage, RAM usage, disk I/O etc.) and ships the data directly to the elasticsearch component.
- The elasticsearch component is the heart of the Elastic Stack and the Source of Truth for the elastic-central-logging project. This component is the endpoint where all data in the project is stored.
- The kibana component is used to visualize the data stored in the elasticsearch component.
- The elasticalert component is used to execute rules against the elasticsearch component. If any of these rules are violated, the elasticalert component sends an email alert notification.
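The log file naming scheme above can be sketched in shell. This is purely illustrative; the default value of SERVICEMOCK_NAME is an assumption here, not taken from the project configuration:

```shell
# Illustrate the log file names that servicemock produces.
SERVICEMOCK_NAME=servicemock   # assumed default of env.SERVICEMOCK_NAME
HOSTNAME_VALUE=$(hostname)
for lang in ENGLISH ESPERANTO LATIN; do
  echo "${SERVICEMOCK_NAME}-${lang}-${HOSTNAME_VALUE}.log"
done
```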
The first step in getting started with the elastic-central-logging project is to grab the source code.
git clone --progress --verbose https://github.com/damianmcdonald/elastic-central-logging.git elastic-central-logging
With the source code downloaded, the next step is to ensure that the target system meets the minimum prerequisites.
From a hardware specifications perspective, the minimum requirements to run the project are:
- 64-bit system
- Processor: 1 gigahertz (GHz) or faster processor
- RAM: 16 gigabytes (GB)
- Disk: 20 GB free space
At a minimum, the following software is required to support the execution of the elastic-central-logging project.

- Docker version 18+
- Docker Compose version 1.24+
- OpenJDK version 1.8+
- FakeSMTP latest

Windows and Mac users get Docker Compose installed automatically with Docker for Windows/Mac. Linux users can read the install instructions or can install via pip:

pip install docker-compose
Windows users must set the following 2 ENV vars:

- COMPOSE_CONVERT_WINDOWS_PATHS=1
- PWD=/path/to/checkout/for/stack-docker, for example: /c/Users/myusername/elastic/stack-docker

Note: your paths must be in the form of /c/path/to/place; using C:\path\to\place will not work.

You can set these in three ways:

- Temporarily, in PowerShell: $Env:COMPOSE_CONVERT_WINDOWS_PATHS=1
- Permanently, in PowerShell: [Environment]::SetEnvironmentVariable("COMPOSE_CONVERT_WINDOWS_PATHS", "1", "Machine"). Note: you will need to refresh or open a new PowerShell session for this env var to take effect.
- In System Properties, add the environment variables.
At least 8 GiB of RAM is required for the containers. Windows and Mac users must configure their Docker virtual machine to have more than the default 2 GiB of RAM.

Linux users must set the following configuration as root, because by default the amount of virtual memory is not enough:

sysctl -w vm.max_map_count=262144
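The current value of the kernel setting can be checked before applying it; a minimal sketch, assuming a Linux host with /proc mounted:

```shell
# Check whether vm.max_map_count already satisfies the Elasticsearch requirement.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
  echo "vm.max_map_count=$current (OK)"
else
  echo "vm.max_map_count=$current (too low, run: sudo sysctl -w vm.max_map_count=$required)"
fi
```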
An example prerequisite installation script is provided that has been tested to work on a clean Ubuntu 18.04 instance.
# global variables
DEVEL_DIR=/opt/devel
DISTRIB_DIR=/opt/devel/distrib
SOURCE_DIR=/opt/devel/source
TOOLS_DIR=/opt/devel/tools
FAKESMTP_BINARY_URL="http://nilhcem.github.com/FakeSMTP/downloads/fakeSMTP-latest.zip"
FAKESMTP_BINARY_FILE=fakeSMTP-latest.zip
FAKESMTP_EXTRACTED_DIR=fakeSMTP-2.0.jar
# create base directories
sudo mkdir -p $DEVEL_DIR
sudo mkdir -p $DISTRIB_DIR
sudo mkdir -p $SOURCE_DIR
sudo mkdir -p $TOOLS_DIR
sudo chown -R ${USER} $DEVEL_DIR
sudo chgrp -R ${USER} $DEVEL_DIR
sudo chmod -R 775 $DEVEL_DIR
# prepare ubuntu
sudo apt update
sudo apt -y upgrade
sudo apt -y dist-upgrade
# install pre-requisites
sudo apt -y install python3 unzip build-essential dkms linux-headers-$(uname -r) apt-transport-https ca-certificates curl wget software-properties-common openjdk-8-jdk ncftp git
# install and configure docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
sudo apt -y install docker-ce
sudo usermod -aG docker ${USER}
sudo cp -v /etc/sysctl.conf /etc/sysctl.conf.backup
echo "#### Added to increase memory for Docker" | sudo tee -a /etc/sysctl.conf
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
# install and configure docker-compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# install fakesmtp
cd $DISTRIB_DIR
wget $FAKESMTP_BINARY_URL
unzip $FAKESMTP_BINARY_FILE
mkdir -p $TOOLS_DIR/fakesmtp
mv -v $DISTRIB_DIR/$FAKESMTP_EXTRACTED_DIR $TOOLS_DIR/fakesmtp/$FAKESMTP_EXTRACTED_DIR
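After running the installation script, a quick sanity check can confirm the tooling is on the PATH. This loop is illustrative only; the tool list mirrors the prerequisites above:

```shell
# Verify that each prerequisite tool is available on the PATH.
for tool in docker docker-compose java git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```

Note that the usermod -aG docker change in the script only takes effect after logging out and back in, so docker may still require sudo in the current session.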
With the prerequisites fulfilled, you are now ready to run the stack. We will assume that $PROJECT_ROOT refers to the directory where you cloned the elastic-central-logging project.
- Run the FakeSMTP server; we will need this to receive the alert emails.

  cd $PROJECT_ROOT/fakesmtp
  ./run-fakesmtp.sh

- Run the servicemock component to start generating log files. The log files will be written to $PROJECT_ROOT/servicemock/logs.

  cd $PROJECT_ROOT/servicemock
  ./run-servicemock.sh

- Spin up the stack using docker-compose.

  docker-compose up

- Be patient! The stack will take about 3-5 minutes to be fully operational.
If everything has gone well, you will eventually (make sure to wait between 5-10 minutes) start receiving alert emails into FakeSMTP.
Some URLs that can be accessed to check different elements of the stack:
URL | Purpose |
---|---|
http://localhost:9200 | Base endpoint of Elasticsearch. This URL can be used to check that Elasticsearch is healthy. |
http://localhost:9200/_cat/indices?v&pretty | This endpoint will allow you to check the Elasticsearch indexes. |
http://localhost:5601 | Base endpoint of Kibana. This URL can be used to visualize the data stored in Elasticsearch. |
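The endpoints in the table above can also be probed from the command line. The probe helper below is illustrative, not part of the project, and assumes the default ports:

```shell
# Report whether each stack endpoint answers over HTTP.
probe() {
  if curl -fs --max-time 5 "$1" >/dev/null 2>&1; then
    echo "UP   $1"
  else
    echo "DOWN $1"
  fi
}
probe http://localhost:9200
probe http://localhost:5601
```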
You can also check the health of the stack by listing the running Docker processes from a terminal.
docker ps
Each running container will list its health status (Healthy or Unhealthy).
Given that the elastic-central-logging project is orchestrated using Docker Compose, you can view all the key details of the project by reading the docker-compose.yml file.
version: '3.6'
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
container_name: elasticsearch
ports: ['9200:9200']
networks: ['stack']
volumes:
- './elasticsearch/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
- './elasticsearch/data:/usr/share/elasticsearch/data:rw'
healthcheck:
test: curl -s https://localhost:9200 >/dev/null; if [[ $$? == 52 ]]; then echo 0; else echo 1; fi
interval: 30s
timeout: 10s
retries: 5
kibana:
image: docker.elastic.co/kibana/kibana:7.0.1
container_name: kibana
ports: ['5601:5601']
networks: ['stack']
volumes:
- './kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
- './kibana/data:/usr/share/kibana/data:rw'
healthcheck:
test: curl -s https://localhost:5601 >/dev/null; if [[ $$? == 52 ]]; then echo 0; else echo 1; fi
interval: 30s
timeout: 10s
retries: 5
logstash:
image: docker.elastic.co/logstash/logstash:7.0.1
container_name: logstash
ports: ['5044:5044']
networks: ['stack']
volumes:
- './logstash/conf:/usr/share/logstash/config:ro'
- './logstash/pipeline:/usr/share/logstash/pipeline:ro'
healthcheck:
test: bin/logstash -t
interval: 60s
timeout: 50s
retries: 5
metricbeat:
image: docker.elastic.co/beats/metricbeat:7.0.1
container_name: metricbeat
volumes:
- /proc:/hostfs/proc:ro
- /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro
- /:/hostfs:ro
command: metricbeat --strict.perms=false -e
networks: ['stack']
volumes:
- './metricbeat/conf/metricbeat.yml:/usr/share/metricbeat/metricbeat.yml:ro'
healthcheck:
test: metricbeat test config
interval: 30s
timeout: 15s
retries: 5
filebeat:
image: docker.elastic.co/beats/filebeat:7.0.1
container_name: filebeat
volumes:
- './filebeat/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro'
- './servicemock/logs:/var/log:ro'
command: filebeat --strict.perms=false -e
networks: ['stack']
healthcheck:
test: filebeat test config
interval: 30s
timeout: 15s
retries: 5
elasticalert:
image: damianmcdonald/elasticalert:1.0.0
container_name: elasticalert
environment:
- ELASTICSEARCH_HOST=localhost
- ELASTICSEARCH_PORT=9200
volumes:
- './elasticalert/conf:/app/config:ro'
network_mode: "host"
healthcheck:
test: if ps -aux | grep elastalert ; then echo 0; else echo 1; fi
interval: 30s
timeout: 10s
retries: 5
networks: {stack: {}}
The Elastic Stack components all use the official Docker images provided by Elastic.
The Servicemock and Elastalert components use custom Docker images. For more details of the Servicemock project please refer to the respective GitHub project. For more details of the Elastalert project please refer to the respective GitHub project.
All components of the elastic-central-logging project have their respective conf folders where configuration files required by the components are defined.
├── elasticalert
│ └── conf
│ ├── config.yaml
│ └── rules
│ ├── error_checks.yaml
│ └── exception_checks.yaml
├── elasticsearch
│ ├── conf
│ │ └── elasticsearch.yml
│ └── data
├── fakesmtp
│ ├── bin
│ │ └── fakeSMTP-2.0.jar
│ ├── mail-output
│ └── run-fakesmtp.sh
├── filebeat
│ ├── conf
│ │ └── filebeat.yml
│ └── create-dashboards.sh
├── kibana
│ ├── conf
│ │ └── kibana.yml
│ ├── data
│ └── real-time-monitor-dashboard.json
├── logstash
│ ├── conf
│ │ ├── jvm.options
│ │ ├── log4j2.properties
│ │ ├── logstash.conf
│ │ ├── logstash.yml
│ │ ├── pipelines.yml
│ │ └── startup.options
│ └── pipeline
│ └── logstash.conf
├── metricbeat
│ ├── conf
│ │ └── metricbeat.yml
│ └── create-dashboards.sh
└── servicemock
├── conf
│ ├── application.properties
│ └── log4j2.xml
├── logs
└── run-servicemock.sh
Configuration of the Elastic components uses the standard configuration approach as prescribed by Elastic.

Servicemock is a Spring Boot application that reads its configuration at launch time from the servicemock/conf directory. There are two configuration files:

- application.properties: the standard Spring Boot configuration file. In the context of the Servicemock component, this configuration file is used to define the frequency of logging events.
- log4j2.xml: this file specifies the logging behaviour (log file locations, log file names, logger levels etc.) of the Servicemock component.
The Elasticalert component is a containerized version of Yelp's elastalert.
Elastalert is a simple framework for alerting on anomalies, spikes, or other patterns of interest from data in Elasticsearch.
Elastalert is well documented so please be sure to take a look at the Elastalert Official Documentation.
Elastalert defines a config.yaml file which contains the basic details required to setup the rules engine.
rules_folder: /app/config/rules
run_every:
seconds: 15
buffer_time:
minutes: 15
es_host: localhost
es_port: 9200
writeback_index: elastalert_status
alert_time_limit:
days: 2
See the Configuration reference for additional details.
In addition to the config.yaml, Elastalert requires the creation of rule files which define the alert conditions that Elastalert will monitor. elastic-central-logging defines two simple frequency-based rules: error_checks.yaml and exception_checks.yaml. Extensive documentation is available describing the Rule Types and Configuration Options.
Furthermore, you can take a look at the example rules that are provided within the Elastalert project.
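For orientation, a frequency-based rule has roughly the following shape. This is a hypothetical sketch, not the project's actual error_checks.yaml; the rule name, index pattern, filter query, and recipient address are all assumptions:

```yaml
# Illustrative elastalert frequency rule: alert when 5 or more
# matching events arrive within a 1 minute window.
name: Example error frequency check   # hypothetical rule name
type: frequency
index: logstash-*                     # assumed index pattern
num_events: 5
timeframe:
  minutes: 1
filter:
- query:
    query_string:
      query: "ERROR"                  # assumed match on the log level text
alert:
- "email"
email:
- "alerts@example.com"                # dummy recipient; FakeSMTP captures the mail
```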
With the elastic-central-logging stack running, you can set up dashboards in Kibana to visualize the real-time monitoring data that is provided by the metricbeat component.
- Execute the metricbeat component's create-dashboards.sh script to create the required index and dashboards in Kibana.

  cd $PROJECT_ROOT/metricbeat
  ./create-dashboards.sh
- Open Kibana in your preferred web browser: http://localhost:5601
- Navigate to Management -> Kibana -> Index Patterns (click the icon highlighted in red).
- Select the metricbeat-* index and make it the default by clicking the * icon.
- Create a new dashboard (click the icon highlighted in red).
- Click the Add button to add some Visualizations to the dashboard.
- Add the following visualizations:
- CPU Usage Gauge [Metricbeat System] ECS
- System Load [Metricbeat System] ECS
- Memory Usage Gauge [Metricbeat System] ECS
- Memory Usage [Metricbeat System] ECS
- CPU Usage [Metricbeat System] ECS
- Load Gauge [Metricbeat System] ECS
- Disk Usage [Metricbeat System] ECS
- Swap usage [Metricbeat System] ECS
- Inbound Traffic [Metricbeat System] ECS
- Outbound Traffic [Metricbeat System] ECS
- Network Traffic (Packets) [Metricbeat System] ECS
- Hosts histogram by CPU usage [Metricbeat System] ECS
- The final dashboard will look something like the screenshot below.
- It is also possible to import this dashboard via the $PROJECT_ROOT/kibana/real-time-monitor-dashboard.json file.