How To Install Elasticsearch, Logstash, and Kibana (Elastic Stack) in a WSL Docker container (Ubuntu 20.04)
In this document, you'll learn how to set up an ELK stack in a Docker container running Ubuntu 20.04. These steps also apply to setting up ELK directly on WSL2, with minimal adjustments: just skip the Docker installation.
# Remove any conflicting packages first:
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
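Before continuing, it's worth confirming the Docker CLI installed correctly. A minimal sanity check (the fallback message is just illustrative):

```shell
# Print the Docker version if the CLI is on PATH, otherwise a hint.
if command -v docker >/dev/null 2>&1; then
  docker_check=$(docker --version)
else
  docker_check="Docker CLI not found on PATH"
fi
echo "$docker_check"
```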
Pull an ubuntu:20.04 image from Docker's Official Images:
sudo docker pull ubuntu:20.04
Create a directory to mount into the Docker container: (optional)
mkdir -p ~/security-analytics/elk_share
Create a Docker container with the mounted directory (elk_share), expose port 9200 (Elasticsearch) and 5601 (Kibana), and raise the memlock and stack ulimits for Elasticsearch:
sudo docker run -it \
--name elk-stack \
-d -p 9200:9200 -p 5601:5601 \
--shm-size=1g \
--ulimit memlock=-1 \
--ulimit stack=67108864 \
-v ~/security-analytics/elk_share:/elk_share \
ubuntu:20.04
Start the docker container:
sudo docker start elk-stack
sudo docker exec -it elk-stack /bin/bash
apt update && apt upgrade -y && apt install -y nginx wget gpg nano systemctl curl net-tools
In this tutorial, we'll go through the steps to install rsyslog and configure it to log shell commands to the syslog. Then we'll use Filebeat to send the logs to Logstash.
First, we need to install rsyslog, which is a powerful logging system for Linux.
sudo apt install rsyslog
- Open the .bashrc file in a text editor. You can use nano or any other text editor of your choice.
nano ~/.bashrc
- Add the following line at the end of the file to set the PROMPT_COMMAND environment variable. This command will log every command to syslog as it's executed.
PROMPT_COMMAND='history -a >(logger -t "[$USER] $SSH_CONNECTION")'
- Save the file and exit the text editor. (for nano: Ctrl+S, then Ctrl+X)
- To apply the changes immediately, source the .bashrc file:
source ~/.bashrc
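To see how the PROMPT_COMMAND line works, here is a miniature, self-contained sketch of the same mechanism: process substitution feeds the shell's output to another command. In this demo a sed prefix writing to a temp file stands in for logger, and "demo-user" is a hypothetical tag:

```shell
# Simulate `history -a >(logger -t "...")` without touching syslog.
out=$(mktemp)
echo "ls -la" > >(sed "s/^/[demo-user] /" > "$out")
sleep 1   # process substitution runs asynchronously; give sed a moment
cat "$out"
```

In the real line, `history -a` appends new history entries, and logger tags them with the user and SSH connection details before writing to syslog.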
Now that you have enabled command logging, you can view the logged commands in the syslog. Use the cat command to display the syslog:
cat /var/log/syslog
You should see entries that look like:
Oct 7 12:34:56 hostname [username] [ip_address]: command_here
This format includes the date and time, the username, the IP address (if applicable), and the executed command.
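As a quick illustration of that format, here is how the fields of a sample entry (illustrative values) can be pulled apart with awk:

```shell
# Sample line in the syslog format shown above (values are made up).
line='Oct  7 12:34:56 myhost [alice] [10.0.0.5 52512 10.0.0.1 22]: ls -la'
ts=$(echo "$line" | awk '{print $1, $2, $3}')        # date and time
host=$(echo "$line" | awk '{print $4}')              # hostname
user=$(echo "$line" | awk '{print $5}' | tr -d '[]') # bracketed username
echo "timestamp=$ts host=$host user=$user"
```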
apt-get install openjdk-8-jdk -y
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
apt-get install apt-transport-https -y
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-8.x.list
apt-get update
apt-get install elasticsearch -y
nano /etc/elasticsearch/elasticsearch.yml
Add/uncomment the following lines:
network.host: localhost
http.port: 9200
discovery.type: single-node
As we're using discovery.type: single-node, comment out the following line at the bottom of the file:
#cluster.initial_master_nodes: ["……"]
By default, the JVM heap size is set to 1 GB. It is recommended to set it to no more than half of your total memory.
nano /etc/elasticsearch/jvm.options
Set the heap size by uncommenting the following lines. Here, I've configured it to be 4 GB:
-Xms4g
-Xmx4g
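If you'd rather compute the value than guess, this sketch derives half of total memory from /proc/meminfo (Linux-only; the 4 GB fallback and the 1 GB floor are my own assumptions):

```shell
# Read total memory in kB; fall back to an assumed 4 GB if /proc/meminfo is absent.
total_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo 2>/dev/null)
total_kb=${total_kb:-4194304}
half_gb=$(( total_kb / 1024 / 1024 / 2 ))
[ "$half_gb" -lt 1 ] && half_gb=1   # floor at 1 GB
echo "Suggested heap settings: -Xms${half_gb}g -Xmx${half_gb}g"
```

Elastic also recommends keeping the heap comfortably below ~32 GB so the JVM can use compressed object pointers.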
If the ELK stack is being set up in Docker, or as the root user in WSL2, do this first to avoid file-permission errors:
mkdir -p /var/run/elasticsearch
chown elasticsearch:elasticsearch /var/run/elasticsearch
When you start Elasticsearch for the first time, a password is generated for the elastic user and TLS is automatically configured for you.
systemctl start elasticsearch.service
#Enable Elasticsearch to start on boot:
systemctl enable elasticsearch.service
#Check Elasticsearch status:
systemctl status elasticsearch.service
/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic
Press 'Y' to print the password in the terminal when prompted. This will be your elastic user password.
As the generated password might be difficult to remember and inconvenient to log in with later, we'll create a superuser role account.
/usr/share/elasticsearch/bin/elasticsearch-users useradd <your-newacct-username> -p <your-newacct-password> -r superuser
There are many more possible roles available, but we're sticking with the same role as the elastic user.
In /etc/elasticsearch/elasticsearch.yml, if xpack.security.enabled: true (the default), security is enabled:
curl -k -u elastic:<generated-password> https://localhost:9200
# OR
curl -k -u <your-newacct-username>:<your-newacct-password> https://localhost:9200
Otherwise, if /etc/elasticsearch/elasticsearch.yml has xpack.security.enabled: false, security is disabled:
curl -X GET "localhost:9200"
The name of your system should be displayed, with elasticsearch as the cluster name. This indicates that Elasticsearch is functional and listening on port 9200.
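A scripted version of this check can be handy in later troubleshooting; ES_USER and ES_PASS here are placeholders for your actual credentials:

```shell
# Placeholders: substitute your generated elastic password or superuser account.
ES_USER="elastic"
ES_PASS="changeme"
if curl -ksf -u "$ES_USER:$ES_PASS" https://localhost:9200 >/dev/null 2>&1; then
  status="Elasticsearch is reachable on port 9200"
else
  status="Elasticsearch is not reachable on port 9200"
fi
echo "$status"
```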
apt-get install kibana
nano /etc/kibana/kibana.yml
Edit the following lines:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
Then start Kibana:
systemctl start kibana
systemctl enable kibana
systemctl status kibana
Inside Docker terminal: curl -k -u <your-newacct-username>:<your-newacct-password> "localhost:5601"
In WSL terminal: curl -k -u <your-newacct-username>:<your-newacct-password> "<Your-Docker-IP-Address>:5601"
On Windows host machine: In a local web browser -- visit localhost:5601
**Inside Docker terminal:**
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token --scope kibana
#<obtain an enrollment token>
Paste the enrollment token in the web browser on your local Windows machine.
Inside Docker terminal: run the following command to obtain the verification code:
/usr/share/kibana/bin/kibana-verification-code
Paste the verification code in the web browser on your local Windows machine.
Use the elastic account or the new superuser account that you created previously, and its corresponding password.
apt-get install logstash
systemctl start logstash
systemctl enable logstash
systemctl status logstash
chmod 777 -R /usr/share/logstash/ # Optional; for file permissions issues.
Logstash pipeline configuration files live in /etc/logstash/conf.d/.
nano /etc/logstash/logstash.yml
Uncomment the two lines pipeline.batch.size: 125 and pipeline.batch.delay: 50, and add this line:
path.config: /etc/logstash/conf.d
nano /etc/logstash/conf.d/02-beats-input.conf
input {
beats {
port => 5044
}
}
nano /etc/logstash/conf.d/10-syslog-filter.conf
filter {
if [fileset][module] == "system" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{GREEDYMULTILINE:[system][auth][message]}" }
pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
remove_field => "message"
}
date {
match => [ "[system][auth][timestamp]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
geoip {
source => "[system][auth][ssh][ip]"
target => "[system][auth][ssh][geoip]"
}
}
else if [fileset][name] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{GREEDYMULTILINE:[system][syslog][message]}" }
pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
remove_field => "message"
}
date {
match => [ "[system][syslog][timestamp]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
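The grok patterns above expect classic syslog lines. As a rough shape check (a plain regex approximating SYSLOGTIMESTAMP and SYSLOGHOST, not the real grok engine), you can verify a sample line locally:

```shell
# Illustrative auth-log line; the regex mimics "timestamp hostname rest-of-message".
sample='Oct  7 12:34:56 myhost sshd[1234]: Accepted password for alice'
if echo "$sample" | grep -qE '^[A-Z][a-z]{2} +[0-9]{1,2} [0-9]{2}:[0-9]{2}:[0-9]{2} [^ ]+ '; then
  result="sample matches"
else
  result="sample does not match"
fi
echo "$result"
```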
nano /etc/logstash/conf.d/30-elasticsearch-output.conf
IMPORTANT!! - Modify according to your username and password.
output {
elasticsearch {
hosts => [ "https://localhost:9200" ]
ssl_certificate_verification => false
user => "elastic" # or your new super user account
password => "<your-password>" # enter your password
manage_template => false
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
}
}
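For intuition, the index pattern in the output above produces one index per beat, per version, per day. With hypothetical metadata values, the expansion looks like this:

```shell
beat="filebeat"      # hypothetical value of [@metadata][beat]
version="8.10.0"     # hypothetical value of [@metadata][version]
index="${beat}-${version}-$(date +%Y.%m.%d)"
echo "$index"
```

Daily indices like this are what the filebeat-* data view in Kibana will match later.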
Alternatively, a single self-contained pipeline can be placed in /etc/logstash/conf.d/logstash-simple.conf:
input {
beats {
port => "5044"
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}"}
}
}
output {
elasticsearch {
hosts => [ "https://localhost:9200" ]
ssl_certificate_verification => false
user => "elastic"
password => "<password>"
}
}
Validate the configuration by running Logstash as the logstash user (the -t flag tests the config files):
su -s /bin/bash -c '/usr/share/logstash/bin/logstash --path.settings /etc/logstash -t' logstash
There are many different Beats available, such as Filebeat, Winlogbeat, Metricbeat, and Packetbeat. For this tutorial, we'll use Filebeat, and we'll send logs to Logstash instead of directly to Elasticsearch.
apt-get install filebeat
nano /etc/filebeat/filebeat.yml
Under the Elasticsearch output section, comment out the following lines:
# output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
Under the Logstash output section, remove the hash sign (#) from the following two lines:
output.logstash:
  hosts: ["localhost:5044"]
filebeat setup -e
filebeat modules list
# OR
ls -l /etc/filebeat/modules.d
filebeat modules disable <module-name>
# OR
mv /etc/filebeat/modules.d/<module-name>.yml /etc/filebeat/modules.d/<module-name>.yml.disabled
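The rename trick works because Filebeat only loads *.yml files from modules.d. A harmless demo in a scratch directory ("example" is a hypothetical module name):

```shell
# Create a throwaway directory with a fake module file, then disable it by renaming.
dir=$(mktemp -d)
touch "$dir/example.yml"
mv "$dir/example.yml" "$dir/example.yml.disabled"
listing=$(ls "$dir")
echo "$listing"
```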
filebeat modules enable system
nano /etc/filebeat/filebeat.yml
In the Filebeat inputs section, leave enabled: false (the system module enabled above will provide the input).
nano /etc/filebeat/modules.d/system.yml
filebeat test config
systemctl start filebeat
If you’ve set up your Elastic Stack correctly, Filebeat will begin shipping your syslog and authorization logs to Logstash, which will then load that data into Elasticsearch.
To verify that Elasticsearch is indeed receiving this data, query the Filebeat index with this command:
curl -k -u <username>:<password> https://localhost:9200/filebeat-*/_search?pretty
Open the Kibana main menu and select Stack Management > Kibana: Data Views > Create data view.
You should be able to see a filebeat index. Set a name (e.g. Filebeat) and the index pattern filebeat-*, then Save.
Then go to Kibana Menu > Discover and choose Filebeat (or whatever name you've given) from the dropdown.
systemctl start nginx elasticsearch kibana logstash filebeat
systemctl status nginx elasticsearch kibana logstash filebeat
systemctl enable nginx elasticsearch kibana logstash filebeat
systemctl restart nginx elasticsearch kibana logstash filebeat
nano /etc/filebeat/filebeat.yml
nano /etc/kibana/kibana.yml
nano /etc/logstash/logstash.yml
nano /etc/elasticsearch/elasticsearch.yml
cat /var/log/filebeat/*.ndjson
cat /var/log/kibana/kibana.log
cat /var/log/logstash/logstash-plain.log
cat /var/log/elasticsearch/elasticsearch.log
curl -k -u elastic:<password> "https://localhost:9200/_cat/indices?v&s=index&pretty"
Or in Kibana: Dev Tools > Console > GET /_cat/indices?v&s=index
For reference, the built-in roles you can assign with elasticsearch-users include:
1) reporting_user
2) logstash_admin
3) machine_learning_admin
4) kibana_user
5) rollup_user
6) rollup_admin
7) remote_monitoring_collector
8) superuser
9) transport_client
10) viewer
11) kibana_admin
12) watcher_admin
13) remote_monitoring_agent
14) monitoring_user
15) ingest_admin
16) transform_admin
17) logstash_system
18) apm_user
19) watcher_user
20) beats_system
21) data_frame_transforms_user
22) snapshot_user
23) beats_admin
24) apm_system
25) kibana_system
26) enrich_user
27) machine_learning_user
28) transform_user
29) data_frame_transforms_admin
30) editor