Setup ELK 5 (ElasticSearch 5, Logstash 5, Kibana 5) on Ubuntu
Set up centralised logging using the ELK stack: install ElasticSearch, Logstash, and Kibana on Ubuntu.
Working Strategy of the ELK Stack in this Tutorial
- ElasticSearch: Stores all the logs.
- Logstash: Processes incoming logs from client servers. Here we will only parse system logs.
- Kibana: Web interface for searching and visualizing logs, which will be proxied through Nginx.
- Filebeat: Log-shipping agent installed on the client servers; it sends their logs to Logstash.
Terms:
- ELK Server: The server on which the ELK stack (ElasticSearch, Logstash and Kibana) will be installed.
- Client Server: A server from which we want to gather logs and on which Filebeat will be installed.
Prerequisites:
- An Ubuntu server with sudo privileges.
- At least 4 GB RAM and 2 CPUs.
- One or more client servers
1. Install JAVA 8
ElasticSearch and Logstash both run on Java, so install it first. Add the Oracle Java PPA to apt:
sudo add-apt-repository -y ppa:webupd8team/java
Update the apt package lists:
sudo apt-get update
Now, install Java 8:
sudo apt-get -y install oracle-java8-installer
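To verify that Java installed correctly, check the version; the output should mention version 1.8:
java -version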
2. Install ElasticSearch 5.2.x
sudo apt-get update
Now, download the ElasticSearch Debian Package:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.0.deb
sudo dpkg -i elasticsearch-5.2.0.deb
ElasticSearch is now installed in /usr/share/elasticsearch/, with its configuration files placed in /etc/elasticsearch and its init script added in /etc/init.d/elasticsearch.
Now, edit ElasticSearch's configuration file to make it listen on localhost only, so that strangers cannot read or tamper with your data over the network. First, open the configuration file:
sudo nano /etc/elasticsearch/elasticsearch.yml
Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:
network.host: localhost
Now, save and close this file. To start ElasticSearch automatically when the server boots up, run:
sudo systemctl enable elasticsearch.service
ElasticSearch is now set up.
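The package does not start the service right away, so start it now and give it a quick check; this probe assumes ElasticSearch's default HTTP port, 9200:
sudo systemctl start elasticsearch
curl -XGET 'http://localhost:9200'
ElasticSearch should reply with a small JSON document containing its node name and version.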
3. Install Kibana
First, import the Elastic GPG key into apt:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Then install apt-transport-https so apt can fetch packages over HTTPS:
sudo apt-get install apt-transport-https
Now, save the repository definition:
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
Now, update the package lists and install Kibana:
sudo apt-get update && sudo apt-get install kibana
Now Kibana is installed; let's configure it. Open the configuration file:
sudo nano /etc/kibana/kibana.yml
Find the line that specifies server.host, uncomment it, and make sure its value is "localhost" so it looks like this:
server.host: "localhost"
This setting allows Kibana to be reached from localhost only, which is fine because we will use an Nginx reverse proxy to allow external access.
Now, start the Kibana service and enable it so that it starts automatically whenever the server boots up:
sudo systemctl daemon-reload
sudo systemctl enable kibana
sudo systemctl start kibana
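As an optional sanity check before putting a proxy in front of Kibana, you can probe its default port, 5601; once Kibana has finished starting, it answers with HTTP response headers:
curl -I http://localhost:5601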
4. Install Nginx
Since we configured Kibana to listen on localhost, we will set up a reverse proxy via Nginx to allow external access to it. Install it:
sudo apt-get -y install nginx
For security purposes, protect Kibana behind Nginx password authentication. First refresh your sudo credentials, then create an admin user (called kibanaadmin here) with openssl:
sudo -v
echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users
Here kibanaadmin is the username; change it to whatever you like. The command will prompt you for a password. Remember the password you enter here, as you will need it to access the Kibana interface.
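You can confirm that the user was written; the password is stored as a hash, not in plain text:
cat /etc/nginx/htpasswd.users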
Now, edit the Nginx default configuration file. Open it first:
sudo nano /etc/nginx/sites-available/default
Delete the file's contents, and paste the following code block into the file. Be sure to update the server_name to match your server's name or public IP address:
server {
    listen 80;
    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Save and exit, then restart the Nginx server:
sudo systemctl restart nginx
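If the restart fails, Nginx can check the configuration for syntax errors and point at the offending line:
sudo nginx -t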
Now, access your server's domain name or public IP address and enter the credentials; you will see the Kibana web interface.
5. Install Logstash
To install Logstash, run:
sudo apt-get install logstash
Now Logstash is installed, but we still need to configure it. Logstash configuration files are written in a JSON-like format and reside in /etc/logstash/conf.d. The configuration consists of three sections: inputs, filters, and outputs.
Let's create a configuration file called 02-beats-input.conf and set up our "filebeat" input:
sudo nano /etc/logstash/conf.d/02-beats-input.conf
Insert the following code:
input {
  beats {
    port => 5044
  }
}
Save and exit. Now let's create a configuration file called 10-syslog-filter.conf, where we will add a filter for syslog messages. Here we filter the logs that Filebeat tags as "syslog" and use the grok parser to make them structured and queryable.
sudo nano /etc/logstash/conf.d/10-syslog-filter.conf
Enter the following filter configuration code:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
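To see what this filter does, take a hypothetical line from auth.log such as:
Feb 15 06:25:01 webserver sshd[1234]: Accepted publickey for root from 10.0.0.5
The grok pattern splits it into syslog_timestamp ("Feb 15 06:25:01"), syslog_hostname ("webserver"), syslog_program ("sshd"), syslog_pid ("1234"), and syslog_message ("Accepted publickey for root from 10.0.0.5"), and the date filter then uses the parsed timestamp as the event's @timestamp.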
Save and exit. Lastly, we will create an output file named 30-elasticsearch-output.conf:
sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf
Insert the following output configuration code:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Save and exit. This output section tells Logstash to store the parsed logs in ElasticSearch, in daily indices named after the Beat that shipped them. Now, test your Logstash configuration using:
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/
It should output "Configuration OK". If not, correct the error it displays and continue.
Now, restart logstash, so that all the configurations that we have added will work.
sudo systemctl restart logstash
sudo systemctl enable logstash
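To confirm that Logstash is now listening for Beats connections, you can check that something is bound to port 5044 (assuming the ss utility from iproute2 is available, as it is on recent Ubuntu releases):
sudo ss -tlnp | grep 5044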
Logstash configuration is done here.
6. Load Kibana Dashboards
Move to your home directory and download the sample Beats dashboards package:
cd ~
curl -L -O https://download.elastic.co/beats/dashboards/beats-dashboards-1.2.2.zip
Install unzip so we can extract the archive:
sudo apt-get -y install unzip
Now, extract the contents:
unzip beats-dashboards-*.zip
And load the sample dashboards, visualizations and Beats index patterns into Elasticsearch with these commands:
cd beats-dashboards-*
./load.sh
It will load four index patterns:
- packetbeat-*
- topbeat-*
- filebeat-*
- winlogbeat-*
7. Load Filebeat Index Template in ElasticSearch
Move to your home directory and download the Filebeat index template:
cd ~
curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
If everything went fine, you will see "acknowledged" : true in the output.
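You can also read the template back to double-check that it was stored:
curl -XGET 'http://localhost:9200/_template/filebeat?pretty'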
Your ELK server is now all set up. Next, we need to set up the client servers to send logs to the ELK server. Let's do that.
8. Configure Filebeat on Client Server
echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
Now, install the Filebeat package:
sudo apt-get update
sudo apt-get install filebeat
Next, configure Filebeat to connect to Logstash on our ELK server. Open its configuration file:
sudo nano /etc/filebeat/filebeat.yml
Edit it so that it contains the following configuration (written in Filebeat 5 syntax):
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/auth.log
    - /var/log/syslog
    # - /var/log/*.log
  document_type: syslog

filebeat.registry_file: /var/lib/filebeat/registry

output.logstash:
  hosts: ["elk_server_private_ip:5044"]
  bulk_max_size: 1024

logging.files:
  rotateeverybytes: 10485760 # = 10MB
In place of elk_server_private_ip, put your ELK server's private IP address. Now restart Filebeat to put our changes into place:
sudo systemctl restart filebeat
sudo systemctl enable filebeat
Now Filebeat is sending syslog and auth.log to Logstash on your ELK server! Repeat this section for all of the other servers from which you wish to gather logs.
To test the Filebeat installation, run this command on the ELK server:
curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
Since Filebeat on the client server is sending logs to our ELK server, you should get log data in the output. If your output shows 0 total hits, then there is something wrong with your configuration; check it and correct it.
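Another quick way to see the daily indices Logstash is creating (named like filebeat-YYYY.MM.dd, per the output configuration above) is ElasticSearch's _cat API:
curl -XGET 'http://localhost:9200/_cat/indices?v'
Now, continue to the next step.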