Setting up ELK Stack on Ubuntu 16.04

ELK stands for Elasticsearch, Logstash, and Kibana; together they form a robust open-source solution for searching, analyzing, and visualizing data. Elasticsearch is a distributed, RESTful search and analytics engine based on Lucene; Logstash is a data processing pipeline for managing events and logs; and Kibana is a web application for visualizing data stored in Elasticsearch.

Requirements:

  1. Ubuntu 16.04 OS
  2. A user account with Sudo privileges
  3. Hardware configuration – 2 vCPUs or 1 x 4-core CPU; RAM: minimum 4 GB, recommended 8 GB

 


In this tutorial, I will show you how to install the ELK Stack on a single Ubuntu 16.04 server. Follow the steps below to get your own ELK server up and running.

Step 1: Update Ubuntu

Update the default Ubuntu installation and install the required packages:

sudo apt update && sudo apt -y upgrade

sudo apt install apt-transport-https software-properties-common wget curl

 

Step 2: Install Java

Java is required for the ELK stack deployment:

sudo add-apt-repository -y ppa:webupd8team/java

sudo apt update

sudo apt-get -y install oracle-java8-installer

Verify that Java is installed properly:

java -version

 

Step 3: Adding Repositories Key

Set up the Elastic repository by importing its GPG signing key and adding the package source:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

 

Step 4: Installing Elasticsearch

Setting up Elasticsearch and configuring it

sudo apt-get update && sudo apt-get install elasticsearch

Edit the configuration & restrict access

sudo vi /etc/elasticsearch/elasticsearch.yml

Uncomment and set the following values:

network.host: localhost

http.port: 9200

Restart Elasticsearch service

sudo service elasticsearch restart

Check if the Elasticsearch service is running properly

sudo service elasticsearch status

Start Elasticsearch service on boot up

sudo systemctl enable elasticsearch.service
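As an additional check, you can query Elasticsearch directly over HTTP; a healthy node responds with a JSON document containing the node name, cluster name, and version details:

curl http://localhost:9200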

 

Step 5: Installing Kibana

Setting up Kibana & configuring it

sudo apt-get update && sudo apt-get install kibana

Edit the configuration & restrict access

sudo vi /etc/kibana/kibana.yml

Uncomment and set the following values:

server.port: 5601

server.host: "localhost"

elasticsearch.url: "http://localhost:9200"

Restart Kibana service

sudo service kibana restart

Check if the Kibana service is running properly

sudo service kibana status

Start Kibana service on boot up

sudo systemctl enable kibana.service
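Kibana can take a short while to start. To confirm it is answering requests locally, query its status endpoint; the command below should print 200 once Kibana is ready:

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601/status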

 

Step 6: Installing Nginx as Reverse Proxy

Set up Nginx to allow external access to Kibana, since we have configured Kibana to listen only on localhost.

sudo apt-get install nginx apache2-utils

Use htpasswd to create an admin user (here called "kibanaadmin") that can access the Kibana web interface. You may choose a different name. Enter a password at the prompt and remember it, as you will need it to access Kibana.

sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
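The command writes one entry per user to the password file; you can confirm it was created:

sudo cat /etc/nginx/htpasswd.users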

Edit the Nginx configuration and add the parameters below. Change server_name to match your server's name or public IP address.

sudo vi /etc/nginx/sites-available/default

server {
    listen 80;
    server_name example.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

 

Save and exit. This configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. Nginx will also use the htpasswd.users file we created earlier and require basic authentication.

 

Check the configuration for syntax errors and restart Nginx if none are found.

sudo nginx -t

sudo systemctl restart nginx

Check if the Nginx service is running properly

sudo service nginx status

Start Nginx service on boot up

sudo systemctl enable nginx.service

To allow connections to Nginx, adjust the firewall rules:

sudo ufw allow 'Nginx Full'
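To verify the reverse proxy and basic authentication together, request Kibana through Nginx (replace example.com with your server name or public IP; curl will prompt for the kibanaadmin password):

curl -u kibanaadmin http://example.com/status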

 

Step 7: Installing Logstash

Setting up Logstash and configuring it. The Elastic repository was already added in Step 3, so install the package directly:

sudo apt-get update && sudo apt-get install logstash

Verify that Logstash installed correctly by checking its version:

/usr/share/logstash/bin/logstash --version

 

Generate SSL certificates – since we will be using Filebeat to ship logs to our ELK server, we need to create an SSL certificate and key pair. Filebeat uses this certificate to verify the identity of the ELK server.

Create directories for storing the certificates & private keys.

sudo mkdir -p /etc/pki/tls/certs

sudo mkdir /etc/pki/tls/private

 

There are two options for SSL certificate generation – option 1: IP Address & option 2: Hostname (FQDN).

Option 1: IP Address

If you plan to use an IP address instead of a hostname, follow these steps to create an SSL certificate with an IP SAN.

We need to add the IP address of the Logstash server to the subjectAltName entry in the OpenSSL configuration file.

sudo nano /etc/ssl/openssl.cnf

Look for the "[ v3_ca ]" section and replace "a.b.c.d" with the IP address of your Logstash server.

subjectAltName = IP: a.b.c.d

Save and exit.

Now generate the SSL certificate and private key in the /etc/pki/tls/ directories with the following commands:

cd /etc/pki/tls

sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
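You can confirm that the IP SAN was embedded correctly by inspecting the generated certificate:

openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A1 "Subject Alternative Name"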

Copy logstash-forwarder.crt to every client server that will send logs to the Logstash server. If you chose this option, skip Option 2 and move on to Configure Logstash.

 

Option 2: FQDN (Hostname)

If you use a hostname in the Beats (forwarder) configuration, make sure you have a DNS A record for the Logstash server, and ensure that the client machines can resolve its hostname.

If you do not have a nameserver in your environment, add a host entry for the Logstash server on the client machines as well as on the Logstash server itself.

sudo nano /etc/hosts

The entry should look like this:

192.168.100.10 elkserver.local

 Save and exit.

 

Now generate the SSL certificate and private key in the /etc/pki/tls/ directories with the following commands:

cd /etc/pki/tls

sudo openssl req -subj '/CN=elkserver.local' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

Copy logstash-forwarder.crt to every client server that will send logs to the Logstash server (see the example below). Then let's finish our Logstash configuration.
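For example, assuming a client reachable as user@client (a placeholder; substitute your own host), the certificate can be copied with scp:

scp /etc/pki/tls/certs/logstash-forwarder.crt user@client:/tmp/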

  

Configure Logstash:

Logstash configuration files live in /etc/logstash/conf.d/. If no file exists there, create a new one. A Logstash configuration consists of three sections (input, filter, and output), which can be kept in a single file or split into separate files ending in .conf.

It is recommended to place the input, filter, and output sections in a single file.

sudo nano /etc/logstash/conf.d/logstash.conf

Add the following input configuration:

input {
    beats {
        port => 5044
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    }
}

Save and quit. This defines a Beats input that listens on TCP port 5044 and uses the SSL certificate and private key we created earlier.

 

In the filter section, we use grok to parse the logs before sending them to Elasticsearch. The following grok filter looks for logs labeled "syslog" and tries to parse them into a structured index.

filter {
    if [type] == "syslog" {
        grok {
            match => { "message" => "%{SYSLOGLINE}" }
        }
        date {
            match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
    }
}
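To illustrate with a hypothetical log entry, the SYSLOGLINE pattern typically breaks a syslog line into fields such as timestamp, logsource, program, pid, and message, and the date filter then parses the extracted timestamp into the event's @timestamp. A line like the following would match:

Feb  3 12:04:01 elkserver sshd[1234]: Failed password for invalid user admin from 10.0.0.5 port 55422 ssh2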

 

In the output section, we define where the logs will be stored; for us, that is Elasticsearch.

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}

Save and exit.

 

Test the Logstash configuration with the bundled binary:

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/logstash.conf

After a few seconds, it should display "Configuration OK" if there are no syntax errors. Otherwise, read the error output to see what is wrong with your Logstash configuration.

You can troubleshoot any issues by inspecting the Logstash log:

cat /var/log/logstash/logstash-plain.log

Restart Logstash and enable it to start on boot:

sudo systemctl restart logstash

sudo systemctl enable logstash

Check if the Logstash service is running properly

sudo service logstash status
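You can also confirm that the Beats input is listening on TCP port 5044:

sudo ss -tlnp | grep 5044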


To allow Beats clients to connect to Logstash, open port 5044 in the firewall:

sudo ufw allow 5044
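On each client, Filebeat must point at the Logstash server and at the certificate we copied over earlier. As a minimal sketch for Filebeat 6.x (assuming the certificate was placed under /etc/pki/tls/certs/ on the client and the server is reachable as elkserver.local), the relevant parts of /etc/filebeat/filebeat.yml would look like this:

filebeat.prospectors:
- type: log
  # Ship the system log; add more paths as needed
  paths:
    - /var/log/syslog
  # Set a top-level "type" field so the Logstash filter's
  # [type] == "syslog" condition matches
  fields:
    type: syslog
  fields_under_root: true

output.logstash:
  hosts: ["elkserver.local:5044"]
  # Trust the self-signed certificate generated on the ELK server
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]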

 

You have successfully set up the ELK stack on Ubuntu 16.04.

 

Mr. Rohan Patil is the Operations Head at VISTA InfoSec. He led the company's business expansion and provided much-needed strategic service offerings and innovation through R&D. He holds a Bachelor of Science in Information Technology and a Post Graduate Diploma in Information Technology, both from Mumbai University. He has 14 years of experience in Information Technology, specifically in the Information Security field, with extensive knowledge of operating systems, databases, and networks, along with programming experience. He looks after project management activities and leads the technology and security consulting teams at VISTA InfoSec. Prior to joining the company, he assisted with investigations (cyber forensics, fraud, and data recovery), trained law enforcement agencies, and ran information security awareness programs at Government of India Cyber Labs.
