ELK

What is the ELK stack?
ELK stands for Elasticsearch, Logstash, and Kibana.

Brief definitions:

Logstash: a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (for example, for searching); searching and drilling into the stored logs is done through Kibana. It is fully free and fully open source.

Elasticsearch: a search server based on Lucene. It provides a distributed, multitenant-capable full-text search engine with a RESTful web interface and schema-free JSON documents.

Kibana: a nifty tool to visualize logs and other timestamped data stored in Elasticsearch.

Packetbeat, Filebeat, Metricbeat, and Winlogbeat are a few examples of Beats:

Packetbeat is a network packet analyzer that ships information about the transactions exchanged between your application servers.
Filebeat ships log files from your servers.
Metricbeat is a server monitoring agent that periodically collects metrics from the operating systems and services running on your servers.
Winlogbeat ships Windows event logs.

If you have a specific use case to solve, we encourage you to create your own Beat. We created an infrastructure to simplify the process. The libbeat library, written entirely in Go (https://golang.org/), offers the API that all Beats use to ship data to Elasticsearch, configure the input options, implement logging, and more.

Beats Platform

Installing Elasticsearch

[mohammedrafi@elk ~]$ vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
52.170.219.224 elk.puppethub.in

[mohammedrafi@elk ~]$ hostname -f
elk.puppethub.in

[mohammedrafi@elk ~]$ ping elk.puppethub.in
PING elk.puppethub.in (52.170.219.224) 56(84) bytes of data.

[mohammedrafi@elk ~]$ sestatus
SELinux status: disabled

[mohammedrafi@elk ~]$ service firewalld status
Redirecting to /bin/systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)

[mohammedrafi@elk ~]$ wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u73-b02/jdk-8u73-linux-x64.rpm"

[mohammedrafi@elk ~]$ ll
total 156396
-rw-rw-r-- 1 mohammedrafi mohammedrafi 160148266 Jan 30 2016 jdk-8u73-linux-x64.rpm

[mohammedrafi@elk ~]$ sudo rpm -ivh jdk-8u73-linux-x64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:jdk1.8.0_73-2000:1.8.0_73-fcs ################################# [100%]
Unpacking JAR files...
tools.jar...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...
jfxrt.jar...

[mohammedrafi@elk ~]$ java -version
java version "1.8.0_73"
Java(TM) SE Runtime Environment (build 1.8.0_73-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.73-b02, mixed mode)
[mohammedrafi@elk ~]$ curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.rpm
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 32.0M 100 32.0M 0 0 28.5M 0 0:00:01 0:00:01 --:--:-- 28.5M

[mohammedrafi@elk ~]$ sudo rpm -ivh elasticsearch-5.3.0.rpm
warning: elasticsearch-5.3.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
1:elasticsearch-0:5.3.0-1 ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service

[mohammedrafi@elk ~]$ sudo systemctl daemon-reload
[mohammedrafi@elk ~]$ sudo systemctl enable elasticsearch.service
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.

[mohammedrafi@elk ~]$ sudo service elasticsearch start
Starting elasticsearch (via systemctl): [ OK ]

[mohammedrafi@elk ~]$ netstat -tlpn | grep -E ':9[23]00'
(No info could be read for "-p": geteuid()=1000 but you should be root.)
tcp6 0 0 127.0.0.1:9200 :::* LISTEN -
tcp6 0 0 ::1:9200 :::* LISTEN -
tcp6 0 0 127.0.0.1:9300 :::* LISTEN -
tcp6 0 0 ::1:9300 :::* LISTEN -

[mohammedrafi@elk ~]$ curl http://127.0.0.1:9200
{
  "name" : "PXOxTqQ",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "7EyIKnWGQ0GH3fazTVQ7xA",
  "version" : {
    "number" : "5.3.0",
    "build_hash" : "3adb13b",
    "build_date" : "2017-03-23T03:31:50.652Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.1"
  },
  "tagline" : "You Know, for Search"
}
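Beyond the banner response, the standard _cat APIs give a quick health check; a fresh single-node install with no data typically reports a green or yellow cluster:

curl 'http://127.0.0.1:9200/_cat/health?v'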
[mohammedrafi@elk ~]$ sudo ls -l /etc/elasticsearch/
total 16
-rw-rw---- 1 root elasticsearch 2854 Mar 23 03:34 elasticsearch.yml
-rw-rw---- 1 root elasticsearch 3117 Mar 23 03:34 jvm.options
-rw-rw---- 1 root elasticsearch 4456 Mar 23 03:34 log4j2.properties
drwxr-x--- 2 root elasticsearch 6 Mar 23 03:34 scripts

[mohammedrafi@elk ~]$ sudo cat /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ———————————- Cluster ———————————–
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ———————————— Node ————————————
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ———————————– Paths ————————————
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ———————————– Memory ———————————–
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ———————————- Network ———————————–
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# ——————————— Discovery ———————————-
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ———————————- Gateway ———————————–
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ———————————- Various ———————————–
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

——————–
[mohammedrafi@elk ~]$ sudo cat /etc/elasticsearch/jvm.options
## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms2g
-Xmx2g

################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################

## GC configuration
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly

## optimizations

# disable calls to System#gc
-XX:+DisableExplicitGC

# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch

## basic

# force the server VM (remove on 32-bit client JVMs)
-server

# explicitly set the stack size (reduce to 320k on 32-bit client JVMs)
-Xss1m

# set to headless, just in case
-Djava.awt.headless=true

# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8

# use our provided JNA always versus the system one
-Djna.nosys=true

# use old-style file permissions on JDK9
-Djdk.io.permissionsUseCanonicalPath=true

# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0

# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Dlog4j.skipJansi=true

## heap dumps

# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError

# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
#-XX:HeapDumpPath=${heap.dump.path}

## GC logging

#-XX:+PrintGCDetails
#-XX:+PrintGCTimeStamps
#-XX:+PrintGCDateStamps
#-XX:+PrintClassHistogram
#-XX:+PrintTenuringDistribution
#-XX:+PrintGCApplicationStoppedTime

# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${loggc}

# By default, the GC log file will not rotate.
# By uncommenting the lines below, the GC log file
# will be rotated every 128MB at most 32 times.
#-XX:+UseGCLogFileRotation
#-XX:NumberOfGCLogFiles=32
#-XX:GCLogFileSize=128M

# Elasticsearch 5.0.0 will throw an exception on unquoted field names in JSON.
# If documents were already indexed with unquoted fields in a previous version
# of Elasticsearch, some operations may throw errors.
#
# WARNING: This option will be removed in Elasticsearch 6.0.0 and is provided
# only for migration purposes.
#-Delasticsearch.json.allow_unquoted_field_names=true
—————-
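Since the -Xms/-Xmx pair above is the setting you will most often change, it is worth confirming the running node actually picked it up. The nodes info API exposes the JVM heap limits (a standard endpoint; the exact JSON layout varies by version):

curl 'http://localhost:9200/_nodes/jvm?pretty' | grep heap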
[mohammedrafi@elk ~]$ sudo cat /etc/elasticsearch/log4j2.properties
status = error

# log action execution errors for easier debugging
logger.action.name = org.elasticsearch.action
logger.action.level = debug

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n

appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.log
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true

rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling

appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.log
appender.deprecation_rolling.layout.type = PatternLayout
appender.deprecation_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%.-10000m%n
appender.deprecation_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.log.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4

logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = warn
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.additivity = false

appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%d{yyyy-MM-dd}.log
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true

logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false

appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %marker%.-10000m%n
appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%d{yyyy-MM-dd}.log
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true

logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false
—————–

[mohammedrafi@elk ~]$ sudo vim /etc/elasticsearch/elasticsearch.yml
network.host: localhost

[mohammedrafi@elk ~]$ sudo service elasticsearch restart
Restarting elasticsearch (via systemctl): [ OK ]

[mohammedrafi@elk ~]$ systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-03-29 10:06:45 UTC; 6min ago
Docs: http://www.elastic.co
Process: 30912 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 30915 (java)
CGroup: /system.slice/elasticsearch.service
└─30915 /bin/java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -…

################################################################

Installing Kibana

[mohammedrafi@elk ~]$ sudo rpm –import https://artifacts.elastic.co/GPG-KEY-elasticsearch

[mohammedrafi@elk ~]$ sudo vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[mohammedrafi@elk ~]$ sudo yum install kibana -y
[mohammedrafi@elk ~]$ sudo ls -l /etc/kibana/
total 8
-rw-rw-r– 1 root root 4509 Mar 23 03:47 kibana.yml

[mohammedrafi@elk ~]$ sudo cat /etc/kibana/kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
#elasticsearch.url: "http://localhost:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
————————————-
[mohammedrafi@elk ~]$ sudo systemctl daemon-reload
[mohammedrafi@elk ~]$ sudo systemctl enable kibana.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
[mohammedrafi@elk ~]$ sudo systemctl start kibana.service

[mohammedrafi@elk ~]$ sudo systemctl status kibana.service
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-03-29 10:26:28 UTC; 197ms ago
Main PID: 58162 (node)
CGroup: /system.slice/kibana.service
└─58162 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

Mar 29 10:26:28 elk.puppethub.in systemd[1]: Started Kibana.
Mar 29 10:26:28 elk.puppethub.in systemd[1]: Starting Kibana...
—————————
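Before putting nginx in front of Kibana, you can confirm the service answers locally; Kibana exposes a small status endpoint alongside the app itself:

curl http://localhost:5601/api/status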

[mohammedrafi@elk ~]$ sudo yum -y install epel-release

[mohammedrafi@elk ~]$ sudo yum -y install nginx httpd-tools

[mohammedrafi@elk ~]$ sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
New password:
Re-type new password:
Adding password for user kibanaadmin

[mohammedrafi@elk ~]$ sudo vim /etc/nginx/nginx.conf
include /etc/nginx/conf.d/*.conf;

[mohammedrafi@elk ~]$ sudo vim /etc/nginx/conf.d/kibana.conf
server {
    listen 80;

    server_name elk.puppethub.in;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
[mohammedrafi@elk ~]$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

[mohammedrafi@elk ~]$ sudo systemctl start nginx
[mohammedrafi@elk ~]$ sudo systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

[mohammedrafi@elk ~]$ sudo systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-03-29 10:34:13 UTC; 28s ago
Main PID: 60342 (nginx)
CGroup: /system.slice/nginx.service
├─60342 nginx: master process /usr/sbin/nginx
├─60343 nginx: worker process
└─60344 nginx: worker process

Mar 29 10:34:13 elk.puppethub.in systemd[1]: Starting The nginx HTTP and reverse proxy server...
Mar 29 10:34:13 elk.puppethub.in nginx[60335]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Mar 29 10:34:13 elk.puppethub.in nginx[60335]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Mar 29 10:34:13 elk.puppethub.in systemd[1]: Failed to read PID from file /run/nginx.pid: Invalid argument
Mar 29 10:34:13 elk.puppethub.in systemd[1]: Started The nginx HTTP and reverse proxy server.

[mohammedrafi@elk ~]$ curl localhost --basic -u kibanaadmin
Enter host password for user 'kibanaadmin':
var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';

var hash = window.location.hash;
if (hash.length) {
window.location = hashRoute + hash;
} else {
window.location = defaultRoute;
}
[mohammedrafi@elk ~]$ curl http://127.0.0.1:5601
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';

var hash = window.location.hash;
if (hash.length) {
window.location = hashRoute + hash;
} else {
window.location = defaultRoute;
}
#########################################################

Installing Logstash

[mohammedrafi@elk ~]$ sudo vim /etc/yum.repos.d/logstash.repo
[logstash-5.x]
name=Logstash repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1

[mohammedrafi@elk ~]$ sudo yum install logstash -y

[mohammedrafi@elk ~]$ sudo vi /etc/pki/tls/openssl.cnf
[ v3_ca ]
subjectAltName = IP: 10.0.1.11
[mohammedrafi@elk ~]$ cd /etc/pki/tls

[mohammedrafi@elk tls]$ sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Generating a 2048 bit RSA private key
........+++
..............................+++
writing new private key to 'private/logstash-forwarder.key'
-----
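Before wiring the certificate into Logstash and the Beats clients, openssl can confirm what was generated, including the validity window of the self-signed certificate:

sudo openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates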

[mohammedrafi@elk tls]$ sudo vi /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
[mohammedrafi@elk tls]$ sudo vi /etc/logstash/conf.d/10-syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
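To make the grok pattern concrete, here is roughly how it decomposes a typical syslog line (the input line and values below are illustrative only; the field names come from the pattern above):

# Input (illustrative):
#   Mar 29 11:32:03 elk sshd[4511]: Accepted publickey for mohammedrafi
# Extracted fields:
#   syslog_timestamp => "Mar 29 11:32:03"
#   syslog_hostname  => "elk"
#   syslog_program   => "sshd"
#   syslog_pid       => "4511"
#   syslog_message   => "Accepted publickey for mohammedrafi"

The date filter then replaces the event's @timestamp with the parsed syslog_timestamp, while received_at and received_from preserve the ingest time and source host.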

[mohammedrafi@elk tls]$ sudo vi /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
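Before restarting, Logstash can verify that the three files in conf.d parse cleanly. This uses the --config.test_and_exit flag; with the RPM layout, --path.settings must point at /etc/logstash:

sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit -f /etc/logstash/conf.d/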

[mohammedrafi@elk tls]$ sudo systemctl restart logstash
[mohammedrafi@elk tls]$ sudo chkconfig logstash on
Note: Forwarding request to 'systemctl enable logstash.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
[mohammedrafi@elk tls]$ sudo systemctl status logstash
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-03-29 11:32:03 UTC; 16s ago
Main PID: 4511 (java)
CGroup: /system.slice/logstash.service
└─4511 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancy…

Mar 29 11:32:03 elk.puppethub.in systemd[1]: Started logstash.
Mar 29 11:32:03 elk.puppethub.in systemd[1]: Starting logstash...
Mar 29 11:32:13 elk.puppethub.in logstash[4511]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Mar 29 11:32:14 elk.puppethub.in logstash[4511]: log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.Re…Cache).
Mar 29 11:32:14 elk.puppethub.in logstash[4511]: log4j:WARN Please initialize the log4j system properly.
Mar 29 11:32:14 elk.puppethub.in logstash[4511]: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Hint: Some lines were ellipsized, use -l to show in full.
[mohammedrafi@elk tls]$ sudo systemctl restart kibana

[mohammedrafi@elk tls]$ sudo systemctl restart elasticsearch.service

[mohammedrafi@elk ~]$ sudo yum install filebeat -y
[mohammedrafi@elk ~]$ sudo systemctl enable filebeat
Created symlink from /etc/systemd/system/multi-user.target.wants/filebeat.service to /usr/lib/systemd/system/filebeat.service.

[mohammedrafi@elk ~]$ sudo systemctl start filebeat

[mohammedrafi@elk ~]$ sudo systemctl status filebeat
● filebeat.service - filebeat
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-03-29 11:41:12 UTC; 5s ago
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Main PID: 4969 (filebeat)
CGroup: /system.slice/filebeat.service
└─4969 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -p…

Mar 29 11:41:12 elk.puppethub.in systemd[1]: Started filebeat.
Mar 29 11:41:12 elk.puppethub.in systemd[1]: Starting filebeat...
[mohammedrafi@elk ~]$ sudo cat /etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after
#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
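Note that the filebeat.yml above ships directly to Elasticsearch. If you instead want Filebeat to go through the Logstash beats input on port 5044 configured earlier (with SSL), the relevant section would look roughly like the sketch below; the hostname and certificate path are taken from this setup, and output.elasticsearch must be commented out first, since only one output can be enabled at a time:

#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  # The Logstash beats input from 02-beats-input.conf
  hosts: ["elk.puppethub.in:5044"]
  # Trust the self-signed certificate generated above
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

Either way, Filebeat 5.x can validate the file before a restart:

sudo /usr/share/filebeat/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml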

 

#######################

Adding a new index pattern

Go to -> Management (the gear icon) on the Kibana dashboard
-> select Index Patterns
-> Add New
-> e.g. winlogbeat-* (Index name or pattern)
-> @timestamp (Time-field name)

Mark a pattern with * if you want it to be the default.
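The index names that Beats and Logstash have actually created (and therefore the patterns worth registering) can be listed straight from Elasticsearch:

curl 'http://localhost:9200/_cat/indices?v'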

 

Using filters on the dashboard
beat.hostname: "www.example.com"
beat.hostname: "www.example.com" && source: "/var/log/messages"

 

#########################
Creating a visualization

Click on -> Visualize
-> select any one visualization, e.g. "Line chart"
-> select an index pattern, e.g. winlogbeat-* or filebeat-*
-> by default the Y-axis is `count`; we won't change it (if needed, it can be changed to average etc., depending on the visualization we are creating, for better viewing)
-> select X-Axis and choose an "Aggregation" type, e.g. Date Histogram
-> Field: select a field, e.g. @timestamp
-> Interval: Auto
-> Custom Label: anything that helps identification

Then click the play button at the top to see the graph.

If you want to see the same graph for all nodes,
simply add a Sub-Bucket -> Split Chart -> Sub Aggregation -> Terms
Field -> beat.hostname
Save the visualization with a name that is easy to understand for future reference.

#############################
create dashboard and add above visualisation
Click on -> DashBoard tab
-> Add visualization(as per ur name)
-> After selecting visualization click add
-> check with hover-over and add it dashboard by clicking -> save with any comment(My DashBoard-1)

 
