Setting Up ELK Stack on Kubernetes: A Step-by-Step Guide

Shubham Soni
7 min read · May 16, 2024


In the world of modern DevOps, the ELK stack (Elasticsearch, Logstash, and Kibana) has become an essential tool for log management and analysis. Combining the power of ELK with the scalability of Kubernetes (k8s) can significantly enhance your logging capabilities. This guide will walk you through the process of setting up an ELK stack on a Kubernetes cluster.

Table of Contents

  1. Introduction
  2. Prerequisites
  3. Setting Up Elasticsearch
  4. Setting Up Kibana
  5. Deploying Filebeat
  6. Deploying Logstash
  7. Accessing and Visualizing Logs in Kibana

1. Introduction

In today’s world of microservices and distributed systems, effective logging and monitoring are crucial for maintaining the health and performance of your applications. The ELK stack — comprising Elasticsearch, Logstash, and Kibana — along with Filebeat, provides a powerful solution for aggregating and analyzing log data. In this blog, we will walk through the steps to set up the ELK stack with Filebeat on a Kubernetes cluster.

2. Prerequisites

Before we begin, ensure you have the following:

  • A running Kubernetes cluster (Minikube, EKS, GKE, etc.).
  • kubectl command-line tool configured to interact with your cluster.
  • Lens (optional), a Kubernetes IDE for inspecting cluster resources such as pods, services, and secrets.
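
Before moving on, it's worth confirming that kubectl can actually reach the cluster:

kubectl cluster-info
kubectl get nodes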

Install the ECK Operator

The Elastic resources in this guide are managed by the Elastic Cloud on Kubernetes (ECK) operator. Install the custom resource definitions first, then the operator with its RBAC rules:

kubectl create -f https://download.elastic.co/downloads/eck/2.2.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.2.0/operator.yaml

Note: make sure all the stack resources (Elasticsearch, Kibana, Logstash, and Filebeat) use the same version. Also note that ECK 2.2.0 predates stack version 8.13.4 used below; if the operator rejects that version, substitute a newer ECK release number in the two URLs above.
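
Before continuing, verify the operator is up; ECK deploys it as a StatefulSet named elastic-operator in the elastic-system namespace:

kubectl -n elastic-system get pods
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator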

3. Setting Up Elasticsearch

First, we'll deploy Elasticsearch, which will store our log data. The manifest below, saved as "elasticsearch.yaml", defines a minimal single-node cluster:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.13.4
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false

After creating the manifest, apply it from your terminal (or the Lens terminal):

kubectl apply -f elasticsearch.yaml

To monitor cluster health and creation progress, run:

kubectl get elasticsearch
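
Once the pod has started, the output should look roughly like the following (HEALTH may read unknown or yellow while the cluster is still forming):

NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    1       8.13.4    Ready   1m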

Request Elasticsearch access

A ClusterIP Service is automatically created for your cluster:

kubectl get service quickstart-es-http

3.1: Get the credentials.

A default user named elastic is automatically created with the password stored in a Kubernetes secret:

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')

3.2: Request the Elasticsearch endpoint.

From inside the Kubernetes cluster:

curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200"

From your local workstation, use the following command in a separate terminal:

kubectl port-forward service/quickstart-es-http 9200

Then request localhost:

curl -u "elastic:$PASSWORD" -k "https://localhost:9200"
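
If everything is wired up correctly, Elasticsearch answers with a JSON document along these lines (abbreviated; the node name follows ECK's <cluster>-es-<nodeSet>-<n> convention):

{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "version" : {
    "number" : "8.13.4",
    ...
  },
  "tagline" : "You Know, for Search"
}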

Alternatively, open https://localhost:9200 in your browser. Your browser will show a warning because the default self-signed certificate is not issued by a known certificate authority and so is not trusted. You can temporarily acknowledge the warning for the purposes of this quick start, but it is highly recommended that you configure valid certificates for any production deployment.

Log in as the elastic user. In Lens, the password is stored under (Your Cluster) -> Config -> Secrets -> quickstart-es-elastic-user (key: elastic).

4. Setting Up Kibana

Kibana provides a web interface to visualize logs stored in Elasticsearch. Create a Kibana instance and associate it with your Elasticsearch cluster using a manifest saved as "kibana.yaml":

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.13.4
  count: 1
  elasticsearchRef:
    name: quickstart
  # http:
  #   tls:
  #     selfSignedCertificate:
  #       disabled: true

Apply the manifest:

kubectl apply -f kibana.yaml

4.1: Monitor Kibana health and creation progress.

Similar to Elasticsearch, you can retrieve details about Kibana instances:

kubectl get kibana

4.2: Access Kibana.

A ClusterIP Service is automatically created for Kibana:

kubectl get service quickstart-kb-http

Use kubectl port-forward to access Kibana from your local workstation:

kubectl port-forward service/quickstart-kb-http 5601

Open https://localhost:5601 in your browser. As with Elasticsearch, your browser will warn about the default self-signed certificate; you can temporarily acknowledge the warning for this quick start, but configure valid certificates for any production deployment.

Login as the elastic user. The password can be obtained with the following command:

kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo

5. Deploying Filebeat

Filebeat is a lightweight shipper for forwarding and centralizing log data. Apply the following specification to deploy Filebeat and collect the logs of all containers running in the Kubernetes cluster.

Create the Beat resource with a manifest saved as "filebeat.yaml":

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart
spec:
  type: filebeat
  version: 8.13.4
  # elasticsearchRef:
  #   name: quickstart
  # kibanaRef:
  #   name: quickstart
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
    processors:
    - add_kubernetes_metadata:
        host: ${NODE_NAME}
        matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
    - drop_event.when:
        or:
        - equals:
            kubernetes.namespace: "kube-system"
        - equals:
            kubernetes.namespace: "kube-public"
        - equals:
            kubernetes.namespace: "quickstart"
        - equals:
            kubernetes.namespace: "kube-node-lease"
        - equals:
            kubernetes.namespace: "elastic-system"
    output.logstash:
      # <service>.<namespace>.svc; match the namespace where the Logstash
      # Service (section 6) is deployed
      hosts: ["logstash.elk-test.svc:5044"]
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        tolerations:
        - key: dedicated
          operator: Exists
          effect: NoSchedule
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # allows richer host metadata to be collected
        containers:
        - name: filebeat
          securityContext:
            runAsUser: 0
            # If using Red Hat OpenShift uncomment this:
            # privileged: true
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          resources:
            limits:
              cpu: 500m
              memory: 2000Mi
            requests:
              cpu: 100m
              memory: 200Mi
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  - nodes
  verbs:
  - get
  - watch
  - list

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: elk-test # must match the namespace where the ServiceAccount lives
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elk-test # must match the ClusterRoleBinding subject above

Apply the manifest:

kubectl apply -f filebeat.yaml

5.1: Monitor Beats.

Retrieve details about the deployed Filebeat:

kubectl get beat
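
The output shows health plus available versus expected pods; since Filebeat runs as a DaemonSet, EXPECTED should match the number of schedulable nodes. Something like:

NAME         HEALTH   AVAILABLE   EXPECTED   TYPE       VERSION   AGE
quickstart   green    1           1          filebeat   8.13.4    2m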

5.2: Access logs of the filebeat Pods.

kubectl logs -f quickstart-beat-filebeat-plr8h
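
The random pod-name suffix (plr8h above) will differ in your cluster. To avoid looking it up, point kubectl at the DaemonSet instead, which ECK names quickstart-beat-filebeat by convention:

kubectl logs -f daemonset.apps/quickstart-beat-filebeat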

5.3: Access logs ingested by Filebeat.

Note that because this setup ships logs to Logstash rather than directly to Elasticsearch, documents land in the indices created by the Logstash pipeline (section 6), not in the default filebeat-* indices. Once Logstash is running, you have two options:

  • Use the port-forwarded Elasticsearch endpoint from section 3 and run:
curl -u "elastic:$PASSWORD" -k "https://localhost:9200/logstashadmission-*/_search"
  • Log in to Kibana (section 4) and go to Discover.
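
To see at a glance which indices exist and how many documents each holds, the _cat API is handy (against the port-forwarded endpoint from section 3):

curl -u "elastic:$PASSWORD" -k "https://localhost:9200/_cat/indices?v"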

6. Deploying Logstash

The following specification creates a minimal Logstash deployment that listens for a Beats agent (or Elastic Agent) sending to port 5044, creates the corresponding Service, and writes its output to the quickstart Elasticsearch cluster created earlier. Note that this runs Logstash as a plain Kubernetes Deployment rather than as an ECK-managed resource.

Save the manifest as "logstash.yaml":

apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  labels:
    app.kubernetes.io/name: elasticsearch-logstash
    app.kubernetes.io/component: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: elasticsearch-logstash
      app.kubernetes.io/component: logstash
  template:
    metadata:
      labels:
        app.kubernetes.io/name: elasticsearch-logstash
        app.kubernetes.io/component: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:8.13.4
        ports:
        - name: "tcp-beats"
          containerPort: 5044
        env:
        - name: ES_HOSTS
          value: "https://quickstart-es-http.elk-test.svc:9200" # adjust the namespace to where Elasticsearch runs
        - name: ES_USER
          value: "elastic"
        - name: ES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: quickstart-es-elastic-user
              key: elastic
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: pipeline-volume
          mountPath: /usr/share/logstash/pipeline
        # Uncomment the ca-certs mount (and the volume below) if you keep the
        # cacert setting in the pipeline config:
        # - name: ca-certs
        #   mountPath: /etc/logstash/certificates
        #   readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: logstash-config
      - name: pipeline-volume
        configMap:
          name: logstash-pipeline
      # - name: ca-certs
      #   secret:
      #     secretName: quickstart-es-http-certs-public # ECK names the cert secret <cluster>-es-http-certs-public

---

apiVersion: v1
kind: Service
metadata:
  name: logstash
  labels:
    app.kubernetes.io/name: elasticsearch-logstash
    app.kubernetes.io/component: logstash
spec:
  ports:
  - name: "tcp-beats"
    port: 5044
    targetPort: 5044
  selector:
    app.kubernetes.io/name: elasticsearch-logstash
    app.kubernetes.io/component: logstash

The ConfigMaps mounted as volumes in the Deployment above are defined in a second manifest, "logstash-config.yaml":

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config # must match the config-volume configMap reference in the Deployment
  labels:
    app.kubernetes.io/name: elasticsearch-logstash
    app.kubernetes.io/component: logstash
data:
  logstash.yml: |
    http.host: 0.0.0.0
    pipeline.ecs_compatibility: disabled
  pipelines.yml: |
    - pipeline.id: logstash
      path.config: "/usr/share/logstash/pipeline/logstash.conf"
  log4j2.properties: |
    logger.logstashpipeline.name = logstash.inputs.beats
    logger.logstashpipeline.level = error

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
  labels:
    app.kubernetes.io/name: elasticsearch-logstash
    app.kubernetes.io/component: logstash
data:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    filter {
      json {
        source => "message"
      }
      prune {
        whitelist_names => [ "msg" ]
      }
      mutate {
        rename => { "msg" => "message" }
      }
    }
    output {
      if [message] =~ "admission" {
        elasticsearch {
          index => "logstashadmission-%{+YYYY.MM.dd}"
          hosts => [ "${ES_HOSTS}" ]
          user => "${ES_USER}"
          password => "${ES_PASSWORD}"
          # cacert requires the ca-certs volume mount from the Deployment
          cacert => '/etc/logstash/certificates/ca.crt'
        }
      }
    }

Apply both manifests:

kubectl apply -f logstash.yaml
kubectl apply -f logstash-config.yaml

6.1: Check the status of Logstash

Because Logstash was created as a plain Deployment (not through the ECK operator), kubectl get logstash will not show it; check the Deployment instead:

kubectl get deployment logstash
kubectl logs -f deployment/logstash

7. Accessing and Visualizing Logs in Kibana

Once Filebeat is running and shipping logs, you can access Kibana to visualize the logs.

  1. Navigate to the Discover tab in Kibana.
  2. Select (or create) a data view matching your log indices; with the pipeline above, the pattern is logstashadmission-* (Kibana 8.x calls index patterns data views). A quick terminal sanity check follows this list.
  3. Explore your logs!
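
As that sanity check, the query below returns one sample document if ingestion is working; it assumes the logstashadmission-%{+YYYY.MM.dd} index name from the pipeline config and the port-forward from section 3:

curl -u "elastic:$PASSWORD" -k "https://localhost:9200/logstashadmission-*/_search?size=1&pretty"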

Conclusion

Setting up the ELK stack with Filebeat on a Kubernetes cluster can greatly enhance your ability to monitor and troubleshoot your applications. With Elasticsearch storing your logs, Filebeat shipping them, and Kibana providing visualization, you have a powerful toolset at your disposal.

Thank you for reading….❤️❤️ I hope you learned something new.
