How to set up an EFK stack on OpenShift Origin?

The Elasticsearch, Fluentd, and Kibana (EFK) stack is a popular and recommended solution for centralized logging in an OpenShift Origin (OKD) cluster. It is similar to the ELK stack but uses Fluentd instead of Logstash. This blog post explains how to use Fluentd to collect, transform, and ship log data to an Elasticsearch backend.

Fluentd is a popular open-source data collector that is set up on the OpenShift cluster to tail container log files, filter and transform the log data, and deliver it to the Elasticsearch cluster, where it is indexed and stored.
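To give a feel for what that pipeline looks like, here is a simplified sketch of a Fluentd configuration for container logs. This is illustrative only: the OpenShift installer generates and manages its own Fluentd configuration, and the paths, tags, and host name below are assumptions, not what the installer actually writes out.

```
# Sketch only -- the real config is generated by the installer.
<source>
  @type tail                                  # tail container log files
  path /var/log/containers/*.log              # assumed path
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

<filter kubernetes.**>
  @type kubernetes_metadata                   # enrich records with pod/namespace metadata
</filter>

<match kubernetes.**>
  @type elasticsearch                         # ship to the Elasticsearch backend
  host logging-es                             # assumed service name
  port 9200
  logstash_format true
</match>
```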

Elasticsearch is an open-source, distributed, RESTful, JSON-based search engine. It is commonly used to index and search large volumes of log data, but it can also be used to search many other kinds of documents.
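Once the stack is running, you can query Elasticsearch directly from inside one of its pods. The command below is a sketch: the namespace, the `component=es` label, and the certificate paths follow the conventions used by the OpenShift logging deployment, but verify them against your own cluster before relying on them.

```shell
# Sketch: list the log indices from inside an Elasticsearch pod.
# Namespace, label, and cert paths are assumptions -- check your cluster.
oc exec -n logging \
  $(oc get pods -n logging -l component=es -o name | head -1) -- \
  curl -s --cacert /etc/elasticsearch/secret/admin-ca \
       --cert /etc/elasticsearch/secret/admin-cert \
       --key  /etc/elasticsearch/secret/admin-key \
       'https://localhost:9200/_cat/indices?v'
```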

Kibana is the visualization layer of the stack: a web UI for searching, filtering, and building dashboards on top of the data stored in Elasticsearch, such as the container logs shipped by Fluentd.

Prerequisites

For this to work, we need an OpenShift cluster running on AWS. If you are not sure how to set one up, please look at this Medium post for step-by-step instructions.

Technologies

  1. OpenShift or OKD cluster on AWS
  2. OpenShift Origin version 3.11

Setup Instructions

  1. This post follows the steps in the OpenShift docs, but with more precise details.
  2. Add the following lines to your Ansible inventory file:
## Logging properties for EFK stack logging
openshift_logging_install_logging=true
# Dynamically provision PVCs for Elasticsearch storage (e.g. AWS EBS)
openshift_logging_es_pvc_dynamic=true
# Hostname used to reach the Kibana UI from outside the cluster
openshift_logging_kibana_hostname=<custom hostname>
openshift_logging_master_public_url=https://<OpenShift Master Public>:8443
# Pin Elasticsearch pods to infra nodes
openshift_logging_es_ops_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra":"true"}
openshift_logging_namespace=logging
openshift_logging_fluentd_use_journal=true
openshift_logging_use_mux=false
openshift_logging_use_ops=false
openshift_logging_es_ops_allow_cluster_reader=true
# Resource sizing -- adjust to your workload
openshift_logging_es_memory_limit=2G
openshift_logging_es_pvc_size=10G

  3. openshift_logging_kibana_hostname must be unique in the cluster; it is used to access the Kibana UI from outside.

  4. Change openshift_logging_es_pvc_size based on your storage requirements.

  5. Now log in to the OpenShift master and run the following ansible-playbook command to install the EFK stack:

$ ansible-playbook -i "inventory" --key-file "access-key.pem" openshift-ansible/playbooks/openshift-logging/config.yml

  6. The playbook creates the logging project (the value of openshift_logging_namespace) if it does not exist, along with three kinds of pods (names can differ):

  • logging-kibana (Kibana)
  • logging-es (Elasticsearch)
  • logging-fluentd (Fluentd)
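You can confirm the pods listed above came up with a quick status check. The project name below assumes the openshift_logging_namespace value from the inventory; exact pod names will vary per cluster.

```shell
# Verify the EFK pods are running (project name from openshift_logging_namespace).
oc get pods -n logging
```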

  7. Now create an external route to Kibana, for example https://kibana.demoproject.com
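If you need to create the route by hand, it might look like the sketch below. Note this is an assumption: in most installs the playbook already creates a secured route for Kibana using openshift_logging_kibana_hostname, so check with `oc get route -n logging` first. The hostname, namespace, and service name here are examples.

```yaml
# Sketch of a manual Kibana route; the installer usually creates one for you.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: logging-kibana
  namespace: logging
spec:
  host: kibana.demoproject.com      # example hostname
  to:
    kind: Service
    name: logging-kibana
  tls:
    termination: reencrypt          # TLS re-encrypted between router and Kibana
```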

  8. Open https://kibana.demoproject.com to access your Kibana UI.

  9. That’s it. The EFK stack setup on OpenShift on AWS is done.

Pavan Kumar Jadda