ELK Implementation for Hybris (SAP Commerce)

Overview

This article explains the need for, and the implementation of, the ELK (Elasticsearch, Logstash, Kibana) stack for stashing and viewing the logs of SAP Commerce (Hybris).

By default, Hybris writes its logs to files, which makes it difficult to analyse and debug development issues. To overcome this problem, the ELK stack is implemented. The ELK stack is a complete logging solution that can be applied in a variety of scenarios.

Why ELK for Hybris?

Following are the key benefits of the ELK stack:

  • It gives developers and DevOps remote access to view logs; previously they had to log in to the machine where Hybris is deployed
  • It provides extensive search functionality that enables developers to filter their logs easily
  • It is easy and robust to use

ELK stack introduction

The detailed introduction can be found here.

The diagram (source) below displays how Elasticsearch, Logstash and Kibana are arranged to form the ELK stack and how they communicate with one another.

Beats and Logstash sit at the bottom and take responsibility for receiving, filtering and delivering logs.

Elasticsearch is a search and analytics engine; it is the middle layer where Logstash and Filebeat store the filtered data that Kibana reads.

Kibana acts as the front end where logs can be visualised as charts and graphs, and is the “cherry on top” of the ELK cake.

Level 1 DFD of the ELK stack implementation for Hybris (SAP Commerce)

ELK Stack Server

  1. Kibana: the front end of the whole stack; it provides multiple plugins to view logs in useful visualisations
  2. Elasticsearch: stores the data of the ELK stack and provides it to Kibana
  3. Logstash: collects data from multiple sources (in this case Filebeat, HTTP and TCP), transforms it, and outputs (stashes) it to different destinations (in this case Elasticsearch)

Hybris Server

  1. Hybris Commerce
  2. ELKLogger: a Java plugin that Hybris developers can use to send structured logs to Logstash over TCP or HTTP as required (a sample payload is sketched after this list)
  3. Beats (Filebeat): a lightweight log shipper that efficiently reads and filters logs from files and outputs that data to one or more destinations (in this case Logstash)
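
To give a concrete idea of what “structured logs” means here, the sketch below builds the kind of key/value payload that ELKLogger could serialise to JSON before sending it to Logstash. It is only an illustration: the field names mirror the LoggerModel class shown later in this article, and the values are made up for the example.

import java.util.LinkedHashMap;
import java.util.Map;

public class StructuredLogExample {

    public static void main(String[] args) {

        //Illustrative only: field names mirror LoggerModel; the values are examples
        Map<String, Object> logEvent = new LinkedHashMap<>();
        logEvent.put("className", "HelloWorld");
        logEvent.put("packageName", "com.example.logging"); //hypothetical package name
        logEvent.put("developer", "developer_name");
        logEvent.put("level", "INFO");
        logEvent.put("message", "printHelloWorld method start");

        //Serialised to JSON, this map becomes one structured log event that the
        //Logstash TCP or HTTP input can receive
        System.out.println(logEvent);
    }
}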

How to set up ELK?

You can find the detailed installation guide for multiple operating systems on this link → https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html

Behind the Scenes:

The following points, from top to bottom, define the flow of logs, which is also elaborated in the DFD given above, and explain the role of every component used in this case. The corresponding configurations are defined further below:

  1. Hybris generates its logs in the directory and sub-directories of {hybris-installation-folder-on-server}/hybris/log; developers can also send structured logs from Hybris using the ELKLogger plugin/library
  2. Beats/Filebeat reads logs from the files in the paths specified in the Filebeat configuration (its input) and sends the data to Logstash
  3. Logstash receives data from multiple sources: data sent from Filebeat is handled by its beats input plugin, while data sent from the ELKLogger plugin is handled by its TCP and HTTP input plugins. TCP is offered because it is a fast channel, but it limits the number of bytes per payload; HTTP takes a little longer per request but, compared to TCP, can carry much larger payloads
  4. A separate pipeline is defined for each listener, so that data from different sources can be handled in different ways
  5. After receiving data from the different sources, Logstash saves it in Elasticsearch. Each Logstash pipeline writes to its own index, and a separate index is created for every date (see the verification sketch after this list)
  6. Anyone who opens Kibana from the provided link can then search and view logs. Kibana fetches data from Elasticsearch as defined in its configuration file
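
As a quick sanity check that the whole chain is working, you can ask Elasticsearch to list its indices and confirm that the per-pipeline, per-date indices from step 5 are being created. The sketch below does this with the standard java.net.http client (Java 11 or newer) against the _cat/indices API; it assumes Elasticsearch is reachable on localhost:9200, which matches the configuration shown later.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListElkIndices {

    public static void main(String[] args) throws Exception {

        //Assumption: Elasticsearch is reachable on localhost:9200
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_cat/indices?v"))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        //Expect one index per pipeline and per day, e.g. filebeat-*, http-* and tcp-*
        System.out.println(response.body());
    }
}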

Configurations and Code

Elasticsearch Configuration

The following is the Elasticsearch configuration used in this scenario.

Put this file in the configuration directory of Elasticsearch (for example /etc/elasticsearch, or the config folder inside the installation directory).

#Filename = elasticsearch.yml

#tells elasticsearch where to save data on the file system
path.data: /var/lib/elasticsearch

#tells elasticsearch where to write its own logs
path.logs: /var/log/elasticsearch

#this must be set to 0.0.0.0 so that elasticsearch can be accessed remotely
network.host: 0.0.0.0

For further configuration detail, you can visit → https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html

Logstash Configuration

The following are the Logstash configuration files used in this scenario.

Put this file in the Logstash configuration directory (for example /etc/logstash).

#FileName = pipelines.yml

#pipeline.id is used to differentiate between logstash pipelines
#path.config tells logstash the path of the configuration file for that pipeline
 
- pipeline.id: filebeat
  path.config: "/etc/logstash/conf.d/filebeat.conf"
 
- pipeline.id: http
  path.config: "/etc/logstash/conf.d/http.conf"
 
 
- pipeline.id: tcp
  path.config: "/etc/logstash/conf.d/tcp.conf"

Note: since we have three different pipelines and each pipeline has its own configuration, there are three different configuration files.

Put this file in the conf.d directory inside the Logstash configuration directory (for example /etc/logstash/conf.d).

#FileName = filebeat.conf

input {
    
   #tells logstash to use beats as input plugin
   beats {
 
    #defines the port on which to listen for data coming from filebeat
    port => 5044
   
  }
}
 
output {
   
  #tells logstash to use elasticsearch as output plugin
  elasticsearch {
     
    #defines where elasticsearch is running
    hosts => ["http://localhost:9200"]
     
    #tells logstash the name of the index to use in elasticsearch
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Put this file in the conf.d directory inside the Logstash configuration directory (for example /etc/logstash/conf.d).

#FileName = http.conf

input {
    
   #tells logstash to use http as input plugin
   http {
 
    #defines the port on which to listen for data coming over the http protocol
    port => 8080
 
   }
}
 
output {
 
  #tells logstash to use elasticsearch as output plugin
  elasticsearch {
 
    #defines where elasticsearch is running
    hosts => ["http://localhost:9200"]
 
    #tells logstash the name of the index to use in elasticsearch
    index => "http-%{+YYYY.MM.dd}"
  }
}

Put this file in the conf.d directory inside the Logstash configuration directory (for example /etc/logstash/conf.d).

#FileName = tcp.conf

input {
 
    #tells logstash to use tcp as input plugin
    tcp {
 
        #defines the port on which to listen for data coming over the tcp protocol
        port => 4444
 
    }
}
 
output {
 
  #tells logstash to use elasticsearch as output plugin
  elasticsearch {
 
    #defines where elasticsearch is running
    hosts => ["http://localhost:9200"]
 
    #tells logstash the name of the index to use in elasticsearch
    index => "tcp-%{+YYYY.MM.dd}"
  }
}

For further configuration detail you can visit → https://www.elastic.co/guide/en/logstash/current/configuration.html

Filebeat Configuration

Put this file in the configuration directory of Filebeat (for example /etc/filebeat, or the installation directory itself for an archive install).

#FileName = filebeat.yml

#paths to read logs from
#note that multiple paths can be defined for one prospector
filebeat.prospectors:
    - input_type: log
      paths:
          - "{hybris-installation-directory}/hybris/log/*/*"
          - "{hybris-installation-directory}/hybris/log/*"


#path from which to load module configurations before using them
filebeat.config.modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

#tells filebeat to send its output to logstash
output.logstash:
    #The Logstash host, for example 192.168.1.15
    #The port is set to 5044 because it must match the port defined in filebeat.conf in the Logstash configuration; it can be any allowed port, but the two must match
    hosts: ["XXX.XXX.X.XXX:5044"]

Put this file in the modules.d directory inside the Filebeat configuration directory.

#FileName = logstash.yml

- module: logstash

  log:
    enabled: true
  #enables logging of events that take an abnormal amount of time
  slowlog:
    enabled: true

For further configuration detail you can visit → https://www.elastic.co/guide/en/beats/filebeat/current/configuring-howto-filebeat.html

ELK Logger plugin

The pseudo code for the plugin is as follows:

import java.util.Map;

public class LoggerModel {

    private String className;
    private String packageName;
    private String developer;
    private String message;
    private Map<String,Object> messageMap;
    private ELKLogger.LogLevel level;

    //Required Constructor(s) here

    //Getters and Setters

}

import java.net.Socket;
import java.util.LinkedHashMap;
import java.util.Map;

public class ELKLogger {

    private static final String USER_AGENT = "Mozilla/5.0";

    public enum LogLevel{
        ALL("ALL"),
        DEBUG("DEBUG"),
        INFO("INFO"),
        WARN("WARN"),
        ERROR("ERROR"),
        FATAL("FATAL"),
        OFF("OFF"),
        TRACE("TRACE");

        private final String value;

        LogLevel(String value) {
            this.value = value;
        }

        public String getValue() {
            return value;
        }

    }

    //Required Constructor(s) here

    private Class<?> tClass;
    private String developerName;
    private boolean isHttp=false;
    private String host;
    private int httpPort=8080;
    private int tcpPort=4444;
    private Socket socket;


    //Getters and Setters

 
    public void sendLog(LogLevel level, String message){
        //Logic to Initialize LoggerModel for this overload
        sendLog(loggerModel);
    }

    public void sendLog(LogLevel level,String message, Map<String,Object> messageMap){
        //Logic to Initialize LoggerModel for this overload
        sendLog(loggerModel);
    }
    public void sendLog(LogLevel level,LinkedHashMap<String,Object> messageMap){
        //Logic to Initialize LoggerModel for this overload
        sendLog(loggerModel);
    }


    public void sendLogSpecial(LogLevel level,Object... objects){

        //Logic to Initialize LoggerModel for this method
        sendLog(loggerModel);


    }


    private void sendLog(LoggerModel loggerModel){


        if(isHttp){
            sendHttp(loggerModel);
        }
        else {

            if(socket==null)
                setSocket();
            if(socket!=null) {
                sendTcp(loggerModel,socket);
            }
        }

    }

    private void sendHttp(LoggerModel loggerModel) {

        //Send HTTP request code (a possible implementation is sketched after this class)

    }

 
    private static void sendTcp(LoggerModel loggerModel,Socket socket) {
       
        //Send TCP request code (a possible implementation is sketched after this class)
    }

}
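
The sendHttp and sendTcp bodies are deliberately left as placeholders above. As a rough illustration only (this is an assumption, not the original implementation), they could look like the sketch below, which posts the serialised log to the Logstash HTTP input and writes a newline-terminated JSON string to the Logstash TCP input. The toJson helper is a stand-in for whatever JSON serialiser the project uses (for example Jackson or Gson) and would need to be provided.

    //Hypothetical implementations that would replace the placeholder bodies in ELKLogger.
    //toJson(...) is an assumed static helper that serialises LoggerModel to a JSON string.

    private void sendHttp(LoggerModel loggerModel) {
        try {
            java.net.HttpURLConnection connection = (java.net.HttpURLConnection)
                    new java.net.URL("http://" + host + ":" + httpPort).openConnection();
            connection.setRequestMethod("POST");
            connection.setRequestProperty("User-Agent", USER_AGENT);
            connection.setRequestProperty("Content-Type", "application/json");
            connection.setDoOutput(true);
            connection.getOutputStream().write(
                    toJson(loggerModel).getBytes(java.nio.charset.StandardCharsets.UTF_8));
            connection.getResponseCode(); //forces the request to be sent
        } catch (java.io.IOException e) {
            //logging must never break the application, so failures are swallowed here
        }
    }

    private static void sendTcp(LoggerModel loggerModel, Socket socket) {
        try {
            //the Logstash tcp input is line-oriented by default, so terminate with a newline
            java.io.OutputStream out = socket.getOutputStream();
            out.write((toJson(loggerModel) + "\n").getBytes(java.nio.charset.StandardCharsets.UTF_8));
            out.flush();
        } catch (java.io.IOException e) {
            //logging must never break the application, so failures are swallowed here
        }
    }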

Developers can write this code directly in Hybris, or they can build it as a separate Maven project and import it as a Maven dependency in external-dependencies.xml:

    <dependencies>
        ...
        <dependency>
            <groupId>${group.id}</groupId>
            <artifactId>ELKLogger</artifactId>
            <version>${version}</version>
        </dependency>
        ...
    </dependencies>

This is how the plugin can be used in code:

//import package
 
 
public class HelloWorld {
 
    private static ELKLogger elkLogger = new ELKLogger(HelloWorld.class,"developer_name","XXX.XXX.X.XXX");
 
    public static void printHelloWorld() {
 
        //LogLevel can differ, DEBUG, ERROR, WARN etc
        elkLogger.sendLog(ELKLogger.LogLevel.INFO,"printHelloWorld method start");
 
        System.out.println("Hello, World");
 
        elkLogger.sendLog(ELKLogger.LogLevel.INFO,"printHelloWorld method end");
 
    }
 
}

Output

Finally, after setting up everything, you can send logs from Hybris and view them in Kibana, for example in the Discover view or in custom dashboards; you can also query the stored logs directly, as sketched below.
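
Kibana's Discover view is ultimately just querying the same indices, so you can confirm that logs are arriving with a direct search against Elasticsearch. The sketch below assumes Elasticsearch is reachable on localhost:9200, that the http-* indices were created by the http pipeline defined above, and that the structured events carry the level field sent by ELKLogger (Java 11 or newer).

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SearchElkLogs {

    public static void main(String[] args) throws Exception {

        //Assumption: the http-* indices exist and the events contain a "level" field
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/http-*/_search?q=level:ERROR&size=10"))
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        //The JSON response contains the matching log documents that Kibana visualises
        System.out.println(response.body());
    }
}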

Summary and other considerations

  • Although this is a basic implementation, it has proved to be a complete logging solution for a system as large as SAP Commerce (Hybris)
  • The complete ELK stack can be configured and automated through any popular configuration management tool such as Ansible or Puppet
  • ELK has many configurable options that can be explored to cater to the needs of a system
  • A good knowledge base and a large, constantly growing community of developers and users are available to help with and improve the product
