This article explains why and how to set up the ELK (Elasticsearch, Logstash, Kibana) stack for collecting and viewing the logs of SAP Commerce/Hybris.
By default, Hybris writes its logs to files, which makes them difficult to analyze during development and debugging. To overcome this problem the ELK stack is used. ELK is a complete logging solution that can be applied in a variety of scenarios.
The key benefits of the ELK stack are centralized log aggregation, near-real-time full-text search, and dashboard-based visualization.
A detailed introduction can be found in the official Elastic documentation.
The diagram (source) below shows how Elasticsearch, Logstash and Kibana are arranged to form the ELK stack and how they communicate with one another:
Beats and Logstash sit at the bottom and are responsible for receiving, filtering and delivering logs.
Elasticsearch is a search and analytics engine and sits in the middle, where Logstash and Filebeat store the filtered data that Kibana reads.
Kibana acts as the front end where logs can be visualized as charts and graphs, the “cherry on top” of the ELK cake.
You can find the detailed installation guide for multiple operating systems on this link → https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html
The following points, from top to bottom, describe the flow of logs (also elaborated in the diagram above) and explain the role of every component used in this setup. The configurations are defined below:
Following is the Elasticsearch configuration used in this scenario.
Put this file in the config directory of the Elasticsearch installation (/etc/elasticsearch for a package install).
#Filename = elasticsearch.yml
#tells Elasticsearch where to store its data on the file system
path.data: /var/lib/elasticsearch
#tells Elasticsearch where to write its own logs
path.logs: /var/log/elasticsearch
#set to 0.0.0.0 so that Elasticsearch can be accessed remotely
network.host: 0.0.0.0
For further configuration detail, you can visit → https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html
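Before pointing Logstash at it, it is worth confirming that Elasticsearch is actually reachable. The snippet below is a minimal sketch of such a check, assuming Java 11+ and an unsecured Elasticsearch instance on localhost:9200; the class name ElasticsearchHealthCheck is only illustrative.
//minimal sketch: confirm that Elasticsearch answers on its HTTP port
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ElasticsearchHealthCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_cluster/health"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        //a 200 response with a green or yellow "status" field means the node is usable
        System.out.println(response.statusCode() + " " + response.body());
    }
}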
Following are the Logstash configuration files used in this scenario.
Put this file in the Logstash settings directory (/etc/logstash for a package install).
#FileName = pipelines.yml
#pipeline.id is used to differentiate between Logstash pipelines
#path.config tells Logstash the path of the configuration file for a specific pipeline
- pipeline.id: filebeat
  path.config: "/etc/logstash/conf.d/filebeat.conf"
- pipeline.id: http
  path.config: "/etc/logstash/conf.d/http.conf"
- pipeline.id: tcp
  path.config: "/etc/logstash/conf.d/tcp.conf"
Note: since there are three different pipelines and each pipeline has its own configuration, there are three separate configuration files.
Put this file in the conf.d directory of the Logstash configuration (e.g. /etc/logstash/conf.d).
#FileName = filebeat.conf
input {
  #use the beats input plugin to receive data from Filebeat
  beats {
    #port on which to listen for data coming from Filebeat
    port => 5044
  }
}
output {
  #use the elasticsearch output plugin
  elasticsearch {
    #address of the Elasticsearch instance
    hosts => ["http://localhost:9200"]
    #name of the index to write to in Elasticsearch
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
Put this file in the conf.d directory of the Logstash configuration (e.g. /etc/logstash/conf.d).
#FileName = http.conf
input {
  #use the http input plugin
  http {
    #port on which to listen for data arriving over HTTP
    port => 8080
  }
}
output {
  #use the elasticsearch output plugin
  elasticsearch {
    #address of the Elasticsearch instance
    hosts => ["http://localhost:9200"]
    #name of the index to write to in Elasticsearch
    index => "http-%{+YYYY.MM.dd}"
  }
}
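Once this pipeline is loaded you can verify it end to end by posting a single test event to it. The snippet below is only a minimal sketch, assuming Java 11+ and Logstash listening on port 8080 on localhost as configured above (the class name LogstashHttpTest is purely illustrative); if the event is accepted it should appear in Elasticsearch under the http-* index.
//minimal sketch: push one JSON test event to the Logstash http pipeline
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LogstashHttpTest {
    public static void main(String[] args) throws Exception {
        String json = "{\"developer\":\"test\",\"level\":\"INFO\",\"message\":\"hello from http pipeline\"}";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080")) //replace localhost with your Logstash host
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        //the http input should answer with HTTP 200 once it has accepted the event
        System.out.println("Logstash replied: " + response.statusCode());
    }
}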
Put this file in the conf.d directory of the Logstash configuration (e.g. /etc/logstash/conf.d).
#FileName = tcp.conf
input {
  #use the tcp input plugin
  tcp {
    #port on which to listen for data arriving over TCP
    port => 4444
  }
}
output {
  #use the elasticsearch output plugin
  elasticsearch {
    #address of the Elasticsearch instance
    hosts => ["http://localhost:9200"]
    #name of the index to write to in Elasticsearch
    index => "tcp-%{+YYYY.MM.dd}"
  }
}
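The tcp pipeline can be checked the same way by writing a single JSON line over a plain socket. Again this is only a sketch, assuming Logstash is listening on port 4444 as configured above and that the class name LogstashTcpTest is illustrative; the tcp input treats each newline-terminated line as one event, and the JSON arrives as a plain message string unless a json codec or filter is added to the pipeline.
//minimal sketch: write one newline-terminated JSON event to the Logstash tcp pipeline
import java.io.PrintWriter;
import java.net.Socket;

public class LogstashTcpTest {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 4444); //replace localhost with your Logstash host
             PrintWriter writer = new PrintWriter(socket.getOutputStream(), true)) {
            writer.println("{\"developer\":\"test\",\"level\":\"INFO\",\"message\":\"hello from tcp pipeline\"}");
        }
    }
}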
For further configuration detail you can visit → https://www.elastic.co/guide/en/logstash/current/configuration.html
Put this file in the Filebeat configuration directory (/etc/filebeat for a package install).
#FileName = filebeat.yml
#paths to read logs from; note how multiple paths are defined
filebeat:
  prospectors:
    - input_type: log
      paths:
        - "{hybris-installation-directory}/hybris/log/*/*"
        - "{hybris-installation-directory}/hybris/log/*"
#path from which to load module configurations before using them
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
#tells Filebeat to send its output to Logstash
output.logstash:
  #the Logstash host, for example 192.168.1.15
  #the port is 5044 because it must match the port defined in filebeat.conf in the Logstash configuration; any allowed port works as long as the two match
  hosts: ["XXX.XXX.X.XXX:5044"]
Put this file in the modules.d directory inside the Filebeat configuration directory.
#FileName = logstash.yml
- module: logstash
  log:
    enabled: true
  #enables collection of the slowlog, which records events that take an abnormal amount of time
  slowlog:
    enabled: true
For further configuration detail you can visit → https://www.elastic.co/guide/en/beats/filebeat/current/configuring-howto-filebeat.html
The pseudo code for the plugin is as follows:
public class LoggerModel {
    private String className;
    private String packageName;
    private String developer;
    private String message;
    private Map<String, Object> messageMap;
    private LogLevel level;
    //Required Constructor(s) here
    //Getters and Setters
}
public class ELKLogger {
    private static final String USER_AGENT = "Mozilla/5.0";

    public enum LogLevel {
        ALL("ALL"),
        DEBUG("DEBUG"),
        INFO("INFO"),
        WARN("WARN"),
        ERROR("ERROR"),
        FATAL("FATAL"),
        OFF("OFF"),
        TRACE("TRACE");
        //Enum Getter Setter
    }

    //Required Constructor(s) here
    private Class tClass;
    private String developerName;
    private boolean isHttp = false;
    private String host;
    private int httpPort = 8080;
    private int tcpPort = 4444;
    private Socket socket;
    //Getters and Setters

    public void sendLog(LogLevel level, String message) {
        //Logic to initialize LoggerModel for this overload
        sendLog(loggerModel);
    }

    public void sendLog(LogLevel level, String message, Map<String, Object> messageMap) {
        //Logic to initialize LoggerModel for this overload
        sendLog(loggerModel);
    }

    public void sendLog(LogLevel level, LinkedHashMap<String, Object> messageMap) {
        //Logic to initialize LoggerModel for this overload
        sendLog(loggerModel);
    }

    public void sendLogSpecial(LogLevel level, Object... objects) {
        //Logic to initialize LoggerModel for this method
        sendLog(loggerModel);
    }

    private void sendLog(LoggerModel loggerModel) {
        if (isHttp) {
            sendHttp(loggerModel);
        } else {
            if (socket == null) {
                setSocket();
            }
            if (socket != null) {
                sendTcp(loggerModel, socket);
            }
        }
    }

    private void sendHttp(LoggerModel loggerModel) {
        //Send HTTP request code
    }

    private static void sendTcp(LoggerModel loggerModel, Socket socket) {
        //Send TCP request code
    }
}
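The bodies of sendHttp, sendTcp, and the setSocket helper are left out of the pseudo code above. The following is a minimal sketch of what they might look like, not the actual implementation: it assumes the log payload is serialized to JSON by hand (a library such as Jackson would normally be used), that LoggerModel exposes the getters hinted at by its //Getters and Setters placeholder, and that host, httpPort and tcpPort hold the values configured for the Logstash pipelines.
//Hedged sketch of the elided methods; requires java.io.*, java.net.* and java.nio.charset.StandardCharsets
private void sendHttp(LoggerModel loggerModel) {
    try {
        URL url = new URL("http://" + host + ":" + httpPort);
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setRequestProperty("User-Agent", USER_AGENT);
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setDoOutput(true);
        try (OutputStream out = connection.getOutputStream()) {
            out.write(toJson(loggerModel).getBytes(StandardCharsets.UTF_8));
        }
        connection.getResponseCode(); //forces the request to be sent
        connection.disconnect();
    } catch (IOException e) {
        //logging must never break business code, so failures are swallowed here
    }
}

private void setSocket() {
    try {
        socket = new Socket(host, tcpPort);
    } catch (IOException e) {
        socket = null; //sendLog simply skips sending if the socket cannot be opened
    }
}

private static void sendTcp(LoggerModel loggerModel, Socket socket) {
    try {
        PrintWriter writer = new PrintWriter(socket.getOutputStream(), true);
        //the Logstash tcp input splits events on newlines, so one line = one log event
        writer.println(toJson(loggerModel));
    } catch (IOException e) {
        //swallow failures for the same reason as sendHttp
    }
}

//naive JSON serialization used only for this sketch; assumes the listed getters exist on LoggerModel
private static String toJson(LoggerModel loggerModel) {
    return "{\"developer\":\"" + loggerModel.getDeveloper() + "\","
            + "\"class\":\"" + loggerModel.getClassName() + "\","
            + "\"level\":\"" + loggerModel.getLevel() + "\","
            + "\"message\":\"" + loggerModel.getMessage() + "\"}";
}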
Developers can write this directly inside Hybris, or they can create a separate Maven project and import it as a Maven dependency in external-dependencies.xml:
<dependencies>
    ...
    <dependency>
        <groupId>${group.id}</groupId>
        <artifactId>ELKLogger</artifactId>
        <version>${version}</version>
    </dependency>
    ...
</dependencies>
This is how you can use the plugin in code:
//import package
public class HelloWorld {
    private static ELKLogger elkLogger = new ELKLogger(HelloWorld.class, "developer_name", "XXX.XXX.X.XXX");

    public static void printHelloWorld() {
        //LogLevel can differ: DEBUG, ERROR, WARN, etc.
        elkLogger.sendLog(ELKLogger.LogLevel.INFO, "printHelloWorld method start");
        System.out.println("Hello, World");
        elkLogger.sendLog(ELKLogger.LogLevel.INFO, "printHelloWorld method end");
    }
}
Finally, after setting everything up, you can send your logs and view them in Kibana as shown in the images below.