
Deploying SAP Commerce Cluster to Kubernetes Engine (GCP)


Overview

In this blog, I am going to demonstrate how to deploy an SAP Commerce Docker cluster to Kubernetes Engine on Google Cloud Platform. Application hosting and cloud infrastructure are steadily moving from a traditional instance/VM-based architecture towards container-based technologies, for the benefits of isolation, security, scalability and, ultimately, a lower operational cost of running cloud infrastructure.

Architecture

I am going to deploy the SAP Commerce container cluster to the GCP Kubernetes Engine.

PS: I will be referring to Google Cloud Platform as GCP throughout this blog. Below is the architecture of the SAP Commerce cluster used for my proof of concept.

Prerequisites

  • A Google Cloud Platform account
  • Docker
  • Kompose
  • SAP Commerce - I used SAP Commerce version 1811 for my proof of concept (POC)
  • A Mac or Linux based operating system (recommended), or a Linux command prompt on Windows
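
As a quick sanity check before starting, you can confirm the tools are available on your machine (a minimal sketch; the exact versions on your system will differ):

# verify the local tooling used throughout this blog
docker --version
kompose version
gcloud version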

Technical information

Preparation

For this, I have reused the y2ysync recipe of SAP Commerce, which produces the below Docker images in the form of a cluster:

  • 2 x SAP Commerce Docker images (a source and a target platform)
  • Datahub (acts as middleware for transferring information from the source to the target platform)
  • 2 x Solr Docker images (for the source and target platforms respectively)
  • 2 x HSQLDB Docker images (for the source and target platforms respectively)

Set up the y2ysync framework using the steps from this link. From that page, follow the procedure only up to step 3, then generate the Docker image structure by running the below command in the path HYBRIS_HOME/installer/recipes/y2ysync_dockerized/

./install.sh -r y2ysync_dockerized createImagesStructure

Building Docker images

Once the image structure is generated, copy the resources folder from HYBRIS_HOME/installer/recipes/y2ysync_dockerized/ into both the sourcePlatform and targetPlatform folders under HYBRIS_HOME/installer/work/output_images/y2ysync/
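
A minimal shell sketch of that copy step, assuming HYBRIS_HOME is exported as an environment variable pointing to your SAP Commerce installation (the RECIPE and OUTPUT variable names are purely for readability):

# source of the generated resources folder
RECIPE=$HYBRIS_HOME/installer/recipes/y2ysync_dockerized
# destination image structure produced by the recipe
OUTPUT=$HYBRIS_HOME/installer/work/output_images/y2ysync
cp -r "$RECIPE/resources" "$OUTPUT/sourcePlatform/"
cp -r "$RECIPE/resources" "$OUTPUT/targetPlatform/"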

Also modify the contents of the Dockerfile in both the sourcePlatform and targetPlatform folders under HYBRIS_HOME/installer/work/output_images/y2ysync/ as below.

SourcePlatform Dockerfile contents

FROM ybase_jdk

ADD binaries/ /opt/
ADD aspects /opt/aspects
ADD startup.sh /opt/startup/
ADD resources/secrets/ /etc/ssl/certs/hybris
ADD resources/init/ /opt/hybris/bin/platform/resources/init
RUN chmod +x /opt/startup/startup.sh && \
    chmod +x /opt/tomcat/bin/catalina.sh && \
    chmod +x /opt/ytools/*.sh
ENV CATALINA_SECURITY_OPTS=-Djava.security.egd=file:/dev/./urandom \
    CATALINA_MEMORY_OPTS=-Xms2G\ -Xmx2G \
    HTTPS_PORT=8088 \
    HTTP_PORT=8081 \
    HTTP_CONNECTOR_SECURE=false \
    AJP_PORT=8009 \
    KEYSTORE_LOCATION=/etc/ssl/certs/hybris/keystore \
    KEYSTORE_ALIAS=1 \
    KEYSTORE_PASSWORD=123456 \
    PLATFORM_HOME=/opt/hybris/bin/platform/ \
    WAIT_FOR="" \
    JVM_ROUTE="" \
    PATH="/opt/ytools:${PATH}" \
    HYBRIS_BIN_DIR=/opt/hybris/bin \
    HYBRIS_CONFIG_DIR=/opt/hybris/config \
    HYBRIS_DATA_DIR=/opt/hybris/data \
    HYBRIS_LOG_DIR=/var/log/hybris \
    HYBRIS_TEMP_DIR=/opt/hybris/temp \
    CATALINA_LOG_DIR=${HYBRIS_LOG_DIR}/catalina \
    ACCESS_LOG_DIR=${HYBRIS_LOG_DIR}/tomcat \
    ACCESS_LOG_SUFFIX=.log \
    ACCESS_LOG_PATTERN=combined \
    ACCESS_LOG_PREFIX=access. \
    ERROR_SHOW_REPORT=false \
    ERROR_SHOW_SERVER_INFO=false
ENTRYPOINT ["/opt/startup/startup.sh"]

TargetPlatform Dockerfile contents

FROM ybase_jdk

ADD binaries/ /opt/
ADD aspects /opt/aspects
ADD startup.sh /opt/startup/
ADD resources/secrets/ /etc/ssl/certs/hybris
RUN chmod +x /opt/startup/startup.sh && \
    chmod +x /opt/tomcat/bin/catalina.sh && \
    chmod +x /opt/ytools/*.sh
ENV CATALINA_SECURITY_OPTS=-Djava.security.egd=file:/dev/./urandom \
    CATALINA_MEMORY_OPTS=-Xms2G\ -Xmx2G \
    HTTPS_PORT=8088 \
    HTTP_PORT=8081 \
    HTTP_CONNECTOR_SECURE=false \
    AJP_PORT=8009 \
    KEYSTORE_LOCATION=/etc/ssl/certs/hybris/keystore \
    KEYSTORE_ALIAS=1 \
    KEYSTORE_PASSWORD=123456 \
    PLATFORM_HOME=/opt/hybris/bin/platform/ \
    WAIT_FOR="" \
    JVM_ROUTE="" \
    PATH="/opt/ytools:${PATH}" \
    HYBRIS_BIN_DIR=/opt/hybris/bin \
    HYBRIS_CONFIG_DIR=/opt/hybris/config \
    HYBRIS_DATA_DIR=/opt/hybris/data \
    HYBRIS_LOG_DIR=/var/log/hybris \
    HYBRIS_TEMP_DIR=/opt/hybris/temp \
    CATALINA_LOG_DIR=${HYBRIS_LOG_DIR}/catalina \
    ACCESS_LOG_DIR=${HYBRIS_LOG_DIR}/tomcat \
    ACCESS_LOG_SUFFIX=.log \
    ACCESS_LOG_PATTERN=combined \
    ACCESS_LOG_PREFIX=access. \
    ERROR_SHOW_REPORT=false \
    ERROR_SHOW_SERVER_INFO=false
ENTRYPOINT ["/opt/startup/startup.sh"]

Now build the Docker images by running the below command from the path HYBRIS_HOME/installer/work/output_images/y2ysync/

./build-images.sh

Please make sure the images have been successfully created and are available, using the below command:

sudo docker images

Push images to Google Cloud Platform

Tag these images using the docker tag command. There are multiple hostnames available for the Container Registry, which vary based on the region where the registry is hosted.

The four options are:

  • gcr.io hosts the images in the United States, but the location may change in the future
  • us.gcr.io hosts the images in the United States, in a separate storage bucket from images hosted by gcr.io
  • eu.gcr.io hosts the images in the European Union
  • asia.gcr.io hosts the images in Asia

For my POC, I am using gcr.io.
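
Before tagging and pushing, the local Docker client needs to be authorised against the Container Registry. A minimal sketch, assuming the gcloud SDK is installed and already authenticated against your GCP project:

# register gcloud as a Docker credential helper for gcr.io (one-off setup)
gcloud auth configure-docker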

sudo docker tag ybase_os gcr.io/[GCP project ID]/ybase_os
sudo docker tag ybase_jdk gcr.io/[GCP project ID]/ybase_jdk
sudo docker tag ybase_load_balancer gcr.io/[GCP project ID]/ybase_load_balancer
sudo docker tag y2y_sync_datahub-webapp gcr.io/[GCP project ID]/y2y_sync_datahub-webapp
sudo docker tag y2y_sync_source_hsql gcr.io/[GCP project ID]/y2y_sync_source_hsql
sudo docker tag y2y_sync_source_platform gcr.io/[GCP project ID]/y2y_sync_source_platform
sudo docker tag y2y_sync_target_hsql gcr.io/[GCP project ID]/y2y_sync_target_hsql
sudo docker tag y2y_sync_target_platform gcr.io/[GCP project ID]/y2y_sync_target_platform

Once tagging is successful, push all the tagged images to the Container Registry using the below commands:

sudo docker push gcr.io/[GCP project ID]/ybase_os
sudo docker push gcr.io/[GCP project ID]/ybase_jdk
sudo docker push gcr.io/[GCP project ID]/ybase_load_balancer
sudo docker push gcr.io/[GCP project ID]/y2y_sync_datahub-webapp
sudo docker push gcr.io/[GCP project ID]/y2y_sync_source_hsql
sudo docker push gcr.io/[GCP project ID]/y2y_sync_source_platform
sudo docker push gcr.io/[GCP project ID]/y2y_sync_target_hsql
sudo docker push gcr.io/[GCP project ID]/y2y_sync_target_platform
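
You can optionally confirm the images are now visible in the registry (a quick check, assuming the gcloud SDK is configured for the same project):

# list the image repositories pushed under this project's gcr.io hostname
gcloud container images list --repository=gcr.io/[GCP project ID]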

I have generated the Kubernetes YAML file from the docker-compose.yaml available under the HYBRIS_HOME/installer/recipes/y2ysync_dockerized folder, using the Kompose utility via the command:

kompose convert -f docker-compose.yaml -o kubernetes.yaml

I have made some modifications to the kubernetes.yaml file to support my POC, and below are its contents:

apiVersion: v1
items:
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: datahub
    name: datahub
  spec:
    ports:
    - name: "9999"
      port: 9999
      targetPort: 8080
    - name: "9793"
      port: 9793
      targetPort: 9793
    - name: "5005"
      port: 5005
      targetPort: 5005
    selector:
      io.kompose.service: datahub
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yaml -o kubernetes1.yaml
      kompose.version: 1.16.0 (0c01309)
    creationTimestamp: null
    labels:
      io.kompose.service: sourcehsql
    name: sourcehsql
  spec:
    ports:
    - name: "9090"
      port: 9090
      targetPort: 9090
    selector:
      io.kompose.service: sourcehsql
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -f docker-compose.yaml -o kubernetes1.yaml
      kompose.version: 1.16.0 (0c01309)
    creationTimestamp: null
    labels:
      io.kompose.service: targethsql
    name: targethsql
  spec:
    ports:
    - name: "9090"
      port: 9090
      targetPort: 9090
    selector:
      io.kompose.service: targethsql
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: sourceplatform
    name: sourceplatform
  spec:
    ports:
    - name: "8080"
      port: 8080
      targetPort: 8088
    type: LoadBalancer
    selector:
      io.kompose.service: sourceplatform
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: targetplatform
    name: targetplatform
  spec:
    ports:
    - name: "8081"
      port: 8081
      targetPort: 8088
    type: LoadBalancer
    selector:
      io.kompose.service: targetplatform
  status:
    loadBalancer: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: datahub
    name: datahub
  spec:
    replicas: 1
    strategy: {}
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: datahub
      spec:
        containers:
        - image: gcr.io/[GCP project ID]/y2y_sync_datahub-webapp
          name: datahub
          ports:
          - containerPort: 8080
          - containerPort: 9793
          - containerPort: 5005
          resources: {}
        restartPolicy: Always
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: sourcehsql
    name: sourcehsql
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: sourcehsql
      spec:
        containers:
        - image: gcr.io/[GCP project ID]/y2y_sync_source_hsql
          name: sourcehsql
          resources: {}
        restartPolicy: Always
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: sourceplatform
    name: sourceplatform
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: sourceplatform
      spec:
        containers:
        - args:
          - default
          env:
          - name: y_y2ysync_home_url
            value: http://sourceplatform:8081
          image: gcr.io/[GCP project ID]/y2y_sync_source_platform
          name: sourceplatform
          ports:
          - containerPort: 8088
          resources: {}
        restartPolicy: Always
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: sourcesolr
    name: sourcesolr
  spec:
    replicas: 1
    strategy: {}
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: sourcesolr
      spec:
        containers:
        - args:
          - default
          image: gcr.io/[GCP project ID]/ybase_solr
          name: sourcesolr
          resources: {}
        restartPolicy: Always
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: targethsql
    name: targethsql
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: targethsql
      spec:
        containers:
        - image: gcr.io/[GCP project ID]/y2y_sync_target_hsql
          name: targethsql
          resources: {}
        restartPolicy: Always
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: targetplatform
    name: targetplatform
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: targetplatform
      spec:
        containers:
        - args:
          - default
          env:
          - name: WAIT_FOR
            value: targethsql:9091 datahub:8080 targetsolr:8983
          image: gcr.io/[GCP project ID]/y2y_sync_target_platform
          name: targetplatform
          ports:
          - containerPort: 8088
          resources: {}
        restartPolicy: Always
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: targetsolr
    name: targetsolr
  spec:
    replicas: 1
    strategy: {}
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: targetsolr
      spec:
        containers:
        - args:
          - default
          image: gcr.io/[GCP project ID]/ybase_solr
          name: targetsolr
          resources: {}
        restartPolicy: Always
  status: {}
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      kompose.cmd: kompose convert -o kubernetes.yaml
      kompose.version: 1.17.0 (a74acad)
    creationTimestamp: null
    labels:
      io.kompose.service: upload-datahub-extension
    name: upload-datahub-extension
  spec:
    replicas: 1
    strategy:
      type: Recreate
    template:
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: upload-datahub-extension
      spec:
        containers:
        - args:
          - admin
          - executeScript
          - -Dresource=model://uploadDhExtension
          env:
          - name: WAIT_FOR
            value: datahub:8080 sourcehsql:9090 sourcesolr:8983
          image: gcr.io/[GCP project ID]/y2y_sync_source_platform
          name: upload-datahub-extension
          resources: {}
        restartPolicy: Always
  status: {}
kind: List
metadata: {}

Cluster setup in GCP Kubernetes Engine

To keep things simple, I have created a simple container cluster in the GCP Kubernetes Engine (a single instance with 8 vCPUs and 30 GB of RAM to accommodate the multiple containers) with the below command, which will take a few minutes to complete:

gcloud container clusters create helloapp --machine-type=n1-standard-8 --num-nodes=1

Make sure the instance has been created by checking in the GCP Console → Compute Engine → VM instances.
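
Next, fetch the cluster credentials so that kubectl points at the newly created cluster (assuming the cluster name helloapp from the command above and the default zone configured in gcloud):

# point kubectl at the new GKE cluster
gcloud container clusters get-credentials helloapp

With credentials in place, create the Kubernetes services and deployments from the generated manifest: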

sudo kubectl create -f kubernetes.yaml

You can check the status of the SAP Commerce cluster creation using this command:

kubectl get pods,services,deployment

You can launch SAP Commerce using the external IPs listed against the load balancer services, available under GCP Console home → Kubernetes Engine → Services.
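
Alternatively, you can read the external IPs straight from the command line; the EXTERNAL-IP column is populated once GCP has provisioned the load balancers:

# show the externally exposed load balancer services
kubectl get service sourceplatform targetplatform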

Next steps

Next, I am going to try the autoscaling features of the Kubernetes Engine by running the container cluster across multiple nodes and observing the behaviour of the SAP Commerce cluster. See the sketch below.
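
A minimal sketch of what that could look like; the node counts and CPU threshold below are purely illustrative and not values I have validated for SAP Commerce:

# let GKE add or remove nodes based on pending pods
gcloud container clusters update helloapp --enable-autoscaling --min-nodes=1 --max-nodes=3

# scale the source platform deployment on CPU utilisation
kubectl autoscale deployment sourceplatform --cpu-percent=70 --min=1 --max=3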

Summary

In fact, the SAP Commerce Cloud v2 infrastructure is completely based on container technologies like Docker and Kubernetes. In future, almost all SAP Commerce environments will be based on container technologies rather than the traditional VM instance setup, so it is highly recommended to get a good understanding of these technologies.

Reference

  1. https://kubernetes.io/
  2. https://cloud.google.com/kubernetes-engine/
  3. http://kompose.io/
  4. https://help.hybris.com/1811/hcd/08de5a4ee9934b94b9360cef48c1d400.html
