Previously, we built applications directly in our CI/CD environment. If an application required Maven, for example, we would simply install Maven and run it. What could possibly go wrong with that? As it turned out, plenty. It was a nightmare: we had to manage every version of every SDK and build tool, or, as someone once put it, "a random assortment of dependencies and tools." We were repeatedly asked to install or update some tool or SDK, and now imagine doing that across a whole CI/CD cluster. Instances had to be configured; even with snapshots you still needed to configure one or two by hand, and afterwards the CI/CD nodes required clean-up as well. We tried an SDK manager in some cases, but it still meant SSHing into the machine. We hoped we could find some way to stop doing this repetitive task again and again...
Then the internet came to our rescue: I found a practice going around called building inside Docker, using containers in the pipeline to perform any and all build actions. We had found the holy grail for our problem. It would not only make life easier for developers, it would also eliminate the need for us to intervene at all. Developers can now declare every dependency and tool their code requires without worrying whether the build machine has that version or tool installed.
All we had to do was convert the existing jobs to the new pattern and show everyone how to adapt. Easier said than done: even if your organization believes change is the only constant, change is still sometimes hard for individuals. It is often perceived as more work, even when it makes life easier, and new concepts with a steep learning curve naturally invite some apprehension.
So we waited for an opportunity to present itself. It came when we were asked to install Newman on the CI/CD server so our QA team could run some tests internally. The only prerequisite was that the CI/CD server had Docker installed, which was true in our case.
Previously, to run Newman we just had to pull the Git repo with the Postman collection and execute it directly:
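The invocation looked something along these lines; the collection file name is illustrative:

```shell
# The old way: Newman installed globally on the CI/CD node.
# "sample-collection.json" is a placeholder for the collection in the repo.
newman run sample-collection.json
```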
The change was very subtle: we just had to substitute the newman at the front of the command with a docker run invocation that brings us Newman's CLI (the collection file name here is illustrative):
docker run --rm -v $(pwd):/etc/newman postman/newman:alpine run sample-collection.json
The difference is just this prefix:
docker run --rm -v $(pwd):/etc/newman postman/newman:alpine
The official Newman image can be found on Docker Hub.
Some more examples, following the official maven and node image documentation on Docker Hub (container names, mount paths and image tags are the ones those docs use), could be:
docker run -it --rm --name my-maven-project -v "$(pwd)":/usr/src/mymaven -w /usr/src/mymaven maven:3-jdk-8 mvn clean package
docker run -it --rm --name my-running-script -v "$(pwd)":/usr/src/app -w /usr/src/app node:8 node your-daemon-or-script.js
Easy, right?
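Converting an existing pipeline job to this pattern is then just a matter of replacing the bare tool invocation with its docker run equivalent. A sketch of such a build step, assuming a Maven project (the repo URL, paths and image tag are illustrative):

```shell
#!/bin/sh
# Hypothetical CI build step: no Maven or JDK on the host, only Docker.
set -e
git clone https://example.com/our/app.git
cd app
# Mount the source into the container and run the build there;
# mounting ~/.m2 caches the local Maven repository between builds.
docker run --rm \
  -v "$(pwd)":/usr/src/app \
  -v "$HOME/.m2":/root/.m2 \
  -w /usr/src/app \
  maven:3-jdk-8 mvn clean package
```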
Let us go a little further and explore a feature Docker released called multi-stage builds. It was inspired by the Builder pattern of object-oriented design: a container is executed solely to create a complex object, and in this case the object is a microservice container image. Docker modified the Builder pattern a bit and it is much easier to use now. Previously, we either containerized the whole build process or, if we wanted to build smaller images, copied the artefacts out by hand; neither way was ideal. Now we have one Dockerfile divided into two parts, one for the build and one for the runtime. In the build part we compile our application, and once finished we copy only the artefact into the runtime part.
Still with me, right? During the build part we can bring in the heavy guns, the JDK and all the SDKs and build tools we require, and execute the build process in the Dockerfile. When it succeeds, we just move to the next section and use FROM again, this time pulling only the essentials. In our case that was openjdk:8-jre-slim, just the runtime environment, and a slimmed-down version at that. You have to keep the size small these days: you are not running just a few services, you are planning to run hundreds. After the second FROM, the next line can COPY the artefacts from the previous build container. By the successful end of it, you are left with the smallest possible runtime container image, and the CI/CD server environment stays clean and nifty.
Below is a sample multi-stage build file. It is a sketch for a Maven project: the image tags, the APP_HOME path and the application.jar name are illustrative, and the missing lines (ENV, WORKDIR, the final mvn package and the ENTRYPOINT) are reconstructed from the surrounding ones.
FROM maven:3-jdk-8-alpine as target
ENV APP_HOME=/root/dev/application
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
COPY pom.xml $APP_HOME
RUN mvn dependency:go-offline
RUN mkdir -p $APP_HOME/src
COPY src/ $APP_HOME/src
RUN mvn package

FROM openjdk:8-jre-slim
COPY --from=target /root/dev/application/target/application.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
Although the benefits of using Docker in the normal build process and of multi-stage builds are evident, for posterity's sake let us jot them down:
- Compatibility and maintainability: it eliminates "it works on my machine" and tool/SDK version-mismatch issues once and for all.
- No need to install Node.js libraries or Newman at the system level, or to keep installing them again and again.
- Isolation of your build/test from other environment variables and from other processes currently executing on the same machine.
- Standardization of the application from end to end: if it works on your system, it will work on the server.
- A clean CI/CD server and faster configuration, meaning no more waiting for another team to configure your environment for you.
- Rapid deployment of smaller container images, and a higher cadence of continuous deployment and testing.
- And, because it deserves another mention: isolation and security.
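To tie it all together, building and running such a multi-stage image comes down to the usual two commands; the image name here is illustrative:

```shell
# Build the multi-stage image: only the final slim runtime stage
# ends up tagged as "application"; the build stage is discarded.
docker build -t application .

# Run the resulting runtime-only image.
docker run --rm application
```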
This article is a tribute to Tony Stark, you will be missed...