ChatOps is the trending buzzword right now, and there is some phenomenal work going on in the StackStorm community around it. According to Atlassian, “ChatOps is a collaboration model that helps connect people, process, tools and automation into a transparent workflow”.
I heard a podcast describe ChatOps as bringing the work you are already doing into the context of the conversations you are already having, which gives me goosebumps!
Shared command line, shared context (heard on the same podcast). This could solve so many incidents where people individually log into a server in some corner of the world and do something. With ChatOps everybody can execute commands right in the chat, everybody sees them happen in line, and everybody can collaborate around the results. This is super amazing in the incident management space.
Google Home is an interesting bit of tech and an interesting interface. DevOps is all about automating lots of stuff, so why not go a step further and integrate Google Home? Instead of a chat room, why not just talk to everything? Getting something done becomes a simple voice command to Google Home, either from a mobile phone or from the Google Home speaker itself.
So this gave me an idea: rather than connecting Jenkins to my Slack room, why not connect it to the Google Home speaker in my office? I felt it would be awesome to trigger jobs, or automate anything in Jenkins, through voice commands, so that is exactly what I did last week.
I started with Dialogflow, previously known as api.ai, Google’s handy tool for building agents (apps) for Google Home. It’s pretty straightforward: it divides the flow of dialog (pun intended) between agent and user into entities, which belong to intents. When we speak to Google Home, it takes the sentence and identifies the entities in it. For example, the way I see it, “build XYZ job” has two entities: “build” is an entity named @action and “xyz” is an entity named @var for my agent. In the intent, we give a few training phrases similar to this example, so the agent knows which intent to attach to once something similar comes along.
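To make the slot-filling concrete, here is a toy sketch of what the agent conceptually does with an utterance like “build xyz job”. The entity names (@action, @var) match the ones above, but the regex-based parsing is purely illustrative; Dialogflow’s trained model does this matching for you.

```javascript
// Toy illustration of extracting the @action and @var entities from an
// utterance. Dialogflow does this with its ML model, not a regex.
function extractEntities(utterance) {
  const knownActions = ["build", "stop", "disable"]; // values defined for @action
  const text = utterance.toLowerCase();
  const action = knownActions.find((a) => text.split(/\s+/).includes(a)) || null;
  // Treat whatever sits between the action word and "job" as the @var job name.
  const match = text.match(/(?:build|stop|disable)\s+(.+?)\s+job/);
  const jobName = match ? match[1] : null;
  return { action, var: jobName };
}

console.log(extractEntities("build xyz job")); // { action: 'build', var: 'xyz' }
```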
Having multiple intents helps because it gives us the opportunity to define customized responses for each one. You could easily cram everything into a single intent, skip the beautiful training algorithm we have at our disposal, and it would work fine, but where is the fun in that?
Then comes the fulfillment part. Google has incorporated Firebase functions inside Dialogflow, so we can edit and deploy the function from within Dialogflow itself, but remember that you must have a billing account attached to the project for this. You can also set up the function locally and deploy updates from there, but once you go that route, you will no longer be able to edit or deploy the function from within Dialogflow.
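The shape of the fulfillment handler looks roughly like this. This is a minimal sketch assuming the Dialogflow v2 webhook format (parameters arrive under queryResult.parameters, and the reply goes back as fulfillmentText); in the real setup this logic sits inside a Firebase Cloud Function, but it is shown here as a plain function so the request/response shape is easy to see.

```javascript
// Minimal sketch of a Dialogflow fulfillment handler (v2 webhook format).
// In production this body would live inside a Firebase Cloud Function.
function handleWebhook(requestBody) {
  const params = requestBody.queryResult.parameters;
  const action = params.action; // resolved value of the @action entity
  const job = params.var;       // resolved value of the @var entity
  // ...call the Jenkins API here, then answer the user...
  return { fulfillmentText: `Okay, running ${action} on ${job}.` };
}
```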
As for the Firebase function itself, it really depends on your approach. I went ahead and created a simple switch/case that filters on the action and the job to match what the user asked for. Once the API call to Jenkins has returned, Google Home uses one of the responses set in the matched intent.
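The switch/case dispatch described above could look something like the sketch below. The Jenkins host is a placeholder, and the endpoint paths assume Jenkins’ standard remote access API (POST /job/NAME/build to trigger a build); the stop/disable cases are extra illustrations, not necessarily what my function does.

```javascript
// Hedged sketch of mapping a spoken @action + @var pair to a Jenkins
// remote-API URL. Host and job names are placeholders.
function jenkinsUrlFor(action, job) {
  const base = "https://jenkins.example.com"; // hypothetical Jenkins host
  const name = encodeURIComponent(job);
  switch (action) {
    case "build":
      return `${base}/job/${name}/build`;          // trigger a build
    case "stop":
      return `${base}/job/${name}/lastBuild/stop`; // abort the running build
    case "disable":
      return `${base}/job/${name}/disable`;        // disable the job
    default:
      return null; // unrecognized action: let the agent use a fallback response
  }
}
```

The function would then POST to the returned URL (with an API token) and let Dialogflow pick one of the intent’s canned responses once the call succeeds.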