MLOps: Continuous Integration & Automation of ML with Docker and Jenkins

May 25, 2020

In machine learning and deep learning, data scientists need to tune a model's hyperparameters to get a more accurate model. They spend a lot of time making these changes, yet there is no guarantee that the model they end up with is the best one.

Our Objective

We are going to find the model with the best accuracy by integrating the training process with Jenkins and Docker.

What are we going to do?

First, we create a Docker container image using a Dockerfile that has Python installed along with all the essential libraries required for training the machine learning model. Then we create jobs in Jenkins to load, run, tweak, and rebuild the model, and to notify the data scientist, until we reach the desired accuracy.

Let’s Begin !!

First, we have to write a Dockerfile to create our customised image with the required libraries installed. We are going to train a CNN model, so we install the libraries accordingly.
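The exact file depends on your setup; here is a minimal sketch, assuming a CentOS base image and TensorFlow/Keras for the CNN (the base image, package names, and working directory are assumptions, not the exact file used in this project):

```dockerfile
FROM centos:7

# Python and pip for running the training script
RUN yum install -y python3 && pip3 install --upgrade pip

# Libraries needed to train the CNN model
RUN pip3 install numpy pandas tensorflow keras pillow

# Directory where Jenkins will mount the pulled code
WORKDIR /mlops
CMD ["python3", "train.py"]
```

A tag such as `mlops-cnn` (the name is an assumption) can then be used when building the image.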


After creating the Dockerfile, we build it to create our custom image.

Now our image is ready, and we will use it as our environment.

We are going to train a CNN on the MNIST dataset.

The program above trains a model to predict the digit in an image using the MNIST dataset. It has many hyperparameters: the number of neurons in the dense layer, the number of convolution and pooling layers, the number of epochs, and more. Instead of changing them manually, we use a program that a Jenkins job triggers whenever the model's accuracy is lower than desired. It tweaks the previous code by changing hyperparameters and adding convolution and pooling layers, so that we can get the highest accuracy.
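The trigger logic can be sketched in plain Python. This assumes the training run writes its final accuracy to result.txt as a line like `accuracy: 0.97`; the file name and format are assumptions for illustration, not the exact ones used by the project:

```python
def read_accuracy(path="result.txt"):
    """Parse the accuracy value the training script stored."""
    with open(path) as f:
        for line in f:
            if line.startswith("accuracy:"):
                return float(line.split(":")[1])
    raise ValueError("no accuracy found in " + path)

def needs_tweak(accuracy, target=0.99):
    """Jenkins re-runs the tweaking program while this returns True."""
    return accuracy < target
```

The Jenkins job would call `read_accuracy()` after each run and keep tweaking while `needs_tweak()` is true.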

The program to tweak the model:

To add the layers, we reuse the already trained model in the spirit of transfer learning: each time the function is called, one convolution layer and one pooling layer are added, until the output feature map shrinks to (1, 1). The function also changes the number of neurons.
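A quick back-of-the-envelope check of the (1, 1) limit, in pure Python: starting from MNIST's 28x28 input, each added pair of one "valid" convolution and one max-pool shrinks the feature map. The 3x3 kernel and 2x2 pool sizes are assumptions for illustration:

```python
def layers_until_1x1(size=28, kernel=3, pool=2):
    """Count conv+pool pairs until the feature map is 1x1."""
    pairs = 0
    while size > 1:
        size = size - (kernel - 1)   # valid convolution, e.g. 28 -> 26
        size = size // pool          # max pooling,        e.g. 26 -> 13
        pairs += 1
    return pairs
```

With these sizes the map goes 28 → 13 → 5 → 1, so the tweaking loop can only add three such pairs before hitting the limit.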

Now our code is ready , so we push it to github. And Now the role of Jenkins come into play.We will create multiple jobs for integration and automation.


JOB 1:

Pull the GitHub repo whenever the data scientist pushes code to GitHub.

Since the GitHub repo is downloaded into the Jenkins workspace, we copy it to our workspace on the base OS. Here we use Poll SCM to trigger the job from GitHub.
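The copy step of Job 1 boils down to a shell command like the sketch below. Temporary directories stand in for Jenkins' `$WORKSPACE` and for the base-OS workspace (the real paths are specific to your machine):

```shell
# Simulate Jenkins' $WORKSPACE with a temporary directory for illustration.
WORKSPACE=$(mktemp -d)
TARGET=$(mktemp -d)        # stands in for the base-OS workspace path
echo "print('hi')" > "$WORKSPACE/train.py"

# The actual Job 1 step: copy everything Jenkins pulled into our workspace.
cp -rf "$WORKSPACE"/. "$TARGET"/
ls "$TARGET"
```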


JOB 2:

When Job 1 runs successfully, Job 2 is triggered automatically. This job first checks the type of the Python file; accordingly, a Docker image with the suitable libraries is launched, and the accuracy is stored in result.txt.
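The file-type check might look like this sketch. The image names, paths, and keyword check are assumptions, and the `docker run` command is only assembled here, not executed:

```shell
cd "$(mktemp -d)"
echo "import keras" > train.py    # sample file standing in for the pulled code
CODE=./train.py
IMAGE=generic-python              # fallback image (name is an assumption)

# Pick the image by inspecting which framework the file imports.
if grep -Eq "keras|tensorflow" "$CODE"; then
    IMAGE=mlops-cnn               # our custom CNN image from the Dockerfile step
fi

# Assemble the command Jenkins would run; the container is expected
# to write its final accuracy to result.txt.
CMD="docker run --rm -v $PWD:/mlops $IMAGE python3 /mlops/train.py"
echo "$CMD"
```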

If Job 2 fails for any reason, Job 6 is triggered, and it re-runs Job 2.


JOB 3:

This job is for tweaking the code. It first checks the accuracy of the model. If the accuracy is greater than 99%, there is no need to tweak the model, so this job triggers Job 4. But if the accuracy is less than 99%, it re-runs the tweaking program in the same environment, and keeps doing so until it gets the best model or reaches the run limit we have provided.
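The retry loop of Job 3 can be sketched in Python. Here `train_once` is a hypothetical stand-in for one re-run of the tweaked training; the 99% target and the run limit come from the description above:

```python
def tweak_until_good(train_once, target=0.99, max_runs=5):
    """Re-run the tweaking program until accuracy reaches the target
    or the run limit is hit; return the best accuracy seen."""
    best = 0.0
    for _ in range(max_runs):
        accuracy = train_once()   # one tweaked training run
        best = max(best, accuracy)
        if best >= target:
            break                 # good enough: Job 4 would be triggered
    return best
```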

Once we have the most accurate model, Job 5 is triggered to inform the data scientist that the best model has been trained.


JOB 4:

This job runs if the model has reached an accuracy greater than 99%. It sends a mail to the data scientist saying that the model has been trained.

JOB 5:

This job is triggered once the model has been trained with the best accuracy. It sends a mail to the data scientist saying that his model has been trained with the best accuracy.
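A minimal sketch of the notification step, using Python's standard email and smtplib modules. The sender, recipient, SMTP host, and credentials are placeholders, so the actual sending function is defined but not called here:

```python
import smtplib
from email.message import EmailMessage

def build_notification(accuracy, sender="jenkins@example.com",
                       recipient="datascientist@example.com"):
    """Compose the mail telling the data scientist training is done."""
    msg = EmailMessage()
    msg["Subject"] = "Model trained with best accuracy"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(f"Your model has been trained. Final accuracy: {accuracy:.2%}")
    return msg

def send_notification(msg, password):
    """Actual sending; needs real credentials, so it is not run here."""
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(msg["From"], password)
        server.send_message(msg)

msg = build_notification(0.993)
```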

Here is the code:

And here is the mail we received:

JOB 6:

This job is triggered whenever Job 2 fails; it rebuilds Job 2.
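One way to do this is through Jenkins' remote build API, which exposes `POST /job/<name>/build` to queue a run of a job. In the sketch below the host, job name, and credentials are placeholders, and the command is only assembled, not executed:

```shell
JENKINS_URL="http://jenkins.example.com:8080"   # placeholder host
JOB="job2"
AUTH="user:api-token"                           # placeholder credentials

# Assemble the request that would queue a new build of Job 2.
CMD="curl -X POST --user $AUTH $JENKINS_URL/job/$JOB/build"
echo "$CMD"
```

In practice the simpler route is Jenkins' built-in "trigger even if the build fails" post-build option on Job 2 itself.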

My Jenkins jobs:


We have found the most accurate model by changing the hyperparameters automatically through the integration of Jenkins and Docker.