Detecting a DDoS attack with clustering on HTTP server logs, and then triggering a Jenkins job that blocks the IPs from which the server is being attacked.
First we take the data from the HTTP web server log and store it in a CSV file. Then we scale the data for better accuracy, after which we feed it to our machine learning model, which performs the clustering. Clustering helps us detect the malicious IPs. Once we have the IPs, they are automatically blocked by the Jenkins job.
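The pipeline above (parse log → features per IP → scale → cluster → flag IPs) can be sketched roughly as follows. This is a minimal sketch, not the project's actual model: the request counts, the single-feature choice, and the use of KMeans with two clusters are all assumptions made for illustration.

```python
# Sketch: cluster per-IP request counts from an HTTP access log to
# separate normal clients from a high-volume attacker.
# All IPs and counts below are illustrative assumptions.
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-IP request counts parsed from the access log:
# a few normal clients plus one IP flooding the server.
requests = Counter({
    "10.0.0.1": 12, "10.0.0.2": 9, "10.0.0.3": 15,
    "10.0.0.4": 11, "198.51.100.7": 5000,
})

ips = list(requests)
X = [[requests[ip]] for ip in ips]

# Scale features so clustering is not dominated by raw magnitudes.
X_scaled = StandardScaler().fit_transform(X)

# Two clusters: normal traffic vs. suspected attack traffic.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)
labels = list(km.labels_)

# The malicious cluster is the one with the higher mean request count.
mean0 = sum(requests[ip] for ip, l in zip(ips, labels) if l == 0) / max(1, labels.count(0))
mean1 = sum(requests[ip] for ip, l in zip(ips, labels) if l == 1) / max(1, labels.count(1))
bad_label = 0 if mean0 > mean1 else 1
malicious_ips = [ip for ip, l in zip(ips, labels) if l == bad_label]
print(malicious_ips)  # this list would then be handed to the Jenkins job
```

The resulting list could then be passed to the Jenkins job, which blocks those sources (for example with firewall rules).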
Dumping the log data. Since…
In this blog, we will be talking about Chaos Engineering. To understand what Chaos Engineering is, we first have to know about resilience.
Resilience is a system’s ability to recover from a fault and maintain persistency of service dependability when a fault occurs. Anyone who is looking at resilience will usually have a steady-state hypothesis for their system. If that steady state is regained after a fault occurs, then the system is said to be resilient against that fault.
Chaos Engineering is the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand…
In this article, we are going to configure the NameNode, DataNode and client node of a Hadoop cluster and start the Hadoop services using Ansible.
First, let's discuss some terminology.
Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs. Designed for multi-tier deployments since day one, Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just managing one system at a time. …
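As a rough sketch of what such a playbook might look like for the NameNode, something along these lines could be used; the host group name, file paths and the Hadoop command are assumptions for illustration, not taken from the original article:

```yaml
# Hypothetical playbook sketch: push Hadoop configuration to the
# NameNode group and start the daemon. Paths and names are illustrative.
- hosts: namenode
  tasks:
    - name: Copy hdfs-site.xml for the NameNode
      template:
        src: hdfs-site.xml.j2
        dest: /etc/hadoop/hdfs-site.xml

    - name: Copy core-site.xml
      template:
        src: core-site.xml.j2
        dest: /etc/hadoop/core-site.xml

    - name: Start the NameNode daemon
      command: hadoop-daemon.sh start namenode
```

Similar plays, with their own templates and start commands, would cover the DataNode and client node groups.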
In this blog, we are going to launch an HAProxy load balancer and multiple web servers on top of EC2 instances using Ansible.
First, let's discuss some terminology.
A load balancer acts as the “traffic cop” sitting in front of your servers and routing client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization and ensures that no one server is overworked, which could degrade performance. If a single server goes down, the load balancer redirects traffic to the remaining online servers. …
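Conceptually, the HAProxy configuration that the playbook renders looks something like the fragment below; the port, backend name and server IPs are placeholders, not values from the original setup:

```
# Illustrative haproxy.cfg fragment: the frontend listens on one port
# and balances requests across the web servers in round-robin fashion.
frontend main
    bind *:8080
    default_backend app

backend app
    balance roundrobin
    server web1 172.31.0.11:80 check
    server web2 172.31.0.12:80 check
```

In the Ansible setup, the `server` lines are typically generated from the inventory with a Jinja2 loop, so adding a web server to the inventory automatically adds it to the load balancer.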
We will go step by step to create this setup.
First, we check connectivity with both google.com and facebook.com.
We live in a world of big data, artificial intelligence and machine learning. Companies are becoming curious about the applications and benefits of machine learning in business. Many people have heard of ML, but they don't really know what exactly it is, what business-related problems it can solve, or what value it can add to a business. A 2016 study projected that by 2020, at least 30% of companies globally would use AI in at least one fragment of their sales processes. …
We are going to deploy a web server on top of K8s using Jenkins DSL. Using Jenkins DSL, we can share and deploy our jobs very easily. A Jenkinsfile is basically a text file that contains the definition of a Jenkins pipeline; it uses a Domain-Specific Language (DSL) based on Groovy.
1) Create a container image that has Jenkins installed using a Dockerfile, or you can use the Jenkins server on RHEL 8.
2) When we launch this image, it should automatically start the Jenkins service in the container.
3) Create a job chain of job1, job2, job3 & job4 using the build pipeline plugin in Jenkins.
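A minimal declarative Jenkinsfile along these lines might look as follows; the repository URL, stage contents and manifest paths are hypothetical, added only to illustrate the Groovy DSL mentioned above:

```groovy
// Hypothetical Jenkinsfile sketch: pull the repo, then deploy the
// web server on Kubernetes. Names and commands are illustrative.
pipeline {
    agent any
    stages {
        stage('Pull') {
            steps {
                git 'https://github.com/example/webserver.git'  // placeholder repo
            }
        }
        stage('Deploy to K8s') {
            steps {
                sh 'kubectl apply -f k8s/deployment.yml'
                sh 'kubectl apply -f k8s/service.yml'
            }
        }
    }
}
```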
Integrate Prometheus and Grafana and perform the following:
1. Deploy them as pods on top of Kubernetes by creating resources: Deployment, ReplicaSet, Pods and Services
2. Make their data persistent
3. Expose both of them to the outside world
Creating the ConfigMap for Prometheus
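A minimal sketch of such a ConfigMap is shown below; the resource name, scrape interval and target are illustrative assumptions, not the exact values from the original setup:

```yaml
# Illustrative ConfigMap holding a minimal prometheus.yml, so the
# Prometheus pod can mount its configuration from Kubernetes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: prometheus
        static_configs:
          - targets: ['localhost:9090']
```

The Prometheus Deployment then mounts this ConfigMap as a volume at Prometheus's configuration path.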
Create a dynamic Jenkins cluster and perform task-3 using the dynamic Jenkins cluster.
The steps to proceed are:
1. Create a container image that has Linux and the other basic configuration required to run a slave for Jenkins (for example, here we require kubectl to be configured).
2. When we launch the job, it should automatically start on a slave based on the label provided, following the dynamic approach.
3. Create a job chain of job1 & job2 using the build pipeline plugin in Jenkins.
4. Job1: Pull the GitHub repo automatically when a developer pushes to GitHub and perform the following operations:
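Step 1 above, a slave image with kubectl configured, could be sketched with a Dockerfile like this; the base image, package names and kubectl version are assumptions for illustration:

```dockerfile
# Hypothetical slave image: Java for the Jenkins agent plus kubectl.
FROM centos:7
RUN yum install -y java-11-openjdk git && \
    curl -Lo /usr/bin/kubectl \
      https://storage.googleapis.com/kubernetes-release/release/v1.19.0/bin/linux/amd64/kubectl && \
    chmod +x /usr/bin/kubectl
# A kubeconfig is copied (or mounted) so kubectl can reach the cluster.
COPY kubeconfig /root/.kube/config
```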
We are going to deploy an infrastructure with all the powerful features of Kubernetes, like Deployment, Service and PVC, to host a website. We are also going to use Jenkins for CI/CD.
1. Create a container image that has Jenkins installed using a Dockerfile, or you can use the Jenkins server on RHEL 8/7.
2. When we launch this image, it should automatically start the Jenkins service in the container.
3. Create a job chain of job1, job2, job3 and job4 using the build pipeline plugin in Jenkins.
4. Job1: Pull the GitHub repo automatically when a developer pushes to GitHub.
5. Job2 :
1. By looking…
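The Kubernetes pieces mentioned above (Deployment, Service, PVC) could be sketched roughly as follows; resource names, the httpd image and the mount path are placeholders, not the original manifests:

```yaml
# Illustrative manifests for hosting a website: a PVC for the web
# content, a Deployment running httpd, and a NodePort Service.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: httpd
          image: httpd
          volumeMounts:
            - name: content
              mountPath: /usr/local/apache2/htdocs
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: web-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: NodePort
  selector: {app: web}
  ports:
    - port: 80
```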