


Monitoring and alerting are among the most important practices in the DevOps world. Today we are going to see how to install the Cabot tool for monitoring and alerting.

Requirements:

1. Docker and docker-compose

2. Git

3. Graphite server


Step 1: Clone the Cabot repository locally.

git clone https://github.com/cabotapp/docker-cabot

Step 2: Update the settings as needed.

cd docker-cabot/conf

mv production.env.example production.env 

Step 3: Install Cabot via docker-compose

cd docker-cabot

docker-compose up -d

Wait a few minutes until the containers come up.
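Rather than waiting blindly, we can poll the Cabot UI until it answers. This is a minimal sketch, assuming curl is installed; port 5000 matches the URL used in step 4 below.

```shell
# Poll a URL until it responds, with a bounded number of retries.
wait_for_url() {
  url="$1"; tries="${2:-24}"; delay="${3:-5}"   # defaults: 24 tries x 5s
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "up"; return 0
    fi
    sleep "$delay"; i=$((i + 1))
  done
  echo "timed out"; return 1
}

# Usage once docker-compose is running:
#   wait_for_url http://localhost:5000/
```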

Step 4: Login to Cabot portal

URL : http://localhost:5000/


On first visit it will ask you to set up a username and login details.

Step 5: Set up a simple service and checks.

Cabot has three monitoring objects: instance, check, and service.

A "check" is a particular task you run to verify something. Checks come in predefined types, for example:

ping: ping a host.

HTTP: call a URL and check the HTTP status code.

An "instance" is an actual machine running some service; it has an IP address or hostname.

A "service" is the higher-level thing you want to monitor; it groups instances and checks.

I am running an nginx web server locally and will enable a check for it.
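Before configuring the check in the UI, the "check" idea can be made concrete. This is a minimal sketch of the pass/fail logic behind an HTTP check, not Cabot's actual code: the check passes only when the observed status code equals the expected one.

```shell
# Decide PASS/FAIL for an HTTP check given the observed status code.
evaluate_http_check() {
  observed="$1"; expected="${2:-200}"
  if [ "$observed" -eq "$expected" ]; then
    echo "PASS"
  else
    echo "FAIL (got $observed, expected $expected)"
  fi
}

# In a real check the observed code comes from fetching the configured URL,
# roughly: curl -s -o /dev/null -w '%{http_code}' http://localhost:80
evaluate_http_check 200       # -> PASS
evaluate_http_check 503 200   # -> FAIL (got 503, expected 200)
```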

After logging in, go to the Checks tab, click the "+" icon, and fill in the rest as shown below.


After saving the configuration:


Step 6: Test the Alert

Let's stop the nginx web server and see whether we receive an email.


Successfully received an email.

Ref: https://github.com/cabotapp/docker-cabot





In this blog, we will see how to install Hypertrace for a Dockerized application to collect and visualize distributed traces.

What is Hypertrace: it is a cloud-native, distributed-tracing-based observability platform that gives visibility into distributed systems in any environment. It converts distributed trace data into relevant insights for everyone.

Hypertrace supports all standard instrumentation libraries and agents. If your application is already instrumented with OpenTelemetry, Jaeger or Zipkin, Hypertrace will work out of the box with your application telemetry data.
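Since the platform accepts standard Zipkin spans, a quick hand-rolled span can confirm ingestion end to end. This is a sketch assuming the collector listens on the standard Zipkin port 9411 and the standard `/api/v2/spans` path; verify the endpoint against the Hypertrace docs before relying on it.

```shell
# Post one minimal Zipkin v2 span; it should then appear in the Hypertrace UI.
curl -X POST "http://<VM-IP>:9411/api/v2/spans" \
  -H 'Content-Type: application/json' \
  -d '[{"traceId":"5af7183fb1d4cf5f","id":"352bff9a74ca9ad2","name":"ping",
        "timestamp":1556604172355737,"duration":1431,
        "localEndpoint":{"serviceName":"smoke-test"}}]'
```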

Requirements:

- Docker engine & Docker compose

Step 1: Clone and install Hypertrace.

# git clone https://github.com/hypertrace/hypertrace.git 

# cd hypertrace/docker 

# docker-compose pull 

# docker-compose up --force-recreate


Step 2:  Access Hypertrace Dashboard

Once step 1 completes successfully, we can access the Hypertrace dashboard from the browser.

URL: http://<VM-IP>:2020



Step 3: Sample Application test with Hypertrace

The cloned repo includes a sample application with frontend and backend APIs that sends trace data to Zipkin. Let's try it.

# cd hypertrace/docker 

# docker-compose -f docker-compose-zipkin-example.yml up

Once the containers are up, we can open the frontend in the browser at:

URL: http://<VM-IP>:8081



Step 4: View the metrics in Hypertrace

Hit the frontend URL a few times, then check the Hypertrace dashboard for the data.

We can see the list of APIs, errors, latency, etc. Here are a few screenshots.







Here is the list of Docker containers running at the end:



That's all: Hypertrace is installed in Docker, tested with a sample application, and validated.

We can deploy Hypertrace on Kubernetes as well and collect the metrics. Refer to the link below:
https://docs.hypertrace.org/deployments

Ref: https://github.com/hypertrace/hypertrace

https://docs.hypertrace.org/

As DevOps/SRE engineers, we regularly write Terraform code, Kubernetes YAML, Dockerfiles, etc. To keep this code healthy, we need a tool that gives visibility into security issues and vulnerabilities.

In this blog, we will see how to use the "checkov" tool to identify vulnerabilities and issues in Terraform scripts, Dockerfiles, and K8s deployment manifests.

For more details about checkov : https://github.com/bridgecrewio/checkov

Requirements:

OS : Linux

Python >= 3.7

Terraform >= 0.12


Checkov Installation:

# pip install checkov

To find the installed version,

# checkov --version

The full list of checks can be viewed with the command below:

# checkov --list

Next, we will try checkov on Terraform code, a K8s YAML file, and a Dockerfile.


Check Terraform code with checkov:

Cmd:

# checkov -d path-of-the-Tf-scripts

eg :

# checkov -d /root/terraform-code

Under this terraform-code directory, I have multiple scripts.

The checkov output shows what action needs to be taken. In the result below, 26 checks failed, so we can review and fix them one by one.
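As a concrete illustration (a hypothetical resource, not one from the scanned directory), a bare S3 bucket like the one below trips several of checkov's built-in AWS checks, such as encryption at rest (CKV_AWS_19 at the time of writing):

```
resource "aws_s3_bucket" "logs" {
  bucket = "example-logs-bucket"
  # no encryption, versioning, or access-logging blocks configured,
  # so the corresponding checks will be reported as FAILED
}
```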


Check Dockerfile with checkov:

Cmd:

# checkov -f dockerfile-path

eg :

# checkov -f /root/Dockerfile

In the screenshot above, 2 checks failed, so we can review and fix them one by one.

Check Kubernetes deployment file with checkov:

Cmd:

# checkov -f yaml-file-path

eg :

# checkov -f /root/pod.yaml

In the screenshot above, 20 checks failed, so we can review and fix them one by one.

We can also skip specific checks on the command line:

eg : checkov -f /root/Dockerfile --skip-check CKV_AWS_28
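The same skips can be kept in a config file so every run picks them up automatically. This is a sketch assuming checkov's config-file support (see `checkov --help` for the `--config-file` option); key names mirror the CLI flags.

```yaml
# .checkov.yaml, read from the working directory
skip-check:
  - CKV_AWS_28
quiet: true        # print only failed checks
```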


That's all: we have installed checkov and tested it on Terraform code, a Dockerfile, and a K8s YAML file.


Nowadays every organization uses Kubernetes orchestration for Dev, QA, Prod, and other environments. Today we look at a tool called "Octant" that helps users understand cluster status, view logs, update metadata, see resource utilization, and more. This blog covers the installation.

Requirements:

1. K8s cluster

2. Local desktop


Step 1: Install Octant.

The installation is done on the local machine, not on the cluster.

OS: Linux (installers are available for Windows and macOS as well)

Octant package to download : https://github.com/vmware-tanzu/octant/releases

Download the Linux package locally and extract it.

https://github.com/vmware-tanzu/octant/releases/download/v0.24.0/octant_0.24.0_Linux-64bit.tar.gz


Step 2: K8s cluster config

Place your Kubernetes cluster config at the path below.

/root/.kube/config

By default, Octant looks for the cluster configuration at this path.
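If your kubeconfig lives somewhere else, you can point Octant at it explicitly. The `--kubeconfig` flag and the `KUBECONFIG` variable shown here are assumptions to verify with `./octant --help`:

```shell
# Either pass the path as a flag...
./octant --kubeconfig /path/to/your/kubeconfig
# ...or export it for the session
KUBECONFIG=/path/to/your/kubeconfig ./octant
```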


Step 3: Start Octant

Go to the extracted path and start it,

cd octant_0.24.0_Linux-64bit

./octant

At the end you will see the message "Dashboard is available at http://127.0.0.1:7777", which means Octant started successfully and we can access the dashboard.


Step 4: Access Octant dashboard

Go to the browser and enter http://localhost:7777


You can see your cluster name at the top right, and you can select a namespace next to the cluster name to show its details. From here we can view metadata and logs, update deployments, and more.

That's all: we have successfully installed the Octant dashboard and viewed the status of the cluster.


Do you want to apply policies to prevent unwanted changes in a Kubernetes cluster? Kyverno is the right tool for that.

Kyverno is a policy engine for Kubernetes: you define and enforce policies so that cluster users follow a standard mechanism.

In this blog, we will see how to install Kyverno in Kubernetes and define policy.

Requirements:

Kubernetes cluster v1.14 or later

Step 1: Install Kyverno on Kubernetes using the manifest.

# kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/master/definitions/release/install.yaml

Validate the installation,

# kubectl get all -n kyverno



Step 2:

Create a policy that blocks any pod without the label "app" from being deployed in the cluster.

# cat policy.yaml

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: enforce
  rules:
  - name: check-for-app-label
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "label `app` is required"
      pattern:
        metadata:
          labels:
            app: "?*"

# kubectl apply -f policy.yaml

The policy is now created. From now on, any pod without the label "app" will be rejected by the cluster.
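When rolling a policy out to a busy cluster, it can be safer to start in audit mode, where violations are only reported instead of blocked, and switch to enforce later. Only the one field changes:

```yaml
spec:
  validationFailureAction: audit   # report violations, do not reject resources
```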

For more Policies : https://github.com/kyverno/policies/tree/main/best-practices

Step 3:

Create a sample pod without the label "app".

# vi nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: application
  labels:
    name: webapp
spec:
  containers:
  - name: webapp
    image: nginx

# kubectl apply -f nginx.yaml


You can see the pod is not deployed; it is blocked by our policy.

Now add the app label and try again.

# vi nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: application
  labels:
    name: webapp
    app: webapp
spec:
  containers:
  - name: webapp
    image: nginx

# kubectl apply -f nginx.yaml


Now the pod is deployed. Similarly, we can create our own custom policies and restrict deployments in any cluster.

That's all: Kyverno is installed in the Kubernetes cluster and a policy has been tested.


Reference : https://kyverno.io/docs/introduction/


 

In this blog we will see, step by step, how to install cAdvisor, Node Exporter, Prometheus, and Grafana to monitor Docker containers and their hosts.

Note: we are going to use Docker images for all the tools.

Requirements:

One Linux server running Docker

Step 1:

Deploy cAdvisor in Docker:

cAdvisor provides resource usage and performance characteristics of running containers.

Execute the Docker command below on the Linux server:

# docker run -d -p 8080:8080 -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker/:/var/lib/docker:ro --name=cadvisor google/cadvisor:latest

We can access cAdvisor in the browser at http://server-IP:8080

Step 2:

Deploy Node-Exporter in docker:

Node Exporter measures various machine resources such as CPU, memory, disk, and network utilization.

Execute the Docker command below on the Linux server:

# docker run -d -p 9100:9100 --name=node-exporter prom/node-exporter

We can access the Node Exporter metrics in the browser at http://server-IP:9100/metrics


Step 3:

Deploy Prometheus in docker:

To deploy Prometheus, we first need to create a configuration file like the one below:

# vi /root/config/prometheus.yml

global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['Host-IP:9090']
      labels:
        alias: 'prometheus'
  - job_name: 'cadvisor'
    static_configs:
    - targets: ['Host-IP:8080']
      labels:
        alias: 'cadvisor'
  - job_name: 'node-exporter'
    static_configs:
    - targets: ['Host-IP:9100']
      labels:
        alias: 'node-exporter'


Save the file.

Here the cAdvisor and Node Exporter metrics endpoints are configured as Prometheus scrape targets.

Now run the Prometheus docker command,

# docker run -d -p 9090:9090 -v /root/config/prometheus.yml:/etc/prometheus/prometheus.yml --name=prometheus prom/prometheus

We can access the Prometheus metrics in the browser at http://server-IP:9090/metrics

We can check whether the targets are up at http://server-IP:9090/targets
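A quick way to confirm that cAdvisor data is actually flowing is a PromQL query in the Prometheus UI (http://server-IP:9090/graph), for example counting the containers cAdvisor has seen recently:

```
count(container_last_seen{image!=""})
```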

Step 4:

Deploy Grafana in docker:

Execute the Docker command below on the Linux server:

# docker run -d -p 3000:3000 --name=grafana grafana/grafana:latest


We can access Grafana in the browser at http://server-IP:3000


To log in to Grafana, the default username and password are both "admin".

Once logged in to Grafana, we need to add Prometheus as a data source.

Go to Configuration > Data Sources > Add data source > select Prometheus, give the data source a name, and enter the Prometheus URL from step 3, e.g. http://server-IP:9090.


Now click "Save & Test"; it should look like the image above.

Next, import the Grafana dashboard JSON from the link below, or create the dashboard manually.

https://docs.google.com/document/d/1CDwcNQ_0UuPLlkRDSJvgAtILcKc4DZfl8EueMgg8tY4/edit?usp=sharing

Click the "+" icon on the left, choose "Import", paste the JSON from the link above into the "Import via panel json" box, load it, and click "Import".



Now open the dashboard and we can see the docker container status and docker host status.


We are currently running 4 Docker containers in total, and the dashboard shows them correctly.



Step 5:

Next, we will verify that monitoring works by deploying a test web app in Docker.

Execute the Docker command below on the Linux server:

# docker run -d -p 80:80 --name=tweb yeasy/simple-web:latest


Now wait; it will take a few minutes for the new container to appear in the Grafana dashboard.


Great! The new Docker container now appears in the Grafana dashboard.

That's all: we have successfully deployed cAdvisor, Node Exporter, Prometheus, and Grafana to monitor Docker containers and their hosts.
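As a follow-up, the four `docker run` commands from steps 1-4 can also be captured in a single docker-compose file. This is an untested sketch using the same images, ports, and volumes as above:

```yaml
version: "3"
services:
  cadvisor:
    image: google/cadvisor:latest
    ports: ["8080:8080"]
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  node-exporter:
    image: prom/node-exporter
    ports: ["9100:9100"]
  prometheus:
    image: prom/prometheus
    ports: ["9090:9090"]
    volumes:
      - /root/config/prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana:latest
    ports: ["3000:3000"]
```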



Monitoring and Alerting are the most important words in DevOps world. Today we are going to see how to install Cabot tool for monitoring and alerting. 

Requirements:

1. docker and docker-compose

2.Git tool

3.Graphite server


Steps 1: Clone Cabot repository in local.

git clone https://github.com/cabotapp/docker-cabot

Steps 2: Update the settings based on our needs.

cd docker-cabot/conf

mv production.env.example production.env 

Step 3: Install Cabot via docker-compose

cd docker-cabot

docker-compose up -d

Wait for few minutes until the containers are comes up

Step 4: Login to Cabot portal

URL : http://localhost:5000/


Initially it will ask to setup username and login details.

Step 5:  Setup a simple service and checks.

There are three options to Monitor in Cabot, they are instance, check and service.

"Check" is some particular task you want to run to check something. Checks can be of some predefined types, like:

ping: a ping to a host

HTTP: call an URL and check the HTTP status.

"Instance" is an actual instance of a machine that will have some service running. It will have a IP/hostname.

"Service" is the macro stuff you want to monitor.

Am running a nginx webserver locally, will enable check for that.

After login go to checks Tab and click the "+" icon and rest add similar like below,


After saved the configuration,


Step 6: Test the Alert

Lets stop the nginx webserver and see if we are getting an email.


Successfully received an email.

Ref: https://github.com/cabotapp/docker-cabot





In this blog, we will see how to install Hypertrace for the docker container application to collect distributed tracing and visualize it.

What is Hypertrace: It is a cloud-native distributed tracing based observability platform that gives visibility into any environment distributed system. It converts distributed trace data into relevant insight for everyone.

Hypertrace supports all standard instrumentation libraries and agents. If your application is already instrumented with OpenTelemetry, Jaeger or Zipkin, Hypertrace will work out of the box with your application telemetry data.

Requirements:

- Docker engine & Docker compose

Step 1: Clone and Install hypertrace,

# git clone https://github.com/hypertrace/hypertrace.git 

# cd hypertrace/docker 

# docker-compose pull 

# docker-compose up --force-recreate


Step 2:  Access Hypertrace Dashboard

Once step 1 is completed successfully, We can access the Hypertrace dashboard from the browser.

URL: http://IP of the VM:2020



Step 3: Sample Application test with Hypertrace

The above-cloned repo is having a sample application which is having frontend and backend APIs and it sends data to Zipkin. Let's check that.

# cd hypertrace/docker 

# docker-compose -f docker-compose-zipkin-example.yml up

Once the containers are up, we can check the frontend in the browser by, 

URL: http://IP of the VM:8081



Step 4: View the metrics in Hypertrace

Hit the frontend URL multiple times and see Hypertrace dashboard to see the data.

We can see the list of APIs, Errors, latency, etc. Here are few screenshots.







Here is the list of docker containers that are running at the end,



That's all, Hypertrace is installed successfully in docker, tested with a sample application and validated.

We can deploy Hypertrace in Kubernetes as well and collect the metrics. Refer to the below link
https://docs.hypertrace.org/deployments 

 Ref. : https://github.com/hypertrace/hypertrace

           https://docs.hypertrace.org/

As a DevOps/SRE, We used to write terraform code, Kubernetes Yaml, Dockerfile, etc. In order to make sure our code is healthy, we need to have a tool to get a visibility of any security issues and vulnerabilities.

In this blog, We will see how to use the "checkov" tool to identify vulnerability and issues in terraform script, Dockerfile, and K8s deployment manifest.

For more details about checkov : https://github.com/bridgecrewio/checkov

Requirements:

OS : Linux

Python >= 3.7

Terraform >= 0.12


Checkov Installation:

# pip install checkov

To find the installed version,

# checkov --version

All the list of checks can be view by below command,

# checkov --list

Next, we will experiment with checkov with Terraform Code, K8s Yaml file and Dockerfile.


Check Terraform code with checkov:

Cmd:

# checkov -d path-of-the-Tf-scripts

eg :

# checkov -d /root/terraform-code

Under this terraform-code directory, I have multiple scripts.

In the checkov result, we can see what action needs to take. In the below result we can see 26 checks are failed, so we can validate one by one and fix it.


Check Dockerfile with checkov:

Cmd:

# checkov -f dockerfile-path

eg :

# checkov -f /root/Dockerfile

 In the above screenshot result, we can see 2 checks are failed, so we can validate one by one and fix it.

Check Kubernetes deployment file with checkov:

Cmd:

# checkov -f  Yaml-file-path

eg :

# checkov -f /root/pod.yaml

In the above screenshot result, we can see 20 checks are failed, so we can validate one by one and fix it.

We can skip the checks in the command,

eg : checkov -f /root/Dockerfile --skip-check CKV_AWS_28


That's all, we have installed checkov and tested with some terraform code, dockerfile and K8s yaml file.


Nowadays every organization is using Kubernetes orchestration for Dev, QA, Prod, etc. environments. Today we are going to see a tool called "Octant" which helps all the users to understand their cluster status, view the logs, update the meta data, see the resources utilization, etc. In this blog will cover how to do the installation.

Requirements:

1.K8s cluster

2.Local desktop


Step 1: Installation of Octant,

The installation will be on the local machine not on the cluster.

OS : Linux (Installer are available for Windows and Mac as well)

Octant package to download : https://github.com/vmware-tanzu/octant/releases

Download the linux package in you local and extract it.

https://github.com/vmware-tanzu/octant/releases/download/v0.24.0/octant_0.24.0_Linux-64bit.tar.gz


Step 2: K8s cluster config

Keep your kubernetes cluster config at below path.

/root/.kube/config

By default octant will search the cluster configuration from above path.


Step 3: Start Octant

Go to the extracted path and start it,

cd octant_0.24.0_Linux-64bit

./octant

At the end you will get a message as "Dashboard is available at http://127.0.0.1:7777" it means its successfully started and we can access the dashboard.


Step 4: Access Octant dashboard

Go to browser and enter, http://localhost:7777


You can see your cluster name at the top right and you can select the namespace near to the cluster name to show the entire details. Through this we can view the Metadata, Logs, Update the deployments, etc.

Thats all we have successfuly installed the Octant dashboard and view the status of the cluster.


Do you want to apply any policy to avoid any changes happen in Kubernetes cluster? Kyverno is the right tool to achieve it.

Kyverno - Its a policy engine for kubernetes, define and enforce policies so that cluster users can maintain standard mechanism.

In this blog, we will see how to install Kyverno in Kubernetes and define policy.

Requirements:

Kubernetes cluster greater than v1.14

Step 1: Install Kyverno on kubernetes using manifest.

# kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/master/definitions/release/install.yaml

Validate the installation,

# kubectl get all -n kyverno



Step 2:

Create a policy that without label "app" in pod it should not deploy in cluster.

#cat policy.yaml

apiversion: kyverno.io/v1
kind: clusterpolicy
metadata:
  name: require-app-label
spec:
  validationfailureaction: enforce
  rules:
  - name: check-for-app-label
    match:
      resources:
        kinds:
        - pod
    validate:
      message: "label `app` is required"
      pattern:
        metadata:
          labels:
            app: "?*"

# kubectl apply -f policy.yaml

Now policy is created, Hereafter if any deployment without label "app" it will not deploy in the cluster.

For more Policies : https://github.com/kyverno/policies/tree/main/best-practices

Step 3:

Create a sample pod deployment without label "app"

#vi nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: application
  labels:
    name: webapp
spec:
  containers:
  - name: webapp
    image: nginx

#  kubectl apply -f nginx.yaml


You can see the pod is not deployed and it is restricted by our policy.

Now add the label app and try it.

# vi nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: application
  labels:
    name: webapp
    app: webapp
spec:
  containers:
  - name: webapp
    image: nginx

# kubectl apply -f nginx.yaml


Now the pod is deployed. Similarly we can create our own custom policies and restrict the deployment in any cluster.

That's all, Kyverno is installed in Kubernetes cluster and tested a policy.


Reference : https://kyverno.io/docs/introduction/


 

In this blog we will see step by step to install Cadvisor, NodeExporter, Prometheus, Grafana to monitor docker containers and its hosts.

Note : We are going to use only docker images for all the tools.

Requirements:

Docker running Linux server :  1

Step 1:

Deploy Cadvisor in docker:

Cadvisor : It provides container resource usage and performance characteristics of their running containers.

Execute the below docker command in linux server,

# docker run -d -p 8080:8080 -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker/:/var/lib/docker:ro --name=cadvisor google/cadvisor:latest

We can access Cadvisor in browser by http://server-IP:8080

Step 2:

Deploy Node-Exporter in docker:

Node-Exporter : It helps to measure various machine resources like as CPU, memory, disk and network utilization.

Execute the below docker command in linux server,

# docker run -d -p 9100:9100 --name=node-exporter prom/node-exporter

We can access Node-Exporter metrics in browser by http://server-IP:9100/metrics


Step 3:

Deploy Prometheus in docker:

To deploy Prometheus, we need to create configuration file for prometheus like below,

#vi /root/config/prometheus.yml

global:

  scrape_interval: 15s

  evaluation_interval: 15s

scrape_configs:

  - job_name: 'prometheus'

    static_configs:

    - targets: ['Host-IP:9090']

      labels:

        alias: 'prometheus'

  - job_name: 'cadvisor'

    static_configs:

    - targets: ['Host-IP:8080']

      labels:

        alias: 'cadvisor'

  - job_name: 'node-exporter'

    static_configs:

    - targets: ['Host-IP:9100']

      labels:

        alias: 'node-exporter'


Save the file.

Here Cadvisor, Node-exporter metrics details are given.

Now run the Prometheus docker command,

# docker run -d -p 9090:9090 -v /root/config/prometheus.yml:/etc/prometheus/prometheus.yml --name=prometheus prom/prometheus

We can access Prometheus metrics in browser by http://server-IP:9090/metrics

We can check the targets are up or not in Prometheus by http://server-IP:9090/targets

Step 4:

Deploy Grafana in docker:

Execute the below docker command in linux server,

# docker run -d -p 3000:3000 --name=grafana grafana/grafana:latest


We can access Grafana in browser by http://server-IP:3000


To login Grafana, the default user name and password is "admin".

Once login to grafana we need to add datasource as Prometheus in grafana.

Go to configuration > datasources > Add data source > Select Prometheus > Give name for datasource and enter the Prometheus URL which we got it from step 3 eg : http://server-IP:9090


Now click Save&Test and it should show like above image after click Save&Test.

Next import the below Grafana dashboard json from the link or you can manually create the dashboard.

https://docs.google.com/document/d/1CDwcNQ_0UuPLlkRDSJvgAtILcKc4DZfl8EueMgg8tY4/edit?usp=sharing

Click "+" icon from left side and then choose "import" and copy and past the above link json inside the box below "Import via panel json" and load it and click "import".



Now open the dashboard and we can see the docker container status and docker host status.


Currently we are running totally 4 docker containers and it is showing correctly in dashboard,



Step 5:

Next we will check the monitoring is working or not by deploying a test web in docker,

Execute the below docker command in linux server,

# docker run -d -p 80:80 --name=tweb yeasy/simple-web:latest


Now wait for few minutes, it will take few minutes to reflect in Grafana dashboard.


Great!!! Now we can see the new docker container is reflecting in Grafana dashboard.

That's all we have successfully deployed Cadvisor, Node-Exporter, Prometheus and Grafana to monitor docker container and docker hosts.



Monitoring and Alerting are the most important words in DevOps world. Today we are going to see how to install Cabot tool for monitoring and alerting. 

Requirements:

1. docker and docker-compose

2.Git tool

3.Graphite server


Steps 1: Clone Cabot repository in local.

git clone https://github.com/cabotapp/docker-cabot

Steps 2: Update the settings based on our needs.

cd docker-cabot/conf

mv production.env.example production.env 

Step 3: Install Cabot via docker-compose

cd docker-cabot

docker-compose up -d

Wait for few minutes until the containers are comes up

Step 4: Login to Cabot portal

URL : http://localhost:5000/


Initially it will ask to setup username and login details.

Step 5:  Setup a simple service and checks.

There are three options to Monitor in Cabot, they are instance, check and service.

"Check" is some particular task you want to run to check something. Checks can be of some predefined types, like:

ping: a ping to a host

HTTP: call an URL and check the HTTP status.

"Instance" is an actual instance of a machine that will have some service running. It will have a IP/hostname.

"Service" is the macro stuff you want to monitor.

Am running a nginx webserver locally, will enable check for that.

After login go to checks Tab and click the "+" icon and rest add similar like below,


After saved the configuration,


Step 6: Test the Alert

Lets stop the nginx webserver and see if we are getting an email.


Successfully received an email.

Ref: https://github.com/cabotapp/docker-cabot





In this blog, we will see how to install Hypertrace for the docker container application to collect distributed tracing and visualize it.

What is Hypertrace: It is a cloud-native distributed tracing based observability platform that gives visibility into any environment distributed system. It converts distributed trace data into relevant insight for everyone.

Hypertrace supports all standard instrumentation libraries and agents. If your application is already instrumented with OpenTelemetry, Jaeger or Zipkin, Hypertrace will work out of the box with your application telemetry data.

Requirements:

- Docker engine & Docker compose

Step 1: Clone and Install hypertrace,

# git clone https://github.com/hypertrace/hypertrace.git 

# cd hypertrace/docker 

# docker-compose pull 

# docker-compose up --force-recreate


Step 2:  Access Hypertrace Dashboard

Once step 1 is completed successfully, We can access the Hypertrace dashboard from the browser.

URL: http://IP of the VM:2020



Step 3: Sample Application test with Hypertrace

The above-cloned repo is having a sample application which is having frontend and backend APIs and it sends data to Zipkin. Let's check that.

# cd hypertrace/docker 

# docker-compose -f docker-compose-zipkin-example.yml up

Once the containers are up, we can check the frontend in the browser by, 

URL: http://IP of the VM:8081



Step 4: View the metrics in Hypertrace

Hit the frontend URL multiple times and see Hypertrace dashboard to see the data.

We can see the list of APIs, Errors, latency, etc. Here are few screenshots.







Here is the list of docker containers that are running at the end,



That's all, Hypertrace is installed successfully in docker, tested with a sample application and validated.

We can deploy Hypertrace in Kubernetes as well and collect the metrics. Refer to the below link
https://docs.hypertrace.org/deployments 

 Ref. : https://github.com/hypertrace/hypertrace

           https://docs.hypertrace.org/

As a DevOps/SRE, We used to write terraform code, Kubernetes Yaml, Dockerfile, etc. In order to make sure our code is healthy, we need to have a tool to get a visibility of any security issues and vulnerabilities.

In this blog, We will see how to use the "checkov" tool to identify vulnerability and issues in terraform script, Dockerfile, and K8s deployment manifest.

For more details about checkov : https://github.com/bridgecrewio/checkov

Requirements:

OS : Linux

Python >= 3.7

Terraform >= 0.12


Checkov Installation:

# pip install checkov

To find the installed version,

# checkov --version

All the list of checks can be view by below command,

# checkov --list

Next, we will experiment with checkov with Terraform Code, K8s Yaml file and Dockerfile.


Check Terraform code with checkov:

Cmd:

# checkov -d path-of-the-Tf-scripts

eg :

# checkov -d /root/terraform-code

Under this terraform-code directory, I have multiple scripts.

In the checkov result, we can see what action needs to take. In the below result we can see 26 checks are failed, so we can validate one by one and fix it.


Check Dockerfile with checkov:

Cmd:

# checkov -f dockerfile-path

eg :

# checkov -f /root/Dockerfile

 In the above screenshot result, we can see 2 checks are failed, so we can validate one by one and fix it.

Check Kubernetes deployment file with checkov:

Cmd:

# checkov -f  Yaml-file-path

eg :

# checkov -f /root/pod.yaml

In the above screenshot result, we can see 20 checks are failed, so we can validate one by one and fix it.

We can skip the checks in the command,

eg : checkov -f /root/Dockerfile --skip-check CKV_AWS_28


That's all, we have installed checkov and tested with some terraform code, dockerfile and K8s yaml file.


Nowadays every organization is using Kubernetes orchestration for Dev, QA, Prod, etc. environments. Today we are going to see a tool called "Octant" which helps all the users to understand their cluster status, view the logs, update the meta data, see the resources utilization, etc. In this blog will cover how to do the installation.

Requirements:

1.K8s cluster

2.Local desktop


Step 1: Installation of Octant,

The installation will be on the local machine not on the cluster.

OS : Linux (Installer are available for Windows and Mac as well)

Octant package to download : https://github.com/vmware-tanzu/octant/releases

Download the linux package in you local and extract it.

https://github.com/vmware-tanzu/octant/releases/download/v0.24.0/octant_0.24.0_Linux-64bit.tar.gz


Step 2: K8s cluster config

Keep your kubernetes cluster config at below path.

/root/.kube/config

By default octant will search the cluster configuration from above path.


Step 3: Start Octant

Go to the extracted path and start it,

cd octant_0.24.0_Linux-64bit

./octant

At the end you will get a message as "Dashboard is available at http://127.0.0.1:7777" it means its successfully started and we can access the dashboard.


Step 4: Access Octant dashboard

Go to the browser and enter http://localhost:7777


You can see your cluster name at the top right, and you can select a namespace next to the cluster name to show its details. Through this we can view metadata and logs, update deployments, and more.

That's all; we have successfully installed the Octant dashboard and viewed the status of the cluster.


Do you want to apply policies to control what changes can happen in a Kubernetes cluster? Kyverno is the right tool to achieve it.

Kyverno is a policy engine for Kubernetes: it lets you define and enforce policies so that cluster users follow a standard mechanism.

In this blog, we will see how to install Kyverno in Kubernetes and define policy.

Requirements:

Kubernetes cluster v1.14 or later

Step 1: Install Kyverno on kubernetes using manifest.

# kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/master/definitions/release/install.yaml

Validate the installation,

# kubectl get all -n kyverno



Step 2:

Create a policy so that a pod without the label "app" cannot be deployed in the cluster.

#cat policy.yaml

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: enforce
  rules:
  - name: check-for-app-label
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "label `app` is required"
      pattern:
        metadata:
          labels:
            app: "?*"

# kubectl apply -f policy.yaml

Now the policy is created. Hereafter, any pod without the label "app" will be rejected by the cluster.

For more Policies : https://github.com/kyverno/policies/tree/main/best-practices

Step 3:

Create a sample pod deployment without label "app"

#vi nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: application
  labels:
    name: webapp
spec:
  containers:
  - name: webapp
    image: nginx

#  kubectl apply -f nginx.yaml


You can see the pod is not deployed and it is restricted by our policy.

Now add the label app and try it.

# vi nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: application
  labels:
    name: webapp
    app: webapp
spec:
  containers:
  - name: webapp
    image: nginx

# kubectl apply -f nginx.yaml


Now the pod is deployed. Similarly, we can create our own custom policies and restrict deployments in any cluster.
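As another illustration of the same pattern, here is a hedged sketch of a custom policy that rejects container images using the mutable "latest" tag. The structure mirrors the policy above and the best-practices examples linked earlier; the policy name and messages are my own:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: enforce
  rules:
  # First rule: every container image must carry an explicit tag
  - name: require-image-tag
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "an explicit image tag is required"
      pattern:
        spec:
          containers:
          - image: "*:*"
  # Second rule: that tag must not be `latest`
  - name: validate-image-tag
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "using the `latest` tag is not allowed"
      pattern:
        spec:
          containers:
          - image: "!*:latest"
```

Two rules are used because a single pattern cannot express both "a tag is present" and "the tag is not latest" at once: an image like "nginx" with no tag at all would otherwise slip past the "!*:latest" check.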

That's all; Kyverno is installed in the Kubernetes cluster and a policy has been tested.


Reference : https://kyverno.io/docs/introduction/


 

In this blog we will see, step by step, how to install Cadvisor, Node-Exporter, Prometheus, and Grafana to monitor Docker containers and their hosts.

Note : We are going to use only docker images for all the tools.

Requirements:

1 Linux server running Docker

Step 1:

Deploy Cadvisor in docker:

Cadvisor : It provides resource usage and performance characteristics of running containers.

Execute the below docker command on the Linux server:

# docker run -d -p 8080:8080 -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker/:/var/lib/docker:ro --name=cadvisor google/cadvisor:latest

We can access Cadvisor in the browser at http://server-IP:8080

Step 2:

Deploy Node-Exporter in docker:

Node-Exporter : It measures various machine resources such as CPU, memory, disk, and network utilization.

Execute the below docker command on the Linux server:

# docker run -d -p 9100:9100 --name=node-exporter prom/node-exporter

We can access the Node-Exporter metrics in the browser at http://server-IP:9100/metrics


Step 3:

Deploy Prometheus in docker:

To deploy Prometheus, we first need to create a configuration file for it, like the one below:

#vi /root/config/prometheus.yml

global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['Host-IP:9090']
      labels:
        alias: 'prometheus'
  - job_name: 'cadvisor'
    static_configs:
    - targets: ['Host-IP:8080']
      labels:
        alias: 'cadvisor'
  - job_name: 'node-exporter'
    static_configs:
    - targets: ['Host-IP:9100']
      labels:
        alias: 'node-exporter'

Save the file.

Here the Prometheus, Cadvisor, and Node-Exporter metrics endpoints are configured as scrape targets.

Now run the Prometheus docker command,

# docker run -d -p 9090:9090 -v /root/config/prometheus.yml:/etc/prometheus/prometheus.yml --name=prometheus prom/prometheus

We can access the Prometheus metrics in the browser at http://server-IP:9090/metrics

We can check whether the targets are up in Prometheus at http://server-IP:9090/targets

Step 4:

Deploy Grafana in docker:

Execute the below docker command on the Linux server:

# docker run -d -p 3000:3000 --name=grafana grafana/grafana:latest


We can access Grafana in the browser at http://server-IP:3000


To log in to Grafana, the default username and password are both "admin".

Once logged in to Grafana, we need to add Prometheus as a data source.

Go to Configuration > Data sources > Add data source > select Prometheus, give the data source a name, and enter the Prometheus URL from Step 3, e.g. http://server-IP:9090
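The same data source can also be provisioned from a file instead of through the UI, which is handy when the Grafana container is recreated often. A hedged sketch using Grafana's data source provisioning format (the file would be mounted into the container under /etc/grafana/provisioning/datasources/; the name and URL are illustrative):

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy              # Grafana proxies queries to Prometheus
    url: http://server-IP:9090 # same URL as in Step 3
    isDefault: true
```

With this file mounted, the data source appears automatically on startup and the manual "Add data source" step can be skipped.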


Now click "Save & Test"; it should show a success message as in the above image.

Next, import the Grafana dashboard JSON from the link below, or manually create the dashboard.

https://docs.google.com/document/d/1CDwcNQ_0UuPLlkRDSJvgAtILcKc4DZfl8EueMgg8tY4/edit?usp=sharing

Click the "+" icon on the left side, choose "Import", copy and paste the JSON from the above link into the box below "Import via panel json", load it, and click "Import".



Now open the dashboard and we can see the docker container status and docker host status.


Currently we are running a total of 4 Docker containers, and this is showing correctly in the dashboard.



Step 5:

Next we will check whether the monitoring is working by deploying a test web app in Docker.

Execute the below docker command on the Linux server:

# docker run -d -p 80:80 --name=tweb yeasy/simple-web:latest


Now wait a few minutes for it to reflect in the Grafana dashboard.


Great! Now we can see the new Docker container reflected in the Grafana dashboard.

That's all; we have successfully deployed Cadvisor, Node-Exporter, Prometheus, and Grafana to monitor Docker containers and Docker hosts.
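The four docker run commands above can also be expressed as a single Compose file, so the whole stack starts and stops together. A hedged sketch (the images, ports, and config path are the ones used in this post; treat it as a starting point rather than a drop-in replacement):

```yaml
# docker-compose.yml: the monitoring stack from Steps 1-4 in one file
version: "3"
services:
  cadvisor:
    image: google/cadvisor:latest
    ports: ["8080:8080"]
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  node-exporter:
    image: prom/node-exporter
    ports: ["9100:9100"]
  prometheus:
    image: prom/prometheus
    ports: ["9090:9090"]
    volumes:
      - /root/config/prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana:latest
    ports: ["3000:3000"]
```

Run "docker-compose up -d" in the directory containing this file to bring up all four services at once; the prometheus.yml from Step 3 is still required on the host.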
