

Today we will see a new tool called "Popeye", which helps find misconfigured resources and ensures best practices are in place for the Kubernetes cluster.

Popeye - It's a utility that scans K8s clusters and reports potential issues in deployed resources and configurations.

Note: This is a read-only tool; it will not make any changes to the K8s cluster.

In this blog, we will see how to install and use this tool.

Requirements:

1. K8s cluster

2. Linux VM

Step 1: Install the Popeye tool

Use the command below to install it on a MacBook,

brew install derailed/popeye/popeye

For other operating systems, use the link below to install it.



You can also install it with "krew" using the command below,

kubectl krew install popeye

Step 2: Run the Popeye tool to scan the Kubernetes cluster,

Note: The Popeye CLI works like the kubectl command, so make sure you have the kubeconfig locally to connect to the cluster.

By default, the following command scans all nodes and namespaces,

popeye





In the above output, you can see the overall status of the cluster and its configurations, and it gives a score at the end. The current score is 87% with a B grade. To improve the score, we need to work through the recommended suggestions.

If you need to scan only a specific namespace or resource type, you can use the commands below.

For a specific namespace,

popeye -n devopsart

For a specific resource type, such as ConfigMaps,

popeye -n devopsart -s configmap

For Deployments,

popeye -n devopsart -s deploy

Step 3: Generate an HTML report and save it locally

To save the report to the current directory, use the command below,

POPEYE_REPORT_DIR=$(pwd) popeye --save

Run whichever Popeye command you need with these settings, and the scan results will be saved in the current directory.

To save the report as HTML, use the command below,

POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html

Then run the required Popeye command and open the report.html file in a browser.
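These flags can be combined. As a minimal sketch (assuming the devopsart namespace used above), the following scopes the scan to the Deployments in one namespace and writes an HTML report to the current directory:

# Scope the scan to one namespace and resource type, and save an HTML report
POPEYE_REPORT_DIR=$(pwd) popeye -n devopsart -s deploy --save --out html --output-file report.html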


That's all, we have successfully installed the "Popeye" tool and validated it with the K8s cluster. This helps to improve our K8s cluster configuration and make our cluster more stable.




Today we will see the step-by-step installation and a demonstration of an open-source tool called "Nova", which checks for outdated or deprecated release versions in the Kubernetes cluster.

Nova: An open-source tool that scans the components deployed via Helm charts in your Kubernetes cluster and compares each currently deployed version with the latest version available in the Helm repositories.

Requirements :

1. Kubernetes cluster

2. Helm in terminal

3. Helm repository (I will use the Bitnami repo)

4. Golang in terminal

5. kubectl in terminal

6. Any machine that can connect to the K8s cluster (mine is a MacBook)


Step 1: Installation of Nova

Execute the commands below to install it on a MacBook,

brew tap fairwindsops/tap


brew install fairwindsops/tap/nova


For other operating systems, check the link below,

https://nova.docs.fairwinds.com/installation


Now fetch and install the required packages for Nova with the command below,


go get github.com/fairwindsops/nova


Step 2: How to use Nova.


Make sure you are able to connect to the K8s cluster from the machine where you installed the Nova tool.
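As a quick sanity check (a minimal sketch; assumes kubectl and helm are already pointed at this cluster), you can confirm connectivity and list the current Helm releases:

kubectl get nodes

helm list --all-namespaces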












As you can see in the above screenshot, there are no Helm charts installed. Let's add the Bitnami repo, install an older version of the Nginx web server, and then check Nova's findings.


helm repo add bitnami https://charts.bitnami.com/bitnami

helm repo list

helm search repo nginx -l | head -n10




Next, install an older version of Nginx,


helm install nginx-web bitnami/nginx --version=12.0.5





We have now installed Nginx chart version 12.0.5 via Helm. Let's run the Nova command.

The command below reports the installed versions and flags anything outdated or deprecated.


nova find




From the above image, you can see the latest version and installed version details.
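Once Nova flags the release as outdated, one way to act on it (a hedged sketch using the release and chart names from above) is to refresh the repo and upgrade to the latest chart version:

helm repo update

helm upgrade nginx-web bitnami/nginx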


The command below gives some more details, such as the namespace, Helm version, etc.,


nova find --wide




The command below lists container image versions that are outdated in the cluster,


nova find --containers




That's all, we have successfully installed the Nova tool and validated the deployed versions.




Today we will see a tool called "Polaris" which helps to keep your Kubernetes cluster running perfectly using best practices without any issues.

Requirements :

1. Kubernetes(K8s) cluster

2. A machine(mine is Mac) to install Polaris and have access to the cluster


Step 1: Install Polaris

Execute the following commands in the terminal,

brew tap reactiveops/tap


brew install reactiveops/tap/polaris


polaris dashboard --port 8080

Make sure you are able to access the K8s cluster from the machine where you installed Polaris.

To install it via Helm charts instead, use the commands below,

helm repo add fairwinds-stable https://charts.fairwinds.com/stable


helm upgrade --install polaris fairwinds-stable/polaris --namespace polaris --create-namespace


kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80


Step 2: Polaris Dashboard

Next, open http://127.0.0.1:8080 in the browser.

The overview gives you the following details,

  • Grade
  • Score
  • Passed checks
  • Warning
  • Critical/Dangerous
  • K8s version, number of namespaces, pods, etc.
If you scroll down, it gives much more detail about each deployment and its open items. For example, I have deployed "Grafana" in the K8s cluster; see its status below,


In the above image, you can see how many critical and warning items there are for the Grafana deployment. Next, we need to fix them one by one and apply the best practices so the K8s cluster runs smoothly.

We can create our own custom checks; the details are here,
https://polaris.docs.fairwinds.com/customization/custom-checks/#basic-example
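As a rough sketch of what a custom check looks like (adapted from the basic example in the linked docs; the registry pattern here is purely illustrative), you add it to the Polaris config as a JSON Schema applied to a target:

checks:
  imageRegistry: warning

customChecks:
  imageRegistry:
    successMessage: Image comes from an allowed registry
    failureMessage: Image should not come from a disallowed registry
    category: Images
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          not:
            pattern: ^quay.io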

Step 3: Polaris Command line checks

We can run the checks from the command line as well.

For example, I am using the nginx deployment file below to check with the Polaris CLI and see how many open items there are.

https://github.com/Kurento/Kubernetes/blob/master/nginx-deployment-service.yaml

The file is named nginx.yaml and copied into the devopsart folder.

Run the Polaris audit locally with the command below,


polaris audit --audit-path ./devopsart --format=pretty



If we want Polaris to fix the issues, running the command below will fix all the items it can (here, for all checks).

polaris fix --files-path ./devopsart/ --checks=all

We can also run it in a CI pipeline; the details are available here,

https://polaris.docs.fairwinds.com/infrastructure-as-code/#running-in-a-ci-pipeline
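A minimal sketch of a CI step (GitHub Actions syntax assumed; the exit-code flags are the ones described in the linked docs):

- name: Polaris audit
  run: |
    polaris audit --audit-path ./devopsart --set-exit-code-on-danger --set-exit-code-below-score 90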


That's all, we have installed the Polaris tool and successfully checked the critical and warning items.





Monitoring and alerting are two of the most important words in the DevOps world. Today we are going to see how to install the Cabot tool for monitoring and alerting.

Requirements:

1. docker and docker-compose

2. Git

3. Graphite server


Step 1: Clone the Cabot repository locally.

git clone https://github.com/cabotapp/docker-cabot

Step 2: Update the settings based on our needs.

cd docker-cabot/conf

mv production.env.example production.env 

Step 3: Install Cabot via docker-compose

cd docker-cabot

docker-compose up -d

Wait a few minutes until the containers come up.
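To confirm everything started (a minimal sketch using standard docker-compose commands):

docker-compose ps

docker-compose logs -f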

Step 4: Login to Cabot portal

URL : http://localhost:5000/


Initially, it will ask you to set up a username and login details.

Step 5: Set up a simple service and checks.

There are three building blocks to monitor in Cabot: instance, check, and service.

"Check" is a particular task you run to verify something. Checks come in predefined types, such as:

ping: a ping to a host.

HTTP: call a URL and check the HTTP status.

"Instance" is an actual machine that runs some service; it has an IP/hostname.

"Service" is the higher-level thing you want to monitor.

I am running an nginx web server locally and will enable a check for it.

After logging in, go to the Checks tab, click the "+" icon, and fill in the rest as shown below,


After saving the configuration,


Step 6: Test the Alert

Let's stop the nginx web server and see if we receive an email.


Successfully received an email.

Ref: https://github.com/cabotapp/docker-cabot





In this blog, we will see how to install Hypertrace for the docker container application to collect distributed tracing and visualize it.

What is Hypertrace: It is a cloud-native, distributed-tracing-based observability platform that gives visibility into distributed systems in any environment. It converts distributed trace data into relevant insights for everyone.

Hypertrace supports all standard instrumentation libraries and agents. If your application is already instrumented with OpenTelemetry, Jaeger or Zipkin, Hypertrace will work out of the box with your application telemetry data.

Requirements:

- Docker engine & Docker compose

Step 1: Clone and Install hypertrace,

# git clone https://github.com/hypertrace/hypertrace.git 

# cd hypertrace/docker 

# docker-compose pull 

# docker-compose up --force-recreate


Step 2:  Access Hypertrace Dashboard

Once step 1 is completed successfully, We can access the Hypertrace dashboard from the browser.

URL: http://<IP of the VM>:2020



Step 3: Sample Application test with Hypertrace

The cloned repo includes a sample application with frontend and backend APIs that sends trace data to Zipkin. Let's check it.

# cd hypertrace/docker 

# docker-compose -f docker-compose-zipkin-example.yml up

Once the containers are up, we can check the frontend in the browser by, 

URL: http://<IP of the VM>:8081



Step 4: View the metrics in Hypertrace

Hit the frontend URL multiple times and then check the Hypertrace dashboard to see the data.
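To generate some quick traffic, a minimal sketch (replace <IP of the VM> with your host's address):

for i in $(seq 1 20); do curl -s -o /dev/null http://<IP of the VM>:8081; done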

We can see the list of APIs, errors, latency, etc. Here are a few screenshots.







Here is the list of Docker containers running at the end,



That's all, Hypertrace is installed successfully in Docker, tested with a sample application, and validated.

We can deploy Hypertrace in Kubernetes as well and collect the metrics. Refer to the link below:
https://docs.hypertrace.org/deployments 

Ref: https://github.com/hypertrace/hypertrace

https://docs.hypertrace.org/

As DevOps/SRE engineers, we regularly write Terraform code, Kubernetes YAML, Dockerfiles, etc. To make sure our code is healthy, we need a tool that gives us visibility into any security issues and vulnerabilities.

In this blog, We will see how to use the "checkov" tool to identify vulnerability and issues in terraform script, Dockerfile, and K8s deployment manifest.

For more details about checkov : https://github.com/bridgecrewio/checkov

Requirements:

OS : Linux

Python >= 3.7

Terraform >= 0.12


Checkov Installation:

# pip install checkov

To find the installed version,

# checkov --version

The full list of checks can be viewed with the command below,

# checkov --list

Next, we will experiment with checkov on Terraform code, a K8s YAML file, and a Dockerfile.


Check Terraform code with checkov:

Cmd:

# checkov -d path-of-the-Tf-scripts

eg :

# checkov -d /root/terraform-code

Under this terraform-code directory, I have multiple scripts.

In the checkov result, we can see what action needs to be taken. In the result below, 26 checks failed, so we can validate and fix them one by one.


Check Dockerfile with checkov:

Cmd:

# checkov -f dockerfile-path

eg :

# checkov -f /root/Dockerfile

In the above screenshot, we can see that 2 checks failed, so we can validate and fix them one by one.

Check Kubernetes deployment file with checkov:

Cmd:

# checkov -f  Yaml-file-path

eg :

# checkov -f /root/pod.yaml

In the above screenshot, we can see that 20 checks failed, so we can validate and fix them one by one.

We can also skip specific checks on the command line,

eg : checkov -f /root/Dockerfile --skip-check CKV_AWS_28
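Checkov also supports inline suppressions inside the scanned files themselves. A hedged sketch for a Terraform resource (the check ID and reason here are illustrative):

resource "aws_s3_bucket" "demo" {
  # checkov:skip=CKV_AWS_18: access logging not required for this demo bucket
  bucket = "devopsart-demo"
}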


That's all, we have installed checkov and tested it with some Terraform code, a Dockerfile, and a K8s YAML file.


Nowadays every organization uses Kubernetes orchestration for Dev, QA, Prod, and other environments. Today we are going to see a tool called "Octant", which helps users understand their cluster status, view logs, update metadata, see resource utilization, etc. This blog covers how to install it.

Requirements:

1. K8s cluster

2. Local desktop


Step 1: Installation of Octant,

The installation will be on the local machine not on the cluster.

OS : Linux (installers are available for Windows and Mac as well)

Octant package to download : https://github.com/vmware-tanzu/octant/releases

Download the Linux package to your local machine and extract it.

https://github.com/vmware-tanzu/octant/releases/download/v0.24.0/octant_0.24.0_Linux-64bit.tar.gz
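A minimal sketch of the download and extraction (version v0.24.0 taken from the release link above):

curl -LO https://github.com/vmware-tanzu/octant/releases/download/v0.24.0/octant_0.24.0_Linux-64bit.tar.gz

tar -xzf octant_0.24.0_Linux-64bit.tar.gz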


Step 2: K8s cluster config

Keep your Kubernetes cluster config at the path below.

/root/.kube/config

By default, Octant looks for the cluster configuration at the above path.


Step 3: Start Octant

Go to the extracted path and start it,

cd octant_0.24.0_Linux-64bit

./octant

At the end you will get a message saying "Dashboard is available at http://127.0.0.1:7777", which means it started successfully and we can access the dashboard.


Step 4: Access Octant dashboard

Go to the browser and open http://localhost:7777


You can see your cluster name at the top right, and you can select a namespace next to the cluster name to show its full details. Through this we can view metadata and logs, update deployments, etc.

That's all, we have successfully installed the Octant dashboard and viewed the status of the cluster.


Do you want to apply policies to control what changes are allowed in a Kubernetes cluster? Kyverno is the right tool to achieve that.

Kyverno - It's a policy engine for Kubernetes that lets you define and enforce policies so that cluster users follow a standard mechanism.

In this blog, we will see how to install Kyverno in Kubernetes and define policy.

Requirements:

Kubernetes cluster v1.14 or later

Step 1: Install Kyverno on Kubernetes using the manifest.

# kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/master/definitions/release/install.yaml

Validate the installation,

# kubectl get all -n kyverno



Step 2:

Create a policy so that a pod without the label "app" cannot be deployed in the cluster.

# cat policy.yaml

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-app-label
spec:
  validationFailureAction: enforce
  rules:
  - name: check-for-app-label
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "label `app` is required"
      pattern:
        metadata:
          labels:
            app: "?*"

# kubectl apply -f policy.yaml

Now the policy is created. From now on, any pod without the label "app" will not be deployed in the cluster.
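To confirm the policy was registered (a hedged sketch; Kyverno exposes policies as a ClusterPolicy custom resource):

# kubectl get clusterpolicy require-app-label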

For more Policies : https://github.com/kyverno/policies/tree/main/best-practices

Step 3:

Create a sample pod deployment without the label "app",

# vi nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: application
  labels:
    name: webapp
spec:
  containers:
  - name: webapp
    image: nginx

#  kubectl apply -f nginx.yaml


You can see the pod is not deployed and it is restricted by our policy.

Now add the app label and try again.

# vi nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp
  namespace: application
  labels:
    name: webapp
    app: webapp
spec:
  containers:
  - name: webapp
    image: nginx

# kubectl apply -f nginx.yaml


Now the pod is deployed. Similarly, we can create our own custom policies and restrict deployments in any cluster.

That's all, Kyverno is installed in the Kubernetes cluster and we have tested a policy.


Reference : https://kyverno.io/docs/introduction/

