

If you are using Kubernetes, you will definitely know about Helm charts. Helm is a deployment and package-management tool that helps us deploy, upgrade, and manage applications in a Kubernetes cluster.

Recently Komodor announced an open-source dashboard for Helm. Today we will see how to install and use it.

Requirements :

1. Kubernetes cluster

2. Helm

Steps :

Step 1: Installation

Step 1.1: Overview of my existing cluster setup:

I am running minikube version 1.26.0 and Helm version 3.9.2, and I will use this setup for the installation.

Step 1.2: Install the helm dashboard plugin,

Execute the below command on the machine where Helm is installed,

# helm plugin install https://github.com/komodorio/helm-dashboard.git

Then execute the below command to start the helm dashboard,

# helm dashboard

If port 8080 is already in use, you can change it with the "HD_PORT" environment variable.

If you want to run it in debug mode, set DEBUG=1 as an environment variable.
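For example, assuming standard shell syntax for one-off environment variables, both options can be set on a single line (the port value 9090 is just an example):

# HD_PORT=9090 DEBUG=1 helm dashboard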

By default, the helm dashboard also checks whether the Checkov and Trivy plugins are available and uses these tools for scanning purposes.

Step 2:  Access the helm dashboard,

Go to the browser and access the dashboard, http://localhost:8080

Now we can see the applications already installed through Helm, the same ones we saw in step 1 using helm commands.

We can see the list of helm repositories from the UI,

Whatever we can do with the helm command, we can now do from the UI itself: view the existing manifests, upgrade, uninstall, and so on.

We can install the application from the available helm repositories from the UI.

By default, the dashboard detects the Checkov and Trivy scanners and uses them to scan the manifests during deployment.

That's all, the helm dashboard is installed successfully and we are able to view the deployments.



Today we will see a new tool called "Popeye" which helps find misconfigured resources and helps us ensure best practices are in place for the Kubernetes cluster.

Popeye - It's a utility which scans K8s clusters and reports potential issues in deployed resources and configurations.

Note:  This is a read-only tool, it will not make any changes in the K8s cluster.

In this blog, we will see how to install and use this tool.

Requirements:

1. K8s cluster

2. Linux VM

Step 1: Install the Popeye tool

Use the below command to install it on a Mac with Homebrew,

brew install derailed/popeye/popeye

For other operating systems, refer to the installation instructions in the Popeye repository: https://github.com/derailed/popeye



You can also install it with "krew" using the below command,

kubectl krew install popeye

Step 2: Run the Popeye tool to scan the Kubernetes cluster,

Note: The Popeye CLI works like kubectl, so make sure you have a local kubeconfig that can connect to the cluster.
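Because Popeye reuses the standard kubectl connection flags, you can point it at a specific kubeconfig or context if you manage several clusters (the context name below is just an example):

popeye --kubeconfig ~/.kube/config --context minikube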

The below command scans all nodes and namespaces by default,

popeye





In the above output, you can see the overall status of the cluster and its configurations, along with a score at the end. The current score is 87% with a B rank. To improve the score, we need to work through the recommended suggestions.

If you need to scan only a specific namespace or resource type, you can use the below commands,

For the specific namespace,

popeye -n devopsart

For a specific resource type such as ConfigMaps,

popeye -n devopsart -s configmap

For specific deployments,

popeye -n devopsart -s deploy 

Step 3: Generate an HTML report and save it locally

To save the report in the current directory use the below command,

POPEYE_REPORT_DIR=$(pwd) popeye --save

The scan report will be saved in the current directory.

To save the report in HTML use the below command,

POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html

Then open the generated report.html file in the browser.
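The flags can also be combined, for example to scan only the devopsart namespace and save the result as an HTML report in the current directory (same namespace as used above):

POPEYE_REPORT_DIR=$(pwd) popeye -n devopsart --save --out html --output-file report.html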


That's all, we have successfully installed the "Popeye" tool and validated it against the K8s cluster. This helps improve our K8s cluster configuration and makes the cluster more stable.




Today we will see a step-by-step installation and demonstration of an open-source tool called "Nova" that checks for outdated or deprecated Helm release versions in the Kubernetes cluster.

Nova: an open-source tool that scans the components deployed via Helm charts in your Kubernetes cluster and compares the currently deployed version against the latest version available in the Helm repositories.

Requirements :

1. Kubernetes cluster

2. Helm in terminal

3. Helm repository(I will use Bitnami repo)

4. Golang in terminal

5. kubectl in terminal

6. Any machine that can connect to the K8s cluster (mine is a MacBook)


Step 1: Installation of Nova

Execute the below commands to install it on a Mac,

brew tap fairwindsops/tap


brew install fairwindsops/tap/nova


You can check the below link for other OS,

https://nova.docs.fairwinds.com/installation


Alternatively, Nova can also be installed from source using Go with the below command,


go get github.com/fairwindsops/nova


Step 2: How to use Nova.


Make sure you can connect to the K8s cluster from the machine where you installed the Nova tool.
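A quick sanity check that the cluster is reachable (plain kubectl commands, nothing Nova-specific):

kubectl config current-context

kubectl get nodes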












As the screenshot above shows, there are no helm charts installed yet. Let's add the Bitnami repo, install an older version of the Nginx web server, and then see what Nova finds.


helm repo add bitnami https://charts.bitnami.com/bitnami

helm repo list

helm search repo nginx -l|head -n10




Next, install an older version of Nginx,


helm install nginx-web bitnami/nginx --version=12.0.5





Now that we have installed Nginx chart version 12.0.5 via Helm, let's run the nova command.
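To confirm what Nova will inspect, you can first list the release and its chart version with Helm (nginx-web is the release name used above):

helm list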

The below command shows the installed versions and whether they are outdated or deprecated.


nova find




From the above image, you can see the latest version and installed version details.


The below command gives some more details such as the namespace, Helm version, etc.


nova find --wide




The below command lists container image versions that are outdated in the cluster,


nova find --containers




That's all, we have successfully installed the Nova tool and validated the deployed versions.




Today we will see a tool called "Polaris" which helps keep your Kubernetes cluster running smoothly by checking workloads against best practices.

Requirements :

1. Kubernetes(K8s) cluster

2. A machine(mine is Mac) to install Polaris and have access to the cluster


Step 1: Install Polaris

Execute the following commands in the terminal,

brew tap reactiveops/tap


brew install reactiveops/tap/polaris


polaris dashboard --port 8080

Make sure you are able to access the K8s cluster from the machine where you installed Polaris.

To install it via Helm instead, use the below commands,

helm repo add fairwinds-stable https://charts.fairwinds.com/stable


helm upgrade --install polaris fairwinds-stable/polaris --namespace polaris --create-namespace


kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80


Step 2: Polaris Dashboard

Next, go to the browser using http://127.0.0.1:8080

The overview gives you the following details,

  • Grade
  • Score
  • Passed checks
  • Warning
  • Critical/Dangerous
  • K8s version, number of namespaces, pods, etc.

If you scroll down, it gives much more detail about each deployment and its open items. For example, I have deployed "Grafana" in the K8s cluster; its status is shown below,


In the above image, you can see how many critical items and warnings there are for the Grafana deployment. Next, we need to fix them one by one and apply the best practices so that the K8s cluster runs smoothly.

We can create our own custom checks and the details are here,
https://polaris.docs.fairwinds.com/customization/custom-checks/#basic-example

Step 3: Polaris Command line checks

We can run the checks from the command line as well.

For example, I am using the below nginx deployment file to check with the Polaris CLI and see how many open items there are.

https://github.com/Kurento/Kubernetes/blob/master/nginx-deployment-service.yaml

The file is named nginx.yaml and is copied into the devopsart folder.

Below is the Polaris command to run it locally,


polaris audit --audit-path ./devopsart --format=pretty



If we want to fix the issues automatically, running the below command will apply fixes for all the checks it can remediate.

polaris fix --files-path ./devopsart/ --checks=all

We can also run it in a CI pipeline; the details are available here,

https://polaris.docs.fairwinds.com/infrastructure-as-code/#running-in-a-ci-pipeline
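For example, the audit command exposes exit-code flags that can fail a pipeline step when serious issues are found (flags as documented by Polaris; the score threshold of 90 is just an example):

polaris audit --audit-path ./devopsart --format=pretty --set-exit-code-on-danger --set-exit-code-below-score 90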


That's all, we have installed the Polaris tool and successfully checked the critical and warning items.





Monitoring and alerting are among the most important words in the DevOps world. Today we are going to see how to install the Cabot tool for monitoring and alerting.

Requirements:

1. docker and docker-compose

2. Git tool

3. Graphite server


Step 1: Clone the Cabot repository locally.

git clone https://github.com/cabotapp/docker-cabot

Step 2: Update the settings in production.env based on your needs (an example is sketched after the commands below).

cd docker-cabot/conf

mv production.env.example production.env 
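Inside production.env you mainly point Cabot at your Graphite server and configure how alert emails are sent. A minimal sketch, assuming variable names from the Cabot documentation (example values only; check your copy of the file, as the names can differ between versions):

# Graphite endpoint Cabot queries for metric checks (assumed variable name)
GRAPHITE_API=http://graphite.example.com/
# Sender address for alert emails (assumed variable name)
CABOT_FROM_EMAIL=cabot@example.com
# Secret key for the Django app behind Cabot (assumed variable name)
DJANGO_SECRET_KEY=change-me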

Step 3: Install Cabot via docker-compose

cd docker-cabot

docker-compose up -d

Wait a few minutes until the containers come up.

Step 4: Log in to the Cabot portal

URL : http://localhost:5000/


On first access it will ask you to set up a username and login details.

Step 5: Set up a simple service and checks.

There are three building blocks for monitoring in Cabot: instance, check, and service.

"Check" is a particular task you want to run to verify something. Checks come in predefined types, such as:

ping: a ping to a host

HTTP: call a URL and check the HTTP status.

"Instance" is an actual machine that runs some service. It will have an IP/hostname.

"Service" is the higher-level thing you want to monitor.

I am running an nginx web server locally and will enable a check for it.

After logging in, go to the Checks tab, click the "+" icon, and fill in the rest similar to the example below,


After saving the configuration,


Step 6: Test the Alert

Let's stop the nginx web server and see whether we receive an email.


Successfully received an email.

Ref: https://github.com/cabotapp/docker-cabot





In this blog, we will see how to install Hypertrace for a Docker container application to collect distributed traces and visualize them.

What is Hypertrace: It is a cloud-native, distributed-tracing-based observability platform that gives visibility into distributed systems in any environment. It converts distributed trace data into relevant insights for everyone.

Hypertrace supports all standard instrumentation libraries and agents. If your application is already instrumented with OpenTelemetry, Jaeger or Zipkin, Hypertrace will work out of the box with your application telemetry data.

Requirements:

- Docker engine & Docker compose

Step 1: Clone and Install hypertrace,

# git clone https://github.com/hypertrace/hypertrace.git 

# cd hypertrace/docker 

# docker-compose pull 

# docker-compose up --force-recreate


Step 2:  Access Hypertrace Dashboard

Once step 1 is completed successfully, We can access the Hypertrace dashboard from the browser.

URL: http://<VM-IP>:2020



Step 3: Sample Application test with Hypertrace

The cloned repo includes a sample application with frontend and backend APIs that sends trace data to Zipkin. Let's check that.

# cd hypertrace/docker 

# docker-compose -f docker-compose-zipkin-example.yml up

Once the containers are up, we can open the frontend in the browser at,

URL: http://<VM-IP>:8081



Step 4: View the metrics in Hypertrace

Hit the frontend URL multiple times and then check the Hypertrace dashboard to see the data.
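To generate traffic quickly, the frontend can also be hit from a small shell loop instead of refreshing the browser (replace <VM-IP> with your host):

# for i in $(seq 1 20); do curl -s -o /dev/null http://<VM-IP>:8081/; done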

We can see the list of APIs, errors, latency, etc. Here are a few screenshots.







Here is the list of docker containers that are running at the end,



That's all, Hypertrace is installed successfully in docker, tested with a sample application and validated.

We can deploy Hypertrace in Kubernetes as well and collect the metrics. Refer to the below link
https://docs.hypertrace.org/deployments 

Ref: https://github.com/hypertrace/hypertrace

https://docs.hypertrace.org/

As DevOps/SRE engineers, we regularly write Terraform code, Kubernetes YAML, Dockerfiles, etc. To make sure our code is healthy, we need a tool that gives us visibility into any security issues and vulnerabilities.

In this blog, we will see how to use the "checkov" tool to identify vulnerabilities and issues in Terraform scripts, Dockerfiles, and K8s deployment manifests.

For more details about checkov : https://github.com/bridgecrewio/checkov

Requirements:

OS : Linux

Python >= 3.7

Terraform >= 0.12


Checkov Installation:

# pip install checkov

To find the installed version,

# checkov --version

The full list of checks can be viewed with the below command,

# checkov --list

Next, we will experiment with checkov on Terraform code, a K8s YAML file, and a Dockerfile.


Check Terraform code with checkov:

Cmd:

# checkov -d path-of-the-Tf-scripts

eg :

# checkov -d /root/terraform-code

Under this terraform-code directory, I have multiple scripts.

In the checkov result, we can see what action needs to be taken. In the below result, 26 checks failed, so we can go through them one by one and fix them.


Check Dockerfile with checkov:

Cmd:

# checkov -f dockerfile-path

eg :

# checkov -f /root/Dockerfile

In the above screenshot result, we can see 2 checks failed, so we can go through them one by one and fix them.

Check Kubernetes deployment file with checkov:

Cmd:

# checkov -f  Yaml-file-path

eg :

# checkov -f /root/pod.yaml

In the above screenshot result, we can see 20 checks failed, so we can go through them one by one and fix them.

We can also skip specific checks in the command,

eg : checkov -f /root/Dockerfile --skip-check CKV_AWS_28
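Checkov also works well in CI pipelines. As a sketch using documented flags, the scan can be limited to one framework and the output written as JUnit XML for the pipeline to consume (paths are the same as above):

# checkov -d /root/terraform-code --framework terraform --quiet -o junitxml > checkov_report.xml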


That's all, we have installed checkov and tested it with some Terraform code, a Dockerfile, and a K8s YAML file.


