Terrakube - An opensource Terraform UI tool overview


In this blog, we will cover the installation, configuration, and validation of the Terrakube tool.

Terrakube is an open-source, full replacement for Terraform Enterprise/Terraform Cloud.

Requirements:

Docker & Docker Compose

AWS/Azure/GCP account


Step 1: Install Terrakube with Docker

We can install Terrakube in a Kubernetes cluster as well, but I am following the docker-compose method here. The official documentation, https://docs.terrakube.org/getting-started/deployment, provides guidance for installing it in Kubernetes.

Clone the below git repo and move into its docker-compose directory,

git clone https://github.com/AzBuilder/terrakube.git

cd terrakube/docker-compose

Whether you use AWS, Azure, or GCP, we need to update the below values according to the cloud provider.

By default, an AWS storage configuration is present; we need to update it to match our environment.

I am using Azure for this experiment, so here is the configuration.

Open the api.env file, comment out all the AWS settings, and add the below two lines,

AzureAccountName=tfdevopsart   (Storage account name of the TF backend)

AzureAccountKey=XXXXXX         (Storage account Key)
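After the edit, the storage section of api.env looks roughly like the below (a sketch only; the exact AWS variable names can differ between Terrakube versions, and the Azure values are placeholders):

# AWS storage settings commented out, e.g.:
# AwsStorageAccessKey=...
# AwsStorageSecretKey=...
# AwsStorageBucketName=...

AzureAccountName=tfdevopsart
AzureAccountKey=XXXXXX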

Next, change the minio_data volume driver to local in docker-compose.yaml, from


volumes:

  minio_data:

    driver: bridge


to


volumes:

  minio_data:

    driver: local

Next, run the below command to bring up the docker containers,

docker-compose up -d

Wait for 3 to 5 minutes for all the containers to be up and running.

Execute the below command to check the status of all the containers,

docker ps -a


Once all the containers are up and running, we can access the Terrakube web UI.

Step 2: Accessing Terrakube UI

Add the below entries to the hosts file of the local machine where Docker is running,


127.0.0.1 terrakube-api

127.0.0.1 terrakube-ui

127.0.0.1 terrakube-executor

127.0.0.1 terrakube-dex

127.0.0.1 terrakube-registry


For Linux, the file path is /etc/hosts.
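On Linux, the entries can be appended in one shot (a convenience sketch; needs root privileges):

for h in terrakube-api terrakube-ui terrakube-executor terrakube-dex terrakube-registry; do
  echo "127.0.0.1 $h" | sudo tee -a /etc/hosts
done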

Now try to access the UI at http://terrakube-ui:3000

The default admin credentials are,

User: admin@example.com, Password: admin

Once logged in, grant access when prompted, and it will take you to the homepage.

Here, I am using Azure, so I am selecting Azure workspace.

Once we select "Azure", it will show the default modules which are available.


Next, we need to create a workspace by selecting "New Workspace" with our Terraform script. We need to provide details of our Terraform script repository and branch, which will be used to provision the resources.


Here, select "Version control flow" for repository-based infra provisioning.

Test repo link: https://github.com/prabhu87/azure-tf-storageaccount.git




Then submit "Create workspace".

Next, click "Start Job" and choose "Plan" (equivalent to terraform plan). There are options to choose "Plan and apply" and "Destroy" as well.

The job runs and helps us understand what changes are going to happen in the given infra.

In my repo, I have a simple TF script that creates a storage account in Azure.
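For reference, a minimal sketch of such a script (using the azurerm provider; the resource group and storage account names here are illustrative, not the actual ones from my repo):

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "devopsart-rg"
  location = "eastus"
}

# Storage account names must be globally unique, 3-24 lowercase alphanumeric characters
resource "azurerm_storage_account" "sa" {
  name                     = "devopsartstore01"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}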

If we expand the job, we can see the TF logs.

Next, we will run the "Plan and apply" option and then check whether the storage account was created on the Azure end.

Go to Azure and check whether the storage account is created,


The storage account is created successfully via Terrakube.

There are multiple other options available in this tool,

- We can schedule the script to run

- We can set environment variables

- We can check the state file details of each successful execution

- We can set execution based on approvals

- There are multiple plugins available to customise the Terrakube workflow.


That's all. We successfully installed and configured Terrakube and created a PaaS resource with it.


An overview of Kubescape - Kubernetes security and compliance Tool

In this blog, we will see how to install Kubescape and how to identify security issues and best-practice gaps in the Kubernetes cluster.

Kubescape is a security and compliance tool for Kubernetes; it helps with risk analysis, security compliance, and finding misconfigurations in the Kubernetes cluster.

Requirements,

1. Kubernetes cluster

2. kubectl

Step 1: Install kubescape on a Linux machine.

I have a one-master, one-node k3s cluster to experiment with kubescape.

Execute the below command to install kubescape on the Linux machine,

curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

Within a few seconds, it will install.
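Once it finishes, a quick version check confirms the binary is available,

kubescape version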

Step 2: Scan the Kubernetes cluster

I have my cluster configuration in the default path.

Scan Kubernetes cluster with the below command,

kubescape scan --enable-host-scan --verbose

It will scan all the resources in the Kubernetes cluster and give the current status of the cluster,

Here is the number of issues found in my cluster; we need to check them one by one and fix them.

We can also scan against the frameworks available in kubescape.

Here is the list of frameworks it supports: NSA-CISA, MITRE ATT&CK, and CIS Benchmark. Use the below command to scan against a specific framework,

kubescape scan framework cis

We can export the result in HTML, JSON, PDF, and XML by using the below command,

kubescape scan framework cis --format pdf --output cis_output.pdf
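Similarly, a JSON export (useful in CI pipelines) would look like,

kubescape scan framework nsa --format json --output nsa_output.json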

Step 3: Other ways to run kubescape scans,

Use an alternate kubeconfig file to scan,

kubescape scan --kubeconfig cluster.conf

Include specific namespaces to scan,

kubescape scan --include-namespaces devopsart,nginx

Exclude specific namespaces to scan,

kubescape scan --exclude-namespaces kube-system

kubescape scan --exclude-namespaces kube-system,default

Scan yaml files,

kubescape scan nginx.yaml
kubescape scan *.yaml

That's all. Today we have seen how to install the kubescape tool and scan a Kubernetes cluster.



Step by Step installation of K3s in CentOS 8

In this blog, we will see the step-by-step installation of k3s on CentOS 8.

K3s is a lightweight Kubernetes distribution created by Rancher; it is a simplified version of K8s with a binary size of less than 100MB. It uses SQLite3 as the default backend storage, and etcd3, MySQL, and Postgres are available as options. It is secure by default with standard practices.

Requirements:

Linux servers: 2

OS: CentOS 8.5

Step 1: Update OS and install Kubectl

Here I am using one master and one worker node for the installation.

Master: k3smaster.devopsart.com (10.12.247.54)

Worker Node:  k3snode1.devopsart.com (10.12.247.55)

Go to each server, run "yum update" to get the latest packages, and reboot.

Make sure the firewall between these two Linux servers allows the traffic k3s needs (in particular, port 6443/tcp on the master).

Install Kubectl,

Go to the root path of the master node and run the below commands,

curl -LO https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl

chmod +x kubectl

cp kubectl /usr/bin

Check the kubectl version to make sure the command is working,

kubectl version --short

Go to the master and worker nodes and make sure the hosts file is updated with the below entries if DNS is not resolving the hostnames.
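Based on the master and worker details above, the entries would be:

10.12.247.54 k3smaster.devopsart.com
10.12.247.55 k3snode1.devopsart.com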


Step 2: Install K3s in the Master server

Use the below command on the master server to install k3s,

curl -sfL https://get.k3s.io | sh -

Once it is successfully installed, you can run the below command to check the k3s service status,

systemctl status k3s

We can see the k3s config file at the below path on the master,

cat /etc/rancher/k3s/k3s.yaml

Next, we need to copy the config file so kubectl can use it.

mkdir ~/.kube

cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

Then check,

kubectl get nodes

The k3s master node is successfully installed. Next, we will do the worker node installation.


Step 3: Install the k3s agent on the worker node

Go to the worker node and execute the below command,

curl -sfL https://get.k3s.io | K3S_URL=${k3s_Master_url} K3S_TOKEN=${k3s_master_token} sh -

k3s_Master_url = https://k3smaster.devopsart.com:6443

k3s_master_token = the node token from the master, obtained by executing the below command on the master,

cat /var/lib/rancher/k3s/server/node-token
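With the values for this setup substituted in (the token is a placeholder; copy the real one from the master), the command looks like:

curl -sfL https://get.k3s.io | \
  K3S_URL=https://k3smaster.devopsart.com:6443 \
  K3S_TOKEN=<node-token-from-master> sh -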

Once the installation is successful, we can check the k3s agent status by executing the below command,

systemctl status k3s-agent.service


Step 4: K3s Installation validation

Go to the master node and check whether the new worker node is listed, using the below command,

kubectl get nodes

Great! The worker node is attached successfully to the k3s master.


Step 5: Deploy the Nginx webserver in K3s and validate,

I am using a Helm chart installation for this purpose.

Helm install,

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

cp /usr/local/bin/helm /usr/bin/helm

Add Bitnami repo in Helm,

helm repo add bitnami https://charts.bitnami.com/bitnami

Deploy Nginx webserver by using below helm command,

helm install nginx-web bitnami/nginx

Check the pod status,

kubectl get pods -o wide

The Nginx pod is running fine now.

Access Nginx webserver,

I took the ClusterIP of the Nginx service, tried to access it, and it's working.
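For example (the ClusterIP is whatever the service shows in the output),

kubectl get svc nginx-web
curl http://<cluster-ip>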


That's all. K3s is successfully installed on CentOS 8.5, and the Nginx webserver is deployed and validated.


Helm Dashboard - An Open source by Komodor

If you are using Kubernetes, then you will definitely know about Helm charts. Helm is nothing but a deployment tool; it helps us to deploy, upgrade, and manage our applications in the Kubernetes cluster, with charts as its packaging format.

Recently Komodor announced an open-source dashboard for Helm. Today we will see how to install and use it.

Requirements :

1. Kubernetes cluster

2. Helm

Steps :

Step 1: Installation

Step 1.1: Overview of my existing cluster setup:

I am running minikube version 1.26.0, and the Helm version is 3.9.2. I am going to use this setup for the installation.

Step 1.2: Installation of the helm dashboard,

Execute the below command on the machine where Helm is installed,

# helm plugin install https://github.com/komodorio/helm-dashboard.git

Then execute the below command to start the helm dashboard,

# helm dashboard

If port 8080 is already in use, we can change it with the HD_PORT environment variable.

If you want to run it in debug mode, set DEBUG=1 in the environment.

By default, the helm dashboard checks for the Checkov and Trivy plugins, which it uses for scanning purposes.
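For example, to start the dashboard on an alternate port with debug logging enabled,

# HD_PORT=9000 DEBUG=1 helm dashboard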

Step 2:  Access the helm dashboard,

Go to the browser and access the dashboard, http://localhost:8080

Now, we can see the applications already installed through Helm, which we saw in step 1 using helm commands.

We can see the list of helm repositories from the UI,

Whatever we do with the helm command, we can now do from the UI itself. We can view the existing manifest, upgrade, uninstall, etc.

We can install the application from the available helm repositories from the UI.

As noted above, the dashboard detects the Checkov and Trivy scanners by default and uses them to scan manifests during deployment.

That's all, the helm dashboard is installed successfully and we are able to view the deployments.



Popeye - A scanning tool to check potential issues in Kubernetes Cluster

Today we will see a new tool called "Popeye", which helps us find misconfigured resources and ensure best practices are in place in the Kubernetes cluster.

Popeye - It's a utility which scans K8s clusters and reports potential issues in deployed resources and configurations.

Note:  This is a read-only tool, it will not make any changes in the K8s cluster.

In this blog, we will see how to install and use this tool.

Requirements:

1. K8s cluster

2. Linux VM

Step 1: Install the Popeye tool

Use the below command to install it on macOS,

brew install derailed/popeye/popeye

For other OSes, see the install instructions in the project repo, https://github.com/derailed/popeye.



You can install it with "krew" as well, by using the below command,

kubectl krew install popeye

Step 2: Run the Popeye tool to scan the Kubernetes cluster,

Note: The Popeye CLI works like the kubectl command, so make sure you have the kubeconfig locally to connect to the cluster.

By default, this command scans all nodes and namespaces,

popeye





In the above output, you can see the overall status of the cluster and its configurations, and it gives a score at the end. The current score is 87%, a B rank. To improve the score, we need to work on the recommended suggestions.

If you need to scan a specific namespace or resource type, you can use the below commands,

For the specific namespace,

popeye -n devopsart

For specific configurations like config map,

popeye -n devopsart -s configmap

For specific deployments,

popeye -n devopsart -s deploy 

Step 3: Generate an HTML report and save it locally

To save the report in the current directory use the below command,

POPEYE_REPORT_DIR=$(pwd) popeye --save

Then run the required popeye command, and the scan result will be saved in the current directory.

To save the report in HTML use the below command,

POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html

Then run the required popeye command and open the report.html file in the browser,
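These report options combine with the namespace and section filters from Step 2, for example,

POPEYE_REPORT_DIR=$(pwd) popeye -n devopsart --save --out html --output-file devopsart-report.html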


That's all, we have successfully installed the "Popeye" tool and validated it against the K8s cluster. It helps improve our K8s cluster configuration and makes the cluster more stable.



