
In this blog, we will install and examine a new tool called Trivy, which helps identify vulnerabilities, misconfigurations, licenses, secrets, and software dependencies in the following targets:

1. Container image

2. Kubernetes cluster

3. Virtual machine image

4. Filesystem

5. Git repository

6. AWS


Requirements,

1. One virtual machine

2. Any one of the above-mentioned targets


Step 1 : Install Trivy

Execute the below command based on your OS,

For Mac:

brew install trivy

For other operating systems, please refer to the link below,
https://aquasecurity.github.io/trivy/v0.45/getting-started/installation/



Step 2 : Check an image with Trivy,

Let's try with the latest Nginx web server image to identify security vulnerabilities.

Execute the below command,

Syntax : trivy image <image name>:<version>

trivy image nginx:latest



It will provide a detailed view of the image, including the base image, each layer's information, and their vulnerability status in the report.
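To keep the results for later processing, Trivy can also write the report to a file in JSON format (flags as per the Trivy docs linked in the references; the output file name is just an example),

trivy image --format json --output nginx-report.json nginx:latest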


Step 3 : Check a GitHub repo with Trivy,

Example GitHub repo: https://github.com/akveo/kittenTricks.git

Execute the following command to check for vulnerabilities in the Git repo,

trivy repo https://github.com/akveo/kittenTricks.git

If you want to see only critical vulnerabilities, you can specify the severity using the following command,

trivy repo --severity CRITICAL https://github.com/akveo/kittenTricks.git



Step 4: Check a YAML file with Trivy,

I have used the below YAML from the Kubernetes website to check this,

https://k8s.io/examples/application/deployment.yaml
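Since trivy conf scans local files, first download the manifest and save it as nginx.yaml (assuming wget is available; curl -o works as well),

wget -O nginx.yaml https://k8s.io/examples/application/deployment.yaml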

Execute the below command to find misconfigurations in the YAML,

trivy conf nginx.yaml



Step 5 : Check a Terraform script with Trivy,

I have used the below sample tf script to check it,

https://github.com/alfonsof/terraform-aws-examples/tree/master/code/01-hello-world
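Trivy scans this as a local directory, so clone the repo and move into the code folder first (paths taken from the URL above),

git clone https://github.com/alfonsof/terraform-aws-examples.git

cd terraform-aws-examples/code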

Execute the below command to find misconfigurations in the tf script,

trivy conf 01-hello-world



That's all! We have installed the Trivy tool and validated it in each category. Thank you for reading!


References,

https://github.com/aquasecurity/trivy
https://aquasecurity.github.io/trivy/v0.45/docs/






In this blog post, we will explore a new tool called "KOR" (Kubernetes Orphaned Resources), which assists in identifying unused resources within a Kubernetes (K8s) cluster. This tool will be beneficial for those who manage Kubernetes clusters.

Requirements:

1. One machine (Linux/Windows/Mac)

2. K8s cluster


Step 1 : Install kor on the machine.

I am using a Linux VM for this experiment; for other platforms, download the binaries from the link below,

https://github.com/yonahd/kor/releases

Download the Linux binary on the Linux VM,

wget https://github.com/yonahd/kor/releases/download/v0.1.8/kor_Linux_x86_64.tar.gz

tar -xvzf kor_Linux_x86_64.tar.gz

chmod +x kor

cp kor /usr/bin/

kor --help


Step 2 : Nginx Webserver deployment in K8s

I have a k8s cluster; we will deploy the Nginx web server in it and try out the "kor" tool.

Create a namespace named "nginxweb",

kubectl create namespace nginxweb
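If the Bitnami chart repository is not yet added on your machine (an assumption; skip this if it already is), add it first,

helm repo add bitnami https://charts.bitnami.com/bitnami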

Using Helm, deploy the Nginx web server with the below command,

helm install nginx bitnami/nginx --namespace nginxweb 

kubectl get all -n nginxweb


Step 3 : Validate with kor tool

Let's check the unused resources in the nginxweb namespace with the kor tool.

The below command will list all the unused resources in the given namespace,

Syntax : kor all -n <namespace>

kor all -n nginxweb

Let's delete the nginx deployment from the nginxweb namespace and try it again.

kubectl delete deployments nginx -n nginxweb

Now check which resources remain in the namespace,

kubectl get all -n nginxweb

It shows that one k8s service remains under the nginxweb namespace.

Now try the kor tool again using the below command,

kor all -n nginxweb

It now reports that the nginx service is not used anywhere in the namespace.

We can also check only a single resource type (configmap, secret, services, serviceaccount, deployments, statefulsets, role, or hpa), for example,

kor services -n nginxweb

kor serviceaccount -n nginxweb

kor secret -n nginxweb


That's all. We have installed the KOR tool and validated it by deleting one of the components of the Nginx web server deployment.


References:

https://github.com/yonahd/kor


In this blog, we will see an interesting tool that helps DevOps/SRE professionals working in the Azure cloud.

Are you worried that your Infrastructure as Code (IaC) is not in a good state, and there have been lots of manual changes? Here is a solution provided by Azure: a tool named "Azure Export for Terraform" (aztfexport).

This tool assists in exporting the current Azure resources into Terraform code. Below, we will see the installation of this tool and how to use it.

Requirements:

1. A Linux/Windows/macOS machine

2. Terraform (>= v0.12)

3. az-cli

4. An Azure subscription account


Step 1 : aztfexport installation,

This tool can be installed on all operating systems. Refer to the link below for installation instructions for other OS:

https://github.com/Azure/aztfexport

If you are installing it on macOS, open the terminal and execute the following command:

brew install aztfexport


Step 2 : Configure the Azure subscription

Execute the below commands in the terminal to configure the Azure subscription,

az login

or, to use the device-code flow,

az login --use-device-code

Next, set the subscription ID,

az account set --subscription "<subscription id>"

Now that the Azure subscription is configured, let's proceed with trying out the tool.

In this subscription, I have a resource group named "devopsart-dev-rg" which contains a virtual machine (VM). We will generate the Terraform code for this VM.


Step 3 : Experiment with the "aztfexport" tool

Execute the below commands to generate the tf code.

Create a new directory with any name,

mkdir aztfexport && cd aztfexport

The below command lists the available options for this tool,

aztfexport --help

Execute the below command to generate the Terraform code from the "devopsart-dev-rg" resource group,

Syntax : aztfexport resource-group <resource-group-name>

aztfexport resource-group devopsart-dev-rg

It will take a few seconds to list the available resources in the given resource group (RG),

and it will list all the resources under the RG.

Next, enter "w" to import the resources; generating the code will take some more time.

Once it completes, we can validate the tf files.
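As a side note, to export a single resource instead of a whole resource group, aztfexport also provides a resource subcommand that takes an Azure resource ID (check aztfexport --help for the exact usage; the ID below is an illustrative placeholder),

aztfexport resource /subscriptions/<subscription id>/resourceGroups/devopsart-dev-rg/providers/Microsoft.Compute/virtualMachines/<vm name>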


Step 4 : Validate the tf files

We will validate the generated files. The following files are present in the directory,

main.tf

provider.tf

terraform.tf

aztfexportResourceMapping.json

terraform.tfstate (we can store this state file remotely by using the below parameters)

aztfexport [subcommand] --backend-type=azurerm \
                        --backend-config=resource_group_name=<resource group name> \
                        --backend-config=storage_account_name=<account name> \
                        --backend-config=container_name=<container name> \
                        --backend-config=key=terraform.tfstate


Run terraform plan.

Nice! It reports that no changes are required in the Azure cloud infra.


Step 5 : Delete the Azure resources and recreate them with the generated tf files,

Delete the resources under the dev RG from the Azure Portal.


Now run the Terraform commands to recreate the resources,

cd aztfexport

terraform plan


Next execute,

terraform apply


Now all the resources are recreated with the generated tf files.

That's all! We have installed the aztfexport tool, generated tf files, destroyed the Azure resources, and recreated them with the generated files.


Check the below link for the current limitations,

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-concepts#limitations


References,

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-overview

https://github.com/Azure/aztfexport

https://www.youtube.com/watch?v=LWk9SU7AmDA

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-advanced-scenarios


In this blog, we will see the installation and a demo of a security tool called "tfsec".

TFSec : It is a security scanner tool that helps find misconfigurations in Terraform code that could lead to security risks.

Github link : https://github.com/aquasecurity/tfsec

Requirements:

1. Linux/Mac machine

2. Terraform code

Installation :

Install tfsec,

We can install it on Mac or Linux.

On Linux, use the below command,

curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash

On Mac, use the below command,

brew install tfsec

Run tfsec :

Go to the Terraform script location and run "tfsec ." inside the directory.

I have sample tf code to create a storage account in the Azure cloud. Here is my main.tf,

resource "azurerm_storage_account" "storageaccount" {
  name                     = "devopsartstrgtest"
  resource_group_name      = "devopsart-non-prod"
  location                 = "East US"
  account_tier             = "Standard"
  account_replication_type = "GRS"
}


It will produce a report showing where the misconfigurations are and what fixes need to be applied.

In the above result it says, "CRITICAL Storage account uses an insecure TLS version", so we need to update the configuration and run the scan again.
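Here is a sketch of the fix for that finding, assuming the azurerm provider's min_tls_version attribute (verify the attribute name against the provider docs for your version),

resource "azurerm_storage_account" "storageaccount" {
  name                     = "devopsartstrgtest"
  resource_group_name      = "devopsart-non-prod"
  location                 = "East US"
  account_tier             = "Standard"
  account_replication_type = "GRS"
  min_tls_version          = "TLS1_2" # enforce TLS 1.2 to clear the CRITICAL finding
}

Rerunning "tfsec ." should then no longer report the TLS finding.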

We can also run tfsec via Docker using the below command,

docker run --rm -it -v "$(pwd):/src" aquasec/tfsec /src



That's all. We have installed tfsec and experimented with it. 


Few more functionalities of tfsec:

- We can create a new policy based on our requirements.

- We can make tfsec ignore a specific check by adding an annotation in the tf script,

#tfsec:ignore:az-vm-no-public-ip-rg

- We can also give an ignore an expiration date, so the check is re-enabled after the given date,

#tfsec:ignore:az-vm-no-public-ip-rg:exp:2023-08-30


You can try this tool and share your comments!


In this blog, we will explore a tool called 'Terraformer,' which aids in exporting existing cloud infrastructure as Terraform code (Reverse Terraform). 

Terraformer generates tf, JSON, and tfstate files from the existing infrastructure, allowing us to utilize the generated Terraform code for new infrastructure.

Requirements:

1. A Linux VM or Mac (I am using a Mac)

2. A cloud account (I am using Azure)

3. The latest Terraform


Step 1 : Install Terraformer

Execute the below commands to install terraformer,

export PROVIDER=azure

curl -LO "https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-${PROVIDER}-darwin-amd64"

chmod +x terraformer-${PROVIDER}-darwin-amd64

mv terraformer-${PROVIDER}-darwin-amd64 /usr/local/bin/terraformer

terraformer -v

terraform -v

Various other installation methods are available here,

https://github.com/GoogleCloudPlatform/terraformer


Step 2: Download the Cloud provider plugin

Create a versions.tf file to download the cloud provider plugin. Here Azure is used, so the azurerm plugin is required for Terraformer,

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.59.0"
    }
  }
}

Above is the Azure versions.tf file; change it according to your cloud provider.

Execute the below command to download the plugin,

terraform init


Step 3: Cloud Provider authentication

We need to log in to the cloud account in the terminal. The below commands are for Azure,

az login

export ARM_SUBSCRIPTION_ID=<yourazuresubscriptionid>

In my Azure account, I have some existing resources; we will export them with Terraformer.


Step 4 : Terraformer execution

Use the below command to generate Terraform code from the existing infrastructure.

Syntax : terraformer import azure -R <resource-group-name> -r <service-name>

terraformer import azure -R devopsart-testrg -r storage_account

With this command, I am exporting only the storage account. Once the command succeeds, there will be a folder called "generated"; under it, we can see the Terraform code related to our storage account, in "storage_account.tf".

By changing the parameters in this exported Terraform code, we can use it to create a new storage account.
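To export several services at once, the -r flag accepts a comma-separated list (per the Terraformer README; available service names vary by provider), for example,

terraformer import azure -R devopsart-testrg -r storage_account,virtual_network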

That's all. We have installed Terraformer and experimented with it.


Note: Currently this tool supports only a few Azure services.

Reference:  https://github.com/GoogleCloudPlatform/terraformer



Today, we will explore an interesting K8s plugin called 'kube-green' that can help scale pods down or up as needed during working hours/weekends. Once the initial configuration is complete, this plugin manages it automatically.

Kube-green: This Kubernetes (k8s) operator enables shutting down environments or specific resources, allowing for optimal resource utilisation and minimising energy waste. It can bring deployments and cronjobs up or down.

Requirements:

K8s cluster (min. version 1.19) (I am using version 1.25.4)


Step 1: Install kube-green,

Clone the below repo, which has all the configuration details,

git clone https://github.com/DevOpsArts/devopsart-kubegreen.git

cd devopsart-kubegreen

Install cert-manager,

kubectl apply -f cert-manager.yaml

Install kube-green,

kubectl apply -f kube-green.yaml

Get all the resources related to kube-green,

kubectl get all -n kube-green


Step 2: Deploy Nginx web for validation using helm,

helm repo add bitnami https://charts.bitnami.com/bitnami

helm install nginx-web bitnami/nginx

kubectl get pods

Now the Nginx pod is up and running in the k8s cluster.


Step 3: Configure kube-green 

We will scale the Nginx web server down and up using kube-green.

Go to the same cloned git folder and update the timing and deployment namespace details in working-hours.yml,

https://github.com/DevOpsArts/devopsart-kubegreen/blob/main/working-hours.yml

cd devopsart-kubegreen

cat working-hours.yml

apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: default
spec:
  weekdays: "1-5"
  sleepAt: "08:40"
  wakeUpAt: "08:42"
  timeZone: "Etc/UTC"

Update the weekdays, sleepAt, wakeUpAt, and timeZone values according to your requirements, and kube-green will scale all the deployments in the configured namespace down and up accordingly.

Here is how it worked: the pod was scheduled to scale down at 8:40 AM UTC and scale up at 8:42 AM UTC, and it scaled down at 8:40 AM UTC according to the configuration.

And at 8:42 AM UTC, the pod came back up as per the configuration.

The below configuration helps scale down pods with exceptions,

https://github.com/DevOpsArts/devopsart-kubegreen/blob/main/working-hours-expection.yml
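Here is a minimal sketch of such an exception, assuming the SleepInfo excludeRef field from the kube-green docs (the deployment name is illustrative),

apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: default
spec:
  weekdays: "1-5"
  sleepAt: "08:40"
  wakeUpAt: "08:42"
  timeZone: "Etc/UTC"
  excludeRef:
    - apiVersion: "apps/v1"
      kind: Deployment
      name: nginx-web # this deployment keeps running during sleep hours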

We can check the kube-green logs from the kube-green pods for the scale down/up status,

kubectl get pods -n kube-green

kubectl logs -f kube-green-controller-manager-6446b47f7c-hbmtx -n kube-green


Step 4: Kube-green monitoring,

We can monitor the resource utilization of the Kube Green resources and check the status of pod scale-down and scale-up. Kube Green exposes Prometheus metrics on port number 8080 with the /metrics path. We can configure it in Grafana to monitor the status.
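For a quick check without Grafana, we can port-forward to the controller and read the metrics endpoint directly (a sketch; the deployment name matches the pod name above, and the port/path are as stated),

kubectl port-forward -n kube-green deploy/kube-green-controller-manager 8080:8080

curl http://localhost:8080/metrics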

That's all. We have deployed Kube-Green in the K8s cluster and validated it by scaling down and up.

Reference,

https://kube-green.dev/docs/


In this blog post, we will cover the installation and experimentation of the Kubectl AI plugin (kubectl-ai), a plugin for kubectl that combines the functionalities of kubectl and OpenAI. This tool enables users to create and deploy Kubernetes manifests using OpenAI GPT.

Kubectl: It is a command-line tool used to interact with Kubernetes clusters. It is part of the Kubernetes distribution and allows users to deploy, inspect, and manage applications running on a Kubernetes cluster.

OpenAI GPT: It is a series of language models developed by OpenAI. These models are pre-trained on large text datasets and then fine-tuned for specific natural language processing tasks such as language translation, sentiment analysis, or question answering.

Requirements :

1. Kubernetes cluster

2. Linux terminal (My machine is CentOS 8.5)

3. OpenAI API key (see Step 1 below for how to generate one)

Step 1: Install the Kubectl-ai plugin,

Download the latest binary from the below url,

https://github.com/sozercan/kubectl-ai/releases

wget https://github.com/sozercan/kubectl-ai/releases/download/v0.0.6/kubectl-ai_linux_amd64.tar.gz

Extract the compressed file and give execute permission to kubectl-ai,

tar -xvzf kubectl-ai_linux_amd64.tar.gz

chmod +x kubectl-ai

Next, generate the OpenAI API key from the below url,

https://platform.openai.com/account/api-keys


Add the OpenAI API key as a terminal environment variable, which will be used by the kubectl-ai plugin,

export OPENAI_API_KEY=XXXXXXXXX

export OPENAI_DEPLOYMENT_NAME=<deployment/model name> (I will not set this variable, as I will be using the default model, "gpt-3.5-turbo")

Refer here for the supported models: https://github.com/sozercan/kubectl-ai

Step 2: Experiment with the plugin,

Using this plugin, we will experiment with the following,

1. Create a new namespace "devopsart"

kubectl get namespace

./kubectl-ai "create a namespace devopsart"

2. Deploy "WordPress" in the "devopsart" namespace with one replica

./kubectl-ai "create a wordpress deployment with 1 replica in devopsart namespace"

3. Increase the "Wordpress" replica from 1 to 3

./kubectl-ai "Increase wordpress deployment replica count to 3 in devopsart namespace"
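To confirm the replica change outside the plugin, we can check with plain kubectl,

kubectl get deployments -n devopsart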

Here is the list of resources under the "devopsart" namespace that were created using this plugin.

We can do a lot with the plugin based on our needs. Above, I have tried a few experiments.

That's all. We have installed and configured the kubectl-ai plugin and experimented with this plugin on the Kubernetes cluster. 

