

In this blog, we will explore a new tool called 'Rover,' which helps visualize the Terraform plan.

Rover: This open-source tool is designed to visualize Terraform plan output, offering insights into infrastructure and its dependencies.

We will use the "Rover" Docker image to do our setup and visualize the infra.

Requirements:

1. Linux/Windows VM

2. Docker

Step 1: Generate Terraform plan output

I have a sample Azure Terraform block in the devopsart folder; we will generate the Terraform plan output from there and store it locally.
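For reference, here is a minimal sketch of the kind of Terraform block the plan could be generated from (the resource name and values below are hypothetical examples, not the actual devopsart code):

# main.tf (hypothetical example)
resource "azurerm_resource_group" "demo" {
  name     = "devopsart-demo-rg"   # example name
  location = "East US"
}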

cd devopsart

terraform plan -out tfplan.out

terraform show -json tfplan.out > tfplan.json

Now both the files are generated.


Step 2: Run the Rover tool locally

Execute the below Docker command to run Rover from the same path as Step 1:

docker run --rm -it -p 9000:9000 -v $(pwd)/tfplan.json:/src/tfplan.json im2nguyen/rover:latest -planJSONPath=tfplan.json

It runs the web UI on port 9000.


Step 3: Access the Rover web UI

Let's access the web UI and check it.

Go to the browser and enter http://localhost:9000


In the UI, the color codes on the left side help in understanding the actions that will take place for the resources when running terraform apply.

When a specific resource is selected in the graph, it shows the resource name and parameter information.

Additionally, the image can be saved locally by clicking the 'Save' option.

I hope this is helpful for someone who is genuinely confused by the Terraform plan output, especially when dealing with a large infrastructure.


Thanks for reading!! We have tried the Rover tool and experimented with examples.


Reference:

https://github.com/im2nguyen/rover


In this blog, we will see a new tool called Infracost, which helps provide expected cloud cost estimates based on Terraform code. We will cover the installation and demonstrate how to use this tool.

Infracost: It provides cloud cost projections from Terraform code. It enables engineers to view a detailed cost breakdown and understand expenses before implementation.

Requirements:

1. One Windows/Linux VM

2. Terraform

3. Terraform examples


Step 1: Infracost installation

For Mac, use the below brew command to do the installation:

brew install infracost

For other operating systems, follow the below link:

https://www.infracost.io/docs/#quick-start


Step 2: Infracost configuration

We need to set up the Infracost API key by signing up here:

https://dashboard.infracost.io

Once logged in, visit the following URL to obtain the API key:

https://dashboard.infracost.io/org/praboosingh/settings/general

Next, open the terminal and set the key as an environment variable using the following command:

# export INFRACOST_API_KEY=XXXXXXXXXXXXX

Or you can log in to the Infracost UI and grant terminal access by using the following command:

# infracost auth login

Note: Infracost will not send any cloud information to their server.


Step 3: Infracost validation

Next, we will do the validation. For validation purposes, I have cloned the below GitHub repo, which contains Terraform examples.

# git clone https://github.com/alfonsof/terraform-azure-examples.git

# cd terraform-azure-examples/code/01-hello-world

Try Infracost by using the below command to get the estimated cost for a month:

# infracost breakdown --path .

To save the report in JSON format and upload it to the Infracost server, use the below commands:

# infracost breakdown --path . --format json --out-file infracost-demo.json

# infracost upload --path infracost-demo.json

If we plan to upgrade the infrastructure and need to understand the new cost, execute the following command from the Terraform code path to compare against the previously saved output. A hypothetical example of such an upgrade follows the command.

# infracost diff --path . --compare-to infracost-demo.json
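As a minimal sketch of such an upgrade (assuming the example code defines an Azure VM; the attribute and sizes below are illustrative, not taken from the repo), a cost-affecting edit in main.tf could look like:

  vm_size = "Standard_DS2_v2"   # hypothetical upgrade, was "Standard_DS1_v2"

After editing, rerun infracost diff as above to see the monthly cost difference against infracost-demo.json.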


Thanks for reading!! We have installed Infracost and experimented with examples.


References:

https://github.com/infracost/infracost

https://www.infracost.io/docs/#quick-start




In this blog, we will install and examine a new tool called Trivy, which helps identify vulnerabilities, misconfigurations, licenses, secrets, and software dependencies in the following:

1. Container image

2. Kubernetes cluster

3. Virtual machine image

4. Filesystem

5. Git repo

6. AWS


Requirements:

1. One virtual machine

2. Any one of the above-mentioned scan targets


Step 1: Install Trivy

Execute the below command based on your OS.

For Mac:

brew install trivy

For other OSes, please refer to the below link:
https://aquasecurity.github.io/trivy/v0.45/getting-started/installation/



Step 2: Check an image with Trivy

Let's try with the latest Nginx web server image to identify security vulnerabilities.

Execute the below command.

Syntax: trivy image <image name>:<version>

trivy image nginx:latest



It will provide a detailed view of the image, including the base image, each layer's information, and their vulnerability status in the report.


Step 3: Check a GitHub repo with Trivy

Example GitHub repo: https://github.com/akveo/kittenTricks.git

Execute the following command to check for vulnerabilities in the Git repo:

trivy repo https://github.com/akveo/kittenTricks.git

If you want to see only critical vulnerabilities, you can specify the severity using the following command:

trivy repo --severity CRITICAL https://github.com/akveo/kittenTricks.git



Step 4: Check a YAML file with Trivy

I have used the below YAML from the Kubernetes website to check this:

https://k8s.io/examples/application/deployment.yaml
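The command below scans the file as nginx.yaml, so assuming the manifest is first saved locally under that name (the filename is just what this post uses):

wget -O nginx.yaml https://k8s.io/examples/application/deployment.yaml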

Execute the below command to find the misconfigurations in the YAML:

trivy conf nginx.yaml



Step 5: Check a Terraform script with Trivy

I have used the below sample tf script to check it:

https://github.com/alfonsof/terraform-aws-examples/tree/master/code/01-hello-world
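The scan below targets the 01-hello-world directory, so assuming the repo is cloned locally first:

git clone https://github.com/alfonsof/terraform-aws-examples.git

cd terraform-aws-examples/code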

Execute the below command to find the misconfigurations in the tf script:

trivy conf 01-hello-world



That's all. We have installed the Trivy tool and validated it in each category. Thank you for reading!!!


References,

https://github.com/aquasecurity/trivy
https://aquasecurity.github.io/trivy/v0.45/docs/






In this blog post, We will explore a new tool called "KOR" (Kubernetes Orphaned Resources), which assists in identifying unused resources within a Kubernetes(K8S) cluster. This tool will be beneficial for those who are managing Kubernetes clusters.

Requirements:

1. One machine (Linux/Windows/Mac)

2. K8s cluster


Step 1: Install kor on the machine

I am using a Linux VM for this experiment; for other platforms, download the binaries from the below link:

https://github.com/yonahd/kor/releases

Download the Linux binary for the Linux VM:

wget https://github.com/yonahd/kor/releases/download/v0.1.8/kor_Linux_x86_64.tar.gz

tar -xvzf kor_Linux_x86_64.tar.gz

chmod +x kor

cp kor /usr/bin

kor --help


Step 2: Nginx webserver deployment in K8s

I have a K8s cluster; we will deploy the Nginx webserver in it and try out the "kor" tool.

Create a namespace called "nginxweb":

kubectl create namespace nginxweb
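If the Bitnami chart repo is not already added locally, add it first (the same repo URL is used later in this blog for the kube-green demo):

helm repo add bitnami https://charts.bitnami.com/bitnami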

Using Helm, we will deploy the Nginx webserver with the below command:

helm install nginx bitnami/nginx --namespace nginxweb 

kubectl get all -n nginxweb


Step 3: Validate with the kor tool

Let's check the unused resources in the nginxweb namespace with the kor tool.

The below command will list all the unused resources in the given namespace.

Syntax: kor all -n <namespace>

kor all -n nginxweb

Let's delete the deployment from the nginxweb namespace and try it.

kubectl delete deployments nginx -n nginxweb

Now check what resources are available in the namespace:

kubectl get all -n nginxweb

It shows that one K8s service is still available under the nginxweb namespace.

Now try the kor tool again using the below command:

kor all -n nginxweb

It gives the matching result: the nginx service is not used anywhere in the namespace.

We can also check only configmaps/secrets/services/serviceaccounts/deployments/statefulsets/roles/HPAs, for example:

kor services -n nginxweb

kor serviceaccount -n nginxweb

kor secret -n nginxweb


That's all. We have installed the KOR tool and validated it by deleting one of the components in the Nginx webserver deployment.


References:

https://github.com/yonahd/kor


In this blog, we will see an interesting tool that helps DevOps/SRE professionals working in the Azure Cloud.

Are you worried that your Infrastructure as Code (IaC) is not in a good state and there have been lots of manual changes? Here is a solution provided by Azure: a tool named "Azure Export for Terraform (aztfexport)".

This tool assists in exporting the current Azure resources into Terraform code. Below, we will see the installation of this tool and how to use it.

Requirements:

1. A Linux/Windows machine

2. Terraform (>= v0.12)

3. az-cli

4. Azure subscription account


Step 1: aztfexport installation

This tool can be installed on all operating systems. Refer to the link below for installation instructions for other OSes:

https://github.com/Azure/aztfexport

If you are installing it on macOS, open the terminal and execute the following command:

brew install aztfexport


Step 2: Configure the Azure subscription

Execute the below commands to configure the Azure subscription in the terminal:

az login

or

az login --use-device-code

Next, set the subscription ID:

az account set --subscription "subscription id"

Now that the Azure subscription is configured, let's proceed with trying out the tool.

In this subscription, I have a resource group named "devopsart-dev-rg" which contains a virtual machine (VM). We will generate the Terraform code for this VM.


Step 3 : Experiment "aztfexport" tool

Execute the below commands to generate the tf code,

Create a new directory in any name,

mkdir aztfexport && cd aztfexport

The below command shows the available options for this tool:

aztfexport --help

Execute the below command to generate the Terraform code from the "devopsart-dev-rg" resource group.

Syntax: aztfexport resource-group <resource-group-name>

aztfexport resource-group devopsart-dev-rg

It will take a few seconds to list the available resources in the given resource group (RG), and it will list all the resources under the RG.

Next, enter "w" to import the resources; it will take some more time to generate the code.

Once it's completed, we can validate the tf files.


Step 4 : Validate the tf files

We will validate the generated files; the following files are present in the directory:

main.tf

provider.tf

terraform.tf

aztfexportResourceMapping.json

terraform.tfstate (we can store this state file remotely by using the below parameters)

aztfexport [subcommand] --backend-type=azurerm \

                        --backend-config=resource_group_name=<resource group name> \

                        --backend-config=storage_account_name=<account name> \

                        --backend-config=container_name=<container name> \

                        --backend-config=key=terraform.tfstate
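For example, with hypothetical resource and account names (the flags are the ones shown above; the exact invocation is a sketch, so check aztfexport --help for your version):

aztfexport resource-group --backend-type=azurerm --backend-config=resource_group_name=tfstate-rg --backend-config=storage_account_name=devopsartstate --backend-config=container_name=tfstate --backend-config=key=terraform.tfstate devopsart-dev-rg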


Run terraform plan.

Nice! It says no change is required in the Azure cloud infra.


Step 5: Delete the Azure resources and recreate them with the generated tf files

The resources are deleted from the Azure Portal under the dev RG.

Now run the Terraform commands to recreate the resources:

cd aztfexport

terraform plan


Next, execute:

terraform apply


Now all the resources are recreated with the generated tf files.

That's all. We have installed the aztfexport tool, generated tf files, destroyed the Azure resources, and recreated them with the generated files.


Check the below link for the current limitations:

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-concepts#limitations


References:

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-overview

https://github.com/Azure/aztfexport

https://www.youtube.com/watch?v=LWk9SU7AmDA

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-advanced-scenarios


In this blog, we will see the installation and a demo of a security tool called "tfsec".

tfsec: It is a security scanner that helps find misconfigurations in Terraform code that lead to security risks.

GitHub link: https://github.com/aquasecurity/tfsec

Requirements:

1. Linux/Mac machine

2. Terraform code

Installation:

Install tfsec

We can install it on Mac or Linux.

On Linux, use the below command:

curl -s https://raw.githubusercontent.com/aquasecurity/tfsec/master/scripts/install_linux.sh | bash

On Mac, use the below command:

brew install tfsec

Run tfsec:

Go to the Terraform script location and run "tfsec ." inside the directory.

I have sample tf code to create a storage account in the Azure cloud. Here is my main.tf:

resource "azurerm_storage_account" "storageaccount" {

  name                     = "devopsartstrgtest"

  resource_group_name      = "devopsart-non-prod"

  location                 = "East US"

  account_tier             = "Standard"

  account_replication_type = "GRS"

}


It will give a nice output showing where the misconfigurations are and what fixes need to be applied.

In the above result it says, "CRITICAL Storage account uses an insecure TLS version", so we need to update the configuration and run the scan again; a sketch of the fix follows.
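A minimal sketch of the fix, assuming the finding refers to the storage account's TLS setting (min_tls_version is the relevant azurerm_storage_account argument; add it inside the resource block in main.tf):

  min_tls_version = "TLS1_2"   # enforce TLS 1.2 to resolve the tfsec finding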

We can also run tfsec via Docker by using the below command:

docker run --rm -it -v "$(pwd):/src" aquasec/tfsec /src



That's all. We have installed tfsec and experimented with it. 


A few more functionalities of tfsec:

- We can create a new policy based on our requirements.

- We can make it ignore a specific check by adding an ignore comment in the tf script:

#tfsec:ignore:az-vm-no-public-ip-rg

- We can also set an ignore with an expiration date, so the check is re-enabled after the given date:

#tfsec:ignore:az-vm-no-public-ip-rg:exp:2023-08-30


You can try this tool and share your comments!


In this blog, we will explore a tool called 'Terraformer,' which aids in exporting existing cloud infrastructure as Terraform code (Reverse Terraform). 

Terraformer generates tf, JSON, and tfstate files from the existing infrastructure, allowing us to utilize the generated Terraform code for new infrastructure.

Requirements:

1. Linux VM (I am using a Mac)

2. Cloud account (I am using Azure)

3. Latest Terraform


Step 1 : Install Terraformer

Execute the below commands to install Terraformer (these download the Darwin/amd64 binary for Mac; pick the binary matching your OS from the releases page):

export PROVIDER=azure

curl -LO "https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-${PROVIDER}-darwin-amd64"

chmod +x terraformer-${PROVIDER}-darwin-amd64

mv terraformer-${PROVIDER}-darwin-amd64 /usr/local/bin/terraformer

terraformer -v

terraform -v

Various installation methods are available here:

https://github.com/GoogleCloudPlatform/terraformer


Step 2: Download the Cloud provider plugin

Create a versions.tf file to download the cloud plugin. Here Azure is used, so the azurerm provider is required for Terraformer:

terraform {

  required_providers {

    azurerm = {

      source  = "hashicorp/azurerm"

      version = "=3.59.0"

    }

  }

}

The above is the Azure versions.tf file; change it according to your cloud provider.

Execute the below command to download the plugin:

terraform init


Step 3: Cloud Provider authentication

We need to log in with the cloud account in the terminal. The below commands are for Azure:

az login

export ARM_SUBSCRIPTION_ID=<yourazuresubscriptionid>

In my Azure account, I have a few existing resources; we will download them with Terraformer.


Step 4: Terraformer execution

Use the below command to download the Terraform code from the existing infrastructure.

Syntax: terraformer import azure -R <resource-group-name> -r <service-name>

terraformer import azure -R devopsart-testrg -r storage_account

With this command, I am downloading only the storage account. Once the command succeeds, there will be a folder called "generated"; under it, we can see the Terraform code related to our storage account.

The output lands in "storage_account.tf".
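As a rough sketch of what the generated file can look like (Terraformer prefixes resource names with "tfer--"; the account name and values here are hypothetical):

resource "azurerm_storage_account" "tfer--devopsartstore" {
  name                     = "devopsartstore"
  resource_group_name      = "devopsart-testrg"
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}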

By using this downloaded Terraform code, we can create a new storage account by changing the parameters in the code.

That's all. We have installed Terraformer and experimented with it.


Note: Currently, this tool supports only a few Azure services.

Reference:  https://github.com/GoogleCloudPlatform/terraformer



Today, we will explore an interesting K8s plugin called 'kube-green' that can help scale down or up the pods as needed during working hours/weekends. Once the initial configuration is complete, this plugin will automatically manage it.

Kube-green: This Kubernetes(k8s) operator enables the shutdown of environments or specific resources, allowing for optimal resource utilisation and minimising energy waste. It helps to bring up/down deployments and cronjobs.

Requirements:

K8s cluster (min. version 1.19; I am using version 1.25.4)


Step 1: Install kube-green

Clone the below repo, which has all the configuration details:

git clone https://github.com/DevOpsArts/devopsart-kubegreen.git

cd devopsart-kubegreen

Install cert-manager:

kubectl apply -f cert-manager.yaml

Install kube-green:

kubectl apply -f kube-green.yaml

Get all resources related to kube-green by:

kubectl get all -n kube-green


Step 2: Deploy the Nginx web server for validation using Helm

helm repo add bitnami https://charts.bitnami.com/bitnami

helm install nginx-web bitnami/nginx

kubectl get pods

Now the Nginx pod is up and running in the K8s cluster.


Step 3: Configure kube-green

We will scale the Nginx web server down and up using kube-green.

Go to the same cloned git folder and update the timing and namespace details:

https://github.com/DevOpsArts/devopsart-kubegreen/blob/main/working-hours.yml

cd devopsart-kubegreen

cat working-hours.yml

apiVersion: kube-green.com/v1alpha1

kind: SleepInfo

metadata:

  name: working-hours

  namespace: default

spec:

  weekdays: "1-5"

  sleepAt: "08:40"

  wakeUpAt: "08:42"

  timeZone: "Etc/UTC"

Update the highlighted values (weekdays, sleepAt, wakeUpAt, timeZone, and the namespace) according to your requirements, and kube-green will scale down and up all the deployments in that namespace.

In our test, the pod was scheduled to scale down at 8:40 AM UTC and scale up at 8:42 AM UTC, and it scaled down at 8:40 AM UTC according to the configuration.


And at 8:42 AM UTC, the pod came back up as per the configuration.

The below configuration helps scale down the pods with exceptions; a sketch follows the link.

https://github.com/DevOpsArts/devopsart-kubegreen/blob/main/working-hours-expection.yml
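As a minimal sketch of the idea (using kube-green's excludeRef field; the deployment name below is hypothetical), an exception can be declared like this:

apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: default
spec:
  weekdays: "1-5"
  sleepAt: "08:40"
  wakeUpAt: "08:42"
  timeZone: "Etc/UTC"
  excludeRef:
    - apiVersion: "apps/v1"
      kind: Deployment
      name: critical-app   # this deployment is left running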

We can check the kube-green logs from the kube-green pods for the scale-down/up status:

kubectl get pods -n kube-green

kubectl logs -f kube-green-controller-manager-6446b47f7c-hbmtx -n kube-green


Step 4: kube-green monitoring

We can monitor the resource utilization of the kube-green resources and check the status of pod scale-down and scale-up. kube-green exposes Prometheus metrics on port 8080 at the /metrics path, which we can wire into Grafana to monitor the status.

That's all. We have deployed kube-green in the K8s cluster and validated it by scaling down and up.

Reference:

https://kube-green.dev/docs/

