
In this blog, we will explore a tool called 'Terraformer,' which aids in exporting existing cloud infrastructure as Terraform code (Reverse Terraform). 

Terraformer generates tf, JSON, and tfstate files from the existing infrastructure, allowing us to utilize the generated Terraform code for new infrastructure.

Requirements:

1. Linux VM (I am using a Mac)

2. Cloud account (I am using Azure)

3. Latest Terraform


Step 1: Install Terraformer

Execute the commands below to install Terraformer (this URL fetches the macOS/darwin build; substitute linux on a Linux VM),

export PROVIDER=azure

curl -LO "https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-${PROVIDER}-darwin-amd64"

chmod +x terraformer-${PROVIDER}-darwin-amd64

mv terraformer-${PROVIDER}-darwin-amd64 /usr/local/bin/terraformer

terraformer -v

terraform -v
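The long curl command above hard-codes the darwin/amd64 asset. As a sketch, the snippet below derives the asset name for the current machine and shows the same tag_name extraction on a sample API response (the asset naming convention is assumed from the Terraformer releases page):

```shell
# Derive the Terraformer release asset name for this machine.
# Assumed naming convention: terraformer-<provider>-<os>-<arch>
PROVIDER=azure
OS=$(uname -s | tr '[:upper:]' '[:lower:]')   # darwin on macOS, linux on Linux
ARCH=$(uname -m)
[ "$ARCH" = "x86_64" ] && ARCH=amd64
ASSET="terraformer-${PROVIDER}-${OS}-${ARCH}"

# The same tag_name extraction used in the curl pipeline above,
# demonstrated on a sample of the GitHub API response:
SAMPLE='{"tag_name": "0.8.24", "name": "0.8.24"}'
TAG=$(printf '%s' "$SAMPLE" | grep tag_name | cut -d '"' -f 4)
echo "$ASSET $TAG"
```

The `$ASSET` value can then replace the hard-coded `terraformer-${PROVIDER}-darwin-amd64` in the download URL.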

Various other installation methods are available here,

https://github.com/GoogleCloudPlatform/terraformer


Step 2: Download the Cloud provider plugin

Create a versions.tf file to download the cloud plugin. Azure is used here, so the azurerm provider is required for Terraformer,

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.59.0"
    }
  }
}

Above is the Azure versions.tf file; change it according to your cloud provider.

Execute the below command to download the plugin,

terraform init


Step 3: Cloud Provider authentication

We need to log in to the cloud account in the terminal. The commands below are for Azure.

az login

export ARM_SUBSCRIPTION_ID=<yourazuresubscriptionid>
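Rather than pasting the subscription ID by hand, it can be pulled from the Azure CLI after az login; the is_guid check below is a hypothetical helper (not part of Terraformer) that sanity-checks the value:

```shell
# Hypothetical helper: does $1 look like an Azure subscription ID (a GUID)?
is_guid() {
  printf '%s' "$1" | grep -Eq '^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$'
}

# With the Azure CLI logged in, the current subscription ID is:
#   export ARM_SUBSCRIPTION_ID=$(az account show --query id -o tsv)
#   is_guid "$ARM_SUBSCRIPTION_ID" || echo "unexpected subscription id" >&2
is_guid "2b1b4a1f-0000-0000-0000-000000000000" && echo ok
```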

In my Azure account, I have the following resources, which we will download with Terraformer.


Step 4: Terraformer execution

Use the command below to generate Terraform code from the existing infrastructure.

Syntax: terraformer import azure -R <resourcegrpname> -r <Servicename>

terraformer import azure -R devopsart-testrg -r storage_account

With this command, I am downloading only the storage account. Once the command succeeds, there will be a folder called "generated"; under it, we can see the Terraform code for our storage account.

And here is the output of "storage_account.tf"

Using this downloaded Terraform code, we can create a new storage account by changing the parameters in the code.

That's all. We have installed Terraformer and experimented with it.


Note: Currently, this tool supports only a few Azure services.

Reference:  https://github.com/GoogleCloudPlatform/terraformer


                                          

Today, we will explore an interesting K8s plugin called 'kube-green' that can scale pods down and up as needed during working hours/weekends. Once the initial configuration is complete, this plugin will manage it automatically.

Kube-green: This Kubernetes(k8s) operator enables the shutdown of environments or specific resources, allowing for optimal resource utilisation and minimising energy waste. It helps to bring up/down deployments and cronjobs.

Requirements:

K8s cluster (min. version 1.19; I am using 1.25.4)


Step 1: Install kube-green,

Clone the repo below, which has all the configuration details,

git clone https://github.com/DevOpsArts/devopsart-kubegreen.git

cd devopsart-kubegreen

Install cert-manager,

kubectl apply -f cert-manager.yaml

Install kube-green,

kubectl apply -f kube-green.yaml

List all resources related to kube-green by,

kubectl get all -n kube-green


Step 2: Deploy Nginx web for validation using helm,

helm repo add bitnami https://charts.bitnami.com/bitnami

helm install nginx-web bitnami/nginx

kubectl get pods

Now the Nginx pod is up and running in the k8s cluster.


Step 3: Configure kube-green 

We will scale down and up the nginx web using kube-green,

Go to the cloned folder and update the timing and deployment namespace details in working-hours.yml,

https://github.com/DevOpsArts/devopsart-kubegreen/blob/main/working-hours.yml

cd devopsart-kubegreen

cat working-hours.yml

apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: default
spec:
  weekdays: "1-5"
  sleepAt: "08:40"
  wakeUpAt: "08:42"
  timeZone: "Etc/UTC"

Update the weekdays, sleepAt, wakeUpAt, and timeZone values according to your requirements; kube-green will then scale all the deployments in that namespace down and up on that schedule.
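The manifest can also be generated and applied in one step; sleepinfo_yaml below is a hypothetical helper that emits the same SleepInfo as working-hours.yml with the sleep/wake times passed as arguments:

```shell
# Hypothetical helper: emit a SleepInfo manifest (same fields as
# working-hours.yml) with the sleep and wake times as arguments.
sleepinfo_yaml() {
  cat <<EOF
apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: default
spec:
  weekdays: "1-5"
  sleepAt: "$1"
  wakeUpAt: "$2"
  timeZone: "Etc/UTC"
EOF
}

sleepinfo_yaml "18:00" "08:00"
# Apply it to the cluster:
#   sleepinfo_yaml "18:00" "08:00" | kubectl apply -f -
```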

The screenshot below shows how it worked. The pod was scheduled to scale down at 8:40 AM UTC and scale up at 8:42 AM UTC, and it scaled down at 8:40 AM UTC according to the configuration.


And at 8:42 AM UTC, the pod came back up as per the configuration.

The configuration below helps scale down the pods with exceptions (workloads to leave running).

https://github.com/DevOpsArts/devopsart-kubegreen/blob/main/working-hours-expection.yml
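As a sketch of what such an exception looks like (field names as in the kube-green docs; nginx-web is the Helm release from step 2), an excludeRef list keeps selected workloads running through the sleep window:

```shell
# Sketch: a SleepInfo with an exception list. Everything in the namespace
# sleeps except the resources listed under excludeRef.
cat <<'EOF' > working-hours-exception-demo.yml
apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: default
spec:
  weekdays: "1-5"
  sleepAt: "08:40"
  wakeUpAt: "08:42"
  timeZone: "Etc/UTC"
  excludeRef:
    - apiVersion: "apps/v1"
      kind: Deployment
      name: nginx-web
EOF
# kubectl apply -f working-hours-exception-demo.yml
```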

We can check the kube-green logs from kube-green pods for scale down/up status,

kubectl get pods -n kube-green

kubectl logs -f kube-green-controller-manager-6446b47f7c-hbmtx -n kube-green


Step 4: Kube-green monitoring,

We can monitor the resource utilisation of the kube-green resources and check the status of pod scale-down and scale-up. kube-green exposes Prometheus metrics on port 8080 at the /metrics path; we can configure it in Grafana to monitor the status.

That's all. We have deployed Kube-Green in the K8s cluster and validated it by scaling down and up.

Reference,

https://kube-green.dev/docs/


In this blog post, we will cover the installation and experimentation of the Kubectl AI plugin(kubectl-ai), a plugin for Kubectl that combines the functionalities of Kubectl and OpenAI. This tool enables users to create and deploy Kubernetes manifests using OpenAI GPT.

Kubectl: It is a command-line tool used to interact with Kubernetes clusters. It is part of the Kubernetes distribution and allows users to deploy, inspect, and manage applications running on a Kubernetes cluster.

OpenAI GPT: It is a series of language models developed by OpenAI. These models are pre-trained on large datasets of text and then fine-tuned for specific natural language processing tasks such as language translation, sentiment analysis, or question answering.

Requirements :

1. Kubernetes cluster

2. Linux terminal (my machine is CentOS 8.5)

3. OpenAI API key

Step 1: Install the Kubectl-ai plugin,

Download the latest binary from the below url,

https://github.com/sozercan/kubectl-ai/releases

wget https://github.com/sozercan/kubectl-ai/releases/download/v0.0.6/kubectl-ai_linux_amd64.tar.gz

Extract the compressed file and give execute permission to kubectl-ai,

tar -xvzf kubectl-ai_linux_amd64.tar.gz

chmod +x kubectl-ai

Next, generate the OpenAI API key from the URL below,

https://platform.openai.com/account/api-keys


Add the OpenAI API key as a terminal environment variable, which will be used by the kubectl-ai plugin,

export OPENAI_API_KEY=XXXXXXXXX

export OPENAI_DEPLOYMENT_NAME=<deployment-name> (I will not set this, as I will be using the default model, "gpt-3.5-turbo")

Refer here to see the supported models, https://github.com/sozercan/kubectl-ai
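Since the plugin cannot authenticate without the key, a small hypothetical guard (not part of kubectl-ai) can fail fast before each invocation:

```shell
# Hypothetical guard: refuse to run kubectl-ai if OPENAI_API_KEY is unset or empty.
require_openai_key() {
  if [ -z "${OPENAI_API_KEY:-}" ]; then
    echo "OPENAI_API_KEY is not set" >&2
    return 1
  fi
}

# Usage:
#   require_openai_key && ./kubectl-ai "create a namespace devopsart"
```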

Step 2: Experiment with the plugin,

Using this plugin, we will experiment with the following,

1. Create a new namespace "devopsart"

kubectl get namespace

./kubectl-ai "create a namespace 'devopsart'"

2. Deploy "WordPress" in the "devopsart" namespace with one replica

./kubectl-ai "create a wordpress deployment with 1 replica in devopsart namespace"

3. Increase the "Wordpress" replica from 1 to 3

./kubectl-ai "Increase wordpress deployment replica count to 3 in devopsart namespace"

Here is the list of resources under the "devopsart" namespace, which were created using this plugin.

We can do a lot with the plugin based on our needs. Above, I have tried a few experiments.

That's all. We have installed and configured the kubectl-ai plugin and experimented with this plugin on the Kubernetes cluster. 



In this blog, we will cover the installation, configuration, and validation of the Terrakube tool.

Terrakube is a full replacement for Terraform Enterprise/Terraform Cloud, and it is also open source.

Requirement : 

Docker & Docker Compose

AWS/Azure/GCP account


Step 1: Install Terrakube in Docker,

We can install Terrakube in a Kubernetes cluster, but I am following the docker-compose method to install this tool. This link, https://docs.terrakube.org/getting-started/deployment, provides guidance for installing it in Kubernetes

Clone the below git repo,

git clone https://github.com/AzBuilder/terrakube.git

cd terrakube/docker-compose

If you are using AWS/Azure/GCP, we need to update the values below according to the cloud provider.

By default, an AWS storage configuration is present; we need to update it according to our environment.

I am using Azure for this experiment, so here is the configuration,

Open the api.env file,

For Azure, comment out all the AWS configuration and add the two lines below,

AzureAccountName=tfdevopsart   (Storage account name of the TF backend)

AzureAccountKey=XXXXXX         (Storage account Key)

And change the volume driver to local in docker-compose.yaml, from


volumes:
  minio_data:
    driver: bridge


to


volumes:
  minio_data:
    driver: local
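This one-line change can also be scripted. The snippet below does the switch with sed on a small stand-in file so it is self-contained; point it at your real docker-compose.yaml instead:

```shell
# Create a stand-in compose fragment, then flip the volume driver in place.
# (sed -i.bak works with both GNU and BSD sed and leaves a .bak backup.)
printf 'volumes:\n  minio_data:\n    driver: bridge\n' > compose-demo.yaml
sed -i.bak 's/driver: bridge/driver: local/' compose-demo.yaml
cat compose-demo.yaml
```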

Next, run the below command to bring up the docker containers,

docker-compose up -d

Wait for 3 to 5 minutes for all the containers to be up and running.

Execute the below command to check the status of all the containers,

docker ps -a


Once all the containers are up and running, we can try to access the Terrakube web UI.

Step 2: Accessing Terrakube UI

Add the entries below to the hosts file of the machine where Docker is running,


127.0.0.1 terrakube-api

127.0.0.1 terrakube-ui

127.0.0.1 terrakube-executor

127.0.0.1 terrakube-dex

127.0.0.1 terrakube-registry


For Linux, the file path is /etc/hosts.
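The five entries can be generated rather than typed; terrakube_hosts below is a hypothetical helper that prints them, ready to append with sudo tee:

```shell
# Hypothetical helper: print the Terrakube host entries from step 2.
terrakube_hosts() {
  for h in terrakube-api terrakube-ui terrakube-executor terrakube-dex terrakube-registry; do
    printf '127.0.0.1 %s\n' "$h"
  done
}

terrakube_hosts
# Append them to the hosts file (requires root):
#   terrakube_hosts | sudo tee -a /etc/hosts
```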

Now try to access the UI at http://terrakube-ui:3000.

The default admin credentials are,

User: admin@example.com, Password: admin

Once logged in, select "Grant access", and it will go to the homepage.

Here, I am using Azure, so I am selecting Azure workspace.

Once we select "Azure", it will show the default modules which are available.


Next, we need to create a workspace by selecting "New Workspace" with our Terraform script. We need to provide details of our Terraform script repository and branch, which will be used to provision the resources.


Here select "Version control flow" for repository-based infra provisioning.

Test repo link, https://github.com/prabhu87/azure-tf-storageaccount.git




Then submit "Create workspace".

Next, click "Start Job" and choose "Plan" (it is the same as Terraform plan). There is an option to choose "Plan and apply" and "Destroy" as well.

It will run and help us understand what changes are going to happen in the given infra.

In my repo, I had given a simple TF script to create a storage account in Azure.

If we expand, we can see the TF logs.

Next, we will use the "Plan and apply" option, and then see whether it created the storage account on the Azure end.

Next, go to Azure and check whether the storage account was created,


The storage account is created successfully via Terrakube.

There are multiple options available in this tool,

- We can schedule the script to run

- We can set the environmental variables

- We can check the state file details of each successful execution

- We can set execution based on approvals

- There are multiple plugins available to customise the Terrakube workflow.


That's all. We successfully installed and configured Terrakube and used it as a self-hosted Terraform platform.


In this blog, We will see how to install Kubescape and how to identify the security issues and best practices in the Kubernetes cluster.

Kubescape is a security and compliance tool for Kubernetes; it helps with risk analysis, security compliance, and finding misconfigurations in the Kubernetes cluster.

Requirements,

1. Kubernetes cluster

2. kubectl

Step 1: Install kubescape on a Linux machine.

I have one master and one node k3s cluster to experiment with kubescape.

Execute the below command to install kubescape on the Linux machine,

curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

Within a few seconds, it will install.

Step 2: Scan the Kubernetes cluster

I have my cluster configuration in the default path.

Scan Kubernetes cluster with the below command,

kubescape scan --enable-host-scan --verbose

It will scan all the resources in the Kubernetes cluster and give the current status of the cluster,

Here is the number of vulnerabilities found in my cluster; we need to check them one by one and fix them.

We can also scan against a specific framework supported by kubescape. The supported frameworks are NSA-CISA, MITRE ATT&CK, and CIS Benchmark. Use the command below to scan against a specific framework,

kubescape scan framework cis

We can export the result as HTML, JSON, PDF, or XML using the below command,

kubescape scan framework cis --format pdf --output cis_output.pdf
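To grab more than one format in a single run, the loop below builds one export command per format (format names as accepted by the kubescape CLI); this sketch prints the commands so you can review them before running:

```shell
# Sketch: one CIS export command per output format.
cis_export_cmds() {
  for fmt in html json pdf; do
    printf 'kubescape scan framework cis --format %s --output cis_output.%s\n' "$fmt" "$fmt"
  done
}

cis_export_cmds
# Execute them on a machine with cluster access:
#   cis_export_cmds | sh
```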

Step 3: Other ways to scan with kubescape,

Use an alternate kubeconfig file to scan,

kubescape scan --kubeconfig cluster.conf

Include specific namespaces to scan,

kubescape scan --include-namespaces devopsart,nginx

Exclude specific namespaces to scan,

kubescape scan --exclude-namespaces kube-system

kubescape scan --exclude-namespaces kube-system,default

Scan yaml files,

kubescape scan nginx.yaml
kubescape scan *.yaml

That's all. Today we have seen how to install the kubescape tool and scan a Kubernetes cluster.



In this blog, we will see a step-by-step installation of k3s on CentOS 8.

K3s is a lightweight Kubernetes distribution created by Rancher; it is a simplified version of K8s with a binary size of less than 100MB. It uses SQLite as the default backend storage, and etcd3, MySQL, and Postgres options are also available. It is secure by default with standard practices.

Requirements:

Linux servers: 2

OS: CentOS 8.5

Step 1: Update OS and install Kubectl

Here I am using one master and one worker node for the installation.

Master: k3smaster.devopsart.com (10.12.247.54)

Worker Node:  k3snode1.devopsart.com (10.12.247.55)

Go to each server and run "yum update" to get the latest packages and do a reboot.

Make sure the firewall allows traffic between these two Linux servers (the k3s API port 6443 in particular).

Install Kubectl,

Go to the root path of the master node and run the below commands,

curl -LO https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl

chmod +x kubectl

cp kubectl /usr/bin

Check the kubectl version to make sure the command is working,

kubectl version --short

Go to the master and worker nodes and make sure the hosts file is updated with the master/worker hostnames and IPs above if DNS is not resolving.


Step 2: Install K3s in the Master server

Use below command in master server to install k3s,

curl -sfL https://get.k3s.io | sh -

Once successfully installed, you can run below to check the k3s service status,

systemctl status k3s

We can see the k3s config file in the below path in Master,

cat /etc/rancher/k3s/k3s.yaml

Next, we need to copy the config file to use in kubectl.

mkdir ~/.kube

cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

Then check,

kubectl get nodes

The K3s master node is successfully installed. Next, we will do the worker node installation.


Step 3: Install the k3s agent on the worker node

Go to the worker node and execute the below command,

curl -sfL https://get.k3s.io | K3S_URL=${k3s_Master_url} K3S_TOKEN=${k3s_master_token} sh -

k3s_Master_url = https://k3smaster.devopsart.com:6443

k3s_master_token = the token from the master, obtained with the command below,

cat /var/lib/rancher/k3s/server/node-token
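Putting the two values together, k3s_join_cmd below is a hypothetical helper that assembles the agent install line (the URL is this post's master hostname; the token shown is a placeholder, not a real one):

```shell
# Hypothetical helper: assemble the k3s agent join command from the
# master URL and node token.
k3s_join_cmd() {
  printf 'curl -sfL https://get.k3s.io | K3S_URL=%s K3S_TOKEN=%s sh -\n' "$1" "$2"
}

# Token below is a placeholder; use the value from node-token on the master.
k3s_join_cmd "https://k3smaster.devopsart.com:6443" "K10placeholder::server:token"
```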

Once the installation is successful, we can check the k3s agent status by executing the below command,

systemctl status k3s-agent.service


Step 4: K3s Installation validation

Go to the master node and check whether the new worker node is listed with the below command,

kubectl get nodes

Great! The worker node is attached successfully to the k3s master.


Step 5: Deploy the Nginx webserver in K3s and validate,

I am using a Helm chart installation for this purpose.

Helm install,

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

cp -r /usr/local/bin/helm /usr/bin/helm

Add Bitnami repo in Helm,

helm repo add bitnami https://charts.bitnami.com/bitnami

Deploy Nginx webserver by using below helm command,

helm install nginx-web bitnami/nginx

Check the pod status,

kubectl get pods -o wide

The Nginx pod is running fine now.

Access Nginx webserver,

I took the clusterIP of the Nginx service and tried to access it, and it's working.


That's all. K3s was successfully installed on CentOS 8.5, and we deployed and validated the Nginx webserver.

In this blog, we will explore a tool called 'Terraformer,' which aids in exporting existing cloud infrastructure as Terraform code (Reverse Terraform). 

Terraformer generates tf, JSON, and tfstate files from the existing infrastructure, allowing us to utilize the generated Terraform code for new infrastructure.

Requirements:

1.Linux VM(I am using Mac)

2. Cloud account (I am using Azure)

3. latest terraform


Step 1 : Install Terraformer

Execute the below commands to install terraformer,

export PROVIDER=azure

curl -LO "https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-${PROVIDER}-darwin-amd64"

chmod +x terraformer-${PROVIDER}-darwin-amd64

mv terraformer-${PROVIDER}-darwin-amd64 /usr/local/bin/terraformer

terraformer -v

terraform -v

There are various installation methods are available here,

https://github.com/GoogleCloudPlatform/terraformer


Step 2: Download the Cloud provider plugin

Create versions.tf file to download the cloud plugin, here Azure is used, so the azurearm plugin is required for terraformer

terraform {

  required_providers {

    azurerm = {

      source  = "hashicorp/azurerm"

      version = "=3.59.0"

    }

  }

}

Above is the azure versions.tf file, we can change it accordingly to your cloud provider.

Execute the below command to download the plugin,

terraform init


Step 3: Cloud Provider authentication

We need to login with the cloud account in the terminal, Below command is for azure.

az login

export ARM_SUBSCRIPTION_ID=<yourazuresubscriptionid>

In my Azure account, I have the following resources, We will download them with Terraformer.


Step 4 : Terraformer execution

Use The below command to download the terraform code from the existing infrastructure.

Syntax : terraformer import azure -R <resourcegrpname> -r <Servicename>

terraformer import azure -R devopsart-testrg -r storage_account

With this command, am downloading only the storage account. Once the command is successful. There will be a folder called "generated". under that, we can see our storage account related to terraform code.

And here is the output of "storage_account.tf"

by using this download terraform code, we can create a new storage account by changing the parameters in the terraform code.

That's all. We have installed Terraformer and experimented with it


Note:  Currently this tool supports a few Azure services.

Reference:  https://github.com/GoogleCloudPlatform/terraformer


                                          

Today, we will explore an interesting K8s plugin called 'kube-green' that can help scale down or up the pods as needed during working hours/weekends. Once the initial configuration is complete, this plugin will automatically manage it.

Kube-green: This Kubernetes(k8s) operator enables the shutdown of environments or specific resources, allowing for optimal resource utilisation and minimising energy waste. It helps to bring up/down deployments and cronjobs.

Requirements:

K8s cluster (min. version 1.19) (I Am using version 1.25.4)


Step 1: Install kube-green,

Clone the below repo which has all the configuration details,

git clone https://github.com/DevOpsArts/devopsart-kubegreen.git

cd devopsart-kubegreen

Install cert-manager,

kubectl apply -f cert-manager.yaml

Install kube-green,

kubectl apply -f kube-green.yaml

Get all resources related to kube green by,

kubectl get all -n kube-green


Step 2: Deploy Nginx web for validation using helm,

helm repo add bitnami https://charts.bitnami.com/bitnami

helm install nginx-web bitnami/nginx

kubectl get pods

Now Nginx pod is up and running in the k8s cluster


Step 3: Configure kube-green 

We will scale down and up the nginx web using kube-green,

Go to the same git cloned folder and update the timing and deployment namespace details,

https://github.com/DevOpsArts/devopsart-kubegreen/blob/main/working-hours.yml

cd devopsart-kubegreen

cat working-hours.yml

 apiVersion: kube-green.com/v1alpha1

kind: SleepInfo

metadata:

  name: working-hours

  namespace: default

spec:

  weekdays: "1-5"

  sleepAt: "08:40"

  wakeUpAt: "08:42"

  timeZone: "Etc/UTC"

Update the bold letters according to your requirements, and it will scale down and up all the deployments in the cluster.

The screenshot below shows how it worked. The pod was scheduled to scale down at 8:40 AM UTC and scale up at 8:42 AM UTC, and it scaled down at 8:40 AM UTC according to the configuration.


And at 8.42AM UTC, the pod came up as per the configuration,

Below configuration will help to scale down the pods with exceptions.

https://github.com/DevOpsArts/devopsart-kubegreen/blob/main/working-hours-expection.yml

We can check the kube-green logs from kube-green pods for scale down/up status,

kubectl get pods -n kube-green

kubectl logs -f kube-green-controller-manager-6446b47f7c-hbmtx -n kube-green


Step 4: Kube-green monitoring,

We can monitor the resource utilization of the Kube Green resources and check the status of pod scale-down and scale-up. Kube Green exposes Prometheus metrics on port number 8080 with the /metrics path. We can configure it in Grafana to monitor the status.

That's all. We have deployed Kube-Green in the K8s cluster and validated it by scaling down and up.

Reference,

https://kube-green.dev/docs/


In this blog post, we will cover the installation and experimentation of the Kubectl AI plugin(kubectl-ai), a plugin for Kubectl that combines the functionalities of Kubectl and OpenAI. This tool enables users to create and deploy Kubernetes manifests using OpenAI GPT.

Kubectl: It is a command-line tool used to interact with Kubernetes clusters. It is part of the Kubernetes distribution and allows users to deploy, inspect, and manage applications running on a Kubernetes cluster.

OpenAI GPT: It is a series of language models developed by OpenAI. These models are pre-trained on large datasets of text and then fine-tuned for specific natural languages processing tasks such as language translation, sentiment analysis, or question answering.

Requirements :

1. Kubernetes cluster

2. Linux terminal (My machine is Centos 8.5)

3. OpenAI API key (Take the )

Step 1: Install the Kubectl-ai plugin,

Download the latest binary from the below url,

https://github.com/sozercan/kubectl-ai/releases

Wget https://github.com/sozercan/kubectl-ai/releases/download/v0.0.6/kubectl-ai_linux_amd64.tar.gz

Extract the compressed file and give execute permission for kubectl-ai

tar -xvzf kubectl-ai_linux_amd64.tar.gz

Next generate the OpenAPI key from below url,

https://platform.openai.com/account/api-keys


Add the OpenAPI key in the terminal environment variable which will be used by the Kubectl-ai plugin

export OPENAI_API_KEY=XXXXXXXXX

export OPENAI_DEPLOYMENT_NAME= <I will not set this as an environment variable, as I will be using the default model as, "gpt-3.5-turbo">

Refer here to see the supported models, https://github.com/sozercan/kubectl-ai

Step 2: Experiment with the plugin,

Through this plugin. we will experiment with the followings,

1. Create a new namespace "devopsart"

kubectl get namespace

./kubectl-ai "create a namespace "devopsart""

2. Deploy "WordPress" in the "devopsart" namespace with one replica

./kubectl-ai "create a wordpress deployment with 1 replica in devopsart namespace"

3. Increase the "Wordpress" replica from 1 to 3

./kubectl-ai "Increase wordpress deployment replica count to 3 in devopsart namespace"

Here is the list of resources under the "devopsart" namespace which is created using this plugin.

We can do a lot with the plugin based on our needs. Above, I have tried a few experiments.

That's all. We have installed and configured the kubectl-ai plugin and experimented with this plugin on the Kubernetes cluster. 



In this blog, we will cover the installation, configuration, and validation of the Terrakube tool.

Terrakube is a full replacement for Terraform Enterprise/Terraform Cloud, and it is also open source.

Requirement : 

Docker & DockerCompose

AWS/Azure/GCP account


Steps 1: Install Terrakube in docker,

We can install Terrakube in a Kubernetes cluster, but I am following the docker-compose method to install this tool. This link, https://docs.terrakube.org/getting-started/deployment, provides guidance for installing it in Kubernetes

Clone the below git repo,

https://github.com/AzBuilder/terrakube.git

cd terrakube/docker-compose

If you are using AWS/Azure/GCP we need to update the below values according to the cloud provider.

By default AWS storage account configuration will be there, we need to update it according to our environment.

Am using azure for this experiment so here is the configuration,

Open the api.env file,

For Azure purposes comment out all the AWS configurations and add below two lines,

AzureAccountName=tfdevopsart   (Storage account name of the TF backend)

AzureAccountKey=XXXXXX         (Storage account Key)

And change this variable to local in docker-compose.yaml


volumes:

  minio_data:

    driver: bridge


to


volumes:

  minio_data:

    driver: local

Next, run the below command to bring up the docker containers,

docker-compose up -d

Wait for 3 to 5 minutes for all the containers to be up and running.

Execute the below command to check the status of all the containers,

docker ps -a


Once all the containers are up and running we can try to access the Terrakube web ui

Step 2: Accessing Terrakube UI

Add the below entries in the local machine host file where the docker is running,


127.0.0.1 terrakube-api

127.0.0.1 terrakube-ui

127.0.0.1 terrakube-executor

127.0.0.1 terrakube-dex

127.0.0.1 terrakube-registry


For Linux, the file path is, /etc/hosts

Now try to access the UI by, http://terrakube-ui:3000

The default admin credential is, 

User : admin@example.com, Password : admin

Once logged in, provide Grant access then it will go to the homepage.

Here, I am using Azure, so I am selecting Azure workspace.

Once we select "Azure", it will show the default modules which are available.


Next, we need to create a workspace by selecting "New Workspace" with our Terraform script. We need to provide details of our Terraform script repository and branch, which will be used to provision the resources.


Here select "Version control flow" for repository-based infra provisioning.

Test repo link, https://github.com/prabhu87/azure-tf-storageaccount.git




Then submit "Create workspace".

Next, click "Start Job" and choose "Plan" (it is the same as Terraform plan). There is an option to choose "Plan and apply" and "Destroy" as well.

It will run and helps to understand what changes are going to happen in the given infra.

In my repo, I had given a simple TF script to create a storage account in Azure.

If we expand, we can see the TF logs.

Next, We will do Plan and Apply option, and then see if it created the storage account on the Azure end.

Next, go to azure and check whether the storage account is created or not,


The storage account is created successfully via Terrakube.

There are multiple options are available in this tool,

- We can schedule the script to run

- We can set the environmental variables

- We can check the state file details of each successful execution

- We can set execution based on approvals

- There are multiple plugins available to customise the Terrakube workflow.


That's all, We successfully installed, configured and created a PAAS with the Terrakube tool.


In this blog, We will see how to install Kubescape and how to identify the security issues and best practices in the Kubernetes cluster.

Kubescape is a security and compliance tool for Kubernetes, it helps to identify risk analysis, security compliance, and misconfiguration in the Kubernetes cluster.

Requirements,

1. Kubernetes cluster

2. kubectl

Step 1: Install kubescape on a Linux machine.

I have one master and one node k3s cluster to experiment with kubescape.

Execute the below command to install kubscape on the Linux machine,

curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

Within a few seconds, it will install.

Step 2: Scan the Kubernetes cluster

I have my cluster configuration in the default path.

Scan Kubernetes cluster with the below command,

kubescape scan --enable-host-scan --verbose

It will scan all the resources in the Kubernetes cluster and give the current status of the cluster,

Here is the number of vulnerabilities found in my cluster, Need to check them one by one and fix them.

We can scan based on the framework available for kubescape.

Here is the list of frameworks it's supported, NSA-CISA, MITRE ATT&CK and CIS Benchmark. Below command to use to scan for the specific framework.

kubescape scan framework cis

We can export the result in HTML,  JSON, PDF, and XML by using the below command,

kubescape scan framework cis --format pdf --output cis_output.pdf

Step 3: Types of kubescape methods to scan,

Use an alternate kubeconfig file to scan,

kubescape scan --kubeconfig cluster.conf


In this blog, we will explore a tool called 'Terraformer,' which aids in exporting existing cloud infrastructure as Terraform code (Reverse Terraform). 

Terraformer generates tf, JSON, and tfstate files from the existing infrastructure, allowing us to utilize the generated Terraform code for new infrastructure.

Requirements:

1. Linux VM or Mac (I am using a Mac)

2. Cloud account (I am using Azure)

3. Latest Terraform


Step 1: Install Terraformer

Execute the below commands to install Terraformer,

export PROVIDER=azure

curl -LO "https://github.com/GoogleCloudPlatform/terraformer/releases/download/$(curl -s https://api.github.com/repos/GoogleCloudPlatform/terraformer/releases/latest | grep tag_name | cut -d '"' -f 4)/terraformer-${PROVIDER}-darwin-amd64"

chmod +x terraformer-${PROVIDER}-darwin-amd64

mv terraformer-${PROVIDER}-darwin-amd64 /usr/local/bin/terraformer

terraformer -v

terraform -v

Various other installation methods are available here,

https://github.com/GoogleCloudPlatform/terraformer


Step 2: Download the Cloud provider plugin

Create a versions.tf file to download the cloud plugin. Azure is used here, so the azurerm plugin is required for Terraformer,

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.59.0"
    }
  }
}

The above is the Azure versions.tf file; change it according to your cloud provider.

Execute the below command to download the plugin,

terraform init


Step 3: Cloud provider authentication

We need to log in to the cloud account from the terminal. The below commands are for Azure.

az login

export ARM_SUBSCRIPTION_ID=<yourazuresubscriptionid>

In my Azure account, I have the following resources; we will download them with Terraformer.


Step 4: Terraformer execution

Use the below command to download the Terraform code from the existing infrastructure.

Syntax: terraformer import azure -R <resourcegroupname> -r <servicename>

terraformer import azure -R devopsart-testrg -r storage_account

With this command, I am downloading only the storage account. Once the command succeeds, there will be a folder called "generated"; under it, we can see the Terraform code for our storage account.

And here is the output of "storage_account.tf"
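For readers without the screenshot, a generated file typically looks along these lines. Terraformer prefixes generated resource names with "tfer--"; the account name, location, and settings below are illustrative, not my actual output:

```hcl
resource "azurerm_storage_account" "tfer--devopsartstorage" {
  account_kind             = "StorageV2"
  account_replication_type = "LRS"
  account_tier             = "Standard"
  location                 = "eastus"
  name                     = "devopsartstorage"
  resource_group_name      = "devopsart-testrg"
}
```

Terraformer also writes a matching terraform.tfstate alongside the .tf files, so the exported resources can be managed immediately.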

Using this downloaded Terraform code, we can create a new storage account by changing the parameters in the Terraform code.

That's all. We have installed Terraformer and experimented with it.


Note: Currently, this tool supports only a few Azure services.

Reference:  https://github.com/GoogleCloudPlatform/terraformer


                                          

Today, we will explore an interesting K8s plugin called 'kube-green' that can help scale down or up the pods as needed during working hours/weekends. Once the initial configuration is complete, this plugin will automatically manage it.

Kube-green: This Kubernetes (k8s) operator enables shutting down environments or specific resources, allowing optimal resource utilisation and minimising energy waste. It helps bring deployments and cronjobs up and down.

Requirements:

K8s cluster (min. version 1.19) (I am using version 1.25.4)


Step 1: Install kube-green,

Clone the below repo which has all the configuration details,

git clone https://github.com/DevOpsArts/devopsart-kubegreen.git

cd devopsart-kubegreen

Install cert-manager,

kubectl apply -f cert-manager.yaml

Install kube-green,

kubectl apply -f kube-green.yaml

Get all resources related to kube green by,

kubectl get all -n kube-green


Step 2: Deploy Nginx web for validation using helm,

helm repo add bitnami https://charts.bitnami.com/bitnami

helm install nginx-web bitnami/nginx

kubectl get pods

Now Nginx pod is up and running in the k8s cluster


Step 3: Configure kube-green 

We will scale down and up the nginx web using kube-green,

Go to the same cloned git folder and update the timing and deployment namespace details,

https://github.com/DevOpsArts/devopsart-kubegreen/blob/main/working-hours.yml

cd devopsart-kubegreen

cat working-hours.yml

apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: default
spec:
  weekdays: "1-5"
  sleepAt: "08:40"
  wakeUpAt: "08:42"
  timeZone: "Etc/UTC"

Update the values (weekdays, sleepAt, wakeUpAt, timeZone) according to your requirements, and it will scale down and up all the deployments in the cluster.
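To make the field semantics concrete, here is a minimal Python illustration of how such a schedule is interpreted (this is my own sketch, not kube-green's implementation; it only handles a simple weekday range like "1-5" and a same-day sleep window):

```python
from datetime import datetime

def is_sleeping(now: datetime, weekdays: str = "1-5",
                sleep_at: str = "08:40", wake_up_at: str = "08:42") -> bool:
    """Return True if 'now' falls inside the sleep window."""
    start, end = (int(d) for d in weekdays.split("-"))
    # isoweekday(): Monday=1 ... Sunday=7, matching the "1-5" convention
    if not (start <= now.isoweekday() <= end):
        return False
    hhmm = now.strftime("%H:%M")
    # Lexicographic compare is safe for zero-padded HH:MM strings
    return sleep_at <= hhmm < wake_up_at

print(is_sleeping(datetime(2023, 6, 7, 8, 41)))   # a Wednesday, inside the window
print(is_sleeping(datetime(2023, 6, 10, 8, 41)))  # a Saturday, outside weekdays
```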

The screenshot below shows how it worked. The pod was scheduled to scale down at 8:40 AM UTC and scale up at 8:42 AM UTC, and it scaled down at 8:40 AM UTC according to the configuration.


And at 8:42 AM UTC, the pod came up as per the configuration.

The below configuration helps scale down the pods with exceptions (specific workloads can be excluded),

https://github.com/DevOpsArts/devopsart-kubegreen/blob/main/working-hours-expection.yml
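For reference, kube-green's SleepInfo supports an excludeRef list for this purpose; a sketch along those lines (the deployment name here is illustrative, not necessarily what the repo's file contains):

```yaml
apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: working-hours
  namespace: default
spec:
  weekdays: "1-5"
  sleepAt: "08:40"
  wakeUpAt: "08:42"
  timeZone: "Etc/UTC"
  excludeRef:
    # Deployments listed here are left running during the sleep window
    - apiVersion: "apps/v1"
      kind: Deployment
      name: nginx-web
```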

We can check the kube-green logs from kube-green pods for scale down/up status,

kubectl get pods -n kube-green

kubectl logs -f kube-green-controller-manager-6446b47f7c-hbmtx -n kube-green


Step 4: Kube-green monitoring,

We can monitor the resource utilization of kube-green and check the status of pod scale-down and scale-up. Kube-green exposes Prometheus metrics on port 8080 at the /metrics path, which we can configure in Grafana to monitor the status.

That's all. We have deployed Kube-Green in the K8s cluster and validated it by scaling down and up.

Reference,

https://kube-green.dev/docs/


In this blog post, we will cover the installation and experimentation of the Kubectl AI plugin (kubectl-ai), a plugin for kubectl that combines the functionalities of kubectl and OpenAI. This tool enables users to create and deploy Kubernetes manifests using OpenAI GPT.

Kubectl: It is a command-line tool used to interact with Kubernetes clusters. It is part of the Kubernetes distribution and allows users to deploy, inspect, and manage applications running on a Kubernetes cluster.

OpenAI GPT: It is a series of language models developed by OpenAI. These models are pre-trained on large datasets of text and then fine-tuned for specific natural language processing tasks such as language translation, sentiment analysis, or question answering.

Requirements :

1. Kubernetes cluster

2. Linux terminal (My machine is Centos 8.5)

3. OpenAI API key

Step 1: Install the Kubectl-ai plugin,

Download the latest binary from the below url,

https://github.com/sozercan/kubectl-ai/releases

wget https://github.com/sozercan/kubectl-ai/releases/download/v0.0.6/kubectl-ai_linux_amd64.tar.gz

Extract the compressed file and give execute permission for kubectl-ai

tar -xvzf kubectl-ai_linux_amd64.tar.gz

chmod +x kubectl-ai

Next, generate the OpenAI API key from the below URL,

https://platform.openai.com/account/api-keys


Add the OpenAI API key as a terminal environment variable, which will be used by the kubectl-ai plugin,

export OPENAI_API_KEY=XXXXXXXXX

export OPENAI_DEPLOYMENT_NAME= <I will not set this as an environment variable, as I will be using the default model, "gpt-3.5-turbo">

Refer here to see the supported models, https://github.com/sozercan/kubectl-ai

Step 2: Experiment with the plugin,

Using this plugin, we will experiment with the following,

1. Create a new namespace "devopsart"

kubectl get namespace

./kubectl-ai 'create a namespace "devopsart"'

2. Deploy "WordPress" in the "devopsart" namespace with one replica

./kubectl-ai "create a wordpress deployment with 1 replica in devopsart namespace"
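For readers curious what the plugin emits for a prompt like this, a Deployment manifest along these lines is the typical shape of the result (this exact YAML is my illustration, not captured plugin output; image and labels may differ per run):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: devopsart
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          ports:
            - containerPort: 80
```

The plugin shows the generated manifest and asks for confirmation before applying it to the cluster.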

3. Increase the "Wordpress" replica from 1 to 3

./kubectl-ai "Increase wordpress deployment replica count to 3 in devopsart namespace"

Here is the list of resources under the "devopsart" namespace, which were created using this plugin.

We can do a lot with the plugin based on our needs. Above, I have tried a few experiments.

That's all. We have installed and configured the kubectl-ai plugin and experimented with this plugin on the Kubernetes cluster. 



In this blog, we will cover the installation, configuration, and validation of the Terrakube tool.

Terrakube is a full replacement for Terraform Enterprise/Terraform Cloud, and it is also open source.

Requirements:

Docker & DockerCompose

AWS/Azure/GCP account


Step 1: Install Terrakube in Docker

We can install Terrakube in a Kubernetes cluster, but I am following the docker-compose method to install this tool. This link, https://docs.terrakube.org/getting-started/deployment, provides guidance for installing it in Kubernetes

Clone the below git repo,

https://github.com/AzBuilder/terrakube.git

cd terrakube/docker-compose

If you are using AWS/Azure/GCP, we need to update the below values according to the cloud provider.

By default, the AWS storage account configuration is present; we need to update it according to our environment.

I am using Azure for this experiment, so here is the configuration.

Open the api.env file,

For Azure, comment out all the AWS configuration and add the below two lines,

AzureAccountName=tfdevopsart   (Storage account name of the TF backend)

AzureAccountKey=XXXXXX         (Storage account Key)

And change the volume driver to local in docker-compose.yaml,

volumes:
  minio_data:
    driver: bridge

to

volumes:
  minio_data:
    driver: local

Next, run the below command to bring up the docker containers,

docker-compose up -d

Wait for 3 to 5 minutes for all the containers to be up and running.

Execute the below command to check the status of all the containers,

docker ps -a


Once all the containers are up and running, we can try to access the Terrakube web UI.

Step 2: Accessing Terrakube UI

Add the below entries in the hosts file of the local machine where Docker is running,


127.0.0.1 terrakube-api

127.0.0.1 terrakube-ui

127.0.0.1 terrakube-executor

127.0.0.1 terrakube-dex

127.0.0.1 terrakube-registry


For Linux, the file path is, /etc/hosts

Now try to access the UI at http://terrakube-ui:3000

The default admin credentials are,

User: admin@example.com, Password: admin

Once logged in, grant access, and it will go to the homepage.

Here, I am using Azure, so I am selecting Azure workspace.

Once we select "Azure", it will show the default modules which are available.


Next, we need to create a workspace by selecting "New Workspace" with our Terraform script. We need to provide details of our Terraform script repository and branch, which will be used to provision the resources.


Here select "Version control flow" for repository-based infra provisioning.

Test repo link, https://github.com/prabhu87/azure-tf-storageaccount.git




Then submit "Create workspace".

Next, click "Start Job" and choose "Plan" (it is the same as Terraform plan). There is an option to choose "Plan and apply" and "Destroy" as well.

It will run and help us understand what changes are going to happen in the given infra.

In my repo, I had given a simple TF script to create a storage account in Azure.
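The script in the test repo is along these lines (a minimal sketch; the resource group, account name, and location here are illustrative, not necessarily the repo's exact contents):

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_storage_account" "example" {
  name                     = "tfdevopsart"
  resource_group_name      = "devopsart-testrg"
  location                 = "eastus"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```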

If we expand, we can see the TF logs.

Next, we will run the "Plan and apply" option, and then see if it created the storage account on the Azure end.

Then, go to Azure and check whether the storage account was created,


The storage account is created successfully via Terrakube.

There are multiple other options available in this tool,

- We can schedule the script to run

- We can set the environmental variables

- We can check the state file details of each successful execution

- We can set execution based on approvals

- There are multiple plugins available to customise the Terrakube workflow.


That's all. We successfully installed and configured the Terrakube tool and provisioned infrastructure with it.


In this blog, we will see how to install Kubescape and how to identify security issues and best practices in a Kubernetes cluster.

Kubescape is a security and compliance tool for Kubernetes; it helps identify risks, security compliance gaps, and misconfigurations in the Kubernetes cluster.

Requirements,

1. Kubernetes cluster

2. kubectl

Step 1: Install kubescape on a Linux machine.

I have one master and one node k3s cluster to experiment with kubescape.

Execute the below command to install kubescape on the Linux machine,

curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

Within a few seconds, it will install.

Step 2: Scan the Kubernetes cluster

I have my cluster configuration in the default path.

Scan Kubernetes cluster with the below command,

kubescape scan --enable-host-scan --verbose

It will scan all the resources in the Kubernetes cluster and give the current status of the cluster,

Here is the number of vulnerabilities found in my cluster; we need to check and fix them one by one.

We can scan based on the framework available for kubescape.

Here is the list of frameworks it supports: NSA-CISA, MITRE ATT&CK, and CIS Benchmark. Use the below command to scan against a specific framework,

kubescape scan framework cis

We can export the result in HTML,  JSON, PDF, and XML by using the below command,

kubescape scan framework cis --format pdf --output cis_output.pdf

Step 3: Other kubescape scan options

Use an alternate kubeconfig file to scan,

kubescape scan --kubeconfig cluster.conf

Include specific namespaces to scan,

kubescape scan --include-namespaces devopsart,nginx

Exclude specific namespaces to scan,

kubescape scan --exclude-namespaces kube-system

kubescape scan --exclude-namespaces kube-system,default

Scan yaml files,

kubescape scan nginx.yaml
kubescape scan *.yaml

That's all. Today we have seen how to install the kubescape tool and scan a Kubernetes cluster.



In this blog, we will see the step-by-step installation of k3s on CentOS 8.

K3s is a lightweight Kubernetes distribution created by Rancher; it is a simplified version of K8s with a binary size of less than 100MB. It uses sqlite3 as its default backend storage, and etcd3, MySQL, and Postgres are available as database options. It is secure by default with standard practices.

Requirements:

Linux servers: 2

OS: Centos 8.5

Step 1: Update OS and install Kubectl

Here I am using one master and one worker node for the installation.

Master: k3smaster.devopsart.com (10.12.247.54)

Worker Node:  k3snode1.devopsart.com (10.12.247.55)

Go to each server and run "yum update" to get the latest packages and do a reboot.

Make sure the firewall between these two Linux servers allows the required traffic.

Install Kubectl,

Go to the root path of the master node and run the below commands,

curl -LO https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl

chmod +x kubectl

cp kubectl /usr/bin

Check the kubectl version to make sure the command is working,

kubectl version --short

Go to the master and worker nodes and make sure the hosts file is updated with the below details if DNS is not resolving.
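Based on the hostnames and IPs listed above, the /etc/hosts entries would look like this:

```
10.12.247.54 k3smaster.devopsart.com
10.12.247.55 k3snode1.devopsart.com
```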


Step 2: Install K3s in the Master server

Use below command in master server to install k3s,

curl -sfL https://get.k3s.io | sh -

Once successfully installed, you can run below to check the k3s service status,

systemctl status k3s

We can see the k3s config file in the below path in Master,

cat /etc/rancher/k3s/k3s.yaml

Next, we need to copy the config file to use in kubectl.

mkdir ~/.kube

cp /etc/rancher/k3s/k3s.yaml ~/.kube/config

Then check,

kubectl get nodes

The K3s master node is successfully installed. Next, we will do the worker node installation.


Step 3: Install the k3s agent on the worker node

Go to the worker node and execute the below command,

curl -sfL https://get.k3s.io | K3S_URL=${k3s_Master_url} K3S_TOKEN=${k3s_master_token} sh -

k3s_Master_url = https://k3smaster.devopsart.com:6443

k3s_master_token = the token from the master, obtained by executing the below command on the master,

cat /var/lib/rancher/k3s/server/node-token

Once the installation is successful, we can check the k3s agent status by executing the below command,

systemctl status k3s-agent.service


Step 4: K3s Installation validation

Go to the master node and check whether the new worker node is listed by running the below command,

kubectl get nodes

Great! The worker node is attached successfully to the k3s master.


Step 5: Deploy the Nginx webserver in K3s and validate,

I am using a Helm chart installation for this purpose.

Helm install,

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

cp -r /usr/local/bin/helm /usr/bin/helm

Add Bitnami repo in Helm,

helm repo add bitnami https://charts.bitnami.com/bitnami

Deploy Nginx webserver by using below helm command,

helm install nginx-web bitnami/nginx

Check the pod status,

kubectl get pods -o wide

The Nginx pod is running fine now.

Access Nginx webserver,

I took the clusterIP of the Nginx service and tried to access it, and it's working.


That's all. K3s was successfully installed on CentOS 8.5, and the Nginx webserver was deployed and validated.
