

In Part 1, we covered how to set up Grafana Loki and Grafana Agent to view Kubernetes pod logs.

In Part 2, we covered how to configure Grafana Agent on a Windows VM and export application logs to Grafana Loki.

In this Part 3, we will see how to export Azure PaaS service logs to Grafana Loki and view them from the Grafana dashboard.

Requirements:
  • Grafana Loki
  • Azure Event Hub
  • Azure AKS, or any PaaS service that offers "Diagnostic settings"

Step 8: Create an Azure Event Hub namespace,

Go to the Azure Portal and create the Event Hub namespace with one Event Hub. (Currently, we are going to use Azure AKS, so we will create one Event Hub named "aks" under the Event Hub namespace)
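If you prefer scripting over the portal, the namespace and hub can be sketched with az-cli (the namespace, resource group, and location names below are example values, not from this post):

```shell
# Create the Event Hub namespace and one Event Hub named "aks" in it.
az eventhubs namespace create \
  --name devopsart-ns \
  --resource-group devopsart-rg \
  --location eastus \
  --sku Standard

az eventhubs eventhub create \
  --name aks \
  --namespace-name devopsart-ns \
  --resource-group devopsart-rg
```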

Step 9: Configure Azure AKS to send logs to Azure Event Hub,

Go to Azure AKS, in the side blade select "Diagnostic Settings", and choose "Add Diagnostic Setting".

Then, in the new page, select which logs need to be sent to the Event Hub and choose "Stream to an Event Hub". Here, provide the newly created Event Hub namespace and Event Hub.
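The portal steps above can also be sketched with az-cli; a hedged example (the resource ID, authorization-rule ID, and log category below are placeholders to adapt to your cluster):

```shell
# Stream AKS control-plane logs to the Event Hub created earlier.
az monitor diagnostic-settings create \
  --name aks-to-eventhub \
  --resource <aks-resource-id> \
  --event-hub aks \
  --event-hub-rule <eventhub-namespace-authorization-rule-id> \
  --logs '[{"category":"kube-apiserver","enabled":true}]'
```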


Step 10: Configure Grafana Agent to scrape messages from Azure Event Hub,

Next, we need to pull the data from Azure Event Hub and push it to Grafana Loki.

In our existing grafana-agent-values.yaml, add the below lines to pull messages from Azure Event Hub, then redeploy Grafana Agent in AKS.

Here is the reference GitHub URL, and below is the configuration snippet.

https://github.com/DevOpsArts/grafana_loki_agent/blob/main/grafana-agent-values-azure-aks.yaml


loki.source.azure_event_hubs "azure_aks" {
  fully_qualified_namespace = "==XXX Eventhub namespace hostname XX===:9093"
  event_hubs                = ["aks"]
  forward_to                = [loki.write.local.receiver]
  labels                    = {
    "job" = "azure_aks",
  }

  authentication {
    mechanism         = "connection_string"
    connection_string = "===XXX Eventhub connection String XX==="
  }
}

Replace the placeholder values above (the Event Hub namespace hostname and connection string) with your own. We can add multiple Event Hubs in the Grafana Agent by providing a different job name for each Azure PaaS service.
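For example, a second PaaS source could be added alongside the first with its own job label; a sketch with a hypothetical Event Hub named "iot":

```river
// Hypothetical second source: an Event Hub named "iot" with its own job label.
loki.source.azure_event_hubs "azure_iot" {
  fully_qualified_namespace = "<eventhub-namespace-hostname>:9093"
  event_hubs                = ["iot"]
  forward_to                = [loki.write.local.receiver]
  labels                    = { "job" = "azure_iot" }

  authentication {
    mechanism         = "connection_string"
    connection_string = "<eventhub-connection-string>"
  }
}
```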

Note: Make sure communication is established between Azure AKS and Azure Event Hub so that messages can be sent on port 9093.

Redeploy Grafana Agent in AKS using the below command,

helm upgrade --install --values grafana-agent-values-azure-aks.yaml grafana-agent grafana/grafana-agent -n observability



Check that all the Grafana Agent pods are up and running using the below command,

kubectl get all -n observability

Now, the Grafana Agent will pull messages from Azure Event Hub and push them to Grafana Loki for the Azure AKS cluster that is configured to send logs via Diagnostic settings.

We can verify message processing on the Azure Event Hub side, including the incoming and outgoing message counts.


Step 11: Access Azure AKS logs in the Grafana dashboard,

Go to the Grafana dashboard, Home > Explore > select the Loki data source.

In the filter section, select "Job" and set the value to the job name given in grafana-agent-values-azure-aks.yaml. In our case, the job name is "azure_aks".


That's all! We have successfully deployed centralized logging with Grafana Loki and Grafana Agent for Kubernetes, a VM application, and Azure PaaS.



In Part 1, we covered how to set up Grafana Loki and Grafana Agent to view Kubernetes pod logs.

In this Part 2, we will explore how to configure Grafana Agent on a Windows VM and export application logs to Grafana Loki.

Requirements:

  • Grafana Loki
  • Grafana Agent
  • Windows VM with one application
  • Grafana Dashboard

Step 6: Install Grafana Agent on a Windows VM,

Download the latest Windows Grafana Agent from this location,

Windows: https://github.com/grafana/agent/releases/download/v0.40.2/grafana-agent-installer.exe.zip

For other operating systems, refer to the Grafana Agent releases page.

Next, double-click the downloaded installer and install it. By default on Windows, the install path is,

C:\Program Files\Grafana Agent


Once the installation is complete, we need to update the configuration based on our needs, such as which application logs to send to Grafana Loki.

In our case, we installed the Grafana dashboard on the Windows VM and configured the Grafana dashboard logs in the Grafana Agent.

Similarly, we can add multiple applications with different job names.

Copy the Grafana Agent config file from the below repo and make the required changes according to your needs.

https://github.com/DevOpsArts/grafana_loki_agent/blob/main/agent-config.yaml
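The agent's static mode groups log sources under a logs section; a minimal sketch of that shape (the client URL, positions path, and log path below are example values, not the repo's actual contents):

```yaml
# Example logs section for the Windows Grafana Agent (static mode).
logs:
  configs:
    - name: default
      clients:
        - url: http://<loki-distributed-gateway>/loki/api/v1/push
      positions:
        filename: C:\ProgramData\grafana-agent\positions.yaml
      scrape_configs:
        - job_name: grafana
          static_configs:
            - targets: [localhost]
              labels:
                job: devopsart-vm
                __path__: C:\Program Files\GrafanaLabs\grafana\data\log\*.log
```

Each additional application gets its own scrape config with a distinct job label so the streams stay separable in Loki.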

Next, start the Grafana Agent service from services.msc.

We can also start it manually with the below command in a command prompt.

In the command prompt, go to C:\Program Files\Grafana Agent

Execute the below command,

grafana-agent-windows-amd64.exe --config.file=agent-config.yaml

Running it in the foreground like this helps to find any issues with the configuration.

Note: The Grafana Loki distributed service endpoint (which is configured in agent-config.yaml) should be accessible from the Windows VM.


Step 7: Access VM application logs in Grafana Loki,

Go to Grafana Dashboard > Home > Explore > Select Loki Datasource

In the filter section, select "Job" and set the value to the job name given in agent-config.yaml. In our case, the job name is "devopsart-vm".
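The filter UI builds a LogQL query behind the scenes; the same logs can be queried directly in Explore, optionally with a line filter (the "error" filter here is just an example):

```logql
{job="devopsart-vm"} |= "error"
```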


Now we are able to view the Grafana dashboard logs in Grafana Loki. You can create dashboards from here based on your preference.

In this Part 2, we covered how to export Windows VM application logs to Grafana Loki and how to view them from the Grafana dashboard.

In Part 3, we will cover how to export Azure PaaS service logs to Grafana Loki.




Dealing with multiple tools for capturing application logs from different sources can be a hassle for anyone. In this blog post, we'll dive into the steps required to establish centralized logging with Grafana Loki and Grafana Agent. This solution will allow us to unify the collection of logs from Kubernetes pods, VM services, and Azure PAAS services.

Grafana Loki: a highly scalable log aggregation system designed for cloud-native environments.

Grafana Agent: an observability agent that collects metrics and logs from various applications for visualization and analysis in Grafana.

Requirement:

  • Kubernetes cluster (latest version)
  • Helm
  • Azure PaaS (Event Hub, AKS, IoT, etc.)
  • VM with any application
  • Azure storage account (Loki backend)
  • Azure subscription with admin privileges

Step 1: Deploy Grafana Loki in a Kubernetes (K8s) cluster,

Ensure you have admin permission for the k8s cluster.

Before deploying Grafana Loki in the k8s cluster, certain changes are required in the configuration.

Note: We are going to use an Azure Storage container as the Loki backend to store the logs, and we will use the Loki distributed version.

Execute the below helm commands to add the Grafana repository,

helm repo add grafana https://grafana.github.io/helm-charts

helm repo update


Execute the below commands to export the Grafana Loki and Grafana Agent configuration via helm,

helm show values grafana/loki-distributed > loki-values.yaml

helm show values grafana/grafana-agent > grafana-agent-values.yaml

In loki-values.yaml, update the below configuration to use the Azure storage account as the backend.

schemaConfig:
  configs:
    - from: "2020-09-07"
      store: boltdb-shipper
      object_store: azure
      schema: v11
      index:
        prefix: index_
        period: 24h
storageConfig:
  boltdb_shipper:
    shared_store: azure
    active_index_directory: /var/loki/index
    cache_location: /var/loki/cache
    cache_ttl: 1h
  filesystem:
    directory: /var/loki/chunks
  azure:
    account_name: === Azure Storage name ===
    account_key: === Azure Storage access key ===
    container_name: === Container Name ===
    request_timeout: 0


Here is the full values file for reference.

https://github.com/DevOpsArts/grafana_loki_agent/blob/main/loki-distributed-values.yaml
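The account key placeholder above can be fetched with az-cli instead of the portal; a sketch with example account and resource group names (not from this post):

```shell
# Look up the primary access key for the storage account used as the Loki backend.
az storage account keys list \
  --account-name devopsartstore \
  --resource-group devopsart-rg \
  --query "[0].value" -o tsv
```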

Next, deploy Grafana Loki.

Execute the below command to deploy Loki in the k8s cluster,

helm upgrade --install --values loki-distributed-values.yaml loki grafana/loki-distributed -n observability --create-namespace


Verify the pods are up and running by using below command,

kubectl get all -n observability


Now all the pods are up and running.


Step 2: Deploy Grafana agent in K8s cluster,

Deploy Grafana Agent in the k8s cluster to export the k8s pod logs to Loki.

Update the Grafana Agent values before deploying.

In grafana-agent-values.yaml, replace the Loki distributed gateway service endpoint (line number 169 in the below file) with your namespace. Currently, observability is used.

https://github.com/DevOpsArts/grafana_loki_agent/blob/main/grafana-agent-values.yaml

loki.write "local" {
        endpoint {
          url = "http://loki-loki-distributed-gateway.observability.svc.cluster.local/loki/api/v1/push"
        }
      }

Currently, grafana-agent-values.yaml is configured to export k8s pod logs, k8s events, etc.

Next, deploy Grafana Agent using the below command,

helm install --values grafana-agent-values.yaml grafana-agent grafana/grafana-agent -n observability


Verify the pods are up and running by using below command,

kubectl get all -n observability

Now you can go to the Azure storage account configured in Grafana Loki and verify that the logs are being written to the respective container.


Step 3: Deploy Grafana to view the pod logs,

Execute below command to install Grafana in K8s cluster,

helm install grafana grafana/grafana -n observability


Verify the pods are up and running by using below command,

kubectl get all -n observability

Use the below command to get the admin password for Grafana,

kubectl get secret --namespace observability grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Step 4:  Access Grafana and view the pod logs

Use port forward or use NodePort or Ingress to access the Grafana Dashboard.
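For a quick look without exposing a service, port-forwarding is enough; a sketch assuming the chart was installed with release name "grafana" in the observability namespace as above:

```shell
# Forward local port 3000 to the Grafana service, then browse http://localhost:3000
kubectl port-forward svc/grafana 3000:80 -n observability
```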

- Configure Loki as a data source in Grafana,

Once logged in to Grafana, go to Home > Connections > search for "Loki", then select it.


Next, give the connection URL as,

http://loki-loki-distributed-gateway.observability.svc.cluster.local

Then select "Save & test".


- Next import Loki dashboard,

Go to Grafana Home > Dashboard > select New and Import.

Use the below Grafana dashboard template ID "15141", load it, and save it.

https://grafana.com/grafana/dashboards/15141-kubernetes-service-logs/


Next, click the dashboard. You should be able to view all the pod logs now.


Step 5:  View the Kubernetes(K8s) Events in Grafana,

To view the Kubernetes events, go to the Grafana home page > Explore > select the Loki data source > select "Job" and set the value to "loki.source.kubernetes_events".
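Equivalently, the events can be queried directly with LogQL; the line filter for warning events is optional:

```logql
{job="loki.source.kubernetes_events"} |= "Warning"
```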


In this part, we covered how to deploy Grafana Loki with Azure Storage as the backend, deployed Grafana Agent to collect Kubernetes pod logs, and deployed the Grafana dashboard to visualize the pod logs and Kubernetes events.

In Part 2, we will explain how to configure Grafana Agent for VM applications and send the logs to the same Grafana Loki.

In Part 3, we will cover how to export Azure PaaS service logs to Grafana Loki.



In this blog, we will explore a new tool called "Rover", which helps to visualize the Terraform plan.

Rover : This open-source tool is designed to visualize Terraform Plan output, offering insights into infrastructure and its dependencies.

We will use the "Rover" Docker image to do our setup and visualize the infra.

Requirements:

1. Linux/Windows VM

2. Docker

Step 1: Generate the terraform plan output

I have a sample Azure terraform block in the devopsart folder; we will generate the terraform plan output from there and store it locally.

cd devopsart

terraform plan -out tfplan.out

terraform show -json tfplan.out > tfplan.json

Now both the files are generated.
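Before feeding tfplan.json to Rover, it can be sanity-checked from the shell; a hedged sketch assuming jq is installed:

```shell
# List the planned action(s) for each resource address in the plan JSON.
jq -r '.resource_changes[] | "\(.change.actions | join(",")) \(.address)"' tfplan.json
```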


Step 2 : Run Rover tool locally,

Execute the below docker command to run Rover from the same path as in step 1,

docker run --rm -it -p 9000:9000 -v $(pwd)/tfplan.json:/src/tfplan.json im2nguyen/rover:latest -planJSONPath=tfplan.json

It runs the web UI on port 9000.


Step 3 : Accessing Rover WebUI,

Let's access the web UI and check it,

Go to browser, and enter http://localhost:9000


In the UI, the color codes on the left side help in understanding the actions that will take place for the resources when running terraform apply.

When a specific resource is selected from the image, it will provide the name and parameter information.

Additionally, the image can be saved locally by clicking the 'Save' option.

I hope this is helpful for someone who is genuinely confused by the Terraform plan output, especially when dealing with a large infrastructure.


Thanks for reading!! We have tried the Rover tool and experimented with examples.


Reference:

https://github.com/im2nguyen/rover


In this blog, we will see a new tool called Infracost, which helps provide expected cloud cost estimates based on Terraform code. We will cover the installation and demonstrate how to use this tool.

Infracost: It provides cloud cost projections from Terraform. It enables engineers to view a detailed cost breakdown and understand expenses before implementation.

Requirements:

1. One Windows/Linux VM

2. Terraform

3. Terraform examples


Step 1: Infracost installation,

For Mac, use below brew command to do the installation,

brew install infracost

For other operating systems, follow the below link,

https://www.infracost.io/docs/#quick-start


Step 2 : Infracost configuration,

We need to set up the Infracost API key by signing up here,

https://dashboard.infracost.io

Once logged in, visit the following URL to obtain the API key,

https://dashboard.infracost.io/org/praboosingh/settings/general

Next, open the terminal and set the key as an environment variable using the following command,

# export INFRACOST_API_KEY=XXXXXXXXXXXXX

or You can log in to the Infracost UI and grant terminal access by using the following command,

# infracost auth login

Note: Infracost will not send any cloud information to their server.


Step 3: Infracost validation

Next, we will do the validation. For validation purposes, I have cloned the below GitHub repo, which contains terraform examples.

# git clone https://github.com/alfonsof/terraform-azure-examples.git

# cd terraform-azure-examples/code/01-hello-world

Try infracost by using the below command to get the estimated cost for a month,

# infracost breakdown --path .

To save the report in JSON format and upload it to the Infracost server, use the below commands,

# infracost breakdown --path . --format json --out-file infracost-demo.json

# infracost upload --path infracost-demo.json

In case we plan to upgrade the infrastructure and need to understand the new cost, execute the following command to compare it with the previously saved output from the Terraform code path.

# infracost diff --path . --compare-to infracost-demo.json


Thanks for reading!! We have installed Infracost and experimented with examples.


References:

https://github.com/infracost/infracost

https://www.infracost.io/docs/#quick-start




In this blog, we will install and examine a new tool called Trivy, which helps identify vulnerabilities, misconfigurations, licenses, secrets, and software dependencies in the following,

1. Container image

2. Kubernetes cluster

3. Virtual machine image

4. Filesystem

5. Git repo

6. AWS


Requirements:

1. One virtual machine

2. Any of the above-mentioned targets


Step 1: Install Trivy

Execute the below command based on your OS,

For Mac : 

brew install trivy

For other OS, please refer below link,
https://aquasecurity.github.io/trivy/v0.45/getting-started/installation/



Step 2 : Check an image with Trivy,

Let's try with the latest Nginx web server image to identify security vulnerabilities.

Execute the below command,

Syntax: trivy image <image name>:<version>

trivy image nginx:latest



It will provide a detailed view of the image, including the base image, each layer's information, and their vulnerability status in the report.


Step 3 : Check a github repo with Trivy,

Example github repo, https://github.com/akveo/kittenTricks.git

Execute the following command to check for vulnerabilities in the Git repo,

trivy repo https://github.com/akveo/kittenTricks.git

If you want to see only critical vulnerabilities, you can specify the severity using the following command,

trivy repo --severity CRITICAL  https://github.com/akveo/kittenTricks.git
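The report can also be exported for later processing; a sketch using Trivy's JSON output flags (the output filename is arbitrary):

```shell
# Write the scan report as JSON instead of the default table output.
trivy repo --severity CRITICAL --format json -o kittenTricks-report.json https://github.com/akveo/kittenTricks.git
```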



Step 4: Check a YAML file with Trivy,

I have used the below yaml from the k8s website to check this,

https://k8s.io/examples/application/deployment.yaml

Execute the below command to find misconfigurations in the yaml (the file was saved locally as nginx.yaml),

trivy conf nginx.yaml



Step 5 : Check terraform script with Trivy,

I have used the below sample tf script to check it,

https://github.com/alfonsof/terraform-aws-examples/tree/master/code/01-hello-world

Execute the below command to find misconfigurations in the tf script,

trivy conf 01-hello-world



That's all. We have installed the Trivy tool and validated it in each category. Thank you for reading!!


References,

https://github.com/aquasecurity/trivy
https://aquasecurity.github.io/trivy/v0.45/docs/






In this blog post, We will explore a new tool called "KOR" (Kubernetes Orphaned Resources), which assists in identifying unused resources within a Kubernetes(K8S) cluster. This tool will be beneficial for those who are managing Kubernetes clusters.

Requirements:

1. One machine (Linux/Windows/Mac)

2. K8s cluster


Step 1: Install kor on the machine.

I am using a Linux VM for the experiment; for other flavours, download the binaries from the below link,

https://github.com/yonahd/kor/releases

Download the Linux binary for the Linux VM,

wget https://github.com/yonahd/kor/releases/download/v0.1.8/kor_Linux_x86_64.tar.gz

tar -xvzf kor_Linux_x86_64.tar.gz

chmod +x kor

cp -r kor /usr/bin

kor --help


Step 2: Nginx webserver deployment in K8s

I have a k8s cluster; we will deploy the nginx webserver in K8s and try out the "kor" tool.

Create a namespace called "nginxweb",

kubectl create namespace nginxweb

Using helm, we will deploy the nginx webserver with the below command,

helm install nginx bitnami/nginx --namespace nginxweb 

kubectl get all -n nginxweb


Step 3: Validate with the kor tool

Let's check for unused resources with the kor tool in the nginx namespace.

The below command will list all the unused resources in the given namespace,

Syntax: kor all -n <namespace>

kor all -n nginxweb

Let's delete the nginx deployment from the nginxweb namespace and try it.

kubectl delete deployments nginx -n nginxweb

Now check which resources are available in the namespace,

kubectl get all -n nginxweb

It shows that one k8s service is still available under the nginxweb namespace.

Now try the kor tool again using the below command,

kor all -n nginxweb

It gives the same result: the nginx service is not used anywhere in the namespace.

We can also check only configmaps/secrets/services/serviceaccounts/deployments/statefulsets/roles/hpa by,

kor services -n nginxweb

kor serviceaccount -n nginxweb

kor secret -n nginxweb


That's all. We have installed the KOR tool and validated it by deleting one of the components in the nginx webserver deployment.


References:

https://github.com/yonahd/kor


In this blog, We will see an interesting tool that helps DevOps/SRE professionals working in the Azure Cloud.

Are you worried that your Infrastructure as Code (IaC) is not in a good state, and there have been lots of manual changes? Here is a solution provided by Azure: a tool named "Azure Export for Terraform (aztfexport)".

This tool assists in exporting the current Azure resources into Terraform code. Below, we will see the installation of this tool and how to use it.

Requirements:

1. A Linux/Windows machine

2. Terraform (>= v0.12)

3. az-cli

4. Azure subscription account


Step 1: aztfexport installation,

This tool can be installed on all operating systems. Refer to the link below for installation instructions for other OSes:

https://github.com/Azure/aztfexport

If you are installing it on macOS, open the terminal and execute the following command:

brew install aztfexport


Step 2: Configure the Azure subscription

Execute the below commands to configure the Azure subscription in the terminal,

az login    or  

az login --use-device-code

Next, set the subscription ID,

az account set --subscription "subscription id"

Now that the Azure subscription is configured, let's proceed with trying out the tool.

In this subscription, I have a resource group named "devopsart-dev-rg" which contains a virtual machine (VM). We will generate the Terraform code for this VM.


Step 3: Experiment with the "aztfexport" tool

Execute the below commands to generate the tf code,

Create a new directory with any name,

mkdir aztfexport && cd aztfexport

The below command helps to check the available options for this tool.

aztfexport --help

Execute the below command to generate the terraform code from the "devopsart-dev-rg" resource group,

Syntax: aztfexport resource-group <resource-group-name>

aztfexport resource-group devopsart-dev-rg

It will take a few seconds to list the available resources in the given resource group (RG),

and it will list all the resources under the RG like below.

Next, enter "w" to import the resources; it will take some more time to generate the code.

Once it's completed, we can validate the tf files.


Step 4 : Validate the tf files

We will validate the generated files; the following files are present in the directory:

main.tf

provider.tf

terraform.tf

aztfexportResourceMapping.json

terraform.tfstate (we can store this state file remotely by using the below parameters)

aztfexport [subcommand] --backend-type=azurerm \
                        --backend-config=resource_group_name=<resource group name> \
                        --backend-config=storage_account_name=<account name> \
                        --backend-config=container_name=<container name> \
                        --backend-config=key=terraform.tfstate


Run terraform plan.

Nice! It says no change is required in the Azure cloud infra.


Step 5: Delete the Azure resources and recreate them with the generated tf files,

The resources are deleted from the Azure Portal under the dev RG.


Now run the terraform commands to create the resource,

cd aztfexport

terraform plan


Next execute,

terraform apply


Now all the resources are recreated with the generated tf files.

That's all. We have installed the aztfexport tool, generated tf files, destroyed the Azure resources, and recreated them with the generated files.


Check the below link for the current limitations,

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-concepts#limitations


References,

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-overview

https://github.com/Azure/aztfexport

https://www.youtube.com/watch?v=LWk9SU7AmDA

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-advanced-scenarios



Part 3We will cover how to export Azure PAAS services logs to Grafana Loki



In this blog, we will explore a new tool called 'Rover,' which helps to visualize the Terraform plan output.

Rover : This open-source tool is designed to visualize Terraform Plan output, offering insights into infrastructure and its dependencies.

We will use the "Rover" Docker image to do our setup and visualize the infra.

Requirements:

1.Linux/Windows VM

2. Docker

Step 1 : Generate terraform plan output

I have a sample Azure Terraform block in the devopsart folder; we will generate the Terraform plan output from there and store it locally.

cd devopsart

terraform plan -out tfplan.out

terraform show -json tfplan.out > tfplan.json

Now both the files are generated.


Step 2 : Run Rover tool locally,

Execute below docker command to run rover from the same step 1 path,

docker run --rm -it -p 9000:9000 -v $(pwd)/tfplan.json:/src/tfplan.json im2nguyen/rover:latest -planJSONPath=tfplan.json

It runs the web UI on port 9000.
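If you prefer Docker Compose, the docker run invocation above can be sketched as a compose file. This is an untested equivalent (same image, port, mount, and flag as the command above; file names follow Step 1), not something from the Rover docs:

```yaml
# Hypothetical docker-compose equivalent of the docker run command above.
services:
  rover:
    image: im2nguyen/rover:latest
    ports:
      - "9000:9000"                        # Rover web UI
    volumes:
      - ./tfplan.json:/src/tfplan.json     # plan JSON generated in Step 1
    command: ["-planJSONPath=tfplan.json"]
```

Run it with "docker compose up" from the same folder and open the UI as described below.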


Step 3 : Accessing Rover WebUI,

Let's access the WebUI and check it,

Go to browser, and enter http://localhost:9000


In the UI, color codes on the left side provide assistance in understanding the actions that will take place for the resources when running terraform apply

When a specific resource is selected from the image, it will provide the name and parameter information.

Additionally, the image can be saved locally by clicking the 'Save' option

I hope this is helpful for someone who is genuinely confused by the Terraform plan output, especially when dealing with a large infrastructure.


Thanks for reading!! We have tried Rover tool and experimented with examples.


Reference:

https://github.com/im2nguyen/rover


In this blog, we will see a new tool called Infracost, which helps provide expected cloud cost estimates based on Terraform code. We will cover the installation and demonstrate how to use this tool.

Infracost : It provides cloud cost projections from Terraform. It enables engineers to view a detailed cost breakdown and comprehend expenses before implementation.

Requirement :

1. One Windows/Linux VM

2.Terraform

3.Terraform examples


Step 1 : infracost installation,

For Mac, use below brew command to do the installation,

brew install infracost

For other Operating systems, follow below link,

https://www.infracost.io/docs/#quick-start


Step 2 : Infracost configuration,

We need to set up the Infracost API key by signing up here,

https://dashboard.infracost.io

Once logged in, visit the following URL to obtain the API key,

https://dashboard.infracost.io/org/praboosingh/settings/general

Next, open the terminal and set the key as an environment variable using the following command,

# export INFRACOST_API_KEY=XXXXXXXXXXXXX

or You can log in to the Infracost UI and grant terminal access by using the following command,

# infracost auth login

Note : Infracost will not send any cloud information to their server.


Step 3 : Infracost validation

Next, We will do the validation. For validation purposes, I have cloned the below GitHub repo, which contains Terraform examples.

# git clone https://github.com/alfonsof/terraform-azure-examples.git

# cd terraform-azure-examples/code/01-hello-world

Try infracost by using the below command to get the estimated cost for a month,

# infracost breakdown --path .

To save the report in json format and upload to infracost server, use below command,

# infracost breakdown --path . --format json --out-file infracost-demo.json

# infracost upload --path infracost-demo.json

In case we plan to upgrade the infrastructure and need to understand the new cost, execute the following command to compare it with the previously saved output from the Terraform code path.

# infracost diff --path . --compare-to infracost-demo.json


Thanks for reading!! We have installed infracost and experimented with examples.


References:

https://github.com/infracost/infracost

https://www.infracost.io/docs/#quick-start




In this blog, we will install and examine a new tool called Trivy, which helps identify vulnerabilities, misconfigurations, licenses, secrets, and software dependencies in the following,

1.Container image

2.Kubernetes Cluster

3.Virtual machine image

4.FileSystem

5.Git Repo

6.AWS


Requirements,

1.One Virtual Machine

2.Any one of the above-mentioned targets


Step 1 : Install Trivy

Execute the below command based on your OS,

For Mac : 

brew install trivy

For other OS, please refer below link,
https://aquasecurity.github.io/trivy/v0.45/getting-started/installation/



Step 2 : Check an image with Trivy,

Let's try with the latest Nginx web server image to identify security vulnerabilities.

Execute the below command,

Syntax : trivy image <image name>:<version>

trivy image nginx:latest



It will provide a detailed view of the image, including the base image, each layer's information, and their vulnerability status in the report.


Step 3 : Check a github repo with Trivy,

Example github repo, https://github.com/akveo/kittenTricks.git

Execute the following command to check for vulnerabilities in the Git repo,

trivy repo https://github.com/akveo/kittenTricks.git

If you want to see only critical vulnerabilities, you can specify the severity using the following command,

trivy repo --severity CRITICAL  https://github.com/akveo/kittenTricks.git



Step 4: Check a YAML file with Trivy,

I have used the below yaml from the k8s website (saved locally as nginx.yaml) to check this,

https://k8s.io/examples/application/deployment.yaml

Execute the below command to find the misconfiguration in the yaml,

trivy conf nginx.yaml



Step 5 : Check terraform script with Trivy,

I have used below sample tf script to check it,

https://github.com/alfonsof/terraform-aws-examples/tree/master/code/01-hello-world

Execute the below command to find the misconfiguration in the tf script,

trivy conf 01-hello-world



That's all, we have installed the Trivy tool and validated it against each category. Thank you for reading!!!


References,

https://github.com/aquasecurity/trivy
https://aquasecurity.github.io/trivy/v0.45/docs/






In this blog post, We will explore a new tool called "KOR" (Kubernetes Orphaned Resources), which assists in identifying unused resources within a Kubernetes(K8S) cluster. This tool will be beneficial for those who are managing Kubernetes clusters.

Requirements:

1.One machine(Linux/Windows/Mac)

2.K8s cluster


Step 1 : Install kor in the machine.

I am using a Linux VM for this experiment; for other platforms, download the binaries from the below link,

https://github.com/yonahd/kor/releases

Download the linux binary for linux VM,

wget https://github.com/yonahd/kor/releases/download/v0.1.8/kor_Linux_x86_64.tar.gz

tar -xvzf kor_Linux_x86_64.tar.gz

chmod +x kor

cp kor /usr/bin

kor --help


Step 2 : Nginx Webserver deployment in K8s

I have a k8s cluster, We will deploy nginx webserver in K8s and try out "kor" tool

Create a namespace as "nginxweb"

kubectl create namespace nginxweb

Using helm, we will deploy nginx webserver by below command,

helm install nginx bitnami/nginx --namespace nginxweb 

kubectl get all -n nginxweb


Step 3 : Validate with kor tool

Let's check the unused resources with the kor tool in the nginx namespace,

Below command will list all the unused resources available in the given namespace,

Syntax : kor all -n namespace

kor all -n nginxweb

Let's delete the nginx deployment from the nginxweb namespace and try it.

kubectl delete deployments nginx -n nginxweb

Now check which resources are available in the namespace,

kubectl get all -n nginxweb

It shows that one k8s service is still present under the nginxweb namespace.

And now try out with kor tool using below command,

kor all -n nginxweb

It gives the expected result: the nginx service is not used anywhere in the namespace.

We can also check a single resource type (configmap/secret/services/serviceaccount/deployments/statefulsets/role/hpa), for example:

kor services -n nginxweb

kor serviceaccount -n nginxweb

kor secret -n nginxweb


That's all. We have installed the KOR tool and validated it by deleting one of the components in the Nginx web server deployment.


References:

https://github.com/yonahd/kor


In this blog, We will see an interesting tool that helps DevOps/SRE professionals working in the Azure Cloud.

Are you worried that your Infrastructure as Code (IAC) is not in a good state, and there have been lots of manual changes? Here is a solution provided by Azure - a tool named "Azure Export for Terraform (aztfexport)".

This tool assists in exporting the current Azure resources into Terraform code. Below, we will see the installation of this tool and how to use it.

Requirements:

1.A Linux/Windows machine

2.Terraform (>= v0.12)

3.az-cli

4.Azure subscription account


Step 1 : aztfexport installation,

This tool can be installed on all operating systems. Refer to the link below for installation instructions for other OS:

https://github.com/Azure/aztfexport

If you are installing it on macOS, open the terminal and execute the following command:

brew install aztfexport


Step 2 : Configure azure subscription

Execute below commands to configure the azure subscription in terminal,

az login

or

az login --use-device-code

next set the subscription id,

az account set --subscription "subscription id"

Now that the Azure subscription is configured, let's proceed with trying out the tool.

In this subscription, I have a resource group named "devopsart-dev-rg" which contains a virtual machine (VM). We will generate the Terraform code for this VM.


Step 3 : Experiment "aztfexport" tool

Execute the below commands to generate the tf code,

Create a new directory in any name,

mkdir aztfexport && cd aztfexport

Below command will help to check the available option for this tool.

aztfexport --help

Execute the below command to generate the terraform code from "devopsart-dev-rg" rg

Syntax : aztfexport resource-group resource-grp-name

aztfexport resource-group devopsart-dev-rg

It will take a few seconds to list the available resources in the given resource group (RG),

and it will list all the resources under the RG like below,

next enter "w" to import the resources and it will take some more time to generate it.

Once it's completed, we can validate the tf files.


Step 4 : Validate the tf files

We will validate the generated files, and the following files are present in the directory,

main.tf

provider.tf

terraform.tf

aztfexportResourceMapping.json

terraform.tfstate (We can store this state file remotely by using the below parameters)

aztfexport [subcommand] --backend-type=azurerm \

                        --backend-config=resource_group_name=<resource group name> \

                        --backend-config=storage_account_name=<account name> \

                        --backend-config=container_name=<container name> \

                        --backend-config=key=terraform.tfstate
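For reference, those --backend-config flags correspond to a standard azurerm backend block in Terraform; the generated configuration would contain something along these lines (placeholders kept as above):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "<resource group name>"
    storage_account_name = "<account name>"
    container_name       = "<container name>"
    key                  = "terraform.tfstate"
  }
}
```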


Run, terraform plan

Nice! It says no changes are required in the Azure cloud infra.


Step 5 : Delete the azure resource and recreate with generated tf files,

The resources are deleted from Azure Portal under the dev rg,


Now run the terraform commands to create the resource,

cd aztfexport

terraform plan


Next execute,

terraform apply


Now all the resources are recreated with the generated tf files.

That's all. We have installed the aztfexport tool, generated tf files, destroyed the Azure resources, and recreated them with the generated files.


check below link for the current limitations,

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-concepts#limitations


References,

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-overview

https://github.com/Azure/aztfexport

https://www.youtube.com/watch?v=LWk9SU7AmDA

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-advanced-scenarios


In Part 1, We have covered how to setup Grafana Loki and Grafana Agent to view 
Kubernetes pod logs

In Part 2, We have covered how to configure Grafana Agent on Windows VM and export application logs to Grafana Loki.

In this Part 3, We will see how to export Azure PAAS service logs to Grafana Loki and view them from the Grafana Dashboard.

Requirement: 
  • Grafana Loki
  • Azure Event Hub
  • Azure AKS or any PAAS that has the "Diagnostic Settings" option

Step 8: Create an Azure Event Hub namespace,

Go to the Azure Portal and create the Event Hub namespace with one Event Hub. (Currently, we are going to use Azure AKS, so we will create one Event Hub named "aks" under the Event Hub namespace)






Step 9: Configure Azure AKS to send logs to Azure Eventhub,

Go to Azure AKS, in the side blade select "Diagnostic Settings", and choose "Add Diagnostic Setting".

Then, in the new page, select which logs need to be sent to the Event Hub and choose "Stream to an Event Hub". Here, provide the newly created Event Hub namespace and Event Hub.


Step 10: Configure Grafana Agent to scrape the messages from Azure Event Hub,

Next, We need to pull the data from Azure eventhub and push it to Grafana loki,

In our existing grafana-agent-values.yaml, add the below lines to pull the messages from Azure Event Hub, then redeploy the Grafana Agent in AKS.

Here is the reference github url and below is the yaml.

https://github.com/DevOpsArts/grafana_loki_agent/blob/main/grafana-agent-values-azure-aks.yaml


loki.source.azure_event_hubs "azure_aks" {
  fully_qualified_namespace = "==XXX Eventhub namespace hostname XX===:9093"
  event_hubs                = ["aks"]
  forward_to                = [loki.write.local.receiver]
  labels = {
    "job" = "azure_aks",
  }
  authentication {
    mechanism         = "connection_string"
    connection_string = "===XXX Eventhub connection String XX==="
  }
}

Replace the placeholder values above with the correct values. We can add multiple Event Hubs in the Grafana Agent by providing a different job name for each Azure PAAS.
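For example, a second Azure PAAS could be scraped by adding another loki.source.azure_event_hubs block alongside the first one. The "iot" Event Hub and the "azure_iot" job name below are only illustrative:

```river
loki.source.azure_event_hubs "azure_iot" {
  fully_qualified_namespace = "==XXX Eventhub namespace hostname XX===:9093"
  event_hubs                = ["iot"]                    // hypothetical Event Hub
  forward_to                = [loki.write.local.receiver]
  labels = {
    "job" = "azure_iot",                                 // distinct job per PAAS
  }
  authentication {
    mechanism         = "connection_string"
    connection_string = "===XXX Eventhub connection String XX==="
  }
}
```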

Note : Make sure the communication is established between Azure AKS and Azure Eventhub to send the messages on port 9093.

Redeploy grafana agent in AKS using below command,

helm upgrade --install --values grafana-agent-values-azure-aks.yaml grafana-agent grafana/grafana-agent -n observability



Check all the Grafana agent pods are up and running using below command,

kubectl get all -n observability

Now, the Grafana agent will pull the messages from Azure Event Hub and push them to Grafana Loki for Azure AKS, which is configured to send the logs in Diagnostic Settings.

We can verify the status of message processing from Azure Event Hub, including the status of incoming and outgoing messages.


Step 11: Access Azure AKS logs in Grafana dashboard,

Go to the Grafana Dashboard, Home > Explore > Select Loki Datasource

In the filter section, select "Job" and value as the job name which is given in the grafana-agent-values-azure-aks.yaml. In our case the job name is "azure_aks"
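The same filter can also be typed directly as a LogQL query in Explore; the |= line filter is optional and shown here only as an example:

```logql
{job="azure_aks"} |= "error"
```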


That's all! We have successfully deployed centralized logging with Grafana Loki and Grafana Agent for Kubernetes, VM applications, and Azure PAAS.



In Part 1, We covered how to setup Grafana Loki and Grafana Agent to view Kubernetes pod logs

In Part 2, We will explore how to configure Grafana Agent on VM and export application logs to Grafana Loki.

Requirement:

  • Grafana Loki
  • Grafana agent
  • Windows VM with one application
  • Grafana Dashboard

Step 6: Install Grafana Agent in Windows VM,

Download the latest Windows Grafana Agent from this location,

Windows: https://github.com/grafana/agent/releases/download/v0.40.2/grafana-agent-installer.exe.zip

For other operating systems, refer here,

Next, double-click the downloaded exe and install it. By default, on Windows the install path is,

C:\Program Files\Grafana Agent


Once the installation is completed, we need to update the configuration based on our needs, such as which application logs to send to Grafana Loki.

In our case, we installed Grafana Dashboard in the windows VM and configured the Grafana dashboard logs in Grafana agent.

Similarly, we can add multiple applications with different job names.

Copy the Grafana Agent config file from the below repo and make the required changes according to your needs.

https://github.com/DevOpsArts/grafana_loki_agent/blob/main/agent-config.yaml
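For orientation, the logs section of a static-mode Grafana Agent config typically looks like the sketch below. The Loki gateway URL, the positions path, and the log file path are assumptions for illustration; take the real values from the repo file above:

```yaml
logs:
  configs:
    - name: default
      clients:
        # Assumption: Loki distributed gateway endpoint reachable from the Windows VM
        - url: http://<loki-gateway-endpoint>/loki/api/v1/push
      positions:
        # Assumption: writable path for the read-position bookkeeping file
        filename: C:\ProgramData\grafana-agent\positions.yaml
      scrape_configs:
        - job_name: devopsart-vm
          static_configs:
            - targets: [localhost]
              labels:
                job: devopsart-vm
                # Assumption: path to the application log files on the VM
                __path__: "C:/Program Files/GrafanaLabs/grafana/data/log/*.log"
```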

Next start the grafana agent service from services.msc

We can also start it manually with the below command in Command Prompt.

In command Prompt go to, C:\Program Files\Grafana Agent

Execute below command,

grafana-agent-windows-amd64.exe --config.file=agent-config.yaml

This will help to find any issue with the configuration.

Note : The Grafana Loki distributed service endpoint (which is configured in agent-config.yaml) should be accessible from the Windows VM.


Step 7 : Access VM application logs in Grafana Loki,

Go to Grafana Dashboard > Home > Explore > Select Loki Datasource

In the filter section, select "Job" and value as the job name which is given in the agent-config.yaml. In our case the job name is "devopsart-vm"


Now We are able to view the Grafana Dashboard logs in Grafana Loki. You can create the Dashboard from here based on your preference.

In Part 2, We covered how to export Windows VM application logs to Grafana Loki and how to view them from the Grafana Dashboard.

In Part 3, We will cover how to export Azure PAAS services logs to Grafana Loki




Dealing with multiple tools for capturing application logs from different sources can be a hassle for anyone. In this blog post, we'll dive into the steps required to establish centralized logging with Grafana Loki and Grafana Agent. This solution will allow us to unify the collection of logs from Kubernetes pods, VM services, and Azure PAAS services.

Grafana Loki : It is a highly scalable log aggregation system designed for cloud-native environments

Grafana Agent : It is an observability agent that collects metrics and logs from various applications for visualization and analysis in Grafana

Requirement:

  • Kubernetes Cluster (latest version)
  • Helm
  • Azure PAAS(Eventhub and AKS, IOT, etc)
  • VM with any application
  • Azure storage account(Loki backend)
  • Azure subscription with admin privileges

Step 1: Deploy Grafana Loki in a Kubernetes(K8s) cluster,

Ensure you have admin permission for the k8s cluster.

Before deploying Grafana Loki in the k8s cluster, certain changes are required in the configuration.

Note : We are going to use an Azure Storage container as the Loki backend to store the logs, and we will use the Loki distributed version.

Execute below helm commands to add the Grafana repository,

helm repo add grafana https://grafana.github.io/helm-charts

helm repo update


Execute the below commands to export the default Grafana Loki and Grafana Agent configurations via helm,

helm show values grafana/loki-distributed > loki-values.yaml

helm show values grafana/grafana-agent > grafana-agent-values.yaml

In the loki-values.yaml, update the below configuration to use Azure storage account as backend.

  schemaConfig:
    configs:
    - from: "2020-09-07"
      store: boltdb-shipper
      object_store: azure
      schema: v11
      index:
        prefix: index_
        period: 24h
  storageConfig:
    boltdb_shipper:
      shared_store: azure
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 1h
    filesystem:
      directory: /var/loki/chunks
    azure:
      account_name: === Azure Storage name ===
      account_key: === Azure Storage access key ===
      container_name: === Container Name ===
      request_timeout: 0


Here is the loki-values.yaml.

https://github.com/DevOpsArts/grafana_loki_agent/blob/main/loki-distributed-values.yaml

Next deploy Grafana loki,

Execute below command to deploy Loki in the k8s cluster,

helm upgrade --install --values loki-distributed-values.yaml loki grafana/loki-distributed -n observability --create-namespace


Verify the pods are up and running by using below command,

kubectl get all -n observability


Now all the pods are up and running.


Step 2: Deploy Grafana agent in K8s cluster,

Deploy Grafana Agent in k8s cluster to export the k8s Pod logs to Loki

Update the grafana agent values before deploying.

In grafana-agent-values.yaml, replace the Loki distributed gateway service endpoint (line number 169 of the below file) with your namespace. Currently, observability is used.

https://github.com/DevOpsArts/grafana_loki_agent/blob/main/grafana-agent-values.yaml

loki.write "local" {
        endpoint {
          url = "http://loki-loki-distributed-gateway.observability.svc.cluster.local/loki/api/v1/push"
        }
      }

The grafana-agent-values.yaml is currently set up to export k8s pod logs, k8s events, etc.

Next deploy Grafana-Agent using below command,

helm install --values grafana-agent-values.yaml grafana-agent grafana/grafana-agent -n observability


Verify the pods are up and running by using below command,

kubectl get all -n observability

Now you can go to the Azure storage account configured in Grafana Loki and verify that the logs are being written to the respective container.


Step 3: Deploy Grafana, to view the pod logs

Execute below command to install Grafana in K8s cluster,

helm install grafana grafana/grafana -n observability


Verify the pods are up and running by using below command,

kubectl get all -n observability

Use the below command to get the admin password for Grafana,

kubectl get secret --namespace observability grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Step 4:  Access Grafana and view the pod logs

Use port forwarding (for example, kubectl port-forward svc/grafana 3000:80 -n observability), NodePort, or Ingress to access the Grafana Dashboard.

- Configure loki as Datasource in Grafana,

Once logged in to Grafana, go to Home > Connections > search for "Loki", then select it.


Next give connection url as, 

http://loki-loki-distributed-gateway.observability.svc.cluster.local

Then select "Save & test".
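If you prefer provisioning over clicking through the UI, the same datasource can be described declaratively. A minimal sketch of a Grafana datasource provisioning file (the gateway URL is a placeholder; use your Loki gateway service endpoint) is:

```yaml
# datasources.yaml - Grafana datasource provisioning sketch
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    # Placeholder: your loki-distributed gateway service URL
    url: http://loki-loki-distributed-gateway.<namespace>.svc.cluster.local
    isDefault: true
```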


- Next import Loki dashboard,

Go to Grafana Home > Dashboard > select New and Import.

Use the below Grafana dashboard template ID "15141", load it, and save it.

https://grafana.com/grafana/dashboards/15141-kubernetes-service-logs/


Next, open the dashboard. You can now view all the pod logs.


Step 5:  View the Kubernetes(K8s) Events in Grafana,

To view the Kubernetes events, go to the Grafana Home page > Explore > select the Loki datasource > select "Job" with the value "loki.source.kubernetes_events"
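Equivalently, the events can be queried directly in Explore with LogQL; the |= line filter narrows the results to a particular object and is optional:

```logql
{job="loki.source.kubernetes_events"} |= "nginx"
```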


In this part, we have deployed Grafana Loki with Azure Storage as the backend, deployed Grafana Agent to ship Kubernetes pod logs, and deployed Grafana to visualize the pod logs and Kubernetes events.

In the next Part 2, We will explain how to configure Grafana Agent for VM applications and send the logs to the same Grafana Loki.

In Part 3, We will cover how to export Azure PAAS service logs to Grafana Loki.



In this blog, we will explore a new tool called 'Rover,' which helps to visualize the Terraform plan

Rover : This open-source tool is designed to visualize Terraform Plan output, offering insights into infrastructure and its dependencies.

We will use the "Rover" docker image, to do our setup and visualize the infra.

Requirements:

1.Linux/Windows VM

2. Docker

Steps 1 : Generate terraform plan output

I have a sample Azure terraform block in devopsart folder, will generate terraform plan output from there and store is locally.

cd devopsart

terraform plan -out tfplan.out

terraform show -json tfplan.out > tfplan.json

Now both the files are generated.


Step 2 : Run Rover tool locally,

Execute below docker command to run rover from the same step 1 path,

docker run --rm -it -p 9000:9000 -v $(pwd)/tfplan.json:/src/tfplan.json im2nguyen/rover:latest -planJSONPath=tfplan.json

Its run the webUI in port number 9000.


Step 3 : Accessing Rover WebUI,

Lets access the WebUI and check it,

Go to browser, and enter http://localhost:9000


In the UI, color codes on the left side provide assistance in understanding the actions that will take place for the resources when running terraform apply

When a specific resource is selected from the image, it will provide the name and parameter information.

Additionally, the image can be saved locally by clicking the 'Save' option

I hope this is helpful for someone who is genuinely confused by the Terraform plan output, especially when dealing with a large infrastructure.


Thanks for reading!! We have tried Rover tool and experimented with examples.


Reference:

https://github.com/im2nguyen/rover


In this blog, we will see a new tool called Infracost, which helps provide expected cloud cost estimates based on Terraform code. We will cover the installation and demonstrate how to use this tool.

Infracost :  It provides cloud cost projections from Terraform. It enables engineers to view a detailed cost breakdown and comprehend expenses before implementions.

Requirement :

1. One window/Linux VM

2.Terraform

3.Terraform examples


Step 1 : infracost installation,

For Mac, use below brew command to do the installation,

brew install infracost

For other Operating systems, follow below link,

https://www.infracost.io/docs/#quick-start


Step 2 : Infracost configuration,

We need to set up the Infracost API key by signing up here,

https://dashboard.infracost.io

Once logged in, visit the following URL to obtain the API key,

https://dashboard.infracost.io/org/praboosingh/settings/general

Next, open the terminal and set the key as an environment variable using the following command,

# export INFRACOST_API_KEY=XXXXXXXXXXXXX

or You can log in to the Infracost UI and grant terminal access by using the following command,

# infracost auth login

NoteInfracost will not send any cloud information to their server.


Step 3 : Infracost validation

Next, We will do the validation. For validation purpose i have cloned below github repo which contains terraform examples.

# git clone https://github.com/alfonsof/terraform-azure-examples.git

# cd terraform-azure-examples/code/01-hello-world

try infracost by using below command to get the estimated cost for a month,

# infracost breakdown --path .

To save the report in json format and upload to infracost server, use below command,

# infracost breakdown --path . --format json --out-file infracost-demo.json

# infracost upload --path infracost-demo.json

In case we plan to upgrade the infrastructure and need to understand the new cost, execute the following command to compare it with the previously saved output from the Terraform code path.

# infracost diff --path . --compare-to infracost-demo.json


Thanks for reading!! We have installed infracost and experimented with examples.


References:

https://github.com/infracost/infracost

https://www.infracost.io/docs/#quick-start




In this blog, we will install and examine a new tool called Trivy, which helps identify vulnerabilities, misconfigurations, licenses, secrets, and software dependencies in the following,

1.Container image

2.Kubernetes Cluster

3.Virtual machine image

4.FileSystem

5.Git Repo

6.AWS


Requirements,

1.One Virtual Machine

2.Above mentioned tools anyone


Step 1 : Install Trivy

Exceute below command based on your OS,

For Mac : 

brew install trivy

For other OS, please refer below link,
https://aquasecurity.github.io/trivy/v0.45/getting-started/installation/



Step 2 : Check an image with Trivy,

Let's try with the latest Nginx web server image to identify security vulnerabilities.

Execute the below command,

Syntax : trivy image <image name > : <version>

trivy image nginx:latest



It will provide a detailed view of the image, including the base image, each layer's information, and their vulnerability status in the report.


Step 3 : Check a github repo with Trivy,

Example github repo, https://github.com/akveo/kittenTricks.git

Execute the following command to check for vulnerabilities in the Git repo,

trivy repo https://github.com/akveo/kittenTricks.git

If you want to see only critical vulnerabilities, you can specify the severity using the following command,

trivy repo --severity CRITICAL  https://github.com/akveo/kittenTricks.git



Step 4: Check a YAML file with Trivy,

I have used below yaml from k8s website to check this,

https://k8s.io/examples/application/deployment.yaml

Execute the below command to find the misconfiguration in the yaml,

trivy conf nginx.yaml



Step 5 : Check terraform script with Trivy,

I have used below sample tf script to check it,

https://github.com/alfonsof/terraform-aws-examples/tree/master/code/01-hello-world

Execute the below command to find the misconfiguration in the tf script,

trivy conf 01-hello-world



Thats all, We have installed the Trivy tool and validated it in each category. Thank you for reading!!!


References,

https://github.com/aquasecurity/trivy
https://aquasecurity.github.io/trivy/v0.45/docs/






In this blog post, We will explore a new tool called "KOR" (Kubernetes Orphaned Resources), which assists in identifying unused resources within a Kubernetes(K8S) cluster. This tool will be beneficial for those who are managing Kubernetes clusters.

Requirements:

1.One machine(Linux/Windows/Mac)

2.K8s cluster


Step 1 : Install kor in the machine.

Am using linux VM to do the experiment and for other flavours download the binaries from below link,

https://github.com/yonahd/kor/releases

Download the linux binary for linux VM,

wget https://github.com/yonahd/kor/releases/download/v0.1.8/kor_Linux_x86_64.tar.gz

tar -xvzf kor_Linux_x86_64.tar.gz

chmod 777 kor

cp -r kor /usr/bin

kor --help


Step 2 : Nginx Webserver deployment in K8s

I have a K8s cluster; we will deploy the nginx webserver in it and try out the "kor" tool.

Create a namespace named "nginxweb",

kubectl create namespace nginxweb

Using helm, we will deploy the nginx webserver with the below command (this assumes the Bitnami chart repo was already added via helm repo add bitnami https://charts.bitnami.com/bitnami),

helm install nginx bitnami/nginx --namespace nginxweb

kubectl get all -n nginxweb


Step 3: Validate with the kor tool

Let's check for unused resources with the kor tool in the nginx namespace.

The below command will list all the unused resources in the given namespace,

Syntax: kor all -n <namespace>

kor all -n nginxweb

Let's delete the nginx deployment from the nginxweb namespace and try it again.

kubectl delete deployments nginx -n nginxweb

Now check which resources are available in the namespace,

kubectl get all -n nginxweb

The result shows that one K8s service remains in the nginxweb namespace.

Now try the kor tool again using the below command,

kor all -n nginxweb

It reports that the nginx service is unused, since nothing in the namespace references it anymore.

We can also check a single resource type (configmap/secret/services/serviceaccount/deployments/statefulsets/role/hpa) by,

kor services -n nginxweb

kor serviceaccount -n nginxweb

kor secret -n nginxweb
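The per-type checks above can also be looped over in one go; a sketch, assuming kor is installed and the kubeconfig points at the cluster:

```shell
# Check several resource types for unused objects in the nginxweb namespace
for r in services serviceaccount secret deployments; do
  echo "--- unused $r ---"
  kor "$r" -n nginxweb
done
```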


That's all. We have installed the KOR tool and validated it by deleting one of the components of the nginx webserver deployment.


References:

https://github.com/yonahd/kor


In this blog, we will see an interesting tool that helps DevOps/SRE professionals working in the Azure cloud.

Are you worried that your Infrastructure as Code (IAC) is not in a good state, and there have been lots of manual changes? Here is a solution provided by Azure - a tool named "Azure Export for Terraform (aztfexport)".

This tool assists in exporting the current Azure resources into Terraform code. Below, we will see the installation of this tool and how to use it.

Requirements:

1. A Linux/Windows machine

2. Terraform (>= v0.12)

3. az-cli

4. An Azure subscription account


Step 1: aztfexport installation,

This tool can be installed on all operating systems. Refer to the link below for installation instructions for other OS:

https://github.com/Azure/aztfexport

If you are installing it on macOS, open the terminal and execute the following command:

brew install aztfexport


Step 2: Configure the Azure subscription

Execute the below commands in the terminal to configure the Azure subscription,

az login

or

az login --use-device-code

Next, set the subscription ID,

az account set --subscription "subscription id"

Now that the Azure subscription is configured, let's proceed with trying out the tool.

In this subscription, I have a resource group named "devopsart-dev-rg" which contains a virtual machine (VM). We will generate the Terraform code for this VM.


Step 3: Experiment with the "aztfexport" tool

Execute the below commands to generate the tf code,

Create a new directory with any name,

mkdir aztfexport && cd aztfexport

The below command shows the available options for this tool,

aztfexport --help

Execute the below command to generate the Terraform code from the "devopsart-dev-rg" resource group,

Syntax: aztfexport resource-group <resource-group-name>

aztfexport resource-group devopsart-dev-rg
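aztfexport also offers a non-interactive mode that skips the resource-selection screen, which is handy in scripts; a sketch (check the flag against your installed version's --help output):

```shell
# Export the whole resource group without the interactive resource list
aztfexport resource-group --non-interactive devopsart-dev-rg
```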

It will take a few seconds to list the available resources in the given resource group (RG),

and it will list all the resources under the RG like below,

Next, enter "w" to import the resources; it will take some more time to generate the code.

Once it's completed, we can validate the tf files.


Step 4 : Validate the tf files

We will validate the generated files, and the following files are present in the directory,

main.tf, 

provider.tf

terraform.tf

aztfexportResourceMapping.json

terraform.tfstate (we can store this state file remotely by using the below parameters)

aztfexport [subcommand] --backend-type=azurerm \

                        --backend-config=resource_group_name=<resource group name> \

                        --backend-config=storage_account_name=<account name> \

                        --backend-config=container_name=<container name> \

                        --backend-config=key=terraform.tfstate
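As a concrete illustration of the syntax above, here is the resource-group export with a remote azurerm backend filled in (the storage account and container names are hypothetical placeholders, not from the original post):

```shell
# Export the resource group while storing state in an Azure storage account
# (storage_account_name and container_name below are made-up examples)
aztfexport resource-group devopsart-dev-rg \
    --backend-type=azurerm \
    --backend-config=resource_group_name=devopsart-dev-rg \
    --backend-config=storage_account_name=devopsartstate \
    --backend-config=container_name=tfstate \
    --backend-config=key=terraform.tfstate
```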


Run terraform plan.

Nice! It says no changes are required to the Azure cloud infra.


Step 5: Delete the Azure resources and recreate them with the generated tf files,

The resources are deleted from the Azure Portal under the dev RG.


Now run the Terraform commands to recreate the resources,

cd aztfexport

terraform plan


Next execute,

terraform apply
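The plan/apply pair above can be made repeatable by saving the plan to a file; a common Terraform pattern, sketched here under the assumption that you are in the generated code's directory:

```shell
# Save the plan, then apply exactly that saved plan
terraform plan -out=tfplan
terraform apply tfplan
```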


Now all the resources are recreated with the generated tf files.

That's all. We have installed the aztfexport tool, generated tf files, destroyed the Azure resources, and recreated them with the generated files.


Check the below link for the current limitations,

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-concepts#limitations


References,

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-terraform-overview

https://github.com/Azure/aztfexport

https://www.youtube.com/watch?v=LWk9SU7AmDA

https://learn.microsoft.com/en-us/azure/developer/terraform/azure-export-for-terraform/export-advanced-scenarios
