Step-by-Step Installation of K3s on CentOS 8
In this blog, we will walk through a step-by-step installation of K3s on CentOS 8.
K3s is a lightweight Kubernetes distribution created by Rancher; it is a simplified version of K8s with a binary size of less than 100 MB. It uses SQLite3 as its default backend storage, with etcd3, MySQL, and PostgreSQL available as alternatives. It is secure by default with standard practices.
Requirements:
Linux servers: 2
OS: CentOS 8.5
Step 1: Update OS and install Kubectl
Here I am using one master and one worker node for the installation.
Master: k3smaster.devopsart.com (10.12.247.54)
Worker Node: k3snode1.devopsart.com (10.12.247.55)
Go to each server, run "yum update" to get the latest packages, and reboot.
Make sure the firewall allows traffic between these two servers; the K3s agent reaches the server on port 6443/tcp.
Install kubectl:
On the master node, run the below commands as root:
curl -LO https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl
chmod +x kubectl
cp kubectl /usr/bin
Check the kubectl version to make sure the command works:
kubectl version --short
Go to the master and worker nodes and make sure /etc/hosts is updated with the below details if DNS does not resolve the hostnames.
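For this walkthrough's hosts (IPs as listed above), the entries would be:

```
10.12.247.54   k3smaster.devopsart.com
10.12.247.55   k3snode1.devopsart.com
```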
Step 2: Install K3s in the Master server
Use the below command on the master server to install K3s:
curl -sfL https://get.k3s.io | sh -
Once it is installed successfully, you can run the below to check the k3s service status:
systemctl status k3s
We can see the k3s config file in the below path in Master,
cat /etc/rancher/k3s/k3s.yaml
Next, we need to copy the config file so kubectl can use it:
mkdir -p ~/.kube
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
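Alternatively, instead of copying the file, you can point kubectl at the K3s kubeconfig directly; /etc/rancher/k3s/k3s.yaml is the path K3s writes by default:

```shell
# Use the K3s-generated kubeconfig for this shell session
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```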
Then check,
kubectl get nodes
The K3s master node is successfully installed. Next, we will set up the worker node.
Step 3: Install the k3s agent on the worker node
Go to the worker node and execute the below command,
curl -sfL https://get.k3s.io | K3S_URL=${K3S_URL} K3S_TOKEN=${K3S_TOKEN} sh -
Here, K3S_URL is the master's API endpoint, https://k3smaster.devopsart.com:6443, and K3S_TOKEN is the join token, which you get from the master by executing the below command:
cat /var/lib/rancher/k3s/server/node-token
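Put together, the whole join step can be sketched like this (assuming you copy the token from the master to the worker yourself; the token value below is a placeholder):

```shell
# On the master: print the join token and note it down
cat /var/lib/rancher/k3s/server/node-token

# On the worker: join the cluster (paste the token value from the master)
K3S_TOKEN="<token-from-master>"
curl -sfL https://get.k3s.io | \
  K3S_URL="https://k3smaster.devopsart.com:6443" \
  K3S_TOKEN="${K3S_TOKEN}" sh -
```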
Once the installation is successful, we can check the k3s agent status by executing the below command,
systemctl status k3s-agent.service
Step 4: K3s Installation validation
Go to the master node and check whether the new worker node is listed, using the below command:
kubectl get nodes
Great! The worker node has joined the K3s master successfully.
Step 5: Deploy the Nginx web server in K3s and validate
I am using a Helm chart installation for this purpose.
Helm install,
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
cp /usr/local/bin/helm /usr/bin/helm
Add Bitnami repo in Helm,
helm repo add bitnami https://charts.bitnami.com/bitnami
Deploy the Nginx web server using the below Helm command:
helm install nginx-web bitnami/nginx
Check the pod status,
kubectl get pods -o wide
The Nginx pod is running fine now.
Access Nginx webserver,
I took the clusterIP of the Nginx service and tried to access it, and it's working.
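For example, the service's ClusterIP can be captured with kubectl's jsonpath output and tested with curl from one of the cluster nodes (service name `nginx-web` follows the Helm release name above):

```shell
# Look up the ClusterIP of the nginx-web service
NGINX_IP=$(kubectl get svc nginx-web -o jsonpath='{.spec.clusterIP}')

# Fetch the Nginx welcome page (run from a cluster node)
curl -s "http://${NGINX_IP}/" | head -n 5
```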
That's all! K3s was installed successfully on CentOS 8.5, and we deployed and validated an Nginx web server.
If you are using Kubernetes, you will definitely know about Helm charts. Helm is a deployment tool; it helps us deploy, upgrade, and manage our applications in the Kubernetes cluster.
Recently Komodor announced an open-source dashboard for Helm. Today we will see how to install and use it.
Requirements :
1. Kubernetes cluster
2. Helm
Steps :
Step 1: Installation
Step 1.1: Overview of my existing cluster setup:
I am running minikube version 1.26.0 and Helm version 3.9.2, and I will use this setup for the installation.
Step 1.2: Install the helm dashboard plugin
Execute the below command on the machine where Helm is installed:
# helm plugin install https://github.com/komodorio/helm-dashboard.git
Then execute the below command to start the helm dashboard,
# helm dashboard
If port 8080 is already in use, you can change it with the HD_PORT environment variable.
If you want to run it in debug mode, set DEBUG=1 in the environment.
By default, the helm dashboard checks for the Checkov and Trivy plugins and uses these tools for scanning.
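For example, combining the two options above:

```shell
# Start the dashboard on port 9090 with debug logging enabled
HD_PORT=9090 DEBUG=1 helm dashboard
```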
Step 2: Access the helm dashboard,
Go to the browser and access the dashboard, http://localhost:8080
Now we can see the applications already installed through Helm, which we saw in step 1 using helm commands.
We can see the list of helm repositories from the UI,
Whatever we can do with the helm command, we can now do from the UI itself: view the existing manifest, upgrade, uninstall, and so on.
We can install the application from the available helm repositories from the UI.
By default, the dashboard detects the Checkov and Trivy scanners and uses them to scan manifests during deployment.
That's all; the helm dashboard is installed successfully, and we are able to view the deployments.
Popeye is a utility that scans K8s clusters and reports potential issues in deployed resources and configurations.
Note: This is a read-only tool; it will not make any changes in the K8s cluster.
In this blog, we will see how to install and use this tool.
Requirements:
1. K8s cluster
2. Linux VM
Step 1: Install the Popeye tool
Use the below command to install it on a MacBook:
brew install derailed/popeye/popeye
For other OS use the below link to install it.
You can install it with krew as well, using the below command:
kubectl krew install popeye
Step 2: Run the Popeye tool to scan the Kubernetes cluster,
Note: The Popeye CLI works like the kubectl command, so make sure you have the kubeconfig locally to connect to the cluster.
By default, this command scans all nodes and namespaces:
popeye
In the above output, you can see the overall status of the cluster and its configurations, along with a score at the end. The current score is 87%, a B rank. To improve the score, we need to work on the recommended suggestions.
If you want to scan a specific namespace or resource, you can use the below commands.
For a specific namespace:
popeye -n devopsart
For specific resource types, such as ConfigMaps:
popeye -n devopsart -s configmap
For specific deployments,
popeye -n devopsart -s deploy
Step 3: Generate an HTML report and save it locally
To save the report to the current directory, use the below command:
POPEYE_REPORT_DIR=$(pwd) popeye --save
Then run the required popeye command, and the scan report will be saved in the current directory.
To save the report as HTML, use the below command:
POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html
Then run the required popeye command and open the report.html file in the browser.
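Combining the flags above, a namespace-scoped HTML report can be produced in one go:

```shell
# Scan only the devopsart namespace and save the result as an HTML report
POPEYE_REPORT_DIR=$(pwd) popeye -n devopsart --save --out html --output-file report.html
```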
That's all; we have successfully installed the "Popeye" tool and validated it against the K8s cluster. It helps to improve our K8s cluster configuration and make the cluster more stable.
Today we will see a step-by-step installation and demonstration of an open-source tool called "Nova", which checks for outdated or deprecated versions in a Kubernetes cluster.
Nova is an open-source tool that scans the components deployed in your Kubernetes cluster via Helm charts and compares each currently deployed version against the latest version available in the Helm repositories.
Requirements :
1. Kubernetes cluster
2. Helm in terminal
3. Helm repository(I will use Bitnami repo)
4. Golang in terminal
5. kubectl in terminal
6. Any machine which connects to the K8s cluster(Mine is Macbook)
Step 1: Installation of Nova
Execute the below commands to install in MacBook,
brew tap fairwindsops/tap
brew install fairwindsops/tap/nova
You can check the below link for other OS,
https://nova.docs.fairwinds.com/installation
Alternatively, you can fetch Nova with Go using the below command:
go get github.com/fairwindsops/nova
Step 2: How to use Nova
Make sure you can connect to the k8s cluster from the machine where you installed Nova.
As the above screenshot shows, there are no helm charts installed yet, so let's add the bitnami repo, install an older version of the Nginx web server, and then check Nova's findings.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo list
helm search repo nginx -l|head -n10
Next, install an older version of Nginx:
helm install nginx-web bitnami/nginx --version=12.0.5
Now that Nginx version 12.0.5 is installed via the helm chart, let's try the nova commands.
The below command reports the installed versions and flags outdated or deprecated ones:
nova find
From the above image, you can see the latest version and installed version details.
The below command gives some more details, such as the namespace and helm version:
nova find --wide
The below command lists container image versions that are outdated in the cluster:
nova find --containers
That's all; we have successfully installed the Nova tool and validated the deployed versions.
Today we will look at a tool called "Polaris", which helps keep your Kubernetes cluster running smoothly by enforcing best practices.
Requirements :
1. Kubernetes(K8s) cluster
2. A machine(mine is Mac) to install Polaris and have access to the cluster
Step 1: Install Polaris
Execute the following commands in the terminal,
brew tap reactiveops/tap
brew install reactiveops/tap/polaris
polaris dashboard --port 8080
Make sure you can access the K8s cluster from the machine where you installed Polaris.
Alternatively, to install via helm charts, use the below commands:
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm upgrade --install polaris fairwinds-stable/polaris --namespace polaris --create-namespace
kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80
Step 2: Polaris Dashboard
Next, go to the browser using http://127.0.0.1:8080
The overview shows the cluster-wide grade along with the passing, warning, and failing checks for each workload.
Polaris can also fix issues directly in your local YAML manifests:
polaris fix --files-path ./devopsart/ --checks=all
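Besides the dashboard and fix modes, Polaris has a read-only CLI audit mode; a minimal sketch, run from a machine with cluster access:

```shell
# Audit the live cluster and print results to stdout (makes no changes)
polaris audit

# Audit local YAML manifests instead of the live cluster
polaris audit --audit-path ./devopsart/
```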
Monitoring and alerting are among the most important topics in the DevOps world. Today we are going to see how to install the Cabot tool for monitoring and alerting.
Requirements:
1. Docker and docker-compose
2. Git
3. Graphite server
Step 1: Clone the Cabot repository locally.
git clone https://github.com/cabotapp/docker-cabot
Step 2: Update the settings based on your needs.
cd docker-cabot/conf
mv production.env.example production.env
Step 3: Install Cabot via docker-compose
cd docker-cabot
docker-compose up -d
Wait a few minutes until the containers come up.
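You can confirm the stack is healthy before opening the UI:

```shell
# List the Cabot containers and their state
docker-compose ps

# Tail the logs if something is not starting
docker-compose logs -f --tail=50
```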
Step 4: Login to Cabot portal
URL : http://localhost:5000/
Initially, it will ask you to set up a username and login details.
Step 5: Set up a simple service and checks.
There are three building blocks to monitor in Cabot: instance, check, and service.
A "check" is a particular task you run to verify something. Checks come in predefined types, such as:
ping: a ping to a host
HTTP: call a URL and check the HTTP status
An "instance" is an actual machine running some service; it has an IP/hostname.
A "service" is the higher-level thing you want to monitor.
I am running an Nginx web server locally and will enable a check for it.
After logging in, go to the Checks tab, click the "+" icon, and fill in the rest similarly to the below.
After saving the configuration:
Step 6: Test the Alert
Let's stop the Nginx web server and see whether we receive an email.
Successfully received an email.
Ref: https://github.com/cabotapp/docker-cabot
What is Hypertrace? It is a cloud-native, distributed-tracing-based observability platform that gives visibility into distributed systems in any environment. It converts distributed trace data into relevant insights for everyone.
Hypertrace supports all standard instrumentation libraries and agents. If your application is already instrumented with OpenTelemetry, Jaeger or Zipkin, Hypertrace will work out of the box with your application telemetry data.
Requirements:
- Docker engine & Docker compose
Step 1: Clone and Install hypertrace,
# git clone https://github.com/hypertrace/hypertrace.git
# cd hypertrace/docker
# docker-compose pull
# docker-compose up --force-recreate
Step 2: Access Hypertrace Dashboard
Once step 1 is completed successfully, We can access the Hypertrace dashboard from the browser.
URL: http://<IP of the VM>:2020
Step 3: Sample Application test with Hypertrace
The cloned repo contains a sample application with frontend and backend APIs, and it sends data to Zipkin. Let's check that.
# cd hypertrace/docker
# docker-compose -f docker-compose-zipkin-example.yml up
Once the containers are up, we can open the frontend in the browser:
URL: http://<IP of the VM>:8081
Step 4: View the metrics in Hypertrace
Hit the frontend URL multiple times, then check the Hypertrace dashboard for the data.
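To generate some traffic, a loop like this works (VM_IP is a placeholder for your VM's address):

```shell
VM_IP=10.12.247.54   # placeholder; replace with your VM's IP
for i in $(seq 1 20); do
  curl -s -o /dev/null "http://${VM_IP}:8081/"
done
```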
We can see the list of APIs, errors, latency, etc. Here are a few screenshots.
Ref. : https://github.com/hypertrace/hypertrace
https://docs.hypertrace.org/
In this blog, We will see step-by-step of k3s installation in Centos 8.
K3s, It is a lightweight Kubernetes container service which is created by Rancher and it is a simplified version of K8s with less than 100MB of binary size. It uses sqlite3 as a backend storage and etcd3, MySQL, Postgres database options are available. It is secured by default with standard practices.
Requirements:
Linux servers: 2
OS: Centos 8.5
Step 1: Update OS and install Kubectl
Here am using one master and node to do the installation.
Master: k3smaster.devopsart.com (10.12.247.54)
Worker Node: k3snode1.devopsart.com (10.12.247.55)
Go to each server and run "yum update" to get the latest packages and do a reboot.
Make sure a firewall is enabled between these two Linux servers.
Install Kubectl,
Go to the root path of the master node and run the below commands,
curl -LO https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl
chmod +x kubectl
cp kubectl /usr/bin
check the kubectl version to make sure the command is working or not,
kubectl version --short
Go to the Master and worker nodes and make sure the host file is updated with the below details if the DNS is not resolving.
Step 2: Install K3s in the Master server
Use below command in master server to install k3s,
curl -sfL https://get.k3s.io | sh -
Once successfully installed, you can run below to check the k3s service status,
systemctl status k3s
We can see the k3s config file in the below path in Master,
cat /etc/rancher/k3s/k3s.yaml
Next, we need to copy the config file to use in kubectl.
mkdir ~/.kube
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
Then check,
kubectl get nodes
K3s master node is successfully installed. next will do the worker node installation.
Step 3 : Install k3s agent in WorkerNode
Go to the worker node and execute the below command,
curl -sfL https://get.k3s.io | K3S_URL=${k3s_Master_url} K3S_TOKEN=${k3s_master_token} sh -
k3s_Master_url = https://k3smaster.devopsart.com:6443
k3s_master_token= "Get the token from the master by executing the below command"
cat /var/lib/rancher/k3s/server/node-token
Once the installation is successful, we can check the k3s agent status by executing the below command,
systemctl status k3s-agent.service
Step 4: K3s Installation validation
Go to the master node and check new worker node is listed or not by the below command,
kubectl get nodes
Great!, worker node is attached successfully with the k3s master.
Step 5: Deploy the Nginx webserver in K3s and validate,
Am using helm chart installation for this purpose.
Helm install,
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
cp -r /usr/local/bin/helm /usr/bin/helm
Add Bitnami repo in Helm,
helm repo add bitnami https://charts.bitnami.com/bitnami
Deploy Nginx webserver by using below helm command,
helm install nginx-web bitnami/nginx
Check the pod status,
kubectl get pods -o wide
The Nginx pod is running fine now.
Access Nginx webserver,
I took the clusterIP of the Nginx service and tried to access it, and it's working.
That's all, K3s is successfully in centos 8.5 and deployed Nginx webserver and validated.
If you are using Kubernetes, then you will definitely know about helm charts. Helm chart is nothing but a deployment tool, it helps us to deploy, upgrade, and manage our applications in the Kubernetes cluster.
Recently Komodo announced an open-source dashboard for helm. Today we will see how to install and use it.
Requirements :
1. Kubernetes cluster
2. Helm
Steps :
Step 1: Installation
Step 1.1: Overview of my existing cluster setup:
Am running minikube version 1.26.0 and the helm version is 3.9.2. Am going to use this setup for this installation.
Step 1.2: Installation of helm dashboard,
execute the below command where the helm is installed,
# helm plugin install https://github.com/komodorio/helm-dashboard.git
Then execute the below command to start the helm dashboard,
# helm dashboard
If your port 8080 is already used, we can change it by using the environment variable as "HD_PORT".
If you want to run it in debug mode, set DEBUG=1 in the environment variable.
If you see the bey default helm dashboard will check for checkov and trivy plugins to use these tools for scanning purposes.
Step 2: Access the helm dashboard,
Go to the browser and access the dashboard, http://localhost:8080
Now, we can see the already installed applications through helm, which we have seen in step 1 by using helm commands.
We can see the list of helm repositories from the UI,
Whatever we do from the helm command, Now we can do it from UI itself. We can view the existing manifest, upgrade, uninstall, etc.
We can install the application from the available helm repositories from the UI.
And by default, this dashboard detects checkov and trivy scanners. And this dashboard uses these tools to scan the manifest during deployment.
That's all, the helm dashboard is installed successfully and able to view the deployment.
Popeye - It's a utility which scans K8s clusters and reports potential issues in deployed resources and configurations.
Note: This is a read-only tool, it will not make any changes in the K8s cluster.
In this blog, we will see how to install it and use this tool
Requirements:
1. K8s cluster
2. Linux VM
Step 1: Install the Popeye tool
Use the below command to install in MacBook,
brew install derailed/popeye/popeye
For other OS use the below link to install it.
You can install with "krew" as well by the using below command,
kubectl krew install popeye
Step 2: Run the Popeye tool to scan the Kubernetes cluster,
Note: Popeye CLI works like the kubectl command, so make sure you have the Kube config in local to connect to the cluster.
This command runs in all nodes and namespaces by default,
popeye
In the above output, you can see the overall status of the cluster and its configurations and it gives the score as well at the end. The current score is 87% and a B rank. To improve the score, we need to work on the suggestions which are recommended.
If you need to run a specific namespace and configuration you can use the below command,
For the specific namespace,
popeye -n devopsart
For specific configurations like config map,
popeye -n devopsart -s configmap
For specific deployments,
popeye -n devopsart -s deploy
Step 3: HTML report generation and Save the report locally
To save the report in the current directory use the below command,
POPEYE_REPORT_DIR=$(pwd) popeye --save
then run the required popeye command, and the scan will be saved in the current directory
To save the report in HTML use the below command,
POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html
And run the required popeye command and see the report.html file in the browser,
That's all, we have successfully installed the "Popeye" tool and validated it with the K8s cluster. This helps to improve our K8s cluster configuration and make our cluster more stable.
Today we will see step by step of installation and demonstration of an open-source tool called "Nova" to check the outdated or depreciated version in the Kubernetes cluster.
Nova: An Opensource tool, it will scan your deployed(used by helm charts) components version in your Kubernetes cluster and check the currently deployed version vs the latest version which is in Helm repositories.
Requirements :
1. Kubernetes cluster
2. Helm in terminal
3. Helm repository(I will use Bitnami repo)
4. Golang in terminal
5. kubectl in terminal
6. Any machine which connects to the K8s cluster(Mine is Macbook)
Step 1: Installation of Nova
Execute the below commands to install in MacBook,
brew tap fairwindsops/tap
brew install fairwindsops/tap/nova
You can check the below link for other OS,
https://nova.docs.fairwinds.com/installation
Now find and install the required packages for nova by the below command,
go get github.com/fairwindsops/nova
Step 2: How to use Nova.
Make sure you are able to connect the k8s cluster from the machine where you installed the Nova tool.
If you see the above screenshot, there are no helm charts installed, let's add the bitnami repo and try to install an older version of the Nginx webserver and we will try with nova findings.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo list
helm search repo nginx -l|head -n10
Next, install older version of Nginx,
helm install nginx-web bitnami/nginx --version=12.0.5
Now we installed Nginx version 12.0.5 via helm chart, Let's check the nova command now.
The below command will give the output of installed and depreciated version status.
nova find
From the above image, you can see the latest version and installed version details.
The below command will give some more details for the namespace, helm version, etc
nova find -wide
The below command will give containers versions that are outdated in the cluster,
nova find --containers
That's all, we have successfully installed the Nova tool and validated the deployed version.
Today we will see a tool called "Polaris" which helps to keep your Kubernetes cluster running perfectly using best practices without any issues.
Requirements :
1. Kubernetes(K8s) cluster
2. A machine(mine is Mac) to install Polaris and have access to the cluster
Step 1: Install Polaris
Execute the following commands in the terminal,
brew tap reactiveops/tap
brew install reactiveops/tap/polaris
polaris dashboard --port 8080
Make sure you are able to access the K8s cluster from the machine where you installed Polaris.
To install via helm charts use below commands,
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm upgrade --install polaris fairwinds-stable/polaris --namespace polaris --create-namespace
kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80
Step 2: Polaris Dashboard
Next, go to the browser using http://127.0.0.1:8080
The overview gives you the following details,
polaris fix --files-path ./devopsart/ --checks=all
Monitoring and Alerting are the most important words in DevOps world. Today we are going to see how to install Cabot tool for monitoring and alerting.
Requirements:
1. docker and docker-compose
2.Git tool
3.Graphite server
Steps 1: Clone Cabot repository in local.
git clone https://github.com/cabotapp/docker-cabot
Steps 2: Update the settings based on our needs.
cd docker-cabot/conf
mv production.env.example production.env
Step 3: Install Cabot via docker-compose
cd docker-cabot
docker-compose up -d
Wait for few minutes until the containers are comes up
Step 4: Login to Cabot portal
URL : http://localhost:5000/
Initially it will ask to setup username and login details.
Step 5: Setup a simple service and checks.
There are three options to Monitor in Cabot, they are instance, check and service.
"Check" is some particular task you want to run to check something. Checks can be of some predefined types, like:
ping: a ping to a host
HTTP: call an URL and check the HTTP status.
"Instance" is an actual instance of a machine that will have some service running. It will have a IP/hostname.
"Service" is the macro stuff you want to monitor.
Am running a nginx webserver locally, will enable check for that.
After login go to checks Tab and click the "+" icon and rest add similar like below,
After saved the configuration,
Step 6: Test the Alert
Lets stop the nginx webserver and see if we are getting an email.
Successfully received an email.
Ref: https://github.com/cabotapp/docker-cabot
What is Hypertrace: It is a cloud-native distributed tracing based observability platform that gives visibility into any environment distributed system. It converts distributed trace data into relevant insight for everyone.
Hypertrace supports all standard instrumentation libraries and agents. If your application is already instrumented with OpenTelemetry, Jaeger or Zipkin, Hypertrace will work out of the box with your application telemetry data.
Requirements:
- Docker engine & Docker compose
Step 1: Clone and Install hypertrace,
# git clone https://github.com/hypertrace/hypertrace.git
# cd hypertrace/docker
# docker-compose pull
# docker-compose up --force-recreate
Step 2: Access Hypertrace Dashboard
Once step 1 is completed successfully, We can access the Hypertrace dashboard from the browser.
URL: http://IP of the VM:2020
Step 3: Sample Application test with Hypertrace
The above-cloned repo is having a sample application which is having frontend and backend APIs and it sends data to Zipkin. Let's check that.
# cd hypertrace/docker
# docker-compose -f docker-compose-zipkin-example.yml up
Once the containers are up, we can check the frontend in the browser by,
URL: http://IP of the VM:8081
Step 4: View the metrics in Hypertrace
Hit the frontend URL multiple times and see Hypertrace dashboard to see the data.
We can see the list of APIs, Errors, latency, etc. Here are few screenshots.
Ref. : https://github.com/hypertrace/hypertrace
https://docs.hypertrace.org/
In this blog, We will see step-by-step of k3s installation in Centos 8.
K3s, It is a lightweight Kubernetes container service which is created by Rancher and it is a simplified version of K8s with less than 100MB of binary size. It uses sqlite3 as a backend storage and etcd3, MySQL, Postgres database options are available. It is secured by default with standard practices.
Requirements:
Linux servers: 2
OS: Centos 8.5
Step 1: Update OS and install Kubectl
Here am using one master and node to do the installation.
Master: k3smaster.devopsart.com (10.12.247.54)
Worker Node: k3snode1.devopsart.com (10.12.247.55)
Go to each server and run "yum update" to get the latest packages and do a reboot.
Make sure a firewall is enabled between these two Linux servers.
Install Kubectl,
Go to the root path of the master node and run the below commands,
curl -LO https://dl.k8s.io/release/v1.26.0/bin/linux/amd64/kubectl
chmod +x kubectl
cp kubectl /usr/bin
check the kubectl version to make sure the command is working or not,
kubectl version --short
Go to the Master and worker nodes and make sure the host file is updated with the below details if the DNS is not resolving.
Step 2: Install K3s in the Master server
Use below command in master server to install k3s,
curl -sfL https://get.k3s.io | sh -
Once successfully installed, you can run below to check the k3s service status,
systemctl status k3s
We can see the k3s config file in the below path in Master,
cat /etc/rancher/k3s/k3s.yaml
Next, we need to copy the config file to use in kubectl.
mkdir ~/.kube
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
Then check,
kubectl get nodes
K3s master node is successfully installed. next will do the worker node installation.
Step 3 : Install k3s agent in WorkerNode
Go to the worker node and execute the below command,
curl -sfL https://get.k3s.io | K3S_URL=${k3s_Master_url} K3S_TOKEN=${k3s_master_token} sh -
k3s_Master_url = https://k3smaster.devopsart.com:6443
k3s_master_token= "Get the token from the master by executing the below command"
cat /var/lib/rancher/k3s/server/node-token
Once the installation is successful, we can check the k3s agent status by executing the below command,
systemctl status k3s-agent.service
Step 4: K3s Installation validation
Go to the master node and check new worker node is listed or not by the below command,
kubectl get nodes
Great!, worker node is attached successfully with the k3s master.
Step 5: Deploy the Nginx webserver in K3s and validate,
Am using helm chart installation for this purpose.
Helm install,
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
cp -r /usr/local/bin/helm /usr/bin/helm
Add Bitnami repo in Helm,
helm repo add bitnami https://charts.bitnami.com/bitnami
Deploy Nginx webserver by using below helm command,
helm install nginx-web bitnami/nginx
Check the pod status,
kubectl get pods -o wide
The Nginx pod is running fine now.
Access Nginx webserver,
I took the clusterIP of the Nginx service and tried to access it, and it's working.
That's all, K3s is successfully in centos 8.5 and deployed Nginx webserver and validated.
If you are using Kubernetes, then you will definitely know about helm charts. Helm chart is nothing but a deployment tool, it helps us to deploy, upgrade, and manage our applications in the Kubernetes cluster.
Recently Komodo announced an open-source dashboard for helm. Today we will see how to install and use it.
Requirements :
1. Kubernetes cluster
2. Helm
Steps :
Step 1: Installation
Step 1.1: Overview of my existing cluster setup:
Am running minikube version 1.26.0 and the helm version is 3.9.2. Am going to use this setup for this installation.
Step 1.2: Installation of helm dashboard,
execute the below command where the helm is installed,
# helm plugin install https://github.com/komodorio/helm-dashboard.git
Then execute the below command to start the helm dashboard,
# helm dashboard
If your port 8080 is already used, we can change it by using the environment variable as "HD_PORT".
If you want to run it in debug mode, set DEBUG=1 in the environment variable.
If you see the bey default helm dashboard will check for checkov and trivy plugins to use these tools for scanning purposes.
Step 2: Access the helm dashboard,
Go to the browser and access the dashboard, http://localhost:8080
Now, we can see the already installed applications through helm, which we have seen in step 1 by using helm commands.
We can see the list of helm repositories from the UI,
Whatever we do from the helm command, Now we can do it from UI itself. We can view the existing manifest, upgrade, uninstall, etc.
We can install the application from the available helm repositories from the UI.
And by default, this dashboard detects checkov and trivy scanners. And this dashboard uses these tools to scan the manifest during deployment.
That's all, the helm dashboard is installed successfully and able to view the deployment.
Popeye - It's a utility which scans K8s clusters and reports potential issues in deployed resources and configurations.
Note: This is a read-only tool, it will not make any changes in the K8s cluster.
In this blog, we will see how to install it and use this tool
Requirements:
1. K8s cluster
2. Linux VM
Step 1: Install the Popeye tool
Use the below command to install in MacBook,
brew install derailed/popeye/popeye
For other OS use the below link to install it.
You can install with "krew" as well by the using below command,
kubectl krew install popeye
Step 2: Run the Popeye tool to scan the Kubernetes cluster,
Note: Popeye CLI works like the kubectl command, so make sure you have the Kube config in local to connect to the cluster.
This command runs in all nodes and namespaces by default,
popeye
In the above output, you can see the overall status of the cluster and its configurations and it gives the score as well at the end. The current score is 87% and a B rank. To improve the score, we need to work on the suggestions which are recommended.
If you need to run a specific namespace and configuration you can use the below command,
For the specific namespace,
popeye -n devopsart
For specific resource types, like ConfigMaps,
popeye -n devopsart -s configmap
For specific deployments,
popeye -n devopsart -s deploy
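The per-section scans above can be wrapped in a small helper. This is only a sketch: the section names mirror the `-s` examples, and the `command -v` guard skips the actual scan when Popeye is not installed on the machine.

```shell
#!/bin/sh
# Sketch: scan several resource sections of one namespace in a single pass.
scan_sections() {
  ns="$1"; shift
  for section in "$@"; do
    echo "scanning $ns section: $section"
    # Only invoke Popeye when it is actually installed on this machine
    if command -v popeye >/dev/null 2>&1; then
      popeye -n "$ns" -s "$section"
    fi
  done
}

scan_sections devopsart configmap deploy service
```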
Step 3: Generate an HTML report and save it locally
To save the report in the current directory use the below command,
POPEYE_REPORT_DIR=$(pwd) popeye --save
Then run the required popeye command, and the scan result will be saved in the current directory.
To save the report in HTML use the below command,
POPEYE_REPORT_DIR=$(pwd) popeye --save --out html --output-file report.html
And run the required popeye command and see the report.html file in the browser,
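The save and HTML options above can be combined into a small wrapper script. A sketch, assuming `POPEYE_REPORT_DIR` controls the output directory as shown above; the dated sub-directory is my own addition.

```shell
#!/bin/sh
# Sketch: write a dated HTML Popeye report under ./popeye-reports/.
REPORT_DIR="$(pwd)/popeye-reports/$(date +%Y-%m-%d)"
mkdir -p "$REPORT_DIR"

# Run the scan only when Popeye is available on this machine
if command -v popeye >/dev/null 2>&1; then
  POPEYE_REPORT_DIR="$REPORT_DIR" popeye --save --out html --output-file report.html
fi
echo "report directory: $REPORT_DIR"
```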
That's all, we have successfully installed the "Popeye" tool and validated it with the K8s cluster. This helps to improve our K8s cluster configuration and make our cluster more stable.
Today we will see a step-by-step installation and demonstration of an open-source tool called "Nova", which checks for outdated or deprecated versions in the Kubernetes cluster.
Nova: An open-source tool that scans the components deployed via helm charts in your Kubernetes cluster and compares each currently deployed version against the latest version available in the Helm repositories.
Requirements:
1. Kubernetes cluster
2. Helm in terminal
3. Helm repository(I will use Bitnami repo)
4. Golang in terminal
5. kubectl in terminal
6. Any machine which connects to the K8s cluster(Mine is Macbook)
Step 1: Installation of Nova
Execute the below commands to install it on a MacBook,
brew tap fairwindsops/tap
brew install fairwindsops/tap/nova
You can check the below link for other OS,
https://nova.docs.fairwinds.com/installation
You can also fetch the Nova package with Go using the below command,
go get github.com/fairwindsops/nova
Step 2: How to use Nova.
Make sure you are able to connect to the k8s cluster from the machine where you installed the Nova tool.
If you see the above screenshot, there are no helm charts installed, let's add the bitnami repo and try to install an older version of the Nginx webserver and we will try with nova findings.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo list
helm search repo nginx -l|head -n10
Next, install an older version of Nginx,
helm install nginx-web bitnami/nginx --version=12.0.5
Now we have installed Nginx chart version 12.0.5 via helm. Let's check the nova command now.
The below command will output the installed versions and whether they are outdated or deprecated.
nova find
From the above image, you can see the latest version and installed version details.
The below command gives some more details, such as the namespace, helm version, etc.
nova find --wide
The below command reports container image versions that are outdated in the cluster,
nova find --containers
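Nova's table output can also act as a simple CI gate. This is a hedged sketch: it assumes (not confirmed above) that `nova find` prints `true` in an Outdated column for stale releases, so a plain word-grep can flag them; the sample row is hypothetical.

```shell
#!/bin/sh
# Sketch: flag the build when nova reports any outdated helm release.
check_outdated() {
  # $1 is nova's table output; succeeds when the word "true" appears
  echo "$1" | grep -qw "true"
}

if command -v nova >/dev/null 2>&1; then
  output=$(nova find)
else
  # Hypothetical sample row, for illustration only
  output="nginx-web 12.0.5 13.2.1 true"
fi

if check_outdated "$output"; then
  echo "outdated helm releases detected"
fi
```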
That's all, we have successfully installed the Nova tool and validated the deployed version.
Today we will see a tool called "Polaris" which helps to keep your Kubernetes cluster running smoothly by enforcing best practices.
Requirements:
1. Kubernetes(K8s) cluster
2. A machine (mine is a Mac) to install Polaris on, with access to the cluster
Step 1: Install Polaris
Execute the following commands in the terminal,
brew tap reactiveops/tap
brew install reactiveops/tap/polaris
polaris dashboard --port 8080
Make sure you are able to access the K8s cluster from the machine where you installed Polaris.
To install via helm charts use below commands,
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm upgrade --install polaris fairwinds-stable/polaris --namespace polaris --create-namespace
kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80
Step 2: Polaris Dashboard
Next, go to the browser using http://127.0.0.1:8080
The overview gives you the following details.
Polaris can also automatically fix issues in local YAML manifests with the fix subcommand:
polaris fix --files-path ./devopsart/ --checks=all
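For CI pipelines, Polaris also offers a non-interactive `audit` subcommand over local manifests. A sketch, with the caveat that the exact flag names (`--audit-path`, `--format`, `--set-exit-code-below-score`) should be verified against your Polaris version, and `./devopsart` is just the example path used above.

```shell
#!/bin/sh
# Sketch: fail the pipeline when the Polaris score drops below a threshold.
MANIFEST_DIR="./devopsart"
MIN_SCORE=80

if command -v polaris >/dev/null 2>&1; then
  # Exits non-zero when the overall score is below MIN_SCORE
  polaris audit --audit-path "$MANIFEST_DIR" --format pretty \
    --set-exit-code-below-score "$MIN_SCORE"
else
  echo "polaris not installed; would audit $MANIFEST_DIR with minimum score $MIN_SCORE"
fi
```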
Monitoring and alerting are among the most important concerns in the DevOps world. Today we are going to see how to install the Cabot tool for monitoring and alerting.
Requirements:
1. docker and docker-compose
2. Git tool
3. Graphite server
Step 1: Clone the Cabot repository locally.
git clone https://github.com/cabotapp/docker-cabot
Step 2: Update the settings based on our needs.
cd docker-cabot/conf
mv production.env.example production.env
Step 3: Install Cabot via docker-compose
cd docker-cabot
docker-compose up -d
Wait a few minutes until the containers come up.
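Instead of waiting blindly, you can poll Cabot's port (5000, as used in the next step) until it answers. A small sketch; the attempt count is deliberately low here and can be raised in practice.

```shell
#!/bin/sh
# Sketch: poll a URL until it responds, or give up after N attempts.
wait_for_url() {
  url="$1"; attempts="$2"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fs "$url" >/dev/null 2>&1; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "not up after $attempts attempts: $url"
  return 1
}

wait_for_url "http://localhost:5000/" 3 || true
```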
Step 4: Login to Cabot portal
URL : http://localhost:5000/
Initially, it will ask you to set up a username and login details.
Step 5: Setup a simple service and checks.
There are three monitoring concepts in Cabot: instance, check, and service.
"Check" is a particular task you run to verify something. Checks come in predefined types, such as:
ping: a ping to a host
HTTP: call a URL and check the HTTP status.
"Instance" is an actual machine that runs some service. It will have an IP/hostname.
"Service" is the higher-level component you want to monitor.
I am running an nginx webserver locally and will enable a check for it.
After logging in, go to the Checks tab, click the "+" icon, and fill in the rest similar to the below,
After saving the configuration,
Step 6: Test the Alert
Let's stop the nginx webserver and see if we get an email.
Successfully received an email.
Ref: https://github.com/cabotapp/docker-cabot
What is Hypertrace: It is a cloud-native, distributed-tracing-based observability platform that gives visibility into distributed systems in any environment. It converts distributed trace data into relevant insights for everyone.
Hypertrace supports all standard instrumentation libraries and agents. If your application is already instrumented with OpenTelemetry, Jaeger or Zipkin, Hypertrace will work out of the box with your application telemetry data.
Requirements:
- Docker engine & Docker compose
Step 1: Clone and Install hypertrace,
# git clone https://github.com/hypertrace/hypertrace.git
# cd hypertrace/docker
# docker-compose pull
# docker-compose up --force-recreate
Step 2: Access Hypertrace Dashboard
Once step 1 is completed successfully, We can access the Hypertrace dashboard from the browser.
URL: http://&lt;VM-IP&gt;:2020
Step 3: Sample Application test with Hypertrace
The cloned repo contains a sample application with a frontend and backend APIs, which sends trace data to Zipkin. Let's check that.
# cd hypertrace/docker
# docker-compose -f docker-compose-zipkin-example.yml up
Once the containers are up, we can check the frontend in the browser by,
URL: http://&lt;VM-IP&gt;:8081
Step 4: View the metrics in Hypertrace
Hit the frontend URL multiple times and then check the Hypertrace dashboard to see the data.
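Generating that traffic can be scripted; a sketch, where the frontend URL is a placeholder you should replace with your VM's address from step 3.

```shell
#!/bin/sh
# Sketch: send a handful of requests to the sample frontend so traces
# show up in Hypertrace. Replace the placeholder URL with your VM's IP.
FRONTEND_URL="http://localhost:8081"
for i in 1 2 3 4 5; do
  # Ignore failures so the loop keeps going even if the app is not up yet
  curl -s -o /dev/null "$FRONTEND_URL" || true
  echo "request $i sent"
done
```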
We can see the list of APIs, errors, latency, etc. Here are a few screenshots.
Ref. : https://github.com/hypertrace/hypertrace
https://docs.hypertrace.org/