
How to Set Up Prometheus Monitoring on a Kubernetes Cluster?

First of all, you need to know what Prometheus is. Prometheus is a monitoring platform that absorbs all sorts of data and makes it usable for complex workloads. It can also take in massive amounts of data and make it more accessible.

Why Should You Use Prometheus?

Well, to be honest, Prometheus is useful in more than one way. You can monitor your servers with it, and VMs and databases can be monitored as well. Prometheus can also be used to draw data on application and infrastructure performance.
 
Now, how would you set up Prometheus monitoring on a Kubernetes cluster? As with every process or solution, a few prerequisites need to be in place first.

Prerequisites

1. Kubernetes cluster
2. A command-line interface configured with kubectl.
 

How to Monitor The Kubernetes Cluster With Prometheus?

Prometheus is a system built to make monitoring easy. It works by sending out a scrape, which is an HTTP request made according to the configuration in its deployment file. The response to this scrape is parsed and stored in Prometheus' storage, where it is all kept neatly together until you need it.
 
That storage is a custom time-series database that can handle a massive amount of data. A single Prometheus server can monitor thousands of machines simultaneously. For monitoring to work well, however, the data must be properly exposed and formatted so that Prometheus can collect and process it easily.
 
To get full control over the data, you will have to use an exporter. So what is an exporter? An exporter is a piece of software placed next to your application; it accepts HTTP requests from Prometheus and makes sure the data it provides is in a supported format. Once your applications are equipped to provide data to Prometheus, you still have to tell Prometheus where to look for that data.
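As a rough illustration (the metric name and labels here are made up for the example), the data an exporter returns to a scrape is plain text in the Prometheus exposition format:

“# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="400"} 3”

Each line is a metric name, an optional set of labels, and a value; that is the format Prometheus knows how to parse.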
 
Prometheus discovers the targets to scrape through service discovery. If you want to keep track of the state of elements and react to changes, don’t worry, because Kubernetes already provides labels and annotations for that. Prometheus discovers its targets through the Kubernetes API.

These are the resource roles you can expose to Prometheus; a minimal discovery example follows the list.

• Node
• Endpoints
• Service
• Pod
• Ingress
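As a minimal sketch (the job name is hypothetical), each of these roles is selected in a scrape job through a kubernetes_sd_configs block, which you will see used throughout the configuration below:

“scrape_configs:
- job_name: 'example-services'   # hypothetical job name
  kubernetes_sd_configs:
  - role: service   # can be node, endpoints, service, pod, or ingress”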
 
Prometheus collects metrics at the machine level, but retrieving application information is done separately. To monitor more in-depth machine data, you need to use the node exporter, and metrics about cgroups (containers) must also be collected separately.
 
Kubernetes already embeds the cAdvisor exporter, so it can be readily exposed on the cluster. Once the data has been gathered, you can access it using the PromQL query language or export it to graphical interfaces like Grafana. You can use Alertmanager to send relevant alerts about your server infrastructure.
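For example, once the cAdvisor metrics are being scraped, a PromQL query along these lines (a sketch; depending on your Kubernetes version the label may be pod or pod_name) shows per-pod CPU usage in the Prometheus UI or Grafana:

“sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)”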
 

How To Install Prometheus Monitoring On Kubernetes

Prometheus can be installed on the Kubernetes cluster using a set of YAML files. These files contain the configuration, permissions, and services that allow Prometheus to access resources and pull information by scraping the elements of your cluster.
 
YAML is convenient in more than one way: the files can easily be tracked, edited, and reused. Now let’s move on to creating a monitoring namespace.

Create a Monitoring Namespace

All Kubernetes resources are started in a namespace; the system uses the default namespace unless one is specified. Specifying a dedicated monitoring namespace lets you control the cluster monitoring process with ease. Let’s name the namespace “monitoring” for simplicity. Remember that the namespace name needs to be a DNS-compatible label.
 
Let’s get started; there are two ways to create the monitoring namespace.

Option 1:

Enter the simple command below on your command line to create the monitoring namespace directly:
 
“kubectl create namespace monitoring”

Option 2:

For this option, you will have to create and apply a .yml file:
 
“apiVersion: v1
kind: Namespace
metadata:
  name: monitoring”
 
For future use, the second option is considered more convenient. You can apply the file to your cluster by entering the command below.
 
“kubectl apply -f monitoring.yml”
 
Regardless of which method you used, list the existing namespaces with the following command:
 
“kubectl get namespaces”
 

How Will You Configure Prometheus?

To configure Prometheus, you need a handful of elements, which are described in this section. They can be implemented as individual .yml files applied in sequence; a sketch of that sequence follows the list below. Each file is submitted with kubectl apply, which instructs kubectl to send the request to the Kubernetes API server.

The files need to contain the following components:
 
1. Permissions that allow Prometheus to access all pods and nodes.
2. The Prometheus ConfigMap, which defines which elements should be scraped.
3. Deployment instructions for Prometheus.
4. A service that provides access to the Prometheus interface.
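As a sketch only, with hypothetical file names (name yours however you like, or keep everything in a single prometheus.yml as this guide does later), the sequence could look like this:

“kubectl apply -f prometheus-rbac.yml        # cluster role, service account, role binding
kubectl apply -f prometheus-config.yml      # the ConfigMap with the scrape rules
kubectl apply -f prometheus-deployment.yml  # the Prometheus deployment itself
kubectl apply -f prometheus-service.yml     # the service exposing the web interface”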

Introduction To Cluster Role, Service Account, And Cluster Role Binding

If you want to retrieve cluster-wide data, you have to give Prometheus access to all the resources of the cluster. Let’s start the process by defining the cluster role.

1.   Define The Cluster Role

“apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]”

2.   Create Service Account

“apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitoring”

3.   Apply Cluster Role Binding 

“apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring”
 
Applying these files grants Prometheus cluster-wide access from the monitoring namespace.
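Once these objects have been applied (for example with the kubectl apply sequence sketched earlier), you can sanity-check them with standard kubectl commands; this is only a quick verification sketch:

“kubectl get serviceaccount prometheus -n monitoring
kubectl get clusterrole prometheus
kubectl get clusterrolebinding prometheus”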
 

Prometheus ConfigMap

This section is where the scraping process is set up. You can customize it according to your monitoring requirements and cluster setup.

1.   Global Scrape Rule

“apiVersion: v1
data:
  prometheus.yml: |
    global:
      scrape_interval: 10s”

2.   Scrape Node

The node role helps you find the nodes that make up your Kubernetes cluster.
 
2.1 Scrape Kubelet
“    scrape_configs:
    - job_name: 'kubelet'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true  # Required with Minikube.”
 
2.2 Scrape cAdvisor

Scraping cAdvisor is important for receiving information about containers, whereas the kubelet only provides information about itself. To collect container data, the configuration shown below can be used.
 
“    - job_name: 'cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true  # Required with Minikube.
      metrics_path: /metrics/cadvisor”

3.   Scrape API server

The endpoints role targets each application instance through the endpoints of a service. This particular section is used to scrape the API servers.
 
“    - job_name: 'k8apiserver'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true  # Required if using Minikube.
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https”

4.   Scrape Services

This section picks up the Kubernetes services in the default namespace, while dropping the API server endpoint that is already scraped above.
 
“    - job_name: 'k8services'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_namespace
        - __meta_kubernetes_service_name
        action: drop
        regex: default;kubernetes
      - source_labels:
        - __meta_kubernetes_namespace
        regex: default
        action: keep
      - source_labels: [__meta_kubernetes_service_name]
        target_label: job”

5.   Pod Role

Add the following configuration for the pod role. Note that the last three lines close the ConfigMap that was opened in the global scrape rule section.
 
“    - job_name: 'k8pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        regex: metrics
        action: keep
      - source_labels: [__meta_kubernetes_pod_container_name]
        target_label: job
kind: ConfigMap
metadata:
  name: prometheus-config”

6. Configure The Deployment

You will have to define the Deployment and the number of replicas you need.
 
“apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      serviceAccountName: prometheus
      containers:
      - name: prometheus
        image: prom/prometheus:v2.1.0
        ports:
        - containerPort: 9090
          name: default
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-config”

7.   Define Node Port

Finally, Prometheus is running in the cluster. Add this final section to your prometheus.yml file; it gives you access to the data Prometheus has collected.
 
“kind: Service
apiVersion: v1
metadata:
  name: prometheus
spec:
  selector:
    app: prometheus
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9090
    targetPort: 9090
    nodePort: 30909”
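If your cluster has no load-balancer integration (a bare Minikube setup, for instance), a reasonable alternative, sketched here, is to switch the service type to NodePort and keep the same node port:

“spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
  - protocol: TCP
    port: 9090
    targetPort: 9090
    nodePort: 30909”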
 

Applying The prometheus.yml File

The command below applies the file and delivers the configuration data to the pods of the deployment.
 
“kubectl apply -f prometheus.yml”
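Note that the ConfigMap, Deployment, and Service manifests above do not set namespace: monitoring, so by default they would land in the default namespace. If you want them in the monitoring namespace, one option, sketched below, is to pass the namespace explicitly and then check that the pod comes up:

“kubectl apply -f prometheus.yml -n monitoring
kubectl get pods -n monitoring”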
 
You can access the Prometheus web interface through an individual node’s URL on the node port defined above, for example:
 
“http://192.153.99.106:30909”
 
Once you open the URL shown above, you have successfully gained access to Prometheus monitoring.
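If the node IP is not directly reachable, kubectl port-forwarding is another way to reach the interface; this assumes the service is named prometheus and lives in the monitoring namespace, as in the manifests above:

“kubectl port-forward -n monitoring svc/prometheus 9090:9090
# Then open http://localhost:9090 in a browser.”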
 
 
Courtesy: https://phoenixnap.com/kb/prometheus-kubernetes-monitoring


Conclusion 

Now that you’re using Prometheus Monitoring on a Kubernetes cluster, you can track overall system behavior and analyze your microservices-based architecture. No matter how large and complex your operations are, it’s vital to have first-class tools like this in place for ensuring the smooth and efficient function of each individual service within an application.

Harnil Oza is the CEO of HData Systems, a data science company, and Hyperlink InfoSystem, a top mobile app development company in Canada, the USA, the UK, and India, with a team of app developers who deliver mobile solutions mainly on the Android and iOS platforms; it is also listed as one of the top app development companies by leading research platforms.

