# Integrate with Prometheus and Visualize with Grafana and Kiali
| Category | Value |
|---|---|
| Signal types | metrics |
| Backend type | custom local |
| OTLP-native | yes |
Learn how to configure the Telemetry module to ingest metrics into a custom Prometheus instance deployed with the kube-prometheus-stack. Additionally, install Grafana and Kiali for visualization.
## Prerequisites
- Kyma as the target deployment environment.
- The Telemetry module is added. For details, see Quick Install.
- If you want to use Istio metrics, make sure that the Istio module is added. This is mandatory for use with Kiali.
- Kubernetes CLI (kubectl) (see Install the Kubernetes Command Line Tool).
- UNIX shell or Windows Subsystem for Linux (WSL) to execute commands.
WARNING
- This guide describes a basic setup that you should not use in production. Typically, a production setup needs further configuration, like optimizing the amount of data to be collected and the required resource footprint of the installation. To achieve qualities like high availability, scalability, or durable long-term storage, you need a more advanced setup.
- This example uses the latest Grafana version, which is under AGPL-3.0 and might not be free of charge for commercial usage.
## Context
The Telemetry module supports shipping metrics from applications and the Istio service mesh to Prometheus using the OpenTelemetry protocol (OTLP). Prometheus is a widely used backend for collection and storage of metrics. To provide an instant and comprehensive monitoring experience, the kube-prometheus-stack Helm chart bundles Prometheus together with Grafana and the Alertmanager. Furthermore, it brings community-driven best practices on Kubernetes monitoring, including the components node-exporter and kube-state-metrics.
Because the OpenTelemetry community does not yet provide a full-blown Kubernetes monitoring solution (additional tools such as node-exporter are still required), this guide shows how to combine the two worlds: application and Istio metrics are integrated using the Telemetry module, while Kubernetes monitoring is based on the features of the bundle.
First, you deploy the kube-prometheus-stack. Then, you configure the Telemetry module to start metric ingestion and you deploy the sample application to illustrate custom metric consumption. Finally, you install Kiali to illustrate Istio metrics.
## Procedure
### Install the kube-prometheus-stack
Export your namespace as a variable. Replace the `{namespace}` placeholder in the following command and run it:

```bash
export K8S_PROM_NAMESPACE="{namespace}"
```

If you haven't created the namespace yet, now is the time to do so:

```bash
kubectl create namespace $K8S_PROM_NAMESPACE
```

Note: This namespace must have no Istio sidecar injection enabled; that is, there must be no `istio-injection` label present on the namespace. The Helm chart applies Kubernetes Jobs that fail when Istio sidecar injection is enabled.
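To double-check that the namespace is free of sidecar injection (an optional sanity check, not part of the chart setup itself), list its labels:

```bash
# The output must not contain an istio-injection label
kubectl get namespace $K8S_PROM_NAMESPACE --show-labels
```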
Export the Helm release name that you want to use. It can be any name, but be aware that all resources in the cluster will be prefixed with that name. Run the following command:

```bash
export HELM_PROM_RELEASE="prometheus"
```

Update your Helm installation with the required Helm repositories:
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add kiali https://kiali.org/helm-charts
helm repo update
```

Run the Helm upgrade command, which installs the chart if it's not present yet. At the end of the command, change the Grafana admin password to a value of your choice:
```bash
helm upgrade --install -n ${K8S_PROM_NAMESPACE} ${HELM_PROM_RELEASE} prometheus-community/kube-prometheus-stack -f https://raw.githubusercontent.com/kyma-project/telemetry-manager/main/docs/user/integration/prometheus/prometheus-values.yaml --set grafana.adminPassword=myPwd
```
You can use the values.yaml provided with this guide (the `prometheus-values.yaml` referenced in the command above), which contains customized settings deviating from the default settings, or create your own. The provided values.yaml covers the following adjustments:

- Basic Istio setup to secure communication between Prometheus, Grafana, and Alertmanager
- Native OTLP receiver enabled for Prometheus
- Basic configuration of data persistence with retention
- Basic resource limits for the involved components
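After the `helm upgrade` command completes, you can optionally confirm that the release was installed:

```bash
# The release status should report "deployed"
helm status -n ${K8S_PROM_NAMESPACE} ${HELM_PROM_RELEASE}
```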
### Verify the kube-prometheus-stack
If the stack was deployed successfully, you see several Pods coming up in the namespace, especially Prometheus, Grafana, and Alertmanager. Ensure that all Pods reach the "Running" state.
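Instead of checking manually, you can block until all Pods report readiness (a convenience command, not part of the original setup):

```bash
# Wait up to five minutes for all Pods in the namespace to become Ready
kubectl -n ${K8S_PROM_NAMESPACE} wait --for=condition=Ready pod --all --timeout=300s
```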
Browse the Prometheus dashboard and verify that all "Status > Targets" are healthy. To see the dashboard on `http://localhost:9090`, run:

```bash
kubectl -n ${K8S_PROM_NAMESPACE} port-forward $(kubectl -n ${K8S_PROM_NAMESPACE} get service -l app=kube-prometheus-stack-prometheus -oname) 9090
```

Browse the Grafana dashboard and verify that the dashboards are showing data. The user `admin` is preconfigured in the Helm chart; the password was provided in your `helm upgrade` command. To see the dashboard on `http://localhost:3000`, run:

```bash
kubectl -n ${K8S_PROM_NAMESPACE} port-forward svc/${HELM_PROM_RELEASE}-grafana 3000:80
```
### Activate a MetricPipeline
Apply a MetricPipeline resource that configures the output with the local Prometheus URL and enables the inputs for collecting Istio metrics as well as application metrics from workloads annotated with Prometheus annotations:
```bash
SERVICE=$(kubectl -n ${K8S_PROM_NAMESPACE} get service -l app=kube-prometheus-stack-prometheus -ojsonpath='{.items[*].metadata.name}')
kubectl apply -f - <<EOF
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: MetricPipeline
metadata:
  name: prometheus
spec:
  input:
    prometheus:
      enabled: true
      namespaces:
        exclude:
          - kyma-system
    istio:
      enabled: true
  output:
    otlp:
      protocol: http
      endpoint:
        value: "http://${SERVICE}.${K8S_PROM_NAMESPACE}:9090/api/v1/otlp"
EOF
```

Verify the MetricPipeline health by checking that all status attributes are "True":
```sh
kubectl get metricpipeline prometheus
```
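If any of the attributes is not "True", you can inspect the status conditions in detail:

```sh
# Shows the full status section, including condition messages
kubectl describe metricpipeline prometheus
```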
### Deploy the Sample Application
Deploy the sample app:
```bash
kubectl apply -f https://raw.githubusercontent.com/kyma-project/telemetry-manager/main/docs/user/integration/sample-app/deployment/deployment.yaml -n $K8S_PROM_NAMESPACE
```

Verify that the Deployment of the sample app is healthy:
```sh
kubectl -n $K8S_PROM_NAMESPACE rollout status deployment sample-app
```
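You can also list the sample app's Pods directly (assuming the `app.kubernetes.io/name=sample-app` label that the cleanup step uses):

```bash
# All sample-app Pods should be in the "Running" state
kubectl -n $K8S_PROM_NAMESPACE get pods -l app.kubernetes.io/name=sample-app
```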
### Verify the Setup in Grafana
Port forward to Grafana once more.
In the Explore view, search for metrics with the prefix `istio_`, which are collected by the MetricPipeline using the `istio` input.

Optionally, import the Istio Grafana dashboards (see Istio: Import from grafana.com into an existing deployment) and verify that the dashboards are showing data.

In the Explore view, search for the metric `cpu_temperature_celsius`, which is pushed by the sample app to the gateway managed by the MetricPipeline.
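If you prefer the command line over the Explore view, you can also query the metric through the Prometheus HTTP API (assuming the Prometheus port-forward from the verification step is still active):

```bash
# Returns a JSON result with the current sample values
curl -s 'http://localhost:9090/api/v1/query?query=cpu_temperature_celsius'
```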
### Install Kiali
Kiali is a visualization tool for the Istio service mesh. It relies on data from the Kubernetes API server as well as the Istio metrics stored in a Prometheus instance. It can also integrate dashboards served by a Grafana instance.
Kiali is best installed with the Kiali Operator using Helm:
Run the Helm upgrade command, which installs the chart if it's not present yet. Use a dedicated release name (here, `kiali-operator`) so that you don't overwrite the kube-prometheus-stack release:

```bash
export HELM_KIALI_RELEASE="kiali-operator"
helm upgrade --install -n ${K8S_PROM_NAMESPACE} ${HELM_KIALI_RELEASE} kiali/kiali-operator -f https://raw.githubusercontent.com/kyma-project/telemetry-manager/main/docs/user/integration/prometheus/kiali-values.yaml
```
You can use the values.yaml provided with this guide (the `kiali-values.yaml` referenced in the command above), which contains customized settings deviating from the default settings, or create your own. The provided values.yaml covers the following adjustments:

- Creates a default Kiali resource for the Kiali Operator
- Enables anonymous access (do not use this for production setups!)
- Configures the Grafana integration
- Configures the Prometheus integration
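Before proceeding, you can check that the operator has created the Kiali server Pod (exact Pod names may differ depending on your values):

```bash
# Both the kiali-operator Pod and the kiali server Pod should appear
kubectl -n ${K8S_PROM_NAMESPACE} get pods | grep -i kiali
```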
### Verify the Setup in Kiali
Browse the Kiali dashboard and verify that the dashboards are showing data. To see the dashboard on `http://localhost:20001`, run:

```bash
kubectl -n ${K8S_PROM_NAMESPACE} port-forward svc/kiali 20001:20001
```
## Cleanup
To remove the installation from the cluster, call Helm for both releases:

```bash
helm delete -n ${K8S_PROM_NAMESPACE} ${HELM_PROM_RELEASE}
helm delete -n ${K8S_PROM_NAMESPACE} ${HELM_KIALI_RELEASE}
```

To remove the MetricPipeline, call kubectl:

```bash
kubectl delete metricpipeline prometheus
```

To remove the example app and all its resources from the cluster, run the following command:
```bash
kubectl delete all -l app.kubernetes.io/name=sample-app -n $K8S_PROM_NAMESPACE
```
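If you created the namespace only for this guide and nothing else uses it, you can remove it as well:

```bash
# Deletes the namespace and everything remaining in it
kubectl delete namespace $K8S_PROM_NAMESPACE
```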