Collecting Logs
With the Telemetry module, you can observe and debug your applications by collecting, processing, and exporting logs. To begin collecting logs, you create a LogPipeline resource. It automatically collects OTLP logs and application logs from the stdout/stderr channel. You can also activate Istio log collection.
Overview
A LogPipeline is a Kubernetes custom resource (CR) that configures log collection for your cluster. When you create a LogPipeline, the Telemetry Manager automatically deploys the necessary components (for details, see Logs Architecture):
- An OTLP Gateway that provides a central OTLP endpoint for receiving logs pushed from your applications.
- A Log Agent that runs on each cluster node and collects logs written to `stdout` and `stderr` by your application containers.
The pipeline enriches all collected logs with Kubernetes metadata and transforms them into the OTLP format before sending them to your chosen backend.
Log collection is optional. If you don't create a LogPipeline, no logs are collected.
Prerequisites
Before you can collect logs from a component, the component must emit them. Typically, it uses a logging framework for its language runtime (like Node.js) and prints the logs to the `stdout` or `stderr` channel (see Kubernetes: How nodes handle container logs). Alternatively, you can use the OTel SDK to push logs in the OTLP format.

If you want to emit logs to the `stdout`/`stderr` channel, use structured logs in JSON format with a logger library like Log4j. Then, the Log Agent can parse your logs and map all JSON attributes to log attributes, which a backend can use.

If you prefer the push-based alternative with OTLP, also use a logger library like Log4j. However, you must additionally instrument that logger and bridge it to the OTel SDK. For details, see OpenTelemetry: New First-Party Application Logs.
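For example, a structured JSON log line written to `stdout` might look like the following (an illustrative sample; the attribute names such as `orderId` are not mandated by the Telemetry module):

```json
{"level":"info","timestamp":"2024-05-02T10:15:00Z","message":"order created","orderId":"1234","service":"checkout"}
```

Because the line is valid JSON, the Log Agent can parse it and attach fields like `level` and `orderId` as log attributes, so your backend can filter and aggregate on them.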
Minimal LogPipeline
For a minimal setup, you only need to create a LogPipeline that specifies your backend destination (see Integrate With Your OTLP Backend):
```yaml
apiVersion: telemetry.kyma-project.io/v1beta1
kind: LogPipeline
metadata:
  name: backend
spec:
  output:
    otlp:
      endpoint:
        value: http://myEndpoint:4317
```

By default, this minimal pipeline enables the following types of log collection:

- Application logs: Collects `stdout` and `stderr` logs from all containers running in non-system namespaces. Logs from system namespaces (such as `kyma-system` and `kube-system`) are excluded.
- OTLP logs: Activates cluster-internal endpoints to receive logs in the OTLP format. Your applications can push logs directly to these URLs:
  - gRPC: `http://telemetry-otlp.kyma-system:4317`
  - HTTP: `http://telemetry-otlp.kyma-system:4318`
Configure Log Collection
You can customize your LogPipeline using the available parameters and attributes (see LogPipeline: Custom Resource Parameters):
- Configure or disable the collection of application logs from the `stdout`/`stderr` channel (see Configure Application Logs).
- Set up the collection of Istio access logs (see Configure Istio Access Logs).
- Choose the specific namespaces from which you want to include or exclude logs (see Filter Logs).
- If you have more than one backend, specify which input source sends logs to which backend (see Route Specific Inputs to Different Backends).
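As a sketch, excluding a namespace from application log collection could look like the following. The `input.application.namespaces.exclude` field and the namespace name `my-noisy-namespace` are assumptions here; confirm the exact parameters in LogPipeline: Custom Resource Parameters.

```yaml
apiVersion: telemetry.kyma-project.io/v1beta1
kind: LogPipeline
metadata:
  name: backend
spec:
  # Illustrative sketch only: check the LogPipeline CR reference
  # for the input fields supported by your module version.
  input:
    application:
      namespaces:
        exclude:
          - my-noisy-namespace   # hypothetical namespace, for illustration
  output:
    otlp:
      endpoint:
        value: http://myEndpoint:4317
```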
Limitations
- Throughput:
- When pushing OTLP logs of an average size of 2KB to the OTLP Gateway, the Telemetry module can process approximately 12,000 logs per second (LPS) per node. The OTLP Gateway runs one instance per cluster node.
- The Log Agent, running one instance per node, handles tailing logs from stdout using the runtime input. When writing logs of an average size of 2KB to stdout, a single Log Agent instance can process approximately 9,000 LPS.
- Unavailability of Output: If the destination is unavailable, delivery is retried for up to 5 minutes. After that, data is dropped.
- No Guaranteed Delivery: The buffers are volatile. If any gateway or agent instance crashes, log data can be lost.
- Multiple LogPipeline Support: The maximum number of LogPipeline resources is 5.