Logging
Overview
Logging in Kyma uses Loki, a Prometheus-like log management system. This lightweight solution, integrated with Grafana, is easy to understand and operate. The main elements of the logging stack include:
- The agent, which acts as a log router for Docker containers. It runs inside Docker, checks each container, and routes the logs to the log management system. Currently, Kyma supports the Fluent Bit log collector.
- The Loki main server, which stores logs and processes queries.
- Grafana, a logging and metrics platform used for querying and displaying logs.
NOTE: At the moment, Kyma provides an alpha version of the Logging component. The default Loki Pod log tailing configuration does not work with Kubernetes version 1.14 (for GKE version 1.12.6-gke.X) and above. For details on setting up and preparing the deployment, see the cluster installation guide.
CAUTION: Loki is designed for application logging. Do not log any sensitive information, such as passwords or credit card numbers.
Architecture
This document provides an overview of the logging architecture in Kyma.
- Container logs are stored under the `/var/log` directory and its subdirectories.
- The agent queries the Kubernetes API Server, which validates and configures data for objects such as Pods or Services.
- The agent fetches Pod and container details. Based on that, it tails the logs.
- The agent enriches log data with Pod labels and sends them to the Loki server (see the example query after this list). To enable faster data processing, log data is organized in log chunks. A log chunk consists of log entries that share metadata, such as labels, collected over a certain time period.
- The Loki server processes the log data and stores it in the log store. The labels are stored in the index store.
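Because the agent attaches Pod labels to every log stream, you can later select streams by those labels through the Loki HTTP API. The following is a minimal sketch that assumes the Loki API is reachable on localhost:3100 (for example, through the port forwarding described in the Details section) and that your workload carries an app label; `app="my-service"` is a hypothetical example.

```bash
# Select log streams by the labels that the agent attached to them.
# The {namespace="...", app="..."} selector matches streams whose Pod labels
# were mapped to these Loki labels.
curl -G 'http://localhost:3100/api/prom/query' \
  --data-urlencode 'query={namespace="kyma-system", app="my-service"}' \
  --data-urlencode 'limit=100'
```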
The user queries the logs using the following tools:
- Grafana dashboards to analyze and visualize logs fetched and processed by Loki.
- API clients to query log data using the HTTP API for Loki (see the example after this list).
- Log UI, accessed from the Kyma Console, to display and analyze logs.
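To see which labels are available for building such queries, you can ask Loki directly. This is a sketch based on the label endpoints exposed by Loki versions that serve the /api/prom API used in this document; verify the exact paths against your Loki version.

```bash
# List all label names known to Loki.
curl -G 'http://localhost:3100/api/prom/label'

# List all values of the "namespace" label.
curl -G 'http://localhost:3100/api/prom/label/namespace/values'
```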
Details
Access logs
To access the logs, follow these steps:
1. Run the following command to get the name of the Loki Pod:

   ```bash
   kubectl get pods -l app=loki -n kyma-system
   ```

2. Run the following command to configure port forwarding. Replace `{POD_NAME}` with the name returned by the previous command:

   ```bash
   kubectl port-forward -n kyma-system {POD_NAME} 3100:3100
   ```

3. To get up to 1000 lines of error logs for components in the `kyma-system` Namespace, run the following command:

   ```bash
   curl -X GET -G 'http://localhost:3100/api/prom/query' --data-urlencode 'query={namespace="kyma-system"}' --data-urlencode 'limit=1000' --data-urlencode 'regexp=error'
   ```
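The query endpoint returns JSON that groups entries by stream. If you only want the raw log lines, you can post-process the response, for example with jq. This sketch assumes the response shape of the /api/prom/query endpoint, where each stream carries an entries array with line fields; adjust the path if your Loki version returns a different structure.

```bash
# Fetch error logs for the kyma-system Namespace and print only the log lines.
# The .streams[].entries[].line path assumes the /api/prom/query response format.
curl -s -G 'http://localhost:3100/api/prom/query' \
  --data-urlencode 'query={namespace="kyma-system"}' \
  --data-urlencode 'limit=1000' \
  --data-urlencode 'regexp=error' \
  | jq -r '.streams[].entries[].line'
```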
Storage configuration examples
Storage
By default, Loki comes with the BoltDB storage configuration, which handles label and index storage, and uses the local filesystem as the object store for log chunks. Additionally, Loki supports other object stores, such as S3 or GCS.
This is an example of Loki configuration using boltDB and filesystem storage:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: loki
  name: loki
data:
  loki.yaml: |
    auth_enabled: false
    server:
      http_listen_port: 3100
    ingester:
      lifecycler:
        ring:
          store: inmemory
          replication_factor: 1
    schema_config:
      configs:
      - from: 0
        store: boltdb
        object_store: filesystem
        schema: v9
        index:
          prefix: index_
          period: 168h
    storage_config:
      boltdb:
        directory: /tmp/loki/index
      filesystem:
        directory: /tmp/loki/chunks
```
A sample configuration for GCS looks as follows:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: loki
  name: loki
data:
  loki.yaml: |
    auth_enabled: false
    server:
      http_listen_port: 3100
    ingester:
      lifecycler:
        ring:
          store: inmemory
          replication_factor: 1
    schema_config:
      configs:
      - from: 0
        store: bigtable
        object_store: gcs
        schema: v9
        index:
          prefix: index_
          period: 168h
    storage_config:
      gcs:
        bucket_name: <YOUR_GCS_BUCKETNAME>
      bigtable:
        project: <BIG_TABLE_PROJECT_ID>
        instance: <BIG_TABLE_INSTANCE_ID>
        grpc_client_config: <YOUR_CLIENT_SETTINGS>
```
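To try out such a configuration, you can apply the ConfigMap and restart the Loki Pod so it picks up the new settings. This is a sketch that assumes you saved the ConfigMap above as loki-config.yaml (a hypothetical file name) and that Loki reads its configuration from this ConfigMap on startup.

```bash
# Apply the updated Loki configuration in the kyma-system Namespace.
kubectl apply -n kyma-system -f loki-config.yaml

# Restart the Loki Pod so it loads the new configuration.
kubectl delete pod -n kyma-system -l app=loki
```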
Configuration
Logging chart
To configure the Logging chart, override the default values of its `values.yaml` file. This document describes the parameters that you can configure.
TIP: To learn more about how to use overrides in Kyma, see the documents on Helm overrides for Kyma installation and on top-level charts overrides.
Configurable parameters
This table lists the configurable parameters, their descriptions, and default values:
Parameter | Description | Default value |
---|---|---|
persistence.enabled | Specifies whether you store logs on a persistent volume instead of a volatile mounted volume. | true |
persistence.size | Defines the size of the persistent volume. | 10Gi |
config.auth_enabled | Authenticates the tenant sending the request to the logging service when Loki runs in the multi-tenant mode. Setting it to true requires authentication through the X-Scope-OrgID HTTP header. Since Kyma supports the single-tenant mode only, you must set this parameter to false. This way, Loki does not require the X-Scope-OrgID header and the tenant ID defaults to fake. | false |
config.ingester.lifecycler.address | Specifies the address of the lifecycler that coordinates distributed logging services. | 127.0.0.1 |
config.ingester.lifecycler.ring.store | Specifies the store that keeps the ring information about logging data and its replicas. | inmemory |
config.ingester.lifecycler.ring.replication_factor | Specifies the number of data copies on separate storages. | 1 |
config.schema_configs.from | Specifies the date from which index data is stored. | 0 |
config.schema_configs.store | Specifies the storage type. boltdb is an embedded key-value storage that stores the index data. | boltdb |
config.schema_configs.object_store | Specifies whether local or cloud storage is used for log chunks. | filesystem |
config.schema_configs.schema | Defines the schema version that Loki provides. | v9 |
config.schema_configs.index.prefix | Specifies the prefix added to all index file names to distinguish them from log chunks. | index_ |
config.schema_configs.index.period | Defines the time period covered by a single index bucket, after which a new index with the defined prefix is created. | 168h |
config.storage_config.boltdb.directory | Specifies the physical location of indexes in boltdb . | /data/loki/index |
config.storage_config.filesystem.directory | Specifies the physical location of log chunks in filesystem . | /data/loki/chunks |
loki.resources.limits.memory | Maximum amount of memory available for Loki to use. | 300Mi |
loki.resources.limits.cpu | Maximum amount of CPU available for Loki to use. | 200m |
fluent-bit.resources.limits.memory | Maximum amount of memory available for Fluent Bit to use. | 128Mi |
fluent-bit.resources.limits.cpu | Maximum amount of CPU available for Fluent Bit to use. | 100m |
NOTE: The Loki storage configuration consists of the schema_config and storage_config definitions. Use schema_config to define the storage types and storage_config to configure storage types that are already defined.
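As an example of overriding these parameters, the following sketch increases the persistent volume size. It assumes the standard Kyma override mechanism, in which a ConfigMap in the kyma-installer Namespace labeled installer: overrides and component: logging is picked up during installation or update; the ConfigMap name logging-overrides is arbitrary, and you should check the overrides documents referenced above for the exact conventions of your Kyma version.

```bash
# Create an override that increases the persistent volume size for Loki.
kubectl create configmap logging-overrides -n kyma-installer \
  --from-literal=persistence.size=20Gi

# Label the ConfigMap so it is treated as an override for the Logging chart.
kubectl label configmap logging-overrides -n kyma-installer \
  installer=overrides component=logging
```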
Logging production profile
To use Logging in a mid-size production environment, you can install Kyma with the Logging production profile. The higher memory limits set for Loki and Fluent Bit ensure stable log processing for 40 active Pods without causing any memory issues. If you want to work with a larger number of active Pods or experience prolonged query times, configure the Logging chart to increase the memory and CPU values.
NOTE: This profile does not allow for horizontal scaling, which requires additional, dedicated storage systems.
Parameters
The table shows the parameters used in the production profile and their values:
Parameter | Description | Value |
---|---|---|
loki.resources.limits.memory | Maximum amount of memory available for Loki to use. | 512Mi |
fluent-bit.resources.limits.memory | Maximum amount of memory available for Fluent Bit to use. | 256Mi |
Use the production profile
You can deploy a Kyma cluster with Logging configured to use the production profile, or add the configuration at runtime. Choose one of the following options:
- Install Kyma with production-ready Logging
- Enable configuration in a running cluster
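As a sketch of the second option, assuming the same Kyma override mechanism described in the Configuration section, you could apply the production profile values to a running cluster; the ConfigMap name logging-production-profile is arbitrary.

```bash
# Create an override with the production profile memory limits for Loki and Fluent Bit.
kubectl create configmap logging-production-profile -n kyma-installer \
  --from-literal=loki.resources.limits.memory=512Mi \
  --from-literal=fluent-bit.resources.limits.memory=256Mi

# Label the ConfigMap so it is picked up as an override for the Logging chart.
kubectl label configmap logging-production-profile -n kyma-installer \
  installer=overrides component=logging
```

After creating the override, you still need to trigger a Kyma update so the Logging chart is redeployed with the new limits.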