
Logging

Overview

Logging in Kyma uses Loki, a Prometheus-inspired log management system. This lightweight solution, integrated with Grafana, is easy to understand and operate. Loki ships with Promtail, a log router for Docker containers. Promtail runs inside Docker, discovers each container, and routes its logs to the log management system.

NOTE: At the moment, Kyma provides an alpha version of the Logging component. The default Loki Pod log tailing configuration does not work with Kubernetes 1.14 and above (GKE 1.12.6-gke.X and above). For setup and deployment preparation, see the cluster installation guide.

CAUTION: Loki is designed for application logging. Do not log any sensitive information, such as passwords or credit card numbers.

Architecture

This document provides an overview of the logging architecture in Kyma.

Logging architecture in Kyma

Agent (Promtail)

Promtail is the agent responsible for collecting reliable log metadata, consistent with the time series (metrics) metadata. To achieve this, the agent uses the same service discovery and relabelling libraries as Prometheus. Promtail runs as a DaemonSet to discover targets, attach metadata labels, and tail log files, producing a stream of logs. The logs are buffered on the client side and then sent to the service.
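Because Promtail reuses Prometheus' service discovery and relabelling machinery, its scrape configuration looks much like a Prometheus one. The following is a minimal illustrative sketch, not the configuration Kyma ships; exact keys depend on the Promtail version:

```yaml
scrape_configs:
- job_name: kubernetes-pods
  kubernetes_sd_configs:
  - role: pod          # discover Pods through the Kubernetes API
  relabel_configs:
  # Promote selected Kubernetes metadata to log stream labels.
  - source_labels: [__meta_kubernetes_namespace]
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    target_label: pod
```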

Log chunks

A log chunk consists of all logs for a given metadata set, such as labels, collected over a certain time period. Log chunks support append, seek, and stream operations.
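The chunk interface is internal to Loki; this toy Python sketch (names are hypothetical, not Loki's actual API) only illustrates what the three operations mean:

```python
from dataclasses import dataclass, field
from typing import Iterator, List, Tuple


@dataclass
class LogChunk:
    """Toy model of a log chunk: all entries for one label set over a time window."""
    labels: dict
    entries: List[Tuple[int, str]] = field(default_factory=list)  # (timestamp, line)

    def append(self, ts: int, line: str) -> None:
        self.entries.append((ts, line))

    def seek(self, ts: int) -> int:
        # Index of the first entry at or after the given timestamp.
        return next((i for i, (t, _) in enumerate(self.entries) if t >= ts),
                    len(self.entries))

    def stream(self, from_ts: int = 0) -> Iterator[Tuple[int, str]]:
        # Yield entries lazily instead of materializing the whole chunk.
        yield from self.entries[self.seek(from_ts):]


chunk = LogChunk(labels={"namespace": "kyma-system"})
chunk.append(1, "started")
chunk.append(5, "error: boom")
print(list(chunk.stream(from_ts=2)))  # [(5, 'error: boom')]
```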

Life of a write request

The write request path resembles the Cortex architecture and uses the same server-side components. It looks as follows:

  1. The write request reaches the distributor service, which is responsible for distributing and replicating the requests to ingesters. Loki uses the Cortex consistent hash ring and distributes requests based on the hash of the entire metadata set.
  2. The write request goes to the log ingester, which batches requests for the same stream into log chunks held in memory. When a log chunk reaches a predefined size or age, it is flushed to the Cortex chunk store.
  3. The Cortex chunk store will be updated to reduce copying of chunk data on the read and write paths, and to add support for writing chunks to Google Cloud Storage.
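The routing idea in step 1 can be sketched in a few lines of Python. This is a deliberately simplified stand-in, not Loki's actual ring implementation: the stream's full label set is hashed deterministically, and the hash picks consecutive ring members up to the replication factor.

```python
import hashlib


def stream_hash(labels: dict) -> int:
    """Hash the entire metadata set; sorting makes the hash order-independent."""
    canonical = ",".join(f"{k}={v}" for k, v in sorted(labels.items()))
    return int(hashlib.sha256(canonical.encode()).hexdigest(), 16)


def pick_ingesters(labels: dict, ingesters: list, replication_factor: int = 1) -> list:
    """Map a stream onto consecutive ring members, mimicking a consistent hash ring."""
    start = stream_hash(labels) % len(ingesters)
    return [ingesters[(start + i) % len(ingesters)] for i in range(replication_factor)]


ring = ["ingester-0", "ingester-1", "ingester-2"]
targets = pick_ingesters({"namespace": "kyma-system", "app": "loki"}, ring,
                         replication_factor=2)
print(targets)  # the same label set always maps to the same ingesters
```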

Life of a query request

Log chunks are larger than Prometheus Cortex chunks (Cortex chunks do not exceed 1KB). As a result, you cannot load and decompress them as a whole. To solve this problem, Loki supports streaming and iterating over the chunks, decompressing only the necessary chunk parts.
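A small Python sketch shows why incremental decompression matters. It uses gzip purely for illustration; Loki's actual chunk compression format may differ:

```python
import gzip
import io
from typing import Optional

# Compress many log lines into one blob, as a chunk store would hold them.
lines = [f"ts={i} msg=hello" for i in range(10_000)]
blob = gzip.compress("\n".join(lines).encode())


def first_match(blob: bytes, needle: str) -> Optional[str]:
    """Scan the compressed chunk incrementally, stopping at the first hit."""
    with gzip.open(io.BytesIO(blob), "rt") as stream:
        for line in stream:  # gzip.open decompresses lazily, block by block
            if needle in line:
                return line.strip()
    return None


print(first_match(blob, "ts=42 "))  # found without decompressing all 10,000 lines
```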

For further information, see the design documentation.

Access logs

To access the logs, follow these steps:

  1. Run the following command to get the current Pod name:

     kubectl get pods -l app=loki -n kyma-system

  2. Run the following command to configure port forwarding. Replace <pod_name> with the output of the previous command:

     kubectl port-forward -n kyma-system <pod_name> 3100:3100

  3. To get the first 1000 lines of error logs for components in the kyma-system Namespace, run the following command:

     curl -X GET -G 'http://localhost:3100/api/prom/query' --data-urlencode 'query={namespace="kyma-system"}' --data-urlencode 'limit=1000' --data-urlencode 'regexp=error'

For further information, see the Loki API documentation.
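The same query can also be issued from code. The sketch below only builds and prints the request URL for the query shown in step 3, assuming port forwarding from step 2 is active:

```python
from urllib.parse import urlencode

base = "http://localhost:3100/api/prom/query"
params = {
    "query": '{namespace="kyma-system"}',  # label matcher, same as the curl example
    "limit": "1000",
    "regexp": "error",                     # server-side filter on the log line
}
url = f"{base}?{urlencode(params)}"
print(url)
# Fetch with urllib.request.urlopen(url) once port forwarding is active.
```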

Storage configuration

Storage

By default, Loki comes with the BoltDB storage configuration, which covers label and index storage, with the filesystem used for object storage. Additionally, Loki supports other object stores, such as S3 or GCS.

This is an example of a Loki configuration using BoltDB and filesystem storage:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: loki
  name: loki
data:
  loki.yaml: |
    auth_enabled: false
    server:
      http_listen_port: 3100
    ingester:
      lifecycler:
        ring:
          store: inmemory
          replication_factor: 1
    schema_config:
      configs:
      - from: 0
        store: boltdb
        object_store: filesystem
        schema: v9
        index:
          prefix: index_
          period: 168h
    storage_configs:
    - name: boltdb
      directory: /tmp/loki/index
    - name: filesystem
      directory: /tmp/loki/chunks

The Loki storage configuration consists of the schema_config and storage_configs definitions. Use schema_config to define your storage types, and storage_configs to configure the storage types you defined.

A sample configuration for GCS looks as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: loki
  name: loki
data:
  loki.yaml: |
    auth_enabled: false
    server:
      http_listen_port: 3100
    ingester:
      lifecycler:
        ring:
          store: inmemory
          replication_factor: 1
    schema_config:
      configs:
      - from: 0
        store: gcs
        object_store: gcs
        schema: v9
        index:
          prefix: index_
          period: 168h
    storage_configs:
      gcs:
        bucket_name: <YOUR_GCS_BUCKETNAME>
        project: <BIG_TABLE_PROJECT_ID>
        instance: <BIG_TABLE_INSTANCE_ID>
        grpc_client_config: <YOUR_CLIENT_SETTINGS>