Logging

Overview

Logging in Kyma uses Loki, a Prometheus-like log management system. This lightweight solution, integrated with Grafana, is easy to understand and operate. Loki comes with Promtail, a log router for Docker containers. Promtail runs inside Docker, checks each container, and routes its logs to the log management system.

NOTE: At the moment, Kyma provides an alpha version of the Logging component. The default Loki Pod log tailing configuration does not work with Kubernetes version 1.14 (for GKE, version 1.12.6-gke.X) and above. For setup and deployment preparation, see the cluster installation guide.

CAUTION: Loki is designed for application logging. Do not log any sensitive information, such as passwords or credit card numbers.

Architecture

This document provides an overview of the logging architecture in Kyma.

Figure: Logging architecture in Kyma

Agent (Promtail)

Promtail is the agent responsible for collecting log metadata that is consistent with the time series (metrics) metadata. To achieve this, the agent uses the same service discovery and relabeling libraries as Prometheus. Promtail runs as a DaemonSet to discover targets, create metadata labels, and tail log files, producing a stream of logs. The logs are buffered on the client side and then sent to the service.
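
To illustrate this, the following is a minimal sketch of what a Promtail configuration using Prometheus-style service discovery and relabeling might look like. The job name, label mappings, and Loki push URL are assumptions for the example, not the exact configuration that Kyma ships:

# Sketch only: a hypothetical Promtail configuration, not the one deployed by Kyma.
client:
  url: http://loki:3100/api/prom/push      # assumed Loki push endpoint
positions:
  filename: /run/promtail/positions.yaml   # where Promtail remembers tailing offsets
scrape_configs:
- job_name: kubernetes-pods                # illustrative job name
  kubernetes_sd_configs:
  - role: pod                              # discover Pods, as Prometheus would
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace]
    target_label: namespace                # expose the Namespace as a Loki label
  - source_labels: [__meta_kubernetes_pod_name]
    target_label: pod                      # expose the Pod name as a Loki label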

Log chunks

A log chunk consists of all logs for a given metadata set, such as a set of labels, collected over a certain time period. Log chunks support append, seek, and stream operations.

Life of a write request

The write request path resembles the Cortex architecture and uses the same server-side components. It looks as follows:

  1. The write request reaches the distributor service, which is responsible for distributing and replicating the requests to ingesters. Loki uses the Cortex consistent hash ring and distributes requests based on the hash of the entire metadata set.
  2. The write request goes to the log ingester, which batches the requests for the same stream into log chunks stored in memory. When the log chunks reach a predefined size or age, they are flushed out to the Cortex chunk store.
  3. The Cortex chunk store is being updated to reduce copying of chunk data on the read and write path and to add support for writing chunks to Google Cloud Storage.

Life of a query request

Log chunks are larger than Prometheus Cortex chunks (Cortex chunks do not exceed 1 KB). As a result, you cannot load and decompress them as a whole. To solve this problem, Loki supports streaming and iterating over the chunks, which means it can decompress only the necessary chunk parts.

For further information, see the design documentation.

Details

Access logs

To access the logs, follow these steps:

  1. Run the following command to get the current Pod name:
kubectl get pods -l app=loki -n kyma-system
  2. Run the following command to configure port forwarding, replacing <pod_name> with the output of the previous command:
kubectl port-forward -n kyma-system <pod_name> 3100:3100
  3. To get the first 1000 lines of error logs for components in the kyma-system Namespace, run the following command:
curl -X GET -G 'http://localhost:3100/api/prom/query' --data-urlencode 'query={namespace="kyma-system"}' --data-urlencode 'limit=1000' --data-urlencode 'regexp=error'
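
For convenience, the three steps above can be combined into a single shell snippet. This is only a sketch: it assumes that exactly one Loki Pod matches the app=loki label and it runs the port forwarding in the background.

# Sketch only: combines the three steps above into one flow.
# Get the name of the first Pod matching the app=loki label.
LOKI_POD=$(kubectl get pods -l app=loki -n kyma-system -o jsonpath='{.items[0].metadata.name}')

# Forward port 3100 in the background and give it a moment to establish.
kubectl port-forward -n kyma-system "$LOKI_POD" 3100:3100 &
sleep 2

# Query the first 1000 lines of error logs from the kyma-system Namespace.
curl -X GET -G 'http://localhost:3100/api/prom/query' \
  --data-urlencode 'query={namespace="kyma-system"}' \
  --data-urlencode 'limit=1000' \
  --data-urlencode 'regexp=error'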

For further information, see the Loki API documentation.

Storage configuration examples

Storage

By default, Loki comes with the boltDB storage configuration, in which boltDB holds the label and index data and the local filesystem serves as the object storage. Additionally, Loki supports other object stores, such as S3 or GCS.

This is an example of Loki configuration using boltDB and filesystem storage:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: loki
  name: loki
data:
  loki.yaml: |
    auth_enabled: false
    server:
      http_listen_port: 3100
    ingester:
      lifecycler:
        ring:
          store: inmemory
          replication_factor: 1
    schema_config:
      configs:
      - from: 0
        store: boltdb
        object_store: filesystem
        schema: v9
        index:
          prefix: index_
          period: 168h
    storage_config:
    - name: boltdb
      directory: /tmp/loki/index
    - name: filesystem
      directory: /tmp/loki/chunks

A sample configuration for GCS looks as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: loki
  name: loki
data:
  loki.yaml: |
    auth_enabled: false
    server:
      http_listen_port: 3100
    ingester:
      lifecycler:
        ring:
          store: inmemory
          replication_factor: 1
    schema_config:
      configs:
      - from: 0
        store: gcs
        object_store: gcs
        schema: v9
        index:
          prefix: index_
          period: 168h
    storage_config:
      gcs:
        bucket_name: <YOUR_GCS_BUCKETNAME>
        project: <BIG_TABLE_PROJECT_ID>
        instance: <BIG_TABLE_INSTANCE_ID>
        grpc_client_config: <YOUR_CLIENT_SETTINGS>
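
For comparison, a configuration backed by S3 might look like the sketch below. It combines a boltDB index with an S3 object store and follows the upstream Loki storage_config options; the field names and the credential URL format are assumptions that can differ between Loki versions, so verify them against the Loki configuration reference before use.

# Sketch only: S3 as the object store, boltDB as the index store.
# Placeholders and field names are assumptions; check the Loki configuration reference.
schema_config:
  configs:
  - from: 0
    store: boltdb
    object_store: s3
    schema: v9
    index:
      prefix: index_
      period: 168h
storage_config:
  boltdb:
    directory: /tmp/loki/index
  aws:
    s3: s3://<ACCESS_KEY>:<SECRET_ACCESS_KEY>@<REGION>/<BUCKET_NAME>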

Configuration

Logging chart

To configure the Logging chart, override the default values of its values.yaml file. This document describes parameters that you can configure.

TIP: To learn more about how to use overrides in Kyma, see the documentation on Helm overrides for Kyma installation.
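
For example, to change the size of the persistent volume, you can provide an override for the persistence.size parameter. The snippet below is only a sketch that assumes the standard Kyma overrides mechanism, that is, a ConfigMap labeled installer: overrides and component: logging in the kyma-installer Namespace; the ConfigMap name, labels, Namespace, and the 20Gi value are illustrative assumptions, so check the overrides documentation for the exact convention.

# Sketch only: an overrides ConfigMap for the Logging chart.
# Labels and Namespace are assumptions; verify them in the Kyma overrides documentation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: logging-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: logging
data:
  persistence.size: "20Gi"   # overrides persistence.size listed in the table below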

Configurable parameters

This table lists the configurable parameters, their descriptions, and default values:

| Parameter | Description | Default value |
|-----------|-------------|---------------|
| persistence.enabled | Specifies whether you store logs on a persistent volume instead of a volatile mounted volume. | true |
| persistence.size | Defines the size of the persistent volume. | 10Gi |
| config.auth_enabled | Specifies the authentication mechanism you use to access the logging service. Set it to false to use the built-in Istio authentication, or to true to use basic HTTP authentication instead. | false |
| config.ingester.lifecycler.address | Specifies the address of the lifecycler that coordinates distributed logging services. | 127.0.0.1 |
| config.ingester.lifecycler.ring.store | Specifies the storage for information on logging data and their copies. | inmemory |
| config.ingester.lifecycler.ring.replication_factor | Specifies the number of data copies on separate storages. | 1 |
| config.schema_configs.from | Specifies the date from which index data is stored. | 0 |
| config.schema_configs.store | Specifies the storage type. boltdb is an embedded key-value store that holds the index data. | boltdb |
| config.schema_configs.object_store | Specifies whether you use local or cloud storage for data. | filesystem |
| config.schema_configs.schema | Defines the schema version that Loki uses. | v9 |
| config.schema_configs.index.prefix | Specifies the prefix added to all index file names to distinguish them from log chunks. | index_ |
| config.schema_configs.index.period | Defines how long indexes and log chunks are retained. | 168h |
| config.storage_config.boltdb.directory | Specifies the physical location of indexes in boltdb. | /data/loki/index |
| config.storage_config.filesystem.directory | Specifies the physical location of log chunks in the filesystem. | /data/loki/chunks |

NOTE: The Loki storage configuration consists of the schema_config and storage_config definitions. Use schema_config to define the storage types and storage_config to configure storage types that are already defined.