Event Mesh


The Event Mesh allows you to easily integrate external applications with Kyma. Under the hood, the Event Mesh implements Knative Eventing to ensure Kyma receives business events from external sources and is able to trigger business flows using Functions or services.

The Event Mesh implementation relies on Knative's Broker and Trigger custom resources that define the event delivery process. The Broker receives events from external solutions and dispatches them to subscribers, such as Functions. To make sure certain subscribers receive exactly the events they want, the Trigger defines filters that use event attributes, such as version or type, to pick up specific events from the Broker.


The architecture of Event Mesh relies heavily on the functionality provided by the Knative Eventing components. To ensure a stable event flow between the sender and the subscriber, the Event Mesh wires Knative and Kyma components together.

This diagram shows how the Event Mesh components work together.

Eventing implementation

  1. The user creates an Application CR and binds it to a Namespace.

  2. The Application Operator watches the creation of the Application CR and creates an HTTPSource CR which defines the source sending the events.

  3. The Event Source Controller watches the creation of the HTTPSource CR and deploys these resources:

    • HTTP Source Adapter which is an HTTP server deployed inside the kyma-integration Namespace. This adapter acts as a gateway to the Channel, and is responsible for exposing an endpoint to which the Application sends the events.

    • Channel which defines the way messages are dispatched in the Namespace. Its underlying implementation is responsible for forwarding events to the Broker or additional Channels. Kyma uses NATS Streaming as its default Channel, but you can change it to InMemoryChannel, Kafka, or Google PubSub.

  4. The Application Broker watches the creation of the Application CR and performs the following actions:

    • Exposes the Events API of an external system as a ServiceClass. Once the user provisions this ServiceClass in the Namespace, the Application Broker makes events available to use.

    • Deploys Knative Subscription and defines the Broker as the subscriber for the Channel to allow communication between them.

    • Adds the knative-eventing-injection label to the user's Namespace. As a result, the Namespace controller creates the Broker which automatically receives the default name. The Broker acts as an entry point for the events which it receives at the cluster-local endpoint it exposes.

  5. The user creates the Trigger which references the Broker and defines the subscriber along with the conditions for filtering events. This way, subscribers receive only the events of a given type.

For details on the Trigger specification, read about event processing and delivery.
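The flow above starts with the Application CR from step 1. A minimal sketch of such a CR follows; the name and description are illustrative, and the `applicationconnector.kyma-project.io/v1alpha1` API version is assumed here:

```yaml
# Illustrative Application CR; the name and description are examples only.
apiVersion: applicationconnector.kyma-project.io/v1alpha1
kind: Application
metadata:
  name: sample-app
spec:
  description: External solution that sends events to Kyma
```

Once the CR exists and is bound to a Namespace, the Application Operator and the Application Broker take over as described in steps 2-4.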


Event processing and delivery

The event processing and delivery flow in the Event Mesh uses the Broker and Trigger concepts to forward events and deliver them to the subscribers. This diagram explains the event flow in Kyma, from the moment the Application sends an event, to the point when the event triggers the Function.

Eventing flow

  1. The Application sends events to the HTTP Source Adapter which forwards them to the Channel.

    NOTE: The HTTP Source Adapter accepts only CloudEvents in version 1.0.

    Before Kyma 1.11, applications sent events to the /v1/events endpoint exposed by the Event Service. This service was responsible for forwarding events to the Event Bus that used to handle event processing and delivery. In the current implementation based on the Event Mesh, applications send events to the directly exposed /events endpoint. These events must comply with the CloudEvents specification.

    It may happen that you still send events to the /v1/events endpoint. To support it, we extended the Event Service's logic to ensure all events sent to this endpoint are forwarded to the subscriber. This way, the Event Service acts as a compatibility layer that consumes events in the non-CloudEvents format, transforms them into CloudEvents, and propagates them to the new Event Mesh.

  2. The Subscription defines the Broker as the subscriber. This way, the Channel can communicate with the Broker to send events.

  3. The Channel listens for incoming events. When it receives an event, the underlying messaging layer dispatches it to the Broker.

  4. The Broker sends the event to the Trigger which is configured to receive events of this type.

  5. The Trigger filters the events based on the attributes you can find in the Trigger specification. See the example of a Trigger CR:

apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: test-trigger
  namespace: serverless
spec:
  broker: default
  filter:
    attributes:
      type: user.created # Event name
      eventtypeversion: v1 # Event version
      source: mock # Application name
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: test-function # Function name

The Trigger CR specification defines the Broker from which the Trigger receives events and the filter attributes it uses to select events for its subscribers. These parameters are not fixed and depend on a given use case. In Kyma, these are the mandatory ones:

| Parameter | Description |
|-----------|-------------|
| **spec.broker** | Name of the Broker that receives events. By default, it receives the value `default` when the user's Namespace is labeled with `knative-eventing-injection`. |
| **spec.filter.attributes.type** | Specific event type to which you want to subscribe your Function, such as `user.created`. |
| **spec.filter.attributes.eventtypeversion** | Event version, such as `v1`. |
| **spec.filter.attributes.source** | Name of the Application which sends events. |
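The filter attributes in this table map directly to the CloudEvents context attributes that the Application sets on each event. As a sketch, the headers of a binary-mode CloudEvents 1.0 request matching the Trigger above could be composed like this; the gateway URL and all attribute values are illustrative:

```shell
# Binary-mode CloudEvents 1.0 headers; all values below are illustrative.
CE_HEADERS="-H ce-specversion:1.0 -H ce-type:user.created -H ce-eventtypeversion:v1 -H ce-source:mock -H ce-id:42"

# Example call (commented out; replace the URL with your Application's gateway):
# curl -X POST $CE_HEADERS -H "Content-Type: application/json" \
#   -d '{"orderCode":"123"}' https://gateway.example.com/mock/events
echo "$CE_HEADERS"
```

The Trigger then matches these `ce-type`, `ce-eventtypeversion`, and `ce-source` values against its filter attributes before delivering the event to the subscriber.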

To learn how to trigger a Function with an event, follow the tutorial.


NATS Streaming chart

To configure the NATS Streaming chart, override the default values of its values.yaml file. This document describes the parameters that you can configure.

TIP: To learn more about how to use overrides in Kyma, see the related configuration documents.

To learn how to configure a Kafka Channel instead of the default NATS one, see the tutorial.

Configurable parameters

This table lists the configurable parameters, their descriptions, and default values:

| Parameter | Description | Default value |
|-----------|-------------|---------------|
| **global.natsStreaming.persistence.enabled** | Enables or disables saving events to a persistent volume. | `true` |
| **global.natsStreaming.persistence.maxAge** | Specifies the time for which the given Event is stored in NATS Streaming. | `24h` |
| **global.natsStreaming.persistence.size** | Specifies the size of the persistent volume in NATS Streaming. | `1Gi` |
| **global.natsStreaming.resources.limits.memory** | Specifies the memory limits for NATS Streaming. | `256M` |
| **global.natsStreaming.channel.maxInactivity** | Specifies the time after which the autocleaner removes all backing resources related to a given Event type from the NATS Streaming database if there is no activity for this Event type. | `48h` |

CAUTION: If persistence is disabled, NATS Streaming stores undelivered messages in memory, and any restart of NATS Streaming leads to the loss of undelivered messages. Do not disable persistence in a production setup.
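You can change any of these values with a standard Kyma installer override. A minimal sketch that extends the event retention time follows; the ConfigMap name and the `72h` value are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nats-streaming-overrides # illustrative name
  namespace: kyma-installer
  labels:
    installer: overrides
    kyma-project.io/installation: ""
data:
  global.natsStreaming.persistence.maxAge: "72h" # keep events for three days
```

Apply the ConfigMap before installation or an update so that the installer picks up the override.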

Custom Resource


The httpsources.sources.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define an event source in Kyma. The HTTPSource custom resource (CR) specifies an external Application that sends events to subscribers, such as Functions or services. To get the up-to-date CRD and show the output in the YAML format, run this command:

kubectl get crd httpsources.sources.kyma-project.io -o yaml

Sample custom resource

This is a sample HTTPSource CR that defines an Application as the source of events.

apiVersion: sources.kyma-project.io/v1alpha1
kind: HTTPSource
metadata:
  name: sample-application
  namespace: prod
spec:
  source: sample-application

Custom resource parameters

This table lists all the possible parameters of a given resource together with their descriptions:

| Parameter | Required | Description |
|-----------|----------|-------------|
| **metadata.name** | Yes | Specifies the name of the CR. |
| **metadata.namespace** | No | Specifies the Namespace in which the CR is created. It is set to `default` unless you specify otherwise. |
| **spec.source** | Yes | Specifies a human-readable name of the Application that sends the events. |


Configure the Kafka Channel

Instead of the default NATS Channel implementation, you can use the Knative-compatible Kafka Channel. To ensure Kafka works properly, you must:

  • Set up a Kafka cluster using Azure Event Hubs.
  • Create a Secret which the controller uses to communicate with the cluster.
  • Install Kyma with the knative-eventing-kafka component to deploy the Kafka controller.

NOTE: Although Event Hubs and Kafka are nearly identical as concepts, they use different naming conventions. To avoid confusion, read this document.


Follow these steps:

  1. Use the Azure portal to create a resource group where you will place your cluster.

  2. Create an Event Hubs namespace, which is the Event Hubs representation of the Kafka cluster.

    NOTE: You can use Confluent Cloud or install Kafka locally, but bear in mind that these configurations are experimental.

  3. Export the variables. To retrieve the credentials, go to Azure Portal > All services > Event Hubs and select your Event Hub.

    export kafkaBrokersHost={BROKER_URL_HOST}
    export kafkaBrokersPort={BROKER_URL_PORT}
    export kafkaNamespace={KAFKA_CLUSTER_NAME}
    export kafkaPassword={PASSWORD}
    export kafkaUsername='$ConnectionString' # Event Hubs expects this literal string as the username
    export kafkaProvider=azure
  4. Prepare the override which creates the Azure Secret for Kafka and save it to a file called azure-secret.yaml.

    apiVersion: v1
    kind: Secret
    metadata:
      name: knative-kafka-overrides
      namespace: kyma-installer
      labels:
        installer: overrides
        component: knative-eventing-kafka
        kyma-project.io/installation: ""
    type: Opaque
    stringData:
      kafka.brokers.hostname: $kafkaBrokersHost
      kafka.brokers.port: $kafkaBrokersPort
      kafka.namespace: $kafkaNamespace
      kafka.password: $kafkaPassword
      kafka.username: $kafkaUsername
      kafka.secretName: knative-kafka
      environment.kafkaProvider: $kafkaProvider

    NOTE: For additional values, see the values.yaml file.

  5. Use Kyma CLI to install Kyma with the override, using the installer-cr-azure-eventhubs.yaml.tpl installer file.

    kyma install -o {azure-secret.yaml} -c {installer-cr-azure-eventhubs.yaml.tpl}

    NOTE: Use -o instead of -c if you're using Kyma CLI 1.13 or lower.

    TIP: If you want to set up Kafka Channel as a default Channel, follow the tutorial.

Set up a default Channel

In the Event Mesh, Channels define an event forwarding and persistence layer. They receive incoming events and dispatch them to resources such as Brokers or other Channels. By default, Kyma comes with NatssChannel, but you can change it to a different implementation or even use multiple Channels simultaneously. This tutorial shows how to set up Kafka Channel as the default one.


Follow these steps to set up a new default Channel and allow communication between the Channel and the Kafka cluster.

  1. Define a ConfigMap with the Kafka Channel override and save it to a file called kafka-channel.yaml.
apiVersion: v1
kind: ConfigMap
metadata:
  name: knative-eventing-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: knative-eventing
    kyma-project.io/installation: ""
data:
  knative-eventing.channel.default.apiVersion: knativekafka.kyma-project.io/v1alpha1
  knative-eventing.channel.default.kind: KafkaChannel
  2. Create a YAML file with the Azure Secret using the specification provided in the tutorial.

  3. Use Kyma CLI to install Kyma with the overrides.

    kyma install -o {azure-secret.yaml} -o {kafka-channel.yaml} -c {installer-cr-azure-eventhubs.yaml.tpl}

    NOTE: Use -o instead of -c if you're using Kyma CLI 1.13 or lower.