


The security model in Kyma uses the Service Mesh component to enforce authorization through Kubernetes Role-Based Access Control (RBAC) in the cluster. The identity federation is managed through Dex, which is an open-source OpenID Connect identity provider.

Dex implements a system of connectors that allow you to delegate authentication to external OpenID Connect and SAML2-compliant Identity Providers and use their user stores. Read this document to learn how to enable authentication with an external Identity Provider by using a Dex connector.

Out of the box, Kyma comes with its own static user store used by Dex to authenticate users. This solution is designed for use with local Kyma deployments, as it allows you to easily create predefined user credentials by creating Secret objects with a custom dex-user-config label. Read this document to learn how to manage users in the static store used by Dex.

Kyma uses a group-based approach to managing authorizations. To give users that belong to a group access to resources in Kyma, you must create:

  • Role and RoleBinding - for resources in a given Namespace.
  • ClusterRole and ClusterRoleBinding - for resources available in the entire cluster.

The RoleBinding or ClusterRoleBinding must have a group specified as its subject. See this document to learn how to manage Roles and RoleBindings.
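For example, a ClusterRoleBinding with a group as its subject can look like this. The binding name and the "admins" group are hypothetical placeholders, not values predefined by Kyma:

```yaml
# Sketch: a ClusterRoleBinding that grants the kyma-admin ClusterRole to every
# user in the "admins" group. The binding name and group name are examples.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admins-kyma-admin-binding
subjects:
- kind: Group
  name: "admins"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kyma-admin
  apiGroup: rbac.authorization.k8s.io
```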

NOTE: You cannot define groups for the static user store. Instead, bind the user directly to a role or a cluster role by setting the user as the subject of a RoleBinding or ClusterRoleBinding.

By default, there are five roles used to manage permissions in every Kyma cluster. These roles are:

  • kyma-essentials
  • kyma-view
  • kyma-edit
  • kyma-developer
  • kyma-admin

For more details about roles, read this document.

NOTE: The Global permissions view in the Settings section of the Kyma Console UI allows you to manage bindings between user groups and roles.


The following diagram illustrates the authorization and authentication flow in Kyma. The representation assumes the Kyma Console UI as the user's point of entry.


  1. The user opens the Kyma Console UI. If the Console application doesn't find a JWT token in the browser session storage, it redirects the user's browser to the Open ID Connect (OIDC) provider, Dex.
  2. Dex lists all defined Identity Provider connectors to the user. The user selects the Identity Provider to authenticate with. After successful authentication, the browser is redirected back to the OIDC provider, which issues a JWT token to the user. After obtaining the token, the browser is redirected back to the Console UI. The Console UI stores the token in the session storage and uses it for all subsequent requests.
  3. The Authorization Proxy validates the JWT token passed in the Authorization Bearer request header. It extracts the user and group details, the requested resource path, and the request method from the token, and uses this data to build an attributes record.
  4. The Proxy sends the attributes record to the Kubernetes Authorization API. If the authorization fails, the flow ends with a 403 code response.
  5. If the authorization succeeds, the request is forwarded to the Kubernetes API Server.

NOTE: The Authorization Proxy can verify JWT tokens issued by Dex because Dex is registered as a trusted issuer through OIDC parameters during the Kyma installation.

Kubeconfig generator

The Kubeconfig generator is a proprietary tool that generates a kubeconfig file which allows the user to access the Kyma cluster through the Command Line Interface (CLI), and to manage the connected cluster within the permission boundaries of the user.

The Kubeconfig generator rewrites the ID token issued for the user by Dex into the generated kubeconfig file. The time to live (TTL) of the ID token is 8 hours, which effectively means that the TTL of the generated kubeconfig file is 8 hours as well.
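Because the ID token is a JWT, you can inspect its expiry without any extra tooling by decoding the payload segment. The following sketch shows the decoding steps; the token below is a fabricated, unsigned example, not one issued by Dex:

```shell
# Decode the payload (second) segment of a JWT to inspect its "exp" claim.
# The token is fabricated for illustration; a real Dex ID token carries a
# cryptographic signature in its third segment.
PAYLOAD='{"exp":1700000000,"iss":"https://dex.kyma.local"}'
TOKEN="header.$(printf '%s' "$PAYLOAD" | base64 | tr -d '\n=' | tr '/+' '_-').signature"

# JWT segments use base64url without padding, so restore the padding first.
pad_b64() { s="$1"; while [ $(( ${#s} % 4 )) -ne 0 ]; do s="${s}="; done; printf '%s' "$s"; }

SEG=$(printf '%s' "$TOKEN" | cut -d. -f2)
DECODED=$(pad_b64 "$SEG" | tr '_-' '/+' | base64 --decode)
echo "$DECODED"
# Prints: {"exp":1700000000,"iss":"https://dex.kyma.local"}
```

The `exp` claim is a Unix timestamp; when it lies in the past, the kubeconfig built around the token stops working.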

The generator is a publicly exposed service. You can access it directly under the https://configurations-generator.{YOUR_CLUSTER_DOMAIN} address. The service requires a valid ID token issued by Dex to return a code 200 result.

Get the kubeconfig file and configure the CLI

Follow these steps to get the kubeconfig file and configure the CLI to connect to the cluster:

  1. Access the Console UI of your Kyma cluster.
  2. Click Administration.
  3. Click the Download config button to download the kubeconfig file to a selected location on your machine.
  4. Open a terminal window.
  5. Export the KUBECONFIG environment variable to point to the downloaded kubeconfig. Run this command:

    export KUBECONFIG={KUBECONFIG_FILE_PATH}

    NOTE: Drag and drop the kubeconfig file in the terminal to easily add the path of the file to the export KUBECONFIG command you run.

  6. Run kubectl cluster-info to check if the CLI is connected to the correct cluster.

NOTE: Exporting the KUBECONFIG environment variable works only in the context of the given terminal window. If you close the window in which you exported the variable, or if you switch to a new terminal window, you must export the environment variable again to connect the CLI to the desired cluster.

Alternatively, get the kubeconfig file by sending a GET request with a valid ID token issued for the user to the /kube-config endpoint of the https://configurations-generator.{YOUR_CLUSTER_DOMAIN} service. For example:

curl https://configurations-generator.{YOUR_CLUSTER_DOMAIN}/kube-config -H "Authorization: Bearer {VALID_ID_TOKEN}"


Kyma uses a custom GraphQL implementation in the Console Backend Service and deploys an RBAC-based logic to control the access to the GraphQL endpoint. All calls to the GraphQL endpoint require a valid Kyma token for authentication.

The authorization in GraphQL uses RBAC, which means that:

  • All of the Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings that you create and assign are effective and give the same permissions when users interact with the cluster resources through both the CLI and the GraphQL endpoints.
  • To give users access to specific queries, you must create appropriate Roles and bindings in your cluster.

The implementation assigns GraphQL actions to specific Kubernetes verbs:

GraphQL action | Kubernetes verb(s)
query          | get (for a single resource), list (for multiple resources)
mutation       | create, delete

NOTE: Due to the nature of Kubernetes, you can secure specific resources specified by their name only for queries and mutations. Subscriptions work only with entire resource groups, such as kinds, and therefore don't allow for that level of granularity.
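Following this mapping, a role that should permit both GraphQL queries (get, list) and mutations (create, delete) on a resource must list all four verbs. This is a sketch, not a role predefined by Kyma; the role name and Namespace are placeholders:

```yaml
# Sketch: a Role whose verbs cover both GraphQL queries and mutations for
# IDPPreset resources. The name and namespace are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: idpp-query-and-mutation-example
  namespace: default
rules:
- apiGroups: ["authentication.kyma-project.io"]
  resources: ["idppresets"]
  verbs: ["get", "list", "create", "delete"]
```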

Available GraphQL actions

To access cluster resources through GraphQL, an action securing the given resource must be defined and implemented in the cluster. See the GraphQL schema file for the list of actions implemented in every Kyma cluster by default.

Secure a defined GraphQL action

This is an example GraphQL action implemented in Kyma out of the box.

IDPPreset(name: String!): IDPPreset @HasAccess(attributes: {resource: "IDPPreset", verb: "get", apiGroup: "authentication.kyma-project.io", apiVersion: "v1alpha1"})

This query secures the access to IDPPreset custom resources with specific names. To access it, the user must be bound to a role that allows access to:

  • resources of the IDPPreset kind
  • the Kubernetes verb get
  • the authentication.kyma-project.io apiGroup

To allow access specifically to the example query, create this RBAC role in the cluster and bind it to a user or a client:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: kyma-idpp-query-example
rules:
- apiGroups: ["authentication.kyma-project.io"]
  resources: ["idppresets"]
  verbs: ["get"]

NOTE: To learn more about RBAC authorization in a Kubernetes cluster, read this document.

GraphQL request flow

This diagram illustrates the request flow for the Console Backend Service which uses a custom GraphQL implementation:

GraphQL request flow

  1. The user sends a request with an ID token to the GraphQL application.
  2. The GraphQL application validates the user token and extracts user data required to perform Subject Access Review (SAR).
  3. The Kubernetes API Server performs SAR.
  4. Based on the results of SAR, the Kubernetes API Server informs the GraphQL application whether the user can perform the requested GraphQL action.
  5. Based on the information provided by the Kubernetes API Server, the GraphQL application returns an appropriate response to the user.

NOTE: Read this document to learn more about the custom GraphQL implementation in Kyma.

TLS in Tiller

Kyma comes with a custom installation of Tiller which secures all incoming traffic with TLS certificate verification. To enable communication with Tiller, you must save the client certificate, key, and the cluster Certificate Authority (CA) to Helm Home.

Saving the client certificate, key, and CA to Helm Home is a manual step on cluster deployments. When you install Kyma locally, this process is handled by the run.sh script.

Additionally, you must add the --tls flag to every Helm command. If you don't save the required certificates in Helm Home, or you don't include the --tls flag when you run a Helm command, you get this error:

Error: transport is closing

Add certificates to Helm Home

To get the client certificate, key, and the cluster CA and add them to Helm Home, run these commands:

kubectl get -n kyma-installer secret helm-secret -o jsonpath="{.data['global\.helm\.ca\.crt']}" | base64 --decode > "$(helm home)/ca.pem";
kubectl get -n kyma-installer secret helm-secret -o jsonpath="{.data['global\.helm\.tls\.crt']}" | base64 --decode > "$(helm home)/cert.pem";
kubectl get -n kyma-installer secret helm-secret -o jsonpath="{.data['global\.helm\.tls\.key']}" | base64 --decode > "$(helm home)/key.pem";

CAUTION: All certificates are saved to Helm Home under the same default path. When you save certificates of multiple clusters to Helm Home, one set of certificates overwrites the ones that already exist in Helm Home. As a result, you must save the cluster certificate set to Helm Home every time you switch the cluster context you work in.
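After saving the files, you can sanity-check that the certificate and key in Helm Home actually belong together. This helper is a sketch that compares RSA modulus fingerprints with openssl; it is not part of Kyma:

```shell
# check_cert_key_match CERT_PATH KEY_PATH
# Succeeds when the certificate and the RSA private key share the same
# modulus, which means they form a matching pair.
check_cert_key_match() {
  cert_mod=$(openssl x509 -noout -modulus -in "$1" | openssl md5)
  key_mod=$(openssl rsa -noout -modulus -in "$2" | openssl md5)
  [ "$cert_mod" = "$key_mod" ]
}

# Example usage against the files saved in Helm Home:
# check_cert_key_match "$(helm home)/cert.pem" "$(helm home)/key.pem" \
#   && echo "certificate and key match"
```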


To connect to the Tiller server, for example through the Helm Go library, mount the Helm client certificates into the application that makes the connection. These certificates are stored as a Kubernetes Secret.

To get this Secret, run:

kubectl get secret -n kyma-installer helm-secret

Additionally, these certificates are available as overrides during Kyma installation:

global.helm.ca.crt  | Certificate Authority for the Helm client
global.helm.tls.crt | Client certificate for the Helm client
global.helm.tls.key | Client certificate key for the Helm client

Roles in Kyma

Kyma uses roles and groups to manage access in the cluster. Every cluster comes with five predefined roles which give the assigned users different levels of permissions suitable for different purposes. These roles are defined as ClusterRoles and use the Kubernetes mechanism of aggregation, which allows you to combine multiple ClusterRoles into a single ClusterRole. Use the aggregation mechanism to efficiently manage access to Kubernetes and Kyma-specific resources.

NOTE: Read this Kubernetes documentation to learn more about the aggregation mechanism used to define Kyma roles.

You can assign any of the predefined roles to a user or to a group of users in the context of:

  • the entire cluster, through a ClusterRoleBinding
  • a specific Namespace, through a RoleBinding

TIP: To ensure proper Namespace separation, use RoleBindings to give users access to the cluster. This way a group or a user can have different permissions in different Namespaces.

The predefined roles, arranged in the order of increasing access level, are:

kyma-essentials | The basic role required to allow users to see the Console UI of the cluster. This role doesn't give the user rights to modify any resources.
kyma-view       | The role for listing Kubernetes and Kyma-specific resources.
kyma-edit       | The role for editing Kyma-specific resources.
kyma-developer  | The role created for developers who build implementations using Kyma. It allows you to list and edit Kubernetes and Kyma-specific resources.
kyma-admin      | The role with the highest permission level, which gives access to all Kubernetes and Kyma resources and components with administrative rights.

To learn more about the default roles and how they are constructed, see this file.


The groups.authentication.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format that represents user groups available in the ID provider in the Kyma cluster. To get the up-to-date CRD and show the output in the yaml format, run this command:

kubectl get crd groups.authentication.kyma-project.io -o yaml

Sample custom resource

This is a sample CR that represents a user group available in the ID provider in the Kyma cluster.

apiVersion: authentication.kyma-project.io/v1alpha1
kind: Group
metadata:
  name: "sample-group"
spec:
  name: "admins"
  idpName: "github"
  description: "'admins' represents the group of users with administrative privileges in the organization."

This table describes the elements of the sample CR and the information they contain:

Field            | Required | Description
metadata.name    | YES      | Specifies the name of the CR.
spec.name        | YES      | Specifies the name of the group.
spec.idpName     | YES      | Specifies the name of the ID provider in which the group exists.
spec.description | NO       | Description of the group available in the ID provider.


The idppresets.authentication.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format that represents presets of the Identity Provider configuration used to secure APIs through the Console UI. Presets are a convenient way to configure the authentication section in the API custom resource.

To get the up-to-date CRD and show the output in the yaml format, run this command:

kubectl get crd idppresets.authentication.kyma-project.io -o yaml

Sample custom resource

This is a sample CR used to create an IDPPreset:

apiVersion: authentication.kyma-project.io/v1alpha1
kind: IDPPreset
metadata:
  name: "sample-idppreset"
spec:
  issuer: https://example.com
  jwksUri: https://example.com/keys

Custom resource parameters

This table lists all the possible parameters of a given resource together with their descriptions:

Field         | Required | Description
metadata.name | YES      | Specifies the name of the CR.
spec.issuer   | YES      | Specifies the issuer of the JWT tokens used to access the services.
spec.jwksUri  | YES      | Specifies the URL of the OpenID Provider's public key set to validate the signature of the JWT token.

Usage in the UI

The issuer and jwksUri fields originate from the Api CR specification. In most cases, these values are reused many times. Use the IDPPreset CR to store these details in a single object and reuse them in a convenient way. In the UI, the IDPPreset CR allows you to choose a preset with details of a specific identity provider from the drop-down menu instead of entering them manually every time you expose a secured API. Apart from consuming IDPPresets, you can also manage them in the Console UI. To create and delete IDPPresets, select IDP Presets from the Integration section.

These components use this CR:

IDP Preset              | Generates a Go client which allows components and tests to create, delete, or get IDP Preset resources.
Console Backend Service | Enables the IDP Preset management with GraphQL API.

Update TLS certificate

The TLS certificate is a vital security element. Follow this tutorial to update the TLS certificate in Kyma.

NOTE: This procedure can interrupt the communication between your cluster and the outside world for a limited period of time.


  • New TLS certificates
  • Kyma administrator access


  1. Export the new TLS certificate and key as environment variables. Run:

    export KYMA_TLS_CERT=$(cat {NEW_CERT_PATH})
    export KYMA_TLS_KEY=$(cat {NEW_KEY_PATH})
  2. Update the Ingress Gateway certificate. Run:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    type: kubernetes.io/tls
    metadata:
      name: istio-ingressgateway-certs
      namespace: istio-system
    data:
      tls.crt: $(echo "$KYMA_TLS_CERT" | base64)
      tls.key: $(echo "$KYMA_TLS_KEY" | base64)
    EOF
  3. Update the kyma-system Namespace certificate:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: ingress-tls-cert
      namespace: kyma-system
    data:
      tls.crt: $(echo "$KYMA_TLS_CERT" | base64)
    EOF
  4. Update the kyma-integration Namespace certificate:

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: ingress-tls-cert
      namespace: kyma-integration
    data:
      tls.crt: $(echo "$KYMA_TLS_CERT" | base64)
    EOF
  5. Restart the Ingress Gateway Pod to apply the new certificate:

    kubectl delete pod -l app=istio-ingressgateway -n istio-system
  6. Restart the Pods in the kyma-system Namespace to apply the new certificate:

    kubectl delete pod -l tlsSecret=ingress-tls-cert -n kyma-system
  7. Restart the Pods in the kyma-integration Namespace to apply the new certificate:

    kubectl delete pod -l tlsSecret=ingress-tls-cert -n kyma-integration
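Before applying the Secrets in the steps above, you can sanity-check that the exported variables actually contain PEM material. This is a minimal sketch, not part of the official procedure:

```shell
# looks_like_pem VALUE
# Succeeds when the value starts with a PEM header line such as
# "-----BEGIN CERTIFICATE-----" or "-----BEGIN RSA PRIVATE KEY-----".
looks_like_pem() {
  case "$1" in
    "-----BEGIN "*) return 0 ;;
    *) return 1 ;;
  esac
}

looks_like_pem "$KYMA_TLS_CERT" || echo "KYMA_TLS_CERT does not look like PEM data" >&2
looks_like_pem "$KYMA_TLS_KEY"  || echo "KYMA_TLS_KEY does not look like PEM data" >&2
```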

Manage static users in Dex

Create a new static user

To create a static user in Dex, create a Secret with the dex-user-config label set to true. Run:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: {SECRET_NAME}
  labels:
    "dex-user-config": "true"
type: Opaque
data:
  email: {BASE64_USER_EMAIL}
  username: {BASE64_USERNAME}
  password: {BASE64_USER_PASSWORD}
EOF

NOTE: If you don't specify the Namespace in which you want to create the Secret, the system creates it in the default Namespace.

The following table describes the fields that are mandatory to create a static user. If you don't include any of these fields, the user is not created.

data.email    | Base64-encoded email address used to sign in to the Console UI. Must be unique.
data.username | Base64-encoded username displayed in the Console UI.
data.password | Base64-encoded user password. There are no specific requirements regarding password strength, but it is recommended to use a password that is at least eight characters long.
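To produce the Base64-encoded values for the data fields, you can use the base64 tool. The email, username, and password below are placeholders:

```shell
# Encode placeholder credentials for the Secret's data fields.
EMAIL_B64=$(printf '%s' 'admin@kyma.cx' | base64)
USERNAME_B64=$(printf '%s' 'admin' | base64)
PASSWORD_B64=$(printf '%s' 'a-strong-password' | base64)

# Decoding an encoded value restores the original:
printf '%s' "$EMAIL_B64" | base64 --decode
# Prints: admin@kyma.cx
```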

Create the Secrets in the cluster before Dex is installed. The Dex init-container with the tool that configures Dex generates user configuration data based on properly labeled Secrets, and adds the data to the ConfigMap.

If you want to add a new static user after Dex is installed, restart the Dex Pod. This creates a new Pod with an updated ConfigMap.

Bind a user to a Role or a ClusterRole

A newly created static user has no access to any resources of the cluster, as there is no Role or ClusterRole bound to it. By default, Kyma comes with the following ClusterRoles:

  • kyma-admin: gives full admin access to the entire cluster
  • kyma-edit: gives full access to all Kyma-managed resources
  • kyma-developer: gives full access to Kyma-managed resources and basic Kubernetes resources
  • kyma-view: allows viewing and listing all of the resources of the cluster
  • kyma-essentials: gives a minimal set of view access rights required to use the Kyma Console

To bind a newly created user to the kyma-view ClusterRole, run this command:

kubectl create clusterrolebinding {BINDING_NAME} --clusterrole=kyma-view --user={USER_EMAIL}

To check if the binding is created, run:

kubectl get clusterrolebinding {BINDING_NAME}

Add an Identity Provider to Dex

Add external, OpenID Connect compliant authentication providers to Kyma using Dex connectors. Follow this tutorial to add a GitHub connector and use it to authenticate users in Kyma.

NOTE: Groups in GitHub are represented as teams. See this document to learn how to manage teams in GitHub.


To add a GitHub connector to Dex, register a new OAuth application in GitHub. Set the authorization callback URL to https://dex.kyma.local/callback. After you complete the registration, request organization approval.

NOTE: To authenticate in Kyma using GitHub, the user must be a member of a GitHub organization that has at least one team.

Configure Dex

Register the GitHub Dex connector by editing the dex-config-map.yaml ConfigMap file located in the kyma/resources/dex/templates directory. Follow this template:

- type: github
  id: github
  name: GitHub
  config:
    clientID: {GITHUB_CLIENT_ID}
    clientSecret: {GITHUB_CLIENT_SECRET}
    redirectURI: https://dex.kyma.local/callback
    orgs:
    - name: {GITHUB_ORGANIZATION}

This table explains the placeholders used in the template:

GITHUB_CLIENT_ID     | Specifies the application's client ID.
GITHUB_CLIENT_SECRET | Specifies the application's client Secret.
GITHUB_ORGANIZATION  | Specifies the name of the GitHub organization.

Configure authorization rules

To bind GitHub groups to default Kyma roles, add the bindings section to this file. Follow this template:

bindings:
  kymaAdmin:
    groups:
    - "{GITHUB_ORGANIZATION}:{GITHUB_TEAM_A}"
  kymaView:
    groups:
    - "{GITHUB_ORGANIZATION}:{GITHUB_TEAM_B}"
TIP: You can bind GitHub teams to any of the five predefined Kyma roles. Use these aliases: kymaAdmin, kymaView, kymaDeveloper, kymaEdit or kymaEssentials. To learn more about the predefined roles, read this document.

This table explains the placeholders used in the template:

GITHUB_ORGANIZATION | Specifies the name of the GitHub organization.
GITHUB_TEAM_A       | Specifies the name of the GitHub team to bind to the kyma-admin role.
GITHUB_TEAM_B       | Specifies the name of the GitHub team to bind to the kyma-view role.