Security

Overview

To ensure a stable and secure way of extending your applications and creating functions or microservices, Kyma comes with a comprehensive set of tools that mitigate security issues while enabling a streamlined experience.

To ensure a safe work environment, the Kyma security component uses the following tools:

  • Predefined Kubernetes RBAC roles to manage user access to the functionality provided by Kyma.
  • Istio Service Mesh with the global mTLS setup and ingress configuration to ensure secure service-to-service communication.
  • Dex, a local identity provider with federation capabilities that acts as a proxy between the client application and external identity providers to allow seamless authorization.
  • ORY Oathkeeper and ORY Hydra, used by the API Gateway to authorize HTTP requests and provide the OAuth2 server functionality.

Details

Authentication in Kyma

User authentication

The identity federation in Kyma is managed through Dex, an open-source, OpenID Connect identity provider.

The diagram shows the user authentication flow, focusing on the role Dex plays in the process.

Dex diagram

  1. Access the client application, such as the Kyma Console, Grafana UI, or Jaeger UI.
  2. If the application does not find a JWT token in the browser session storage, it redirects you to Dex to handle the authentication.
  3. Dex lists all the defined identity providers in your browser window.

    NOTE: Out of the box, Kyma implements the static user store that Dex uses to authenticate users. You can add a custom external identity provider by following the steps in this tutorial.

  4. Select the identity provider and provide the data required for authentication.

  5. After successful authentication, Dex issues a JWT token that is stored in the browser session and used for all subsequent requests. This means that if you switch to a different UI, such as Jaeger or Grafana, Dex uses the stored token instead of asking you to log in again.

Static user store

The static user store is designed for use with local Kyma deployments, as it allows you to easily create predefined user credentials by creating Secret objects with the custom dex-user-config label. Read the tutorial to learn how to manage users in the static store used by Dex.

NOTE: The static user connector serves as a demo and does not offer the full functionality of other connectors. For example, it does not provide the groups claim, which is extensively used in Kyma.

ID Tokens

ID Tokens are JSON Web Tokens (JWTs) signed by Dex and returned as part of the OAuth2 response that attest to the end user's identity. An example decoded JWT looks as follows:

{
  "iss": "http://127.0.0.1:5556/dex",
  "sub": "CgcyMzQyNzQ5EgZnaXRodWI",
  "aud": "example-app",
  "exp": 1492882042,
  "iat": 1492795642,
  "at_hash": "bi96gOXZShvlWYtal9Eqiw",
  "email": "jane.doe@coreos.com",
  "email_verified": true,
  "groups": [
    "admins",
    "developers"
  ],
  "name": "Jane Doe"
}

NOTE: You can customize the expiration settings of the tokens by creating overrides to the configuration of the Dex component chart.
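For example, assuming the installer override ConfigMap pattern used elsewhere in this document, an override that adjusts both expiry settings could look as follows. The ConfigMap name and the component label are illustrative assumptions; the parameter names come from the Dex chart parameters listed in the Configuration section.

apiVersion: v1
kind: ConfigMap
metadata:
  name: dex-expiry-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: dex                     # assumption: chart-scoped override label
    kyma-project.io/installation: ""
data:
  dex.expiry.idTokens: "4h"            # shorten the ID token lifetime from the 8h default
  dex.expiry.signingKeys: "720h"       # keep the default signing key rotation period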

Service-to-service authentication

As Kyma is built on top of the Istio Service Mesh, service-to-service authentication and encryption are enabled with Istio mutual TLS (mTLS). For details, read the Kyma-specific Istio configuration documentation.

User-to-service authentication

Kyma uses a custom API Gateway component that is built on top of ORY Oathkeeper. The API Gateway allows exposing user applications within the Kyma environment and secures them if necessary. You can then access the secured resources using the available authentication options.

Authorization in Kyma

User authorization

Kyma uses roles and user groups to manage access to the cluster. If you want to access the system through the Kyma Console or using kubectl, you must be authenticated with a JWT token. This token carries user information, such as the username, email, or groups claim, which the system uses to determine whether you are allowed to perform the requested operations.

Cluster-wide authorization

Roles in Kyma are defined as ClusterRoles and use the Kubernetes mechanism of aggregation which allows you to combine multiple ClusterRoles into a single ClusterRole. Use the aggregation mechanism to efficiently manage access to Kubernetes and Kyma-specific resources.

NOTE: Read the Kubernetes documentation to learn more about the aggregation mechanism used to define Kyma roles.

The predefined roles are:

| Role | Default group | Description |
|------|---------------|-------------|
| kyma-essentials | runtimeDeveloper | The basic role required to allow the users to access the Console UI of the cluster. This role doesn't give the user rights to modify any resources. |
| kyma-namespace-admin-essentials | runtimeNamespaceAdmin | The role that allows the user to access the Console UI and create Namespaces, built on top of the kyma-essentials role. Used to give the members of selected groups the ability to create Namespaces in which the Permission Controller binds them to the kyma-namespace-admin role. |
| kyma-view | runtimeOperator | The role for listing Kubernetes and Kyma-specific resources. |
| kyma-edit | None | The role for editing Kyma-specific resources. It's aggregated by other roles. |
| kyma-developer | None | The role created for developers who build implementations using Kyma. It allows you to list and edit Kubernetes and Kyma-specific resources. You need to bind it manually to a user or a group in the Namespaces of your choice. Use the runtimeDeveloper group when you run Kyma with the default cluster-users chart configuration. |
| kyma-admin | runtimeAdmin | The role with the highest permission level which gives access to all Kubernetes and Kyma resources and components with administrative rights. |
| kyma-namespace-admin | runtimeNamespaceAdmin | The role that has the same rights as the kyma-admin role, except for the write access to AddonsConfigurations. The Permission Controller automatically creates a RoleBinding to the runtimeNamespaceAdmin group in all non-system Namespaces. |

To learn more about the default roles and how they are constructed, see the rbac-roles.yaml file.

Namespace-wide authorization

To ensure that users can manage their Namespaces effectively and create bindings to suit their needs, a Permission Controller is deployed within the cluster.

The Permission Controller is a Kubernetes controller which listens for new Namespaces and creates RoleBindings for the users of the specified group to the kyma-namespace-admin role within these Namespaces. The Controller uses a blacklist mechanism, which defines the Namespaces in which the users of the defined group are not assigned the kyma-namespace-admin role.

When the Controller is deployed in a cluster, it checks all existing Namespaces and assigns the roles accordingly. By default, the controller binds users of the runtimeNamespaceAdmin group to the kyma-namespace-admin role in the Namespaces they create. Additionally, the controller creates a RoleBinding for the static namespace.admin@kyma.cx user to the kyma-admin role in every Namespace that is not blacklisted. This allows clients to manage their Namespaces and create additional bindings to suit their needs.

CAUTION: To give a user the kyma-developer role permissions in a Namespace, create a RoleBinding to the kyma-developer cluster role in that Namespace. You can define a subject of the RoleBinding by specifying either a Group, or a User. If you decide to specify a User, provide a user email.
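A minimal sketch of such a RoleBinding follows; the binding name and Namespace placeholders are illustrative, and the subject shows the Group variant with the User alternative noted in a comment:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {BINDING_NAME}
  namespace: {NAMESPACE}
subjects:
- kind: Group                          # alternatively, use kind: User and set name to the user's email
  name: runtimeDeveloper
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kyma-developer
  apiGroup: rbac.authorization.k8s.io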

Role binding

You can assign any of the predefined roles to a user or to a group of users in the context of the entire cluster (using a ClusterRoleBinding) or of a specific Namespace (using a RoleBinding).

TIP: The Global permissions view in the Settings section of the Kyma Console UI allows you to manage cluster-level bindings between user groups and roles. To manage bindings between user groups and roles in a Namespace, select the Namespace and go to the Configuration section of the Permissions view.

TIP: To ensure proper Namespace separation, use RoleBindings to give users access to the cluster. This way a group or a user can have different permissions in different Namespaces.

The RoleBinding or ClusterRoleBinding must have a group specified as its subject. See the Kubernetes documentation to learn how to manage Roles and RoleBindings.

NOTE: You cannot define groups for the static user store. Instead, bind the user directly to a role or a cluster role by setting the user as the subject of a RoleBinding or ClusterRoleBinding.
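For example, a ClusterRoleBinding that grants a static user the kyma-view role could look as follows; the binding name and email placeholders are illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {BINDING_NAME}
subjects:
- kind: User
  name: {USER_EMAIL}                   # email address of the static user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kyma-view
  apiGroup: rbac.authorization.k8s.io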

Service-to-service authorization

As Kyma is built on top of the Istio Service Mesh, it supports the native Istio RBAC mechanism the mesh provides. Istio RBAC enables the creation of ServiceRoles and ServiceRoleBindings, which provide a fine-grained method of restricting access to services inside the Kubernetes cluster. For more details, see Istio RBAC configuration.

User-to-service authorization

Kyma uses a custom API Gateway component that is built on top of ORY Oathkeeper. The API Gateway allows exposing user applications within the Kyma environment and secures them if necessary.

Access Kyma

As a user, you can access Kyma using the following:

  • Kyma Console which allows you to view, create, and manage your resources.
  • Kubernetes-native CLI (kubectl), which you can use to manage your resources from the command line. Kyma uses a custom API Server Proxy component to handle all connections between the user and the Kubernetes API server. To access and manage your resources, you need a config file which includes the JWT token required for authentication. You have two options:

    • Cluster config file you can obtain directly from your cloud provider. It allows you to directly access the Kubernetes API server, usually as the admin user. Kyma does not manage this config in any way.
    • Kyma-generated config file you can download using the Kyma Console. This config uses the Kyma API Server Proxy to access the Kubernetes API server and predefined user configuration to manage access and restrictions.

Console UI

The diagram shows the Kyma access flow using the Console UI.

Kyma access Console

NOTE: The Console is permission-aware so it only shows elements to which you have access as a logged-in user. The access is RBAC-based.

  1. Access the Kyma Console UI exposed by the Istio Ingress Gateway component.
  2. Under the hood, the Ingress Gateway component redirects all traffic to TLS, performs TLS termination to decrypt the incoming data, and forwards you to the Kyma Console.
  3. If the Kyma Console does not find a JWT token in the browser session storage, it redirects you to Dex, the Open ID Connect (OIDC) provider. Dex lists all defined identity provider connectors, so you can select one to authenticate with.
  4. After successful authentication, Dex issues a JWT token for you. The token is stored in the browser session so it can be used for further interaction.
  5. When you interact with the Console, the UI queries the backend implementation and comes back to you with the response.

kubectl

To manage the connected cluster using the kubectl Command Line Interface (CLI), you first need to generate and download the kubeconfig file that allows you to access the cluster within your permission boundaries.

Kyma access kubectl

  1. Use the Console UI to request the IAM Kubeconfig Service to generate the kubeconfig file.
  2. Under the hood, the Ingress Gateway performs TLS termination to decrypt the incoming data and allows the Kyma Console to proceed with the request.
  3. The request goes out from the Kyma Console to the IAM Kubeconfig Service.
  4. IAM Kubeconfig Service validates your in-session ID token and rewrites it into the generated kubeconfig file.

    NOTE: The time to live (TTL) of the ID token is 8 hours, which effectively means that the TTL of the generated kubeconfig file is 8 hours as well. The content of the file looks as follows:

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: SERVER_CERTIFICATE_REDACTED
        server: https://apiserver.kyma.local:9443
      name: kyma.local
    contexts:
    - context:
        cluster: kyma.local
        user: OIDCUser
      name: kyma.local
    current-context: kyma.local
    kind: Config
    preferences: {}
    users:
    - name: OIDCUser
      user:
        token: TOKEN_REDACTED
  5. Use your terminal to run a command, for example to get a list of resources.

  6. Since the Kubernetes API server is not exposed directly, your request goes to the API Server Proxy service. It validates the incoming JWT token and forwards requests to the Kubernetes API server.

Ingress and Egress traffic

Ingress

Kyma uses the Istio Ingress Gateway to handle all incoming traffic, manage TLS termination, and handle mTLS communication between the cluster and external services. By default, the kyma-gateway configuration defines the points of entry to expose all applications using the supplied domain and certificates. Applications are exposed using the API Gateway controller.

The configuration specifies the following parameters and their values:

| Parameter | Description | Value |
|-----------|-------------|-------|
| spec.servers.port | The ports the gateway listens on. Port 80 is automatically redirected to 443. | 443, 80 |
| spec.servers.tls.minProtocolVersion | The minimum protocol version required by the TLS connection. | TLSV1_2. TLSV1_0 and TLSV1_1 are rejected. |
| spec.servers.tls.cipherSuites | Accepted cipher suites. | ECDHE-RSA-CHACHA20-POLY1305, ECDHE-RSA-AES256-GCM-SHA384, ECDHE-RSA-AES256-SHA, ECDHE-RSA-AES128-GCM-SHA256, ECDHE-RSA-AES128-SHA |
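For illustration, a simplified Istio Gateway specification reflecting these values could look as follows. The selector, hosts, and TLS mode are assumptions and may differ from the actual kyma-gateway resource; the domain placeholder is illustrative.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: kyma-gateway
  namespace: kyma-system
spec:
  selector:
    istio: ingressgateway              # assumption: default Istio ingress gateway selector
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.{DOMAIN}"
    tls:
      httpsRedirect: true              # traffic on port 80 is redirected to 443
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*.{DOMAIN}"
    tls:
      mode: SIMPLE                     # assumption: TLS termination with the supplied certificate
      minProtocolVersion: TLSV1_2
      cipherSuites:
      - ECDHE-RSA-CHACHA20-POLY1305
      - ECDHE-RSA-AES256-GCM-SHA384
      - ECDHE-RSA-AES256-SHA
      - ECDHE-RSA-AES128-GCM-SHA256
      - ECDHE-RSA-AES128-SHA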

TLS management

Kyma employs the Bring Your Own Domain/Certificates model that requires you to supply the certificate and key during installation. You can do it using the Helm overrides for Kyma installation. See a sample ConfigMap specifying the values to override:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-certificate-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    kyma-project.io/installation: ""
data:
  global.tlsCrt: "CERT"
  global.tlsKey: "CERT_KEY"

During installation, the values are propagated in the cluster to all components that require them.

Demo setup with xip.io

If you don't supply any certificates or domain during installation, the kyma-installer defaults to a demo setup using the xip.io DNS-as-a-Service (DNSaaS) provider. In this case, the domain is generated on demand using the cluster's LoadBalancer IP, in the form of *.LoadBalancerIP.xip.io, along with a self-signed certificate for that domain.

NOTE: Due to the limited availability of the DNSaaS provider and the self-signed certificate, which can be rejected by some browsers and applications, this setup is regarded as a working, visual demo only. Do not use it for other scenarios.

Gardener

You can install Kyma on top of Gardener-managed instances, which use the certificate management service that is already present. Because Gardener creates the certificate and domain, you don't have to provide them. The certificate is a Custom Resource managed by Gardener, and is a wildcard certificate for the whole domain.

API Server Proxy

The API Server Proxy component is a reverse proxy which acts as an intermediary for the Kubernetes API. By default, it is exposed as a LoadBalancer Service, meaning it requires a dedicated certificate and DNS entry.

To learn more about API Server Proxy configuration, see the configuration section.

Certificate propagation paths

The certificate data is propagated through Kyma and delivered to several components.

Certificate propagation

  1. Kyma Installer reads the override files you have configured.
  2. Kyma installer passes the values to the specific Kyma components.
  3. Each of these components generates Secrets, ConfigMaps, and Certificates in a certain order.

The table shows the order in which the configuration elements are created.

NOTE: The table does not include all possible dependencies between the components and elements that use the certificates. It serves as a reference to know where to find information about the certificates and configurations.

The order differs depending on the mode:

  • Bring Your Own Certificate
  • Demo xip.io setup
  • Gardener-managed

Egress

Currently, no egress limitations are implemented, which means that all applications deployed in the Kyma cluster can access external resources without restrictions.

NOTE: In case of connection problems with external services, you may need to create a ServiceEntry object to register the service.

GraphQL

Kyma uses a custom GraphQL implementation in the Console Backend Service and deploys an RBAC-based logic to control the access to the GraphQL endpoint. All calls to the GraphQL endpoint require a valid Kyma token for authentication.

The authorization in GraphQL uses RBAC, which means that:

  • All of the Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings that you create and assign are effective and give the same permissions when users interact with the cluster resources both through the CLI and the GraphQL endpoints.
  • To give users access to specific queries, you must create appropriate Roles and bindings in your cluster.

The implementation assigns GraphQL actions to specific Kubernetes verbs:

| GraphQL action | Kubernetes verb(s) |
|----------------|--------------------|
| query | get (for a single resource), list (for multiple resources) |
| mutation | create, delete |
| subscription | watch |

NOTE: Due to the nature of Kubernetes, you can secure specific resources specified by their name only for queries and mutations. Subscriptions work only with entire resource groups, such as kinds, and therefore don't allow for such a level of granularity.

Available GraphQL actions

To access cluster resources through GraphQL, an action securing a given resource must be defined and implemented in the cluster. See the GraphQL schema file for the list of actions implemented in every Kyma cluster by default.

Secure a defined GraphQL action

This is an example GraphQL action implemented in Kyma out of the box.

microFrontends(namespace: String!): [MicroFrontend!]! @HasAccess(attributes: {resource: "microfrontends", verb: "list", apiGroup: "ui.kyma-project.io", apiVersion: "v1alpha1"})

This query secures the access to MicroFrontend custom resources with specific names. To access it, the user must be bound to a role that allows access to:

  • resources of the MicroFrontend kind
  • the Kubernetes verb list
  • the ui.kyma-project.io apiGroup

To allow access specifically to the example query, create this RBAC role in the cluster and bind it to a user or a client:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: kyma-microfrontends-query-example
rules:
- apiGroups: ["ui.kyma-project.io"]
  resources: ["microfrontends"]
  verbs: ["list"]
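To complete the setup, bind this Role to a user or a group. A minimal sketch of such a RoleBinding follows; the binding name, Namespace, and subject placeholders are illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kyma-microfrontends-query-example-binding
  namespace: {NAMESPACE}
subjects:
- kind: User                           # alternatively, use kind: Group for a user group
  name: {USER_EMAIL}
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: kyma-microfrontends-query-example
  apiGroup: rbac.authorization.k8s.io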

Request flow

This diagram illustrates the request flow for the Console Backend Service which uses a custom GraphQL implementation:

GraphQL request flow

  1. The user sends a request with an ID token to the GraphQL application.
  2. The GraphQL application validates the user token and extracts user data required to perform Subject Access Review (SAR).
  3. The Kubernetes API Server performs SAR.
  4. Based on the results of SAR, the Kubernetes API Server informs the GraphQL application whether the user can perform the requested GraphQL action.
  5. Based on the information provided by the Kubernetes API Server, the GraphQL application returns an appropriate response to the user.

NOTE: Read more about the custom GraphQL implementation in Kyma.

Configuration

API Server Proxy chart

To configure the API Server Proxy chart, override the default values of its values.yaml file. This document describes parameters that you can configure.

TIP: To learn more about how to use overrides in Kyma, see the documents on Helm overrides for Kyma installation.

Configurable parameters

This table lists the configurable parameters, their descriptions, and default values:

| Parameter | Description | Default value |
|-----------|-------------|---------------|
| port.secure | Specifies the port that exposes API Server Proxy through the load balancer. | 9443 |
| port.insecure | Specifies the port that exposes API Server Proxy through Istio Ingress. | 8444 |
| hpa.minReplicas | Defines the initial number of created API Server Proxy instances. | 1 |
| hpa.maxReplicas | Defines the maximum number of created API Server Proxy instances. | 3 |
| hpa.metrics.resource.targetAverageUtilization | Specifies the average percentage of a given instance memory utilization. After exceeding this limit, Kubernetes creates another API Server Proxy instance. | 50 |

IAM Kubeconfig Service chart

To configure the IAM Kubeconfig Service chart, override the default values of its values.yaml file. This document describes parameters that you can configure.

TIP: To learn more about how to use overrides in Kyma, see the documents on Helm overrides for Kyma installation.

Configurable parameters

This table lists the configurable parameters, their descriptions, and default values:

| Parameter | Description | Default value |
|-----------|-------------|---------------|
| service.port | Specifies the port that exposes the IAM Kubeconfig Service. | 8000 |

Dex chart

To configure the Dex chart, override the default values of its values.yaml file. This document describes parameters that you can configure.

TIP: To learn more about how to use overrides in Kyma, see the documents on Helm overrides for Kyma installation.

Configurable parameters

This table lists the configurable parameters, their descriptions, and default values:

| Parameter | Description | Default value |
|-----------|-------------|---------------|
| dex.expiry.signingKeys | Specifies the period of time after which the public key validating the token to the Console expires. | 720h |
| dex.expiry.idTokens | Specifies the period of time after which the token to the Console expires. | 8h |

Cluster Users chart

To configure the Cluster Users chart, override the default values of its values.yaml file. This document describes parameters that you can configure.

TIP: To learn more about how to use overrides in Kyma, see the documents on Helm overrides for Kyma installation.

Configurable parameters

This table lists the configurable parameters, their descriptions, and default values:

| Parameter | Description | Default value |
|-----------|-------------|---------------|
| bindings.kymaEssentials.groups | Specifies the array of groups used in ClusterRoleBinding to the kyma-essentials ClusterRole. | [] |
| bindings.kymaView.groups | Specifies the array of groups used in ClusterRoleBinding to the kyma-view ClusterRole. | [] |
| bindings.kymaEdit.groups | Specifies the array of groups used in ClusterRoleBinding to the kyma-edit ClusterRole. | [] |
| bindings.kymaAdmin.groups | Specifies the array of groups used in ClusterRoleBinding to the kyma-admin ClusterRole. | [] |
| bindings.kymaDeveloper.groups | Specifies the array of groups used in ClusterRoleBinding to the kyma-developer ClusterRole. | [] |
| users.adminGroup | Specifies the name of the group used in ClusterRoleBinding to the kyma-admin ClusterRole. | "" |
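As a sketch of how these parameters fit together, the following values.yaml-style override maps identity provider groups to Kyma roles; the group names are illustrative assumptions:

bindings:
  kymaAdmin:
    groups:
    - "admins-group"                   # illustrative group name from your identity provider
  kymaView:
    groups:
    - "read-only-group"
users:
  adminGroup: "admins-group"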

OAuth2 server customization and operations

Credentials backup

The ory-hydra-credentials Secret stores all the crucial data required to establish a connection with your database. However, it is regenerated every time the ORY chart is upgraded, so you may accidentally overwrite your credentials. For this reason, it is recommended to back up the Secret. Run this command to save the contents of the Secret to a file:

kubectl get secret -n kyma-system ory-hydra-credentials -o yaml > ory-hydra-credentials-$(date +%Y%m%d).yaml

Postgres password update

If Hydra is installed with the default settings, a Postgres-based database is provided out-of-the-box. If no password has been specified, one is generated and set for the Hydra user. This behavior may not always be desired, so in some cases you may want to modify this password.

To set a custom password, provide the .Values.global.postgresql.postgresqlPassword override during installation.

To update the password for an existing installation, provide the .Values.global.postgresql.postgresqlPassword override and perform the update procedure. However, this only changes the environment setting for the database and does not modify the internal database data. To update the password in the database itself, refer to the Postgres documentation.
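A minimal override sketch following the installer override ConfigMap pattern shown earlier; the ConfigMap name is illustrative, and the data key maps to the .Values.global.postgresql.postgresqlPassword value (for a real password, a Secret-based override may be preferable if your setup supports it):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ory-postgres-password-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    kyma-project.io/installation: ""
data:
  global.postgresql.postgresqlPassword: "{NEW_PASSWORD}"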

OAuth2 server profiles

By default, every Kyma deployment is installed with the OAuth2 server using the default profile. This configuration is not considered production-ready. To use the Kyma OAuth2 server in a production environment, configure Hydra to use the production profile.

Default profile

In the case of the ORY Hydra OAuth2 server, the default profile includes:

  • An in-cluster database that stores the registered client data.
  • A job that reads the generated database credentials and saves them to the configuration of Hydra before the installation and update.
  • Default resource quotas.

Persistence mode for the default profile

The default profile for the OAuth2 server enables the use of a preconfigured PostgreSQL database, which is installed together with the Hydra server. The database is created in the cluster as a StatefulSet and uses a PersistentVolume that is provider-specific. This means that the PersistentVolume used by the database uses the default StorageClass of the cluster's host provider. The internal PostgreSQL database is installed with every Kyma deployment and doesn't require manual configuration.

Production profile

The production profile introduces the following changes to the Hydra OAuth2 server deployment:

  • The registered client data is saved in a user-managed database.
  • Optionally, a Gcloud proxy service is deployed.
  • The Oathkeeper authorization and authentication proxy has raised CPU limits and requests. It starts with more replicas and can scale up horizontally to higher numbers.

Oathkeeper settings for the production profile

The production profile requires the following parameters in order to operate:

| Parameter | Description | Required value |
|-----------|-------------|----------------|
| oathkeeper.deployment.resources.limits.cpu | Defines limits for CPU resources. | 800m |
| oathkeeper.deployment.resources.requests.cpu | Defines requests for CPU resources. | 200m |
| hpa.oathkeeper.minReplicas | Defines the initial number of created Oathkeeper instances. | 3 |
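A sketch of an override ConfigMap carrying these values, reusing the installer override pattern from this document; the ConfigMap name and component label are illustrative assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ory-production-profile-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: ory                     # assumption: chart-scoped override label
    kyma-project.io/installation: ""
data:
  oathkeeper.deployment.resources.limits.cpu: "800m"
  oathkeeper.deployment.resources.requests.cpu: "200m"
  hpa.oathkeeper.minReplicas: "3"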

Persistence modes for the production profile

The production profile for the OAuth2 server enables the use of a custom database, which can be one of the following options:

  • A user-maintained database to which credentials are supplied.
  • A GCP Cloud SQL instance to which credentials are supplied. In this case, an extra gcloud-proxy deployment is created allowing secured connections.

Custom database

Alternatively, you can use a compatible custom database to store the registered client data. To use such a database, you must create a Kubernetes Secret with the database password and reference it in the overrides for the Hydra OAuth2 server. The details of the database are passed using these parameters of the production profile override:

General settings:

| Parameter | Description | Required value |
|-----------|-------------|----------------|
| global.ory.hydra.persistence.postgresql.enabled | Defines whether Hydra should initiate the deployment of an in-cluster database. Set to false to use a self-provided database. If set to true, Hydra always uses an in-cluster database and ignores the custom database details. | false |
| hydra.hydra.config.secrets.system | Sets the system encryption string for Hydra. | An alphanumeric string at least 16 characters long |
| hydra.hydra.config.secrets.cookie | Sets the cookie session encryption string for Hydra. | An alphanumeric string at least 16 characters long |

Database settings:

| Parameter | Description | Example value |
|-----------|-------------|---------------|
| global.ory.hydra.persistence.user | Specifies the name of the user with permissions to access the database. | dbuser |
| global.ory.hydra.persistence.secretName | Specifies the name of the Secret in the same Namespace as Hydra that stores the database password. | my-secret |
| global.ory.hydra.persistence.secretKey | Specifies the name of the key in the Secret that contains the database password. | my-db-password |
| global.ory.hydra.persistence.dbUrl | Specifies the database URL. For more information, see the configuration file. | mydb.my-namespace:1234 |
| global.ory.hydra.persistence.dbName | Specifies the name of the database saved in Hydra. | db |
| global.ory.hydra.persistence.dbType | Specifies the type of the database. The supported protocols are postgres, mysql, cockroach. For more information, see the configuration file. | postgres |
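Combining the example values above, an override sketch for a self-provided database could look as follows. The ConfigMap name and component label are illustrative assumptions, and the referenced Secret must already exist in the Hydra Namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ory-custom-db-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: ory                     # assumption: chart-scoped override label
    kyma-project.io/installation: ""
data:
  global.ory.hydra.persistence.postgresql.enabled: "false"
  global.ory.hydra.persistence.user: "dbuser"
  global.ory.hydra.persistence.secretName: "my-secret"
  global.ory.hydra.persistence.secretKey: "my-db-password"
  global.ory.hydra.persistence.dbUrl: "mydb.my-namespace:1234"
  global.ory.hydra.persistence.dbName: "db"
  global.ory.hydra.persistence.dbType: "postgres"
  hydra.hydra.config.secrets.system: "{AT_LEAST_16_CHARACTER_STRING}"
  hydra.hydra.config.secrets.cookie: "{AT_LEAST_16_CHARACTER_STRING}"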

Google Cloud SQL

Cloud SQL is a database supplied and maintained by the provider, which requires a special proxy deployment to provide a secured connection. Kyma provides a pre-installed proxy deployment, which requires the following parameters to operate:

General settings:

| Parameter | Description | Required value |
|-----------|-------------|----------------|
| global.ory.hydra.persistence.postgresql.enabled | Defines whether Hydra should initiate the deployment of an in-cluster database. Set to false to use a self-provided database. If set to true, Hydra always uses an in-cluster database and ignores the custom database details. | false |
| global.ory.hydra.persistence.gcloud.enabled | Defines whether Hydra should initiate the deployment of the Google SQL proxy. | true |
| hydra.hydra.config.secrets.system | Sets the system encryption string for Hydra. | An alphanumeric string at least 16 characters long |
| hydra.hydra.config.secrets.cookie | Sets the cookie session encryption string for Hydra. | An alphanumeric string at least 16 characters long |

Database settings:

| Parameter | Description | Example value |
|-----------|-------------|---------------|
| data.global.ory.hydra.persistence.user | Specifies the name of the user with permissions to access the database. | dbuser |
| data.global.ory.hydra.persistence.secretName | Specifies the name of the Secret in the same Namespace as Hydra that stores the database password. | my-secret |
| data.global.ory.hydra.persistence.secretKey | Specifies the name of the key in the Secret that contains the database password. | my-db-password |
| data.global.ory.hydra.persistence.dbUrl | Specifies the database URL. For more information, see the configuration file. | Required: ory-gcloud-sqlproxy.kyma-system |
| data.global.ory.hydra.persistence.dbName | Specifies the name of the database saved in Hydra. | db |
| data.global.ory.hydra.persistence.dbType | Specifies the type of the database. The supported protocols are postgres, mysql, cockroach. For more information, see the configuration file. | postgres |

Proxy settings:

| Parameter | Description | Example value |
|-----------|-------------|---------------|
| gcloud-sqlproxy.cloudsql.instance.instanceName | Specifies the name of the database instance in GCP. This value is the last part of the string returned by the Cloud SQL Console for Instance connection name, that is, the part after the final colon. For example, if the value for Instance connection name is my_project:my_region:mydbinstance, use only mydbinstance. | mydbinstance |
| gcloud-sqlproxy.cloudsql.instance.project | Specifies the name of the GCP project used. | my-gcp-project |
| gcloud-sqlproxy.cloudsql.instance.region | Specifies the name of the GCP region used. Note that it does not equal the GCP zone. | europe-west4 |
| gcloud-sqlproxy.cloudsql.instance.port | Specifies the port used by the database to handle connections. Database dependent. | postgres: 5432, mysql: 3306 |
| gcloud-sqlproxy.existingSecret | Specifies the name of the Secret in the same Namespace as the proxy that stores the database password. | my-secret |
| gcloud-sqlproxy.existingSecretKey | Specifies the name of the key in the Secret that contains the GCP ServiceAccount json key. | sa.json |
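A sketch of the corresponding overrides for the Cloud SQL setup, using the example values from the tables above; the ConfigMap name, the component label, and grouping everything into a single ConfigMap are assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ory-gcloud-db-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: ory                     # assumption: chart-scoped override label
    kyma-project.io/installation: ""
data:
  global.ory.hydra.persistence.postgresql.enabled: "false"
  global.ory.hydra.persistence.gcloud.enabled: "true"
  global.ory.hydra.persistence.user: "dbuser"
  global.ory.hydra.persistence.secretName: "my-secret"
  global.ory.hydra.persistence.secretKey: "my-db-password"
  global.ory.hydra.persistence.dbUrl: "ory-gcloud-sqlproxy.kyma-system"
  global.ory.hydra.persistence.dbName: "db"
  global.ory.hydra.persistence.dbType: "postgres"
  gcloud-sqlproxy.cloudsql.instance.instanceName: "mydbinstance"
  gcloud-sqlproxy.cloudsql.instance.project: "my-gcp-project"
  gcloud-sqlproxy.cloudsql.instance.region: "europe-west4"
  gcloud-sqlproxy.cloudsql.instance.port: "5432"
  gcloud-sqlproxy.existingSecret: "my-secret"
  gcloud-sqlproxy.existingSecretKey: "sa.json"
  hydra.hydra.config.secrets.system: "{AT_LEAST_16_CHARACTER_STRING}"
  hydra.hydra.config.secrets.cookie: "{AT_LEAST_16_CHARACTER_STRING}"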

NOTE: When using any kind of custom database (Google Cloud SQL or self-maintained), it is important to provide the hydra.hydra.config.secrets variables. Otherwise, a random secret is generated. This secret must be common to all Hydra instances using the same instance of the chosen database.

Use the production profile

Follow these steps to migrate your OAuth2 server to the production profile:

  1. Apply an override that forces the Hydra OAuth2 server to use the database of your choice. See the persistence modes for the production profile described above for examples of the override data.
  2. Run the cluster update process.

TIP: To learn more about how to use overrides in Kyma, see the documents on Helm overrides for Kyma installation.

TIP: All the client data registered by Hydra Maester is migrated to the new database as a part of the update process. During this process, the clients are not available, which may result in errors when issuing tokens. If you notice missing or inconsistent data, delete the Hydra Maester Pod to force reconciliation. For more information, read about the Hydra Maester controller and OAuth2 client registration in Kyma.

Permission Controller chart

To configure the Permission Controller chart, override the default values of its values.yaml file. This document describes parameters that you can configure.

TIP: To learn more about how to use overrides in Kyma, see the documents on Helm overrides for Kyma installation.

Configurable parameters

The following table lists the configurable parameters of the permission-controller chart and their default values.

| Parameter | Description | Default value |
|-----------|-------------|---------------|
| global.kymaRuntime.namespaceAdminGroup | Determines the user group for which a RoleBinding to the kyma-namespace-admin role is created in all Namespaces except those specified in the config.namespaceBlacklist parameter. | runtimeNamespaceAdmin |
| config.namespaceBlacklist | Comma-separated list of Namespaces in which a RoleBinding to the kyma-namespace-admin role is not created for the members of the group specified in the global.kymaRuntime.namespaceAdminGroup parameter. | kyma-system, istio-system, default, knative-eventing, kube-node-lease, kube-public, kube-system, kyma-installer, kyma-integration, natss, compass-system |
| config.enableStaticUser | Determines if a RoleBinding to the kyma-namespace-admin role for the static namespace.admin@kyma.cx user is created for every Namespace that is not blacklisted. | true |

Customization examples

You can adjust the default settings of the Permission Controller by applying these overrides to the cluster either before installation, or at runtime:

  1. To change the default group, run:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: namespace-admin-groups-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    kyma-project.io/installation: ""
data:
  global.kymaRuntime.namespaceAdminGroup: "{CUSTOM-GROUP}"
EOF
  2. To change the blacklisted Namespaces and decide whether the namespace.admin@kyma.cx static user should be assigned the kyma-admin role, run:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: permission-controller-overrides
  namespace: kyma-installer
  labels:
    component: permission-controller
    installer: overrides
    kyma-project.io/installation: ""
data:
  config.namespaceBlacklist: "kyma-system, istio-system, default, knative-eventing, kube-node-lease, kube-public, kube-system, kyma-installer, kyma-integration, natss, compass-system, {USER-DEFINED-NAMESPACE-1}, {USER-DEFINED-NAMESPACE-2}"
  config.enableStaticUser: "{BOOLEAN-VALUE-FOR-NAMESPACE-ADMIN-STATIC-USER}"
EOF
  3. To change the group mappings, run:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: permission-controller-overrides
  namespace: kyma-installer
  labels:
    component: permission-controller
    installer: overrides
    kyma-project.io/installation: ""
data:
  # Namespace admins group name
  global.kymaRuntime.namespaceAdminGroup: "runtimeNamespaceAdmin"
  # Namespace developers group name
  global.kymaRuntime.developerGroup: "runtimeDeveloper"
  # Cluster-wide Kyma admins group name
  global.kymaRuntime.adminGroup: "runtimeAdmin"
EOF

Tutorials

Overview

The Tutorials section aims to identify the most common and recurring configuration tasks within the Kyma security layer, as well as provide a step-by-step solution for those tasks.

If you can't find a solution that suits your case, don't hesitate to create a GitHub issue or use the #security Slack channel to get direct support from the community.

Update TLS certificate

The TLS certificate is a vital security element. Follow this tutorial to update the TLS certificate in Kyma.

NOTE: This procedure can interrupt the communication between your cluster and the outside world for a limited period of time.

Prerequisites

  • New TLS certificate and key for custom domain deployments, base64-encoded
  • kubeconfig file generated for the Kubernetes cluster that hosts the Kyma instance

Steps

  • Custom domain certificate
  • Self-signed certificate
  1. Trigger the update process. Run:

    kubectl -n default label installation/kyma-installation action=install

    To watch the progress of the update, run:

    while true; do \
    kubectl -n default get installation/kyma-installation -o jsonpath="{'Status: '}{.status.state}{', description: '}{.status.description}"; echo; \
    sleep 5; \
    done

    The process is complete when you see the Kyma installed message.

  2. Restart the Console Backend Service to propagate the new certificate. Run:

    kubectl delete pod -n kyma-system -l app=console-backend
  3. Add the newly generated certificate to the trusted certificates of your OS. For macOS, run:

    tmpfile=$(mktemp /tmp/temp-cert.XXXXXX) \
    && kubectl get configmap net-global-overrides -n kyma-installer -o jsonpath='{.data.global\.ingress\.tlsCrt}' | base64 --decode > $tmpfile \
    && sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain $tmpfile \
    && rm $tmpfile

Manage static users in Dex

Create a new static user

To create a static user in Dex, create a Secret with the dex-user-config label set to true. Run:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: {SECRET_NAME}
  namespace: {SECRET_NAMESPACE}
  labels:
    "dex-user-config": "true"
data:
  email: {BASE64_USER_EMAIL}
  username: {BASE64_USERNAME}
  password: {BASE64_USER_PASSWORD}
type: Opaque
EOF

NOTE: If you don't specify the Namespace in which you want to create the Secret, the system creates it in the default Namespace.

The following table describes the fields that are mandatory to create a static user. If any of these fields is missing, the user is not created.

| Field | Description |
|-------|-------------|
| data.email | Base64-encoded email address used to sign in to the Console UI. Must be unique. |
| data.username | Base64-encoded username displayed in the Console UI. |
| data.password | Base64-encoded user password. There are no specific requirements regarding password strength, but it is recommended to use a password that is at least 8 characters long. |

Create the Secrets in the cluster before Dex is installed. The Dex init container with the tool that configures Dex generates user configuration data based on the properly labeled Secrets and adds the data to the ConfigMap.

If you want to add a new static user after Dex is installed, restart the Dex Pod. This creates a new Pod with an updated ConfigMap.

Bind a user to a Role or a ClusterRole

A newly created static user has no access to any resources of the cluster as there is no Role or ClusterRole bound to it.
By default, Kyma comes with the following ClusterRoles:

  • kyma-admin: gives full admin access to the entire cluster
  • kyma-namespace-admin: gives full admin access except for the write access to AddonsConfigurations
  • kyma-edit: gives full access to all Kyma-managed resources
  • kyma-developer: gives full access to Kyma-managed resources and basic Kubernetes resources
  • kyma-view: allows viewing and listing all of the resources of the cluster
  • kyma-essentials: gives a minimal set of view access rights required to use the Kyma Console

To bind a newly created user to the kyma-view ClusterRole, run this command:

kubectl create clusterrolebinding {BINDING_NAME} --clusterrole=kyma-view --user={USER_EMAIL}

To check if the binding is created, run:

kubectl get clusterrolebinding {BINDING_NAME}

Add an Identity Provider to Dex

Add external identity providers to Kyma using Dex connectors. You can add connectors to Dex by creating component overrides. This tutorial shows how to add a GitHub or XSUAA connector and use it to authenticate users in Kyma.

NOTE: Groups in GitHub are represented as teams. Read more to learn how to manage teams in GitHub.

Prerequisites

  • GitHub
  • XSUAA

Configure Dex

Register the connector by creating a Helm override for Dex. Create the override ConfigMap in the Kubernetes cluster before Dex is installed. If you want to register a connector at runtime, trigger the update process after creating the override.

TIP: You can use Go Template expressions in the override value. These expressions are resolved by Helm using the same set of overrides as configured for the entire chart.

  • GitHub
  • XSUAA

TIP: The dex.useStaticConnector parameter set to false overrides the default configuration and disables the Dex static user store. As a result, you can log in to the cluster using only the registered connectors. If you want to keep the Dex static user store enabled, remove the dex.useStaticConnector parameter from the ConfigMap template.

Configure authorization rules for the GitHub connector

To bind GitHub groups to the default Kyma roles, edit the bindings section in the values.yaml file. Follow this template:

bindings:
  kymaAdmin:
    groups:
    - "{GITHUB_ORGANIZATION}:{GITHUB_TEAM_A}"
  kymaView:
    groups:
    - "{GITHUB_ORGANIZATION}:{GITHUB_TEAM_B}"

TIP: You can bind GitHub teams to any of the five predefined Kyma roles. Use these aliases: kymaAdmin, kymaView, kymaDeveloper, kymaEdit, or kymaEssentials. For more information, read about the predefined roles in Kyma.

This table explains the placeholders used in the template:

| Placeholder | Description |
|-------------|-------------|
| GITHUB_ORGANIZATION | Specifies the name of the GitHub organization. |
| GITHUB_TEAM_A | Specifies the name of the GitHub team to bind to the kyma-admin role. |
| GITHUB_TEAM_B | Specifies the name of the GitHub team to bind to the kyma-view role. |

Invalidate Dex signing keys

By default, Dex in Kyma stores the private and public keys used to sign and validate JWT tokens on the cluster using custom resources. If, for some reason, the private keys leak, you must invalidate the private-public key pair to prevent the attacker from issuing new tokens and from validating the existing ones. This is critical, because otherwise the attacker can use the private key to issue a new JWT token and call third-party services that have Dex JWT authentication enabled. Follow this tutorial to learn how to invalidate the signing keys.

Prerequisites

To complete this tutorial, you must have either the cluster-admin kubeconfig file issued from a cloud provider or the Kyma kubeconfig file and the kyma-admin role assigned.

Steps

Perform these steps to invalidate the keys:

  1. Delete all signing keys on a cluster:
kubectl delete signingkeies.dex.coreos.com -n kyma-system --all

NOTE: Although it is not recommended to interact with any of the Dex CRs, in this situation it is the only way to invalidate the keys.

  2. Restart the Dex Pod:
kubectl delete po -n kyma-system -lapp=dex

Dex will create a new CR with a private-public key pair.

  3. Restart kyma-system Pods that validate tokens issued from Dex to drop the existing public keys:
kubectl delete po -n kyma-system -l'app in (apiserver-proxy,iam-kubeconfig-service,console-backend-service,kiali-kcproxy,log-ui)'; kubectl delete po -n kyma-system -l 'app.kubernetes.io/name in (oathkeeper,tracing)'
  4. Manually restart all your applications that validate Dex JWT tokens internally to get the new public keys.

NOTE: Following the tutorial steps results in downtime of Dex, a couple of Kyma components, and potentially your applications. If you want to use the kubectl scale command to scale replicas, bear in mind that this downtime is intentional. It prevents Dex from issuing new tokens signed by the compromised private key and forces at least the Kyma applications to fetch new public keys and, at the same time, reject all existing tokens signed by the compromised private key during JWT token validation.

Get the kubeconfig file

The IAM Kubeconfig Service is a proprietary tool that generates a kubeconfig file which allows the user to access the Kyma cluster through the Command Line Interface (CLI), and to manage the connected cluster within the permission boundaries of the user.

The IAM Kubeconfig Service is exposed publicly. You can access it directly at https://configurations-generator.{YOUR_CLUSTER_DOMAIN}. The service requires a valid ID token issued by Dex to return a code 200 result.

Steps

Follow these steps to get the kubeconfig file and configure the CLI to connect to the cluster:

  1. Access the Console UI of your Kyma cluster.
  2. Click the user icon in the upper right corner of the screen.
  3. Click the Get Kubeconfig button to download the kubeconfig file to a selected location on your machine.
  4. Open a terminal window.
  5. Export the KUBECONFIG environment variable to point to the downloaded kubeconfig. Run this command:

    export KUBECONFIG={KUBECONFIG_FILE_PATH}

    NOTE: Drag and drop the kubeconfig file in the terminal to easily add the path of the file to the export KUBECONFIG command you run.

  6. Run kubectl cluster-info to check if the CLI is connected to the correct cluster.

    NOTE: Exporting the KUBECONFIG environment variable works only in the context of the given terminal window. If you close the window in which you exported the variable, or if you switch to a new terminal window, you must export the environment variable again to connect the CLI to the desired cluster.

    Alternatively, get the kubeconfig file by sending a GET request with a valid ID token issued for the user to the /kube-config endpoint of the https://configurations-generator.{YOUR_CLUSTER_DOMAIN} service. For example:

    curl https://configurations-generator.{YOUR_CLUSTER_DOMAIN}/kube-config -H "Authorization: Bearer {VALID_ID_TOKEN}"

Troubleshooting

Overview

The Troubleshooting section aims to identify the most common and recurring issues within the Kyma security layer, as well as provide the most suitable solutions to these issues.

If you can't find a solution that suits your case, don't hesitate to create a GitHub issue or use the #security Slack channel to get direct support from the community.

"403 Forbidden" in the Console

If you log in to the Console and get a 403 Forbidden error, do the following:

  1. Fetch the ID Token. For example, use the Chrome Developer Tools and search for the token in sent requests.
  2. Decode the ID Token. For example, use the jwt.io page.
  3. Check if the token contains groups claims:

    "groups": [
    "example-group"
    ]
  4. Make sure the group you are assigned to has permissions to view the resources you requested.

Issues with certificates on Gardener

During installation on Gardener, Kyma requests domain SSL certificates using Gardener's Certificate custom resource to ensure secure communication through both the Kyma UI and the Kubernetes CLI.

This process can result in the following issues:

  • xip-patch or apiserver-proxy installation takes too long.
  • Certificate is still not ready, status is {STATUS}. Exiting... error occurs.
  • Certificates are no longer valid.

If any of these issues appears, follow these steps:

  1. Check the status of the Certificate resource:

    kubectl get certificates.cert.gardener.cloud --all-namespaces
  2. If the status of any Certificate is Error, run:

    kubectl get certificates -n {CERTIFICATE_NAMESPACE} {CERTIFICATE_NAME} -o jsonpath='{ .status.message }'

The result describes the reason for the failure to issue a domain SSL certificate. Depending on the moment when the error occurred, you can perform different actions.

  • Error during the installation
  • Error after the installation

"Not enough permissions" error

If you log in to the Kyma Console and receive the Not enough permissions error message:

  1. Fetch the ID Token. For example, use Chrome DevTools and search for the token in the Authorization header of the sent requests.
  2. Decode the ID Token. For example, use the jwt.io page.
  3. Check the value of the "email_verified" property:
{
  ...
  "iss": "https://dex.c-6d073c0.kyma-stage.shoot.live.k8s-hana.ondemand.com",
  "aud": [
    "kyma-client",
    "console"
  ],
  "exp": 1595525592,
  "iat": 1595496792,
  "azp": "console",
  ...
  "email": "{YOUR_EMAIL_ADDRESS}",
  "email_verified": false,
}
  4. If the value is set to false, it means that the identity provider was unable to verify your email address. Contact your identity provider for further guidance.