Security
Overview
The security model in Kyma uses the Service Mesh component to enforce authorization through Kubernetes Role-Based Access Control (RBAC) in the cluster. The identity federation is managed through Dex, an open-source OpenID Connect identity provider.
Dex implements a system of connectors that allow you to delegate authentication to external OpenID Connect and SAML2-compliant Identity Providers and use their user stores. Read the tutorial to learn how to enable authentication with an external Identity Provider by using a Dex connector.
Out of the box, Kyma comes with its own static user store used by Dex to authenticate users. This solution is designed for local Kyma deployments as it allows you to easily create predefined user credentials by creating Secret objects with a custom `dex-user-config` label.
Read the tutorial to learn how to manage users in the static store used by Dex.
Kyma uses a group-based approach to managing authorizations. To give users that belong to a group access to resources in Kyma, you must create:
- Role and RoleBinding - for resources in a given Namespace.
- ClusterRole and ClusterRoleBinding - for resources available in the entire cluster.
The RoleBinding or ClusterRoleBinding must have a group specified as their subject. See the Kubernetes documentation to learn how to manage Roles and RoleBindings.
NOTE: You cannot define groups for the static user store. Instead, bind the user directly to a role or a cluster role by setting the user as the subject of a RoleBinding or ClusterRoleBinding.
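For illustration, a minimal RoleBinding that gives a group the kyma-view role in a single Namespace can look as follows (the binding, group, and Namespace names are placeholders; for a static user, replace the Group subject with a User subject and provide the user email):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {BINDING_NAME}
  namespace: {NAMESPACE}
subjects:
- kind: Group
  name: {GROUP_NAME}
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kyma-view
  apiGroup: rbac.authorization.k8s.io
```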
By default, there are 7 roles used to manage permissions in every Kyma cluster. These roles are:
- kyma-essentials
- kyma-view
- kyma-namespace-admin-essentials
- kyma-edit
- kyma-developer
- kyma-admin
- kyma-namespace-admin
Read more about roles in Kyma.
NOTE: The Global permissions view in the Settings section of the Kyma Console UI allows you to manage cluster-level bindings between user groups and roles. To manage bindings between user groups and roles in a Namespace, select the Namespace and go to the Configuration section of the Permissions view.
Architecture
The following diagram illustrates the authorization and authentication flow in Kyma. The representation assumes the Kyma Console UI as the user's point of entry.
- The user opens the Kyma Console UI. If the Console application doesn't find a JWT token in the browser session storage, it redirects the user's browser to the OpenID Connect (OIDC) provider, Dex.
- Dex lists all defined Identity Provider connectors to the user. The user selects the Identity Provider to authenticate with. After successful authentication, the browser is redirected back to the OIDC provider which issues a JWT token to the user. After obtaining the token, the browser is redirected back to the Console UI. The Console UI stores the token in the Session Storage and uses it for all subsequent requests.
- The Authorization Proxy validates the JWT token passed in the Authorization Bearer request header. It extracts the user and groups details from the token, and sets the required impersonation headers.
- The request is forwarded to the Kubernetes API Server.
NOTE: The Authorization Proxy can verify JWT tokens issued by Dex because Dex is registered as a trusted issuer through OIDC parameters during the Kyma installation.
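For illustration, the impersonation headers the proxy sets are the standard Kubernetes ones, and you can observe the same semantics with kubectl's built-in impersonation flags (the user and group below are examples):

```bash
# Under the hood, kubectl sets the Impersonate-User and Impersonate-Group
# headers on the request, just like the Authorization Proxy does.
kubectl get pods -n production --as user@example.com --as-group runtimeDeveloper
```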
Details
IAM Kubeconfig Service
The IAM Kubeconfig Service is a proprietary tool that generates a `kubeconfig` file which allows the user to access the Kyma cluster through the Command Line Interface (CLI) and to manage the connected cluster within the permission boundaries of the user.
The IAM Kubeconfig Service rewrites the ID token issued for the user by Dex into the generated `kubeconfig` file. The time to live (TTL) of the ID token is 8 hours, which effectively means that the TTL of the generated `kubeconfig` file is 8 hours as well.
The service is publicly exposed. You can access it directly under the `https://configurations-generator.{YOUR_CLUSTER_DOMAIN}` address. The service requires a valid ID token issued by Dex to return a code `200` result.
Get the kubeconfig file and configure the CLI
Follow these steps to get the `kubeconfig` file and configure the CLI to connect to the cluster:
- Access the Console UI of your Kyma cluster.
- Click Administration.
- Click the Download config button to download the `kubeconfig` file to a selected location on your machine.
- Open a terminal window.
- Export the KUBECONFIG environment variable to point to the downloaded `kubeconfig` file. Run this command:

  ```bash
  export KUBECONFIG={KUBECONFIG_FILE_PATH}
  ```

  NOTE: Drag and drop the `kubeconfig` file in the terminal to easily add the path of the file to the `export KUBECONFIG` command you run.

- Run `kubectl cluster-info` to check if the CLI is connected to the correct cluster.

NOTE: Exporting the KUBECONFIG environment variable works only in the context of the given terminal window. If you close the window in which you exported the variable, or if you switch to a new terminal window, you must export the environment variable again to connect the CLI to the desired cluster.
Alternatively, get the `kubeconfig` file by sending a `GET` request with a valid ID token issued for the user to the `/kube-config` endpoint of the `https://configurations-generator.{YOUR_CLUSTER_DOMAIN}` service. For example:

```bash
curl -X GET https://configurations-generator.{YOUR_CLUSTER_DOMAIN}/kube-config -H "Authorization: Bearer {VALID_ID_TOKEN}"
```
GraphQL
Kyma uses a custom GraphQL implementation in the Console Backend Service and deploys an RBAC-based logic to control the access to the GraphQL endpoint. All calls to the GraphQL endpoint require a valid Kyma token for authentication.
The authorization in GraphQL uses RBAC, which means that:
- All of the Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings that you create and assign are effective and give the same permissions when users interact with the cluster resources both through the CLI and the GraphQL endpoints.
- To give users access to specific queries, you must create appropriate Roles and bindings in your cluster.
The implementation assigns GraphQL actions to specific Kubernetes verbs:
GraphQL action | Kubernetes verb(s) |
---|---|
query | get (for a single resource), list (for multiple resources) |
mutation | create, delete |
subscription | watch |
NOTE: Due to the nature of Kubernetes, you can secure specific resources specified by their name only for queries and mutations. Subscriptions work only with entire resource groups, such as kinds, and therefore don't allow for that level of granularity.
Available GraphQL actions
To access cluster resources through GraphQL, an action securing a given resource must be defined and implemented in the cluster. See the GraphQL schema file for the list of actions implemented in every Kyma cluster by default.
Secure a defined GraphQL action
This is an example GraphQL action implemented in Kyma out of the box.
```graphql
microFrontends(namespace: String!): [MicroFrontend!]! @HasAccess(attributes: {resource: "microfrontends", verb: "list", apiGroup: "ui.kyma-project.io", apiVersion: "v1alpha1"})
```
This query secures the access to MicroFrontend custom resources with specific names. To access it, the user must be bound to a role that allows access to:
- resources of the MicroFrontend kind
- the Kubernetes verb `list`
- the `ui.kyma-project.io` apiGroup
To allow access specifically to the example query, create this RBAC role in the cluster and bind it to a user or a client:
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: kyma-microfrontends-query-example
rules:
- apiGroups: ["ui.kyma-project.io"]
  resources: ["microfrontends"]
  verbs: ["list"]
```
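To complete the example, a RoleBinding such as this sketch grants the Role to a specific user (the binding name, user email, and Namespace are placeholders):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kyma-microfrontends-query-example-binding
  namespace: {NAMESPACE}
subjects:
- kind: User
  name: {USER_EMAIL}
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: kyma-microfrontends-query-example
  apiGroup: rbac.authorization.k8s.io
```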
NOTE: Read also about RBAC authorization in a Kubernetes cluster.
GraphQL request flow
This diagram illustrates the request flow for the Console Backend Service which uses a custom GraphQL implementation:
- The user sends a request with an ID token to the GraphQL application.
- The GraphQL application validates the user token and extracts user data required to perform Subject Access Review (SAR).
- The Kubernetes API Server performs SAR.
- Based on the results of SAR, the Kubernetes API Server informs the GraphQL application whether the user can perform the requested GraphQL action.
- Based on the information provided by the Kubernetes API Server, the GraphQL application returns an appropriate response to the user.
NOTE: Read more about the custom GraphQL implementation in Kyma.
Roles in Kyma
Kyma uses roles and groups to manage access in the cluster. Every cluster comes with predefined roles which give the assigned users different levels of permissions suitable for different purposes. These roles are defined as ClusterRoles and use the Kubernetes mechanism of aggregation which allows you to combine multiple ClusterRoles into a single ClusterRole. Use the aggregation mechanism to efficiently manage access to Kubernetes and Kyma-specific resources.
NOTE: Read the Kubernetes documentation to learn more about the aggregation mechanism used to define Kyma roles.
You can assign any of the predefined roles to a user or to a group of users in the context of:
- The entire cluster by creating a ClusterRoleBinding
- A specific Namespace by creating a RoleBinding
TIP: To ensure proper Namespace separation, use RoleBindings to give users access to the cluster. This way a group or a user can have different permissions in different Namespaces.
The predefined roles are:
Role | Default group | Description |
---|---|---|
kyma-essentials | runtimeDeveloper | The basic role required to allow the users to access the Console UI of the cluster. This role doesn't give the user rights to modify any resources. |
kyma-namespace-admin-essentials | runtimeNamespaceAdmin | The role that allows the user to access the Console UI and create Namespaces, built on top of the kyma-essentials role. Used to give the members of selected groups the ability to create Namespaces in which the Permission Controller binds them to the kyma-namespace-admin role. |
kyma-view | runtimeOperator | The role for listing Kubernetes and Kyma-specific resources. |
kyma-edit | None | The role for editing Kyma-specific resources. It's aggregated by other roles. |
kyma-developer | None | The role created for developers who build implementations using Kyma. It allows you to list and edit Kubernetes and Kyma-specific resources. You need to bind it manually to a user or a group in the Namespaces of your choice. Use the runtimeDeveloper group when you run Kyma with the default cluster-users chart configuration. |
kyma-admin | runtimeAdmin | The role with the highest permission level which gives access to all Kubernetes and Kyma resources and components with administrative rights. |
kyma-namespace-admin | runtimeNamespaceAdmin | The role that has the same rights as the kyma-admin role, except for the write access to AddonsConfigurations. The Permission Controller automatically creates a RoleBinding to the runtimeNamespaceAdmin group in all non-system Namespaces. |
CAUTION: To give a user the kyma-developer role permissions in a Namespace, create a RoleBinding to the kyma-developer cluster role in that Namespace. You can define a subject of the RoleBinding by specifying either a Group, or a User. If you decide to specify a User, provide a user email.
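For example, you can create such a RoleBinding with kubectl (the binding name, group or user, and Namespace are placeholders):

```bash
# Bind a group to the kyma-developer cluster role in a given Namespace
kubectl create rolebinding {BINDING_NAME} --clusterrole=kyma-developer --group={GROUP_NAME} -n {NAMESPACE}

# Alternatively, bind a single user by email
kubectl create rolebinding {BINDING_NAME} --clusterrole=kyma-developer --user={USER_EMAIL} -n {NAMESPACE}
```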
To learn more about the default roles and how they are constructed, see the `rbac-roles.yaml` file.
OAuth2 and OpenID Connect server
By default, every Kyma deployment comes with an OAuth2 authorization server solution from ORY. The `ory` component consists of four elements:
- Hydra OAuth2 and OpenID Connect server which issues access, refresh, and ID tokens to registered clients which call services in a Kyma cluster. By default, Hydra is deployed with a database backend for persistent data management. The database used is the official Bitnami PostgreSQL package; however, Kyma uses the official PostgreSQL Docker images instead of those provided by Bitnami, because the official images are Alpine-based, which makes them lighter and reduces the attack surface.
- Oathkeeper authorization & authentication proxy which authenticates and authorizes incoming requests based on the list of defined Access Rules.
- Oathkeeper Maester Kubernetes controller which feeds Access Rules to the Oathkeeper proxy by creating or updating the Oathkeeper ConfigMap and populating it with rules found in instances of the `rules.oathkeeper.ory.sh/v1alpha1` custom resource.
- Hydra Maester Kubernetes controller which manages OAuth2 clients by communicating the data found in instances of the `oauth2clients.hydra.ory.sh` custom resource to the ORY Hydra API.
Out of the box, the Kyma implementation of the ORY stack supports the OAuth 2.0 Client Credentials Grant.
NOTE: The implementation of the ORY OAuth2 server in Kyma is still in the early stages and is subject to changes and development. Read the blog post to get a better understanding of the target integration of the ORY stack in Kyma.
Register an OAuth2 client
To interact with the Kyma OAuth2 server, you must register an OAuth2 client. To register a client, create an instance of the OAuth2Client custom resource (CR) which triggers the Hydra Maester controller to send a client registration request to the OAuth2 server.
CAUTION: If you run Kyma on a Minikube cluster, Hydra stores client data in an in-memory database. This configuration is prone to data loss and may cause erratic behavior of the OAuth2 server. By default, Hydra Maester reconciles the database every 10 hours, but you can resolve any discrepancies manually by deleting the Hydra Maester Pod.
When you register an OAuth2 client, you can set its redirect URI used in user-facing flows. To add a redirect URI for the client you register, use the optional spec.redirectUris property. For more details, see the full ORY OAuth2Client Custom Resource Definition (CRD).
For each client, you can provide a client ID and secret. If you don't provide the credentials, Hydra generates a random client ID and secret pair. Client credentials are stored as Kubernetes Secrets in the same Namespace as the CR instances of the corresponding clients.
Use your own credentials
Create a Kubernetes Secret that contains the client ID and secret you want to use to create a client:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: {NAME_OF_SECRET}
  namespace: {CLIENT_NAMESPACE}
type: Opaque
data:
  client_id: {BASE64_ENCODED_ID}
  client_secret: {BASE64_ENCODED_PASSWORD}
```

Create a CR with the secretName property set to the name of the Kubernetes Secret you created. Run this command to trigger the creation of a client:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: hydra.ory.sh/v1alpha1
kind: OAuth2Client
metadata:
  name: {NAME_OF_CLIENT}
  namespace: {CLIENT_NAMESPACE}
spec:
  grantTypes:
    - "client_credentials"
  scope: "read write"
  secretName: {NAME_OF_SECRET}
  redirectUris: ["{URI1}", "{URI2}"]
EOF
```

NOTE: This sample OAuth2Client CR has a redirect URI defined through the optional spec.redirectUris property. See the CRD for more details.
Use Hydra-generated credentials
Run this command to create a CR that triggers the creation of a client. The OAuth2 server generates a client ID and secret pair and saves it to a Kubernetes Secret with the name specified in the secretName property.

```bash
cat <<EOF | kubectl apply -f -
apiVersion: hydra.ory.sh/v1alpha1
kind: OAuth2Client
metadata:
  name: {NAME_OF_CLIENT}
  namespace: {CLIENT_NAMESPACE}
spec:
  grantTypes:
    - "client_credentials"
  scope: "read write"
  secretName: {NAME_OF_KUBERNETES_SECRET}
EOF
```
Get the registered client credentials
Run this command to get the credentials of the registered OAuth2 client:
```bash
kubectl get secret -n {CLIENT_NAMESPACE} {NAME_OF_KUBERNETES_SECRET} -o yaml
```
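To read the decoded values directly, you can extract the keys with jsonpath and decode them. This is a sketch; the client_id and client_secret keys follow the Secret format shown above:

```bash
kubectl get secret -n {CLIENT_NAMESPACE} {NAME_OF_KUBERNETES_SECRET} -o jsonpath='{.data.client_id}' | base64 --decode
kubectl get secret -n {CLIENT_NAMESPACE} {NAME_OF_KUBERNETES_SECRET} -o jsonpath='{.data.client_secret}' | base64 --decode
```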
Update the OAuth2 client credentials
If the credentials of your OAuth2 client are compromised, follow these steps to change them to a new pair:
- Create a new Kubernetes Secret with a new client ID and client secret.
- Edit the instance of the client's corresponding `oauth2clients.hydra.ory.sh/v1alpha1` CR by replacing the value of the secretName property with the name of the newly created Secret (see the sketch below).
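As a sketch, the second step can also be performed with a single kubectl patch command (the client, Namespace, and Secret names are placeholders):

```bash
kubectl patch oauth2clients.hydra.ory.sh {NAME_OF_CLIENT} -n {CLIENT_NAMESPACE} \
  --type merge -p '{"spec":{"secretName":"{NEW_SECRET_NAME}"}}'
```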
TIP: When you complete these steps, remember to delete the Secret that stores the old client credentials.
OAuth2 server in action
To see the OAuth2 server in action, complete the tutorial which shows you how to expose a service, secure it with OAuth2 tokens, and interact with it using the registered client.
You can also interact with the OAuth2 server using its REST API. Read the official ORY documentation to learn more about the available endpoints.
TIP: If you have any questions about the ORY–Kyma integration, you can ask them on the #security Slack channel and get answers directly from Kyma developers.
Configuration guidelines
OAuth2 client data persistence
To prevent data loss, the OAuth2 server stores the registered client data in a database. By default, Kyma comes with a pre-configured in-cluster PostgreSQL database that requires no manual setup. This configuration is not, however, considered production-ready and we recommend using an external database. This section provides guidance on migrating your OAuth2 server to the persistence mode of your choice, and describes the migration mechanism itself.
The ory-hydra-credentials Secret
To establish a connection with a database, Hydra needs a set of credentials provided by the user as Helm overrides. Depending on the desired persistence mode, some of those values are also required to configure optional ORY sub-charts, such as the PostgreSQL database and the Gcloud proxy mechanism. To reduce the number of in-cluster Kubernetes Secrets and to avoid confusion, the involved components follow the single Secret policy: they all use the `ory-hydra-credentials` Secret as the only source of credentials. As an ORY-related object, the Secret resides in the `kyma-system` Namespace.
TIP: We strongly recommend backing up this Secret after every installation or upgrade procedure.
Reaping the parameters
To ensure that the OAuth2 server is configured properly, Helm runs a preliminary job prior to the ORY chart installation or upgrade. This job combines the overrides containing credentials into one Kubernetes Secret accessible to all components involved in the persistence mechanism. The job is also responsible for identifying missing overrides, if any. If a required override has not been specified, the job either falls back to an alternative source of credentials, or fails and logs the missing override key, thus interrupting the installation or upgrade procedure.
Prioritization of parameters
This list presents the priority of parameters in descending order:
1. Use the overrides provided by the user before the installation or update process.
2. Reuse the parameters stored in existing Kubernetes Secrets and accessible to the job's container through a volume mount. This option is available only for the update process.

   CAUTION: The initial implementation of OAuth2 client persistence in Kyma doesn't follow the single Secret policy, but rather distributes the credentials per component. This mechanism is backward-compatible. However, we recommend removing the deprecated Secrets after upgrading. The deprecated Secrets are `ory-hydra`, `ory-postgres`, and `ory-gcloud-sqlproxy`.

3. Generate random values or fail, depending on the nature of a given value.
The following table lists all the possible keys aggregated in the `ory-hydra-credentials` Secret, along with their fallback policies.
Secret key | Override | Fallback policy |
---|---|---|
dsn | This value is constructed automatically | User-provided database parameters, or default in-memory settings |
secretsSystem | hydra.hydra.config.secrets.system | Generate random string |
secretsCookie | hydra.hydra.config.secrets.cookie | Generate random string |
gcp-sa.json | global.ory.hydra.persistence.gcloud.saJson | Interrupt the procedure |
Helm Overrides
For required overrides and their descriptions, see this document.
Permission Controller
The Permission Controller is a Kubernetes controller which listens for new Namespaces and creates RoleBindings for the users of the specified group to the kyma-namespace-admin role within these Namespaces. The Controller uses a blacklist mechanism, which defines the Namespaces in which the users of the defined group are not assigned the kyma-namespace-admin role.
When the Controller is deployed in a cluster, it checks all existing Namespaces and assigns the roles accordingly.
By default, the controller binds users of the runtimeNamespaceAdmin group to the kyma-namespace-admin role in the Namespaces they create. Additionally, the controller creates a RoleBinding for the static `namespace.admin@kyma.cx` user to the kyma-admin role in every Namespace that is not blacklisted.
You can adjust the default settings of the Permission Controller by applying these overrides to the cluster either before installation, or at runtime:
TIP: Read the configuration document to learn more about the adjustable parameters of the Permission Controller.
- To change the default group, run:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: namespace-admin-groups-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    kyma-project.io/installation: ""
data:
  global.kymaRuntime.namespaceAdminGroup: "{CUSTOM-GROUP}"
EOF
```
- To change the blacklisted Namespaces and decide whether the `namespace.admin@kyma.cx` static user should be assigned the kyma-admin role, run:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: permission-controller-overrides
  namespace: kyma-installer
  labels:
    component: permission-controller
    installer: overrides
    kyma-project.io/installation: ""
data:
  config.namespaceBlacklist: "kyma-system, istio-system, default, knative-eventing, knative-serving, kube-node-lease, kube-public, kube-system, kyma-installer, kyma-integration, natss, compass-system, {USER-DEFINED-NAMESPACE-1}, {USER-DEFINED-NAMESPACE-2}"
  config.enableStaticUser: "{BOOLEAN-VALUE-FOR-NAMESPACE-ADMIN-STATIC-USER}"
EOF
```
Configuration
API Server Proxy chart
To configure the API Server Proxy chart, override the default values of its `values.yaml` file. This document describes parameters that you can configure.
TIP: To learn more about how to use overrides in Kyma, see the following documents:
Configurable parameters
This table lists the configurable parameters, their descriptions, and default values:
Parameter | Description | Default value |
---|---|---|
port.secure | Specifies the port that exposes API Server Proxy through the load balancer. | 9443 |
port.insecure | Specifies the port that exposes API Server Proxy through Istio Ingress. | 8444 |
hpa.minReplicas | Defines the initial number of created API Server Proxy instances. | 1 |
hpa.maxReplicas | Defines the maximum number of created API Server Proxy instances. | 3 |
hpa.metrics.resource.targetAverageUtilization | Specifies the average percentage of a given instance memory utilization. After exceeding this limit, Kubernetes creates another API Server Proxy instance. | 50 |
IAM Kubeconfig Service chart
To configure the IAM Kubeconfig Service chart, override the default values of its `values.yaml` file. This document describes parameters that you can configure.
TIP: To learn more about how to use overrides in Kyma, see the following documents:
Configurable parameters
This table lists the configurable parameters, their descriptions, and default values:
Parameter | Description | Default value |
---|---|---|
service.port | Specifies the port that exposes the IAM Kubeconfig Service. | 8000 |
Dex chart
To configure the Dex chart, override the default values of its `values.yaml` file. This document describes parameters that you can configure.
TIP: To learn more about how to use overrides in Kyma, see the following documents:
Configurable parameters
This table lists the configurable parameters, their descriptions, and default values:
Parameter | Description | Default value |
---|---|---|
dex.expiry.signingKeys | Specifies the period of time after which the public key validating the token to the Console expires. | 720h |
dex.expiry.idTokens | Specifies the period of time after which the token to the Console expires. | 8h |
Cluster Users chart
To configure the Cluster Users chart, override the default values of its `values.yaml` file. This document describes parameters that you can configure.
TIP: To learn more about how to use overrides in Kyma, see the following documents:
Configurable parameters
This table lists the configurable parameters, their descriptions, and default values:
Parameter | Description | Default value |
---|---|---|
bindings.kymaEssentials.groups | Specifies the array of groups used in ClusterRoleBinding to the kyma-essentials ClusterRole. | [] |
bindings.kymaView.groups | Specifies the array of groups used in ClusterRoleBinding to the kyma-view ClusterRole. | [] |
bindings.kymaEdit.groups | Specifies the array of groups used in ClusterRoleBinding to the kyma-edit ClusterRole. | [] |
bindings.kymaAdmin.groups | Specifies the array of groups used in ClusterRoleBinding to the kyma-admin ClusterRole. | [] |
bindings.kymaDeveloper.groups | Specifies the array of groups used in ClusterRoleBinding to the kyma-developer ClusterRole. | [] |
users.adminGroup | Specifies the name of the group used in ClusterRoleBinding to the kyma-admin ClusterRole. | "" |
OAuth2 server customization and operations
Credentials backup
The `ory-hydra-credentials` Secret stores all the crucial data required to establish a connection with your database. Nevertheless, it is regenerated every time the ORY chart is upgraded, so you may accidentally overwrite your credentials. For this reason, it is recommended to back up the Secret. Run this command to save the contents of the Secret to a file:

```bash
kubectl get secret -n kyma-system ory-hydra-credentials -o yaml > ory-hydra-credentials-$(date +%Y%m%d).yaml
```
Postgres password update
If Hydra is installed with the default settings, a Postgres-based database is provided out of the box. If no password has been specified, one is generated and set for the Hydra user. This behavior may not always be desired, so in some cases you may want to modify this password.
To set a custom password, provide the `.Values.global.postgresql.postgresqlPassword` override during installation.
To update the password for an existing installation, provide the `.Values.global.postgresql.postgresqlPassword` override and perform the update procedure. However, this only changes the environment setting for the database and does not modify the internal database data. To update the password in the database, refer to the Postgres documentation.
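A minimal sketch of such an override, assuming the installer-override labeling convention used elsewhere in this document and using a Secret because the value is sensitive (the Secret name and the component label are illustrative):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: ory-postgres-password-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: ory
    kyma-project.io/installation: ""
type: Opaque
stringData:
  global.postgresql.postgresqlPassword: "{CUSTOM_PASSWORD}"
EOF
```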
Permission Controller chart
To configure the Permission Controller chart, override the default values of its `values.yaml` file. This document describes parameters that you can configure.
TIP: To learn more about how to use overrides in Kyma, see the following documents:
Configurable parameters
The following table lists the configurable parameters of the permission-controller chart and their default values.
Parameter | Description | Default value |
---|---|---|
global.kymaRuntime.namespaceAdminGroup | Determines the user group for which a RoleBinding to the kyma-namespace-admin role is created in all Namespaces except those specified in the config.namespaceBlacklist parameter. | runtimeNamespaceAdmin |
config.namespaceBlacklist | Comma-separated list of Namespaces in which a RoleBinding to the kyma-namespace-admin role is not created for the members of the group specified in the global.kymaRuntime.namespaceAdminGroup parameter. | kyma-system, istio-system, default, knative-eventing, knative-serving, kube-node-lease, kube-public, kube-system, kyma-installer, kyma-integration, natss, compass-system |
config.enableStaticUser | Determines if a RoleBinding to the kyma-namespace-admin role for the static namespace.admin@kyma.cx user is created for every Namespace that is not blacklisted. | true |
OAuth2 server profiles
By default, every Kyma deployment is installed with the OAuth2 server using what is considered a default profile. This configuration is not considered production-ready. To use the Kyma OAuth2 server in a production environment, configure Hydra to use the production profile.
Default profile
In the case of the ORY Hydra OAuth2 server, the default profile includes:
- An in-cluster database that stores the registered client data.
- A job that reads the generated database credentials and saves them to the configuration of Hydra before the installation and update.
- Default resource quotas.
Persistence mode for the default profile
The default profile for the OAuth2 server enables the use of a preconfigured PostgreSQL database, which is installed together with the Hydra server. The database is created in the cluster as a StatefulSet and uses a PersistentVolume that is provider-specific. This means that the PersistentVolume used by the database uses the default StorageClass of the cluster's host provider. The internal PostgreSQL database is installed with every Kyma deployment and doesn't require manual configuration.
Production profile
The production profile introduces the following changes to the Hydra OAuth2 server deployment:
- The registered client data is saved in a user-managed database.
- Optionally, a Gcloud proxy service is deployed.
- The Oathkeeper authorization and authentication proxy has raised CPU limits and requests. It starts with more replicas and can scale up horizontally to higher numbers.
Oathkeeper settings for the production profile
The production profile requires the following parameters in order to operate:
Parameter | Description | Required value |
---|---|---|
oathkeeper.deployment.resources.limits.cpu | Defines limits for CPU resources. | 800m |
oathkeeper.deployment.resources.requests.cpu | Defines requests for CPU resources. | 200m |
hpa.oathkeeper.minReplicas | Defines the initial number of created Oathkeeper instances. | 3 |
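For illustration, these values can be supplied through the same ConfigMap-based override mechanism shown earlier in this document (the ConfigMap name and the component label are illustrative):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: ory-production-profile-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: ory
    kyma-project.io/installation: ""
data:
  oathkeeper.deployment.resources.limits.cpu: "800m"
  oathkeeper.deployment.resources.requests.cpu: "200m"
  hpa.oathkeeper.minReplicas: "3"
EOF
```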
Persistence modes for the production profile
The production profile for the OAuth2 server enables the use of a custom database, which can be one of the following options:
- A user-maintained database to which credentials are supplied.
- A GCP Cloud SQL instance to which credentials are supplied. In this case, an extra gcloud-proxy deployment is created allowing secured connections.
Custom database
Alternatively, you can use a compatible custom database to store the registered client data. To use a database, you must create a Kubernetes Secret with the database password as an override for the Hydra OAuth2 server. The details of the database are passed using these parameters of the production profile override:
General settings:
Parameter | Description | Required value |
---|---|---|
global.ory.hydra.persistence.postgresql.enabled | Defines whether Hydra should initiate the deployment of an in-cluster database. Set to false to use a self-provided database. If set to true , Hydra always uses an in-cluster database and ignores the custom database details. | false |
hydra.hydra.config.secrets.system | Sets the system encryption string for Hydra. | An alphanumeric string at least 16 characters long |
hydra.hydra.config.secrets.cookie | Sets the cookie session encryption string for Hydra. | An alphanumeric string at least 16 characters long |
Database settings:
Parameter | Description | Example value |
---|---|---|
global.ory.hydra.persistence.user | Specifies the name of the user with permissions to access the database. | dbuser |
global.ory.hydra.persistence.secretName | Specifies the name of the Secret in the same Namespace as Hydra that stores the database password. | my-secret |
global.ory.hydra.persistence.secretKey | Specifies the name of the key in the Secret that contains the database password. | my-db-password |
global.ory.hydra.persistence.dbUrl | Specifies the database URL. For more information, see the configuration file. | mydb.my-namespace:1234 |
global.ory.hydra.persistence.dbName | Specifies the name of the database saved in Hydra. | db |
global.ory.hydra.persistence.dbType | Specifies the type of the database. The supported protocols are postgres , mysql , cockroach . For more information, see the configuration file. | postgres |
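Putting the general and database settings together, an override for a self-provided PostgreSQL database could look like this sketch (values are taken from the example column above; the ConfigMap name is illustrative, and the sensitive hydra.hydra.config.secrets values are better supplied through a Secret-based override):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: ory-custom-db-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: ory
    kyma-project.io/installation: ""
data:
  global.ory.hydra.persistence.postgresql.enabled: "false"
  global.ory.hydra.persistence.user: "dbuser"
  global.ory.hydra.persistence.secretName: "my-secret"
  global.ory.hydra.persistence.secretKey: "my-db-password"
  global.ory.hydra.persistence.dbUrl: "mydb.my-namespace:1234"
  global.ory.hydra.persistence.dbName: "db"
  global.ory.hydra.persistence.dbType: "postgres"
EOF
```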
Google Cloud SQL
Cloud SQL is a provider-supplied and maintained database, which requires a special proxy deployment to provide a secured connection. Kyma comes with a pre-installed proxy deployment, which requires the following parameters to operate:
General settings:
Parameter | Description | Required value |
---|---|---|
global.ory.hydra.persistence.postgresql.enabled | Defines whether Hydra should initiate the deployment of an in-cluster database. Set to false to use a self-provided database. If set to true , Hydra always uses an in-cluster database and ignores the custom database details. | false |
global.ory.hydra.persistence.gcloud.enabled | Defines whether Hydra should initiate the deployment of Google SQL proxy. | true |
hydra.hydra.config.secrets.system | Sets the system encryption string for Hydra. | An alphanumeric string at least 16 characters long |
hydra.hydra.config.secrets.cookie | Sets the cookie session encryption string for Hydra. | An alphanumeric string at least 16 characters long |
Database settings:
Parameter | Description | Example value |
---|---|---|
data.global.ory.hydra.persistence.user | Specifies the name of the user with permissions to access the database. | dbuser |
data.global.ory.hydra.persistence.secretName | Specifies the name of the Secret in the same Namespace as Hydra that stores the database password. | my-secret |
data.global.ory.hydra.persistence.secretKey | Specifies the name of the key in the Secret that contains the database password. | my-db-password |
data.global.ory.hydra.persistence.dbUrl | Specifies the database URL. For more information, see the configuration file. | Required: ory-gcloud-sqlproxy.kyma-system |
data.global.ory.hydra.persistence.dbName | Specifies the name of the database saved in Hydra. | db |
data.global.ory.hydra.persistence.dbType | Specifies the type of the database. The supported protocols are postgres , mysql , cockroach . For more information, see the configuration file. | postgres |
Proxy settings:
Parameter | Description | Example value |
---|---|---|
gcloud-sqlproxy.cloudsql.instance.instanceName | Specifies the name of the database instance in GCP. This value is the last part of the string returned by the Cloud SQL Console for Instance connection name - the one after the final : . For example, if the value for Instance connection name is my_project:my_region:mydbinstance , use only mydbinstance . | mydbinstance |
gcloud-sqlproxy.cloudsql.instance.project | Specifies the name of the GCP project used. | my-gcp-project |
gcloud-sqlproxy.cloudsql.instance.region | Specifies the name of the GCP region used. Note, that it does not equal the GCP zone. | europe-west4 |
gcloud-sqlproxy.cloudsql.instance.port | Specifies the port used by the database to handle connections. Database dependent. | postgres: 5432 mysql: 3306 |
gcloud-sqlproxy.existingSecret | Specifies the name of the Secret in the same Namespace as the proxy, that stores the database password. | my-secret |
gcloud-sqlproxy.existingSecretKey | Specifies the name of the key in the Secret that contains the GCP ServiceAccount json key. | sa.json |
NOTE: When using any kind of custom database (Cloud SQL or self-maintained), it is important to provide the hydra.hydra.config.secrets variables. Otherwise, a random secret is generated. This secret needs to be common to all Hydra instances using the same instance of the chosen database.
Use the production profile
Follow these steps to migrate your OAuth2 server to the production profile:
- Apply an override that forces the Hydra OAuth2 server to use the database of your choice. Follow these links to find an example of override data for each persistence mode:
- Run the cluster update process.
TIP: To learn more about how to use overrides in Kyma, see the following documents:
TIP: All the client data registered by Hydra Maester is migrated to the new database as a part of the update process. During this process, the clients are not available, which may result in errors when issuing tokens. If you notice missing or inconsistent data, delete the Hydra Maester Pod to force reconciliation. For more information, read about the Hydra Maester controller and OAuth2 client registration in Kyma.
Custom Resource
Group
The `groups.authentication.kyma-project.io` CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format that represents user groups available in the ID provider in the Kyma cluster. To get the up-to-date CRD and show the output in the `yaml` format, run this command:

```bash
kubectl get crd groups.authentication.kyma-project.io -o yaml
```
Sample custom resource
This is a sample CR that represents a user group available in the ID provider in the Kyma cluster.

```yaml
apiVersion: authentication.kyma-project.io/v1alpha1
kind: Group
metadata:
  name: "sample-group"
spec:
  name: "admins"
  idpName: "github"
  description: "'admins' represents the group of users with administrative privileges in the organization."
```
This table describes the elements of the sample CR and the information they contain:
Field | Required | Description |
---|---|---|
metadata.name | Yes | Specifies the name of the CR. |
spec.name | Yes | Specifies the name of the group. |
spec.idpName | Yes | Specifies the name of the ID provider in which the group exists. |
spec.description | No | Description of the group available in the ID provider. |
Tutorials
Update TLS certificate
The TLS certificate is a vital security element. Follow this tutorial to update the TLS certificate in Kyma.
NOTE: This procedure can interrupt the communication between your cluster and the outside world for a limited period of time.
Prerequisites
- New TLS certificate and key for custom domain deployments, base64-encoded
- `kubeconfig` file generated for the Kubernetes cluster that hosts the Kyma instance
Steps
- Custom domain certificate
- Self-signed certificate
Trigger the update process. Run:

```bash
kubectl -n default label installation/kyma-installation action=install
```

To watch the progress of the update, run:

```bash
while true; do \
  kubectl -n default get installation/kyma-installation -o jsonpath="{'Status: '}{.status.state}{', description: '}{.status.description}"; echo; \
  sleep 5; \
done
```

The process is complete when you see the `Kyma installed` message.

Restart the Console Backend Service to propagate the new certificate. Run:

```bash
kubectl delete pod -n kyma-system -l app=console-backend
```

Add the newly generated certificate to the trusted certificates of your OS. For MacOS, run:

```bash
tmpfile=$(mktemp /tmp/temp-cert.XXXXXX) \
&& kubectl get configmap net-global-overrides -n kyma-installer -o jsonpath='{.data.global\.ingress\.tlsCrt}' | base64 --decode > $tmpfile \
&& sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain $tmpfile \
&& rm $tmpfile
```
Manage static users in Dex
Create a new static user
To create a static user in Dex, create a Secret with the dex-user-config label set to `true`. Run:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: {SECRET_NAME}
  namespace: {SECRET_NAMESPACE}
  labels:
    "dex-user-config": "true"
data:
  email: {BASE64_USER_EMAIL}
  username: {BASE64_USERNAME}
  password: {BASE64_USER_PASSWORD}
type: Opaque
EOF
```
NOTE: If you don't specify the Namespace in which you want to create the Secret, the system creates it in the `default` Namespace.
The following table describes the fields that are mandatory to create a static user. If you don't include any of these fields, the user is not created.
Field | Description |
---|---|
data.email | Base64-encoded email address used to sign in to the Console UI. Must be unique. |
data.username | Base64-encoded username displayed in the Console UI. |
data.password | Base64-encoded user password. There are no specific requirements regarding password strength, but it is recommended to use a password that is at least 8 characters long. |
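For example, to produce the Base64-encoded values for the Secret (sample inputs; the -n flag prevents a trailing newline from being encoded):

```bash
echo -n "user@example.com" | base64
echo -n "exampleuser" | base64
echo -n "s3cr3t-p4ssw0rd" | base64
```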
Create the Secrets in the cluster before Dex is installed. The Dex init-container with the tool that configures Dex generates user configuration data based on properly labeled Secrets, and adds the data to the ConfigMap.
If you want to add a new static user after Dex is installed, restart the Dex Pod. This creates a new Pod with an updated ConfigMap.
Bind a user to a Role or a ClusterRole
A newly created static user has no access to any resources of the cluster as there is no Role or ClusterRole bound to it.
By default, Kyma comes with the following ClusterRoles:
- kyma-admin: gives full admin access to the entire cluster
- kyma-namespace-admin: gives full admin access except for the write access to AddonsConfigurations
- kyma-edit: gives full access to all Kyma-managed resources
- kyma-developer: gives full access to Kyma-managed resources and basic Kubernetes resources
- kyma-view: allows viewing and listing all of the resources of the cluster
- kyma-essentials: gives a set of minimal view access rights to use the Kyma Console
To bind a newly created user to the kyma-view ClusterRole, run this command:
```bash
kubectl create clusterrolebinding {BINDING_NAME} --clusterrole=kyma-view --user={USER_EMAIL}
```
To check if the binding is created, run:
```bash
kubectl get clusterrolebinding {BINDING_NAME}
```
Add an Identity Provider to Dex
Add external identity providers to Kyma using Dex connectors. You can add connectors to Dex by creating component overrides. This tutorial shows how to add a GitHub or XSUAA connector and use it to authenticate users in Kyma.
NOTE: Groups in GitHub are represented as teams. Read more to learn how to manage teams in GitHub.
Prerequisites
- GitHub
- XSUAA
Configure Dex
Register the connector by creating a Helm override for Dex. Create the override ConfigMap in the Kubernetes cluster before Dex is installed. If you want to register a connector at runtime, trigger the update process after creating the override.
TIP: You can use Go Template expressions in the override value. These expressions are resolved by Helm using the same set of overrides as configured for the entire chart.
- GitHub
- XSUAA
TIP: The dex.useStaticConnector parameter set to `false` overrides the default configuration and disables the Dex static user store. As a result, you can log in to the cluster using only the registered connectors. If you want to keep the Dex static user store enabled, remove the dex.useStaticConnector parameter from the ConfigMap template.
Configure authorization rules for the GitHub connector
To bind GitHub groups to the default Kyma roles, edit the bindings section in the `values.yaml` file. Follow this template:

```yaml
bindings:
  kymaAdmin:
    groups:
    - "{GITHUB_ORGANIZATION}:{GITHUB_TEAM_A}"
  kymaView:
    groups:
    - "{GITHUB_ORGANIZATION}:{GITHUB_TEAM_B}"
```
TIP: You can bind GitHub teams to any of the five predefined Kyma roles. Use these aliases: `kymaAdmin`, `kymaView`, `kymaDeveloper`, `kymaEdit`, or `kymaEssentials`. For more information, read about predefined roles in Kyma.
This table explains the placeholders used in the template:
Placeholder | Description |
---|---|
GITHUB_ORGANIZATION | Specifies the name of the GitHub organization. |
GITHUB_TEAM_A | Specifies the name of the GitHub team to bind to the kyma-admin role. |
GITHUB_TEAM_B | Specifies the name of the GitHub team to bind to the kyma-view role. |
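For example, assuming a GitHub organization named my-org with teams admins and viewers (illustrative names), the bindings section could look as follows:

```yaml
bindings:
  kymaAdmin:
    groups:
    - "my-org:admins"
  kymaView:
    groups:
    - "my-org:viewers"
```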
Invalidate Dex signing keys
By default, Dex in Kyma stores private and public keys used to sign and validate JWT tokens on a cluster using custom resources. If, for some reason, the private keys leak, you must invalidate the private-public key pair to prevent the attacker from issuing tokens and validating the existing ones. This is critical, because otherwise the attackers can use the private key to issue new JWT tokens to call third-party services which have Dex JWT authentication enabled. Follow this tutorial to learn how to invalidate the signing keys.
Prerequisites
To complete this tutorial, you must have either the cluster-admin `kubeconfig` file issued from a cloud provider, or the Kyma `kubeconfig` file and the kyma-admin role assigned.
Steps
Perform these steps to invalidate the keys:
- Delete all signing keys on a cluster:

```bash
kubectl delete signingkeies.dex.coreos.com -n kyma-system --all
```
NOTE: Although it is not recommended to interact with any of the Dex CRs, in this situation it is the only way to invalidate the keys.
- Restart the Dex Pod:

```bash
kubectl delete po -n kyma-system -l app=dex
```
Dex will create a new CR with a private-public key pair.
- Restart the `kyma-system` Pods that validate tokens issued from Dex to drop the existing public keys:

```bash
kubectl delete po -n kyma-system -l 'app in (apiserver-proxy,iam-kubeconfig-service,console-backend-service,kiali-kcproxy,log-ui)'; kubectl delete po -n kyma-system -l 'app.kubernetes.io/name in (oathkeeper,tracing)'
```
- Manually restart all your applications that validate Dex JWT tokens internally to get the new public keys.
NOTE: Following the tutorial steps results in the downtime of Dex, a couple of Kyma components, and potentially your applications. If you want to use the `kubectl scale` command to scale replicas, bear in mind that this downtime is intentional. It prevents Dex from issuing new tokens signed by a compromised private key, forces at least the Kyma applications to fetch new public keys, and at the same time makes them reject all existing tokens signed by the compromised private key during JWT token validation.
Troubleshooting
Overview
The Troubleshooting section aims to identify the most common and recurring issues within the Kyma security layer, as well as provide the most suitable solutions to these issues.
If you can't find a solution that suits your case, don't hesitate to create a GitHub issue or use the #security Slack channel to get direct support from the community.
"403 Forbidden" in the Console
If you log in to the Console and get the `403 Forbidden` error, do the following:
- Fetch the ID token. For example, use the Chrome Developer Tools and search for the token in sent requests.
- Decode the ID token. For example, use the jwt.io page.
- Check if the token contains groups claims:

  ```json
  "groups": [
    "example-group"
  ]
  ```

- Make sure the group you are assigned to has permissions to view the resources you requested.
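As a shell alternative to jwt.io, this sketch decodes the payload segment of a token stored in the ID_TOKEN variable (assumes jq is installed; JWT segments are base64url-encoded, hence the character translation and padding):

```bash
payload=$(echo "$ID_TOKEN" | cut -d '.' -f2 | tr '_-' '/+')
# Pad to a multiple of 4 characters so base64 can decode it
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
echo "$payload" | base64 --decode | jq .
```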
Issues with certificates on Gardener
During installation on Gardener, Kyma requests domain SSL certificates using Gardener's `Certificate` custom resource to ensure secure communication through both the Kyma UI and the Kubernetes CLI.
This process can result in the following issues:
- The `xip-patch` or `apiserver-proxy` installation takes too long.
- The `Certificate is still not ready, status is {STATUS}. Exiting...` error occurs.
- Certificates are no longer valid.
If any of these issues appears, follow these steps:
Check the status of the Certificate resource:

```bash
kubectl get certificates.cert.gardener.cloud --all-namespaces
```

If the status of any Certificate is `Error`, run:

```bash
kubectl get certificates -n {CERTIFICATE_NAMESPACE} {CERTIFICATE_NAME} -o jsonpath='{ .status.message }'
```
The result describes the reason for the failure of issuing a domain SSL certificate. Depending on the moment when the error occurred, you can perform different actions.
- Error during the installation
- Error after the installation