Kyma

Overview

Kyma is the easiest and fastest way to connect and extend products in a cloud-native way. Kyma is designed as a centerpiece that brings together different external products and increases their agility and customizability.

Kyma allows you to extend and customize the functionality of your products in a quick and modern way, using serverless computing and microservice architecture. The extensions and customizations you create are decoupled from the core applications, which means that deployments are quick, scaling is independent from the core applications, and the changes you make can be easily reverted without causing downtime of the production system.

Living outside of the core product, Kyma allows you to be completely language-agnostic and customize your solution using the technology stack you want to use, not the one the core product dictates. Additionally, Kyma follows the "batteries included" principle and comes with all of the "plumbing code" ready to use, allowing you to focus entirely on writing the domain code and business logic.

Out of the box, Kyma comes with:

  • Security (Service Identity, TLS, Role Based Access Control)
  • Resilience
  • Telemetry and reporting
  • Traffic routing
  • Fault injection

When it comes to technology stacks, Kyma is all about the latest, most modern, and most powerful technologies available. The entire solution is containerized and runs on a Kubernetes cluster hosted in the Microsoft Azure cloud environment. Customers can easily access the cluster using a single sign-on solution based on the Dex identity provider, integrated with any OpenID Connect-compliant identity provider or a SAML2-based enterprise authentication server.

The communication between services is handled by the Istio service mesh component, which enables security, monitoring, and tracing without the need to change the application code. Build your applications using services provisioned by one of the many Service Brokers compatible with the Open Service Broker API, and monitor the speed and efficiency of your solutions using Prometheus, which gives you accurate and up-to-date monitoring and telemetry data.

Using Minikube, you can run Kyma locally, develop, and test your solutions on a small scale before you push them to a cluster. Follow the Local Kyma installation and Cluster Kyma installation Getting Started guides to start exploring in a matter of minutes.

Details

Components

Kyma is built on the foundation of the best and most advanced open-source projects which make up the components readily available for customers to use. This section describes the Kyma components.

Service Catalog

The Service Catalog lists all of the services available to Kyma users through the registered Service Brokers. Using the Service Catalog, you can provision new services in the Kyma Kubernetes cluster and create bindings between the provisioned service and an application.
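
For illustration only, provisioning a catalog service and binding it to an application boils down to creating a ServiceInstance and a ServiceBinding resource. The class and plan names below are placeholders; list the real ones first. A minimal sketch:

# List the classes exposed by the registered Service Brokers.
kubectl get clusterserviceclasses
# Provision an instance and bind it (example names, adjust to your catalog).
cat <<EOF | kubectl apply -f -
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-instance
  namespace: stage
spec:
  clusterServiceClassExternalName: example-class
  clusterServicePlanExternalName: default
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-binding
  namespace: stage
spec:
  instanceRef:
    name: my-instance
EOF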

Service Brokers

Service Brokers are Open Service Broker API-compatible servers that manage the lifecycle of one or more services. Each Service Broker registered in Kyma presents the services it offers to the Service Catalog. You can provision these services on a cluster level through the Service Catalog. Out of the box, Kyma comes with three Service Brokers. You can register more Open Service Broker API-compatible Service Brokers in Kyma and provision the services they offer using the Service Catalog.
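
To see which brokers are currently registered in the cluster, you can query the Service Catalog resource directly, for example:

kubectl get clusterservicebrokers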

Application Connector

The Application Connector is a proprietary Kyma solution. It is the endpoint on the Kyma side of the connection between Kyma and external solutions. The Application Connector allows you to register the APIs and the Event Catalog of the connected solution; the Event Catalog lists all of the available events. Additionally, the Application Connector proxies the calls from Kyma to external APIs in a secure way.

Event Bus

Kyma Event Bus receives Events from external solutions and triggers the business logic created with lambda functions and services in Kyma. The Event Bus is based on the NATS Streaming open source messaging system for cloud-native applications.

Service Mesh

The Service Mesh is an infrastructure layer that handles service-to-service communication, proxying, service discovery, traceability, and security independent of the code of the services. Kyma uses the Istio Service Mesh that enforces RBAC (Role Based Access Control) in the cluster. Dex handles the identity management and identity provider integration. It allows you to integrate any OpenID Connect-compliant identity provider with Kyma.

Serverless

The Kyma Serverless component allows you to reduce the implementation and operation effort of an application to the absolute minimum. Kyma Serverless provides a platform to run lightweight functions in a cost-efficient and scalable way using JavaScript and Node.js. Kyma Serverless is built on the Kubeless framework, which allows you to deploy lambda functions, and uses the NATS messaging system that monitors business events and triggers functions accordingly.
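
As an illustration of the underlying Kubeless workflow (not necessarily the recommended Kyma flow), deploying a simple Node.js function with the kubeless CLI could look like this sketch; the function name, file, and runtime identifier are assumptions:

# handler.js is a hypothetical file exporting a `hello` function.
kubeless function deploy hello \
  --runtime nodejs8 \
  --from-file handler.js \
  --handler handler.hello \
  --namespace stage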

Monitoring

Kyma comes bundled with tools that give you accurate and up-to-date monitoring data. The Prometheus open-source monitoring and alerting toolkit provides this data, which is consumed by different add-ons, including Grafana for analytics and monitoring, and Alertmanager for handling alerts.

Tracing

The tracing in Kyma uses the Jaeger distributed tracing system. Use it to analyze performance by scrutinizing the path of the requests sent to and from your service. This information helps you optimize the latency and performance of your solution.

Logging

Logging in Kyma uses Logspout and OK Log. Use plaintext queries or regular expressions to fetch logs from Pods through the OK Log UI.

Environments

An Environment is a custom Kyma security and organizational unit based on the concept of Kubernetes Namespaces. Kyma Environments allow you to divide the cluster into smaller units to use for different purposes, such as development and testing.

A Kyma Environment is a user-created Namespace marked with the env: "true" label. The Kyma UI only displays the Namespaces marked with the env: "true" label.

Default Kyma Namespaces

Kyma comes configured with default Namespaces dedicated for system-related purposes. The user cannot modify or remove any of these Namespaces.

  • kyma-system - This Namespace contains all of the Kyma Core components.
  • kyma-integration - This Namespace contains all of the Application Connector components responsible for the integration of Kyma and external solutions.
  • kyma-installer - This Namespace contains all of the Kyma installer components, objects, and Secrets.
  • istio-system - This Namespace contains all of the Istio-related components.

Environments in Kyma

Kyma comes with three Environments ready for you to use. These Environments are:

  • production
  • qa
  • stage

Create a new Environment

To create a new Environment, create a Namespace and mark it with the env: "true" label. Use this command to do that in a single step:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: my-environment
  labels:
    env: "true"
EOF

Initially, the system deploys two template roles: kyma-reader-role and kyma-admin-role. The controller finds the template roles by filtering the roles available in the kyma-system Namespace by the env: "true" label. The controller then copies these roles into the Environment.
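
To check that the controller copied the template roles into a newly created Environment, list the Roles in that Namespace, for example:

kubectl get roles -n my-environment
# Expect kyma-reader-role and kyma-admin-role in the output.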

Reinstall Kyma

The custom scripts allow you to remove Kyma from a Minikube cluster and reinstall Kyma without removing the cluster.

NOTE: These scripts do not delete the cluster from your Minikube. This allows you to quickly reinstall Kyma.

  1. Use the clean-up.sh script to uninstall Kyma from the cluster. Run:

    scripts/clean-up.sh
  2. Run this script to reinstall Kyma on an existing cluster:

    cmd/run.sh --skip-minikube-start

Testing Kyma

For testing, the Kyma components use the Helm test concept. Place your test under the templates directory as a Pod definition that specifies a container with a given command to run.

Add a new test

The system bases tests on the Helm test concept with one modification: adding a Pod label. Before you create a test, see the official Chart Tests documentation. Then, add the "helm-chart-test": "true" label to your Pod template.

See the following example of a test prepared for Dex:

# Chart tree
dex
├── Chart.yaml
├── README.md
├── templates
│   ├── tests
│   │ └── test-dex-connection.yaml
│   ├── dex-deployment.yaml
│   ├── dex-ingress.yaml
│   ├── dex-rbac-role.yaml
│   ├── dex-service.yaml
│   ├── pre-install-dex-account.yaml
│   ├── pre-install-dex-config-map.yaml
│   └── pre-install-dex-secrets.yaml
└── values.yaml

The test adds a new test-dex-connection.yaml under the templates/tests directory. This simple test calls the Dex endpoint with cURL, defined as follows:

apiVersion: v1
kind: Pod
metadata:
  name: "test-{{ template "fullname" . }}-connection-dex"
  annotations:
    "helm.sh/hook": test-success
  labels:
    "helm-chart-test": "true" # ! Our customization
spec:
  hostNetwork: true
  containers:
  - name: "test-{{ template "fullname" . }}-connection-dex"
    image: tutum/curl:alpine
    command: ["/usr/bin/curl"]
    args: [
      "--fail",
      "http://dex-service.{{ .Release.Namespace }}.svc.cluster.local:5556/.well-known/openid-configuration"
    ]
  restartPolicy: Never

Test execution

All tests created for charts under /resources/core/ run automatically after starting Kyma. If any of the tests fail, the system prints the Pod logs in the terminal, then deletes all the Pods.

NOTE: If you run Kyma locally, by default, the system does not take the test's exit code into account. As a result, the system does not terminate the Kyma Docker container, and you can still access it. To force a termination in case of failing tests, use the --exit-on-test-fail flag when executing the run.sh script.

CI propagates the exit status of tests. If any test fails, the whole CI job fails as well.

Follow the same guidelines to add a test which is not a part of any core component. However, for test execution, see the Run a test manually section in this document.

Run a test manually

To run a test manually, use the testing.sh script located in the /installation/scripts/ directory which runs all tests defined for core releases. If any of the tests fail, the system prints the Pod logs in the terminal, then deletes all the Pods.

Another option is to run a Helm test directly on your release.

helm test {your_release_name}

You can also run your test on custom releases. If you do this, remember to always delete the Pods after a test ends.
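
A minimal sketch of such a manual run, assuming a release named my-release deployed to the kyma-system Namespace (adjust both names to your setup); the cleanup relies on the helm-chart-test label described above:

helm test my-release
# Remove the test Pods after the run.
kubectl delete pods -n kyma-system -l helm-chart-test=true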

Charts

Kyma uses Helm charts to deliver single components and extensions, as well as the core components. This document contains information about the chart-related technical concepts, dependency management to use with Helm charts, and chart examples.

Manage dependencies with Init Containers

The ADR 003: Init Containers for dependency management document declares the use of Init Containers as the primary dependency mechanism.

Init Containers present a set of distinctive behaviors:

  • They always run to completion.
  • They start sequentially, only after the preceding Init Container completes successfully. If any of the Init Containers fails, the Pod restarts. This is always true unless the restartPolicy is set to Never.

Readiness Probes ensure that the essential containers are ready to handle requests before you expose them. At a minimum, probes are defined for every container accessible from outside of the Pod. It is recommended to pair the Init Containers with readiness probes to provide a basic dependency management solution.

Examples

Here are some examples:

  1. Generic
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 1

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'until nslookup nginx; do echo waiting for nginx; sleep 2; done;']
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  2. Kyma
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helm-broker
  labels:
    app: helm-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm-broker
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: helm-broker
    spec:
      initContainers:
      - name: init-helm-broker
        image: eu.gcr.io/kyma-project/alpine-net:0.2.74
        command: ['sh', '-c', 'until nc -zv core-catalog-controller-manager.kyma-system.svc.cluster.local 8080; do echo waiting for etcd service; sleep 2; done;']
      containers:
      - name: helm-broker
        ports:
        - containerPort: 6699
        readinessProbe:
          tcpSocket:
            port: 6699
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 3
          successThreshold: 1
          timeoutSeconds: 2

Support for the Helm wait flag

High-level Kyma components, such as core, come as Helm charts. These charts are installed as part of a single Helm release. To provide ordering for these core components, the Helm client runs with the --wait flag. As a result, Tiller waits for the readiness of all of the components before it finishes the release.

For Deployments, set the strategy to RollingUpdate and set the MaxUnavailable value to a number lower than the number of replicas. This setting is necessary, as readiness in Helm v2.8.2 is fulfilled if the number of replicas in ready state is not lower than the expected number of replicas:

ReadyReplicas >= TotalReplicas - MaxUnavailable

Chart installation details

The Tiller server performs the chart installation process. This is the order of operations that happen during the chart installation:

  • resolve values
  • recursively gather all templates with the corresponding values
  • sort all templates
  • render all templates
  • separate hooks and manifests from files into sorted lists
  • aggregate all valid manifests from all sub-charts into a single manifest file
  • execute PreInstall hooks
  • create a release using the ReleaseModule API and, if requested, wait for the actual readiness of the resources
  • execute PostInstall hooks

Notes

All notes are based on the Helm v2.7.2 implementation and are subject to change in future releases.

  • Regardless of how complex a chart is, and regardless of the number of sub-charts it references or consists of, it's always evaluated as one. This means that each Helm release is compiled into a single Kubernetes manifest file when applied on API server.

  • Hooks are parsed in the same order as manifest files and returned as a single, global list for the entire chart. For each hook the weight is calculated as a part of this sort.

  • Manifests are sorted by Kind. You can find the list and the order of the resources on the Kubernetes Tiller website.

Glossary

  • resource is any document in a chart recognized by Helm or Tiller. This includes manifests, hooks, and notes.
  • template is a valid Go template. Many of the resources are also Go templates.

Deploy with a private Docker registry

Docker is a tool for packaging and deploying applications as containers. To run an application on Kyma, provide the application binary file as a Docker image located in a Docker registry. Use the Docker Hub public registry to upload your Docker images for free public access. Use a private Docker registry to ensure privacy, increased security, and better availability.

This document shows how to deploy a Docker image from your private Docker registry to the Kyma cluster.

Details

The deployment to Kyma from a private registry differs from the deployment from a public registry. You must provide Secrets accessible in Kyma and reference them in the .yaml deployment file. This section describes how to deploy an image from a private Docker registry to Kyma. Follow the deployment steps:

  1. Create a Secret resource.
  2. Write your deployment file.
  3. Submit the file to the Kyma cluster.

Create a Secret for your private registry

A Secret resource passes your Docker registry credentials to the Kyma cluster in an encrypted form. For more information on Secrets, refer to the Kubernetes documentation.

To create a Secret resource for your Docker registry, run the following command:

kubectl create secret docker-registry {secret-name} --docker-server={registry FQN} --docker-username={user-name} --docker-password={password} --docker-email={registry-email} --namespace={namespace}

Refer to the following example:

kubectl create secret docker-registry docker-registry-secret --docker-server=myregistry:5000 --docker-username=root --docker-password=password --docker-email=example@github.com --namespace=production

The Secret is associated with a specific Namespace. In the example, the Namespace is production. However, you can modify the Secret to point to any desired Namespace.

Write your deployment file

  1. Create the deployment file with the .yml extension and name it deployment.yml.

  2. Describe your deployment in the .yml file. Refer to the following example:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  namespace: production # {production/stage/qa}
  name: my-deployment # Specify the deployment name.
  annotations:
    sidecar.istio.io/inject: "true"
spec:
  replicas: 3 # Specify the number of replicas - how many instances of this Deployment you want.
  selector:
    matchLabels:
      app: app-name # Specify the app label. It is optional but it is a good practice.
  template:
    metadata:
      labels:
        app: app-name # Specify the app label. It is optional but it is a good practice.
        version: v1 # Specify your version.
    spec:
      containers:
      - name: container-name # Specify a meaningful container name.
        image: myregistry:5000/user-name/image-name:latest # Specify your image {registry FQN/your-username/your-space/image-name:image-version}.
        ports:
        - containerPort: 80 # Specify the port that your image exposes.
      imagePullSecrets:
      - name: docker-registry-secret # Specify the same Secret name you created in the previous step for this Namespace.
      - name: example-secret-name # Specify your Namespace Secret, named `example-secret-name`.
  3. Submit your deployment file using this command:

kubectl apply -f deployment.yml

Your deployment is now running on the Kyma cluster.
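
To verify that the Pods use the image pulled from your private registry, you can list them by the label from the example (names are illustrative):

kubectl get pods -n production -l app=app-name
kubectl describe pods -n production -l app=app-name | grep -i image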

Install subcomponents

It is up to you to decide which subcomponents you install as part of the core release. By default, most of the core subcomponents are enabled. If you want to install only specific subcomponents, follow the steps that you need to perform before the local and cluster installation.

Install subcomponents locally

To specify whether to install a given core subcomponent on Minikube, use the manage-component.sh script before you trigger the Kyma installation. The script consumes two parameters:

  • the name of the core subcomponent
  • a Boolean value that determines whether to install the subcomponent (true) or not (false)

Example:

To enable the Azure Broker subcomponent, run the following command:

scripts/manage-component.sh azure-broker true

Alternatively, to disable the Azure Broker subcomponent, run this command:

scripts/manage-component.sh azure-broker false

Install subcomponents on a cluster

Install subcomponents on a cluster based on Helm conditions described in the requirements.yaml file. Read more about the fields in the requirements.yaml file here.

To specify whether to install a given core subcomponent, provide override values before you trigger the installation.

Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kyma-sub-components
  namespace: kyma-installer
  labels:
    installer: overrides
data:
  azure-broker.enabled: "true"
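
Assuming you save this override in a file named sub-components-overrides.yaml (a hypothetical name), apply it to the cluster before you trigger the installation:

kubectl apply -f sub-components-overrides.yaml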

NOTE: Some subcomponents can require additional configuration to work properly.

Specify subcomponents versions

Versions of the Kyma components are specified in the values.yaml file in charts. Two properties, version and dir, describe each component version. The first one defines the actual Docker image tag. The second property describes the directory under which the tagged image is pushed. It is optional and its value ends with a forward slash (/).

Possible values of the dir property:

  • pr/ contains images built from the pull request
  • develop/ contains images built from the master branch
  • rc/ contains images built for a pre-release
  • (empty) contains images built for a release

To override subcomponents versions during Kyma startup, create the versions-overrides.env file in the installation directory.

The example overrides the Environments component and sets the image version to 0.0.1, based on the version from the develop directory.

Example:

global.environments.dir=develop/
global.environments.version=0.0.1

Getting Started

Local Kyma installation

This Getting Started guide shows developers how to quickly deploy Kyma locally on Mac, Linux, or Windows. Kyma installs locally using a proprietary installer based on a Kubernetes operator. The document provides prerequisites, instructions on how to install Kyma locally and verify the deployment, as well as troubleshooting tips.

Prerequisites

To run Kyma locally, clone this Git repository to your machine and check out the latest tag. After you clone the repository, run this command:

git checkout latest

Additionally, download these tools:

Virtualization:

NOTE: To work with Kyma, use only the provided installation and deinstallation scripts. Kyma does not work on a basic Minikube cluster that you can start using the minikube start command or stop with the minikube stop command. If you don't need Kyma on Minikube anymore, remove the cluster with the minikube delete command.

Set up certificates

Kyma comes with a local wildcard self-signed server.crt certificate that you can find under the /installation/certs/workspace/raw/ directory of the kyma repository. Trust it on the OS level for convenience.

Follow these steps to "always trust" the Kyma certificate on Mac:

  1. Change the working directory to installation:
    cd installation
  2. Run this command:
    sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain certs/workspace/raw/server.crt

NOTE: "Always trusting" the certificate does not work with Mozilla Firefox.

Install Kyma on Minikube

You can install Kyma with all core subcomponents or only with the selected ones. This section describes how to install all core subcomponents. To learn how to install only the specific ones, see the Install subcomponents document for details.

NOTE: Running the installation script deletes any previously existing cluster from your Minikube.

  1. Change the working directory to installation:

    cd installation
  2. Depending on your operating system, run run.sh for Mac and Linux or run.ps1 for Windows:

    cmd/run.sh

The run.sh script does not show the progress of the Kyma installation, which allows you to perform other tasks in the terminal window. However, to see the status of the Kyma installation, run this script after you set up the cluster and the installer:

scripts/is-installed.sh

Read the Reinstall Kyma document to learn how to reinstall Kyma without deleting the cluster from Minikube. To learn how to test Kyma, see the Testing Kyma document.

Verify the deployment

Follow the guidelines in the subsections to confirm that your Kubernetes API Server is up and running as expected.

Access Kyma with CLI

Verify the cluster deployment with the kubectl command line interface (CLI).

Run this command to fetch all Pods in all Namespaces:

kubectl get pods --all-namespaces

The command retrieves all Pods from all Namespaces, the status of the Pods, and their instance numbers. Check if the STATUS column shows Running for all Pods. If any of the Pods that you require do not start successfully, perform the installation again.

Access the Kyma console

Access your local Kyma instance through this link.

  • Click Login with Email and sign in with the admin@kyma.cx email address and the generic password from the dex-config-map.yaml file in the /resources/dex/templates/ directory.

  • Click the Environments section and select an Environment from the drop-down menu to explore Kyma further.

Access the Kubernetes Dashboard

Additionally, confirm that you can access your Kubernetes Dashboard. Run the following command to check the IP address on which Minikube is running:

minikube ip

The address of your Kubernetes Dashboard looks similar to this:

http://{ip-address}:30000

See the example of the website address:

http://192.168.64.44:30000

Troubleshooting

If the installer does not respond as expected, check the installation status using the is-installed.sh script with the --verbose flag added. Run:

scripts/is-installed.sh --verbose

Cluster Kyma installation

This Getting Started guide shows developers how to quickly deploy Kyma on a cluster. Kyma installs on a cluster using a proprietary installer based on a Kubernetes operator. The document provides prerequisites, instructions on how to install Kyma on a cluster and verify the deployment, as well as troubleshooting tips.

Prerequisites

The cluster on which you install Kyma must run Kubernetes version 1.10 or higher.

Prepare these items:

  • A domain name such as kyma.example.com.
  • A wildcard TLS certificate for your cluster domain. Generate it with Let's Encrypt, for example with the command shown after these prerequisites.
  • The certificate for Remote Environments.
  • A static IP address for the Kyma Istio Ingress (public external IP). Create a DNS record *.kyma.example.com that points to the Kyma Istio Ingress IP address.
  • A static IP address for the Remote Environments Ingress. Create a DNS record gateway.kyma.example.com that points to the Remote Environments Ingress IP address.

Some providers do not allow you to pre-allocate IP addresses. For example, AWS does not support static IP assignment during ELB creation. For such providers, you must complete the configuration after you install Kyma. See the DNS configuration section for more details.

NOTE: See the Application Connector documentation for more details on Remote Environments.
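stayed
As a sketch of how you could generate the wildcard certificate mentioned in the prerequisites, assuming you use the certbot client and can create the required DNS TXT records manually:

certbot certonly --manual --preferred-challenges dns \
  -d "*.kyma.example.com" -d "kyma.example.com"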

Configure the Kubernetes API Server following this template:

NOTE: Apply this configuration only when you set up your own cluster. This configuration does not work with managed clusters.

"apiServerConfig": {
  "--enable-admission-plugins": "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,DefaultStorageClass,ResourceQuota",
  "--runtime-config": "batch/v2alpha1=true,settings.k8s.io/v1alpha1=true,admissionregistration.k8s.io/v1alpha1=true",
  "--cors-allowed-origins": ".*",
  "--feature-gates": "ReadOnlyAPIDataVolumes=false"
},
"kubeletConfig": {
  "--feature-gates": "ReadOnlyAPIDataVolumes=false",
  "--authentication-token-webhook": "true",
  "--authorization-mode": "Webhook"
}

Installation

You can install Kyma with all core subcomponents or only with the selected ones. This section describes how to install all core subcomponents. To learn how to install only the specific ones, see the Install subcomponents document for details.

  1. Create the kyma-installer Namespace.

Run the following command:

kubectl create ns kyma-installer
  2. Fill in the installer-config-cluster.yaml.tpl template.

The Kyma installation process requires installation data specified in the installer-config-cluster.yaml file. Copy the installer-config-cluster.yaml.tpl template, rename it to installer-config-cluster.yaml, and fill in these placeholder values:

  • __TLS_CERT__ for the TLS certificate. This value must be a base64-encoded TLS certificate in PEM format.
  • __TLS_KEY__ for the TLS certificate key, this value must be a base64-encoded TLS private key
  • __REMOTE_ENV_CA__ for the Remote Environments CA
  • __REMOTE_ENV_CA_KEY__ for the Remote Environments CA key
  • __IS_LOCAL_INSTALLATION__ for controlling the installation procedure. Set it to true for a local installation; otherwise, a cluster installation is assumed.
  • __DOMAIN__ for the domain name such as kyma.example.com
  • __EXTERNAL_PUBLIC_IP__ for the IP address of Kyma Istio Gateway (optional)
  • __REMOTE_ENV_IP__ for the IP address for Remote Environments Ingress (optional)
  • __ADMIN_GROUP__ for the additional admin group. This value is optional.
  • __ENABLE_ETCD_BACKUP__ set to true to install the etcd-operator and a CronJob which periodically executes the Etcd Backup application.
  • __ETCD_BACKUP_ABS_CONTAINER_NAME__ for the Azure Blob Storage name of etcd backups. You can leave the value blank when the backup operator is disabled.

NOTE: As the etcd backup feature is in development, set __ENABLE_ETCD_BACKUP__ to false.
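
To produce the base64-encoded values for the certificate placeholders, encode the files you generated. For example, assuming the files are named tls.crt and tls.key:

base64 tls.crt | tr -d '\n'
base64 tls.key | tr -d '\n'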

When you fill in all required placeholder values, run the following command to provide the cluster with the installation data:

kubectl apply -f installer-config-cluster.yaml
  3. Bind the default RBAC role.

Kyma installation requires increased permissions granted by the cluster-admin role. To bind the role to the default ServiceAccount, run the following command:

kubectl apply -f installation/resources/default-sa-rbac-role.yaml
  4. Deploy Tiller.

To deploy the tiller component on your cluster, run the following command:

kubectl apply -f installation/resources/tiller.yaml

Wait until the tiller Pod is ready. Execute the following command to check that it is running:

kubectl get pods -n kube-system | grep tiller
  5. Deploy the Installer component.

To deploy the Installer component on your cluster, run this command:

kubectl apply -f installation/resources/installer.yaml -n kyma-installer
  6. Trigger the installation.

To trigger the installation of Kyma, you need a Custom Resource file. Duplicate the installer-cr.yaml.tpl file, rename it to installer-cr.yaml, and fill in these placeholder values:

  • __VERSION__ for the version number of Kyma to install. When manually installing Kyma on a cluster, specify any valid SemVer notation string. For example, 0.0.1.
  • __URL__ for the URL to the Kyma tar.gz package to install. For example, for the master branch of Kyma, the address is https://github.com/kyma-project/kyma/archive/master.tar.gz.

NOTE: Read the Installation document to learn more about the Custom Resource that controls the Kyma installer.

Once the file is ready, run this command to trigger the installation:

kubectl apply -f installer-cr.yaml
  7. Verify the installation.

To check the progress of the installation process, verify the Custom Resource:

kubectl get installation kyma-installation -o yaml

A successful installation ends by setting status.state to Installed and status.description to Kyma installed.
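
To read only these fields, you can, for example, query them with jsonpath:

kubectl get installation kyma-installation -o jsonpath='{.status.state}{"\n"}{.status.description}{"\n"}'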

DNS configuration

If the cluster provider does not allow you to pre-allocate IP addresses, the cluster gets the required details from the underlying cloud provider infrastructure. Get the allocated IP addresses and set up the DNS entries required for Kyma.

  • List all Services and look for "LoadBalancer":

    kubectl get services --all-namespaces | grep LoadBalancer
  • Find istio-ingressgateway in the istio-system Namespace. This entry specifies the IP address for the Kyma Ingress. Create a DNS entry *.kyma.example.com that points to this IP address.

  • Find core-nginx-ingress-controller in the kyma-system Namespace. This entry specifies the IP address for the Remote Environments Ingress. Create a DNS entry gateway.kyma.example.com that points to this address.
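
To read the allocated addresses directly, you can, for example, use jsonpath. Depending on the cloud provider, the value appears under ip or hostname:

kubectl get service istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
kubectl get service core-nginx-ingress-controller -n kyma-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'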

Troubleshooting

To troubleshoot the installation, start by reviewing logs of the Installer component:

kubectl logs -n kyma-installer $(kubectl get pods --all-namespaces -l name=kyma-installer --no-headers -o jsonpath='{.items[*].metadata.name}')

Sample service deployment on local

This Getting Started guide is intended for the developers who want to quickly learn how to deploy a sample service and test it with Kyma installed locally on Mac.

This guide uses a standalone sample service written in the Go language.

Prerequisites

To use the Kyma cluster and install the example, download these tools:

Steps

Deploy and expose a sample standalone service

Follow these steps:

  1. Deploy the sample service to any of your Environments. Use the stage Environment for this guide:

    kubectl create -n stage -f https://raw.githubusercontent.com/kyma-project/examples/master/http-db-service/deployment/deployment.yaml
  2. Create an unsecured API for your example service:

    kubectl apply -n stage -f https://raw.githubusercontent.com/kyma-project/examples/master/gateway/service/api-without-auth.yaml
  3. Add the IP address of Minikube to the hosts file on your local machine for your APIs:

    echo "$(minikube ip) http-db-service.kyma.local" | sudo tee -a /etc/hosts
  4. Access the service using the following call:

    curl -ik https://http-db-service.kyma.local/orders

    The system returns a response similar to the following:

    HTTP/2 200
    content-type: application/json;charset=UTF-8
    vary: Origin
    date: Mon, 01 Jun 2018 00:00:00 GMT
    content-length: 2
    x-envoy-upstream-service-time: 131
    server: envoy
    []

Update your service's API to secure it

Run the following command:

kubectl apply -n stage -f https://raw.githubusercontent.com/kyma-project/examples/master/gateway/service/api-with-auth.yaml

After you apply this update, you must include a valid bearer ID token in the Authorization header to access the service.
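
For example, assuming you exported a valid ID token to the ID_TOKEN variable (how you obtain it depends on your identity provider setup), the secured call looks like this:

curl -ik https://http-db-service.kyma.local/orders -H "Authorization: Bearer $ID_TOKEN"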

NOTE: The update might take some time.

Sample service deployment on a cluster

This Getting Started guide is intended for the developers who want to quickly learn how to deploy a sample service and test it with the Kyma cluster.

This guide uses a standalone sample service written in the Go language.

Prerequisites

To use the Kyma cluster and install the example, download these tools:

Steps

Download configuration for kubectl

Follow these steps to download kubeconfig and configure kubectl to access the Kyma cluster:

  1. Access the Console UI and download the kubectl file from the settings page.
  2. Place the downloaded file in the following location: $HOME/.kube/kubeconfig.
  3. Point kubectl to the configuration file using the terminal: export KUBECONFIG=$HOME/.kube/kubeconfig.
  4. Confirm that kubectl is configured to use your cluster: kubectl cluster-info.

Set the cluster domain variable

The commands throughout this guide use URLs that require you to provide the domain of the cluster which you are using. To complete this configuration, set the variable yourClusterDomain to the domain of your cluster.

For example, if your cluster's domain is demo.cluster.kyma.cx, run the following command:

export yourClusterDomain='demo.cluster.kyma.cx'

Deploy and expose a sample standalone service

Follow these steps:

  1. Deploy the sample service to any of your Environments. Use the stage Environment for this guide:

    kubectl create -n stage -f https://minio.$yourClusterDomain/content/root/kyma/assets/deployment.yaml
  2. Create an unsecured API for your service:

    curl -k https://minio.$yourClusterDomain/content/root/kyma/assets/api-without-auth.yaml | sed "s/.kyma.local/.$yourClusterDomain/" | kubectl apply -n stage -f -
  3. Access the service using the following call:

    curl -ik https://http-db-service.$yourClusterDomain/orders

    The system returns a response similar to the following:

    HTTP/2 200
    content-type: application/json;charset=UTF-8
    vary: Origin
    date: Mon, 01 Jun 2018 00:00:00 GMT
    content-length: 2
    x-envoy-upstream-service-time: 131
    server: envoy
    []

Update your service's API to secure it

Run the following command:

curl -k https://minio.$yourClusterDomain/content/root/kyma/assets/api-with-auth.yaml | sed "s/.kyma.local/.$yourClusterDomain/" | kubectl apply -n stage -f -

After you apply this update, you must include a valid bearer ID token in the Authorization header to access the service.

NOTE: The update might take some time.

Develop a service locally without using Docker

You can develop services in the local Kyma installation without extensive Docker knowledge or a need to build and publish a Docker image. The minikube mount feature allows you to mount a directory from your local disk into the local Kubernetes cluster.

This guide shows how to use this feature, using the service example implemented in Golang.

Prerequisites

Install Golang.

Steps

Install the example on your local machine

  1. Install the example:
go get -insecure github.com/kyma-project/examples/http-db-service
  2. Navigate to the installed example and the http-db-service folder inside it:
cd ~/go/src/github.com/kyma-project/examples/http-db-service
  3. Build the executable to run the application:
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

Mount the example directory into Minikube

For this step, you need a running local Kyma instance. Read the Local Kyma installation Getting Started guide to learn how to install Kyma locally.

  1. Open the terminal window. Do not close it until the development finishes.
  2. Mount your local drive into Minikube:
# Use the following pattern:
minikube mount {LOCAL_DIR_PATH}:{CLUSTER_DIR_PATH}
# To follow this guide, call:
minikube mount ~/go/src/github.com/kyma-project/examples/http-db-service:/go/src/github.com/kyma-project/examples/http-db-service

See the example and expected result:

# Terminal 1
minikube mount ~/go/src/github.com/kyma-project/examples/http-db-service:/go/src/github.com/kyma-project/examples/http-db-service
Mounting /Users/{USERNAME}/go/src/github.com/kyma-project/examples/http-db-service into /go/src/github.com/kyma-project/examples/http-db-service on the minikube VM
This daemon process must stay alive for the mount to still be accessible...
ufs starting

Run your local service inside Minikube

  1. Create a Pod that uses the base Golang image to run your executable located on your local machine:
# Terminal 2
kubectl run mydevpod --image=golang:1.9.2-alpine --restart=Never -n stage --overrides='
{
  "spec": {
    "containers": [
      {
        "name": "mydevpod",
        "image": "golang:1.9.2-alpine",
        "command": ["./main"],
        "workingDir": "/go/src/github.com/kyma-project/examples/http-db-service",
        "volumeMounts": [
          {
            "mountPath": "/go/src/github.com/kyma-project/examples/http-db-service",
            "name": "local-disk-mount"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "local-disk-mount",
        "hostPath": {
          "path": "/go/src/github.com/kyma-project/examples/http-db-service"
        }
      }
    ]
  }
}
'
  2. Expose the Pod as a Service from Minikube to verify it:
kubectl expose pod mydevpod --name=mypodservice --port=8017 --type=NodePort -n stage
  3. Check the Minikube IP address and port, and use them to access your service:
# Get the IP address.
minikube ip
# See the example result: 192.168.64.44
# Check the Port.
kubectl get services -n stage
# See the example result: mypodservice NodePort 10.104.164.115 <none> 8017:32226/TCP 5m
  4. Call the service from your terminal:
curl {minikube ip}:{port}/orders -v
# See the example: curl http://192.168.64.44:32226/orders -v
# The command returns an empty array.

Modify the code locally and see the results immediately in Minikube

  1. Edit the main.go file by adding a new test endpoint to the startService function:
router.HandleFunc("/test", func(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("test"))
})
  2. Build a new executable to run the application inside Minikube:
CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
  3. Replace the existing Pod with the new version:
kubectl get pod mydevpod -n stage -o yaml | kubectl replace --force -f -
  4. Call the new test endpoint of the service from your terminal. The command returns the test string:
curl http://192.168.64.44:32226/test -v

Publish a service Docker image and deploy it to Kyma

In the Getting Started guide for local development of a service, you learn how to develop a service locally and immediately see the changes in the local, Minikube-based Kyma installation, without building a Docker image and publishing it to a Docker registry such as Docker Hub.

Using the same example service, this guide explains how to build a Docker image for your service, publish it to the Docker registry, and deploy it to the local Kyma installation. The instructions are based on Minikube, but you can also use the image that you create and the Kubernetes resource definitions on a Kyma cluster.

NOTE: The deployment works both on local Kyma installation and on the Kyma cluster.

Steps

Build a Docker image

The http-db-service example used in this guide provides you with the Dockerfile necessary for building Docker images. Examine the Dockerfile to learn how it looks and how it uses the Docker multi-stage build feature, but do not use it one-to-one in production. There might be custom LABEL attributes with values to override.

  1. In your terminal, go to the examples/http-db-service directory. If you did not follow the Sample service deployment on local guide and you do not have this directory locally, get the http-db-service example from the examples repository.
  2. Run the build with ./build.sh.

NOTE: Ensure that the new image builds and is available in your local Docker registry by calling docker images. Find an image called example-http-db-service and tagged as latest.

Register the image in the Docker Hub

This guide is based on Docker Hub. However, many other Docker registries are available. You can use a private Docker registry, but it must be accessible from the Internet. For more details about using a private Docker registry, see the Deploy with a private Docker registry document.

  1. Open the Docker Hub webpage.
  2. Provide all of the required details and sign up.

Sign in to the Docker Hub registry in the terminal

  1. Call docker login.
  2. Provide the username and password, and select the ENTER key.

Push the image to the Docker Hub

  1. Tag the local image with a proper name required in the registry: docker tag example-http-db-service {username}/example-http-db-service:0.0.1.
  2. Push the image to the registry: docker push {username}/example-http-db-service:0.0.1.
# This is how it looks in the terminal:
The push refers to repository [docker.io/{username}/example-http-db-service]
4302273b9e11: Pushed
5835bd463c0e: Pushed
0.0.1: digest: sha256:9ec28342806f50b92c9b42fa36d979c0454aafcdda6845b362e2efb9816d1439 size: 734

NOTE: To verify if the image is successfully published, check if it is available online at the following address: https://hub.docker.com/r/{username}/example-http-db-service/

Deploy to Kyma

The http-db-service example contains sample Kubernetes resource definitions needed for the basic Kyma deployment. Find them in the deployment folder. Perform the following modifications to use your newly-published image in the local Kyma installation:

  1. Go to the deployment directory.
  2. Edit the deployment.yaml file. Change the image attribute to {username}/example-http-db-service:0.0.1.
  3. Create the new resources in local Kyma using these commands: kubectl create -f deployment.yaml -n stage && kubectl create -f ingress.yaml -n stage.
  4. Edit your /etc/hosts to add the new http-db-service.kyma.local host to the list of hosts associated with your minikube ip. Follow these steps:
    • Open a terminal window and run: sudo vim /etc/hosts
    • Select the i key to insert a new line at the top of the file.
    • Add this line: {YOUR.MINIKUBE.IP} http-db-service.kyma.local
    • Type :wq and select the Enter key to save the changes.
  5. Run this command to check if you can access the service: curl https://http-db-service.kyma.local/orders. The response should return an empty array.

Custom Resource

Installation

The installations.installer.kyma-project.io Custom Resource Definition (CRD) is a detailed description of the kind of data and the format used to control the Kyma Installer, a proprietary solution based on the Kubernetes operator principles. To get the up-to-date CRD and show the output in the yaml format, run this command:

kubectl get crd installations.installer.kyma-project.io -o yaml

Sample Custom Resource

This is a sample CR that controls the Kyma installer. This example has the action label set to install, which means that it triggers the installation of Kyma.

apiVersion: "installer.kyma-project.io/v1alpha1"
kind: Installation
metadata:
  name: kyma-installation
  labels:
    action: install
  finalizers:
    - finalizer.installer.kyma-project.io
spec:
  version: "1.0.0"
  url: "https://sample.url.com/kyma_release.tar.gz"

This list describes all the possible parameters of a given resource together with their descriptions:

  • metadata.name (mandatory) - Specifies the name of the CR.
  • metadata.labels.action (mandatory) - Defines the behavior of the Kyma installer. Available options are install and uninstall.
  • metadata.finalizers (optional) - Protects the CR from deletion. Read this Kubernetes document to learn more about finalizers.
  • spec.version (optional) - When manually installing Kyma on a cluster, specify any valid SemVer notation string.
  • spec.url (mandatory) - Specifies the location of the Kyma sources tar.gz package. For example, for the master branch of Kyma, the address is https://github.com/kyma-project/kyma/archive/master.tar.gz.