Serverless

Overview

"Serverless" refers to an architecture in which the infrastructure of your applications is managed by cloud providers. Contrary to its name, a serverless application does require a server but it doesn't require you to run and manage it on your own. Instead, you subscribe to a given cloud provider, such as AWS, Azure or GCP, and pay a subscription fee only for the resources you actually use. Since the resource allocation can be dynamic and depends on your current needs, the serverless model is particularly cost-effective when you want to implement a certain logic that is triggered on demand. Simply, you get your things done and don't pay for the infrastructure that sits idle.

Similarly to cloud providers, Kyma offers a "functions-as-a-service" (FaaS) platform on which you can build, run, and manage serverless applications in Kubernetes. These applications are called Functions and are based on Function CR objects. They are simple code snippets that implement the exact business logic you define in them. After you create a Function, you can:

  • Configure it to be triggered by events coming from external sources to which you subscribe.
  • Expose it to an external endpoint (HTTPS).

CAUTION: In its default configuration, Serverless uses persistent volumes as the internal registry to store Docker images for Functions. The default storage size of a single volume is 20 GB. This internal registry is suitable for local development. For production purposes, we recommend using an external Docker registry.

Architecture

Serverless relies heavily on Kubernetes resources. It uses Deployments, Services and HorizontalPodAutoscalers to deploy and manage Functions, and Kubernetes Jobs to create Docker images. See how these and other resources process a Function within a Kyma cluster:

Serverless architecture

CAUTION: Serverless imposes some requirements on the setup of Namespaces. If you create a new Namespace, do not disable sidecar injection in it, as Serverless requires Istio for other resources to communicate with Functions correctly. Also, if you apply custom LimitRanges for a new Namespace, they must be higher than or equal to the resource limits of the image-building Jobs.

  1. Create a Function either through the UI or by applying a Function custom resource (CR). This CR contains the Function definition (business logic that you want to execute) and information on the environment on which it should run.

  2. Before the Function can be saved or modified, it is first updated and then verified by the defaulting and validation webhooks respectively.

  3. Function Controller (FC) detects the new, validated Function CR.

  4. FC creates a ConfigMap with the Function definition.

  5. Based on the ConfigMap, FC creates a Kubernetes Job that triggers the creation of a Function image.

  6. The Job creates a Pod which builds the production Docker image based on the Function's definition. The Job then pushes this image to a Docker registry.

  7. FC monitors the Job status. When the image creation finishes successfully, FC creates a Deployment that uses the newly built image.

  8. FC creates a Service that points to the Deployment.

  9. FC creates a HorizontalPodAutoscaler that automatically scales the number of Pods in the Deployment based on the observed CPU utilization.

  10. FC waits for the Deployment to become ready.
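
If you want to watch this flow on a live cluster, you can list the Function and the resources that the Function Controller creates for it in the Function's Namespace, for example:

kubectl get functions -n {NAMESPACE}
kubectl get configmaps,jobs,deployments,services,hpa -n {NAMESPACE}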

Details

Runtimes

Functions support multiple languages through the use of runtimes. To use a chosen runtime, add its name and version as a value in the spec.runtime field of the Function custom resource (CR). If this value is not specified, it defaults to nodejs12. Dependencies for a Node.js Function should be specified using the package.json file format. Dependencies for a Python Function should follow the format used by pip.

See sample Functions for all available runtimes:

CAUTION: When you create a Function, the exported object in the Function's body must have main as the handler name.

  • Node.js 12
  • Node.js 10
  • Python 3.8
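
For example, a minimal Function CR that uses the Python 3.8 runtime could look as follows. The name is illustrative, the deps field uses the pip (requirements.txt) format, and the handler is called main as required:

apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  name: sample-python-function   # illustrative name
spec:
  runtime: python38
  deps: |
    requests==2.24.0
  source: |
    import requests

    def main(event, context):
        # A trivial handler; the defined handler must be named "main".
        return "requests version: " + requests.__version__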

Git source type

Serverless in Kyma allows you to choose where you want to keep your Function's source code and dependencies. You can either place them directly in the Function custom resource (CR) under the spec.source and spec.deps fields as an inline Function, or store the code and dependencies in a public or private Git repository. Choosing the second option ensures your Function is versioned and gives you more development freedom in the choice of a project structure or an IDE.

Depending on the runtime you use to build your Function (Node.js 12, Node.js 10, or Python 3.8), your Git repository must contain at least a directory with these files:

  • handler.js or handler.py with Function's code
  • package.json or requirements.txt with Function's dependencies

The Function CR must contain type: git to specify that you use a Git repository for the Function's sources.

To create a Function with the Git source, you must:

  1. Create a GitRepository CR with details of your Git repository.
  2. Create a Secret (optional, only if you must authenticate to the repository).
  3. Create a Function CR with your Function definition and references to the Git repository.

NOTE: For detailed steps, see the tutorial on creating a Function from Git repository sources.

You can have various setups for your Function's Git source with different:

  • Directory structures

    You can specify the location of your code dependencies with the baseDir parameter in the Function CR. For example, use "/" if you keep the source files at the root of your repository.

  • Authentication methods

    Use the spec.auth parameter in the GitRepository CR to define whether you must authenticate to the repository with a password or token (basic) or an SSH key (key). See the sketch after this list for an example.

  • Function's rebuild triggers

    You can use the reference parameter in the Function CR to define whether the Function Controller must monitor a given branch or commit in the Git repository and rebuild the Function when they change.
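
For example, to use basic authentication, you can pair the GitRepository CR with a Secret holding the credentials. This is only a sketch: all names are illustrative, and it assumes the Secret stores the credentials under the username and password keys.

apiVersion: v1
kind: Secret
metadata:
  name: git-basic-auth            # illustrative name
type: Opaque
stringData:
  username: "{USERNAME}"
  password: "{PASSWORD_OR_TOKEN}"
---
apiVersion: serverless.kyma-project.io/v1alpha1
kind: GitRepository
metadata:
  name: sample-with-basic-auth    # illustrative name
spec:
  url: "https://github.com/{ACCOUNT}/{REPOSITORY}.git"
  auth:
    type: basic
    secretName: git-basic-auth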

Supported webhooks

A newly created or modified Function CR is first updated by the defaulting webhook and then verified by the validation webhook before the Function Controller starts to process it.

Defaulting webhook

NOTE: It only applies to the Function custom resource (CR).

The defaulting webhook:

  • Sets default values for CPU and memory requests and limits for a Function.
  • Sets default values for CPU and memory requests and limits for a Kubernetes Job responsible for building the Function's image.
  • Adds the maximum and the minimum number of replicas, if not specified already in the Function CR.
  • Sets the default runtime nodejs12 unless specified otherwise.

    Parameter | Default value
    resources.requests.cpu | 50m
    resources.requests.memory | 64Mi
    resources.limits.cpu | 100m
    resources.limits.memory | 128Mi
    buildResources.requests.cpu | 700m
    buildResources.requests.memory | 700Mi
    buildResources.limits.cpu | 1100m
    buildResources.limits.memory | 1100Mi
    minReplicas | 1
    maxReplicas | 1
    runtime | nodejs12

NOTE: Function's resources and replicas as well as resources for a Kubernetes Job are based on presets. Read about all available presets to find out more.

Validation webhook

It checks the following conditions for these CRs:

  1. Function CR

    • Minimum values requested for a Function (CPU, memory, and replicas) and a Kubernetes Job (CPU and memory) responsible for building the Function's image must not be lower than the required ones:
    Parameter | Required value
    minReplicas | 1
    resources.requests.cpu | 10m
    resources.requests.memory | 16Mi
    buildResources.requests.cpu | 200m
    buildResources.requests.memory | 200Mi
    • Requests must be lower than or equal to limits, and the minimum number of replicas must be lower than or equal to the maximum one.
    • The Function CR contains all the required parameters.
    • If you decide to set a Git repository as the source of your Function's code and dependencies (spec.type set to git), the spec.reference and spec.baseDir fields must contain values.
    • The format of deps, envs, labels, and the Function name (RFC 1035) is correct.
    • The Function CR does not contain any of the envs reserved for the Deployment: FUNC_RUNTIME, FUNC_HANDLER, FUNC_PORT, MOD_NAME, NODE_PATH, PYTHONPATH.
  2. GitRepository CR

    • The spec.url parameter must:

      • Not be empty
      • Start with the http(s), git, or ssh prefix
      • End with the .git suffix
    • If you use SSH to authenticate to the repository:

      • spec.auth.type must be set to key
      • spec.auth.secretName must not be empty
    • If you use HTTP(S) to point to the repository that requires authentication (spec.auth):

      • spec.auth.type must be set to either key (SSH key) or basic (password or token)
      • spec.auth.secretName must not be empty

Available presets

Function's resources and replicas as well as resources for image-building Jobs are based on presets. A preset is a predefined group of values. There are three groups of presets defined for the Function CR:

  • Function's resources
  • Function's replicas
  • Image-building Job's resources

Configuration

To add a new preset to the Serverless configuration for the defaulting webhook to set it on all Function CRs, update the values.yaml file in the Serverless chart. To do it, change the configuration for the webhook.values.function.replicas.presets, webhook.values.function.resources.presets or webhook.values.buildJob.resources.presets parameters. Read the Serverless chart configuration to find out more.

Usage

If you want to apply values from a preset to a single Function, remove the relevant fields from the Function CR and add the matching preset label instead. For example, to change the defaults for buildResources, remove all buildResources entries from the Function CR and add the serverless.kyma-project.io/build-resources-preset: {PRESET} label to it, as shown in the sketch below.
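
A minimal sketch of this approach, with an illustrative Function name: the CR defines no buildResources, so the defaulting webhook fills them in from the fast preset indicated by the label.

apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  name: my-function
  labels:
    serverless.kyma-project.io/build-resources-preset: "fast"
spec:
  runtime: nodejs12
  source: |
    module.exports = {
      main: function (event, context) {
        return "Hello World!";
      }
    }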

Function's replicas

Preset | Minimum number | Maximum number
S | 1 | 1
M | 1 | 2
L | 1 | 5
XL | 1 | 10

To apply values from a given preset, use the serverless.kyma-project.io/replicas-preset: {PRESET} label in the Function CR.

Function's resources

Preset | Request CPU | Request memory | Limit CPU | Limit memory
XS | 10m | 16Mi | 25m | 32Mi
S | 25m | 32Mi | 50m | 64Mi
M | 50m | 64Mi | 100m | 128Mi
L | 100m | 128Mi | 200m | 256Mi
XL | 200m | 256Mi | 400m | 512Mi

To apply values from a given preset, use the serverless.kyma-project.io/function-resources-preset: {PRESET} label in the Function CR.

Build Job's resources

Preset | Request CPU | Request memory | Limit CPU | Limit memory
local-dev | 200m | 200Mi | 400m | 400Mi
slow | 400m | 400Mi | 700m | 700Mi
normal | 700m | 700Mi | 1100m | 1100Mi
fast | 1100m | 1100Mi | 1700m | 1700Mi

To apply values from a given preset, use the serverless.kyma-project.io/build-resources-preset: {PRESET} label in the Function CR.

Exposing Functions

To access a Function within the cluster, use the {function-name}.{namespace}.svc.cluster.local endpoint, such as test-function.default.svc.cluster.local. To expose a Function outside the cluster, you must create an APIRule custom resource (CR):

Expose a Function service

  1. Create the APIRule CR where you specify the Function to expose, define an Oathkeeper Access Rule to secure it, and list which HTTP request methods you want to enable for it.

  2. The API Gateway Controller detects a new APIRule CR and reads its definition.

  3. The API Gateway Controller creates an Istio Virtual Service and Access Rules according to details specified in the CR. Such a Function service is available under the {host-name}.{domain} endpoint, such as my-function.kyma.local.

This way you can specify multiple API Rules with different authentication methods for a single Function service.
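
For reference, this is a sketch of such an APIRule CR. It assumes the gateway.kyma-project.io/v1alpha1 schema used by the API Gateway Controller; names, host, and methods are illustrative.

apiVersion: gateway.kyma-project.io/v1alpha1
kind: APIRule
metadata:
  name: my-function
  namespace: default
spec:
  gateway: kyma-gateway.kyma-system.svc.cluster.local
  service:
    name: my-function                 # the Function's Service
    port: 80
    host: my-function.kyma.local      # {host-name}.{domain}
  rules:
    - path: /.*
      methods: ["GET", "POST"]
      accessStrategies:
        - handler: noop               # no authentication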

TIP: See the tutorial for a detailed example.

NOTE: If you are using Minikube, before you can access the Function, you must add its endpoint to the Minikube IP entry in the /etc/hosts file.

Function processing

From the moment you create a Function (Function CR) until the time it is ready, it goes through three processing stages that are defined as these condition types:

  1. ConfigurationReady (PrinterColumn CONFIGURED)
  2. BuildReady (PrinterColumn BUILT)
  3. Running (PrinterColumn RUNNING)

For a Function to be considered ready, the status of all three conditions must be True:

NAME            CONFIGURED   BUILT   RUNNING   RUNTIME    VERSION   AGE
test-function   True         True    True      nodejs12   1         96s
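
The output above comes from listing Functions with kubectl, which you can run at any time to check the conditions:

kubectl get functions -n {NAMESPACE}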

When you update an existing Function, conditions change asynchronously depending on the change type.

The diagrams illustrate all three core status changes in the Function processing cycle that the Function Controller handles. They also list all custom resources involved in this process and specify in which cases their update is required.

NOTE: Before you start reading, see the Function CR document for the custom resource detailed definition, the list of all Function's condition types and reasons for their success or failure.

Configured

This initial phase starts when you create a Function CR with configuration specifying the Function's setup. It ends with creating a ConfigMap that is used as a building block for a Function image.

Function configured

Built

This phase involves creating and processing the Job CR. It ends successfully when the Function image is built and sent to the Docker registry. If the image already exists and only an update is required, the Docker image receives a new tag.

Updating an existing Function requires an image rebuild only if you change the Function's body (source) or dependencies (deps). An update of the Function's other configuration details, such as environment variables, replicas, resources, or labels, does not require an image rebuild as it only affects the Deployment.

NOTE: Each time you update Function's configuration, the Function Controller deletes all previous Job CRs for the given Function's UID.

Function built

Running

This stage revolves around creating a Deployment, Service and HorizontalPodAutoscaler or updating them when configuration changes were made in the Function CR or the Function image was rebuilt.

In general, the Deployment is considered updated when both configuration and the image tag in the Deployment are up to date. Service and HorizontalPodAutoscaler are considered updated when there are proper labels set and configuration is up to date.

Thanks to the implemented reconciliation loop, the Function Controller constantly observes all newly created or updated resources. If it detects changes, it fetches the appropriate resource's status and only then updates the Function's status.

Function running

Security

To eliminate potential security risks when using Functions, bear in mind these few facts:

  • Kyma does not run any security scans against Functions and their images. Before you store any sensitive data in Functions, consider the potential risk of data leakage.

  • By default, JSON Web Tokens (JWTs) issued by Dex do not provide the scope parameter for Functions. This means that if you expose your Function and secure it with a JWT, you can use the token to validate access to all Functions within the cluster.

  • Kyma does not define any authorization policies that would restrict Functions' access to other resources within the Namespace. If you deploy a Function in a given Namespace, it can freely access all events and APIs of services within this Namespace.

  • All administrators and regular users who have access to a specific Namespace in a cluster can also access:

    • Source code of all Functions within this Namespace
    • Internal Docker registry that contains Function images
    • Secrets allowing the build Job to pull and push images from and to the Docker registry (in non-system Namespaces)

Configuration

Environment variables

Configuration of environment variables depends on the Function's runtime.

Node.js runtime

To configure the Function with the Node.js runtime, override the default values of these environment variables:

Environment variable | Description | Type | Default value
FUNC_TIMEOUT | Specifies the number of seconds in which the runtime must execute the code. | Number | 180
REQ_MB_LIMIT | Specifies the payload body size limit in megabytes. | Number | 1

See kubeless.js to get a deeper understanding of how the Express server that acts as the runtime uses these values internally to run Node.js Functions.

See the example of a Function with these environment variables set:

apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  name: sample-fn-with-envs
  namespace: default
spec:
  env:
    - name: FUNC_TIMEOUT
      value: "2"
    - name: REQ_MB_LIMIT
      value: "10"
  source: |
    module.exports = {
      main: function (event, context) {
        return "Hello World!";
      }
    }

Python runtime

To configure a Function with the Python runtime, override the default values of these environment variables:

Environment variable | Description | Type | Default value
FUNC_MEMFILE_MAX | Maximum size of the memory buffer for the HTTP request body, in bytes. | Number | 100*1024*1024

See kubeless.py to get a deeper understanding of how the Bottle server that acts as the runtime uses these values internally to run Python Functions.

apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  name: sample-fn-with-envs
  namespace: default
spec:
  runtime: python38
  env:
    - name: FUNC_MEMFILE_MAX
      value: "1048576" # 1MiB
  source: |
    def main(event, context):
        return "Hello World!"

Serverless chart

To configure the Serverless chart, override the default values of its values.yaml file. This document describes parameters that you can configure.

TIP: To learn more about how to use overrides in Kyma, see the following documents:

Configurable parameters

This table lists the configurable parameters, their descriptions, and default values for both cluster and local installations.

NOTE: Limited memory and CPU resources on Minikube directly affect the Serverless functionality as you can process only a limited number of Functions at the same time. Also, their processing time is significantly longer. To process large workloads, we recommend using the cluster setup.

Parameter | Description | Default value | Minikube override
webhook.values.buildJob.resources.minRequestCpu | Minimum number of CPUs requested by the image-building Pod to operate. | 200m | 200m
webhook.values.buildJob.resources.minRequestMemory | Minimum amount of memory requested by the image-building Pod to operate. | 200Mi | 200Mi
webhook.values.buildJob.resources.defaultPreset | Default preset for the image-building Pod's resources. | normal | local-dev
webhook.values.function.replicas.minValue | Minimum number of replicas of a single Function. | 1 | 1
webhook.values.function.replicas.defaultPreset | Default preset for the Function's replicas. | S | S
webhook.values.function.resources.minRequestCpu | Minimum number of CPUs requested by the Function's Pod to operate. | 10m | 10m
webhook.values.function.resources.minRequestMemory | Minimum amount of memory requested by the Function's Pod to operate. | 16Mi | 16Mi
webhook.values.function.resources.defaultPreset | Default preset for the Function's resources. | M | M

TIP: To learn more, read the official documentation on resource units in Kubernetes.
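
For example, to change the default preset for Function resources cluster-wide, you could provide an override before installing or updating Kyma. This is a sketch that assumes the standard Kyma override mechanism (a ConfigMap labeled installer: overrides for the serverless component); verify the exact keys against the chart's values.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: serverless-overrides        # illustrative name
  namespace: kyma-installer
  labels:
    installer: overrides
    component: serverless
data:
  webhook.values.function.resources.defaultPreset: "L"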

Custom Resource

Function

The functions.serverless.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to manage Functions within Kyma. To get the up-to-date CRD and show the output in the YAML format, run this command:

kubectl get crd functions.serverless.kyma-project.io -o yaml

Sample custom resource

The following Function object creates a Function which responds to HTTP requests with the "Hello John" message. The Function's code (source) and dependencies (deps) are specified in the Function CR.

apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  name: my-test-function
  namespace: default
spec:
  env:
    - name: PERSON_NAME
      value: "John"
  deps: |
    {
      "name": "hellowithdeps",
      "version": "0.0.1",
      "dependencies": {
        "end-of-stream": "^1.4.1",
        "from2": "^2.3.0",
        "lodash": "^4.17.5"
      }
    }
  labels:
    app: my-test-function
  minReplicas: 3
  maxReplicas: 3
  resources:
    limits:
      cpu: 1
      memory: 1Gi
    requests:
      cpu: 500m
      memory: 500Mi
  buildResources:
    limits:
      cpu: 2
      memory: 2Gi
    requests:
      cpu: 1
      memory: 1Gi
  source: |
    module.exports = {
      main: function(event, context) {
        const name = process.env.PERSON_NAME;
        return 'Hello ' + name;
      }
    }
  runtime: nodejs12
status:
  conditions:
    - lastTransitionTime: "2020-04-14T08:17:11Z"
      message: "Deployment my-test-function-nxjdp is ready"
      reason: DeploymentReady
      status: "True"
      type: Running
    - lastTransitionTime: "2020-04-14T08:16:55Z"
      message: "Job my-test-function-build-552ft finished"
      reason: JobFinished
      status: "True"
      type: BuildReady
    - lastTransitionTime: "2020-04-14T08:16:16Z"
      message: "ConfigMap my-test-function-xv6pc created"
      reason: ConfigMapCreated
      status: "True"
      type: ConfigurationReady

If you store the Function's source code and dependencies in a Git repository and want the Function Controller to fetch them from it, use these parameters in the Function CR:

apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  name: my-test-function
spec:
  type: git
  source: auth-basic
  baseDir: "/"
  reference: "branchA"
  runtime: "nodejs12"

Custom resource parameters

This table lists all the possible parameters of a given resource together with their descriptions:

Parameter | Required | Description
metadata.name | Yes | Specifies the name of the CR.
metadata.namespace | No | Defines the Namespace in which the CR is available. It is set to default unless you specify otherwise.
spec.env | No | Specifies environment variables you need to export for the Function.
spec.deps | No | Specifies the Function's dependencies.
spec.labels | No | Specifies the Function's Pod labels.
spec.minReplicas | No | Defines the minimum number of Function's Pods to run at a time.
spec.maxReplicas | No | Defines the maximum number of Function's Pods to run at a time.
spec.resources.limits.cpu | No | Defines the maximum number of CPUs available for the Function's Pod to use.
spec.resources.limits.memory | No | Defines the maximum amount of memory available for the Function's Pod to use.
spec.resources.requests.cpu | No | Specifies the number of CPUs requested by the Function's Pod to operate.
spec.resources.requests.memory | No | Specifies the amount of memory requested by the Function's Pod to operate.
spec.buildResources.limits.cpu | No | Defines the maximum number of CPUs available for the Kubernetes Job's Pod responsible for building the Function's image.
spec.buildResources.limits.memory | No | Defines the maximum amount of memory available for the build Job's Pod to use.
spec.buildResources.requests.cpu | No | Specifies the number of CPUs requested by the build Job's Pod to operate.
spec.buildResources.requests.memory | No | Specifies the amount of memory requested by the build Job's Pod to operate.
spec.runtime | No | Specifies the runtime of the Function. The available values are nodejs12, nodejs10, and python38. It is set to nodejs12 unless specified otherwise.
spec.type | No | Defines that you use a Git repository as the source of the Function's code and dependencies. It must be set to git.
spec.source | Yes | Provides either the Function's full source code or, if spec.type is set to git, the name of the GitRepository CR that points to the repository storing the code and dependencies.
spec.baseDir | No | Specifies the relative path to the Git directory that contains the source code from which the Function is built.
spec.reference | No | Specifies either the branch name or the commit revision from which the Function Controller automatically fetches the changes in the Function's code and dependencies.
status.conditions.lastTransitionTime | Not applicable | Provides a timestamp for the last time the Function's condition status changed from one to another.
status.conditions.message | Not applicable | Describes a human-readable message on the CR processing progress, success, or failure.
status.conditions.reason | Not applicable | Provides information on the Function CR processing success or failure. See the Reasons section for the full list of possible status reasons and their descriptions. All status reasons are in camelCase.
status.conditions.status | Not applicable | Describes the status of processing the Function CR by the Function Controller. It can be True for success, False for failure, or Unknown if the CR processing is still in progress. If the status of all conditions is True, the overall status of the Function CR is ready.
status.conditions.type | Not applicable | Describes a substage of the Function CR processing. There are three condition types that a Function has to meet to be ready: ConfigurationReady, BuildReady, and Running. When displaying the Function status in the terminal, these types are shown under the CONFIGURED, BUILT, and RUNNING columns respectively. All condition types can change asynchronously depending on the type of Function modification, but all three must have the True status for the Function to be considered successfully processed.

Status reasons

Processing of a Function CR can succeed, continue, or fail for one of these reasons:

Reason | Type | Description
ConfigMapCreated | ConfigurationReady | A new ConfigMap was created based on the Function CR definition.
ConfigMapUpdated | ConfigurationReady | The existing ConfigMap was updated after changes in the Function CR name, its source code, or dependencies.
SourceUpdated | ConfigurationReady | The Function Controller managed to fetch changes in the Function's source code and configuration from the Git repository (type: git).
SourceUpdateFailed | ConfigurationReady | The Function Controller failed to fetch changes in the Function's source code and configuration from the Git repository.
JobFailed | BuildReady | The image with the Function's configuration could not be created due to an error.
JobCreated | BuildReady | The Kubernetes Job resource that builds the Function image was created.
JobUpdated | BuildReady | The existing Job was updated after changing the Function's metadata or spec fields that do not affect the way of building the Function image, such as labels.
JobRunning | BuildReady | The Job is in progress.
JobsDeleted | BuildReady | Previous Jobs responsible for building Function images were deleted.
JobFinished | BuildReady | The Job was finished and the Function's image was uploaded to the Docker registry.
DeploymentCreated | Running | A new Deployment referencing the Function's image was created.
DeploymentUpdated | Running | The existing Deployment was updated after changing the Function's image, scaling parameters, variables, or labels.
DeploymentFailed | Running | The Function's Pod crashed or could not start due to an error.
DeploymentWaiting | Running | The Function was deployed and is waiting for the Deployment to be ready.
DeploymentReady | Running | The Function was deployed and is ready.
ServiceCreated | Running | A new Service referencing the Function's Deployment was created.
ServiceUpdated | Running | The existing Service was updated after applying required changes.
HorizontalPodAutoscalerCreated | Running | A new HorizontalPodAutoscaler referencing the Function's Deployment was created.
HorizontalPodAutoscalerUpdated | Running | The existing HorizontalPodAutoscaler was updated after applying required changes.

These are the resources related to this CR:

Resource | Description
ConfigMap | Stores the Function's source code and dependencies.
Job | Builds an image with the Function's code in a runtime.
Deployment | Serves the Function's image as a microservice.
Service | Exposes the Function's Deployment as a network service inside the Kubernetes cluster.
HorizontalPodAutoscaler | Automatically scales the number of Function's Pods.

These components use this CR:

Component | Description
Function Controller | Uses the Function CR for the detailed Function definition, including the environment on which it should run.

GitRepository

The gitrepositories.serverless.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define and manage Git repositories that store the Function's source code and dependencies. To get the up-to-date CRD and show the output in the YAML format, run this command:

kubectl get crd gitrepositories.serverless.kyma-project.io -o yaml

Sample custom resource

This is a sample custom resource that creates a GitRepository object pointing to a Git repository with the Function's source code and dependencies. This resource specifies that the repository requires an SSH key to authenticate to it and points to the Secret that stores these credentials.

apiVersion: serverless.kyma-project.io/v1alpha1
kind: GitRepository
metadata:
  name: sample-with-auth
spec:
  url: "git@github.com:sample-organization/sample-repo.git"
  auth:
    type: key
    secretName: kyma-git-creds

Custom resource parameters

This table lists all the possible parameters of a given resource together with their descriptions:

Parameter | Required for HTTP(S) | Required for SSH | Description
spec.url | Yes | Yes | Provides the address of the Git repository with the Function's code and dependencies. Depending on whether the repository is public or private and what authentication method is used to access it, the URL must start with the http(s), git, or ssh prefix, and end with the .git suffix.
spec.auth | No | Yes | Specifies that you must authenticate to the Git repository.
spec.auth.type | No | Yes | Defines if you must authenticate to the repository with a password or token (basic), or an SSH key (key). For SSH, this parameter must be set to key.
spec.auth.secretName | No | Yes | Specifies the name of the Secret with credentials used by the Function Controller to authenticate to the Git repository in order to fetch the Function's source code and dependencies. This Secret must be stored in the same Namespace as the GitRepository CR. The spec.auth.secretName parameter is required if you provide spec.auth.

These are the resources related to this CR:

Custom resource | Description
Function | Stores the Function's source code and dependencies on a cluster.

These components use this CR:

Component | Description
Function Controller | Uses the GitRepository CR to locate the Function's source code and dependencies in a Git repository.

Tutorials

Create an inline Function

This tutorial shows how you can create a simple "Hello World!" Function in Node.js 12.

Steps

Follow these steps:

  • kubectl
  • Console UI
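
With kubectl, the flow boils down to applying a minimal Function CR and waiting for it to become ready. A sketch with illustrative names:

cat <<EOF | kubectl apply -f -
apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  name: hello-world
  namespace: default
spec:
  runtime: nodejs12
  source: |
    module.exports = {
      main: function (event, context) {
        return "Hello World!";
      }
    }
EOF

# Wait until CONFIGURED, BUILT, and RUNNING are all True.
kubectl get functions hello-world -n default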

Create a Function from Git repository sources

Create a sample Function from code and dependencies stored in a Git repository. The tutorial is based on the Function from the orders service example. It shows how you can fetch the Function's source code and dependencies from a public Git repository that does not require any authentication.

NOTE: Read more about Git source type and different ways of securing your repository.

Steps

Follow these steps:

  • kubectl
  • Console UI
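
With kubectl, the flow is to register the repository first and then point a git-type Function at it. A sketch with placeholder repository details and illustrative names:

apiVersion: serverless.kyma-project.io/v1alpha1
kind: GitRepository
metadata:
  name: orders-repo                  # illustrative name
spec:
  url: "https://github.com/{ACCOUNT}/{REPOSITORY}.git"   # public repository, so no auth section
---
apiVersion: serverless.kyma-project.io/v1alpha1
kind: Function
metadata:
  name: orders-function              # illustrative name
spec:
  type: git
  runtime: nodejs12
  source: orders-repo                # name of the GitRepository CR
  baseDir: "/"
  reference: "main"                  # branch or commit to watch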

Expose a Function with an API Rule

This tutorial shows how you can expose your Function to access it outside the cluster, through an HTTP proxy. To expose it, use an APIRule custom resource (CR) managed by the in-house API Gateway Controller. This controller reacts to an instance of the APIRule CR and, based on its details, it creates an Istio Virtual Service and Oathkeeper Access Rules that specify your permissions for the exposed Function.

When you complete this tutorial, you get a Function that:

  • Is available on an unsecured endpoint (handler set to noop in the APIRule CR).
  • Accepts the GET, POST, PUT, and DELETE methods.

NOTE: To learn more about securing your Function, see the tutorial.

Prerequisites

This tutorial is based on an existing Function. To create one, follow the Create a Function tutorial.

Steps

Follow these steps:

  • kubectl
  • Console UI
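
Once the APIRule is applied, you can call the Function from outside the cluster, for example with curl. The host is illustrative and depends on your APIRule and cluster domain:

curl -ik https://my-function.kyma.local/
curl -ik -X POST https://my-function.kyma.local/ -d '{"sample":"payload"}'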

Bind a ServiceInstance to a Function

This tutorial shows how you can bind a sample instance of the Redis service to a Function. After completing all the steps, the Function will have access to the service credentials injected as a Secret. You can use them for authentication when you connect to the service to implement the custom business logic of your Function.

To create the binding, you will use ServiceBinding and ServiceBindingUsage custom resources (CRs) managed by the Service Catalog.

NOTE: See the document on provisioning and binding to learn more about binding ServiceInstances to applications in Kyma.

Prerequisites

This tutorial is based on an existing Function. To create one, follow the Create a Function tutorial.

Steps

Follow these steps:

  • kubectl
  • Console UI
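
A rough sketch of the two CRs involved, assuming an existing Redis ServiceInstance named redis-instance and a Function named my-function; the usedBy.kind value is an assumption, so check the Service Catalog documentation for the UsageKind registered for Functions in your cluster:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: redis-binding
  namespace: default
spec:
  instanceRef:
    name: redis-instance             # existing ServiceInstance (assumption)
---
apiVersion: servicecatalog.kyma-project.io/v1alpha1
kind: ServiceBindingUsage
metadata:
  name: redis-binding-usage
  namespace: default
spec:
  serviceBindingRef:
    name: redis-binding
  usedBy:
    kind: serverless-function        # assumption: UsageKind registered for Functions
    name: my-function                # Function that gets the Secret injected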

Test the Function

To test if the Secret has been properly connected to the Function:

  1. Change the Function's code to:

    module.exports = {
      main: function (event, context) {
        return "Redis port: " + process.env.REDIS_PORT;
      },
    };
  2. Expose the Function through an API Rule, and access the Function's external address. You should get this result:

    Redis port: 6379

Trigger a Function with an event

This tutorial shows how to trigger a Function with an event from an Application connected to Kyma.

NOTE: To learn more about events flow in Kyma, read the eventing documentation.

Prerequisites

This tutorial is based on an existing Function. To create one, follow the Create a Function tutorial.

You must also have:

  • An Application bound to a specific Namespace. Read the tutorials to learn how to create an Application and bind it to a Namespace.
  • An event service (an API of AsyncAPI type) registered in the desired Application. Read the tutorial to learn how to do it.
  • A Service Instance created for the registered service to expose events in a specific Namespace. Read the tutorial for details.

Steps

Follow these steps:

  • kubectl
  • Console UI
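
A sketch of a Trigger CR that subscribes the Function to an event type. It assumes the Knative eventing.knative.dev/v1alpha1 Trigger schema used by Kyma Eventing here; names and filter attributes are illustrative:

apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
  name: my-function-trigger
  namespace: default
spec:
  broker: default
  filter:
    attributes:
      eventtypeversion: v1           # illustrative event attributes
      source: "{APPLICATION_NAME}"
      type: user.created
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      name: my-function              # the Function's Service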

Test the trigger

CAUTION: Before you follow steps in this section and send a sample event, bear in mind that it will be propagated to all services subscribed to this event type.

To test if the Trigger CR is properly connected to the Function:

  1. Change the Function's code to:

    module.exports = {
      main: function (event, context) {
        console.log("User created: ", event.data);
      }
    }
  2. Send an event manually to trigger the function. The first example shows the implementation introduced with the Kyma 1.11 release where a CloudEvent is sent directly to the Event Mesh. In the second example, an event also reaches the Event Mesh, but it is first modified by the compatibility layer to the format compliant with the CloudEvents specification. This solution ensures compatibility if your events follow a format other than CloudEvents, or you use the Event Bus available before 1.11.

    TIP: For details on CloudEvents, exposed endpoints, and the compatibility layer, read about event processing and delivery.

    • Send CloudEvents directly to Event Mesh
    • Send events to Event Mesh through compatibility layer
    • CLUSTER_DOMAIN is the domain of your cluster, such as kyma.local.

    • CERT_FILE_NAME and KEY_FILE_NAME are client certificates for a given Application. You can get them by completing steps in the tutorial.

  3. After sending an event, you should get this result from logs of your Function's latest Pod:

    Click to copy
    User created: 123456789

Set an external Docker registry

By default, you install Kyma with Serverless that uses the internal Docker registry running on a cluster. This tutorial shows how to switch to an external Docker registry from one of these cloud providers using an override:

CAUTION: Function images are not cached in Docker Hub. The reason is that this registry is not compatible with the caching logic defined in Kaniko, which Serverless uses for building images.

Prerequisites

  • Docker Hub
  • GCR
  • ACR

Steps

Create required cloud resources

  • Docker Hub
  • GCR
  • ACR

Override Serverless configuration

Apply the following Secret with an override to a cluster or Minikube. Run:

  • Docker Hub
  • GCR
  • ACR

CAUTION: If you want to set an external Docker registry before you install Kyma, manually apply the Secret to the cluster before you run the installation script.
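
For Docker Hub, such an override Secret could look roughly like this. The keys and label set are assumptions based on the Serverless chart values, so verify them against the chart before applying; the addresses are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: serverless-registry-override   # illustrative name
  namespace: kyma-installer
  labels:
    installer: overrides
    component: serverless
type: Opaque
stringData:
  dockerRegistry.enableInternal: "false"
  dockerRegistry.username: "{USERNAME}"
  dockerRegistry.password: "{PASSWORD}"
  dockerRegistry.serverAddress: "{SERVER_ADDRESS}"
  dockerRegistry.registryAddress: "{REGISTRY_ADDRESS}"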

Trigger installation

Trigger Kyma installation or update it by labeling the Installation custom resource. Run:

kubectl -n default label installation/kyma-installation action=install

Synchronize Git resources with the cluster using a GitOps operator

This tutorial shows how you can automate the deployment of local Kyma resources on a cluster using the GitOps logic. You will use Kyma CLI to create an inline Python Function with a trigger. You will later push both resources to a GitHub repository of your choice and set up a GitOps operator to monitor the given repository folder and synchronize any changes in it with your cluster. For the purpose of this tutorial, you will install and use the Flux GitOps operator and a lightweight k3d cluster.

TIP: Although this tutorial uses Flux to synchronize Git resources with the cluster, you can use an alternative GitOps operator for this purpose, such as Argo.

Prerequisites

All you need before you start is to have the following:

Steps

These sections will lead you through the whole installation, configuration, and synchronization process. You will first install k3d and create a cluster for your custom resources (CRs). Then, you will need to apply the necessary Custom Resource Definitions (CRDs) from Kyma to be able to create Functions and triggers. Finally, you will install Flux and authorize it with the write access to your GitHub repository in which you store the resource files. Flux will automatically synchronize any new changes pushed to your repository with your k3d cluster.

Install and configure a k3d cluster

  1. Install k3d using Homebrew on macOS:

    brew install k3d
  2. Create a default k3d cluster with a single server node:

    k3d cluster create {CLUSTER_NAME}

    This command also sets your context to the newly created cluster. Run this command to display the cluster information:

    kubectl cluster-info
  3. Apply the functions.serverless.kyma-project.io and triggers.eventing.knative.dev CRDs from sources in the kyma repository. You will need them to create the Function and Trigger CRs on the cluster.

    kubectl apply -f https://raw.githubusercontent.com/kyma-project/kyma/master/resources/cluster-essentials/files/functions.serverless.crd.yaml && kubectl apply -f https://raw.githubusercontent.com/kyma-project/kyma/master/resources/cluster-essentials/files/triggers.eventing.knative.dev.crd.yaml
  4. Run this command to make sure the CRDs are applied:

    kubectl get customresourcedefinitions

Prepare your local workspace

  1. Create a workspace folder in which you will create source files for your Function:

    mkdir {WORKSPACE_FOLDER}
  2. Use the init Kyma CLI command to create a local workspace with default configuration for a Python Function:

    kyma init function --runtime python38 --dir $PWD/{WORKSPACE_FOLDER}

    TIP: Python 3.8 is only one of the available runtimes. Read about all supported runtimes and sample Functions to run on them.

    This command will download the following files to your workspace folder:

  • config.yaml with the Function's configuration
  • handler.py with the Function's code and the simple "Hello World" logic
  • requirements.txt with an empty file for your Function's custom dependencies

    It will also set sourcePath in the config.yaml file to the full path of the workspace folder:

    name: my-function
    namespace: default
    runtime: python38
    source:
      sourceType: inline
      sourcePath: {FULL_PATH_TO_WORKSPACE_FOLDER}

Install and configure Flux

You can now install the Flux operator, connect it with a specific Git repository folder, and authorize Flux to automatically pull changes from this repository folder and apply them on your cluster.

  1. Install Flux:

    brew install fluxctl
  2. Create a flux Namespace for the Flux operator's resources:

    kubectl create namespace flux
  3. Export the details of your GitHub repository: its name, your account name, and the related e-mail address. You must also specify the name of the folder in your GitHub repository to which you will push the Function and Trigger CRs built from local sources. If this folder does not exist in your repository yet, you will create it in further steps. Flux synchronizes the cluster with the content of this folder on the main (master) branch.

    export GH_USER="{USERNAME}"
    export GH_REPO="{REPOSITORY_NAME}"
    export GH_EMAIL="{EMAIL_OF_YOUR_GH_ACCOUNT}"
    export GH_FOLDER="{GIT_REPO_FOLDER_FOR_FUNCTION_RESOURCES}"
  4. Run this command to apply the Flux operator's resources to the flux Namespace on your cluster:

    fluxctl install \
    --git-user=${GH_USER} \
    --git-email=${GH_EMAIL} \
    --git-url=git@github.com:${GH_USER}/${GH_REPO}.git \
    --git-path=${GH_FOLDER} \
    --namespace=flux | kubectl apply -f -

    You will see that Flux created these resources:

    serviceaccount/flux created
    clusterrole.rbac.authorization.k8s.io/flux created
    clusterrolebinding.rbac.authorization.k8s.io/flux created
    deployment.apps/flux created
    secret/flux-git-deploy created
    deployment.apps/memcached created
    service/memcached created
  5. List all Pods in the flux Namespace to make sure that the one for Flux is in the Running state:

    kubectl get pods --namespace flux

    Expect a response similar to this one:

    NAME                    READY   STATUS    RESTARTS   AGE
    flux-75758595b9-m4885   1/1     Running   0          32m
  6. Obtain the SSH public key that Flux generated:

    fluxctl identity --k8s-fwd-ns flux
  7. Run this command to copy the SSH key to the clipboard:

    fluxctl identity --k8s-fwd-ns flux | pbcopy
  8. Go to Settings in your GitHub account:

    GitHub account settings

  9. Go to the SSH and GPG keys section and select the New SSH key button:

    Create a new SSH key

  10. Provide the new key name, paste the previously copied SSH key, and confirm changes by selecting the Add SSH Key button:

    Add a new SSH key

Create a Function

Now that Flux is authenticated to pull changes from your Git repository, you can start creating CRs from your local workspace files. You will create a sample inline Function and modify it by adding a trigger to it.

  1. Back in the terminal, clone your GitHub repository to your current workspace location:

    git clone git@github.com:${GH_USER}/${GH_REPO}.git
  2. Go to the repository folder:

    cd ${GH_REPO}
  3. If the folder you specified during the Flux configuration does not exist yet in the Git repository, create it:

    mkdir ${GH_FOLDER}
  4. Run the apply Kyma CLI command with the --dry-run flag to generate the Function CR in the YAML format and save it to the my-function.yaml file in your Git repository folder:

    kyma apply function --filename {FULL_PATH_TO_LOCAL_WORKSPACE_FOLDER}/config.yaml --output yaml --dry-run > ./${GH_FOLDER}/my-function.yaml
  5. Push the local changes to the remote repository:

    git add . # Stage changes for the commit
    git commit -m 'Add my-function' # Add a commit message
    git push origin main # Push changes to the "main" branch of your Git repository. If you have a repository with the "master" branch, use this command instead: git push origin master
  6. Go to the GitHub repository to check that the changes were pushed.

  7. By default, Flux pulls CRs from the Git repository and pushes them to the cluster in 5-minute intervals. To enforce immediate synchronization, run this command from the terminal:

    fluxctl sync --k8s-fwd-ns flux
  8. Make sure that the Function CR was applied by Flux to the cluster:

    kubectl get functions

Create a Trigger

  1. From your workspace folder, modify the local config.yaml file by adding trigger details (triggers) to your Function as follows:

    name: my-function
    namespace: default
    runtime: python38
    source:
      sourceType: inline
      sourcePath: {FULL_PATH_TO_WORKSPACE_FOLDER}
    triggers:
      - version: evt1
        source: the-source
        type: t1
  2. Create the Function resource from local sources and place the output in your Git repository folder:

    kyma apply function --filename {FULL_PATH_TO_LOCAL_WORKSPACE_FOLDER}/config.yaml --output yaml --dry-run > ./${GH_REPO}/${GH_FOLDER}/my-function.yaml
  3. Push the local changes to the remote repository:

    git add .
    git commit -m 'Update my-function'
    git push origin main # Or run: git push origin master
  4. Go to the GitHub repository and see that the my-function.yaml file was modified as intended.

  5. From the terminal, force Flux to immediately propagate the Git repository changes to the cluster:

    fluxctl sync --k8s-fwd-ns flux
  6. Check that the new Trigger CR for the Function was created:

    kubectl get triggers

You can see that Flux synchronized the resources and the new Trigger CR for the Function was added to your cluster.

Reverting feature

Once you set it up, Flux will keep monitoring the given Git repository folder for any changes. If you modify the existing resources directly on the cluster, Flux will automatically revert these changes and update the given resource back to its version on the main (master) branch of the Git repository.

Troubleshooting

Failure to build Functions

In its default configuration, Serverless uses persistent volumes as the internal registry to store Docker images for Functions. The default storage size of such a volume is 20 GB. When this storage becomes full, you will have issues with building your Functions. As a workaround, increase the default capacity up to a maximum of 100 GB by editing the serverless-docker-registry PersistentVolumeClaim (PVC) object on your cluster.

Follow these steps:

  1. Edit the serverless-docker-registry PVC:

    kubectl edit pvc -n kyma-system serverless-docker-registry
  2. Change the value of spec.resources.requests.storage to a higher one, such as 30Gi, to increase the PVC capacity:

    ...
    spec:
      resources:
        requests:
          storage: 30Gi
  3. Save the changes and wait for a few minutes. Use this command to check if the CAPACITY of the serverless-docker-registry PVC has changed as expected:

    kubectl get pvc serverless-docker-registry -n kyma-system

    You will get this result:

    NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    serverless-docker-registry   Bound    pvc-a69b96hc-ahbc-k85d-0gh6-19gkcr4yns4k   30Gi       RWO            standard       23d

If the value of the storage does not change, restart the Pod to which this PVC is bound to finish the volume resize.

To do this, follow these steps:

  1. List all available Pods in the kyma-system Namespace:

    kubectl get pods -n kyma-system
  2. Search for the Pod with the serverless-docker-registry-{UNIQUE_ID} name and delete it. See the example below:

    kubectl delete pod serverless-docker-registry-6869bd57dc-9tqxp -n kyma-system

    CAUTION: Do not remove Pods named serverless-docker-registry-self-signed-cert-{UNIQUE_ID}.

  3. Search for the serverless-docker-registry PVC again to check that the capacity was resized:

    kubectl get pvc serverless-docker-registry -n kyma-system