These guidelines provide rules and tips for all who contribute code to the Kyma repositories.

This file contains the official rules for naming objects in Kyma.

Naming and structure guidelines

General names

The product is called Kyma. Use kyma if uppercase letters are not available or not typical in a specific context.

Source code repositories

Place all repositories belonging to the product under the Kyma location on GitHub. Do not use Kyma-related prefixes, such as kyma-, in the repository names. Name the repository using lowercase, and separate words with dashes, for example catalog-service-api.

Kyma components

The components directory contains the sources of all Kyma components. A Kyma component is any Pod, container, or image deployed with and referenced in a Kyma module or chart to provide the module's functionality. Each subdirectory in the components directory defines one component.


Every Kyma component resides in a dedicated folder which contains its sources and a file with instructions on how to build and develop the component.

The component's name consists of a term describing the component, followed by the component type. The first part of the name may differ depending on the component's purpose. The following list describes the available types:

  • controller: A Kubernetes Controller which reacts on a standard Kubernetes resource or manages Custom Resource Definition resources. The component's name reflects the name of the primary resource it controls. Example: namespace-controller
  • controller-manager: A daemon that embeds all Kubernetes Controllers of a domain. This approach brings operational benefits compared to shipping all controllers separately. A controller-manager takes the name of the domain it belongs to. Example: asset-store-controller-manager
  • operator: A Kubernetes Operator which covers the application-specific logic behind the operation of the application, such as steps to upscale a stateful application. It reacts on changes made to custom resources derived from a given CustomResourceDefinition. It uses the name of the application it operates. Example: application-operator
  • job: A Kubernetes Job which performs a task once or periodically. It uses the name of the task it performs. Example: istio-patch-job (not renamed yet)
  • proxy: Acts as a proxy for an existing component, usually introducing a security model for this component. It uses the component's name. Example: apiserver-proxy
  • service: Serves an HTTP/S-based API, usually securely exposed to the public. It uses the domain name and the API it serves. Example: connector-service
  • broker: Implements the OpenServiceBroker specification to enrich the Kyma Service Catalog with the services of a provider. It uses the name of the provider it integrates.
  • configurer: A one-time task which usually runs as an Init Container to configure the application. Example: velero-plugins-configurer


Follow the development guide when you add a new component to the kyma repository.

Custom Resource Definition

This document provides guidelines on writing CustomResourceDefinition (CRD) files. The document explains where to place the CRD files and what content to include in them, and specifies the naming conventions to use.

Third-party CRDs

If you use a third-party CRD, apply the location and file name recommendations from this guide. Keep the content unaltered if the technical requirements allow it. The content must comply with the company's software usage policy and the third-party CRD's license.

Location and file name

Place the Kyma CRDs in the cluster-essentials Helm chart folder under the files subdirectory, and use singular file names, for example crontab.crd.yaml. Include additional names or terms in the file names to differentiate them from other CRDs, for example crontab-v1.crd.yaml. In the file name, do not include words which appear in the file path. For example, /resources/cluster-essentials/templates/resources-crontab.crd.yaml is not compliant because the word "resources" appears both in the file name and the path.

To differentiate CRDs from other types of Kubernetes resource files, end the file names with the .crd.yaml suffix and include the CRD name or any subset of it. If a file name consists of several words, separate them with hyphens, and use only lowercase letters.
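The suffix and hyphenation rules above can be expressed as a simple check. The following is an illustrative sketch, not a tool used by Kyma; the is_compliant helper is hypothetical:

```python
import re

# Compliant CRD file names: lowercase words separated by hyphens,
# ending with the ".crd.yaml" suffix, for example "crontab-v1.crd.yaml".
CRD_FILE_NAME = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*\.crd\.yaml$")

def is_compliant(file_name: str) -> bool:
    """Check a file name against the naming convention described above."""
    return CRD_FILE_NAME.match(file_name) is not None
```

Note that the check covers only the file name itself; avoiding words that already appear in the file path still requires looking at the full path.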

CRD ConfigMaps

During the initial phase of installation or upgrade, all CRDs from the files subdirectory are bundled into ConfigMaps. Those ConfigMaps are located under resources/cluster-essentials/templates, in the same location as the installation and upgrade Job, the ServiceAccount that the Job uses to apply CRDs, and the ClusterRoleBinding which binds the ServiceAccount to the proper ClusterRole for adequate permissions.

The number of ConfigMaps depends on the number of CRDs located under resources/cluster-essentials/files, but is not equal to it. One ConfigMap cannot exceed the 1 MB size limit, so it can hold only a limited number of CRDs. That is why the number of ConfigMaps that Helm creates during Kyma installation and upgrade depends on the overall number and size of the CRDs to apply.
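The 1 MB constraint can be illustrated by greedily packing CRD file sizes into ConfigMap-sized buckets. This is a sketch of the sizing argument only, not the actual Helm logic; the configmap_count helper is hypothetical:

```python
MAX_CONFIGMAP_BYTES = 1 * 1024 * 1024  # a single ConfigMap holds at most ~1 MB

def configmap_count(crd_sizes):
    """Return how many ConfigMaps are needed for CRD files of the given sizes (bytes)."""
    if not crd_sizes:
        return 0
    count, used = 1, 0
    for size in crd_sizes:
        if used + size > MAX_CONFIGMAP_BYTES:
            count += 1  # the current ConfigMap is full, start a new one
            used = 0
        used += size
    return count
```

For example, three 300 KB CRDs fit into one ConfigMap, while two 700 KB CRDs already need two.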

CRD installation and upgrade

To make the installation process more efficient and maintainable, we decoupled CRDs from charts and now store them in the cluster-essentials component.

All Kyma CRDs are installed or updated at the very beginning by the Kubernetes Job that is triggered by Helm's pre-install and pre-upgrade hooks.

There is one unified Job for all components. Its name starts with the crd-install- prefix, as do the names of all other Kubernetes objects that participate in the process of CRD installation and upgrade.


This is an example of a CRD:


```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # the name must match the spec fields below, and be in the format: <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # the group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  # the version name to use for REST API: /apis/<group>/<version>
  version: v1
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name to be used as an alias on the CLI and for display
    singular: crontab
    # the kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: CronTab
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
      - ct
```

When you deploy a CRD to the cluster, the Kubernetes API server starts serving the specified custom resource. When you create a new CRD, the Kubernetes API server creates a new RESTful resource path. Therefore, when implementing a naming strategy for CRDs, keep in mind that the names in the spec field define the new resource path, which looks as follows:

```
/apis/<group>/<version>/<plural>
```

To define a CRD for Kyma, refer to the example and follow these guidelines:

  • metadata:name: Use it as the name for the CustomResourceDefinition. It must use the {plural-name}.{group} format, combining the values of the group and plural fields. If you do not follow these rules, you receive a validation error when installing the CRD.

  • spec:group: The API group should reflect a collection of logically related objects. For example, all batch objects, such as Job or ScheduledJob, can belong to the batch API Group. As a best practice, use the fully-qualified domain name of the organization, preceded by a subgroup if necessary. The group name should reflect a capability-related and not an implementation-related name. Avoid prefixing the name with more than a subgroup. If the subgroup consists of multiple words, do not use spaces, hyphens, or CamelCase.

  • spec:version: Each API Group can exist in multiple versions. Use the version name in the URL, for example v1alpha1, v1beta1, or v1. For more details, see the Considerations section. For more information on versioning CRDs, see the Versioning section.

  • names:plural: Use the plural name in the URL. The plural field must be the same as the resource in an API URL, for example crontabs. If the name consists of multiple words, do not use spaces, hyphens, or CamelCase.

  • names:singular: Use the singular name as an alias in the CLI and for display. If the name consists of multiple words, do not use spaces, hyphens, or CamelCase.

  • names:kind: The kind of objects that you can create. The type must use CamelCase, for example CronTab.

  • names:shortNames: Specify a shorter string to match your resource in the CLI. Even though it is a list, include a single entry that is the most intuitive short name for the resource definition name, for example ct or ctabs.

Guidelines for other terms:

  • spec:scope: The scope must be either Namespaced or Cluster. It determines whether the resources created from this CRD live in a Namespace or are available cluster-wide.


Because you install and use the CRDs in a Kubernetes cluster, make sure that their versioning is consistent with the Kubernetes versioning conventions.

This convention uses different API versions for different levels of stability and support.

These are the versioning criteria:

  • The version name must contain the word alpha, for example v1alpha1, if the software may contain bugs, enabling a feature can expose them, and the feature may be disabled by default.
  • The version name must contain the word beta, for example v2beta3, if the software is well-tested, enabling features is safe, the features are enabled by default, and support for them is available, even though details can change.
  • A stable version is named vX, where X is an integer, for example v1. It contains features which appear in multiple subsequent versions of the released software.

For more details about the criteria, see the API changes documentation.
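The ordering of these stability levels follows the Kubernetes version-priority convention and can be sketched as a sort key. The version_key helper is illustrative, not part of Kyma:

```python
import re

def version_key(version: str):
    """Sort key for Kubernetes-style API versions: alpha < beta < stable (GA)."""
    match = re.match(r"^v(\d+)(?:(alpha|beta)(\d+))?$", version)
    if not match:
        raise ValueError(f"not a Kubernetes-style version: {version}")
    major, stage, minor = match.groups()
    stage_rank = {"alpha": 0, "beta": 1, None: 2}[stage]
    # Stability outranks the major number: v1 (GA) is more stable than v2beta3.
    return (stage_rank, int(major), int(minor or 0))
```

Sorting ["v1", "v1alpha1", "v2beta3", "v1beta1"] with this key orders the versions from least to most stable.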

GA release

Before the first GA release of Kyma, use the alpha versions to handle the unplanned scope changes. Alternatively, use a beta version if you do not plan to make any further changes, the CRD is covered by end-to-end tests, and you provide support, including the migration paths for version updates. After the GA release of Kyma, upgrade the existing CRDs to stable versions and ensure that you meet the requirements.


Because Kyma aims to use Kubernetes version 1.9 or higher, the system can validate custom objects. However, validation is a beta feature which can be disabled. Therefore, always check that the validation takes place and is reliable.

Use the available validation of custom objects with OpenAPI v3 schema. For more details, see the OpenAPI specification.

Additionally, these restrictions apply to the schema:

  • You cannot set the fields default, nullable, discriminator, readOnly, writeOnly, xml, and deprecated.
  • You cannot set the field uniqueItems to true.
  • You cannot set the field additionalProperties to false.
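As a sketch of what the allowed subset looks like in practice, a validation section for the hypothetical CronTab CRD can use type, pattern, minimum, and maximum, none of which are restricted. The field names mirror the upstream Kubernetes example and are an assumption, not an actual Kyma file:

```yaml
validation:
  openAPIV3Schema:
    properties:
      spec:
        properties:
          cronSpec:
            type: string
            pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
          replicas:
            type: integer
            minimum: 1
            maximum: 10
```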


For more details, see these documents:

Docker images

This document provides guidelines for the Docker images provided in the context of Kyma.

Naming and structure guidelines

Place images in the Kyma Docker registry. For development and proofs of concept, use the dedicated development location in the registry.

All images use the following attributes:

  • An image name which is the same as the related project's name. Do not use prefixes. If the image requires sub-modularization, append the sub-module name, as in istio-mixer.
  • A tag with a semantic version number, such as 0.3.2.
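Taken together, the two rules can be sketched as a check on the full image reference. The helper and the regular expression are illustrative, not a Kyma tool:

```python
import re

# <project-name>[-<sub-module>]:<semantic-version>, for example "istio-mixer:0.3.2"
IMAGE_REF = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*:\d+\.\d+\.\d+$")

def follows_convention(image_ref: str) -> bool:
    """Check an image reference against the naming and tagging rules above."""
    return IMAGE_REF.match(image_ref) is not None
```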

For example, an initializer image for the Helm Broker extension combines the project name with the appended sub-module and a semantic version tag.

Base images

Base all images on an image that is as small as possible in size and dependency. A base image must have a specified version. Do not use the latest tag.

An application based on Go should originate from a scratch image. If a scratch image lacks the specific tooling, you can use an alpine base image with an updated package catalog. A JavaScript-based application should originate from an nginx-alpine base image with an updated package catalog.

Label images

All images use the source label with a link to the GitHub repository containing the sources.

Define labels as in the following example:

```
source = {REPOSITORY_URL}
```
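In a Dockerfile, such a label can be set with the LABEL instruction. The repository URL below is a hypothetical placeholder:

```dockerfile
# Set the source label to the URL of the GitHub repository with the sources.
LABEL source="https://github.com/kyma-project/{REPOSITORY_NAME}"
```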

Third-party images

Kyma uses some Docker images that were not originally built or hosted by us. For security and reliability reasons, we copy all external images to our own Docker registry. There are two solutions to this problem: the third-party-images repository and the image-syncer tool.

Third-party repository

If you want to rebuild an image from scratch, use the third-party-images repository. Create a separate directory for every component, provide a Dockerfile and a Makefile, and create a ProwJob for building your images. See the repository content for more information.

Image syncer

If you want to "cache" an image from an external registry, use the image-syncer tool.

To copy the image to our registry, modify the external-images.yaml file. After your change is merged to the main branch, you can find the new image URL in the logs of the post-main-test-infra-image-syncer-run job.

For example, the source image grafana/grafana:7.0.6 is copied to a corresponding path in our registry. You can then use that URL in your Helm charts.


Go from scratch:

```dockerfile
FROM scratch
ADD main /
CMD ["/main"]
```

Go from alpine:

```dockerfile
FROM alpine:3.7
RUN apk --no-cache upgrade && apk --no-cache add curl
ADD main /
CMD ["/main"]
```

JavaScript from nginx:

```dockerfile
FROM nginx:1.13-alpine
RUN apk --no-cache upgrade
COPY nginx.conf /etc/nginx/nginx.conf
COPY /build var/public
CMD ["nginx", "-g", "daemon off;"]
```


This guide covers the best practices for creating Helm charts every Kyma team should employ.

Do not use the crd-install hook

Helm doesn't trigger the crd-install hook in the upgrade process. Because of that, new Custom Resource Definitions (CRDs) aren't installed. See the alternatives to using the crd-install hook:

  1. Make the CRD part of a separate chart which must be installed before the chart that requires the CRD.
  • Implementation effort: low
  • Pros:

    • No additional implementation effort required.
    • The CRD is a separate file which can be used on its own, for example for tests.
  • Cons:

    • Requires creating more charts.
    • The CRD is managed by Helm and comes with all of the associated limitations.
  2. Register the CRD through its controller.
  • Implementation effort: medium
  • Pros:

    • The CRD is managed by a component that is logically responsible for it.
    • The CRD is not subject to the Helm limitations.
  • Cons:

    • Requires a controller for the CRD.
    • The CRD is not listed as a part of Helm release.
    • The CRD is not available as a file.
  3. Create a job that registers the new CRD and removes its old version. The job must be triggered on pre-install and pre-upgrade Helm hooks.
  • Implementation effort: high
  • Pros:

    • The CRD can be a separate entity.
    • Migration can be easily implemented in the job.
    • The CRD is not subject to the Helm limitations.
  • Cons:

    • Jobs are troublesome to debug.
    • The CRD is not listed as a part of Helm release.
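For the third alternative, the hook wiring can be sketched as a Job template with Helm hook annotations. The resource name, image, and command below are hypothetical placeholders, not actual Kyma resources:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: crd-install-job
  annotations:
    # Run before install and before upgrade, replacing any previous hook Job.
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: crd-install
          image: "{CRD_INSTALL_IMAGE}"  # hypothetical image that applies the CRDs
          command: ["/bin/crd-install", "--remove-old-versions"]  # hypothetical command
```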

Moving resources between charts

Moving resources, such as ConfigMaps, Deployments, CRDs, and others, from one chart to another is problematic because it causes Kyma to lose backward compatibility. A deployment whose CRD was moved cannot be upgraded to a newer version.

For example, the ABC CRD is part of the YYYY chart in the 0.6 release and moves to the ZZZZ chart in the 0.7 release. Kyma cannot apply changes to the ZZZZ chart because its CRD, ABC, already exists in the 0.6 version as a part of the YYYY chart.

To avoid these problems, rename your resources when you move them from one chart to another.

NOTE: Using this approach removes the old resource and creates a new one.

When a CRD is deleted, all of the associated implementations are removed, which may cause the user to lose data. Because of this risk, migrate the CRDs instead of simply moving them between charts. Follow this procedure:

  1. Backup all existing implementations.
  2. Remove the old CRD.
  3. Run the upgrade.
  4. Restore all CRD implementations.

Defining metadata schema for Kyma charts

This section covers the minimal requirements and conventions of metadata schema definition for Kyma charts.

The schema defines configuration parameters important from a customer perspective, as opposed to all parameters you can find in the values.yaml files. These parameters can vary depending on installation. When defining a schema for a Kyma chart, follow these guidelines:

  • Make sure to place the values.schema.json metadata file where the chart's values.yaml file resides. See the screenshot for reference.

    Example 1

  • For each schema, define a description including detailed information about the schema and the Helm chart.

  • Schema definition does not support dot (.) notation. This means that if you have nested value properties, your schema definition must define the object structure. See the .Values.loki.port property for a sample object structure defined within a schema.

  • For each schema object, define a description to explain its purpose.

  • For each configuration property, declare:

    • A description property to explain the configuration purpose.
    • A default property and its value.
    • A data type.
    • An examples property that lists possible example values, such as supported storage types. This property is optional.

An example chart values.yaml file looks as follows:

```yaml
loki:
  port: 3100
  nameOverride: loki
  config:
    auth_enabled: false
    store: inmemory
promtail:
  port: 3101
  nameOverride: promtail
```

An example values.schema.json file looks as follows:

```json
{
  "$schema": "",
  "description": "Schema definition for logging helm chart values",
  "type": "object",
  "properties": {
    "loki": {
      "description": "Configuration properties for component loki",
      "type": "object",
      "properties": {
        "port": {
          "description": "TCP port loki expose",
          "default": 3100,
          "type": "number"
        },
        "nameOverride": {
          "description": "Property to override service name of loki",
          "default": "loki",
          "type": "string"
        },
        "config": {
          "type": "object",
          "description": "Loki service configuration",
          "contentEncoding": "base64",
          "properties": {
            "auth_enabled": {
              "description": "Setting to enable or disable loki basic http authentication",
              "default": false,
              "type": "boolean"
            },
            "store": {
              "description": "Storage type of log chunks",
              "type": "string",
              "default": "inmemory",
              "examples": ["consul", "inmemory"]
            }
          }
        }
      }
    },
    "promtail": {
      "description": "Configuration properties for component promtail",
      "type": "object",
      "properties": {
        "port": {
          "description": "TCP port promtail expose",
          "default": 3101,
          "type": "number"
        },
        "nameOverride": {
          "description": "Property to override service name of promtail",
          "default": "promtail",
          "type": "string"
        }
      }
    }
  }
}
```

HTTP API design

This file contains the official guidelines for defining APIs in Kyma. It is an evolving set of guidelines covering various aspects of API definition.

Target audience

The target audience includes these groups:

  • Internal developers who build various features of Kyma.
  • Customer developers who use Kyma to customize external solutions.

Name the HTTP headers

If possible, use a standard HTTP header rather than a custom one so that the semantics match.

For HTTP standard headers, refer to the registry of headers maintained by IANA.

For custom headers, use Kyma- as a prefix to indicate that the headers originate in Kyma. For example, Kyma-Event-Type. This helps to differentiate these headers from the headers that come from the external sources, such as a Storefront. According to the RFC 6648 recommendation, incorporate the organization's name in custom parameters that are never standardized. Do not use X- as a prefix because it is deprecated according to RFC 6648.
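The prefix rule can be sketched as a simple check; the helper is illustrative, not a Kyma API:

```python
def is_valid_custom_header(name: str) -> bool:
    """Return True if a custom header follows the Kyma- prefix convention.

    The X- prefix is deliberately not used because RFC 6648 deprecates it.
    """
    return name.startswith("Kyma-")
```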

Test strategy

This document is a general guide to how tests are performed in Kyma. It aims to clearly describe technical requirements for all Kyma test suites. It also explains the rationale behind them so they can be challenged when the need arises. In particular, this document is about:

  • Types of tests to be implemented
  • Tools that are used for testing
  • Responsibilities of different persons in the quality assurance process
  • Test automation
  • Manual testing

Every contribution to Kyma must be compliant with this strategy.

Test types

We define several kinds of tests in Kyma. This section aims to describe all of them in detail. By the end of it you should be able to answer these questions:

  • What are those tests and what is their purpose?
  • Who defines test cases and what should they focus on?
  • How are tests implemented and where is the code located?
  • How are the tests integrated into continuous integration (CI) pipeline?

Code quality checks

The validation of syntactic and semantic structure of the code contributes to better readability and maintainability and helps to avoid some programming errors.

In general, it is a maintainer’s decision which checks are applied to a particular component, but we defined a minimal required set of checks that must be applied to all components.

The following tools have been defined as the minimal checks to be done for GO code:

  • golint as a style validation linter
  • go vet for static code analysis to find possible bugs or suspicious constructs
  • gofmt to standardize the code formatting

For JavaScript and TypeScript code, this validation will be done using:

  • ESLint for JavaScript code validation
  • TSLint for TypeScript code validation

Code quality checks are required to pass before any contribution can be merged into the main branch. All quality checks for a component should be put in its build script. Thanks to that, code analysis is executed automatically on CI. Anyone can also execute these tools manually in a local development environment.

Component tests

By component tests, we understand all tests that do not cross the component boundary. This may include tests with a different granularity of scope, e.g. tests for a single module (unit tests) or tests checking a component as a whole. The purpose of component testing is to provide fast feedback to a contributor that implements a given functionality. Test code must be placed in the same location in the repository as the tested code.

Every change in the code should be verified by a set of component tests, but strict test coverage is not required. The contributor is responsible for writing scenarios that are most beneficial and give confidence that the new software is working as expected. Maintainers decide if the implemented test suite covers the functionality sufficiently.

Component tests are required to pass before any contribution can be merged into the main branch. A command to run unit tests must be a part of the component build script. Thanks to that, unit tests are executed automatically on the CI server. Anyone can also execute component tests in the local environment.

Integration tests

Integration tests are applications that run within a Kyma cluster and verify Kyma behavior. Their purpose is to check that Kyma components work as expected in a production-like environment. They focus on a single component and its interactions.

Integration tests should verify if a component communicates properly with its dependencies and clients. Test cases are not formalized and it is up to contributors to write them in the way they find it beneficial. The maintenance cost of integration tests is much higher than the cost of component tests so the latter should be preferred if possible. Any internal logic should be tested by component tests.

Integration tests are built as Docker containers and deployed to a Kyma cluster using Octopus. The test application must fail if the aspect of Kyma that is being tested does not meet its acceptance criteria. They should be fast and safe to run concurrently. The hard limit on how long a single test can run is 2 minutes. The code of test applications is located in the tests/integration/ folder in Kyma. Tests for a given component should be placed in a subdirectory named after that component, e.g. tests/integration/api-controller/.

Integration tests are required to pass before any change can be merged into the main branch. As the results may change when running on different Kubernetes implementations, they must finish successfully both on Minikube and on a cloud Kubernetes cluster.

End-to-end tests

End-to-end tests are applications meant to verify complete user interactions with Kyma. They mimic user behavior in a set of predefined scenarios to check if Kyma meets the business requirements. Because of their overarching nature, they must use only entry points meant to be used by end users.

Test scenarios are provided by the Product Owner. Scenario descriptions should be written down in a user-facing document. It is meant to provide users with an easy-to-grasp introduction to a given Kyma functionality.

The implementation of E2E tests may vary. If possible, they should be Docker applications deployed using Octopus. However, some scenarios may cross cluster boundaries and thus require different methods of execution. A description of such a test must contain an explanation of how to run it manually. All E2E test code must be placed under tests/end-to-end/ in the kyma repository with a directory name reflecting the business requirement it is testing, e.g. tests/end-to-end/upgrade.

E2E scenarios are executed as periodic jobs on the CI server. They may be resource- and time-consuming so they are not required to pass before merging to the main branch. There can be exceptions to this rule such as an E2E test which is required to pass before merging is allowed.

Contract tests

Contract tests are Docker applications, just like integration tests. The difference is that they test whether an external solution that Kyma relies on works as expected. Their main goal is to make it possible to safely upgrade third parties and to know when the API contract is broken.

Test cases are defined by a contributor who integrates the external solution with Kyma. They shouldn't test 3rd party code extensively, but rather check if the contract defined by the provider is being kept. Ideally, every API used by Kyma should be covered.

Contract tests are Docker applications run by Octopus. The code is placed in the tests/contract/ directory in a subdirectory named after a solution they are testing, e.g. tests/contract/knative-serving. They should be fast and safe to run concurrently. The hard limit on how long a single test can run is 2 minutes. They shouldn’t rely on any Kyma component.

Contract tests are required to pass before merging a change into the main branch. As they should rely only on the solution that is being tested, they may be skipped by the CI server if the change is not related to the solution.


Because tests are developed as code, some of the rules outlined above apply to them too. Every test must be covered at least by code quality checks. Tests run as applications can also be unit tested if applicable. Also, tests must pass a code review by one of Kyma maintainers. The code review process is documented in the contributing guidelines in the community repository.

The reviewer should check not only the quality of the code implementing the functionality but also the quality of the code validating it, and should pay attention to the implemented test cases. Test coverage should give confidence that the software works as expected. There are currently no requirements for measuring test coverage.

Continuous integration

Contributors should write tests at the same time as they make changes in the production code, according to test-driven development (TDD) practices. Such tests are automatically executed as a part of the CI process. Thanks to that approach, newly created functionality is thoroughly tested before it is merged to the main branch.


Some tests described in this document are required to pass before a change can be merged into the main branch. No new code change can be merged if tests are failing on the CI server. Tests shouldn't be skipped or made less strict just to make them pass if the requirements have not changed.


Besides checks on pull requests, all tests also run on the main branch to verify that new submissions haven't broken Kyma. If a test execution fails, the corresponding team is responsible for evaluating the failure and determining the root cause. Problems found this way should be reported as GitHub issues and labeled test-failing.

Nightly and weekly builds

In addition to required and periodic checks, there are also nightly and weekly clusters that check Kyma stability. They are created every night or once a week, respectively, from the main branch at the time of the cluster creation. Only integration and contract tests run on them, in fixed intervals. Failures on those clusters should be treated the same way as postsubmit failures.

Manual testing

Kyma test coverage is not complete yet, and probably never will be, due to the nature of software development. We cannot predict all test cases and discover all bugs upfront. Whenever a bug is discovered later, we add an automated test to cover that scenario. Thanks to that, we avoid making the same mistakes in the future.

Because of the facts outlined above, some manual tests are always required before Kyma is released. Release candidates have to be verified manually following this process:

  1. Every time a missing test case is identified or a bug is found, an issue to cover that scenario with automated tests is created. The issue is labeled test-missing and assigned to the backlog of a team maintaining this area. In the case of unclear responsibility, one team is chosen. The issue description must contain a procedure on how to verify this scenario manually.
  2. While creating a new release candidate, the Release Master creates a place to track progress (for example, a spreadsheet) with all open issues labeled test-missing. Teams assigned to issues are responsible for manually checking if the functionality not covered with tests works as expected, and for reporting any problems. They mark their progress in the space provided by the Release Master.
  3. If any test is unstable and impedes the release, the Release Master may decide to disable that test. The Release Master creates an issue to enable that test again and labels it test-missing. The responsible team must then perform the test manually on the release candidate.

Using Telepresence for local Kyma development

This document is a general guide to local development with Telepresence.

Certain Kyma components store their state in Kubernetes custom resources and, therefore, depend on Kubernetes.
Mocking the dependency and developing locally is not possible, and manual deployment on every change is a mundane task.

Telepresence is a tool that connects your local process to a remote Kubernetes cluster through a proxy, which lets you easily debug locally.
It replaces a container in the specified Pod, opens a new local shell, and proxies the network traffic from the local shell through the Pod.

Telepresence enables you to make HTTP requests from your local machine to services in the cluster that are not exposed outside. When you run a server in this shell, other Kubernetes services can access it.

To start developing with Telepresence, follow these steps:

NOTE: This guide was tested on version 0.101.

  1. Install Telepresence.

  2. Run Kyma locally or connect to a remote cluster. Then, configure your local kubectl to use the desired Kyma cluster.

  3. To check the container name of the deployment to swap, run:

    ```shell
    kubectl get deployment {DEPLOYMENT_NAME} -o jsonpath='{.spec.template.spec.containers[0].name}'
    ```
  4. Run this command to swap the deployment:

    ```shell
    telepresence --namespace {NAMESPACE} --swap-deployment {DEPLOYMENT_NAME}:{CONTAINER_NAME} --run-shell
    ```
  5. Every Kubernetes Pod has the /var/run/secrets directory mounted. The Kubernetes client uses it in the component services. By default, Telepresence copies this directory and stores its path in the $TELEPRESENCE_ROOT variable, available in the Telepresence shell. The variable resolves to a path under /tmp/.... Because the service expects the directory at /var/run/secrets, create a symlink:

    ```shell
    sudo ln -s $TELEPRESENCE_ROOT/var/run/secrets /var/run/secrets
    ```
  6. To build the component, run CGO_ENABLED=0 go build ./cmd/{COMPONENT-NAME}. The process runs locally on your machine, and all Kubernetes services that call the component can access it. Use the same command to run various Application Connector services.