Rafter

Overview

Rafter is a solution for storing and managing different types of public assets, such as documents, files, images, API specifications, and client-side applications. It uses an external solution, MinIO, for storing assets. The whole concept relies on Kubernetes custom resources (CRs) managed by the Asset, Bucket, and AssetGroup controllers (and their cluster-wide counterparts) grouped under the Rafter Controller Manager. These CRs include:

  • Asset CR which manages a single asset or a package of assets
  • Bucket CR which manages buckets in which these assets are stored
  • AssetGroup CR which manages a group of Asset CRs of a specific type

Rafter enables you to manage assets using supported webhooks. For example, if you use Rafter to store a specification, you can additionally define a webhook service that Rafter should call before the file is sent to storage. The webhook service can:

  • Validate the file
  • Mutate the file
  • Extract some of the file information and put it in the status of the custom resource

Rafter comes with a set of services and extensions compatible with Rafter webhooks, such as the AsyncAPI Service and the Front Matter Service, both described in detail later in this document.

CAUTION: Rafter does not enforce any access control. To protect the confidentiality of your information, use Rafter only to store public data. Do not use it to process and store any kind of confidential information, including personal data.

What Rafter can be used for

  • Rafter is based on CRs. It extends the Kubernetes API and is intended mainly for developers building their solutions on top of Kubernetes.
  • Rafter is a file store that allows you to programmatically modify or validate files and extract their metadata before they go to storage. The content of those files can then be fetched through an API. This is the basic functionality of the headless CMS concept. If you want to deploy an application to Kubernetes and enrich it with additional documentation or specifications, you can do it with Rafter.
  • Rafter is an S3-like file store, also for files written in HTML, CSS, and JS. This means that Rafter can be used as a hosting solution for client-side applications.

What Rafter is not

Benefits

This solution offers a number of benefits:

  • It's flexible. You can use it for storing various types of assets, such as Markdown documents, ZIP, PNG, or JS files.
  • It's scalable. It allows you to store assets on a production system, using cloud provider storage services. At the same time, you can apply it to local development and use MinIO to store assets on-premise.
  • It allows you to avoid vendor lock-in. When using Rafter in a production system, you can seamlessly switch between different major service providers, such as AWS S3 or Azure Blob.
  • It's location-independent. It allows you to expose files directly to the Internet and replicate them to different regions. This way, you can access them easily, regardless of your location.

Rafter in Kyma

Kyma provides a Kubernetes-based solution for managing content that relies on the custom resource (CR) extensibility feature and Rafter as a backend mechanism. This solution allows you to upload multiple and grouped data for a given documentation topic and store them as Asset CRs in external buckets located in MinIO storage. All you need to do is to specify topic details, such as documentation sources, in an AssetGroup CR or a ClusterAssetGroup CR and apply it to a given Namespace or cluster. The CR supports various documentation formats, including images, Markdown documents, AsyncAPI, OData, and OpenAPI specification files. You can upload them as single, direct file URLs and packed assets (ZIP or TAR).

The content management solution offers these benefits:

  • It provides a unified way of uploading different document types to a Kyma cluster.
  • It supports baked-in documentation. Apart from the default documentation, you can add your own and group it as you like, the same way you use micro frontends to personalize views in the Console UI. For example, you can add contextual help for a given Service Broker in the Service Catalog.

Architecture

Basic asset flow

This diagram shows a high-level overview of how Rafter works:

NOTE: This flow also applies to the cluster-wide counterparts of all CRs.

Basic architecture

  1. The user creates a Bucket CR. This triggers the creation of a bucket in MinIO Gateway where assets are stored.
  2. The user creates an Asset CR that contains the reference to assets.
  3. Services implemented for Rafter webhooks optionally validate, mutate, or extract data from assets before uploading them into buckets.
  4. Rafter uploads assets into buckets in MinIO Gateway.
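The flow above can be sketched with a minimal Bucket and Asset CR pair. The Asset CR fields follow the sample Asset CR shown later in this document; the Bucket CR fields (region, policy) are assumptions inferred from the chart parameters and may differ in your Rafter version:

```yaml
# A minimal Bucket CR; the Bucket Controller creates a corresponding
# bucket in MinIO Gateway (step 1).
apiVersion: rafter.kyma-project.io/v1beta1
kind: Bucket
metadata:
  name: my-bucket
  namespace: default
spec:
  region: us-east-1   # assumed field, matching the chart's bucket.region default
  policy: readonly    # assumed field
---
# A minimal Asset CR that references the bucket and points to the
# asset source (steps 2-4).
apiVersion: rafter.kyma-project.io/v1beta1
kind: Asset
metadata:
  name: my-asset
  namespace: default
spec:
  source:
    url: https://some.domain.com/main.js
    mode: single
  bucketRef:
    name: my-bucket
```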

Read more about the role of main Rafter components and controllers that manage them:

  • Asset custom resource (CR) is an obligatory CR in which you define the asset you want to store in a given storage bucket. Its definition requires the asset name and mode, the name of the Namespace in which it is available, the address of its web location, and the name of the bucket in which you want to store it. Optionally, you can specify the validation and mutation requirements that the asset must meet before it is stored.

  • Asset Controller (AC) manages the Asset CR lifecycle.

  • AssetGroup custom resource (CR) orchestrates the creation of multiple Asset CRs in a given Namespace.

  • AssetGroup Controller (AGC) creates Asset CRs based on an AssetGroup CR definition. If the AssetGroup CR defines two sources of assets, such as asyncapi and markdown, the AGC creates two Asset CRs. The AGC also monitors the status of the Asset CR defined in the appropriate AssetGroup CR and updates the status of the AssetGroup CR accordingly.

  • Bucket CR is an obligatory CR in which you define the name of the bucket for storing assets.

  • Bucket Controller (BC) manages the Bucket CR lifecycle.

  • Validation Service is an optional service which ensures that the asset meets the validation requirements specified in the Asset CR before uploading it to the bucket. The service returns the validation status to the AC. See the example of the AsyncAPI Service.

  • Mutation Service is an optional service which ensures that the asset is modified according to the mutation specification defined in the Asset CR before it is uploaded to the bucket. The service returns the modified asset to the AC. See the example of the AsyncAPI Service.

  • Extraction Service is an optional service which extracts metadata from assets. The metadata information is stored in the CR status. The service returns the asset metadata to the AC. See the example of the Front Matter Service.

  • MinIO Gateway is a MinIO cluster mode which is a production-scalable storage solution. It ensures flexibility of using asset storage services from major cloud providers, including Azure Blob Storage, Amazon S3, and Google Cloud Storage.

NOTE: All CRs and controllers have their cluster-wide counterparts, names of which start with the Cluster prefix, such as ClusterAssetGroup CR.

AssetGroup CR flow

This diagram provides more details of the AssetGroup CR flow, the controller that manages it, and underlying processes:

NOTE: This flow also applies to the ClusterAssetGroup CR.

AssetGroup CR flow

  1. The user creates an AssetGroup CR in a given Namespace.
  2. The AssetGroup Controller (AGC) reads the AssetGroup CR definition.
  3. If the AssetGroup CR definition does not provide a reference name (bucketRef) of the Bucket CR, the AssetGroup Controller checks if the default Bucket CR already exists in this Namespace. If it does not exist yet, the AssetGroup Controller creates a new Bucket CR with:
  • the rafter-public-{ID} name, where {ID} is a randomly generated string, such as rafter-public-6n32wwj5vzq1k,
  • the rafter.kyma-project.io/default: true label,
  • the rafter.kyma-project.io/access: public label.
  4. The AGC creates Asset CRs in the number corresponding to the number of sources specified in the AssetGroup CR. It adds the rafter.kyma-project.io/type and rafter.kyma-project.io/asset-group labels to every Asset CR definition. It also adds the bucket name under the bucketRef field to every Asset CR definition.
  5. The AGC verifies if the Asset CRs are in the Ready phase and updates the status of the AssetGroup CR accordingly. It also adds the bucket reference name to the AssetGroup CR.
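A minimal AssetGroup CR matching this flow might look as follows. The shape of the sources entries (type, name, mode, url) is a sketch inferred from the flow description, not a verified schema; names and URLs are illustrative:

```yaml
# A minimal AssetGroup CR with two sources; the AGC creates one Asset CR
# per source. bucketRef is omitted, so the AGC creates (or reuses) the
# default rafter-public-{ID} bucket in this Namespace.
apiVersion: rafter.kyma-project.io/v1beta1
kind: AssetGroup
metadata:
  name: service-catalog
  namespace: default
spec:
  sources:
    - type: asyncapi
      name: asyncapi
      mode: single
      url: https://some.domain.com/events-spec.yaml
    - type: markdown
      name: docs
      mode: package
      url: https://some.domain.com/docs.zip
```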

Asset and Bucket CRs flow

This diagram provides an overview of the basic Asset and Bucket CRs flow and the role of particular components in this process:

Rafter's architecture

  1. The user creates a bucket through a Bucket CR.
  2. The Bucket Controller (BC) listens for new events and acts upon receiving the Bucket CR creation event.
  3. The BC creates the bucket in the MinIO Gateway storage.
  4. The user creates an Asset CR which specifies the reference to the asset source location and the name of the bucket for storing the asset.
  5. The Asset Controller (AC) listens for new events and acts upon receiving the Asset CR creation event.
  6. The AC reads the CR definition, checks if the Bucket CR is available, and if its name matches the bucket name referenced in the Asset CR. It also verifies if the Bucket CR is in the Ready phase.
  7. If the Bucket CR is available, the AC fetches the asset from the source location provided in the CR. If the asset is a ZIP or TAR file and the Asset CR's mode is set to package, the AC unpacks and optionally filters the asset before uploading it into the bucket.
  8. Optionally, the AC validates or modifies the asset, or extracts the asset's metadata, if such a requirement is defined in the Asset CR. To do so, the AC communicates with the validation, mutation, and metadata services according to the specification defined in the Asset CR.
  9. The AC uploads the asset to MinIO Gateway, into the bucket specified in the Asset CR.
  10. The AC updates the status of the Asset CR with the storage location of the file in the bucket.

Details

Asset custom resource lifecycle

Learn about the lifecycle of the Asset custom resource (CR) and how its creation, removal, or a change in the bucket reference affects other Rafter components.

NOTE: This lifecycle also applies to the ClusterAsset CR.

Create an Asset CR

When you create an Asset CR, the Asset Controller (AC) receives a CR creation Event, reads the CR definition, verifies if the bucket exists, downloads the asset, unpacks it, and stores it in MinIO Gateway.

Create an asset

Remove an Asset CR

When you remove the Asset CR, the AC receives a CR deletion Event and deletes the asset from MinIO Gateway.

Delete an asset

Change the bucket reference

When you modify an Asset CR by updating the bucket reference in the Asset CR to a new one while the previous bucket still exists, the lifecycle starts again. The asset is created in a new storage location and this location is updated in the Asset CR.

Unfortunately, this causes duplication of data because the assets in the previous bucket are not cleaned up by default. To avoid multiplying assets, first remove the old Bucket CR and only then modify the existing Asset CR with the new bucket reference.

Change the bucket reference
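To switch buckets safely, delete the old Bucket CR first and then update the bucket reference in the Asset CR. A hypothetical fragment of the updated Asset CR spec, with illustrative bucket names:

```yaml
# Fragment of the Asset CR spec after the old Bucket CR was removed.
spec:
  bucketRef:
    name: my-new-bucket   # previously pointed to the removed bucket
```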

Change the Asset CR specification

When you modify the Asset CR specification, the lifecycle starts again. The previous asset content is removed and no longer available.

Change the Asset CR specification

Bucket custom resource lifecycle

Learn about the lifecycle of the Bucket custom resource (CR) and how its creation and removal affect other Rafter components.

NOTE: This lifecycle also applies to the ClusterBucket CR.

Create a Bucket CR

When you create a Bucket CR, the Bucket Controller (BC) receives a CR creation Event and creates a bucket with the name specified in the CR. It is created in the MinIO Gateway storage under the {CR_name}-{ID} location, such as test-bucket-1b19rnbuc6ir8, where {CR_name} is the name field from the Bucket CR and {ID} is a randomly generated string. The status of the CR contains a reference URL to the created bucket.

Create a bucket
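For example, a Bucket CR named test-bucket results in a suffixed storage location. The names below are illustrative:

```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: Bucket
metadata:
  name: test-bucket   # {CR_name}
  namespace: default
# The BC creates the bucket in MinIO Gateway under {CR_name}-{ID},
# for example test-bucket-1b19rnbuc6ir8, and writes the reference URL
# to the CR status.
```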

Remove a Bucket CR

When you remove the Bucket CR, the BC receives a CR deletion Event and removes the bucket with the whole content from MinIO Gateway.

The Asset Controller (AC) also monitors the status of the referenced bucket. The AC checks the Bucket CR status to make sure the bucket exists. If you delete the bucket, the AC receives information that the files are no longer accessible and the bucket was removed. The AC updates the status of the Asset CR to ready: False and removes the asset storage reference. The Asset CR is still available and you can use it later for a new bucket.

Delete a bucket

AssetGroup custom resource lifecycle

NOTE: This lifecycle also applies to the ClusterAssetGroup CR.

Asset CR manual changes

The AssetGroup custom resource (CR) coordinates Asset CR creation, deletion, and modification. The AssetGroup Controller (AGC) verifies the AssetGroup definition on a regular basis and creates, deletes, or modifies Asset CRs accordingly.

The AssetGroup CR acts as the single source of truth for the Asset CRs it orchestrates. If you modify or remove any of them manually, the AGC automatically overwrites such an Asset CR or updates it based on the AssetGroup CR definition.

AssetGroup CR and Asset CR dependencies

Asset CRs and AssetGroup CRs are also interdependent in terms of names, definitions, and statuses.

Names

The name of every Asset CR created by the AGC consists of these three elements:

  • The name of the AssetGroup CR, such as service-catalog.
  • The source type of the given asset in the AssetGroup CR, such as asyncapi.
  • A randomly generated string, such as 1b38grj5vcu1l.

The full name of such an Asset CR that follows the {assetGroup-name}-{asset-source}-{suffix} pattern is service-catalog-asyncapi-1b38grj5vcu1l.

Labels

There are two labels in every Asset CR created from an AssetGroup CR. Both are based on the AssetGroup CR definition:

  • rafter.kyma-project.io/type equals a given type parameter from the AssetGroup CR, such as asyncapi.

  • rafter.kyma-project.io/asset-group equals the name metadata from the AssetGroup CR, such as service-catalog.
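Putting the naming and labeling rules together, the metadata of a generated Asset CR looks roughly like this (the random suffix is illustrative):

```yaml
# Metadata of an Asset CR generated by the AGC from the service-catalog
# AssetGroup CR's asyncapi source.
metadata:
  name: service-catalog-asyncapi-1b38grj5vcu1l
  labels:
    rafter.kyma-project.io/type: asyncapi
    rafter.kyma-project.io/asset-group: service-catalog
```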

Statuses

The status of the AssetGroup CR depends heavily on the status phase of all Asset CRs it creates. It is:

  • Ready when all related Asset CRs are already in the Ready phase.
  • Pending when it awaits the confirmation that all related Asset CRs are in the Ready phase. If any Asset CR is in the Failed phase, the status of the AssetGroup CR remains Pending.
  • Failed when processing of the AssetGroup CR fails. For example, the AssetGroup CR can fail if you provide incorrect or duplicated data in its specification.

MinIO and MinIO Gateway

The whole concept of Rafter relies on MinIO as the storage solution. It follows Kyma's manifesto and the "batteries included" rule by providing this on-premise solution by default.

Depending on the usage scenario, you can:

  • Use MinIO for local development.
  • Store your assets on a production scale using MinIO in Gateway mode.

Rafter ensures that both usage scenarios work for Kyma, without additional configuration of the built-in controllers.

Development mode storage

MinIO is an open-source asset storage server with Amazon S3-compatible API. You can use it to store various types of assets, such as documents, files, or images.

In the context of Rafter, the Asset Controller stores all assets in MinIO, in dedicated storage space.

MinIO

Production storage

For production purposes, Rafter uses MinIO Gateway which:

  • Is a multi-cloud solution that offers the flexibility to choose a given cloud provider for the specific Kyma installation, including Azure, Amazon, and Google.
  • Allows you to use various cloud providers that support the data replication and CDN configuration.
  • Is compatible with Amazon S3 APIs.

MinIO Gateway

TIP: Using Gateway mode may generate additional costs for storing buckets, assets, or traffic in general. To avoid them, verify the payment policy with the given cloud provider before you switch to Gateway mode.

See this tutorial to learn how to set MinIO to Google Cloud Storage Gateway mode.

Access MinIO credentials

For security reasons, MinIO credentials are generated during Kyma installation and stored inside a Kubernetes Secret object. You can obtain both the access key and the secret key in the development (MinIO) and production (MinIO Gateway) modes using the same commands.

  • To get the access key, run the command for your operating system (macOS or Linux).
  • To get the secret key, run the command for your operating system (macOS or Linux).

You can also set MinIO credentials directly using values.yaml files. For more details, see the official MinIO documentation.
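A sketch of such a values.yaml override, using the controller-manager.minio.accessKey and controller-manager.minio.secretKey parameters listed in the Configuration section; the values are placeholders:

```yaml
# values.yaml override setting MinIO credentials explicitly instead of
# relying on the automatically generated ones.
controller-manager:
  minio:
    accessKey: admin123       # placeholder value
    secretKey: topsecret123   # placeholder value
```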

Supported webhooks

Types

Rafter supports the following types of webhooks:

  • Mutation webhook modifies fetched assets before the Asset Controller uploads them into the bucket. For example, this can mean rewriting the asset through a regex or key-value operation, or modifying the JSON specification. The mutation webhook service must return the modified files to the Asset Controller.

  • Validation webhook validates fetched assets before the Asset Controller uploads them into the bucket. It can be a list of several different validation webhooks that process assets even if one of them fails. It can refer either to the validation of a specific file against a specification or to security validation. The validation webhook service must return the validation status when the validation completes.

  • Metadata webhook extracts metadata from assets and inserts it under the status.assetRef.files.metadata field in the (Cluster)Asset CR. For example, the Asset Metadata Service, which is the metadata webhook implementation in Kyma, extracts front matter metadata from .md files and returns the status with such information as title and type.
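Based on the sample Asset CR shown later in this document, the extracted metadata lands in the CR status as follows. File names and metadata fields are illustrative:

```yaml
# Fragment of a (Cluster)Asset CR status after the metadata webhook ran;
# front matter fields such as title and type appear under files.metadata.
status:
  assetRef:
    files:
      - name: README.md
        metadata:
          title: Overview
          type: Details
```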

Service specification requirements

If you create a specific mutation, validation, or metadata service for the available webhooks and you want Rafter to properly communicate with it, you must ensure that the API exposed by the given service meets the API contract requirements. These criteria differ depending on the webhook type:

NOTE: Services are described in the order in which Rafter processes them.

  • The mutation service must expose endpoints that:

    • accept parameters and content properties,
    • return the 200 response with the new file content,
    • return the 304 response informing that the file content was not modified.
  • The validation service must expose endpoints that:

    • accept parameters and content properties,
    • return the 200 response confirming that the validation succeeded,
    • return the 422 response informing why the validation failed.
  • The metadata service must expose endpoints that:

    • accept file data in the "object": "string" format in the request body, where object stands for the file name and string is the file content,
    • return the 200 response with the extracted metadata.

See the example of an API specification with the /convert, /validate, and /extract endpoints.
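As an illustration only, a hypothetical OpenAPI fragment for a validation endpoint that satisfies this contract could look like this. The paths and schemas below are not taken from Rafter's actual specification:

```yaml
# Hypothetical OpenAPI fragment for a validation webhook endpoint.
paths:
  /validate:
    post:
      requestBody:
        content:
          multipart/form-data:
            schema:
              type: object
              properties:
                content:        # the file content to validate
                  type: string
                parameters:     # service-specific validation parameters
                  type: string
      responses:
        "200":
          description: Validation succeeded.
        "422":
          description: Validation failed; the body explains why.
```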

Upload Service

The Upload Service is an HTTP server that exposes the file upload functionality for MinIO. It contains a simple HTTP endpoint which accepts multipart/form-data forms. It uploads files to public system buckets.

The main purpose of the service is to provide a solution for hosting static files for components that use Rafter, such as the Application Connector. You can also use the Upload Service for development purposes to host files for Rafter, without the need to rely on external providers.

System buckets

The Upload Service creates a system-public-{generated-suffix} system bucket, where {generated-suffix} is a Unix nano timestamp encoded in base 32. The public bucket has a read-only policy specified.

To enable the service scaling and to maintain the bucket configuration data between the application restarts, the Upload Service stores its configuration in the rafter-upload-service ConfigMap.

Once you upload the files, system buckets store them permanently. There is no policy to clean system buckets periodically.

The diagram describes the Upload Service flow:

Upload Service

Use the service outside the Kyma cluster

You can expose the service for development purposes. To use the Upload Service on a local machine, run the following command:

kubectl port-forward deployment/rafter-upload-service 3000:3000 -n kyma-system

You can access the service on port 3000.

Upload files

To upload files, send the multipart form POST request to the /v1/upload endpoint. The endpoint recognizes the following field names:

  • public that is an array of files to upload to a public system bucket.
  • directory that is an optional directory for storing the uploaded files. If you do not specify it, the service creates a directory with a random name. If the directory and files already exist, the service overwrites them.

To do the multipart request using curl, run the following command:

curl -v -F directory='example' -F public=@sample.md -F public=@text-file.md -F public=@archive.zip http://localhost:3000/v1/upload

The result is as follows:

{
  "uploadedFiles": [
    {
      "fileName": "text-file.md",
      "remotePath": "{STORAGE_ADDRESS}/public-1b0sjap35m9o0/example/text-file.md",
      "bucket": "public-1b0sjap35m9o0",
      "size": 212
    },
    {
      "fileName": "archive.zip",
      "remotePath": "{STORAGE_ADDRESS}/public-1b0sjaq6t6jr8/example/archive.zip",
      "bucket": "public-1b0sjaq6t6jr8",
      "size": 630
    },
    {
      "fileName": "sample.md",
      "remotePath": "{STORAGE_ADDRESS}/public-1b0sjap35m9o0/example/sample.md",
      "bucket": "public-1b0sjap35m9o0",
      "size": 4414
    }
  ]
}

See the OpenAPI specification for the full API documentation.

AsyncAPI Service

AsyncAPI Service is an HTTP server enabled by default in Kyma to process AsyncAPI specifications. It only accepts multipart/form-data forms and contains two endpoints:

  • /validate that validates the AsyncAPI specification against the AsyncAPI schema in version 2.0.0. AsyncAPI Service uses the AsyncAPI Parser for this purpose.

  • /convert that converts the version and format of the AsyncAPI files. The service uses the AsyncAPI Converter to change the AsyncAPI specifications from older versions to version 2.0.0, and convert any YAML input files to the JSON format that is required to render the specifications in the Console UI.

See the asyncapi-service-openapi.yaml file for the full OpenAPI specification of the service.

Front Matter Service

The Front Matter Service is an HTTP server that exposes the functionality for extracting metadata from files. It contains a simple HTTP endpoint which accepts multipart/form-data forms. The service extracts front matter YAML metadata from text files of all extensions.

The main purpose of the service is to provide metadata extraction for Rafter controllers. That's why it is only available inside the cluster. To use it, define metadataWebhookService in Asset and ClusterAsset custom resources.
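A sketch of such a configuration, assuming the in-cluster service name matches the rafter-front-matter-service Deployment and the /v1/extract endpoint described below:

```yaml
# Fragment of an Asset CR spec that points Rafter at the in-cluster
# Front Matter Service; service name, Namespace, and filter are assumptions.
spec:
  source:
    metadataWebhookService:
      - name: rafter-front-matter-service
        namespace: kyma-system
        endpoint: "/v1/extract"
        filter: \.md$
```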

Front matter metadata

Front matter YAML metadata are YAML properties added at the beginning of a file, between --- lines. The following snippet represents an exemplary Markdown file with metadata specified:

---
title: Example document title
description: Example page description
order: 3
array:
- foo
- bar
---
## Lorem ipsum
Dolores sit amet

Use the service outside the Kyma cluster

You can expose the service for development purposes. To use the Front Matter Service on a local machine, run the following command:

kubectl port-forward deployment/rafter-front-matter-service 3000:3000 -n kyma-system

You can access the service on port 3000.

Metadata files

To extract metadata from files, send the multipart form POST request to the /v1/extract endpoint. Specify the relative or absolute path to the file as a field name. To do the multipart request using curl, run the following command:

curl -v -F foo/foo.md=@foo.md -F bar/bar.yaml=@bar.yaml http://localhost:3000/v1/extract

The result is as follows:

{
  "data": [
    {
      "filePath": "foo/foo.md",
      "metadata": {
        "no": 3,
        "title": "Access logs",
        "type": "Details"
      }
    },
    {
      "filePath": "bar/bar.yaml",
      "metadata": {
        "number": 9,
        "title": "Hello world",
        "url": "https://kyma-project.io"
      }
    }
  ]
}

See the OpenAPI specification for the full API documentation.

Configuration

Rafter chart

To configure the Rafter chart, override the default values of its values.yaml file. This document describes parameters that you can configure.

TIP: To learn more about how to use overrides in Kyma, see the following documents:

Configurable parameters

This table lists the configurable parameters, their descriptions, and default values.

NOTE: You can define all envs either by providing them as inline values or using the valueFrom object. See the example for reference.
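For illustration, an override sketch showing both styles; the Secret name and key are hypothetical:

```yaml
# Two ways to define an env: an inline value and a valueFrom reference.
controller-manager:
  envs:
    clusterBucket:
      region: us-east-1            # inline value
    bucket:
      region:
        valueFrom:
          secretKeyRef:            # value taken from a Secret (hypothetical names)
            name: my-region-config
            key: region
```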

Parameter | Description | Default value
controller-manager.minio.persistence.enabled | Enables MinIO persistence. Deactivate it only if you use Gateway mode. | true
controller-manager.minio.environment.MINIO_BROWSER | Enables browsing MinIO storage. By default, the MinIO browser is turned off for security reasons. Change the value to on to use the browser. If you enable the browser, it is available at https://storage.{DOMAIN}/minio/, for example at https://storage.kyma.local/minio/. | "off"
controller-manager.minio.resources.requests.memory | Requests for memory resources. | 32Mi
controller-manager.minio.resources.requests.cpu | Requests for CPU resources. | 10m
controller-manager.minio.resources.limits.memory | Limits for memory resources. | 128Mi
controller-manager.minio.resources.limits.cpu | Limits for CPU resources. | 100m
controller-manager.deployment.replicas | Number of service replicas. | 1
controller-manager.pod.resources.limits.cpu | Limits for CPU resources. | 150m
controller-manager.pod.resources.limits.memory | Limits for memory resources. | 128Mi
controller-manager.pod.resources.requests.cpu | Requests for CPU resources. | 10m
controller-manager.pod.resources.requests.memory | Requests for memory resources. | 32Mi
controller-manager.envs.clusterAssetGroup.relistInterval | Time interval in which the Rafter Controller Manager verifies ClusterAssetGroups for changes. | 5m
controller-manager.envs.assetGroup.relistInterval | Time interval in which the Rafter Controller Manager verifies AssetGroups for changes. | 5m
controller-manager.envs.clusterBucket.region | Regional location of ClusterBuckets in a given cloud storage. Use one of the available regions. | us-east-1
controller-manager.envs.bucket.region | Regional location of buckets in a given cloud storage. Use one of the available regions. | us-east-1
controller-manager.envs.clusterBucket.maxConcurrentReconciles | Maximum number of concurrent reconciles for cluster buckets. | 1
controller-manager.envs.bucket.maxConcurrentReconciles | Maximum number of concurrent reconciles for buckets. | 1
controller-manager.envs.clusterAsset.maxConcurrentReconciles | Maximum number of concurrent reconciles for cluster assets. | 1
controller-manager.envs.asset.maxConcurrentReconciles | Maximum number of concurrent reconciles for assets. | 1
controller-manager.minio.secretKey | Secret key. Add the parameter to set your own secretKey credentials. | Generated automatically by default.
controller-manager.minio.accessKey | Access key. Add the parameter to set your own accessKey credentials. | Generated automatically by default.
controller-manager.envs.store.uploadWorkers | Number of workers used in parallel to upload files to the storage bucket. | 10
controller-manager.envs.webhooks.validation.workers | Number of workers used in parallel to validate files. | 10
controller-manager.envs.webhooks.mutation.workers | Number of workers used in parallel to mutate files. | 10
upload-service.deployment.replicas | Number of service replicas. | 1
upload-service.envs.verbose | If set to true, enables the extended logging mode that records more information on Upload Service activities than the usual mode, which registers only errors and warnings. | true
front-matter-service.deployment.replicas | Number of service replicas. For more details, see the Kubernetes documentation. | 1
front-matter-service.envs.verbose | If set to true, enables the extended logging mode that records more information on Front Matter Service activities than the usual mode, which registers only errors and warnings. | true
asyncapi-service.deployment.replicas | Number of service replicas. | 1
asyncapi-service.envs.verbose | If set to true, enables the extended logging mode that records more information on AsyncAPI Service activities than the usual mode, which registers only errors and warnings. | true

Custom Resource

Asset

The assets.rafter.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define an asset to store in a cloud storage bucket. To get the up-to-date CRD and show the output in the YAML format, run this command:

kubectl get crd assets.rafter.kyma-project.io -o yaml

Sample custom resource

This is a sample Asset CR configuration that contains mutation, validation, and metadata services:

apiVersion: rafter.kyma-project.io/v1beta1
kind: Asset
metadata:
  name: my-package-assets
  namespace: default
spec:
  source:
    mode: single
    parameters:
      disableRelativeLinks: "true"
    url: https://some.domain.com/main.js
    mutationWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/mutate"
        filter: \.js$
        parameters:
          rewrite: keyvalue
          pattern: \json|yaml
          data:
            basePath: /test/v2
    validationWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/validate"
        filter: \.js$
    metadataWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/extract"
        filter: \.js$
  bucketRef:
    name: my-bucket
  displayName: "Operations svc"
status:
  phase: Ready
  reason: Uploaded
  message: Asset content has been uploaded
  lastHeartbeatTime: "2018-01-03T07:38:24Z"
  observedGeneration: 1
  assetRef:
    baseUrl: https://{STORAGE_ADDRESS}/my-bucket-1b19rnbuc6ir8/my-package-assets
    files:
      - metadata:
          title: Overview
        name: README.md
      - metadata:
          title: Benefits of distributed storage
          type: Details
        name: directory/subdirectory/file.md

Custom resource parameters

This table lists all possible parameters of a given resource together with their descriptions:

| Parameter | Required | Description |
|-----------|----------|-------------|
| metadata.name | Yes | Specifies the name of the CR. |
| metadata.namespace | Yes | Defines the Namespace in which the CR is available. |
| spec.source.mode | Yes | Specifies if the asset consists of one file or a set of compressed files in the ZIP or TAR formats. Use single for one file and package for a set of files. |
| spec.source.parameters | No | Specifies a set of parameters for the Asset. For example, use it to define what to render, disable, or modify in the UI. Define it in a valid YAML or JSON format. |
| spec.source.url | Yes | Specifies the location of the file. |
| spec.source.filter | No | Specifies the regex pattern used to select files to store from the package. |
| spec.source.validationWebhookService | No | Provides the specification of the validation webhook services. |
| spec.source.validationWebhookService.name | Yes | Provides the name of the validation webhook service. |
| spec.source.validationWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
| spec.source.validationWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
| spec.source.validationWebhookService.parameters | No | Provides detailed parameters specific for a given validation service and its functionality. |
| spec.source.validationWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
| spec.source.mutationWebhookService | No | Provides the specification of the mutation webhook services. |
| spec.source.mutationWebhookService.name | Yes | Provides the name of the mutation webhook service. |
| spec.source.mutationWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
| spec.source.mutationWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
| spec.source.mutationWebhookService.parameters | No | Provides detailed parameters specific for a given mutation service and its functionality. |
| spec.source.mutationWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
| spec.source.metadataWebhookService | No | Provides the specification of the metadata webhook services. |
| spec.source.metadataWebhookService.name | Yes | Provides the name of the metadata webhook service. |
| spec.source.metadataWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
| spec.source.metadataWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
| spec.source.metadataWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
| spec.bucketRef.name | Yes | Provides the name of the bucket for storing the asset. |
| spec.displayName | No | Specifies a human-readable name of the asset. |
| status.phase | Not applicable | The Asset Controller adds it to the Asset CR. It describes the status of processing the Asset CR by the Asset Controller. It can be Ready, Failed, or Pending. |
| status.reason | Not applicable | Provides the reason why the Asset CR processing failed or is pending. See the Reasons section for the full list of possible status reasons and their descriptions. |
| status.message | Not applicable | Describes a human-readable message on the CR processing progress, success, or failure. |
| status.lastHeartbeatTime | Not applicable | Specifies the last time the Asset Controller processed the Asset CR. |
| status.observedGeneration | Not applicable | Specifies the most recent Asset CR generation that the Asset Controller observed. |
| status.assetRef | Not applicable | Provides details on the location of the assets stored in the bucket. |
| status.assetRef.files | Not applicable | Provides asset metadata and the relative path to the given asset in the storage bucket. |
| status.assetRef.files.metadata | Not applicable | Lists metadata extracted from the asset. |
| status.assetRef.files.name | Not applicable | Specifies the relative path to the given asset in the storage bucket. |
| status.assetRef.baseUrl | Not applicable | Specifies the absolute path to the location of the assets in the storage bucket. |

NOTE: The Asset Controller automatically adds all parameters marked as Not applicable to the Asset CR.

TIP: Asset CRs have an additional configmap mode that allows you to refer to asset sources stored in ConfigMaps. If you use this mode, set the url parameter to {namespace}/{configMap-name}, like url: default/sample-configmap. This mode is not enabled in Kyma. To check how it works, see Rafter tutorials for examples.
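For illustration only, an Asset CR using the configmap mode could look like this sketch. Remember that this mode is not enabled in Kyma, and the ConfigMap name (sample-configmap) and bucket name (my-bucket) are hypothetical:

```yaml
# Illustrative sketch of the configmap mode; not enabled in Kyma.
# The ConfigMap and bucket names below are made up for this example.
apiVersion: rafter.kyma-project.io/v1beta1
kind: Asset
metadata:
  name: configmap-asset
  namespace: default
spec:
  source:
    mode: configmap
    url: default/sample-configmap # Follows the {namespace}/{configMap-name} format.
  bucketRef:
    name: my-bucket
```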

Status reasons

Processing of an Asset CR can succeed, continue, or fail for one of these reasons:

| Reason | Phase | Description |
|--------|-------|-------------|
| Pulled | Pending | The Asset Controller pulled the asset content for processing. |
| PullingFailed | Failed | Asset content pulling failed due to the provided error. |
| Uploaded | Ready | The Asset Controller uploaded the asset content to MinIO. |
| UploadFailed | Failed | Asset content uploading failed due to the provided error. |
| BucketNotReady | Pending | The referenced bucket is not ready. |
| BucketError | Failed | Reading the bucket status failed due to the provided error. |
| Mutated | Pending | Mutation services changed the asset content. |
| MutationFailed | Failed | Asset mutation failed for one of the provided reasons. |
| MutationError | Failed | Asset mutation failed due to the provided error. |
| MetadataExtracted | Pending | Metadata services extracted metadata from the asset content. |
| MetadataExtractionFailed | Failed | Metadata extraction failed due to the provided error. |
| Validated | Pending | Validation services validated the asset content. |
| ValidationFailed | Failed | Asset validation failed for one of the provided reasons. |
| ValidationError | Failed | Asset validation failed due to the provided error. |
| MissingContent | Failed | There is missing asset content in the cloud storage bucket. |
| RemoteContentVerificationError | Failed | Asset content verification in the cloud storage bucket failed due to the provided error. |
| CleanupError | Failed | The Asset Controller failed to remove the old asset content due to the provided error. |
| Cleaned | Pending | The Asset Controller removed the old asset content that was modified. |
| Scheduled | Pending | The asset you added is scheduled for processing. |

These are the resources related to this CR:

| Custom resource | Description |
|-----------------|-------------|
| Bucket | The Asset CR uses the name of the bucket specified in the definition of the Bucket CR. |

These components use this CR:

| Component | Description |
|-----------|-------------|
| Rafter | Uses the Asset CR for the detailed asset definition, including its location and the name of the bucket in which it is stored. |

AssetGroup

The assetgroups.rafter.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define an orchestrator that creates Asset CRs for a specific asset type. To get the up-to-date CRD and show the output in the YAML format, run this command:

```bash
kubectl get crd assetgroups.rafter.kyma-project.io -o yaml
```

Sample custom resource

This is a sample AssetGroup custom resource (CR) that provides details of the Asset CRs for the markdown, asyncapi, and openapi source types.

```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: AssetGroup
metadata:
  name: slack
  labels:
    rafter.kyma-project.io/view-context: service-catalog
    rafter.kyma-project.io/group-name: components
    rafter.kyma-project.io/order: "6"
spec:
  displayName: Slack
  description: "Slack documentation"
  bucketRef:
    name: test-bucket
  sources:
    - type: markdown
      name: markdown-slack
      mode: single
      parameters:
        disableRelativeLinks: "true"
      url: https://raw.githubusercontent.com/slackapi/slack-api-specs/master/README.md
    - type: asyncapi
      displayName: "Slack"
      name: asyncapi-slack
      mode: single
      url: https://raw.githubusercontent.com/slackapi/slack-api-specs/master/events-api/slack_events_api_async_v1.json
    - type: openapi
      name: openapi-slack
      mode: single
      url: https://raw.githubusercontent.com/slackapi/slack-api-specs/master/web-api/slack_web_openapi_v2.json
status:
  lastHeartbeatTime: "2019-03-18T13:42:55Z"
  message: Assets are ready to use
  phase: Ready
  reason: AssetsReady
```

Custom resource parameters

This table lists all possible parameters of a given resource together with their descriptions:

| Parameter | Required | Description |
|-----------|----------|-------------|
| metadata.name | Yes | Specifies the name of the CR. It also defines the rafter.kyma-project.io/asset-group label added to the Asset CR that the AssetGroup CR defines. Because of label name limitations, AssetGroup CR names can have a maximum length of 63 characters. |
| metadata.labels | No | Specifies how to filter and group Asset CRs that the AssetGroup CR defines. See the details to learn more about these labels. |
| spec.displayName | Yes | Specifies a human-readable name of the AssetGroup CR. |
| spec.description | Yes | Provides more details on the purpose of the AssetGroup CR. |
| spec.bucketRef.name | No | Specifies the name of the bucket that stores the assets from the AssetGroup. |
| spec.sources | Yes | Defines the type of the asset and the rafter.kyma-project.io/type label added to the Asset CR. |
| spec.sources.type | Yes | Specifies the type of assets included in the AssetGroup CR. |
| spec.sources.displayName | No | Specifies a human-readable name of the asset. |
| spec.sources.name | Yes | Defines an identifier of a given asset. It must be unique if there is more than one asset of a given type in the AssetGroup CR. |
| spec.sources.mode | Yes | Specifies if the asset consists of one file or a set of compressed files in the ZIP or TAR format. Use single for one file and package for a set of files. |
| spec.sources.parameters | No | Specifies a set of parameters for the asset. For example, use it to define what to render, disable, or modify in the UI. Define it in a valid YAML or JSON format. |
| spec.sources.url | Yes | Specifies the location of a single file or a package. |
| spec.sources.filter | No | Specifies a set of assets from the package to upload. The regex used in the filter must be RE2-compliant. |
| status.lastHeartbeatTime | Not applicable | Specifies the last time the AssetGroup Controller processed the AssetGroup CR. |
| status.message | Not applicable | Describes a human-readable message on the CR processing progress, success, or failure. |
| status.phase | Not applicable | The AssetGroup Controller adds it to the AssetGroup CR. It describes the status of processing the AssetGroup CR by the AssetGroup Controller. It can be Ready, Pending, or Failed. |
| status.reason | Not applicable | Provides the reason why the AssetGroup CR processing succeeded, is pending, or failed. See the Reasons section for the full list of possible status reasons and their descriptions. |
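To make the labeling behavior concrete, this is a sketch of an Asset CR that the AssetGroup Controller could generate for the sample slack AssetGroup above. The generated resource name is made up; the controller derives the actual name, while the labels come from the AssetGroup's metadata.name and spec.sources.type:

```yaml
# Illustrative sketch only; the generated name below is hypothetical.
apiVersion: rafter.kyma-project.io/v1beta1
kind: Asset
metadata:
  name: slack-markdown-slack-abc12 # Generated by the AssetGroup Controller.
  namespace: default
  labels:
    rafter.kyma-project.io/asset-group: slack # Taken from metadata.name of the AssetGroup CR.
    rafter.kyma-project.io/type: markdown     # Taken from spec.sources.type.
spec:
  source:
    mode: single
    url: https://raw.githubusercontent.com/slackapi/slack-api-specs/master/README.md
  bucketRef:
    name: test-bucket
```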

NOTE: The AssetGroup Controller automatically adds all parameters marked as Not applicable to the AssetGroup CR.

TIP: ClusterAsset CRs have an additional configmap mode that allows you to refer to asset sources stored in ConfigMaps. If you use this mode, set the url parameter to {namespace}/{configMap-name}, like url: default/sample-configmap. This mode is not enabled in Kyma. To check how it works, see Rafter tutorials for examples.

Status reasons

Processing of an AssetGroup CR can succeed, continue, or fail for one of these reasons:

| Reason | Phase | Description |
|--------|-------|-------------|
| AssetCreated | Pending | The AssetGroup Controller created the specified asset. |
| AssetCreationFailed | Failed | The AssetGroup Controller couldn't create the specified asset due to an error. |
| AssetsCreationFailed | Failed | The AssetGroup Controller couldn't create assets due to an error. |
| AssetsListingFailed | Failed | The AssetGroup Controller couldn't list assets due to an error. |
| AssetDeleted | Pending | The AssetGroup Controller deleted specified assets. |
| AssetDeletionFailed | Failed | The AssetGroup Controller couldn't delete the specified asset due to an error. |
| AssetsDeletionFailed | Failed | The AssetGroup Controller couldn't delete assets due to an error. |
| AssetUpdated | Pending | The AssetGroup Controller updated the specified asset. |
| AssetUpdateFailed | Failed | The AssetGroup Controller couldn't update the specified asset due to an error. |
| AssetsUpdateFailed | Failed | The AssetGroup Controller couldn't update assets due to an error. |
| AssetsReady | Ready | Assets are ready to use. |
| WaitingForAssets | Pending | Waiting for assets to be in the Ready status phase. |
| BucketError | Failed | Bucket verification failed due to an error. |
| AssetsWebhookGetFailed | Failed | The AssetGroup Controller failed to obtain proper webhook configuration. |
| AssetsSpecValidationFailed | Failed | Asset specification is invalid due to an error. |

These are the resources related to this CR:

| Custom resource | Description |
|-----------------|-------------|
| Asset | The AssetGroup CR orchestrates the creation of the Asset CR and defines its content. |

These components use this CR:

| Component | Description |
|-----------|-------------|
| Rafter | Manages Asset CRs created based on the definition in the AssetGroup CR. |

Bucket

The buckets.rafter.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define the name of the cloud storage bucket for storing assets. To get the up-to-date CRD and show the output in the YAML format, run this command:

```bash
kubectl get crd buckets.rafter.kyma-project.io -o yaml
```

Sample custom resource

This is a sample resource that defines the storage bucket configuration.

```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: Bucket
metadata:
  name: test-sample
  namespace: default
spec:
  region: "us-east-1"
  policy: readonly
status:
  lastHeartbeatTime: "2019-02-04T11:50:26Z"
  message: Bucket policy has been updated
  phase: Ready
  reason: BucketPolicyUpdated
  remoteName: test-sample-1b19rnbuc6ir8
  observedGeneration: 1
  url: https://{STORAGE_ADDRESS}/test-sample-1b19rnbuc6ir8
```

Custom resource parameters

This table lists all possible parameters of a given resource together with their descriptions:

| Parameter | Required | Description |
|-----------|----------|-------------|
| metadata.name | Yes | Specifies the name of the CR, which is also used to generate the name of the bucket in the bucket storage. |
| metadata.namespace | Yes | Specifies the Namespace in which the CR is available. |
| spec.region | No | Specifies the location of the region under which the Bucket Controller creates the bucket. If the field is empty, the Bucket Controller creates the bucket under the default location. |
| spec.policy | No | Specifies the type of bucket access. Use none, readonly, writeonly, or readwrite. |
| status.lastHeartbeatTime | Not applicable | Specifies the last time the Bucket Controller processed the Bucket CR. |
| status.message | Not applicable | Describes a human-readable message on the CR processing success or failure. |
| status.phase | Not applicable | The Bucket Controller automatically adds it to the Bucket CR. It describes the status of processing the Bucket CR by the Bucket Controller. It can be Ready or Failed. |
| status.reason | Not applicable | Provides information on the Bucket CR processing success or failure. See the Reasons section for the full list of possible status reasons and their descriptions. |
| status.url | Not applicable | Provides the address of the bucket storage under which the asset is available. |
| status.remoteName | Not applicable | Provides the name of the bucket in the storage. |
| status.observedGeneration | Not applicable | Specifies the most recent Bucket CR generation that the Bucket Controller observed. |

NOTE: The Bucket Controller automatically adds all parameters marked as Not applicable to the Bucket CR.

Status reasons

Processing of a Bucket CR can succeed, continue, or fail for one of these reasons:

| Reason | Phase | Description |
|--------|-------|-------------|
| BucketCreated | Pending | The bucket was created. |
| BucketNotFound | Failed | The specified bucket doesn't exist anymore. |
| BucketCreationFailure | Failed | The bucket couldn't be created due to an error. |
| BucketVerificationFailure | Failed | The bucket couldn't be verified due to an error. |
| BucketPolicyUpdated | Ready | The policy specifying bucket protection settings was updated. |
| BucketPolicyUpdateFailed | Failed | The policy specifying bucket protection settings couldn't be set due to an error. |
| BucketPolicyVerificationFailed | Failed | The policy specifying bucket protection settings couldn't be verified due to an error. |
| BucketPolicyHasBeenChanged | Ready | The policy specifying cloud storage bucket protection settings was changed. |

These are the resources related to this CR:

| Custom resource | Description |
|-----------------|-------------|
| Asset | Provides the name of the storage bucket which the Asset CR refers to. |

These components use this CR:

| Component | Description |
|-----------|-------------|
| Rafter | Uses the Bucket CR for the storage bucket definition. |

ClusterAsset

The clusterassets.rafter.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define an asset to store in a cloud storage bucket. To get the up-to-date CRD and show the output in the YAML format, run this command:

```bash
kubectl get crd clusterassets.rafter.kyma-project.io -o yaml
```

Sample custom resource

This is a sample ClusterAsset CR configuration that contains mutation, validation, and metadata services:

```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: ClusterAsset
metadata:
  name: my-package-assets
spec:
  source:
    mode: single
    parameters:
      disableRelativeLinks: "true"
    url: https://some.domain.com/main.js
    mutationWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/mutate"
        filter: \.js$
        parameters:
          rewrite: keyvalue
          pattern: \json|yaml
          data:
            basePath: /test/v2
    validationWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/validate"
        filter: \.js$
    metadataWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/extract"
        filter: \.js$
  bucketRef:
    name: my-bucket
  displayName: "Operations svc"
status:
  phase: Ready
  reason: Uploaded
  message: Asset content has been uploaded
  lastHeartbeatTime: "2018-01-03T07:38:24Z"
  observedGeneration: 1
  assetRef:
    baseUrl: https://{STORAGE_ADDRESS}/my-bucket-1b19rnbuc6ir8/my-package-assets
    files:
      - metadata:
          title: Overview
        name: README.md
      - metadata:
          title: Benefits of distributed storage
          type: Details
        name: directory/subdirectory/file.md
```

Custom resource parameters

This table lists all possible parameters of a given resource together with their descriptions:

| Parameter | Required | Description |
|-----------|----------|-------------|
| metadata.name | Yes | Specifies the name of the CR. |
| spec.source.mode | Yes | Specifies if the asset consists of one file or a set of compressed files in the ZIP or TAR formats. Use single for one file and package for a set of files. |
| spec.source.parameters | No | Specifies a set of parameters for the ClusterAsset. For example, use it to define what to render, disable, or modify in the UI. Define it in a valid YAML or JSON format. |
| spec.source.url | Yes | Specifies the location of the file. |
| spec.source.filter | No | Specifies the regex pattern used to select files to store from the package. |
| spec.source.validationWebhookService | No | Provides the specification of the validation webhook services. |
| spec.source.validationWebhookService.name | Yes | Provides the name of the validation webhook service. |
| spec.source.validationWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
| spec.source.validationWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
| spec.source.validationWebhookService.parameters | No | Provides detailed parameters specific for a given validation service and its functionality. |
| spec.source.validationWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
| spec.source.mutationWebhookService | No | Provides the specification of the mutation webhook services. |
| spec.source.mutationWebhookService.name | Yes | Provides the name of the mutation webhook service. |
| spec.source.mutationWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
| spec.source.mutationWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
| spec.source.mutationWebhookService.parameters | No | Provides detailed parameters specific for a given mutation service and its functionality. |
| spec.source.mutationWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
| spec.source.metadataWebhookService | No | Provides the specification of the metadata webhook services. |
| spec.source.metadataWebhookService.name | Yes | Provides the name of the metadata webhook service. |
| spec.source.metadataWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
| spec.source.metadataWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
| spec.source.metadataWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
| spec.bucketRef.name | Yes | Provides the name of the bucket for storing the asset. |
| spec.displayName | No | Specifies a human-readable name of the asset. |
| status.phase | Not applicable | The ClusterAsset Controller adds it to the ClusterAsset CR. It describes the status of processing the ClusterAsset CR by the ClusterAsset Controller. It can be Ready, Failed, or Pending. |
| status.reason | Not applicable | Provides the reason why the ClusterAsset CR processing failed or is pending. See the Reasons section for the full list of possible status reasons and their descriptions. |
| status.message | Not applicable | Describes a human-readable message on the CR processing progress, success, or failure. |
| status.lastHeartbeatTime | Not applicable | Specifies the last time the ClusterAsset Controller processed the ClusterAsset CR. |
| status.observedGeneration | Not applicable | Specifies the most recent ClusterAsset CR generation that the ClusterAsset Controller observed. |
| status.assetRef | Not applicable | Provides details on the location of the assets stored in the bucket. |
| status.assetRef.files | Not applicable | Provides asset metadata and the relative path to the given asset in the storage bucket. |
| status.assetRef.files.metadata | Not applicable | Lists metadata extracted from the asset. |
| status.assetRef.files.name | Not applicable | Specifies the relative path to the given asset in the storage bucket. |
| status.assetRef.baseUrl | Not applicable | Specifies the absolute path to the location of the assets in the storage bucket. |

NOTE: The ClusterAsset Controller automatically adds all parameters marked as Not applicable to the ClusterAsset CR.

Status reasons

Processing of a ClusterAsset CR can succeed, continue, or fail for one of these reasons:

| Reason | Phase | Description |
|--------|-------|-------------|
| Pulled | Pending | The ClusterAsset Controller pulled the asset content for processing. |
| PullingFailed | Failed | Asset content pulling failed due to an error. |
| Uploaded | Ready | The ClusterAsset Controller uploaded the asset content to MinIO. |
| UploadFailed | Failed | Asset content uploading failed due to an error. |
| BucketNotReady | Pending | The referenced bucket is not ready. |
| BucketError | Failed | Reading the bucket status failed due to an error. |
| Mutated | Pending | Mutation services changed the asset content. |
| MutationFailed | Failed | Asset mutation failed for one of the provided reasons. |
| MutationError | Failed | Asset mutation failed due to an error. |
| MetadataExtracted | Pending | Metadata services extracted metadata from the asset content. |
| MetadataExtractionFailed | Failed | Metadata extraction failed due to an error. |
| Validated | Pending | Validation services validated the asset content. |
| ValidationFailed | Failed | Asset validation failed for one of the provided reasons. |
| ValidationError | Failed | Asset validation failed due to an error. |
| MissingContent | Failed | There is missing asset content in the cloud storage bucket. |
| RemoteContentVerificationError | Failed | Asset content verification in the cloud storage bucket failed due to an error. |
| CleanupError | Failed | The ClusterAsset Controller failed to remove the old asset content due to an error. |
| Cleaned | Pending | The ClusterAsset Controller removed the old asset content that was modified. |
| Scheduled | Pending | The asset you added is scheduled for processing. |

These are the resources related to this CR:

| Custom resource | Description |
|-----------------|-------------|
| ClusterBucket | The ClusterAsset CR uses the name of the bucket specified in the definition of the ClusterBucket CR. |

These components use this CR:

| Component | Description |
|-----------|-------------|
| Rafter | Uses the ClusterAsset CR for the detailed asset definition, including its location and the name of the bucket in which it is stored. |

ClusterAssetGroup

The clusterassetgroups.rafter.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define an orchestrator that creates ClusterAsset CRs for a specific asset type. To get the up-to-date CRD and show the output in the YAML format, run this command:

```bash
kubectl get crd clusterassetgroups.rafter.kyma-project.io -o yaml
```

Sample custom resource

This is a sample ClusterAssetGroup custom resource (CR) that provides details of the ClusterAsset CR for the markdown source type.

```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: ClusterAssetGroup
metadata:
  name: service-mesh
  labels:
    rafter.kyma-project.io/view-context: docs-ui
    rafter.kyma-project.io/group-name: components
    rafter.kyma-project.io/order: "6"
spec:
  displayName: "Service Mesh"
  description: "Overall documentation for Service Mesh"
  bucketRef:
    name: test-bucket
  sources:
    - type: markdown
      displayName: "Documentation"
      name: docs
      mode: package
      parameters:
        disableRelativeLinks: "true"
      url: https://github.com/kyma-project/kyma/archive/master.zip
      filter: /docs/service-mesh/docs/
status:
  lastHeartbeatTime: "2019-03-18T13:42:55Z"
  message: Assets are ready to use
  phase: Ready
  reason: AssetsReady
```
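The filter field is a regex matched against the file paths inside the unpacked package. A minimal Python sketch of this selection logic, with Python's re module standing in for the RE2 engine that Rafter requires, and a hypothetical file listing that mimics a GitHub master.zip archive (such archives prefix paths with the repository and branch name):

```python
import re

def select_files(paths, pattern):
    """Return the package paths that match the filter regex."""
    rx = re.compile(pattern)
    return [p for p in paths if rx.search(p)]

# Hypothetical listing of an unpacked kyma master.zip archive.
paths = [
    "kyma-master/docs/service-mesh/docs/overview.md",
    "kyma-master/docs/service-mesh/charts/values.yaml",
    "kyma-master/README.md",
]

# Only the file under /docs/service-mesh/docs/ is selected.
print(select_files(paths, "/docs/service-mesh/docs/"))
```

Stick to regex features shared by RE2 and Python's re (no backreferences or lookarounds, which RE2 rejects) so the sketch stays faithful to what Rafter accepts.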

Custom resource parameters

This table lists all possible parameters of a given resource together with their descriptions:

| Parameter | Required | Description |
|-----------|----------|-------------|
| metadata.name | Yes | Specifies the name of the CR. It also defines the rafter.kyma-project.io/asset-group label added to the ClusterAsset CR that the ClusterAssetGroup CR defines. Because of label name limitations, ClusterAssetGroup CR names can have a maximum length of 63 characters. |
| metadata.labels | No | Specifies how to filter and group ClusterAsset CRs that the ClusterAssetGroup CR defines. See the details to learn more about these labels. |
| spec.displayName | Yes | Specifies a human-readable name of the ClusterAssetGroup CR. |
| spec.description | Yes | Provides more details on the purpose of the ClusterAssetGroup CR. |
| spec.bucketRef.name | No | Specifies the name of the bucket that stores the assets from the ClusterAssetGroup. |
| spec.sources | Yes | Defines the type of the asset and the rafter.kyma-project.io/type label added to the ClusterAsset CR. |
| spec.sources.type | Yes | Specifies the type of assets included in the ClusterAssetGroup CR. |
| spec.sources.displayName | No | Specifies a human-readable name of the asset. |
| spec.sources.name | Yes | Defines an identifier of a given asset. It must be unique if there is more than one asset of a given type in a ClusterAssetGroup CR. |
| spec.sources.mode | Yes | Specifies if the asset consists of one file or a set of compressed files in the ZIP or TAR format. Use single for one file and package for a set of files. |
| spec.sources.parameters | No | Specifies a set of parameters for the ClusterAsset. For example, use it to define what to render, disable, or modify in the UI. Define it in a valid YAML or JSON format. |
| spec.sources.url | Yes | Specifies the location of a single file or a package. |
| spec.sources.filter | No | Specifies a set of assets from the package to upload. The regex used in the filter must be RE2-compliant. |
| status.lastHeartbeatTime | Not applicable | Specifies the last time the ClusterAssetGroup Controller processed the ClusterAssetGroup CR. |
| status.message | Not applicable | Describes a human-readable message on the CR processing progress, success, or failure. |
| status.phase | Not applicable | The ClusterAssetGroup Controller adds it to the ClusterAssetGroup CR. It describes the status of processing the ClusterAssetGroup CR by the ClusterAssetGroup Controller. It can be Ready, Pending, or Failed. |
| status.reason | Not applicable | Provides the reason why the ClusterAssetGroup CR processing succeeded, is pending, or failed. See the Reasons section for the full list of possible status reasons and their descriptions. |

NOTE: The ClusterAssetGroup Controller automatically adds all parameters marked as Not applicable to the ClusterAssetGroup CR.

Status reasons

Processing of a ClusterAssetGroup CR can succeed, continue, or fail for one of these reasons:

| Reason | Phase | Description |
|--------|-------|-------------|
| AssetCreated | Pending | The ClusterAssetGroup Controller created the specified asset. |
| AssetCreationFailed | Failed | The ClusterAssetGroup Controller couldn't create the specified asset due to an error. |
| AssetsCreationFailed | Failed | The ClusterAssetGroup Controller couldn't create assets due to an error. |
| AssetsListingFailed | Failed | The ClusterAssetGroup Controller couldn't list assets due to an error. |
| AssetDeleted | Pending | The ClusterAssetGroup Controller deleted specified assets. |
| AssetDeletionFailed | Failed | The ClusterAssetGroup Controller couldn't delete the specified asset due to an error. |
| AssetsDeletionFailed | Failed | The ClusterAssetGroup Controller couldn't delete assets due to an error. |
| AssetUpdated | Pending | The ClusterAssetGroup Controller updated the specified asset. |
| AssetUpdateFailed | Failed | The ClusterAssetGroup Controller couldn't update the specified asset due to an error. |
| AssetsUpdateFailed | Failed | The ClusterAssetGroup Controller couldn't update assets due to an error. |
| AssetsReady | Ready | Assets are ready to use. |
| WaitingForAssets | Pending | Waiting for assets to be in the Ready status phase. |
| BucketError | Failed | Bucket verification failed due to an error. |
| AssetsWebhookGetFailed | Failed | The ClusterAssetGroup Controller failed to obtain proper webhook configuration. |
| AssetsSpecValidationFailed | Failed | Asset CR specification is invalid due to an error. |

These are the resources related to this CR:

| Custom resource | Description |
|-----------------|-------------|
| ClusterAsset | The ClusterAssetGroup CR orchestrates the creation of the ClusterAsset CR and defines its content. |

These components use this CR:

| Component | Description |
|-----------|-------------|
| Rafter | Manages ClusterAsset CRs created based on the definition in the ClusterAssetGroup CR. |

ClusterBucket

The clusterbuckets.rafter.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define the name of the cloud storage bucket for storing assets. To get the up-to-date CRD and show the output in the YAML format, run this command:

```bash
kubectl get crd clusterbuckets.rafter.kyma-project.io -o yaml
```

Sample custom resource

This is a sample resource that defines the storage bucket configuration.

```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: ClusterBucket
metadata:
  name: test-sample
spec:
  region: "us-east-1"
  policy: readonly
status:
  lastHeartbeatTime: "2019-02-04T11:50:26Z"
  message: Bucket policy has been updated
  phase: Ready
  reason: BucketPolicyUpdated
  remoteName: test-sample-1b19rnbuc6ir8
  url: https://{STORAGE_ADDRESS}/test-sample-1b19rnbuc6ir8
  observedGeneration: 1
```

Custom resource parameters

This table lists all possible parameters of a given resource together with their descriptions:

| Parameter | Required | Description |
|-----------|----------|-------------|
| metadata.name | Yes | Specifies the name of the CR, which is also the prefix of the bucket name in the bucket storage. |
| spec.region | No | Specifies the location of the region under which the ClusterBucket Controller creates the bucket. If the field is empty, the ClusterBucket Controller creates the bucket under the default location. |
| spec.policy | No | Specifies the type of bucket access. Use none, readonly, writeonly, or readwrite. |
| status.lastHeartbeatTime | Not applicable | Specifies the last time the ClusterBucket Controller processed the ClusterBucket CR. |
| status.message | Not applicable | Describes a human-readable message on the CR processing success or failure. |
| status.phase | Not applicable | The ClusterBucket Controller automatically adds it to the ClusterBucket CR. It describes the status of processing the ClusterBucket CR by the ClusterBucket Controller. It can be Ready or Failed. |
| status.reason | Not applicable | Provides information on the ClusterBucket CR processing success or failure. See the Reasons section for the full list of possible status reasons and their descriptions. |
| status.url | Not applicable | Provides the address of the bucket storage under which the asset is available. |
| status.remoteName | Not applicable | Provides the name of the bucket in storage. |
| status.observedGeneration | Not applicable | Specifies the most recent ClusterBucket CR generation that the ClusterBucket Controller observed. |

NOTE: The ClusterBucket Controller automatically adds all parameters marked as Not applicable to the ClusterBucket CR.

Status reasons

Processing of a ClusterBucket CR can succeed, continue, or fail for one of these reasons:

| Reason | Phase | Description |
|--------|-------|-------------|
| BucketCreated | Pending | The bucket was created. |
| BucketNotFound | Failed | The specified bucket doesn't exist anymore. |
| BucketCreationFailure | Failed | The bucket couldn't be created due to an error. |
| BucketVerificationFailure | Failed | The bucket couldn't be verified due to an error. |
| BucketPolicyUpdated | Ready | The policy specifying bucket protection settings was updated. |
| BucketPolicyUpdateFailed | Failed | The policy specifying bucket protection settings couldn't be set due to an error. |
| BucketPolicyVerificationFailed | Failed | The policy specifying bucket protection settings couldn't be verified due to an error. |
| BucketPolicyHasBeenChanged | Ready | The policy specifying cloud storage bucket protection settings was changed. |

These are the resources related to this CR:

| Custom resource | Description |
|-----------------|-------------|
| ClusterAsset | Provides the name of the storage bucket which the ClusterAsset CR refers to. |

These components use this CR:

| Component | Description |
|-----------|-------------|
| Rafter | Uses the ClusterBucket CR for the storage bucket definition. |

Tutorials

Add new documents to the Documentation view in the Console UI

This tutorial shows how you can customize the Documentation view that is available in the Console UI under the question mark icon on the top navigation panel. The purpose of this tutorial is to create a new Prometheus documentation section that contains Concepts and Guides documentation topics with a set of Markdown subdocuments. The Markdown sources used in this tutorial point to specific topics in the official Prometheus documentation.

NOTE: The Documentation view only displays documents uploaded through ClusterAssetGroup CRs. Make sure they have valid definitions and that the Markdown documents they render have correct metadata and structure.

Prerequisites

Steps

  1. Open the terminal and create these ClusterAssetGroup custom resources:

    ```bash
    cat <<EOF | kubectl apply -f -
    apiVersion: rafter.kyma-project.io/v1beta1
    kind: ClusterAssetGroup
    metadata:
      labels:
        rafter.kyma-project.io/view-context: docs-ui # This label specifies that you want to render documents in the Documentation view.
        rafter.kyma-project.io/group-name: prometheus # This label defines the group under which you want to render the given asset in the Documentation view. The value cannot include spaces.
        rafter.kyma-project.io/order: "1" # This label specifies the position of the ClusterAssetGroup in relation to other ClusterAssetGroups in the Prometheus section.
      name: prometheus-concepts
    spec:
      displayName: "Concepts" # The name of the topic that shows in the Documentation view under the main Prometheus section.
      description: "Some docs about Prometheus concepts"
      sources:
        - type: markdown # This type indicates that the Asset Metadata Service must extract Front Matter metadata from the source Prometheus documents and add them to a ClusterAssetGroup as a status.
          displayName: "Concepts"
          name: docs
          mode: package # This mode indicates that the source file is compressed and the Asset Controller must unpack it first to process it.
          url: https://github.com/prometheus/docs/archive/master.zip # The source location of Prometheus documents.
          filter: content/docs/concepts # The exact location of the documents that you want to extract.
    ---
    apiVersion: rafter.kyma-project.io/v1beta1
    kind: ClusterAssetGroup
    metadata:
      labels:
        rafter.kyma-project.io/view-context: docs-ui
        rafter.kyma-project.io/group-name: prometheus
        rafter.kyma-project.io/order: "2"
      name: prometheus-guides
    spec:
      displayName: "Guides"
      description: "Some docs about Prometheus guides"
      sources:
        - type: markdown
          displayName: "Guides"
          name: docs
          mode: package
          url: https://github.com/prometheus/docs/archive/master.zip
          filter: content/docs/guides
    EOF
    ```

    NOTE: For a detailed explanation of all parameters, see the ClusterAssetGroup custom resource.

  2. Check the status of custom resources:

    ```bash
    kubectl get clusterassetgroups
    ```

    The custom resources should be in the Ready phase:

    ```
    NAME                  PHASE   AGE
    prometheus-concepts   Ready   59s
    prometheus-guides     Ready   59s
    ```

    If a given custom resource is in the Ready phase and you want to get details of the created ClusterAssets, such as document names and the location of MinIO buckets, run this command:

    ```bash
    kubectl get clusterasset -o yaml -l rafter.kyma-project.io/asset-group=prometheus-concepts
    ```

    The command lists details of the ClusterAsset created by the prometheus-concepts custom resource:

    ```yaml
    apiVersion: v1
    items:
    - apiVersion: rafter.kyma-project.io/v1beta1
      kind: ClusterAsset
      metadata:
        annotations:
          rafter.kyma-project.io/asset-short-name: docs
        creationTimestamp: "2019-05-15T13:27:11Z"
        finalizers:
        - deleteclusterasset.finalizers.rafter.kyma-project.io
        generation: 1
        labels:
          rafter.kyma-project.io/asset-group: prometheus-concepts
          rafter.kyma-project.io/type: markdown
        name: prometheus-concepts-docs-markdown-1b7mu6bmkmse4
        ownerReferences:
        - apiVersion: rafter.kyma-project.io/v1beta1
          blockOwnerDeletion: true
          controller: true
          kind: ClusterAssetGroup
          name: prometheus-concepts
          uid: 253c311b-7715-11e9-b241-1e5325edb3d6
        resourceVersion: "6785"
        selfLink: /apis/rafter.kyma-project.io/v1beta1/clusterassets/prometheus-concepts-docs-markdown-1b7mu6bmkmse4
        uid: 253eee7d-7715-11e9-b241-1e5325edb3d6
      spec:
        bucketRef:
          name: rafter-public-1b7mtf1de5ost
        source:
          filter: content/docs/concepts
          metadataWebhookService:
          - endpoint: /v1/extract
            filter: \.md$
            name: rafter-front-matter-service
            namespace: kyma-system
          mode: package
          url: https://github.com/prometheus/docs/archive/master.zip
      status:
        assetRef:
          baseUrl: https://storage.kyma.local/rafter-public-1b7mtf1de5ost-1b7mtf1h187r7/prometheus-concepts-docs-markdown-1b7mu6bmkmse4
          files:
          - metadata:
              sort_rank: 1
              title: Data model
            name: docs-master/content/docs/concepts/data_model.md
          - metadata:
              nav_icon: flask
              sort_rank: 2
              title: Concepts
            name: docs-master/content/docs/concepts/index.md
          - metadata:
              sort_rank: 3
              title: Jobs and instances
            name: docs-master/content/docs/concepts/jobs_instances.md
          - metadata:
              sort_rank: 2
              title: Metric types
            name: docs-master/content/docs/concepts/metric_types.md
        lastHeartbeatTime: "2019-05-15T13:27:24Z"
        message: Asset content has been uploaded
        observedGeneration: 1
        phase: Ready
        reason: Uploaded
    kind: List
    metadata:
      resourceVersion: ""
      selfLink: ""
    ```

    In the status section of the ClusterAsset, you can see the details of all documents, along with the baseUrl that points to their location in MinIO:

    ```yaml
    status:
      assetRef:
        baseUrl: https://storage.kyma.local/rafter-public-1b7mtf1de5ost-1b7mtf1h187r7/prometheus-concepts-docs-markdown-1b7mu6bmkmse4
        files:
        - metadata:
            sort_rank: 1
            title: Data model
          name: docs-master/content/docs/concepts/data_model.md
    ```
  3. Open the Console UI and navigate to the Documentation view. The new Prometheus section with Concepts and Guides topic groups and alphabetically ordered Markdown documents appears at the bottom of the documentation panel:

    Prometheus section in navigation

    NOTE: Since the source Markdown documents are prepared for different UIs and can contain custom tags, there can be issues with rendering their full content. If you prepare your own input, use our content guidelines to make sure the documents render properly in the Console UI.

Troubleshooting

If you apply the ClusterAssetGroup custom resource but its status stays Pending or shows Failed, check the status details.

This command lists details of the prometheus-concepts ClusterAssetGroup:

```bash
kubectl get clusterasset -o yaml -l rafter.kyma-project.io/asset-group=prometheus-concepts
```

See the status details sample:

```yaml
status:
  phase: Failed
  reason: ValidationFailed
  message: "The file is not valid against the provided json schema"
```

You can also analyze the logs of the Rafter Controller Manager:

```bash
kubectl -n kyma-system logs -l 'app.kubernetes.io/name=rafter-controller-manager'
```

Set MinIO to Gateway mode

By default, you install Kyma with Rafter in MinIO stand-alone mode. This tutorial shows how to set MinIO to Gateway mode on different cloud providers using an override.

CAUTION: The authentication and authorization measures required to edit the assets in the public cloud storage may differ from those used in Rafter. That's why we recommend using separate subscriptions for MinIO Gateway to ensure that you only have access to data created by Rafter, and to avoid compromising other public data.

CAUTION: Cloud providers offer different payment policies for their services, such as bucket storage or network traffic. To avoid unexpected costs, verify the payment policy with the given provider before you start using Gateway mode.

Prerequisites

  • Google Cloud Storage
  • Azure Blob Storage
  • AWS S3
  • Alibaba Cloud OSS

Steps

You can set MinIO to the given Gateway mode both during and after Kyma installation. In both cases, you need to create and configure an access key for your cloud provider account, apply a Secret and a ConfigMap with an override to a cluster or Minikube, and trigger the Kyma installation process. This tutorial shows how to switch to MinIO Gateway at runtime, when you already have Kyma installed locally or on a cluster.

CAUTION: Buckets created in MinIO without using Bucket CRs are not recreated or migrated while switching to MinIO Gateway mode.

Create required cloud resources

  • Google Cloud Storage
  • Azure Blob Storage
  • AWS S3
  • Alibaba Cloud OSS

Configure MinIO Gateway mode

  • Google Cloud Storage
  • Azure Blob Storage
  • AWS S3
  • Alibaba Cloud OSS

CAUTION: If you want to activate MinIO Gateway mode before you install Kyma, you need to manually add the ConfigMap and the Secret to the installer-config-local.yaml.tpl template located under the installation/resources subfolder before you run the installation script. In this case, you start from scratch, so add the ConfigMap without the following lines, which trigger the migration of the default buckets from MinIO to MinIO Gateway:

```yaml
controller-manager.minio.podAnnotations.persistence: "off"
upload-service.minio.podAnnotations.persistence: "off"
```
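For orientation, such an override ConfigMap boils down to a resource similar to this sketch. It assumes the standard Kyma override convention (the `installer: overrides` and `component: rafter` labels); the name is a placeholder, and the actual keys under `data` come from the provider-specific instructions above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rafter-overrides # Placeholder name.
  namespace: kyma-installer
  labels:
    installer: overrides # Marks this ConfigMap as a Kyma installation override.
    component: rafter    # Scopes the override to the Rafter component.
data: {} # Add the provider-specific Gateway keys here; when installing from
         # scratch, do NOT add the two persistence lines shown above.
```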

Trigger installation

Trigger Kyma installation or update it by labeling the Installation custom resource. Run:

```bash
kubectl -n default label installation/kyma-installation action=install
```

Troubleshooting

AssetGroup processing fails due to duplicated default buckets

It may happen that the processing of a (Cluster)AssetGroup CR fails due to too many buckets with the `rafter.kyma-project.io/access: public` label.

To fix this issue, manually remove all default buckets with the `rafter.kyma-project.io/access: public` label:

  1. Remove the cluster-wide default bucket:

    ```bash
    kubectl delete clusterbuckets.rafter.kyma-project.io --selector='rafter.kyma-project.io/access=public'
    ```
  2. Remove buckets from the Namespaces where you use them:

    ```bash
    kubectl delete buckets.rafter.kyma-project.io --selector='rafter.kyma-project.io/access=public' --namespace=default
    ```

    This allows the (Cluster)AssetGroup controller to recreate the default buckets successfully.

Upload Service returns "502 Bad Gateway"

It may happen that the Upload Service returns `502 Bad Gateway` with the `The specified bucket does not exist` message. This means that the bucket was removed or renamed.

If the bucket was removed, delete the ConfigMap:

```bash
kubectl -n kyma-system delete configmaps rafter-upload-service
```

If the bucket exists but with a different name, set a proper name in the ConfigMap:

```bash
kubectl -n kyma-system edit configmaps rafter-upload-service
```

After that, restart the Upload Service so that it either creates a new bucket or uses the renamed one:

```bash
kubectl -n kyma-system delete pod -l app.kubernetes.io/name=upload-service
```

Metrics

Rafter Controller Manager

To see the complete list of metrics that the Rafter Controller Manager exposes, run this command:

```bash
kubectl -n kyma-system port-forward svc/rafter-controller-manager 8080
```

To check the metrics, open a new terminal window and run:

```bash
curl http://localhost:8080/metrics
```

TIP: To use these commands, you must have a running Kyma cluster and kubectl installed. If you cannot access port 8080, redirect the metrics to another one. For example, run `kubectl -n kyma-system port-forward svc/rafter-controller-manager 3000:8080` and update the port in the localhost address.

See the Monitoring documentation to learn more about monitoring and metrics in Kyma.

AsyncAPI Service

This table shows the AsyncAPI Service custom metrics, their types, and descriptions.

| Name | Type | Description |
|------|------|-------------|
| `rafter_services_http_request_and_mutation_duration_seconds` | histogram | Specifies the number of assets that the service received for processing and mutated within a given time series. |
| `rafter_services_http_request_and_validation_duration_seconds` | histogram | Specifies the number of assets that the service received for processing and validated within a given time series. |
| `rafter_services_handle_mutation_status_code` | counter | Specifies the number of different HTTP response status codes in a given time series. |
| `rafter_services_handle_validation_status_code` | counter | Specifies the number of different HTTP response status codes in a given time series. |

Apart from the custom metrics, the AsyncAPI Service also exposes default Prometheus metrics for Go applications.

To see a complete list of metrics, run this command:

```bash
kubectl -n kyma-system port-forward svc/rafter-asyncapi-service 80
```

To check the metrics, open a new terminal window and run:

```bash
curl http://localhost:80/metrics
```

TIP: To use these commands, you must have a running Kyma cluster and kubectl installed. If you cannot access port 80, redirect the metrics to another one. For example, run `kubectl -n kyma-system port-forward svc/rafter-asyncapi-service 8080:80` and update the port in the localhost address.

See the Monitoring documentation to learn more about monitoring and metrics in Kyma.

Upload Service

This table shows the Upload Service custom metrics, their types, and descriptions.

| Name | Type | Description |
|------|------|-------------|
| `rafter_upload_service_http_request_duration_seconds` | histogram | Specifies the number of HTTP requests the service processes in a given time series. |
| `rafter_upload_service_http_request_returned_status_code` | counter | Specifies the number of different HTTP response status codes in a given time series. |

Apart from the custom metrics, the Upload Service also exposes default Prometheus metrics for Go applications.
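Once Prometheus scrapes the Upload Service, you can work with these metrics in PromQL. This hypothetical sketch charts the per-second rate of responses grouped by status code; the `status_code` label name is an assumption, so check the actual labels in the `/metrics` output:

```
sum by (status_code) (rate(rafter_upload_service_http_request_returned_status_code[5m]))
```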

To see a complete list of metrics, run this command:

```bash
kubectl -n kyma-system port-forward svc/rafter-upload-service 80
```

To check the metrics, open a new terminal window and run:

```bash
curl http://localhost:80/metrics
```

TIP: To use these commands, you must have a running Kyma cluster and kubectl installed. If you cannot access port 80, redirect the metrics to another one. For example, run `kubectl -n kyma-system port-forward svc/rafter-upload-service 8080:80` and update the port in the localhost address.

See the Monitoring documentation to learn more about monitoring and metrics in Kyma.

Front Matter Service

This table shows the Front Matter Service custom metrics, their types, and descriptions.

| Name | Type | Description |
|------|------|-------------|
| `rafter_front_matter_service_http_request_duration_seconds` | histogram | Specifies the number of HTTP requests the service processes in a given time series. |
| `rafter_front_matter_service_http_request_returned_status_code` | counter | Specifies the number of different HTTP response status codes in a given time series. |

Apart from the custom metrics, the Front Matter Service also exposes default Prometheus metrics for Go applications.

To see a complete list of metrics, run this command:

```bash
kubectl -n kyma-system port-forward svc/rafter-front-matter-service 80
```

To check the metrics, open a new terminal window and run:

```bash
curl http://localhost:80/metrics
```

TIP: To use these commands, you must have a running Kyma cluster and kubectl installed. If you cannot access port 80, redirect the metrics to another one. For example, run `kubectl -n kyma-system port-forward svc/rafter-front-matter-service 8080:80` and update the port in the localhost address.

See the Monitoring documentation to learn more about monitoring and metrics in Kyma.

MinIO

As an external, open-source file storage solution, MinIO exposes its own metrics. See the official documentation for details. Rafter comes with a preconfigured ServiceMonitor CR that enables Prometheus to scrape MinIO metrics. Using these metrics, you can create your own Grafana dashboard or reuse the dashboard that is already prepared.
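For orientation, a ServiceMonitor of this kind boils down to a resource similar to this sketch; the name, selector labels, and port name are placeholders, while the path is MinIO's standard Prometheus metrics endpoint:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rafter-minio # Placeholder name.
  namespace: kyma-system
spec:
  selector:
    matchLabels:
      app: minio # Placeholder; must match the labels of the MinIO Service.
  endpoints:
    - port: service # Placeholder; check the actual port name on the MinIO Service.
      path: /minio/prometheus/metrics
```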

Apart from the custom metrics, MinIO also exposes default Prometheus metrics for Go applications.

To see a complete list of metrics, run this command:

```bash
kubectl -n kyma-system port-forward svc/rafter-minio 9000
```

To check the metrics, open a new terminal window and run:

```bash
curl http://localhost:9000/minio/prometheus/metrics
```

TIP: To use these commands, you must have a running Kyma cluster and kubectl installed. If you cannot access port 9000, redirect the metrics to another one. For example, run `kubectl -n kyma-system port-forward svc/rafter-minio 8080:9000` and update the port in the localhost address.

See the Monitoring documentation to learn more about monitoring and metrics in Kyma.