Rafter
Overview
Rafter is a solution for storing and managing different types of public assets, such as documents, files, images, API specifications, and client-side applications. It uses an external solution, MinIO, for storing assets. The whole concept relies on Kubernetes custom resources (CRs) managed by the Asset, Bucket, and AssetGroup controllers (and their cluster-wide counterparts) grouped under the Rafter Controller Manager. These CRs include:
- Asset CR which manages a single asset or a package of assets
- Bucket CR which manages buckets in which these assets are stored
- AssetGroup CR which manages a group of Asset CRs of a specific type
Rafter enables you to manage assets using supported webhooks. For example, if you use Rafter to store a specification, you can additionally define a webhook service that Rafter should call before the file is sent to storage. The webhook service can:
- Validate the file
- Mutate the file
- Extract some of the file information and put it in the status of the custom resource
Rafter comes with the following set of services and extensions compatible with Rafter webhooks:
- Upload Service (optional service)
- AsyncAPI Service (extension)
- Front Matter Service (extension)
CAUTION: Rafter does not enforce any access control. To protect the confidentiality of your information, use Rafter only to store public data. Do not use it to process and store any kind of confidential information, including personal data.
What Rafter can be used for
- Rafter is based on CRs. Therefore, it is an extension of the Kubernetes API and should be used mainly by developers building their solutions on top of Kubernetes.
- Rafter is a file store that allows you to programmatically modify or validate files and/or extract their metadata before they go to storage. The content of those files can be fetched using an API. This is the basic functionality of the headless CMS concept. If you want to deploy an application to Kubernetes and enrich it with additional documentation or specifications, you can do it using Rafter.
- Rafter is an S3-like file store that also serves files written in HTML, CSS, and JS. This means that Rafter can be used as a hosting solution for client-side applications.
What Rafter is not
- Rafter is not a WordPress-like Content Management System.
- Rafter is not a solution for Enterprise Content Management.
- Rafter doesn't come with any out-of-the-box UI that allows you to modify or consume files managed by Rafter.
Benefits
This solution offers a number of benefits:
- It's flexible. You can use it for storing various types of assets, such as Markdown documents, ZIP, PNG, or JS files.
- It's scalable. It allows you to store assets on a production system, using cloud provider storage services. At the same time, you can apply it to local development and use MinIO to store assets on-premise.
- It allows you to avoid vendor lock-in. When using Rafter in a production system, you can seamlessly switch between different major service providers, such as AWS S3 or Azure Blob.
- It's location-independent. It allows you to expose files directly to the Internet and replicate them to different regions. This way, you can access them easily, regardless of your location.
Rafter in Kyma
Kyma provides a Kubernetes-based solution for managing content that relies on the custom resource (CR) extensibility feature and Rafter as a backend mechanism. This solution allows you to upload multiple and grouped data for a given documentation topic and store them as Asset CRs in external buckets located in MinIO storage. All you need to do is to specify topic details, such as documentation sources, in an AssetGroup CR or a ClusterAssetGroup CR and apply it to a given Namespace or cluster. The CR supports various documentation formats, including images, Markdown documents, AsyncAPI, OData, and OpenAPI specification files. You can upload them as single, direct file URLs and packed assets (ZIP or TAR).
The content management solution offers these benefits:
- It provides a unified way of uploading different document types to a Kyma cluster.
- It supports baked-in documentation. Apart from the default documentation, you can add your own and group it as you like, the same way you use micro frontends to personalize views in the Console UI. For example, you can add contextual help for a given Service Broker in the Service Catalog.
Architecture
Basic asset flow
This diagram shows a high-level overview of how Rafter works:
NOTE: This flow also applies to the cluster-wide counterparts of all CRs.
- The user creates a Bucket CR. This propagates the creation of buckets in MinIO Gateway where assets will be stored.
- The user creates an Asset CR that contains the reference to assets.
- Services implemented for Rafter webhooks optionally validate, mutate, or extract data from assets before uploading them into buckets.
- Rafter uploads assets into buckets in MinIO Gateway.
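The flow above can be expressed as two small manifests — a minimal sketch based on the sample CRs shown later in this document; the names and URL are placeholders:

```yaml
# 1. A bucket to store assets in; its creation propagates to MinIO Gateway.
apiVersion: rafter.kyma-project.io/v1beta1
kind: Bucket
metadata:
  name: my-bucket
  namespace: default
spec:
  region: us-east-1
  policy: readonly
---
# 2. An asset referencing a file to fetch and upload into that bucket.
apiVersion: rafter.kyma-project.io/v1beta1
kind: Asset
metadata:
  name: my-asset
  namespace: default
spec:
  source:
    mode: single
    url: https://example.com/docs/openapi.yaml
  bucketRef:
    name: my-bucket
```

Applying both with kubectl triggers the bucket creation and the asset upload described in the steps above.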
Read more about the role of main Rafter components and controllers that manage them:
Asset custom resource (CR) is an obligatory CR in which you define the asset you want to store in a given storage bucket. Its definition requires the asset name and mode, the name of the Namespace in which it is available, the address of its web location, and the name of the bucket in which you want to store it. Optionally, you can specify the validation and mutation requirements that the asset must meet before it is stored.
Asset Controller (AC) manages the Asset CR lifecycle.
AssetGroup custom resource (CR) orchestrates the creation of multiple Asset CRs in a given Namespace.
AssetGroup Controller (AGC) creates Asset CRs based on an AssetGroup CR definition. If the AssetGroup CR defines two sources of assets, such as asyncapi and markdown, the AGC creates two Asset CRs. The AGC also monitors the status of the Asset CRs defined in the appropriate AssetGroup CR and updates the status of the AssetGroup CR accordingly.
Bucket CR is an obligatory CR in which you define the name of the bucket for storing assets.
Bucket Controller (BC) manages the Bucket CR lifecycle.
Validation Service is an optional service which ensures that the asset meets the validation requirements specified in the Asset CR before uploading it to the bucket. The service returns the validation status to the AC. See the example of the AsyncAPI Service.
Mutation Service is an optional service which ensures that the asset is modified according to the mutation specification defined in the Asset CR before it is uploaded to the bucket. The service returns the modified asset to the AC. See the example of the AsyncAPI Service.
Extraction Service is an optional service which extracts metadata from assets. The metadata information is stored in the CR status. The service returns the asset metadata to the AC. See the example of the Front Matter Service.
MinIO Gateway is a MinIO cluster mode which is a production-scalable storage solution. It ensures flexibility of using asset storage services from major cloud providers, including Azure Blob Storage, Amazon S3, and Google Cloud Storage.
NOTE: All CRs and controllers have their cluster-wide counterparts, names of which start with the Cluster prefix, such as ClusterAssetGroup CR.
AssetGroup CR flow
This diagram provides more details of the AssetGroup CR flow, the controller that manages it, and underlying processes:
NOTE: This flow also applies to the ClusterAssetGroup CR.
- The user creates an AssetGroup CR in a given Namespace.
- The AssetGroup Controller (AGC) reads the AssetGroup CR definition.
- If the AssetGroup CR definition does not provide a reference name (bucketRef) of the Bucket CR, the AssetGroup Controller checks if the default Bucket CR already exists in this Namespace. If it does not exist yet, the AssetGroup Controller creates a new Bucket CR with:
  - the rafter-public-{ID} name, where {ID} is a randomly generated string, such as rafter-public-6n32wwj5vzq1k
  - the rafter.kyma-project.io/default: true label
  - the rafter.kyma-project.io/access: public label
- The AGC creates Asset CRs in the number corresponding to the number of sources specified in the AssetGroup CR. It adds rafter.kyma-project.io/type and rafter.kyma-project.io/asset-group labels to every Asset CR definition. It also adds the bucket name under the bucketRef field to every Asset CR definition.
- The AGC verifies if the Asset CRs are in the Ready phase and updates the status of the AssetGroup CR accordingly. It also adds the bucket reference name to the AssetGroup CR.
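For example, an AssetGroup with two sources could look as follows — a sketch using placeholder names and URLs, with the field names taken from the naming and labeling rules described in this document:

```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: AssetGroup
metadata:
  name: service-catalog
  namespace: default
spec:
  displayName: "Service Catalog"
  description: "Sample documentation topic"
  sources:
    # The AGC creates one Asset CR per source listed here.
    - type: asyncapi
      name: asyncapi
      mode: single
      url: https://example.com/specs/events.yaml
    - type: markdown
      name: docs
      mode: package
      url: https://example.com/docs/content.zip
      filter: /docs/
```

Applying this CR makes the AGC create two Asset CRs, one per source, each labeled with the source type and the AssetGroup name.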
Asset and Bucket CRs flow
This diagram provides an overview of the basic Asset and Bucket CRs flow and the role of particular components in this process:
- The user creates a bucket through a Bucket CR.
- The Bucket Controller (BC) listens for new events and acts upon receiving the Bucket CR creation event.
- The BC creates the bucket in the MinIO Gateway storage.
- The user creates an Asset CR which specifies the reference to the asset source location and the name of the bucket for storing the asset.
- The Asset Controller (AC) listens for new events and acts upon receiving the Asset CR creation event.
- The AC reads the CR definition, checks if the Bucket CR is available, and verifies if its name matches the bucket name referenced in the Asset CR. It also verifies if the Bucket CR is in the Ready phase.
- If the Bucket CR is available, the AC fetches the asset from the source location provided in the CR. If the asset is a ZIP or TAR file and the Asset CR's mode is set to package, the AC unpacks and optionally filters the asset before uploading it into the bucket.
- Optionally, the AC validates the asset, modifies it, or extracts its metadata if such a requirement is defined in the Asset CR. The AC communicates with the validation, mutation, and metadata services according to the specification defined in the Asset CR.
- The AC uploads the asset to MinIO Gateway, into the bucket specified in the Asset CR.
- The AC updates the status of the Asset CR with the storage location of the file in the bucket.
Details
Asset custom resource lifecycle
Learn about the lifecycle of the Asset custom resource (CR) and how its creation, removal, or a change in the bucket reference affects other Rafter components.
NOTE: This lifecycle also applies to the ClusterAsset CR.
Create an Asset CR
When you create an Asset CR, the Asset Controller (AC) receives a CR creation Event, reads the CR definition, verifies if the bucket exists, downloads the asset, unpacks it, and stores it in MinIO Gateway.
Remove an Asset CR
When you remove the Asset CR, the AC receives a CR deletion Event and deletes the asset content from MinIO Gateway.
Change the bucket reference
When you modify an Asset CR by updating the bucket reference in the Asset CR to a new one while the previous bucket still exists, the lifecycle starts again. The asset is created in a new storage location and this location is updated in the Asset CR.
Unfortunately, this causes data duplication as the assets from the previous bucket are not cleaned up by default. To avoid duplicating assets, first remove the previous Bucket CR and then modify the existing Asset CR with a new bucket reference.
Change the Asset CR specification
When you modify the Asset CR specification, the lifecycle starts again. The previous asset content is removed and no longer available.
Bucket custom resource lifecycle
Learn about the lifecycle of the Bucket custom resource (CR) and how its creation and removal affect other Rafter components.
NOTE: This lifecycle also applies to the ClusterBucket CR.
Create a Bucket CR
When you create a Bucket CR, the Bucket Controller (BC) receives a CR creation Event and creates a bucket with the name specified in the CR. It is created in the MinIO Gateway storage under the {CR_name}-{ID} location, such as test-bucket-1b19rnbuc6ir8, where {CR_name} is the name field from the Bucket CR and {ID} is a randomly generated string. The status of the CR contains a reference URL to the created bucket.
Remove a Bucket CR
When you remove the Bucket CR, the BC receives a CR deletion Event and removes the bucket with the whole content from MinIO Gateway.
The Asset Controller (AC) also monitors the status of the referenced bucket. The AC checks the Bucket CR status to make sure the bucket exists. If you delete the bucket, the AC receives information that the files are no longer accessible and the bucket was removed. The AC updates the status of the Asset CR to ready: False and removes the asset storage reference. The Asset CR is still available and you can use it later for a new bucket.
AssetGroup custom resource lifecycle
NOTE: This lifecycle also applies to the ClusterAssetGroup CR.
Asset CR manual changes
The AssetGroup custom resource (CR) coordinates Asset CR creation, deletion, and modification. The AssetGroup Controller (AGC) verifies the AssetGroup CR definition on a regular basis and creates, deletes, or modifies Asset CRs accordingly.
The AssetGroup CR acts as the single source of truth for the Asset CRs it orchestrates. If you modify or remove any of them manually, the AGC automatically overwrites such an Asset CR or updates it based on the AssetGroup CR definition.
AssetGroup CR and Asset CR dependencies
Asset CRs and AssetGroup CRs are also interdependent in terms of names, definitions, and statuses.
Names
The name of every Asset CR created by the AGC consists of these three elements:
- The name of the AssetGroup CR, such as service-catalog.
- The source type of the given asset in the AssetGroup CR, such as asyncapi.
- A randomly generated string, such as 1b38grj5vcu1l.
The full name of such an Asset CR follows the {assetGroup-name}-{asset-source}-{suffix} pattern, for example service-catalog-asyncapi-1b38grj5vcu1l.
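Under this naming scheme, an Asset CR name can be assembled with a simple string join — a minimal sketch, not Rafter's actual implementation; the suffix generator here is only an illustration:

```python
import random
import string

def asset_cr_name(asset_group, source_type, suffix=None):
    """Compose an Asset CR name following the {assetGroup-name}-{asset-source}-{suffix} pattern."""
    if suffix is None:
        # Illustrative suffix generator; the exact format used by the AGC may differ.
        suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=13))
    return f"{asset_group}-{source_type}-{suffix}"

print(asset_cr_name("service-catalog", "asyncapi", "1b38grj5vcu1l"))
# service-catalog-asyncapi-1b38grj5vcu1l
```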
Labels
There are two labels in every Asset CR created from AssetGroup CRs. Both of them are based on the AssetGroup CR definition:
- rafter.kyma-project.io/type equals the type parameter from the AssetGroup CR, such as asyncapi.
- rafter.kyma-project.io/asset-group equals the name metadata from the AssetGroup CR, such as service-catalog.
Statuses
The status of the AssetGroup CR depends heavily on the status phase of all Asset CRs it creates. It is:
- Ready when all related Asset CRs are already in the Ready phase.
- Pending when it awaits the confirmation that all related Asset CRs are in the Ready phase. If any Asset CR is in the Failed phase, the status of the AssetGroup CR remains Pending.
- Failed when processing of the AssetGroup CR fails. For example, the AssetGroup CR can fail if you provide incorrect or duplicated data in its specification.
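The aggregation rule above can be sketched as a small function — an illustration of the documented behavior, not the controller's actual code:

```python
def asset_group_phase(asset_phases):
    """Derive the AssetGroup CR phase from the phases of its Asset CRs.

    Per the rules above: Ready only when every Asset CR is Ready; otherwise
    the AssetGroup stays Pending (even if some Asset CR is Failed). A Failed
    AssetGroup phase is reserved for errors in processing the AssetGroup CR
    itself (e.g. an invalid specification), which is not modeled here.
    """
    if asset_phases and all(p == "Ready" for p in asset_phases):
        return "Ready"
    return "Pending"

print(asset_group_phase(["Ready", "Ready"]))   # Ready
print(asset_group_phase(["Ready", "Failed"]))  # Pending
```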
MinIO and MinIO Gateway
The whole concept of Rafter relies on MinIO as the storage solution. It supports Kyma's manifesto and the "batteries included" rule by providing you with this on-premise solution by default.
Depending on the usage scenario, you can:
- Use MinIO for local development.
- Store your assets on a production scale using MinIO in Gateway mode.
Rafter ensures that both usage scenarios work for Kyma, without additional configuration of the built-in controllers.
Development mode storage
MinIO is an open-source asset storage server with Amazon S3-compatible API. You can use it to store various types of assets, such as documents, files, or images.
In the context of Rafter, the Asset Controller stores all assets in MinIO, in dedicated storage space.
Production storage
For production purposes, Rafter uses MinIO Gateway which:
- Is a multi-cloud solution that offers the flexibility to choose a given cloud provider for the specific Kyma installation, including Azure, Amazon, and Google.
- Allows you to use various cloud providers that support data replication and CDN configuration.
- Is compatible with Amazon S3 APIs.
TIP: Using Gateway mode may generate additional costs for storing buckets, assets, or traffic in general. To avoid them, verify the payment policy with the given cloud provider before you switch to Gateway mode.
See this tutorial to learn how to set MinIO to Google Cloud Storage Gateway mode.
Access MinIO credentials
For security reasons, MinIO credentials are generated during Kyma installation and stored inside the Kubernetes Secret object. You can obtain both the access key and the secret key in the development (MinIO) and production (MinIO Gateway) mode using the same commands.
- To get the access key, run the command for your operating system:
  - macOS
  - Linux
- To get the secret key, run the command for your operating system:
  - macOS
  - Linux
You can also set MinIO credentials directly using values.yaml files. For more details, see the official MinIO documentation.
Supported webhooks
Types
Rafter supports the following types of webhooks:
- Mutation webhook modifies fetched assets before the Asset Controller uploads them into the bucket. For example, this can mean asset rewriting through the regex or key-value operation, or a modification in the JSON specification. The mutation webhook service must return modified files to the Asset Controller.
- Validation webhook validates fetched assets before the Asset Controller uploads them into the bucket. It can be a list of several different validation webhooks that process assets even if one of them fails. It can refer either to the validation of a specific file against a specification or to security validation. The validation webhook service must return the validation status when the validation completes.
- Metadata webhook allows you to extract metadata from assets and insert it under the status.assetRef.files.metadata field in the (Cluster)Asset CR. For example, the Asset Metadata Service, which is the metadata webhook implementation in Kyma, extracts front matter metadata from .md files and returns the status with such information as title and type.
Service specification requirements
If you create a specific mutation, validation, or metadata service for the available webhooks and you want Rafter to properly communicate with it, you must ensure that the API exposed by the given service meets the API contract requirements. These criteria differ depending on the webhook type:
NOTE: Services are described in the order in which Rafter processes them.
The mutation service must expose endpoints that:
- accept parameters and content properties.
- return the 200 response with new file content.
- return the 304 response informing that the file content was not modified.
The validation service must expose endpoints that:
- accept parameters and content properties.
- return the 200 response confirming that validation succeeded.
- return the 422 response informing why validation failed.
The metadata service must expose endpoints that:
- pass file data in the "object": "string" format in the request body, where object stands for the file name and string is the file content.
- return the 200 response with extracted metadata.
See the example of an API specification with the /convert, /validate, and /extract endpoints.
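The validation contract can be illustrated with a minimal in-process handler — a hypothetical example of the request and response shapes, not one of Rafter's shipped services; the "empty file" check is a placeholder for real validation logic:

```python
import json

def handle_validate(request_body):
    """Hypothetical validation endpoint: the body maps file names to file content.

    Returns (200, ...) when every file passes, or (422, reasons) explaining
    why validation failed, mirroring the contract described above.
    """
    files = json.loads(request_body)
    # Placeholder rule: reject files with no content.
    errors = {name: "file is empty" for name, content in files.items() if not content.strip()}
    if errors:
        return 422, json.dumps(errors)
    return 200, json.dumps({"status": "valid"})

status, _ = handle_validate(b'{"spec.yaml": "asyncapi: 2.0.0"}')
print(status)  # 200
status, _ = handle_validate(b'{"spec.yaml": ""}')
print(status)  # 422
```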
Upload Service
The Upload Service is an HTTP server that exposes the file upload functionality for MinIO. It contains a simple HTTP endpoint which accepts multipart/form-data forms. It uploads files to public system buckets.
The main purpose of the service is to provide a solution for hosting static files for components that use Rafter, such as the Application Connector. You can also use the Upload Service for development purposes to host files for Rafter, without the need to rely on external providers.
System buckets
The Upload Service creates a system-public-{generated-suffix} system bucket, where {generated-suffix} is a Unix nano timestamp in the base-32 number system. The public bucket has a read-only policy specified.
To enable service scaling and to maintain the bucket configuration data between application restarts, the Upload Service stores its configuration in the rafter-upload-service ConfigMap.
Once you upload the files, system buckets store them permanently. There is no policy to clean system buckets periodically.
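The generated suffix can be reproduced in a few lines — a sketch of the described scheme (a Unix nano timestamp rendered in base 32), not necessarily byte-identical to the service's own encoding:

```python
import time

# Digits for base-32 rendering: 0-9 followed by a-v.
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"[:32]

def to_base32(n):
    """Render a non-negative integer in base 32."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 32)
        out.append(DIGITS[r])
    return "".join(reversed(out))

# A bucket name suffix derived from the current Unix nano timestamp.
suffix = to_base32(time.time_ns())
print(f"system-public-{suffix}")
```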
The diagram describes the Upload Service flow:
Use the service outside the Kyma cluster
You can expose the service for development purposes. To use the Upload Service on a local machine, run the following command:
kubectl port-forward deployment/rafter-upload-svc 3000:3000 -n kyma-system
You can access the service on port 3000.
Upload files
To upload files, send the multipart form POST request to the /v1/upload endpoint. The endpoint recognizes the following field names:
- public that is an array of files to upload to a public system bucket.
- directory that is an optional directory for storing the uploaded files. If you do not specify it, the service creates a directory with a random name. If the directory and files already exist, the service overwrites them.
To do the multipart request using curl, run the following command:
curl -v -F directory='example' -F public=@sample.md -F public=@text-file.md -F public=@archive.zip http://localhost:3000/v1/upload
The result is as follows:
{
  "uploadedFiles": [
    {
      "fileName": "text-file.md",
      "remotePath": "{STORAGE_ADDRESS}/public-1b0sjap35m9o0/example/text-file.md",
      "bucket": "public-1b0sjap35m9o0",
      "size": 212
    },
    {
      "fileName": "archive.zip",
      "remotePath": "{STORAGE_ADDRESS}/public-1b0sjaq6t6jr8/example/archive.zip",
      "bucket": "public-1b0sjaq6t6jr8",
      "size": 630
    },
    {
      "fileName": "sample.md",
      "remotePath": "{STORAGE_ADDRESS}/public-1b0sjap35m9o0/example/sample.md",
      "bucket": "public-1b0sjap35m9o0",
      "size": 4414
    }
  ]
}
See the OpenAPI specification for the full API documentation.
AsyncAPI Service
AsyncAPI Service is an HTTP server enabled by default in Kyma to process AsyncAPI specifications. It only accepts multipart/form-data forms and contains two endpoints:
- /validate that validates the AsyncAPI specification against the AsyncAPI schema in version 2.0.0. AsyncAPI Service uses the AsyncAPI Parser for this purpose.
- /convert that converts the version and format of the AsyncAPI files. The service uses the AsyncAPI Converter to change the AsyncAPI specifications from older versions to version 2.0.0, and to convert any YAML input files to the JSON format that is required to render the specifications in the Console UI.
See the asyncapi-service-openapi.yaml file for the full OpenAPI specification of the service.
Front Matter Service
The Front Matter Service is an HTTP server that exposes the functionality for extracting metadata from files. It contains a simple HTTP endpoint which accepts multipart/form-data forms. The service extracts front matter YAML metadata from text files of all extensions.
The main purpose of the service is to provide metadata extraction for Rafter controllers. That's why it is only available inside the cluster. To use it, define metadataWebhookService in Asset and ClusterAsset custom resources.
Front matter metadata
Front matter YAML metadata are YAML properties added at the beginning of a file, between --- lines. The following snippet represents an exemplary Markdown file with metadata specified:
---
title: Example document title
description: Example page description
order: 3
array:
 - foo
 - bar
---
## Lorem ipsum
Dolores sit amet
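Extracting such front matter boils down to splitting on the --- delimiters and parsing the YAML between them. Here is a minimal standard-library sketch that handles only flat key: value pairs — a real extractor (like the Front Matter Service) would use a full YAML parser:

```python
def extract_front_matter(text):
    """Parse flat `key: value` front matter between leading --- lines."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of the front matter block
        # Skip YAML list items; only flat key: value pairs are handled here.
        if ":" in line and not line.lstrip().startswith("-"):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

doc = """---
title: Example document title
description: Example page description
order: 3
---
## Lorem ipsum
"""
print(extract_front_matter(doc)["title"])  # Example document title
```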
Use the service outside the Kyma cluster
You can expose the service for development purposes. To use the Front Matter Service on a local machine, run the following command:
kubectl port-forward deployment/rafter-front-matter-svc 3000:3000 -n kyma-system
You can access the service on port 3000.
Metadata files
To extract metadata from files, send the multipart form POST request to the /v1/extract endpoint. Specify the relative or absolute path to the file as a field name.
To do the multipart request using curl, run the following command:
curl -v -F foo/foo.md=@foo.md -F bar/bar.yaml=@bar.yaml http://localhost:3000/v1/extract
The result is as follows:
{
  "data": [
    {
      "filePath": "foo/foo.md",
      "metadata": {
        "no": 3,
        "title": "Access logs",
        "type": "Details"
      }
    },
    {
      "filePath": "bar/bar.yaml",
      "metadata": {
        "number": 9,
        "title": "Hello world",
        "url": "https://kyma-project.io"
      }
    }
  ]
}
See the OpenAPI specification for the full API documentation.
Configuration
Rafter chart
To configure the Rafter chart, override the default values of its values.yaml file. This document describes parameters that you can configure.
TIP: To learn more about how to use overrides in Kyma, see the following documents:
Configurable parameters
This table lists the configurable parameters, their descriptions, and default values.
NOTE: You can define all envs either by providing them as inline values or using the valueFrom object. See the example for reference.
Parameter | Description | Default value |
---|---|---|
controller-manager.minio.persistence.enabled | Parameter that enables MinIO persistence. Deactivate it only if you use Gateway mode. | true |
controller-manager.minio.environment.MINIO_BROWSER | Parameter that enables browsing MinIO storage. By default, the MinIO browser is turned off for security reasons. You can change the value to on to use the browser. If you enable the browser, it is available at https://storage.{DOMAIN}/minio/ , for example at https://storage.kyma.local/minio/ . | "off" |
controller-manager.minio.resources.requests.memory | Requests for memory resources. | 32Mi |
controller-manager.minio.resources.requests.cpu | Requests for CPU resources. | 10m |
controller-manager.minio.resources.limits.memory | Limits for memory resources. | 128Mi |
controller-manager.minio.resources.limits.cpu | Limits for CPU resources. | 100m |
controller-manager.deployment.replicas | Number of service replicas. | 1 |
controller-manager.pod.resources.limits.cpu | Limits for CPU resources. | 150m |
controller-manager.pod.resources.limits.memory | Limits for memory resources. | 128Mi |
controller-manager.pod.resources.requests.cpu | Requests for CPU resources. | 10m |
controller-manager.pod.resources.requests.memory | Requests for memory resources. | 32Mi |
controller-manager.envs.clusterAssetGroup.relistInterval | Time intervals in which the Rafter Controller Manager verifies the ClusterAssetGroup for changes. | 5m |
controller-manager.envs.assetGroup.relistInterval | Time intervals in which the Rafter Controller Manager verifies the AssetGroup for changes. | 5m |
controller-manager.envs.clusterBucket.region | Regional location of the ClusterBucket in a given cloud storage. Use one of the available regions. | us-east-1 |
controller-manager.envs.bucket.region | Regional location of the bucket in a given cloud storage. Use one of the available regions. | us-east-1 |
controller-manager.envs.clusterBucket.maxConcurrentReconciles | Maximum number of cluster bucket reconciles that can run concurrently. | 1 |
controller-manager.envs.bucket.maxConcurrentReconciles | Maximum number of bucket reconciles that can run concurrently. | 1 |
controller-manager.envs.clusterAsset.maxConcurrentReconciles | Maximum number of cluster asset reconciles that can run concurrently. | 1 |
controller-manager.envs.asset.maxConcurrentReconciles | Maximum number of asset reconciles that can run concurrently. | 1 |
controller-manager.minio.secretKey | Secret key. Add the parameter to set your own secret key credentials. | By default, secretKey is automatically generated. |
controller-manager.minio.accessKey | Access key. Add the parameter to set your own access key credentials. | By default, accessKey is automatically generated. |
controller-manager.envs.store.uploadWorkers | Number of workers used in parallel to upload files to the storage bucket. | 10 |
controller-manager.envs.webhooks.validation.workers | Number of workers used in parallel to validate files. | 10 |
controller-manager.envs.webhooks.mutation.workers | Number of workers used in parallel to mutate files. | 10 |
upload-service.deployment.replicas | Number of service replicas. | 1 |
upload-service.envs.verbose | If set to true, you enable the extended logging mode that records more information on Upload Service activities than the usual logging mode which registers only errors and warnings. | true |
front-matter-service.deployment.replicas | Number of service replicas. For more details, see the Kubernetes documentation. | 1 |
front-matter-service.envs.verbose | If set to true , you enable the extended logging mode that records more information on Front Matter Service activities than the usual logging mode which registers only errors and warnings. | true |
asyncapi-service.deployment.replicas | Number of service replicas. | 1 |
asyncapi-service.envs.verbose | If set to true , you enable the extended logging mode that records more information on AsyncAPI Service activities than the usual logging mode which registers only errors and warnings. | true |
Custom Resource
Asset
The assets.rafter.kyma-project.io CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define an asset to store in a cloud storage bucket. To get the up-to-date CRD and show the output in the YAML format, run this command:
kubectl get crd assets.rafter.kyma-project.io -o yaml
Sample custom resource
This is a sample Asset CR configuration that contains mutation, validation, and metadata services:
apiVersion: rafter.kyma-project.io/v1beta1
kind: Asset
metadata:
  name: my-package-assets
  namespace: default
spec:
  source:
    mode: single
    parameters:
      disableRelativeLinks: "true"
    url: https://some.domain.com/main.js
    mutationWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/mutate"
        filter: \.js$
        parameters:
          rewrite: keyvalue
          pattern: \json|yaml
          data:
            basePath: /test/v2
    validationWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/validate"
        filter: \.js$
    metadataWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/extract"
        filter: \.js$
  bucketRef:
    name: my-bucket
  displayName: "Operations svc"
status:
  phase: Ready
  reason: Uploaded
  message: Asset content has been uploaded
  lastHeartbeatTime: "2018-01-03T07:38:24Z"
  observedGeneration: 1
  assetRef:
    baseUrl: https://{STORAGE_ADDRESS}/my-bucket-1b19rnbuc6ir8/my-package-assets
    files:
      - metadata:
          title: Overview
        name: README.md
      - metadata:
          title: Benefits of distributed storage
          type: Details
        name: directory/subdirectory/file.md
Custom resource parameters
This table lists all possible parameters of a given resource together with their descriptions:
Parameter | Required | Description |
---|---|---|
metadata.name | Yes | Specifies the name of the CR. |
metadata.namespace | Yes | Defines the Namespace in which the CR is available. |
spec.source.mode | Yes | Specifies if the asset consists of one file or a set of compressed files in the ZIP or TAR formats. Use single for one file and package for a set of files. |
spec.source.parameters | No | Specifies a set of parameters for the Asset. For example, use it to define what to render, disable, or modify in the UI. Define it in a valid YAML or JSON format. |
spec.source.url | Yes | Specifies the location of the file. |
spec.source.filter | No | Specifies the regex pattern used to select files to store from the package. |
spec.source.validationWebhookService | No | Provides specification of the validation webhook services. |
spec.source.validationWebhookService.name | Yes | Provides the name of the validation webhook service. |
spec.source.validationWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
spec.source.validationWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
spec.source.validationWebhookService.parameters | No | Provides detailed parameters specific for a given validation service and its functionality. |
spec.source.validationWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
spec.source.mutationWebhookService | No | Provides specification of the mutation webhook services. |
spec.source.mutationWebhookService.name | Yes | Provides the name of the mutation webhook service. |
spec.source.mutationWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
spec.source.mutationWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
spec.source.mutationWebhookService.parameters | No | Provides detailed parameters specific for a given mutation service and its functionality. |
spec.source.mutationWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
spec.source.metadataWebhookService | No | Provides specification of the metadata webhook services. |
spec.source.metadataWebhookService.name | Yes | Provides the name of the metadata webhook service. |
spec.source.metadataWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
spec.source.metadataWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
spec.source.metadataWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
spec.bucketRef.name | Yes | Provides the name of the bucket for storing the asset. |
spec.displayName | No | Specifies a human-readable name of the asset. |
status.phase | Not applicable | The Asset Controller adds it to the Asset CR. It describes the status of processing the Asset CR by the Asset Controller. It can be `Ready`, `Failed`, or `Pending`. |
status.reason | Not applicable | Provides the reason why the Asset CR processing failed or is pending. See the Reasons section for the full list of possible status reasons and their descriptions. |
status.message | Not applicable | Describes a human-readable message on the CR processing progress, success, or failure. |
status.lastHeartbeatTime | Not applicable | Specifies when the Asset Controller last processed the Asset CR. |
status.observedGeneration | Not applicable | Specifies the most recent Asset CR generation that the Asset Controller observed. |
status.assetRef | Not applicable | Provides details on the location of the assets stored in the bucket. |
status.assetRef.files | Not applicable | Provides metadata and the relative path in the storage bucket for each stored asset. |
status.assetRef.files.metadata | Not applicable | Lists metadata extracted from the asset. |
status.assetRef.files.name | Not applicable | Specifies the relative path to the given asset in the storage bucket. |
status.assetRef.baseUrl | Not applicable | Specifies the absolute path to the location of the assets in the storage bucket. |
NOTE: The Asset Controller automatically adds all parameters marked as Not applicable to the Asset CR.
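As the table shows, the download URL of a stored file is the **status.assetRef.baseUrl** joined with the file's relative **name**. A minimal sketch with placeholder values modeled on the sample CR above (the real base URL depends on your storage address):

```shell
# Compose the download URL of one stored file from the Asset CR status.
# BASE_URL and FILE_NAME are placeholders; read the real values with, for example:
#   kubectl get asset my-package-assets -o jsonpath='{.status.assetRef.baseUrl}'
BASE_URL="https://storage.example.com/my-bucket-1b19rnbuc6ir8/my-package-assets"
FILE_NAME="directory/subdirectory/file.md"
echo "${BASE_URL}/${FILE_NAME}"
```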
TIP: Asset CRs have an additional `configmap` mode that allows you to refer to asset sources stored in ConfigMaps. If you use this mode, set the `url` parameter to `{namespace}/{configMap-name}`, for example `url: default/sample-configmap`. This mode is not enabled in Kyma. To check how it works, see Rafter tutorials for examples.
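To make the `configmap` mode concrete, this is a minimal sketch of an Asset CR that pulls its source from a ConfigMap; the ConfigMap and bucket names are hypothetical:

```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: Asset
metadata:
  name: configmap-asset
  namespace: default
spec:
  source:
    mode: configmap
    # In configmap mode, url takes the {namespace}/{configMap-name} form
    # instead of an HTTP address.
    url: default/sample-configmap
  bucketRef:
    name: my-bucket
```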
Status reasons
Processing of an Asset CR can succeed, continue, or fail for one of these reasons:
Reason | Phase | Description |
---|---|---|
Pulled | Pending | The Asset Controller pulled the asset content for processing. |
PullingFailed | Failed | Asset content pulling failed due to the provided error. |
Uploaded | Ready | The Asset Controller uploaded the asset content to MinIO. |
UploadFailed | Failed | Asset content uploading failed due to the provided error. |
BucketNotReady | Pending | The referenced bucket is not ready. |
BucketError | Failed | Reading the bucket status failed due to the provided error. |
Mutated | Pending | Mutation services changed the asset content. |
MutationFailed | Failed | Asset mutation failed for one of the provided reasons. |
MutationError | Failed | Asset mutation failed due to the provided error. |
MetadataExtracted | Pending | Metadata services extracted metadata from the asset content. |
MetadataExtractionFailed | Failed | Metadata extraction failed due to the provided error. |
Validated | Pending | Validation services validated the asset content. |
ValidationFailed | Failed | Asset validation failed for one of the provided reasons. |
ValidationError | Failed | Asset validation failed due to the provided error. |
MissingContent | Failed | There is missing asset content in the cloud storage bucket. |
RemoteContentVerificationError | Failed | Asset content verification in the cloud storage bucket failed due to the provided error. |
CleanupError | Failed | The Asset Controller failed to remove the old asset content due to the provided error. |
Cleaned | Pending | The Asset Controller removed the old asset content that was modified. |
Scheduled | Pending | The asset you added is scheduled for processing. |
Related resources and components
These are the resources related to this CR:
Custom resource | Description |
---|---|
Bucket | The Asset CR uses the name of the bucket specified in the definition of the Bucket CR. |
These components use this CR:
Component | Description |
---|---|
Rafter | Uses the Asset CR for the detailed asset definition, including its location and the name of the bucket in which it is stored. |
AssetGroup
The `assetgroups.rafter.kyma-project.io` CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define an orchestrator that creates Asset CRs for a specific asset type. To get the up-to-date CRD and show the output in the YAML format, run this command:
```bash
kubectl get crd assetgroups.rafter.kyma-project.io -o yaml
```
Sample custom resource
This is a sample AssetGroup custom resource (CR) that provides details of the Asset CRs for the markdown, asyncapi, and openapi source types.
```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: AssetGroup
metadata:
  name: slack
  labels:
    rafter.kyma-project.io/view-context: service-catalog
    rafter.kyma-project.io/group-name: components
    rafter.kyma-project.io/order: "6"
spec:
  displayName: Slack
  description: "Slack documentation"
  bucketRef:
    name: test-bucket
  sources:
    - type: markdown
      name: markdown-slack
      mode: single
      parameters:
        disableRelativeLinks: "true"
      url: https://raw.githubusercontent.com/slackapi/slack-api-specs/master/README.md
    - type: asyncapi
      displayName: "Slack"
      name: asyncapi-slack
      mode: single
      url: https://raw.githubusercontent.com/slackapi/slack-api-specs/master/events-api/slack_events_api_async_v1.json
    - type: openapi
      name: openapi-slack
      mode: single
      url: https://raw.githubusercontent.com/slackapi/slack-api-specs/master/web-api/slack_web_openapi_v2.json
status:
  lastHeartbeatTime: "2019-03-18T13:42:55Z"
  message: Assets are ready to use
  phase: Ready
  reason: AssetsReady
```
Custom resource parameters
This table lists all possible parameters of a given resource together with their descriptions:
Parameter | Required | Description |
---|---|---|
metadata.name | Yes | Specifies the name of the CR. It also defines the rafter.kyma-project.io/asset-group label added to the Asset CR that the AssetGroup CR defines. Because of label name limitations, AssetGroup CR names can have a maximum length of 63 characters. |
metadata.labels | No | Specifies how to filter and group Asset CRs that the AssetGroup CR defines. See the details to learn more about these labels. |
spec.displayName | Yes | Specifies a human-readable name of the AssetGroup CR. |
spec.description | Yes | Provides more details on the purpose of the AssetGroup CR. |
spec.bucketRef.name | No | Specifies the name of the bucket that stores the assets from the AssetGroup. |
spec.sources | Yes | Defines the type of the asset and the rafter.kyma-project.io/type label added to the Asset CR. |
spec.sources.type | Yes | Specifies the type of assets included in the AssetGroup CR. |
spec.sources.displayName | No | Specifies a human-readable name of the asset. |
spec.sources.name | Yes | Defines an identifier of a given asset. It must be unique if there is more than one asset of a given type in the AssetGroup CR. |
spec.sources.mode | Yes | Specifies if the asset consists of one file or a set of compressed files in the ZIP or TAR format. Use single for one file and package for a set of files. |
spec.sources.parameters | No | Specifies a set of parameters for the asset. For example, use it to define what to render, disable, or modify in the UI. Define it in a valid YAML or JSON format. |
spec.sources.url | Yes | Specifies the location of a single file or a package. |
spec.sources.filter | No | Specifies a set of assets from the package to upload. The regex used in the filter must be RE2-compliant. |
status.lastHeartbeatTime | Not applicable | Specifies when the AssetGroup Controller last processed the AssetGroup CR. |
status.message | Not applicable | Describes a human-readable message on the CR processing progress, success, or failure. |
status.phase | Not applicable | The AssetGroup Controller adds it to the AssetGroup CR. It describes the status of processing the AssetGroup CR by the AssetGroup Controller. It can be `Ready`, `Pending`, or `Failed`. |
status.reason | Not applicable | Provides the reason why the AssetGroup CR processing succeeded, is pending, or failed. See the Reasons section for the full list of possible status reasons and their descriptions. |
NOTE: The AssetGroup Controller automatically adds all parameters marked as Not applicable to the AssetGroup CR.
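The `spec.sources` fields above combine in practice as follows. This is a hedged sketch of a `package`-mode source whose RE2-compliant filter keeps only Markdown files; the URL and names are illustrative:

```yaml
spec:
  sources:
    - type: markdown
      name: docs
      mode: package
      url: https://example.com/docs.zip
      # RE2-compliant regex: store only Markdown files from the package
      filter: \.md$
```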
TIP: AssetGroup CRs have an additional `configmap` mode that allows you to refer to asset sources stored in ConfigMaps. If you use this mode, set the `url` parameter to `{namespace}/{configMap-name}`, for example `url: default/sample-configmap`. This mode is not enabled in Kyma. To check how it works, see Rafter tutorials for examples.
Status reasons
Processing of an AssetGroup CR can succeed, continue, or fail for one of these reasons:
Reason | Phase | Description |
---|---|---|
AssetCreated | Pending | The AssetGroup Controller created the specified asset. |
AssetCreationFailed | Failed | The AssetGroup Controller couldn't create the specified asset due to an error. |
AssetsCreationFailed | Failed | The AssetGroup Controller couldn't create assets due to an error. |
AssetsListingFailed | Failed | The AssetGroup Controller couldn't list assets due to an error. |
AssetDeleted | Pending | The AssetGroup Controller deleted specified assets. |
AssetDeletionFailed | Failed | The AssetGroup Controller couldn't delete the specified asset due to an error. |
AssetsDeletionFailed | Failed | The AssetGroup Controller couldn't delete assets due to an error. |
AssetUpdated | Pending | The AssetGroup Controller updated the specified asset. |
AssetUpdateFailed | Failed | The AssetGroup Controller couldn't update the specified asset due to an error. |
AssetsUpdateFailed | Failed | The AssetGroup Controller couldn't update assets due to an error. |
AssetsReady | Ready | Assets are ready to use. |
WaitingForAssets | Pending | Waiting for assets to be in the Ready status phase. |
BucketError | Failed | Bucket verification failed due to an error. |
AssetsWebhookGetFailed | Failed | The AssetGroup Controller failed to obtain proper webhook configuration. |
AssetsSpecValidationFailed | Failed | Asset specification is invalid due to an error. |
Related resources and components
These are the resources related to this CR:
Custom resource | Description |
---|---|
Asset | The AssetGroup CR orchestrates the creation of the Asset CR and defines its content. |
These components use this CR:
Component | Description |
---|---|
Rafter | Manages Asset CRs created based on the definition in the AssetGroup CR. |
Bucket
The `buckets.rafter.kyma-project.io` CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define the name of the cloud storage bucket for storing assets. To get the up-to-date CRD and show the output in the YAML format, run this command:
```bash
kubectl get crd buckets.rafter.kyma-project.io -o yaml
```
Sample custom resource
This is a sample resource that defines the storage bucket configuration.
```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: Bucket
metadata:
  name: test-sample
  namespace: default
spec:
  region: "us-east-1"
  policy: readonly
status:
  lastHeartbeatTime: "2019-02-04T11:50:26Z"
  message: Bucket policy has been updated
  phase: Ready
  reason: BucketPolicyUpdated
  remoteName: test-sample-1b19rnbuc6ir8
  observedGeneration: 1
  url: https://{STORAGE_ADDRESS}/test-sample-1b19rnbuc6ir8
```
Custom resource parameters
This table lists all possible parameters of a given resource together with their descriptions:
Parameter | Required | Description |
---|---|---|
metadata.name | Yes | Specifies the name of the CR which is also used to generate the name of the bucket in the bucket storage. |
metadata.namespace | Yes | Specifies the Namespace in which the CR is available. |
spec.region | No | Specifies the location of the region under which the Bucket Controller creates the bucket. If the field is empty, the Bucket Controller creates the bucket under the default location. |
spec.policy | No | Specifies the type of bucket access. Use `none`, `readonly`, `writeonly`, or `readwrite`. |
status.lastHeartbeatTime | Not applicable | Specifies when the Bucket Controller last processed the Bucket CR. |
status.message | Not applicable | Describes a human-readable message on the CR processing success or failure. |
status.phase | Not applicable | The Bucket Controller automatically adds it to the Bucket CR. It describes the status of processing the Bucket CR by the Bucket Controller. It can be `Ready` or `Failed`. |
status.reason | Not applicable | Provides information on the Bucket CR processing success or failure. See the Reasons section for the full list of possible status reasons and their descriptions. |
status.url | Not applicable | Provides the address of the bucket storage under which the asset is available. |
status.remoteName | Not applicable | Provides the name of the bucket in the storage. |
status.observedGeneration | Not applicable | Specifies the most recent Bucket CR generation that the Bucket Controller observed. |
NOTE: The Bucket Controller automatically adds all parameters marked as Not applicable to the Bucket CR.
Status reasons
Processing of a Bucket CR can succeed, continue, or fail for one of these reasons:
Reason | Phase | Description |
---|---|---|
BucketCreated | Pending | The bucket was created. |
BucketNotFound | Failed | The specified bucket doesn't exist anymore. |
BucketCreationFailure | Failed | The bucket couldn't be created due to an error. |
BucketVerificationFailure | Failed | The bucket couldn't be verified due to an error. |
BucketPolicyUpdated | Ready | The policy specifying bucket protection settings was updated. |
BucketPolicyUpdateFailed | Failed | The policy specifying bucket protection settings couldn't be set due to an error. |
BucketPolicyVerificationFailed | Failed | The policy specifying bucket protection settings couldn't be verified due to an error. |
BucketPolicyHasBeenChanged | Ready | The policy specifying cloud storage bucket protection settings was changed. |
Related resources and components
These are the resources related to this CR:
Custom resource | Description |
---|---|
Asset | Provides the name of the storage bucket which the Asset CR refers to. |
These components use this CR:
Component | Description |
---|---|
Rafter | Uses the Bucket CR for the storage bucket definition. |
ClusterAsset
The `clusterassets.rafter.kyma-project.io` CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define an asset to store in a cloud storage bucket. To get the up-to-date CRD and show the output in the YAML format, run this command:
```bash
kubectl get crd clusterassets.rafter.kyma-project.io -o yaml
```
Sample custom resource
This is a sample ClusterAsset CR configuration that contains mutation, validation, and metadata services:
```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: ClusterAsset
metadata:
  name: my-package-assets
spec:
  source:
    mode: single
    parameters:
      disableRelativeLinks: "true"
    url: https://some.domain.com/main.js
    mutationWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/mutate"
        filter: \.js$
        parameters:
          rewrite: keyvalue
          pattern: \json|yaml
          data:
            basePath: /test/v2
    validationWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/validate"
        filter: \.js$
    metadataWebhookService:
      - name: swagger-operations-svc
        namespace: default
        endpoint: "/extract"
        filter: \.js$
  bucketRef:
    name: my-bucket
  displayName: "Operations svc"
status:
  phase: Ready
  reason: Uploaded
  message: Asset content has been uploaded
  lastHeartbeatTime: "2018-01-03T07:38:24Z"
  observedGeneration: 1
  assetRef:
    baseUrl: https://{STORAGE_ADDRESS}/my-bucket-1b19rnbuc6ir8/my-package-assets
    files:
      - metadata:
          title: Overview
        name: README.md
      - metadata:
          title: Benefits of distributed storage
          type: Details
        name: directory/subdirectory/file.md
```
Custom resource parameters
This table lists all possible parameters of a given resource together with their descriptions:
Parameter | Required | Description |
---|---|---|
metadata.name | Yes | Specifies the name of the CR. |
spec.source.mode | Yes | Specifies if the asset consists of one file or a set of compressed files in the ZIP or TAR formats. Use single for one file and package for a set of files. |
spec.source.parameters | No | Specifies a set of parameters for the ClusterAsset. For example, use it to define what to render, disable, or modify in the UI. Define it in a valid YAML or JSON format. |
spec.source.url | Yes | Specifies the location of the file. |
spec.source.filter | No | Specifies the regex pattern used to select files to store from the package. |
spec.source.validationWebhookService | No | Provides specification of the validation webhook services. |
spec.source.validationWebhookService.name | Yes | Provides the name of the validation webhook service. |
spec.source.validationWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
spec.source.validationWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
spec.source.validationWebhookService.parameters | No | Provides detailed parameters specific for a given validation service and its functionality. |
spec.source.validationWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
spec.source.mutationWebhookService | No | Provides specification of the mutation webhook services. |
spec.source.mutationWebhookService.name | Yes | Provides the name of the mutation webhook service. |
spec.source.mutationWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
spec.source.mutationWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
spec.source.mutationWebhookService.parameters | No | Provides detailed parameters specific for a given mutation service and its functionality. |
spec.source.mutationWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
spec.source.metadataWebhookService | No | Provides specification of the metadata webhook services. |
spec.source.metadataWebhookService.name | Yes | Provides the name of the metadata webhook service. |
spec.source.metadataWebhookService.namespace | Yes | Provides the Namespace in which the service is available. |
spec.source.metadataWebhookService.endpoint | No | Specifies the endpoint to which the service sends calls. |
spec.source.metadataWebhookService.filter | No | Specifies the regex pattern used to select files sent to the service. |
spec.bucketRef.name | Yes | Provides the name of the bucket for storing the asset. |
spec.displayName | No | Specifies a human-readable name of the asset. |
status.phase | Not applicable | The ClusterAsset Controller adds it to the ClusterAsset CR. It describes the status of processing the ClusterAsset CR by the ClusterAsset Controller. It can be `Ready`, `Failed`, or `Pending`. |
status.reason | Not applicable | Provides the reason why the ClusterAsset CR processing failed or is pending. See the Reasons section for the full list of possible status reasons and their descriptions. |
status.message | Not applicable | Describes a human-readable message on the CR processing progress, success, or failure. |
status.lastHeartbeatTime | Not applicable | Specifies when the ClusterAsset Controller last processed the ClusterAsset CR. |
status.observedGeneration | Not applicable | Specifies the most recent ClusterAsset CR generation that the ClusterAsset Controller observed. |
status.assetRef | Not applicable | Provides details on the location of the assets stored in the bucket. |
status.assetRef.files | Not applicable | Provides asset metadata and the relative path to the given asset in the storage bucket with metadata. |
status.assetRef.files.metadata | Not applicable | Lists metadata extracted from the asset. |
status.assetRef.files.name | Not applicable | Specifies the relative path to the given asset in the storage bucket. |
status.assetRef.baseUrl | Not applicable | Specifies the absolute path to the location of the assets in the storage bucket. |
NOTE: The ClusterAsset Controller automatically adds all parameters marked as Not applicable to the ClusterAsset CR.
Status reasons
Processing of a ClusterAsset CR can succeed, continue, or fail for one of these reasons:
Reason | Phase | Description |
---|---|---|
Pulled | Pending | The ClusterAsset Controller pulled the asset content for processing. |
PullingFailed | Failed | Asset content pulling failed due to an error. |
Uploaded | Ready | The ClusterAsset Controller uploaded the asset content to MinIO. |
UploadFailed | Failed | Asset content uploading failed due to an error. |
BucketNotReady | Pending | The referenced bucket is not ready. |
BucketError | Failed | Reading the bucket status failed due to an error. |
Mutated | Pending | Mutation services changed the asset content. |
MutationFailed | Failed | Asset mutation failed for one of the provided reasons. |
MutationError | Failed | Asset mutation failed due to an error. |
MetadataExtracted | Pending | Metadata services extracted metadata from the asset content. |
MetadataExtractionFailed | Failed | Metadata extraction failed due to an error. |
Validated | Pending | Validation services validated the asset content. |
ValidationFailed | Failed | Asset validation failed for one of the provided reasons. |
ValidationError | Failed | Asset validation failed due to an error. |
MissingContent | Failed | There is missing asset content in the cloud storage bucket. |
RemoteContentVerificationError | Failed | Asset content verification in the cloud storage bucket failed due to an error. |
CleanupError | Failed | The ClusterAsset Controller failed to remove the old asset content due to an error. |
Cleaned | Pending | The ClusterAsset Controller removed the old asset content that was modified. |
Scheduled | Pending | The asset you added is scheduled for processing. |
Related resources and components
These are the resources related to this CR:
Custom resource | Description |
---|---|
ClusterBucket | The ClusterAsset CR uses the name of the bucket specified in the definition of the ClusterBucket CR. |
These components use this CR:
Component | Description |
---|---|
Rafter | Uses the ClusterAsset CR for the detailed asset definition, including its location and the name of the bucket in which it is stored. |
ClusterAssetGroup
The `clusterassetgroups.rafter.kyma-project.io` CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define an orchestrator that creates ClusterAsset CRs for a specific asset type. To get the up-to-date CRD and show the output in the YAML format, run this command:
```bash
kubectl get crd clusterassetgroups.rafter.kyma-project.io -o yaml
```
Sample custom resource
This is a sample ClusterAssetGroup custom resource (CR) that provides details of the ClusterAsset CR for the markdown source type.
```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: ClusterAssetGroup
metadata:
  name: service-mesh
  labels:
    rafter.kyma-project.io/view-context: docs-ui
    rafter.kyma-project.io/group-name: components
    rafter.kyma-project.io/order: "6"
spec:
  displayName: "Service Mesh"
  description: "Overall documentation for Service Mesh"
  bucketRef:
    name: test-bucket
  sources:
    - type: markdown
      displayName: "Documentation"
      name: docs
      mode: package
      parameters:
        disableRelativeLinks: "true"
      url: https://github.com/kyma-project/kyma/archive/master.zip
      filter: /docs/service-mesh/docs/
status:
  lastHeartbeatTime: "2019-03-18T13:42:55Z"
  message: Assets are ready to use
  phase: Ready
  reason: AssetsReady
```
Custom resource parameters
This table lists all possible parameters of a given resource together with their descriptions:
Parameter | Required | Description |
---|---|---|
metadata.name | Yes | Specifies the name of the CR. It also defines the rafter.kyma-project.io/asset-group label added to the ClusterAsset CR that the ClusterAssetGroup CR defines. Because of label name limitations, ClusterAssetGroup CR names can have a maximum length of 63 characters. |
metadata.labels | No | Specifies how to filter and group ClusterAsset CRs that the ClusterAssetGroup CR defines. See the details to learn more about these labels. |
spec.displayName | Yes | Specifies a human-readable name of the ClusterAssetGroup CR. |
spec.description | Yes | Provides more details on the purpose of the ClusterAssetGroup CR. |
spec.bucketRef.name | No | Specifies the name of the bucket that stores the assets from the ClusterAssetGroup. |
spec.sources | Yes | Defines the type of the asset and the rafter.kyma-project.io/type label added to the ClusterAsset CR. |
spec.sources.type | Yes | Specifies the type of assets included in the ClusterAssetGroup CR. |
spec.sources.displayName | No | Specifies a human-readable name of the asset. |
spec.sources.name | Yes | Defines an identifier of a given asset. It must be unique if there is more than one asset of a given type in the ClusterAssetGroup CR. |
spec.sources.mode | Yes | Specifies if the asset consists of one file or a set of compressed files in the ZIP or TAR format. Use single for one file and package for a set of files. |
spec.sources.parameters | No | Specifies a set of parameters for the ClusterAsset. For example, use it to define what to render, disable, or modify in the UI. Define it in a valid YAML or JSON format. |
spec.sources.url | Yes | Specifies the location of a single file or a package. |
spec.sources.filter | No | Specifies a set of assets from the package to upload. The regex used in the filter must be RE2-compliant. |
status.lastHeartbeatTime | Not applicable | Specifies when the ClusterAssetGroup Controller last processed the ClusterAssetGroup CR. |
status.message | Not applicable | Describes a human-readable message on the CR processing progress, success, or failure. |
status.phase | Not applicable | The ClusterAssetGroup Controller adds it to the ClusterAssetGroup CR. It describes the status of processing the ClusterAssetGroup CR by the ClusterAssetGroup Controller. It can be `Ready`, `Pending`, or `Failed`. |
status.reason | Not applicable | Provides the reason why the ClusterAssetGroup CR processing succeeded, is pending, or failed. See the Reasons section for the full list of possible status reasons and their descriptions. |
NOTE: The ClusterAssetGroup Controller automatically adds all parameters marked as Not applicable to the ClusterAssetGroup CR.
Status reasons
Processing of a ClusterAssetGroup CR can succeed, continue, or fail for one of these reasons:
Reason | Phase | Description |
---|---|---|
AssetCreated | Pending | The ClusterAssetGroup Controller created the specified asset. |
AssetCreationFailed | Failed | The ClusterAssetGroup Controller couldn't create the specified asset due to an error. |
AssetsCreationFailed | Failed | The ClusterAssetGroup Controller couldn't create assets due to an error. |
AssetsListingFailed | Failed | The ClusterAssetGroup Controller couldn't list assets due to an error. |
AssetDeleted | Pending | The ClusterAssetGroup Controller deleted specified assets. |
AssetDeletionFailed | Failed | The ClusterAssetGroup Controller couldn't delete the specified asset due to an error. |
AssetsDeletionFailed | Failed | The ClusterAssetGroup Controller couldn't delete assets due to an error. |
AssetUpdated | Pending | The ClusterAssetGroup Controller updated the specified asset. |
AssetUpdateFailed | Failed | The ClusterAssetGroup Controller couldn't update the specified asset due to an error. |
AssetsUpdateFailed | Failed | The ClusterAssetGroup Controller couldn't update assets due to an error. |
AssetsReady | Ready | Assets are ready to use. |
WaitingForAssets | Pending | Waiting for assets to be in the Ready status phase. |
BucketError | Failed | Bucket verification failed due to an error. |
AssetsWebhookGetFailed | Failed | The ClusterAssetGroup Controller failed to obtain proper webhook configuration. |
AssetsSpecValidationFailed | Failed | Asset CR specification is invalid due to an error. |
Related resources and components
These are the resources related to this CR:
Custom resource | Description |
---|---|
ClusterAsset | The ClusterAssetGroup CR orchestrates the creation of the ClusterAsset CR and defines its content. |
These components use this CR:
Component | Description |
---|---|
Rafter | Manages ClusterAsset CRs created based on the definition in the ClusterAssetGroup CR. |
ClusterBucket
The `clusterbuckets.rafter.kyma-project.io` CustomResourceDefinition (CRD) is a detailed description of the kind of data and the format used to define the name of the cloud storage bucket for storing assets. To get the up-to-date CRD and show the output in the YAML format, run this command:
```bash
kubectl get crd clusterbuckets.rafter.kyma-project.io -o yaml
```
Sample custom resource
This is a sample resource that defines the storage bucket configuration.
```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: ClusterBucket
metadata:
  name: test-sample
spec:
  region: "us-east-1"
  policy: readonly
status:
  lastHeartbeatTime: "2019-02-04T11:50:26Z"
  message: Bucket policy has been updated
  phase: Ready
  reason: BucketPolicyUpdated
  remoteName: test-sample-1b19rnbuc6ir8
  url: https://{STORAGE_ADDRESS}/test-sample-1b19rnbuc6ir8
  observedGeneration: 1
```
Custom resource parameters
This table lists all possible parameters of a given resource together with their descriptions:
Parameter | Required | Description |
---|---|---|
metadata.name | Yes | Specifies the name of the CR which is also the prefix of the bucket name in the bucket storage. |
spec.region | No | Specifies the location of the region under which the ClusterBucket Controller creates the bucket. If the field is empty, the ClusterBucket Controller creates the bucket under the default location. |
spec.policy | No | Specifies the type of bucket access. Use `none`, `readonly`, `writeonly`, or `readwrite`. |
status.lastHeartbeatTime | Not applicable | Specifies when the ClusterBucket Controller last processed the ClusterBucket CR. |
status.message | Not applicable | Describes a human-readable message on the CR processing success or failure. |
status.phase | Not applicable | The ClusterBucket Controller automatically adds it to the ClusterBucket CR. It describes the status of processing the ClusterBucket CR by the ClusterBucket Controller. It can be `Ready` or `Failed`. |
status.reason | Not applicable | Provides information on the ClusterBucket CR processing success or failure. See the Reasons section for the full list of possible status reasons and their descriptions. |
status.url | Not applicable | Provides the address of the bucket storage under which the asset is available. |
status.remoteName | Not applicable | Provides the name of the bucket in storage. |
status.observedGeneration | Not applicable | Specifies the most recent ClusterBucket CR generation that the ClusterBucket Controller observed. |
NOTE: The ClusterBucket Controller automatically adds all parameters marked as Not applicable to the ClusterBucket CR.
Status reasons
Processing of a ClusterBucket CR can succeed, continue, or fail for one of these reasons:
Reason | Phase | Description |
---|---|---|
BucketCreated | Pending | The bucket was created. |
BucketNotFound | Failed | The specified bucket doesn't exist anymore. |
BucketCreationFailure | Failed | The bucket couldn't be created due to an error. |
BucketVerificationFailure | Failed | The bucket couldn't be verified due to an error. |
BucketPolicyUpdated | Ready | The policy specifying bucket protection settings was updated. |
BucketPolicyUpdateFailed | Failed | The policy specifying bucket protection settings couldn't be set due to an error. |
BucketPolicyVerificationFailed | Failed | The policy specifying bucket protection settings couldn't be verified due to an error. |
BucketPolicyHasBeenChanged | Ready | The policy specifying cloud storage bucket protection settings was changed. |
Related resources and components
These are the resources related to this CR:
Custom resource | Description |
---|---|
ClusterAsset | Provides the name of the storage bucket which the ClusterAsset CR refers to. |
These components use this CR:
Component | Description |
---|---|
Rafter | Uses the ClusterBucket CR for the storage bucket definition. |
Tutorials
Add new documents to the Documentation view in the Console UI
This tutorial shows how you can customize the Documentation view that is available in the Console UI under the question mark icon on the top navigation panel. The purpose of this tutorial is to create a new Prometheus documentation section that contains Concepts and Guides documentation topics with a set of Markdown subdocuments. The Markdown sources used in this tutorial point to specific topics in the official Prometheus documentation.
NOTE: The Documentation view only displays documents uploaded through ClusterAssetGroup CRs. Make sure they have valid definitions and that the Markdown documents they render have correct metadata and structure.
Prerequisites
- Kyma
- kubectl
Steps
Open the terminal and create these ClusterAssetGroup custom resources:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: rafter.kyma-project.io/v1beta1
kind: ClusterAssetGroup
metadata:
  labels:
    rafter.kyma-project.io/view-context: docs-ui # This label specifies that you want to render documents in the Documentation view.
    rafter.kyma-project.io/group-name: prometheus # This label defines the group under which you want to render the given asset in the Documentation view. The value cannot include spaces.
    rafter.kyma-project.io/order: "1" # This label specifies the position of the ClusterAssetGroup in relation to other ClusterAssetGroups in the Prometheus section.
  name: prometheus-concepts
spec:
  displayName: "Concepts" # The name of the topic that shows in the Documentation view under the main Prometheus section.
  description: "Some docs about Prometheus concepts"
  sources:
    - type: markdown # This type indicates that the Asset Metadata Service must extract Front Matter metadata from the source Prometheus documents and add them to a ClusterAssetGroup as a status.
      displayName: "Concepts"
      name: docs
      mode: package # This mode indicates that the source file is compressed and the Asset Controller must unpack it first to process it.
      url: https://github.com/prometheus/docs/archive/master.zip # The source location of Prometheus documents.
      filter: content/docs/concepts # The exact location of the documents that you want to extract.
---
apiVersion: rafter.kyma-project.io/v1beta1
kind: ClusterAssetGroup
metadata:
  labels:
    rafter.kyma-project.io/view-context: docs-ui
    rafter.kyma-project.io/group-name: prometheus
    rafter.kyma-project.io/order: "2"
  name: prometheus-guides
spec:
  displayName: "Guides"
  description: "Some docs about Prometheus guides"
  sources:
    - type: markdown
      displayName: "Guides"
      name: docs
      mode: package
      url: https://github.com/prometheus/docs/archive/master.zip
      filter: content/docs/guides
EOF
```

NOTE: For a detailed explanation of all parameters, see the ClusterAssetGroup custom resource.
Check the status of custom resources:
```bash
kubectl get clusterassetgroups
```

The custom resources should be in the `Ready` phase:

```
NAME                  PHASE   AGE
prometheus-concepts   Ready   59s
prometheus-guides     Ready   59s
```

If a given custom resource is in the `Ready` phase and you want to get details of the created ClusterAssets, such as document names and the location of MinIO buckets, run this command:

```bash
kubectl get clusterasset -o yaml -l rafter.kyma-project.io/asset-group=prometheus-concepts
```

The command lists details of the ClusterAsset created by the prometheus-concepts custom resource:
```yaml
apiVersion: v1
items:
- apiVersion: rafter.kyma-project.io/v1beta1
  kind: ClusterAsset
  metadata:
    annotations:
      rafter.kyma-project.io/asset-short-name: docs
    creationTimestamp: "2019-05-15T13:27:11Z"
    finalizers:
    - deleteclusterasset.finalizers.rafter.kyma-project.io
    generation: 1
    labels:
      rafter.kyma-project.io/asset-group: prometheus-concepts
      rafter.kyma-project.io/type: markdown
    name: prometheus-concepts-docs-markdown-1b7mu6bmkmse4
    ownerReferences:
    - apiVersion: rafter.kyma-project.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: ClusterAssetGroup
      name: prometheus-concepts
      uid: 253c311b-7715-11e9-b241-1e5325edb3d6
    resourceVersion: "6785"
    selfLink: /apis/rafter.kyma-project.io/v1beta1/clusterassets/prometheus-concepts-docs-markdown-1b7mu6bmkmse4
    uid: 253eee7d-7715-11e9-b241-1e5325edb3d6
  spec:
    bucketRef:
      name: rafter-public-1b7mtf1de5ost
    source:
      filter: content/docs/concepts
      metadataWebhookService:
      - endpoint: /v1/extract
        filter: \.md$
        name: rafter-front-matter-service
        namespace: kyma-system
      mode: package
      url: https://github.com/prometheus/docs/archive/master.zip
  status:
    assetRef:
      baseUrl: https://storage.kyma.local/rafter-public-1b7mtf1de5ost-1b7mtf1h187r7/prometheus-concepts-docs-markdown-1b7mu6bmkmse4
      files:
      - metadata:
          sort_rank: 1
          title: Data model
        name: docs-master/content/docs/concepts/data_model.md
      - metadata:
          nav_icon: flask
          sort_rank: 2
          title: Concepts
        name: docs-master/content/docs/concepts/index.md
      - metadata:
          sort_rank: 3
          title: Jobs and instances
        name: docs-master/content/docs/concepts/jobs_instances.md
      - metadata:
          sort_rank: 2
          title: Metric types
        name: docs-master/content/docs/concepts/metric_types.md
    lastHeartbeatTime: "2019-05-15T13:27:24Z"
    message: Asset content has been uploaded
    observedGeneration: 1
    phase: Ready
    reason: Uploaded
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

In the status section of the ClusterAsset, you can see details of all documents and the baseUrl with their location in MinIO:
```yaml
status:
  assetRef:
    baseUrl: https://storage.kyma.local/rafter-public-1b7mtf1de5ost-1b7mtf1h187r7/prometheus-concepts-docs-markdown-1b7mu6bmkmse4
    files:
    - metadata:
        sort_rank: 1
        title: Data model
      name: docs-master/content/docs/concepts/data_model.md
```

Open the Console UI and navigate to the Documentation view. The new Prometheus section appears at the bottom of the documentation panel. It consists of the Concepts and Guides topic groups containing alphabetically ordered Markdown documents.
NOTE: Since the source Markdown documents are prepared for different UIs and can contain custom tags, there can be issues with rendering their full content. If you prepare your own input, use our content guidelines to make sure the documents render properly in the Console UI.
Troubleshooting
If you apply the ClusterAssetGroup custom resource but its status stays `Pending` or shows `Failed`, check the status details.

This command lists details of the ClusterAssets created by the prometheus-concepts ClusterAssetGroup:

```bash
kubectl get clusterasset -o yaml -l rafter.kyma-project.io/asset-group=prometheus-concepts
```

See the status details sample:

```yaml
status:
  phase: Failed
  reason: ValidationFailed
  message: "The file is not valid against the provided json schema"
```
You can also analyze logs of the Rafter Controller Manager:
kubectl -n kyma-system logs -l 'app.kubernetes.io/name=rafter-controller-manager'
Set MinIO to Gateway mode
By default, you install Kyma with Rafter in MinIO stand-alone mode. This tutorial shows how to set MinIO to Gateway mode on different cloud providers using an override.
CAUTION: The authentication and authorization measures required to edit the assets in the public cloud storage may differ from those used in Rafter. That's why we recommend using separate subscriptions for MinIO Gateway to ensure that you only have access to data created by Rafter, and to avoid compromising other public data.
CAUTION: Cloud providers offer different payment policies for their services, such as bucket storage or network traffic. To avoid unexpected costs, verify the payment policy with the given provider before you start using Gateway mode.
Prerequisites
- Google Cloud Storage
- Azure Blob Storage
- AWS S3
- Alibaba Cloud OSS
Steps
You can set MinIO to the given Gateway mode both during and after Kyma installation. In both cases, you need to create and configure an access key for your cloud provider account, apply a Secret and a ConfigMap with an override to a cluster or Minikube, and trigger the Kyma installation process. This tutorial shows how to switch to MinIO Gateway at runtime, when you already have Kyma installed locally or on a cluster.
CAUTION: Buckets created in MinIO without using Bucket CRs are not recreated or migrated while switching to MinIO Gateway mode.
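To avoid losing such buckets, you can recreate them through Bucket CRs before switching, so that the Bucket Controller manages them and recreates them in Gateway mode. A minimal sketch with example names and values (not taken from the source):

```yaml
apiVersion: rafter.kyma-project.io/v1beta1
kind: Bucket
metadata:
  name: my-assets        # example name
  namespace: default     # example Namespace
spec:
  region: "us-east-1"    # optional
  policy: readonly       # optional
```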
Create required cloud resources
- Google Cloud Storage
- Azure Blob Storage
- AWS S3
- Alibaba Cloud OSS
Configure MinIO Gateway mode
- Google Cloud Storage
- Azure Blob Storage
- AWS S3
- Alibaba Cloud OSS
CAUTION: If you want to activate MinIO Gateway mode before you install Kyma, you need to manually add the ConfigMap and the Secret to the `installer-config-local.yaml.tpl` template located in the `installation/resources` subfolder before you run the installation script. In this case you start from scratch, so add the ConfigMap without these lines that trigger the default bucket migration from MinIO to MinIO Gateway:

```yaml
controller-manager.minio.podAnnotations.persistence: "off"
upload-service.minio.podAnnotations.persistence: "off"
```
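When switching at runtime, the migration keys above live in an override ConfigMap, alongside the provider-specific Gateway settings (not shown here). The following is only a sketch of the ConfigMap shape, assuming the standard Kyma override convention; the ConfigMap name and the `component` label value are assumptions, not values from the source:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rafter-overrides              # example name
  namespace: kyma-installer
  labels:
    installer: overrides              # marks this ConfigMap as a Kyma installation override
    component: rafter                 # assumption: scopes the override to the Rafter component
    kyma-project.io/installation: ""
data:
  # These two keys trigger the default bucket migration from MinIO to MinIO Gateway:
  controller-manager.minio.podAnnotations.persistence: "off"
  upload-service.minio.podAnnotations.persistence: "off"
```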
Trigger installation
Trigger Kyma installation or update it by labeling the Installation custom resource. Run:
kubectl -n default label installation/kyma-installation action=install
Troubleshooting
AssetGroup processing fails due to duplicated default buckets
It may happen that the processing of a (Cluster)AssetGroup CR fails due to too many buckets with the `rafter.kyma-project.io/access: public` label.

To fix this issue, manually remove all default buckets with the `rafter.kyma-project.io/access: public` label:

1. Remove the cluster-wide default bucket:

```bash
kubectl delete clusterbuckets.rafter.kyma-project.io --selector='rafter.kyma-project.io/access=public'
```

2. Remove buckets from the Namespaces where you use them:

```bash
kubectl delete buckets.rafter.kyma-project.io --selector='rafter.kyma-project.io/access=public' --namespace=default
```

This allows the (Cluster)AssetGroup controller to recreate the default buckets successfully.
Upload Service returns "502 Bad Gateway"
It may happen that the Upload Service returns `502 Bad Gateway` with the `The specified bucket does not exist` message. This means that the bucket has been removed or renamed.
If the bucket is removed, delete the ConfigMap:
kubectl -n kyma-system delete configmaps rafter-upload-service
If the bucket exists but with a different name, set a proper name in the ConfigMap:
kubectl -n kyma-system edit configmaps rafter-upload-service
After that, restart the Upload Service so that it creates a new bucket or uses the renamed one:
kubectl -n kyma-system delete pod -l app.kubernetes.io/name=upload-service
Metrics
Rafter Controller Manager
Metrics for the Rafter Controller Manager include:
- default metrics instrumented by kubebuilder.
- default Prometheus metrics for Go applications.
To see a complete list of metrics, run this command:
kubectl -n kyma-system port-forward svc/rafter-controller-manager 8080
To check the metrics, open a new terminal window and run:
curl http://localhost:8080/metrics
TIP: To use these commands, you must have a running Kyma cluster and kubectl installed. If you cannot access port `8080`, redirect the metrics to another one. For example, run `kubectl -n kyma-system port-forward svc/rafter-controller-manager 3000:8080` and update the port in the localhost address.
See the Monitoring documentation to learn more about monitoring and metrics in Kyma.
AsyncAPI Service
This table shows the AsyncAPI Service custom metrics, their types, and descriptions.
Name | Type | Description |
---|---|---|
rafter_services_http_request_and_mutation_duration_seconds | histogram | Measures the time the service takes to receive an asset for processing and mutate it. |
rafter_services_http_request_and_validation_duration_seconds | histogram | Measures the time the service takes to receive an asset for processing and validate it. |
rafter_services_handle_mutation_status_code | counter | Counts the HTTP responses of the mutation endpoint by status code. |
rafter_services_handle_validation_status_code | counter | Counts the HTTP responses of the validation endpoint by status code. |
Apart from the custom metrics, the AsyncAPI Service also exposes default Prometheus metrics for Go applications.
To see a complete list of metrics, run this command:
kubectl -n kyma-system port-forward svc/rafter-asyncapi-service 80
To check the metrics, open a new terminal window and run:
curl http://localhost:80/metrics
TIP: To use these commands, you must have a running Kyma cluster and kubectl installed. If you cannot access port `80`, redirect the metrics to another one. For example, run `kubectl -n kyma-system port-forward svc/rafter-asyncapi-service 8080:80` and update the port in the localhost address.
See the Monitoring documentation to learn more about monitoring and metrics in Kyma.
Upload Service
This table shows the Upload Service custom metrics, their types, and descriptions.
Name | Type | Description |
---|---|---|
rafter_upload_service_http_request_duration_seconds | histogram | Measures the duration of the HTTP requests that the service processes. |
rafter_upload_service_http_request_returned_status_code | counter | Counts the HTTP responses of the service by status code. |
Apart from the custom metrics, the Upload Service also exposes default Prometheus metrics for Go applications.
To see a complete list of metrics, run this command:
kubectl -n kyma-system port-forward svc/rafter-upload-service 80
To check the metrics, open a new terminal window and run:
curl http://localhost:80/metrics
TIP: To use these commands, you must have a running Kyma cluster and kubectl installed. If you cannot access port `80`, redirect the metrics to another one. For example, run `kubectl -n kyma-system port-forward svc/rafter-upload-service 8080:80` and update the port in the localhost address.
See the Monitoring documentation to learn more about monitoring and metrics in Kyma.
Front Matter Service
This table shows the Front Matter Service custom metrics, their types, and descriptions.
Name | Type | Description |
---|---|---|
rafter_front_matter_service_http_request_duration_seconds | histogram | Measures the duration of the HTTP requests that the service processes. |
rafter_front_matter_service_http_request_returned_status_code | counter | Counts the HTTP responses of the service by status code. |
Apart from the custom metrics, the Front Matter Service also exposes default Prometheus metrics for Go applications.
To see a complete list of metrics, run this command:
kubectl -n kyma-system port-forward svc/rafter-front-matter-service 80
To check the metrics, open a new terminal window and run:
curl http://localhost:80/metrics
TIP: To use these commands, you must have a running Kyma cluster and kubectl installed. If you cannot access port `80`, redirect the metrics to another one. For example, run `kubectl -n kyma-system port-forward svc/rafter-front-matter-service 8080:80` and update the port in the localhost address.
See the Monitoring documentation to learn more about monitoring and metrics in Kyma.
MinIO
As an external, open-source file storage solution, MinIO exposes its own metrics. See the official documentation for details. Rafter comes with a preconfigured ServiceMonitor CR that enables Prometheus to scrape MinIO metrics. Using the metrics, you can create your own Grafana dashboard or reuse the dashboard that is already prepared.
Apart from the custom metrics, MinIO also exposes default Prometheus metrics for Go applications.
To see a complete list of metrics, run this command:
kubectl -n kyma-system port-forward svc/rafter-minio 9000
To check the metrics, open a new terminal window and run:
curl http://localhost:9000/minio/prometheus/metrics
TIP: To use these commands, you must have a running Kyma cluster and kubectl installed. If you cannot access port `9000`, redirect the metrics to another one. For example, run `kubectl -n kyma-system port-forward svc/rafter-minio 8080:9000` and update the port in the localhost address.
See the Monitoring documentation to learn more about monitoring and metrics in Kyma.