Kyma integrates with Velero to provide backup and restore capabilities.
Velero backs up Kubernetes resources and stores them in buckets of supported cloud providers. It triggers physical volume snapshots and includes the snapshot references in the backup. Velero can create scheduled or on-demand backups, filter objects to include in the backup, and set time to live (TTL) for stored backups.
If you configured Velero when installing Kyma as explained here, backup is enabled with the default schedule, which runs once a day, Monday through Friday. To change the backup settings, modify the schedules configuration in the Velero chart.
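The default schedule can be overridden through the chart's schedules configuration. A minimal sketch, assuming the Velero chart's `schedules` layout; the schedule name, cron expression, and template fields shown here are illustrative:

```yaml
# Illustrative override for the Velero chart's schedules configuration.
# The cron expression runs the backup at 06:00 instead of the default 07:00;
# adjust the schedule name and fields to match your chart version.
schedules:
  kyma-backup:
    schedule: "0 6 * * 1-5"   # minute hour day-of-month month day-of-week
    template:
      ttl: "240h"             # keep each backup for 10 days
      includedNamespaces:
      - "*"
```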
For more details, see the official Velero documentation.
Install and configure Velero to back up and restore your Kyma cluster.
NOTE: To successfully set up Velero, you must define a supported storage location and the credentials to access it. Currently, you can install Velero on GCP and Azure. AWS is not supported.
Follow the instructions to set up Velero:
Enable Velero components in the Kyma Installer configuration file:

```yaml
- name: "backup-init"
  namespace: "kyma-system"
- name: "backup"
  namespace: "kyma-system"
```
Override the default configuration by creating a Secret containing the required parameters for a chosen provider.
See the examples of such Secrets:
NOTE: The values are provided in plain text for illustrative purposes only. Remember to set them as base64-encoded strings. For details on Kyma overrides, see this document.
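For example, you can produce the base64-encoded form of a value before placing it in the Secret; `my-bucket` is a placeholder bucket name:

```shell
# Encode an override value for use in a Kubernetes Secret.
# Use printf rather than echo to avoid encoding a trailing newline.
printf 'my-bucket' | base64
# bXktYnVja2V0
```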
- Google Cloud Platform
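A sketch of such a Secret for GCP, assuming the standard Kyma override labels and flattened parameter keys; the bucket name and encoded values are placeholders:

```yaml
# Hypothetical Velero override Secret for GCP.
# Values must be base64-encoded; plain-text equivalents are shown in comments.
apiVersion: v1
kind: Secret
metadata:
  name: velero-overrides
  namespace: kyma-installer
  labels:
    installer: overrides
    component: velero
    kyma-project.io/installation: ""
type: Opaque
data:
  configuration.provider: Z2Nw                              # gcp
  configuration.backupStorageLocation.name: Z2Nw            # gcp
  configuration.backupStorageLocation.bucket: bXktYnVja2V0  # my-bucket
  configuration.volumeSnapshotLocation.name: Z2Nw           # gcp
```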
Run the Kyma installation with Velero overrides:
- Local installation
- Cluster installation
This document lists and describes the parameters that you can configure for Velero, split into required and optional parameters.

This table lists the required parameters, their descriptions, and default values:

| Parameter | Description | Default value | Required |
|-----------|-------------|---------------|----------|
| `configuration.provider` | Specifies the name of the cloud provider where you deploy Velero, such as `gcp` or `azure`. | None | Yes |
| `configuration.backupStorageLocation.name` | Specifies the name of the cloud provider used to store backups, such as `gcp` or `azure`. | None | Yes |
| `configuration.backupStorageLocation.bucket` | Specifies the storage bucket where backups are uploaded. | None | Yes |
| `configuration.backupStorageLocation.config.region` | Provides the region in which the bucket is created. It only applies to AWS. | None | Yes, if using AWS |
| `configuration.backupStorageLocation.config.resourceGroup` | Specifies the name of the resource group which contains the storage account for the backup storage location. It only applies to Azure. | None | Yes, if using Azure |
| `configuration.backupStorageLocation.config.storageAccount` | Provides the name of the storage account for the backup storage location. It only applies to Azure. | None | Yes, if using Azure |
| `configuration.volumeSnapshotLocation.name` | Specifies the name of the cloud provider the cluster uses for persistent volumes. | None | Yes, if using PV snapshots |
| `configuration.volumeSnapshotLocation.config.region` | Provides the region in which the bucket is created. It only applies to AWS. | None | Yes, if using AWS |
| `configuration.volumeSnapshotLocation.config.apiTimeout` | Defines the amount of time after which an API request returns a timeout status. It only applies to Azure. | None | Yes, if using Azure |
| `credentials.useSecret` | Specifies if a Secret is required for IAM credentials. | `true` | Yes |
| `credentials.existingSecret` | If `useSecret` is `true`, specifies the name of an existing Secret that contains the IAM credentials. | None | Yes, if `useSecret` is `true` |
| `credentials.secretContents` | If `useSecret` is `true`, specifies the contents of the Secret that stores the IAM credentials. | None | Yes, if `useSecret` is `true` |
This table lists the optional parameters, their descriptions, and default values:

| Parameter | Description | Default value |
|-----------|-------------|---------------|
| `schedules` | Sets up a scheduled backup. By default, a scheduled backup runs at 07:00 on Monday through Friday. | 07:00 on Monday through Friday |
| `configuration.volumeSnapshotLocation.bucket` | Specifies the name of the storage bucket where volume snapshots are uploaded. | None |
| `configuration.backupStorageLocation.prefix` | Specifies the directory inside a storage bucket where backups are located. | None |
| `configuration.backupStorageLocation.config.resourceGroup` | Specifies the name of the resource group which contains the storage account for the backup storage location. It only applies to Azure. | None |
| `configuration.backupStorageLocation.config.s3ForcePathStyle` | Specifies whether to force path-style URLs for S3 objects. Set it to `true` if you use a local storage service like MinIO. | None |
| `configuration.backupStorageLocation.config.s3Url` | Specifies the AWS S3 URL. If not provided, Velero generates it from the region and bucket. Use this field for local storage services like MinIO. | None |
| `configuration.backupStorageLocation.config.kmsKeyId` | Specifies the AWS KMS key ID or alias to enable encryption of the backups stored in S3. It only works with AWS S3 and may require explicitly granting key usage rights. | None |
| `configuration.backupStorageLocation.config.publicUrl` | Specifies the parameter used instead of `s3Url` when generating download URLs, for example for logs. Use this field for local storage services like MinIO. | None |
Install Velero using these instructions.
Follow the steps to back up Kyma.
- Manual backup
- Scheduled backup
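Both variants map to standard Velero CLI commands. A sketch, assuming the Velero CLI is installed and configured against the cluster; `kyma-backup` and `kyma-daily` are placeholder names:

```shell
# On-demand (manual) backup of the whole cluster.
velero backup create kyma-backup --wait

# Scheduled backup: run at 07:00 on weekdays, matching the default schedule.
velero schedule create kyma-daily --schedule="0 7 * * 1-5"
```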
To set the retention period of a backup, define the `ttl` parameter in the Backup specification definition:

```yaml
# The amount of time before this backup is eligible for garbage collection.
ttl: 24h0m0s
```
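In a full Backup custom resource, `ttl` sits under `spec`. A minimal sketch; the backup name is a placeholder:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: kyma-backup
  namespace: velero
spec:
  includedNamespaces:
  - "*"
  # The amount of time before this backup is eligible for garbage collection.
  ttl: 24h0m0s
```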
Follow this tutorial to restore a backed-up Kyma cluster. Start by restoring CRDs, services, and endpoints, then restore the remaining resources.
To use the restore functionality, download and install the Velero CLI.
Follow these steps to restore resources:
Install the Velero server. Use the same bucket as for backups:

```shell
velero install --bucket <BUCKET> --provider <CLOUD_PROVIDER> --secret-file <CREDENTIALS_FILE> --restore-only --wait
```
NOTE: Check out this guide to correctly fill in the parameters of this command for the cloud provider in use.
Install the Kyma backup plugins:

```shell
velero plugin add eu.gcr.io/kyma-project/backup-plugins:e7df9098
```
List the available backups:

```shell
velero get backups
```
Restore Kyma CRDs, services, and endpoints:

```shell
velero restore create --from-backup <BACKUP_NAME> --include-resources customresourcedefinitions.apiextensions.k8s.io,services,endpoints --include-cluster-resources --wait
```
Restore the rest of the Kyma resources:

```shell
velero restore create --from-backup <BACKUP_NAME> --exclude-resources customresourcedefinitions.apiextensions.k8s.io,services,endpoints --include-cluster-resources --restore-volumes --wait
```
Once the status of the restore is `COMPLETED`, perform a Kyma health check by verifying the Pods:

```shell
kubectl get pods --all-namespaces
```
Even after the restore process completes, it may take some time for the resources to become available again.
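Rather than polling manually, you can block until the workloads settle. A sketch, assuming kubectl access to the restored cluster:

```shell
# Block until all Pods in kyma-system report Ready, or time out after 10 minutes.
kubectl wait --for=condition=Ready pods --all -n kyma-system --timeout=600s
```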
NOTE: Because of this issue in Velero, custom resources may not be properly restored. In this case, run the second restore command again and check if the custom resources are restored. For example, run the following command to print several VirtualService custom resources:

```shell
kubectl get virtualservices --all-namespaces
```
Once the restore succeeds, remove the `velero` Namespace:

```shell
kubectl delete ns velero
```
In case the `service-catalog-addons-service-binding-usage-controller` Pod gets stuck in the `Init` phase, try deleting the Pod:

```shell
kubectl delete $(kubectl get pod -l app=service-catalog-addons-service-binding-usage-controller -n kyma-system -o name) -n kyma-system
```
The restore tutorial assumes that the DNS and public IP values for the new cluster are the same as for the backed-up cluster. If they change, check the relevant fields in the Secrets and ConfigMaps overrides in the `kyma-installer` Namespace and update them with the new values. Then run the Installer again to propagate them to all the components:

```shell
kubectl label installation/kyma-installation action=install
```
Check if all NatssChannels report as ready:

```shell
kubectl get natsschannels.messaging.knative.dev -n kyma-system
```
If one or more channels report as not ready, delete their corresponding channel services. The controller recreates them automatically:

```shell
kubectl delete service -l messaging.knative.dev/role=natss-channel -n kyma-system
kubectl annotate natsschannels.messaging.knative.dev -n kyma-system restore=done --all
```