[[_backup-cr]]
=== Scheduling database backups

[WARNING]
====
Backup CR is *deprecated* and could be removed in future releases.
====

You can use the Operator to schedule automatic backups of the database as defined by custom resources. The custom resource triggers a backup job
ifeval::[{project_community}==true]
(or a `CronJob` in the case of Periodic Backups)
endif::[]
and reports back its status.

ifeval::[{project_community}==true]
Two options exist to schedule backups:

* xref:_backups-cr-aws[Backing up to AWS S3 storage]
* xref:_backups-local-cr[Backing up to local storage]

If you have AWS S3 storage, you can perform a one-time backup or periodic backups. If you do not have AWS S3 storage, you can back up to local storage.

[[_backups-cr-aws]]
==== Backing up to AWS S3 storage

You can back up your database to AWS S3 storage one time or periodically. To back up your data periodically, enter a valid cron expression in the `schedule` property.

For AWS S3 storage, you create a YAML file for the backup custom resource and a YAML file for the AWS secret.

The backup custom resource requires a YAML file with the following structure:

[source,yaml]
----
apiVersion: keycloak.org/v1alpha1
kind: KeycloakBackup
metadata:
  name: <CR-name>
spec:
  aws:
    # Optional - used only for Periodic Backups.
    # Follows the usual crond syntax (for example, use "0 1 * * *" to perform the backup every day at 1 AM).
    schedule: <cron-expression>
    # Required - the name of the secret containing the credentials to access the S3 storage
    credentialsSecretName: <secret-name>
----

The AWS secret requires a YAML file with the following structure:

.AWS S3 `Secret`
[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name>
type: Opaque
stringData:
  AWS_S3_BUCKET_NAME: <bucket-name>
  AWS_ACCESS_KEY_ID: <access-key-id>
  AWS_SECRET_ACCESS_KEY: <secret-access-key>
----

.Prerequisites
* Your Backup custom resource YAML file includes a `credentialsSecretName` that references a `Secret` containing AWS S3 credentials.
* Your `KeycloakBackup` custom resource has `aws` sub-properties.
* You have a YAML file for the AWS S3 Secret whose `name` matches the `credentialsSecretName` identified in the backup custom resource.
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.

.Procedure
. Create the secret with credentials: `{create_cmd} -f <secret-filename>.yaml`. For example:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f secret.yaml
secret/aws_s3_secret created
----

. Create a backup job: `{create_cmd} -f <backup-filename>.yaml`. For example:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f aws_one-time-backup.yaml
keycloakbackup.keycloak.org/aws_s3_backup created
----

. View a list of backup jobs:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} get jobs
NAME            COMPLETIONS   DURATION   AGE
aws_s3_backup   0/1           6s         6s
----

. View the list of executed backup jobs:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} get pods
NAME                               READY   STATUS      RESTARTS   AGE
aws_s3_backup-5b4rf                0/1     Completed   0          24s
keycloak-0                         1/1     Running     0          52m
keycloak-postgresql-c824c6-vv27m   1/1     Running     0          71m
----

. View the log of your completed backup job:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} logs aws_s3_backup-5b4rf
==> Component data dump completed
.
.
.
.
----

The status of the backup job also appears in the AWS console.
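For reference, the following minimal sketch pairs a periodic `KeycloakBackup` with its credentials `Secret`, following the structures shown above. The resource names, bucket, schedule, and credential values are hypothetical placeholders, not defaults taken from the Operator.

[source,yaml]
----
# Hypothetical example values; replace the bucket name and credentials with your own.
apiVersion: v1
kind: Secret
metadata:
  name: s3-backup-credentials          # referenced below by credentialsSecretName
type: Opaque
stringData:
  AWS_S3_BUCKET_NAME: my-keycloak-backups
  AWS_ACCESS_KEY_ID: AKIAEXAMPLE
  AWS_SECRET_ACCESS_KEY: exampleSecretKey
---
apiVersion: keycloak.org/v1alpha1
kind: KeycloakBackup
metadata:
  name: nightly-s3-backup
spec:
  aws:
    # Omit schedule for a one-time backup; with a schedule, the Operator creates a CronJob.
    schedule: "0 1 * * *"              # every day at 1 AM
    # Must match the name of the Secret above.
    credentialsSecretName: s3-backup-credentials
----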
[[_backups-local-cr]]
==== Backing up to local storage
endif::[]

You can use the Operator to create a backup job that performs a one-time backup to a local Persistent Volume.

.Example YAML file for a Backup custom resource
[source,yaml]
----
apiVersion: keycloak.org/v1alpha1
kind: KeycloakBackup
metadata:
  name: test-backup
----

.Prerequisites
* You have a YAML file for this custom resource.
ifeval::[{project_community}==true]
Be sure to omit the `aws` sub-properties from this file.
endif::[]
* You have a `PersistentVolume` with a `claimRef` to reserve it only for a `PersistentVolumeClaim` created by the {project_name} Operator. An example `PersistentVolume` sketch appears at the end of this section.

.Procedure
. Create a backup job: `{create_cmd} -f <backup-filename>.yaml`. For example:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f one-time-backup.yaml
keycloakbackup.keycloak.org/test-backup created
----
+
The Operator creates a `PersistentVolumeClaim` with the following naming scheme: `keycloak-backup-<CR-name>`.

. View a list of volumes:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} get pvc
NAME                          STATUS   VOLUME
keycloak-backup-test-backup   Bound    pvc-e242-ew022d5-093q-3134n-41-adff
keycloak-postgresql-claim     Bound    pvc-e242-vs29202-9bcd7-093q-31-zadj
----

. View a list of backup jobs:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} get jobs
NAME          COMPLETIONS   DURATION   AGE
test-backup   0/1           6s         6s
----

. View the list of executed backup jobs:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} get pods
NAME                               READY   STATUS      RESTARTS   AGE
test-backup-5b4rf                  0/1     Completed   0          24s
keycloak-0                         1/1     Running     0          52m
keycloak-postgresql-c824c6-vv27m   1/1     Running     0          71m
----

. View the log of your completed backup job:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} logs test-backup-5b4rf
==> Component data dump completed
.
.
.
.
----

.Additional resources
* For more details on persistent volumes, see link:https://docs.openshift.com/container-platform/4.4/storage/understanding-persistent-storage.html[Understanding persistent storage].
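As an illustration of the `claimRef` prerequisite above, a reserved volume might look like the following minimal sketch. The volume name, capacity, storage backend, and namespace are hypothetical; the claim name assumes the `keycloak-backup-test-backup` claim the Operator creates for the `test-backup` example.

[source,yaml]
----
# Hypothetical PersistentVolume reserved for the Operator-created backup claim.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: keycloak-backup-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/keycloak-backup       # example local path; use storage appropriate to your cluster
  claimRef:
    # Reserves this volume for the PVC the Operator creates for the test-backup CR.
    name: keycloak-backup-test-backup
    namespace: keycloak               # example; use the namespace where the backup CR is created
----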