KEYCLOAK-12273 update to the description of services and endpoints as suggested by Sebastian. Otherwise, I am done.

However, I still need input from Sebastian on technical accuracy.
Andy Munro 2020-05-08 21:16:38 -04:00 committed by Hynek Mlnařík
parent 8c12072a8c
commit 0ddeec4e1e
42 changed files with 1003 additions and 199 deletions


View file

@@ -10,7 +10,7 @@
 <orgname>Red Hat Customer Content Services</orgname>
 </authorgroup>
 <legalnotice lang="en-US" version="5.0" xmlns="http://docbook.org/ns/docbook">
-<para> Copyright <trademark class="copyright"></trademark> 2019 Red Hat, Inc. </para>
+<para> Copyright <trademark class="copyright"></trademark> 2020 Red Hat, Inc. </para>
 <para>Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at</para>


View file

@@ -47,14 +47,16 @@ include::topics/cache/eviction.adoc[]
 include::topics/cache/replication.adoc[]
 include::topics/cache/disable.adoc[]
 include::topics/cache/clear.adoc[]
-ifeval::[{project_community}==true]
 include::topics/operator.adoc[]
 include::topics/operator/installation.adoc[]
+ifeval::[{project_community}==true]
+include::topics/operator/monitoring-stack.adoc[]
+endif::[]
 include::topics/operator/keycloak-cr.adoc[]
 include::topics/operator/keycloak-realm-cr.adoc[]
 include::topics/operator/keycloak-client-cr.adoc[]
 include::topics/operator/keycloak-user-cr.adoc[]
-include::topics/operator/keycloak-backup-cr.adoc[]
 include::topics/operator/external-database.adoc[]
-include::topics/operator/monitoring-stack.adoc[]
-endif::[]
+include::topics/operator/keycloak-backup-cr.adoc[]
+include::topics/operator/extensions.adoc[]
+include::topics/operator/command-options.adoc[]

View file

@@ -2,13 +2,27 @@
 [[_operator]]
 == {project_operator}
 
-{project_operator} has been designed to automate {project_name} administration in Kubernetes/OpenShift cluster.
-It allows you to:
+:tech_feature_name: The {project_name} Operator
+:tech_feature_disabled: false
+include::templates/techpreview.adoc[]
 
-* Install Keycloak to a namespace
-* Import Keycloak Realms
-* Import Keycloak Clients
-* Import Keycloak Users
-* Create scheduled backups of the database
-* Install Extensions
+The {project_operator} automates {project_name} administration in
+ifeval::[{project_community}==true]
+Kubernetes or
+endif::[]
+OpenShift. You use this Operator to create custom resources (CRs), which automate administrative tasks. For example, instead of creating a client or a user in the {project_name} admin console, you can create custom resources to perform those tasks. A custom resource is a YAML file that defines the parameters for the administrative task.
+
+You can create custom resources to perform the following tasks:
+
+* xref:_keycloak_cr[Install {project_name}]
+* xref:_realm-cr[Create realms]
+* xref:_client-cr[Create clients]
+* xref:_user-cr[Create users]
+* xref:_external_database[Connect to an external database]
+* xref:_backup-cr[Schedule database backups]
+* xref:_operator-extensions[Install extensions and themes]
+
+[NOTE]
+After you create custom resources for realms, clients, and users, you can manage them by using the {project_name} admin console or as custom resources using the `{create_cmd_brief}` command. However, you cannot use both methods, because the Operator performs a one-way sync for custom resources that you modify. For example, if you modify a realm custom resource, the changes show up in the admin console. However, if you modify the realm using the admin console, those changes have no effect on the custom resource.
+
+Begin using the Operator by xref:_installing-operator[installing the {project_operator} on a cluster].
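The realm custom resources mentioned above are ordinary YAML files. As a minimal sketch only (the `instanceSelector` and `realm` field names are assumptions based on the `keycloak.org/v1alpha1` API, not shown in this commit), such a file might look like:

```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakRealm
metadata:
  name: test-realm
  labels:
    app: sso
spec:
  realm:
    # realm settings mirror what you would otherwise enter in the admin console
    realm: "test-realm"
    enabled: True
    displayName: "Test Realm"
  instanceSelector:
    matchLabels:
      app: sso
```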

View file

@@ -0,0 +1,25 @@
[[_command-options]]
=== Command options for managing custom resources

After you create a custom resource, you can edit it or delete it using the `{create_cmd_brief}` command.

* To edit a custom resource, use this command: `{create_cmd_brief} edit <cr-name>`
* To delete a custom resource, use this command: `{create_cmd_brief} delete <cr-name>`

For example, to edit a realm custom resource named `test-realm`, use this command:

[source,bash,subs=+attributes]
----
$ {create_cmd_brief} edit test-realm
----

A window opens where you can make changes. For example, you might change the name `arealm` to `demo-realm`.

.Sample code for a custom resource
image:images/cr-code.png[]

[NOTE]
====
You can update the YAML file and the changes appear in the {project_name} admin console; however, changes to the admin console do not update the custom resource.
====
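As a concrete illustration of that edit flow, a realm custom resource whose display name was changed to `demo-realm` might look as follows after saving. This is a hypothetical sketch; the field layout follows the `keycloak.org/v1alpha1` API and is not taken from this commit:

```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakRealm
metadata:
  name: test-realm
spec:
  realm:
    realm: "test-realm"
    # edited in the window opened by `{create_cmd_brief} edit test-realm`
    displayName: "demo-realm"
```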

View file

@@ -0,0 +1,41 @@
[[_operator-extensions]]
=== Installing extensions and themes

You can use the Operator to install extensions and themes that you need for your company or organization. The extension or theme can be anything that {project_name} can consume. For example, you can add a metrics extension. You add the extension or theme to the Keycloak custom resource.

.Example YAML file for a Keycloak custom resource
```yaml
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: example-keycloak
  labels:
    app: sso
spec:
  instances: 1
  extensions:
    - <url_for_extension_or_theme>
  externalAccess:
    enabled: True
```

.Prerequisites

* You have a YAML file for the Keycloak custom resource.
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.

.Procedure

. Edit the YAML file for the Keycloak custom resource: `{create_cmd_brief} edit <cr-name>`
. Add a line called `extensions:` after the `instances` line.
. Add a URL to a JAR file for your custom extension or theme.
. Save the file.

The Operator downloads the extension or theme and installs it.
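For instance, a filled-in `extensions` list might point at a publicly hosted JAR. The URL below is purely a placeholder on the reserved example.com domain, not a real extension:

```yaml
spec:
  instances: 1
  extensions:
    # hypothetical JAR location; replace with the URL of your extension or theme
    - https://www.example.com/artifacts/keycloak-metrics-spi.jar
  externalAccess:
    enabled: True
```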

View file

@ -1,9 +1,10 @@
=== External database support [[_external_database]]
=== Connecting to an external database
Enabling external Postgresql database support requires creating `keycloak-db-secret` in the following form (note that values are Base64 encoded, see https://kubernetes.io/docs/concepts/configuration/secret/[Kubernetes Secrets manual]): You can use the Operator to connect to an external PostgreSQL database by modifying the Keycloak custom resource and creating a `keycloak-db-secret` YAML file. Note that values are Base64 encoded.
.`keycloak-db-secret` Secret .Example YAML file for `keycloak-db-secret`
```yaml ```yaml
apiVersion: v1 apiVersion: v1
kind: Secret kind: Secret
@@ -23,25 +24,25 @@ stringData:
   type: Opaque
 ```
 
-There are two properties specifically responsible for specifying external database hostname or IP address and its port:
+The following properties set the hostname or IP address and port of the database:
 
-* POSTGRES_EXTERNAL_ADDRESS - an IP address (or a Hostname) of the external database. This address needs to be resolvable from the perspective of a Kubernetes cluster.
-* POSTGRES_EXTERNAL_PORT - (Optional) A database port.
+* `POSTGRES_EXTERNAL_ADDRESS` - an IP address or a hostname of the external database.
+ifeval::[{project_community}==true]
+This address needs to be resolvable in a Kubernetes cluster.
+endif::[]
+* `POSTGRES_EXTERNAL_PORT` - (Optional) A database port.
 
-The other properties should be set as follows:
+The other properties work in the same way for a hosted or external database. Set them as follows:
 
 * `POSTGRES_DATABASE` - Database name to be used.
 * `POSTGRES_HOST` - The name of the `Service` used to communicate with a database. Typically `keycloak-postgresql`.
 * `POSTGRES_USERNAME` - Database username
 * `POSTGRES_PASSWORD` - Database password
-* `POSTGRES_SUPERUSER` - Indicates whether backups should assume running as super user. Typically `true`.
+* `POSTGRES_SUPERUSER` - Indicates whether backups should run as super user. Typically `true`.
 
-The other properties remain the same for both operating modes (hosted Postgresql and an external one).
-
-The Keycloak Custom Resource also needs to be updated to turn the external database support on.
-Here's an example:
+The Keycloak custom resource requires updates to enable external database support.
 
-.`Keycloak` Custom Resource with external database support
+.Example YAML file for a `Keycloak` custom resource that supports an external database
 ```yaml
 apiVersion: keycloak.org/v1alpha1
 kind: Keycloak
@@ -56,6 +57,83 @@ spec:
   instances: 1
 ```
 
-==== Implementation details
-
-{project_operator} allows you to use an external Postgresql database by modifying `<Keycloak Custom Resource Name>-postgresql` Endpoints object. Typically, this object is maintained by Kubernetes, however it might be used to connect to an external service. See https://docs.openshift.com/container-platform/3.11/dev_guide/integrating_external_services.html[OpenShift manual] for more details.
+.Prerequisites
+
+* You have a YAML file for `keycloak-db-secret`.
+* You have modified the Keycloak custom resource to set `externalDatabase` to `true`.
+* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
+
+.Procedure
+
+. Locate the secret for your PostgreSQL database: `{create_cmd_brief} get secret <secret_for_db> -o yaml`. For example:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get secret keycloak-db-secret -o yaml
+apiVersion: v1
+data:
+  POSTGRES_DATABASE: cm9vdA==
+  POSTGRES_EXTERNAL_ADDRESS: MTkyLjAuMi4z
+  POSTGRES_EXTERNAL_PORT: NTQzMg==
+----
++
+The `POSTGRES_EXTERNAL_ADDRESS` is in Base64 format.
+
+. Decode the value for the secret: `echo "<encoded_secret>" | base64 --decode`. For example:
++
+[source,bash,subs=+attributes]
+----
+$ echo "MTkyLjAuMi4z" | base64 --decode
+192.0.2.3
+----
+
+. Confirm that the decoded value matches the IP address for your database:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get pods -o wide
+NAME                          READY   STATUS    RESTARTS   AGE   IP
+keycloak-0                    1/1     Running   0          13m   192.0.2.0
+keycloak-postgresql-c8vv27m   1/1     Running   0          24m   192.0.2.3
+----
+
+. Confirm that `keycloak-postgresql` appears in a list of running services:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get svc
+NAME                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
+keycloak              ClusterIP   203.0.113.0   <none>        8443/TCP   27m
+keycloak-discovery    ClusterIP   None          <none>        8080/TCP   27m
+keycloak-postgresql   ClusterIP   203.0.113.1   <none>        5432/TCP   27m
+----
++
+The `keycloak-postgresql` service sends requests to a set of IP addresses in the backend. These IP addresses are called endpoints.
+
+. View the endpoints used by the `keycloak-postgresql` service to confirm that they use the IP addresses for your database:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get endpoints keycloak-postgresql
+NAME                  ENDPOINTS        AGE
+keycloak-postgresql   192.0.2.3:5432   27m
+----
+
+. Confirm that {project_name} is running with the external database. This example shows that everything is running:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get pods -o wide
+NAME                          READY   STATUS    RESTARTS   AGE   IP
+keycloak-0                    1/1     Running   0          26m   192.0.2.0
+keycloak-postgresql-c8vv27m   1/1     Running   0          36m   192.0.2.3
+----
+
+ifeval::[{project_community}==true]
+.Additional Resources
+
+* To back up your database using custom resources, see xref:_backup-cr[Scheduling database backups].
+* For more information on Base64 encoding, see the https://kubernetes.io/docs/concepts/configuration/secret/[Kubernetes Secrets manual].
+endif::[]
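The Base64 handling used throughout this procedure can be tried locally without a cluster. The address below is the documentation IP used in the examples above; `printf '%s'` avoids encoding a trailing newline into the value:

```bash
# Encode a database address for the Secret's data section.
encoded=$(printf '%s' "192.0.2.3" | base64)
echo "$encoded"    # MTkyLjAuMi4z

# Decode it again, as in the verification step above.
printf '%s' "$encoded" | base64 --decode
echo
```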

View file

@@ -1,68 +1,154 @@
-=== Installing {project_operator} into OpenShift cluster
+[[_installing-operator]]
+=== Installing the {project_operator} on a cluster
 
-There are two ways of running {project_operator} in Kubernetes/OpenShift cluster:
+To install the {project_operator}, you can use:
 
-* Using Operator Lifecycle Manager (OLM)
-* Running manually
+* xref:_install_by_olm[The Operator Lifecycle Manager (OLM)]
+* xref:_install_by_command[Command line installation]
 
-==== Using Operator Lifecycle Manager
+[[_install_by_olm]]
+==== Installing using the Operator Lifecycle Manager
 
-In case of a newly installed OpenShift 4.x cluster, {project_operator} should already be available in
-OperatorHub tab of the OpenShift administration panel (as shown in the image below).
+ifeval::[{project_community}==true]
+You can install the Operator on an xref:_openshift-olm[OpenShift] or xref:_kubernetes-olm[Kubernetes] cluster.
+
+[[_openshift-olm]]
+===== Installation on an OpenShift cluster
+endif::[]
+
+.Prerequisites
+
+* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
+
+.Procedure
+
+Perform this procedure on an OpenShift 4.4 cluster.
+
+. Open the OpenShift Container Platform web console.
+. In the left column, click `Operators, OperatorHub`.
+. Search for {project_name} Operator.
++
+.OperatorHub tab in OpenShift
 image:{project_images}/operator-openshift-operatorhub.png[]
 
-If {project_operator} is not available, please navigate to https://operatorhub.io/operator/keycloak-operator, click `Install` button and follow the instructions on screen.
-
-image:{project_images}/operator-operatorhub-install.png[]
-
-TIP: For Kubernetes installation, please follow link:https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/install/install.md[Operator Lifecycle Manager Installation Guide]
-
-Once {project_operator} is available in your cluster, click `Install` button and follow the instruction on
-screen.
-
+. Click the {project_name} Operator icon.
++
+An Install page opens.
++
+.Operator Install page on OpenShift
 image:{project_images}/operator-olm-installation.png[]
 
-For more information, navigate to link:https://docs.openshift.com/container-platform/4.2/operators/olm-what-operators-are.html[OpenShift Operators guide].
-
-==== Running manually
+. Click `Install`.
+. Select a namespace and click Subscribe.
++
+.Namespace selection in OpenShift
+image:images/installed-namespace.png[]
++
+The Operator starts installing.
 
-It is possible to run {project_operator} without Operator Lifecycle Manager. This requires running several
-steps manually from link:{operatorRepo_link}[Github repo]:
+.Additional resources
 
-* Install all required Custom Resource Definitions by using the following command:
-```bash
-kubectl apply -f deploy/crds/
-```
-* Create a new namespace (or reuse an existing one). For this example, we'll be using namespace `myproject`.
-```bash
-kubectl create namespace myproject
-```
-* Deploy Role, Role Binding and a Service Account for {project_operator}.
-```bash
-kubectl apply -f deploy/role.yaml -n myproject
-kubectl apply -f deploy/role_binding.yaml -n myproject
-kubectl apply -f deploy/service_account.yaml -n myproject
-```
-* Deploy {project_operator}
-```bash
-kubectl apply -f deploy/operator.yaml -n myproject
-```
-* Validate if {project_operator} is running
-```bash
-kubectl get deployment keycloak-operator
-NAME               READY   UP-TO-DATE   AVAILABLE   AGE
-keycloak-operator  1/1     1            1           41s
-```
-
-TIP: When deploying {project_operator} to an OpenShift cluster, `kubectl apply` commands can be replaced with their `oc create` counterparts.
+* When the Operator installation completes, you are ready to create your first custom resource. See xref:_keycloak_cr[{project_name} installation using a custom resource].
+ifeval::[{project_community}==true]
+However, if you want to start tracking all Operator activities before creating custom resources, see the xref:_monitoring-operator[Application Monitoring Operator].
+endif::[]
+* For more information on OpenShift Operators, see the link:https://docs.openshift.com/container-platform/4.4/operators/olm-what-operators-are.html[OpenShift Operators guide].
+
+ifeval::[{project_community}==true]
+[[_kubernetes-olm]]
+===== Installation on a Kubernetes cluster
+
+.Prerequisites
+
+* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
+
+.Procedure
+
+For a Kubernetes cluster, perform these steps.
+
+. Go to link:https://operatorhub.io/operator/keycloak-operator[Keycloak Operator on OperatorHub.io].
+. Click `Install`.
+. Follow the instructions on the screen.
++
+.Operator Install page on Kubernetes
+image:{project_images}/operator-operatorhub-install.png[]
+
+.Additional resources
+
+* When the Operator installation completes, you are ready to create your first custom resource. See xref:_keycloak_cr[{project_name} installation using a custom resource]. However, if you want to start tracking all Operator activities before creating custom resources, see the xref:_monitoring-operator[Application Monitoring Operator].
+* For more information on a Kubernetes installation, see link:https://operatorhub.io/how-to-install-an-operator[How to install an Operator from OperatorHub.io].
+endif::[]
+
+[[_install_by_command]]
+==== Installing from the command line
+
+You can install the {project_operator} from the command line.
+
+.Prerequisites
+
+* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
+
+.Procedure
+
+. Obtain the software to install from this location: link:{operatorRepo_link}[Github repo].
+. Install all required custom resource definitions:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd} -f deploy/crds/
+----
+. Create a new namespace (or reuse an existing one) such as the namespace `myproject`:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd} namespace myproject
+----
+. Deploy a role, role binding, and service account for the Operator:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd} -f deploy/role.yaml -n myproject
+$ {create_cmd} -f deploy/role_binding.yaml -n myproject
+$ {create_cmd} -f deploy/service_account.yaml -n myproject
+----
+. Deploy the Operator:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd} -f deploy/operator.yaml -n myproject
+----
+. Confirm that the Operator is running:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get deployment keycloak-operator
+NAME               READY   UP-TO-DATE   AVAILABLE   AGE
+keycloak-operator  1/1     1            1           41s
+----
+
+.Additional resources
+
+* When the Operator installation completes, you are ready to create your first custom resource. See xref:_keycloak_cr[{project_name} installation using a custom resource].
+ifeval::[{project_community}==true]
+However, if you want to start tracking all Operator activities before creating custom resources, see the xref:_monitoring-operator[Application Monitoring Operator].
+* For more information on a Kubernetes installation, see link:https://operatorhub.io/how-to-install-an-operator[How to install an Operator from OperatorHub.io].
+endif::[]
+* For more information on OpenShift Operators, see the link:https://docs.openshift.com/container-platform/4.4/operators/olm-what-operators-are.html[OpenShift Operators guide].

View file

@@ -1,15 +1,28 @@
-=== KeycloakBackup Custom Resource
+[[_backup-cr]]
+=== Scheduling database backups
 
-{project_operator} provides automatic backups with manual restore in 3 modes:
+You can use the Operator to schedule automatic backups of the database as defined by custom resources. The custom resource triggers a backup job
+ifeval::[{project_community}==true]
+(or a `CronJob` in the case of Periodic Backups)
+endif::[]
+and reports back its status.
 
-* One time backups to a local Persistent Volume.
-* One time backups to Amazon S3 storage.
-* Periodic backups to Amazon S3 storage.
+ifeval::[{project_community}==true]
+Two options exist to schedule backups:
 
-The Operator uses `KeycloakBackup` Custom Resource (CR) to trigger a backup Job (or a `CronJob` in case of Periodic Backups) and reports back its status. The CR has the following structure:
+* xref:_backups-cr-aws[Backing up to AWS S3 storage]
+* xref:_backups-local-cr[Backing up to local storage]
+
+If you have AWS S3 storage, you can perform a one-time backup or periodic backups. If you do not have AWS S3 storage, you can back up to local storage.
+
+[[_backups-cr-aws]]
+==== Backing up to AWS S3 storage
+
+You can back up your database to AWS S3 storage one time or periodically. To back up your data periodically, enter a valid `CronJob` into the `schedule`.
+
+For AWS S3 storage, you create a YAML file for the backup custom resource and a YAML file for the AWS secret. The backup custom resource requires a YAML file with the following structure:
 
-.`KeycloakBackup` Custom Resource
 ```yaml
 apiVersion: keycloak.org/v1alpha1
 kind: KeycloakBackup
@@ -18,12 +31,14 @@ metadata:
 spec:
   aws:
     # Optional - used only for Periodic Backups.
-    # Follows usual crond syntax (e.g. use "0 1 * * *") to perform the backup every day at 1 AM.
+    # Follows usual crond syntax (for example, use "0 1 * * *" to perform the backup every day at 1 AM).
     schedule: <Cron Job Schedule>
     # Required - the name of the secret containing the credentials to access the S3 storage
     credentialsSecretName: <A Secret containing S3 credentials>
 ```
 
+The AWS secret requires a YAML file with the following structure:
+
 .AWS S3 `Secret`
 ```yaml
 apiVersion: v1
@@ -37,74 +52,148 @@ stringData:
   AWS_SECRET_ACCESS_KEY: <AWS Secret Key>
 ```
 
-IMPORTANT: The above secret name needs to match the one referred in the `KeycloakBackup` Custom Resource.
-
-Once the `KeycloakBackup` Custom Resource is created, {project_operator} will create a corresponding Job to back up the PostgreSQL database. The status of the backup is reported in the `status` field.
-Here's an example:
-
-.`KeycloakBackup` Status
-```yaml
-Name:         example-keycloakbackup
-Namespace:    keycloak
-Labels:       <none>
-Annotations:  <none>
-API Version:  keycloak.org/v1alpha1
-Kind:         KeycloakBackup
-Metadata:
-  Creation Timestamp:  2019-10-31T08:13:10Z
-  Generation:          1
-  Resource Version:    110940
-  Self Link:           /apis/keycloak.org/v1alpha1/namespaces/keycloak/keycloakbackups/example-keycloakbackup
-  UID:                 0ea2e038-c328-48a0-8d5a-52acbc826577
-Status:
-  Message:
-  Phase:    created
-  Ready:    true
-  Secondary Resources:
-    Job:
-      example-keycloakbackup
-    Persistent Volume Claim:
-      keycloak-backup-example-keycloakbackup
-```
-
-==== Backups to AWS S3
-
-In order to create Backups uploaded to S3 storage, you need to create a `KeycloakBackup` Custom Resource with `aws` sub-properties.
-
-IMPORTANT: The `credentialsSecretName` field is required and needs to contain a valid reference to a `Secret` containing AWS S3 credentials.
-
-If the `schedule` contains valid `CronJob` schedule definition, the Operator will backup your data periodically.
-
-==== Backups to a Local Storage
-
-{project_operator} can also create a backup to a local Persistent Volume. In order to do it, you need to create a `KeycloakBackup` Custom Resource without `aws` sub-properties. Here's an example:
-
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: KeycloakBackup
-metadata:
-  name: <CR Name>
-```
-
-{project_operator} will create a new `PersistentVolumeClaim` with the following naming scheme:
-keycloak-backup-<Custom Resource Name>
-
-It is a good practice to create a corresponding `PersistentVolume` for the upcoming backups upfront and use `claimRef` to reserve it only for `PersistentVolumeClaim` created by the Keycloak Operator (see https://docs.okd.io/3.6/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding[OKD manual for more details]).
-
-==== Automatic Restore
-
-WARNING: This is not implemented!
-
-One of the design goals of {project_name} Backups is to maintain one-to-one relationship between
-`KeycloakBackup` object and a physical copy of the data. This relationship is then used to restore the data. All you need to do is to set the `restore` flag in the `KeycloakBackup` to true:
-
-.`KeycloakBackup` with restore
-```yaml
-apiVersion: keycloak.org/v1alpha1
-kind: KeycloakBackup
-metadata:
-  name: <CR Name>
-spec:
-  restore: true
-```
+.Prerequisites
+
+* Your Backup custom resource YAML file includes a `credentialsSecretName` that references a `Secret` containing AWS S3 credentials.
+* Your `KeycloakBackup` custom resource has `aws` sub-properties.
+* You have a YAML file for the AWS S3 Secret that includes a `<Secret Name>` that matches the one identified in the backup custom resource.
+* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
+
+.Procedure
+
+. Create the secret with credentials: `{create_cmd} -f <secret_for_aws>.yaml`. For example:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd} -f secret.yaml
+keycloak.keycloak.org/aws_s3_secret created
+----
+
+. Create a backup job: `{create_cmd} -f <backup_crname>.yaml`. For example:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd} -f aws_one-time-backup.yaml
+keycloak.keycloak.org/aws_s3_backup created
+----
+
+. View a list of backup jobs:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get jobs
+NAME            COMPLETIONS   DURATION   AGE
+aws_s3_backup   0/1           6s         6s
+----
+
+. View the list of executed backup jobs:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get pods
+NAME                               READY   STATUS      RESTARTS   AGE
+aws_s3_backup-5b4rfdd              0/1     Completed   0          24s
+keycloak-0                         1/1     Running     0          52m
+keycloak-postgresql-c824c6-vv27m   1/1     Running     0          71m
+----
+
+. View the log of your completed backup job:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} logs aws_s3_backup-5b4rf
+==> Component data dump completed
+.
+.
+.
+----
++
+The status of the backup job also appears in the AWS console.
+
+[[_backups-local-cr]]
+==== Backing up to Local Storage
+endif::[]
+
+You can use the Operator to create a backup job that performs a one-time backup to a local Persistent Volume.
+
+.Example YAML file for a Backup custom resource
+```yaml
+apiVersion: keycloak.org/v1alpha1
+kind: KeycloakBackup
+metadata:
+  name: test-backup
+```
+
+.Prerequisites
+
+* You have a YAML file for this custom resource.
+ifeval::[{project_community}==true]
+Be sure to omit the `aws` sub-properties from this file.
+endif::[]
+* You have a `PersistentVolume` with a `claimRef` to reserve it only for a `PersistentVolumeClaim` created by the {project_name} Operator.
+
+.Procedure
+
+. Create a backup job: `{create_cmd} -f <backup_crname>`. For example:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd} -f one-time-backup.yaml
+keycloak.keycloak.org/test-backup
+----
++
+The Operator creates a `PersistentVolumeClaim` with the following naming scheme: `keycloak-backup-<CR_name>`.
+
+. View a list of volumes:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get pvc
+NAME                          STATUS   VOLUME
+keycloak-backup-test-backup   Bound    pvc-e242-ew022d5-093q-3134n-41-adff
+keycloak-postresql-claim      Bound    pvc-e242-vs29202-9bcd7-093q-31-zadj
+----
+
+. View a list of backup jobs:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get jobs
+NAME          COMPLETIONS   DURATION   AGE
+test-backup   0/1           6s         6s
+----
+
+. View the list of executed backup jobs:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} get pods
+NAME                               READY   STATUS      RESTARTS   AGE
+test-backup-5b4rf                  0/1     Completed   0          24s
+keycloak-0                         1/1     Running     0          52m
+keycloak-postgresql-c824c6-vv27m   1/1     Running     0          71m
+----
+
+. View the log of your completed backup job:
++
+[source,bash,subs=+attributes]
+----
+$ {create_cmd_brief} logs test-backup-5b4rf
+==> Component data dump completed
+.
+.
+.
+----
+
+.Additional resources
+
+* For more details on persistent volumes, see link:https://docs.openshift.com/container-platform/4.4/storage/understanding-persistent-storage.html[Understanding persistent storage].

View file

@@ -1,20 +1,26 @@
-=== KeycloakClient Custom Resource
+[[_client-cr]]
+=== Creating a client custom resource
 
-{project_operator} allows application developers to represent Keycloak Clients as Custom Resources:
+You can use the Operator to create clients in {project_name} as defined by a custom resource. You define the properties of the client in a YAML file.
 
-.`KeycloakClient` Custom Resource
+[NOTE]
+====
+You can update the YAML file and the changes appear in the {project_name} admin console; however, changes to the admin console do not update the custom resource.
+====
+
+.Example YAML file for a Client custom resource
 ```yaml
 apiVersion: keycloak.org/v1alpha1
 kind: KeycloakClient
 metadata:
-  name: <Keycloak Client name>
+  name: example-client
   labels:
     app: sso
 spec:
   realmSelector:
     matchLabels:
-      app: <matching labels for KeycloakRealm Custom Resource>
+      app: <matching labels for KeycloakRealm custom resource>
   client:
     # auto-generated if not supplied
     #id: 123
@ -24,13 +30,50 @@ spec:
# other properties of Keycloak Client # other properties of Keycloak Client
``` ```
.Prerequisites

* You have a YAML file for this custom resource.
* The `realmSelector` matches the labels of an existing `KeycloakRealm` custom resource.
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.

.Procedure
. Use this command on the YAML file that you created: `{create_cmd} -f <client-name>.yaml`. For example:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f initial_client.yaml
keycloakclient.keycloak.org/example-client created
----
. Log into the {project_name} admin console for the related instance of {project_name}.
. Click Clients.
+
The new client appears in the list of clients.
+
image:images/clients.png[]
.Results
After a client is created, the Operator creates a Secret containing the `Client ID` and the client's secret using the following naming pattern: `keycloak-client-secret-<custom resource name>`. For example:
.Client's Secret
```yaml
apiVersion: v1
data:
  CLIENT_ID: <base64 encoded Client ID>
  CLIENT_SECRET: <base64 encoded Client Secret>
kind: Secret
```
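To use these values outside the cluster, decode them from base64. A minimal sketch, using hypothetical encoded values standing in for a real Secret (fetching the actual `keycloak-client-secret-<custom resource name>` Secret requires a running cluster):

```bash
# Hypothetical base64 values standing in for the Secret's data fields;
# real values come from the keycloak-client-secret-<custom resource name> Secret.
CLIENT_ID='ZXhhbXBsZS1jbGllbnQ='     # decodes to "example-client"
CLIENT_SECRET='czNjcjN0LXZhbHVl'     # decodes to "s3cr3t-value"

# Kubernetes Secrets store data base64-encoded; base64 -d reverses it.
echo "$CLIENT_ID" | base64 -d; echo
echo "$CLIENT_SECRET" | base64 -d; echo
```

The same decoding applies to any Secret created by the Operator.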
After the Operator processes the custom resource, view the status with this command:
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} describe keycloakclient <cr_name>
----
.Client custom resource status
```yaml
Name:         client-secret
Namespace:    keycloak
...
Events:       <none>
```
.Additional resources

* When the client creation completes, you are ready to xref:_user-cr[create a user custom resource].
[[_keycloak_cr]]
=== {project_name} installation using a custom resource

You can use the Operator to automate the installation of {project_name} by creating a Keycloak custom resource. When you use a custom resource to install {project_name}, you create the components and services that are described here and illustrated in the graphic that follows.

* `keycloak-db-secret` - Stores properties such as the database username, password, and external address (if you connect to an external database)
* `credentials-<CR-Name>` - Admin username and password to log into the {project_name} admin console (the `<CR-Name>` is based on the `Keycloak` custom resource name)
* `keycloak` - Keycloak deployment specification that is implemented as a StatefulSet with high availability support
* `keycloak-postgresql` - Starts a PostgreSQL database installation
* `keycloak-discovery` Service - Performs `JDBC_PING` discovery
* `keycloak` Service - Connects to {project_name} through HTTPS (HTTP is not supported)
* `keycloak-postgresql` Service - Connects to an internal or, if used, an external database instance
* `keycloak` Route - The URL for accessing the {project_name} admin console from OpenShift
ifeval::[{project_community}==true]
* `keycloak` Ingress - The URL for accessing the {project_name} admin console from Kubernetes
endif::[]

.How Operator components and services interact
image:{project_images}/operator-components.png[]
==== The Keycloak custom resource
The Keycloak custom resource is a YAML file that defines the parameters for installation. This file contains three properties.
* `instances` - Controls the number of instances running in high availability mode.
* `externalAccess` - If `enabled` is `True`, the Operator creates a route for OpenShift
ifeval::[{project_community}==true]
or an Ingress for Kubernetes
endif::[]
for the {project_name} cluster.
* `externalDatabase` - Applies only if you want to connect to an externally hosted database. That topic is covered in the xref:_external_database[external database] section of this guide.
.Example YAML file for a Keycloak custom resource
```yaml
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
ifeval::[{project_community}==true]
  name: example-keycloak
endif::[]
ifeval::[{project_product}==true]
  name: example-sso
endif::[]
  labels:
    app: sso
spec:
  instances: 1
  externalAccess:
    enabled: True
```
[NOTE]
====
You can update the YAML file and the changes appear in the {project_name} admin console; however, changes to the admin console do not update the custom resource.
====
==== Creating a Keycloak custom resource on OpenShift

On OpenShift, you use the custom resource to create a route, which is the URL of the admin console, and find the secret, which holds the username and password for the admin console.

.Prerequisites
* You have a YAML file for this custom resource.
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
ifeval::[{project_community}==true]
* If you want to start tracking all Operator activities now, install the monitoring application before you create this custom resource. See xref:_monitoring-operator[The Application Monitoring Operator].
endif::[]
.Procedure
. Create a route using your YAML file: `{create_cmd} -f <filename>.yaml -n <namespace>`. For example:
+
[source,bash,subs=+attributes]
----
ifeval::[{project_community}==true]
$ {create_cmd} -f keycloak.yaml -n keycloak
keycloak.keycloak.org/example-keycloak created
endif::[]
ifeval::[{project_product}==true]
$ {create_cmd} -f sso.yaml -n sso
keycloak.keycloak.org/example-sso created
endif::[]
----
+
A route is created in OpenShift.
. Log into the OpenShift web console.
. Select `Networking`, `Routes` and search for Keycloak.
+
.Routes screen in OpenShift web console
image:images/route-ocp.png[]
. On the screen with the Keycloak route, click the URL under `Location`.
+
The {project_name} admin console login screen appears.
+
.Admin console login screen
image:images/login-empty.png[]
. Locate the username and password for the admin console in the OpenShift web console; under `Workloads`, click `Secrets` and search for Keycloak.
+
.Secrets screen in OpenShift web console
image:images/secrets-ocp.png[]
. Enter the username and password into the admin console login screen.
+
.Admin console login screen
image:images/login-complete.png[]
+
You are now logged into an instance of {project_name} that was installed by a Keycloak custom resource. You are ready to create custom resources for realms, clients, and users.
+
.{project_name} master realm
image:images/new_install_cr.png[]
. Check the status of the custom resource:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} describe keycloak <cr_name>
----
ifeval::[{project_community}==true]
==== Creating a Keycloak custom resource on Kubernetes
On Kubernetes, you use the custom resource to create an ingress, which provides the IP address of the admin console, and find the secret, which holds the username and password for that console.
.Prerequisites
* You have a YAML file for this custom resource.
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
.Procedure
. Create the ingress using your YAML file. `{create_cmd} -f <filename>.yaml -n <namespace>`. For example:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f keycloak.yaml -n keycloak
keycloak.keycloak.org/example-keycloak created
----
. Find the ingress: `{create_cmd_brief} get ingress -n <namespace>`. For example:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} get ingress -n keycloak
NAME HOSTS ADDRESS PORTS AGE
keycloak keycloak.redhat.com 192.0.2.0 80 3m
----
. Copy and paste the ADDRESS (the ingress) into a web browser.
+
The {project_name} admin console login screen appears.
+
.Admin console login screen
image:images/login-empty.png[]
. Locate the username and password.
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} get secret credentials-<CR-Name> -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
----
. Enter the username and password in the admin console login screen.
+
.Admin console login screen
image:images/login-complete.png[]
+
You are now logged into an instance of {project_name} that was installed by a Keycloak custom resource. You are ready to create custom resources for realms, clients, and users.
+
.Admin console master realm
image:images/new_install_cr.png[]
endif::[]
.Results
After the Operator processes the custom resource, view the status with this command:
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} describe keycloak <cr_name>
----
.Keycloak custom resource status
```yaml
Name:         example-keycloak
Namespace:    keycloak
...
Version:
Events:
```
.Additional resources
* Once the installation of {project_name} completes, you are ready to xref:_realm-cr[create a realm custom resource].
* If you have an external database, you can modify the Keycloak custom resource to support it. See xref:_external_database[Connecting to an external database].
[[_realm-cr]]
=== Creating a realm custom resource
You can use the Operator to create realms in {project_name} as defined by a custom resource. You define the properties of the realm custom resource in a YAML file.
[NOTE]
====
You can update the YAML file and changes appear in the {project_name} admin console; however, changes to the admin console do not update the custom resource.
====
.Example YAML file for a realm custom resource
```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakRealm
metadata:
  name: test
  labels:
    app: sso
spec:
  # ...
  instanceSelector:
    matchLabels:
      app: sso
```
.Prerequisites

* You have a YAML file for this custom resource.
* In the YAML file, the `app` under `instanceSelector` matches the labels of a Keycloak custom resource. Matching these values ensures that you create the realm in the right instance of {project_name}.
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.

.Procedure
. Use this command on the YAML file that you created: `{create_cmd} -f <realm-name>.yaml`. For example:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f initial_realm.yaml
keycloakrealm.keycloak.org/test created
----
. Log into the admin console for the related instance of {project_name}.
. Click Select Realm and locate the realm that you created.
+
The new realm opens.
+
.Admin console showing the new realm
image:images/test-realm-cr.png[]
.Results
After the Operator processes the custom resource, view the status with this command:
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} describe keycloakrealm <cr_name>
----
.Realm custom resource status
```yaml
Name:         example-keycloakrealm
Namespace:    keycloak
...
Events:       <none>
```
.Additional resources

* When the realm creation completes, you are ready to xref:_client-cr[create a client custom resource].
[[_user-cr]]
=== Creating a user custom resource
You can use the Operator to create users in {project_name} as defined by a custom resource. You define the properties of the user custom resource in a YAML file.
[NOTE]
====
You can update properties, except for the password, in the YAML file and changes appear in the {project_name} admin console; however, changes to the admin console do not update the custom resource.
====
.Example YAML file for a user custom resource
```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakUser
metadata:
  name: example-user
spec:
  user:
    username: "realm_user"
    # ...
  realmSelector:
    matchLabels:
      app: sso
```
.Prerequisites

* You have a YAML file for this custom resource.
* The `realmSelector` matches the labels of an existing realm custom resource.
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.

.Procedure
. Use this command on the YAML file that you created: `{create_cmd} -f <user_cr>.yaml`. For example:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f initial_user.yaml
keycloakuser.keycloak.org/example-user created
----
. Log into the admin console for the related instance of {project_name}.
. Click Users.
. Search for the user that you defined in the YAML file.
+
The user is in the list.
+
image:images/user_list.png[]
.Results
After a user is created, the Operator creates a Secret containing both the username and password, using the following naming pattern: `credential-<realm name>-<username>-<namespace>`. For example:
.`KeycloakUser` Secret
```yaml
kind: Secret
apiVersion: v1
data:
  password: <base64 encoded password>
  username: <base64 encoded username>
type: Opaque
```
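The naming pattern above can be assembled mechanically. A small sketch with illustrative values; the realm, username, and namespace here are assumptions drawn from the examples in this guide:

```bash
# Components of the Secret name; substitute your own values.
REALM=test            # realm CR name from the earlier example
USERNAME=realm_user   # username from the KeycloakUser spec
NAMESPACE=keycloak    # namespace where the CR was created

# credential-<realm name>-<username>-<namespace>
echo "credential-${REALM}-${USERNAME}-${NAMESPACE}"
# prints: credential-test-realm_user-keycloak
```

Knowing the name lets you fetch the Secret directly instead of searching for it in the web console.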
After the Operator processes the custom resource, view the status with this command:
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} describe keycloakuser <cr_name>
----
.User custom resource status
```yaml
Name:         example-realm-user
Namespace:    keycloak
...
Events:       <none>
```
.Additional resources

* If you have an external database, you can modify the Keycloak custom resource to support it. See xref:_external_database[Connecting to an external database].
* To back up your database using custom resources, see xref:_backup-cr[schedule database backups].
[[_monitoring-operator]]
=== The {application_monitoring_operator}
Before using the Operator to install {project_name} or create components, we recommend that you install the {application_monitoring_operator}, which tracks Operator activity. To view metrics for the Operator, you can use the Grafana Dashboard and Prometheus Alerts from the {application_monitoring_operator}. For example, you can view metrics such as the number of controller runtime reconciliation loops, the reconcile loop time, and errors.

The {project_operator} integration with the {application_monitoring_operator} requires no action. You only need to install the {application_monitoring_operator} in the cluster.

==== Installing the {application_monitoring_operator}

.Prerequisites

* The {project_operator} is installed.
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
.Procedure
. Install the {application_monitoring_operator} by following the link:{application_monitoring_operator_installation_link}[documentation].
. Annotate the namespace used for the {project_operator} installation. For example:
+
[source,bash,subs=+attributes]
----
{create_cmd_brief} label namespace <namespace> monitoring-key=middleware
----
. Log into the OpenShift web console.
. Confirm that monitoring is working by searching for the Prometheus and Grafana routes in the `application-monitoring` namespace.
+
.Routes in OpenShift web console
image:{project_images}/operator-application-monitoring-routes.png[]
==== Viewing Operator metrics

Grafana and Prometheus each provide graphical information about Operator activities.
* The Operator installs a pre-defined Grafana Dashboard as shown here:
+
.Grafana Dashboard
image:{project_images}/operator-graphana-dashboard.png[]
+
[NOTE]
====
If you make customizations, we recommend that you clone the Grafana Dashboard so that your changes are not overwritten during an upgrade.
====
* The Operator installs a set of pre-defined Prometheus Alerts as shown here:
+
.Prometheus Alerts
image:{project_images}/operator-prometheus-alerts.png[]
.Additional resources

* For more information, see link:https://docs.openshift.com/container-platform/latest/monitoring/cluster_monitoring/prometheus-alertmanager-and-grafana.html[Accessing Prometheus, Alertmanager, and Grafana].
:operatorRepo_link: https://github.com/keycloak/keycloak-operator
:application_monitoring_operator: Application Monitoring Operator
:application_monitoring_operator_installation_link: https://github.com/integr8ly/application-monitoring-operator#installation
:create_cmd: kubectl apply
:create_cmd_brief: kubectl
:quickstartRepo_link: https://github.com/keycloak/keycloak-quickstarts
:quickstartRepo_name: Keycloak Quickstarts Repository
:operatorRepo_link: https://github.com/keycloak/keycloak-operator
:application_monitoring_operator: Red Hat Managed Integration (RHMI) Application Monitoring Operator
:application_monitoring_operator_installation_link: https://github.com/integr8ly/application-monitoring-operator#installation
:create_cmd: oc create
:create_cmd_brief: oc
:project_dirref: RHSSO_HOME
[[_installing-operator]]
=== Installing the {project_operator} on a cluster
To install the {project_operator}, you can use:
* xref:_install_by_olm[The Operator Lifecycle Manager (OLM)]
* xref:_install_by_command[Command line installation]
[[_install_by_olm]]
==== Installing using the Operator Lifecycle Manager
ifeval::[{project_community}==true]
You can install the Operator on an xref:_openshift-olm[OpenShift] or xref:_kubernetes-olm[Kubernetes] cluster.
[[_openshift-olm]]
===== Installation on an OpenShift cluster
endif::[]
.Prerequisites
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
.Procedure
Perform this procedure on an OpenShift 4.4 cluster.
. Open the OpenShift Container Platform web console.
. In the left column, click `Operators, OperatorHub`.
. Search for {project_name} Operator.
+
.OperatorHub tab in OpenShift
image:{project_images}/operator-openshift-operatorhub.png[]
. Click the {project_name} Operator icon.
+
An Install page opens.
+
.Operator Install page on OpenShift
image:{project_images}/operator-olm-installation.png[]
. Click `Install`.
. Select a namespace and click `Subscribe`.
+
.Namespace selection in OpenShift
image:images/installed-namespace.png[]
+
The Operator starts installing.
When the Operator installation completes, you are ready to create custom resources to install {project_name}, create realms, and perform other administrative tasks.
ifeval::[{project_community}==true]
[[_kubernetes-olm]]
===== Installation on a Kubernetes cluster
.Prerequisites
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
.Procedure
For a Kubernetes cluster, perform these steps.
. Go to link:https://operatorhub.io/operator/keycloak-operator[Keycloak Operator on OperatorHub.io].
. Click `Install`.
. Follow the instructions on the screen.
+
.Operator Install page on Kubernetes
image:{project_images}/operator-operatorhub-install.png[]
When the Operator installation completes, you are ready to create custom resources to install {project_name}, create realms, and perform other administrative tasks.
endif::[]
.Additional resources
ifeval::[{project_community}==true]
* For more information on a Kubernetes installation, see link:https://operatorhub.io/how-to-install-an-operator[How to install an Operator from OperatorHub.io].
* To track what you install and create with the Operator, you can now set up a monitoring application. See xref:_monitoring-operator[The Application Monitoring Operator].
endif::[]
* For more information on OpenShift Operators, see the link:https://docs.openshift.com/container-platform/4.4/operators/olm-what-operators-are.html[OpenShift Operators guide].
* For more information on OpenShift installation, see link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.4/html/installing/index[Installing and Configuring OpenShift Container Platform clusters].
[[_install_by_command]]
==== Installing from the command line
You can install the {project_operator} from the command line.
.Prerequisites
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.
.Procedure
. Obtain the software to install from the link:{operatorRepo_link}[GitHub repository].
. Install all required custom resource definitions:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f deploy/crds/
----
. Create a new namespace (or reuse an existing one) such as the namespace `myproject`:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} create namespace myproject
----
. Deploy a role, role binding, and service account for the Operator:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f deploy/role.yaml -n myproject
$ {create_cmd} -f deploy/role_binding.yaml -n myproject
$ {create_cmd} -f deploy/service_account.yaml -n myproject
----
. Deploy the Operator:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f deploy/operator.yaml -n myproject
----
. Confirm that the Operator is running:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} get deployment keycloak-operator -n myproject
NAME READY UP-TO-DATE AVAILABLE AGE
keycloak-operator 1/1 1 1 41s
----
.Additional resources
* To create your first custom resource, see xref:_keycloak_cr[{project_name} installation using a custom resource].
ifeval::[{project_community}==true]
* If you want to start tracking all Operator activities before creating custom resources, see the xref:_monitoring-operator[Application Monitoring Operator].
endif::[]