KEYCLOAK-12273 Operator documentation

This commit is contained in:
Sebastian Laskawiec 2019-12-03 11:22:58 +01:00 committed by Hynek Mlnařík
parent 067ff33d26
commit 8c12072a8c
18 changed files with 579 additions and 1 deletion

Six binary image files (documentation screenshots) added; contents not shown.

@@ -46,4 +46,15 @@ include::topics/cache.adoc[]
include::topics/cache/eviction.adoc[]
include::topics/cache/replication.adoc[]
include::topics/cache/disable.adoc[]
include::topics/cache/clear.adoc[]
ifeval::[{project_community}==true]
include::topics/operator.adoc[]
include::topics/operator/installation.adoc[]
include::topics/operator/keycloak-cr.adoc[]
include::topics/operator/keycloak-realm-cr.adoc[]
include::topics/operator/keycloak-client-cr.adoc[]
include::topics/operator/keycloak-user-cr.adoc[]
include::topics/operator/keycloak-backup-cr.adoc[]
include::topics/operator/external-database.adoc[]
include::topics/operator/monitoring-stack.adoc[]
endif::[]

@@ -0,0 +1,14 @@
[[_operator]]
== {project_operator}
{project_operator} automates {project_name} administration in a Kubernetes or OpenShift cluster.
It allows you to:
* Install Keycloak to a namespace
* Import Keycloak Realms
* Import Keycloak Clients
* Import Keycloak Users
* Create scheduled backups of the database
* Install Extensions

@@ -0,0 +1,61 @@
=== External database support
Enabling external PostgreSQL database support requires creating a `keycloak-db-secret` Secret in the following form. Note that values under `stringData` are plain text; if you use the `data` field instead, they must be Base64 encoded (see the https://kubernetes.io/docs/concepts/configuration/secret/[Kubernetes Secrets manual]):
.`keycloak-db-secret` Secret
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
  namespace: keycloak
stringData:
  POSTGRES_DATABASE: <Database Name>
  POSTGRES_EXTERNAL_ADDRESS: <External Database IP or URL (resolvable by K8s)>
  POSTGRES_EXTERNAL_PORT: <External Database Port>
  # Strongly recommended to use <'Keycloak CR Name'-postgresql>
  POSTGRES_HOST: <Database Service Name>
  POSTGRES_PASSWORD: <Database Password>
  # Required for AWS Backup functionality
  POSTGRES_SUPERUSER: "true"
  POSTGRES_USERNAME: <Database Username>
type: Opaque
```
Two properties are specifically responsible for specifying the external database's hostname or IP address and its port:
* `POSTGRES_EXTERNAL_ADDRESS` - an IP address or hostname of the external database. It needs to be resolvable from within the Kubernetes cluster.
* `POSTGRES_EXTERNAL_PORT` - (Optional) the database port.
The other properties should be set as follows:
* `POSTGRES_DATABASE` - the database name to be used.
* `POSTGRES_HOST` - the name of the `Service` used to communicate with the database. Typically `keycloak-postgresql`.
* `POSTGRES_USERNAME` - the database username.
* `POSTGRES_PASSWORD` - the database password.
* `POSTGRES_SUPERUSER` - indicates whether backups should run as a superuser. Typically `true`.
These properties are the same for both operating modes (a hosted PostgreSQL database and an external one).
The `Keycloak` Custom Resource also needs to be updated to enable external database support.
Here's an example:
.`Keycloak` Custom Resource with external database support
```yaml
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: sso
  name: example-keycloak
  namespace: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
```
==== Implementation details
{project_operator} enables the use of an external PostgreSQL database by modifying the `<Keycloak Custom Resource Name>-postgresql` Endpoints object. This object is typically maintained by Kubernetes; however, it can also be used to point at an external service. See the https://docs.openshift.com/container-platform/3.11/dev_guide/integrating_external_services.html[OpenShift manual] for more details.
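For illustration only, an `Endpoints` object pointing at an external database could look roughly like the following sketch; the name, namespace, IP address, and port below are placeholders, and in practice {project_operator} manages this object for you when `externalDatabase.enabled` is set to `true`:
.Example `Endpoints` object for an external database (illustrative)
```yaml
apiVersion: v1
kind: Endpoints
metadata:
  # Must match the name of the corresponding <Keycloak CR Name>-postgresql Service
  name: example-keycloak-postgresql
  namespace: keycloak
subsets:
  - addresses:
      # IP address of the external database; Endpoints require IPs, not hostnames
      - ip: 10.0.0.15
    ports:
      - port: 5432
```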

@@ -0,0 +1,68 @@
=== Installing {project_operator} into an OpenShift cluster
There are two ways of running {project_operator} in a Kubernetes or OpenShift cluster:
* Using Operator Lifecycle Manager (OLM)
* Running manually
==== Using Operator Lifecycle Manager
On a newly installed OpenShift 4.x cluster, {project_operator} should already be available in the
OperatorHub tab of the OpenShift administration console (as shown in the image below).
image:{project_images}/operator-openshift-operatorhub.png[]
If {project_operator} is not available, navigate to https://operatorhub.io/operator/keycloak-operator, click the `Install` button, and follow the on-screen instructions.
image:{project_images}/operator-operatorhub-install.png[]
TIP: For installation on Kubernetes, follow the link:https://github.com/operator-framework/operator-lifecycle-manager/blob/master/doc/install/install.md[Operator Lifecycle Manager Installation Guide].
Once {project_operator} is available in your cluster, click the `Install` button and follow the on-screen
instructions.
image:{project_images}/operator-olm-installation.png[]
For more information, see the link:https://docs.openshift.com/container-platform/4.2/operators/olm-what-operators-are.html[OpenShift Operators guide].
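If you prefer a declarative OLM installation instead of the web console, a `Subscription` manifest along these lines can be applied; the channel and catalog source names below are assumptions and may differ in your cluster's catalog:
.Example OLM `Subscription` (illustrative)
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: keycloak-operator
  # An OperatorGroup targeting this namespace must also exist
  namespace: myproject
spec:
  name: keycloak-operator
  # Channel and catalog source are examples; verify them in your OperatorHub catalog
  channel: alpha
  source: operatorhubio-catalog
  sourceNamespace: olm
```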
==== Running manually
It is possible to run {project_operator} without Operator Lifecycle Manager. This requires performing several
steps manually, using files from the link:{operatorRepo_link}[GitHub repository]:
* Install all required Custom Resource Definitions by using the following command:
```bash
kubectl apply -f deploy/crds/
```
* Create a new namespace (or reuse an existing one). For this example, we'll be using namespace `myproject`.
```bash
kubectl create namespace myproject
```
* Deploy the Role, Role Binding, and Service Account for {project_operator}.
```bash
kubectl apply -f deploy/role.yaml -n myproject
kubectl apply -f deploy/role_binding.yaml -n myproject
kubectl apply -f deploy/service_account.yaml -n myproject
```
* Deploy {project_operator}
```bash
kubectl apply -f deploy/operator.yaml -n myproject
```
* Verify that {project_operator} is running.
```bash
kubectl get deployment keycloak-operator
NAME READY UP-TO-DATE AVAILABLE AGE
keycloak-operator 1/1 1 1 41s
```
TIP: When deploying {project_operator} to an OpenShift cluster, `kubectl apply` commands can be replaced with their `oc create` counterparts.

@@ -0,0 +1,110 @@
=== KeycloakBackup Custom Resource
{project_operator} provides automatic backups with manual restore in three modes:
* One time backups to a local Persistent Volume.
* One time backups to Amazon S3 storage.
* Periodic backups to Amazon S3 storage.
The Operator uses a `KeycloakBackup` Custom Resource (CR) to trigger a backup `Job` (or a `CronJob` in the case of periodic backups) and reports its status back. The CR has the following structure:
.`KeycloakBackup` Custom Resource
```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakBackup
metadata:
  name: <CR Name>
spec:
  aws:
    # Optional - used only for Periodic Backups.
    # Follows usual crond syntax (e.g. use "0 1 * * *") to perform the backup every day at 1 AM.
    schedule: <Cron Job Schedule>
    # Required - the name of the secret containing the credentials to access the S3 storage
    credentialsSecretName: <A Secret containing S3 credentials>
```
.AWS S3 `Secret`
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <Secret Name>
type: Opaque
stringData:
  AWS_S3_BUCKET_NAME: <S3 Bucket Name>
  AWS_ACCESS_KEY_ID: <AWS Access Key ID>
  AWS_SECRET_ACCESS_KEY: <AWS Secret Key>
```
IMPORTANT: The above Secret name needs to match the one referenced in the `KeycloakBackup` Custom Resource.
Once the `KeycloakBackup` Custom Resource is created, {project_operator} will create a corresponding Job to back up the PostgreSQL database. The status of the backup is reported in the `status` field.
Here's an example:
.`KeycloakBackup` Status
```yaml
Name:         example-keycloakbackup
Namespace:    keycloak
Labels:       <none>
Annotations:  <none>
API Version:  keycloak.org/v1alpha1
Kind:         KeycloakBackup
Metadata:
  Creation Timestamp:  2019-10-31T08:13:10Z
  Generation:          1
  Resource Version:    110940
  Self Link:           /apis/keycloak.org/v1alpha1/namespaces/keycloak/keycloakbackups/example-keycloakbackup
  UID:                 0ea2e038-c328-48a0-8d5a-52acbc826577
Status:
  Message:
  Phase:  created
  Ready:  true
  Secondary Resources:
    Job:
      example-keycloakbackup
    Persistent Volume Claim:
      keycloak-backup-example-keycloakbackup
```
==== Backups to AWS S3
To create backups that are uploaded to S3 storage, create a `KeycloakBackup` Custom Resource with the `aws` sub-properties.
IMPORTANT: The `credentialsSecretName` field is required and needs to contain a valid reference to a `Secret` containing AWS S3 credentials.
If `schedule` contains a valid `CronJob` schedule definition, the Operator will back up your data periodically.
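For example, a periodic backup uploaded to S3 every night at 1 AM could be requested as follows; the CR and Secret names are placeholders:
.Periodic S3 backup example
```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakBackup
metadata:
  name: example-periodic-backup
spec:
  aws:
    # Run the backup every day at 1 AM
    schedule: "0 1 * * *"
    # Must match the Secret holding the AWS S3 credentials
    credentialsSecretName: s3-backup-credentials
```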
==== Backups to Local Storage
{project_operator} can also create a backup to a local Persistent Volume. To do so, create a `KeycloakBackup` Custom Resource without the `aws` sub-properties. Here's an example:
```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakBackup
metadata:
  name: <CR Name>
```
{project_operator} will create a new `PersistentVolumeClaim` with the following naming scheme:
`keycloak-backup-<Custom Resource Name>`
It is good practice to create a corresponding `PersistentVolume` for the upcoming backups up front and to use `claimRef` to reserve it exclusively for the `PersistentVolumeClaim` created by the Keycloak Operator (see the https://docs.okd.io/3.6/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding[OKD manual] for more details).
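A pre-created `PersistentVolume` reserved for the Operator's claim could look roughly like this sketch; the capacity, `hostPath`, and names are illustrative only:
.Pre-created `PersistentVolume` reserved with `claimRef` (illustrative)
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: keycloak-backup-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/keycloak-backup
  claimRef:
    # Reserves this volume for the PersistentVolumeClaim created by the Operator
    name: keycloak-backup-<Custom Resource Name>
    namespace: keycloak
```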
==== Automatic Restore
WARNING: This is not implemented!
One of the design goals of {project_name} backups is to maintain a one-to-one relationship between
a `KeycloakBackup` object and a physical copy of the data. This relationship is then used to restore the data. All you need to do is set the `restore` flag in the `KeycloakBackup` to `true`:
.`KeycloakBackup` with restore
```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakBackup
metadata:
  name: <CR Name>
spec:
  restore: true
```

@@ -0,0 +1,67 @@
=== KeycloakClient Custom Resource
{project_operator} allows application developers to represent Keycloak Clients as Custom Resources:
.`KeycloakClient` Custom Resource
```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakClient
metadata:
  name: <Keycloak Client name>
  labels:
    app: sso
spec:
  realmSelector:
    matchLabels:
      app: <matching labels for KeycloakRealm Custom Resource>
  client:
    # auto-generated if not supplied
    #id: 123
    clientId: client-secret
    secret: client-secret
    # ...
    # other properties of Keycloak Client
```
TIP: Note that `realmSelector` needs to match the labels of an existing `KeycloakRealm` Custom Resource.
NOTE: {project_operator} synchronizes all the changes made to the Custom Resource with a running {project_name} instance. No manual changes via the {project_name} Admin Console are allowed.
Once {project_operator} reconciles the Custom Resource, it reports the status back:
.`KeycloakClient` Custom Resource Status
```yaml
Name:         client-secret
Namespace:    keycloak
Labels:       app=sso
API Version:  keycloak.org/v1alpha1
Kind:         KeycloakClient
Spec:
  Client:
    Client Authenticator Type:  client-secret
    Client Id:                  client-secret
    Id:                         keycloak-client-secret
  Realm Selector:
    Match Labels:
      App:  sso
Status:
  Message:
  Phase:  reconciling
  Ready:  true
  Secondary Resources:
    Secret:
      keycloak-client-secret-client-secret
Events:  <none>
```
Once a Client is created, {project_operator} creates a Secret containing both the Client ID and the client's secret, using the following naming pattern: `keycloak-client-secret-<Custom Resource name>`. Here's an example:
.`KeycloakClient`'s Secret
```yaml
apiVersion: v1
data:
  CLIENT_ID: <base64 encoded Client ID>
  CLIENT_SECRET: <base64 encoded Client Secret>
kind: Secret
```
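An application that needs these credentials can consume the generated Secret as environment variables; the following is a minimal sketch with a hypothetical Deployment and image:
.Consuming the generated client Secret (illustrative)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application
spec:
  selector:
    matchLabels:
      app: my-application
  template:
    metadata:
      labels:
        app: my-application
    spec:
      containers:
        - name: my-application
          image: my-application:latest
          envFrom:
            # Exposes CLIENT_ID and CLIENT_SECRET as environment variables
            - secretRef:
                name: keycloak-client-secret-<Custom Resource name>
```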

@@ -0,0 +1,67 @@
=== Keycloak Custom Resource
The {project_operator} allows users to manage {project_name} clusters by using Keycloak Custom Resources:
.`Keycloak` Custom Resource
```yaml
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: example-keycloak
  labels:
    app: sso
spec:
  instances: 1
  externalAccess:
    enabled: True
```
The `spec` contains three properties (two of them are listed above; external database support is covered in a separate section of this guide):
* `instances` - controls the number of instances running in high availability (HA) mode.
* `externalAccess` - if the `enabled` flag is set to `True`, {project_operator} will create either an OpenShift Route or a Kubernetes Ingress for the {project_name} cluster, depending on which resource types are available in the cluster.
Once {project_operator} reconciles the Custom Resource, it reports the status back:
.`Keycloak` Custom Resource Status
```yaml
Name:         example-keycloak
Namespace:    keycloak
Labels:       app=sso
Annotations:  <none>
API Version:  keycloak.org/v1alpha1
Kind:         Keycloak
Spec:
  External Access:
    Enabled:  true
  Instances:  1
Status:
  Credential Secret:  credential-example-keycloak
  Internal URL:       https://<External URL to the deployed instance>
  Message:
  Phase:  reconciling
  Ready:  true
  Secondary Resources:
    Deployment:
      keycloak-postgresql
    Persistent Volume Claim:
      keycloak-postgresql-claim
    Prometheus Rule:
      keycloak
    Route:
      keycloak
    Secret:
      credential-example-keycloak
      keycloak-db-secret
    Service:
      keycloak-postgresql
      keycloak
      keycloak-discovery
    Service Monitor:
      keycloak
    Stateful Set:
      keycloak
  Version:
Events:
```
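The `Credential Secret` listed in the status holds the automatically generated admin credentials. Assuming the usual `ADMIN_USERNAME` and `ADMIN_PASSWORD` keys (an assumption; check the Secret's contents in your cluster), they can be read like this:
.Reading the admin credentials (illustrative)
```bash
kubectl get secret credential-example-keycloak -n keycloak -o jsonpath='{.data.ADMIN_USERNAME}' | base64 -d
kubectl get secret credential-example-keycloak -n keycloak -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 -d
```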

@@ -0,0 +1,63 @@
=== KeycloakRealm Custom Resource
The {project_operator} allows application developers to represent {project_name} Realms as Custom Resources:
.`KeycloakRealm` Custom Resource
```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakRealm
metadata:
  name: example-keycloakrealm
  labels:
    app: sso
spec:
  realm:
    id: "basic"
    realm: "basic"
    enabled: True
    displayName: "Basic Realm"
  instanceSelector:
    matchLabels:
      app: sso
```
IMPORTANT: Note that `instanceSelector` needs to match the labels of an existing `Keycloak` Custom Resource.
NOTE: {project_operator} synchronizes all the changes made to the Custom Resource with a running {project_name} instance. No manual changes via the {project_name} Admin Console are allowed.
Once {project_operator} reconciles the Custom Resource, it reports the status back:
.`KeycloakRealm` Custom Resource Status
```yaml
Name:         example-keycloakrealm
Namespace:    keycloak
Labels:       app=sso
Annotations:  <none>
API Version:  keycloak.org/v1alpha1
Kind:         KeycloakRealm
Metadata:
  Creation Timestamp:  2019-12-03T09:46:02Z
  Finalizers:
    realm.cleanup
  Generation:        1
  Resource Version:  804596
  Self Link:         /apis/keycloak.org/v1alpha1/namespaces/keycloak/keycloakrealms/example-keycloakrealm
  UID:               b7b2f883-15b1-11ea-91e6-02cb885627a6
Spec:
  Instance Selector:
    Match Labels:
      App:  sso
  Realm:
    Display Name:  Basic Realm
    Enabled:       true
    Id:            basic
    Realm:         basic
Status:
  Login URL:
  Message:
  Phase:  reconciling
  Ready:  true
Events:  <none>
```

@@ -0,0 +1,76 @@
=== KeycloakUser Custom Resource
{project_name} Users can be represented as Custom Resources in {project_operator}:
.`KeycloakUser` Custom Resource
```yaml
apiVersion: keycloak.org/v1alpha1
kind: KeycloakUser
metadata:
  name: example-realm-user
spec:
  user:
    username: "realm_user"
    firstName: "John"
    lastName: "Doe"
    email: "user@example.com"
    enabled: True
    emailVerified: False
    realmRoles:
      - "offline_access"
    clientRoles:
      account:
        - "manage-account"
      realm-management:
        - "manage-users"
  realmSelector:
    matchLabels:
      app: sso
```
IMPORTANT: Note that `realmSelector` needs to match the labels of an existing `KeycloakRealm` Custom Resource.
NOTE: {project_operator} synchronizes all the changes made to the Custom Resource with a running {project_name} instance. No manual changes via the {project_name} Admin Console are allowed.
Once {project_operator} reconciles the Custom Resource, it reports the status back:
.`KeycloakUser` Custom Resource Status
```yaml
Name:         example-realm-user
Namespace:    keycloak
Labels:       app=sso
API Version:  keycloak.org/v1alpha1
Kind:         KeycloakUser
Spec:
  Realm Selector:
    Match Labels:
      App:  sso
  User:
    Email:  realm_user@redhat.com
    Credentials:
      Type:   password
      Value:  <user password>
    Email Verified:  false
    Enabled:         true
    First Name:      John
    Last Name:       Doe
    Username:        realm_user
Status:
  Message:
  Phase:  reconciled
Events:  <none>
```
Once a User is created, {project_operator} creates a Secret containing both the username and password, using the
following naming pattern: `credential-<realm name>-<user name>-<namespace>`. Here's an example:
.`KeycloakUser` Secret
```yaml
kind: Secret
apiVersion: v1
data:
  password: <base64 encoded password>
  username: <base64 encoded username>
type: Opaque
```
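As with the client Secret, the stored values are Base64 encoded and can be decoded with `kubectl`; substitute the realm, user, and namespace parts of the Secret name from the pattern above:
```bash
kubectl get secret credential-<realm name>-<user name>-<namespace> -n keycloak -o jsonpath='{.data.username}' | base64 -d
kubectl get secret credential-<realm name>-<user name>-<namespace> -n keycloak -o jsonpath='{.data.password}' | base64 -d
```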

@@ -0,0 +1,31 @@
=== Monitoring Stack
{project_operator} integrates with {application_monitoring_operator}, providing a Grafana Dashboard and Prometheus Alerts. The integration is seamless and doesn't require any action from the user. However, {application_monitoring_operator} needs to be properly installed in the cluster.
==== Installing {application_monitoring_operator}
Prior to running {project_operator}, it is recommended to install {application_monitoring_operator} by following its link:{application_monitoring_operator_installation_link}[documentation]. After {application_monitoring_operator} is up and running, you need to label the namespace used for the {project_operator} installation. Here's an example:
.Labeling the namespace for monitoring
```bash
oc label namespace <namespace> monitoring-key=middleware
```
Once {application_monitoring_operator} is installed and running and the corresponding namespace is labeled for monitoring, you can find the Prometheus and Grafana routes in the `application-monitoring` namespace:
image:{project_images}/operator-application-monitoring-routes.png[]
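From the command line, the same routes can be listed with the following command; the exact route names depend on the {application_monitoring_operator} installation:
```bash
oc get routes -n application-monitoring
```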
==== Grafana Dashboard
{project_operator} installs a pre-defined Grafana Dashboard. Here's how it looks:
image:{project_images}/operator-graphana-dashboard.png[]
NOTE: The Grafana Dashboard is likely to change in future releases. If you need any customizations, clone it and adjust it to your needs.
==== Prometheus Alerts
{project_operator} installs a set of pre-defined Prometheus Alerts. Here's how they look:
image:{project_images}/operator-prometheus-alerts.png[]

@@ -20,6 +20,11 @@ endif::[]
:project_dirref: KEYCLOAK_HOME
:project_openshift_product_name: Keycloak for OpenShift
:project_operator: Keycloak Operator
:operatorRepo_link: https://github.com/keycloak/keycloak-operator
:application_monitoring_operator: Application Monitoring Operator
:application_monitoring_operator_installation_link: https://github.com/integr8ly/application-monitoring-operator#installation
:quickstartRepo_link: https://github.com/keycloak/keycloak-quickstarts
:quickstartRepo_name: Keycloak Quickstarts Repository
:quickstartRepo_dir: keycloak-quickstarts

@@ -16,6 +16,11 @@
:project_openshift_product_name: {project_name} for OpenShift
:project_operator: Red Hat Single Sign-On Operator
:operatorRepo_link: https://github.com/keycloak/keycloak-operator
:application_monitoring_operator: Red Hat Managed Integration (RHMI) Application Monitoring Operator
:application_monitoring_operator_installation_link: https://github.com/integr8ly/application-monitoring-operator#installation
:project_dirref: RHSSO_HOME
:quickstartRepo_name: {project_name} Quickstarts Repository