[[_installing-operator]]
=== Installing the {project_operator} on a cluster

To install the {project_operator}, you can use:

* xref:_install_by_olm[The Operator Lifecycle Manager (OLM)]
ifeval::[{project_community}==true]
* xref:_install_by_command[Command line installation]
endif::[]

[[_install_by_olm]]
==== Installing using the Operator Lifecycle Manager

ifeval::[{project_community}==true]
You can install the Operator on an xref:_openshift-olm[OpenShift] or xref:_kubernetes-olm[Kubernetes] cluster.

[[_openshift-olm]]
===== Installation on an OpenShift cluster
endif::[]

.Prerequisites
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.

.Procedure
Perform this procedure on an OpenShift cluster.

. Open the OpenShift Container Platform web console.
. In the left column, click `Operators` and then `OperatorHub`.
. Search for {project_name} Operator.
+
.OperatorHub tab in OpenShift
image:{project_images}/operator-openshift-operatorhub.png[]
. Click the {project_name} Operator icon.
+
An Install page opens.
+
.Operator Install page on OpenShift
image:{project_images}/operator-olm-installation.png[]
. Click `Install`.
. Select a namespace and click `Subscribe`.
+
.Namespace selection in OpenShift
image:images/installed-namespace.png[]
+
The Operator starts installing.
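
If you prefer to drive the OLM installation from the command line rather than the web console, you can create the `Subscription` resource yourself. The following is only a minimal sketch: the package name, channel, and catalog source shown here are assumptions, so verify them against the entry displayed in OperatorHub before applying it.

[source,bash,subs=+attributes]
----
$ cat <<EOF | {create_cmd_brief} apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: keycloak-operator
  namespace: myproject             # the namespace you selected above
spec:
  name: keycloak-operator          # assumed package name; confirm in OperatorHub
  channel: alpha                   # assumed channel; confirm in OperatorHub
  source: community-operators      # assumed catalog source
  sourceNamespace: openshift-marketplace
EOF
----

Depending on the namespace you choose, OLM may also require an `OperatorGroup` in that namespace; when you subscribe through the web console, one is created for you.
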
.Additional resources
* When the Operator installation completes, you are ready to create your first custom resource. See xref:_keycloak_cr[{project_name} installation using a custom resource].
ifeval::[{project_community}==true]
However, if you want to start tracking all Operator activities before creating custom resources, see the xref:_monitoring-operator[Application Monitoring Operator].
endif::[]
* For more information on OpenShift Operators, see the link:https://docs.openshift.com/container-platform/4.4/operators/olm-what-operators-are.html[OpenShift Operators guide].
ifeval::[{project_community}==true]

[[_kubernetes-olm]]
===== Installation on a Kubernetes cluster

.Prerequisites
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.

.Procedure
For a Kubernetes cluster, perform these steps.

. Go to link:https://operatorhub.io/operator/keycloak-operator[Keycloak Operator on OperatorHub.io].
. Click `Install`.
. Follow the instructions on the screen.
+
.Operator Install page on Kubernetes
image:{project_images}/operator-operatorhub-install.png[]
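
The on-screen instructions typically consist of installing OLM on the cluster and then applying the install manifest for the Operator. As a rough sketch, assuming OLM is already installed and the manifest URL published on OperatorHub.io is still current:

[source,bash]
----
# Apply the install manifest published on OperatorHub.io
$ kubectl create -f https://operatorhub.io/install/keycloak-operator.yaml

# Watch the ClusterServiceVersion until it reports Succeeded
$ kubectl get csv -n operators -w
----
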
.Additional resources
* When the Operator installation completes, you are ready to create your first custom resource. See xref:_keycloak_cr[{project_name} installation using a custom resource]. However, if you want to start tracking all Operator activities before creating custom resources, see the xref:_monitoring-operator[Application Monitoring Operator].
* For more information on a Kubernetes installation, see link:https://operatorhub.io/how-to-install-an-operator[How to install an Operator from OperatorHub.io].

[[_install_by_command]]
==== Installing from the command line

You can install the {project_operator} from the command line.

.Prerequisites
* You have cluster-admin permission or an equivalent level of permissions granted by an administrator.

.Procedure
. Obtain the software to install from this location: link:{operatorRepo_link}[GitHub repository].
. Install all required custom resource definitions:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f deploy/crds/
----
. Create a new namespace (or reuse an existing one), for example `myproject`:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} create namespace myproject
----
. Deploy a role, role binding, and service account for the Operator:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f deploy/role.yaml -n myproject
$ {create_cmd} -f deploy/role_binding.yaml -n myproject
$ {create_cmd} -f deploy/service_account.yaml -n myproject
----
. Deploy the Operator:
+
[source,bash,subs=+attributes]
----
$ {create_cmd} -f deploy/operator.yaml -n myproject
----
. Confirm that the Operator is running:
+
[source,bash,subs=+attributes]
----
$ {create_cmd_brief} get deployment keycloak-operator -n myproject
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
keycloak-operator   1/1     1            1           41s
----
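
If the deployment does not report `1/1` under `READY`, the Operator logs are usually the quickest way to find the cause. A brief sketch, assuming the default deployment name and the `myproject` namespace used above:

[source,bash,subs=+attributes]
----
$ {create_cmd_brief} logs -f deployment/keycloak-operator -n myproject
----
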
endif::[]
.Additional resources
* When the Operator installation completes, you are ready to create your first custom resource. See xref:_keycloak_cr[{project_name} installation using a custom resource].
ifeval::[{project_community}==true]
However, if you want to start tracking all Operator activities before creating custom resources, see the xref:_monitoring-operator[Application Monitoring Operator].
* For more information on a Kubernetes installation, see link:https://operatorhub.io/how-to-install-an-operator[How to install an Operator from OperatorHub.io].
endif::[]
* For more information on OpenShift Operators, see the link:https://docs.openshift.com/container-platform/4.4/operators/olm-what-operators-are.html[OpenShift Operators guide].

[[_operator_production_usage]]
=== Using the {project_operator} in a production environment

* Using the embedded database is not supported in a production environment; connect {project_name} to an external database instead (a sketch follows this list).
* The Backup CRD is deprecated and is not supported in a production environment.
* The `podDisruptionBudget` field in the Keycloak CR is deprecated and will be ignored when the Operator is deployed on Kubernetes version 1.25 or higher.
* We fully support using the rest of the CRDs in production, despite the `v1alpha1` version, and we do not plan to make any breaking changes in this version of the CRDs.
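
As an illustration of the first point above, a production deployment points {project_name} at an externally managed database through the Keycloak custom resource rather than relying on the embedded one. The following is only a hedged sketch: the `externalDatabase` field is part of the `v1alpha1` Keycloak CRD, but the namespace, instance count, and the connection details the Operator reads from a database Secret are environment-specific assumptions here, so check the external database documentation for your version before using it.

[source,bash,subs=+attributes]
----
$ cat <<EOF | {create_cmd_brief} apply -n myproject -f -
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: example-keycloak
spec:
  instances: 2             # example instance count
  externalDatabase:
    enabled: true          # use an external database instead of the embedded one
EOF
----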